As you probably know, most computer displays read from your video card's memory and "refresh" their display periodically. Of course, if you update the video memory while this scanning is happening, you get "tearing" (you see one part of one image and part of the next), and it looks ugly.
My app was looking ugly.
Good OSes prevent you from doing this, but GDI on Windows XP is not such a beast. (Vista and Win7 might do this right.) For 3-D, many video cards disable waiting for vertical refresh by default (probably to look better in benchmarks).
It turns out to be sort of painful to get a regular GDI window to do this right, but I found a pretty interesting optimization.
DirectDraw provides a GetScanLine method, so you can just call DirectDrawCreate, and then GetScanLine on the returned pointer. (It's a little more code for multiple monitors.) Simple vertical sync then means busy-waiting until this method returns a value smaller than the previous one, i.e., the counter has wrapped back to the top of the screen. I've heard about people using multimedia timers to do this, without busy-waiting.
Of course, by waiting like this, you end up spinning a CPU for about 10ms every time you want to redraw. It's not very fast and not very efficient.
But, it turns out that most applications don't redraw the whole screen on every draw. And because GetScanLine returns an actual scanline number, you can just wait for the line being scanned out to be outside the region you're about to draw. My application knows these numbers, so it was easy to code up.
Since I'm redrawing only about 1/4 the screen every frame, this method requires ~1ms per frame, so it's almost not worth using multimedia timers at all.
Knowing how fast the monitor refreshes, you can also predict how long it will take for the beam to reach a "safe" region, and fall back to timers when the wait exceeds a millisecond or so.
But hey, if you know interesting and more power-efficient busy-wait methods, leave a comment.