No screen can display an arbitrary number of frames per second. A common refresh rate for flat-panel displays is 60 Hz (meaning the image is refreshed 60 times per second), but there are also models with 120 Hz (for stereoscopy) or 200 Hz. Since 60 Hz flickers noticeably on CRT screens, these old boxes can often also run at 75, 85, or 100 Hz. No matter how often a monitor can refresh its image per second - there is no reason (apart from exceptional cases like benchmarking) to let the graphics card calculate more images than that, especially since doing so leads to unwanted artifacts (tearing).
This is what the technique called V-Sync is for: the graphics card waits until the current image has been completely transferred to the screen before starting on the next one. Turning on V-Sync is highly recommended whenever the computer can render more frames per second than the screen is capable of displaying anyway. But what happens when V-Sync is turned on and the computer can't match the screen's native frame rate?
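How V-Sync is switched on depends on the platform and windowing library, not on OpenGL itself. As a minimal sketch, assuming a GLFW window with a current OpenGL context (GLFW is just one example library, not something this article prescribes), it comes down to setting the swap interval:

    #include <GLFW/glfw3.h>

    int main(void)
    {
        if (!glfwInit())
            return 1;

        GLFWwindow *window = glfwCreateWindow(800, 600, "V-Sync demo", NULL, NULL);
        if (!window) { glfwTerminate(); return 1; }

        glfwMakeContextCurrent(window);
        glfwSwapInterval(1);   /* 1 = wait for one sync signal per swap (V-Sync on),
                                  0 = swap immediately (V-Sync off, may tear)      */

        while (!glfwWindowShouldClose(window)) {
            /* ... render the frame here ... */
            glfwSwapBuffers(window);   /* with interval 1, tied to the sync signal */
            glfwPollEvents();
        }

        glfwTerminate();
        return 0;
    }

On Windows the same effect can be achieved directly via the WGL_EXT_swap_control extension (wglSwapIntervalEXT), on X11 via glXSwapIntervalEXT.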
Let's say the screen refreshes 60 times per second, but the GPU only manages 50 frames per second (20 ms per frame). As soon as the graphics card receives the synchronization signal, it copies the backbuffer contents into the front buffer and renders the next frame into the backbuffer. 16.67 ms later the next synchronization signal arrives, but the graphics card is not finished yet, so nothing is copied into the front buffer and the screen displays the same image a second time. Only another 3.33 ms later does the graphics chip finish the picture, and now it must wait 13.33 ms (!) for the next sync signal. By the time that arrives, a total of 33.33 ms has passed - in other words, we get a frame rate of only 30 fps, although the computer could manage 50 fps!
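The general pattern behind this example: with V-Sync and double buffering, the effective frame time is the render time rounded up to a whole number of refresh intervals. A small illustrative calculation (the 60 Hz and 50 fps figures are just the assumptions from the example above):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double refresh_hz  = 60.0;                  /* screen refresh rate       */
        double render_fps  = 50.0;                  /* what the GPU could manage */
        double sync_period = 1000.0 / refresh_hz;   /* 16.67 ms                  */
        double render_time = 1000.0 / render_fps;   /* 20.00 ms                  */

        /* A finished frame is held back until the next sync signal: */
        double frame_time = ceil(render_time / sync_period) * sync_period;

        printf("effective frame time: %.2f ms -> %.1f fps\n",
               frame_time, 1000.0 / frame_time);    /* 33.33 ms -> 30.0 fps */
        return 0;
    }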
Triple buffering
The problem here is that the graphics card sits idle while waiting for the sync signal. But there is a solution: since we are not allowed to write into the backbuffer yet, we simply write into a second backbuffer! While we render into the second backbuffer, the first one is copied into the front buffer as soon as the next sync signal arrives. In the following frame we render into the first backbuffer again, while the second one waits to be copied. This way we achieve the same performance as without V-Sync, but at the same time we get rid of the tearing.
A disadvantage of this method is that it needs a bit more video memory. The additional consumption is limited, though, because only one extra color buffer is needed - the Z and stencil buffers do not have to be duplicated. You do get more micro-stutter, since frames are no longer displayed at perfectly even intervals - but let's be honest: 50 fps with micro-stutter is still better than 30 fps without.
OpenGL itself offers no way to switch triple buffering on or off; this is up to the driver. You can emulate triple buffering using an FBO, but if the driver already takes care of it, you effectively end up with quadruple buffering. (source: [1])
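Such an emulation might look roughly like the following sketch: we render into one of two FBO color attachments and blit the finished one into the window's backbuffer each frame, so the swap is the only step tied to the sync signal. This is only a minimal sketch assuming an OpenGL 3.0+ context; WIDTH, HEIGHT, running, render_scene and swap_buffers are placeholders for whatever the surrounding program provides.

    GLuint fbo[2], color[2], depth;
    glGenFramebuffers(2, fbo);
    glGenTextures(2, color);
    glGenRenderbuffers(1, &depth);

    /* One shared depth/stencil renderbuffer - only the color buffer
       has to exist twice, just as described above. */
    glBindRenderbuffer(GL_RENDERBUFFER, depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, WIDTH, HEIGHT);

    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, color[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, color[i], 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                                  GL_RENDERBUFFER, depth);
    }

    int current = 0;
    while (running) {
        /* Render the new frame into the currently free color buffer. */
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[current]);
        render_scene();                 /* placeholder for the actual drawing */

        /* Blit the finished frame into the default backbuffer and swap. */
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo[current]);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, WIDTH, HEIGHT, 0, 0, WIDTH, HEIGHT,
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
        swap_buffers();                 /* e.g. glfwSwapBuffers, as above */

        current = 1 - current;          /* rotate to the other color buffer */
    }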
Variable refresh rate
At the time of writing (2014) there are efforts in the industry to develop displays with a variable refresh rate, under names such as Adaptive Sync (a DisplayPort standard), G-Sync (Nvidia) and FreeSync (AMD). These approach the synchronization problem between screen and graphics card from the opposite direction: the screen no longer dictates the clock at which the graphics card has to deliver image data; instead, the graphics card signals the screen when it has finished its calculations, then transmits the new image, which the screen displays immediately.
The advantages: the latency (the time between user input and image output) is reduced, triple buffering becomes unnecessary, and the micro-stutter it causes disappears. Tearing does not occur either, since the transfer is still synchronized.