For some reason, there seems to be quite a bit of confusion among various posters here over the above statement, starting with erocker:
People, I don't see how I can put it any more clearly. The idea is to isolate each individual CPU's true performance, so the last thing you want to do is give the graphics card any significant work to do. Heck, if the card could be switched off altogether (possible in Unreal Tournament 2003/4), you'd get an even more accurate result.
And it doesn't matter if one CPU achieves 200fps and the other 1200fps (six times faster): you're measuring the performance difference between them. That difference will become plenty obvious as time goes by and games become more demanding, giving the faster CPU a longer useful life. So, for example, by the time the slower one manages only an unplayable 10fps, the faster one will still be delivering 60fps and remain highly usable.
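To make the arithmetic behind that concrete, here's a minimal sketch in Python. The numbers are just the illustrative figures from above, and it assumes the CPU performance ratio measured in a GPU-independent test carries over unchanged as per-frame CPU demand grows (a simplification, not measured data):

# Sketch of the scaling argument: a constant CPU performance ratio,
# projected forward as games demand more CPU work per frame.

def projected_fps(benchmark_fps: float, demand_factor: float) -> float:
    """Project frame rate when future games need `demand_factor` times
    more CPU work per frame than today's CPU-limited benchmark."""
    return benchmark_fps / demand_factor

slow_cpu_fps = 200.0   # low-resolution benchmark result, slower CPU
fast_cpu_fps = 1200.0  # low-resolution benchmark result, faster CPU

ratio = fast_cpu_fps / slow_cpu_fps
print(f"Faster CPU is {ratio:.0f}x quicker in the CPU-limited test")

# Hypothetical future game needing ~20x the CPU work per frame:
demand = 20.0
print(f"Slower CPU: {projected_fps(slow_cpu_fps, demand):.0f} fps (unplayable)")
print(f"Faster CPU: {projected_fps(fast_cpu_fps, demand):.0f} fps (still smooth)")

Running this prints 10fps versus 60fps, i.e. the same six-fold gap the low-resolution benchmark revealed, which is exactly why that gap matters even when both cards are churning out hundreds of frames today.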
Of course, it's a good idea to supplement these tests with real-world resolutions as well, since the results can sometimes be unexpected.
Thanks to John Doe for replying with some excellent answers that clear up this misunderstanding.
Will do.
It isn't, as I've explained above in this post.
Yes, of course, lol. Benchmark a bunch of older games with vsync on and they'll all peg at a solid 60fps.