newtekie1
Might want to read the article again.
http://www.pcper.com/files/imagecache/article_max_width/review/2013-03-25/howgameswork.jpg
Yes, I read it, and the parts you quoted confirm what I said. They aren't taking any readings at T_present; that is just where they insert the overlay on each frame the engine spits out. Their readings are taken at the end of the line, from what the user actually sees. This is a far better method than FRAPS.
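To make the distinction concrete, here's a rough sketch (not PCPer's or nVidia's actual tool) of what an end-of-pipeline reading looks like: the game-side overlay paints a different solid colour bar down the left edge of every frame it submits, and a script on the capture side counts how many scanlines of the recorded output each colour actually occupied. The colour list, bar position, refresh rate, and function names below are all assumptions for illustration.

```python
# Sketch only: derive per-frame on-screen times from captured display output,
# assuming each rendered frame carries a solid colour bar in the leftmost column.
import numpy as np

# Hypothetical colour sequence the overlay cycles through, one colour per rendered frame.
OVERLAY_COLORS = [
    (255, 255, 255),  # white
    (0, 255, 0),      # green
    (0, 0, 255),      # blue
    (255, 0, 0),      # red
]

def classify_scanline(pixel_rgb, tolerance=40):
    """Match one pixel from the overlay column to the nearest known colour index."""
    diffs = [sum(abs(int(p) - int(c)) for p, c in zip(pixel_rgb, ref))
             for ref in OVERLAY_COLORS]
    best = int(np.argmin(diffs))
    return best if diffs[best] <= tolerance * 3 else None

def frame_times_from_capture(captured_frames, refresh_hz=60.0):
    """
    captured_frames: iterable of HxWx3 uint8 arrays grabbed off the display output.
    Each scanline is a fixed slice of the refresh interval, so a run of one overlay
    colour tells you how long that rendered frame was actually on screen (in ms).
    """
    times = []
    current, run = None, 0
    line_time_ms = None
    for cap in captured_frames:
        if line_time_ms is None:
            line_time_ms = 1000.0 / (refresh_hz * cap.shape[0])
        for px in cap[:, 0, :]:            # leftmost column carries the colour bar
            label = classify_scanline(px)
            if label == current:
                run += 1
            else:
                if current is not None:
                    times.append(run * line_time_ms)
                current, run = label, 1
    if current is not None:
        times.append(run * line_time_ms)
    return times
```

The point of doing it this way is that dropped or partially displayed frames show up as missing or very short colour runs in the actual output, which a FRAPS-style timestamp taken at Present() would never see.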
Maybe, but if the problem (in Eyefinity) is as bad as Ryan says, then I have an extremely hard time believing most people would just play it off after looking at their frame rates.
I don't. I remember reading an article where they took a bunch of people who said they couldn't stand playing games at anything less than 60 FPS, turned the framerate counter off, and limited the games to 30 FPS. Ninety percent of the people who supposedly could tell when a game was below 60 FPS said the game felt totally smooth to them.
Some people obviously had to complain, otherwise we wouldn't have multiple websites testing the complaints, we wouldn't have nVidia developing a tool to test it (arguably just to make their competition look bad), and Dave's been complaining about it for a while now.