
[PCPER] Frame rating: what it is, what it does, and why it's important.

Joined
Oct 9, 2009
Messages
716 (0.13/day)
Location
Finland
System Name RGB-PC v2.0
Processor AMD Ryzen 7950X
Motherboard Asus Crosshair X670E Extreme
Cooling Corsair iCUE H150i RGB PRO XT
Memory 4x16GB DDR5-5200 CL36 G.SKILL Trident Z5 NEO RGB
Video Card(s) Asus Strix RTX 2080 Ti
Storage 2x2TB Samsung 980 PRO
Display(s) Acer Nitro XV273K 27" 4K 120Hz (G-SYNC compatible)
Case Lian Li O11 Dynamic EVO
Audio Device(s) Audioquest Dragon Red + Sennheiser HD 650
Power Supply Asus Thor II 1000W + Cablemod ModMesh Pro sleeved cables
Mouse Logitech G500s
Keyboard Corsair K70 RGB with low profile red cherrys
Software Windows 11 Pro 64-bit
Yes indeed, this change is for the better for every gamer, regardless of the brand of their hardware. It almost put tears in my eyes when NVIDIA admitted the existence of microstuttering problems in the past (GTX 295), and now this is even better: the problem child has also come forward.

In all honesty, I think both need shared memory between the GPUs, better frame sync from the DX side (including sub-frame piece syncing, because of tearing!), and a change of rendering method to sub-frame pieces (this has been tried before; problems with tearing and lower performance make it less attractive) to completely get away from this problem. Until then it is a matter of trying to smooth things on the driver side, which is very tricky indeed, and exactly why even AMD is having so much trouble.

I have had too many CrossFire and SLI setups in the past, and at some point I just went to the fastest possible single GPU and haven't had a problem since. This meant decreasing image quality settings, but so far I'd rather take that than all the multi-GPU problems. 120Hz screen(s) would love to see some multi-GPU action, though, as performance is not good enough even with a single Titan in many cases!

Now that we finally have this reasonable conversation about the rendering issues, I will gladly do my best to help in any possible way to fix the problem. After all, this is what I always wanted to come out: the truth, not some marketing BS or fanboy chatter.

I can't believe how long ago I started, back in the days when we had no real term for frametime issues like microstuttering. Now people often get confused about terms and completely misuse the term microstuttering, which is why I don't like it anymore. It is like saying there is a criminal out on the street, and people start calling in tips that my neighbour looks suspicious and evil, even though he just does his gardening every day. :D

Not all problems are related to frametimes; the microstuttering term suffers from its own popularity.
The ball is now in AMD's hands. Come out clean and explain what you are doing.
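Since the thread keeps circling around what "microstutter" actually means, here is a minimal sketch (my own, with invented numbers, not from any article) of how you could quantify it from a frametime log such as one exported by FRAPS. Alternating short/long frames can give a decent average frametime while the frame-to-frame deltas stay large:

```python
frametimes_ms = [10, 23, 9, 24, 10, 22, 11, 25]  # hypothetical AFR log

avg = sum(frametimes_ms) / len(frametimes_ms)

# Mean absolute difference between consecutive frames: near zero on a smooth
# single-GPU run, large when AFR frames arrive in uneven pairs.
deltas = [abs(b - a) for a, b in zip(frametimes_ms, frametimes_ms[1:])]
jitter = sum(deltas) / len(deltas)

print(f"avg frametime: {avg:.2f} ms, avg frame-to-frame delta: {jitter:.2f} ms")
```

The average says roughly 60 fps, but the delta metric exposes the uneven pairing that a plain FPS counter hides.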
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.50/day)

That's from a few days ago. When speaking of Anandtech, I was speaking in terms of what they reported three months earlier, which the article you linked is a follow-up to.
From your link:

AMD has been clear with us from the start that the primary reason they even ended up in this situation is because they weren’t doing sufficient competitive analysis, and that they have revised their driver development process so that they now do this analysis to prevent future problems.

and

They’re already working on changing how they do frame pacing for multi-GPU setups, and come July we’re going to have the chance to see the results of AMD’s latest efforts there.


So...what was your point?

That's not a public statement. That's Anandtech reporting what AMD told THEM, not AMD making a public statement.

I mean, really, that kind of settles all your questions in this thread right there, problems with testing or not. If you want to call that AMD's public statement, then why are you trying to raise issues with the information provided?


AMD has been clear with us from the start that the primary reason they even ended up in this situation is because they weren’t doing sufficient competitive analysis


Because really, it's just that: it's info. In January AMD said they'd have a fix in March. They don't, and now it's April. In the meantime, here's some more info so you understand why it's taking so long, and why the issue is complex.


There's nothing new here, no conspiracy.
 
Joined
Apr 30, 2012
Messages
3,881 (0.83/day)
Here you go, for the visually impaired.

I drew Xs to help your eyes out. :)



Additional lights that turn on
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.50/day)
Here you go, for the visually impaired.

I drew Xs to help your eyes out. :)


Nobody cares.


:eek:


AMD has been clear with us from the start that the primary reason they even ended up in this situation is because they weren’t doing sufficient competitive analysis

Who cares about Ryan's videos? They are merely examples, not perfection...that's up to AMD's driver team to deal with. You're making a big deal out of nothing.
 
Joined
Apr 30, 2012
Messages
3,881 (0.83/day)
Nobody cares.


Who cares about Ryan's videos?

The whole purpose of benching is limiting influences. If you're not going to bother doing that, don't bench at all.

The workload is different, so the load on the GPU is different. Sheesh.

Really, you're asking that question?
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.50/day)
Really, you're asking that question?

Sure. All Ryan is illustrating is what AMD is doing to fix things, and the types of testing they now use. In that, the videos are fine.

For somebody claiming to want to ensure the data is interpreted properly, you sure seem to have a completely different agenda. Really, you do seem to be claiming this testing is flawed, that there are other issues, that people are not looking at the right things, blah blah...


It's not meant to present fine details. I also didn't link his videos.. I linked his explanation of the testing hardware and what it shows. The rest is stuff the end user shouldn't have to worry about, and AMD has stated that how this ends up is going to be different for each user, so they are going to present different options, even.


AMD's already said the testing is right, and you're here months later trying to point out problems with it. Just saying, man.
 
Joined
Apr 30, 2012
Messages
3,881 (0.83/day)
For somebody claiming to want to ensure the data is interpreted properly, you sure seem to have a completely different agenda.

Which is what ?

I want to know if the data being spit out by the game engine is equal in both cases, i.e. if the two GPUs in question are processing the same amount of data per frame. You can even throw in APUs.

Since FRAPS and FCAT take and insert frame info at the same spot, before it reaches DirectX.

Then what? Should we ignore that too? I hope you don't conduct your own tests like that.

Asking questions isn't a bad thing. Ignoring answers is.
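For context on what FCAT adds over FRAPS: the overlay colors each frame before it is rendered, an external capture card records the display output, and analysis counts how many scanlines each color band actually occupied on screen. A rough sketch of that last classification step, with invented capture numbers (the 21-scanline runt threshold is the figure the FCAT write-ups cite, but treat it as an assumption here):

```python
RUNT_THRESHOLD = 21  # scanlines; commonly cited in FCAT articles (assumption)

def classify_frames(scanlines_per_frame, threshold=RUNT_THRESHOLD):
    """Split observed frames into full frames, runts, and drops (0 scanlines)."""
    full, runts, drops = [], [], []
    for n in scanlines_per_frame:
        if n == 0:
            drops.append(n)       # frame never reached the display
        elif n < threshold:
            runts.append(n)       # a sliver of a frame: no perceptual benefit
        else:
            full.append(n)
    return full, runts, drops

# A hypothetical unpaced CrossFire capture: every other frame is a sliver.
capture = [540, 12, 525, 8, 560, 15, 530]
full, runts, drops = classify_frames(capture)
print(f"{len(full)} full frames, {len(runts)} runts, {len(drops)} drops")
# → 4 full frames, 3 runts, 0 drops
```

FRAPS would count all seven frames here; FCAT-style analysis shows only four of them contributed meaningfully to what the user saw.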
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.50/day)
Ignoring answers is.

Exactly. And you've ignored that this isn't an answer.


It's merely a way to effectively convey to the end user what's happening, and what all this "stutter" that people like me complain about actually is. For years people have said they don't see it, it doesn't exist, blah blah blah... so here's a way it can be presented so that any user can see it. The flipping colors make it pretty obvious.

You are looking at image quality, when that's not what this is about. This is about the smoothness of the animation presented to the end user, and in those videos...the only important thing there is the motion of the character's arms.

It is clearly smoother in NVidia's implementation.


How or why it's smoother is the question. The quality of the image may have something to do with that, sure, but that is only one of many things that can lead to problems.


And that's why none of this will change how VGAs are tested and reviewed. The second card in CrossFire IS working... it's simply not displaying its image properly, so it appears that it's useless. It's not useless, though; it's merely out of sync, and in such a way that V-sync doesn't fix the issue.


AMD has stated, overall, that this is due to a memory management problem in the driver. A fix was expected in March, and we were given the first 13.1 beta.


Now it's March, there's no driver, so here's why: the problem is complex, has many facets, and takes time to manage. AMD has asked for four more months. I stated, before any of this "new" stuff went public, that I was expecting four more months. Go figure.
 
Joined
Feb 19, 2006
Messages
6,270 (0.90/day)
Location
New York
Processor INTEL CORE I9-9900K @ 5Ghz all core 4.7Ghz Cache @1.305 volts
Motherboard ASUS PRIME Z390-P ATX
Cooling CORSAIR HYDRO H150I PRO RGB 360MM 6x120mm fans push pull
Memory CRUCIAL BALLISTIX 3000Mhz 4x8 32gb @ 4000Mhz
Video Card(s) EVGA GEFORECE RTX 2080 SUPER XC HYBRID GAMING
Storage ADATA XPG SX8200 Pro 1TB 3D NAND NVMe,Intel 660p 1TB m.2 ,1TB WD Blue 3D NAND,500GB WD Blue 3D NAND,
Display(s) 50" Sharp Roku TV 8ms responce time and Philips 75Hz 328E9QJAB 32" curved
Case BLACK LIAN LI O11 DYNAMIC XL FULL-TOWER GAMING CASE,
Power Supply 1600 Watt
Software Windows 10
Xzibit, I also wonder about the visuals that are rendered to the end user's monitor. I have wondered whether, with equal settings, both camps show different things, and whether one camp may be suspect of not letting their card display everything, like cutting out lights to get better render times. I have my opinion about that, but it is indeed a different topic, though one with some relation to this.
 
Joined
Apr 30, 2012
Messages
3,881 (0.83/day)
Xzibit, I also wonder about the visuals that are rendered to the end user's monitor. I have wondered whether, with equal settings, both camps show different things, and whether one camp may be suspect of not letting their card display everything, like cutting out lights to get better render times. I have my opinion about that, but it is indeed a different topic, though one with some relation to this.

That's what piqued my interest.

Like I said, I don't have an SLI or CrossFire setup in either my NVIDIA or AMD systems, but those tests aren't limited to multi-card configurations. He's also testing FCAT with single cards; that's what I'm looking at myself.

People have noted discrepancies in the videos he's put out, with the LOD differences.

It seems he did post the write-up.

I'm interested to know if either the 7000 or 600 series is doing more or less work for the benched results.
 

the54thvoid

Super Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
13,281 (2.40/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsumg 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure POwer M12 850w Gold (ATX3.0)
Software W10
Which card is doing more work is still not the point. The point is whether the delivery of the visual image is smoother or not.

Something people ought to be aware of as well is that AMD can deal with quite complex lighting (Dirt Showdown with global lighting destroys NVIDIA cards, because the lighting code was developed from an AMD lighting showcase for that game). Sleeping Dogs may use similar lighting effects (it's an AMD title), which would help explain the differences you are highlighting.

The GCN architecture is fantastic. And in many respects is better (and sometimes worse) than Nvidia's Kepler. However, what Dave is so patiently putting across is not the difference in appearance of quality but the smoothness of the image.

And like he's said, AMD acknowledged this following the initial latency storm and then again in a discussion with Anandtech.

There isn't an argument about that to be had here. Your point, who has better IQ, is for another thread. Certainly in your pics, it's AMD.
 
Joined
Feb 8, 2012
Messages
3,014 (0.63/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Baracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
This image from Guru3D's article illustrates the problem with multi-GPU setups. AMD is simply delivering frames as fast as it can and as soon as it can; NVIDIA is syncing and delaying frames.
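The "deliver as soon as possible" vs. "delay and sync" difference can be sketched in a few lines (all numbers invented): an AFR pair where the second GPU finishes shortly after the first yields alternating short/long gaps, and a pacer delays each presentation toward a steady interval:

```python
raw_present_times = [0.0, 3.0, 25.0, 28.0, 50.0, 53.0]  # ms; unpaced AFR output

def pace(times, target_interval):
    """Delay each presentation so frames are at least target_interval apart."""
    paced = [times[0]]
    for t in times[1:]:
        paced.append(max(t, paced[-1] + target_interval))
    return paced

# Pace toward the average interval over the window.
target = (raw_present_times[-1] - raw_present_times[0]) / (len(raw_present_times) - 1)
paced = pace(raw_present_times, target)
print(paced)  # gaps of ~10.6/14.4 ms instead of alternating 3/22 ms
```

Note the tradeoff: the paced frames arrive later than they could have, which is one reason frame pacing trades a little latency for smoothness.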

 
Joined
Oct 26, 2011
Messages
3,145 (0.65/day)
Processor 8700k Intel
Motherboard z370 MSI Godlike Gaming
Cooling Triple Aquacomputer AMS Copper 840 with D5
Memory TridentZ RGB G.Skill C16 3600MHz
Video Card(s) GTX 1080 Ti
Storage Crucial MX SSDs
Display(s) Dell U3011 2560x1600 + Dell 2408WFP 1200x1920 (Portrait)
Case Core P5 Thermaltake
Audio Device(s) Essence STX
Power Supply AX 1500i
Mouse Logitech
Keyboard Corsair
Software Win10
Sadly you can't do much magic with AFR; you can just improve how smoothly it presents.

I think that improving bandwidth between the GPUs and making them render parts of the screen, instead of alternating 1x1x1x1, would be better.
 
Joined
Feb 8, 2012
Messages
3,014 (0.63/day)
Location
Zagreb, Croatia
System Name Windows 10 64-bit Core i7 6700
Processor Intel Core i7 6700
Motherboard Asus Z170M-PLUS
Cooling Corsair AIO
Memory 2 x 8 GB Kingston DDR4 2666
Video Card(s) Gigabyte NVIDIA GeForce GTX 1060 6GB
Storage Western Digital Caviar Blue 1 TB, Seagate Baracuda 1 TB
Display(s) Dell P2414H
Case Corsair Carbide Air 540
Audio Device(s) Realtek HD Audio
Power Supply Corsair TX v2 650W
Mouse Steelseries Sensei
Keyboard CM Storm Quickfire Pro, Cherry MX Reds
Software MS Windows 10 Pro 64-bit
Sadly you can't do much magic with AFR; you can just improve how smoothly it presents.

I think that improving bandwidth between the GPUs and making them render parts of the screen, instead of alternating 1x1x1x1, would be better.

The problem with that approach is that there would be a lot of unnecessary duplicated work on both GPUs within a single frame. With alternate frames, each GPU does separate work for its own frame; only the VRAM contents are duplicated.

Processing parts of the screen for different lighting passes in a deferred engine did prove to be a good idea; if I'm not mistaken, DICE did it in the Frostbite 2 engine using DirectCompute.
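A back-of-envelope model (all numbers invented) of the tradeoff being debated here: split-frame rendering duplicates the geometry work on both GPUs, so AFR usually wins on raw throughput even though split-frame avoids the pacing problem:

```python
G, P = 4.0, 12.0  # ms of geometry and pixel work per frame (hypothetical)

# AFR: two frames in flight, each GPU pays the full cost of its own frame.
afr_fps = 2 / (G + P) * 1000

# Split-frame: one frame at a time, geometry duplicated on both GPUs,
# pixel work split in half.
sfr_fps = 1 / (G + P / 2) * 1000

print(f"AFR: {afr_fps:.0f} fps, SFR: {sfr_fps:.0f} fps")
# → AFR: 125 fps, SFR: 100 fps
```

The gap widens as the geometry share grows, which is roughly why AFR became the default despite its frame-delivery issues.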
 
Joined
Apr 30, 2012
Messages
3,881 (0.83/day)
This image from Guru3D's article illustrates the problem with multi-GPU setups. AMD is simply delivering frames as fast as it can and as soon as it can; NVIDIA is syncing and delaying frames.

http://www.guru3d.com/index.php?ct=articles&action=file&id=3108

I'm curious to know if this is forward thinking on NVIDIA's part for cloud, Tegra, and the Shield project (remote gaming).
Spacing out frames would mean less work for the GPU, with the variance less noticeable, until you start pumping more data through, and we aren't there just yet.

Would it add overall game latency?
Wouldn't more demanding games see higher variance, and at higher resolutions?
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.50/day)
Would it add overall game latency?
Wouldn't more demanding games see higher variance, and at higher resolutions?

Possibly, but I'm pretty sure that that was why G80 had a separate I/O chip, specifically to handle that load and minimize any latency that might be introduced. Eventually that silicon was integrated directly into the GPU itself.


AMD simply killed the AIW line, and that's truly when these problems started for me, personally. ATI used to have awesome capture and display hardware AND software.

And, that visual representation was covered by other articles already. We already know(or at least I know) that the issue is a lack of sync of the frames from the secondary card. AMD is claiming that the "runts" are these frames, and that's why the second card seems useless.
 

brandonwh64

Addicted to Bacon and StarCrunches!!!
Joined
Sep 6, 2009
Messages
19,542 (3.47/day)
I figured this would turn into an argument somewhere along the line. Dave is right with this article he linked us, because I am just now noticing things that my CrossFire setup was doing that vanished when I went to a single 7970.

I know Dave, and he would not have posted this if he had not seen the effects of it himself.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.50/day)
I figured this would turn into an argument somewhere along the line. Dave is right with this article he linked us, because I am just now noticing things that my CrossFire setup was doing that vanished when I went to a single 7970.

I know Dave, and he would not have posted this if he had not seen the effects of it himself.

Making the move from dual 6950/6970 to a single 7970, where the performance is nearly identical, really does present things very differently than any other "upgrade" before.

Considering that the number of "shaders" has been reduced as well, I can't help but feel that it is hard to get an accurate idea of the full picture without actually having the hardware in hand, and swapping between the two configurations. It can be difficult to separate what's changed due to moving to a single GPU, and what's due to the more efficient GPU design.

But, at the same time, I hardly see this as a movement that will change how reviews are done.


In the end, sure, I noticed this a long time ago, and yes, I have talked about it with countless people. Maybe two or three knew right away what I was talking about, while many more had similar configs, so it's always been clear to me that not everyone is going to be affected by this issue: first, because many simply don't run multi-GPU, and second, because not everyone is going to be as sensitive to it as I am. The one thing that you have, Brandon, that others don't, is our countless conversations in TeamSpeak about it. You can now check and see if you see any of the things I did, now that you're going down the exact road I did.

Enjoy the trip. :p
 
Joined
Apr 30, 2012
Messages
3,881 (0.83/day)
Possibly, but I'm pretty sure that that was why G80 had a separate I/O chip, specifically to handle that load and minimize any latency that might be introduced. Eventually that silicon was integrated directly into the GPU itself.


AMD simply killed the AIW line, and that's truly when these problems started for me, personally. ATI used to have awesome capture and display hardware AND software.

And, that visual representation was covered by other articles already. We already know(or at least I know) that the issue is a lack of sync of the frames from the secondary card. AMD is claiming that the "runts" are these frames, and that's why the second card seems useless.

Yes.

I was looking more at the single-card comparison, and the difference on the NVIDIA side between single and SLI.

The mid-range (lower-end still to come) GPU results seem more interesting.

You'd expect adding another card into the mix to lower the variance, or at least maintain that of a single card, but it still increases. Once you go above 1440p resolution, it seems it's a question of how much you want to tolerate.

These results just deter multi-GPU solutions.
 

cadaveca

My name is Dave
Joined
Apr 10, 2006
Messages
17,232 (2.50/day)
These results just deter multi-GPU solutions.

Perhaps. For sure, right now, on the AMD side of things. Perhaps some of the tech and know-how from buying 3dfx years ago has proved valuable here for NVIDIA when dealing with this stuff.


I know, for sure, that AMD is trying to get this fixed. For me, it's very obvious: some drivers "stutter" more than others, those that don't seem to lose all the benefits of the second card, and those in between tend to give me headaches or make me feel ill.


None of that ever happens with a single GPU.


When moving up in resolution, you have to expect that the higher workload is going to affect the timings between frames. That's a given. You can really only accurately compare different resolutions, for this "investigation", if the performance offered at each is similar. You can't ignore that the workload is different.

If a GPU rendered 1080p and 1600p with the same level of performance, that's fine, but it should be quite clear to most users by now that there are different cards from each brand for a reason: not just because they offer different performance, but because they are offerings for users running different resolutions.

You don't need a 7970 to run 720p, and trying to game at 1600p on a 7770 is clearly a mistake. As far as I am concerned, the industry's failure to assign GPUs to specific resolutions really hampers the ability of PC makers to provide well-built and well-balanced configurations. There should be set "rules" for system configs to meet certain performance levels at certain resolutions.
 
Joined
Apr 30, 2012
Messages
3,881 (0.83/day)
Even in the 7970 vs. 680 comparison:

@ 1080p
Going SLI seems only worth it to lower FT.
At the price of adding another card at full price?

@ 1440p
Going SLI loses 50% of its value; FT lowers slightly and variance rises.
Maybe it would make sense if there were an SLI option with a secondary card at 50% of the price, with no outputs, just SLI connectors.
I recall the 3dfx Voodoo series had a setup like that, where you bought the secondary card slightly cheaper.

@ 5760x1080
The only game that ran even remotely decently was Dirt 3.

Seems like just marketing to get you to buy an additional card at full price.
 
Joined
Oct 21, 2005
Messages
7,102 (1.01/day)
Location
USA
System Name Computer of Theseus
Processor Intel i9-12900KS: 50x Pcore multi @ 1.18Vcore (target 1.275V -100mv offset)
Motherboard EVGA Z690 Classified
Cooling Noctua NH-D15S, 2xSF MegaCool SF-PF14, 4xNoctua NF-A12x25, 3xNF-A12x15, AquaComputer Splitty9Active
Memory G-Skill Trident Z5 (32GB) DDR5-6000 C36 F5-6000J3636F16GX2-TZ5RK
Video Card(s) ASUS PROART RTX 4070 Ti-Super OC 16GB, 2670MHz, 0.93V
Storage 1x Samsung 990 Pro 1TB NVMe (OS), 2x Samsung 970 Evo Plus 2TB (data), ASUS BW-16D1HT (BluRay)
Display(s) Dell S3220DGF 32" 2560x1440 165Hz Primary, Dell P2017H 19.5" 1600x900 Secondary, Ergotron LX arms.
Case Lian Li O11 Air Mini
Audio Device(s) Audiotechnica ATR2100X-USB, El Gato Wave XLR Mic Preamp, ATH M50X Headphones, Behringer 302USB Mixer
Power Supply Super Flower Leadex Platinum SE 1000W 80+ Platinum White, MODDIY 12VHPWR Cable
Mouse Zowie EC3-C
Keyboard Vortex Multix 87 Winter TKL (Gateron G Pro Yellow)
Software Win 10 LTSC 21H2
I came close to pulling the trigger on a second 7850... glad I didn't. I don't think it's really optimal to run either CrossFire or SLI; it seems like there are too many inherent problems, especially for AMD. Hopefully AMD will get a fix for this.
 

qubit

Overclocked quantum bit
Joined
Dec 6, 2007
Messages
17,865 (2.85/day)
Location
Quantum Well UK
System Name Quantumville™
Processor Intel Core i7-2700K @ 4GHz
Motherboard Asus P8Z68-V PRO/GEN3
Cooling Noctua NH-D14
Memory 16GB (2 x 8GB Corsair Vengeance Black DDR3 PC3-12800 C9 1600MHz)
Video Card(s) MSI RTX 2080 SUPER Gaming X Trio
Storage Samsung 850 Pro 256GB | WD Black 4TB | WD Blue 6TB
Display(s) ASUS ROG Strix XG27UQR (4K, 144Hz, G-SYNC compatible) | Asus MG28UQ (4K, 60Hz, FreeSync compatible)
Case Cooler Master HAF 922
Audio Device(s) Creative Sound Blaster X-Fi Fatal1ty PCIe
Power Supply Corsair AX1600i
Mouse Microsoft Intellimouse Pro - Black Shadow
Keyboard Yes
Software Windows 10 Pro 64-bit
This is funny. I don't even notice with my 690. :laugh:... oh wait....... :laugh:

690? Good choice there, dude. :cool:

Ya know what I've got? A pair of Asus HD 3850X2 cards I got cheap a few years ago. At the time, I ran them together and singly, and in neither case did the performance even come close to my GTX 285. That's four AMD GPUs getting spanked by one NVIDIA GPU, lol.

Compare them to my GTX 590 and it's total humiliation. :laugh:

What are the chances that this runt frame problem was present even then? ;) Shame I can't run those advanced tests and find out.
 