# [PCPER] Frame rating: what it is, what it does, and why it's important



## cadaveca (Mar 29, 2013)

For those that don't want to read, and appreciate video:











Ryan has done a good job, and at the same time, I'd like to point out the cost of the systems used to test and show this, and the huge amount of work involved. 


It shows really well how Crossfire is useless right now. FPS be damned; nearly any review showing FPS numbers for Crossfire is showing useless, potentially faked numbers, and it's not any reviewer's fault... technically, AMD has been cheating.


*Yeah, all of you that spent money on a second card for Crossfire, did it for nothing...really nothing...for now.*


And THAT is why AMD is going to fix it.




I said a while ago that I give them 4 months... if they don't fix it, I smell a big huge lawsuit, and AMD's demise as a company.

For real. This could kill AMD's graphics division.

Now, the question is..did AMD really know about it? I say yes. Proving that, on the other hand...


----------



## radrok (Mar 29, 2013)

Man I really hope they get this sorted out, they must.

I've been loyal to ATI for years but seriously it's been nothing short of headaches.


----------



## Nordic (Mar 29, 2013)

Is it bad if this makes me wonder how cheap amd gpu's will become?


----------



## cadaveca (Mar 29, 2013)

radrok said:


> Man I really hope they get this sorted out, they must.







			
Ryan said:

> the runt frames are every second frame



= real performance = 50% of reported FPS = single-card FPS. Wow, my reporting for years that the second card in Crossfire seemed useless really was true. Go figure. I apologize, but I cannot help but feel a bit vindicated here.
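To put rough numbers on that reasoning (a made-up sketch, not PCPer's actual math): if runt frames contribute essentially nothing to what you see, the perceived frame rate is roughly the reported rate minus the runt fraction.

```python
# Hypothetical illustration: if every second frame is a "runt" (only a
# sliver of the screen), perceived FPS is about half of what FRAPS reports.
def effective_fps(reported_fps, runt_ratio):
    """Discount runt frames from the reported rate.

    runt_ratio: fraction of frames that are runts (0.5 = every second frame).
    """
    return reported_fps * (1.0 - runt_ratio)

print(effective_fps(80, 0.5))  # a reported 80 FPS behaves like ~40 FPS
```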



			
Ryan said:

> The behavior is present on 50% of titles





Yeah, it's a big issue, and yeah, they'll fix it. I do have faith. Maybe this is why they fired so much of their driver staff.


----------



## MxPhenom 216 (Mar 29, 2013)

Wow! That was a pretty interesting video. I will definitely sit down and read the article as well, maybe during my flight home from Hawaii on Saturday. Thanks for posting this, Dave. AMD needs to get this fixed!


----------



## Dillinger (Mar 29, 2013)

What a great post, very informative


----------



## BiggieShady (Mar 29, 2013)

Ah, finally an update!  I've been following PCPER articles about frame latencies for some time now ...


----------



## Outback Bronze (Mar 29, 2013)

I can see they weren't using vsync. I saw screen tearing. Would this make a difference?

Interesting article, although I'm happy playing Crysis 3 with my 7950 Crossfire. I was getting 20 FPS vsynced all maxed, and now I get 50/60 because of the new 13.3 beta.


----------



## MxPhenom 216 (Mar 29, 2013)

Outback Bronze said:


> I can see they weren't using vsync. I saw screen tearing. Would this make a difference?
> 
> Interesting article, although I'm happy playing Crysis 3 with my 7950 Crossfire. I was getting 20 FPS vsynced all maxed, and now I get 50/60 because of the new 13.3 beta.



They said they will have vsync tests in later parts of the article. Should be in a few weeks.


----------



## Divide Overflow (Mar 29, 2013)

So the extractor program _that nVidia helped write_ shows problems with ATI systems?  Well it's not like nVidia software would do something like hamper performance, disable ATI features or anything else like that.   Oh wait....


----------



## Outback Bronze (Mar 29, 2013)

MxPhenom 216 said:


> They said they will have vsync tests in later parts of the article. Should be in a few weeks.



Yeah, doh, I should have finished watching the entire video. Cheers.


----------



## BiggieShady (Mar 29, 2013)

Divide Overflow said:


> Well it's not like nVidia software would do something like hamper performance, disable ATI features or anything else like that. Oh wait....



That particular software analyzes captured video data (what the gamer sees) and is in no position to affect anything. As you can see in this image, FRAPS can't get this data because it's a software process operating on a different OS layer than the GPU driver:







Even if measured at GPU driver level, it would not be as accurate as video capture analysis.
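For the curious, the capture-side analysis can be sketched roughly like this (my own illustration, not PCPer's actual tool; the colour sequence, data, and the 21-scanline runt cutoff are all assumptions): each rendered frame gets tagged with an overlay colour bar, and counting how many consecutive captured scanlines share a colour tells you how much screen time each frame actually received.

```python
# Sketch of FCAT-style capture analysis. Assumption: an overlay tags every
# rendered frame with a unique colour bar, and the capture card records each
# scanline sent over the cable. Consecutive scanlines with the same colour
# belong to the same frame; very short runs are "runt" frames.
from itertools import groupby

RUNT_THRESHOLD = 21  # scanlines; illustrative cutoff, not necessarily PCPer's

def classify_frames(scanline_colors):
    """scanline_colors: per-scanline overlay colour from the captured video."""
    runs = [(color, sum(1 for _ in run)) for color, run in groupby(scanline_colors)]
    return [
        {"color": c, "scanlines": n, "runt": n < RUNT_THRESHOLD}
        for c, n in runs
    ]

# Simulated capture: frames "A" and "C" get most of the screen,
# "B" and "D" are runts that barely appear.
capture = ["A"] * 520 + ["B"] * 8 + ["C"] * 530 + ["D"] * 6
for frame in classify_frames(capture):
    print(frame)
```

Because this works purely on what came over the cable, nothing running on the test system can massage the numbers.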


----------



## LAN_deRf_HA (Mar 29, 2013)

Divide Overflow said:


> So the extractor program _that nVidia helped write_ shows problems with ATI systems?  Well it's not like nVidia software would do something like hamper performance, disable ATI features or anything else like that.   Oh wait....



If you'd been following along, you'd know all this tool does is confirm what AMD themselves admitted. Their focus has been on fixing stuttering on single cards first, and they say they'll shift focus to multi-GPU in a few months. My guess is that's to coincide with the reference 7990 launch. I don't think they knew they had these problems until very recently, and their prioritizing the more common configurations is to be expected. They'll get there.


----------



## BiggieShady (Mar 29, 2013)

cadaveca said:


> Now, the question is..did AMD really know about it? I say yes. Proving that, on the other hand...



This image proves it for me; it would be hard for the AMD driver team to miss all these missed frames:


----------



## cadaveca (Mar 29, 2013)

BiggieShady said:


> This image proves it for me, it would be hard for AMD driver team to miss all these missed frames
> 
> http://www.pcper.com/files/review/2013-03-25/run.stats__3.png



5760x1080 is the biggest offender, for sure. I've mentioned many times that it is odd that a resolution with roughly 50% more pixels than 2560x1600 performs better than 2560x1600 does, and not at about 66% of 2560x1600's performance, as you'd expect.



It is an interesting topic for discussion. I know AMD can fix it, 1000%, if only because the 7990 is launching. I mean, it explains why they refused to launch a dual-GPU card for so long... they knew dual-GPU wasn't working right.

They did the same with the 5970.


They even mentioned that Crossfire and Eyefinity were not supported together with 5-series cards.


I don't think it's as bad as even I have portrayed it... it's a huge issue, in that it makes me physically ill, and unable to enjoy playing my games when using multi-GPU configurations. In a way, I'm a physical meat-and-bone representation of Ryan's fancy software and hardware. 

Be that as it may, I am now more interested in the why and the when, and not much else. Fixing it isn't a big deal, really. I mean, I fix it daily... by not using my second, third, and fourth GPUs.






The really unfortunate part is that the cost of testing such things is still prohibitive for nearly everyone. SSD RAID 0, TBs of data, Thunderbolt... multiple GPUs... oh look, things I've put on my shopping list for W1zz to get me in the past that I ended up buying myself. Oh well.


OH, and Happy Easter, everyone! Look at the shit the bunny left behind!


----------



## Rowsol (Mar 29, 2013)

Fascinating.


----------



## cadaveca (Mar 29, 2013)

Rowsol said:


> Fascinating.



Meh. Most people seem to not care.


----------



## the54thvoid (Mar 29, 2013)

cadaveca said:


> Meh. Most people seem to not care.



I think it's because this is definitely an area where logical arguments win and juvenile, brand-loyal sabre rattling is pointless. I posted a link to an Anandtech article (in another thread) that also looked at the issue, which coincided with an AMD conference call. AMD have, hand on heart, admitted it all, and that they were relying on a tool for the FPS numbers, while Nvidia started arguing for other measuring tools a few years back.

It's pretty black and white. The usual 'fan'atic posters won't come here.


----------



## tokyoduong (Mar 29, 2013)

And there are people here who will swear up and down that they never have problems with CF, lol.

Just the slight microstutter CF produces alone already gives me headaches. I know others don't experience it, but I do. I really hope they fix this soon, because it doesn't look like a hard fix.


----------



## Xzibit (Mar 29, 2013)

At least PCPerspective got AMD and Nvidia to agree on something:

that this isn't a good way to measure frame latency.


----------



## tokyoduong (Mar 29, 2013)

cadaveca said:


> Meh. Most people seem to not care.



I do, and hence why I don't own any CF systems. I've always felt weird using CF since the 2k series running 2850. That's not even accounting for all the bugs associated with it. It always gives me headaches and makes me not want to play games. Any gaming past an hour starts becoming torturous. I always thought it was probably just my eyes being weird, so I avoided it and didn't complain.

I've set up many CF systems for many people, and they work like they should. Testing them was always a pain for me. Most people didn't think the microstutter was a big deal, and it didn't bother them. So to each his own, but this new discovery just makes my view about CF/SLI a little stronger. It is just too many problems to deal with. I would rather just update my card more often than double up the GPU.

BTW, I don't think this will kill their graphics division. There are very few people who actually run CF. As long as they fix it in time, it shouldn't be much of a problem. Worst case, they send a $50 check to everyone who can prove they bought a CF setup.


----------



## DRDNA (Mar 29, 2013)

Well this is really good news as things will get better and performance will increase and we will be happy!


----------



## Sasqui (Mar 29, 2013)

cadaveca said:


> Meh. Most people seem to not care.



Certainly those with 7xxx CrossFire must.

If all that analysis is correct, I think you're spot on about the 7990 delays.


----------



## DRDNA (Mar 29, 2013)

Sasqui said:


> Certainly those with 7xxx CrossFire must.
> 
> If all that analysis is correct, I think you're spot on about the 7990 delays.



Can you speculate how long this issue has been going on? Is it all the way back to the X800 days?


----------



## Xzibit (Mar 29, 2013)

The only real issue I have with this is that Ryan has gone on record saying Nvidia wrote the overlay for the frame-grabbing for their software, which is also Nvidia's.

So it throws a wrench into the testing...

It would have been better if a third party had been the one writing the software to grab the frames.

The way they are grabbed after the renderer could benefit or cater specifically to Nvidia, since they wrote it.


----------



## KainXS (Mar 29, 2013)

I kinda guessed something was going on since I had my old HD 3870 setup (but still had a 4890 setup too), but it did not seem too bad. Maybe that's why ATI never prioritized it. Does anyone feel the 2900XT had this problem?

I can't see the old X800s and X1900s having had this problem; I had an X1950 setup and it ran beautifully back then.


----------



## cadaveca (Mar 29, 2013)

DRDNA said:


> Can you speculate how long this issue has been going on? Is it all the way back to the X800 days?



Nope. I suppose anyone can now build this test setup, get the software, and test other configurations. Might need to test every card with every driver version.



Kind of a pain in the ass, really.



KainXS said:


> I kinda guessed something was going on since I had my old HD 3870 setup (but still had a 4890 setup too), but it did not seem too bad. Maybe that's why ATI never prioritized it. Does anyone feel the 2900XT had this problem?
> 
> I can't see the old X800s and X1900s having had this problem; I had an X1950 setup and it ran beautifully back then.





Yeah, I never knew how good the 2900XT was, since I had always used it in CFX, never as a single card.


Then one card died.


And it seemed better. I just assumed it was the bad card, until I got my replacement.


----------



## PatoRodrigues (Mar 29, 2013)

Really, really nice.

Sites like Guru3D are already mentioning that they'll be changing their methods to measure frame latency.

FCAT seems like a great tool, good to know that nVidia actually dedicated time and resources for this.

Indeed, fascinating.


----------



## cadaveca (Mar 29, 2013)

PatoRodrigues said:


> they'll be changing their methods to measure frame latency.



Useless. You need a dedicated system that captures what's sent over the cable to the monitor; anything measured inside the system is suspect for not showing actual performance. Frame latency matters, sure, but not in this specific context.


----------



## PatoRodrigues (Mar 29, 2013)

Looking to contribute to this thread, I'll post another recent article about it...

http://www.tomshardware.com/reviews/graphics-card-benchmarking-frame-rate,3466.html

Not PCPER, but look... this is a serious subject, and any information is welcome. I guess.


----------



## erocker (Mar 29, 2013)

So... Fix around July? Read something the other day about that...


----------



## cadaveca (Mar 29, 2013)

erocker said:


> So... Fix around July? Read something the other day about that...



Nope, that's the outside date mentioned for when Crossfire will START to be fixed, not when it'll be 100%. No way to tell when it'll be working right; it could even be earlier.


----------



## Xzibit (Mar 29, 2013)

If only AMD had a forum.

They could pull an Nvidia and have a hacker hack it. Wait 7 months. Minimize microstutter and throttling in drivers, and then bring back the forums after.  

Quick, AMD, make a forum.


----------



## DRDNA (Mar 29, 2013)

I guess my only real worry is that the end result will actually end up lowering the reported performance of already-released chips, while resulting in a truer, better-feeling experience (maybe).


----------



## techtard (Mar 29, 2013)

I haven't used Crossfire in a long time; I ended up selling it because it was too hot and loud to be worth it.

Good to see they admitted they have a problem and are fixing it.



Xzibit said:


> If only AMD had a forum.
> 
> They could pull an Nvidia and have hacker hack it.  Wait 7 months.  Minimize Micro-stutters and Throttling in drivers and then bring back the forums after.
> 
> Quick AMD make a forum



They do have forums, they just don't use them!
I used to lurk in their forums years ago, but there never seemed to be a dev presence. Maybe they shut them down.


----------



## erocker (Mar 29, 2013)

techtard said:


> If only AMD had a forum.





http://forums.amd.com/game/


----------



## DRDNA (Mar 29, 2013)

erocker said:


> http://forums.amd.com/game/



Just skimmed through their forum, and now I remember why I fell in love with TPU!


----------



## Xzibit (Mar 29, 2013)

Found it.

It just makes it sound bad and makes it very suspicious that someone working with Nvidia brought the problem up. He tried to write the software himself but couldn't, and then went to Nvidia for help writing it.










Starts at the 24:00 mark.


It would be better to get an unbiased method of grabbing the screens all along the process. That way you know what the issue in the pipeline is, and whether AMD or Nvidia are doing things differently along the way, for a more comprehensive analysis.


----------



## cadaveca (Mar 29, 2013)

Xzibit said:


> It would be better to get an unbiased method of grabbing the screens all along the process. That way you know what the issue in the pipeline is, and whether AMD or Nvidia are doing things differently along the way, for a more comprehensive analysis.




Or, have another programmer verify the NVidia software, which AMD has done already, it seems, since they admitted the problem is real, and are working to fix it.


I mean, it's not a question of whether this is real or not. AMD said yes, it is.


----------



## Xzibit (Mar 29, 2013)

cadaveca said:


> Or, have another programmer verify the NVidia software, which AMD has done already, it seems, since they admitted the problem is real, and are working to fix it.
> 
> 
> I mean, it's not a question of whether this is real or not. AMD said yes, it is.



I'm not saying there isn't an issue. 

I want to see everything from when it leaves the game engine up to when it's displayed.

Right now it's only about CF/SLI.

I'm thinking beyond that, to comparing single-GPU chips on more than just Windows platforms. There are more players in mobile, so it would be interesting to dissect the differences all along the pipeline.


----------



## cadaveca (Mar 29, 2013)

Xzibit said:


> I'm not saying there isn't an issue.
> 
> I want to see everything from when it leaves the game engine up to when it's displayed.
> 
> ...



Sure, I guess in that aspect, I understand, but I'm pretty sure that this is so basic, really, that there is little reason to be wary of the results.

I'd kind of like more info into what's going on, honestly, but the fact of the matter is that this is something for developers and AMD programmers to deal with, not the end user.


And the only really "new" thing about what's being done is that the video output over the cable is being captured, and HDCP isn't an issue.


----------



## DRDNA (Mar 29, 2013)

cadaveca said:


> And the only really "new" thing about what's being done is that the video output over the cable is being captured, and HDCP isn't an issue.



I really wonder, too, how they kept the HDCP handshake intact.


----------



## PatoRodrigues (Mar 29, 2013)

Those are some really scary numbers, to tell the truth.

And I even got suspicious, because 680 SLI always seems to have absolutely ZERO drops or runts.


----------



## Xzibit (Mar 29, 2013)

PatoRodrigues said:


> Those are some really scary numbers to say the truth.
> 
> And i even got suspicious, because 680 SLI always seems to have absolutely ZERO drops or runts.



That's why Nvidia is buying/sending reviewers one of those setups. It's mentioned at the end of the podcast. They see a PR opportunity; why not take it?


----------



## cadaveca (Mar 29, 2013)

Xzibit said:


> Thats why Nvidia is buying/sending reviewers one of those setups.



I guess we could say that the real cause of the "issue" needs to be identified before you'll be happy, which is acceptable.


I mean, we could go full conspiracy here and say that perhaps the problem is due to Nvidia's influence on developers. Perhaps games have code that is intentionally placed to make this a problem in the first place. Who knows.




And Nvidia is not sending me one of these setups. So it's not like they are just jumping on marketing opportunities here either, since I don't have a single Nvidia product in my house, at all. Nothing but AMD GPUs in MY reviews.


----------



## PatoRodrigues (Mar 29, 2013)

Now i'm really, REALLY curious to see how the reference HD7990 performs.

I think reviewers will have some massive work to do in the next few weeks.


----------



## cadaveca (Mar 29, 2013)

PatoRodrigues said:


> Now i'm really, REALLY curious to see how the reference HD7990 performs.
> 
> I think reviewers will have some massive work to do in the next few weeks.



Nope.



Just Ryan @ PCPER does. His new job is verifying that drivers work properly (whatever "properly" means).





And as long as they are, every other reviewer can just keep on doing what they do.


----------



## Xzibit (Mar 29, 2013)

cadaveca said:


> I guess we could say that the real cause of the "issue" needs to be identified before you'll be happy, which is acceptable.
> 
> 
> I mean, we could go full conspiracy here and say that perhaps the problem is due to Nvidia's influence on developers. Perhaps games have code that is intentionally placed to make this a problem in the first place. Who knows.
> ...



This doesn't affect me, because I don't run Crossfire/SLI.

And if I did, I'd just turn on V-Sync in Crossfire and Adaptive V-Sync in SLI.

Maybe games with frame limiters would help; I don't know, I'm just reading graphs.

One thing is for sure: I would sleep well at night.


----------



## cadaveca (Mar 29, 2013)

Xzibit said:


> Maybe games with frame limiters would help; I don't know, I'm just reading graphs.





			
PCPER said:

> This doesn't happen in every game though



So there's more than one issue here.


Naturally.


----------



## LAN_deRf_HA (Mar 29, 2013)

AMD made it sound like all these issues exist because they didn't know they were there.


----------



## cadaveca (Mar 29, 2013)

LAN_deRf_HA said:


> AMD made it sound like all these issues exist because they didn't know they were there.



Prove they did know. That's gonna be hard. I mean, many enthusiasts have been complaining about "Microstutter" for many years. Did they ever see those complaints?


Did they recognize the problem?


I don't think it's really that important, unless it's a problem they cannot fix.


----------



## DRDNA (Mar 29, 2013)

LAN_deRf_HA said:


> AMD made it sound like all these issues exist because they didn't know they were there.



My guess is they did not know until it was pointed out to them; I doubt the red team would behead themselves, and I am sure they will have a resolution to the issue.


----------



## HammerON (Mar 29, 2013)

Thanks Dave for the info.
I switched to HD 7970s back in November from two GTX 580s, as they are far better GPUs for crunching. I worried about the "microstuttering" that I had been hearing about for years. The last AMD/ATI multi-card setup I had was two 4870s. I am a little surprised at the findings, as I have had no issue with BF3 and only one issue with Crysis 3, where I have to Alt+Tab to get Crossfire to work. So I have been under the assumption (yes, I know better than to assume) that all has been working well, when clearly it is not.
I do not seem to have issues with microstuttering, and I am able to get (at least I think, as I am using FRAPS) 60 to 70 FPS in Crysis 3 MP running at 2560x1600 with vsync and motion blur off and SMAA low (1x), while everything else is set to Very High (AF 16).
I have looked at Afterburner on many occasions to see what it was saying as far as GPU usage and this is what I normally see:






So from what I am reading in this thread, these GPUs are underperforming and I am not getting the most for my money. Sad indeed.


----------



## cadaveca (Mar 29, 2013)

HammerON said:


> So from what I am reading these GPU's are underperforming and I am not getting the most for my money. Sad indeed



Well, on the other hand, does a single GPU deliver the same experience for you?

Because for me, it does, even slightly better on occasion.



However, if we take into account things like Blu-ray's 24 FPS being good enough for many, and we say that your 70 FPS is actually 35, it is understandable that many users would just never notice.

What kind of sucks, to me, is that it means all the extra power usage is for nothing as well, never mind the cost of the cards.

I am also pretty sure that GPU-Compute is handled differently.


----------



## radrok (Mar 30, 2013)

Basically that's what I've experienced too in the past years.

If a game was smooth on single GPU then it was no problem on multi GPU.

As soon as the single GPU configuration wasn't enough everything started stuttering in multi GPU config.

So that's why triple 30" was unplayable most of the time?


----------



## Xzibit (Mar 30, 2013)

Let the PR war begin!!!!

Nvidia's FCAT


----------



## PatoRodrigues (Mar 30, 2013)

> We’re proud of the work that we’ve put into this – and we think it can help gamers get the experience they’re paying for. So we’re opening up our FCAT solution, making the scripts and software associated with FCAT freely-modifiable and redistributable. The technical press has already dug in, and the results have been dramatic.
> 
> Our hope: that third-party apps can replicate and replace our tools, giving gamers what they need to be sure they’re getting all of the graphics quality they’re paying for.



I'm surprised, actually. I thought they would take advantage of the tool. I was totally wrong.

And that's great.


----------



## Xzibit (Mar 30, 2013)

They already put the last part to the test.



> Our hope: that third-party apps can replicate and replace our tools, giving gamers what they need to be sure they’re getting all of the graphics quality they’re paying for.



You can provide support, but Nvidia won't open-source it. So expect all the Nvidia AIB partners with benching and monitoring tools for the GPU to implement support. 

I suspect this is how the PR will go. Nvidia will dump it on reviewers with the kits, like they already have, and will encourage AIB partners to implement support. Then they'll run a marketing campaign and make it seem they had nothing to do with it. Sneaky but effective.


----------



## Zubasa (Mar 30, 2013)

Xzibit said:


> They already put the last part to the test.
> 
> 
> 
> ...


Even if they let people know they have something to do with the tool, as long as the results are true, they are in a good spot.
After all, whose damn fault is it that Crossfire has problems?


----------



## Xzibit (Mar 30, 2013)

It's AMD's, of course.

But Nvidia's FCAT is just FRAPS at the other end of the pipeline. We need to know more about what goes on in between at each step for a better analysis.

FRAPS lets you know how the game engine is spitting out frames.

Nvidia's FCAT tells you how they are displayed at the end of the pipeline.

Game Engine is  X--X--X--X--X

GPU 1 displays   X--X-----X--X

GPU 2 displays   X-X-X-X-X

You need a better understanding of how it's being interpreted along the way. GPU 2 can look smoother but the timing is off, while GPU 1 might be missing something but its timing reflects the game's intention.

The TechReport did a better job combining the two.
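One crude way to quantify the difference between those two patterns (a sketch with made-up numbers, not FCAT's or TechReport's actual metric) is the spread between consecutive frame times: two configurations can show the same average FPS while pacing frames very differently.

```python
# Crude microstutter indicator: how uneven are consecutive frame times?
# All frame-time values below are invented for illustration.
def pacing_spread(frame_times_ms):
    """Mean absolute difference between consecutive frame times (ms).

    0 means perfectly even pacing; larger values mean more visible stutter.
    """
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return sum(deltas) / len(deltas)

even   = [16.7, 16.7, 16.7, 16.7, 16.7]  # smooth single-GPU-style pacing
uneven = [8.0, 25.0, 8.0, 25.0, 8.0]     # alternating AFR-style pacing

print(pacing_spread(even))    # 0.0
print(pacing_spread(uneven))  # 17.0
```

Running this metric on both the game-engine-side (FRAPS) and display-side (capture) timestamps is roughly what "combining the two" buys you: you can see where in the pipeline the pacing falls apart.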


----------



## Zubasa (Mar 30, 2013)

Xzibit said:


> You need a better understanding of how it's being interpreted along the way. GPU 2 can look smoother but the timing is off, while GPU 1 might be missing something but its timing reflects the game's intention.
> 
> The TechReport did a better job combining the two.


More information is always nice, but to be honest, now that AMD admits it's a problem on their hands, I am more interested in when they will get a fix.


----------



## cadaveca (Mar 30, 2013)

Zubasa said:


> More information is always nice, but to be honest now that AMD admits it a problem on their hand I am more interest in when will they get a fix.



Soon, I suppose. I understand that AMD even has an open offer for a job to any programmer that can fix this immediately. Interested parties should contact Justin Boggs @ AMD. If you don't know how to get a hold of him, and really want that job, send me a PM.


Now, I know, 100%, that I told AMD staff that this was an issue, but that staff member may have just kept it to themselves. I also told W1zz that I thought this was happening many months ago, too. But when I told people about this problem, the responses were rather... well... bland. I approached every type of user, from general public to contacts at hardware companies, and they all ignored the issue.



We heard a few months ago that AMD has issues with memory management and that they were working to fix it (the expected release of that driver was NOW, but there is no such driver yet; instead, we get more information and another time extension). 


We also have a 7990 launch imminent. Or not. It seems to me that this driver problem is what held AMD back from releasing the 7990, and I am pretty sure that even if AMD gets the 7990 working right, that does not mean single-card Crossfire will magically be fixed, either.


All those "7990s" already sold by PowerColor and ASUS... don't work right. That's really funny.


----------



## Xzibit (Mar 30, 2013)

cadaveca said:


> We also have a 7990 launch imminent. Or not. it seems to me, that this driver problem is what held AMD back from releasing the 7990, and I am pretty sure that even though AMD might get 7990 working right, that does not mean magically single-card Crossfire will be fixed, either.
> 
> 
> All those "7990's" already sold by Powercolor and ASUS...don't work right.  That's really funny.



Whoever bought a 7990 should have been paying closer attention. 

In a Hexus interview with PowerColor last year, there were hints of issues with the original concept
(@ the 4:00 mark):











Here is something interesting on Nvidia FCAT

Kyle_Bennett, HardOCP Editor-in-Chief, posted on it:



> Again this comes down to interpretation of the data and what is being compared in any review. If CrossFire was no better than a comparable single GPU card, then real world gaming testing of highest attainable settings and resolutions would expose this easily. Again, it is why we were first in the industry to start this tremendously resource intensive testing years ago.
> 
> We have been talking to NVIDIA about frametime testing and collection for a long time now and there is good information back from inside the NVIDIA organization that HardOCP GPU reviews was the catalyst for this coming about. We had the opportunity to help develop the program tools with NVIDIA but chose not to. PCPer has put an incredible amount of time and money into this program that we were simply not comfortable with spending. PCPer has done a great deal of needed work on this with NVIDIA, which is commendable, but I am not sure data collection on this front will prove to be the end all be all in GPU reviews. It all still comes down to evaluating the end user gaming experience and how well the hardware allows you to achieve your wants and needs on this front. Frame time data collection will never be something that any users can use at home easily so it will never be more than a review data point. Focus on the user experience will still have the most impact on video card sales making sure the end user gets what he wants and needs.


----------



## mastrdrver (Mar 31, 2013)

I don't get it. If the maximum resolution of the capture card is 2560x1440 at 60hz, then how is he capturing Eyefinity/Surround resolutions?

Is the capture card only 1 of the 3 monitors?

I find this odd since just about everyone who has experienced both nVidia and AMD multi monitor solutions always say to go with AMD.


----------



## newtekie1 (Mar 31, 2013)

mastrdrver said:


> I don't get it. If the maximum resolution of the capture card is 2560x1440 at 60hz, then how is he capturing Eyefinity/Surround resolutions?
> 
> Is the capture card only 1 of the 3 monitors?



Basically, yes.  He only has to capture the one monitor with the color bar.



mastrdrver said:


> I find this odd since just about everyone who has experienced both nVidia and AMD multi monitor solutions always say to go with AMD.



Yeah, but they only say that because the AMD solution has more memory (typically 3 GB vs. 2 GB) and was giving a better framerate. But they were basing their framerate numbers on FRAPS or some similar program, which obviously isn't giving the real picture.


----------



## redeye (Mar 31, 2013)

cadaveca said:


> Meh. Most people seem to not care.



yes!... 

(A slight bit off-topic WRT Crossfire:)
How many people care that Second Life runs better on Nvidia cards compared to AMD cards... (and Second Life is "CPU bound"... and it's OpenGL, and runs on Linux!...)

It annoys me that a GT 640/GTX 650 runs Second Life as fast as or faster than a 7970...

Well, this issue is in the same league as why WoW runs 50% faster on an Nvidia card in the same price range as the AMD cards...


TL;DR: just watch... a few months after the PS4 is released, all of these AMD problems will have disappeared... meaning it will take about a year for these Crossfire/GPU problems to get fixed.


----------



## Xzibit (Mar 31, 2013)

The Nvidia FCAT testing method is worse than FRAPS.  

The misunderstanding and interpretation of it is just odd to me. I wouldn't be surprised if it came back to bite the four known sites Nvidia sent these test kits to.

To be clear:
Nvidia's FCAT surfaces a whole new issue, not the FRAPS-tested one nor the AMD issue they were already addressing.


----------



## newtekie1 (Mar 31, 2013)

Xzibit said:


> The Nvidia FCAT testing method is worse then FRAPS.



Why exactly?  

It seems to me that reading the framerate the user actually sees would be a far better method and far more informative to the user than reading the framerate the game engine generates long before it ever actually makes it to the user.


----------



## Xzibit (Mar 31, 2013)

newtekie1 said:


> Why exactly?
> 
> It seems to me that reading the framerate the user actually sees would be a far better method and far more informative to the user than reading the framerate the game engine generates long before it ever actually makes it to the user.



Neither gives a real picture of what's going on.  FRAPS tries to give a static picture at a certain point, but even those vary tremendously depending on system setup.  FCAT combines data from 3 different points: a time interval after the data has gone through the GPU render, and a time cap for the overlay prior to entering the queuing process, much like FRAPS.

If the goal is to measure output frames then FCAT should dump the overlay capture.  The time interval after the data has gone through the GPU render would serve best, without imposing data prior to the queuing process.


----------



## mastrdrver (Mar 31, 2013)

newtekie1 said:


> Basically, yes.  He only has to capture the one monitor with the color bar.



The problem I have is that if it is this problematic (as Ryan suggests), then where are the scores of people crying and screaming with this setup? I've yet to hear of anyone complaining about this (the blank screen).



> Yeah, but they only say that because the AMD solution has more memory(typically 3GB vs. 2GB) and the AMD solution was giving better framerate.  But they were basing their framerate numbers on FRAPs or some other similar program, which obviously isn't giving the real picture.



The reasons I've been given have always been a better experience on the AMD side. Never has the reason been frame buffer size or framerate.


----------



## newtekie1 (Mar 31, 2013)

Xzibit said:


> Neither gives a real picture of what's going on.  FRAPS tries to give a static picture at a certain point, but even those vary tremendously depending on system setup.  FCAT combines data from 3 different points: a time interval after the data has gone through the GPU render, and a time cap for the overlay prior to entering the queuing process, much like FRAPS.
> 
> If the goal is to measure output frames then FCAT should dump the overlay capture.  The time interval after the data has gone through the GPU render would serve best, without imposing data prior to the queuing process.



That would only be an issue if the overlay capture were happening in software; it isn't, the overlay is captured after it is output to the monitor.



mastrdrver said:


> The problem I have is that if it is this problematic (as Ryan suggests), then where are the scores of people crying and screaming with this setup? I've yet to hear of anyone complaining about this (the blank screen).
> 
> 
> 
> The reasons I've been given have always been a better experience on the AMD side. Never has the reason been frame buffer size or framerate.



Most people just won't notice. They'll look at their framerate counter and assume everything is peachy because it is reading more than what they had with a single card.  You really have to analyze it to notice that every 3rd frame is basically being wasted with AMD's crossfire.  Every 3rd frame is being rendered, so it counts towards the framerate reported by software, but it actually isn't being displayed to the user.

Everything I've seen always pointed to more memory on the AMD side and better framerates, which a lot of idiots seem to equate to a better experience, but this kind of proves that those idiots don't have a clue.
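Since a software counter tallies every rendered frame while runts contribute almost nothing visible, the gap between reported and perceived FPS is easy to sketch. A toy Python example follows; the scanline counts and the runt threshold are invented for illustration, not PCPer's exact numbers:

```python
# Hypothetical sketch: why software FPS overstates what the user sees when
# runt frames are present. Each entry is the number of scanlines a frame
# occupied on a 1080-line display during one second of capture.
RUNT_THRESHOLD = 21  # scanlines; PCPer-style cutoff, value assumed here

def effective_fps(scanlines_per_frame, runt_threshold=RUNT_THRESHOLD):
    """Count only frames that contributed a visible slice of the display."""
    shown = [s for s in scanlines_per_frame if s >= runt_threshold]
    return len(shown)

# Alternating full frames and runts, as captured over one second:
frames = [700, 15, 690, 12, 710, 18] * 10   # 60 "rendered" frames

print(len(frames))            # software counter reports 60 FPS
print(effective_fps(frames))  # only 30 frames actually visible
```

With every second frame a runt, the software counter reads double what the display actually shows, which matches the "50% reported FPS" point made earlier in the thread.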


----------



## Xzibit (Mar 31, 2013)

newtekie1 said:


> That would only be an issue if the overlay capture was happening in software, it isn't, the overlay is captured after it is output to the monitor.



Might want to read the article again.









> Following that is t_present, the point at which the game engine and graphics card communicate to say that they are ready to pass information for the next frame to be rendered and displayed.  What is important about this time location is that this is where FRAPS gets its time stamps and data and also where the overlay that we use for our Frame Rating method is inserted.





> NVIDIA was responsible for developing the color overlay that sits between the game and DirectX (in the same location of the pipeline as FRAPS essentially) as well as the software extractor that reads the captured video file to generate raw information about the lengths of those bars in an XLS file.


----------



## qubit (Apr 1, 2013)

CrossFire looks pretty craptastic from what I've seen in that PC Per video. Those problems shouldn't have been there in the first place, let alone waiting 120 days or so for a fix! It doesn't look like AMD makes any good products nowadays, does it?

I'm glad that I've stuck with nvidia and intel for the last few years. I bought a GTX 590 dual GPU card recently for a good price and I've not noticed any problems with it with the little bit of gaming I've done on it so far.

Are you gonna go nvidia now, Dave?


----------



## mastrdrver (Apr 1, 2013)

newtekie1 said:


> Most people just won't notice. They'll look at their framerate counter and assume everything is peachy because it is reading more than what they had with a single card.  You really have to analyze it to notice that every 3rd frame is basically being wasted with AMD's crossfire.  Every 3rd frame is being rendered, so it counts towards the framerate reported by software, but it actually isn't being displayed to the user.
> 
> Everything I've seen always pointed to more memory on the AMD side and better framerates, which a lot of idiots seem to equate to a better experience, but this kind of proves that those idiots don't have a clue.



Maybe, but if the problem (in Eyefinity) is as bad as Ryan says, then I have an extremely hard time believing most people would just play it off after looking at their frame rates.

On another note there was an interesting comment over on the B3D forum suggesting that the problem may lie with hyperthreading on the Intel processor. There was an article referenced that showed little frame time variance between an IVB Pentium and i3 with the only difference being the i3 has HT. When the article used an IVB i7, frame times started going all over the place (the graph was with BF3 and the GTX 650 Ti). It was suggested that since the i7 shouldn't be as stressed as say the i3, then Windows 7 may be shuffling core parking on the processor. Since nVidia has hardware to get frames timed right (if this theory is correct), then it would be less susceptible to any of this.

I know from my own experience that I see a solid 20 fps drop with my two 5870s in BF3 mp when I enable HT. Even when I reduce in game settings to make the frame rate higher, it doesn't feel the same even though the frame rate is showing the same. I have not turned HT back on since.

This might also explain why the Tom's Hardware review showed "fewer" problems in their Crossfire setup, since they used an i5 without HT. At least in their review, the second card did not become useless, even though PCPer used some of the same games that Tom's did and PCPer showed far different results with the 7970 (as compared to the 7870 with Tom's).


----------



## qubit (Apr 1, 2013)

mastrdrver said:


> Maybe, but if the problem (in Eyefinity) is as bad as Ryan says, then I have an extremely hard time believing most people would just play it off after looking at their frame rates.
> 
> On another note there was an interesting comment over on the B3D forum suggesting that the problem may lie with hyperthreading on the Intel processor. There was an article referenced that showed little frame time variance between an IVB Pentium and i3 with the only difference being the i3 has HT. When the article used an IVB i7, frame times started going all over the place (the graph was with BF3 and the GTX 650 Ti). It was suggested that since the i7 shouldn't be as stressed as say the i3, then Windows 7 may be shuffling core parking on the processor. Since nVidia has hardware to get frames timed right (if this theory is correct), then it would be less susceptible to any of this.
> 
> ...



We really need to see PC Per run their tests with HT on and off. I'm sure they've thought of this by now and people have suggested it in their forums.

HT is one of those things that can hurt performance in some cases and this might be one of them, due to the realtime nature of gaming graphics rendering.


----------



## newtekie1 (Apr 1, 2013)

Xzibit said:


> Might want to read the article again.
> 
> http://www.pcper.com/files/imagecache/article_max_width/review/2013-03-25/howgameswork.jpg



Yes, I read it, and the parts you quoted confirm what I said.  They aren't taking any readings from t_present; that is just where they insert the overlay on each frame the engine spits out.  Their readings are taken at the end of the line, at what the user actually sees.  This is a far better method than FRAPS.



mastrdrver said:


> Maybe, but if the problem (in Eyefinity) is as bad as Ryan says, then I have an extremely hard time believing most people would just play it off after looking at their frame rates.



I don't. I remember reading an article where they took a bunch of people that said they couldn't stand playing games at anything less than 60FPS, turned the framerate counter off, and limited the games to 30FPS, and 90% of the people that supposedly could tell the difference when a game was below 60FPS said the game felt totally smooth to them.

Some people obviously had to complain, otherwise we wouldn't have multiple websites testing the complaints, we wouldn't have nVidia developing a tool to test it (arguably just to make their competition look bad), and Dave's been complaining about it for a while now.


----------



## cadaveca (Apr 1, 2013)

newtekie1 said:


> Dave's been complaining about it for a while now.



And at the same time, I can only "share war stories" with a couple of people on the same level. Most don't seem to be as sensitive to FPS as I am. I literally said to W1zz that it was like the secondary card was doing the work, but its output never got displayed (I only remember this because W1zz and I don't talk about random stuff too often, mostly TPU work).


But ask any other Crossfire user here, bar one or two, and they all enjoy their systems.




qubit said:


> We really need to see PC Per run their tests with HT on and off. I'm sure they've thought of this by now and people have suggested it in their forums.
> 
> HT is one of those things that can hurt performance in some cases and this might be one of them, due to the realtime nature of gaming graphics rendering.





HT does NOT play a big role in this. It might play a role in *INPUT latency*, but after dealing with this for YEARS, and people telling me it's not a real problem, blah, blah, I've done a tonne of testing and research into this.

I have both a 3570K and a 3770K to test that theory, actually. There was no reason for me to buy the 3570K at all, except for that. I did my testing, and now that $200 chip sits on the shelf, since I don't need it for any other reason.

I also bought an i5 760 and an i7 870.

I tend to test that whole "HT causes lag problems" thing with every generation, and I have found it to be not true. I also find disabling HT doesn't affect normal usage temps, either, but many people report that, too. So whatever.


----------



## qubit (Apr 1, 2013)

cadaveca said:


> HT does NOT play a big role in this. THAT might play a role in *INPUT latency*, but after dealing with this for YEARS, people telling me it's not a real problem, blah, blah, I've done a tonne of testing and research into this.
> 
> I have both 3570K and 3770K to test that theory, actually. No reason for me to buy 3570K, at all, except for that. I did my testing, and now that $200 chip sits on the shelf, since I don't need it for any other reason.
> 
> ...



Ok, it's good to know that HT isn't the culprit here and I trust your testing.

I did see something years ago on Ars or somewhere about HT reducing performance in certain situations, though. It could even have been in the P4 era. It's so long ago that I don't remember any details or which applications they were talking about.

For the record, I keep HT on and have had zero problems with it.


----------



## cadaveca (Apr 1, 2013)

qubit said:


> For the record, I keep HT on and have had zero problems with it.



There have been numerous titles that have had issues with HT on launch, that cannot be denied, but that's a coding issue, not a hardware issue; it's the software running badly on HT cores, not a driver issue.


That's part of the problem in dealing with and explaining this issue, since there are many other things that can cause similar behavior. Separating one from the other can be rather difficult.

That's what makes this so hard for AMD to solve quickly...eliminating all the other issues that might be present may take some time. But since they've already confirmed this problem, and said they are already working to fix it, I'm not that worried about it, to be honest. I will, however, be selling off my extra AMD VGAs, this week most likely. I need to get some NVidia GPUs.


----------



## Xzibit (Apr 1, 2013)

newtekie1 said:


> Why exactly?
> 
> It seems to me that reading the framerate the user actually sees would be a far better method and far more informative to the user than reading the framerate the game engine generates long before it ever actually makes it to the user.





Xzibit said:


> If the goal is to measure output frames then the FCAT should dump the overlay capture.  The time interval after the data has gone through the GPU render would serve best without imposing data prior to the queing process.





newtekie1 said:


> Yes, I read it, and the parts you quoted confirm what I said.  They aren't taking any readings from the T_present, that is just where they are inserting the overlay on each frame the engine spits out.  Their readings are taken at the end of the line, what the user actually sees.  This is a far better method than FRAPs.



I think we're saying the same thing in two different ways.

The overlay frame tagging doesn't serve a purpose for the end count.  Comparing it to the final frames output does.


----------



## cadaveca (Apr 1, 2013)

Xzibit said:


> I think we're saying the same thing in two different ways.
> 
> The overlay frame tagging doesn't serve a purpose for the end count.  Comparing it to the final frames output does.



FCAT is required because of how video capture takes place. That overlay makes it very easy to quickly recognize what's going on, but at the same time, I do feel there is a better way to deal with this, and it's something that has to be done by the developer of every single title.

And again, since AMD admitted it is a problem, I have no lasting issues with this method of testing, and neither does AMD, it seems.


----------



## Xzibit (Apr 1, 2013)

cadaveca said:


> FCAT is required because of how video capture takes place. That overlay makes it very easy to quickly recognize what's going on, but at the same time, I do feel there is a better way to deal with this, and it's something that has to be done by the developer of every single title.
> 
> And again, since AMD admitted it is a problem, I have no lasting issues with this method of testing, and neither does AMD, it seems.



We aren't talking about FCAT as a whole, we are talking about output count.

I don't see overlay frame tagging as useful in that sense. You can take a count anywhere in the process; doing it closest to the user would be best.


----------



## cadaveca (Apr 1, 2013)

Xzibit said:


> We aren't talking about FCAT as a whole, we are talking about output count.
> 
> I don't see overlay frame tagging as useful in that sense. You can take a count anywhere in the process; doing it closest to the user would be best.



Closest to the user would be over the cable to the monitor, which is where the displayed frames are actually captured. This is what is being done.


However, not all rendered frames are displayed to the user, so you are right: capturing data about what is shown to the end user is most relevant. However, you need to know what actually makes up what the end user sees, so you have to do something like what is done here with FCAT to find that out.


The overlay is required to identify what makes up the frames that are displayed to the user, which is obviously not a simple thing of "frameX was rendered, frame X gets displayed."

It's more like frames x, y, z, and t were rendered; frame x made up 60%, frame y was 2%, frame z was dropped, and frame t was 38%. That's four frames making up one frame the end user sees, and if only the data passed over the cable is captured, you'd have no idea where it came from.


The only way to properly find that out is to produce the overlay either as the frame is rendered by the 3D engine, or immediately after it is 100% complete. 


Honestly, what's going on is a very complex subject, and armchair research isn't going to help much here. After literally dealing with this issue for years, and people saying it's not real, it's something else, blah blah blah, there's going to be very little that will sway me personally in any other direction.


You can literally go back through years of posts here on TPU and find me complaining and talking about this problem. Years.
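The x/y/z/t example above can be written out mechanically. A hypothetical Python sketch (frame IDs, scanline counts, and the 1080-line display are all invented for illustration) of how the overlay segments on one captured refresh identify each rendered frame's contribution:

```python
# Hypothetical sketch: one captured display refresh is made up of slices of
# several rendered frames, each identified by its color overlay bar.
# The IDs and percentages below mirror the invented x/y/z/t example.

def composition(segments, total_scanlines=1080):
    """segments: list of (rendered_frame_id, scanlines) read off the overlay."""
    return {fid: round(100 * lines / total_scanlines, 1)
            for fid, lines in segments}

def dropped(rendered_ids, segments):
    """Rendered frames that never reached the display at all."""
    shown = {fid for fid, _ in segments}
    return [fid for fid in rendered_ids if fid not in shown]

segs = [("x", 648), ("y", 22), ("t", 410)]    # 60% / 2% / 38% of 1080 lines
print(composition(segs))                      # {'x': 60.0, 'y': 2.0, 't': 38.0}
print(dropped(["x", "y", "z", "t"], segs))    # ['z']
```

Without the overlay tags there would be no way to map a slice of the captured image back to the rendered frame it came from, which is the whole point being made here.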


----------



## tokyoduong (Apr 1, 2013)

So FRAPS tells you how smooth the engine spits out the action.

FCAT tells you how smooth frames are delivered after the engine is done with everything and time the frame output.

There are flaws in both methods. One tells you how smooth the game is and the other tells you how smooth the frames are. It's dumb to take one over the other. 

FCAT's flaw shows when games and drivers do this:

engine output
x xxxx   x     x    x    xx

frame output
x  x  x  x  x  x  x  x  x  x

As you can see, NVIDIA or AMD could game FCAT with a band-aid fix to make themselves look better.
Smooth frames =/= smooth gaming experience. These guys just need to fix their freaking drivers to deliver both frames and action smoothly. Even at the cost of lower fps, I will take it. I can watch movies at 24 or 30 fps, so I'm sure I can deal with a smooth gaming experience at only 30 or 60 fps.

Otherwise, their only option is to make it so retardedly fast that the human eye can't possibly see it. But then, even when erratic frames are not perceivable because my brain corrects them, it will still give a lot of people, like me, plenty of headaches even though I can't tell.

Hate to say it, but a closed platform like consoles seems to be the wave of the future for mainstream trouble-free gaming. PC gaming is/has become the early adopter platform.
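The two rows in that diagram can be turned into a quick sanity check. A rough Python sketch (all timestamps invented) showing how perfectly even display intervals can coexist with clumpy engine timestamps:

```python
# Toy model of the engine-output vs frame-output diagram: the driver can pace
# frames so the *display* intervals look perfectly even while the *engine*
# timestamps the frames were simulated at are clumped together.
from statistics import pstdev

engine_times  = [0, 5, 10, 15, 20, 60, 75, 80, 85, 90]   # clumpy game time (ms)
display_times = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]  # smoothed output (ms)

def jitter(ts):
    """Standard deviation of frame-to-frame intervals, in ms."""
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    return pstdev(intervals)

print(jitter(display_times))  # 0.0, an FCAT-style capture sees perfect pacing
print(jitter(engine_times))   # large, the game's sampled motion is jerky
```

This is exactly the band-aid worry: evening out only the output side makes the FCAT graph beautiful while the animation the engine sampled is still uneven.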


----------



## Xzibit (Apr 1, 2013)

tokyoduong said:


> Else, their only option is to make it so retardedly fast that the human eye can't possibly see. But then, erratic frames are not perceivable because my brain corrects it so it will give a lot of people, like me, plenty of headaches even though I can't tell.



That's what FRAPS was telling us before. FCAT just confirms it and introduces variance.

A disruption coming out of the game engine itself, due to whatever issues, will cause a visual overlap in frames being displayed.

That's a different issue than what FCAT is trying to convey.

AMD knew that was an issue and was working on it through Microsoft GPUView.


----------



## newtekie1 (Apr 1, 2013)

Xzibit said:


> Were arent talking about FCAT as a whole we are talking about output count.
> 
> I dont see overlay frame tagging useful in that sense. You can take count anywhere in the process just doing it closes to the user would be best.



How do you expect the capture card and software to detect runt and dropped frames without some kind of pattern overlaid on each frame?

Output count is not what we are talking about; we are talking about the perceived framerate that the user sees.


----------



## the54thvoid (Apr 1, 2013)

Xzibit said:


> Thats what FRAPS was telling us before. FCAT just comfirms it and introduces variance.



I can't tell if you're on the side of FCAT or not, so please do not take this post as a retort to you. 

Fraps is useless for setups that have latency issues.  I speak from direct experience.  There are also issues with benchmarks, though I don't know if they use the same methods that Fraps relies on.

My 3DMark scores in both editions were higher on my crossfire setup, yet there was obvious stutter.  The initial scenes were all fine but the final mix including cpu physics was a bitch to watch, especially on 3DMark 2013.
Despite a lower score, my single card was measurably smoother.

Now I'm not in total agreement with certain aspects.  I'm still adamant my BF3 experience on crossfire was perfect.  My single card offers no better a visual feast (or worse for that matter).
Crossfired 7970's on Tomb Raider were more juddery than a single card despite higher fps, likewise on Crysis 3 (though that was much harder to call).

Anything that tells us what the end user sees is far more relevant than what Fraps alone tells us.  Like I say, from direct experience, high fps numbers are irrelevant if the picture isn't butter smooth, so if Fraps isn't telling me what I'm seeing, it's not very useful.

And to throw a cat in amongst the pigeons, FCAT confuses me....


----------



## Xzibit (Apr 1, 2013)

newtekie1 said:


> How do you expect the capture card and software to detect runt and dropped frames without some kind of pattern overlayed on each frame?
> 
> Output count is not what we are talking about, we are talking about perceived framerate that the user sees.





newtekie1 said:


> It seems to me that reading the framerate the user actually sees would be a far better method and far more informative to the user than reading the framerate the game engine generates long before it ever actually makes it to the user.



Output count was what I was talking about. I see you're talking about two different things.  Overlay tags are inserted to be compared.  I was saying they're useless if you're just counting output frames.  Obviously you want the comparison.

Just take a frame count.
1.) As it leaves the game engine
2.) As it leaves the Video Card

Preferably you take it at several stages in the pipeline
1.) Game engine
2.) Direct X
3.) GPU
4.) Output


If perceived framerate were the issue, we'd all be playing at the required amount and anything more would be useless.  The issue is consistency.

Right now FRAPS and FCAT can't even agree on that.
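One way to make the consistency point concrete: two probes can agree on average FPS and still disagree badly on consistency. A toy Python sketch with invented frame times, not real FRAPS or FCAT data:

```python
# Toy illustration: average FPS hides inconsistency that a percentile
# frame-time metric exposes. Both probes are fed invented numbers.
def avg_fps(frame_times_ms):
    """Average framerate implied by a list of frame times in milliseconds."""
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def worst_percentile(frame_times_ms, pct=0.95):
    """Frame time that 95% of frames beat, a simple consistency metric."""
    s = sorted(frame_times_ms)
    return s[int(pct * len(s)) - 1]

fraps_probe = [10, 30, 10, 30, 10, 30, 10, 30, 10, 30]  # jittery at t_present
fcat_probe  = [20] * 10                                  # smooth on the cable

print(avg_fps(fraps_probe), avg_fps(fcat_probe))   # both 50.0 FPS
print(worst_percentile(fraps_probe))               # 30 ms
print(worst_percentile(fcat_probe))                # 20 ms
```

Counting frames at several stages, as suggested above, would tell you *where* in the pipeline the consistency is being lost rather than just whether the final average looks healthy.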



the54thvoid said:


> And to throw a cat in amongst the pigeons, FCAT confuses me....



It is confusing.

I'm not against what FRAPS or FCAT does, but how it's being interpreted to mean other things than what it does.  That's my point.


----------



## Xzibit (Apr 2, 2013)

I'm just wondering if all this time he learned how to bench from Nvidia.  j/k










As you can tell I'm not good at drawing straight arrows 







Wondering if that's the reason he stopped posting results: because he realized he's not properly testing, or worse, if he was.  In which case it opens the flood gates to more questions.


----------



## GC_PaNzerFIN (Apr 2, 2013)

I have been saying this since the GF8 and HD 2900 series. Guess how many believed me then? 

All the hatred from fanboys and the like that I have had to deal with, when they have been blindly defending the company and telling me there is no problem LA LA LAAA.


----------



## cadaveca (Apr 2, 2013)

Xzibit said:


> Wondering if thats the reason he stopped posting results cause he realized hes not properly testing or worse if he was.  In which case opens the flood gates to more questions.




Did you not know that AMD and NVidia deliver completely different images and colors in games? They basically render in opposite ways, so you cannot directly compare NVidia quality vs. AMD quality, although this has been done many times over through the years. Every time someone checks visual quality, it's different.


It has nothing to do with testing methods. You're the only one that is concerned about THAT.



GC_PaNzerFIN said:


> I have been saying about this since GF8 and HD 2900 series. Guess how many believed me then?
> 
> All the hatred of fanboys and like I have had to deal with when they have been blindly defending company and telling me there is no problem LA LA LAAA.



Yeah, I know you and I have a similar story, and a similar history of systems built, too. We've even talked about it in the past. And yeah, it seems to have started with DX10; DX9 Crossfire was great, and still is.

The truth of the matter is that those of us who spend what we spend on rigs, and build the rigs we build, are a minority, and most users haven't even got a second card, never mind a high-end system built a week after parts have been released to the public. That's the failure of buying into tech early...you get all the problems, too. The problems aren't that big of a deal, honestly...I expect that...it's the users without any experience with high-end hardware commenting on an issue they have no experience with that are the most entertaining. 


All I see this resulting in is that a problem I have had for years is now identified, and that it can be fixed. This news isn't bad news, really...it's notice that the issue is being looked at. If you have any issues with Crossfire and 7-series cards right now, well, you gotta deal with it.


And that's why I started this thread. AMD has admitted Crossfire is problematic, and will be, for some time. If you run into issues while using Crossfire, you'll have to wait for better drivers. I really think AMD should be making a public statement about this whole ordeal, and quell the fanboys.


----------



## DRDNA (Apr 2, 2013)

cadaveca said:


> And yeah, seems to have started with DX10, DX9 Crossfire was great, and still is.



So there are no runts at all in DX9? I never caught that part of the info.


----------



## cadaveca (Apr 2, 2013)

DRDNA said:


> So there are no runts at all in DX9? I never caught that part of the info.



Either performance is at the point where it doesn't matter and doesn't present a problem, or the issue is confined to DX10/DX11 only.


DX9 problems are per-app. DX10/DX11 will take a global fix to the driver as a base (should be here by July), and then, maybe, per-app fixes.


That's assuming I understood what AMD said properly.

The only thing that bugs me is that the 7-series launched nearly 18 months ago and there's still no working Crossfire, and I've been told I've got to wait even longer yet. I am also not really comfortable with hearing, a few months ago, that a big driver update was coming to fix DX10/DX11 stutter, that it was memory management, and that this driver should be out by March, and now that it's March, I'm being told July.


Delay after delay after delay = me moving to NVidia. I'll keep a single 7970.


----------



## radrok (Apr 2, 2013)

cadaveca said:


> Delay after delay after delay = me moving to NVidia.



That's what I did and I didn't regret it.

Been supporting ATI for too much while getting the shaft.


----------



## qubit (Apr 2, 2013)

cadaveca said:


> The only thing that bugs me is that the 7-series launched nearly 18 months ago and there's still no working Crossfire, and I've been told I've got to wait even longer yet. I am also not really comfortable with hearing, a few months ago, that a big driver update was coming to fix DX10/DX11 stutter, that it was memory management, and that this driver should be out by March, and now that it's March, I'm being told July.
> 
> 
> Delay after delay after delay = me moving to NVidia. I'll keep a single 7970.



Exactly. 

I wouldn't like to be treated this way after spending lots of money on AMD products. Come to the green side and enjoy some awesome graphics cards! 

You'd be surprised how even something as old as an 8800 GTX still works. It will be kinda slow, obviously, but everything works just fine even after all this time.


----------



## DRDNA (Apr 2, 2013)

radrok said:


> That's what I did and I didn't regret it.
> 
> Been supporting ATI for too much while getting the shaft.





cadaveca said:


> Either performance is at the point where it doesn't matter and doesn't present a problem, or the issue is confined to DX10/DX11 only.
> 
> 
> DX9 problems are per-app. DX10/DX11 will take a global fix to the driver as a base (should be here by July), and then, maybe, per-app fixes.
> ...





qubit said:


> Exactly.
> 
> I wouldn't like to be treated this way after spending lots of money on AMD products. Come to the green side and enjoy some awesome graphics cards!
> 
> You'd be surprised how even something as old as an 8800 GTX still works. It will be kinda slow, obviously, but everything works just fine even after all this time.



Well, you all do provide a compelling argument to jump ship, but I will wait for the official release of the HD 7990 with the corrected CF and then make my decision. I think there is still a bit of life left in my cards.


----------



## cadaveca (Apr 2, 2013)

DRDNA said:


> Well you all do provide a compelling point to jump ship, but I will wait for the official release of the HD 7990 with the corrected CF and then make my decision.



Yeah, I think it's a bit early. At the least, AMD is being pretty open about this, is talking about it, and is seemingly working towards fixing it. As much as some people may want to suggest that there is no problem here, this is something that AMD has been talking about openly for months now. More and more info is being presented to the end user in a palatable way, IMHO. We can all go back to Anandtech's article about AMD's stutter, and the forthcoming fix that hasn't materialized yet. I think AMD is madly at work trying to fix this.


If they don't, I have $1225 worth of videocards I want a refund on.

And even if they don't fix it, I'll still keep a single AMD card. The thing is, for me, that I have 3x IPS monitors sitting on my desk here for Eyefinity, and AMD doesn't deliver acceptable performance for my needs for such a configuration. A configuration I bought because they said they could do it.


----------



## Kaynar (Apr 2, 2013)

I already knew that AMD was the best choice for single GPU setup and the worst choice for multi-gpu, but thanks for scientifically confirming this PCPER!


----------



## BiggieShady (Apr 2, 2013)

qubit said:


> You'd be surprised how even something as old as an 8800 GTX still works.



Yeah, I have a GTX 260 in another machine that happily runs Skyrim on high @1280x1024 on my old 19-inch LCD panel ... even after all these years running hot, it just won't die 

On topic, I was wondering how it is even possible that alternate frame rendering mode produces runt frames. 
Think about it: both cards have identical data in VRAM, frames rendered are of similar complexity, and there are periods in tests when runt frames do not exist.

It all stinks of heuristic predictions gone bad when rendering frames ahead ... no, I wasn't going for a rhyme.
If AMD is using some form of heuristics when determining AFR timings for rendering frames ahead, maybe runts happen when the predicted timings do not agree with what really happened.

Now that I have solved half of the problem  please, AMD, fix the rest of it.
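That heuristic-gone-bad guess can be mocked up in a few lines. A toy pacing model follows; it is entirely speculative, with invented timings, and is not AMD's actual driver logic: present each frame no earlier than a predicted cadence allows, and watch what happens when the prediction is wrong.

```python
# Toy sketch of the AFR-pacing guess: the driver *predicts* when the other
# GPU's frame will be ready and spaces presents on that cadence. If the
# prediction is far off, frames get presented almost back to back, and the
# short-lived one looks just like a runt. All numbers are invented.
def pace(ready_times, predicted_interval):
    """Present each frame no earlier than the predicted cadence allows."""
    presents, next_slot = [], 0
    for t in ready_times:
        present = max(t, next_slot)
        presents.append(present)
        next_slot = present + predicted_interval
    return presents

ready = [0, 2, 16, 18, 33, 60]               # GPUs finish in bursts (ms)
good = pace(ready, predicted_interval=16)    # prediction matches reality
bad  = pace(ready, predicted_interval=4)     # prediction far too short

print([b - a for a, b in zip(good, good[1:])])  # even on-screen intervals
print([b - a for a, b in zip(bad, bad[1:])])    # bursty, runt-like gaps
```

Under the good prediction the on-screen intervals come out perfectly even; under the bad one the short gaps between presents reappear, which is consistent with the idea that runts show up when the predicted timings disagree with what really happened.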


----------



## Xzibit (Apr 2, 2013)

cadaveca said:


> Did you not know that both AMD and NVidia deliver completely different images and colors in games? They basically render in opposite ways, so you cannot directly compare NVidia quality vs AMD, although this has been done many times over through the years. Every time someone checks visual quality, it's different.
> 
> 
> Has nothing to do with testing methods. You're the only one that is concerned about THAT.



Might want to look again. 

If it was just a difference in colors, I wouldn't bother.
It's lighting and texture LOD issues.

In the Crysis 3 comparison it seems to be LOD issues as well.

Download the videos at 720p and look for yourself. That's why I include the video and a picture. You have to be *BLIND* not to notice the difference. At the 1:20 mark on 7950 CF a light turns on to the right, and at 1:25 another light turns on right in front. You really can't miss it unless you're *BLIND*.

If you think I'm the only one to notice, you're not following the discussion in other forums.




cadaveca said:


> AMD has admitted Crossfire is problematic, and will be, for some time. If you run into issues while using Crossfire, you'll have to wait for better drivers. I really think AMD should be making a public statement about this whole ordeal, and quell the fanboys.



Did you not read AMD's statements?


----------



## GC_PaNzerFIN (Apr 2, 2013)

Yes, indeed, this change is for the better for every gamer, regardless of the brand of their hardware. It almost put tears in my eyes when NVIDIA admitted the existence of microstuttering problems in the past (GTX 295), and now this is even better: the problem child also came forth.  

In all honesty I think both need shared memory between the GPUs, better frame sync (also sub-frame piece syncing, tearing!) functionality from the DX side, and a change of the rendering method to sub-frame pieces (this has been tried before; problems with tearing and lower performance make it less sexy to use) to completely get away from this problem. Until then it is a matter of trying to smooth things out on the driver side, which is very tricky indeed, and exactly why even AMD is having so much trouble.  

I have had too many Crossfire and SLI setups in the past, and at some point I just went to the fastest possible single GPU and haven't had a problem since. This meant decreasing image quality settings, but so far I'd rather take that than all the multi-GPU problems. 120Hz screen(s) would love to see some multi-GPU action though, as performance is not good enough even with a single Titan in many cases! 

Now that we finally have this reasonable conversation about the rendering issues, I will gladly do my best to help in any possible way to fix the problem. After all, this is what I always wanted to come out: the truth, not some marketing BS or fanboy chatter. 

I can't believe how long ago I started, back in the days when we had no real term for frametime issues like microstuttering. Now people often get confused about terms and completely misuse the term "microstuttering", which is why I don't like it anymore. It is like saying there is a criminal out on the street, and people start calling in tips that my neighbour looks suspicious and evil even though he is just doing his gardening like every day. 

Not all problems are related to frametimes; the microstuttering term suffers from its own popularity. 
The ball is now in AMD's hands. Come out clean and explain what you are doing.


----------



## cadaveca (Apr 2, 2013)

Xzibit said:


> Did you not read AMD statements.



That's from a few days ago. When speaking of Anandtech, I was speaking in terms of what they reported three months earlier, which the article you linked is a follow-up to.
from your link:



> AMD has been clear with us from the start that the primary reason they even ended up in this situation is because they weren’t doing sufficient competitive analysis, and that they have revised their driver development process so that they now do this analysis to prevent future problems.



and 



> They’re already working on changing how they do frame pacing for multi-GPU setups, and come July we’re going to have the chance to see the results of AMD’s latest efforts there.




So...what was your point?

That's not a public statement. That's Anandtech reporting what AMD told THEM, not AMD making a public statement.

I mean, really, that kinda settles all your questions in this thread right there, problems with testing or not. IF you want to call that AMD's public statement, then why are you trying to raise issues with the information provided?




> AMD has been clear with us from the start that the primary reason they even ended up in this situation is because they weren’t doing sufficient competitive analysis




Because really.. it's just that.. it's info. In January AMD said they'd have a fix in March. They don't, and now it's April. In the meantime, here's some more info so you understand why it's taking so long, and why the issue is complex.


There's nothing new here, no conspiracy.


----------



## Xzibit (Apr 2, 2013)

Here you go for the visually impaired.

I drew Xs to help your eyes out. 







Additional lights that turn on


----------



## cadaveca (Apr 2, 2013)

Xzibit said:


> Here you go for the visually impaired.
> 
> I drew Xs to help your eyes out.




Nobody cares.







> AMD has been clear with us from the start that the primary reason they even ended up in this situation is because they weren’t doing sufficient competitive analysis



Who cares about Ryan's videos? They are merely examples, not perfection...that's up to AMD's driver team to deal with. You're making a big deal out of nothing.


----------



## Xzibit (Apr 2, 2013)

cadaveca said:


> Nobody cares.
> 
> 
> Who cares about Ryan's videos?



The whole purpose of benching is limiting influences. If you're not gonna bother doing that, don't bench at all.

The workload is different, so the offload to the GPU is different, sheesh. 

Really, you're asking that question?


----------



## cadaveca (Apr 2, 2013)

Xzibit said:


> Really, you're asking that question?



Sure. All Ryan is illustrating is what AMD is doing to fix things, and the types of testing they now use. In that, the videos are fine.

For somebody claiming to want to ensure the data is interpreted properly, you sure seem to have a completely different agenda. Really, you do seem to be claiming this testing is flawed, there are other issues, people are not looking at the right things, blah blah...


It's not meant to present fine details. I also didn't link his videos.. I linked his explanation of the testing hardware and what it shows. The rest is stuff the end user shouldn't have to worry about, and AMD has stated that how this ends up is going to be different for each user, so they are going to present different options, even.


AMD's already said the testing is right, and you're here months later trying to point out problems with it. Just saying, man.


----------



## Xzibit (Apr 2, 2013)

cadaveca said:


> For somebody claiming to want to ensure the data is interpreted properly, you sure seem to have a completely different agenda.



Which is what ?

I want to know if the data being spit out by the game engine is equal in both cases, i.e. if the two GPUs in question are processing the same amount of data per frame. You can even throw in APUs.

Since FRAPS and FCAT take and insert frame info at the same spot, before it reaches DirectX.

Then what? Should we ignore that too? Hope you don't conduct your own tests like that.

Asking questions isn't a bad thing. Ignoring answers is.
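The point about FRAPS and FCAT tapping the same spot can be sketched in a few lines. Both intercept the frame at Present time, before Direct3D gets it: FRAPS records a timestamp, FCAT's overlay paints a colored bar into the frame for later video analysis. This is a toy stand-in, not the real hook, and the four-color palette is a made-up stand-in for FCAT's actual repeating bar sequence.

```python
import itertools
import time

class PresentHook:
    """Toy stand-in for a FRAPS/FCAT-style intercept of the Present call."""
    COLORS = ["red", "green", "blue", "teal"]  # hypothetical repeating palette

    def __init__(self):
        self.stamps = []  # FRAPS side: one timestamp per presented frame
        self.bars = []    # FCAT side: the bar color drawn into each frame
        self._palette = itertools.cycle(self.COLORS)

    def present(self):
        # Both tools tap in here, before the frame reaches Direct3D.
        self.stamps.append(time.perf_counter())
        self.bars.append(next(self._palette))

hook = PresentHook()
for _ in range(6):
    hook.present()
print(hook.bars)  # ['red', 'green', 'blue', 'teal', 'red', 'green']
```

The difference is only in where the data is read back: FRAPS reports its timestamps directly, while FCAT recovers the bar sequence from the captured display output, which is why FCAT can see runts that FRAPS cannot.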


----------



## cadaveca (Apr 2, 2013)

Xzibit said:


> Ignoring Answers is.



Exactly. And you've ignored that this isn't an answer.


It's merely a way to effectively convey to the end user what's happening, and what all this "stutter" that people like me complain about actually is. For years people have said they don't see it, it doesn't exist, blah blah blah...so here's a way it can be presented so that any user can see it. The flipping colors make it pretty obvious.

You are looking at image quality, when that's not what this is about. This is about the smoothness of the animation presented to the end user, and in those videos...the only important thing there is the motion of the character's arms.

It is clearly smoother in NVidia's implementation.


How or why it's smoother...is the question. The quality of the image...may have something to do with that...sure, but that is but one of many things can lead to problems.


And that's why none of this will change how VGAs are tested and reviewed. The second card in Crossfire IS working...it's simply not displaying its image properly, so it appears that it's useless. It's not useless, though; it's merely out of sync, and in such a way that V-sync doesn't fix the issue.


AMD has stated, overall, that this is due to a memory management problem in the driver. A fix was expected in March, and we were given the first 13.1 beta.


Now March has come and gone and there's no driver, so here's why. The problem is complex, has many facets, and takes time to manage. AMD has asked for four more months. I stated, before any of this "new" stuff went public, that I was expecting four more months. Go figure.


----------



## DRDNA (Apr 2, 2013)

Xzibit, I also wonder about the visuals that are rendered to the end user's monitor. I have wondered whether the two camps, at equal settings, show different things, and whether one camp may be suspect of not letting their card display everything, like cutting out lights to get better render times. I have my opinion about that, but it is indeed a different topic, if a related one.


----------



## Xzibit (Apr 2, 2013)

DRDNA said:


> Xzibit, I also wonder about the visuals that are rendered to the end user's monitor. I have wondered whether the two camps, at equal settings, show different things, and whether one camp may be suspect of not letting their card display everything, like cutting out lights to get better render times. I have my opinion about that, but it is indeed a different topic, if a related one.



That's what piqued my interest.

Like I said, I don't have an SLI or Crossfire setup in either my Nvidia or AMD systems, but those tests aren't limited to multi-card configurations. He's also testing FCAT with single cards; that's what I'm looking at myself.

People have noted discrepancies, with the LOD differences, in the videos he's put out.

It seems he did post the write-up. 

I'm interested to know if either the 7000 or 600 series is doing more or less work for the benched results.


----------



## the54thvoid (Apr 2, 2013)

Which card is doing more work is still not the point.  The point is whether the delivery of the visual image is smoother or not.  

Something people ought to be aware of as well is that AMD can deal with quite complex lighting (Dirt Showdown with global lighting destroys Nvidia cards because the lighting coding was developed from an AMD showcase of lighting for that game).  Sleeping Dogs may use similar lighting effects (AMD title) which would help explain the differences you are highlighting.

The GCN architecture is fantastic.  And in many respects is better (and sometimes worse) than Nvidia's Kepler.  However, what Dave is so patiently putting across is not the difference in appearance of quality but the smoothness of the image.

And like he's said, AMD acknowledged this following the initial latency storm and then again in a discussion with Anandtech.

There isn't an argument about that to be had here.  Your point is for another thread - who has better IQ.  Certainly in your pics - it's AMD.


----------



## GC_PaNzerFIN (Apr 2, 2013)

http://www.pcper.com/reviews/Graphi...ils-Capture-based-Graphics-Performance-Test-4

Take a look at the number of only partially rendered frames with CF. The only thing they do is inflate FPS, even though barely anything from them makes it to the screen.
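The FPS inflation is easy to show with numbers. The scanline counts below are hypothetical, and the 21-scanline runt cutoff is the commonly cited FCAT default, not something measured here: count every captured frame and you get the FPS a frame counter would report; drop the runts and you get what actually reached the screen.

```python
# Hypothetical captured 1080p run: every second frame is a runt,
# occupying only 10 of the 1080 scanlines in its refresh interval.
scanlines = [1070, 10] * 30   # 60 captured frames over one second

RUNT_THRESHOLD = 21           # scanlines; the commonly cited FCAT runt cutoff

raw_fps = len(scanlines)                                        # what a frame counter reports
observed_fps = sum(1 for s in scanlines if s >= RUNT_THRESHOLD)  # runts filtered out

print(raw_fps, observed_fps)  # 60 vs 30: half the "frames" never meaningfully displayed
```

This is the 50%-of-reported-FPS effect discussed earlier in the thread: the second card's output is counted, but barely shown.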


----------



## BiggieShady (Apr 4, 2013)

This image from Guru3D's article illustrates the problem with multi-GPU setups: AMD is simply delivering frames as fast as it can and as soon as it can, while Nvidia is syncing and delaying frames.
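The "syncing and delaying" idea can be sketched as a simple rule: if a frame arrives too soon after the previous one, hold it back instead of flipping it immediately. This is a minimal illustration of the pacing concept, not either vendor's actual algorithm; timestamps are hypothetical milliseconds.

```python
def pace(presents, min_gap):
    """Delay any present that arrives sooner than min_gap ms after the previous one."""
    out = []
    for t in presents:
        if out and t - out[-1] < min_gap:
            t = out[-1] + min_gap  # hold the frame back instead of flipping now
        out.append(t)
    return out

# Unsynced AFR: runt-producing 2 ms gaps between the two GPUs' frames
raw = [0.0, 2.0, 33.0, 35.0, 66.0, 68.0]

print(pace(raw, 16.0))  # [0.0, 16.0, 33.0, 49.0, 66.0, 82.0] -> even ~16 ms spacing
```

The trade-off is visible in the numbers: the paced frames are older when they hit the screen, which is the extra latency Xzibit asks about a few posts down.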


----------



## radrok (Apr 4, 2013)

Sadly you can't do much magic with AFR; you can only improve how smoothly it presents frames.

I think that improving bandwidth between GPUs and making them render parts of the screen instead of alternating 1x1x1x1 would be better.


----------



## BiggieShady (Apr 4, 2013)

radrok said:


> Sadly you can't do much magic with AFR; you can only improve how smoothly it presents frames.
> 
> I think that improving bandwidth between GPUs and making them render parts of the screen instead of alternating 1x1x1x1 would be better.



The problem with that approach is that there would be a lot of unnecessary duplicated work within a single frame on both GPUs. With alternate frames, each GPU does separate work for its own frame; only the VRAM contents are duplicated.

Processing parts of the screen for different lighting passes in a deferred engine did prove to be a good idea; if I'm not mistaken, DICE did it in the Frostbite 2 engine using DirectCompute.


----------



## Xzibit (Apr 4, 2013)

BiggieShady said:


> This image from Guru3D's article illustrates the problem with multi-GPU setups: AMD is simply delivering frames as fast as it can and as soon as it can, while Nvidia is syncing and delaying frames.
> 
> http://www.guru3d.com/index.php?ct=articles&action=file&id=3108



I'm curious to know if this is forward thinking on Nvidia's part for cloud, Tegra, and the Shield project (remote gaming).
Spacing out frames would mean less work for the GPU, with less variance, and the variance less noticeable, until you start pumping more data through, and we aren't there just yet.

Would it add overall game latency?
Wouldn't more demanding games, and higher resolutions, see higher variance?


----------



## cadaveca (Apr 4, 2013)

Xzibit said:


> Would it add overall game latency?
> Wouldn't more demanding games, and higher resolutions, see higher variance?



Possibly, but I'm pretty sure that that is why G80 had a separate I/O chip, specifically to handle that load and minimize any latency that might be introduced. Eventually that silicon was integrated directly into the GPU itself. 


AMD simply killed the AIW line, and that's truly when these problems started for me, personally. ATI used to have awesome capture and display hardware AND software.

And, that visual representation was covered by other articles already. We already know (or at least I know) that the issue is a lack of sync of the frames from the secondary card. AMD is claiming that the "runts" are these frames, and that's why the second card seems useless.


----------



## brandonwh64 (Apr 4, 2013)

I figured this would turn into an argument somewhere along the line. Dave is right with this article he linked us, because I am just now noticing things that my Crossfire was doing that vanished when I went to a single 7970. 

I know Dave, and he would not have posted this if he had not seen the effects of it himself.


----------



## cadaveca (Apr 4, 2013)

brandonwh64 said:


> I figured this would turn into an argument somewhere along the line. Dave is right with this article he linked us, because I am just now noticing things that my Crossfire was doing that vanished when I went to a single 7970.
> 
> I know Dave, and he would not have posted this if he had not seen the effects of it himself.



Making the move from dual 6950/6970, to single 7970, where the performance is nearly identical, really does present things very differently than any other "upgrade" before.

Considering that the number of "shaders" has been reduced as well, I can't help but feel that it is hard to get an accurate idea of the full picture without actually having the hardware in hand, and swapping between the two configurations. It can be difficult to separate what's changed due to moving to a single GPU, and what's due to the more efficient GPU design.

But, at the same time, I hardly see this is as a movement that will change how reviews are done.


In the end, sure, I noticed this a long time ago, and yes, I have talked about it to countless people. Maybe two or three knew right away what I was talking about, while many more had similar configs, so it's always been clear to me that this is an issue that not everyone is really going to be affected by: number one, because many simply don't do multi-GPU, and secondly, because not everyone is going to be as sensitive to this as I am. The one thing that you have, Brandon, that others don't, is our countless conversations in TeamSpeak about it. You can now check and see if you notice any of the things I did, now that you're going down that exact road I did.

Enjoy the trip.


----------



## Xzibit (Apr 4, 2013)

cadaveca said:


> Possibly, but I'm pretty sure that that is why G80 had a separate I/O chip, specifically to handle that load and minimize any latency that might be introduced. Eventually that silicon was integrated directly into the GPU itself.
> 
> 
> AMD simply killed the AIW line, and that's truly when these problems started for me, personally. ATI used to have awesome capture and display hardware AND software.
> ...



Yes.

I was more looking at the Single solution comparison and the difference on the Nvidia side with Single and SLI.

The mid-range (lower-end still to come) GPU results seem more interesting.

You'd expect adding another card into the mix to lower the variance, or at least maintain that of a single card, but it still increases.  Once you go above 1440p resolution it seems it's a question of how much you want to tolerate.

These results just deter multi-gpu solutions.


----------



## cadaveca (Apr 4, 2013)

Xzibit said:


> These results just deter multi-gpu solutions.



Perhaps. For sure, right now, on the AMD side of things. Perhaps some of the tech and know-how from buying 3dfx years ago has proved valuable here for Nvidia when dealing with this stuff.


I know, for sure, that AMD is trying to get this fixed. For me, it's very obvious, since some drivers "stutter" more than others, those that don't stutter seem to be missing all the benefits of the second card, and those in between tend to give me headaches or make me feel ill.


None of that ever happens in single-GPU.


When moving up resolutions, you have to expect that the higher workload is going to affect the timings between frames. That's a given. You can really only accurately compare different resolutions, for this "investigation", if the performance offered by each is similar. You can't ignore that the workload is different.

If a GPU rendered 1080p and 1600p with the same level of performance, that's fine, but it should be quite clear to most users, by now, that there are different cards from each brand for a reason...not just because they offer different performance...they are offerings for users running different resolutions.

You don't need a 7970 to run 720p, and trying to game @ 1600p on a 7770 is clearly a mistake. As far as I am concerned, the industry's lack of assigning GPUs to a specific resolution really hampers the ability of PC makers to provide well built and well-balanced configurations. There should be set "rules" for system configs to meet certain performance levels in certain resolutions.


----------



## Xzibit (Apr 4, 2013)

Even in the 7970 vs 680 comparison: 

@ 1080p
Going SLI seems only worth it to lower FT.
At the price of adding another card at full price?

@ 1440p
Going SLI loses its value: FT lowers only slightly and variance rises.
Maybe if there was an SLI with a secondary card at 50% of the price, with no outputs, just SLI connectors.
I recall the 3dfx Voodoo series had a setup like that, where you bought the secondary card slightly cheaper.

@ 5760x1080
The only game that ran "remotely" decent was Dirt 3.

Seems like just marketing to get you to buy an additional card at full price.


----------



## EarthDog (Apr 4, 2013)

cadaveca said:


> Yeah, all of you that spent money on a second card for Crossfire, did it for nothing...really nothing...for now.


This is funny. I don't even notice it with my 690. .. oh wait.......


----------



## Vario (Apr 5, 2013)

I came close to pulling the trigger on a second 7850... glad I didn't.  I don't think it's really optimal to run either Crossfire or SLI; it seems like there are too many inherent problems, especially for AMD.  Hopefully AMD will get a fix for this.


----------



## qubit (Apr 5, 2013)

EarthDog said:


> This is funny. I don't even notice it with my 690. .. oh wait.......



690? Good choice there, dude. 

Ya know what I've got? A pair of Asus HD 3850X2 cards I got cheap a few years ago. At the time, I ran them together and singly and in neither case did the performance even come _close_ to my GTX 285. That's *four* AMD GPUs getting spanked by one NVIDIA GPU, lol.

Compare them to my GTX 590 and it's total humiliation. 

What are the chances that this runt frame problem was present even then?  Shame I can't run those advanced tests and find out.


----------



## PatoRodrigues (Apr 5, 2013)

Damn, I would love to invest in a capture system for this.

A few friends would love to test their cards with FCAT.... And I bet that here in Brazil, there's no reviewer willing to do something like this.

Maybe in the near future. Those 650 MB/s are INSANE.


----------



## BiggieShady (Apr 5, 2013)

PatoRodrigues said:


> Those 650 MB/s are INSANE.



Obviously, for capture analysis you don't need to write full frames - writing them just shows the FCAT system is functioning properly. 
For example, at 1600p you could write every single frame for analysis with only the vertical colored information; frames could be 1 pixel wide and 1600 pixels high. 
That would be around 300 KB/s ... and if you use 4-bit color (as those colored bars are in the first place) it would peak at around 50 KB/s.
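Those figures work out if we assume a 60 fps capture and 24-bit color for the full-color case (both assumptions mine, not stated above); a quick back-of-envelope:

```python
fps = 60      # assumed capture rate
height = 1600 # 1600p: the colored overlay column is 1 pixel wide

bytes_24bit = height * 3 * fps    # 3 bytes/pixel -> 288,000 B/s, roughly 280 KB/s
bytes_4bit  = height * fps // 2   # 4 bits/pixel  -> 48,000 B/s, roughly 47 KB/s

print(bytes_24bit / 1024, bytes_4bit / 1024)
```

Either way it is orders of magnitude below the 650 MB/s needed to capture full 1600p frames, which is the point: the capture rig is overbuilt relative to what the analysis strictly requires.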


----------

