
ASUS GeForce GTX 570

I like the addition of system power usage readings on the voltage tweaking page :) (sorry if I missed that in other reviews, but it's helpful to see)
 
But that wasn't the point: they refreshed GF100 to create a more efficient design. Of course, if all they wanted to do was reduce the clock speeds in OCCT and FurMark, they would have just added a power limiter to the 400 series. Instead, they addressed the underlying issues while giving the cards a nice performance boost.

That's the point: the 570 offers no performance boost over the comparable last-gen 480 SP GPU, just lower power and pricing. While I appreciate both of those "features," I'm just a bit disappointed that's all they've done.
 
Comparable "last gen" is GTX470 so yes it does offer a significant performance boost.

It's not next gen anyway, and everybody knows that. It's all part of the marketing wars: AMD named Barts HD 6000 although it's not a new generation either, so Nvidia was forced to move up one generation too.

And speaking of Barts (sorry, this is off topic), there's one thing I realized when looking at this review: Barts is not as fast as it first appeared to be:

http://tpucdn.com/reviews/ASUS/GeForce_GTX_570/images/perfrel_1920.gif

It's a lot slower than Cypress despite running at nearly 10% higher clocks. I'm 100% sure Barts only looked faster because of the new optimizations in the newer drivers, and what we are looking at now in the chart above is the HD 58xx cards performing much better (also relative to the GTX 470) than they did at Barts' launch.

From Guru3d

Speaking of AMD, the ATI graphics team at default driver settings applies an image quality optimization which can be seen, though very slightly and in certain conditions. It gives their cards ~8% extra performance. NVIDIA does not apply such a tweak and opts for better image quality. We hope to see that move from AMD/ATI soon as well.

It's that extra 8% that made Barts look so fast in launch reviews, and now that newer drivers have been used on all the cards, that's why the HD 5xxx cards are faster now.
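
To put a rough number on that (a purely illustrative sketch; the scores below are made up, not W1zzard's data): stripping an assumed ~8% multiplicative driver gain is enough to move a card from "slightly ahead" to "clearly behind" on a relative-performance chart.

```python
# Illustrative only: how an ~8% default-driver optimization can reorder
# a relative-performance chart. All scores here are hypothetical.

def without_optimization(score: float, gain: float = 0.08) -> float:
    """Strip an assumed multiplicative driver gain from a benchmark score."""
    return score / (1.0 + gain)

# Hypothetical launch-review scores (GTX 470 normalized to 100)
hd6870_launch = 102.0  # looked slightly faster than the GTX 470 at launch
print(f"HD 6870 without the ~8% tweak: {without_optimization(hd6870_launch):.1f}")
# -> 94.4: the same card now sits clearly below the GTX 470's 100.0
```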
 
Comparable "last gen" is GTX470 so yes it does offer a significant performance boost.

It's not next gen anyway and everybody knows that. It's all part of marketing wars, AMD named Barts HD6000 although it's not a new generation either so Nvidia is forced to move up one generation too.

And speaking of Barts (sorry because is offtopic), there's one thing I realized when looking at this review and that is that Barts is not as fast as it first appeared to be:

http://tpucdn.com/reviews/ASUS/GeForce_GTX_570/images/perfrel_1920.gif

It's a lot slower than Cypress despite running at nearly 10% higher clocks. I'm 100% sure that Barts looked like it was faster because of the new optimizations on the newer drivers and what we are looking at now in the chart above is HD58xx cards performing much better (also compared to GTX470) than they did on Barts launch.

From Guru3d



It's just that extra 8% that made Barts look so fast on release reviews and now that new drivers have been used on all the cards is the reason that HD5xxx cards are faster now.

Still, after looking at that, the 6850 is still faster than the GTX 460 1GB, the 6870 is faster than the 5850 and equal to the GTX 470, and it has way better scaling in CrossFire than Cypress ever had. So it's fairly fast for its price tag, IMHO. Whatever that 8% was, I don't see it relative to the GTX 470, HD 5850, or GTX 460 1GB; maybe against the HD 5870.

EDIT: do you have an image that compares with and without the optimization?
 
Well, you've obviously been given information that I wasn't aware of. I just assumed that more performance was always what it was about. They could have saved themselves a lot of money and effort if they'd just added the software tweak that reduces peak consumption in FurMark/OCCT to the 480 and dropped the price.

Except that isn't what lowered the power consumption; the limiter only cuts power when OCCT and FurMark are run, not anywhere else. So the lower power consumption we see everywhere else is thanks to the design tweaks.
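
The distinction is easy to show as a toy model (entirely hypothetical app names and wattages, just the logic of the argument): an app-detection limiter caps only the flagged stress tests, so any power drop in games has to come from the silicon itself.

```python
# Toy model of the two effects being argued about (all numbers invented).
FLAGGED_APPS = {"furmark", "occt"}  # apps a driver-side limiter would throttle

def board_power(app: str, uncapped_watts: float, limiter_on: bool) -> float:
    """App-detection limiter: clamps only flagged stress tests."""
    if limiter_on and app.lower() in FLAGGED_APPS:
        return min(uncapped_watts, 250.0)  # assumed cap, in watts
    return uncapped_watts  # games and everything else are untouched

print(board_power("furmark", 320.0, limiter_on=True))  # 250.0 (capped)
print(board_power("crysis", 280.0, limiter_on=True))   # 280.0 (unchanged)
# A limiter alone leaves gaming power where it was; lower gaming power
# on GF110 therefore points to transistor-level tweaks, not the limiter.
```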

Performance is not always the driving force behind tweaks and refreshes. Look at some history and do a little research:

G80 -> G92 = Not for performance, for power and heat improvements
R600 -> RV670 = Not for performance, for power and heat improvements
G70 -> G71 = Not for performance, for power and heat improvements

That's the point: the 570 offers no performance boost over the comparable last-gen 480 SP GPU, just lower power and pricing. While I appreciate both of those "features," I'm just a bit disappointed that's all they've done.


Again, looking at one specification of a graphics card, comparing two cards on that single spec alone, and declaring them the same is absurd. This isn't a GTX 480. Yes, it has 480 SPs like the GTX 480, but it also has a 320-bit memory bus like the GTX 470. Why doesn't the HD 5770 perform better than the HD 4890? They are both 800 SP cards, so you must be fuming that the HD 5770 doesn't outperform the HD 4890. It is a real disappointment to you, I'm sure. Why not make some more absurd comparisons and base your opinion on those? The 1600 SP HD 5870 gets its ass handed to it by the 512 SP GTX 580, so the HD 5870 must be a huge piece of shit by your standards. :shadedshu
 
Comparable "last gen" is GTX470 so yes it does offer a significant performance boost.

It's not next gen anyway and everybody knows that. It's all part of marketing wars, AMD named Barts HD6000 although it's not a new generation either so Nvidia is forced to move up one generation too.

And speaking of Barts (sorry because is offtopic), there's one thing I realized when looking at this review and that is that Barts is not as fast as it first appeared to be:

http://tpucdn.com/reviews/ASUS/GeForce_GTX_570/images/perfrel_1920.gif

It's a lot slower than Cypress despite running at nearly 10% higher clocks. I'm 100% sure that Barts looked like it was faster because of the new optimizations on the newer drivers and what we are looking at now in the chart above is HD58xx cards performing much better (also compared to GTX470) than they did on Barts launch.

From Guru3d



It's just that extra 8% that made Barts look so fast on release reviews and now that new drivers have been used on all the cards is the reason that HD5xxx cards are faster now.

I was comparing the two 480 SP parts. The GTX 470 should be slower; it's older and has fewer SPs.

AMD's optimizations are irrelevant. A 1600 SP Barts would be more than 8% faster than Cypress. I know you don't agree, and we'll never be able to settle that other than by trying to apply some common sense. So I guess we'll just disagree about it.

AMD's fault that Nvidia changed to the 500 series. OK, if you say so. :rolleyes:
 
But it does tell a very different story than "Barts is as fast as Cypress while having fewer SPs." It's been demonstrated a million times that an HD 5850 at HD 5870 clocks is just as fast as an HD 5870, which means that 1440 SPs at 850 MHz are 12% faster (91/81 = 1.12 = +12%) than 1120 SPs at 900 MHz, as you can see in the chart above. You could probably disable even more SPs, down to 1280, and still get similar performance per clock. That's also why Cypress is only about 60% faster than RV790 or Juniper at the same clocks despite having twice their resources: the dispatch unit was never able to feed that many SIMDs. Why do you think Barts has two dispatch units but only 14 SIMDs? Because that's the hot spot.

That's why a 1600 SP Barts would only be about as fast as Cypress (±5%): Barts is essentially Cypress with six SIMDs removed.
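
A quick sanity check on that arithmetic, using the two relative-performance figures quoted above (91 vs 81) together with the SP counts and clocks; a rough sketch, nothing more:

```python
# Per-shader, per-clock comparison using the chart figures cited above.
# score = relative performance, sps = shader count, mhz = engine clock.
cards = {
    "1440 SP @ 850 MHz (HD 5850 at 5870 clocks)": {"score": 91, "sps": 1440, "mhz": 850},
    "1120 SP @ 900 MHz (HD 6870)":                {"score": 81, "sps": 1120, "mhz": 900},
}

for name, c in cards.items():
    per_sp_ghz = c["score"] / (c["sps"] * c["mhz"] / 1e6)
    print(f"{name}: {per_sp_ghz:.1f} points per SP-GHz")

# Outright: 91 / 81 = 1.12, i.e. the bigger config is ~12% faster.
# Per SP-GHz: ~74 vs ~80, i.e. the smaller Barts chip extracts more from
# each shader -- exactly the diminishing-returns point about feeding SIMDs.
```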

And about the GTX 460 and GTX 470 and how they relate to Barts' performance... any guess why both cards got a 50 MHz bump just weeks before Barts launched? Where do you think a 725 MHz HD 6850 would sit in the chart?

EDIT: do you have an image that compares with and without the optimization?

Sure, you can find some here:

http://www.guru3d.com/article/exploring-ati-image-quality-optimizations/
 
I'm just waiting for a custom board layout; it would be nice, since the stock cooler is kinda boring.
 
For about 450 USD you can purchase a Sapphire HD 5970, which is still a powerhouse monster.
 

And what about the amazing CrossFire scaling of Barts?

EDIT: I read the article; honestly, I thought it would be worse.
 
And why does midrange always scale better than high-end in multi-GPU setups?

Because the system is less of a bottleneck,

and

lower SP count = better internal management and utilization of resources = better scaling. (The toy model below illustrates the bottleneck point.)
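
A toy model of that first point (all numbers invented): if the platform caps attainable FPS, the slower card sits further from the cap, so its measured dual-GPU scaling looks better even when nothing about its multi-GPU logic is smarter.

```python
# Toy CPU/system-bound scaling model (all numbers hypothetical).
def measured_fps(single_gpu_fps: float, n_gpus: int, system_cap: float) -> float:
    """Ideal GPU scaling, clipped by a fixed system/CPU ceiling."""
    return min(single_gpu_fps * n_gpus, system_cap)

SYSTEM_CAP = 120.0  # assumed CPU-limited ceiling, in FPS

for name, fps in [("midrange (Barts-like)", 50.0), ("high-end (Cypress-like)", 75.0)]:
    scaling = measured_fps(fps, 2, SYSTEM_CAP) / fps
    print(f"{name}: dual-GPU scaling {scaling:.0%}")
# midrange: 200% (100 FPS, under the cap); high-end: 160% (clipped at 120) --
# the slower card "scales better" purely because of where the ceiling sits.
```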

And besides that, has anyone tested HD 58xx CrossFire scaling with the newer drivers? I have not seen any review doing so. Maybe scaling is simply better with newer drivers, and that, alongside the lower SP count (= better utilization), makes Barts look much better when it's really only a bit better.

Almost everyone compares reviews, and reviews are made when cards launch. Comparing W1zzard's HD 5xxx CF scaling review with his HD 68xx CF scaling review, for example, is pointless right now; there's been a full year of driver optimizations in between.

Actually, I'm just asking: has anyone extensively compared them with the latest drivers to see if that's true?

EDIT: i read the article, honestly i thought it was worst.

But you can see that there IS an 8% performance difference, which was my point. Regarding the visuals: it's an optimization, and an optimization should never be part of the default settings, no matter how noticeable it is or how many people can actually see it while gaming. 99.9% of people would not be able to tell the difference between a "raw" 25 GB 1080p Blu-ray movie and a good 5 GB 1080p DivX rip, but that's not a free pass for anyone to start selling DVDs with lossy DivX movies on them as if they were Blu-rays, or simply as HD.

Very few people are able to see the difference between an actual diamond and zirconia or moissanite, but if you buy a diamond and pay for a diamond, you want a diamond. You get the point.

AMD should be honest about it and disable these optimizations by default.

For me it is very noticeable and annoying. You can hardly see it in screenshots, but in games (or videos) it is very noticeable, at least to many people. Me, I probably wouldn't care so much, because the first thing I do when I install new drivers is go to the control panel and enable the High Quality profile. Regardless, I don't like companies cheating, and I do consider this cheating. For me, "if you don't see it, it's not cheating" is not an excuse.
 

Man, you can write... lol.
So Barts is nothing, and they could have launched it at the beginning of the year?
Comparing to old benchmarks? Correct me if I'm wrong, but every site benchmarks all the graphics cards with the latest drivers! Only W1zz didn't include 5870 and 5850 CrossFire results, but a lot of other sites did, and Barts scales way better than Cypress according to them.
 
Definitely.

Then I correct you. :p

Most reviews that I remember use older drivers for the older cards and the launch drivers (betas, most of the time) for the new cards. Maybe my memory is failing me on this.

Anyway, can you link me to one of those reviews? I don't remember seeing HD 58xx CF in HD 68xx reviews, but I may have just overlooked it.

And please link me to an extensive review, not one of those that test 3-4 games at one resolution... that's far from conclusive, and more likely than not any advantage seen there comes from specific optimizations made for those games, with the games and settings used in the review being "suggestions" from the manufacturer...
 

Look here:
Techreport
Anandtech
Guru3d
 