
ASUS GeForce GTX 570

Joined
Nov 6, 2009
Messages
1,299 (0.24/day)
Location
Ohio
System Name Vegnagun
Processor Ryzen 5950x
Motherboard Asus B550 Gaming-E
Cooling Noctua NH-U14
Memory 4x8 G. Skill 3800mhz CL14
Video Card(s) EVGA RTX 3080 FTW3
Storage WD SN850 2tb
Display(s) Viotek 1080p 120hz
Case Fractal Design Define 7
Power Supply Corsair AX850
Mouse Logitech
Keyboard Logitech 815 tactile
Software Windows 10 Education
Benchmark Scores top 1% in the world for weekly score in Killzone 2 :)
I like the addition of system power usage on the voltage-tweaking page :) (sorry if I missed that in other reviews, but it's helpful to see)
 
Joined
Jan 13, 2009
Messages
424 (0.07/day)
But that wasn't the point: they refreshed GF100 to create a more efficient design. Of course, if all they wanted to do was reduce the clock speeds in OCCT and Furmark, they would have just added a power limiter to the 400 series. Instead, they addressed the issues while giving the cards a nice performance boost.

That's the point: the 570 offers no performance boost over the comparable last-gen 480 SP GPU, just lower power and pricing. While I appreciate both of those "features", I'm just a bit disappointed that's all they've done.
 

Benetanegia

New Member
Joined
Sep 11, 2009
Messages
2,680 (0.49/day)
Location
Reaching your left retina.
That's the point: the 570 offers no performance boost over the comparable last-gen 480 SP GPU, just lower power and pricing. While I appreciate both of those "features", I'm just a bit disappointed that's all they've done.

Comparable "last gen" is the GTX 470, so yes, it does offer a significant performance boost.

It's not next gen anyway, and everybody knows that. It's all part of the marketing wars: AMD named Barts HD6000 although it's not a new generation either, so Nvidia was forced to move up one generation too.

And speaking of Barts (sorry, this is off-topic), there's one thing I realized when looking at this review: Barts is not as fast as it first appeared to be:

http://tpucdn.com/reviews/ASUS/GeForce_GTX_570/images/perfrel_1920.gif

It's a lot slower than Cypress despite running at nearly 10% higher clocks. I'm 100% sure Barts looked faster because of the optimizations in the newer drivers, and what we are looking at now in the chart above is the HD58xx cards performing much better (also compared to the GTX 470) than they did at Barts' launch.

From Guru3d

Speaking of AMD, the ATI graphics team at default driver settings applies an image quality optimization which can be seen, though very slightly and in certain conditions. It gives their cards ~8% extra performance. NVIDIA does not apply such a tweak and opts for better image quality. We hope to see that move from AMD/ATI soon as well.

It's just that extra 8% that made Barts look so fast in release reviews, and now that the new drivers have been used on all the cards, the HD5xxx cards come out faster.
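The arithmetic behind that claim can be sketched quickly (a rough illustration with made-up round numbers, not figures from any specific review): if the driver-default IQ optimization is worth ~8%, a launch review that benches the new card on new drivers against old cards on old drivers inflates the new card's relative standing by about that much.

```python
# Rough sketch of the "extra 8%" effect (illustrative numbers only).

def relative_perf(card_score, baseline_score):
    """Performance of card_score as a percentage of baseline_score."""
    return card_score / baseline_score * 100.0

# Launch review: Barts on new drivers (with the ~8% optimization),
# Cypress still on older drivers without it.
barts = 100.0
cypress_old_drivers = 100.0
launch = relative_perf(barts, cypress_old_drivers)   # cards look equal

# Retest with the same new drivers on every card: Cypress gains ~8% too.
cypress_new_drivers = cypress_old_drivers * 1.08
retest = relative_perf(barts, cypress_new_drivers)   # Barts falls behind

print(f"launch: {launch:.1f}%  retest: {retest:.1f}%")
```

Which is exactly the pattern the chart above shows: the gap between Barts and the HD58xx cards shrinks by roughly the size of the optimization once every card runs the same driver.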
 
Joined
Oct 15, 2010
Messages
951 (0.19/day)
System Name Little Boy / New Guy
Processor AMD Ryzen 9 5900X / Intel Core I5 10400F
Motherboard Asrock X470 Taichi Ultimate / Asus H410M Prime
Cooling ARCTIC Liquid Freezer II 280 A-RGB / ARCTIC Freezer 34 eSports DUO
Memory TeamGroup Zeus 2x16GB 3200Mhz CL16 / Teamgroup 1x16GB 3000Mhz CL18
Video Card(s) Asrock Phantom RX 6800 XT 16GB / Asus RTX 3060 Ti 8GB DUAL Mini V2
Storage Patriot Viper VPN100 Nvme 1TB / OCZ Vertex 4 256GB Sata / Ultrastar 2TB / IronWolf 4TB / WD Red 8TB
Display(s) Compumax MF32C 144Hz QHD / ViewSonic OMNI 27 144Hz QHD
Case Phanteks Eclipse P400A / Montech X3 Mesh
Power Supply Aresgame 850W 80+ Gold / Aerocool 850W Plus bronze
Mouse Gigabyte Force M7 Thor
Keyboard Gigabyte Aivia K8100
Software Windows 10 Pro 64 Bits
Comparable "last gen" is the GTX 470, so yes, it does offer a significant performance boost.

It's not next gen anyway, and everybody knows that. It's all part of the marketing wars: AMD named Barts HD6000 although it's not a new generation either, so Nvidia was forced to move up one generation too.

And speaking of Barts (sorry, this is off-topic), there's one thing I realized when looking at this review: Barts is not as fast as it first appeared to be:

http://tpucdn.com/reviews/ASUS/GeForce_GTX_570/images/perfrel_1920.gif

It's a lot slower than Cypress despite running at nearly 10% higher clocks. I'm 100% sure Barts looked faster because of the optimizations in the newer drivers, and what we are looking at now in the chart above is the HD58xx cards performing much better (also compared to the GTX 470) than they did at Barts' launch.

From Guru3d



It's just that extra 8% that made Barts look so fast in release reviews, and now that the new drivers have been used on all the cards, the HD5xxx cards come out faster.

Still, after looking at that, the 6850 is still faster than the GTX 460 1GB, and the 6870 is faster than the 5850 and equal to the GTX 470, and it has way better scaling in CrossFire than Cypress ever had. So it's fairly fast for its price tag, imho. Whatever that 8% was, I don't see it relative to the GTX 470, HD5850 or GTX 460 1GB; maybe against the HD5870.

EDIT: Do you have an image that compares with and without the optimization?
 

newtekie1

Semi-Retired Folder
Joined
Nov 22, 2005
Messages
28,473 (4.14/day)
Location
Indiana, USA
Processor Intel Core i7 10850K@5.2GHz
Motherboard AsRock Z470 Taichi
Cooling Corsair H115i Pro w/ Noctua NF-A14 Fans
Memory 32GB DDR4-3600
Video Card(s) RTX 2070 Super
Storage 500GB SX8200 Pro + 8TB with 1TB SSD Cache
Display(s) Acer Nitro VG280K 4K 28"
Case Fractal Design Define S
Audio Device(s) Onboard is good enough for me
Power Supply eVGA SuperNOVA 1000w G3
Software Windows 10 Pro x64
Well, you've obviously been given information that I wasn't aware of. I just assumed that more performance was always what it was about. They could have saved themselves a lot of money and effort if they had just added the software tweak that reduces peak consumption in Furmark/OCCT to the 480 and dropped the price.

Except that isn't what lowered the power consumption; that tweak only lowers power consumption when OCCT and Furmark are run, not anywhere else. So the lower power consumption we see everywhere else is thanks to the hardware tweaks.

Performance is not always the driving force behind tweaks and refreshes. Look at some history and do a little research:

G80 -> G92 = Not for Performance, for Power and Heat Improvements
R600 -> RV670 = Not for Performance, for Power and Heat Improvements
G70 -> G71 = Not for Performance, for Power and Heat Improvements

That's the point: the 570 offers no performance boost over the comparable last-gen 480 SP GPU, just lower power and pricing. While I appreciate both of those "features", I'm just a bit disappointed that's all they've done.


Again, looking at one specification of the graphics card, simply comparing two graphics cards on that single spec alone, and saying the two are the same is absurd. This isn't a GTX480. Yes, it has 480 SPs like the GTX480, but it also has a 320-bit memory bus like the GTX470. Why doesn't the HD5770 perform better than the HD4890? They are both 800 SP cards, so you must be fuming that the HD5770 doesn't outperform the HD4890. It is a real disappointment to you, I'm sure. Why not make some more absurd comparisons and then base your opinion on those? The 1600 SP HD5870 gets its ass handed to it by the 512 SP GTX580, so the HD5870 must be a huge piece of shit by your standards. :shadedshu
 
Joined
Jan 13, 2009
Messages
424 (0.07/day)
Comparable "last gen" is the GTX 470, so yes, it does offer a significant performance boost.

It's not next gen anyway, and everybody knows that. It's all part of the marketing wars: AMD named Barts HD6000 although it's not a new generation either, so Nvidia was forced to move up one generation too.

And speaking of Barts (sorry, this is off-topic), there's one thing I realized when looking at this review: Barts is not as fast as it first appeared to be:

http://tpucdn.com/reviews/ASUS/GeForce_GTX_570/images/perfrel_1920.gif

It's a lot slower than Cypress despite running at nearly 10% higher clocks. I'm 100% sure Barts looked faster because of the optimizations in the newer drivers, and what we are looking at now in the chart above is the HD58xx cards performing much better (also compared to the GTX 470) than they did at Barts' launch.

From Guru3d

It's just that extra 8% that made Barts look so fast in release reviews, and now that the new drivers have been used on all the cards, the HD5xxx cards come out faster.

I was comparing the two 480 SP parts. The GTX 470 should be slower; it's older and has fewer SPs.

AMD's optimizations are irrelevant. A 1600 SP Barts would be more than 8% faster than Cypress. I know you don't agree, and we'll never be able to settle that other than by trying to apply some common sense, so I guess we'll just disagree about it.

AMD's fault that Nvidia changed to the 500 series? OK, if you say so. :rolleyes:
 

Benetanegia

Still, after looking at that, the 6850 is still faster than the GTX 460 1GB, and the 6870 is faster than the 5850 and equal to the GTX 470, and it has way better scaling in CrossFire than Cypress ever had. So it's fairly fast for its price tag, imho. Whatever that 8% was, I don't see it relative to the GTX 470, HD5850 or GTX 460 1GB; maybe against the HD5870.

But it does tell a very different story from "Barts is as fast as Cypress while having fewer SPs". It's been demonstrated like a million times that an HD5850 @ HD5870 clocks is just as fast as an HD5870, so that clearly means that 1440 SPs at 850 MHz are 12% faster (91/81 = 1.12 = +12%) than 1120 SPs @ 900 MHz, as you can see in the chart above. And you could probably disable even more SPs and still get similar performance per clock, down to 1280 SPs. That's why Cypress is only about 60% faster than RV790 or Juniper at the same clocks despite being twice their size: the dispatch unit was never able to feed that many SIMDs. Why do you think Barts has two of them but only 14 SIMD units? Because that's the hot spot.

That's why a 1600 SP Barts would only be just as fast as Cypress (+/- 5%): Barts actually is Cypress with 6 fewer SIMDs.

And about the GTX460 and GTX470 and how they relate to Barts' performance... any guess why both cards got a 50 MHz bump just weeks before Barts launched? Where do you think a 725 MHz HD6850 would sit in the chart?
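As a sanity check on the numbers in this argument: raw shader throughput (SP count × clock, a crude first-order proxy that ignores the dispatch bottleneck being described) predicts a much larger gap than the chart shows, which is the point.

```python
# SP count x clock as a naive throughput proxy, vs. the measured
# relative-performance figures quoted from the chart (91 vs 81).

hd5850_oc = 1440 * 850   # 1440 SPs @ 850 MHz (HD5850 at HD5870 clocks)
hd6870    = 1120 * 900   # 1120 SPs @ 900 MHz (Barts XT)

naive_gain    = hd5850_oc / hd6870 - 1   # what SP x clock alone predicts
measured_gain = 91 / 81 - 1              # what the chart actually shows

print(f"naive SPxclock gain: +{naive_gain:.0%}  measured: +{measured_gain:.0%}")
```

The naive proxy predicts roughly +21% while the chart shows roughly +12%; the shortfall is consistent with the claim that the dispatch front end, not the SP count, is the limiting factor.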

EDIT: Do you have an image that compares with and without the optimization?

Sure, you can find some here:

http://www.guru3d.com/article/exploring-ati-image-quality-optimizations/
 
Joined
Mar 26, 2010
Messages
9,875 (1.87/day)
Location
Jakarta, Indonesia
System Name micropage7
Processor Intel Xeon X3470
Motherboard Gigabyte Technology Co. Ltd. P55A-UD3R (Socket 1156)
Cooling Enermax ETS-T40F
Memory Samsung 8.00GB Dual-Channel DDR3
Video Card(s) NVIDIA Quadro FX 1800
Storage V-GEN03AS18EU120GB, Seagate 2 x 1TB and Seagate 4TB
Display(s) Samsung 21 inch LCD Wide Screen
Case Icute Super 18
Audio Device(s) Auzentech X-Fi Forte
Power Supply Silverstone 600 Watt
Mouse Logitech G502
Keyboard Sades Excalibur + Taihao keycaps
Software Win 7 64-bit
Benchmark Scores Classified
I'm just waiting for a custom board layout; it would be nice, since the stock cooler is kinda boring.
 
Joined
Jul 8, 2010
Messages
1,190 (0.23/day)
System Name Titan
Processor AMD FX-8350 4.6ghz
Motherboard AsRock 990FX Fatal1ty Professional
Cooling Corsair H100
Memory G.Skill RipjawsX 8gb
Video Card(s) XFX Double D 7970 3gb
Storage Western Digital Green 2TB 64mb Cache/ Western Digital 500gb
Display(s) iMac Display 2560 x 1440
Case NZXT Switch 810 White
Audio Device(s) Creative Sound Blaster Z
Power Supply Corsair AX1200
Software Windows 7 64-bit
For about 450 USD, you can purchase a Sapphire HD 5970, which is still a powerhouse monster.
 
Joined
Oct 15, 2010
Messages
951 (0.19/day)
But it does tell a very different story from "Barts is as fast as Cypress while having fewer SPs". It's been demonstrated like a million times that an HD5850 @ HD5870 clocks is just as fast as an HD5870, so that clearly means that 1440 SPs at 850 MHz are 12% faster (91/81 = 1.12 = +12%) than 1120 SPs @ 900 MHz, as you can see in the chart above. And you could probably disable even more SPs and still get similar performance per clock, down to 1280 SPs. That's why Cypress is only about 60% faster than RV790 or Juniper at the same clocks despite being twice their size: the dispatch unit was never able to feed that many SIMDs. Why do you think Barts has two of them but only 14 SIMD units? Because that's the hot spot.

That's why a 1600 SP Barts would only be just as fast as Cypress (+/- 5%): Barts actually is Cypress with 6 fewer SIMDs.

And about the GTX460 and GTX470 and how they relate to Barts' performance... any guess why both cards got a 50 MHz bump just weeks before Barts launched? Where do you think a 725 MHz HD6850 would sit in the chart?



Sure, you can find some here:

http://www.guru3d.com/article/exploring-ati-image-quality-optimizations/

And what about the amazing CrossFire scaling of Barts?

EDIT: I read the article; honestly, I thought it would be worse.
 

Benetanegia

And what about the amazing CrossFire scaling of Barts?

And why does midrange always scale better than high-end in multi-GPU setups?

Because the system is less of a "bottleneck",

and

lower SP count = better internal management and utilization of resources = better scaling.

And besides that, has anyone tested HD58xx CrossFire scaling with the newer drivers? I haven't seen any review doing so. Maybe scaling is simply better with newer drivers, and that, along with the lower SP count (= better utilization), makes Barts look much better when it's not "much" better, only a bit better.

Almost everyone compares reviews, and reviews are made when cards launch. Comparing HD5xxx CF scaling and HD68xx CF scaling reviews by W1zzard, for example, is pointless right now; there's been a full year of driver optimizations in between.

Actually, I'm just asking: has anyone extensively compared them with the latest drivers to see if that's true?
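For anyone who wants to run that comparison, the scaling figure itself is simple to compute from a same-driver test (a sketch; the fps numbers below are invented placeholders, not measurements from any review):

```python
# CrossFire scaling as percentage gain of two cards over one.

def cf_scaling(single_fps: float, dual_fps: float) -> float:
    """Return multi-GPU gain in percent (100 would be perfect 2x scaling)."""
    return (dual_fps / single_fps - 1.0) * 100.0

# Hypothetical same-driver numbers for one game at one resolution:
print(f"HD5870 CF: +{cf_scaling(60.0, 99.0):.0f}%")   # hypothetical fps
print(f"HD6870 CF: +{cf_scaling(54.0, 97.2):.0f}%")   # hypothetical fps
```

Averaging this over many games and resolutions, all on the same driver for every card, is what a fair Cypress-vs-Barts scaling comparison would need.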

EDIT: I read the article; honestly, I thought it would be worse.

But you can see that there IS an 8% performance difference, which was my point. Regarding the visuals: it's an optimization, and an optimization should never be part of the default settings, no matter how noticeable it is or how many people are actually able to see it while gaming. 99.9% of people would not be able to tell the difference between a "raw" 25 GB 1080p Blu-ray movie and a good 5 GB 1080p DivX rip, but that's not a green light for anyone to start selling DVDs with lossy DivX movies on them as if they were Blu-rays, or simply as HD.

Very few people are able to see the difference between an actual diamond and zirconia or moissanite, but if you buy a diamond and pay for a diamond, you want a diamond. You get the point.

AMD should be honest about it and disable the optimization by default.

For me it is very noticeable and annoying. You can hardly see it in screenshots, but in games (or videos) it is very noticeable, at least to many people. Me, I probably wouldn't care so much, because the first thing I do when I install new drivers is go to the control panel and enable the High Quality profile. Regardless, I don't like companies cheating, and I do consider this cheating. For me, "if you don't see it, it's not cheating" is not an excuse.
 
Joined
Oct 15, 2010
Messages
951 (0.19/day)
And why does midrange always scale better than high-end in multi-GPU setups?

Because the system is less of a "bottleneck",

and

lower SP count = better internal management and utilization of resources = better scaling.

And besides that, has anyone tested HD58xx CrossFire scaling with the newer drivers? I haven't seen any review doing so. Maybe scaling is simply better with newer drivers, and that, along with the lower SP count (= better utilization), makes Barts look much better when it's not "much" better, only a bit better.

Almost everyone compares reviews, and reviews are made when cards launch. Comparing HD5xxx CF scaling and HD68xx CF scaling reviews by W1zzard, for example, is pointless right now; there's been a full year of driver optimizations in between.

Actually, I'm just asking: has anyone extensively compared them with the latest drivers to see if that's true?

But you can see that there IS an 8% performance difference, which was my point. Regarding the visuals: it's an optimization, and an optimization should never be part of the default settings, no matter how noticeable it is or how many people are actually able to see it while gaming. 99.9% of people would not be able to tell the difference between a "raw" 25 GB 1080p Blu-ray movie and a good 5 GB 1080p DivX rip, but that's not a green light for anyone to start selling DVDs with lossy DivX movies on them as if they were Blu-rays, or simply as HD.

Very few people are able to see the difference between an actual diamond and zirconia or moissanite, but if you buy a diamond and pay for a diamond, you want a diamond. You get the point.

AMD should be honest about it and disable the optimization by default.

For me it is very noticeable and annoying. You can hardly see it in screenshots, but in games (or videos) it is very noticeable, at least to many people. Me, I probably wouldn't care so much, because the first thing I do when I install new drivers is go to the control panel and enable the High Quality profile. Regardless, I don't like companies cheating, and I do consider this cheating. For me, "if you don't see it, it's not cheating" is not an excuse.

Man, you can write... lol
So Barts is nothing; they could have launched it at the beginning of the year?
Comparing to old benchmarks? Correct me if I'm wrong, but every site benchmarks all the graphics cards with the latest drivers! Only W1zz didn't include 5870 and 5850 CrossFire results, but a lot of other sites did, and Barts scales way better than Cypress according to them.
 

Benetanegia

Man, you can write... lol
So Barts is nothing; they could have launched it at the beginning of the year?

Definitely.

Correct me if I'm wrong, but every site benchmarks all the graphics cards with the latest drivers!

Then I correct you. :p

Most reviews that I remember use older drivers for the older cards and the launch drivers (beta drivers most of the time) for the new cards. Maybe my memory is failing me on this.

Anyway, can you link me to one of those reviews? I don't remember seeing HD58xx CF in HD68xx reviews, but I may have just overlooked them.

And please link me to an extensive review, not one of those that test 3-4 games at one resolution... that's far from conclusive, and more likely than not any advantage seen there is down to specific optimizations made for those games, and the games used in the review, as well as the settings, were "suggestions" from the manufacturer...
 
Joined
Oct 15, 2010
Messages
951 (0.19/day)
Definitely.

Then I correct you. :p

Most reviews that I remember use older drivers for the older cards and the launch drivers (beta drivers most of the time) for the new cards. Maybe my memory is failing me on this.

Anyway, can you link me to one of those reviews? I don't remember seeing HD58xx CF in HD68xx reviews, but I may have just overlooked them.

And please link me to an extensive review, not one of those that test 3-4 games at one resolution... that's far from conclusive, and more likely than not any advantage seen there is down to specific optimizations made for those games, and the games used in the review, as well as the settings, were "suggestions" from the manufacturer...

Look here:
Techreport
Anandtech
Guru3d
 