
ASUS GeForce GTX 570

the 580 seems overpriced now

I should have gotten 2 of these... maybe I will after I sell my 480. Actually, I will for sure!

W1zz, would you lower the score of the 580 now? This matters a lot to me.
 
:banghead: Oh yeah, forgot about that. I heard it has issues though.

Yeah, Lucid is still in its infancy, and ATI and Nvidia are not playing nice with them.
 
Yeah, Lucid is still in its infancy, and ATI and Nvidia are not playing nice with them.

Yeah, they are the issue! I can almost understand their p.o.v. though. Sort of like putting a Mustang engine in a Camaro: the purists won't have it, and the corporations don't want to share the profits.
 
So the new 480SP card (GTX 570) isn't any faster, slower actually, than the last-gen 480SP card (GTX 480)? How is this a win? It uses less power and is cooler than the most power-hungry, hottest GPU ever. That doesn't seem like a really big accomplishment. It is cheaper than the GTX 480.

Suppose Barts was 1600SP; what do you think its performance would be? Certainly it would kick the hell out of the 5870, not just match it.
 
So the new 480SP card (GTX 570) isn't any faster, slower actually, than the last-gen 480SP card (GTX 480)? How is this a win? It uses less power and is cooler than the most power-hungry, hottest GPU ever. That doesn't seem like a really big accomplishment. It is cheaper than the GTX 480.

It IS an accomplishment: it's on par with what was, for 9 months, the fastest single GPU you could buy, and now it's cooler, quieter and a considerable amount cheaper too. Also consider it likely that it would have been a GTX 475 had ATI not jumped to next-generation naming. You have to just think of it as a first-gen refresh, because that's what it is. Can't wait to see what ATI can pull off in terms of a single GPU this generation; Nvidia always tends to lead there, and that's why I prefer them.

Suppose Barts was 1600SP; what do you think its performance would be? Certainly it would kick the hell out of the 5870, not just match it.

Nope, essentially it would match it. The reason they cut down the SPs in the first place was that Cypress was unbalanced in terms of ROPs and SPs (too many SPs), so they tuned that a little and slightly beefed up the tessellation.

Supposing Barts were 1600SP, it would essentially be a 5870.
 
Nope, essentially it would match it. The reason they cut down the SPs in the first place was that Cypress was unbalanced in terms of ROPs and SPs (too many SPs), so they tuned that a little and slightly beefed up the tessellation.

Supposing Barts were 1600SP, it would essentially be a 5870.


Sorry, I don't buy that premise at all. A 1600SP Barts would cream the 5870. They, of course, would increase other areas of the card to match.
 
We'll never know; then that's not a 1600SP Barts, is it? :confused:

Whatever. You can try and avoid the point I was making. It's not hard to improve on the hottest, most power-hungry design ever after 8 months. Especially if you are going to make it slower overall, clock for clock.
 
This is a good value high performance card. Kinda like the 5850 was.

One thing some of you may have noticed is that the 2nd-generation NV 40nm chips are much more mature. On the other hand, the ATI 40nm chips have always been mature, and so they seemed less special when the 68xx cards were launched. Maybe the 69xx cards could change that perception.

The next big GPU performance upgrade will depend heavily on new manufacturing tech. I think both ATI and NV are counting on GF or TSMC to give them 28nm or similar.
 
Whatever. You can try and avoid the point I was making. It's not hard to improve on the hottest, most power-hungry design ever after 8 months. Especially if you are going to make it slower overall, clock for clock.

No problem, it's just a difference in opinion. However, I did already make my answer to your point: that the refinement to GF100 is a decent accomplishment after Nvidia dropped the ball so badly with it.

GF110 is essentially the same performance clock for clock; what they've been able to do with the refinement is increase yields and up the clock speeds. Clock for clock you'd have to be comparing the 570 to a 470, where it is faster solely because more SPs are active, and perhaps a little faster in FP16-heavy titles (as GF110 is capable of double GF100's throughput in that regard). The real advantage is decent clock speed increases and much better OC headroom for enthusiasts, and let's face it, GF110 is an enthusiast GPU.

All I think is that Nvidia have done a good job turning around the bad situation they were in. We now have a GTX 580 which consistently beats a 480 while using less power and making less heat, and likewise for the 570 vs the 470; there, power is almost the same, but the performance delta is bigger than 480-to-580: more like 20-25% (570 vs 470) as opposed to 10-15% (580 vs 480).
 
2560X1600 conclusion

The conclusion made this statement, and it's unclear to me what it means.

"In most of the latest DirectX 10 and DirectX 11 games, the GTX 570 will provide you comfortable gameplay with quite some eye-candy enabled, at 1920 x 1200 resolutions. It will also make gaming at 2560 x 1600 possible with some loss of detail."

I just bought a 30" HP ZR30w IPS LCD and I am wondering if the 580 has the same quality-loss issue. What cards are you comparing the 'quality' factor against, exactly? Why is there a quality loss on the 570? Thanks for any clarification on this statement.

Edit: It just dawned on me that you're talking about turning down game graphics settings in order to get proper FPS.
 
Supposing Barts were 1600SP, it would essentially be a 5870.

How you think Barts with 1600 SPs would perform just like a 5870 is beyond my comprehension, considering that 1120SP Barts is faster than 1440SP Cypress.

*On topic: seeing how the 570 is 22% faster than the 470 on average, that would make it 35% faster than the 5850. So the HD 6950 would "only" have to be 40-45% faster than the 5850 in order to be faster than the GTX 570, and I bet the 6950 is going to cost 299 USD.

If anyone cares to think about it, one would reach this conclusion:

9800GTX = HD4850
GTX 285 < HD5850
GTX 480(GTX 570) < HD6950 ??
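Those percentages chain multiplicatively, not additively; here is a quick sanity check of the figures above (all numbers are the poster's estimates, not measured data):

```python
gtx570_vs_gtx470 = 1.22             # "22% avg faster than 470"
gtx570_vs_hd5850 = 1.35             # "35% faster than 5850"

# Relative speeds chain by multiplication, so the implied 470-vs-5850 gap:
gtx470_vs_hd5850 = gtx570_vs_hd5850 / gtx570_vs_gtx470
print(round(gtx470_vs_hd5850, 3))   # ~1.107, i.e. the 470 ~11% ahead of a 5850

# A hypothetical HD 6950 at the "40% faster than 5850" lower bound:
print(1.40 > gtx570_vs_hd5850)      # True, so it would edge out the GTX 570
```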
 
Whatever. You can try and avoid the point I was making. It's not hard to improve on the hottest, most power-hungry design ever after 8 months. Especially if you are going to make it slower overall, clock for clock.

You can look at it this way: if GF100 ==> R600, then GF110 ==> RV670, but without the luxury and advantage of a smaller node. Nvidia has done a much, much better job "fixing" GF100 than AMD did "fixing" R600. Who knows what will happen with the next chip.

How you think Barts with 1600 SPs would perform just like a 5870 is beyond my comprehension, considering that 1120SP Barts is faster than 1440SP Cypress.

He thinks that because it's the plain truth. 1120SP Barts is NOT faster than 1440SP Cypress, not at all, lol. What's beyond my comprehension is how you can even make that comparison. 1440SP Cypress @ 900MHz >> Barts @ 900MHz, and both attain similar clocks when OCed, so it's not as if Barts could do 1200MHz.

The reason that Barts with 1120 SPs is similar to Cypress is that the architecture couldn't handle so many SPs to begin with, hence they lowered the amount of them.
 
You can look at it this way: if GF100 ==> R600, then GF110 ==> RV670, but without the luxury and advantage of a smaller node. Nvidia has done a much, much better job "fixing" GF100 than AMD did "fixing" R600. Who knows what will happen with the next chip.

Much, much better? Why?
I agree it's better, but not that much better, like if it were 50% or something.
The GTX 580 is 20% faster than the GTX 480 and better at perf per watt.
The HD 3870 equalled the HD 2900 XT but was much, much better at perf per watt.
 
Much, much better? Why?
I agree it's better, but not that much better, like if it were 50% or something.
The GTX 580 is 20% faster than the GTX 480 and better at perf per watt.
The HD 3870 equalled the HD 2900 XT but was much, much better at perf per watt.

No, it didn't really have better perf/watt if we consider that the node change alone already yields about 2x the perf/watt. All the improvements came from going to 55nm. Nvidia has achieved significantly better performance at lower power consumption on the same "fucked up" node.
 
No, it didn't really have better perf/watt if we consider that the node change alone already yields about 2x the perf/watt. All the improvements came from going to 55nm. Nvidia has achieved significantly better performance at lower power consumption on the same "fucked up" node.

Then you can't compare the two; you are basing that on different scenarios. A fair comparison would be if AMD had to do the trick on the same node.

EDIT: a fair comparison will be when Cayman is out :)
 
How you think Barts with 1600 SPs would perform just like a 5870 is beyond my comprehension, considering that 1120SP Barts is faster than 1440SP Cypress.

Keep in mind they did make other tweaks, mainly to tessellation performance, but not to the straightforward shader architecture.

And now that you're comparing 1120SP Barts to 1440SP Cypress, keep in mind the clock speed differences between the two. 1440SP Cypress is a 5850 clocked at 725MHz core; the 1120SP Barts is clocked at 900MHz core, with the same number of ROPs.

Keep in mind also that when a 5850 and 5870 are clocked the same, the difference is almost nil, somewhere in the vicinity of 2-3%, while losing 10% of the SPs. This confirms the chip was unbalanced in terms of ROPs vs SPs.

I am being swayed to think that a "Barts" with 1600 SPs would be a tad faster, but only because it would have more tessellation performance, not more shader performance. Just keep in mind the 6800s are clocked faster than the 5800s to help make up for their lack of SPs:

5850 = 725MHz, 6850 = 775MHz
5870 = 850MHz, 6870 = 900MHz

Again, it's just a difference in opinion, and we will never know the difference because such a card won't be made now.
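The clock and shader figures in this post can be sanity-checked with some quick arithmetic (the 2-3% same-clock gap is the figure quoted above, not a fresh benchmark):

```python
# 5870 vs 5850: shader advantage vs observed same-clock performance gap
hd5870_sps, hd5850_sps = 1600, 1440
sp_advantage = hd5870_sps / hd5850_sps - 1      # ~0.111 (11% more SPs)
same_clock_gap = 0.025                          # ~2-3% when clocked equally

# Scaling far below the SP advantage suggests shaders weren't the bottleneck
print(same_clock_gap < sp_advantage / 3)        # True

# The 68xx clock bumps that help offset the trimmed shader count:
print(round(775 / 725 - 1, 3), round(900 / 850 - 1, 3))  # 0.069 0.059
```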

Sorry for being so off topic :o
 
Keep in mind they did make other tweaks, mainly to tessellation performance, but not to the straightforward shader architecture.

And now that you're comparing 1120SP Barts to 1440SP Cypress, keep in mind the clock speed differences between the two. 1440SP Cypress is a 5850 clocked at 725MHz core; the 1120SP Barts is clocked at 900MHz core, with the same number of ROPs.

Keep in mind also that when a 5850 and 5870 are clocked the same, the difference is almost nil, somewhere in the vicinity of 2-3%, while losing 10% of the SPs. This confirms the chip was unbalanced in terms of ROPs vs SPs.

I am being swayed to think that a "Barts" with 1600 SPs would be a tad faster, but only because it would have more tessellation performance, not more shader performance. Just keep in mind the 6800s are clocked faster than the 5800s to help make up for their lack of SPs:

5850 = 725MHz, 6850 = 775MHz
5870 = 850MHz, 6870 = 900MHz

Again, it's just a difference in opinion, and we will never know the difference because such a card won't be made now.

You make a good point; the card's shaders aren't being used to their full potential. I guess that's what AMD addressed with the revamped shader setup in the 6900s, or whatever that is called.
 
Then you can't compare the two; you are basing that on different scenarios. A fair comparison would be if AMD had to do the trick on the same node.

Then maybe they would have done as good a job as Nvidia has done now, but that's out of the question. Nvidia has done a better job than AMD did back then, and that's all that matters according to what MY point was.

HD3870 vs HD2900XT perf/watt (same perf)

[image: perfwatt.gif]


68% vs 100%

GTX480 vs GTX570 perf/watt

[image: perfwatt_1920.gif]


74% vs 100% without a node change.

So let's define what "much much better" is, because it's clear that we are both using it liberally; but ultimately, if the HD3870 had a much, much better perf/watt, then Nvidia is doing a much, much better job. ;)
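Turning those chart percentages into improvement factors makes the comparison concrete (chart values as quoted above; the 2x-per-node figure is the earlier poster's rule of thumb):

```python
# Perf/watt read off the relative charts (faster card normalized to 100%)
hd2900xt, hd3870 = 0.68, 1.00       # 80nm -> 55nm shrink in between
gtx480, gtx570 = 0.74, 1.00         # both on the same 40nm node

amd_gain = hd3870 / hd2900xt        # ~1.47x, helped by a full node shrink
nv_gain = gtx570 / gtx480           # ~1.35x, with no shrink at all

# If a node shrink alone is worth ~2x perf/watt, AMD's 1.47x actually
# trails the process gain, while Nvidia's 1.35x is pure design work.
print(round(amd_gain, 2), round(nv_gain, 2))  # 1.47 1.35
```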

EDIT: a fair comparison will be when cayman is out :)

This is not AMD vs Nvidia. Why do people have to turn every discussion into an "AMD is better than Nvidia" (or vice versa) argument? It's stupid.

I am talking about how the two companies stepped back from their failures, and how Nvidia seems to be doing better. Does it mean Nvidia is better? No. Does it mean the next Nvidia chip will be a lot better? No, but it does open a very, very big possibility.
 
You make a good point; the card's shaders aren't being used to their full potential. I guess that's what AMD addressed with the revamped shader setup in the 6900s, or whatever that is called.

Yeah, they must have figured out the proper ratio needed to use the chip at its peak potential, and decided to amp up the tessellation while they were in there.
 
So the new 480SP card (GTX 570) isn't any faster, slower actually, than the last-gen 480SP card (GTX 480)?

You know there are more specs than just number of SPs.

How is this a win? It uses less power and is cooler than the most power-hungry, hottest GPU ever. That doesn't seem like a really big accomplishment. It is cheaper than the GTX 480.

Lower power consumption and lower temperatures were the point of the tweaks, not making the card faster clock for clock. In fact, the card would likely be equal clock for clock if the memory system had been kept the same.
 
great review, awesome overclocking for this card
 
Lower power consumption and lower temperatures were the point of the tweaks, not making the card faster clock for clock. In fact, the card would likely be equal clock for clock if the memory system had been kept the same.

Well, you've obviously been given information that I wasn't aware of. I just assumed that more performance was always what it was about. They could have saved themselves a lot of money and effort if they just added the software tweak to reduce peak consumption with Furmark/OCCT to the 480 and dropped the price.
 
Well, you've obviously been given information that I wasn't aware of. I just assumed that more performance was always what it was about. They could have saved themselves a lot of money and effort if they just added the software tweak to reduce peak consumption with Furmark/OCCT to the 480 and dropped the price.

But that wasn't the point. They refreshed GF100 to create a more efficient design; of course, if all they wanted to do was reduce the clock speeds in OCCT and Furmark, they would have just added a power limiter to the 400 series. Instead they addressed the issues while giving the cards a nice performance boost.
 