# NVIDIA GeForce GTX 580 1536 MB



## W1zzard (Nov 7, 2010)

Today NVIDIA releases its new GeForce GTX 580, based on the Fermi architecture. The card is 20% faster than the GTX 480, yet requires less power. NVIDIA has also optimized fan noise, making this the quietest high-end card on the market today.

*Show full review*


----------



## qubit (Nov 9, 2010)

This looks like a great card. I think I'll get myself one in a few months.


----------



## mdsx1950 (Nov 9, 2010)

Woohooo. Awesome!


----------



## scaminatrix (Nov 9, 2010)

Fantasticness! Been waiting hours for this...


----------



## Fourstaff (Nov 9, 2010)

And so I was right. Around the time people said the GTX 580 was coming out, I jokingly predicted that it was going to be a GTX 480 done right. And it was. They should have named it GTX 485 rather than GTX 580.


----------



## Frick (Nov 9, 2010)

Looks good actually. On par with the 5970, better power consumption than the 480.. Nice.


----------



## arroyo (Nov 9, 2010)

This card makes the same kind of transformation as going from the 8800 GT to the 9800 GT: better, but somehow the same.


----------



## arnoo1 (Nov 9, 2010)

You're late xd

86 °C under Crysis load in this review? That's high. Linus from NCIX (Linus Tech Tips) on YouTube had an open test bench and the core only went up to 69 °C with 54% fan. Of course you get better temps on an open test bench, but that difference is pretty big.

Still a great review; love the overvolting part.
And damn, it's fast.


----------



## Scrizz (Nov 9, 2010)

nice, looking good


----------



## qubit (Nov 9, 2010)

Fourstaff said:


> And so I was right. Around the time people said the GTX 580 was coming out, I jokingly predicted that it was going to be a GTX 480 done right. And it was. They should have named it GTX 485 rather than GTX 580.



I quite agree, it definitely deserves the GTX 485 name, because it's not a new architecture, just an improved Fermi with a _way_ better cooler. Nevertheless, the card is still a good buy. It would make a decent upgrade from my GTX 285, for example.

Of course, we now all want to see how AMD's response to this performs. 

Oh, and next time W!zzy tells me he don't have none for review, I'm gonna take that with a large pinch of salt.


----------



## AFQ (Nov 9, 2010)

@Wizzard

Nice card, you should also start including Vantage in your reviews.


----------



## W1zzard (Nov 9, 2010)

arnoo1 said:


> an open test bench and the core only went up to 69 °C with 54% fan



an open bench outside in iceland would be even cooler, but what does that tell the user about the temps to expect? that's the point of a review, isn't it? for ncix it's to sell product



AFQ said:


> you should also start including Vantage in your reviews.



congrats. you are the first person to EVER complain about that. i've had people complain about pretty much everything else, except for vantage. vantage is terrible, won't use it. maybe 3dmark 11 is better



qubit said:


> Oh, and next time W!zzy tells me he don't have none for review, I'm gonna take that with a large pinch of salt



as mentioned before, nvidia's response to the feedback in that thread was that they tried to get me a board on the weekend and managed to achieve the impossible!


----------



## blibba (Nov 9, 2010)

Any F@H power/temps/performance?


----------



## jasper1605 (Nov 9, 2010)

Thanks for the review Wiz!  Always appreciated.  Now it's on to see what the 6970 will do


----------



## Red_Machine (Nov 9, 2010)

Excellent review, W1zz!  Thanks for your valued opinions.


----------



## gumpty (Nov 9, 2010)

Colour me blind, but the noise graphs don't have the data for the HD6870 or HD6850.

Not that I need to see them there - I know from the one I have that it's loud.


----------



## CDdude55 (Nov 9, 2010)

Very nice, a good amount faster for less power.


----------



## 983264 (Nov 9, 2010)

The GTX 580 is the fastest single-GPU card as of now, and the HD 6970 is not yet released... This is a serious battle for the fastest single-GPU card...


----------



## KainXS (Nov 9, 2010)

Damn man, seriously, it's not even 15% faster. The naming for these cards is getting really screwed up, wtf AMD and NVIDIA.

At least it uses less power.


----------



## heky (Nov 9, 2010)

In Crysis it reached 86°C, and when overclocked with voltage tuning it started to throttle at 97°C. So much for being sooo much better.
Oh, and it still doesn't beat the 5970, but costs more.

I'm really wondering what the new AMD 6950/70 will bring to the table.


----------



## NdMk2o1o (Nov 9, 2010)

Underwhelmed a tad, 10-15% better than the 480 :shadedshu


----------



## BraveSoul (Nov 9, 2010)

Digging this. Got to love the voltage and power consumption graphs... The card looks good: quieter, faster, uses less power, and should still manage to heat up a small room. Perfect timing, as it's really getting cold here in NY.


----------



## yogurt_21 (Nov 9, 2010)

W1zzard said:


> maybe 3dmark 11 is better



one can only hope, thanks for the review, top notch as always.


----------



## Delta6326 (Nov 9, 2010)

Yummy, I didn't know these were coming out this year! And before the 6970! That was fast moving.


----------



## Bjorn_Of_Iceland (Nov 9, 2010)

Lol! This card should've been released last year! XD

Let's see what the 6970 will bring; if it's better than the 5970 and priced correctly, it's definitely something to look forward to!


----------



## CDdude55 (Nov 9, 2010)

KainXS said:


> Damn man, seriously, it's not even 15% faster. The naming for these cards is getting really screwed up, wtf AMD and NVIDIA.
> 
> At least it uses less power.



It really should have been named ''GTX 485'' or something of the sort, as it's still based on the same architecture and process, just with more efficiency added in.


----------



## AFQ (Nov 9, 2010)

W1zzard said:


> congrats. you are the first person to EVER complain about that. i've had people complain about pretty much everything else, except for vantage. vantage is gay, wont use. maybe 3dmark 11 is better



lol, any idea when 3DMark 11 will release? You must know better than us. 

And can you please tell us how you bypassed the voltage limitation?


----------



## douglatins (Nov 9, 2010)

Performance-wise it's only 15% better, but it has better thermals and load power consumption.


----------



## W1zzard (Nov 9, 2010)

AFQ said:


> and can you please tell how you bypassed the voltage limitation?



you mean the power limiter? i just measure power at a high sample rate, faster than nvidia's system, so i can measure before the limiter can kick in.

technically it's not a voltage limitation. voltage is pretty much constant on the 12v line, but the current (amps) changes. 

power (watts) = voltage * current. that's why nvidia also measures voltage, multiplies (in the driver) and gets the power draw from that formula
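To make the idea concrete, here's a tiny illustrative sketch (made-up numbers, not W1zzard's actual tooling) of why a fast sampling rate catches power spikes that a slower polling loop misses:

```python
# Illustrative only: made-up samples showing why a faster meter sees
# spikes that a slower polling loop misses. P = V * I on each 12 V rail.

def power_watts(voltage_v, current_a):
    return voltage_v * current_a

VOLTAGE = 12.0  # essentially constant on the 12 V line
# simulated current on one rail, one sample per millisecond;
# a short spike sits at samples 1-2
current_samples = [10.0, 28.0, 28.0, 10.0, 10.0, 10.0]  # amps

# fast meter: looks at every 1 ms sample, so it catches the 336 W spike
fast_peak = max(power_watts(VOLTAGE, i) for i in current_samples)

# slow poller: only checks every 3rd sample, so the spike slips through
slow_peak = max(power_watts(VOLTAGE, i) for i in current_samples[::3])

print(fast_peak, slow_peak)  # 336.0 120.0
```

The same P = V * I arithmetic runs in both cases; only the sampling rate differs, which is the whole trick.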


----------



## AFQ (Nov 9, 2010)

W1zzard said:


> you mean the power limiter? i just measure power at a high sample rate, faster than nvidia's system, so i can measure before the limiter can kick in.
> 
> technically it's not a voltage limitation. voltage is pretty much constant on the 12v line, but the current (amps) changes.
> 
> power (watts) = voltage * current. that's why nvidia also measures voltage, multiplies (in the driver) and gets the power draw from that formula



Ah...thanks for clarification.


----------



## mdm-adph (Nov 9, 2010)

_Still_ uses more power than a 5970 while being slower.  Fail.



> In order to stay within the 300 W power limit, NVIDIA has added a power draw limitation system to their card. When either Furmark or OCCT are detected running by the driver, three sensors measure the inrush current and voltage on all 12 V lines (PCI-E slot, 6-pin, 8-pin) to calculate power. As soon as the power draw exceeds a predefined limit, the card will automatically clock down and restore clocks as soon as the overcurrent situation has gone away.



Why, those tricky devils.
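For what it's worth, the throttling behaviour described in that quote boils down to a simple control loop. A rough sketch (the limit and the throttled clock value are invented for the example; this is not NVIDIA's actual driver code):

```python
# Illustrative sketch of the limiter described above: sum power across
# the three 12 V inputs, clock down past a limit, restore otherwise.
POWER_LIMIT_W = 300.0
NORMAL_CLOCK_MHZ = 772      # GTX 580 stock core clock
THROTTLED_CLOCK_MHZ = 405   # invented throttle step for the example

def total_power(rails):
    # rails: (volts, amps) pairs for the PCI-E slot, 6-pin and 8-pin inputs
    return sum(v * a for v, a in rails)

def next_clock(rails, stress_app_detected):
    # the driver only arms the limiter when Furmark/OCCT is detected
    if stress_app_detected and total_power(rails) > POWER_LIMIT_W:
        return THROTTLED_CLOCK_MHZ   # clock down while over the limit
    return NORMAL_CLOCK_MHZ          # restore once the condition clears

# Furmark running, card pulling ~350 W across slot + 6-pin + 8-pin
rails = [(12.0, 6.0), (12.0, 9.0), (12.0, 14.2)]
print(next_clock(rails, True))   # 405 (throttled)
print(next_clock(rails, False))  # 772 (a game drawing the same power)
```

The `stress_app_detected` gate is exactly what reviewers objected to: the same power draw from a game would sail through untouched.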


----------



## qubit (Nov 9, 2010)

W1zzard said:


> as mentioned before, nvidia's response to the feedback in that thread was that they tried to get me a board on the weekend and managed to achieve the impossible!



I'm sorry, I didn't see your previous response and didn't know.

Anyway, that's mighty awesome! You must have burned the midnight oil to get this review done by the deadline. Kudos, dude.


----------



## btarunr (Nov 9, 2010)

mdm-adph said:


> _Still_ uses more power than a 5970 while being slower.  Fail.
> 
> 
> 
> Why, those tricky devils.



But we managed to put those devils to sleep (disabled that throttling logic). Check out the "Maximum" graph for Furmark power draw with throttling logic disabled.


----------



## mdm-adph (Nov 9, 2010)

btarunr said:


> But we managed to put those devils to sleep (disabled that throttling logic). Check out the "Maximum" graph for Furmark power draw with throttling logic disabled.



Aye, will check it out.  It just always freaks me out when a company _directly_ targets reviewers and benchmarking programs in that way, though.  Too close to home.


----------



## douglatins (Nov 9, 2010)

The card throttles itself in Furmark? Lame.


----------



## btarunr (Nov 9, 2010)

Their contention is that since Furmark/OCCT don't reflect "real-world usage" (hear hear, like racing doesn't reflect real-world usage of "cars" i.e. driving from office to home), they decided to play nanny with a supposedly enthusiast-grade product. It's kind of like fitting the speed-governor device used in school buses in a Porsche 911.


----------



## mdm-adph (Nov 9, 2010)

btarunr said:


> Their contention is that since Furmark/OCCT don't reflect "real-world usage" (hear hear, like racing doesn't reflect real-world usage of "cars" i.e. driving from office to home), they decided to play nanny with a supposedly enthusiast-grade product. It's kind of like fitting the speed-governor device used in school buses in a Porsche 911.



If it's not "real-world usage," then they should ignore it.  Nobody is going to see the Furmark results besides us enthusiasts anyway.

Doing something like this makes it look like they're trying to hide something.  Since the limiter only kicks in during either Furmark or OCCT, couldn't more power be conceivably drawn during some really, really, really heavy gaming?  I mean something really intensive?


----------



## ToTTenTranz (Nov 9, 2010)

Card is good, pricing isn't so bad either.
But GTX *5*80? So from now on, nVidia is going to change the family number denominator twice a year? By 2013 we'll have GTX 1180?


I guess 2010 is by far the year with the crappiest naming conventions for graphics cards, ever.


----------



## Hayder_Master (Nov 9, 2010)

Glad to see your review, W1z. Dig it, guys.


----------



## jfgwapo (Nov 9, 2010)

Nice review, as always.

Nice performance gain vs 480, good power reduction too.


----------



## wahdangun (Nov 9, 2010)

Wow, it's just a fully enabled GTX 480. 

BTW, where is bastenegia?? Where are the 128 TMUs he claimed?


----------



## Imsochobo (Nov 9, 2010)

Frick said:


> Looks good actually. On par with the 5970, better power consumption than the 480.. Nice.



Let's not forget that:

a.\ it's almost a year since the 5970 came out.
b.\ it's not beating the 5970 in power usage or perf/watt.

But it's great news!
NVIDIA has had a lack of products that compete with AMD.
The 460 can compete, and the 580 is their strongest one.
But they're fighting a whole, very good lineup from AMD though.
Maybe my 5850 will go under launch price now? Instead of over...


----------



## Animalpak (Nov 9, 2010)

Finally, my new DirectX 11 video card is out.


----------



## trickson (Nov 9, 2010)

Thanks for the nice review W1zzard!
Man, they are pumping out video cards like mad lately. I just got CrossFire with 5770s. I feel like I need to ask Bill Gates for help just to keep up with all the new tech; it seems like every week something new comes out to make everything I have obsolete!


----------



## alexsubri (Nov 9, 2010)

Great review W1zz, at least it's almost on par with the 5970.


----------



## N3M3515 (Nov 9, 2010)

There's nothing we can do about the price; after all, it is the fastest SINGLE-GPU card.
I'm disappointed anyway, as I was expecting at least a 20% average increase over the GTX 480.
On the single-card side of things, the 5970 still rules: performance, power consumption, and price-wise.
This card is on average 24% faster than the 5870, so all the 6970 has to do is be 30% faster (than the 5870) to take the single-GPU performance crown from the GTX 580, which I think is very probable.
Then the 5970, undefeated, passes its crown to the 6990.


----------



## newtekie1 (Nov 9, 2010)

CDdude55 said:


> It really should of been named ''GTX 485'' or something of the sort, as it's still based on the same architecture and process, just with more efficacy added in.



Again, this comes down to ATi moving to the next generation first with cards that are still based on the same architecture and process. It's the same reason nVidia had to move to the 9800 series when they had planned to release everything under the 8800 series. In marketing it would seem like nVidia is a generation behind if ATi is rolling out their "next gen" cards and nVidia is still putting out GTX 400 series cards.



btarunr said:


> Their contention is that since Furmark/OCCT don't reflect "real-world usage" (hear hear, like racing doesn't reflect real-world usage of "cars" i.e. driving from office to home), they decided to play nanny with a supposedly enthusiast-grade product. It's kind of like fitting the speed-governor device used in school buses in a Porsche 911.




While I somewhat agree, I also note that ATi has done the same thing with previous cards, except they did it too late, as an afterthought, when people started killing hardware with Furmark. Personally, I don't think Furmark should be used for anything more than testing stability (and now it can't even do that), so use Kombustor instead. Power use under Furmark is completely unrealistic. Using your analogy, it is like judging the fuel mileage of a race car based on 100% full throttle for the entire race; it just doesn't happen. That is why when I look at power consumption, I always look at the Peak graph, because that is real world.



mdm-adph said:


> _Still_ uses more power than a 5970 while being slower.  Fail.
> 
> 
> 
> Why, those tricky devils.




Yes, but it is an unnoticeable difference, and for almost $100 less.  I'll take it.


----------



## largon (Nov 9, 2010)

Too little too late. 
I reckon Cayman will be one tough adversary for GF110.


----------



## Imsochobo (Nov 9, 2010)

newtekie1 said:


> Again, this comes down to ATi moving to the next generation first with cards that are still based on the same architecture and process. It's the same reason nVidia had to move to the 9800 series when they had planned to release everything under the 8800 series. In marketing it would seem like nVidia is a generation behind if ATi is rolling out their "next gen" cards and nVidia is still putting out GTX 400 series cards.



Where is ATI the same arch from the 5000 to the 6000 series?
It's rather different; how can an 1120-shader card trump my 1536-shader card?


----------



## johnnyfiive (Nov 9, 2010)

BTW guys, I think minimum frame rates are being overlooked... by a lot of you guys.
For each person saying the card is inferior to the 5970, that it's too late, that it's not really worth the $559 price tag... you're wrong. It beats the 480 by a decent amount, same with the 5870. It performs as well as the 5970 in most cases, and, the kicker, it has an equal or sometimes better minimum frame rate than the 5970, has CUDA, and much improved tessellation performance.

So what am I missing that the majority of you guys are bashing on?

http://www.hardwarecanucks.com/foru...s/37789-nvidia-geforce-gtx-580-review-11.html

Look at those minimum frame rates, fellas. I love the reviews on TPU, but minimum frame rate really should be added to reviews. It makes a big difference. Max frame rate is nice to know, but it's not nearly as important as the minimum. Think about it: if you have a 60 Hz monitor, is having 120 max, 55 min better than having 110 max, 65 min? The 580 is awesome for being single GPU.

And you can't tell me this isn't impressive...


As for it being more expensive than a 5970... some 5970s are $600, and it's an OLD architecture; it's basically just two 5850s. For being a single GPU that improves the Fermi architecture, thermals, and power consumption... this is a great card for $559.


----------



## qubit (Nov 9, 2010)

I'm waiting for the Charlie D article frothing at the mouth that he was right. 

This is still a really good card, even if it's not at the level that some journalists think it should be.

Look, it's really easy for all of us to say the Fermi design should be this or that, but we're not the poor sods who have to design and implement it. The devil's in the details with stuff like this. And there's a _lot_ of details.

Heck, we're even seeing that AMD's latest top-end is getting pushed back due to the same sorts of issues.


----------



## Red_Machine (Nov 9, 2010)

Hear hear, johnny!


----------



## bear jesus (Nov 9, 2010)

Great review as always, but I have to admit I'm disappointed. I know there was only so much that could be done going from GF100 to GF110, but I was really hoping for the 580 to beat a pair of 6870s in CrossFire.

Admittedly I would have wanted to run 3 monitors off one card, and luckily I didn't have any hope for NVIDIA to do that once I saw the pictures of the display outputs, so the 580 would not have been up to the task; I was just hoping for more options for a single high-end card than just the 6970.

I wish this was the card NVIDIA released last year, so they would have had something even better out right now. I'm not saying it's a bad card, as it gives great fps and has reduced power usage, but it really is the 485, or basically what the 480 was supposed to be.


----------



## hv43082 (Nov 9, 2010)

Wow, this release lowered the ATI 5970 cost to $470 after $30 MIR from newegg.


----------



## johnnyfiive (Nov 9, 2010)

johnnyfiive said:


> BTW guys, *I think minimum frame rates are being overlooked... by a lot of you guys.*
> For each person saying the card is inferior to the 5970, that it's too late, that it's not really worth the $559 price tag... you're wrong. It beats the 480 by a decent amount, same with the 5870. It performs as well as the 5970 in most cases, and, the kicker, it has an equal or sometimes better minimum frame rate than the 5970, has CUDA, and much improved tessellation performance.
> 
> So what am I missing that the majority of you guys are bashing on?
> ...



Quoting this, because it needs to be on page 3.

*Edit:* As for folding... it slaughters the 480.

http://www.hardwarecanucks.com/foru...s/37789-nvidia-geforce-gtx-580-review-20.html

The 580 is awesome guys...


----------



## dies900 (Nov 9, 2010)

I'm really surprised by how much less power it uses for more performance.
This is what the 480 should have been, or maybe a 485.
Nice work NVIDIA (Y)


----------



## bear jesus (Nov 9, 2010)

If you think about it, this really is what NVIDIA intended to bring up against the 5870/90; if they had done it then, they would have thrashed ATI/AMD badly by having a single-chip card that beats their dual-chip card in many tests.



johnnyfiive said:


> *Edit:* As for folding... it slaughters the 480.
> 
> http://www.hardwarecanucks.com/foru...s/37789-nvidia-geforce-gtx-580-review-20.html
> 
> The 580 is awesome guys...



I agree the 580 will give some people an awesome increase in PPD


----------



## johnnyfiive (Nov 9, 2010)

The 480 easily beats the 5870... the 580 is a single GPU, manhandles the 5870, easily beats the 480, and is neck and neck with the 5970 in the majority of games/tests. What more do ya want, bear!?


----------



## bear jesus (Nov 9, 2010)

johnnyfiive said:


> The 480 easily beats the 5870... the 580 is a single GPU, manhandles the 5870, easily beats the 480, and is neck and neck with the 5970 in the majority of games/tests. What more do ya want, bear!?



I want a good thrashing, double the fps of an ATI chip. 

If the 580 had beaten the 5970/6870 CrossFire by a bigger margin, it could have been enough for me to consider going back to a single monitor at 2560x1600, but right now it's not.


----------



## johnnyfiive (Nov 9, 2010)

Here ya go, exactly what you asked for.

Look at the minimum FPS...

Source: Anandtech.com


Impressed finally?


----------



## bear jesus (Nov 9, 2010)

johnnyfiive said:


> Here ya go, exactly what you asked for.
> 
> Look at the minimum FPS...
> 
> ...



OK, yes, that is impressive, but now I have to go find Anandtech's min fps for 6870 CrossFire to be truly impressed.


----------



## johnnyfiive (Nov 9, 2010)

It's in that image I posted above. AMD's CrossFire doesn't do so great in certain games at 2560; sometimes it just plain sucks because of the 256-bit memory bus. 6870 CrossFire is in that image above, second from the bottom. The fact is, two 580s will dominate anything at 2560, there's no question about that. You pay a lot for two 580s... but the performance, omg.


----------



## CDdude55 (Nov 9, 2010)

Yeah, I think a lot of people forget about the minimum FPS, which is actually much more important than the max in most cases. I don't think the 580 was meant to take on the 6900s though, as it's not a ground-up design and more of a refresh fixing the things the 480 was plagued with.


----------



## Athlon2K15 (Nov 9, 2010)

I think the 580 does its job. It goes head to head with the 5970, where before there was nothing. It's slightly cheaper than a 5970. I do believe next year we will see something new from NVIDIA... like Kepler, which will probably beat anything ATI has in the works now.


----------



## qubit (Nov 9, 2010)

johnnyfiive said:


> Here ya go, exactly what you asked for.
> 
> Look at the minimum FPS...
> 
> ...



Nice find, Johnny. Minimum fps is indeed absolutely the most important factor in rendering performance. Imagine a hypothetical system that never dropped below 60 fps under any conditions: you'd have 100% guaranteed smooth gameplay forever, and the rest wouldn't matter.

EDIT: I've just twigged that the ATI MisFires actually give much _lower_ performance than the single card. And how old is this game now...? You'd think they'd have a profile out for it, wouldn't you. Glitches like this are why I'm not keen on dual-card setups.


----------



## johnnyfiive (Nov 9, 2010)

qubit said:


> Nice find, Johnny. Minimum fps is indeed absolutely the most important factor in rendering performance. Imagine a hypothetical system that never dropped below 60 fps under any conditions: you'd have 100% guaranteed smooth gameplay forever, and the rest wouldn't matter.



Exactly, qubit, it's a huge factor that people seem to overlook.

Also, take this into consideration: the GTX 580 is about 18-20% faster than the 5870. That means for the 6970 to even compete with the GTX 580, it has to be at least 20% faster than the 5870. For it to be clearly superior to the GTX 580, it needs to be 30-32% faster than the 5870. Do you guys think that's possible... 30-32% faster than the 5870? I sure don't, but I hope so, lol.
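That relative-performance arithmetic can be sanity-checked quickly (the HD 5870 normalized to 1.00; the 20% figure is the estimate above, not a measured number):

```python
# Quick sanity check of the relative-performance arithmetic above,
# with the HD 5870 normalized to 1.00.
hd5870 = 1.00
gtx580 = 1.20          # ~20% faster than the 5870, per the post above

# to merely tie the GTX 580, the 6970 needs the same +20% over the 5870
tie_target = gtx580 / hd5870 - 1.0
print(round(tie_target * 100))   # 20 (% faster than the 5870)

# to beat the GTX 580 by ~10%, it needs ~32% over the 5870
beat_target = gtx580 * 1.10 / hd5870 - 1.0
print(round(beat_target * 100))  # 32
```

So the 30-32% figure quoted above corresponds to beating the GTX 580 by roughly 8-10%.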


----------



## f22a4bandit (Nov 9, 2010)

I'm pleasantly surprised this is actually a hard launch! I have to give NVIDIA respect; they produced a very nice card. I think they also realize that THIS is what Fermi should have been from the start.


----------



## bear jesus (Nov 9, 2010)

johnnyfiive said:


> It's in that image I posted above. AMD's CrossFire doesn't do so great in certain games at 2560; sometimes it just plain sucks because of the 256-bit memory bus. 6870 CrossFire is in that image above, second from the bottom. The fact is, two 580s will dominate anything at 2560, there's no question about that. You pay a lot for two 580s... but the performance, omg.



Sorry, I'm still half asleep. If the 6970 beats the 580 I will have to buy one though, as I can keep my current monitors and PSU; only time will tell.


----------



## newtekie1 (Nov 9, 2010)

Imsochobo said:


> Where is ATI the same arch from the 5000 to the 6000 series?
> It's rather different; how can an 1120-shader card trump my 1536-shader card?



It is the same architecture, just optimized. Just like RV670 was the same architecture as R600, but optimized. It is not a rebuild from the ground up like, for example, RV770/90 to Cypress or GT200 to Fermi.



bear jesus said:


> I agree the 580 will give some people an awesome increase in PPD



I think, just like the GTX 480, this GTX 580 is going to be priced too high to attract folders. Overclocked, I would guess the GTX 580 is going to do maybe 18,000 PPD. But at $500+ you can get two GTX 470s, which will pull 16,000 apiece when overclocked. So for the same cash, you can get 32,000 PPD vs. 18,000 PPD with the GTX 580.
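Running those guessed numbers (they are estimates, not measurements), the folding value gap looks like this:

```python
# Rough folding value comparison using the estimated numbers above
# (PPD figures and prices are the poster's guesses, not benchmarks).
gtx580_price, gtx580_ppd = 500, 18_000
gtx470_price, gtx470_ppd = 250, 16_000   # ~two for the GTX 580's price

two_470s_ppd = 2 * gtx470_ppd
print(two_470s_ppd)                              # 32000 total
print(round(two_470s_ppd / (2 * gtx470_price)))  # 64 PPD per dollar
print(round(gtx580_ppd / gtx580_price))          # 36 PPD per dollar
```

On these assumptions, two GTX 470s deliver nearly twice the points per dollar.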


----------



## the54thvoid (Nov 9, 2010)

qubit said:


> ATI MisFires



That's my word!!

Anyhow, this is what the 480 should have been, and back in Sep '09 I'd have bought this for sure. I may buy one, but I'll need to see the HD 6970 first and decide on that. But let's face it, folks: we get 5970 performance, similar power consumption, and less noise. It IS a good card, so there can be no bitching.

But, HD 6970? I'm keeping the cash in my wallet for now.


----------



## Jeffredo (Nov 9, 2010)

This was a pleasant surprise this morning. Excellent card and a well-timed move by Nvidia. About the only way I would be disappointed is if I had purchased a GTX 480 in the past seven months. Anyway, that was then; this is now. What's in a name? I guess it doesn't really matter, but it would have made more sense to keep a GTX 4XX tag (GTX 485 or GTX 490) since it is a refinement instead of a totally new architecture. Given the other guys have also jumped on the out-of-sequence naming bandwagon with the HD 6850 and HD 6870, anything goes.


----------



## the54thvoid (Nov 9, 2010)

johnnyfiive said:


> Here ya go, exactly what you asked for.
> 
> Look at the minimum FPS...
> 
> ...



No, actually. The 2560 res is a problem for AMD cards. *The minimum frames at 1920x1200 are still higher for the 5970 and CrossFired 58xx cards.* 

http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/6


----------



## wolf (Nov 9, 2010)

W1zzard said:


> as mentioned before, nvidia's response to the feedback in that thread was that they tried to get me a board on the weekend and managed to achieve the impossible!



So they are watching TPU threads... I hope they get some good ideas. 

Fantastic review W1z, good to see you got a card and benched it all in time. I personally think the GTX 580 is a decent move for NVIDIA.

Now what I'm interested in is seeing some scaled-down versions of GF110, and what sort of performance they provide and power they consume. For instance, GTX 470 specs on a GF110 core might just be win. Perhaps win enough to jam two on a card...


----------



## johnnyfiive (Nov 9, 2010)

the54thvoid said:


> No actually.  The 2560 res is a problem for AMD Cards.  *The minimum frames at 1920x1200 are still higher for 5970 and crossfired 58 cards.*
> 
> http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/6



They better be, it's two cards with 2 GB of total memory. If I paid $700 for two 5870s and a $550 GTX 580 beat my min fps, I'd be pretty upset.


----------



## bear jesus (Nov 9, 2010)

wolf said:


> now what I'm interested in is seeing some scaled-down versions of GF110,



Great point; GF114/GTX 560 (I assume) should be awesome, and even more so in SLI.


----------



## crow1001 (Nov 9, 2010)

Decent review, but Dirt 2 is still tested in DX9, and I wish you would drop the low-res results for these beastly cards; 1024x768/1280x1024 are not relevant to the card being benched. You're better off dumping them and spending the time benching another game.


----------



## Yellow&Nerdy? (Nov 9, 2010)

Great review. The card just fails to impress me... and the renaming crap is even worse than what AMD did. Guess NVIDIA "did it again". I'm not willing to spend 480 euros for a 10-15% performance increase over the 480 and slightly better thermals, to be honest. I guess I'll just have to wait for Cayman. At least that's a new chip, unlike the 580, which is more of a tweaked/fixed 480. I would like to see future cards based on the GF104 though, like a GF114 maybe.


----------



## qubit (Nov 9, 2010)

wolf said:


> now what I'm interested in is to see some scaled down versions of GF110, and what sort of performance they provide and power they consume. for instance GTX470 specs on a GF110 core might just be win. perhaps win enough to jam two on a card...



Now, at the risk of being horribly wrong here, isn't the GF110 actually a _scaled up_ GF104 which had at least some of these improvements in it?


----------



## DonInKansas (Nov 9, 2010)

mdm-adph said:


> If it's not "real-world usage," then they should ignore it.  Nobody is going to see the Furmark results besides us enthusiasts anyway.



I find this point hilarious.  The GTX 580 is the exact definition of an enthusiast card; it's not like you're gonna see Dell slapping these bad boys into machines anytime soon.  

And as for the naming scheme, I fall back on what I've said previously.  I don't care if they call it NVidia  PewPew 485,800.626.  Those who are in the market for this will be able to find it.


----------



## wolf (Nov 9, 2010)

qubit said:


> Now, at the risk of being horribly wrong here, isn't the GF110 actually a _scaled up_ GF104 which had at least some of these improvements in it?



While it does take some improvements from GF104, it is far closer to GF100 than GF104; namely, the clusters of CUDA cores remain at 32, rather than GF104's 48. They basically took GF100 in February (before release, even), asked themselves "what's wrong with this GPU?", and started optimising it.

It will be interesting to see, however... now they have to fill the gap between the GTX 460 1 GB and the GTX 580 with either a scaled-down GF110, a GF104 variant, or a new GPU altogether. I still think a 384sp GF104 would do nicely in there.


----------



## LAN_deRf_HA (Nov 9, 2010)

Wait, so I can't use OCCT to stability test this card? It's the only program that really worked for fine-tuning my 470 overclock; Furmark would pass with very unstable clocks.

Feels like we're totally shafted by this 28 nm delay. So we get a holdover card using some of the architecture tricks they were planning for the 28 nm cards, then we get the other 15% when 28 nm finally arrives. Instead of getting that 30% all at once, we have it spread out over twice as much time. With the price and measly improvement I'm not sure I'd recommend this card to anybody, at least not current 5xxx/6xxx/4xx series owners.


----------



## wolf (Nov 9, 2010)

LAN_deRf_HA, I think you will be able to use OCCT; you just need to find a way to circumvent the power limiter, which a fair few reviewers have done already.


----------



## douglatins (Nov 9, 2010)

Are you guys forgetting you can get much better performance for less? Take a look at what 6870 CFX can do here.

This is BC2, which I believe to be the best game for benchmark comparison.



That's 50% better for 20 USD less.

AMD should release a 500 USD dual 6850/6870 now, a 6890 of sorts.


----------



## N3M3515 (Nov 9, 2010)

douglatins said:


> Are you guys forgetting you can get much better performance for less? Take a look at what 6870 CFX can do here.
> 
> http://img.techpowerup.org/101109/09-11-2010 16.22.19 Screenshot..png
> 
> That's 50% better for *80 USD* less



Actually that's the diff.


----------



## wolf (Nov 9, 2010)

douglatins said:


> Are you guys forgetting you can get much better performance for less? take a look ant what a 6870 CFX can do here
> 
> http://img.techpowerup.org/101109/09-11-2010 16.22.19 Screenshot..png
> 
> ...



As shown in one game... but you're comparing apples to oranges. 6870s or GTX 460s are cheaper and faster, yes, but with both you occupy two PCI-E slots and are already at your maximum number of cards.

With a GTX 580 you have one beastly GPU, and you can add at least two more down the track if you want to. Those are just a couple of reasons people would want a GTX 580 over two lesser cards.

There's always a value proposition, but there will be people with their hearts set on this card, and you won't be able to talk them out of it with those kinds of options.


----------



## erocker (Nov 9, 2010)

Two 6870s will run $500: $250 + $250 (shipping included). I can get a GTX 580 for $508 shipped. That's an $8 difference.


----------



## mtosev (Nov 9, 2010)

Hehe, my HD 5970 is still faster.


----------



## N3M3515 (Nov 9, 2010)

Don't think so.
Cheapest GTX 580:
U$568, shipping included

Two of the cheapest HD 6870s:
U$494, shipping included

That's U$74.26 to be exact.
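The arithmetic behind these figures is simple enough to sketch; a minimal Python snippet, using the quoted shipped prices as inputs (the extra $0.26 in the poster's exact figure comes from shipping cents not reproduced here):

```python
# Price gap between one GTX 580 and a pair of HD 6870s, using the
# shipped prices quoted above (rounded to whole dollars; the poster's
# exact $74.26 includes shipping cents not shown here).
gtx_580 = 568.00       # cheapest GTX 580, shipping included (USD)
hd_6870_pair = 494.00  # two cheapest HD 6870s, shipping included (USD)

difference = gtx_580 - hd_6870_pair
premium_pct = difference / hd_6870_pair * 100

print(f"GTX 580 premium: ${difference:.2f} ({premium_pct:.1f}%)")
```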


----------



## douglatins (Nov 9, 2010)

erocker said:


> Two 6870's will run $500 bucks. $250 + $250 (shipping included) I can get a GTX 580 for $508 shipped. That's an $8 dollar difference.



PowerColor AX6870 1GBD5-M2DH Radeon HD 6870 1GB 25...

30 USD difference


----------



## erocker (Nov 9, 2010)

N3M3515 said:


> Don't think so
> Cheapest GTX 580
> U$568 shipping included
> 
> ...



I'm not basing it off of Newegg's inflated prices.

http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=6942947&CatId=3669


----------



## douglatins (Nov 9, 2010)

Still, it could be more expensive and still be the better deal, given the 50% increase. And the point that you could add one later is moot, since no one wants 500 W or more of video cards. Say you wait six months to get a second 580; by then Maxwell might be out, a card that is 2x the performance of a 580 for 400 USD. I don't get adding more later; just replace everything, since stuff gets more efficient.

Erocker, you bought one, right? It looks like you are reinforcing your decision or something.


----------



## erocker (Nov 9, 2010)

Meh, either way I'm personally unimpressed with the current offerings from both sides. I have a little hope for the 6970, but I'll most likely just wait until 28 nm, even though it pains me to wait! I also agree that 6870s in CrossFire are a better deal with better performance.


----------



## Ross211 (Nov 9, 2010)

I'm really hoping for a price war in a week or so, when AMD's real deal drops.

What's up with the rumor that GPUs are going to increase in price 2-3 weeks after the HD 6850 and HD 6870 launch? Anyone heard this? Is it going to be like how the HD 5000 series sold way above MSRP at launch?


----------



## Tatty_One (Nov 9, 2010)

heky said:


> In Crysis it reached 86°C, and if overclocked with voltage tuning it started to throttle at 97°C. So much for being sooo much better.
> Oh and it still doesnt beat the 5970, but costs more.
> 
> Am really wondering what the new AMD 6950/70 will bring to the table.



Costs more? Not in the UK it doesn't; it is considerably cheaper. In fact, it's 5% slower than a 5970 but at least 15% cheaper.


----------



## Black Panther (Nov 9, 2010)

Great review!

Now for my subjective opinion: I am impressed. It matches the 5970 well, the MSRP is cheaper, at least in the EU, and we're speaking of a single-GPU card here. (Now, I'm no NVIDIA fanboi; I own a 5970, and one of the NVIDIA cards in my laptop died after only a bit more than two years of use. But I see nice potential in this card, and NVIDIA would pwn the 5970 if it released a dual-GPU version of the 580.)


----------



## pantherx12 (Nov 9, 2010)

johnnyfiive said:


> They better be, its two cards with 2 GB's of total memory. If I paid $700 for two 5870's and a $550 GTX 580 beat my min fps I'd be pretty upset.



The memory is irrelevant, since it's still the same data mirrored on each card's set of memory.

So it still counts as 1 GB, really.


----------



## Red_Machine (Nov 9, 2010)

I just pre-ordered the Palit version from eBuyer for a grand total of £384.  Hopefully I'll get one from the first batch allocated to me and not end up waiting ages for it!


----------



## qubit (Nov 9, 2010)

Red_Machine said:


> I just pre-ordered the Palit version from eBuyer for a grand total of £384.  Hopefully I'll get one from the first batch allocated to me and not end up waiting ages for it!



That's a good price. Enjoy.


----------



## newtekie1 (Nov 9, 2010)

douglatins said:


> Are you guys forgetting you can get much better performance for less? take a look ant what a 6870 CFX can do here
> 
> This is BC2, i believe to be the best game for benchmark comparison.
> 
> ...



There are a couple of problems with that position.

The main one, and the one that caused me to stop using dual-card solutions, is that when a new game comes out, CrossFire and SLI both have to be optimized for it before they really work. So while BC2 might be one of the better examples of a game that scales very well with SLI and CrossFire, other games do not yield that great scaling, especially when they are first released.

In fact, sometimes when a game is released, SLI or CrossFire doesn't work at all, so you are stuck with the performance of a single card; there have even been cases where a driver that enabled CrossFire or SLI took 2+ weeks to be released.

With a single card, you know right away that if you buy a game on launch day, you are going to get the kick-ass performance you paid for, not mid-range performance for high-end money.


----------



## Red_Machine (Nov 9, 2010)

qubit said:


> That's a good price. Enjoy.



I intend to!


----------



## LAN_deRf_HA (Nov 9, 2010)

wolf said:


> LAN_deRf_HA I think you will be able to use OCCT, you just need to find a way to circumvent the power limiter, which a fair few reviewers have done already.



Could you point out a method? All I've seen is W1zzard's momentary method and reviewers switching to unknown programs.


----------



## Frick (Nov 9, 2010)

Red_Machine said:


> I intend to!



No offense, but you should totally get a new CPU to match it.


----------



## Red_Machine (Nov 9, 2010)

I know. I'm considering getting a quad-core if the payout from my old boss is healthy enough, but I was advised to wait until the summer, when Bulldozer comes out.


----------



## TRIPTEX_CAN (Nov 9, 2010)

For anyone in the market for a FAST single-GPU card, the 580 looks pretty good, but arriving 12 months behind the 5000 series and a few weeks before the 6900 series, I don't think NVIDIA will see the sales they expect.

I find it strange to see enthusiast-priced GPUs released only to side-grade the previous generation.

A 580 X2 card would be pretty sweet (and actually feasible), but if a single GPU is $500, the dual-GPU models will be stupidly expensive.

Not to rain on this launch, but I think this might be a short-lived victory celebration for the green camp.


----------



## Frag_Maniac (Nov 9, 2010)

Great review, W1zzard. Can we expect an upcoming PCI-E scaling review of one of these, as was done for the 5870 and 480, to see by how much they exceed x8? In a perfect world I'd go with a 32-lane X58 board running x16/x16, but I'm not so savvy at CPU OCing and have been hit by unexpected auto expenses lately. I'm also thinking the 1156 platform will be dirt cheap once 1155 debuts, and the 875K would be much easier for me to OC too.


----------



## WhiteLotus (Nov 9, 2010)

I am really not impressed with the idle noise. When I want to watch a film, I don't want to hear anything other than the film.

But I guess I prefer silence and mild performance over loud and lots of performance.


----------



## DaedalusHelios (Nov 9, 2010)

Both companies are doing this small upgrade to their current line-up; AMD and NVIDIA both chose to do this. AMD's 6-series is not, so far, a huge performance upgrade either, and will not be. NVIDIA fixed what AMD/ATi fanboys were complaining about on the 480 and called the better version the GTX 580. But of course some people always complain. The 580 is what I predicted it to be: a better card than the 480. Now I will be waiting to see if AMD can produce a game-changing price war, or if they will just bring more of the same.



WhiteLotus said:


> I am really not impressed with the idle noise. I mean when I want to watch a film I don't want to even hear anything else other than the film.
> 
> But I guess I prefer silence and mild performance over loud and lots of performance.



Let's hope you wouldn't use a GTX 580 as an HTPC card. Even if you did for some reason, it has fan-control support, and HD video would not load the card enough to need more than 20% fan. Remember to switch it back when gaming, though.


----------



## Tatty_One (Nov 9, 2010)

WhiteLotus said:


> I am really not impressed with the idle noise. I mean when I want to watch a film I don't want to even hear anything else other than the film.
> 
> But I guess I prefer silence and mild performance over loud and lots of performance.



To be fair, lots of cards with reference coolers, from either side, can be noisy under load, and often at idle speeds as well if the fan profiles are poor. That's why I go for non-reference cards with better cooling designs. Damn, my old XFX 4870 X2, even at 60% fan speed, was like a tornado engine. It did scare the cats off trying to shit on my lawn outside my study window, though.


----------



## newtekie1 (Nov 9, 2010)

TRIPTEX_CAN said:


> For anyone in the market for a FAST single GPU card the 580 looks pretty good but being 12 months behind the 5000 series and a few weeks before the 6900 series I dont think Nvidia will see the sales they expect.
> 
> I find it strange to see enthusiast priced GPUs release only to side-grade the previous generation.
> 
> ...



Look at history: this price point will not last long if ATi manages to get a card out that matches it. But while this card is on the market challenged only by the HD 5970, it will be priced in the same ballpark.



WhiteLotus said:


> I am really not impressed with the idle noise. I mean when I want to watch a film I don't want to even hear anything else other than the film.
> 
> But I guess I prefer silence and mild performance over loud and lots of performance.




Fan noise is usually a moot point in a review for me, because I almost never use the stock fan profiles. With this card idling at only 45°C, there is no reason the fan speed couldn't be turned down to make it quieter at idle, probably damn near silent.


----------



## N3M3515 (Nov 9, 2010)

All that is needed for the GTX580 to go down in price is that 6970 performs equal to it for 400 USD.


----------



## the54thvoid (Nov 9, 2010)

Black Panther said:


> Great review!
> 
> Now for my subjective opinion, I am impressed. It matches the 5970 well, msrp is cheaper at least in EU, and we're speaking of a single gpu card here. (Now I'm no nvidia fanboi - I own a 5970 and one of the nvidia cards in my laptop died after only a bit more than 2 years of use but I see nice potential in this card - *nvidia would pwn the 5970* if it releases a dual gpu version of the 580)



Yes, it would. But that's a moot point, as the 6970 in CrossFire would also annihilate the 5970. The 5970 is a year old; it's now 'old tech'. We shouldn't be comparing the 580 to a 5970. I know the 580 was meant to be 'last year' too, but it wasn't.

The real comparison is the 580 against the 6970.

But for the record, I am trying very hard not to buy a GTX 580 now, because I want to hold out for either:
A) custom cooling solutions like this (http://www.techpowerup.com/134204/Sparkle-Announces-Calibre-X580-Graphics-Card.html), or
B) the HD 6970.


----------



## arnoo1 (Nov 9, 2010)

Red_Machine said:


> I intend to!



Lol, you will see no performance difference with that slow base system of yours. You have a major CPU bottleneck and a memory bottleneck; it's too fast even for my system.

No offense.


----------



## the54thvoid (Nov 9, 2010)

What is interesting is that a few sites are reporting the same as, or higher, power draw than a GTX 480.

I'm thinking the throttling is throwing a smokescreen over the true nature of the card. I want to be wrong, because I will be buying a single-GPU card to replace a CrossFire setup, and it's either the 580 or the 6970...


----------



## Red_Machine (Nov 9, 2010)

arnoo1 said:


> lol, you will see no performans difference whit that slow basis system of yours lol, you have a major cpu botlleneck and memmory botlleneck, even for my system it's to fast.
> 
> no offense



I'm upgrading everything except the CPU right now, so I'll be getting new DDR3 to go with it.  Depending on the payout I get from my old boss, I may get a quad-core now rather than waiting for next summer.


----------



## LAN_deRf_HA (Nov 9, 2010)

Can something be done to the card to physically disable the throttling?


----------



## mdm-adph (Nov 9, 2010)

the54thvoid said:


> What is interesting is that a few sites are reporting 'same as' or higher power draw than a GTX 480.
> 
> I'm thinking the throttling is throwing a smokescreen over the true nature of the card.  I want to be wrong because I will be buying a single GPU new card to replace a x-fire set up and it's the 580 or the 6970....



My dear gentleman, the whimsy and satire found within the bounds of your system specs is most refreshing -- verily, it is a most droll repast upon which I am now savoring, for the level of palaver of this nature found in the environment in which we now converse can be somewhat sparse from time to time.


----------



## ShogoXT (Nov 9, 2010)

This really should have been called the GTX 490; I dislike naming schemes like this when it's not a new design. Also, it consumes less power than the 480? I wonder how they managed that.

And what's with the note at the bottom of the first page? Did someone else release your review, or leak it? Or was it posted too early? I must have missed it.


----------



## springs113 (Nov 9, 2010)

Decent card, but my main gripe is with those who say it beats the 5970; technically it is not up against the true dual card of the 5800 series. I also don't think this card is as good as people are making it out to be, because the main thing I gathered from the benchmarks is that as you increase the resolution, the margin of victory over the 5870 shrinks pretty noticeably.


----------



## mtosev (Nov 9, 2010)

If the HD 6950/6970 series is faster than the GTX 580, then its price will fall. The best thing for now is to wait and see what happens when ATi releases its high-end cards.


----------



## newtekie1 (Nov 9, 2010)

ShogoXT said:


> This really should have been called GTX 490. I dislike naming schemes like this when its not a new design. Also it consumes less power than 480? I wonder how they managed that.
> 
> Also whats with the note at the bottom of the first page? Did someone else release your review or leak it? Or was it posted too early? I must have missed it.



I believe what happened is that W1z uploaded the reviews to the webserver early so that they would be ready exactly when the NDA lifted.  And some sneaky people figured out the URLs and posted them, effectively leaking the reviews before the NDA was lifted by nVidia.


----------



## ShogoXT (Nov 9, 2010)

newtekie1 said:


> I believe what happened is that W1z uploaded the reviews to the webserver early so that they would be ready exactly when the NDA lifted.  And some sneaky people figured out the URLs and posted them, effectively leaking the reviews before the NDA was lifted by nVidia.



Yeah, I just found it in the comments section of the forum. Google and guessing URLs did it.


----------



## qubit (Nov 9, 2010)

johnnyfiive said:


> Exactly qubit, its a huge factor that people seem to overlook.
> 
> Also, take this into consideration. The GTX 580 is about 18-20% faster than a 5870. That means for the 6970 to even compete with the GTX 580, it has to be at least 20% faster than the 5870. For it to be superior to the GTX 580, it needs to be 30-32% faster than the 5870.. do you guys think that's possible... 30-32% faster than the 5870? I sure don't, but I hope so, lol.



Oh, a 30% speedup is certainly possible, even 40% or 50%. The problem is doing it within the constraints of cost and power envelopes and the corporate policies at AMD. Thing is, for the last few years, AMD hasn't really gone for the fastest single chip card crown, instead preferring to make a "high value" card at a lower price point.

What's not helping matters are all these console ported games not requiring the most powerful systems to get the best out of them. For example, my system (see specs) is hardly new, yet runs all my games very well at decent resolutions and image quality settings. So what incentive do I have to upgrade it? Of course, because I'm an enthusiast like others here, I don't actually _need_ a reason  but you can see the problem on a wider scale.

I think this is a shame, because head-to-head competition is reduced, and the technology then ends up not being pushed as hard as possible. If it were, we would end up getting faster products with more features at any particular price point, and the top-end products would also be better.

However, it does look like the new AMD high end out soon might compete head to head, so here's crossing fingers!
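The percentages in the quoted post compound multiplicatively, not additively; a quick sketch of that arithmetic (the 20% figure is the quoted estimate, and the 10% lead over the GTX 580 is a hypothetical):

```python
# Relative GPU performance compounds multiplicatively.
gtx580_vs_5870 = 1.20    # quoted estimate: GTX 580 ~20% faster than HD 5870
target_vs_gtx580 = 1.10  # hypothetical: a 6970 beating the GTX 580 by 10%

# How much faster than the HD 5870 the 6970 would then need to be:
required = gtx580_vs_5870 * target_vs_gtx580
print(f"6970 would need to be {(required - 1) * 100:.0f}% faster than the HD 5870")
# prints: 6970 would need to be 32% faster than the HD 5870
```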


----------



## springs113 (Nov 9, 2010)

I do think this year will have a nice battle. I also think AMD is using its head this time around, because they are really tight-lipped about the specs of the 6900 series, which can still be tweaked to offset the 580's advantage.


----------



## r9 (Nov 9, 2010)

This is amazing: NVIDIA has chosen to cut the core by 10% in transistors and raise the price by 10%. That is great thinking. If AMD can do this, why not NV? In recent times it's like ATi and NV are competing over who will screw the customer more.


----------



## Fourstaff (Nov 9, 2010)

r9 said:


> This is amazing NVIDIA has chosen to cut the core for 10% in transistors and raise the price for 10%. That is great thinking. If AMD can do this why not NV. In the resent times like ATI and NV are competing in who will screw the customer more.



What's wrong with a re-engineer? Cutting down transistors does not mean they are cutting out anything, especially if they are optimising the transistors. After all, you would happily pay just as much (or slightly less) for a 300M-transistor Athlon II X3 rather than the 1.05 billion transistors of the 5750.


----------



## the54thvoid (Nov 9, 2010)

mdm-adph said:


> My dear gentleman, the whimsy and satire found within the bounds of your system specs is most refreshing -- verily, it is a most droll repast upon which I am now savoring, for the level of palaver of this nature found in the environment in which we now converse can be somewhat sparse from time to time.



Indeed. One can only assume that your meanderings into the very nature of my Babbage reasoning engine were to expedite favor to one commercial interest, in the extent of verifying verily the vexed nature of inseparable qualities. Insofar as bias is contemplated, my nature is spent on neither hue, red nor green. It has been my most excellent pleasure conversing, old chum.


----------



## Red_Machine (Nov 10, 2010)

Indeed gentlemen.  Old English is so quaint, eh what?


----------



## Arska (Nov 10, 2010)

So the review was supposedly uploaded early and someone guessed the URL... so why wasn't it uploaded to a temp folder whose name was impossible to guess? And how come the site went completely offline for a few hours after the review had been... well, leaked?


----------



## qubit (Nov 10, 2010)

Arska said:


> So the review was supposedly uploaded early and someone guessed the url... so why wasn't it uploaded into a temp folder that's name was impossible to guess? And in addition, how come did the site go completely offline for a few hours after the leak had been... well, leaked?



Read TPU Downtime, it will answer your question.


----------



## blu3flannel (Nov 10, 2010)

Why do I even look at Tom's Hardware's reviews? Excellent review W1zz.


----------



## Delta6326 (Nov 10, 2010)

Just saying, based purely on $$$ it would be cheaper, and get you more fps, to get two 6870s. But still, this is a beast of a single GPU.


----------



## Benetanegia (Nov 10, 2010)

All I want to know is when the GTX 570 will be released. That's going to be the interesting card, IMO. I have no intention of buying either, but I'm really curious that there aren't even rumors about that card being released. I think yields are probably good, surely much better than GF100's, but good enough that a harvested part is not necessary?

Just joking now, but maybe NVIDIA has developed some really impressive skills at getting yields over these months with GF100. I mean, they have been able to ship hundreds of thousands of "broken and unfixable" chips with "single digit yields" and "a single order of 9000 risk wafers" (using the replicator from Star Trek, maybe?), so a chip with no problems must be like a walk in the park.


----------



## Akrian (Nov 10, 2010)

> 1: Not doin a decent job? the card is just as quiet as a gtx260 (which is Very quiet), the gtx480 is twice as loud. And it still keeps the temps lower then the 480, even tho its quieter, faster, and consumes less power...
> 
> 2: Power consumtion is higher ,lol? it consumes less power then the 480, and its rougly 30% faster. Where did you read that the 580 consumes more power then the 480?
> 
> ...



Emmm, yes, it DOES consume more power in SLI (702 W vs 719 W).

And WHERE did you see any other significant improvement in games aside from Metro 2033? In 3DMark Vantage? That's a benchmark, not a game. In Anno? I wrote that I don't care much about the incredible difference between 205 and 210 fps: I will not see it, nor feel it, nor will it impact gaming. That's why I wrote that in Metro 2033 the 13 fps difference WILL impact the gaming experience, because when playing in DX11 with 3D Vision you will need those extra fps, otherwise you will get less than 24 fps in some episodes of the game.
And yes, it is more silent; I didn't pay attention to that, just the temps. You got me there. =)

And where did you find those 30%, aside from 3DMark Vantage? 205 fps against 210-215 in most games they show is 30%? Riiiight.

I hope for only 2 things:
1. I want NVIDIA back on track making sexy cards again, so that AMD won't slack off.
2. I want to see games that actually require that horsepower and show stunning visuals, not just console ports with crappy textures and bad optimisation as an excuse to force us to buy new hardware.


----------



## MxPhenom 216 (Nov 10, 2010)

Thanks for posting the review. I honestly think NVIDIA has a winner here: it trades blows with the HD 5970, runs very cool (http://www.youtube.com/watch?v=GAbaOrnV2Ag), is far more efficient, and still lives up to Fermi's overclocking and scaling.

NVIDIA should be happy with this release; so far I've read about a lot of people on other forums buying two or three of them.

I'm glad that NVIDIA got a SINGLE-GPU card to compete with a dual-GPU card, and it wins instantly in tessellation-heavy games and benchmarks.

Way to go, NVIDIA. I'd get one if I had the cash. GTX 470 it is for now!

EDIT: I would like to see what it gets in Vantage, because the leaked benchmarks of the 6970 said it got P2400. If the GTX 580 can beat that, that would be even more amazing.


----------



## N3M3515 (Nov 10, 2010)

nvidiaintelftw said:


> thanks for posting the review. I honestly think nvidia has a winner here. trades blows with the HD5970, runs very cool (http://www.youtube.com/watch?v=GAbaOrnV2Ag), far more efficient, and still lives up to the fermi overclocking and scaling
> 
> 
> Nvidia should be happy with this release. so far ive read a lot of people on other forums buying like 2 or 3 of them and what not.
> ...



OMG, a fanboy.
If you saw the benches you would know the HD 5970 consumes less power and performs faster. Here.

A single GPU to compete with a dual GPU, yeah right, after a year, lol... Let's see how it does against the 6990.

So you would be happy if the GTX 580 beats the 6970, so prices stay the same and don't go down? OMG...

I would be happy if the 6970 performs equal to or faster than the GTX 580 and costs less, so WE the customers WIN.
I don't care which is best; I do want competition so the prices go down and WE will be the winners.


----------



## alexsubri (Nov 10, 2010)

I was hoping that the 580 would be faster than the 5970, oh well.


----------



## MxPhenom 216 (Nov 10, 2010)

N3M3515 said:


> OMG a fanboy
> If you saw the benchs you would know HD5970 consumes less power and performs faster. here
> 
> Single gpu to compete  with a dual gpu, yeah right, after a year lol..............let see how it does against 6990.
> ...



Uhhh, actually, from what I've seen the 5970 and the 580 trade blows and are basically equal. And when I said it's far more efficient, I didn't mean more efficient than the 5970; I meant Fermi in itself is quite a bit more efficient.


----------



## MxPhenom 216 (Nov 10, 2010)

alexsubri said:


> I was hoping that the 580 would be faster than the 5970, oh well.



From looking at the AnandTech and Hardware Canucks reviews, they are equal.


----------



## theonedub (Nov 10, 2010)

Benetanegia said:


> All I want to know is when will the GTX570 be released? That's going to be the interesting card IMO. I have no intention of buying either, but I'm really curious about the fact that there's no even rumors about that card being released. I think that yields are probably good and surely much better than GF100, but to the point that a harvested part is not necessary?
> 
> Just joking now, but maybe Nvidia has raised some really impressive skills on getting yields after these months with GF100, I mean they have been able to ship hundreds of thousands of "broken and unfixable" chips with "single digit yields" and "a single order of 9000 risk waffers" (using the replicator from Star Trek maybe?) so a chip with no problems must a be like a walk in the park.



The Anand review mentioned that the rest of the 5-series line-up should be detailed soon.

I want to see how, or if, the pricing of the rest of the 4-series cards is affected. Have you guys seen how many 480s are for sale now?


----------



## TheMailMan78 (Nov 10, 2010)

Good-looking card, but I thought it would be better. I have a feeling people will be selling these faster than they bought them once the 6970 comes out. Things are looking good for us consumers!


----------



## MxPhenom 216 (Nov 10, 2010)

TheMailMan78 said:


> Good lookin card. But I thought it would be better. People will be selling these faster then they bought them when the 6970 comes out I have a feeling. Things are looking good for us consumers!



You can only do so much with the 40 nm node and a refined GF100 GPU.


----------



## reverze (Nov 10, 2010)

http://www.newegg.com/Product/Product.aspx?Item=N82E16814125319

It already went down in price..


----------



## Frizz (Nov 10, 2010)

I expected it to beat, or at least clash head-on with, the 5970 at 1080p. To me, 1080p is the most important resolution for judging performance. Looks like I'll have to wait for something better to come out. Nonetheless, this would be one of the best options for a new system ATM.


----------



## jasper1605 (Nov 10, 2010)

reverze said:


> http://www.newegg.com/Product/Product.aspx?Item=N82E16814125319
> 
> It already went down in price..



Great, so now with their 10% off it will actually be at MSRP. Silly Newegg.


----------



## TheMailMan78 (Nov 10, 2010)

nvidiaintelftw said:


> you can only do so much with the 40nm node and a refined gf100 gpu



Don't care. I wanted better results.


----------



## jasper1605 (Nov 10, 2010)

TheMailMan78 said:


> Don't care. I wanted better results.



Agreed. I couldn't care less if it was on a 0.06 nm process or a 6-meter process. If it doesn't perform, then I don't want it; I'm not going to buy it out of pity because it's stressing out 40 nm.


----------



## CDdude55 (Nov 10, 2010)

TheMailMan78 said:


> Don't care. I wanted better results.



I agree.

But that's only because I didn't realize until a while ago that it was going to be only a 480 refresh; I was expecting much more performance out of it, considering the name change. It's definitely not a bad card, though: it fixed most, if not all, of the 480's issues and gives us a decent increase in performance. But that's it, decent.

Considering the 6900s are a new architecture, I expect them to beat the 580 by a good amount if done right.


----------



## N3M3515 (Nov 10, 2010)

Please, God, I beg you, let AMD put 1920 shaders on the HD 6970, please.
Then it would be able to surpass the GTX 580 and make it cheaper, and the rest of the cards too.

EDIT: let it be only 5% faster than the GTX 580, so AMD doesn't inflate the MSRP.


----------



## wahdangun (Nov 10, 2010)

N3M3515 said:


> Please god i beg you, let amd put 1920 shaders on HD 6970 pleasee
> Then it would be able to surpass GTX 580 and make it cheaper, and the rest of the cards too
> 
> EDIT: let it be only 5% faster than gtx 580 so amd doesn't inflate the MSRP.



Please, God, don't listen to him; let the HD 6970 dominate (more than 20% faster than the GTX 580), so we will have much cheaper cards and force NV to build a dual-GPU card.


----------



## WarEagleAU (Nov 10, 2010)

If I had the cash and could afford it, I definitely would pick this up and get rid of my 4870. Damn impressed, NVIDIA, from an AMD lover.


----------



## MxPhenom 216 (Nov 10, 2010)

wahdangun said:


> wow, its just fully enabled GTX 580,
> 
> 
> btw where is bastenegia ?? where is 128 TMUS he claimed ?



128 TMUs take up a lot of die space; I don't think NVIDIA could fit them in.

I think the GTX 580 is great.

According to AnandTech and Hardware Canucks, the GTX 580 = HD 5970.


----------



## TheMailMan78 (Nov 10, 2010)

nvidiaintelftw said:


> 128TMU's take up a lot of die space. i dont think nvidia could fit in.
> 
> I think the GTX580 is great.
> 
> According to anandtech and hardware canucks the GTX580=hd5970



A year later, and by the skin of its teeth.


----------



## bear jesus (Nov 10, 2010)

To be honest, this really is the card NVIDIA wanted to release last year, and if they had managed that it would have been very impressive. But for me, the one thing that stops this being an option is the inability to run three monitors on a single card; it has the power to drive at least 5040x1050, if not higher.

I really wish NVIDIA had added that ability, as I would have loved to start folding again. I gave up because the 4870 put out hardly any PPD and just produced a load of heat. I know any high-end card would, but I think the heat would be easier to accept if the card were doing a fair amount of work for it.


----------



## Frizz (Nov 10, 2010)

Hmmm, prices here in AUS are still pretty bad despite the jump in our dollar... I'll give it a few more days before we Aussies see any real price drops. The 580 should do a mighty fine job of lowering AMD's prices even further. $700+ AUD for a single-GPU card is a tad too much for my liking, or anyone's, I'd imagine.


----------



## wahdangun (Nov 10, 2010)

nvidiaintelftw said:


> 128TMU's take up a lot of die space. i dont think nvidia could fit in.
> 
> I think the GTX580 is great.
> 
> According to anandtech and hardware canucks the GTX580=hd5970



But the problem is bastenegia always claimed it would have 128 TMUs and would surpass the HD 5970.

Wew, maybe that's why he didn't show up here.


----------



## the54thvoid (Nov 10, 2010)

nvidiaintelftw said:


> from looking at anandtech review and hardware canucks they are =



Well,

http://www.hardwarecanucks.com/foru...s/37789-nvidia-geforce-gtx-580-review-19.html

The above link is 8xMSAA with full eye candy at 2560 res. That's full-on high spec, and the 5970 wins in BFBC2, Dirt 2 DX11, and Just Cause 2 DX10.

The 580 doesn't beat the 5970. People may cherry-pick, but it's a very close fight. The 580 is quieter by all accounts, but it consumes wildly varying amounts of power across the 8-9 reviews I've read (much less, much more, the same), which I find disturbing. It's also a lot cheaper than a 5970, so I wouldn't compare them.

I think on the face of it the GTX 580 is a damn good card. But will it stay the best single GPU this year? Who knows.

And I'm not sure if it's intentional, but it's Benetanegia, NOT bastenegia. And it's impolite to speak of him when he's not here.


----------



## N3M3515 (Nov 10, 2010)

the54thvoid said:


> Well,
> 
> http://www.hardwarecanucks.com/foru...s/37789-nvidia-geforce-gtx-580-review-19.html
> 
> ...



HD 5970
GTX 580

Which one is cheaper?


----------



## DaedalusHelios (Nov 10, 2010)

TheMailMan78 said:


> Don't care. I wanted better results.



I wanted lower prices. The results look fine IMO. The MSRP is too high. Everything has a price.


----------



## the54thvoid (Nov 10, 2010)

Thanks for pointing that out. Prices differ here in the UK.

But more importantly, i found this gem while surfing.

Legit Reviews were asked by a forum member to downclock the 580 to 480 speeds. They used Metro 2033 (tessellation-heavy) and the graph speaks for itself: _a 1.6% improvement at the same clocks as a 480_. It's almost identical to the notion of the 6870 being a 5770 with much faster clocks.

http://www.legitreviews.com/article/1461/8/

And legit's reviews favour Nvidia so it's not a BS anti NV test.

Compare the Metro 2033 benchmarks from Legit and TPU: http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/15.html

Legit has the 580 at 75% better than the 5970; TPU has a 14% difference. It just shows how you need to surf around for balance.
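Those two headline numbers aren't necessarily contradictory; the reported gap depends entirely on which pair of FPS results you divide. A quick sketch with hypothetical FPS values (not the reviews' actual numbers) shows the arithmetic:

```python
def pct_faster(a_fps: float, b_fps: float) -> float:
    """Percent by which card A outperforms card B."""
    return (a_fps / b_fps - 1) * 100

# Hypothetical results for the same game at two different test settings.
# A tessellation-bound setting can produce a Legit-style gap, while a
# less shader-bound setting produces a TPU-style gap.
print(pct_faster(35.0, 20.0))  # -> 75.0
print(pct_faster(45.6, 40.0))  # -> ~14.0
```

The point stands either way: one game at one setting can swing the headline percentage enormously, which is why reading several reviews matters.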


----------



## N3M3515 (Nov 10, 2010)

the54thvoid said:


> Thanks for pointing that out.  Diff here in UK.
> 
> But more importantly, i found this gem while surfing.
> 
> ...



That's very interesting to see: the clocks were the important factor in the performance improvement; the extra CUDA cores added only 3.5% :shadedshu


----------



## Bjorn_Of_Iceland (Nov 10, 2010)

the54thvoid said:


> Thanks for pointing that out.  Diff here in UK.
> 
> But more importantly, i found this gem while surfing.
> 
> ...



http://www.legitreviews.com/images/reviews/1461/metro_3.jpg
I was wondering the same thing as well: OC the GTX 480 to 580 clocks and see if there's any difference. This is just one game; I wish some reviewer would do this (then again, I guess nVidia would flame the guy  )

772 is easily attainable with the 480, 850 even. And if anyone says "drivers are still immature", I'd say bull... the 580 is pretty much the same as the 480, so any immaturity would have been ironed out during the 480's days.


----------



## N3M3515 (Nov 10, 2010)

the54thvoid said:


> And legit's reviews favour Nvidia so it's not a BS anti NV test.
> 
> Compare the Metro 2033 benchs from legit and TPU http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/15.html.
> 
> Legit has 580 at 75% better than 5970.  TPU has 14% diff.  Just shows how you need to surf around for balance.



wow, and I had the feeling that Legit Reviews was NV-favoured; this just confirms it.


----------



## N3M3515 (Nov 10, 2010)

Bjorn_Of_Iceland said:


> http://www.legitreviews.com/images/reviews/1461/metro_3.jpg
> 
> I was wondering the same thing as well.. OC the GTX 480 to 580 clocks, and see any difference.. this is just one game, I wish some reviewer would do this (then again, I guess nVidia would flame the guy  )



Just watch any review of any factory overclocked GTX 480 

EDIT: here


----------



## Bjorn_Of_Iceland (Nov 10, 2010)

N3M3515 said:


> Just watch any review of any factory overclocked GTX 480



They don't have a 580 comparison.


----------



## Wile E (Nov 10, 2010)

Bjorn_Of_Iceland said:


> http://www.legitreviews.com/images/reviews/1461/metro_3.jpg
> 
> I was wondering the same thing as well.. OC the GTX 480 to 580 clocks, and see any difference.. this is just one game, I wish some reviewer would do this (then again, I guess nVidia would flame the guy  )
> 
> 772 is easily attainable with the 480, 850 even. And if anyone says "drivers are still immature", I'd say bull...* the 580 is pretty much the same as the 480, so any immaturity would have been ironed out during the 480's days.*



The 1920x1200 Metro 2033 tests in TPU's SLI review suggest otherwise.

I think they do have a little left on the table. Probably not a huge amount, but some.


----------



## N3M3515 (Nov 10, 2010)

Guys, watch this; you can compare it to their 3DMark run of the GTX 580.


----------



## Red_Machine (Nov 10, 2010)

N3M3515 said:


> OMG a fanboy
> If you saw the benchs you would know HD5970 consumes less power and performs faster. here
> 
> Single gpu to compete  with a dual gpu, yeah right, after a year lol..............let see how it does against 6990.
> ...



Sounds like YOU'RE the one being a fanboy to me.


----------



## wahdangun (Nov 10, 2010)

the54thvoid said:


> Well,
> 
> http://www.hardwarecanucks.com/foru...s/37789-nvidia-geforce-gtx-580-review-19.html
> 
> ...



Oops, sorry, I always forget his exact name, and btw I said that just to tease him.

btw I can't wait for the HD 6970 vs the GTX 580


----------



## Borque (Nov 10, 2010)

Is the EVGA Superclocked 580 worth the price difference over the standard EVGA 580?

Is the performance increase that high?

I managed to grab a preorder at a mistaken price, so I'm getting the standard card for about 50 euros less than it should cost.


----------



## pantherx12 (Nov 10, 2010)

The 6870 was due to be about the same power as this 580, so it should be interesting.

I think this is the real reason for the delay: so they can fiddle a bit more and get another 5-10% out via overclocking or something, thus beating the 580.

Interesting times!


----------



## AsRock (Nov 10, 2010)

CDdude55 said:


> It really should have been named ''GTX 485'' or something of the sort, as it's still based on the same architecture and process, just with more efficiency added in.



A delay tactic, maybe? Now they have a year to come up with something that is actually a new design. Even if it only buys a few extra months, it helps.

Nice review as always, W1z


----------



## douglatins (Nov 10, 2010)

THIS CARD IS 18% BETTER THAN THE 480. STOP USING NVIDIA'S FIGURES.


----------



## WhiteLotus (Nov 10, 2010)

douglatins said:


> http://images.hardwarecanucks.com/image//skymtl/GPU/GTX-580/GTX-580-95.jpg
> 
> THIS CARD IS 18% BETTER THAN THE 480 STOP USING NVIDIA FIGURES



I think what the posts before you were saying is that the only reason it's "better" than the GTX 480 is the higher clocks. Put the clocks down to the same as the GTX 480's and the difference becomes very minimal.

Now, for a new generation of cards, especially one using the same naming scheme as the previous generation, a new card should be able to outperform its predecessor by a fair margin at the same clock speeds... and in this case it would appear that is not so.


----------



## douglatins (Nov 10, 2010)

WhiteLotus said:


> I think that what the posts before you were saying that the only reason why it's "better" than the GTX480 is because of the higher clocks. Put the clocks down to the same as the GTX480 and the difference starts to become very minimal.
> 
> Now for a new generation of cards, especially in the same naming scheme as the previous generation, a new card should be able to out perform its predecessor by a fair margin using the same clock speeds... in this case it would appear that this is not the case.



Exactly.

Anyone getting a 580 soon? I want to compare stuff...

I might expect a ~6% increase in performance at the same clocks, since that's the difference in SPs.

As for power consumption, I'd only care about it in a dual- or triple-card setup, since maybe a 1000W PSU can handle dual 580s.
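That ~6% figure falls straight out of the shader counts, and the ~18% overall numbers floating around fall out of combining it with the clock bump. A rough sketch, assuming performance scales linearly with shader count and core clock (real games won't scale perfectly):

```python
# GTX 480: 480 SPs at a 702 MHz core clock; GTX 580: 512 SPs at 772 MHz.
sp_gain = 512 / 480 - 1                            # ~6.7% more shaders
clock_gain = 772 / 702 - 1                         # ~10.0% higher clock
combined = (1 + sp_gain) * (1 + clock_gain) - 1    # ~17.3% theoretical upper bound

print(f"SPs: {sp_gain:.1%}, clock: {clock_gain:.1%}, combined: {combined:.1%}")
```

Which is consistent with the downclocked tests showing only ~1.6% at equal clocks: nearly all of the real-world gain comes from the higher clocks rather than the extra shaders.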


----------



## TheMailMan78 (Nov 10, 2010)

I wanted 60 FPS in Metro at 1920x1080.


----------



## Red_Machine (Nov 10, 2010)

What CPU would you guys recommend I get to go with this card?


----------



## qubit (Nov 10, 2010)

Red_Machine said:


> What CPU would you guys recommend I get to go with this card?



1. What resolution do you intend to game at? If it's 1680x1050, you only need a midrange card.

2. The most powerful CPU you can afford, I'd say. What we need is a review of how game performance varies with CPU power, to help you decide.


----------



## Red_Machine (Nov 10, 2010)

Well, I've already ordered the card, but I am using 1680x1050. At some point I'll upgrade to something bigger.

Somebody recommended at least a 3.6GHz quad-core, but I haven't been able to find one on eBuyer or Scan.


----------



## TheMailMan78 (Nov 10, 2010)

Red_Machine said:


> Well, I've already ordered the card, but I am using 1680x1050.  At some point I'll upgrade it to something bigger.
> 
> Somebody recommended at least a 3.6GHz quad-core, but I haven't been able to find one on eBuyer or Scan.



i7 or 1055/1090T

Anything less would be stupid at this point.


----------



## douglatins (Nov 10, 2010)

Red_Machine said:


> Well, I've already ordered the card, but I am using 1680x1050.  At some point I'll upgrade it to something bigger.
> 
> Somebody recommended at least a 3.6GHz quad-core, but I haven't been able to find one on eBuyer or Scan.



A Phenom II X4 970 or so, plus an aftermarket cooler, like http://www.tweaktown.com/reviews/3543/scythe_ninja_3_scnj_3000_cpu_cooler/index.html


----------



## mdsx1950 (Nov 10, 2010)

Red_Machine said:


> What CPU would you guys reccommend I get to go with this card?



Get an i7 950, since it's not SOOOO expensive, or an i7 920. If you've got enough money to spend, go for the 980X.


----------



## Red_Machine (Nov 10, 2010)

I'm getting an AMD board. XD


----------



## mdm-adph (Nov 10, 2010)

Red_Machine said:


> Somebody reccommended at least a 3.6GHz quad core, but I haven't been able to find one on eBuyer or Scan.



Somebody isn't being realistic.  Are you even having problems playing games now?  If not, just upgrade your video card to something mid-range -- don't even bother with your CPU.  You have a 3GHz dual-core -- you're fine.


----------



## MxPhenom 216 (Nov 10, 2010)

Red_Machine said:


> I'm getting an AMD board. XD



Don't. An AMD board plus an nvidia card is a funny combination IMHO.

I'd get X58 so you have the ability to go SLI if you want to, and Core i7 still beats everything AMD has right now.


----------



## dir_d (Nov 10, 2010)

nvidiaintelftw said:


> Don't. An AMD board plus an nvidia card is a funny combination IMHO.
> 
> I'd get X58 so you have the ability to go SLI if you want to, and Core i7 still beats everything AMD has right now.



AMD board and NVIDIA card is fine


----------



## MxPhenom 216 (Nov 10, 2010)

I see the HD 5970 and GTX 580 as roughly equal in performance: maybe 3% worse, and sometimes better in certain areas.

Price-wise the card doesn't bother me; with nvidia we get way more features in the card than just the plain old microstuttering and driver problems from ati.

Also, in games that aren't optimized for CF and SLI, the GTX 580 wins hands down, and the same goes for minimum FPS. A single GPU like the 580 will have smoother gameplay at 40 FPS than a dual GPU like the 5970 at 60 FPS, because the single GPU doesn't have random FPS plummets. Minimum FPS is what really matters to me in the long run, and that's where the 580 and all the Fermi cards shine. I went from GTX 260 SLI, and I got a lot of FPS with those cards, but the choppiness from the FPS drops really ruined it for me. I moved to a 5870, which was better, but ati's drivers sucked for me; then I went to the 470 and it has been amazing!


----------



## Red_Machine (Nov 10, 2010)

mdm-adph said:


> If not, just upgrade your video card to something mid-range



But I already ordered my 580 XD.


----------



## MxPhenom 216 (Nov 10, 2010)

Red_Machine said:


> But I already ordered my 580 XD.



You're fine, except you might not get all the performance out of the card that you should.


----------



## Red_Machine (Nov 10, 2010)

nvidiaintelftw said:


> Don't. An AMD board plus an nvidia card is a funny combination IMHO



Well, it has an nVidia chipset, if that matters.


----------



## CDdude55 (Nov 10, 2010)

Red_Machine said:


> But I already ordered my 580 XD.



lol

You're pretty much going to have to upgrade from that Athlon X2 then, most definitely. That 580 won't be anywhere near as fast if you're still using an ancient CPU. I'd go with a nice 1090T or an i7 as some have mentioned. If you're on a budget, just grab a 1090T or 1055T and overclock it.


----------



## Borque (Nov 10, 2010)

You have to overclock a 1090T to avoid a bottleneck as well? WTF?


----------



## CDdude55 (Nov 10, 2010)

Borque said:


> You have to overclock a 1090T to avoid a bottleneck as well? WTF?



It's just not as great an architecture as the i7's. The majority of Intel's quad-cores can beat AMD's hexa-cores; that doesn't mean AMD's six-cores are garbage, as they actually come close to an i7 most of the time, and when overclocked enough they do touch the i7s, for a bit of a cheaper price too. AMD has been behind Intel CPU-wise since the Phenom I days and hasn't fully caught up since. But Bulldozer will be out soon, and maybe that will do some damage.


----------



## HalfAHertz (Nov 10, 2010)

The GTX 580 looks like a great product, and I have to agree with the majority that this is what the 480 should have been. It looks like a very "future proof" card because of all the additional tweaks Nvidia made to the shaders (like FP16 calcs and improved Z-culling).

Though if I have to be blunt about something, it's not so much that the 580 makes Nvidia look good as that it shows how bad and underperforming the 480 was in comparison (at least against the original vision of Fermi). I think we will continue to argue about whether releasing the 480 back then was the right call.


----------



## TheMailMan78 (Nov 10, 2010)

mdm-adph said:


> Somebody isn't being realistic.  Are you even having problems playing games now?  If not, just upgrade your video card to something mid-range -- don't even bother with your CPU.  You have a 3GHz dual-core -- you're fine.



Dude, that's BS. A lot of games NEED a quad-core now to be smooth. BC2 comes to mind.


----------



## TRIPTEX_CAN (Nov 10, 2010)

More and more games, especially recent console ports, are heavily dependent on multithreading. The MailMan is right.


----------



## newtekie1 (Nov 10, 2010)

the54thvoid said:


> Legit Reviews were asked by a forum member to downclock the 580 to 480 speeds. They used Metro 2033 (tesselation heavy) and the graph speaks for itself. 1.6% improvement at same clocks as a 480. It's almost identical to the notion of the 6870 being a 5770 with much fatser clocks.



I think a lot of people are missing the point of the GTX 580. While it did take the crown as the top single GPU (though the GTX 480 already held it), the main point was not to provide a huge performance increase. Instead, similar to G80 to G92 and G70 to G71, its main purpose was to revise the silicon for better thermals and power consumption while still allowing a marginal performance increase.

Yes, when downclocked to GTX 480 speeds, the performance difference is marginal. However, when clocked beyond GTX 480 speeds, with all 512 SPs enabled, the GTX 580 uses less power and puts out less heat.


----------



## Frick (Nov 10, 2010)

Also, as someone said already, to have something to shove against the high-end 69xx.


----------



## Bjorn_Of_Iceland (Nov 10, 2010)

mdm-adph said:


> Somebody isn't being realistic.  Are you even having problems playing games now?  If not, just upgrade your video card to something mid-range -- don't even bother with your CPU.  You have a 3GHz dual-core -- you're fine.


Prepare to be bombarded with replies from multi-core users!


----------



## KainXS (Nov 10, 2010)

So what's going to happen between you and nvidia since you broke the NDA, W1zzard? Do you think it will hurt your chances of getting more review cards early? Because that would really blow.


----------



## Red_Machine (Nov 10, 2010)

He didn't break the NDA.  You could almost say the site was hacked.


----------



## the54thvoid (Nov 10, 2010)

nvidiaintelftw said:


> ....the plain old microstuttering and driver problems from ati...



Change the record, dude, it's getting worn out. _I have no issues_ with my crossfire cards driver-wise. I just want a single GPU in my case; two huggy cards get hot and noisy on occasion.



newtekie1 said:


> I think a lot of people are missing the point of the GTX 580. While it did take the crown as the top single GPU (though the GTX 480 already held it), *the main point was not to provide a huge performance increase*. Instead, similar to G80 to G92 and G70 to G71, its main purpose was to revise the silicon for better thermals and power consumption while still allowing a marginal performance increase.
> 
> Yes, when downclocked to GTX 480 speeds, the performance difference is marginal. However, when clocked beyond GTX 480 speeds, with all 512 SPs enabled, *the GTX 580 uses less power* and puts out less heat.



First bold point: while true, the card was touted as X% faster and as the fastest DX11 card ever. That's a pretty strong indicator they're pitching it as a faster card whose sole purpose is to combat the HD 6970 (Tom Petersen from NV has said as much).

Second bold point: it uses less power... does it? I'm not going to cherry-pick reviews, as only ignorant folk do that. But whilst some reviewers show it consuming far less power (GTX 260 levels), others show it consuming a tad more. The problem is the throttle they have on it for power protection; it's artificially skewing power consumption. W1zz and a few others have worked around it, but some reviewers have used Furmark and come away with (i.e. GTX 260) results.

Bear in mind there are some 480s out there with reworked PCBs and better circuitry that consume less power than a standard 480. Given the things I'm reading and how good the rejigged AIC 480s are, I'm more and more unimpressed.


----------



## newtekie1 (Nov 10, 2010)

the54thvoid said:


> First Bold Point - while true it had been touted as X% faster and how it's the fastest DX 11 card ever.  Thats a pretty strong indicator they're touting it as a faster card whose sole purpose is actually to combat the HD 6970 (Tom Peterson from NV has said as much).



Well, of course that is the way they marketed it. What do you think is going to get them more sales: "We have the fastest, most badass card on the market" or "The power consumption and heat output are lower than our previous cards, but still not as good as the competition's"?



the54thvoid said:


> Second Bold Point - It uses less power.... Does it?  I'm not going to cherry pick reviews as only ignorant folk do that.  But whilst some reviewers show it consuming far less power (GTX 260 levels) others show it consuming a tad more.  The problem is the throttler they have on it for power protection.  It's artificially skewing power consumption.  W1zz and a few others have worked around it but some reviewers have used Furmark and come away with (i.e GTX 260) results.
> 
> Bear in mind there are some 480's out there with reworked PCB's with better circuitry that consume less power than a standard 480.  Given the things i'm reading more of and how well the rejigged AIC 480's are, I'm more and more unimpressed.



If you read the reviews, the good reviews, then yes, it does use less power. Furmark is the only application the throttle affects, and it is also the application I totally ignore when talking about power consumption, because it is not realistic. Any review that only puts out Furmark power-consumption numbers isn't a valid review in any way, and I tend not to even read them, because it is obvious the reviewer has no clue what they are doing.

In real-world use, as shown by W1z's review, you are looking at ~20-30W less. Even if you remove the limiter from the equation and use the Furmark numbers, it is still consuming 15W less under Furmark load, so you are wrong about the limiter artificially making it look like it consumes less power. It might make it seem like it consumes a lot less power than it really does, but it consumes less power nonetheless.


----------



## the54thvoid (Nov 10, 2010)

newtekie1 said:


> the good reviews



Cherries picked.

Nah, I'm only having a laugh. You're right. Guru3D had an _engineering sample_ which was pulling 30-40 watts more at load. 

However, a well-binned GF100 chip on a PCB redesigned by a board partner, at similar clocks, produces a GTX 480 that consumes fewer watts, makes less noise, and delivers 10% more performance than a standard GTX 480.

I think, with that evidence, what NV has done is simply refine what the partners were already doing with the 480s. And on that front (with clocks at 772 versus 702, 10% higher) the GF110 doesn't actually blow my socks off.

Don't get me wrong, I have a bad feeling the HD 6970 is going to be hot and noisy. I'm very impatient, though; 3-4 more weeks and I can make my purchasing decisions.

So far it is:

HD 6970 or
KFA GTX 480 Anarchy http://www.hexus.net/content/item.php?item=26757
Gigabyte GTX 480 SOC http://www.hexus.net/content/item.php?item=27253
MSI GTX 480 Lightning http://www.techpowerup.com/reviews/MSI/N480GTX_GTX_480_Lightning/
Sparkle GTX 580 Calibre http://www.techpowerup.com/134204/Sparkle-Announces-Calibre-X580-Graphics-Card.html

As you can see, there's a good chance I'll be jumping ship to green unless the HD 6970 is my dream card.


----------



## CDdude55 (Nov 10, 2010)

newtekie1 said:


> If you read the reviews, the good reviews, yes it does use less power.  Furmark is the only application that the throttle effects, and it is also the application that I totally ignore when talking about power consumption because it is not realistic.  Any review that only puts out Furmark power consumption numbers isn't a valid review in any way, and I tend to not even read them entirely because it is obvious the reviewer has no clue what they are doing.



Furmark is a useful tool for finding the max power consumption and temps of the GPU, so I think it's good to include it just for reference; other than that, I agree.


----------



## r9 (Nov 10, 2010)

Fourstaff said:


> What's wrong with a re-engineer? Cutting down transistors does not mean that they are cutting down on anything especially if they are optimising the transistors. After all, you would happily pay just as much (or slightly less) for a 300m Athlon II X3 rather than 1.05Billion transistors of the 5750.



There is nothing wrong with that; price/performance improvement is fine. But the product becomes cheaper to manufacture while the price goes up. I don't know about you guys, but somehow it feels like prices are set by agreement between ATI and NV. New cards are coming all the time, but prices of the old ones are stuck. Neither ATI nor NV is interested in price cutting, and they are very keen on cutting cores.


----------



## N3M3515 (Nov 10, 2010)

r9 said:


> There is nothing wrong with that; price/performance improvement is fine. But the product becomes cheaper to manufacture while the price goes up. I don't know about you guys, but somehow it feels like prices are set by agreement between ATI and NV. New cards are coming all the time, but prices of the old ones are stuck. Neither ATI nor NV is interested in price cutting, and they are very keen on cutting cores.



True, the HD 5850 was never sold under its MSRP, unlike the 4850, 4870, or 4890.


----------



## newtekie1 (Nov 10, 2010)

the54thvoid said:


> However, a well binned GF100 chip on a PCB redesigned by a board partner with similar clocks produces a GTX 480 that consumes less Watts and less noise and 10% more perf than a standard GTX 480.
> 
> I think, with that evidence, what NV has done is simply refine what the partners were already doing with the 480's.  And on that front (with clocks at 772 versus 702 - 10% higher) the GF 110 doesn't actually blow my socks off.



Actually, you made me think of something just now. If you look back at the review W1z did of the Zotac GTX 480 with the Zalman cooler on it, even using the same reference PCB design and components, the GTX 480 was capable of ~20-30W less power consumption just as a result of cooling the GPU core down!

That really makes me think the improvements we are seeing in the GF110 are not so much a result of the GPU being tweaked as of the cooler being tweaked to be more efficient...



CDdude55 said:


> Furmark is a useful tool to find the max power consumption and temps of the GPU, i think it's good to include that just for reference, other then that i agree.



I think Furmark has its place in reviews, and Furmark power-consumption numbers should be in a review as well; I just don't think they are that important. If I could put only one power-consumption number in a review, it would be real-world power consumption, not Furmark.


----------



## TheMailMan78 (Nov 10, 2010)

I think VIA is going to PWN all of them this time next year!


----------



## Fourstaff (Nov 10, 2010)

r9 said:


> There is nothing wrong with that; price/performance improvement is fine. But the product becomes cheaper to manufacture while the price goes up. I don't know about you guys, but somehow it feels like prices are set by agreement between ATI and NV. New cards are coming all the time, but prices of the old ones are stuck. Neither ATI nor NV is interested in price cutting, and they are very keen on cutting cores.



You are quite right, but given that AMD is still a money-losing business as a whole, I doubt they are going to cut prices anytime soon (the release of the 6870 forced prices down for a few days, though), and if Nvidia sticks to matching AMD's pricing, nobody is going to lower prices anytime soon. The retailers are not keen on cutting prices either.

The MSRP of the stock 5870 was $400 when it was released, and now you can find versions selling for $300 (Sapphire); a 25% price cut in a year and a bit seems quite reasonable to me.



N3M3515 said:


> True, HD5850 was never sold under its msrp, unlike 4850 or 4870 or 4890.



The MSRP of the 5850 was $299:

ASUS EAH5850 DIRECTCU/2DIS/1GD5 Radeon HD 5850 (Cy...


----------



## LAN_deRf_HA (Nov 10, 2010)

Fourstaff said:


> MSRP 5850 was $299,
> 
> ASUS EAH5850 DIRECTCU/2DIS/1GD5 Radeon HD 5850 (Cy...



I'd assume they mean the launch price, which was $260. It sold out in about an hour, and then AMD jacked the price up massively due to a lack of competition, lasting even after the 400-series launch. Why should these companies bother fighting a price war when they can both just jack things up equally? I seriously expect the 6970 to hit at $500 and the 580 to drop to $500, with both sitting there for a painfully long time.

For those asking about the clock-for-clock difference between the 480/580:


----------



## N3M3515 (Nov 10, 2010)

Fourstaff said:


> You are quite right, but given the fact that AMD is still a money losing business as a whole, I doubt that they are going to cut down the prices anytime soon (the release of the 6870 forced price down for a few days though) and if Nvidia sticks with challenging AMD's pricing, nobody is going to lower the prices anytime soon. And also the retailers are not keen on cutting prices either.
> 
> MSRP of stock 5870 was $400 when it was released, and now you can find versions which sell for $300 (sapphire), 25% price cut in a year and a bit seems quite reasonable to me.
> 
> ...



The MSRP was $260; it went to $299 after release.


----------



## Fourstaff (Nov 10, 2010)

LAN_deRf_HA said:


> I'd assume they mean the launch price, which was $260. It sold out in about an hour, and then AMD jacked the price up massively due to a lack of competition, lasting even after the 400-series launch. Why should these companies bother fighting a price war when they can both just jack things up equally? I seriously expect the 6970 to hit at $500 and the 580 to drop to $500, with both sitting there for a painfully long time.



Ah. R&D is not cheap, so it's to be expected. Anyway, even if they could lower the price, they wouldn't, just as Bentleys and Rolls-Royces do not come cheap BECAUSE they are top of the range. Premium stuff demands a premium price; any economist will tell you that. Mid-range cards now are perfectly capable of ripping most games to shreds, so why bother with the top end?



N3M3515 said:


> MSRP was 260 USD, it went to 299 after release.



The current MSRP is $260, but that was not the case at launch once you factor in the initial discounted price.


----------



## N3M3515 (Nov 10, 2010)

Anyway, you get my point: the 4870 started at $299 and reached EOL at $140. The problem was that nvidia did not bring any real competition, so we consumers were stuck with the same prices for a whole year.


----------



## TheMailMan78 (Nov 10, 2010)

LAN_deRf_HA said:


> I'd assume they mean launch price, which was $260. Then it sold out in about a hour and AMD jacked the price massively due to a lack of competition, lasting even after the 400 series launch. Why should these companies bother fighting a price war when they can both just jack things up equally. I seriously expect the 6970 to hit at $500 and the 580 to drop to $500, with both sitting there for a painfully long time.
> 
> For those asking about the clock for clock difference between the 480/580
> 
> http://images.anandtech.com/graphs/graph4012/33983.png



Those scores can't be legit. They suck.


----------



## Fourstaff (Nov 10, 2010)

N3M3515 said:


> Any way you get my point, 4870 started at 299 and reached eol at 140.
> problem was nvidia that did not bring any real competition so we consumers were stuck with the same prices for a whole year.



Well, the 5870 is still going strong and not at EOL yet, so dropping from $399 to $299 for the cheapest one is quite good seeing that it's only been a year. I expect it to drop to $250 or less when it reaches EOL, roughly similar (in percentage) to the drop of the 4870. 

The 4870 launched in June 2008 according to the wiki, and went EOL because the 5xxx series beat it badly, hence the lower EOL prices. 

I do get your point though. Right now AMD finds itself where it was during the Athlon vs Pentium 4 days, so they are trying to profit as much as they still can to reduce their debt.


----------



## N3M3515 (Nov 10, 2010)

Fourstaff said:


> Well, 5870 is still going strong and not at eol yet, so dropping from $399 to $299 for the cheapest one is quite good seeing that its only a year. I expect it to drop to $250 or less when it reaches EOL, roughly similar to the drop (in percentage) of 4870.
> 
> 4870 started at june 2008 according to wiki, and went EOL because the 5xxx raped it bad hence the lower EOL prices.
> 
> I do get your point though. Right now AMD is finding themselves at a point where there were during the Athlon vs Pentium 4 days, so they are trying to profit as much as they still can to reduce their debt.



I bought my 4870 at $150 in June 2009, and it was very strong at that time.


----------



## N3M3515 (Nov 10, 2010)

Fourstaff said:


> I do get your point though. Right now AMD is finding themselves at a point where there were during the Athlon vs Pentium 4 days, so they are trying to profit as much as they still can to reduce their debt.



That is so true, lol. They're trying to recover all the money they've lost.


----------



## Fourstaff (Nov 10, 2010)

N3M3515 said:


> I bought my 4870 at 150 on june 2009,



Perhaps I was wrong then. 5770 is still selling for $120 cheapest, so the price drop throughout this year and half of last year is almost negligible. But then there is DX11 and crap like that, and you might have jumped on a very good deal.


----------



## N3M3515 (Nov 10, 2010)

Fourstaff said:


> Perhaps I was wrong then. 5770 is still selling for $120 cheapest, so the price drop throughout this year and half of last year is almost negligible. But then there is DX11 and crap like that, and you might have jumped on a very good deal.



Good point there. DX11 is one more factor that let them keep prices where they were.


----------



## LAN_deRf_HA (Nov 10, 2010)

TheMailMan78 said:


> Those scores can't be legit. They suck.



Yeah. I'm sure anandtech fudges their numbers all the time. Or you know, you could infer that it's 2560x1600.


----------



## Red_Machine (Nov 10, 2010)

Take a look at my revised "soon to be" spec, guys.  Crazyeyesreaper just whooped my ass about the old one...


----------



## TheMailMan78 (Nov 10, 2010)

Red_Machine said:


> Take a look at my revised "soon to be" spec, guys.  Crazyeyesreaper just whooped my ass about the old one...



Tell Crazyeyes to go blow a goat.


----------



## Red_Machine (Nov 10, 2010)

He was saying pretty much what you guys were saying.  That and the case I was going to buy wouldn't fit the card. XD


----------



## Bluefox1115 (Nov 10, 2010)

Would it be a worthwhile upgrade from an EVGA GTX 285? Or should I just save the cash and grab a 6000 series AMD card? Resolution used would be 1080p.


----------



## TheMailMan78 (Nov 10, 2010)

Bluefox1115 said:


> would it be a worthwhile upgrade from an evga gtx285? or should I just save the cash and grab a 6000Series AMD card? Res. used would be 1080p.



No right answer to that as the 69xx are not out yet.


----------



## GotNoRice (Nov 10, 2010)

I just wanted to say thank you for including benchmarks for World of Warcraft.

Many review sites treat the game like a joke, but those of us who play WoW tend to put quite a bit of time into it.  When you have such an extreme time investment in one game, it only makes sense to base your hardware purchases around what works best in that particular game.

It's also nice to have up to date benchmarks showing that Crossfire does indeed work properly in World of Warcraft.  Those of us who have actually run Crossfire and know how to dodge minor driver issues have been having great experiences with Crossfire and WoW for years, but that doesn't stop idiots who don't have a clue from trying to claim that WoW doesn't support crossfire.


----------



## Benetanegia (Nov 11, 2010)

the54thvoid said:


> Second Bold Point - It uses less power.... Does it?  I'm not going to cherry pick reviews as only ignorant folk do that.  But whilst some reviewers show it consuming far less power (GTX 260 levels) others show it consuming a tad more.  The problem is the throttler they have on it for power protection.  It's artificially skewing power consumption.  W1zz and a few others have worked around it but some reviewers have used Furmark and come away with (i.e GTX 260) results.
> 
> *Bear in mind there are some 480's out there with reworked PCB's with better circuitry that consume less power than a standard 480.  Given the things i'm reading more of and how well the rejigged AIC 480's are, I'm more and more unimpressed.*



It looks like GTX580 is far more power efficient. Thanks to Anandtech all your doubts should be answered. 



> Our reference GTX 580 shipped with a load voltage of 1.037v, notably higher than the sub-1v load voltages of the GTX 480 and a solid example of how NVIDIA has been able to reduce leakage on their GPUs. By luck our Asus GTX 580 comes with a different voltage, 1.000v, giving us some idea of what the VID range is going to be for the GTX 580 and what a card with a “good” GPU might be like.


So it turns out that the reference GTX 580 ships with a higher voltage even though it probably doesn't need it. It's probably a measure to improve yields, but just like with the 480, newer cards will probably get better as the chip matures in future batches, and partners will start cherry-picking them and offering GTX 580 equivalents of the GTX 480 cards you are mentioning.


----------



## a_ump (Nov 11, 2010)

Personally I feel this says nothing for Nvidia. It looks to me like they've maxed out this architecture, unless building onto it at 28nm is possible. At 40nm, seeing as the GTX 580 is only 15% faster than the GTX 480, I can't see it beating the HD 6970 when it's released; you know that with the performance jump the HD 6870 saw over the 5770, the HD 6970 will likely have a similar jump over the HD 5870. I honestly don't see how this release is going to trouble AMD in any way, as its price/performance isn't great, and though the HD 5970 does have a lower price/perf ratio, the rest of AMD's offerings still seem to outdo the majority of Nvidia's, as will the HD 6970 when it's released. Mark my words! lol

EDIT: haha and it has worse SLI performance for now. we'll see with driver updates though.


----------



## CDdude55 (Nov 11, 2010)

a_ump said:


> Personally i feel this speaks nothing for nvidia. It looks to me that they've maxed out their architecture, unless they build onto it at 28nm is possible. But for 40nm, seeing as how the GTX 580 is only 15% faster than the GTX 480 i can't see it beating the HD 6970 when it's released, you know that with the performance jump the HD 6870 saw over the 5770 the HD 6970 will likely have a similar jump over the HD 5870. I honestly don't see how this release is going to trouble AMD in any way as its price performance isn't great, and though the HD 5970 does have a lower price/perf ratio, the rest of AMD's offerings seem to outdo the majority of Nvidia's still, as will the HD 6970 when it's released. Mark my words! lol
> 
> EDIT: haha and it has worse SLI performance for now. we'll see with driver updates though.



I really don't think Nvidia aimed the 580 at the 6900s; I think they just wanted to bring something out in time for the 6900 release so that it doesn't totally overshadow the holiday season. And as newtekie said, the 5 series name is really just marketing so that they don't look too far behind AMD in GPU manufacturing, as this could have easily been called the ''GTX 485'' as I've said before. I think AMD has a very good chance of beating Nvidia's GTX 580 because they're building from the ground up, while the 580 is still building and improving upon the same structure. Now of course that doesn't automatically mean the 6900s will be better, but it definitely gives them more room and a greater chance to create a better design and ultimately beat the competition.

Now personally I think the 580 is a great card; it gives us what most have been wanting from the 480, improved in basically every area the 480 screwed up in, and that's a great thing imo. The 580 is a very powerful card while keeping power and heat at a decent level. Sure, I don't think it'll beat the 6900s, but they still gave us an awesome card at the end of the day.


----------



## bear jesus (Nov 11, 2010)

CDdude55 said:


> it gives us what most have been wanting with the 480, it has been improved in basically every area the 480's screwed up in and that's a great thing imo. The 580 is a very powerful card while keeping power and heat at a decent level.



I agree. If this was the card that had come out at the same time as the 5xxx cards, like it should have, I would be a happy owner of a 480 right now... although I would not be gaming on 3 monitors, so I guess I'm kinda glad it took this long for Nvidia to get the core they intended to market 

Saying that, it kind of makes me think GF100 was delayed by around a year and only semi-working ones were released before now (yes, I know that's not really true, but it almost feels that way).

All I can hope for now is that Nvidia manages to add the ability to run 3 monitors on a single card for the 6xx cards.


----------



## N3M3515 (Nov 11, 2010)

CDdude55 said:


> I really don't think Nvidia aimed the 580 at the 6900's, i think that they just wanted to bring something out in time of the 6900 release so that it doesn't totally over shine the holiday season. And as newtekie said, the 5 series name is really just marketing so that they don't look like they are to far behind is GPU manufacturing against AMD, as this could of easily been called the ''GTX 485'' as ive said before. I think AMD has a very high chance in beating Nvidia's GTX 580 due to them building from the ground up, while the 580 is still building and improving upon the same structure. Now of course that doesn't automatically mean the 6900's will be better, but it definitely gives them more room and a greater chance to create a better design and ultimately beat the competition.



The way I see it, the 6970 could have the performance of crossfired 6850s. Keep in mind this is assuming the 6970 will have 1920 shaders; if not, it could be slightly slower than the 5970 and equal to the GTX 580. What I am pretty sure about is that AMD can put the price at $449, or $399 if needed, even less.

It would be great for the GTX 580 and HD 6970 to both be at $399.


----------



## Paintface (Nov 11, 2010)

http://www.newegg.com/Product/Product.aspx?Item=N82E16814125319

Looks like it's not dropping in price.


----------



## Super XP (Nov 11, 2010)

Hopefully this card helps drive prices down. I heard the HD 6970 is going to have 1536 stream processors, 32 ROPs, 96 texture units and 2GB of GDDR5 memory, but will be so much more efficient in design that it should easily surpass the HD 5970 by as much as 15% to 20% on average in gaming, and 80%+ in tessellation performance.


----------



## LAN_deRf_HA (Nov 11, 2010)

http://69.65.116.162/discussions.x/19954







> It's worth noting that the cards are positioned in this slide in order of performance. Apparently, only the very high end of the 6900 series will outdo the existing Radeon HD 5970. At the low end, Turks and Caicos will merely match, and in some cases fall short of, the performance found in today's 5700-series Radeons.



That blows. Not even sure it makes sense, unless the 6970 has less than 1920 shaders.


----------



## Ross211 (Nov 11, 2010)

I personally think Fermi in general is somewhat a repeat of when NVIDIA released the FX 5800 and the FX series.  I don't think I'm the only one who thinks this.


----------



## a_ump (Nov 11, 2010)

Nope. And personally, we've seen how the 1120 SPs of the HD 6870 are on the HD 5870's toes. So almost 1600 in the HD 6970, with the various other hardware improvements mentioned, should definitely topple a GTX 580 at a price tag either similar or lower. That's what I expect anyway.


----------



## MxPhenom 216 (Nov 11, 2010)

Super XP said:


> Hopefully this card helps drive prices down. I heard the HD 6970 is going to have 1536 stream processors, 32ROPs, 96 Texture units and 2GB of GDDR5 memory but will be a lot more efficient in design that it should easily surpass the HD 5970 by as much as 15% to 20% on average in gaming and 80%+ in Tessellation performance.



If it's anything like the 6850 or 6870, then tessellation performance will not change much at all.

And the HD 6970 is like an ATI GTX 480 according to Fudzilla: hot and hungry compared to ATI's norm from the HD 5000 series.


----------



## Googoo24 (Nov 11, 2010)

The 6970 is hot and hungry? That's the first I've heard of that. I thought it just demanded slightly more power than the typical AMD card.


----------



## stinzza (Nov 11, 2010)

Mr. W1zzard, why not comment on each game after its benchmark, not just the graphs? That would be something more to rely on. Some more work, sure, but hey, it's that important.


----------



## Frag_Maniac (Nov 11, 2010)

So what's the verdict on going 580 or waiting for the 28nm chip? What say you all? 

Also still waiting to hear from Mr. W1zzard whether there's a PCI-E scaling article planned for the 580? I'd like to know by how much it exceeds 8x bandwidth.


----------



## erocker (Nov 11, 2010)

Frag Maniac said:


> So what's the verdict on going 580 or waiting for the 28nm chip? What say you all?
> 
> Also still waiting to hear from Mr. W1zzard if there's a Pci-Ex scaling article planned for the 580? I'd like to know by how much it exceeds 8x bandwidth.



I think, regardless of previous cards or what the competition has, that it's a good card. Personally, I'm waiting for the competition to come out with their competing card before making any kind of purchase. 28nm most likely won't be here for performance parts until the end of 2011, so if you're a person who upgrades every few years, it might be good to wait depending on your current needs. As for PCI-E bandwidth, check the review with the GTX 480, as they are very similar. http://www.techpowerup.com/reviews/NVIDIA/GTX_480_PCI-Express_Scaling/


----------



## Frag_Maniac (Nov 11, 2010)

erocker said:


> As far as PCI-E bandwith, check the review with the GTX 480 as they are very similar.


I've already read that and the 5870 review, which state both exceed 8x by roughly 2%. Still, I'd rather see a test on the 580 itself, because it's a redesigned chip with more cores enabled, a different transistor spec, and a totally different cooler, which also affects power draw. Not sure if the latter affects bandwidth used, but nonetheless, there's enough that's different between the 480 and 580 to want to see how they compare in a PCI-E scaling test.


----------



## TAViX (Nov 11, 2010)

This is a nice card, very nice, BUT too pricey for my taste. Maybe we need to wait for the new 6970 before deciding who's who? 
Anyway, the folks with 58xx or 4xx cards can stay relaxed. We can wait at least another generation before upgrading current GPUs. And that's a FACT!


----------



## Frag_Maniac (Nov 11, 2010)

TAViX said:


> This is a nice card, very nice, BUT to pricey for my taste. Maybe we need to wait for the new 6970 before deciding who-who??
> Anyways, the folks with 58xx or 4xx cards can stay realxed. We can wait at least another generation before upgrading current GPUs. And that's a FACT!


Too pricey? How so? It's debuting at the same price the 480 did, yet has way more technology in it. The only caveat is the drivers aren't quite up to snuff yet, most notably regarding SLI scaling. Considering it already equals the 5970 at a much lower price and with much better DX11 support, I'd say it's already looking like a bargain.

From what I'm hearing, the 6970 won't be much more powerful than a 5870, and the 6000s so far have not shown any better DX11 support than the 5000s did. If anything, Nvidia appears to be ahead in both FPS and value over the last two generations.


----------



## TAViX (Nov 11, 2010)

The thing is, the 480 is also too pricey!  And it is NOT equal to a 5970, it's not even close. Check the charts again. And where did you hear that the 6970 won't be much more powerful than a 5870??? The 6870 is very close in performance already??? Do you have a link or something? Let's not play fanboyisms here, please.


----------



## Frag_Maniac (Nov 11, 2010)

TAViX said:


> The thing is 480 is also to pricey!


I think the problem is you're only looking at price point and not price per FPS. You always pay more for higher FPS, especially if the chip has future-ready tech in it. What's too highly priced is the 5970; it's obvious. There's no way to justify more than $500 for it anymore, and even at that price it's not as good a deal as the 580, because it's woefully inadequate at DX11 support. Not to mention it doesn't even fit in a lot of cases.


----------



## TAViX (Nov 11, 2010)

I'm also looking at the power bill...


----------



## Frag_Maniac (Nov 11, 2010)

As they say, ya gotta pay to play, and at least with Nvidia's latest offerings you get what you pay for. That's more than I can say for the 5970. The 5870 is cheap, yeah, but quite a bit less powerful than Nvidia's cards, and again, poor DX11 support.


----------



## Mr McC (Nov 11, 2010)

Have to say, this release took me completely by surprise and I'm still trying to digest it. I'm not currently thinking about changing anything in my rig, but if I was in the market for a new card this would be on the shortlist. That said, the card appears to be little more than a fixed 480 and I can't help feeling that things would have been much better for all of us, as consumers, if Nvidia had managed to release a 480 with these specs. Is it too little, too late? I don't know, but logic suggests that the 69xx will beat this, but we'll have to wait and see. More to the point, albeit sacrilege to say it on a tech forum, are there many games out there or due to be released in the near future that will seriously challenge anything above a 5850/460?

Anyway, seems like a nice card, all things considered.


----------



## the54thvoid (Nov 11, 2010)

nvidiaintelftw said:


> *if its anything like the 6850 or 6870 then tesselation performance will not be changed much at all*.



What?  Really?  What?  Given that the 6870 is at best the replacement for the 5850, please do compare the 5850's performance in Unigine Heaven to the 6870's (link below). Unigine is a good guide to tessellation power, which is why the 580 kicks ass on it. But the 6870 clearly does tessellation far better than a 5850 (about 30% better at 1920).

Sometimes your posts are a wee bitty PRO Nvidia and you talk down AMD far too much and without logic (this tessellation talk being an example). And I'm not an AMD fanboi; I'm trying very hard not to buy an Nvidia card (waiting for the competition so I can compare), but I really do like the new 580. 

http://www.techpowerup.com/reviews/HIS/Radeon_HD_6870_Turbo/25.html


----------



## erocker (Nov 11, 2010)

the54thvoid said:


> What?  Really?  What?  Given that the 6870 is the replacement at best for the 5850, please do compare the 5850 performance in unigine heaven compared to the 6870 (link below).  Unigine is a good guide at tesselation power - thus why the 580 kicks ass on it.  But the 6870 clearly does tesselation far better than a 5850 (about 30% better at 1920).
> 
> Sometimes your posts are a wee bitty PRO nvidia and you downtalk AMD far too much and without logic (this tesselation talk being an example).  And i'm not a fanboi of AMD - i'm trying very hard to not buy an Nvidia card (waiting for the competition so i can compare) but i really do like the new 580.
> 
> http://www.techpowerup.com/reviews/HIS/Radeon_HD_6870_Turbo/25.html



Member name gives it away. If I wasn't a moderator I would just ignore such people/posts. Plus, this review really has nothing to do with ATi cards.


----------



## ERazer (Nov 11, 2010)

Great review W1zz, and a great card  

Bring on the 6970 review! Me want new card! I'm always about price/performance.


----------



## wolf (Nov 12, 2010)

Benetanegia said:


> It looks like GTX580 is far more power efficient. Thanks to Anandtech all your doubts should be answered.
> 
> 
> 
> ...



Great info there man, cheers for posting it.

I for one think it's a feat all unto itself how well Nvidia refined essentially the same GPU.

They've gotten 15-20% more performance through better yields, reworked circuitry in many different areas of the GPU, and of course more aggressive clock speeds.

Not only that, but while solidly whooping a GTX 480 in every game or test, it consumes less power and makes less noise. 

It would be commendable IMO to keep the *same* performance and reduce power usage and noise; just remember they did that *and* added 15%+ more power. Not freakin' bad at all for a refresh.

T'aint next gen, but it restores some of my faith in Nvidia. Jen-Hsun must have been pissed when GF100 went tits up, but this goes to show he's committed to solutions.


----------



## qubit (Nov 12, 2010)

+1 wolf

Just look at AMD. They misfired pretty badly with the HD 2900 three years ago. That chip was supposed to take out the 8800 GTX, but ended up being hot and embarrassingly slow in the benchmarks at stock clocks. Then look at how AMD came back with the 4000 series. Sounds similar to nvidia's misfire this year, doesn't it?


----------



## newtekie1 (Nov 12, 2010)

wolf said:


> great info there man cheers for posting it.
> 
> I for one think it's a feat all unto itself how well Nvidia refined essentially the same GPU.
> 
> ...



As I mentioned though, I don't know how much of these improvements come from the GPU rework itself and how much comes from the improved cooler design, since W1z showed that a GTX 480 is capable of similar power numbers when the GPU runs cooler.


----------



## wolf (Nov 12, 2010)

newtekie1 said:


> As I mentioned though, I don't know how much of these improvements come from the GPU rework itself, and how much comes from the improved cooler design.  Since W1z showed that a GTX480 is capable of similar power number when the GPU runs cooler.



I guess it wouldn't be that hard to force the fan to run slower, heat the GPU up to the mid-90s, and see how power fares at that temperature.

However, from all of the good reviews I've read, Nvidia really did tackle the GF100 GPU to the ground, pull its pants down and tear it a new one. Major work has been done to the GPU if the reviewers are to be believed. It really is a pity it took so long, but this is what the GTX 480 should have been.

And still, for it to get so close to a 5970 is a feat all unto itself; lightly overclocked cards will likely match or exceed it. No matter which way you look at it, it's a buttload of power for one GPU to have.

I'd love to see 5970 CF vs GTX 580 SLI, quad-GPU scaling ftl.

EDIT: some info on what changed between GF100 and GF110;



> Little did we know at the time, but back in February of this year, before the first GF100 chips even shipped in commercial products, the decision had been made in the halls of Nvidia to produce a new spin of the silicon known as GF110. The goal: to reduce power consumption while improving performance. To get there, Nvidia engineers scoured each block of the chip, employing lower-leakage transistors in less timing-sensitive logic and higher-speed transistors in critical paths, better adapting the design to TSMC's 40-nm fabrication process.
> 
> At the same time, they made a few targeted tweaks to the chip's 3D graphics hardware to further boost performance. The first enhancement was also included in the GF104, a fact we didn't initially catch. The texturing units can filter 16-bit floating-point textures at full speed, whereas most of today's GPUs filter this larger format at half their peak speed. The additional filtering oomph should improve frame rates in games where FP16 texture formats are used, most prominently with high-dynamic-range (HDR) lighting algorithms. HDR lighting is fairly widely used these days, so the change is consequential. The caveat is that the GPU must have the bandwidth needed to take advantage of the additional filtering capacity. Of course, the GF110 has gobs of bandwidth compared to most.
> 
> ...


----------



## motasim (Nov 12, 2010)

... I am not red nor green; I just speak my mind, and what I think is that AMD has been surprised by the unexpected launch and performance of the GTX 580, as everyone else has ... we were only expecting a dual-GF104 chip or a fully-enabled GF104 GPU from nVidia, and no one was expecting a "proper" GF100 (i.e. GF110) chip, but nVidia managed to keep a good secret long enough to surprise the competition. Now the ball is in AMD's court; AMD has had a year since they launched the 5870 & 5970 and has no excuse, in my opinion, not to build single-chip GPUs that can outperform the GTX 580. If they can't, that will definitely mean 2011 is going to be nVidia's year, especially since the GTX 560 & GTX 570 should be on the way ...


----------



## pantherx12 (Nov 12, 2010)

qubit said:


> +1 wolf
> 
> Just look at AMD. They misfired pretty badly with the HD 2900 three years ago. That chip was supposed to take out the 8800 GTX, but ended up being hot and embarrassingly slow in the benchmarks at stock clocks. Then look at how AMD came back with the 4000 series. Sounds similar to nvidia's misfire this year, doesn't it?



Regarding the 2900 XT, I nearly bought one the other day just to experience it. But why did it do so badly, Fermi-type troubles?

Because its specs are damn good! (for the time)

Did decent cooling yield good results?


----------



## Benetanegia (Nov 12, 2010)

newtekie1 said:


> As I mentioned though, I don't know how much of these improvements come from the GPU rework itself, and how much comes from the improved cooler design.  Since W1z showed that a GTX480 is capable of similar power number when the GPU runs cooler.



Look at the entire Anand review. Temps are exactly the same for both GTX 580 models, but power consumption on the Asus one is much lower.


Here's some results from the GTX480 lightning:


As you can see, the temperatures are lower on this one thanks to a much better cooler, and because of that (and the PCB, etc.) the card consumes the same as the reference GTX 480 despite running at 750 MHz with a stock voltage of 1.06 V. So yes, temps do help power consumption, but look at this small comparison between W1zzard's reviewed GTX 480 Lightning/reference and the reference GTX 580:

GTX480 reference - 700 MHz - 0.99 V - 96 °C - 320 W Furmark - 257 W peak
GTX480 Lightning - 750 MHz - 1.06 V - 71 °C - 321 W Furmark - 230 W peak
GTX580 reference - 772 MHz - 1.05 V - 86 °C - 306 W Furmark - 226 W peak
Asus 580 in Anand - 782 MHz - 1.00 V - 87 °C - (*) 280 W Furmark - (*) 206 W peak

*Estimated from Anand's results, as the Asus one consumes 26 W less under Furmark and 20 W less in Crysis.
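For anyone checking the arithmetic on those two starred figures, here's a throwaway sketch. It's pure subtraction, and obviously a rough cross-review extrapolation: the 306/226 W numbers are the reference-card measurements from this review, and the 26/20 W deltas are Anand's.

```python
# Estimate the Asus GTX 580's draw on this review's test setup by applying
# Anandtech's measured reference-vs-Asus deltas to the reference numbers here.
ref_gtx580 = {"furmark": 306, "peak": 226}  # W, reference GTX 580 (this review)
asus_delta = {"furmark": 26, "peak": 20}    # W, Asus advantage per Anandtech

asus_estimate = {k: ref_gtx580[k] - asus_delta[k] for k in ref_gtx580}
print(asus_estimate)  # {'furmark': 280, 'peak': 206}
```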

On top of that, I would say the MSI Lightning is made from cherry-picked cores, while the Asus one is a launch product, so there is surely room for improvement.

Bottom line is IMHO:

1- The GTX580 does consume much less, especially considering it has more SPs enabled and the consequent performance improvement.

2- Nvidia should definitely change whoever makes their reference cards.


----------



## the54thvoid (Nov 12, 2010)

Benetanegia said:


> Bottom line is IMHO:
> 
> 1- The GTX580 does consume much less, especially considering it has more SPs enabled and the consequent performance improvement.
> 
> 2- Nvidia should definately change whoever makes them reference cards.



It really depends on what you read.

http://www.hexus.net/content/item.php?item=27307&page=16  similar draw
http://www.bjorn3d.com/read.php?cID=1950&pageID=9757  similar draw
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/25.html  lower draw
http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580/17  lower draw
http://www.hardwareheaven.com/revie...rce-gtx-580-sli-review-power-temps-noise.html   lower draw

Some sites show awfully low power draws, others marginally lower (~20 W), others the same, and Guru3D had a shit engineering card with much higher draw.

I think focusing on power consumption is a moot point, though. What I see as valid is its other performance figures. This review (below) shows the problems faced by CrossFire setups. This is why I will be going single-card come December (red or green, unsure).

http://www.hardwareheaven.com/revie...oc-crossfire-vs-radeon-5970-introduction.html

The 6870 CrossFire and 5970 beat the GTX 580 in BFBC2 and AvP, but have poorer MINIMUM fps in Crysis; COD Black Ops is a crock of shit and NV-optimised for the 580 (I have friends who can't play it with a GTX 260 - console port ahoy!), F1 2010 runs poorly on dual GPU, and Fallout New Vegas is slower on dual GPU...

Basically, though the CrossFire setups are technically quicker, unless the game is optimised they run worse.

I'm just a little concerned the GTX 580 is so much better simply because they put on a superior cooler that's been around on AIC/AIB cards for ages. I'm waiting for other cards and reviews before I decide.

Edit: just found this review - the GTX 580 beats everything across the board (480, 480 OC, 480 Lightning etc.), but in Unigine the OC GTX 480 Lightning beats the OC GTX 580, except in minimum FPS (512 cores versus 480?).
http://www.overclock3d.net/reviews/gpu_displays/zotac_gtx580_review/5


----------



## nt300 (Nov 12, 2010)

I think I'll wait for the HD 6970. The 580 is fast and much better than the 480, but IMO only good for stimulating competition and dropping graphics card prices. I feel the 580 should have been much, much faster for the high price they're asking.


----------



## wolf (Nov 12, 2010)

motasim said:


> ... I am not red nor green; I just speak my mind; and what I think is that AMD has been surprised by the unexpected launch and performance of the GTX 580, as everyone else have ... we were only expecting a dual GF104 Chip or a fully-enabled GF104 GPU from nVidia and no one was expecting a "proper" GF100 (i.e. GF 110) chip GPU, but nVidia managed to keep a good secret long enough to surprise competition. Now, the ball is in AMD's court; AMD has had a year now since they launched the 5870 & 5970 and has no excuse in my opinion not to build single-chip GPUs that can outperform the GTX 580, but if they couldn't, that'll definitely mean that 2011 is going to be nVidia's year, especially that the GTX 560 & GTX 570 should be on the way ...
> http://images.bit-tech.net/blog/201...the-geforce-500-series/gtx-500-prediction.jpg



Those 500 series estimates look quite possible given their current lineup, at least spec- and speed-wise.

The GTX 570 will be GF110 with near-GTX 480 performance, and should draw considerably less power too.
The GTX 560 is the fully enabled and clocked-up GF104 we've been waiting for.
The GTS 550 is also a fully enabled GF106, clocked up, giving it considerably more performance IMO, given that at the moment it lacks 50% of its ROPs and memory bandwidth.


----------



## Benetanegia (Nov 12, 2010)

the54thvoid said:


> It really depends what you read.
> 
> http://www.hexus.net/content/item.php?item=27307&page=16  similar draw
> http://www.bjorn3d.com/read.php?cID=1950&pageID=9757  similar draw
> ...



But once again it depends on which cards those reviews are comparing. Hexus and Bjorn3D are comparing a reference GTX 580 against Asus and Galaxy GTX 480s, which are much more refined than the reference designs. Compare the Asus GTX 580 to the Asus GTX 480 in Anandtech and you are going to be closer to the truth. My guess is that the reference card is specced so that every candidate chip can attain the required specs, even the really "bad" ones; it's playing well on the safe side, because every card that can be released at launch counts, given arguably high demand and low supply. In the meantime, important partners like Asus (just as an example) get better candidates, and they also do some binning themselves; by getting rid of the worst ones* they can improve a lot over the reference design, using far lower voltages to do the same thing. A lot of this has already happened with GTX 480s and GTX 470s in the wild, which have much, much better thermals and power consumption than the reference ones reviewed at launch.

*You would be surprised how much it can improve if you get rid of only the worst 2% (because of the normal, or Gaussian, distribution, you know), and since they are only getting rid of 2%, or let's say 5%, they just need to sell at a 2-5% higher price, or simply rely on the extra sales that come from being the best supplier of that card.
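A quick illustration of that footnote, with completely made-up numbers: I'm just assuming per-chip board power is normally distributed around some mean, which is an assumption for illustration only; nobody outside TSMC/Nvidia knows the real spread.

```python
import random
import statistics

random.seed(42)
# Hypothetical per-chip board power (W), normally distributed around 250 W.
chips = [random.gauss(250, 15) for _ in range(100_000)]

# A partner bins out the worst 2% (highest-power) candidates.
cutoff = sorted(chips)[int(len(chips) * 0.98) - 1]
binned = [c for c in chips if c <= cutoff]

# The average chip barely changes, but the worst chip the board/cooler
# must be designed around improves by tens of watts.
worst_case_gain = max(chips) - max(binned)
mean_shift = statistics.mean(chips) - statistics.mean(binned)
print(f"worst-case improvement: {worst_case_gain:.1f} W")
print(f"average improvement:    {mean_shift:.2f} W")
```

The point being: dropping the tail hardly moves the average, but it dramatically improves the worst-case chip that the reference voltage and VRM have to accommodate, which is exactly why a binned partner card can ship at a lower stock voltage than the reference design.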


----------



## Zubasa (Nov 12, 2010)

pantherx12 said:


> Regarding the 2900xt, nearly bought one the other day to experience , but just why did it do so badly, fermi type troubles?
> 
> Because it's specs are damn good!  ( for the time)
> 
> Did decent cooling yield good results?


The HD2900 fell really hard on its face because it had such an abysmal default clock. (Yes, yield/power consumption problems)
That, and the fact that the SPs on the R600 were severely under-utilized.
So the first thing that changed in the RV770 (4870/4850) is that they added a ton of SPs to the thing.


----------



## Over_Lord (Nov 13, 2010)

Benetanegia said:


> Look at the entire Anand review. Temps are exactly the same for both GTX580 models, but *power consumption on the Asus one is much lower.*



Read the 1st page of review. The ASUS card uses a voltage of 1.00v, reference in 1.025v..

That alone makes a HUGE difference in power consumption.


----------



## Wile E (Nov 13, 2010)

pantherx12 said:


> Regarding the 2900xt, nearly bought one the other day to experience , but just why did it do so badly, fermi type troubles?
> 
> Because it's specs are damn good!  ( for the time)
> 
> Did decent cooling yield good results?



Extreme cooling yielded excellent results. Air cooling was useless on the card. AA absolutely crippled the card tho. It was mostly a good bencher's card.

The card had high leakage, and was a heat monster. It was in Fermi territory for heat output.


----------



## Bjorn_Of_Iceland (Nov 13, 2010)

Wow, this thing is cooler, faster and more power efficient indeed.. they were not exaggerating. 850 core is easy without voltage tweaks


----------



## pantherx12 (Nov 13, 2010)

Cheers for 2900xt info, is it bad your words have only made it more tempting?

I can get one for £29.75 from work, but it seems expensive (this is including my discount)



I can't wait to see clocks on gf110 with water or epic aircooling.


----------



## the54thvoid (Nov 13, 2010)

*UK availability of GTX 580 is very low.*

As of writing this:

E-Buyer - pre order
Aria - pre order
Novatech - pre order
Scan - 3 of 11 lines in stock
Overclockers UK - 1 of 11 lines in stock (4 of @ £450!!!)

Methinks this launch, though real, was very much a 6970 pre-empt.  There simply isn't much stock around - there wasn't much to start with, probably fewer than a hundred units across those 5 stores, and prices vary from £394 (pre-order) up past £450.

But it worked regardless.  I like the GTX 580.  But i'm thinking, hmm, how many are there?  I hope production is ramped up and not used to feed the 570/560 lines.

I also think HD6970 will be the same.  Few at launch, if they do get the 13th Dec nailed.


----------



## Red_Machine (Nov 13, 2010)

eBuyer said they would have some in on the 11th.  Tho my order has now been placed instead of there being a stock warning...


----------



## Paintface (Nov 13, 2010)

http://www.newegg.com/Product/Product.aspx?Item=N82E16814500184

possible to find it cheaper?


----------



## Red_Machine (Nov 13, 2010)

Would you guys say an i5 760 is superior to a Phenom II X4 965?


----------



## scaminatrix (Nov 13, 2010)

Red_Machine said:


> Would you guys say an i5 760 is superior to a Phenom II X4 965?



It seems they're close. I'm guessing the i5 when OC'ed will be the better performer.


----------



## Red_Machine (Nov 14, 2010)

Would you guys recommend the Intel or the AMD?  I'm tempted to go AMD again, but I'm just worried that it won't be up to the task of supporting my 580.


----------



## bear jesus (Nov 14, 2010)

Red_Machine said:


> Would you guys recommend the Intel or the AMD?  I'm tempted to go AMD again, but I'm just worried that it won't be up to the task of supporting my 580.



I would say go with AMD if you intend to overclock; if you intend to leave your CPU at stock speeds I would expect an i7 to be a better buy than an i5 or anything from AMD.

My Phenom 965 at 4GHz does pretty well at keeping up with my pair of 6870's, but I admit an overclocked i7 would feed them better - though only because I have 2 cards. I hope to go back to a single card soon so that I can keep my Phenom a little longer without it holding me back.



scaminatrix said:


> It seems they're close. I'm guessing the i5 when OC'ed will be the better performer.



From that I would also say the overclocked i5: if you can bump it up closer to 4GHz, it would beat out the Phenom at around 4GHz.


----------



## HTC (Nov 14, 2010)

Any chance W1zzard can add to the review what one gets when the "limiter" on the card is off?



HTC said:


> I just have a couple of questions:
> 
> - Are there any performance gains when not limiting?
> 
> - Is it worth it to remove the limiter?


----------



## newtekie1 (Nov 14, 2010)

HTC said:


> Any chance W1zzard can add to the review what one gets when the "limiter" on the card is off?



Performance in everything but Furmark would be unchanged, since Furmark was the only application that the limiter detects and activates with.  All other apps are already unaffected by the limiter.


----------



## CDdude55 (Nov 14, 2010)

newtekie1 said:


> Performance in everything but Furmark would be unchanged, since Furmark was the only application that the limiter detects and activates with.  All other apps are already unaffected by the limiter.



Hmm, so Unigine Heaven isn't affected by the throttling?


----------



## HTC (Nov 14, 2010)

newtekie1 said:


> Performance in everything but Furmark would be unchanged, since Furmark was the only application that the limiter detects and activates with.  All other apps are already unaffected by the limiter.



Really?

I thought the extra power it uses when the limiter is not in use would bring some benefits in performance.


----------



## newtekie1 (Nov 14, 2010)

CDdude55 said:


> Hmm, so Unigine Heaven isn't affected by the throttling?





HTC said:


> Really?
> 
> I thought the extra power it uses when the limiter is not in use would bring some benefits in performance.



It amazes me sometimes that people can't be bothered to actually read the review they are commenting on...



W1zzard said:


> At this time the limiter is only engaged when the driver detects Furmark / OCCT, it is not enabled during normal gaming.


----------



## HTC (Nov 14, 2010)

newtekie1 said:


> It amazes me sometimes that people can't be bothered to actually read the review they are commenting on...



OK: where in the review, other than the power consumption test's last graph, does it show any difference, if any, with the limiter on vs. off?


----------



## CDdude55 (Nov 14, 2010)

newtekie1 said:


> It amazes me sometimes that people can't be bothered to actually read the review they are commenting on...



I did read the review, and Unigine isn't normal gaming load, so it's a valid question. Chill out.


----------



## newtekie1 (Nov 14, 2010)

HTC said:


> OK: where in the review, other then the power consumption test's last graph, does it show any difference, if any, with the limiter on VS off?



Seriously, it is not that hard to understand.  The driver detects the exe for OCCT or Furmark and enables the limiter.  Nothing else is affected, so your question about performance is answered in the review already.



CDdude55 said:


> I did read the review, and Unigine isn't normal gaming load, so it's a valid question. Chill out.



No, it is not a valid question, because again, it says right in the review that the driver only activates the limiter when it detects OCCT or Furmark.  Is Unigine OCCT or Furmark?  No.  So there is your answer.


----------



## Steevo (Nov 14, 2010)

Not true, or else I could just change the name. The purpose is to throttle in what it considers an overpower situation; thus it throttled during overclocking, as shown in the review.


----------



## newtekie1 (Nov 14, 2010)

Steevo said:


> Not true, or else I could just change the name. The purpose is to throttle in what it considers an overpower situation; thus it throttled during overclocking, as shown in the review.



Wanna take a stab at how W1z likely bypassed the limit to get Furmark numbers w/o the limiter?

And nowhere in the review does it say the limiter was activated during overclocking.

Yes, it is possible that it could be set up to throttle in every overpower situation. However currently, as W1z stated, it only monitors the sensors when OCCT or Furmark is detected running.  Those two applications are the only ones affected.  No other application is affected by the limit currently.
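A minimal sketch of the behaviour being described, assuming (per W1zzard's quote in the review) a per-executable check: the driver only consults the current sensors for known stress-test processes, and throttles when board power exceeds the 300 W limit. The function name, the proportional clock-scaling policy, and the clock figures are illustrative assumptions, not NVIDIA's actual driver logic.

```python
# Hypothetical sketch of the limiter behaviour discussed above: the driver
# only consults the current sensors when a known stress-test executable is
# running; everything else runs unthrottled. Names, the proportional
# scaling policy, and clocks are illustrative, not NVIDIA's implementation.
WATCHED_EXES = {"furmark.exe", "occt.exe"}
BOARD_POWER_LIMIT_W = 300.0

def effective_clock(exe_name: str, board_power_w: float, base_clock_mhz: int) -> int:
    """Return the clock the limiter would allow for this process."""
    if exe_name.lower() not in WATCHED_EXES:
        return base_clock_mhz            # limiter never engages for games
    if board_power_w <= BOARD_POWER_LIMIT_W:
        return base_clock_mhz
    # Scale clocks down proportionally to get back under the board limit
    return int(base_clock_mhz * BOARD_POWER_LIMIT_W / board_power_w)

# A game drawing 320 W is left alone; Furmark at the same draw is throttled.
print(effective_clock("crysis.exe", 320.0, 772))
print(effective_clock("furmark.exe", 320.0, 772))
```

Note that in this model a renamed `furmark.exe` would fall through the first check entirely, which is exactly the rename-the-exe bypass raised elsewhere in the thread.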


----------



## HTC (Nov 14, 2010)

newtekie1 said:


> Seriously, it is not that hard to understand.  *The driver detects the exe for OCCT or Furmark and enables the limiter.*  Nothing else is affected, so your question about performance is answered in the review already.



It's amazing how someone can comment on a review without actually bothering to read it ...



> Once we reached 97°C the card started to throttle down which forced us to lower clocks to reduce power consumption to get out of the throttling state to maximize performance.





> *When the card senses it is overloaded by either Furmark or OCCT*, the card will reduce clocks to keep power consumption within the board power limit of 300 W



How do you know if, in whatever game, the card reaches the 97°C limit and starts throttling down or not, without actually testing for it? I dunno, since I don't own the card.

If not W1zzard, perhaps someone who does own such card can test this and report back.


----------



## CDdude55 (Nov 14, 2010)

It's a valid question newtekie.



			
				W1zzard said:
			
		

> For the every day gamer the power draw limiter will not have any effect on performance.





			
				W1zzard said:
			
		

> As mentioned earlier, the card comes with a current limiter system which reduces clocks and performance in case the card senses it is overloaded.



You're basing what you're saying on the fact that it is driver-limited for those two specific benchmarks, but it has also been stated that when overloaded the card will throttle. So if a program like Heaven is strong enough (because it's not regular load) to push the temps up significantly, then performance throttling would be expected. I'm not saying you're wrong and that the drivers don't dictate what's regular load and what's not, but based on a variety of different statements it wasn't 100% clear cut to me. Because as I said, Unigine isn't gaming load, so it's more than a valid question. No need to start PMSing lol, just a joke.


----------



## Steevo (Nov 14, 2010)

Or a game, or any CUDA program that pushes the card.


I am not saying it is specifically a bad thing; perhaps they plan on cooking some older cards and wanted to protect the new users again?


But my problem is: what about the users that want to run this under water, or in more extreme situations? Is Nvidia giving them the finger on this round? There has been talk of a "performance boost" on some forums; perhaps they are waiting till AMD drops their card to unleash the extra clocks to "meet the competition"? 


I just question the wisdom of tying down the high-end card, targeted at enthusiasts, when they get bad press for it.


----------



## newtekie1 (Nov 14, 2010)

HTC said:


> It's amazing how someone can comment on a review without actually bothering to read it ...
> 
> 
> 
> ...



The 97°C limit has nothing to do with the current limiter.  The 97°C limit has been on all the Fermi cards, AFAIK.  Temp is not the same as current.



CDdude55 said:


> It's a valid question newtekie.
> 
> 
> 
> ...



No, I'm basing what I'm saying on the fact that W1z has said that the limiter is only active when those two programs are detected by the driver.  In all other cases the driver doesn't care about current and doesn't activate the limiter even if the card goes over 300w.

Again, it can't get more clear cut than "the limiter is only engaged when the driver detects Furmark / OCCT"; at no other point will the limiter activate, even if the application/game causes the current to go over 300w.



Steevo said:


> Or a game, or any CUDA program that pushes the card.
> 
> 
> I am not saying it is specifically a bad thing, perhaps they plan on cooking some older cards and wanted to protect the new users again?
> ...



Considering we already have ways around the 300w limit, I don't think this is the case.  I think there is a specific reason they only target 2 programs and don't monitor current during any others.  That's not to say they won't expand the list later on though.


----------



## HTC (Nov 14, 2010)

newtekie1 said:


> The 97°C limit has nothing to do with the current limiter.  *The 97°C limit has been on all the Fermi cards, AFAIK.*  Temp is not the same as current.


Dunno: you could be right.


newtekie1 said:


> *No, I'm basing what I'm saying on the fact that W1z has said that the limiter is only active when those two programs are detected by the driver.*  In all other cases the driver doesn't care about current and doesn't activate the limiter even if the card goes over 300w.
> 
> *Again, it can't get more clear cut than "the limiter is only engaged when the driver detects Furmark / OCCT"; at no other point will the limiter activate, even if the application/game causes the current to go over 300w.*
> 
> Considering we already have ways around the 300w limit, I don't think this is the case. *I think there is a specific reason they only target 2 programs and don't monitor current during any others.*  That's not to say they won't expand the list later on though.



Since W1zzard didn't say that (bold underlined), your other points (bold) are wrong.

This is what W1zzard said:



> *Once we reached 97°C the card started to throttle down* which forced us to lower clocks to reduce power consumption to get out of the throttling state to maximize performance.





> When the card senses it is overloaded by either Furmark or OCCT, the card will reduce clocks to keep power consumption within the board power limit of 300 W



If you're right, anyone with the card can test this by changing Furmark's EXE file to ... "x.EXE" or something like that: then he/she will get quite a hike in temps.

Maybe W1zzard can clarify it for us: has the card been limited by those 2 programs specifically? I ask because he said "senses it is overloading" and this can be done by other programs as well: will the limiter kick in then too?


----------



## wahdangun (Nov 14, 2010)

newtekie1 said:


> The 97°C limit has nothing to do with the current limitter.  The 97°C limit has been on all the Fermi cards, AFAIK.  Temp is not the same as current.
> 
> 
> 
> ...



So if the driver just checks for furmark.exe, then why did W1zz bother to build a GPU-Z tool to overcome the limiter? Every reviewer could just rename the exe and move on.


----------



## LAN_deRf_HA (Nov 14, 2010)

After giving it some thought, I think this limiter thing is perfectly acceptable as long as they include an official option to disable it in the control panel, or put some documentation somewhere telling us it can be done with gpuz. This way they protect the average consumer and stay within the pci-e rating, but give the option to go beyond to the people who actually know what they're doing. Atm though I don't see them doing that, if anything I wonder if they'll block the gpuz bypass in the next driver.


----------



## HalfAHertz (Nov 14, 2010)

Well, only stress-testing programs trigger it, so I don't see a need to disable it. I mean, if all you're using the card for is gaming, and you want to overclock, then why not stress test with a demanding game instead? Sometimes I've had OCs which would have corruptions and artefacts in The Furry Donut™, but would work perfectly in games.


----------



## newtekie1 (Nov 14, 2010)

HTC said:


> Dunno: you could be right.
> 
> 
> Since W1zzard didn't say that (bold underlined), your other points (bold) are wrong.



Oh Jesus Christ!!! I quoted him saying exactly that!  Do you have a hard time reading? I'm not quoting him again; you can read my post above to see where he said it, or even better, read the review!:shadedshu



HTC said:


> This is what W1zzard said:
> 
> 
> 
> > Once we reached 97°C the card started to throttle down which forced us to lower clocks to reduce power consumption to get out of the throttling state to maximize performance.



Again, why are you taking a statement from the overclocking section about temperature and trying to say it has anything to do with the Over Current Protection?  Did you not get that the temperature protection is a totally different thing from the current protection?  I believe I already told you this.

You can read pretty much the same statement about GTX480s as well: http://www.techpowerup.com/reviews/MSI/N480GTX_GTX_480_Lightning/31.html

But I guess you will still want to go on about the temp limit like it is the same thing as the current limit, and now I'm sure you'll say the GTX480 must have had it too...right?  Because this is like the 3rd time you've gone on about the temperature limit like it is related to the current limit.:shadedshu

Again, temperature is not the same as current; they are two different things and hence two different protection systems.



HTC said:


> > *When the card senses it is overloaded by either Furmark or OCCT*, the card will reduce clocks to keep power consumption within the board power limit of 300 W



Correct, he did say that, and I bolded the important part.  And when it is put in context with the fact that he said the OCP is only activated when it detects OCCT and Furmark, the meaning is clear.



HTC said:


> If you're right, anyone with the card can test this by changing Furmark's EXE file to ... "x.EXE" or something like that: then, he / she will get quite a hike in temps.
> 
> Maybe W1zzard can clarify it for us: has the card been limited by those 2 programs specifically? I ask because he said "senses it is overloading" and this can be done by other programs as well: will the limiter kick in then too?



Again, I don't know why you can't be bothered to read the review, as W1z already said this in it.  The driver detects Furmark and OCCT and activates the limiter.  It is limited to those 2 programs specifically because those are the only two programs the driver detects.  In all other programs the driver doesn't monitor the overcurrent sensors.




wahdangun said:


> sp if the driver just check furmark.exe then why wizz bother to build GPU-z to overcome the limiter, and every reviewer can just rename the exe and move on



Because when nVidia adds more programs later it will be easier to just use the GPU-Z tool to disable the limiter instead of trying to figure out if a program is affected and which programs are affected.

Plus there are more ways than just the exe name to detect if a certain program is running.


----------



## bear jesus (Nov 14, 2010)

HalfAHertz said:


> Sometimes I've had OCs which would have corruptions and artefacts in The Furry Donut™, but would work perfectly in games.



But does that not just mean it is not really 100% stable? Kind of like a CPU overclock that is fine in games but something like Prime causes BSODs.

I must admit though, with my 6870's I have had overclocks that were Furmark- and ATITool-stable with no artifacting, yet 3DMark06 would cause crashes, so it's not like Furmark is a definitive stress test.


----------



## N3M3515 (Nov 14, 2010)

From my experience with graphics cards, all of them have a temperature limit; once they reach it they lower the clocks. I've had a 5700 Ultra and a 6800 GS, and when they reached 97-100 degrees, the driver throttled the clocks down. So for those thinking it was the OCP on the GTX 580: it's not.
The OCP kicks in only when the current reading passes 300W (my guess), but since there are no games that make the card surpass 300W, it only engages with the only apps that do: OCCT and Furmark.

There you go: temp is not the same as current. The OCP reads power draw: *watts*
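The distinction drawn above can be written out as two independent checks. This is only an illustrative sketch: the 97°C and 300 W thresholds are the ones quoted in the thread, and the function and its policy are assumptions, not actual driver code.

```python
from typing import Optional

# Illustrative sketch: thermal throttling and over-current protection
# (OCP) are separate mechanisms with separate triggers. 97 degC and 300 W
# are the thresholds quoted in the thread; the logic is an assumption.
TEMP_LIMIT_C = 97.0                      # thermal throttle, on all Fermi cards
POWER_LIMIT_W = 300.0                    # OCP board power limit
OCP_WATCHED = {"furmark.exe", "occt.exe"}

def throttle_reason(temp_c: float, power_w: float, exe: str) -> Optional[str]:
    """Which protection (if any) would kick in for this workload."""
    if temp_c >= TEMP_LIMIT_C:
        return "thermal"                 # fires regardless of the application
    if exe.lower() in OCP_WATCHED and power_w > POWER_LIMIT_W:
        return "ocp"                     # fires only for the watched apps
    return None

print(throttle_reason(98.0, 250.0, "crysis.exe"))   # hot game -> thermal throttle
print(throttle_reason(80.0, 320.0, "furmark.exe"))  # watched app over 300 W -> OCP
print(throttle_reason(80.0, 320.0, "crysis.exe"))   # game over 300 W -> untouched
```

The third call is the point of contention in the thread: in this model a game drawing over 300 W is never current-limited, only thermally limited.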


----------



## Steevo (Nov 14, 2010)

What happens once a game comes along that uses more than 300W, nvidia just expects users to live with underperformance then? Or do they expect you to upgrade cards?


Would you buy a car with a wood block under the throttle?


----------



## N3M3515 (Nov 14, 2010)

Steevo said:


> What happens once a game comes along that uses more than 300W, nvidia just expects users to live with underperformance then? Or do they expect you to upgrade cards?
> 
> 
> Would you buy a car with a wood block under the throttle?



I think they will take care of it when it happens, driver level.


----------



## Benetanegia (Nov 14, 2010)

Steevo said:


> What happens once a game comes along that uses more than 300W, nvidia just expects users to live with underperformance then? Or do they expect you to upgrade cards?
> 
> 
> Would you buy a car with a wood block under the throttle?



That will never happen. Never. It's impossible to load all the resources on a GPU at the same time unless you do it artificially with something like Furmark. In actual games, there's a lot of work to be done other than running shader code, for example texture loading and filtering.


----------



## Steevo (Nov 14, 2010)

So then they felt the extra money they could spend on more hardware was obviously worth it for the few who would run Furmark or OCCT? Bullshit.

They did it either to mislead the public on power use, or to protect the card from being used to the full in an optimized way.



A car with a wood block under the gas pedal.


----------



## the54thvoid (Nov 14, 2010)

Steevo said:


> What happens once a game comes along that uses more than 300W, nvidia just expects users to live with underperformance then? Or do they expect you to upgrade cards?
> 
> 
> Would you buy a car with a wood block under the throttle?



If 300W is the PCI-e limit I don't see it being an issue.  Under current protocols for meeting specs, I don't think any game coding that did that would be 'valid'.  The design spec is, after all, 300W.  Why design games that require more power than a single card can deliver by specification?  Given the console domination of gaming design, we're still not even getting DX11 up to a good standard yet.

In the future I don't see it happening either, as the manufacturing processes shrink.

As for the car analogy, most super sport production cars have speed limiters (150mph for many BMW/Mercedes etc.) built in, so we do buy cars with metaphorical chokes built in.


----------



## HTC (Nov 14, 2010)

newtekie1 said:


> Oh Jesus Christ!!! I quoted him saying exact that!  Do you have a hard time reading? I'm not quoting him again, you can read my post above to see where he said it, or even better read the review!:shadedshu
> 
> Again, why are you taking a statement from the overclocking section about temperature and trying to say it has anything to do with the Over Current Protection?  Did you not get that the temperature protection is a totally different thing from the current protection?  I believe I already told you this.
> 
> ...



You were right. I'm man enough to admit when i'm wrong and judging by W1zzard's quote below, i was indeed wrong.



ty_ger said:


> What I was stating was that _NVIDIA_ never stated that _only_ OCCT and Furmark triggered the OCP protection cap.





W1zzard said:


> thats exactly what nvidia told me



It's all about interpretation: until now, W1zzard hadn't stated what you have been claiming as fact (that the card *really does react* to Furmark and OCCT specifically), and that's what I was clinging onto.

The thing is, when I'm convinced I'm right, I'll argue and argue, and then argue some more ... until someone proves me wrong, just like W1zzard did.


----------



## bear jesus (Nov 14, 2010)

Steevo said:


> The did it to either *mislead the public in power use*, or to protect the card from being used to the full in a optimized way.



The one thing that makes me think you could be right about that is the fact that so many people quoted the 480's power usage as what it used in Furmark, and not real in-game power usage, when complaining about how much power the card used. I assume so many people did that because they were talking about the max power the card could possibly use, and this limit makes the 580 seem much better when quoting the absolute max power.


----------



## LAN_deRf_HA (Nov 14, 2010)

HalfAHertz said:


> Well only stress testing programs use it, so I don't see a need to disable it.I mean if all you're using the card for is gaming, and you want to over-clock, then why not stress test with a demanding game instead? Sometimes I've had OCs which would have corruptions and artefacts in The Furry Donut™, but would work perfectly in games.



That's a horrible idea. Sometimes it takes a good 5 hours for a game to crash from a bad overclock; OCCT will find it in 10-20 minutes, and then you don't need to worry about finding stability with hours of testing for each individual program. And "the furry donut" is only good for heating up your card or telling you you're way past the stability limit; it's not sensitive enough for real stress testing, at least not with current cards. If that, or programs based on it, is the only test you use, you're not going to have a truly stable overclock; then you'll get crashes in games and blame the games or the drivers when it's really user error.


----------



## newtekie1 (Nov 14, 2010)

HTC said:


> You were right. I'm man enough to admit when i'm wrong and judging by W1zzard's quote below, i was indeed wrong.
> 
> 
> 
> ...



It's cool man, I don't hold a grudge or anything, and it wasn't like I was really angry or anything.  And I'm the same way when I'm convinced I'm right.



bear jesus said:


> The one thing that makes me think you could be right about that is the fact so many people quoted the 480's power usage as what it used in furmark and not real in game power usage when complaining about how much power the card used, i assume so many people did that because they are talking about the max power the card could possibly use and with this limit makes it seam much better when quoting the absolute max power.




The problem I have with the whole idea that nVidia did it to give false power consumption readings is that if they wanted to do that they would have done a better job of it.  The power consumption with the limiter on under Furmark is like 150w; that is lower than game power consumption.  So it makes it pretty obvious what is going on there, and anyone taking power consumption numbers would have instantly picked up on that.  If they were really trying to provide false power consumption numbers they would have tuned it so that power consumption under Furmark was at least at a semi-realistic level.


----------



## HTC (Nov 14, 2010)

LAN_deRf_HA said:


> That's a horrible idea. Sometimes it takes a good 5 hours for a game to crash from a bad overclock, OCCT will find it in 10-20 minutes, and then you don't need to worry about finding stability with hours of testing for each individual program. And "the furry donut" is only good for heating up your card or telling you you're way past the stability limit, it's not sensitive enough for real stress testing. At least not with current cards. If that or programs based on it is the only test you use you're not going to have a truly stable overclock, then you'll get crashes in games and blame the games or the drivers when it's really user error.



Agreed. Furmark and other such programs "find" a bad OC quicker, but that doesn't mean it's foolproof.

Sometimes you run the stress progs for several hours on your OCs and it all checks out fine, and then, while playing some game, you get crashes. Who's to blame: the game? The VGA drivers? Most of the time it's the OCs, be they CPU related or GPU related.


----------



## a_ump (Nov 14, 2010)

Yeah, for me to feel my overclock is stable usually takes 1-3 days of messing around: stress tests, gaming, everything. It's not stable when you can game or when you can pass a stress test; it's stable when it can do everything. If anything starts being faulty after an OC I always bounce back to square 1.


----------



## newtekie1 (Nov 14, 2010)

HTC said:


> Agreed. Furmark and other such programs "find" a bad OC quicker but that doesn't mean it's full proof.
> 
> Sometimes, you run the stress progs for several hours on your OCs and it all checks out fine and then, while playing some game, you get crashes. Who's to blame: the game? The VGA drivers? Most of the time it's the OCs, be them CPU related or GPU related.



That is one of the things about Furmark I've noticed: it doesn't use a whole lot of VRAM.  So if your RAM overclock is slightly unstable it will almost never find it.  That is usually when I fire up Unigine at full tessellation settings to really fill that VRAM up.


----------



## AddSub (Nov 14, 2010)

A total of 307 comments? Make it 308 now. Only a passionate hatred of nVidia can make a thread grow this fast and this large. Whatever, this card is pretty much as fast as two 5870 GPUs (5970) as per the following really cool link, and all without the CrossFire scaling issues, since sadly (for CrossFire tech users, that is) SLI is still the better tech of the two.

Till the next round then, although I don't think AMD will stick around for that long since their Abu Dhabi sugar daddies... ummm, investors, yeah that's it, "investors", well they aren't doing too well themselves. Let's see, who's got half a dozen to a dozen billion dollars (US) sitting around to be spent in this time of global economic downturn in order to bail out and save AMD yet again? IBM? Microsoft? Sony? Fat chance! 

Let me put it this way for hard-core nVidia haters: come Christmas time 2011 (maybe even a few months earlier the way things are going) it's either nVidia GPU or nVidia GPU when it comes to your upgrading purposes.


----------



## LAN_deRf_HA (Nov 14, 2010)

newtekie1 said:


> That is one of the things about Furmark I've notices, it doesn't use a whole lot of VRAM.  So if your RAM overclock is slightly unstable it will almost never find it.  That is usually when I fire up Unigine at full tessellation settings to really fill that VRAM up.



It's interesting: with OCCT, the VRAM testing part never found any errors at all. It was letting me crank it all the way up to 4000MHz effective. The OCCT GPU test, though, was able to find VRAM errors, probably because both clocks are really tied together in the 4xx series. The VRAM test must just be showing what the chips can do, not what the controller can handle.


----------



## Steevo (Nov 15, 2010)

AddSub said:


> A total of 307 comments? Make it 308 now. Only a passionate hate of nVidia can make a thread grow this fast and this large. Whatever, this card is pretty much as fast as two 5870 GPU's (5970) as per the following really cool link and all without all the CrossFire scaling issues, since sadly (for CrossFire tech users that is) SLI is still better tech of the two.
> 
> Till the next round then, although I don't think AMD will stick around for that long since their Abu Dhabi sugar daddies... ummm, investors, yeah that's it, "investors", well they aren't doing too well themselves. Let's see, who's got half a dozen to a dozen billion dollars (US) sitting around to be spent in this time of global economic downturn in order to bail out and save AMD yet again? IBM? Microsoft? Sony? Fat chance!
> 
> Let me put it this way for hard-core nVidia haters: come Christmas time 2011 (maybe even a few months earlier the way things are going) it's either nVidia GPU or nVidia GPU when it comes to your upgrading purposes.



I don't hate Nvidia any more than I hate Ford cars and trucks. My company car is a Ford Explorer.


I use what works best for me, and right now it is ATI for the money. 


Back to your comment about AMD: they have paid off millions of their debts; that is why they were not showing a profit. If you understood balance sheets and finance you would understand this.



the54thvoid said:


> If 300W is the PCI-e limit i dont see it being an issue.  Under current protocols for meeting specs, i dont think any game coding would be 'valid' that did that.  The design spec is after all 300W.  Why design games that require more power than a single card can meet by specification.  Given the console domination of gaming design, we're still not even getting DX 11 up to a good standard yet.
> 
> In the future i dont see it happening either as the manufacture processes shrink.
> 
> As for the car analogy, most super sport production cars have speed limiters (150mph for many BMW/Mercedes etc) built in, so we do buy cars with metaphorical chokes built in.



Yep, and some don't have the limiters.

A game doesn't have anything to do with power consumption any more than a movie has to do with power use. A game's requirements don't list how many watts you need to run it. Nvidia chooses a card's power consumption based on the cooler's ability and other specs. They made a card that pulls 350+ watts in a real-world performance test, then put a self-limiting throttle on it to keep it from pulling that amount. They claim they have the most powerful card, and in some games they do, but when pushed to the max by a program designed to do so, it has to self-limit to stay within standards. Like a dragster with a chute that deploys on its own when you go full throttle. Or a block of wood under the pedal.
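The throttle being described here, trimming clocks whenever board power exceeds the 300 W budget, can be sketched as a simple control loop. This is only a hypothetical illustration of the idea, not NVIDIA's actual algorithm; `next_clock` and the toy power model are stand-ins for the sensor polling and clock control that the real driver/firmware does.

```python
# Hypothetical sketch of a power limiter like the one described above.
# The real GTX 580 limiter lives in driver/firmware and only watches
# specific applications; this only shows the control idea.

POWER_LIMIT_W = 300   # PCI-e specification budget
STEP_MHZ = 25         # how much to throttle per sample

def next_clock(clock_mhz, power_w, base_mhz=772):
    """Drop the clock while over budget, creep back toward stock when under."""
    if power_w > POWER_LIMIT_W:
        return clock_mhz - STEP_MHZ               # throttle down
    return min(clock_mhz + STEP_MHZ, base_mhz)    # recover toward stock clock

# Simulate a Furmark-style load where power scales with clock:
clock = 772
for _ in range(5):
    power = clock * 0.45   # toy power model (watts), not real measurements
    clock = next_clock(clock, power)
print(clock)  # 647 - the toy model has throttled below the 300 W budget
```

The point of contention in the thread is exactly this behaviour: the card advertises its stock clock but cannot hold it under a worst-case synthetic load without exceeding the budget.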


----------



## newtekie1 (Nov 15, 2010)

Steevo said:


> They made a card that pulls 350+  watts in a real world performance test.



Furmark is hardly a real-world performance test. It is a torture test more than a benchmark, though it does have a benchmark function built in. And even then it isn't a real-world benchmark; it's a synthetic one.

And according to W1z it doesn't pull 350+ watts; it pulls ever so slightly over 300 W.


----------



## CDdude55 (Nov 15, 2010)

Steevo said:


> I don't hate Nvidia anymore than I hate Ford cars and trucks. My company car is a Ford Explorer.
> 
> 
> I use what works best for me, and right now it is ATI for the money.



It's the way you come across; a lot of your posts read like the stereotypical rabid fanboy. You should really be aware of that, because if you really are unbiased, comments like ''Nfags can leave this thread, I hope Nvidia doesn't drop their prices and you continue to get assraped by them.'' don't actually support your claim of being unbiased.. just saying.


----------



## Frag_Maniac (Nov 15, 2010)

AddSub said:


> ...this card is pretty much as fast as two 5870 GPU's (5970) as per the following...and all without all the CrossFire scaling issues...


Actually, at launch the 580 did have serious scaling issues. It was stomped by the 480 in dual-GPU SLI in almost every test. Here's hoping driver maturation will sort that out, because right now that's the only thing keeping me from buying one, other than maybe trying to hit a holiday sale.


----------



## Steevo (Nov 15, 2010)

CDdude55 said:


> It's the way you come across, a lot of your posts come across as the stereotypical rabid ignorant fanboy. You should really be aware of that , because if you really are non bias, comments like ''Nfags can leave this thread, I hope Nvidia doesn't drop their prices and you continue to get assraped by them.'', aren't actually in your favor of being non bias.. just saying.



Yes, I do get a bit heated when some people piss me off.


Part of the reason is that in the last few years the digital-dream bullcrap all these companies have been promising hasn't come true. I get really pissed when any of them start spewing numbers, then hold users back from enjoying and using the hardware they purchased by limiting it, just to spin the wheel again later with nothing more than a new paint job. ATI has failed me, Adobe has failed me, Canon has failed me, Intel is promising crap it can't deliver, Motorola has failed me, Nvidia has failed to deliver, and Microsoft isn't pushing people toward standardized programming.


I bought a Canon high-def camcorder; it records M2TS, the same as Blu-ray. According to ATI we should be processing that on stream processors with this shiny new.... but first buy this $600 software, then install this patch, then install this set of codecs, then export it to this format, then burn it.

Intel is still fiddle-F'ing around with crap they are too large and clumsy to do right the first three times.

My phone still doesn't support Flash, and they promise "it's coming, just wait." Sounds like Atari, who still haven't fixed their TDU game, or many other issues with games that just get thrown to the side.

Nvidia pushes proprietary crap like PhysX, which works on all of 13 GPU-enabled titles, despite others showing it runs just as fast on a CPU when brought up from antiquated code, besides it now being part of DX11. Also, Nvidia and Adobe seem to be stuck in a 69 swap meet: they disable hardware stream acceleration when an ATI card is present. Some forum members have learned how to bypass it, and wonder of wonders, it still works, using the ATI GPU to perform the calculations rather than CUDA, according to them, as long as it doesn't get shut down.


So this shiny new future is bullshit. It's the same crap we've had from day one. I'm tired of spending thousands of dollars just to be told I still have it wrong.


----------



## motasim (Nov 15, 2010)

Steevo said:


> Yes, I do get a bit heated when some people piss me off.
> 
> 
> Part of the reason why is in the last few years this digital dream bullcrap that all these companies have been promising hasn't come true. I get really pissed when either of them start spewing numbers, then hold users back from enjoying and using the hardware they purchase by limiting it, just to spin the wheel again at a later date with nothing more than a new paint job. ATI has failed me, Adobe has failed me, Canon has failed me, Intel is promising crap they can't deliver, Motorola has failed me, Nvidia has failed to deliver, Microsoft is not pushing people to get standardized programming.
> ...



... well put ... 




AddSub said:


> A total of 307 comments? Make it 308 now. Only a passionate hate of nVidia can make a thread grow this fast and this large. Whatever, this card is pretty much as fast as two 5870 GPU's (5970) as per the following really cool link and all without all the CrossFire scaling issues, since sadly (for CrossFire tech users that is) SLI is still better tech of the two.
> 
> Till the next round then, although I don't think AMD will stick around for that long since their Abu Dhabi sugar daddies... ummm, investors, yeah that's it, "investors", well they aren't doing too well themselves. Let's see, who's got half a dozen to a dozen billion dollars (US) sitting around to be spent in this time of global economic downturn in order to bail out and save AMD yet again? IBM? Microsoft? Sony? Fat chance!
> 
> Let me put it this way for hard-core nVidia haters: come Christmas time 2011 (maybe even a few months earlier the way things are going) it's either nVidia GPU or nVidia GPU when it comes to your upgrading purposes.



... if nVidia becomes the only choice for a discrete GPU (although I know that's never going to happen), I think that'll be the day I switch to Intel integrated graphics, or better still AMD Fusion ... in fact, with its current management, I believe nVidia will eventually be acquired by Intel ... again, I'm neither Red nor Green, but I hate it when idiot fanboys try to transform every single discussion on these forums into an nVidia/ATI trashing circus ...


----------



## the54thvoid (Nov 15, 2010)

AddSub said:


> A total of 307 comments? Make it 308 now. Only a passionate hate of nVidia can make a thread grow this fast and this large. Whatever, this card is pretty much as fast as two 5870 GPU's (5970) as per the following really cool link and all without all the CrossFire scaling issues, since sadly (for CrossFire tech users that is) SLI is still better tech of the two.
> 
> Till the next round then, although I don't think AMD will stick around for that long since their Abu Dhabi sugar daddies... ummm, investors, yeah that's it, "investors", well they aren't doing too well themselves. Let's see, who's got half a dozen to a dozen billion dollars (US) sitting around to be spent in this time of global economic downturn in order to bail out and save AMD yet again? IBM? Microsoft? Sony? Fat chance!
> 
> Let me put it this way for hard-core nVidia haters: come Christmas time 2011 (maybe even a few months earlier the way things are going) it's either nVidia GPU or nVidia GPU when it comes to your upgrading purposes.



Odd. I'm an ATI owner and I've been praising the GTX 580, holding off on buying one so I can gauge the competition when it comes out in December. Your post is ignorant with regard to scaling:

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580_SLI/24.html  GTX 580
1 GTX 580 is 77% of GTX 580 SLI (all resolutions)
http://www.techpowerup.com/reviews/ATI/Radeon_HD_6870_CrossFire/23.html  HD 6870
1 HD 6870 is 73% of HD 6870 CrossFire (all resolutions)

So the 6 series scales better in a dual-GPU config. Granted, on the 5 series the SLI option is better, but the 6 series nailed it well.
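The percentages quoted above (a single card's FPS as a share of the dual-card result) convert directly into a speedup factor, which makes the comparison easier to read. A quick sketch using only the figures already cited:

```python
def dual_gpu_speedup(single_as_pct_of_dual: float) -> float:
    """If one card delivers X% of the dual-card FPS, adding the
    second card multiplied performance by 100 / X."""
    return 100.0 / single_as_pct_of_dual

# Figures from the TPU reviews linked above:
print(round(dual_gpu_speedup(77), 2))  # GTX 580 SLI -> 1.3
print(round(dual_gpu_speedup(73), 2))  # HD 6870 CF  -> 1.37
```

So by these numbers the second 6870 adds roughly 37% where the second 580 adds roughly 30%, which is the scaling gap being argued about (perfect scaling would be 50%, i.e. a 2.0x speedup).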

As for "hard-core NVidia haters" (not a nice phrase to use; hate is such a strong word), I think at Christmas we'll get a fair choice. My personal feeling is that the 6970 indeed isn't faster than a 580. I think if it were faster there would be some leaks from AMD PR saying: look, our card is better, hold off buying that 580. But if it doesn't perform as well, there's nothing to leak; safer to stay quiet.
Hope I'm wrong, because if I'm not, the 580s will go up in price.

I think, though, that you're way off base. Most people do tend to take sides, but "hating" isn't part of it. It says more about your own predisposition against AMD. But at least you wear your colours on your sleeve. It makes you prone to erroneous statements (like the one above about scaling).


----------



## HalfAHertz (Nov 15, 2010)

LAN_deRf_HA said:


> That's a horrible idea. Sometimes it takes a good 5 hours for a game to crash from a bad overclock, OCCT will find it in 10-20 minutes, and then you don't need to worry about finding stability with hours of testing for each individual program. And "the furry donut" is only good for heating up your card or telling you you're way past the stability limit, it's not sensitive enough for real stress testing. At least not with current cards. If that or programs based on it is the only test you use you're not going to have a truly stable overclock, then you'll get crashes in games and blame the games or the drivers when it's really user error.



It's always easier to have a stress-testing program open, don't get me wrong. But what I was trying to say is that it's pointless and needless to run it for 5+ hours. I usually set a clock, test for a couple of minutes, go higher, test for a couple of minutes, and go higher again. The moment I get artifacts, I go back 10 MHz and try again. Once I'm bored of that, I fire up a game, and if it crashes I just go back 10-20 MHz on both RAM and core and try again...
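The stepping routine described above is essentially a linear search for the highest clock that passes a short stress test. A hypothetical sketch (the clocks and the `passes_stress_test` callback are stand-ins for whatever card and tool, OCCT, Furmark, a game, you actually use):

```python
def find_stable_clock(base_mhz, passes_stress_test, step=10, limit=1000):
    """Raise the clock in `step` MHz increments, running a short
    stress test after each bump; stop at the last clock that tested clean."""
    clock = base_mhz
    while clock + step <= limit and passes_stress_test(clock + step):
        clock += step   # no artifacts yet: keep going
    return clock        # highest clock that passed the test

# Example with a fake card that artifacts above 920 MHz:
print(find_stable_clock(772, lambda mhz: mhz <= 920))  # 912
```

As the earlier posts point out, a couple of minutes per step only finds coarse instability; a clock that passes this search can still crash hours into a game, which is why some posters prefer much longer runs.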

I agree that it's unrealistic to think a game can go over the 300 W limit, because of the way game code is written and because of the randomness the human player creates.
Gameplay is always random, which means the environment is always created in real time. Every scene has to go through the entire pipeline and spend a finite amount of time in each step of it.
To be fair, stress-testing tools are more like advanced HPC calculations or even folding, where a specific part is stressed over and over for long periods of time.

Edit:
And if we're talking about corporate takeovers, I think Nvidia will be snatched up first. Not because they're in danger of going under or anything crazy like that, but because it would be a smart purchase. Their cards are doing great in the HPC space, and it would be a smart move for someone like IBM or Oracle (or even HP or Dell) to snatch them up while Nvidia hasn't gained too much momentum and is still cheap. That would let the buyer add Nvidia to their server-farm lineup and have an ace up their sleeve compared to the opposition.


----------



## GC_PaNzerFIN (Nov 15, 2010)

Do I run Furmark 24/7? No.
Does it break if I do run Furmark without the limiter? No.
Does the limiter kick in games even with overvolting and overclocking? No.
Does it prevent someone breaking card if they don't know what they are doing with voltages? Quite possibly.
Card priced right compared to previous gens? Yes.
Fastest single GPU at least for the moment? Yes.
Does it run relatively quiet and at reasonable temps? Yes.
Do I need new card? Yes.

= Ordered GTX 580

Seriously, this bullshit whining about limiters in programs like Furmark is silly. It isn't even a new thing, and even AMD has done driver-level limiters. There is a grand total of 0 people for whom it's an actual problem, except in their heads; it's just yet another thing to bash NV about with no intention of ever even looking in the direction of their cards.

Oh, and just to be sure: I've had over 10 ATI and 10 NV cards in the past 2 years, go figure.

If the 580 isn't for you then please move along; I'm sure the HD 69xx will come and deliver too. But stop this nonsense please.

/End of rant


----------



## Stoogie (Nov 15, 2010)

*Wtf at 5970 scores in WoW?*

compare these two

http://www.techpowerup.com/reviews/ATI/Radeon_HD_6870_CrossFire/18.html

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/20.html

The 5970 is in totally different places in these two tests, while the other GPUs are at the exact same fps.

Are we 100% sure this site is trustworthy?

I looked into the 6870's CF performance in WoW, and the CrossFire score seems to be half that of just one card. I believe this is a mistake on your end, techpowerup, when you benchmarked the 6870 cards.

Please give a logical explanation for the two entirely different results for the same benchmark.


----------



## W1zzard (Nov 15, 2010)

Stoogie said:


> The 5970 is in totally different places in these 2 tests [reviews], while the other GPUs are at the exact same fps.
> 
> Are we 100% sure this site is trustworthy?



don't trust this site!! Read the test setup page before making accusations.


----------



## the54thvoid (Nov 15, 2010)

Can I swear?

BASTARDS!

Overcockers, sorry, OverclockersUK are price gouging for sure. They only have the ASUS board in stock and it's £459.99. They'll keep this up until the HD 6970 comes out, the same way the 6850 and 6870 prices _generally_ increased almost immediately.


----------



## Stoogie (Nov 15, 2010)

W1zzard said:


> dont trust this site!! read the test setup page before making accusations



So the Catalyst 10.10 drivers fixed the issue from the 10.7 drivers?


----------



## Stoogie (Nov 15, 2010)

If so, can you re-benchmark WoW with the 10.10 drivers for 6870's in CF?

Edit: my initial post was copied from an overclockers site; maybe I should've removed the trust bit lol XD my bad


----------



## W1zzard (Nov 15, 2010)

Stoogie said:


> if so can you rebenchmark WoW with 10.10 drivers for 6870's in CF?
> 
> Edit: my initial post was copied from a overclockers site, maybe i shouldve removed the trust bit lol XD my bad



Just go by the 5970 numbers and the 5970 vs. 6870 relative performance in other games.

And please go to that forum and tell them what's going on with the numbers, so there's no need to cry conspiracy.


----------



## Wile E (Nov 15, 2010)

the54thvoid said:


> Odd.  I'm an ATI owner and i've been praising the GTX 580, trying to not buy one so i can gauge the competition when it comes out in December.  Your post is ignorant with regards to scaling:
> 
> http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580_SLI/24.html  GTX 580
> 1 GTX 580 is 77% of GTX 580 sli (all resolutions)
> ...


You can't really compare 580 and 68xx in terms of multi card scaling. A pair of 68xx's will become the bottleneck LONG before a pair of 580's do. The 580's run out of work to do at lower resolutions more quickly than the 68xx's.

At 2560 res is where they are pretty even in terms of scaling, as both setups are still being pushed hard.


----------



## newtekie1 (Nov 15, 2010)

the54thvoid said:


> Odd. I'm an ATI owner and i've been praising the GTX 580, trying to not buy one so i can gauge the competition when it comes out in December. Your post is ignorant with regards to scaling:
> 
> http://www.techpowerup.com/reviews/N...80_SLI/24.html GTX 580
> 1 GTX 580 is 77% of GTX 580 sli (all resolutions)
> ...



Lower-end GPUs scale better; always have, always will.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTS_450_SLI/23.html
1 GTS 450 is 58% of GTS 450 SLI (all resolutions), which is much closer to 100% scaling.

And you have to realize that the higher-performance the cards, the more they will be bottlenecked at lower resolutions. Hell, SLI GTX 580s would probably still be somewhat CPU-bottlenecked at 1920x1200.


----------



## Ross211 (Nov 15, 2010)

Is it just me, or did some retailers drop the price on the GTX 580? Many of them are now starting at $500 instead of around $570 the other day. I'm also still seeing GTX 480s priced higher at some retailers.


----------



## bear jesus (Nov 15, 2010)

the54thvoid said:


> Can I swear?
> 
> BASTARDS!
> 
> Overcockers, sorry OverclockersUK are price gouging for sure.  Only have the ASUS board in stock and it's £459.99.  They'll do this until the HD 6970 comes out.  Same way the 6850 and 6870 prices _generally_ increased almost immediately.



I think you should start looking for a new e-tailer like Scan: Palit 580 for £399.99 and in stock, ASUS 580 at £410 and in stock. Also, the 6870/6850 have dropped between £15 and £25 since launch.


----------



## Red_Machine (Nov 17, 2010)

eBuyer cancelled my order because the 580s have gone out of stock.  That's bullshit.  I pre-ordered within minutes of them becoming available, one should have been allocated to me.

I just grabbed the last one off YoYoTech.  I would have used Scan, but they will only ship to my home, and there's nobody there during shipping hours anymore.


----------



## yogurt_21 (Nov 17, 2010)

Wile E said:


> You can't really compare 580 and 68xx in terms of multi card scaling. A pair of 68xx's will become the bottleneck LONG before a pair of 580's do. The 580's run out of work to do at lower resolutions more quickly than the 68xx's.
> 
> At 2560 res is where they are pretty even in terms of scaling, as both setups are still being pushed hard.



which is true for all flagship cards; the GTX 295 and 5970 are the same. If AMD's upcoming card does manage to match or beat the 580, then I'd imagine poor scaling except at high resolutions as well.



Red_Machine said:


> eBuyer cancelled my order because the 580s have gone out of stock.  That's bullshit.  I pre-ordered within minutes of them becoming available, one should have been allocated to me.
> 
> I just grabbed the last one off YoYoTech.  I would have used Scan, but they will only ship to my home, and there's nobody there during shipping hours anymore.



Not that I know anything about UK retailers, but it cracks me up that the one named eBuyer acted unprofessionally while the one named YoYoTech acted professionally.


----------



## Disruptor4 (Nov 22, 2010)

Would be an excellent upgrade from an 8800GT.  Gonna wait and see what AMD brings out though.


----------



## KashunatoR (Nov 26, 2010)

I'm just telling you this: it's worth every penny. It's visibly better than my previous GTX 480.


----------

