
NVIDIA GeForce GTX 480 Fermi

I've been trying to find anything suggesting that what I'm about to lay out has changed, but I couldn't find any, so here we go:

First of all, I'm not saying that Fermi isn't power hungry or hot, but it's definitely not as dramatic as many have claimed. Most claims are based on Furmark readings, especially the ones accusing Nvidia of lying about TDP, and those readings are completely misleading for any brand-to-brand comparison. The reason is simple: ATI cards throttle down under Furmark to prevent going too high:

http://www.techpowerup.com/index.php?69799

Renaming Furmark no longer helps either, as AMD successfully "fixed" that "problem" back in Catalyst 9.8:

http://www.geeks3d.com/20090914/catalyst-9-8-and-9-9-improve-protection-against-furmark/

But that's not all! HD5xxx cards have hardware protection (throttling when a limit is exceeded) against stress tests like Furmark. That's a good thing for the product, since no game will stress the cards as much as Furmark does, but it makes the numbers totally misleading: Furmark readings don't represent absolute maximum load on these cards the way they do on Nvidia cards.

http://www.geeks3d.com/20090925/ati...on-against-power-virus-like-furmark-and-occt/

That feature in the HD5xxx series is fantastic, don't get me wrong, but the fact remains that such protection invalidates any comparison made under Furmark load.
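For what it's worth, the protection described in those articles amounts to a very simple control loop: sample temperature and power, step the clock down when a limit is crossed, step it back up when things cool off. Here's a minimal Python sketch of that general mechanism, with made-up limits and clock steps (purely illustrative; the real cards do this in firmware/hardware, not in user code):

[CODE]
CLOCK_STEPS_MHZ = [850, 700, 550, 400]   # illustrative clock steps, not real VBIOS values

def throttle_step(temp_c, power_w, current_step,
                  temp_limit_c=100.0, power_limit_w=220.0):
    """Return the next clock-step index (0 = full speed) given current readings."""
    over_limit = temp_c > temp_limit_c or power_w > power_limit_w
    if over_limit and current_step < len(CLOCK_STEPS_MHZ) - 1:
        return current_step + 1          # back off one clock step
    if not over_limit and current_step > 0:
        return current_step - 1          # recover once safely back under the limits
    return current_step

# Toy usage: feed the loop a fake temperature ramp and watch the clock back off.
if __name__ == "__main__":
    step = 0
    for fake_temp in [80, 92, 101, 104, 103, 98, 94, 88]:
        step = throttle_step(fake_temp, power_w=180.0, current_step=step)
        print(f"temp {fake_temp:>3} C -> core clock {CLOCK_STEPS_MHZ[step]} MHz")
[/CODE]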

HD5970 throttling back:

http://www.legionhardware.com/articles_pages/ati_radeon_hd_5970_overclocking_problems,4.html

Hmm, but Nvidia doesn't have that feature, so it will in fact run hot and go WAY past its listed power draw, which equals a lie. Plus that's a 5970 you're talking about. Dual GPU? I'm afraid you're grabbing at straws here, man.
 

It's not a lie, since only under Furmark will it go beyond the specified TDP. AMD added protection precisely so that Furmark cannot stress the GPU to those limits. If anything, it's AMD who is "lying" in that respect, because Furmark is not showing the real maximum consumption of their cards, while it is on Nvidia's. Except that neither is really lying, since a card will never reach those limits in any REAL application. The same goes for Nvidia cards: they will never reach those levels under gaming or CUDA apps or whatever you throw at them, as long as it's not a synthetic app specifically designed to stress the GPU that far.

Any real application does much more than just stress the shader processors: the SPs do their work, but that work has to go somewhere and be processed there too, then go somewhere else, and so on. That's why AMD's raw flop numbers are largely meaningless for real apps; although the SPs can in principle work that hard, the data they generate could never be moved out and put to use at that rate. And that's exactly what Furmark does: stress the shaders without the generated data needing to be useful or to ever leave the SPs.
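To make the "shaders kept busy with nothing but math" point a bit more concrete, here's a deliberately crude toy model of my own (it ignores latency hiding and everything else a real GPU does; the instruction mixes and stall counts are invented purely for illustration). The only point it makes is that a mix with almost no memory traffic keeps the ALUs busy a far larger fraction of the time, which is roughly what drives power:

[CODE]
# Crude toy model: a workload is a mix of ALU ops (1 busy cycle each) and
# memory/texture ops that stall this simplistic in-order pipeline while the
# ALUs sit idle. All numbers below are invented for illustration only.

def alu_busy_fraction(alu_ops, mem_ops, mem_stall_cycles=20):
    """Fraction of total cycles the ALUs spend doing arithmetic."""
    busy = alu_ops
    idle = mem_ops * mem_stall_cycles
    return busy / (busy + idle)

# A Furmark-like synthetic loop: almost pure math, results never leave the shaders.
synthetic = alu_busy_fraction(alu_ops=1000, mem_ops=5)

# A game-like shader: texture fetches and buffer writes, results consumed downstream.
game_like = alu_busy_fraction(alu_ops=1000, mem_ops=150)

print(f"synthetic stress: ALUs busy ~{synthetic:.0%} of the time")
print(f"game-like shader: ALUs busy ~{game_like:.0%} of the time")
[/CODE]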

Taking the above into account, along with the links I posted, which cards are worse under Furmark before throttling kicks in? Well, both the HD4850 and the HD5970 went well above 100°C before throttling kicked in, after just 40 seconds of Furmark! Who knows how far they would go after a few minutes at full load. The GTX480, on the other hand, stays at around 95°C even with no throttling going on, so Nvidia kept the card in check through the hardware itself, while AMD used artificial measures. Both are what I would call legitimate measures, since both rightly "assume" that nobody will reach those limits under any real condition, and after several years of these cards being out there, it's obvious they are right. HD4850s have not died while gaming, right? Fermi won't either.

Now, if you have to run Furmark all day... yeah, you'd need a card that artificially cripples its own performance to keep from burning up inside your PC. Maybe Nvidia should add similar protection in their next drivers? Would you all be happy? I doubt it. In fact, I bet that even though AMD did it first AND is still doing it, if Nvidia did the same in their next drivers and Fermi's power consumption dropped, especially in Furmark, we would see plenty of complaints about how Nvidia is cheating, because, well, it's Nvidia. Sad but true.
 
Do you know why FurMark, along with other synthetic benchmarks, is one of the best ways to determine a GPU's power consumption? So far it's the only way to measure a card's true power consumption in a consistent manner, because it applies a constant load that won't change or surprise you by varying the GPU's stress level the way real-world gaming does. That is the one thing synthetic benchmarks are genuinely good at, and it's the best way to measure a card's performance against previous generations.

I believe the results speak for themselves ;)
While there are a few games in which the GTX 480 was faster, there are many resolutions in our test games where the HD 5870 comes out on top. Clearly, the GTX 480 is not the world's fastest single-GPU card.

Charts referenced: 3DMark06 Canyon Flight test (1,280 x 1,024, 0xAA, 16xAF) peak temperature, and power consumption (idle and gaming):

http://www.bit-tech.net/hardware/2010/03/27/nvidia-geforce-gtx-480-1-5gb-review/10

QUOTE:
We've found that synthetic benchmarks such as FurMark thrash the GPU constantly, which simply isn't reflective of how a GPU will be used when gaming.

It's such a hardcore test that any GPU under test is almost guaranteed to hit its thermal limit, the mark at which the card's firmware will kick in, speeding up the fan to keep the GPU within safe temperature limits.

As the test is so demanding and GPU limited, we've set 3DMark to run the test at 1,280 x 1,024 with 0xAA and 16xAF (enabled in the driver), constantly looping the test for thirty minutes and recording the maximum power consumption and GPU Delta T (the difference between the temperature of the GPU and the ambient temperature in our labs).
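The logging half of that methodology is straightforward to sketch: loop the benchmark, sample power and GPU temperature, keep the maxima, report the delta against ambient. In the rough Python outline below, read_system_power_w and read_gpu_temp_c are hypothetical stand-ins for whatever wall-socket meter and monitoring tool a reviewer actually reads from:

[CODE]
import time

def record_max_power_and_delta_t(read_system_power_w, read_gpu_temp_c,
                                 ambient_temp_c, duration_s=30 * 60,
                                 sample_interval_s=1.0):
    """Sample power and GPU temperature while the benchmark loops in the
    background; return peak system power and peak GPU delta-T over the run."""
    max_power_w = float("-inf")
    max_gpu_temp_c = float("-inf")
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        max_power_w = max(max_power_w, read_system_power_w())
        max_gpu_temp_c = max(max_gpu_temp_c, read_gpu_temp_c())
        time.sleep(sample_interval_s)
    return max_power_w, max_gpu_temp_c - ambient_temp_c

# Usage sketch with fake readers in place of a real power meter / monitoring tool:
if __name__ == "__main__":
    import random
    fake_power = lambda: 350 + random.uniform(0, 32)   # invented readings
    fake_temp = lambda: 90 + random.uniform(0, 4)
    peak_w, delta_t = record_max_power_and_delta_t(
        fake_power, fake_temp, ambient_temp_c=22.0,
        duration_s=3, sample_interval_s=0.5)
    print(f"peak system power ~{peak_w:.0f} W, GPU delta-T ~{delta_t:.0f} C")
[/CODE]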
 
There's no denying the facts; it's evident that the GTX 480 and 470 are hot, power hungry, and sound like a jet engine. We've got more than 25 reviews proving this with different test methods. There's no point defending something that cannot be defended. All we can do right now is live with the results and wait for a possible Fermi refresh, if and when it gets released, but we won't see anything for another 6+ months IMO. Until then, everybody enjoy your nice, cool-running HD 5800 series cards, O.K.?
Thermals and Power Consumption
Living with a card's thermal characteristics, power consumption and noise levels is just as important as its graphics horsepower, and it's here that the GeForce GTX 480 really runs into trouble.

Power consumption at idle was the highest we’ve seen from a single GPU card at 186W system power draw, 18W more than the HD 5870. At load though it entered a whole new dimension for a single GPU card sucking down a massive 382W while looping the canyon flight demo in 3DMark 06. That’s a full 106W more than the Radeon HD 5870 in the same test, 30W more than the dual GPU Radeon HD 5970 and only 6W less than the dual GPU GeForce GTX 295!

Sucking down all that power has clear consequences for the card’s thermal output, and while the GTX 480 idles at a balmy 20°C above room temperature in our 22°C air conditioned labs with a low and utterly un-intrusive fan noise to match, things change for the worse at full load. The GPU temperature rapidly rises to a heady 94°C – 72°C above the ambient room temperature, where the fan speeds up to whatever speed necessary to keep the GPU from getting any hotter.
The result is a graphics card that runs extremely hot at full load, and that, coupled with the unique external heatsink, means it could easily be rebranded the GTX 480 Griddle Edition - the heatsink in our test rig, which, bear in mind, is a roomy Antec Twelve Hundred, hit 67°C, which is enough to burn your skin. Nvidia recommends spacing the cards at least two expansion slots apart in an SLI configuration, and even then we suspect there will be a raft of watercooled editions of the GTX 480 to counteract the massive thermal demands of the GPU.

Adding insult to injury the GTX 480 is also extremely noisy when under load, easily matching the racket of the HD 5970 and comparable to a DVD-ROM drive at full speed when striving to keep the GPU at 94°C. The 65mm paddle fan was easily the loudest component in our Antec 1200 test chassis and was clearly audible from 6ft away through a closed side panel.
http://www.bit-tech.net/hardware/2010/03/27/nvidia-geforce-gtx-480-1-5gb-review/12
And I would have to agree 100%
The higher price, the 100W of extra power consumption, scorchingly hot temperatures and a much noisier stock cooler are all extremely detrimental to its desirability. The HD 5870 remains a far better choice if you're a gamer; while we've yet to see how the GTX 480 performs with CUDA apps and Folding, at this stage Fermi looks like a flop.:D

GTX 480
Performance - 9/10
Features - 6/10
Value - 6/10
Overall - 6/10
 
@Benetanegia: Haha, well, let's see what I prefer... to have a fried GPU, or to be "cheated" and have my GPU throttle down to safe temps... hmm, what a hard decision. We've had throttling CPUs since the P4 days and nobody complained; I don't think anyone ever will, considering the consequences otherwise. I think Nvidia's solution is simpler but just as effective.

About Furmark - yes, it is a power virus because, just as you said, it stresses the SPs beyond what they were meant to do by bypassing the rest of the GPU pipeline and overloading them with calculations that aren't useful in any real-life scenario. You wouldn't see that in any other GPU application.

And why do you insist on saying that ATI's GFLOPS numbers are wrong? I remember we had a similar discussion before. They aren't wrong; the only problem is that you'd need smart coding to get at the low-level hardware functionality. Remember, the SPs are in groups of 5 - 1 for complex calculations and 4 for simple ones. But if you only want to do double precision calculations, you can gang the 4 simple ones together and simulate a second complex SP, effectively reducing the SP count to 640 - which is why the DP GFLOPS figure is only 2/5 of the SP GFLOPS figure. The numbers stated by ATI are indeed achievable, but only with smart coding aimed specifically at their architecture.
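Just to put that grouping argument into numbers (this follows the post's own reasoning, using reference shader counts and clocks; actual double precision rates on these parts depend on the operation and may well differ):

[CODE]
# Back-of-the-envelope version of the grouping argument above, following the
# post's reasoning rather than an official spec sheet.

def sp_peak_gflops(shaders, clock_mhz):
    # single precision: every ALU does a multiply-add (2 flops) per clock
    return shaders * clock_mhz * 2 / 1000.0

def dp_peak_gflops_grouped(shaders, clock_mhz):
    clusters = shaders // 5          # VLIW5: 1 complex + 4 simple ALUs per cluster
    dp_units = clusters * 2          # the 4 simple ALUs gang up into a second DP unit
    return dp_units * clock_mhz * 2 / 1000.0

for name, shaders, clock_mhz in [("HD 4870", 800, 750), ("HD 5870", 1600, 850)]:
    sp = sp_peak_gflops(shaders, clock_mhz)
    dp = dp_peak_gflops_grouped(shaders, clock_mhz)
    print(f"{name}: SP ~{sp:.0f} GFLOPS, DP per the 2/5 argument ~{dp:.0f} GFLOPS")
[/CODE]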

Edit: SuperXP, stop spamming your negative propaganda :p I remember that the single-slot HD4850s and the original 4870X2 also ran at 90+ degrees and nobody complained as much...
 
sometimes not disengaging safety features is a good thing...
[attached image: fernobyl.jpg]


But that's not all! HD5xxx cards have hardware protection (throttling when a limit is exceeded) against stress tests like Furmark. That's a good thing for the product, since no game will stress the cards as much as Furmark does, but it makes the numbers totally misleading: Furmark readings don't represent absolute maximum load on these cards the way they do on Nvidia cards.
 
@Benetanegia: Haha, well, let's see what I prefer... to have a fried GPU, or to be "cheated" and have my GPU throttle down to safe temps... hmm, what a hard decision. We've had throttling CPUs since the P4 days and nobody complained; I don't think anyone ever will, considering the consequences otherwise. I think Nvidia's solution is simpler but just as effective.

I have no option but to wonder why you take everything personally... I talk about ATI fanboys in one post and you reply with what you think. I talk about how people would complain and you reply with what you'd prefer. I'm thinking of an F word, and it's not f--k.

And why do you insist on saying that ATI's GFLOPS numbers are wrong? I remember we had a similar discussion before. They aren't wrong; the only problem is that you'd need smart coding to get at the low-level hardware functionality. Remember, the SPs are in groups of 5 - 1 for complex calculations and 4 for simple ones. But if you only want to do double precision calculations, you can gang the 4 simple ones together and simulate a second complex SP, effectively reducing the SP count to 640 - which is why the DP GFLOPS figure is only 2/5 of the SP GFLOPS figure. The numbers stated by ATI are indeed achievable, but only with smart coding aimed specifically at their architecture.

I never said they were wrong. They are just not achievable in any real application; not even AMD's internal apps get much beyond 75% or so, and that's in very, very specific apps. Why do I insist? I wasn't insisting on the matter; I mentioned it because normal usage is way below the raw "potential", and that's why under normal usage an HD4850 would not go much higher than 90°C, while under Furmark, where artificial stressing pushes usage close to that potential, well, we don't know how high it could reach; all we know is that it would pass 105°C and get fried.

The reason is simple, and it's what I was getting at when I mentioned it: typical AMD shader usage is around 40%, which is about 7% higher than the usage found under SGEMM. That's real usage. Under Furmark it probably gets close to 100%; to be honest I have no idea, and probably nobody knows the exact number or even an approximation except AMD. From 33-40% to 100% is a long way, though, more than enough to put temps through the roof.

As for the performance side of things, if it can't be achieved in normal scenarios, it can't be achieved, period. Sure, you can create an app that uses the 4 simple units and the 1 complex one and so on, but that's not an application, that's a benchmark, a demo, a showcase. No real application (not games, not transcoding, not even SGEMM...) comes close to being able to do that; real apps need what they need at the exact moment they need it, and AMD's architecture simply isn't suited for that. Period, you can argue as much as you want.
 
Uhm, I was actually trying to agree with you in the above post. I was basically saying that I don't care exactly how they prevent a GPU from frying - be it a throttling function or a powerful fan - as long as my expensive GPU doesn't turn into an expensive paperweight :o

I'm no mathematician or coder, so I don't really know how hard it is to write code that utilizes all that hardware, but I'll tell you one thing - there are many people much smarter than you and I who can and will, given enough incentive to do so...
 
I'm no mathematician or coder, so I don't really know how hard it is to write code that utilizes all that hardware, but I'll tell you one thing - there are many people much smarter than you and I who can and will, given enough incentive to do so...

You don't get what I'm trying to say, but that's probably my fault; it's usually difficult for me to explain complicated things like this in a foreign language. It's not only a question of whether you can use all the shaders; it's that using all those shaders doesn't always bring a real benefit. Take games, for example: average shader (ALU) usage has been established (Beyond3D, devnet...) to be around 3.8 out of 5 on AMD's SPs, which is 76%, but even that number isn't exact or meaningful on its own.

Let me explain. Sure, 76% of the shaders are working, but not all of them are producing genuine results; many of them are duplicating work (I couldn't find a better word than "genuine"). This becomes obvious as soon as you realize that 76% of 1.2 TFLOPS (HD4870) is 912 GFLOPS, way more than the theoretical 708 GFLOPS of a GTX285 or the 536 GFLOPS of a GTX260, and those aren't 100% efficient either, not at all. Basically, the HD4870 is calculating twice as much for the same task; otherwise, if every flop were genuine, it would mean that roughly 900 GFLOPS was required for a given level of performance, and the GTX cards would be seriously shader-bottlenecked.

What most probably happens is that, as I said, the AMD card duplicates many of the calculations, and that actually makes sense if you think about it: when you have plenty of spare ALUs and your bandwidth is more limited, it doesn't pay to store some results in VRAM, even if you know you will need them later, because you also know you will have spare ALUs later, so you just recalculate things (most things) as they come. Nvidia, on the other hand, prefers efficiency over raw throughput, so they store the output, and as a result they need better caches and interconnects. As I have always said, two different ways of achieving the same thing.
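The arithmetic in that paragraph is easy to reproduce; here's a small sketch, assuming the publicly listed reference shader counts and clocks and the ~76% average ALU occupancy figure quoted above:

[CODE]
# Theoretical peak = shaders x shader clock x 2 (a multiply-add counts as 2 flops).
# Shader counts and clocks are the reference specs for the cards named above.

def peak_gflops(shaders, shader_clock_mhz, flops_per_clock=2):
    return shaders * shader_clock_mhz * flops_per_clock / 1000.0

cards = {
    "HD 4870 (800 SPs @ 750 MHz)":      peak_gflops(800, 750),    # ~1200 GFLOPS
    "GTX 285 (240 SPs @ 1476 MHz)":     peak_gflops(240, 1476),   # ~708 GFLOPS
    "GTX 260-216 (216 SPs @ 1242 MHz)": peak_gflops(216, 1242),   # ~536 GFLOPS
}

# The point being made: ~76% average ALU occupancy on the HD 4870 still means
# ~912 GFLOPS issued, comfortably above the GTX 285's theoretical peak, so not
# all of those issued operations can be unique, useful work.
occupancy = 0.76
for name, peak in cards.items():
    print(f"{name}: peak {peak:.0f} GFLOPS, "
          f"{occupancy:.0%} occupancy -> {peak * occupancy:.0f} GFLOPS issued")
[/CODE]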
 
We relayed this information on to NVIDIA and they informed us that our dual monitor idle temp problem will be solved by another new VBIOS that will be released this week that will ramp up the fan speed starting in the 70s instead of the 80s.

yeah, wonderful solution... Ramp that fan up, boys!
 
Thanks, kids. Thanks for pissing off the admin so much he's leaving TPU. Hope you're all proud of yourselves.
 
No shit, way to go. Bash on the reviewer and look at what happens.
 
Great review.
I was thinking of getting one; now I just may :D
 
Honestly, after seeing the review, I have no idea which card I want my girlfriend to buy me.

I really like the non-reference-cooler 5870s, but it looks like the GTX 480 does just about as well, and we all know how the driver game goes.

btw, thanks w1zzard :D
 
Stop the retarded arguing. If the negativity continues, I'll be handing out major custom infractions.
 
You don't get what I'm trying to say, but that's probably my fault; it's usually difficult for me to explain complicated things like this in a foreign language. It's not only a question of whether you can use all the shaders; it's that using all those shaders doesn't always bring a real benefit. Take games, for example: average shader (ALU) usage has been established (Beyond3D, devnet...) to be around 3.8 out of 5 on AMD's SPs, which is 76%, but even that number isn't exact or meaningful on its own.

Let me explain. Sure, 76% of the shaders are working, but not all of them are producing genuine results; many of them are duplicating work (I couldn't find a better word than "genuine"). This becomes obvious as soon as you realize that 76% of 1.2 TFLOPS (HD4870) is 912 GFLOPS, way more than the theoretical 708 GFLOPS of a GTX285 or the 536 GFLOPS of a GTX260, and those aren't 100% efficient either, not at all. Basically, the HD4870 is calculating twice as much for the same task; otherwise, if every flop were genuine, it would mean that roughly 900 GFLOPS was required for a given level of performance, and the GTX cards would be seriously shader-bottlenecked.

What most probably happens is that, as I said, the AMD card duplicates many of the calculations, and that actually makes sense if you think about it: when you have plenty of spare ALUs and your bandwidth is more limited, it doesn't pay to store some results in VRAM, even if you know you will need them later, because you also know you will have spare ALUs later, so you just recalculate things (most things) as they come. Nvidia, on the other hand, prefers efficiency over raw throughput, so they store the output, and as a result they need better caches and interconnects. As I have always said, two different ways of achieving the same thing.

OK, now I see what you mean. So basically, if I got what you're saying, the built-in scheduler sucks and does some of the calculations multiple times, thus wasting SP cycles?
 
Hm... long story? I hope it was not because of the 9.12!

When I first saw the review I was like "9.12? WTF?!" But then I thought, well, he has lots of cards and lots of tests, and they go way back, so there's no reason to bother him about it, knowing there would be kids who would do just that, and I hoped that somehow 10.3 would be released... and it was! :) Now I need one for the 5970... I'll probably find something out there in time.
 
Am I the only person who gets this?

The March 26th "event" was another conundrum** to help Nvidia further delay the REAL release of the retail 400 series cards, so that Nvidia could buy more time to fix the issues at hand.

The "review" samples that were given out were known to be faulty in several ways, but they gave everyone something to talk about and shut them up about "when is Fermi coming out? It's 6 months late."

Yes, maybe it looks "bad" that the cards are hot and draw a ton of power, but it alleviates one problem while starting another.

The big thing I see here is: NO ONE CAN EVER BE HAPPY ABOUT A DAMN THING.

If the GTX 480 were 30x faster than the 5970, you would still bitch because the price is too high. But why is the price so high? Because it's bleeding-edge technology, and that's the price you pay.

I notice a lot of you guys bitching about "well, you shoulda used the 10.x driver for ATI... it's better." Yes, perhaps it is, but why is that? Because ATI has had time to fix and optimize their drivers for better performance. Has Nvidia had time to do that? NO. Does it cross your mind that perhaps the older driver was used so that both ATI's and Nvidia's offerings could be compared as they were released?

If you were comparing two brand-new cars off the showroom floor, would it be "fair" to let company A fix a bunch of their problems before the comparison while company B is forced to be judged on what they brought to the plate as it stands? NO.

These reviews are done with immature cards and immature drivers. Why do you expect so much from them?

Perhaps I'm being an asshole here, but I just want to remind you to take these early reviews with a grain of salt.

If you think you can do a better review, then do so yourself. Oh wait... you can't... you don't have any GTX480s or GTX470s.

Give the man some respect.

**A conundrum is a logical postulation that evades resolution; an intricate and difficult problem.
 

fit, don't waste your time; some people will never change, and that's a fact. Look at the world we live in today, people bitch about everything.

And he does retest every time he does a new review, so I don't know what all the crying was about. If I had the money I would have two of every card to play with, but I don't :ohwell:
 