The GTX 480/470 performance thread

@Bene: You raise some valid points, but still, we're talking about engineers with 10 to 15 years of experience. They should have known better and acted accordingly.

Anyway, I expect a GTX 490, which will consist of two 470s with another cluster disabled (416 SPs per card, 832 total, yay) and underclocked to ~581MHz. It should perform somewhere in the ballpark of a 5870 CrossFire setup and land within ±10% of a GTX 480's power draw. Nice way to cement the performance-king title and salvage defective dies at the same time.
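
Quick sanity check on that SP math, as a small Python sketch (GF100's published layout is 16 clusters of 32 SPs for 512 total; the 13-cluster configuration is just my speculation):

# Back-of-envelope check of the cluster counts speculated above,
# assuming GF100's published layout: 16 SMs x 32 cores = 512 SPs.
CORES_PER_SM = 32
GTX470_SMS = 14                  # GTX 470 ships with 14 of 16 SMs enabled (448 SPs)

gtx490_sms = GTX470_SMS - 1      # one more cluster disabled, per the guess above
per_card = gtx490_sms * CORES_PER_SM
print(per_card, 2 * per_card)    # -> 416 832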

I bet 1 internet cookie on that!
 
I think the only thing 'wrong' with it is that the chip design is too advanced for the process tech it's manufactured on. 3.2bn transistors is an awful lot, and it appears to be maxing out the 40nm process it's built on. I reckon the die-shrunk version will be much better. :)

I'll definitely be waiting for the 'GTX485' version. In the meantime, I'll likely be getting one of the lower-end Fermi derivatives in a couple of months, so that I can play with DX11 & 3D Vision together. (I already have 3D Vision).

Catalyst 10.3 has a stereo 3D option. Has anyone tested it?

@Bene: You raise some valid points, but still, we're talking about engineers with 10 to 15 years of experience. They should have known better and acted accordingly.

The CEO didn't give the engineers enough time; that is the problem.

The GTX 480/470 is like the FX 5800: it's loud, it's hot, and it's a failure.
 
@Bene: You raise some valid points, but still, we're talking about engineers with 10 to 15 years of experience. They should have known better and acted accordingly.

Anyway, I expect a GTX 490, which will consist of two 470s with another cluster disabled (416 SPs per card, 832 total, yay) and underclocked to ~581MHz. It should perform somewhere in the ballpark of a 5870 CrossFire setup and land within ±10% of a GTX 480's power draw. Nice way to cement the performance-king title and salvage defective dies at the same time.

I bet 1 internet cookie on that!

Not unless they get that heat under control... I don't see them putting anything better than the 480 out for at least 9 months.

Sub-$300 cards are what I expect from NV in the next wave...

ATI will probably release some 2GB version of the 5800 series... big deal.

The GPU game doesn't seem to have much to look forward to for a while... I'll probably still be playing BC2 for several months, and the next game to look out for is RAGE.

I don't know why people are dying to have a 200°F card in their system and pay $500 for it... I still cannot understand that mentality, especially with summer upon us.
 
@Bene: You raise some valid points, but still, we're talking about engineers with 10 to 15 years of experience. They should have known better and acted accordingly.

Perhaps it was Jen-Hsun Huang who pushed the idea of the Fermi we have now.
 
Maybe NVIDIA didn't pay them, and they acted accordingly, haha :D
 
Regarding your whole post: the Fermi design had better be much faster than ATI's current GPU. ATI has been refreshing the same old chip design for ages now, whereas NVIDIA's Fermi is based on a "Brand New Architecture" built from the ground up. And new architectures should be dramatically faster than what is out today; we're talking 60% up to 150% performance improvement over older designs.

Like R600, you mean? Let's compare: -40% versus +20% over the competition. Hmm, would you say the R600 architecture was broken, when it's mostly the same architecture we find in Cypress? I don't know what exactly should happen, but I do know what usually happens: new architectures usually underperform, especially in the performance-per-watt department, because the underlying architecture (on top of which the shaders and clusters are going to be put) is usually overkill for that generation's performance and "active units". Maybe you don't know exactly how tall your building will be, but you know you are going to add floors over time, so you'd better make the foundations solid (i.e. overkill for a 4-story building, but enough for when the building is 64 floors high). Sometimes new architectures are better; they manage to excel because they overcome a serious bottleneck, and that more than makes up for the "unnecessary" foundations, like G80 or ATI's 9000 series. But that is not the norm, not at all.

And you know what? Fermi was designed for DX11 and tessellation, and it is faster, no, much faster, when DX11 is actually used. That's what it was designed to do and that's what it does, but oh, I forgot how irrelevant DX11 is now that NVIDIA is the better one at it. But hey, I'm biased, me and only me.

What I see is a GTX 480 that, regardless of its power and heat issues, performs quite miserably overall, not to mention it's a so-called next-gen design. Is 40nm problematic? Yes indeed it is, but ATI's HD 5800 series are on the same process and outperforming anything NVIDIA has to offer, so I wouldn't put the blame on 40nm but rather on how damn COMPLEX NVIDIA chose to make Fermi.

It's fine to go nuts with a GPU design, but it also has to work right, and so far a Fermi running at 97°C and sucking back over 300W of power is not running right.

Personally, I think NVIDIA's CEO is to blame, because he stubbornly refused to listen to his CTO. But that is a different story.

Regarding performance, it's far, far, far from performing miserably; you seem to forget that it is faster than Cypress. Anyhow, I find it funny what strong conclusions people draw from pre-release performance. Seriously, it's amazing, especially considering that in that comparison Catalyst 10.3 was used (not without its controversy), a driver that supposedly increases performance by as much as 15%, released six months after Cypress launched. And there have been previous drivers with similar improvements, and it's the 4th generation of the same architecture, not a new one at all, all of which have seen their drivers mature. But on the other hand, the first performance figures, based on pre-release drivers for a completely new architecture that functions quite differently than any previous GPU, are etched in stone, with no possibility of improvement ever. Not only that, but they're a clear sign of architecture failure. Never mind.

Regarding manufacturing, of course NVIDIA's design is complex and it shares its part of the blame, but don't forget that AMD had to retire a 140mm² chip because it couldn't be manufactured, and despite having been producing Cypress since June, three months later, at release, they had fewer than 20,000 cards available. In the next four months, until January, they had only managed to produce and sell 300,000 cards. That is the truth: the 40nm process is absolutely fucked up. And yes, AMD found a workaround, not a fix, a workaround, and they were lucky enough that it worked, although they never managed to fix the RV740, and Cypress has sold much less in six months than what the HD 4850 sold in a single month. NVIDIA has not been so lucky (I say this with a little irony, btw), and on top of that they didn't do as much homework as AMD, but the fact remains that they didn't have to, since that's TSMC's job. That's why foundry companies exist, that's why they run tests and speak with chip designers about what can and cannot be done, that's why they sign contracts, and that's why two years ago TSMC was already advertising on their homepage that 40nm was on schedule, that they expected yields to be above 80% by year's end, etc. etc. etc.
 
This might sound stupid, but if the 5870 is 10% slower, would you say the 480 is 10% faster? I get confused when it comes to relative percentages.

[Image: perfrel_1920.gif — relative performance summary chart]
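
For what it's worth, the two statements aren't interchangeable, because the base of the percentage changes. A quick Python sketch of the arithmetic (the 10% figure is just the example from the question above):

# If the 5870 is 10% slower than the 480, the 480 is ~11.1% faster,
# not 10% faster -- the comparison base flips.
gtx480 = 100.0            # arbitrary score
hd5870 = gtx480 * 0.90    # "10% slower" = 90% of the 480's score

faster = (gtx480 - hd5870) / hd5870 * 100
print(f"480 is {faster:.1f}% faster")   # -> 480 is 11.1% faster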
 
When Charlie writes one of his "I told you so!" articles, he always backs it up with many, many links to his previous articles that show him predicting all that has come true. This time is no exception. He's not "semiaccurate" at all.

But when he is wrong (which is on NUMEROUS occasions), he mysteriously never makes another peep about it.

He is wrong more than he is right; the proof was already posted on the previous page, and again, TPU doesn't even allow him as a news source because of his past inaccuracies. Seriously, look through the news archives.
 
No, he's not semi-accurate; he's rarely accurate. Not to be confused with never accurate. I suppose even with a blindfold you could hit the dartboard if you throw enough darts.
 
I'd also like to point something out to some of the Fermi detractors: you do realize that a 4870X2 still consumes more power than Fermi, right?

Now, that doesn't make Fermi's power consumption a non-issue, but it's not quite as bad as it could be. Yeah, you get a lot less consumption with the HD 5k series, but you also get less performance.
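
To put that trade-off in numbers, here's a rough performance-per-watt sketch in Python; the TDPs are the official board figures, while the performance scores are purely illustrative placeholders (roughly the ~10% gap discussed above):

# Rough perf-per-watt comparison. TDPs are the official board power
# ratings; the "perf" scores are illustrative placeholders only.
cards = {
    "GTX 480": {"perf": 110, "tdp_w": 250},
    "HD 5870": {"perf": 100, "tdp_w": 188},
}
for name, c in cards.items():
    print(f"{name}: {c['perf'] / c['tdp_w']:.3f} perf/W")
# -> the 5870 comes out well ahead per watt, even as the slower card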

Fermi isn't a total failure, but it does need some work. I think I'll wait for the companies to do their refreshes this time around. I'm not completely satisfied with either camp's choices at the moment.
 
Yeah, but you're comparing a card that has 2 GPUs to one with a single GPU, and that one is just notorious for its high power use; it's still two against one.

Fermi isn't actually that bad of a card. The heat, in my mind, is a non-issue, since the fan on it is so powerful that I don't think it will overheat unless you covered it; if the fan ran at 100% while the card was loaded, the temps would be a lot lower, but it would probably sound like a monster. Power consumption also isn't "that" bad; it could've been worse. I think it's the fault of NVIDIA and TSMC not having enough experience with 40nm, unlike AMD, who obviously got something down, though they did have to modify the original Cypress design, if I remember correctly.

I think we have the 8800GTX in our minds, and I don't think NVIDIA will ever topple what they did with that card: make a card that beats out the competition for so long. So when they make a new-architecture card, people remember the 8800GTX, and the reaction is almost instant: this sucks, it could've been much better. But it's still progress, and we should be happy with this instead of nothing, because if NVIDIA came out and said, ". . . . . we got nothin," AMD would almost definitely ramp up their prices.
 
I would like someone to enlighten me as to what makes the GTX 480 so much faster than the HD 5870 in Metro 2033. I thought it would be tessellation (in fact, I do think tessellation plays a role in this), but when you look at the DX10 performance, the GTX 480 still outperforms the HD 5870 by a strong margin.
http://www.pcgameshardware.com/aid,...Fermi-performance-benchmarks/Reviews/?page=13

Don't forget this game was optimized specifically for NVIDIA video cards, mainly the GTX 470 & GTX 480. Knowing NVIDIA, they would have paid them off to make it play really well on their hardware and run like crap on ATI hardware, across all DX9, DX10 & DX11 settings. That's how NVIDIA likes to play: dirty :banghead:
 
Is it weird I still want a 480? I want that extra memory. I like that at idle it's quieter than my 260, and with my PSU it won't be an issue. Though it would bug me that my friend's $300 5850 overclocked to 1000MHz would be a few frames faster in Crysis. I have to say a big WTF to NVIDIA there.
 
Wow, all the arguing. Let's look at an NVIDIA-centric forum and see what they say. I mean, those are their customers, right? OK, let's have a look here. Looks like they think the "GF" in GF100 means something else. By the look of that pic, I think they misspelled the name as well.

Edit:
Video using the card in Metro 2033
 
Remove the cap of the GF100 GPU... then start custom cooling! Let that be the start... because this chip IS fast... let's get it on!!!!
 
Yeah, but you're comparing a card that has 2 GPUs to one with a single GPU, and that one is just notorious for its high power use; it's still two against one.
Like it or not, the GF100 is hotter, less energy-efficient and slower than the 5970.
It is the same thing as the GTX 295 vs. the 5870, only this time the GF100 loses in pretty much every aspect, while the 5870 lost only in performance.
Credit where credit's due: the 5970 is still the fastest card on the planet, just as the GTX 295 was.
 
Wow, all the arguing. Let's look at an NVIDIA-centric forum and see what they say. I mean, those are their customers, right? OK, let's have a look here. Looks like they think the "GF" in GF100 means something else. By the look of that pic, I think they misspelled the name as well.

Edit:
Video using the card in Metro 2033

lol

It's actually pretty true; there are a lot of reviewers saying that the 480 gets hot enough to burn you even if you just tap it, including the admin here.

[Image: thermi.jpg — "Thermi": the card pictured grilling food]

At least you can eat while you play now :nutkick:

Can you imagine when the heatsink gets dirty, though...
 
I read a review where the guy said it was so hot the fan ramped up so fast it made objects on the desk vibrate and move towards the case... he said the gaming experience was not too enjoyable... he had to use gloves to remove the card after a 20-minute cool-down... WTF!

...I need to delete this subscription... I cannot stop talking about how disappointing this crap really is...

I mean, c'mon... idling at 90°C when two displays are connected... bah... I am just baffled as to how they can release this shit and think it's the best thing ever... it's retarded!

Lol, I'd better not come back to this thread before I get banned for flaming! I had hope for Fermi... so disappointing.
 
I won't be joining NVIDIA any time soon or in the near future. ATI ATW :D

Awesome picture, 'cause it's so true :laugh:
 
Who is gonna buy NVIDIA when they launch it? It would be insane to buy a card that is going to burn your rig to ashes!! What was NVIDIA thinking? This is not the way to win the war but to lose it by burning the HOUSE DOWN!! :D
I'm an enthusiastic gamer, not a cook.
 
Yeah, but you're comparing a card that has 2 GPUs to one with a single GPU, and that one is just notorious for its high power use; it's still two against one.

Fermi isn't actually that bad of a card. The heat, in my mind, is a non-issue, since the fan on it is so powerful that I don't think it will overheat unless you covered it; if the fan ran at 100% while the card was loaded, the temps would be a lot lower, but it would probably sound like a monster. Power consumption also isn't "that" bad; it could've been worse. I think it's the fault of NVIDIA and TSMC not having enough experience with 40nm, unlike AMD, who obviously got something down, though they did have to modify the original Cypress design, if I remember correctly.

I think we have the 8800GTX in our minds, and I don't think NVIDIA will ever topple what they did with that card: make a card that beats out the competition for so long. So when they make a new-architecture card, people remember the 8800GTX, and the reaction is almost instant: this sucks, it could've been much better. But it's still progress, and we should be happy with this instead of nothing, because if NVIDIA came out and said, ". . . . . we got nothin," AMD would almost definitely ramp up their prices.
Doesn't matter. It's a single card. That's all that matters. It plugs into a single PCIe x16 slot and physically takes the space of two slots, just like Fermi.

Don't forget this game was optimized specifically for NVIDIA video cards, mainly the GTX 470 & GTX 480. Knowing NVIDIA, they would have paid them off to make it play really well on their hardware and run like crap on ATI hardware, across all DX9, DX10 & DX11 settings. That's how NVIDIA likes to play: dirty :banghead:

:shadedshu Wow, this one gets old. nVidia DOES NOT make games run like crap on ATI hardware. Not even ATI claims that BS. These are, and always have been, unsubstantiated claims by ATI fanboys. Don't you think ATI would have them in court by now if that were truly the case? nVidia helps devs optimize, yes, but ATI has the same opportunities to do so, and chooses not to on most occasions.

Although, I hear ATI has finally started rolling out their own answer to TWIMTBP, and it's about damn time too. It's been how many years that they've continued to let nV have that advantage over them?
 
lol

It's actually pretty true; there are a lot of reviewers saying that the 480 gets hot enough to burn you even if you just tap it, including the admin here.

http://img.techpowerup.org/100329/thermi.jpg

At least you can eat while you play now :nutkick:

Can you imagine when the heatsink gets dirty, though...

... That's one giant card. It must be, considering the seemingly tiny size of that meat on the grill.

Is that ... Fermi 2?
 
Doesn't matter. It's a single card. That's all that matters. It plugs into a single PCIe x16 slot and physically takes the space of two slots, just like Fermi.



:shadedshu Wow, this one gets old. nVidia DOES NOT make games run like crap on ATI hardware. Not even ATI claims that BS. These are, and always have been, unsubstantiated claims by ATI fanboys. Don't you think ATI would have them in court by now if that were truly the case? nVidia helps devs optimize, yes, but ATI has the same opportunities to do so, and chooses not to on most occasions.

Although, I hear ATI has finally started rolling out their own answer to TWIMTBP, and it's about damn time too. It's been how many years that they've continued to let nV have that advantage over them?

I'm not gonna bother answering back to ya; it'd just start a bullshit argument! :mad:
 