Tuesday, March 16th 2010

GeForce GTX 480 has 480 CUDA Cores?

In several of its communications about Fermi as a GPGPU product (the next-generation Tesla series) and the GF100 GPU, NVIDIA stated that GF100 has 512 physical CUDA cores (shader units) on the die. In the run-up to the launch of the GeForce 400 series, however, it appears that the GeForce GTX 480, the higher-end part in the series, will have only 480 of its 512 physical CUDA cores enabled, sources at add-in card manufacturers confirmed to Bright Side of News. This means that 15 out of 16 streaming multiprocessors (SMs) will be enabled. The card has a 384-bit GDDR5 memory interface holding 1536 MB of memory.

This could be seen as a move to keep the chip's TDP down and help with yields. It's unclear whether this is a late change; if it is, benchmark scores of the product could differ when it's finally reviewed at launch. The publication believes the GeForce GTX 480 targets a price point around $449-$499, while the GeForce GTX 470 is expected to be priced at $299-$349. The GeForce GTX 470 has 448 CUDA cores and a 320-bit GDDR5 memory interface holding 1280 MB of memory. In another report, by Donanim Haber, the TDP of the GeForce GTX 480 is expected to be 298 W, with the GeForce GTX 470 at 225 W. NVIDIA will unveil the two cards on the 26th of March.
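The rumored configurations are easy to sanity-check. A quick sketch, assuming the 32-cores-per-SM organization NVIDIA described for Fermi; the 64-bit-per-controller memory split and 256 MB per controller are illustrative assumptions that happen to reproduce the reported figures:

```python
# Hedged sketch: reproducing the rumored GF100 configuration math.
# Assumes 32 CUDA cores per SM and 16 SMs total (per NVIDIA's Fermi material).
CORES_PER_SM = 32
TOTAL_SMS = 16

def enabled_cores(enabled_sms):
    """CUDA cores exposed when some SMs are fused off."""
    return enabled_sms * CORES_PER_SM

gtx480_cores = enabled_cores(15)   # 15 of 16 SMs enabled -> 480
gtx470_cores = enabled_cores(14)   # 14 of 16 SMs enabled -> 448

# Memory: assume six 64-bit GDDR5 controllers at 256 MB each,
# with the GTX 470 reportedly using five of the six.
gtx480_bus = 6 * 64    # 384-bit
gtx470_bus = 5 * 64    # 320-bit
gtx480_mem = 6 * 256   # 1536 MB
gtx470_mem = 5 * 256   # 1280 MB

print(gtx480_cores, gtx470_cores)  # 480 448
```

The numbers line up with both reports: disabling one SM yields exactly the 480-core count, and dropping one memory controller gives the GTX 470's 320-bit/1280 MB configuration.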
Sources: Bright Side of News, DonanimHaber
Add your own comment

79 Comments on GeForce GTX 480 has 480 CUDA Cores?

#26
qubit
Overclocked quantum bit
The more news tidbits that are released, the more this chip sounds like another HD 2900 disappointment. :(

As Wolf said: bring on the reviews.
Posted on Reply
#27
Benetanegia
KainXSI'm trying to think of nice things to say about the 480, but when I think about the shader clocks being permanently linked to the core, since the core clock is half the shader clock...
It's not exactly like that, IIRC; the shaders are not linked to the core clock. It's somewhat difficult to say whether the new approach makes the chip worse or better, though. Everything inside the GPC (TMUs, tessellator, setup engine, rasterizer) runs at half the speed of the SPs (the hot clock), but everything outside it (ROPs, main scheduler, L2 cache) runs at the core clock.

IMO it's mostly a good thing, because the only units that make a significant move (to the higher clock) are the TMUs. The setup engine, rasterizer, and tessellators are supposedly much smaller than the SPs, TMUs, or ROPs, so they should not prevent the shader domain from reaching higher clocks, nor hurt the temperatures and stability of the GPC, IMO. The units that are supposed to be more sensitive to clock speed, like the ROPs and L2, remain at the slower core clock.
Posted on Reply
#28
newconroer
SteevoHow do we know what ATI needs, when there aren't even competing cards out yet? Everyone on the green team keeps assuming this card will be so awesome, yet it has been how long, and all we keep getting is spin, more spin, and more spin on how awesome it is and how much better it will be. If you really believe all this and feel that good, then keep on breathing the fumes, man.
Putting the green-versus-red bit aside, Steve makes an even greater point:
The architecture of both GPUs and the software they run hasn't changed fundamentally. So until it does, all the hype about upcoming GPUs as we know them is neither here nor there.
What we'll get is another powerful card that still falls down like all the rest, in all the same places, for all the same reasons: the same reasons that have existed for the last decade or more.

To me that's not impressive. I like big and I like powerful; it's how I like my vehicles, but not my computer components. I'm tired of getting bigger cases, bigger motherboards, bigger radiators, and bigger PSUs, only to have the overrated max FPS of a game plummet straight back down to twenty-five frames because another character strolled onto the screen and all these supposed DirectX special effects, which unfortunately we cannot actually see, have sucked away the performance.

Don't get me wrong, I'm pro DirectX. When people were crying and whining about DX10 being a failure, I wasn't. I understood; I got it. Had you tried to run a lot of the background processing of DX10 on DX9 (if it were possible), it wouldn't have been a pretty sight, and DX11 brings some much-needed tools for developers.

But I'm just not pro waiting six months or more every year to see these 'fabled' graphics processors put on a pedestal, only to be released without providing anything really tangible over the last generation.

Considering brute power and computational flexibility alone, something like a GTX 295 or 4870 X2 should be MORE than enough for modern games, and they usually are. Heck, I can run the X2 at clocks of 500/500 in about 90% of modern games and still have over 50 fps. But then you get those moments where it all comes crashing down, and no matter how powerful the cards are, it never ends.
Posted on Reply
#29
qubit
Overclocked quantum bit
newconroerPutting the green-versus-red bit aside, Steve makes an even greater point:
The architecture of both GPUs and the software they run hasn't changed fundamentally. So until it does, all the hype about upcoming GPUs as we know them is neither here nor there.
What we'll get is another powerful card that still falls down like all the rest, in all the same places, for all the same reasons: the same reasons that have existed for the last decade or more.

To me that's not impressive. I like big and I like powerful; it's how I like my vehicles, but not my computer components. I'm tired of getting bigger cases, bigger motherboards, bigger radiators, and bigger PSUs, only to have the overrated max FPS of a game plummet straight back down to twenty-five frames because another character strolled onto the screen and all these supposed DirectX special effects, which unfortunately we cannot actually see, have sucked away the performance.

Don't get me wrong, I'm pro DirectX. When people were crying and whining about DX10 being a failure, I wasn't. I understood; I got it. Had you tried to run a lot of the background processing of DX10 on DX9 (if it were possible), it wouldn't have been a pretty sight, and DX11 brings some much-needed tools for developers.

But I'm just not pro waiting six months or more every year to see these 'fabled' graphics processors put on a pedestal, only to be released without providing anything really tangible over the last generation.

Considering brute power and computational flexibility alone, something like a GTX 295 or 4870 X2 should be MORE than enough for modern games, and they usually are. Heck, I can run the X2 at clocks of 500/500 in about 90% of modern games and still have over 50 fps. But then you get those moments where it all comes crashing down, and no matter how powerful the cards are, it never ends.
I know what you mean about the same old same old, man. I hate those frame rate drops too. However, bear in mind that quite often that low-fps bottleneck can also be at the CPU and not necessarily the GPU. Keeping those frame rates high and consistent is a big challenge when designing a game, and unfortunately it's not possible to prevent high-complexity, high-detail scenes from tanking the frame rate sometimes. This is why I love running my old DX7 games on my big, grossly overpowered rig: even the lowest points are doing over 100 fps (if vsync is unlocked) and the game runs smooth as butter 100% of the time. :D :rockout:
Posted on Reply
#30
the54thvoid
Super Intoxicated Moderator
freaksaviorWhat annoys me the most (sorry to those who do this) is when a series of cards isn't even out yet and people start talking about the "next" version and how it's going to be so much better.

Seriously, let's wait for what's not even out first.
+1 dude.
Posted on Reply
#31
phanbuey
newconroerPutting the green-versus-red bit aside, Steve makes an even greater point:
The architecture of both GPUs and the software they run hasn't changed fundamentally. So until it does, all the hype about upcoming GPUs as we know them is neither here nor there.
What we'll get is another powerful card that still falls down like all the rest, in all the same places, for all the same reasons: the same reasons that have existed for the last decade or more.

To me that's not impressive. I like big and I like powerful; it's how I like my vehicles, but not my computer components. I'm tired of getting bigger cases, bigger motherboards, bigger radiators, and bigger PSUs, only to have the overrated max FPS of a game plummet straight back down to twenty-five frames because another character strolled onto the screen and all these supposed DirectX special effects, which unfortunately we cannot actually see, have sucked away the performance.

Don't get me wrong, I'm pro DirectX. When people were crying and whining about DX10 being a failure, I wasn't. I understood; I got it. Had you tried to run a lot of the background processing of DX10 on DX9 (if it were possible), it wouldn't have been a pretty sight, and DX11 brings some much-needed tools for developers.

But I'm just not pro waiting six months or more every year to see these 'fabled' graphics processors put on a pedestal, only to be released without providing anything really tangible over the last generation.

Considering brute power and computational flexibility alone, something like a GTX 295 or 4870 X2 should be MORE than enough for modern games, and they usually are. Heck, I can run the X2 at clocks of 500/500 in about 90% of modern games and still have over 50 fps. But then you get those moments where it all comes crashing down, and no matter how powerful the cards are, it never ends.
I see your point, but at the same time I don't. Yeah, they all come crashing down, but crashing down for an X2 or 295 is like 25 fps, which is annoying but still OK. Crashing down for a single 260 or 4870 is like 15 fps... which is just jerky enough to send me into epileptic shock.

plus

All brand-new architectures are ahead of their time, because no developer will spend oodles of money and time developing a game for a hardware feature that 0.0001% of the gaming market has. (Sometimes a token game comes out with ATI/NV sponsorship, but that changes nothing.)

It's just exciting because these cards do bring something new to the table, unlike the 4xxx, GT200, G92, or RV670. It's been a long time since that has happened.
Posted on Reply
#32
1freedude
Fermi is meant for parallel processing.

An SM was disabled to control the TDP.
Posted on Reply
#33
SKD007
qubitThe more news tidbits that are released, the more this chip sounds like another HD 2900 disappointment. :(

As Wolf said: bring on the reviews.
Don't worry, there are some GREEN web sites like Guru3D that will do some 8xAA benchmarks and say ATI was beaten by 10%, but they will not try 8xQ, which is the real 8x for NVIDIA ;)

:roll: and there are fools who support that.....
Posted on Reply
#34
v12dock
Block Caption of Rainey Street
Bright Side of News is more like Green Side of News.
Posted on Reply
#35
OnBoard
tofuOr perhaps nVidia is so confident that their Fermi will perform well at its price point and decided to disable a cluster to improve yields and pave the way to release a GTX485 later on.

Just throwing some thoughts out there.
I think they'll first make a GTX 495 with 2x GTX 470, but with 512 cores. Then later, with better yields, a GTX 485 with some OC.
Posted on Reply
#36
Marineborn
saikamaldossDon't worry, there are some GREEN web sites like Guru3D that will do some 8xAA benchmarks and say ATI was beaten by 10%, but they will not try 8xQ, which is the real 8x for NVIDIA ;)

:roll: and there are fools who support that.....
+1. Team green will release their next set of cards in 4 yrs, lol
Posted on Reply
#37
Edito
Where did all the hate for NVIDIA Fermi go??? Anyone??? lol. Don't get me wrong, I like ATI, but I knew this would happen. Nice move, NVIDIA: the good price is the key. And nice work ATI/AMD, keeping NVIDIA in check in the price department...
Posted on Reply
#38
eidairaman1
The Exiled Airman
So where's the board at then, huh, NVIDIA, huh!? I don't see it in Wizzard's or other members' hands. To me it's deader than dead itself.
Posted on Reply
#39
simlariver
qubitThe more news tidbits that are released, the more this chip sounds like another HD 2900 disappointment. :(

As Wolf said: bring on the reviews.
x2

All these ongoing delays and the lack of communication from Nvidia, plus those rumors about an insanely high TDP, clearly point to a disastrous release from Nvidia. If they were confident in their product, they would brag about it to no end instead of hiding it far from the rogue benchmarkers.
Posted on Reply
#40
locoty
Shot of an A3-stepping Fermi die

more HERE
Posted on Reply
#41
pr0n Inspector
Look at all the scared fanboys trying to convince themselves that these cards are full of fail. How entertaining.
Posted on Reply
#42
OnBoard
Naked pictures are always nice, be they cores or more fleshy stuff :)

So at least 87 cores ready :p I wonder if they leave that marker stuff under the IHS, or if those are just quality assurance samples.
Posted on Reply
#43
Benetanegia
locoty
Stares at pic.

Oh nooooo! Fermi A3 silicon also has 2% yields. Because we can clearly see the number 2 written on that die, and as everybody knows, when a company gets few samples back from the factory they write numbers on them (and only when they get very few of them; otherwise they would never write on them, it would be stupid to do so) and always, ALWAYS, show the one with the higher number, in this case a 2 (in the other, a 7 :)). That number clearly means 2% yields. :rolleyes:
Posted on Reply
#44
Wile E
Power User
The gap between 470 and 480 is now far too small. This is very bad news, imo.
Posted on Reply
#45
xtremesv
I keep saying... rumooooorrrrrs. Show me the numbers.

There's no attack on Nvidia; it's just fair to recognize when there's a disappointment. That's how human civilization has evolved: learning from errors.

ATi, for instance, failed with its HD 2xxx/3xxx series; that's why I kept my X1650 Pro a little longer. Nvidia delivered the mythical G92, and the sole reason I chose my HD 4830 over the 9800 GT was the price (in my country Nvidia is really overpriced).

Nvidia failed with the FX series and ATi triumphed with the 9xxx (9 a lucky number?).

There's no need to fight; even a fanboy has to admit when his/her company screwed up.

More rumors? See this:

www.semiaccurate.com/2010/02/20/semiaccurate-gets-some-gtx480-scores/
Posted on Reply
#46
v12dock
Block Caption of Rainey Street
"One said they measured it at 70C at idle on the 2D clock." :eek:
Posted on Reply
#47
Bjorn_Of_Iceland
phanbuey...
All brand new architectures are head of their time. Becuase no developer will spend oodles of money and time to develop a game for a hardware feature that 0.0001% of the gaming market has. (sometimes a token game comes out with sponsorship of ati/nv but changes nothing).

Its just exciting bc these cards do bring something new to the table... unlike the 4xxx or gt200 or g92 or rv670 - its been a long time since that has happened.
It really is hit or miss when it comes to deploying a new architecture or hardware-level instructions. Heck, when MMX came out, we all thought it was the future of games... but then graphics accelerators squashed it. And tbh, when that whole hardware T&L came out, very few games were using it, and I thought it was just another fad... but then it became a staple of all games.
Posted on Reply
#48
theonedub
habe fidem
Wile EThe gap between 470 and 480 is now far too small. This is very bad news, imo.
How so? Looking back at the GT200 cards, the gap between the GTX 400 cards is actually bigger. I know ATI cards are a little different, but by nVidia standards this is about right.
Posted on Reply
#49
Marineborn
pr0n InspectorLook at all the scared fanboys trying to convince themselves that these cards are full of fail. How entertaining.
IMO they are full of fail, released 6-7 months later than they should have been.
Posted on Reply
#50
Wile E
Power User
theonedubHow so? When looking back at the GT200 cards the gap between the GT400 cards is actually bigger. I know ATI cards are a little different, but by nVidia standards this is about right.
Even the GTX 280 had 11% more shaders than the 260 Core 216 (an even bigger difference over the original 192-shader 260s). If this rumor is true, the 480 only has 7% more than the 470.
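The percentages above check out; a quick sketch, using the shader counts quoted in the thread (240 for the GTX 280, 216 for the 260 Core 216, and the rumored 480/448 split for the new cards):

```python
def extra_shaders_pct(full, cut):
    """Percent more shaders the full part has over the cut-down one."""
    return 100.0 * (full - cut) / cut

gt200_gap = extra_shaders_pct(240, 216)  # GTX 280 vs GTX 260 Core 216
gf100_gap = extra_shaders_pct(480, 448)  # rumored GTX 480 vs GTX 470

print(round(gt200_gap, 1), round(gf100_gap, 1))  # 11.1 7.1
```

So the rumored lineup would indeed narrow the top-to-second gap from roughly 11% to roughly 7% by shader count alone (clock speeds, which are still unconfirmed, could widen it again).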
Posted on Reply