Tuesday, May 5th 2009

GT300 to Boast Around 256 GB/s Memory Bandwidth

Recently, early information on NVIDIA's next-generation GT300 graphics processor surfaced, suggesting that it packs 512 shader processors and an enhanced processing model. A fresh report from Hardware-Infos sheds some light on its memory interface, revealing it to be stronger than that of any GPU in production. According to a piece of information that has been bouncing back and forth between Hardware-Infos and Bright Side of News, GT300 might feature a 512-bit wide GDDR5 memory interface.

That memory interface, in conjunction with the lowest-latency GDDR5 memory available, at a theoretical 1000 MHz (4000 MHz effective), would churn out 256 GB/s of bandwidth, the highest for any GPU so far. Although Hardware-Infos puts the lowest-latency figure at 0.5 ns, the math doesn't work out: 0.5 ns corresponds to an actual clock rate of 2000 MHz, which across a 512-bit bus would churn out 512 GB/s, so there is a slight inaccuracy there. Qimonda's IDGV1G-05A1F1C-40X leads production today with its "40X" rating, and with these chips across a 512-bit interface, the 256 GB/s bandwidth equation is satisfied. The memory clock speeds aren't known just yet; the above is merely an example using the commonly available high-performance GDDR5 memory chip. At least from these little information leaks, the new GPU is shaping up to be another NVIDIA silicon monstrosity in the making.
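For reference, here is a minimal sketch of the bandwidth arithmetic, assuming GDDR5's four data transfers per pin per command-clock cycle (the function name and the second figure are illustrative, not from the report):

# GDDR5 bandwidth sketch: bus width in bytes, times command clock,
# times 4 transfers per clock (WCK runs at 2x the command clock, double data rate).
def gddr5_bandwidth_gbs(bus_bits, clock_mhz):
    return bus_bits / 8 * clock_mhz * 4 / 1000  # GB/s

print(gddr5_bandwidth_gbs(512, 1000))  # 256.0 -- the figure in the headline
print(gddr5_bandwidth_gbs(512, 2000))  # 512.0 -- what a 0.5 ns clock would imply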
Source: Hardware-Infos

106 Comments on GT300 to Boast Around 256 GB/s Memory Bandwidth

#26
eidairaman1
The Exiled Airman
Musselssounds like a new 8800GTX
As in the performance gain of a new generation, like going from the 7900 to the 8800, or just a rebadge, like 8800 to 9800.
Posted on Reply
#27
Frizz
ShadowFoldSo did this leak or did they announce it? I don't think it's smart to announce what you're coming out with like this. AMD is watching.. They're probably already trying to get something to trump it. It's gonna be hard, but I'm sure they'll keep up. I just hope we don't see another HD 2900XT vs 8800GTX :laugh: Not saying the 2900XT was bad. I owned two of them myself. The 8800GTX was just so much better..
True that; hopefully AMD/ATI has matured enough not to let that happen again. ATI by itself took the silver medal no matter what it released, but now that it has AMD behind it, it's been the closest competition ever since the 4x00 series came out.

I've been seeing 512 MB 4870s and GTX 260s (not Core 216s) as low as 4850/9800 GTX+ reference-design price ranges. It's a big insight into how much more advanced hardware is than software that we get that kind of performance/price ratio. When AMD strikes back, it will be very, very good news for everyone's pockets :laugh:
Posted on Reply
#29
Bjorn_Of_Iceland
Run tri SLI with this and you can use the electric meter's rotating part for lapping.
Posted on Reply
#30
Mussels
Freshwater Moderator
Bjorn_Of_IcelandRun tri SLI with this and you can use the electric meter's rotating part for lapping.
cut your veggies while you're at it.
Posted on Reply
#31
[I.R.A]_FBi
TheMailMan78This is pure Fappuccino.
FAP FAP FAP :)
Posted on Reply
#32
soldier242
sounds like an utter beast ... will this thang feature DX11?
Posted on Reply
#33
buggalugs
soldier242sounds like an utter beast ... will this thang feature DX11?
Ya, it will. I'm more interested in AMD's DX11 card.
Posted on Reply
#34
HTC
btarunrRecently, early information on NVIDIA's next-generation GT300 graphics processor surfaced, suggesting that it packs 512 shader processors and an enhanced processing model. A fresh report from Hardware-Infos sheds some light on its memory interface, revealing it to be stronger than that of any GPU in production. According to a piece of information that has been bouncing back and forth between Hardware-Infos and Bright Side of News, GT300 might feature a 512-bit wide GDDR5 memory interface.
Someone please correct me, but with a 512-bit wide GDDR5 interface, doesn't that mean the die size will be huge ... again?
Posted on Reply
#35
slyfox2151
BIGGER IS ALWAYS BETTER :laugh::laugh::laugh::roll: /sarcasm :P
Posted on Reply
#36
HTC
slyfox2151BIGGER IS ALWAYS BETTER :laugh::laugh::laugh::roll:
No: bigger is always tougher to cool down
Posted on Reply
#37
Mussels
Freshwater Moderator
HTCNo: bigger is always tougher to cool down
larger surface area increases heat dissipation!
Posted on Reply
#38
HellasVagabond
Last time I saw "early info" on a card it was the GTX 275, and everyone was way off, so although I hope NVIDIA pulls through and makes something great, I will be waiting for the official specs.
Posted on Reply
#39
W1zzard
HTCSomeone please correct me, but with a 512-bit wide GDDR5 interface, doesn't that mean the die size will be huge ... again?
Yes, that's why I find much of this leaked info hard to believe. The larger your die, the worse your yields. The market for those huge GPUs is rather small anyway; nobody wants to pay 500-1000 bucks for a graphics card, especially when you can play all games fine with a $99 card.
Posted on Reply
#40
DrPepper
The Doctor is in the house
W1zzardYes, that's why I find much of this leaked info hard to believe. The larger your die, the worse your yields. The market for those huge GPUs is rather small anyway; nobody wants to pay 500-1000 bucks for a graphics card
Especially since this might be on 55 nm or even 40 nm, which would make yields even lower, but the same happened with the GTX 280 and 260.
Posted on Reply
#41
HTC
DrPepperEspecially since this might be on 55 nm or even 40 nm, which would make yields even lower, but the same happened with the GTX 280 and 260.
Actually, with a reduced process (55 nm or even 40 nm), the die size would shrink by a LOT, but it would still be HUGE, no?
Posted on Reply
#42
Imsochobo
And yet I fail to see the need for 256 GB/s. After looking at 4770 CrossFire and 4870 CrossFire, I have no reason to believe it's the future.

At 65 GB/s you can do 1920x1200, and at 120 GB/s you can do 2560x1600. So where do we need twice that? I think AMD proved this isn't needed when they made the 4770 with a 128-bit bus.

I smell a false rumour, or a new Radeon 2900 XT, just from NVIDIA; maybe it will be the champ in 3DMark like the 2900 XT was.
Posted on Reply
#43
h3llb3nd4
Well, in the future it's gonna be like the X1950, so 256 GB/s is needed for future-proofing :)
Posted on Reply
#44
iamverysmart
mlee49Can someone help explain how memory bandwidth relates to overall performance? If the new GTX 300 series has 256 GB/s and the 295 already has 223.8 GB/s, does the GPU use the clocks better? Does it use the memory better?

So does higher memory bandwidth = better memory overclock?
I thought CrossFire or SLI doesn't work that way; the memory bandwidth doesn't combine. Technically it has that much bandwidth, but effectively it's not doubled because of the way it works: each set of memory houses the same base data (textures and whatever), so each GPU can work on its own.
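To illustrate with a toy model, assuming plain alternate-frame rendering where each card mirrors the full working set (the per-GPU numbers are the GTX 295 figures from the quote above, split across its two GPUs):

# Each GPU in SLI/CrossFire keeps its own full copy of textures and buffers,
# so usable memory and per-frame bandwidth are those of a single card.
cards = 2
vram_per_card_gb = 0.896          # one half of a GTX 295's 1792 MB
bandwidth_per_card_gbs = 111.9    # 223.8 GB/s total / 2 GPUs

usable_vram_gb = vram_per_card_gb                   # mirrored, not pooled
box_bandwidth_gbs = cards * bandwidth_per_card_gbs  # 223.8 "on the box"
per_frame_bandwidth_gbs = bandwidth_per_card_gbs    # one GPU renders each frame

print(usable_vram_gb, box_bandwidth_gbs, per_frame_bandwidth_gbs)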
Posted on Reply
#45
soldier242
iamverysmartI thought CrossFire or SLI doesn't work that way; the memory bandwidth doesn't combine. Technically it has that much bandwidth, but effectively it's not doubled because of the way it works: each set of memory houses the same base data (textures and whatever), so each GPU can work on its own.
Yup, that's how it works.
Posted on Reply
#46
Imsochobo
This is something AMD is working hard on; I bet NVIDIA has caught up to ATI's strategy (multi-GPU).
I suspect ATI is further ahead in what I like to call the Lego strategy; I think that name originally comes from AMD. We have seen the start of this strategy with the HD 2xxx -> 3xxx -> 4xxx:
a scalable architecture.
The 3870 X2 was the first step, the 4870 X2 the second, and the 4850 X2 the third. Scaling and its issues are being narrowed down, and we might see lower and lower-end cards with setups like this.

They need a shared memory system to make this good. Nowadays a 4870 X2 or a GTX 295 has ~1 GB of video memory per GPU, but the total video memory usable in games is ~1 GB, no more than the lowest single card.
Posted on Reply
#47
W1zzard
iamverysmartI thought CrossFire or SLI doesn't work that way; the memory bandwidth doesn't combine. Technically it has that much bandwidth, but effectively it's not doubled because of the way it works: each set of memory houses the same base data (textures and whatever), so each GPU can work on its own.
that's correct
Posted on Reply
#48
alwayssts
Correct.

With a 512-bit interface you're looking at (bare minimum) a 400mm2+ (20x20 mm) die.

Knowing NVIDIA, this part will be made so it can be shrunk to 32nm without losing its bus, which would mean at least a 500mm2 die.

Minus the bus (which is 2x), this is 4x G92 (which is 754M transistors) + whatever changes they made for MIMD (dual-issue MADD?) + DX11, which should clock in at ~3 billion(+?) transistors, by my guesstimate.

Compared to rv740 (826M, 136mm2) and rv870 (1.25ishB?, 205mm2), we'd be talking a ~23x23 mm die, or 529mm2, which could realistically shrink to around 400mm2 @ 32nm.

IOW, this mother is gonna be big, and 40nm is not a good process for a big die. I wouldn't expect this to see the light of day until 32nm, personally, although TSMC might get their problems worked out later this year, allowing it to happen. Still, it will not be a good-yielding part, nor do I expect high clocks. I figure 700c/1750s sounds doable, with 800/2000 on 32nm.

I believe the r800 generation will be 400sp/16tmu (low-end, 32nm), 800sp (mid-range, 32nm), 1200sp/48tmu (rv870, 40nm) and 1600/64 (rv870 replacement on 32nm). That really makes the most sense, as 'rv890' could replace rv870, with rv870 essentially becoming the 3/4 product of yore after its release. This would be 4-16 arrays: 100 shaders (or 20 if you like) and 4 TMUs per array. 32nm should allow for roughly a 1/3 shrink over 40nm, which would keep these die sizes comparable to the parts preceding them (rv740, rv870).

That's just an informed guess, but I think a realistic one.
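For those who want to check the arithmetic, here's a rough sketch of the die-size guesswork above, assuming the idealized rule that area scales with the square of the process-node ratio (real shrinks do worse, since I/O pads and analog barely scale):

# Die-size guesstimates from the post above; every figure is speculative.
def shrink_area(area_mm2, from_nm, to_nm):
    # Idealized optical shrink: area scales with the square of the node ratio.
    return area_mm2 * (to_nm / from_nm) ** 2

gt300_40nm = 23.0 * 23.0                       # ~23x23 mm -> 529 mm2
print(round(gt300_40nm))                       # 529
print(round(shrink_area(gt300_40nm, 40, 32)))  # ~339 ideal; the bus keeps it nearer ~400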
Posted on Reply
#49
Unregistered
ImsochoboAnd yet I fail to see the need for 256 GB/s. After looking at 4770 CrossFire and 4870 CrossFire, I have no reason to believe it's the future.

At 65 GB/s you can do 1920x1200, and at 120 GB/s you can do 2560x1600. So where do we need twice that? I think AMD proved this isn't needed when they made the 4770 with a 128-bit bus.

I smell a false rumour, or a new Radeon 2900 XT, just from NVIDIA; maybe it will be the champ in 3DMark like the 2900 XT was.
lol, forthcoming games are going to be pushing more data through the pipeline with ever more demanding graphics and physics, so you are going to need more bandwidth.
#50
TheMailMan78
Big Member
W1zzardYes, that's why I find much of this leaked info hard to believe. The larger your die, the worse your yields. The market for those huge GPUs is rather small anyway; nobody wants to pay 500-1000 bucks for a graphics card, especially when you can play all games fine with a $99 card.
Ah yes, but will your e-penis be as large with a $99 card?
Posted on Reply