# GT300 to Boast Around 256 GB/s Memory Bandwidth



## btarunr (May 5, 2009)

Recently, early information on NVIDIA's next-generation GT300 graphics processor surfaced, suggesting it packs 512 shader processors and an enhanced processing model. A fresh report from Hardware-Infos sheds some light on its memory interface, revealing it to be stronger than that of any production GPU. According to a piece of information that has been doing ping-pong between Hardware-Infos and Bright Side of News, GT300 might feature a 512-bit wide GDDR5 memory interface.

The memory interface, in conjunction with the lowest-latency GDDR5 memory available at a theoretical 1000 MHz (2000 MHz DDR), would churn out 256 GB/s of bandwidth, the highest for any GPU so far. Although Hardware-Infos puts the lowest-latency figure at 0.5 ns, the math wouldn't work out: at 0.5 ns, memory with an actual clock rate of 1000 MHz would churn out 512 GB/s, so there is a slight inaccuracy there. Qimonda's IDGV1G-05A1F1C-40X leads production today with its "40X" rating; with these chips across a 512-bit interface, the 256 GB/s bandwidth equation is satisfied. The clock speeds of the memory aren't known just yet; the above is merely an example that uses a commonly available high-performance GDDR5 memory chip. At least from these little information leaks, the new GPU is shaping up to be another silicon monstrosity in the making from NVIDIA.
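As a rough sketch, the article's arithmetic works out as follows; the 512-bit bus and the 4 GT/s per-pin data rate are the rumored/example figures discussed above, not confirmed specs:

```python
# Peak memory bandwidth = bytes per transfer x transfers per second.
# All figures are the rumored/example numbers from the article, not confirmed specs.

def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gtps

# GDDR5 moves 4 bits per pin per command-clock cycle, so a 1000 MHz
# command clock corresponds to a 4 GT/s per-pin data rate.
gt300 = peak_bandwidth_gbps(512, 4.0)
print(gt300)  # 256.0
```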

*View at TechPowerUp Main Site*


----------



## Mussels (May 5, 2009)

sounds like a new 8800GTX


----------



## h3llb3nd4 (May 5, 2009)

Yeah, let's just hope that it doesn't idle at 55 degrees


----------



## wolf2009 (May 5, 2009)

> This in conjunction with the use of the lowest latency GDDR5 memory available (0.5 ns), at a theoretical 1000 MHz (2000 MHz DDR)



If it's GDDR5, shouldn't it be 4000 MHz DDR?


----------



## btarunr (May 5, 2009)

wolf2009 said:


> If its GDDR5 shouldn't it be 4000MHz DDR ?



No, the data is pushed only on two parts of a clock cycle, so it's DDR, and G*D*DR5. The amount of data pushed makes the difference here: it's twice what GDDR3 pushes. You can put it as "effectively 4.00 GHz", but not "4.00 GHz DDR". It's still 2.00 GHz when its actual clock speed is 1 GHz.
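The distinction between the actual, DDR, and "effective" clocks can be sketched numerically; the 1 GHz base clock is just the example figure from the post above:

```python
# GDDR5 clock terminology, using the 1 GHz example from the post above.
base_clock_mhz = 1000                # actual command clock
ddr_clock_mhz = base_clock_mhz * 2   # the "DDR" figure: two transfers per clock
effective_mhz = base_clock_mhz * 4   # marketing "effective" rate: 4 bits per pin per clock

print(ddr_clock_mhz, effective_mhz)  # 2000 4000
```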


----------



## cyriene (May 5, 2009)

I have a feeling this monster card will come with a monster price!
Can't wait to see what AMD's response is to this beast.


----------



## DrPepper (May 5, 2009)

btarunr said:


> No, the data is pushed only on two parts of a clock cycle, so it's DDR, and G*D*DR5. The amount of data pushed makes the difference here, and is twice that of what GDDR3 pushes.



GDDR5 is actually QDR.

At least from what I've been told.

Edit: Wasn't quite right, but according to Wikipedia, "GDDR5 is the successor to GDDR4 and unlike its predecessors has two parallel DQ links which provide doubled I/O throughput when compared to GDDR4".


----------



## sapetto (May 5, 2009)

And it will be 30cm long....


----------



## PCpraiser100 (May 5, 2009)

Again, another PCI-E graphics monster from NVIDIA intent on torturing our PSUs.


----------



## wolf2009 (May 5, 2009)

btarunr said:


> No, the data is pushed only on two parts of a clock cycle, so it's DDR, and G*D*DR5. The amount of data pushed makes the difference here, and is twice that of what GDDR3 pushes. You can put it as "effectively 4.00 GHz", but not "4.00 GHz DDR". It's still 2.00 GHz when its actual clock-speed is 1 GHz.



I meant to say 4 GHz effective, since we're used to saying that in the case of GDDR5 on ATI cards.

So wouldn't it be easier to put "4 GHz effective" in the article, as people are used to saying that, rather than confusing somebody?

Anyway, I didn't get your point that




> It's still 2.00 GHz when its actual clock-speed is 1 GHz


So does GDDR5 push 4 GHz or 2 GHz?


----------



## Animalpak (May 5, 2009)

takes performance crown for sure.


----------



## DrPepper (May 5, 2009)

Animalpak said:


> takes performance crown for sure.



From the current generation, yes, but against the next we will never know.


----------



## btarunr (May 5, 2009)

DrPepper said:


> GDDR5 is actually QDR.
> 
> At least from what I've been told.



Not that I didn't know that. You need to understand how it works to know why they don't call it QDR, even though the bandwidth is four times that of DRAM at a given clock speed.


----------



## DrPepper (May 5, 2009)

btarunr said:


> Not that I didn't know that. You need to understand how it works to know why they don't call it QDR, even when the bandwidth is four times that of DRAM at a given clock-speed.



I just looked it up there. It's not that it sends information four times in one clock; it's that it has two more paths. I think.


----------



## mlee49 (May 5, 2009)

Can someone help explain how memory bandwidth relates to overall performance? If the new GTX 300 series has 256 GB/s and the 295 already has 223.8 GB/s, does the GPU use the clocks better? Does it use the memory better?

So does higher memory bandwidth = a better memory overclock?


----------



## ZoneDymo (May 5, 2009)

It's already ridiculous as it is.
People have freaking 1200-watt PSUs.
That is not cool; this thing had better not use more power than current cards.


----------



## DrPepper (May 5, 2009)

mlee49 said:


> Can someone help explain how memory bandwidth relates to overall preformance?  If the new GTX 300 series has 256 GB/s and the 295 already has 223.8 GB/s does the gpu use the clocks better? Does it use the memory better?
> 
> So does higher mem bandwidth = better memory overclock?



Bandwidth is the product of the memory clock speed and the bus width, which means that if the memory runs at a higher clock speed and the bus is wider, you get more bandwidth.
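That formula can be checked against the two cards mlee49 mentions. The GTX 295 inputs below (two 448-bit GDDR3 buses at ~1.998 GT/s effective) are inferred from the 223.8 GB/s figure quoted in the thread, and the GT300 inputs are the rumored specs:

```python
def bandwidth_gbps(bus_bits: int, effective_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return bus_bits / 8 * effective_rate_gtps

# GTX 295: two GPUs, each with a 448-bit GDDR3 bus at ~1.998 GT/s effective
gtx295 = 2 * bandwidth_gbps(448, 1.998)
# Rumored GT300: a single 512-bit GDDR5 bus at 4 GT/s effective
gt300 = bandwidth_gbps(512, 4.0)

print(round(gtx295, 1), gt300)  # 223.8 256.0
```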


----------



## mlee49 (May 5, 2009)

So the overall higher bandwidth will mean the card will run an overclock better, right?


----------



## DrPepper (May 5, 2009)

mlee49 said:


> So the overall higher bandwidth will mean the card will run an overclock better, right?



No. That is dependent on the memory modules on the card.


----------



## mlee49 (May 5, 2009)

So I'm obviously not fully understanding this, so let's do a comparison:

Take for example the 275 lineup from EVGA, the standard edition vs the FTW edition.  Both have the same GPU and the same memory modules, but the FTW edition is clocked faster and has slightly higher memory bandwidth.  Wouldn't the higher bandwidth mean the FTW edition performs better than the regular edition overclocked to the same clock settings?  I'm trying to clarify whether the extra $ for higher memory bandwidth will pay off.


----------



## mdm-adph (May 5, 2009)

Wonder if it'll be dual-core...


----------



## DrPepper (May 5, 2009)

mlee49 said:


> So I'm ovbiously not fully understanding this so lets do a comparision:
> 
> Take for example the 275 line up from Evga, the standard edition vs the FTW edition.  Both are same gpu's, both are same memory modules, but the FTW edition is clocked faster and has a slightly higher memory bandwidth.  Wouldn't the higher bandwidth mean the FTW edition preforms better than the regular edition overclocked to the same clock settings?  I'm trying to clarify to see if the extra $ for a higher memory bandwidth will pay off.



Chances are the FTW version and the stock version can both achieve the same clock speed, because the modules would be rated to the same speed. If the FTW version and the stock version are at the same speed, performance will be identical. Higher bandwidth will increase frames per second and help at higher resolutions.


----------



## Frizz (May 5, 2009)

Woot! Next-gen NVIDIA cards, here we come! It might leave ATI/AMD speechless for a while unless they release their next-gen cards in the same quarter, which I hope will happen; otherwise we Aussies will be seeing video cards at the 1000+ AUD mark again.


----------



## ShadowFold (May 5, 2009)

So did this leak, or did they announce it? I don't think it's smart to announce what you're coming out with like this. AMD is watching.. They're probably already trying to get something to trump it. It's gonna be hard, but I'm sure they'll keep up. I just hope we don't see another HD 2900XT vs 8800GTX. Not saying the 2900XT was bad; I owned two of them myself. The 8800GTX was just so much better..


----------



## mlee49 (May 5, 2009)

DrPepper said:


> Chances are the FTW version and stock can both achieve the same clock speed because the modules would be rated to the same speed. If the FTW version and the stock version are at the same speed performance will be identical. Higher bandwidth will increase frames per second and help with higher resolutions



Thanks DP.  Just trying to learn more about gfx cards and overclocking. It's not all about the clock speeds


----------



## DrPepper (May 5, 2009)

mlee49 said:


> Thanks DP   Just trying to learn more about gfx cards and overclocking.  It's not all about the clock speeds



No problem. I forgot to mention that latency in the memory modules comes into play, just like with regular RAM. It's rated in ns, and the lower the better. If a FTW edition has better-rated modules than the stock version, then at the same speed they will perform slightly differently, but most companies use the same memory modules for FTW editions and stock.


----------



## eidairaman1 (May 5, 2009)

Mussels said:


> sounds like a new 8800GTX



As in the performance gain from a generation, i.e. going from the 7900 to the 8800, or just a rebadge, like the 8800 to the 9800.


----------



## Frizz (May 5, 2009)

ShadowFold said:


> So did this leak or did they announce it? I don't think it's smart to announce what you're coming out with like this. AMD is watching.. They're probably already trying to get something to trump it. It's gonna be hard, but I'm sure they'll keep up. I just hope we don't see another HD 2900XT vs 8800GTX  Not saying the 2900XT was bad. I owned two of them myself. The 8800GTX was just so much better..



True that; hopefully AMD/ATI has matured enough not to let that happen again. On its own, ATI took the silver medal no matter what it released, but now that it has AMD, it's been the closest challenge ever since the 4x00 series came out.

I've been seeing 512 MB 4870s and GTX 260s (not Core 216s) as low as 4850/9800 GTX+ reference-design price ranges. That's a big insight into how far hardware has outpaced software, to get that kind of performance/price ratio. When AMD strikes back, it will be very, very good news for everyone's pockets.


----------



## TheMailMan78 (May 5, 2009)

This is pure Fappuccino.


----------



## Bjorn_Of_Iceland (May 5, 2009)

Run tri SLI with this and you can use the electric meter's rotating part for lapping.


----------



## Mussels (May 5, 2009)

Bjorn_Of_Iceland said:


> Run tri SLI with this and you can use the electric meter's rotating part for lapping.



cut your veggies while you're at it.


----------



## [I.R.A]_FBi (May 5, 2009)

TheMailMan78 said:


> This is pure Fappuccino.



FAP FAP FAP


----------



## soldier242 (May 5, 2009)

sounds like an utter beast ... will this thang feature DX11?


----------



## buggalugs (May 5, 2009)

soldier242 said:


> sounds like an utter beast ... will this thang feature DX11?



Ya, It will. I'm more interested in AMD's DX11 card.


----------



## HTC (May 5, 2009)

btarunr said:


> Recently, early-information on NVIDIA's next-generation GT300 graphics processor surfaced, that suggested it to pack 512 shader processors, and an enhanced processing model. A fresh report from Hardware-Infos sheds some light on its memory interface, revealing it to be stronger than that of any production GPU. According to a piece of information that has been doing ping-pong between Hardware-Infos and Bright Side of News, *GT300 might feature a 512-bit wide GDDR5 memory interface.*



Someone please correct me, but with a 512-bit wide GDDR5 bus, doesn't that mean the die size will be huge ... again?


----------



## slyfox2151 (May 5, 2009)

BIGGER IS ALWAYS BETTER  /sarcasm


----------



## HTC (May 5, 2009)

slyfox2151 said:


> BIGGER IS ALWAYS BETTER



No: bigger is always tougher to cool down


----------



## Mussels (May 5, 2009)

HTC said:


> No: bigger is always tougher to cool down



larger surface area increases heat dissipation!


----------



## HellasVagabond (May 5, 2009)

Last time I saw "early info" on a card it was the GTX 275, and everyone was way off; so although I hope NVIDIA pulls through and makes something great, I will be waiting for the official specs.


----------



## W1zzard (May 5, 2009)

HTC said:


> Someone please correct me but, with a 512 bit wide GDDR5, doesn't that mean the die size will be huge ... again?



yes, thats why i find much of this leaked info hard to believe. the larger your die, the worse your yields are. the market for those huge gpus is rather small anyway, nobody wants to pay 500-1000 bucks for their graphics card. especially when you can play all games fine with a $99 card


----------



## DrPepper (May 5, 2009)

W1zzard said:


> yes, thats why i find much of this leaked info hard to believe. the larger your die, the worse your yields are. the market for those huge gpus is rather small anyway, nobody wants to pay 500-1000 bucks for their graphics card



Especially since this might be on 55 nm or even 40 nm, which would make yields even lower; but the same happened with the GTX 280 and 260.


----------



## HTC (May 5, 2009)

DrPepper said:


> Especially since this might be on 55nm or even 40nm which would make yields even lower but this happened with the GTX280 and 260.



Actually, with a reduced process (55 nm or even 40 nm) the die size would shrink by a LOT, but it would still be HUGE, no?


----------



## Imsochobo (May 5, 2009)

And yet I fail to see the need for 256 GB/s; after looking at 4770 CrossFire and 4870 CrossFire, I have no reason to believe it's the future.

At 65 GB/s you can do 1920x1200; at 120 GB/s you can do 2560x1600.
So where do we need twice that? I think AMD proved this isn't needed when they made the 4770 with a 128-bit bus.

I smell a false rumour, or a new Radeon 2900 XT, just from NVIDIA; maybe it will be the champ in 3DMark like the 2900 XT was.


----------



## h3llb3nd4 (May 5, 2009)

Well, in the future it's gonna be like the X1950, so 256 GB/s is needed for future-proofing.


----------



## iamverysmart (May 5, 2009)

mlee49 said:


> Can someone help explain how memory bandwidth relates to overall preformance?  If the new GTX 300 series has 256 GB/s and the 295 already has 223.8 GB/s does the gpu use the clocks better? Does it use the memory better?
> 
> So does higher mem bandwidth = better memory overclock?



I thought CrossFire or SLI doesn't work that way; the memory bandwidth doesn't combine. Technically it has that much bandwidth, but effectively it's not doubled, because of the way it works: each set of memory houses the same base data (textures and whatever) so each GPU can work on its own.


----------



## soldier242 (May 5, 2009)

iamverysmart said:


> I thought Crossfire or SLi doesn't work that way, the memory bandwidth doesn't combine. Technically it has that much bandwidth but effectively, it's not doubled because of the way it works, each set of memory houses the same base data (textures and whatever) so each GPU can work on it's own.



yup thats how it works


----------



## Imsochobo (May 5, 2009)

This is something AMD is working hard on; I bet NVIDIA has caught up to ATI's multi-GPU strategy.
I suspect ATI is further ahead in what I like to call the Lego strategy (I think that name originally comes from AMD); we have seen the start of it with the HD 2xxx -> 3xxx -> 4xxx.
Scalable architecture.
The 3870 X2 was the first step, the 4870 X2 the second and the 4850 X2 the third; scaling and its issues are being narrowed down, and we might see lower and lower-end cards with setups like this.

They need a shared memory system to make this good. Nowadays a 4870 X2 or a GTX 295 has ~1 GB of video memory per GPU, but the total video memory usable in games is ~1 GB, no more than on the lowest video card.


----------



## W1zzard (May 5, 2009)

iamverysmart said:


> I thought Crossfire or SLi doesn't work that way, the memory bandwidth doesn't combine. Technically it has that much bandwidth but effectively, it's not doubled because of the way it works, each set of memory houses the same base data (textures and whatever) so each GPU can work on it's own.



that's correct


----------



## alwayssts (May 5, 2009)

Correct.

With a 512-bit interface you're looking at (bare minimum) a 400mm2+ (20x20) die.

Knowing nVidia, this part will be made so it can be shrunk to 32nm without losing its bus, which would mean at least a 500mm2 die.

Minus the bus (which is 2x), this is 4x g92 (which is 754M transistors) + whatever changes they made for MIMD (dual-issue MADD?) + DX11, which should clock in at ~3 billion(+?) transistors, by my guesstimate.

Comparatively speaking, next to rv740 (826M, 136mm2) and rv870 (1.25ishB?, 205mm2), we'd be talking a ~23x23 die, or 529mm2, which could realistically shrink to around 400mm2 @ 32nm.

IOW, this mother is gonna be big, and 40nm is not a good process for a big die.  I wouldn't expect this to see the light of day until 32nm, personally, although TSMC might get their problems worked out later this year, allowing it to happen.  Still, it will not be a good-yielding part, nor do I expect high clocks.  I figure 700 core / 1750 shader sounds doable, with 800/2000 on 32nm.

I believe the r800 generation will be 400sp/16tmu (low-end, 32nm), 800sp (mid-range, 32nm), 1200sp/48tmu (rv870 - 40nm) and 1600/64 (the rv870 replacement on 32nm).  That really makes the most sense, as 'rv890' could replace rv870, with rv870 essentially becoming the 3/4 product of yore after its release.  This would be 4-16 arrays: 100 shaders (or 20 if you like) and 4 tmus per array.  32nm should allow for roughly a 1/3 shrink over 40nm, which would allow these die sizes to stay comparable to the parts preceding them (rv740, rv870).

That's just an informed guess, but I think a realistic one.
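The die-size guesswork above reduces to simple area scaling; here is a rough sketch, where all inputs are the post's own estimates and real shrinks rarely reach the ideal square-law figure:

```python
def ideal_shrink_mm2(area_mm2: float, old_nm: int, new_nm: int) -> float:
    """Ideal optical shrink: die area scales with the square of the feature-size ratio."""
    return area_mm2 * (new_nm / old_nm) ** 2

die_40nm = 23 * 23  # the post's ~529 mm^2 guess for GT300 at 40 nm
print(round(ideal_shrink_mm2(die_40nm, 40, 32), 1))  # 338.6
```

The post's ~400 mm2 estimate at 32 nm is more conservative than this ideal figure, which fits the observation that shrinks in practice fall short of the square law.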


----------



## wolf2009 (May 5, 2009)

Imsochobo said:


> And yet i fail to see need of 256 gb/sec, after looking at 4770 crossfire, and 4870 crossfire, i have no reason to belive its the future.
> 
> at 65gb/sec you can do 1920x1200, at 120gb/sec you can do 2560x1600.
> So where do we need twice ? i think amd proved that this is not needed when they made the 4770 with 128 bit.
> ...



lol, forthcoming games are going to push more data through the pipeline, with richer graphics and physics, so you are going to need more bandwidth.


----------



## TheMailMan78 (May 5, 2009)

W1zzard said:


> yes, thats why i find much of this leaked info hard to believe. the larger your die, the worse your yields are. the market for those huge gpus is rather small anyway, nobody wants to pay 500-1000 bucks for their graphics card. especially when you can play all games fine with a $99 card



Ah yes, but will your e-penis be as large with a $99 card?


----------



## erocker (May 5, 2009)

TheMailMan78 said:


> Ah yes but with your e-penis be as large with a $99 card?



Expensive cards are good for the minority, however the minority is a bit overexposed here on a tech website.


----------



## vega22 (May 5, 2009)

But if NV do go nuts with the filters like the leaked 185.2x beta was showing at the start of the year, I imagine 64xAA would love that memory bandwidth.

I do hope it's true and that ATI have something lined up to counter, as I don't want to see NV sit on their ass for a year again just milking one core, like they did with the G80 and again with the G92.


----------



## Nkd (May 5, 2009)

wolf2009 said:


> lol, forthcoming games are going to be pushing more data through the pipeline with increasing graphics and physics so you are going to need more bandwidth.



I honestly don't think so; no game has used even 150 GB/s of bandwidth yet, not even Crysis at the highest resolution. Memory-bandwidth-wise, current-generation cards are more than sufficient; then again, all they need is faster GDDR5 instead of a 512-bit memory bus. But if the price is right, I ain't complaining.


----------



## TheMailMan78 (May 5, 2009)

erocker said:


> Expensive cards are good for the minority, however the minority is a bit overexposed here on a tech website.



Very much a fact. I sometimes forget my X2 4200+ would pretty much run everything out there right now, and I really don't "need" a 955 since my X3 pretty much runs everything now. Until I come here and realize I MOAR POWA!


----------



## ShinyG (May 5, 2009)

I think the key information in this "leaked info" is actually the missing info. What will the manufacturing process for this card be? What will the power consumption be? Etc, etc... I think the video card market has matured past the "who's got the bigger GPU" e-penis measuring contest! Well, I guess we just have to wait and see...


----------



## h3llb3nd4 (May 5, 2009)

LOLz
my awesome 8600GT runs WiC(High settings) at...





So such a card would be needed in my rig


----------



## TheMailMan78 (May 5, 2009)

ShinyG said:


> I think the key information in this "leaked info" is actually the missing info. What will the manufacturing process for this card be? What will the power consumption be? Etc, etc... I think the video card market has matured past the "who's got the bigger GPU" e-penis measuring contest! Well, I guess we just have to wait and see...



I have an HD2900 that says different


----------



## W1zzard (May 5, 2009)

TheMailMan78 said:


> Ah yes but with your e-penis be as large with a $99 card?



my epenis below, wanna contest?


----------



## El Fiendo (May 5, 2009)

My god! That'd be perfect for F@H and WCG contests. Among others, I'm sure, but...

Please, even if you put them up F/S and donate to charity or site upkeep. Damn! That's a lot of cards.

You'll probably keep them around for review purposes though, eh?


----------



## W1zzard (May 5, 2009)

El Fiendo said:


> My god! That'd be perfect for F@H and WCG contests. Among others I'm sure but...
> 
> Please, even if you put them up F/S and donate to charity or site upkeep. Damn! That's alot of cards.
> 
> Probably keep them around for review purposes though eh?



to retest with new drivers etc. btw, the kind folks at zotac were kind enough to donate a 30" 2560x1600 screen just for vga testing. results will be in the next full rebench. in other news: i now have a setup that can measure vga card power alone - no more whole system


----------



## alexp999 (May 5, 2009)

W1zzard said:


> in other news: i now have a setup that can measure vga card power alone - no more whole system



How??


----------



## El Fiendo (May 5, 2009)

Ah yea, I figured as much. That's still a very impressive box. And that's probably the first time I've said that to a man.


----------



## HTC (May 5, 2009)

W1zzard said:


> to retest with new drivers etc, btw. the kind folks at zotac were kind enough to donate a 30" 2560x1600 screen just for vga testing. results will be in the next full rebench, in other news: *i now have a setup that can measure vga card power alone - no more whole system*



MONSTER excellent


----------



## TheMailMan78 (May 5, 2009)

W1zzard said:


> my epenis below, wanna contest?
> 
> http://img.techpowerup.org/090505/Capture182.jpg



Fap, Fap, Fap, Fap.


----------



## W1zzard (May 5, 2009)

alexp999 said:


> How??



magic blue box, really


----------



## alexp999 (May 5, 2009)

Yeah, but how can it measure the power draw through the PCI-E slot?

Or does the card plug into an external box, with a cable going from that to the mobo?


----------



## TheMailMan78 (May 5, 2009)

W1zzard said:


> magic blue box, really



To hell with that I want to swan dive into your GPU collection.


----------



## El Fiendo (May 5, 2009)

I want to frolic in his GPU garden as well. It looks quite divine.


----------



## TheMailMan78 (May 5, 2009)

El Fiendo said:


> I want to frolic in his GPU garden as well. It looks quite divine.



10 years ago I was banging cheerleader leftovers and defending quarterbacks. Now I get excited over a box of fans and some silicon. I'm pathetic.


----------



## W1zzard (May 5, 2009)

yes it measures pcie bus power too


----------



## El Fiendo (May 5, 2009)

Yes, but these ones won't come back looking for child support now will they?


----------



## TheMailMan78 (May 5, 2009)

El Fiendo said:


> Yes, but these ones won't come back looking for child support now will they?



The other ones won't either. They never found the bodies. 

Edit: I had to put a laugh at the end of that one. Someone may think I'm serious.


----------



## DrPepper (May 5, 2009)

I just realised this core will be about the same size as G200, so it's pretty much the same bandwidth you would get if you stuck GDDR5 on a GTX 285.


----------



## El Fiendo (May 5, 2009)

TheMailMan78 said:


> The other ones won't ether. They never found the bodies.
> 
> Edit: I had to put a laugh at the end of that one. Someone may think I'm serious.



I have that problem all the time; some of my friends think I'm a child molester / hooker murderer. Heh.


@ Wiz: Very nice. This will help a lot in your reviews, which are already jam-packed with information.


----------



## TheMailMan78 (May 5, 2009)

W1zzard said:


> http://img.techpowerup.org/090505/Capture186.jpg
> 
> yes it measures pcie bus power too



Does that thing say "PWN" on the top?!


----------



## Selene (May 5, 2009)

DrPepper said:


> Especially since this might be on 55nm or even 40nm which would make yields even lower but this happened with the GTX280 and 260.



LOL, the 8800GTX/Ultra were 80nm, the G92s (8800GT/9800GTX) were 65nm, then the 9800GTX+/GTS250s and the later GTX260/275/280/285 and 295 were all 55nm. They shrank to make yields bigger; thus, when the GT300 cards come out @ 40nm, the yields will be even bigger and the cards will produce less heat.


----------



## wiak (May 5, 2009)

sounds fishy.
well, if you remember the R600 days, they put in a 512bit ringbus controller and it flopped.
GT300 will cost an arm, a leg and a head for us mortals; i bet NVIDIA will go the same route as their current GT200, and ATI will really beat them with their sleek, fast and CHEAP cards


----------



## mdm-adph (May 5, 2009)

wiak said:


> sounds fishy
> well if you remember R600 days when they put a 512bit ringbus controller and it flopped
> GT300 will cost a arm, leg and a head for us mortals, i bet NVIDIA will go the same rute as their current GT200, and ATI will realy beat them with their sleek, fast and CHEAP cards



I'm just waiting for the first ATI 40nm quad-cpu card to come out, so I can buy one and put it on my shelf next to my museum-quality Voodoo 5 6000.

And then the cycle will be complete.  Muhuhahaha


----------



## DrPepper (May 5, 2009)

Selene said:


> LOL, 8800GTX/Ultra were 80nm, G92s(8800GT/9800GTX)were 65nm, then 9800GTX+/GTS250s and the later GTX260/275/280/285 and 295 were all 55nm, they shrank to make yields bigger, thus when the GT300 cards come out @ 40nm the yields will be even bigger and produce less heat.



8800GTX was 90nm
GTX280 was 65nm

Indeed when a new process comes out yields will be bigger and when a new fabrication process comes out it usually has low yields until it is perfected. LOL


----------



## eidairaman1 (May 5, 2009)

mdm-adph said:


> I'm just waiting for the first ATI 40nm quad-cpu card to come out, so I can buy one and put it on my shelf next to my museum-quality Voodoo 5 6000.
> 
> And then the cycle will be complete.  Muhuhahaha



Did you know that some devs released fresh drivers for the Voodoo 2 and higher, I believe, as of 2008/2009?


----------



## a_ump (May 5, 2009)

lol, it cracks me up: every time ATI does decently, NVIDIA just thinks "NEED BIGGER CORE, MORE PROCESSORS, GRRR", lol. Though I really doubt those specifications; doubling the SPUs from one gen to the next, I mean, it's possible, but I would think it'd take longer than a year to fit twice as many SPUs in the same die size even with a smaller fab process, not including the other mumbo jumbo they'll be adding. And you can't say 512-bit will be a flop; they already proved its worth with the GTX 280/285. I figured they'd have gone with something like 384 SPUs and then done a die shrink for a smaller die this time around, but if these rumors are true, I guess NVIDIA wants to stomp AMD instead of having competition. I don't see the speculated 1200 SPUs on RV870 coming close to matching the performance of GT300's 512 SPUs, if those speculations turn out true.


----------



## Valdez (May 6, 2009)

GT300 delayed till 2010?


----------



## Selene (May 6, 2009)

I dont belive that guy , 90% of what he types is made up IMO.
Now I dont think we will see the GT300 any time soon, but even if it is mid 2010 it will be a sweet part no dought.


----------



## TheGuruStud (May 6, 2009)

Selene said:


> I dont belive that guy , 90% of what he types is made up IMO.
> Now I dont think we will see the GT300 any time soon, but even if it is mid 2010 it will be a sweet part no dought.



Must be a spell check drout...


----------



## Kursah (May 6, 2009)

Well, I hope the GT300 lives up to the hype. I've been very, very, very content with my GTX 260 since July 2008, and it just seems to get better with age. It folds like a champ, runs all my games maxed out, runs at decent temps, and I will probably grab a similar version of the next-gen card. It depends on what I feel is the best bang for the buck at the time; I was initially torn between an HD 4870 and a GTX 260 last summer, and to this day I still feel I made the right decision, for many reasons that suited my needs and preferences. I do hope they keep affordability in mind though; the GTX series started at a hefty price tag that wasn't worth the performance. I jumped on at a good time with a good deal + MIR for a GTX 260 at about $230 shipped, back when the average was $300+ shipped.

The GT300 GPU could be very promising, and I hope it does succeed, and I also hope ATI brings up some seriously good competition this next round too, gotta have it!


----------



## 15th Warlock (May 6, 2009)

Been waiting for this card for some time, bring it on!!!


----------



## hat (May 6, 2009)

Wooooooo, another bloodthirsty powerhouse. When will the computer industry move to lower power, similar performance components?


----------



## TheMailMan78 (May 6, 2009)

Will someone tell me if that says "PWN" on the top of Wiz's little blue box? The fate of the universe depends on the answer!


----------



## Mussels (May 6, 2009)

TheMailMan78 said:


> Will someone tell me if that says "PWN" on the top of Wiz's little blue box? The fate of the universe depends on the answer!



it says PWN!


----------



## TheMailMan78 (May 6, 2009)

Mussels said:


> it says PWN!



Thats awesome. I wish I had a computer component that said "PWN".


----------



## btarunr (May 6, 2009)

wiak said:


> sounds fishy
> well if you remember R600 days when they put a 512bit ringbus controller and it flopped
> GT300 will cost a arm, leg and a head for us mortals, i bet NVIDIA will go the same rute as their current GT200, and ATI will realy beat them with their sleek, fast and CHEAP cards



NVIDIA already has a decent 448-bit / 512-bit GDDR3 architecture that performs well. Without competition, yes, it will cost an arm and a leg. The GTX 280 started at $650 and crash-landed at $300 in less than a year, while the 8800 GTX remained above the $500 mark for over a year.


----------



## W1zzard (May 6, 2009)

TheMailMan78 said:


> Thats awesome. I wish I had a computer component that said "PWN".



it's photochopped. the box is semi transparent, i wanted to add some dramatization


----------



## Hayder_Master (May 6, 2009)

it is hard to believe all these specifications in one card, i'm waiting for a GPU-Z read


----------



## W1zzard (May 6, 2009)

hayder.master said:


> it is hard to believe all these specifications in one card, i'm waiting for a GPU-Z read



at this point in time gpuz can't read anything off gt300, so any screenshots are fake until you see in the gpuz changelogs that some gt300 support was added


----------



## sturla (May 6, 2009)

Back before AMD bought ATI, ATI was doing this massive processor that should be able to do all sorts of things; they had their own plans for a 'CUDA land' etc. Today they are more and more task-optimized, which gives them the competitive edge. They support the standards, like DirectX, OpenCL, Havok etc., and no 'CUDA land'. The professional cards like FirePro, -GL, -Stream, whatever, are for the professionals and optimized for that, and they pay a premium for that. With the first rumors about GT300, another castle twice the die size of RV870, they are asking for trouble. IMHO CUDA is BS in the mainstream, good as e-penis and nothing more. But I have seen videos of what it can do, and I'm impressed, though it could become NVIDIA's Titanic. I don't think that ATI is going to lie down or raise their hands in submission now that they have got them by the balls. There is a rumor that the last card this year is going to be made at GloFo on 32 nm, as an MCM, and it's going to raise the bar, but it's probably just wishful thinking.


----------



## HossHuge (May 6, 2009)

Valdez said:


> GT300 delayed till 2010?



don't trust anything Charlie Demerjian says.


----------



## El Fiendo (May 6, 2009)

HossHuge said:


> don't trust anything Charlie Demerjian says.



I know right? That's like having a KKK member report on news in Africa.


----------



## a_ump (May 6, 2009)

sturla said:


> Back before AMD bought ATI, ATI was doing this massive processor that should be able to do all sorts of things; they had their own plans for a 'CUDA land' etc. Today they are more and more task-optimized, which gives them the competitive edge. They support the standards, like DirectX, OpenCL, Havok etc., and no 'CUDA land'. The professional cards like FirePro, -GL, -Stream, whatever, are for the professionals and optimized for that, and they pay a premium for that. With the first rumors about GT300, another castle twice the die size of RV870, they are asking for trouble. IMHO CUDA is BS in the mainstream, good as e-penis and nothing more. But I have seen videos of what it can do, and I'm impressed, though it could become NVIDIA's Titanic. I don't think that ATI is going to lie down or raise their hands in submission now that they have got them by the balls. There is a rumor that the last card this year is going to be made at GloFo on 32 nm, as an MCM, and it's going to raise the bar, but it's probably just wishful thinking.



eh i'd love to know where your information comes from and what video you've seen of the GT300


----------



## W1zzard (May 6, 2009)

a_ump said:


> and what video you've seen of the GT300



i see cool videos at night when i sleep


----------



## a_ump (May 6, 2009)

lol i do too


----------



## h3llb3nd4 (May 6, 2009)

Really? so you guys can access sites from the future in your brains while sleeping?
god I want that ability


----------



## Mussels (May 6, 2009)

h3llb3nd4 said:


> Really? so you guys can access sites from the future in your brains while sleeping?
> god I want that ability



w1zzard's from the future. His dreams are sent back in time for us to beta test as games.


----------



## a_ump (May 6, 2009)

h3llb3nd4 said:


> Really? so you guys can access sites from the future in your brains while sleeping?
> god I want that ability



lol i just meant the awesome dreams part


----------



## Hayder_Master (May 6, 2009)

W1zzard said:


> at this point in time gpuz can't ready anything off gt300, so any screenshots are fake until you see in the gpuz changelogs that some gt300 support was added



yeah, sure, until i see a GPU-Z (your GPU-Z) screenshot. if you start thinking about adding GT300 support in GPU-Z from now, that's quick. for me, i think this card needs more time before release, maybe Q1 2010


----------



## Mussels (May 7, 2009)

hayder.master said:


> yeah, sure, until i see a GPU-Z (your GPU-Z) screenshot. if you start thinking about adding GT300 support in GPU-Z from now, that's quick. for me, i think this card needs more time before release, maybe Q1 2010



w1zz can't really add support until he gets his hands on one of the cards.


----------



## Hayder_Master (May 7, 2009)

Mussels said:


> w1zz can't really add support until he gets his hands on one of the cards.



ok, we'll wait and see until this card releases


----------

