# ATI Believes GeForce GTX 200 Will Be NVIDIA's Last Monolithic GPU.



## Polaris573 (Jun 17, 2008)

The head of ATI Technologies claims that the recently introduced NVIDIA GeForce GTX 200 GPU will be the last monolithic "megachip" because such chips are simply too expensive to manufacture. The statement was made after NVIDIA executives vowed to keep producing large single-chip GPUs. The G200 die measures about 600 mm², which means only about 97 dice fit on a 300 mm wafer that costs thousands of dollars. Earlier this year NVIDIA's chief scientist said that AMD is unable to develop a large monolithic graphics processor due to a lack of resources. However, Mr. Bergman said that smaller chips are easier to adopt for mobile computers.
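The dice-per-wafer figure above can be sanity-checked with a rough back-of-the-envelope estimate: wafer area divided by die area, minus a correction for partial dice lost around the wafer's rim. This is a simplified sketch (it assumes a square die and ignores scribe lines and defect yield), not a formula from the article:

```python
import math

def dice_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough dice-per-wafer estimate: wafer area divided by die area,
    minus an edge-loss term proportional to the wafer circumference."""
    radius = wafer_diameter_mm / 2
    wafer_area = math.pi * radius ** 2
    # Common approximation: subtract the partial dice lost around the rim.
    edge_loss = (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

print(dice_per_wafer(300, 600))
```

With these inputs the estimate lands in the low 90s, the same ballpark as the ~97 dice quoted in the article (the exact count depends on die aspect ratio and scribe-line width, which this sketch ignores).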

*View at TechPowerUp Main Site*


----------



## panchoman (Jun 17, 2008)

the war for who can build the biggest monolithic gpu? and then you just x2 the monolithic? lol....

i bet that both companies will have trouble producing big monolithic gpus.. but nvidia more, because the R700 is nowhere near the size of the G200


----------



## imperialreign (Jun 17, 2008)

I kinda partially agree, only on the fact that nVidia has been sandbagging their GPU tech for a while now, and I think they're at the furthest they can go with current architecture.

But, if it comes down to a resources debate - nVidia can most easily afford titanic productions


----------



## panchoman (Jun 17, 2008)

imperialreign said:


> I kinda partially agree, only on the fact that nVidia has been sandbagging their GPU tech for a while now, and I think they're at the furthest they can go with current architecture.
> 
> But, if it comes down to a resources debate - nVidia can most easily afford titanic productions



nvidia has many workarounds, like the pci freq. trick that they used, and their architecture has been the same since like geforce 4... and on top of that.. they get hurt here because, in order to keep up with amd's R700 core, they basically slapped 2 G92 cores into a new core and released it. it's like intel putting two dual-core dies in one package to make a quad core in order to keep up with amd's phenom "true" quad core, you know?


----------



## kenkickr (Jun 17, 2008)

I'm all for Nvidia's monolithic production!! I'll just go out and buy a couple A/C units and fans for my computer room during the summer, and never have to turn the heat on in the fall, winter, and early spring with one of their cards in the house, LOL


----------



## Megasty (Jun 17, 2008)

ATi is only saying that because they already know their X2s are gonna dust the G200s. Combine that with the cost to produce them & you have a no-brainer. It's like comparing a Viper & a Mack truck. They both have the same HP, but which one is _faster_?


----------



## DOM (Jun 17, 2008)

panchoman said:


> nvidia has many workarounds, like the pci freq. trick that they used, and their architecture has been the same since like geforce 4... and on top of that.. they get hurt here because, in order to keep up with amd's R700 core, they basically slapped 2 G92 cores into a new core and released it. *it's like intel putting two dual-core dies in one package to make a quad core in order to keep up with amd's phenom "true" quad core, you know?*


lol, amd still got their ass handed to them. it's a quad core, it has 4 cores on one cpu; "true" doesn't mean anything

but I want to see what amd has to offer in the gpu department


----------



## lemonadesoda (Jun 17, 2008)

If nVidia can do a fab shrink to reduce die size and to reduce power they have a clear winner.

THEREFORE, AMD are creating this "nVidia is a dinosaur" hype because, truth be told, AMD cannot compete with nVidia unless they go x2. And x2? Oh, that's the same total chip size as GTX200 (+/- 15%). But with a fab shrink (to the same fab scale as AMD), nVidia would be smaller. Really? Can that really be true? Smaller and same performance = nVidia architecture must be better.

So long as nVidia can manufacture with high yield, they are AOK.


----------



## DaJMasta (Jun 17, 2008)

I agree that GPUs of this die size will seldom be seen again, because they cost so much to make.  But the transistor count will continue to rise as the manufacturing process gets smaller.


----------



## PVTCaboose1337 (Jun 17, 2008)

I think that AMD is right, NVIDIA is not being progressive, but they are getting the most out of a GPU technology...  and it seems to be working.


----------



## dalekdukesboy (Jun 17, 2008)

*I have to reply to this...*



panchoman said:


> nvidia has many workarounds, like the pci freq. trick that they used, and their architecture has been the same since like geforce 4... and on top of that.. they get hurt here because, in order to keep up with amd's R700 core, they basically slapped 2 G92 cores into a new core and released it. it's like intel putting two dual-core dies in one package to make a quad core in order to keep up with amd's phenom "true" quad core, you know?



Well, that may be all fine and good and you may have a valid point... but bottom line, what performs better?  The nvidia G92 or ATI's R700?  The Intel Core 2 duo/quad, or the Phenom?  I understand that from a purely theoretical/architectural standpoint ati/amd could be more advanced, but no one can objectively tell me the phenom or ati's 3870/3850 can even keep up with, never mind beat, Nvidia's G92 or Intel's current cpu lineup.


----------



## Rurouni Strife (Jun 17, 2008)

My thoughts:
GPU's will eventually end up kinda like dual/quad core CPUs.  You'll have 2 on one die.  When? Who knows, but it seems that AMD is kinda working in that direction.  However, people complained when the 7950GX2 came out because "it took 2 cards to beat ATI's 1 (1950XTX)".  They did it again, but to a lesser degree, for the 3870X2, and it'll become more accepted as it goes on, especially since AMD has said "no more mega GPUs".  Part of that is they don't wanna f up with another 2900 and they don't quite have the cash, but they are also thinking $$$.  Sell more high-performing midrange parts.  That's where all the money is made.  And we all know AMD needs cash.


----------



## mullered07 (Jun 18, 2008)

panchoman said:


> nvidia has many workarounds, like the pci freq. trick that they used, and their architecture has been the same since like geforce 4... and on top of that.. they get hurt here because, in order to keep up with amd's R700 core, they basically slapped 2 G92 cores into a new core and released it. it's like intel putting two dual-core dies in one package to make a quad core in order to keep up with amd's phenom "true" quad core, you know?



not exactly "to keep up with phenom", since the Q series was released like a year before and still pwns phenom. who actually gives a shit if it's "true" quad or not? it does the job, and better than amd.

i don't understand what you mean by workaround. nvidia has handed amd their ass for the last 2 gens; if they're not even trying and just making the most of old technology, then god help amd if they come up with a new architecture. ati died the day they were bought by amd :shadedshu


----------



## imperialreign (Jun 18, 2008)

Rurouni Strife said:


> My thoughts:
> GPU's will eventually end up kinda like dual/quad core CPUs.  You'll have 2 on one die.  When? Who knows, but it seems that AMD is kinda working in that direction.  However, people complained when the 7950GX2 came out because "it took 2 cards to beat ATI's 1 (1950XTX)".  They did it again, but to a lesser degree, for the 3870X2, and it'll become more accepted as it goes on, especially since AMD has said "no more mega GPUs".  Part of that is they don't wanna f up with another 2900 and they don't quite have the cash, but they are also thinking $$$.  Sell more high-performing midrange parts.  That's where all the money is made.  And we all know AMD needs cash.



I kinda agree here as well - TBH, I think multi-GPU setups will be the future over the monolith designs . . . two or more efficient GPUs can work just as effectively and efficiently, if not better, than one megaPU.  With AMD behind ATI at this point, I definitely see that the move towards this implementation is already there.

I'm sure that if indeed multi-core GPUs come marching out of ATI, we'll be seeing a lot of kicking and screaming from the green camp that "it's still 2 GPUs to our 1!!"  Which, IMO, I don't believe to be the case.  If one chip marches out that has 2 cores on one die, it's still 1 GPU.  We don't go around saying "my Q6600 is 4 CPUs, man!"




Sure, a lot of this progress on ATI/AMD's part has got to be dictated by cost and resources; but I think this is one area where the red camp will be pushing new technology that nVidia will sooner or later have to accept.  nVidia can go and counter with a whole new megaPU pushing uber-1337 processing capabilities, and ATI could just say "alright, we'll add 2 more cores to our current design and match you again."  nVidia could go to the drawing boards and redesign yet another 1337 GPU, and ATI could again counter with "alright, we'll add another 3 cores to our current design and take the lead."

IMHO, the smaller package will be far more cost-efficient for both manufacturer and consumer years and years down the road.


----------



## WarEagleAU (Jun 18, 2008)

I have to kind of agree. But they will continue that if ATI doesn't do something to counter it. I don't think sales of the GT200 line will be as high as NV hopes. As prices come down, though, it will... but until then....


----------



## imperialreign (Jun 18, 2008)

WarEagleAU said:


> I have to kind of agree. But they will continue that if ATI doesn't do something to counter it. I don't think sales of the GT200 line will be as high as NV hopes. As prices come down, though, it will... but until then....



I agree as well - but I think we're on the verge of seeing the first dual-core GPU.  Initial rumors of the R700 hinted at the possibility, but that seems to have turned out a negative (although we still have yet to see concrete specs on the 4870X2).  With the advent of Fusion, though, I think they're further paving the way.  R800 could potentially deliver the first dual-core GPU, whenever the HD5000 series is released (probably next year), and if so, in the series after that we could potentially see every card in the lineup (except for the low-end cards) sporting dual-core GPUs.

TBH, I don't foresee nVidia having the ability to counter that just yet.

This is all speculation, though, and it's a ways off in the future anyhow.  We'll just have to see.


----------



## yogurt_21 (Jun 18, 2008)

lemonadesoda said:


> THEREFORE, AMD are creating this "nVidia is a dinosaur" hype because, truth be told, AMD cannot compete with nVidia unless they go x2.



where in the article do you see that ati says nvidia is a dinosaur? they are merely stating that, based on the performance vs cost to produce of the gtx280, it will likely be the last of its kind, considering the 9800gx2 was cheaper to produce and offers similar if not better performance.

it's not like nvidia can't simply go dual or even quad, seeing as they did buy up 3dfx. it would make more sense, as in the end the uber-performance seekers are going to sli those monolithic gpus anyway. so why not make a cheaper variant that can be a dual: those seeking uber performance can buy the x2, while those seeking better price/performance can be accommodated as well. the geforce 9 series did this quite well.

and I seriously don't get all the comments about the x2's. I mean, when the athlon 64 x2's came out they didn't say "oh, for amd to be able to beat the pentium 4 they had to go dual". dual was a means of providing more processing power without increasing clock speed or changing architecture. just because a gpu or cpu has more than one core doesn't mean it's an inferior design. it's just a different way of meeting the same performance demand.

if anything, the argument against duals should be the return from the second core, as it is in the cpu market. but if ati can make a dual that beats nvidia's single for the same or cheaper cost, that's good business, not inferior design.


----------



## evil bill (Jun 18, 2008)

I once read about the Nvidia v ATI "battle" being compared to a muscle car like a Viper or Mustang against a Ferrari. Nvidia's stuff is modern but not overly sophisticated, with its roots in older technologies, whereas ATI/AMD tends to be pretty high-tech and cutting-edge (e.g. the ring-bus memory in the HD2900). You therefore get the fans of either camp decrying how the other arrives at their performance level, regardless of how well it performs.

ATI's problem is that as soon as its technological "higher ground" fails to best the competition, it puts itself under serious pressure.

Still, hopefully the internal distractions of the ATI/AMD merger are in the past and they can concentrate on doing their stuff and keep the market moving. I agree that Nvidia aren't being pushed hard enough by them and are probably sandbagging tech. Necessity is the mother of invention, and unless they have a strong competitor they will be tempted to make cost savings by stretching old tech for longer.


----------



## pentastar111 (Jun 18, 2008)

Even if nVidia's cards are a little faster... I'll probably still go ahead as planned with my next build being an all-AMD rig... $700 for a vid card is just tooooooooo much money in my opinion.


----------



## wolf (Jun 18, 2008)

this titanic GPU may not fare that well now, but it falls right into the category of future proofing. it, like the G80GTX/Ultra, will stand the test of time, especially when the 55nm GT200b comes out with better yields/higher clocks.


----------



## DarkMatter (Jun 18, 2008)

I completely disagree on the single-die multi-core GPU thing. The whole idea of using multiple GPUs is to reduce die size. Doing a dual-core GPU on a single die is exactly the same as making a double-sized chip, but even worse IMO. Take into account that GPUs are already multi-processor devices, in which the cores are tied to a crossbar bus for communication. Look at the GT200 diagram:

http://techreport.com/articles.x/14934

In the image the connections are missing, but it suffices to say they are all connected to the "same bus". A dual-core GPU would be exactly the same, because GPUs are already a bunch of parallel processors, but with two separate buses, so it'd need an external one and that would only add latency. What's the point of doing that? Yields are not going to be higher, as in both cases you have the same number of processors and the same silicon that would need to go (and work) together.

In a single "core" GPU, if one unit fails you can just disable it and sell it as a lower model (8800 GT, G80 GTS, HD2900GT, GTX 260...), but in a dual "core" GPU the whole core would need to be disabled, or you would need to disable another unit in the other "core" (most probably) to keep symmetry. In any case you lose more than with the single "core" approach, and you don't gain anything because the chip is the same size.

In the case of CPUs, multi-core does make sense because you can't cut down/disable parts of them, except the cache. If one unit is broken you have to throw away the whole core, and in the case that one of them is "defective" (it's slower, only half the cache works...) you just cut them off and sell them separately. With CPUs it's a matter of "does it work, and if it does, at what speed?"; with GPUs it's "how many units work?".
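The yield argument above (bigger dice mean fewer fully working chips, unless broken units can be disabled and sold as lower models) can be illustrated with the classic Poisson defect-yield model. This is a standard textbook sketch, not anything from the thread, and the defect density used is an assumed figure purely for illustration:

```python
import math

def die_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Poisson defect model: the chance a die has zero killer defects
    falls off exponentially with die area."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

# Assumed defect density, purely for illustration (not a real fab figure).
d0 = 0.002  # killer defects per mm^2

for area in (100, 300, 600):
    print(f"{area} mm^2 die: {die_yield(area, d0):.0%} fully working")
```

Under this assumed defect density, a 600 mm² die comes out roughly 30% fully functional versus roughly 82% for a 100 mm² die, which is exactly why salvage parts like the GTX 260 (a GT200 with units disabled) exist.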


----------



## Rurouni Strife (Jun 18, 2008)

Can't disagree with you, DarkMatter, you make perfect sense.  Didn't think about that.  Perhaps as dies get smaller, the way GPUs talk to each other can be improved via a type of HT link or whatever.  Then you get shared memory, like what is rumored for R700 (don't know if that's true).

as for evil bill - the ring bus actually came with the X1K series of cards, just improved for R/RV600


----------



## WarEagleAU (Jun 18, 2008)

True Imperial.

@yogurt. The logical next step for ATI and eventually NV would be dual gpu cores. In a sense it would be like the X2s but a bit different. Whereas AMD/ATI may not want to go uber high core like Nvidia, they may break in on the dual core gpu. Kind of awesome to say the least.


----------



## hat (Jun 18, 2008)

You can only make transistors so small. Their current philosophy seems to be "moar transistors, who cares about moar bigger gpus?"

These ridiculously large gpus are going to put out a ridiculous amount of heat, and make vga coolers ridiculously expensive due to the ridiculous size of the heatsink base needed to cool the ridiculously large gpu.


----------



## DanishDevil (Jun 18, 2008)

That's one thing I love about die shrinks.  My EK full cover block cools both my 3870x2's GPUs lower than my E8500's cores at stock.  I bet the GTX280 puts out quite a lot of heat, though for so much power in a single, larger chip.


----------



## Megasty (Jun 18, 2008)

DarkMatter said:


> I completely disagree on the single-die multi-core GPU thing. The whole idea of using multiple GPUs is to reduce die size. Doing a dual-core GPU on a single die is exactly the same as making a double-sized chip, but even worse IMO. Take into account that GPUs are already multi-processor devices, in which the cores are tied to a crossbar bus for communication. Look at the GT200 diagram:
> 
> http://techreport.com/articles.x/14934
> 
> In the image the connections are missing, but it suffices to say they are all connected to the "same bus". A dual-core GPU would be exactly the same, because GPUs are already a bunch of parallel processors, but with two separate buses, so it'd need an external one and that would only add latency. What's the point of doing that? Yields are not going to be higher, as in both cases you have the same number of processors and the same silicon that would need to go (and work) together.
> 
> In a single "core" GPU, if one unit fails you can just disable it and sell it as a lower model (8800 GT, G80 GTS, HD2900GT, GTX 260...), but in a dual "core" GPU the whole core would need to be disabled, or you would need to disable another unit in the other "core" (most probably) to keep symmetry. In any case you lose more than with the single "core" approach, and you don't gain anything because the chip is the same size.
> 
> In the case of CPUs, multi-core does make sense because you can't cut down/disable parts of them, except the cache. If one unit is broken you have to throw away the whole core, and in the case that one of them is "defective" (it's slower, only half the cache works...) you just cut them off and sell them separately. With CPUs it's a matter of "does it work, and if it does, at what speed?"; with GPUs it's "how many units work?".



I was thinking the same thing. Given the size of the present ATi chips, they could be combined & still retain a reasonably sized die, but the latency between the 'main' cache & 'sub' cache would be so high that they might as well leave them apart. It would be fine if they increased the bus, but then you would end up with a power-hungry monster. If the R800 is a multi-core then so be it, but we're gonna need a power plant for the thing if it's not going to be just another experiment like the R600.


----------



## imperialreign (Jun 18, 2008)

DarkMatter said:


> I completely disagree on the single-die multi-core GPU thing. The whole idea of using multiple GPUs is to reduce die size. Doing a dual-core GPU on a single die is exactly the same as making a double-sized chip, but even worse IMO. Take into account that GPUs are already multi-processor devices, in which the cores are tied to a crossbar bus for communication. Look at the GT200 diagram:
> 
> http://techreport.com/articles.x/14934
> 
> In the image the connections are missing, but it suffices to say they are all connected to the "same bus". A dual-core GPU would be exactly the same, because GPUs are already a bunch of parallel processors, but with two separate buses, so it'd need an external one and that would only add latency. What's the point of doing that? Yields are not going to be higher, as in both cases you have the same number of processors and the same silicon that would need to go (and work) together.
> 
> In a single "core" GPU, if one unit fails you can just disable it and sell it as a lower model (8800 GT, G80 GTS, HD2900GT, GTX 260...), but in a dual "core" GPU the whole core would need to be disabled, or you would need to disable another unit in the other "core" (most probably) to keep symmetry. In any case you lose more than with the single "core" approach, and you don't gain anything because the chip is the same size.
> 
> In the case of CPUs, multi-core does make sense because you can't cut down/disable parts of them, except the cache. If one unit is broken you have to throw away the whole core, and in the case that one of them is "defective" (it's slower, only half the cache works...) you just cut them off and sell them separately. With CPUs it's a matter of "does it work, and if it does, at what speed?"; with GPUs it's "how many units work?".




I see your point, and I slightly agree as well . . . but that's looking at it with current technology and current fabrication means.

If AMD/ATI can develop a more sound fabrication process, or reduce the number of dead cores, it would make it viable, IMO.

I'm just keeping in mind that over the last 6+ months, AMD has been making contact with some reputable companies who've helped them before, and has also taken on quite a few new personnel who are very well respected and amongst the top of their fields.

The Fusion itself is, IMO, a good starting point, and AMD proving to themselves they can do it.  Integrating a GPU core like that wouldn't be resource-friendly if their fabrication process left a lot of dead fish in the barrel - they would be losing money just in trying to design such an architecture if fabrication would shoot them in the foot.

Perhaps it's possible they've come up with a way to stitch two cores together where, if one is dead from fabrication, it doesn't cripple the chip, and the GPU can be slapped on a lower-end card and shipped.  Can't really be sure right now, as AMD keeps throwing out one surprise after another . . . perhaps this will be the one they hit the home run with?


----------



## [I.R.A]_FBi (Jun 18, 2008)

you guys are making the green giant seem like a green dwarf.


----------



## DarkMatter (Jun 18, 2008)

Megasty said:


> I was thinking the same thing. Given the size of the present ATi chips, they could be combined & still retain a reasonably sized die, but the latency between the 'main' cache & 'sub' cache would be so high that they might as well leave them apart. It would be fine if they increased the bus, but then you would end up with a power-hungry monster. If the R800 is a multi-core then so be it, but we're gonna need a power plant for the thing if it's not going to be just another experiment like the R600.



Well, we don't know with certainty the transistor count of the RV770, but it's above 800 million, so a dual core would be more than 1600 million. That's more than GT200, but I don't think it would be a big problem.

On the other hand, the problem with GT200 is not transistor count, but die size, the fact they have done it in 65 nm. In 55 nm the chip would probably be around 400 cm2 which is not that high really. 

Another problem when we compare GT200's size against the performance it delivers is that they have added those 16 KB caches in the shader processors, which are not needed for any released game or benchmark. Applications will need to be programmed to use them. As it stands now, GT200 has almost 0.5 MB of cache with zero benefit. 4 MB of cache in Core 2 is pretty much half the die size; in GT200 it's a lot less than that, but still a lot from a die-size/gaming-performance point of view. And to that you have to add the L1 caches, which are probably double the size of G92's, with zero benefit again. It's here, and in the FP64 shaders, that Nvidia has used a lot of silicon to future-proof the architecture, but we don't see the fruits yet.

I think that on GPUs a bigger single-core chip is the key to performance, and multi-GPU is the key to profitability once you reach a certain point in the fab process. The best result is probably something in the middle: I mean not going with more than two GPUs, and keeping the chips as big as the fab process allows. As I explained above, I don't think multi-core GPUs have any advantage over bigger chips.



imperialreign said:


> I see your point, and I slightly agree as well . . . but that's looking at it with current technology and current fabrication means.
> 
> If AMD/ATI can develop a more sound fabrication process, or reduce the number of dead cores, it would make it viable, IMO.
> 
> ...



_That would open the door to both bigger chips and, as you say, multi-core chips. Again, I don't see any advantage in multi-core GPUs._


_And what's the difference between that and what they do today? Well what Nvidia does today, as Ati is not doing that with RV670 and 770, but they did in the past._


----------



## candle_86 (Jun 18, 2008)

panchoman said:


> nvidia has many workarounds, like the pci freq. trick that they used, and their architecture has been the same since like geforce 4... and on top of that.. they get hurt here because, in order to keep up with amd's R700 core, they basically slapped 2 G92 cores into a new core and released it. it's like intel putting two dual-core dies in one package to make a quad core in order to keep up with amd's phenom "true" quad core, you know?



how do you get that? it has 40 rops while the G92 has 16; even that discredits the idea of a dual G92 under there.


----------



## tkpenalty (Jun 18, 2008)

lemonadesoda said:


> If nVidia can do a fab shrink to reduce die size and to reduce power they have a clear winner.
> 
> THEREFORE, AMD are creating this "nVidia is a dinosaur" hype, because, truth be told, AMD cannot compete with nVidia unless they go x2. And x2? Oh, thats the same total chip size as GTX200 (+/- 15%). But with a fab shrink (to same fab scale as AMD), nVidia would be smaller. Really? Can that really be true? Smaller and same performance = nVidia architecture must be better.
> 
> So long as nVidia can manufacture with high yield, they are AOK.



Even the CEO of nvidia admitted that die shrinking will do shit-all in terms of cooling; the effect of a die shrink from 65nm to 45nm is not that big for that many transistors.

AMD creating this "nvidia is a dinosaur" hype is viable.

If you have that much heat output on one single core, the cooling would be expensive to manufacture. With 200W on one core, the cooling system would have to transfer the heat away ASAP, while 2x100W cores would fare better, with the heat output being spread out.

Realise that a larger core means a far more delicate card, with the chip itself requiring more BGA solder balls; it means the card cannot take much stress before the BGA solder balls falter.

AMD is saying that if they keep doing what they are doing now, they will not need to completely redesign an architecture. It doesn't matter if they barely spend anything on R&D; in the end the consumer benefits from lower prices, and we are the consumer, remember.

AMD can decide to stack two or even three cores, provided they make the whole card function as one GPU (instead of the HD3870X2-style two cards on a software/hardware level), if the performance and price are good.



Rurouni Strife said:


> My thoughts:
> GPU's will eventually end up kinda like dual/quad core CPUs.  You'll have 2 on one Die.  When? who knows, but it seems that AMD is kinda working in that direction.  However, people complained when the 7950GX2 came out because "it took 2 cards to beat ATI's 1 (1950XTX)".  They did it again, but to a lesser degree for the 3870X2, and it'll become more accepted as it goes on, espically since AMD has said "no more mega GPUs".  Part of that is they don't wanna f up with another 2900 and they dont quite have the cash, but they are also thinking $$$.  Sell more high performing mid range parts.  That's where all the money is made.  And we all know AMD needs cash.



Just correcting you: 2 on one die is what we have atm anyway. GPUs are effectively a collection of processors on one die. AMD is trying not to put dies together, as they know that die shrinks below 65~45nm do not really help in terms of heat output, and are therefore splitting the heat output. As I mentioned before, a larger die means more R&D effort and more expense to manufacture.


----------



## btarunr (Jun 18, 2008)

At least NVidia came this far. ATI hit its limit way back with the R580+. The X1950 XTX was the last "Mega Chip" ATI made. Of course, the R600 was their next megachip, but it ended up being a cheeseburger.


----------



## tkpenalty (Jun 18, 2008)

btarunr said:


> At least NVidia came this far. ATI hit its limit way back with the R580+. The X1950 XTX was the last "Mega Chip" ATI made. Of course, the R600 was their next megachip, but it ended up being a cheeseburger.



Instead of the word cheeseburger I think you should use something that tastes vile. Cheeseburgers are successful.


----------



## btarunr (Jun 18, 2008)

tkpenalty said:


> Instead of the word cheeseburger I think you should use something that tastes vile. Cheeseburgers are successful.



By "cheeseburger" I was highlighting 'fattening', 'not as nutritious as it should be', 'unhealthy diet'.  Popularity isn't indicative of a better product. ATI fans will continue to buy just about anything they put up. Though I'm now beginning to admire the HD3870 X2.


----------



## Nyte (Jun 18, 2008)

One still has to wonder though if NVIDIA has already thought ahead and designed a next-gen GPU with a next-gen architecture... just waiting for the right moment to unleash it.


----------



## laszlo (Jun 18, 2008)

DarkMatter said:


> On the other hand, the problem with GT200 is not transistor count, but die size, the fact they have done it in 65 nm. In 55 nm the chip would probably be around 400 cm2 which is not that high really.





The die size of gt200 is 576mm2 on 65nm  so in 55nm  160000 mm2 ?


----------



## aj28 (Jun 18, 2008)

wolf said:


> this titanic GPU may not fare that well now, but it falls right into the category of future proofing. it, like the G80GTX/Ultra, will stand the test of time, especially when the 55nm GT200b comes out with better yields/higher clocks.



Not saying anything, but umm... from my understanding anyway, die shrinks generally cause worse yields and a whole mess of manufacturing issues in the short run, depending of course on the core being shrunk. Again, not an engineer or anything, but shrinking the GT200, being the behemoth that it is, will not likely be an easy task. Hell, if it were easy we'd have 45nm Phenoms by now, and Intel wouldn't bother with their 65nm line either, now that they've already got the tech pretty well down. Correct me if I'm wrong...


----------



## DarkMatter (Jun 18, 2008)

laszlo said:


> The die size of gt200 is 576mm2 on 65nm  so in 55nm  160000 mm2 ?



 
Yeah, I meant 400 mm2
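The corrected figure follows from simple optical-shrink arithmetic: linear dimensions scale with the node ratio, so area scales with its square. This is an idealized estimate (real shrinks rarely scale perfectly, and node names are only nominal):

```python
def shrunk_area(area_mm2: float, old_node_nm: float, new_node_nm: float) -> float:
    """Ideal optical-shrink estimate: die area scales with the square
    of the linear feature-size ratio."""
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

# GT200: ~576 mm^2 at 65 nm, shrunk to 55 nm (the GT200b)
print(round(shrunk_area(576, 65, 55)))
```

576 mm² × (55/65)² ≈ 412 mm², consistent with the roughly 400 mm² figure discussed above.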


----------



## Voyager (Jun 18, 2008)

AMD more than 1 Teraflop


----------



## btarunr (Jun 18, 2008)

Voyager said:


> AMD more than 1 Teraflop



The news is as stale as...






...that.


----------



## candle_86 (Jun 18, 2008)

yea, but raw power means diddly; the R600 had twice the computational units yet lagged behind. i still await benchmarks


----------



## Easy Rhino (Jun 18, 2008)

i love AMD, but come on. why would they go and say something like that? nvidia has proven time and again that they can put out awesome cards and make a ton of money doing it. meanwhile amd's stock is in the toilet and they aren't doing anything special to keep up with nvidia. given the past 2 years' history between the 2 groups, who would you put your money on in this situation? the answer is nvidia.


----------



## newconroer (Jun 18, 2008)

Even if the statement is true it still falls in Nvidia's favor either way.

They have the resources to go 'smaller' if need be. ATi has less flexibility.


----------



## tkpenalty (Jun 18, 2008)

btarunr said:


> The news is as stale as...
> 
> 
> 
> ...



LOL. That would make Zek cry


----------



## DanishDevil (Jun 18, 2008)

btarunr said:


> The news is as stale as...
> 
> 
> 
> ...



I just woke up my entire family because I fell out of my chair and knocked over my lamp at 3AM when I read that


----------



## vega22 (Jun 18, 2008)

ati claims nvidia is using dinosaur tech, love it.

it's the most powerful single gpu ever; of course ati will try to dull the shine on it.

i recall all the ati fanboys crying foul when nv did the 7950GX2, but now it's cool to put 2 gpus on 1 card to compete?

wait till the 280gtx gets a die shrink and they slap 2 on 1 card. can you say 4870x4 needed to compete?


----------



## btarunr (Jun 18, 2008)

marsey99 said:


> ati claims nvidia is using dinosaur tech, love it.
> 
> its the most powerful single gpu ever, of course ati will try and dull the shine on it.
> 
> ...



Even if you do shrink the G200 to 55nm (and get a ~4 sq.cm die), its power and thermal properties won't allow an X2. Too much peak power consumption compared to the G92 (128 SP, 600 MHz), which did allow it. Watch how the GTX 280 uses a 6 + 8 pin input. How far do you think a die shrink would go to reduce that? Not to forget, there's something funny as to why NV isn't adopting newer memory standards (that are touted to be energy efficient). (1st guess: stick with GDDR3 to cut mfg costs, since it already takes $120 to make the GPU alone.) Ceiling Cat knows what... but I don't understand what "meow" actually means... it means a lot of things
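The "~4 sq.cm" figure follows from ideal optical-shrink scaling, where die area shrinks with the square of the process-node ratio. A quick sketch of that arithmetic (ballpark only; real shrinks scale worse, since pads and analog blocks don't shrink with logic):

```python
def shrunk_area(area_mm2: float, old_nm: float, new_nm: float) -> float:
    """Ideal area after an optical shrink from old_nm to new_nm.
    Real-world shrinks usually land somewhat above this ideal."""
    return area_mm2 * (new_nm / old_nm) ** 2

g200_65nm = 576.0  # reported GT200 die size at 65 nm, in mm^2
print(round(shrunk_area(g200_65nm, 65, 55)))  # ~412 mm^2, i.e. about 4 sq.cm
```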


----------



## tkpenalty (Jun 18, 2008)

In the end it DOES NOT MATTER how AMD achieves their performance. 

The 7950GX2 is an invalid comparison, as it could not function on every system: it was seen at the driver level as two cards, so an SLI board was needed. You can't compare the 4870X2 to a 7950GX2; it's apples and oranges. The 4870X2 appears to the system as only ONE card, not two, and CF doesn't need to be enabled (therefore the usual multi-GPU performance problems go out the window). Moreover, the way the card uses memory is much like the C2Ds: two cores, shared L2.


----------



## Megasty (Jun 18, 2008)

The sheer size of the G200 won't allow for a GX2 or whatever. The heat that 2 of those things produce would burn each other out. Why in the hell would NV put 2 of them on a card when it costs an arm & a leg just to make one? The price/performance ratio for this card is BS too, when $400 worth of cards, whether it be the 9800GX2 or 2 4850s, is not only in the same league as the beast but allegedly beats it. The G200b won't be any different either. NV may be putting all their cash into this giant chip ATM, but that doesn't mean they're going to do anything stupid with it.

If the 4870x2 & the 4850x2 are both faster than the GTX280 & cost a whole lot less, then I don't see what the problem is, except for people crying about the 2-GPU mess. As long as it's fast & doesn't cost a bagillion bucks, I'm game.


----------



## DarkMatter (Jun 18, 2008)

I would like to know on which facts you guys are basing your claims that a die shrink won't do anything to lower heat output and power consumption. It has always helped A LOT. It is helping Ati and it surely will help Nvidia. Thinking that the lower power consumption of RV670 and RV770 is based on architecture enhancements alone is naive. I'm talking about peak power, in comparison to what R600 was against its competition; idle power WAS improved indeed, and so it has been on GT200.


----------



## tkpenalty (Jun 18, 2008)

Megasty said:


> The sheer size of the G200 won't allow for an GX2 or whatever. The heat that 2 of those things produce will burn each other out. Why in the hell would NV put 2 of them in a card when it costs an arm & a leg just to make one. The PP ratio for this this card is BS too when $400 worth of cards, whether it be the 9800GX2 or 2 4850s, are not only in the same league as the beast but allegedly beats it. The G200b won't be any different either. NV may be putting all their cash in this giant chip ATM but that doesn't mean that they're going to do anything stupid with it.
> 
> If the 4870x2 & the 4850x2 are both faster than the GTX280 & costs a whole lot less then I don't see what the problem is except for people crying about the 2 GPU mess. As long as its fast & DON'T cost a bagillion bucks I'm game.



I agree with your view.

GT200s, well, Nvidia are shaving down their profits just to get these things to sell. AMD on the other hand enjoy not having to reinforce their cards or put high-end air cooling on; they are way better off. If the 4850s sell well, as well as the rest of the RV770s, the GT200 looks like an awful flop.


----------



## btarunr (Jun 18, 2008)

DarkMatter said:


> I would like to know in which facts are you guys basing your claims that a die shrink won't do anything to help lowering heat output and power consumption?


A 65nm single GPU requires a 6 + 8 pin power input (obviously for higher power draw at peak). How much of that can a die shrink to 55nm reduce? Enough to make a GX2? Without, say, three 6-pin connectors?


----------



## Megasty (Jun 18, 2008)

DarkMatter said:


> I would like to know in which facts are you guys basing your claims that a die shrink won't do anything to help lowering heat output and power consumption? It has always helped A LOT. It is helping Ati and surely will help Nvidia. Thinking that the lower power consumption of RV670 and RV770 is based on architecture ehancements alone is naive. I'm talking about peak power, in comparison to what R600 was, idle power WAS improved indeed, and so has GT200.



Of course it'll lower the heat & power. The only point was that it still wouldn't allow for a GX2, not to mention that the card would cost around $1200  :shadedshu 

However, a faster & cheaper 400mm² die does have EVERY advantage over a slower, more costly 576mm² die.


----------



## Deleted member 24505 (Jun 18, 2008)

I bet there aint gonna be a gtx200 mobile chip


----------



## tkpenalty (Jun 18, 2008)

DarkMatter said:


> I would like to know in which facts are you guys basing your claims that a die shrink won't do anything to help lowering heat output and power consumption? It has always helped A LOT. It is helping Ati and surely will help Nvidia. Thinking that the lower power consumption of RV670 and RV770 is based on architecture ehancements alone is naive. I'm talking about peak power, in comparison to what R600 was compared to competition, idle power WAS improved indeed, and so has GT200.



90nm > 65nm (R600 to RV670) was a HUGE leap in transistor-size reduction; moreover, remember the RV670 had a massive chunk of the chip removed: half of the 512-bit memory controller, effectively. 

Die shrinks do something, but below around 65nm the usefulness of die shrinking isn't really significant. Nvidia's CEO admitted that die-shrinking the GTX280 wouldn't help its extreme heat output a lot. It's fairly reasonable as to why: transistor count is more of a factor. In both cases, G80 > G92 and R600 > RV670, the gains were due to the cutting down of the memory controller. 

By the way, the reason AMD's cards use more power is simple: their cards use more phases than Nvidia's. More phases = more power used, but each phase is subject to less current, as well as generating less heat.


----------



## newconroer (Jun 18, 2008)

tkpenalty said:


> I agree with your view.
> 
> GT200s, well Nvidia are shearing down their profits just to get these things to sell, *AMD on the otherhand enjoy not having to reinforce their cards and put high end air cooling on-they are way better off.* If these 4850s sell well, as well as the RV770, the GT200s look like an awful flop.




Ya, because ATi's been looooooving the way things have turned out the last two and a half years.

Yep, they don't have to put 'high end air cooling' on their products, what a wonderful relief for them!

~


----------



## Megasty (Jun 18, 2008)

newconroer said:


> Ya because ATi's been looooooving the way things have turned out the last two and a half years.
> 
> Yep, they don't have put 'high end air cooling' on their products, what a wonderful relief for them!
> 
> ~



Not to be negative or anything, but none of the stock cards from NV or ATi have high-end cooling fans. The stock casing only restricts most of the fans anyway


----------



## Kreij (Jun 18, 2008)

Polaris573 said:


> The head of ATI Technologies claims that the recently introduced NVIDIA GeForce GTX 200 GPU will be the last monolithic “megachip” because they are simply too expensive to manufacture.  The statement was made after NVIDIA executives vowed to keep producing large single chip GPUs.   The size of the G200 GPU is about 600mm2¬¬ *which means only about 97 can fit on a 300mm wafer* costing thousands of dollars.




Why are the wafers limited to 300mm? Can't they use a 600mm wafer and get four times the processors out of it?
Is it just because all the FABs are set up to use that size or is there some kind of physical limit?
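The "about 97 dies" figure in the quoted news post can be reproduced with a common first-order dies-per-wafer approximation: gross dies by area, minus an edge-loss term. It ignores scribe lines and edge exclusion, so treat it strictly as a ballpark:

```python
import math

def dies_per_wafer(wafer_d_mm: float, die_area_mm2: float) -> int:
    """First-order dies-per-wafer estimate: wafer area over die area,
    minus a correction for partial dies lost at the round edge."""
    r = wafer_d_mm / 2
    gross = math.pi * r * r / die_area_mm2
    edge_loss = math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

print(dies_per_wafer(300, 576))  # about 95 candidate dies, near the ~97 figure
print(dies_per_wafer(450, 576))  # a 450 mm wafer would fit roughly 2.5x the dies
```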


----------



## [I.R.A]_FBi (Jun 18, 2008)

Megasty said:


> Not to be negative or anything but none of the stock cards from NV or ATi have high-end cooling fans. The stock casing only restrict most of the fans anyway




orly? 
im sure i read somewhere about the gtx 280 cooler being designed by CM or sumpn.


----------



## DarkMatter (Jun 18, 2008)

IMO a die shrink to 55nm could enable the possibility of a GX2. Maybe not a GTX280 GX2, but one with slightly lower clocks or one cluster disabled, and with enough performance to crush the X2. Of course it would require more power, but it would still be within the 6+8 pin envelope. I have three "facts", though of course they are only based on my opinion:

1- You have to take into account how power consumption works. It's exponential, not linear, so a slower part would consume a lot less, and the same can be said of voltages. Because GT200 turned out worse than expected in this area, Nvidia had to lower the clocks, but they have probably kept them as high as possible within the selected power envelope. There's always a sweet spot for performance-per-watt in any chip, and the GTX 280 is probably running quite a bit above that spot. FACT: look at W1zzard's Zotac AMP! GTX280 review; it consumes a lot more than you would expect from that overclock. Aim a bit lower than that sweet spot and you have a "low power" chip. For example, a GTX280 GX2 @ 500 MHz would consume a lot less and still leave the HD4870 X2 behind in performance.

2- Nvidia has implemented the ability to shut down parts of the chip in the GT200, and it really works very well. Again, look at W1zzard's power consumption charts and how it consumes a lot less than the X2 on average, even though its maximum is almost the same. A GX2 card would probably never reach its theoretical maximum power consumption. There's no way you are going to make a total of 64 ROPs work at the same time, for example.

3- Continuing with the above argument, IMO if Nvidia did a GX2 it wouldn't be based on the GTX 280, but on the 8800 GS's substitute. Nvidia will surely make a 16/20/24 ROP card while maintaining a high shader count (maybe 192/168 SP, the same as or one cluster fewer than the GTX260, for example); they would be stupid if they didn't, as it makes more sense than ever. The GS is "weak" because it has 12 ROPs, but 16, on the other hand, are enough for high-def gaming. 16 ROPs x 2 is more than enough, as the X2/GX2 can testify; 32 x 2 is just over-over-overkill and silly. 
My bet is that Nvidia will do a 20 ROP, 168/192 SP card for the upper mainstream no matter what, and they could use that for the GX2. Final specs for that hypothetical GX2 would be: 40 ROPs, 336/384 SP, 112/128 TMUs and 2 x 320-bit memory controllers; that is, if they can't make the card use a single shared memory pool the way the R700 seems set to do. The above card would leave the X2 well behind performance-wise and still be within the power envelope IMO. Of course that envelope would be higher than the X2's, but reachable IMHO, and still within the 6+8 pin layout's 300W; the GTX 280 needs 6+8 pins just by a hair.
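The "exponential, not linear" point above lines up with the classic CMOS dynamic-power relation, P roughly proportional to C·V²·f: cut clocks a bit, cut voltage along with them, and power falls much faster than performance. A quick sketch; the clock and voltage ratios below are made-up illustrative values, not real GTX 280 figures:

```python
def relative_power(f_ratio: float, v_ratio: float) -> float:
    """Dynamic power relative to stock, from P ~ C * V^2 * f:
    linear in frequency, quadratic in voltage."""
    return f_ratio * v_ratio ** 2

# Hypothetical: drop clocks ~17% (602 -> 500 MHz) and voltage ~8%
print(round(relative_power(500 / 602, 0.92), 2))  # ~0.70, roughly 30% less power
```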


----------



## wolf (Jun 18, 2008)

interesting


----------



## DarkMatter (Jun 18, 2008)

Kreij said:


> Why are the wafers limited to 300mm? Can't they use a 600mm wafer and get four times the processors out of it?
> Is it just because all the FABs are set up to use that size or is there some kind of physical limit?



Exactly. Wafer sizes are to fabs and manufacturers what CPU sockets are to motherboard and CPU makers (in terms of compatibility), or the ATX standard if you prefer. 450 mm wafers are on track already BTW; 600 mm is too much to handle right now.

EDIT: And yes, there is some physical limit too. Bear in mind wafers are made by slicing silicon ingots very thin (less than 1 mm IIRC), and they have to maintain the same thickness over their whole area. To that, add that the silicon has to be homogeneous throughout the whole wafer too.


----------



## btarunr (Jun 18, 2008)

DarkMatter said:


> Exactly. Wafer size is to Fabs and manufacturers like CPU sockets are for motherboard and CPU makers (interms of compatibility) or ATX standard if you prefer. 450 mm wafers are on track already BTW, 600mm is too much to handle right now.



That's because of yields. The bigger the wafer, the more you lose when a wafer fails. Keeping wafer sizes limited is a precautionary measure (while compromising on manufacturing expenditure).


----------



## Nyte (Jun 18, 2008)

tkpenalty said:


> 90nm > 65nm (R600 to RV670), was a HUGE leap in the drop of the transistor size, moreover remember the RV670 has a massive chunk of it; half of the 512bit memory controller effectively removed.
> 
> Die shrinks do something but under around 65nm the usefulness of die shrinking insn't really significant. Nvidia's CEO admitted that dieshrinking the GTX280 wouldnt help its extreme heat output a lot. Its fairly reasonable as to why, transistor count is more of a factor. In all cases, G80 > G92, R600 > RV670, its due to the cutting down of the memory controller.
> 
> By the way, the reason why AMD's cards use more power is simple; their cards use more phases in contrast to Nvidia. More phases = more power used but phases subject to less current, as well as generating less heat.



670/620/635 = 55 nm
630/610 = 65 nm


----------



## Deleted member 24505 (Jun 18, 2008)

What's the average yield for a 300mm wafer then? Does it differ between manufacturers, or is it totally dependent on the size of the wafer?


----------



## HTC (Jun 18, 2008)

tigger69 said:


> Whats the average yield for a 300mm wafer then? Does it differ with differant manufacturers or is it totally dependant on the size of the wafer?



It depends on the die size: the bigger the die, the fewer units a wafer yields.

That's why ATI is ahead of nVidia (in this respect, atm): they manage to make their dies much smaller than nVidia's.


----------



## btarunr (Jun 18, 2008)

tigger69 said:


> Whats the average yield for a 300mm wafer then? Does it differ with differant manufacturers or is it totally dependant on the size of the wafer?



Of course, articles from _The Inquirer_ are so full of it, but in one such article it was mentioned that on a 300mm wafer, GT200 yields could be as low as 40%. Somewhere else it was said that the die costs $110 to manufacture, and assembly into the package (package as in electronics, not logistics) sends the cost up to $120. With an increase in wafer size, you're increasing the risk of yield loss.
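For what it's worth, a figure in that 40% range is at least plausible under the simple first-order Poisson yield model, Y = exp(-D·A), with D the defect density and A the die area. The defect density below is an assumed value chosen for illustration, not a published fab number:

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

d = 0.16                                  # assumed defect density, defects/cm^2
print(round(poisson_yield(5.76, d), 2))   # big 5.76 cm^2 die -> ~0.40
print(round(poisson_yield(2.56, d), 2))   # a ~256 mm^2 die -> ~0.66 on the same line
```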


----------



## DarkMatter (Jun 18, 2008)

btarunr said:


> That's because of yields. Big wafer, wafer fails, loss of more yield. Keeping wafer sizes limited is a precautionary measure (while compromising on manufacturing expenditure).



Yeah, that's what I wanted to say when I said physical limit, as there is no absolute physical limit there. 
Also, because bigger wafers are possible, I said it works like a standard. Nvidia would surely want bigger wafers for GT200 at the expense of wafer yields, because the loss in wafer yields would probably be smaller than the gains in die yields, but since it works like a standard, they can't. I don't know if I have explained that well.

EDIT: Also, I highly doubt those Inquirer yield numbers. They are probably in the high 40s and were reported as such, and the Inquirer just slapped a 40% label on it. That number also seems extremely low without knowing how high other GPU yields are; they are probably never higher than 75%, and much lower on new high-end chips, for example RV770. The difference between, say, 60% and 50% is already very big.


----------



## spud107 (Jun 18, 2008)

so when nvidia see this they say o rly?
next gfx card will be 2pcb's, one for the gpu, other for the rest of the components


----------



## DanishDevil (Jun 18, 2008)

^ THAT was a good laugh.  It wouldn't surprise me...


----------



## tkpenalty (Jun 18, 2008)

spud107 said:


> so when nvidia see this they say o rly?
> next gfx card will be 2pcb's, one for the gpu, other for the rest of the components



Explain the logic behind that. You're only driving mfg prices up by doing that; ever heard of multi-layered PCBs, man?... 

HD4850 > 9800GTX by 25% according to AMD; this is fairly believable. 

A dual GTX280 is technically impossible between two slots. Why? 65nm to 55nm doesn't bring much of a change in TDP! Nvidia's CEO even admitted it; do I have to repeat this? A GX2 would be viable with, say, a GT200 variant that is similar to the G92 in die size. It was mentioned that a die shrink would only drop the GTX280's heat output down to what, around 200W, which is still ridiculously high (400W+ for a GX2). Who cares about idle when the card is ridiculously hot at load? 

Nvidia really shot themselves in the foot; powerful as it is, the HD4870X2 will be the more successful product.


----------



## spud107 (Jun 18, 2008)

the logic is "taking the piss". where's your sense of humour?


----------



## tkpenalty (Jun 18, 2008)

spud107 said:


> the logic is "taking the piss", wheres your sense of humour?



Oh, um it broke when my cousin dropped my guitar.

This is serious.


----------



## Megasty (Jun 18, 2008)

tkpenalty said:


> Oh, um it broke when my cousin dropped my guitar.
> 
> This is serious.



The only thing that's serious about it is how NV bet the farm on this thing. I'll be collecting that farm when I buy my 4870x2


----------



## EastCoasthandle (Jun 18, 2008)

gtx 200 series gpu (from what I've found so far)

A wafer from the 4800 series gpu will offer a whole lot more.  However, I haven't found one yet. Anyone have a 55nm wafer pic?


----------



## DarkMatter (Jun 18, 2008)

tkpenalty said:


> Explain the logic behind that. You're only mfgr prices up by doing that, ever heard of multi layered PCBs man?...
> 
> HD4850 > 9800GTX by 25% According to AMD, this is fairly believeable.
> 
> ...



Could you post a link to where Nvidia's CEO said that, please? And to where those power numbers were mentioned, though I suppose it's the same source. I highly doubt going to 55nm won't bring the card below 200W. 
Also, as I mentioned, Nvidia doesn't need two 280s to crush Ati's X2, not even two 260s. Just shrinking the chip to 55nm would make it 400mm2; take some ROPs out and you get a die size close to the G92's. No one has said a GT200 GX2 is possible, but a GT200b one IS, and you will see it soon if Ati's X2 happens to be quite a bit faster than the GTX280.

Also, the real power consumption of the GTX280 is nowhere near those 236W, while the older cards are close to their claimed TDPs. Its temperatures are far better than the G92's and RV670's too, despite it being a lot bigger, so there's some room left there. If GT200b can't push performance beyond that of the X2, a GX2 of GT200b WILL come, but its exact nature is not so defined. In fact, a card with 2x the performance of the GTX280 doesn't make sense AT ALL. If it did, because games in the near future could take advantage of it, then Ati would be LOST.

In the end it will all depend on the real performance of the RV770. AFAIK HD48*7*0 > 9800 GTX by 25%, and HD4850 > 8800 GT by 25%. That also means HD4850 > 9800 GTX, but by 5-10%. ANYWAY, forget about all that if the performance boost from the newer drivers happens to be true.


----------



## DarkMatter (Jun 18, 2008)

EastCoasthandle said:


> gtx 200 series gpu (from what I've found so far)
> 
> A wafer from the 4800 series gpu will offer a whole lot more.  However, I haven't found one yet. Anyone have a 55nm wafer pic?



Wow, I knew that such a big die size and low die count meant fewer complete dies in theory, but seeing it in a picture is more impressive! I counted 95 complete dies there, and around 30 incomplete ones. Almost 10 of the incomplete ones have more than 90% of the die intact, but I don't know if they can use them. I guess they could cut that part off, which, judging by the die picture, means cutting some SPs in most of them and selling them as GTX260s. Nevertheless, that fact alone contributes to lower yields, I suppose, and the number of incomplete dies is going to be a lot lower at 55 nm.


----------



## yogurt_21 (Jun 18, 2008)

Megasty said:


> The only thing that's serious about it is how NV bet the farm on this thing. I'll be collecting that farm when I buy my 4870x2



lol, nvidia didn't bet the farm on this one. if they did, we'd be seeing a commercial on television every 3 seconds followed by famous endorsements, and several small islands being purchased and named gtx280. lol

nvidia is a big company and it would take a lot for them to "bet the farm" on a single chip. it's not like nvidia will really care if ati's is faster this time. nvidia will just simply laugh when theirs outsells ati's faster card. this has been typical since the dawn of nvidia (though back then it was rare that ati got a win; the radeon was the first to even truly compete). 

the gtx280 seems to be a flop. no biggie, nvidia will launch a revision which may or may not flop as well. it doesn't matter, because nvidia is already working 4-5 generations out. so if this generation is a flop, they'll simply pour more of their employees into the next gen. 

ati also works several generations out, which is why it didn't matter that the r600 was a flop. they already had several others in the pipeline that they knew performed better per watt. 

and i seriously have to laugh at all the fanboys who say "look at the last 2 years, nvidia can't lose". wow, was the 8800 your first gpu or what? neither nvidia nor intel nor amd nor ati nor via nor any other manufacturer can put out the best product every time. it's impossible, and history tells us differently. the ti4000 series stomped all over ati's radeon 8500, and all the nvidia fanboys went "see, nvidia can't lose", and then came the fx series, which the 9700's stomped all over, and later the 9800's widened the gap. the fact that nvidia has been ruling for the past 2 years only strengthens the argument that the ati card will be faster this time. ati and nvidia have been doing this dance since long before many on this forum knew what a graphics card was, and they'll be doing it long after. it's development + pressure from competition + a little luck that forms the winner, and nvidia has been missing an element, making them less likely to come out on top this time. 

the gtx280 was, specs-wise, the chip we all wanted it to be: double the rop's, double the mem bit, and nearly double the shaders. the trouble is that each time the gpu manufacturers double things, it takes games quite a while to catch up in coding to use the extra power. the gtx280 will only grow more powerful as time goes on, but it will likely be the last of its kind. why? because it's the way the market's going. the bigger-badder phase started when intel was pushing clock speeds and ati was pushing the pixel pipeline; both required more cooling and psu than previous generations had seen. the core2duo is different, offering more power without chasing the ghz (stock comparison of course), and each generation seems to have a lower tdp than the last. gpu's will similarly start doing the same thing (and have already started: rv670/g92). the high-tdp gpus will go by the wayside while cheaper/quieter/lower-tdp versions replace them. the g92 did a good job of increasing performance while dropping heat and energy requirements. the gt200b will likely do the same, with nvidia's next chip being cooler than the gt200b. it's market trends: more users are going for cooler, quieter pc's than in 2000, making this quite a different battle than it used to be.


----------



## PVTCaboose1337 (Jun 18, 2008)

I agree that each company will take the lead somehow, and keep alternating.  I believe that eventually, there will be a standstill when they hit physical limits of graphics processing technology, and advancements will slow, and basically most cards will be equal for a period of time.


----------



## yogurt_21 (Jun 18, 2008)

DarkMatter said:


> Wow I knew that such big die size and low die number meant fewer complete dies  in theory, but seing that in a picture is more impressive! I counted 95 complete dies there, and like 30 incomplete ones. Almost 10 of the incomplete ones have more than 90% of the die intact, but I don't know if they can use them. I guess they can cut that part, that judging by the die picture that means cutting some SPs in most of them and sell it as GTX260. Nevertheless only that fact alone contributes to lower yields, I suppose, and the number of incomplete dies is going to be a lot lower at 55 nm.



is it just me, or would a square wafer make a lot more sense? lol. i mean, look at all the partials that needn't be that way if the dies weren't square and the wafer round.


----------



## [I.R.A]_FBi (Jun 18, 2008)

why is it round?


----------



## btarunr (Jun 18, 2008)

[I.R.A]_FBi said:


> why is it round?



Because 'stuff' is 'planted' on it while it spins.


----------



## DarkMatter (Jun 18, 2008)

yogurt_21 said:


> is it just me or would a square wafer make alot more sense lol. I mean look at all the partials that needn't be that way if the dies weren't square and the wafer round.



Yeah, I thought the same some years back when I first saw a wafer picture. I suppose wafers being round has to do with their manufacturing process, but why they can't be square is a question I've had since that first time. It was a comparison of a chip at 180nm and 130nm, and I have to say there were a lot more dies than ~100, so the incomplete ones were far fewer in comparison. There were more than double the complete dies on 130 nm than on 180nm. Theoretically each process step (180-130-90-65-45-32...) can fit double the number of dies of the previous one, but I think it's actually a bit more because of that edge effect.



btarunr said:


> Because 'stuff' is 'planted' on it while it spins.



It spins? Really? Or are you just kidding? I thought they were made by exposure to "light" and chemicals, pretty much how you would develop photos the old-fashioned way. I read an article about how they make the chips and I don't remember anything about spinning, at least not while the layout was being "printed"...


----------



## Deleted member 24505 (Jun 18, 2008)

So because of the smaller 55nm die, you'd get a lot more complete cores on a 300mm wafer. So in theory you'd get a higher yield with a 55nm die on a 300mm wafer.


----------



## btarunr (Jun 18, 2008)

DarkMatter said:


> It spins? Really? Or are you just kidding? I thought they were made by exposition to "light" and chemicals. Pretty much how you would reveal photos in the old fashion. I read an article about how they made the chips and I don't remember anything about spining, not at least while the layout was being "printed"...



Why do people enjoy complicated lives? http://en.wikipedia.org/wiki/Semiconductor_fabrication

It's round because it aids several manufacturing processes. A Petri plate is never square; we use them for microbial cultures, and being round aids streaking, colony design, etc. I wish pizza were square, but then it becomes difficult for Domino's to make them. They come in a semi-manufactured state; the local Domino's completes the manufacture before giving it away to the delivery boys.


----------



## DarkMatter (Jun 18, 2008)

tigger69 said:


> So because of the smaller 55nm die,you'd get a lot more complete cores on a 300mm wafer.So in theory you'd get a higher yield with a 55nm die/300mm wafer.



And that's only from the geometry perspective. 

You have to add lower operating voltages and lower transistor-to-transistor latency = higher possible clocks and lower power consumption. Everything adds up to manufacturers being able to make the same chip a lot more easily/cheaply, or a faster chip for the same cost.


----------



## HTC (Jun 18, 2008)

*They're planning 450 mm wafers!*

Read this.


----------



## v-zero (Jun 18, 2008)

It is cheaper (due to the mathematics of yields in producing semiconductor dies) to produce two smaller chips and place them on one PCB than to produce one larger chip, even if the sum of the transistors is equal in both cases... Q.E.D.
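That claim can be sketched numerically by combining a first-order dies-per-wafer estimate with the Poisson yield model Y = exp(-D·A). The defect density is an assumed illustrative value, and the 576/288 mm² split is a hypothetical example, not real chip data:

```python
import math

def good_dies(wafer_d_mm: float, die_mm2: float, d_per_cm2: float) -> float:
    """Good dies per wafer: gross dies (with an edge-loss correction)
    times a Poisson yield factor. Ballpark numbers only."""
    r = wafer_d_mm / 2
    gross = math.pi * r * r / die_mm2 - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2)
    return gross * math.exp(-(die_mm2 / 100) * d_per_cm2)  # /100: mm^2 -> cm^2

one_big = good_dies(300, 576, 0.2)         # one 576 mm^2 die per GPU
two_small = good_dies(300, 288, 0.2) / 2   # two 288 mm^2 dies per GPU
print(two_small > one_big)  # True: two smaller dies yield more GPUs per wafer
```

The smaller die wins on both counts: more candidates fit on the round wafer, and each candidate is far more likely to be defect-free.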


----------



## btarunr (Jun 18, 2008)

Ok. I are serious cat now. Go through these at leisure: http://www.youtube.com/results?sear...afer&search_type=&aq=0&oq=semiconductor+wafer


----------



## DarkMatter (Jun 18, 2008)

btarunr said:


> Ok. I are serious cat now. Go through these at leisure: http://www.youtube.com/results?sear...afer&search_type=&aq=0&oq=semiconductor+wafer



Ok, I've seen some of the videos at that link, and these 2 explain the thing very well. It's very easy to see why they are round after watching the second one, and you end up actually understanding how they make them, and especially how they make sure the silicon is pure:

http://www.youtube.com/watch?v=LWfCqpJzJYM&feature=related

http://www.youtube.com/watch?v=aWVywhzuHnQ&feature=related

EDIT: Anyway, it's not because they spin while they "plant" stuff on them, but quite the opposite from what I've understood. It's while they remove the residues, and they could be square for that purpose. I had already figured they might spin them to fling those residues off, but square wafers could spin too; it would just not be as easy. They have to be round because of how the silicon ingot is created, though, and I didn't know that. I like learning this kind of thing.


----------



## razaron (Jun 18, 2008)

oh oh, i have a brilliant comparison of nvidia to ATI. nvidia is a shelby gt500 with good old muscle, and ATI would be a lexus LS 460, a car that can park itself but would lose in a race with a shelby gt500. now how's that for a car comparison?

ps. btarunr, you would have made a brilliantly chavvy sentence if you said "i is serious cat now."


----------



## Assimilator (Jun 18, 2008)

Who wants to bet that NV are going to skip the jump to 55nm and go straight to 45nm, a la Intel?


----------



## DarkMatter (Jun 18, 2008)

Assimilator said:


> Who wants to bet that NV are going to skip the jump to 55nm and go straight to 45nm, a la Intel?



I thought the same, and that thought was strengthened by the fact that TSMC announced they were ready for 45nm, and by how Nvidia prefers bigger jumps, a la Intel, as you said. But I don't think they are doing that; Nvidia also likes using proven technologies, and GT200b is said to come really soon. Plus, 55nm is already said to be GT200b's fab process.

But yeah, the possibility still remains. I wouldn't bet my leg on it, though.


----------



## v-zero (Jun 18, 2008)

Assimilator said:


> Who wants to bet that NV are going to skip the jump to 55nm and go straight to 45nm, a la Intel?


It's extremely unlikely due to the complexity of GT200; making it on 65nm is already giving horrible yields...


----------



## btarunr (Jun 18, 2008)

razaron said:


> ps. btarunr you would have made a brilliantly chavy sentence if you said "i is serious cat now."


 

No, I wouldn't 






See, don't I look serious?™



DarkMatter said:


> Ok I've seen some of the videos on that link and these two explain the thing very well. It's very easy to see why they are round after watching the second one, and you actually end up understanding how they make them and especially how they make sure the silicon is pure:
> 
> http://www.youtube.com/watch?v=LWfCqpJzJYM&feature=related
> 
> ...



See, they polish the wafers while they're rotated at high speed. Could you do that with squares? Next time, hide the rick-roll in a bundle of links, don't make it obvious. I didn't fall for that last link.


----------



## spud107 (Jun 18, 2008)

IBM has some funky stuff in this vid: http://www.youtube.com/watch?v=8UOS0f4G3Zk
the first half is Wii chips, it's the second half that gets interesting


----------



## candle_86 (Jun 18, 2008)

v-zero said:


> It is cheaper (due to the mathematics of yields in producing semiconductor dies) to produce two smaller chips and place them on one PCB than to produce one larger chip, even if the sum of the transistors is equal in both cases... Q.E.D.



but it's not as efficient, as both cores then have to split resources and lose a little performance compared to a single-core setup
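The yield argument quoted above can be sketched with a toy model. This is a rough illustration only: it uses the standard dies-per-wafer approximation and a simple Poisson yield model (Y = e^(-D·A)); the wafer cost and defect density below are assumed, plausible-looking numbers, not actual TSMC or NVIDIA figures.

```python
import math

WAFER_DIAMETER_MM = 300
WAFER_COST = 5000.0        # assumed cost of one processed 300 mm wafer, USD
DEFECTS_PER_MM2 = 0.002    # assumed defect density, defects per mm^2

def dies_per_wafer(die_area_mm2):
    """Gross dies per wafer: usable area / die area, minus an edge-loss term."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2):
    """Wafer cost spread over the dies that survive a Poisson defect model."""
    yield_fraction = math.exp(-DEFECTS_PER_MM2 * die_area_mm2)
    good_dies = dies_per_wafer(die_area_mm2) * yield_fraction
    return WAFER_COST / good_dies

big = cost_per_good_die(600)        # one GT200-sized die
small = 2 * cost_per_good_die(300)  # two half-sized dies on one board

print(f"one 600 mm2 die:  ${big:.2f}")
print(f"two 300 mm2 dies: ${small:.2f}")
```

Under these assumptions the two half-sized dies together cost roughly half as much as the single big die, because yield falls exponentially with area while the smaller dies also pack the round wafer more efficiently. That is the whole of v-zero's point; candle_86's counterpoint about inter-chip overhead is the cost the model doesn't capture.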


----------



## Megasty (Jun 18, 2008)

yogurt_21 said:


> lol nvidia didn't bet the farm on this one. if they did we'd be seeing a commercial on television every 3 seconds followed by famous endorsements, several small islands being purchased and named gtx280. lol
> 
> nvidia is a big company and it would take a lot for them to "bet the farm" on a single chip. it's not like nvidia will really care if ati's is faster this time. nvidia will just simply laugh when theirs outsells ati's faster card. this has been typical since the dawn of nvidia (though back then it was rare that ati got a win, the radeon was the first to even truly compete)
> 
> ...



When I said _bet the farm_ I was referring to this series. It might lose & lose big seeing how things are going, but NV loves being the fastest, biggest, loudest, whatever & won't take losing lying down. I just hope they don't go crazy & make a $1200 GTX280 GX2 or some other sick bs to brag about :shadedshu




DarkMatter said:


> HAHAHAHAHAHAHAHAH!
> 
> HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA!
> 
> ...



That's why I don't ever click on youtube links from these goofy forums...NEVER


----------



## DarkMatter (Jun 18, 2008)

btarunr said:


> No, I wouldn't
> 
> 
> 
> ...



Yes, you could use the same technique everywhere except on the edges, which BTW I don't know why they have to polish. You could polish those with another technique; it would be more complex, but could benefit the end price of the chips. Of course, because of the way they create the wafers they can't be square, but I don't think polishing would be the problem.

About the link, I made it obvious because I wanted it to be obvious. That was my joke.

EDIT: BTW is that cat photoshopped? lol


----------



## DarkMatter (Jun 18, 2008)

Megasty said:


> That's why I don't ever click on youtube links from these goofy forums...NEVER



I never do, that's why bt made such an achievement.


----------



## swaaye (Jun 18, 2008)

What's wrong with GTX 280 again? It looks like it's 30% faster than an 8800 GTX and that seems right in line with where it should be.


----------



## yogurt_21 (Jun 19, 2008)

swaaye said:


> What's wrong with GTX 280 again? It looks like it's 30% faster than an 8800 GTX and that seems right in line with where it should be.



nothing would be wrong with the gtx280 if the 9800gx2 didn't precede it. being that it did, the gtx280's price vs performance doesn't seem that impressive against its $400 predecessor, which performs the same in many situations.

but as for the discussion at hand, the gtx280 is like my 2900xt in that it puts out a lot of heat, uses a lot of energy, is expensive to produce, and has to have a big cooler on it.

but as for the specs, I said it before, the gtx280 is exactly what we all hoped it would be spec wise.


----------



## tkpenalty (Jun 19, 2008)

swaaye said:


> What's wrong with GTX 280 again? It looks like it's 30% faster than an 8800 GTX and that seems right in line with where it should be.



Nothing wrong with it. It just saps a lot of power for a graphics card, and costs more than the 9800GX2, which is pretty close to it.

It's powerful, that's for sure. But AMD are saying that Nvidia are being suicidal by keeping everything in one core, and I have to agree with that logic. Two HD4850s, according to Tweaktown, spank a GTX280, and those are the mid-range HD4850, not the high-mid 4870. The 4850 is already faster than a 9800GTX.

Now if you consider AMD putting two HD4850/HD4870s' worth of performance into ONE card, what AMD is saying suddenly makes sense.


----------



## erocker (Jun 19, 2008)

If this architecture were produced on a 45nm or 32nm process, a single chip would be a bit more efficient. But that's a lot of chip to shrink!


----------



## Polaris573 (Jun 19, 2008)

Do not rick roll people outside of general nonsense.  This is not 4chan.  Techpowerup is not for spamming useless junk.  This is becoming more and more of a problem, I am going to have to start handing out infractions for this in the future if it does not stop.


----------



## btarunr (Jun 19, 2008)

erocker said:


> If this architecture were produced on a 45nm or 32nm process, a single chip would be a bit more efficient. But that's a lot of chip to shrink!



45nm itself was unthinkable just three years ago. Remember how technologists the world over celebrated the introduction of Prescott just because it breached into sub-100nm process territory? Unfortunately, the die-shrink didn't give it any edge over its 130nm cousin (Northwood), albeit more L2 cache could be accommodated. Likewise, the shrink from Prescott to Cedar Mill (and Smithfield to Presler), 90nm to 65nm, didn't benefit the thermal/power properties of the chip; the miniaturisation just helped squeeze in more L2 cache. In the same way, I doubt this transition from 65nm to 55nm will help NVIDIA in any way. If you want a live example from GPUs, compare the Radeon HD2600 XT to the HD3650 (65nm to 55nm, nothing (much) changed).
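For what the 65nm to 55nm half-node could buy in die area alone, the arithmetic is simple: linear dimensions scale by the ratio of the feature sizes, so area scales by its square. The 600 mm² GT200 figure is from the article; the "ideal" scaling below is an upper bound that real optical shrinks rarely achieve in full.

```python
def ideal_shrink_area(area_mm2, old_nm, new_nm):
    """Ideal die area after an optical shrink: linear dims scale by new/old,
    so area scales by (new/old) squared."""
    return area_mm2 * (new_nm / old_nm) ** 2

gt200_65nm = 600  # mm^2, as quoted for GT200 in the article
gt200b_55nm = ideal_shrink_area(gt200_65nm, 65, 55)
print(f"ideal 55nm GT200b: ~{gt200b_55nm:.0f} mm^2")  # ~430 mm^2
```

Even in the ideal case that is only about a 28% area reduction, still a very large die, which supports the point that a half-node shrink alone changes little.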


----------



## hat (Jun 19, 2008)

Die shrinks will just allow Nvidia to cram more transistors into the same package size. Nvidia's battle plan seems to be something like this:


----------



## DarkMatter (Jun 19, 2008)

yogurt_21 said:


> nothing would be wrong with the gtx280 if the 9800gx2 didn't precede it. being that it did, the gtx280's price vs performance doesn't seem that impressive against its $400 predecessor, which performs the same in many situations.
> 
> but as for the discussion at hand, the gtx280 is like my 2900xt in that it puts out a lot of heat, uses a lot of energy, is expensive to produce, and has to have a big cooler on it.
> 
> but as for the specs, I said it before, the gtx280 is exactly what we all hoped it would be spec wise.



I would say it has even "better" specs than what we thought. At least this is true in my case. This is because it effectively has an additional PhysX processor slapped into the core. Those additional 30 FP64 units, with all the added registers and cache, don't help rendering at all. Nor can they be used by graphics APIs, only by CUDA. That's why I say better in quotes: they have added a lot of silicon that is not useful at all NOW. It could be very useful in the future. That FP64 unit really is powerful and unique, as no other commercial chip has ever implemented a unit with such capabilities, so when CUDA programs start to actually be something more than a showcase, or games start to implement Ageia physics, we could say the enhancements are something good. Until then we can only look at them as some kind of silicon waste.



btarunr said:


> 45nm itself was unthinkable just three years ago. Remember how technologists the world over celebrated the introduction of Prescott just because it breached into sub-100nm process territory? Unfortunately, the die-shrink didn't give it any edge over its 130nm cousin (Northwood), albeit more L2 cache could be accommodated. Likewise, the shrink from Prescott to Cedar Mill (and Smithfield to Presler), 90nm to 65nm, didn't benefit the thermal/power properties of the chip; the miniaturisation just helped squeeze in more L2 cache. In the same way, I doubt this transition from 65nm to 55nm will help NVIDIA in any way. If you want a live example from GPUs, compare the Radeon HD2600 XT to the HD3650 (65nm to 55nm, nothing (much) changed).



You seem to overlook that more cache means more power and heat. Especially when caches are half the size of the chip. Even though caches don't consume nearly as much as other parts, it makes a difference, a big one.


----------



## candle_86 (Jun 19, 2008)

Depending on how well CUDA is adopted for games in the next 6 months, Nvidia could very well win round 10 in the GPU wars even with the price. If CUDA is worked into games to offload a lot of the calculations, then Nvidia just won, and I'm betting money this is their gamble.


----------



## Bjorn_Of_Iceland (Jun 20, 2008)

delusions of hope


----------



## candle_86 (Jun 20, 2008)

Not really. Look how Nvidia helps devs to ensure compatibility with Nvidia GPUs.

If physics and lighting were moved from the CPU to the GPU, that bottleneck is gone from the CPU, and the GPU can handle it at least 200x faster than the fastest quad core, even while running the game at the same time. This in turn allows better, more realistic things to be done. Remember the Alan Wake demo at IDF with those great physics? Here's the thing: it stuttered. Now if CUDA were used instead, it would get a lot more FPS. The reason for not-so-heavy realistic physics is the lack of raw horsepower. If CUDA is used as Nvidia hopes, the games may not run any faster, but the level of realism can increase greatly, which would sway more than one consumer.

If it gets 100FPS and uses large transparent textures for dust, that's great.

If it gets 100FPS but draws each grain of dirt as its own pixel, that's even better.

Which would you get? Even with the price difference, I'd go for the real pixel dirt.


----------



## btarunr (Jun 20, 2008)

DarkMatter said:


> You seem to overlook that more cache means more power and heat. Especially when caches are half the size of the chip. Even though caches don't consume nearly as much as other parts, it makes a difference, a big one.



Cache size's relation to heat is close to insignificant. The Windsor 5000+ (2x 512KB L2) differed very little from the Windsor 5200+ (2x 1MB L2). Both had the same speeds and other parameters; I've used both. But when Prescott was shrunk, despite the doubled cache there should have been a significant fall in power consumption, like Windsor (the 2x 512KB L2 variants) and Brisbane had.


----------



## candle_86 (Jun 20, 2008)

agreed, it's not the cache, it's the overall design of the processing unit. The reason Prescott had so many problems with heat is very simple: the extra 512k cache was tacked next to the old cache, causing a longer distance than before for the CPU to read the cache, and this causes friction which creates heat. the shorter the distance the better. Intel was just lazy back then


----------



## btarunr (Jun 20, 2008)

candle_86 said:


> agreed, it's not the cache, it's the overall design of the processing unit. The reason Prescott had so many problems with heat is very simple: the extra 512k cache was tacked next to the old cache, causing a longer distance than before for the CPU to read the cache, and this causes friction which creates heat. the shorter the distance the better. Intel was just lazy back then



You're being sarcastic right? Even if you weren't,


----------



## DarkMatter (Jun 20, 2008)

btarunr said:


> Cache sizes and their relations to heat is close to insignificant. The Windsor 5000+ (2x 512KB L2) differed very little from Windsor 5200+ (2x 1MB L2). Both had the same speeds and other parameters. I've used both. But when Prescott is shrunk, despite double cache there should be significant falls in power consumptions, like Windsor (2x 512KB L2 variants) and Brisbane had.



Huh! Now I'm impressed. You have the required tools to measure power consumption and heat at home?!!?

Because otherwise, just because temperatures are not higher doesn't mean the chip is not outputting more heat and consuming more. Heat has to do with energy transfer; in the case of a CPU it's energy transfer between surfaces. More cache = more surface = more energy transfer = lower temperatures at the same heat output.

That was one reason. The other is a lot simpler: wasn't the 5000+ a 5200+ with half the cache "disabled"? In quotes because most times they can't cut all the power to the disabled part.
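The "more surface = lower temperature at the same heat output" argument above is just steady-state thermal resistance. A toy sketch, assuming the die-to-heatsink thermal resistance scales roughly with 1/area; the ambient temperature, power figure, and resistance constant are illustrative assumptions, not measured values for any real CPU.

```python
AMBIENT_C = 35.0   # assumed in-case ambient temperature, degrees C
K_THETA = 0.5      # assumed constant: R_theta = K_THETA / area (C * cm^2 / W)

def die_temp(power_w, area_cm2):
    """Steady-state die temperature: T = T_ambient + P * R_theta,
    with R_theta crudely scaling as 1/area."""
    r_theta = K_THETA / area_cm2
    return AMBIENT_C + power_w * r_theta

# Same 90 W of heat, but the larger die (extra cache) spreads it over
# more surface, so it reads cooler at identical power draw:
print(f"{die_temp(90, 1.8):.1f} C")  # smaller die -> 60.0 C
print(f"{die_temp(90, 2.3):.1f} C")  # larger die  -> ~54.6 C
```

Which is exactly why DarkMatter argues that equal temperatures on two chips don't prove equal power consumption.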


----------



## btarunr (Jun 20, 2008)

DarkMatter said:


> Huh! Now I'm impressed. You have the required tools to measure power consumption and heat at home?!!?
> 
> Because otherwise, just because temperatures are not higher doesn't mean the chip is not outputting more heat and consuming more. Heat has to do with energy transfer; in the case of a CPU it's energy transfer between surfaces. More cache = more surface = more energy transfer = lower temperatures at the same heat output.
> 
> That was one reason. The other is a lot simpler: wasn't the 5000+ a 5200+ with half the cache "disabled"? In quotes because most times they can't cut all the power to the disabled part.



No, it's charts that I follow, and I don't mean charts from AMD showing a fixed 89W or 65W across all models of a core. It's more than common sense that when a die-shrink from 90nm to 65nm sent AMD's rated wattage down (roughly 89W to 65W), Prescott and Cedar Mill didn't share a similar reduction. That's what I'm basing it on.


----------



## DarkMatter (Jun 20, 2008)

btarunr said:


> No, it's charts that I follow, and I don't mean charts from AMD showing a fixed 89W or 65W across all models of a core. It's more than common sense that when a die-shrink from 90nm to 65nm sent AMD's rated wattage down (roughly 89W to 65W), Prescott and Cedar Mill didn't share a similar reduction. That's what I'm basing it on.



Well, I can easily base my point on the fact that CPUs with L3 caches have a lot higher TDP. Which of the two do you think is better?


----------

