Tuesday, June 17th 2008

ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.

The head of ATI Technologies, Rick Bergman, claims that the recently introduced NVIDIA GeForce GTX 200 GPU will be the last monolithic "megachip" because such chips are simply too expensive to manufacture. The statement was made after NVIDIA executives vowed to keep producing large single-chip GPUs. The G200 GPU measures about 600 mm², which means only about 97 dies fit on a 300 mm wafer that costs thousands of dollars. Earlier this year, NVIDIA's chief scientist said that AMD is unable to develop a large monolithic graphics processor due to a lack of resources. Bergman countered that smaller chips are also easier to adapt for mobile computers.
Source: X-bit Labs
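
For a rough sanity check on that die count, here is a minimal sketch using one common dies-per-wafer approximation. The only givens taken from the article are the ~600 mm² die and the 300 mm wafer; the formula and its edge-loss correction are an assumption, and real counts also depend on scribe lines and edge exclusion.

```python
import math

# Rough gross-die-per-wafer estimate for the article's figures.
# Assumption: the common approximation  N = pi*(d/2)^2/A - pi*d/sqrt(2*A),
# which subtracts a correction for partial dies lost at the wafer edge.

die_area_mm2 = 600.0       # reported G200 die size
wafer_diameter_mm = 300.0  # standard 300 mm wafer

gross_dies = (math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
              - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(f"~{gross_dies:.0f} candidate dies per wafer")  # ~91, same ballpark as the ~97 quoted
```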

116 Comments on ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.

#1
panchoman
Sold my stars!
the war for who can build the biggest monolithic gpu? and then you just x2 the monolithic? lol....

i bet that both companies will have trouble producing big monolithic gpus... but nvidia more, because the R7 is nowhere near the size of the G2
Posted on Reply
#2
imperialreign
I kinda partially agree, only on the fact that nVidia has been sandbagging their GPU tech for a while now, and I think they're at the furthest they can go with current architecture.

But, if it comes down to a resources debate - nVidia can most easily afford titanic productions
Posted on Reply
#3
panchoman
Sold my stars!
imperialreignI kinda partially agree, only on the fact that nVidia has been sandbagging their GPU tech for a while now, and I think they're at the furthest they can go with current architecture.

But, if it comes down to a resources debate - nVidia can most easily afford titanic productions
nvidia has many workarounds, like the pci freq. trick that they used, and their architecture has been the same since like geforce 4... and on top of that, they get hurt here because, in order to keep up with amd's R7 core, they basically slapped 2 G92 cores into a new core and released it. it's like intel putting two dual-core dies in one package to make a quad core in order to keep up with amd's phenom "true" quad core, you know?
Posted on Reply
#4
kenkickr
I'm all for Nvidia's monolithic production!! I'll just go out and buy a couple of A/C units and fans for my computer room during the summer, and with one of their cards in the house I'll never have to turn the heat on in the fall, winter, and early spring, LOL :laugh:
Posted on Reply
#5
Megasty
ATi is only saying that because they already know their X2s are gonna dust the G200s. Combine that with the cost to produce them & you have a no-brainer. It's like comparing a Viper & a Mack truck. They both have the same HP, but which one is faster :cool:
Posted on Reply
#6
DOM
panchomannvidia has many workarounds, like the pci freq. trick that they used, and their architecture has been the same since like geforce 4... and on top of that, they get hurt here because, in order to keep up with amd's R7 core, they basically slapped 2 G92 cores into a new core and released it. it's like intel putting two dual-core dies in one package to make a quad core in order to keep up with amd's phenom "true" quad core, you know?
:confused: lol, AMD still got their ass handed to them. It's a quad core, it has 4 cores on one CPU; "true" doesn't mean anything

but I want to see what amd has to offer in the gpu department :D
Posted on Reply
#7
lemonadesoda
If nVidia can do a fab shrink to reduce die size and power, they have a clear winner.

THEREFORE, AMD are creating this "nVidia is a dinosaur" hype, because, truth be told, AMD cannot compete with nVidia unless they go x2. And x2? Oh, that's the same total chip size as GTX200 (+/- 15%). But with a fab shrink (to the same fab scale as AMD), nVidia would be smaller. Really? Can that really be true? Smaller and same performance = nVidia architecture must be better.

So long as nVidia can manufacture with high yield, they are AOK.
Posted on Reply
#8
DaJMasta
I agree that a GPU this size in mm² will seldom be seen again, because it costs so much to make. But the transistor count will continue to rise as the manufacturing process gets smaller.
Posted on Reply
#9
PVTCaboose1337
Graphical Hacker
I think that AMD is right: NVIDIA is not being progressive, but they are getting the most out of their GPU technology... and it seems to be working.
Posted on Reply
#10
dalekdukesboy
I have to reply to this...
panchomannvidia has many workarounds, like the pci freq. trick that they used, and their architecture has been the same since like geforce 4... and on top of that, they get hurt here because, in order to keep up with amd's R7 core, they basically slapped 2 G92 cores into a new core and released it. it's like intel putting two dual-core dies in one package to make a quad core in order to keep up with amd's phenom "true" quad core, you know?
Well, that may be all fine and good and you may have a valid point... but bottom line, what performs better? The Nvidia G92 or ATI's R7... the Intel dual core/quad core, or the Phenom? I understand that from a purely theoretical/architectural standpoint ATI/AMD could be more advanced, but no one can objectively tell me the Phenom or ATI's 3870/3850 can even keep up with, never mind beat, Nvidia's G92 or Intel's current CPU lineup.
Posted on Reply
#11
Rurouni Strife
My thoughts:
GPUs will eventually end up kinda like dual/quad core CPUs. You'll have 2 on one die. When? Who knows, but it seems that AMD is kinda working in that direction. However, people complained when the 7950GX2 came out because "it took 2 cards to beat ATI's 1 (1950XTX)". They did it again, but to a lesser degree, for the 3870X2, and it'll become more accepted as it goes on, especially since AMD has said "no more mega GPUs". Part of that is they don't wanna f up with another 2900 and they don't quite have the cash, but they are also thinking $$$. Sell more high-performing midrange parts. That's where all the money is made. And we all know AMD needs cash.
Posted on Reply
#12
mullered07
panchomannvidia has many workarounds, like the pci freq. trick that they used, and their architecture has been the same since like geforce 4... and on top of that, they get hurt here because, in order to keep up with amd's R7 core, they basically slapped 2 G92 cores into a new core and released it. it's like intel putting two dual-core dies in one package to make a quad core in order to keep up with amd's phenom "true" quad core, you know?
not exactly to keep up with Phenom, since the Q series was released like a year b4 and still pwns Phenom. who actually gives a shit if it's "true" quad or not? it does the job, and better than AMD's, doesn't it?

i don't understand what you mean by workaround. nvidia has handed amd their ass for the last 2 gens; if they're not even trying and are just making the most of old technology, then god help amd if they come up with a new architecture. ati died the day they were bought by amd :shadedshu
Posted on Reply
#13
imperialreign
Rurouni StrifeMy thoughts:
GPUs will eventually end up kinda like dual/quad core CPUs. You'll have 2 on one die. When? Who knows, but it seems that AMD is kinda working in that direction. However, people complained when the 7950GX2 came out because "it took 2 cards to beat ATI's 1 (1950XTX)". They did it again, but to a lesser degree, for the 3870X2, and it'll become more accepted as it goes on, especially since AMD has said "no more mega GPUs". Part of that is they don't wanna f up with another 2900 and they don't quite have the cash, but they are also thinking $$$. Sell more high-performing midrange parts. That's where all the money is made. And we all know AMD needs cash.
I kinda agree here as well - TBH, I think multi-GPU setups will be the future over the monolith designs... two or more efficient GPUs can work just as effectively and efficiently as one megaPU, if not better. With AMD behind ATI at this point, I definitely see that the move towards this implementation is already there.

I'm sure that if indeed multi-core GPUs come marching out of ATI, we'll be seeing a lot of kicking and screaming from the green camp that "it's still 2 GPUs to our 1!!" Which, IMO, I don't believe to be the case. If one chip marches out that has 2 cores on one die, it's still 1 GPU. We don't go around saying that "my Q6600 is 4 CPUs, man!"

Sure, a lot of this progress on ATI/AMD's part has got to be dictated by cost and resources; but I think this is one area where the red camp will be pushing new technology that nVidia will sooner or later have to accept. nVidia can go and counter with a whole new megaPU pushing uber-1337 processing capabilities, and ATI could just say "alright, we'll add 2 more cores to our current design and match you again." nVidia could go to the drawing boards and redesign yet another 1337 GPU, and ATI could again counter with "alright, we'll add another 3 cores to our current design and take the lead."

IMHO, the smaller package will be way more cost efficient for both manufacturer and consumer years and years down the road.
Posted on Reply
#14
WarEagleAU
Bird of Prey
I have to kind of agree. But they will continue that if ATI doesn't do something to counter it. I don't think sales of the GT200 line will be as high as NV hopes. As prices come down, though, it will... but until then...
Posted on Reply
#15
imperialreign
WarEagleAUI have to kind of agree. But they will continue that if ATI doesn't do something to counter it. I don't think sales of the GT200 line will be as high as NV hopes. As prices come down, though, it will... but until then...
I agree as well - but I think we're on the verge of seeing the first dual-core GPU. Initial rumors of the R700 hinted at the possibility, but that seems to have turned out a negative (although we still have yet to see concrete specs on the 4870X2). With the advent of Fusion, though, I think they're further paving the way. R800 could potentially deliver the first dual-core GPU, whenever the HD 5000 series is released (probably next year), and if so, in the series after that we could potentially see every card in the lineup (except for the low-end cards) sporting dual-core GPUs.

TBH, I don't foresee nVidia having the ability to counter that just yet.

This is all speculation, though, and it's all a ways off in the future anyhow. We'll just have to see.
Posted on Reply
#16
yogurt_21
lemonadesodaTHEREFORE, AMD are creating this "nVidia is a dinosaur" hype, because, truth be told, AMD cannot compete with nVidia unless they go x2.
where in the article do you see that ati says nvidia is a dinosaur? they are merely stating that, based on the performance vs cost to produce of the gtx280, it will likely be the last of its kind, considering the 9800gx2 was cheaper to produce and offers similar if not better performance.

it's not like nvidia can't simply go dual or even quad, seeing as they did buy up 3dfx. it would make more sense, as in the end the uber-performance seekers are going to sli those monolithic gpus anyways. so why not make a cheaper variant that can be doubled up: those seeking uber performance can buy the x2, while those seeking better price/performance can be accommodated as well. The geforce 9 series did this quite well.

and I seriously don't get all the comments about the x2's. I mean, when the athlon 64 x2's came out they didn't say, "oh, for amd to be able to beat the pentium 4 they had to go dual". dual was a means of providing more processing power without increasing clock speed or changing architecture. just because a gpu or cpu has more than one core doesn't mean it's an inferior design. it's just a different way of meeting the same performance demand.

if anything the argument against duals should be the return from the second core, as it is in the cpu market. but if ati can make a dual that beats nvidia's single for the same or cheaper cost, that's good business, not inferior design.
Posted on Reply
#17
evil bill
I once read about the Nvidia vs ATI "battle" being compared to a muscle car like a Viper or Mustang against a Ferrari. Nvidia's stuff is modern but not overly sophisticated, with its roots in older technologies, whereas ATI/AMD tends to be pretty high-tech and cutting edge (e.g. the ring bus memory in the HD2900). You therefore get the fans of either camp decrying how the other arrives at their performance level, regardless of how well it performs.

ATIs problem is that as soon as its technological "higher ground" fails to best the competition, it puts itself under serious pressure.

Still, hopefully the internal distractions of the ATI/AMD merger are in the past and they can concentrate on doing their stuff and keep the market moving on. I agree that Nvidia aren't being pushed hard enough by them and are probably sandbagging tech. Necessity is the mother of invention, and unless they have a strong competitor they will be tempted to make cost savings by stretching old tech for longer.
Posted on Reply
#18
pentastar111
Even if nVidia's cards are a little faster..I'll probably still go ahead as planned with my next build being an all AMD rig...$700 for a vid card :eek: is just tooooooooo much money in my opinion.
Posted on Reply
#19
wolf
Better Than Native
this titanic GPU may not fare that well now, but it falls right into the category of future-proofing. it, like the G80 GTX/Ultra, will stand the test of time, especially when the 55nm GT200b comes out with better yields/higher clocks.
Posted on Reply
#20
DarkMatter
I completely disagree on the single-die multi-core GPU thing. The whole idea of using multiple GPUs is to reduce die size. Doing a dual core GPU on a single die is exactly the same as doing a double sized chip, but even worse IMO. Take into account that GPUs are already multi-processor devices, in which the cores are tied to a crossbar bus for communication. Look at the GT200 diagram:

techreport.com/articles.x/14934

In the image the connections are missing, but it suffices to say they are all connected to the "same bus". A dual core GPU would be exactly the same, because GPUs are already a bunch of parallel processors, but with two separate buses, so it'd need an external one and that would only add latency. What's the point of doing that? Yields are not going to be higher, as in both cases you have the same number of processors and the same silicon that would need to go (and work) together. In a single "core" GPU, if one unit fails you can just disable it and sell it as a lower model (8800 GT, G80 GTS, HD2900GT, GTX 260...), but in a dual "core" GPU the whole core would need to be disabled, or you would (most probably) need to disable another unit in the other "core" to keep symmetry. In any case you lose more than with the single "core" approach, and you don't gain anything because the chip is the same size. In the case of CPUs, multi-core does make sense because you can't cut down/disable parts of them except the cache; if one unit is broken you have to throw away the whole core, and if one of them is "defective" (it's slower, only half the cache works...) you just cut it off and sell it separately. With CPUs it's a matter of "does it work, and if so at what speed?"; with GPUs it's "how many units work?".
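
To put toy numbers on the harvesting argument above, here is a minimal sketch assuming a simple Poisson defect model; the defect density and the count of independently disableable clusters are invented purely for illustration and are not real figures for any chip.

```python
import math

# Toy Poisson defect model: why "disable one unit, sell it as a lower model"
# lifts the number of sellable large dies. All numbers are assumptions.

defect_density_per_mm2 = 0.002   # assumed: 0.2 defects per cm^2
die_area_mm2 = 600.0             # ballpark GT200-class die
clusters = 10                    # assumed number of disableable shader clusters
cluster_area = die_area_mm2 / clusters

mean_defects_per_cluster = defect_density_per_mm2 * cluster_area
p_cluster_ok = math.exp(-mean_defects_per_cluster)   # P(a given cluster has 0 defects)

p_perfect = p_cluster_ok ** clusters                                        # all clusters good -> full SKU
p_one_bad = clusters * (1 - p_cluster_ok) * p_cluster_ok ** (clusters - 1)  # exactly one bad -> harvested SKU

print(f"full-spec dies:   {p_perfect:.1%}")               # ~30%
print(f"+ harvested dies: {p_perfect + p_one_bad:.1%}")    # ~69% sellable in total
```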
Posted on Reply
#21
Rurouni Strife
Can't disagree with you, DarkMatter, you make perfect sense. Didn't think about that. Perhaps as die sizes get smaller, the way GPUs talk to each other can be improved via a type of HT link or whatever. Then you get shared memory, like what is rumored for R700 (don't know if that's true).

As for evil bill: the ring bus actually came with the X1K series of cards, just improved for R/RV600.
Posted on Reply
#22
WarEagleAU
Bird of Prey
True Imperial.

@yogurt. The logical next step for ATI and eventually NV would be dual gpu cores. In a sense it would be like the X2s but a bit different. Whereas AMD/ATI may not want to go uber high core like Nvidia, they may break in on the dual core gpu. Kind of awesome to say the least.
Posted on Reply
#23
hat
Enthusiast
You can only make transistors so small. Their current philosophy seems to be "moar transistors, who cares about moar bigger gpus?"

These ridiculously large gpus are going to put out a ridiculous amount of heat, and make vga coolers ridiculously expensive due to the ridiculous size of the heatsink base needed to cool the ridiculously large gpu.
Posted on Reply
#24
DanishDevil
That's one thing I love about die shrinks. My EK full-cover block cools both my 3870X2's GPUs lower than my E8500's cores at stock. I bet the GTX 280 puts out quite a lot of heat, though, packing so much power into a single, larger chip.
Posted on Reply
#25
Megasty
DarkMatterI completely disagree on the single-die multi-core GPU thing. The whole idea of using multiple GPUs is to reduce die size. Doing a dual core GPU on a single die is exactly the same as doing a double sized chip, but even worse IMO. Take into account that GPUs are already multi-processor devices, in which the cores are tied to a crossbar bus for communication. Look at the GT200 diagram:

techreport.com/articles.x/14934

In the image the connections are missing, but it suffices to say they are all connected to the "same bus". A dual core GPU would be exactly the same, because GPUs are already a bunch of parallel processors, but with two separate buses, so it'd need an external one and that would only add latency. What's the point of doing that? Yields are not going to be higher, as in both cases you have the same number of processors and the same silicon that would need to go (and work) together. In a single "core" GPU, if one unit fails you can just disable it and sell it as a lower model (8800 GT, G80 GTS, HD2900GT, GTX 260...), but in a dual "core" GPU the whole core would need to be disabled, or you would (most probably) need to disable another unit in the other "core" to keep symmetry. In any case you lose more than with the single "core" approach, and you don't gain anything because the chip is the same size. In the case of CPUs, multi-core does make sense because you can't cut down/disable parts of them except the cache; if one unit is broken you have to throw away the whole core, and if one of them is "defective" (it's slower, only half the cache works...) you just cut it off and sell it separately. With CPUs it's a matter of "does it work, and if so at what speed?"; with GPUs it's "how many units work?".
I was thinking the same thing. Given the size of the present ATi chips, they could be combined & still retain a reasonably sized die, but the latency between the 'main' cache & 'sub' cache would be so high that they might as well leave them apart. It would be fine if they increase the bus, but then you would end up with a power hungry monster. If the R800 is a multi-core then so be it, but we're gonna need a power plant for the thing if it's not going to be just another experiment like the R600.
Posted on Reply