Tuesday, June 17th 2008
ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.
The head of ATI Technologies claims that the recently introduced NVIDIA GeForce GTX 200 GPU will be the last monolithic "megachip" because such chips are simply too expensive to manufacture. The statement was made after NVIDIA executives vowed to keep producing large single-chip GPUs. The G200 GPU measures about 600 mm², which means only about 97 dies fit on a 300 mm wafer that costs thousands of dollars. Earlier this year NVIDIA's chief scientist said that AMD is unable to develop a large monolithic graphics processor due to a lack of resources. Mr. Bergman countered that smaller chips are also easier to adapt for mobile computers.
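As a rough sanity check on that figure, here is a minimal sketch of a commonly used dies-per-wafer approximation. It ignores scribe lines, edge exclusion and defects, so it only gives a ballpark number:

import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order approximation: gross dies from wafer area, minus an
    edge-loss term for partial dies along the circumference."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius * radius / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Figures quoted above: a ~600 mm2 die on a 300 mm wafer.
print(dies_per_wafer(300, 600))  # ~90, the same ballpark as the quoted ~97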
Source:
X-bit Labs
116 Comments on ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.
i bet that both companies will have trouble producing big monolithic gpus.. but nvidia more so, because the R700 is nowhere near the size of the G200
But if it comes down to a resources debate, nVidia can most easily afford titanic productions
but I want to see what amd has to offer in the gpu department :D
THEREFORE, AMD are creating this "nVidia is a dinosaur" hype because, truth be told, AMD cannot compete with nVidia unless they go X2. And X2? Oh, that's the same total chip size as the GTX 200 (+/- 15%). But with a fab shrink (to the same process node as AMD), nVidia's chip would be smaller. Really? Can that really be true? Smaller at the same performance = nVidia's architecture must be better.
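For what it's worth, the "with a fab shrink nVidia would be smaller" part rests on ideal area scaling, where die area shrinks with the square of the node ratio; real chips shrink less because pads and analog blocks don't scale. A quick sketch, assuming the commonly quoted ~576 mm² for GT200 at 65 nm (an assumption; the article above rounds it to ~600 mm²):

# Ideal ("optical") area scaling: area shrinks with the square of the
# feature-size ratio. Real shrinks gain less than this.
def shrink_area(area_mm2: float, old_node_nm: float, new_node_nm: float) -> float:
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

gt200_65nm_mm2 = 576.0  # commonly quoted GT200 die size (assumption)
print(round(shrink_area(gt200_65nm_mm2, 65, 55)))  # ~412 mm2 in the ideal case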
So long as nVidia can manufacture with high yield, they are AOK.
GPUs will eventually end up kinda like dual/quad-core CPUs: you'll have two on one die. When? Who knows, but it seems AMD is working in that direction. However, people complained when the 7950GX2 came out because "it took 2 cards to beat ATI's 1 (1950XTX)". They did it again, to a lesser degree, with the 3870X2, and it'll become more accepted as time goes on, especially since AMD has said "no more mega GPUs". Part of that is they don't wanna f up with another 2900 and they don't quite have the cash, but they are also thinking $$$: sell more high-performing mid-range parts. That's where all the money is made. And we all know AMD needs cash.
i don't understand what you mean by workaround. nvidia has handed amd their ass for the last 2 gens; if they're not even trying and are just making the most of old technology, then god help amd if they do come up with a new architecture. ati died the day they were bought by amd :shadedshu
I'm sure that if indeed multi-core GPUs come marching out of ATI, we'll be seeing a lot of kicking and screaming from the green camp that "it's still 2 GPUs to our 1!!" Which, IMO, I don't believe to be the case. If one chip marches out that has 2 cores on one die, it's still 1 GPU. We don't go around saying that "my Q6600 is 4 CPUs, man!"
Sure, a lot of this progress on ATI/AMD's part has got to be dictated by cost and resources, but I think this is one area where the red camp will be pushing new technology that nVidia will sooner or later have to accept. nVidia can go and counter with a whole new mega-GPU pushing uber-1337 processing capabilities, and ATI could just say "alright, we'll add 2 more cores to our current design and match you again." nVidia could go back to the drawing board and redesign yet another 1337 GPU, and ATI could again counter with "alright, we'll add another 3 cores to our current design and take the lead."
IMHO, the smaller package will be way more cost efficient for both manufacturer and consumer years and years down the road.
TBH, I don't foresee nVidia having the ability to counter that just yet.
This is all speculation, though, and it's all a ways off in the future anyhow. We'll just have to see.
it's not like nvidia can't simply go dual or even quad, seeing as they did buy up 3dfx. it would make more sense, as in the end the uber-performance seekers are going to SLI those monolithic gpus anyway. so why not make a cheaper variant that can go dual: those seeking uber performance can buy the x2, while those seeking better price/performance can be accommodated as well. The GeForce 9 series did this quite well.
and I seriously don't get all the comments about the x2's. I mean, when the athlon 64 x2's came out, nobody said "oh, for amd to be able to beat the pentium 4 they had to go dual." dual was a means of providing more processing power without increasing clock speed or changing architecture. just because a gpu or cpu has more than one core doesn't mean it's an inferior design; it's just a different way of meeting the same performance demand.
if anything, the argument against duals should be the return you get from the second core, as it is in the cpu market. but if ati can make a dual that beats nvidia's single for the same or cheaper cost, that's good business, not inferior design.
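On that "return from the second core" point: a multi-GPU setup only speeds up the part of the frame that can actually be split across chips, so the gain follows the familiar Amdahl's-law curve. A toy illustration, where the 90% parallel fraction is purely an assumed number, not a measurement of any real game or driver:

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
# of the work that scales across GPUs and n is the number of GPUs.
def speedup(p: float, n: int) -> float:
    return 1 / ((1 - p) + p / n)

for n in (1, 2, 4):
    print(n, round(speedup(0.9, n), 2))  # 1 -> 1.0, 2 -> 1.82, 4 -> 3.08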
ATIs problem is that as soon as its technological "higher ground" fails to best the competition, it puts itself under serious pressure.
Still, hopefully the internal distractions of the ATI/AMD merger are in the past and they can concentrate on doing their stuff and keep the market moving on. I agree that Nvidia aren't being pushed hard enough by them and are probably sandbagging tech. Necessity is the mother of invention, and unless they have a strong competitor they will be tempted to make cost savings by stretching old tech for longer.
techreport.com/articles.x/14934
In the image the connections are missing, but it suffices to say they are all connected to the "same bus". A dual-core GPU would be exactly the same, because GPUs are already a bunch of parallel processors, only with two separate buses, so it would need an external one, and that would only add latency. What's the point of doing that? Yields are not going to be higher, since in both cases you have the same number of processors and the same amount of silicon that has to go (and work) together.

In a single "core" GPU, if one unit fails you can just disable it and sell the chip as a lower model (8800 GT, G80 GTS, HD2900GT, GTX 260...), but in a dual "core" GPU the whole core would need to be disabled, or you would (most probably) need to disable another unit in the other "core" to keep symmetry. Either way you lose more than with the single "core" approach, and you don't gain anything, because the chip is the same size.

In the case of CPUs, multi-core does make sense because you can't cut down/disable parts of them except the cache: if one unit is broken you have to throw away the whole core, and if one core is "defective" (it's slower, only half the cache works...) you just cut the cores apart and sell them separately. With CPUs it's a matter of "does it work, and if so, at what speed?"; with GPUs it's "how many units work?"
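That binning argument can be made concrete with a small Monte Carlo sketch. The cluster counts, defect probability and binning rules below are made-up illustrative assumptions, not real yield data; the point is only that per-unit disabling salvages more dies than symmetric per-core disabling:

import random

CLUSTERS = 10        # shader clusters per die (illustrative assumption)
DEFECT_P = 0.05      # assumed per-cluster defect probability
TRIALS = 100_000

def working_clusters(n: int) -> int:
    """Count clusters that come out of the fab without a fatal defect."""
    return sum(random.random() > DEFECT_P for _ in range(n))

mono_sellable = dual_sellable = 0
for _ in range(TRIALS):
    # Monolithic die: any defective cluster can be fused off individually,
    # so the chip is sellable as long as at least 8 of 10 clusters work.
    if working_clusters(CLUSTERS) >= CLUSTERS - 2:
        mono_sellable += 1
    # "Dual-core" die (two 5-cluster cores): disabling must stay symmetric,
    # so the usable count is set by the worse of the two halves.
    a, b = working_clusters(CLUSTERS // 2), working_clusters(CLUSTERS // 2)
    if min(a, b) >= CLUSTERS // 2 - 1:
        dual_sellable += 1

print(f"monolithic sellable: {mono_sellable / TRIALS:.3f}")
print(f"dual-core sellable:  {dual_sellable / TRIALS:.3f}")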
as for evil bill: the Ring Bus actually came with the X1K series of cards, just improved for the R/RV600.
@yogurt: the logical next step for ATI, and eventually NV, would be dual GPU cores. In a sense it would be like the X2s, but a bit different. Whereas AMD/ATI may not want to go with an uber-big core like Nvidia, they may break in with the dual-core GPU. Kind of awesome, to say the least.
These ridiculously large GPUs are going to put out a ridiculous amount of heat, and make VGA coolers ridiculously expensive due to the ridiculous size of the heatsink bases needed to cool the ridiculously large GPU.