Tuesday, June 17th 2008
ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.
The head of ATI Technologies claims that the recently introduced NVIDIA GeForce GTX 200 GPU will be the last monolithic "megachip" because such chips are simply too expensive to manufacture. The statement was made after NVIDIA executives vowed to keep producing large single-chip GPUs. The G200 GPU measures about 600 mm², which means only about 97 candidate dies fit on a 300 mm wafer that costs thousands of dollars. Earlier this year NVIDIA's chief scientist said that AMD is unable to develop a large monolithic graphics processor due to a lack of resources. Mr. Bergman, however, said that smaller chips are easier to adapt for mobile computers.
Source:
X-bit Labs
116 Comments on ATI Believes GeForce GTX 200 Will be NVIDIA's Last Monolithic GPU.
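As a quick sanity check of the article's ~97-die figure, here is a minimal sketch using a common gross-dies-per-wafer approximation (wafer area divided by die area, minus an edge-loss term). The exact die area and the edge-loss correction are assumptions, so treat the result as ballpark only.

import math

# Rough gross-dies-per-wafer estimate for a ~576-600 mm^2 GPU on a 300 mm wafer.
# Uses the common approximation: dies = pi*(d/2)^2 / A - pi*d / sqrt(2*A).
# Die area and the edge-loss term are illustrative assumptions.
def gross_dies(wafer_diameter_mm, die_area_mm2):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

for die_area in (576, 600):
    print(die_area, "mm^2 ->", gross_dies(300, die_area), "candidate dies per 300 mm wafer")
# prints ~94 and ~90 gross dies, in the same ballpark as the article's ~97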
However, a faster & cheaper 400mm² die does have EVERY advantage over a slower, more costly 576mm² die.
Die shrinks do something, but below around 65 nm the benefit of a shrink on its own isn't really significant. Nvidia's CEO admitted that die-shrinking the GTX 280 wouldn't help its extreme heat output a lot. That's fairly reasonable: transistor count is more of a factor. In both earlier cases, G80 to G92 and R600 to RV670, the improvement came largely from cutting down the memory controller.
By the way, the reason why AMD's cards use more power is simple; their cards use more VRM phases than Nvidia's. More phases draw a bit more power overall, but each phase carries less current and therefore generates less heat.
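A rough sketch of that per-phase current point, with made-up load and resistance numbers purely for illustration (nothing here comes from the actual cards):

# Illustrative only: splitting the same GPU current across more VRM phases
# lowers the current (and the I^2*R conduction loss) each phase has to handle.
# The 100 A load and 2 mOhm per-phase resistance are assumed numbers.
gpu_current_a = 100.0
phase_resistance_ohm = 0.002

for phases in (4, 6, 8):
    per_phase = gpu_current_a / phases
    loss_per_phase = per_phase ** 2 * phase_resistance_ohm
    print(f"{phases} phases: {per_phase:.1f} A each, ~{loss_per_phase:.2f} W conduction loss per phase")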
Yep, they don't have to put 'high end air cooling' on their products, what a wonderful relief for them!
Is it just because all the FABs are set up to use that size or is there some kind of physical limit?
I'm sure I read somewhere about the GTX 280 cooler being designed by CM or something.
1- You have to take into account how power consumption works. It scales superlinearly (dynamic power goes roughly with frequency times voltage squared), not linearly, so a slower part would consume a lot less, and the same applies to voltages. Because GT200 turned out worse than expected in this area, Nvidia had to lower the clocks, but they have probably kept them as high as possible within the selected power envelope. There's always a sweet spot for performance-per-watt on any chip, and the GTX 280 is probably clocked quite a bit above that spot. FACT: look at Wizzard's Zotac AMP! GTX 280 review; it consumes a lot more than you would expect from that overclock. Aim a bit below that spot and you have a "low power" chip. For example, a GTX 280 GX2 @ 500 MHz would consume a lot less and still leave the HD4870 X2 behind in performance.
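A minimal sketch of the scaling argument in point 1, assuming dynamic power goes roughly as frequency x voltage squared. The 500 MHz clock is the poster's example; the 602 MHz / 236 W baseline and the assumed 10% voltage reduction are illustrative only, not measured GTX 280 data.

# Illustrative dynamic-power scaling: P ~ f * V^2.
# Baseline and downclocked values are assumptions, not measurements.
def scaled_power(base_power_w, f_ratio, v_ratio):
    return base_power_w * f_ratio * v_ratio ** 2

base_power = 236.0       # GTX 280 TDP used as the baseline
f_ratio = 500.0 / 602.0  # hypothetical 500 MHz core vs the stock 602 MHz
v_ratio = 0.90           # assume the lower clock allows ~10% less voltage

print(f"~{scaled_power(base_power, f_ratio, v_ratio):.0f} W per GPU at the lower clock")
# -> roughly 159 W per GPU, which is why a downclocked GX2 looks feasible to the poster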
2- Nvidia has implemented the ability to shut down parts of the chip in GT200, and it really works very well. Again, look at Wizzard's power consumption charts and how it consumes a lot less than the X2 on average, even though its maximum is almost the same. That means a GX2 card would probably never reach its maximum power consumption. There's no way you are going to get a total of 64 ROPs working at the same time, for example.
3- Continuing with the above argument, IMO if Nvidia did a GX2 it wouldn't be based on the GTX 280, but on the 8800 GS substitute. Nvidia will surely make a 16/20/24 ROP card while maintaining a high shader count (maybe 192/168 SP, the same or one fewer cluster than the GTX 260, for example); they would be stupid not to, as it makes more sense than ever. The GS is "weak" because it has 12 ROPs, but 16, on the other hand, are enough for high-def gaming. 16 ROPs x 2 is more than enough, as the X2/GX2 can testify; 32 x 2 is just over-over-overkill and silly.
My bet is that Nvidia will do a 20 ROP, 168/192 SP card for the high-mainstream segment no matter what, and they could use that for the GX2. Final specs for that hypothetical GX2 would be: 40 ROPs, 336/384 SP, 112/128 TMUs and 2 x 320-bit memory controllers, that is, if they can't make the card use a shared memory pool the way R700 seems set to do. The above card would leave the X2 well behind performance-wise and still be within the power envelope IMO. Of course that envelope would be higher than the X2's, but reachable IMHO and still within the 6+8 pin layout's 300 W; the GTX 280 needs 6+8 pins just by a hair.
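For what it's worth, the arithmetic behind those hypothetical GX2 totals is just a doubling of the speculated single-GPU specs (none of these are announced figures):

# Doubling the poster's hypothetical single-GPU specs to get the GX2 totals.
# These are speculative figures from the post, not real product specs.
single_gpu = {"ROPs": 20, "SPs": 192, "TMUs": 64, "bus_bits": 320}
gx2 = {k: v * 2 for k, v in single_gpu.items()}
print(gx2)  # {'ROPs': 40, 'SPs': 384, 'TMUs': 128, 'bus_bits': 640}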
EDIT: And yes, there is some physical limit too. Bear in mind that wafers are made by slicing silicon ingots very thinly (less than 1 mm IIRC) and they have to maintain the same thickness across their whole area. On top of that, the silicon has to be homogeneous throughout the whole wafer too.
630/610 = 65 nm
That's why ATI is ahead of nVidia (in this respect, atm): they manage to make their die size much smaller than nVidia's.
Also, because bigger wafers are possible, I said it works like a standard. Nvidia would surely want bigger wafers for GT200 at the expense of wafer yields, because the loss in those yields would probably be smaller than the gains in dies per wafer, but since the wafer size is effectively a standard, they can't. I don't know if I have explained that well.
EDIT: Also, I highly doubt those Inquirer yield numbers. They're probably in the high 40s and that's what they were told, and they just slapped a 40% number on it. That number also only seems extremely low if you don't know how high other GPU yields are. They're probably never higher than 75%, and much lower on new high-end chips, RV770 for example. The difference between, say, 60% and 50% is already very big.
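To show why a 10-point yield difference matters so much on a chip this size, here is a rough cost-per-good-die sketch. The $5,000 wafer price is an assumed stand-in for the article's "thousands of dollars", and the ~94 gross dies come from the estimate near the top of the comments.

# Rough cost-per-good-die comparison at different yields.
# Wafer cost and gross die count are illustrative assumptions.
wafer_cost_usd = 5000.0
gross_dies = 94  # ~576 mm^2 die on a 300 mm wafer (see the estimate above)

for yield_pct in (40, 50, 60, 75):
    good_dies = gross_dies * yield_pct / 100
    print(f"{yield_pct}% yield: ~{good_dies:.0f} good dies, ~${wafer_cost_usd / good_dies:.0f} per good die")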
The next gfx card will be 2 PCBs, one for the GPU, the other for the rest of the components :D
HD4850 > 9800GTX by 25% according to AMD; this is fairly believable.
A dual GTX 280 within two slots is technically impossible. Why? 65 nm to 55 nm doesn't bring much of a change in TDP! Nvidia's CEO even admitted it, do I have to repeat this? A GX2 would be viable with, say, a GT200 variant that is similar to G92 in die size. It was mentioned that a die shrink would only drop the GTX 280's heat output to around 200 W, which is still ridiculously high (400 W+ for a GX2). Who cares about idle when the card is ridiculously hot at load?
Nvidia really shot themselves in the foot; powerful as the GTX 280 is, the HD4870 X2 will be a more successful product.
This is serious.
GTX 200 series GPU wafer (from what I've found so far)
A wafer of the 4800 series GPU would fit a whole lot more dies. However, I haven't found a picture of one yet. Anyone have a 55 nm wafer pic?
Also, as I mentioned, Nvidia doesn't need two 280s to crush ATI's X2, not even two 260s. Just by shrinking the chip to 55 nm it would be around 400 mm²; take some ROPs out and you get a die size close to G92's. No one has said a GT200 GX2 is possible, but a GT200b one IS, and you will see it soon if ATI's X2 happens to be quite a bit faster than the GTX 280.
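The ~400 mm² figure follows from a simple optical-shrink estimate, assuming the die scales perfectly with the square of the process-node ratio (real shrinks rarely scale this cleanly):

# Ideal optical shrink: area scales with the square of the feature-size ratio.
# Real-world shrinks scale worse than this, so treat it as a best case.
gt200_area_65nm = 576.0            # mm^2 at 65 nm
shrink_ratio = (55.0 / 65.0) ** 2  # ~0.716
print(f"~{gt200_area_65nm * shrink_ratio:.0f} mm^2 at 55 nm")  # ~412 mm^2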
Also, the real power consumption of the GTX 280 is nowhere near that 236 W TDP, while the older cards are close to their claimed TDPs. Its temperatures are far better than G92's and RV670's too, despite the chip being a lot bigger, so there's some room left there. If GT200b can't push performance beyond that of the X2, a GX2 of GT200b WILL come, but its exact nature is not so defined. In fact, a card with 2x the performance of the GTX 280 doesn't make sense AT ALL. If it did, because games in the near future could take advantage of it, then ATI would be LOST.
In the end it will all depend on the real performance of RV770. AFAIK HD4870 > 9800 GTX by 25% and HD4850 > 8800 GT by 25%. That would also mean HD4850 > 9800 GTX, but only by 5-10%. ANYWAY, forget about all that if the performance boost from newer drivers happens to be true.
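For what it's worth, the 5-10% figure falls out of chaining those ratios if you assume the 9800 GTX leads the 8800 GT by roughly 15-19%; that assumed lead is mine, not something stated in the thread.

# Chaining the relative-performance ratios from the post.
# The 9800 GTX vs 8800 GT gap is an assumption used for illustration.
hd4850_vs_8800gt = 1.25           # HD4850 = 8800 GT + 25% (poster's figure)
gtx9800_vs_8800gt = (1.15, 1.19)  # assumed 9800 GTX lead over the 8800 GT

for g in gtx9800_vs_8800gt:
    lead = hd4850_vs_8800gt / g - 1
    print(f"9800 GTX at +{(g - 1) * 100:.0f}% -> HD4850 leads by {lead * 100:.1f}%")
# prints roughly 8.7% and 5.0%, i.e. the 5-10% range mentioned above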