Thursday, February 16th 2012
NVIDIA Kepler Yields Lower Than Expected.
NVIDIA seems to be playing the blame game, according to an article over at Xbit. This is what they had to say: "NVIDIA Corp.'s chief executive officer said that, on top of the continuously rising capital expenditures the company has run into in recent months, it expects lower-than-expected gross margins in the forthcoming quarter. The company blames low yields of its next-generation graphics chips, code-named Kepler, which are made on TSMC's 28 nm node. 'Decline [of gross margin] in Q1 is expected to be due to the hard disk drive shortage continuing, as well as a shortage of 28nm wafers. We are ramping our Kepler generation very hard, and we could use more wafers. The gross margin decline is attributed almost entirely to the yields of 28nm being lower than expected. That is, I guess, unsurprising at this point,' said Jen-Hsun Huang, chief executive officer of NVIDIA, during a conference call with financial analysts."
NVIDIA's operating expenses have been increasing for about a year: from $329.6 million in Q1 FY2012 to $367.7 million in Q4 FY2012, and the company expects OpEx to be around $383 million in the ongoing Q1 FY2013. At the same time, it expects its gross margin in Q1 FY2013 to decline below 50% for the first time in many quarters, to 49.2%. NVIDIA has very high expectations for its Kepler generation of graphics processing units (GPUs). The company claims that it has signed contracts to supply mobile versions of GeForce "Kepler" chips to every single PC OEM in the world. In fact, NVIDIA says Kepler is the best graphics processor the company has ever designed. "[With Kepler, we] won design wins at virtually every single PC OEM in the world. So, this is probably the best GPU we have ever built and the performance and power efficiency is surely the best that we have ever created," said Mr. Huang.
Unfortunately for NVIDIA, yields of Kepler are lower than the company originally anticipated, and its costs are therefore high. NVIDIA's chief executive remains optimistic and claims that the situation during the Fermi ramp-up was even worse. "We use wafer-based pricing now; when the yield is lower, our cost is higher. We have transitioned to wafer-based pricing for some time and our expectation, of course, is that the yields will improve as they have on previous-generation nodes, and as the yields improve, our output would increase and our costs will decline," stated the head of NVIDIA.
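To make the mechanism concrete: under wafer-based pricing, NVIDIA pays per wafer regardless of how many dies come out working, so the cost of each good die scales inversely with yield. Below is a minimal sketch of that relationship, assuming an illustrative wafer price and candidate-die count (neither figure is from the call):

```cuda
#include <cstdio>

// Toy model of wafer-based pricing: the buyer pays a fixed price per
// wafer, so the cost of each *good* die rises as yield falls.
// All numbers are illustrative assumptions, not figures from the call.
int main() {
    const double wafer_price    = 5000.0;  // assumed 28 nm wafer price, USD
    const double dies_per_wafer = 160.0;   // assumed candidate dies per wafer

    const double yields[] = {0.8, 0.6, 0.4, 0.2};
    for (double y : yields) {
        double cost_per_good_die = wafer_price / (dies_per_wafer * y);
        printf("yield %3.0f%% -> cost per good die: $%6.2f\n",
               100.0 * y, cost_per_good_die);
    }
    return 0;
}
```

Halving the yield doubles the per-die cost, which is exactly why Huang ties the gross-margin decline "almost entirely" to yields.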
Kepler is NVIDIA's next-generation graphics processor architecture that is projected to bring considerable performance improvements and will likely make the GPU more flexible in terms of programmability, which should speed up development of applications that take advantage of GPGPU (general-purpose computing on GPU) technologies. Some of the technologies NVIDIA has promised to introduce with Kepler and Maxwell (the architecture that will succeed Kepler) include a virtual memory space (which will allow CPUs and GPUs to share "unified" virtual memory), pre-emption, an enhanced ability of the GPU to process data autonomously without help from the CPU, and so on. Entry-level chips may not get all the features the Kepler architecture has to offer.
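As a rough illustration of the "unified" virtual memory idea described above, the sketch below uses cudaMallocManaged, a CUDA runtime API that NVIDIA only shipped later (in CUDA 6), so treat it as a hint of the programming model the article is pointing at rather than a Kepler-launch feature. One allocation is directly usable from both CPU and GPU code:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: increments each element in place. Because the buffer is
// "managed", the same pointer is valid on both the CPU and the GPU.
__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1024;
    int *data = nullptr;

    // One allocation visible to both host and device -- the "unified"
    // virtual memory idea the article describes. (cudaMallocManaged is
    // a later, CUDA 6 API, used here purely as an illustration.)
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;   // host writes directly

    increment<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();                   // wait before the host reads

    printf("data[0] = %d, data[%d] = %d\n", data[0], n - 1, data[n - 1]);
    cudaFree(data);
    return 0;
}
```

Without unified virtual memory, the same program needs separate host and device buffers plus explicit cudaMemcpy calls in both directions, which is the developer friction the promised feature removes.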
Source: Xbit Laboratories
75 Comments on NVIDIA Kepler Yields Lower Than Expected.
www.anandtech.com/show/5465/amd-q411-fy-2011-earnings-report-169b-revenue-for-q4-657b-revenue-for-2011 Also, while AMD has a bigger market share in laptops, in desktops Nvidia has about 60%, so it's more affected than AMD there. In any case, Nvidia's Q4 results were much better than AMD's Q4, so it's just a matter of explaining why their operating expenses were higher than before.
Nvidia has sacrificed image quality for the sake of performance.
Now, this goes back a bit, but when I was using some 8800s in SLI, switching from the 175.19 driver to the 180.xx driver doubled my framerate [in BF2142], but all of the colors washed out. At the time I was using a calibrated Dell Trinitron UltraScan monitor, so I immediately noticed the difference in color saturation and overall image quality.
I actually switched back to the 175.19 driver and used it as long as I possibly could. Then I made the switch to ATi and couldn't have been happier. Image quality and color saturation were back, not to mention the 4870 I bought simply SMOKED my SLI setup. :D
EDIT: Makes me wonder if the same thing that happened when Fermi came out is going to happen again. People waited and waited; then Fermi debuted, was a flop, and all of the ATi cards sold out overnight.
Compound this:
AMD has 32 CUs and really only needs slightly more than 28 most of the time. The 7950 is a fine design, and it doesn't really hurt the design if yields are low on the 7970. Tahiti is over-designed, probably for the exact reason mentioned: big chip on a new node. Even if GK104 did have the perfect mix of ROP:shader IPC, the wider bus and (unneeded) bandwidth of the 7950 should make up that performance versus a similar part with a 256-bit bus, because the 7950 is not far off that reality. Point to AMD on the flexibility to reach a certain performance level.
Again, I think the 'efficient/1080p/GK104-like' 32-ROP design will come with Sea Islands, when 28 nm is mature and 1.5 V 7 Gbps GDDR5 is available... think something similar to a native 7950 with a 256-bit bus at higher clocks. Right now, that chip will be Pitcairn (24 ROPs), because it is smaller and lines up with market realities. Point to AMD on being realistic.
nVIDIA appears to have 16 less-granular big units, which is itself a yield problem... like Fermi on a less drastic level, because the die is smaller. If the shader design is 90% of AMD's performance per clock (2 CUs versus 1 SM) or less, every single SM is needed to balance the design. I wager that is either the reality or very close to it, considering 96 SPs, even with realistic use of the SFUs, is not 90% of 128. Yeah, scalar is 100% efficient, but AMD's VLIW4/MIMD designs are not that far off on average. Add that Fermi should need every bit of 5 GHz memory bandwidth per 1 GHz core clock and 2 SMs (i.e., 32 ROP/16 SM/256-bit, 28 ROP/14 SM/224-bit), and you don't have any freaking wiggle room at all if your memory controller or core design over- or under-performs (see the sketch after this post).
Conclusion:
So if you are nVIDIA, you are sitting with a large die, with big units that are all needed at their maximum level, to compete against the salvage design of the competition. Efficient as Fermi can be, yes; smart choices for this point in time... not even close.
Design epic fail.
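For what it's worth, here is a quick sketch of the ratios the post above leans on (the unit counts are the poster's speculation, not confirmed specs). It shows the "no wiggle room" point: the bus-bits-per-SM ratio is identical in both configurations, and 96 SPs is 75% of 128, not 90%:

```cuda
#include <cstdio>

// Bandwidth-per-SM ratios for the hypothetical configurations in the
// post above (speculative unit counts, not confirmed specs).
int main() {
    // {ROPs, SMs, bus width in bits}
    const int configs[][3] = { {32, 16, 256}, {28, 14, 224} };

    for (const auto &c : configs) {
        // Bus bits per SM stays fixed across the salvage step, so memory
        // and core must scale together: no slack if either side slips.
        printf("%2d ROPs / %2d SM / %3d-bit -> %.1f bus bits per SM\n",
               c[0], c[1], c[2], (double)c[2] / c[1]);
    }
    // The shader-count comparison cited above:
    printf("96 / 128 = %.0f%%\n", 100.0 * 96 / 128);
    return 0;
}
```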
For a professional, where colour matters, sure, calibration of your tools is 100% needed. But not all PC users use their PCs in a professional context, and most definitely not the gamer-centric crowd that finds its way onto TPU.
You need to be able to relate to the user experience, not the optimal one, unless every user can get the same experience with minimal effort. When that requires educating the consumer, you can forget about it.
So what is "better"? What is your definition of better? I guess if you belong to the 70% of people whose definition of better is "more saturated", then AMD has a more appealing default color scheme. If your definition of better is "closer to reality, more natural", then you'd prefer Nvidia's scheme.
Saying that AMD has better color is like saying that fast food tastes better because they use additives to make it "taste more". I guess people who get addicted to fast food do think it tastes better, but in the end it's a matter of taste, and so are colors.
That's why I asked the person who brought up the "colour" argument in the first place.
And I find it kinda funny that you chose to call BS on my post and not any of the preceding ones. :cool:
Don't sink to it, man.
So I agree with you: overall, Nvidia isn't in such a bad place; only their biggest chip is.
So in the worst case, Nvidia will end up with a top-end GPU that is 10-20% slower than AMD's top end. But I doubt that; even with the 256-bit bandwidth that everyone is all crazy about, I don't think it should be a problem in most scenarios, especially considering that most people buying Nvidia don't really do multi-GPU setups, while for AMD it's almost a must for Eyefinity.
Also, I heard leaks that Nvidia was debating whether to call the GK104 the GTX 660 or the GTX 680, when the GK110 was supposed to take that spot but isn't coming anytime soon. So I don't know whether the yield issues forced Nvidia to do so, or whether they think the GK104 is sufficient. Either way, we need competition already, and for cards with 340 mm² and 365 mm² die sizes, they should be well within the $350-400 price range, and that's considering TSMC's 20% more expensive wafer prices (see the sketch below).
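For a rough sense of the economics behind that price guess, here is a sketch that estimates gross dies per 300 mm wafer with the standard approximation formula, using the die areas cited above; the wafer price and yield are illustrative assumptions, not known figures:

```cuda
#include <cstdio>
#include <cmath>

// Rough gross-die count per 300 mm wafer using the standard approximation
//   dies = pi*(d/2)^2/A - pi*d/sqrt(2*A)
// Die areas are the ones cited in the post above; the wafer price and
// yield are illustrative assumptions.
int main() {
    const double pi = 3.14159265358979;
    const double d = 300.0;                // wafer diameter, mm
    const double wafer_price = 5000.0;     // assumed 28 nm wafer price, USD
    const double yield = 0.5;              // assumed functional yield

    const double areas[] = {340.0, 365.0}; // die areas from the post, mm^2
    for (double a : areas) {
        double gross = pi * (d / 2) * (d / 2) / a - pi * d / sqrt(2 * a);
        double cost  = wafer_price / (gross * yield);
        printf("%.0f mm^2: ~%.0f gross dies, ~$%.0f per good die at %.0f%% yield\n",
               a, gross, cost, 100 * yield);
    }
    return 0;
}
```

Even at a pessimistic 50% yield, the silicon cost per good die lands well under $100 under these assumptions, which is why a $350-400 retail price still leaves room for board, memory, and margin.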
Fantastic card, btw :) Runs much better than the 6950s I had. At 1,175 core so far. Still testing :)
With a non-reference cooler and an overclock, it still won't go above the low 60s (°C). The fans are still silent.
...and that explains why Nvidia is releasing a 256-bit card to compete with AMD's HD 7970. For multi-monitor on Nvidia you have to SLI, while for Eyefinity you can use one AMD card; that's what I was referring to. In other words, the HD 7970 needs that extra bandwidth more than the GK104.
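To put numbers on that bandwidth gap: memory bandwidth is just bus width times the effective data rate. The HD 7970's 384-bit/5.5 Gbps figures were published; GK104's data rate was unannounced at the time, so the 6 Gbps below is an assumption:

```cuda
#include <cstdio>

// Bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps).
double bandwidth_gbs(int bus_bits, double gbps) {
    return bus_bits / 8.0 * gbps;
}

int main() {
    // HD 7970: published 384-bit bus at 5.5 Gbps effective.
    printf("HD 7970: %.0f GB/s\n", bandwidth_gbs(384, 5.5));
    // GK104: 256-bit bus per the leaks; the data rate was unannounced,
    // so 6 Gbps here is an illustrative assumption.
    printf("GK104 (assumed): %.0f GB/s\n", bandwidth_gbs(256, 6.0));
    return 0;
}
```

Under these assumptions the 7970 holds roughly a 264 vs. 192 GB/s advantage, which matters most at the multi-monitor resolutions Eyefinity targets.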