Tuesday, October 28th 2008
40nm High-End NVIDIA GPUs Slated for 2009, GT206 for Q4
NVIDIA is expected to continue its monolithic high-end GPU approach, with a few notable GPUs slated for Q4 2008 and throughout 2009. The visual computing giant will be rolling out a 55 nm derivative of the existing G200 graphics processor, codenamed GT206. The GPU is expected to be essentially the same design, although the newer silicon process should allow higher clock speeds that push up the performance envelope. The GT206 will be released in Q4 2008, presumably to cash in on the Christmas shopping season. The GT206 is reported to be having problems with its shader domain, which has pushed its launch back this late.
Following the GT206, the GT212 and GT216 would be NVIDIA's entries on the 40 nm silicon fabrication process. Earlier reports suggested that foundry companies in Taiwan could have the infrastructure to manufacture 40 nm GPUs ready by June/July 2009. In the late second quarter of 2009, either the GT212, the GT216, or simply a new card based on the GT206 in a dual-GPU configuration could lead the pack. The GT212 and GT216 GPUs support GDDR5 memory on a wide memory bus. Towards the end of the year, however, NVIDIA is expected to have its DirectX 11 GPU, the GT300, ready.
Source:
Hardspell
34 Comments on 40nm High-End NVIDIA GPUs Slated for 2009, GT206 for Q4
With the exception of Crysis, no game required the caliber of the R700 or the GT200 until a few recent titles.
So what would be the point?
Later, many factors led to the GT200's "failure" (I don't think it's a failure at all, as long as you see it as something more than a GPU).
First of all, Ati played really well with RV770, including not revealing the true specs until very late (many partners still listed 480 SPs on their sites even after the card was released, until the NDA was lifted!!), thus negating any effective response from Nvidia. When you are in the lead to the extent Nvidia was back then, you have to try to match your competitor's performance as closely as possible. Being much faster won't help you at all, because of what I said above about developers: you would increase costs with little to no perceived advantage for the masses.
Not using GDDR5 was not really an error IMO; them not using it was not because they were lazy, sitting on their butts. GDDR5 has been expensive and scarce, and manufacturers have been struggling to meet the demand. Nvidia was selling twice as much as Ati when the GT200 was conceived and also when it was released. That means that if they wanted to maintain that rate (and they had to...), the number of graphics cards shipped with GDDR5 would have been 3x the amount of the ones (HD4870 and X2) that have actually shipped (even today Nvidia leads at 30% while Ati is at 20%, so take that into account too). Manufacturers simply wouldn't have been able to meet that amount, and prices would have been much, much higher, to the point of reaching obscene numbers. I don't have to say that would be bad for everybody except memory manufacturers. Especially the end user.
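(Quick back-of-the-envelope with the numbers above: call the GDDR5 cards Ati actually shipped 1x; an Nvidia lineup selling roughly twice that volume would have needed about another 2x on top, so memory makers would have had to supply roughly 3x the GDDR5 they actually did.)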
Additionally, a 512-bit memory bus is very beneficial (almost a must IMO) for CUDA, and Nvidia decided to bet hard on their GPGPU solution. And although CUDA is still not widely used, and although CUDA support is one of the things that made the GT200 so big and expensive, I think it was justified. CUDA simply works; I love it if only for the encoding capabilities. Badaboom is far from being perfected and it's already a godsend for me: I usually encode 20+ videos per day (mp4, yeah I was lucky here :)) and that usually took me 2-3 hours on my previous CPU (X2 4800+) and 1-2 with the Quad. With Badaboom and my 8800GT I can do it in 20-30 minutes, so it's simply amazing. I can only see it getting much better in the future with GT200 and above cards (specifically designed for CUDA).
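For anyone wondering what that GPU work looks like in code, here's a minimal, purely illustrative CUDA sketch (an assumed example only, nothing to do with Badaboom's actual code) of the per-pixel, data-parallel processing an encoder can offload to the graphics card:

// Illustrative only: each GPU thread processes one pixel of a frame's
// luma plane, the kind of embarrassingly parallel work video tools offload.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale_luma(unsigned char *luma, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per pixel
    if (i < n) {
        float v = luma[i] * gain;                    // simple brightness gain
        luma[i] = (v > 255.0f) ? 255 : (unsigned char)v;
    }
}

int main(void)
{
    const int n = 1920 * 1080;          // one 1080p luma plane
    unsigned char *d_luma;
    cudaMalloc((void **)&d_luma, n);
    cudaMemset(d_luma, 128, n);         // dummy frame data

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    scale_luma<<<blocks, threads>>>(d_luma, n, 1.1f);
    cudaThreadSynchronize();            // wait for the GPU to finish

    cudaFree(d_luma);
    printf("processed %d pixels on the GPU\n", n);
    return 0;
}

A 1080p frame hands the card about two million independent pixels per pass, which is exactly the kind of workload hundreds of shader processors chew through far faster than a dual- or quad-core CPU.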
The only thing Nvidia should have changed in the first place is the manufacturing process. That is what made the GT200 so expensive. Anyway, I think that was not due to the process itself, not inherent to it, but just a failed implementation at launch. IMO the yield problems are more than fixed right now, and there's more behind the massive price cuts than competition requires. What I mean is that there's a strong "we can" component along with the "we need to" in the formula for the price cuts.
All in all, IMHO Nvidia has been doing a lot of good things in the meanwhile, rather than sitting on their butts. They have decided that graphics isn't everything and want to follow that route. It just happens that aiming at more than graphics makes you weak in graphics-only applications from a performance/price point of view, but it's a step you have to take if you really want to dive into new waters. Some people might appreciate it and some won't, but IMO there's no doubt about the value of CUDA and PhysX.
Exactly. You beat me to it, although I wanted to elaborate my reply much more. And yeah, I know it's boring to read me. :ohwell:
GT206 X2 sounds mighty powerful. I would expect it to perform on par with 2x GTX 280.
So keep them coming :)
That makes me think that YOU think companies design and build products because they want to soothe the soul of the consumer; nope, they just want their money, and sometimes not losing money is as good as making money.
It wasn't lazy, it was economically smart.
Would you have bought anything faster than an 8800GTS in 2007 or H1 2008? Looking at your specs, I think not...
Do you honestly believe anyone would have bought the GT200 or RV770* when they launched if Crysis didn't exist? If it wasn't a testament to games to come? I mean enough people to justify the costs. Without Crysis in the middle, it would have been 2-3 years without a significant increase in graphics quality and performance requirements. FEAR and Oblivion (hell, even HL2 or Doom 3) are not too far off from BioShock, UT3 or COD4 in the graphics department, and it's much closer when it comes to GPU requirements.
*Actually RV770 yes, it would have been bought in HD4850 form, but the HD4870 would have sold close to nothing. And yes, Nvidia does not need anything to compete with the HD4850; G92 does it just fine.
I don't know if I understood that well, but isn't what's happening in the CPU market a testament that as long as there's a need for more power, better products are released? Despite no competition at all, the improvement in Nehalem is IMO bigger than P3 vs. P4 and anything in the whole P4 era, and it was in that era that the competition was fiercest. So maybe there's a slowdown (though I don't think so), but not stagnation, as long as there's a need for more.
Anyway, one of the reasons CPUs don't advance as much is that more performance is not needed for most things. Although there's demand for much higher performance in some circles, that only feeds small markets.
The issue is with pricing. Competition brings in advancements. Right now we do need better CPUs than what they can give us.
Not losing money is NEVER as good as making money; I don't know where you're coming up with this. Any business strategy 101 course will beat into you that "if you don't grow, you die." Being economically smart has nothing to do with stalling R&D and holding up product development, especially when these are your competitive advantages.
The CEO of Nvidia himself said that they underestimated ATI, and how they f-ed up doing it. Now they're losing money and market share. I love their products, but they gave up the lead and released the same exact product for two generations while ATI was catching up.
I do agree we need faster CPUs, if by "we" you are talking about enthusiasts, hardcore gamers or professionals. But sadly for us, the general trend is moving towards slower, cheaper parts. About 90% of the people I know do little more than web surfing, chatting and watching videos (most of the time YouTube, go figure...) and the like. I think this is something common around the world, although I may be wrong assuming that. Anyway, those people wouldn't really need anything faster than a P3 as long as you put a good enough GPU alongside it (actually, Atom is not much better).
Interestingly, up until recently, most of them kept upgrading because they had the perception they needed something better, IMO purely driven by the inertia acquired in the past and by poor system and OS maintenance. But now that's changing at a pace that almost scares me. The advancement of all the portable devices, plus a little more awareness of what's inside acquired through time and experience (maybe I am guilty of teaching them too), is making them realise they really don't need more.
And without them, we are pretty much left out in the cold: this industry works in cycles. Enthusiasts (early adopters, to be more precise) make faster/better mainstream products possible, but it's the masses who justify the costs of making enthusiast parts, because those parts eventually become mainstream parts (either directly or in products based on them). With a stagnation in the mainstream area (fingers crossed), making high-end parts will become harder to justify and will eventually become elitist.
TBH I started seeing something similar even in games last year, probably because of the consoles, and that's one of the reasons the Crytek people became gods for me.
We = enthusiasts, hardcore gamers or professionals in the case of discrete graphics, while the majority (as in web + some entertainment users) use integrated graphics.
Anyway, I love the comments about R&D, because Ati didn't really do that much in that department either. Every improvement in RV770 was reached using the same "old" design criteria that were used in G92. Why would Nvidia have to change it when the best thing the competitor did in years (a couple) was use that same old design? Most people don't want to hear this, but the only reason RV770 is so good is that they had the chance to implement many things the competitor couldn't, mainly the fab process and GDDR5. And IMO, as I said in other posts, they couldn't, and not because they are lacking in the R&D department or lack the knowledge. It's just that what worked for Ati couldn't work for Nvidia; i.e. a 512-bit interface and a large number of ROPs are desirable for CUDA regardless of the memory you use, so GDDR5 simply didn't make sense.
In terms of performance alone, a two-generation-old 8800GTX still pretty much matches the HD4850. That is a two-generation gap, and ATi's third-best card is equal to nVidia's second best... :shadedshu
Don't get me wrong, RV770 is great. It has definitely caught ATi back up, which is good. But most would make you believe that it shot ATi into some huge lead. That simply isn't the case, and the fact is that nVidia probably could have easily competed even with ATi's highest offerings, with only slight improvements to G92.
I find it hard to see those appearing in the US and EU, so they wouldn't contribute to competition in these regions; they would just fill the emerging markets Intel and AMD desperately need: Asia and the Middle East.
My two cents.
Gaming enthusiasts aren't the "consumer" I was talking about this time -- I'm sure the industrial and scientific worlds can always benefit from faster processing cores, no matter the price, especially when it comes to research into finding cures for diseases (where fast GPUs are now making a big impact).
There'll always be a market for something faster.
Enthusiasts often decide to spend more money in order to have something a little bit better. Few people in the corporate arena do the same. Of course there are some exceptions, but they are very rare.
Anyway, your comment was a bit off then, because if Nvidia has been doing anything in recent times, it is improving support for that corporate market you mention. Making a faster GPU can't help that market when you still haven't tuned the ones you already have. They needed to deliver the proper software and support (to companies, coders, etc.) for the hardware they already had. Now that they have a working and "complete" API (it still needs a lot of work, even though it is the best GPGPU solution out there right now), they can focus more on the hardware side again.
Anyway, I had the impression ATI's FireStream was a good competing product, even though Tesla seems to be the more successful and complete solution, mostly thanks to CUDA.