Tuesday, October 28th 2008

40nm High-End NVIDIA GPUs Slated for 2009, GT206 for Q4

NVIDIA is expected to continue with its monolithic high-end GPU approach, with a few notable GPUs slated for Q4 2008 and throughout 2009. The visual computing giant will be rolling out a 55nm derivative of the existing G200 graphics processor, codenamed GT206. The GPU is expected to be essentially the same design, albeit on a newer silicon process that allows higher clock speeds, pushing up the performance envelope. The GT206 will be released in Q4 2008, presumably to cash in on the Christmas shopping season. The GT206 is reported to be having problems with its shader domain, which has pushed its launch this late into the year.

Following the GT206, the GT212 and GT216 would be NVIDIA's entries on the 40nm silicon fabrication process. Earlier reports suggested that foundry companies in Taiwan could have the infrastructure ready to manufacture 40nm GPUs by June/July 2009. In the late second quarter of 2009, either the GT212, the GT216, or simply a new dual-GPU card based on the GT206 could lead the pack. The GT212 and GT216 GPUs support GDDR5 memory on a broad memory bus. Towards the end of the year, however, NVIDIA is expected to have readied its DirectX 11 GPU, the GT300.
Source: Hardspell

34 Comments on 40nm High-End NVIDIA GPUs Slated for 2009, GT206 for Q4

#1
Hayder_Master
40 nm, GDDR5, DX11... I don't think NVIDIA can put all of this in one card at the same time; maybe next time, but not in two months.
Posted on Reply
#2
btarunr
Editor & Senior Moderator
hayder.master40 nm, GDDR5, DX11... I don't think NVIDIA can put all of this in one card at the same time; maybe next time, but not in two months.
Umm... after two months it's the GT206; the GT300 is towards the end of next year.
Posted on Reply
#3
wolf
Better Than Native
It's totally doable, and they have some ground to make up.
Posted on Reply
#4
phanbuey
Makes you wonder why they sat around on their butts during the G80/G92 reign... This could have been out a long time ago.
Posted on Reply
#5
wolf
Better Than Native
Exactly, they thought they had all the time in the world, then BAM, 48xx hits and they rush GT200 out.
Posted on Reply
#6
newconroer
Maybe because staying ahead of the competition isn't worth the costs or the resources when there's no real demand in the market for them to do so.

With the exception of Crysis, no game required the caliber of the R700 or the GT200 up until a few titles recently.

So what would be the point?
Posted on Reply
#7
DarkMatter
phanbueyMakes you wonder why they sat around on their butts during the G80/G92 reign... This could have been out a long time ago.
If by that you mean why they didn't release anything new, it's because they really didn't have many options. Without serious competition for their top cards, and with G92 cards being capable of playing everything at the highest settings, any new card would just be seen as overkill. Without Ati cards being on par, developers would never really take advantage of the extra power of the new cards. It happened to the G80 to some extent, and back then extra performance was more needed than in the G92 days.

Later, many factors led to GT200's "failure" (I don't think it's a failure at all, as long as you see it as something more than a GPU).

First of all, Ati played really well with RV770, including not revealing the true specs until very late (many partners still said 480 SP on their sites even after the card was released, until the NDA was lifted!!), thus negating any effective response from Nvidia. When you are in the lead to the extent Nvidia was back then, you have to try to match the competitor's performance as closely as possible. Being much faster won't help you at all, because of what I said above about developers: you would increase costs with little to no perceived advantage for the masses.

Not using GDDR5 was not really an error IMO; they didn't skip it because they were lazy, sitting on their butts. GDDR5 has been expensive and scarce, and manufacturers have been struggling to meet the demand. Nvidia was selling twice as much as Ati when GT200 was conceived and also when it was released. That means that if they wanted to maintain that rate (and they had to...), the number of graphics cards shipped with GDDR5 would have been 3x the amount of the ones (HD4870 and X2) that have shipped in reality (even today Nvidia leads at 30% while Ati is at 20%, so take that into account too). Manufacturers simply wouldn't have been able to meet that volume, and prices would have been much, much higher, to the point of reaching obscene numbers. I don't need to say that would be bad for everybody except memory manufacturers, especially for the end user.

Additionally, a 512-bit memory bus is very beneficial (almost a must IMO) for CUDA, and Nvidia decided to bet hard on their GPGPU solution. And although CUDA is still not widely used, and although CUDA support is one of the things that made GT200 so big and expensive, I think it was justified. CUDA simply works; I love it if only for the encoding capabilities. Badaboom is far from being perfected and it's already like a godsend for me: I usually encode 20+ videos per day (MP4, yeah I was lucky here :)) and that usually took me 2-3 hours on my previous CPU (X2 4800+) and 1-2 with the quad. With Badaboom and my 8800GT I can do it in 20-30 minutes, so it's simply amazing. I can only see it getting much better in the future with GT200 and above cards (specifically designed for CUDA).
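
To give an idea of why this kind of work maps so well onto the GPU, here's a minimal sketch of the sort of data-parallel, per-pixel kernel a CUDA transcoder hands off, with one thread per pixel. The kernel and buffer names here are made up for illustration and assume nothing about Badaboom's actual code:

#include <cuda_runtime.h>
#include <stdio.h>

// Adjust one 8-bit luma sample with a simple gain/offset: one thread per pixel.
__global__ void adjustLuma(const unsigned char *in, unsigned char *out,
                           int numPixels, float gain, float offset)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global pixel index
    if (i < numPixels) {
        float v = in[i] * gain + offset;
        out[i] = (unsigned char)fminf(fmaxf(v, 0.0f), 255.0f);  // clamp to 0..255
    }
}

int main(void)
{
    const int numPixels = 1280 * 720;                // one 720p luma plane
    const size_t bytes = numPixels * sizeof(unsigned char);

    // Host-side frame buffers filled with dummy data.
    unsigned char *hIn  = (unsigned char *)malloc(bytes);
    unsigned char *hOut = (unsigned char *)malloc(bytes);
    for (int i = 0; i < numPixels; ++i) hIn[i] = (unsigned char)(i % 256);

    // Device-side copies of the frame.
    unsigned char *dIn, *dOut;
    cudaMalloc((void **)&dIn, bytes);
    cudaMalloc((void **)&dOut, bytes);
    cudaMemcpy(dIn, hIn, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every pixel in the frame.
    const int threads = 256;
    const int blocks = (numPixels + threads - 1) / threads;
    adjustLuma<<<blocks, threads>>>(dIn, dOut, numPixels, 1.1f, 4.0f);
    cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);

    printf("first adjusted sample: %d\n", hOut[0]);

    cudaFree(dIn);  cudaFree(dOut);
    free(hIn);      free(hOut);
    return 0;
}

The point is that every pixel is independent, so the hundreds of stream processors in a G92 or GT200 can all chew on the frame at once, which is roughly where that kind of speedup over a quad core comes from.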

The only thing Nvidia should have changed in the first place is the manufacturing process. That was what made GT200 so expensive. Anyway, I think that was not due to the process itself, not inherent to it, but just a failed implementation at launch. IMO the yield problems are more than fixed right now, and there's more than competitive need behind the massive price cuts. What I mean is that there's a strong "we can" component along with the "we need to" in the formula for the price cuts.

All in all, IMHO Nvidia has been doing a lot of good things in the meantime, rather than sitting on their butts. They have realized that graphics is not everything and want to follow that route. It just happens that aiming at more than graphics makes you weaker in graphics-only applications from a performance/price point of view, but it's a step you have to take if you really want to dive into new waters. Some people might appreciate it and some won't, but IMO there's no doubt about the value of CUDA and PhysX.
newconroerMaybe because staying ahead of the competition isn't worth the costs or the resources when there's no real demand in the market for them to do so.

With the exception of Crysis, no game required the caliber of the R700 or the GT200 up until a few titles recently.

So what would be the point?
Exactly. You beat me to it, although I wanted to elaborate my reply much more. And yeah, I know it's boring to read me. :ohwell:
Posted on Reply
#8
mdm-adph
phanbueyMakes you wonder why they sat around on their butts during the G80/G92 reign... This could have been out a long time ago.
It's just further proof that when companies have no competition, they sit around on their asses, and the consumer isn't given the best.
Posted on Reply
#9
btarunr
Editor & Senior Moderator
mdm-adphIt's just further proof that when companies have no competition, they sit around on their asses, and the consumer isn't given the best.
Like what's about to happen to the CPU market...

GT206 X2 sounds mighty powerful. I would expect it to perform on par with 2x GTX 280.
Posted on Reply
#10
newconroer
DarkMatterExactly. You beat me to it, although I wanted to elaborate my reply much more. And yeah, I know it's boring to read me. :ohwell:
Boring? Not at all. If we had more people with healthy-length responses, half-decent punctuation, and some respect for grammar, TPU might have something of the same image it used to have years ago.

So keep them coming :)
mdm-adphIt's just further proof that when companies have no competition, they sit around on their asses, and the consumer isn't given the best.
That makes me think that YOU think that companies design and build products because they want to soothe the soul of the consumer; nope, they just want their money, and sometimes not losing money is as good as making money.

It wasn't lazy, it was economically smart.
Posted on Reply
#11
DarkMatter
mdm-adphIt's just further proof that when companies have no competition, they sit around on their asses, and the consumer isn't given the best.
Companies can't take the risk of releasing something better if people won't buy the thing.

Would you have bought anything faster than an 8800 GTS in 2007 or H1 2008? Looking at your specs, I think not...

Do you honestly believe anyone would have bought GT200 or RV770* when they launched if Crysis didn't exist? If it wasn't a testament to games to come? I mean enough people to justify the costs. Without Crysis in the middle, it would have been 2-3 years without a significant increase in graphics quality and performance requirements. FEAR and Oblivion (hell, even HL2 or Doom 3) are not too far off from Bioshock, UT3 or COD4 in the graphics department, and it's much closer when it comes to GPU requirements.

*Actually RV770 yes, it would have been bought in HD4850 form, but the HD4870 would have sold close to nothing. And yes, Nvidia does not need anything new to compete with the HD4850; G92 does it just fine.
btarunrLike what's about to happen to the CPU market...
I don't know if I understood that well, but isn't what's happening in the CPU market a testament that as long as there's a need for more power, better products are released? Despite no competition at all, the improvement in Nehalem is IMO bigger than P3 vs. P4 and than anything in the whole P4 era, and that was the era when competition was fiercest. So there's a slowdown maybe (though I don't think so), but not stagnation, as long as there's a need for more.

Anyway, one of the reasons CPUs don't advance as much is that more performance is not needed for most things. Although there's demand for much higher performance in some circles, that only feeds small markets.
Posted on Reply
#12
btarunr
Editor & Senior Moderator
DarkMatterI don't know if I understood that well, but isn't what's happening in the CPU market a testament that as long as there's a need for more power, better products are released? Despite no competition at all, the improvement in Nehalem is IMO bigger than P3 vs. P4 and than anything in the whole P4 era, and that was the era when competition was fiercest. So there's a slowdown maybe (though I don't think so), but not stagnation, as long as there's a need for more.

Anyway, one of the reasons CPUs don't advance as much is that more performance is not needed for most things. Although there's demand for much higher performance in some circles, that only feeds small markets.
There is technological advancement, no doubt about that; but I don't see CPUs getting the kind of consumer-driven price cuts GPUs got. I never could have dreamed that my $240 8800 GT would sell for $100, less than one year into its release. There is a need for the amount of power today's CPUs carry. Maybe the amount of parallelism they bring in doesn't benefit today's games, but with everything else, the sky is the limit. I was trying out PS CS4, and I thought my Phenom X4 would more than suffice. To my surprise, CS4 wasn't a cakewalk for my Phenom at all. With GPU acceleration disabled, the Phenom made it irritating (not as smooth as expected) to work with large (as in > 3000x3000 px) images. There were lags with scrolling, and the hand tool wasn't all that smooth. Of course the UI became a cakewalk once I enabled GPU drawing, but that's another thing.

The issue is with pricing. Competition brings in advancements. Right now we do need better CPUs than what they can give us.
Posted on Reply
#13
wolf
Better Than Native
btarunrLike what's about to happen to the CPU market...

GT206 X2 sounds mighty powerful. I would expect it to perform on par with 2x GTX 280.
That I would buy :toast:
btarunrCompetition brings in advancements. Right now we do need better CPUs than what they can give us.
Hear, hear. And honestly I don't think LGA1366/whatever the new AMD chip is, lol, will bring all of that power. Not only do we need more physical cores (as opposed to logical ones with Hyper-Threading), but some more speed definitely helps in games too, and naturally making them more powerful clock for clock helps as well.
Posted on Reply
#14
phanbuey
newconroerThat makes me think that YOU think that companies design and build products because they want to soothe the soul of the consumer; nope, they just want their money, and sometimes not losing money is as good as making money.

It wasn't lazy, it was economically smart.
It's not economically smart to give up market share and have no contingency plan. That is why Intel is out there with its Tick-Tock strategy, and their shareholders love it.

Not losing money is NEVER as good as making money; I don't know where you're coming up with this. Any Business Strategy 101 course will beat into you: "if you don't grow, you die." Being economically smart has nothing to do with stalling R&D and holding up product development, especially when those are your competitive advantages.

The CEO of Nvidia himself said that they underestimated ATI and how they f-ed up doing it. Now they're losing money and market share. I love their products, but they gave up the lead and released the exact same product for two generations while ATI was catching up.
Posted on Reply
#15
DarkMatter
btarunrThere is technological advancement, no doubt about that; but I don't see CPUs getting the kind of consumer-driven price cuts GPUs got. I never could have dreamed that my $240 8800 GT would sell for $100, less than one year into its release. There is a need for the amount of power today's CPUs carry. Maybe the amount of parallelism they bring in doesn't benefit today's games, but with everything else, the sky is the limit. I was trying out PS CS4, and I thought my Phenom X4 would more than suffice. To my surprise, CS4 wasn't a cakewalk for my Phenom at all. With GPU acceleration disabled, the Phenom made it irritating (not as smooth as expected) to work with large (as in > 3000x3000 px) images. There were lags with scrolling, and the hand tool wasn't all that smooth. Of course the UI became a cakewalk once I enabled GPU drawing, but that's another thing.

The issue is with pricing. Competition brings in advancements. Right now we do need better CPUs than what they can give us.
I have to agree then. I completely forgot about pricing; I was just talking about technological advancements, because the discussion seemed to be about that. My personal opinion is that the CPU pricing issue is more related to the general migration to cheaper PCs than to the (lack of) competition in the high end, although both have their share.

I do agree we need faster CPUs, if by "we" you are talking about enthusiasts, hardcore gamers or professionals. But sadly for us, the general trend is moving towards slower, cheaper parts. About 90% of the people I know do close to nothing more than web surfing, chatting and watching videos (most of the time YouTube, go figure...) and the like. I think this is something common around the world, although I may be wrong assuming that. Anyway, those people wouldn't really need anything faster than a P3 as long as you put a good enough GPU along with it (actually Atom is not much better). Interestingly, up until recently most of them kept upgrading because they had the perception they needed something better, IMO purely driven by the inertia acquired in the past and poor system and OS maintenance. But now that's changing at a pace that almost scares me. The advancement of all the portable devices, and a little more awareness of what's inside acquired through time and experience (maybe I am guilty of teaching them too), is making them realise that they really don't need more. And without them, we are pretty much left out in the cold: this industry works in cycles. Enthusiasts (early adopters, to be more precise) make faster/better mainstream products possible, but it's the masses who can justify the costs of making enthusiast parts, because those parts eventually become mainstream parts (either directly or in products based on them). With a stagnation in the mainstream area (fingers crossed it doesn't happen), making high-end parts will become harder to justify and they will eventually become elitist.

TBH I started seeing something similar even in games last year, probably because of the consoles, and that's one of the reasons the Crytek people became gods to me.
Posted on Reply
#16
btarunr
Editor & Senior Moderator
Yes, we = enthusiasts, hardcore gamers, or professionals. The scene is similar to:

we = enthusiasts, hardcore gamers, or professionals in the case of discrete graphics, while the majority (as in web + some entertainment users) use integrated graphics.
Posted on Reply
#17
DarkMatter
phanbueyIt's not economically smart to give up market share and have no contingency plan. That is why Intel is out there with its Tick-Tock strategy, and their shareholders love it.

Not losing money is NEVER as good as making money; I don't know where you're coming up with this. Any Business Strategy 101 course will beat into you: "if you don't grow, you die." Being economically smart has nothing to do with stalling R&D and holding up product development, especially when those are your competitive advantages.

The CEO of Nvidia himself said that they underestimated ATI and how they f-ed up doing it. Now they're losing money and market share. I love their products, but they gave up the lead and released the exact same product for two generations while ATI was catching up.
Who told you Nvidia stopped R&D? By the time GT200 launched, GT200b had already taped out AFAWK. They seem to be having problems with it again, but that has nothing to do with R&D. They are releasing 40nm and DX11 at the same time as Ati, and according to some rumors the only reason Nvidia won't release them long before Ati is that TSMC has problems with the fab process. Looking at it retrospectively, this just matches the rumors from quite some time ago that said Nvidia might completely skip DX10.1 and half nodes.

Anyway, I love the comments about R&D, because Ati didn't really do that much in that department either. Every improvement in RV770 was reached by using the same "old" design criteria that were used in G92. Why would Nvidia have to change it when the best thing the competitor did in years (a couple) was use that same old design? Most people don't want to hear this, but the only reason RV770 is so good is that they had the chance to implement many things the competitor couldn't, mainly the fab process and GDDR5. And IMO, as I said in other posts, they couldn't, not because they lack in the R&D department or lack the knowledge; it's just that what worked for Ati couldn't work for Nvidia. E.g., a 512-bit interface and a large number of ROPs are desirable for CUDA regardless of the memory you use, so GDDR5 simply didn't make sense.
Posted on Reply
#18
newtekie1
Semi-Retired Folder
phanbueyMakes you wonder why they sat around on their butts during the G80/G92 reign... This could have been out a long time ago.
Well, the simple fact is that they didn't really have to do much. It is the by-product of not having any real competition. Let's face it, RV770 is a wonderful thing for the market; it was the first offering by ATi that actually matched what nVidia was putting out. And even then, if you look at just the RV770, it really isn't that much of an improvement over G92. So for those 6 months, nVidia had no real reason to improve the tech. In fact, with the release of RV770, we had the top-end G92 cards matching the second from the top RV770 card. It isn't a good thing when the top end of the previous generation manages to match the second-best card (now third) of the "next" generation.

In terms of performance alone, a two-generation-old 8800GTX still pretty much matches the HD4850. That is a two-generation gap, and ATi's third-best card is equal to nVidia's second best... :shadedshu

Don't get me wrong, RV770 is great. It has definitely caught ATi back up, which is good. But most would have you believe that it shot ATi into some huge lead. That simply isn't the case, and the fact is that nVidia probably could have easily competed even with ATi's highest offerings with only slight improvements to G92.
Posted on Reply
#19
DarkMatter
btarunrYes, we = enthusiasts, hardcore gamers, or professionals. The scene is similar to:

we = enthusiasts, hardcore gamers, or professionals in the case of discrete graphics, while the majority (as in web + some entertainment users) use integrated graphics.
But IMO there's a big difference. Discrete GPUs have always been for enthusiasts, or at least enthusiast-enough people. Remember that even today Intel integrated graphics sell much more than discrete cards. Any mainstream discrete GPU sold is usually one less IGP and a win for them. On the other hand, taking into account that almost every household in the USA and EU has a PC, any mainstream CPU sold is one less high-end CPU sold and a significant loss in profits in those markets. I don't know if the growing "alternative" markets can make up for the difference (for Intel and AMD, I mean), considering the biggest one (China) is about to create its own CPUs, and correct me if I'm wrong, but the big expanding countries in the region, India for example, will probably prefer those if they are available.
I find it hard to see those appearing in the US and EU, so they wouldn't contribute to competition in these regions; they would just fill the emerging markets Intel and AMD may desperately need: Asia and the Middle East.

My two cents.
Posted on Reply
#20
mdm-adph
newconroerThat makes me think that YOU think that companies design and build products because they want to soothe the soul of the consumer; nope, they just want their money, and sometimes not losing money is as good as making money.

It wasn't lazy, it was economically smart.
DarkMatterCompanies can't take the risk of releasing something better if people won't buy the thing.

Would you have bought anything faster than an 8800 GTS in 2007 or H1 2008? Looking at your specs, I think not...
Boys, boys -- settle down. :laugh:

Gaming enthusiasts aren't the "consumer" I was talking about this time -- I'm sure the industrial and scientific worlds can always benefit from faster processing cores, no matter what the price, especially when it comes to research into finding cures for diseases (where fast GPUs are now making a big impact).

There'll always be a market for something faster.
Posted on Reply
#21
magibeg
I just wonder how the GT206 will match up against the 5870. Looks like a possible place to spend my X-mas money ;)
Posted on Reply
#22
DarkMatter
mdm-adphBoys, boys -- settle down. :laugh:

Gaming enthusiasts aren't the "consumer" I was talking about this time -- I'm sure the industrial and scientific worlds can always benefit from faster processing cores, no matter what the price, especially when it comes to research into finding cures for diseases (where fast GPUs are now making a big impact).

There'll always be a market for something faster.
Oooh, then it's even worse. The percentage of corporate consumers that require high-end GPUs is even smaller than the uber-enthusiast gamer market. And the percentage of those willing to spend more money is even smaller. And the number of those willing to be the first to invest in new solutions is even smaller. And... (sorry :D)
Enthusiasts often decide to spend more money in order to have something a little bit better. Few people in the corporate arena do the same. Of course there are some exceptions, but they are very rare.

Anyway, your comment was a bit off then, because if Nvidia has been doing anything in recent times, it is improving support for that corporate market you mention. Making a faster GPU can't help that market when you still haven't tuned the ones you already have. They needed to deliver the proper software and support (to companies, coders, etc.) for the hardware they already had. Now that they have a working and "complete" API (it still needs a lot of work, even though it is the best GPGPU solution out there right now), they can focus more on the hardware side again.

Anyway, I had the impression Ati's FireStream was a good competing product, even though Tesla seems to be the more successful and complete solution, mostly thanks to CUDA.
Posted on Reply
#23
jbunch07
OK, let me get this straight; please correct me if I'm wrong. The GT206, GT212, and GT216 will be faster than the current GTX 260 and GTX 280? Nvidia's nomenclature has got me really confused. :confused:
Posted on Reply
#24
Octavean
nVIDIA DirectX 11 GT300, here I come ;)
Posted on Reply
#25
btarunr
Editor & Senior Moderator
jbunch07OK, let me get this straight; please correct me if I'm wrong. The GT206, GT212, and GT216 will be faster than the current GTX 260 and GTX 280? Nvidia's nomenclature has got me really confused. :confused:
The GTxxx references in the news article are to the code-names of the GPUs, not the trade-names. E.g., G200 is the code-name, [GeForce] GTX 280/260 is the trade-name, capiche?
Posted on Reply