Friday, November 7th 2008
AMD to Give RV770 a Refresh, G200b Counterattack Planned
The RV770 graphics processor changed AMD's fortunes in the graphics processor industry and put it back in the race for supremacy against its larger rival NVIDIA. The introduction of RV770-based products had a huge impact on the mid-range and high-end graphics card markets, and took NVIDIA by surprise. Jen-Hsun Huang, NVIDIA's CEO, has been quoted as saying that the company underestimated its competitor's latest GPU, referring to the RV770. While the Radeon HD 4870 graphics accelerator competed directly with the 192-shader GeForce GTX 260, the subsequent introduction of a 216-shader variant saw it lose ground, leaving AMD to double the memory size to carve out a newer SKU, the Radeon HD 4870 1GB. Performance benchmarks of this card from across the media have been mixed, but they show that AMD isn't giving up on this chance to gain technological supremacy.
In Q4 2008, NVIDIA is expected to release two new graphics cards: the GeForce GTX 270 and GeForce GTX 290. The cards are based on NVIDIA's G200 refresh, the G200b, which incorporates a new manufacturing process to facilitate higher clock speeds and step up performance. This threatens the market position of AMD's RV770, since it's already established that the G200, overclocked to its stable limits, achieves more performance than the RV770 pushed to its limits. This leaves AMD with some worries: it cannot afford to lose the strong market position its cash cow, the RV770, currently holds to an NVIDIA product that outperforms it by a significant margin in its price range. The company's next-generation graphics processor will be the RV870, which is still some way off and cannot be rushed in, since its introduction is tied to the constraints of foundry companies such as TSMC and the availability of the required manufacturing process (40 nm silicon lithography). While TSMC takes its time working on that, there is a fair bit of time left for the RV770 to face NVIDIA, and given the circumstances, that looks like a lost battle. Is AMD going to do something about its flagship GPU? Will AMD make an effort to maintain its competitiveness before the next round of the battle for technological supremacy begins? The answer is tilting in favour of yes.
AMD will be giving the RV770 a refresh with the introduction of a new graphics processor, which could come out before the RV870. This graphics processor is codenamed RV790, while the possible new SKU name is kept under wraps for now. AMD is looking to keep the exact same manufacturing process as the RV770 and all of its machinery, but will be making changes to certain parts of the GPU that allow it to run at higher clock speeds, unleashing the best efficiency from all ten of its ALU clusters.
Déjà vu? AMD has already attempted something similar with its plans for the Super-RV770 GPU, where the objective was the same, to achieve higher clock speeds, but the approach wasn't right. All AMD did back then was put batches of RV770 through binning, pick the best-performing parts, and use them on premium SKUs with improved cooling. The attempt evidently wasn't very successful: no AMD partner was able to sell graphics cards that ran stable out of the box at the clock speeds they set out to achieve, in excess of 950 MHz.
This time around, the objective remains the same: make the machinery of the RV770 operate at very high clock speeds to bring out the best performance efficiency of those 800 stream processors. The approach, however, is different: re-engineer parts of the GPU to facilitate higher clock speeds. This aims to boost the shader compute power of the GPU and push its performance. What gains are slated? Significant and sufficient: significant, with reference clock speeds rising beyond what the current RV770 can reach with overclocking, and sufficient to make it competitive with G200b-based products.
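To put that in perspective, here is a minimal sketch of how theoretical shader throughput scales with core clock; the stock figure reflects the 750 MHz Radeon HD 4870, while the higher clocks are purely illustrative assumptions, not confirmed RV790 specifications.

# Theoretical single-precision shader throughput of an RV770-class GPU:
# 800 stream processors, each doing one multiply-add (2 FLOPs) per clock.
def shader_gflops(stream_processors, core_clock_mhz):
    return stream_processors * 2 * core_clock_mhz / 1000.0

print(shader_gflops(800, 750))   # stock Radeon HD 4870: 1200.0 GFLOPS
for clock in (850, 950, 1000):   # hypothetical refresh clocks, illustrative only
    print(clock, "MHz ->", shader_gflops(800, clock), "GFLOPS")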
With this, AMD looks to keep its momentum as it puts up strong competition against NVIDIA, yielding great products from both camps at great prices, all in all propelling the fastest-growing segment of the PC hardware industry: graphics processors. This is going to be a merry Christmas shopping season for graphics card buyers.
92 Comments on AMD to Give RV770 a Refresh, G200b Counterattack Planned
The whole "512-bit is more prone to damage" thing is just a mathematical probability. Of course, the vMem circuitry is a different issue, where more chips = more wear and tear, but it's understood that on a card with a 512-bit memory interface the vMem is accordingly durable (high-grade components are used), something NVIDIA does on its reference G80 and G200 PCBs.
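A quick way to see why that probability stays "only mathematical": a minimal sketch with made-up per-chip failure rates, showing that doubling the chip count only matters if the per-part failure rate stays the same, and higher-grade parts push it right back down.

# Probability that at least one memory chip on a card fails, assuming
# independent failures; all failure rates below are made-up illustrative values.
def card_failure_probability(chips, per_chip_failure):
    return 1.0 - (1.0 - per_chip_failure) ** chips

print(card_failure_probability(8, 0.010))   # ~0.077: 256-bit card, 8 chips
print(card_failure_probability(16, 0.010))  # ~0.149: 512-bit card, 16 chips, same parts
print(card_failure_probability(16, 0.005))  # ~0.077: 512-bit card with better-grade parts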
Anything that's more complex will be more prone to problems. Look at Windows and PCs: they are more complex to deal with than a Mac, where the hardware is limited, so you have fewer problems, but that doesn't make Macs better, and doesn't really make them worse either (the user base does that :P).
I think you get what I was talking about. I'm done trying to explain/justify/whatever; I'm gonna watch some Lost and look for some more stuff to dump on my Samsung players.
You do know they bin chips, don't you?
You do know that Intel's quads are still two Core 2 Duo dies on one package, don't you?
Do you know why Intel does this?
They do it because the fail/flaw rate of dual-core dies, thanks to their lower complexity, is lower than it would be with one solid die with four cores on it (see the yield sketch after this post).
What I'm saying is that your logic is flawed, that or you really don't know what you're talking about...
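For what it's worth, here's a minimal sketch of that yield argument using a simple Poisson yield model; the defect density and die areas are invented illustrative numbers, not actual Intel, AMD, or foundry figures.

import math

# Simple Poisson yield model: fraction of good dies = exp(-defect_density * die_area).
def good_dies_per_wafer(wafer_area_cm2, die_area_cm2, defect_density):
    candidates = wafer_area_cm2 / die_area_cm2           # ignore edge loss for simplicity
    return candidates * math.exp(-defect_density * die_area_cm2)

wafer_area, d0 = 700.0, 0.5          # ~300 mm wafer, assumed defects per cm^2
quad_die, dual_die = 2.8, 1.4        # assumed die areas in cm^2

native_quads = good_dies_per_wafer(wafer_area, quad_die, d0)
mcm_quads = good_dies_per_wafer(wafer_area, dual_die, d0) / 2   # pair up tested-good duals

print(round(native_quads), round(mcm_quads))   # roughly 62 native quads vs 124 MCM quads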
"you do know they bin chips dont you?"
The durability of components used in those complex graphics cards negate their mathematically high probability to fail (merely because of the complexity of them). The probability is only mathematical, not real.
Binning chips and components and building more costly PCBs leads to higher costs, which leads to higher prices. I would like to know how high the fail rate of the PCBs themselves is in QA testing; each fail is money wasted. So the point is that NVIDIA's costs are higher, as are their prices.
Just like back in the day: the 9700/9800 Pro/XT were 256-bit cards, while the 9500 Pro and 9800 SE (which also came in a 256-bit variant) were 128-bit. Some old 9500s were just 9700/9800 Pro/XT boards with a BIOS to disable half the memory bus and/or pipes on the card (I have seen cards both ways). They also had native Pro versions that were 128-bit and FAR cheaper to make, with less complex PCBs.
Blah, that last bit was a bit of a ramble. The point is that with ATI's approach this time around, as in the past, they found a cheaper, more efficient way to do the same job.
GDDR5 on a 256-bit bus can have equivalent bandwidth to 512-bit (or wider) GDDR3. Sure, the initial price of GDDR5 was higher, but I would bet by now the cost has come down a good bit (a lot of companies are making it, after all). I was reading that NVIDIA could and likely will move to GDDR5; they didn't use GDDR4 because of cost and the low supply available (it also wasn't that much better than GDDR3).
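To illustrate the bandwidth-equivalence point with the stock HD 4870 and GTX 280 memory configurations (peak bandwidth is just bus width over 8 times the effective data rate), here's a minimal sketch:

# Peak memory bandwidth in GB/s = (bus width in bits / 8) * effective data rate in GT/s.
def bandwidth_gbs(bus_width_bits, effective_rate_gtps):
    return bus_width_bits / 8 * effective_rate_gtps

print(bandwidth_gbs(256, 3.6))    # Radeon HD 4870: 256-bit GDDR5 @ 3.6 GT/s -> 115.2 GB/s
print(bandwidth_gbs(512, 2.214))  # GeForce GTX 280: 512-bit GDDR3 @ ~2.2 GT/s -> ~141.7 GB/s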
Blah, you treat me like a moron, and you use "flawed logic" to try to get around the situation.
You used the QX9770 (who the hell's gonna pay that kind of price for a CPU?) as an example; we couldn't get real 1:1 numbers on that because nobody sane buys those things (over $1k for a CPU...).
An example that can show you what I mean would be the K10s: there are quad-cores and tri-cores.
The tri-cores are either weak or failed quads; AMD found a way to make money off flawed chips. They still function just fine, but due to the complexity of a NATIVE quad-core you ARE going to have higher fail rates than if you went with multiple dual-core dies in one package.
In that regard, Intel's method was smarter to a point (for the home-user market), since it was cheaper and had lower fail rates (they could always sell failed duals as lower single-core models). AMD even admitted that for the non-server market they should have done an Intel-style setup for the first run, then moved to native on the second batch.
And I have a feeling NVIDIA will end up moving to less complex PCBs with GDDR5 with their next real change (non-refresh).
We shall see. I just know that for price/performance I would take a 4800 over the GT200 or G92, that being if I hadn't bought the card I got before the 3800s were out :)
There's no real sense arguing just to argue. You are well entitled to your opinion, but rest assured, nobody is calling you, or treating you like, a moron.
I do see some very valid points you raise, FudFighter, but statements like "you do know they bin chips, don't you?" directed at btarunr do not strengthen your position.
Play it cool, man; we just wanna discuss cool new hardware :)
- First of all, we are arguing about the failure rates, not the price. Complex PCBs are undoubtedly more expensive, but you make them because they allow for cheaper parts elsewhere (i.e. GDDR3) or for improved performance. Which is better is something much more complicated than comparing the PCBs. What has happened with GT200 and RV770 doesn't prove anything on this matter either, first because GT200 "failed" (they couldn't clock it as high as they wanted*) and second because when comparing prices you have to take into account that prices fluctuate and are related to demand. I have said this like a million times, but had NVIDIA adopted GDDR5 the same day ATI did, the demand for GDDR5 would have been 3x what it has been**, WHEN suppliers couldn't even meet ATI's demand. That would have made prices skyrocket. It's easy to look at the market today and say 256-bit + GDDR5 is cheaper (I'm still not so sure), but what would have happened if GDDR5 prices were 50-80% higher? NVIDIA, because of the high market share it had, couldn't take the risk of finding that out. You can certainly thank NVIDIA for RV770's success in price/performance, don't doubt this for a moment.
- We have told you that failure rates are indeed higher (under the same conditions), but not as much as you make them out to be, nowhere near it really. AND that small increase is already taken into account BEFORE they manufacture them, and they take actions to make up for the difference (the conditions are then different). In fact, a big part (probably the biggest) of the increased cost of manufacturing complex PCBs comes from that. In the end, the final product is in practice as error-free as the simpler one, but at higher cost. As I've discussed above, those increased costs may not matter; it just depends on your strategy and the surrounding market.
- Don't mix apples with oranges. Microchip manufacturing and PCBs are totally different things. I can't think of any other manufactured product with failure rates as high as microchips; it's part of the complexity of the process of making them. In chips, a failure rate difference of 20% can easily occur between a simple design and a more complex one, but that's not the case with PCBs.
And also, don't take individual examples as proof of facts. I'm talking about K10. Just as GT200 failed, K10 failed, and although those failures are related to their complexity, the nature of the failure surpassed ANY expectations. Although related, the failure was not due to the complexity, but to issues with the manufacturing process. You can't take one example and make a point with it. What about Nehalem? It IS a native quad-core CPU, and they are not having the problems K10 had. Making a native quad is riskier than slapping two duals together, but the benefits of a native quad are evident. Even if failures are higher, a native quad is inherently much faster and makes up for the difference: if the native is 30% faster (just as an example), then to deliver the same performance you can afford to make each core on the native CPU 30% simpler. In the end it will depend on whether the architectural benefits can make up for the difference at manufacturing time. In K10 they didn't; in Nehalem they do; in Shanghai they do. The conclusion is evident: in practice, native quads are demonstrating themselves to be the better solution (a rough sketch of this trade-off follows after the footnotes below).
* IMO that must have been sorted out before launch, BUT not in time for when the specs were finalized. I can't otherwise understand how the chip they supposedly couldn't make run faster sits at the top of overclockers' lists, with average overclocks above 16% on stock, and with little increase in heat and power consumption as further evidence.
** NVIDIA sold two cards for every card ATI sold back then. They had to aim for the same market share AND the same number of sales. With both fighting for GDDR5 supply, that would have been impossible. A low supply of RAM would have hurt them more than what has actually happened. They lost market share, but their sales didn't suffer as much. ATI, on the other hand, desperately needed to gain market share no matter what; they needed a moral win, and how much that could cost them didn't really matter. They took the risk, and with a little help from NVIDIA, they succeeded.
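As a back-of-the-envelope illustration of the "faster architecture buys back yield" argument above, here's a minimal sketch reusing the same toy yield model; the 30% figure is the poster's example, and the die areas and defect density are my own illustrative assumptions.

import math

# Toy Poisson yield model again: yield = exp(-defect_density * die_area).
def die_yield(defect_density, die_area_cm2):
    return math.exp(-defect_density * die_area_cm2)

d0 = 0.5                       # assumed defects per cm^2
full_size_native = 2.8         # native quad with full-size cores (cm^2, illustrative)
simplified_native = 2.8 / 1.3  # cores ~30% simpler for the same overall performance

print(round(die_yield(d0, full_size_native), 3))   # ~0.247
print(round(die_yield(d0, simplified_native), 3))  # ~0.341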
I am late; maybe someone will read this.
To btarunr: FudFighter is speaking of production fail rates for parts which never make it to the shelf. The only impact on the consumer is the price tag, and that is because of everything he has explained, which I will not go into. He is not exactly saying that the X4 CPU or video card you purchased is more likely to fail because it is more complex.
To FudFighter: btarunr missed your point and was talking entirely about end-user fail rates. While you were mostly clear about what you were saying, your posts had little to do with what btarunr was posting, and you kept responding argumentatively when you two weren't even on the same subject.
Both of you are mostly correct; there was just a bit of a mix-up in communication. FudFighter is correct, and his point is valid here, when speaking of production fail rates causing higher production costs and lower profits for NVIDIA or any other company. If they aren't making money then they aren't gonna keep selling cards, so maximizing quality with a higher percentage of acceptable yield is an absolute must for any GPU maker or other manufacturer to stay competitive. This article was about the AMD and NVIDIA rivalry, so every bit of FudFighter's info applies here. Now, btarunr is correct in his point, as he was trying to get across that just because a component is more complex doesn't mean it is gonna have a higher fail rate. And he is right, it absolutely doesn't, from an end-user standpoint. If QA does its job correctly, there should be no problems with the end results.
I have no enemies, and I don't mean to make any. You are both right in your own way.
When you are about to create a more complex product, you use better, more expensive materials, you use better, slower manufacturing techniques, better and more workers take care of the end product, etc. All those things make the resulting product more expensive, BUT the failure rate is maintained at a level close to that of a simple product. How much you pay (or how much it makes sense to spend) to maintain that low level of failures depends on many things and is up to the company to choose. In the end it's a trade-off between paying more to have fewer failures, or "spending" that money on QA and on the materials/workforce of those failed products that will never get to the end of the pipeline.
When they do this, the center of the wafer is used for commercial markets such as Xeons, Opterons, FireGL, and so on. The next step out is the normal market: you and me.
As btarunr said, the farther out you get, the worse it gets, but that is also why some cheap chips will run at the same speed as the more expensive ones: they beat the high probability of failure when they were made.
Now you might think this is crazy... Let's just say an 8600 GT is the fastest-selling card on the market, and they run out of that bin for that card. What do they do, stop making them? Nope, they pull from the higher bin and continue production, because money is money. So if it was a really popular card, you could have a TOP-bin chip in a very low-end product, because they are selling them really fast and making their money back faster.
That really does happen with CPUs, and with video cards, binning what they sell.