Friday, November 7th 2008

AMD to Give RV770 a Refresh, G200b Counterattack Planned

The RV770 graphics processor changed AMD's fortunes in the graphics processor industry and put it back in the race for supremacy with its larger rival NVIDIA. The introduction of RV770-based products had a huge impact on the mid-range and high-end graphics card markets, and took NVIDIA by surprise. Jen-Hsun Huang, the CEO of NVIDIA, has been quoted as saying that the company underestimated its competitor's latest GPU, referring to the RV770. While the Radeon HD 4870 graphics accelerator competed directly with the 192-shader GeForce GTX 260, the subsequent introduction of a 216-shader variant saw it lose ground, leaving a doubling of memory size to carve out a newer SKU, the Radeon HD 4870 1GB. Performance benchmarks of this card from all over the media have been mixed, but they show that AMD isn't giving up on this chance to gain technological supremacy.

In Q4 2008, NVIDIA is expected to release two new graphics cards: the GeForce GTX 270 and GeForce GTX 290. The cards are based on NVIDIA's G200 refresh, the G200b, which incorporates a new manufacturing process to facilitate higher clock speeds, stepping up performance. This looks to threaten the market position of AMD's RV770, since it's already established that a G200 overclocked to its stable limits achieves more performance than an RV770 pushed to its limits. This leaves AMD with some worries, since it cannot afford to lose the wonderful market position its cash cow, the RV770, currently holds to an NVIDIA product that outperforms it by a significant margin in its price domain. The company's next-generation graphics processor will be the RV870, which still has some time left before it can be rushed in, since its introduction is tied to the constraints of foundry companies such as TSMC and the availability of the required manufacturing process (40nm silicon lithography). While TSMC takes its time working on that, there's a fair bit of time left for the RV770 to face NVIDIA in what, given the circumstances, looks like a lost battle. Is AMD going to do something about its flagship GPU? Will AMD make an effort to maintain its competitiveness before the next round of the battle for technological supremacy begins? The answer is tilting in favour of yes.


AMD will be giving the RV770 a refresh with the introduction of a new graphics processor, which could come out before the RV870. This graphics processor is codenamed RV790, while the possible new SKU name is kept under wraps for now. AMD will be looking to maintain the exact same manufacturing process as the RV770 and all its machinery, but it will be making changes to certain parts of the GPU that genuinely allow it to run at higher clock speeds, unleashing the best efficiency of all its 10 ALU clusters.

Déjà-vu? AMD has already attempted something similar with its big plans for the Super-RV770 GPU, where the objective was the same: to achieve higher clock speeds. But the approach wasn't right. All they did back then was put batches of RV770 through binning, pick the best-performing parts, and use them on premium SKUs with improved cooling. The attempt evidently wasn't very successful: no AMD partner was able to sell graphics cards that ran stable out of the box at the clock speeds they set out to achieve: in excess of 950 MHz.

This time around, the objective remains the same: to make the machinery of the RV770 operate at very high clock speeds, to bring out the best performance-efficiency of those 800 stream processors. But the approach will be different: to reengineer parts of the GPU to facilitate higher clock speeds. This aims to bring a boost to the shader compute power (SCP) of the GPU and push its performance. What gains are slated to be brought about? Significant and sufficient. Significant, with reference clock speeds increased beyond what the current RV770 can reach with overclocking, and sufficient to make it competitive with G200b-based products.
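As a rough sketch of what a clock bump means for shader compute power, theoretical single-precision throughput scales linearly with the core clock. The 950 MHz figure below is borrowed from the Super-RV770 target mentioned above, purely for illustration; it is not an announced RV790 spec:

```python
# Sketch: theoretical shader throughput scales linearly with core clock.
# The RV770/HD 4870 figures are reference specs; 950 MHz is only the
# illustrative target quoted for the earlier Super-RV770 effort.

def shader_gflops(stream_processors: int, core_mhz: int) -> float:
    """Each stream processor can issue one multiply-add (2 FLOPs) per clock."""
    return stream_processors * 2 * core_mhz / 1000.0

print(shader_gflops(800, 750))  # RV770 @ 750 MHz reference -> 1200.0 GFLOPS
print(shader_gflops(800, 950))  # @ 950 MHz (illustrative)  -> 1520.0 GFLOPS, ~27% more
```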

With this, AMD looks to keep up its momentum as it puts up great competition with NVIDIA, yielding great products from both camps at great prices, all in all propelling the fastest-growing segment in the PC hardware industry: graphics processors. This is going to be a merry Xmas [shopping season] for graphics card buyers.

92 Comments on AMD to Give RV770 a Refresh, G200b Counterattack Planned

#76
btarunr
Editor & Senior Moderator
FudFighter: Dunno, over the years I have seen a good number of bad traces, and as things get smaller and more complex I wouldn't expect that to disappear.

We used to fix cards with burnt/damaged/flawed surface traces with a conductive pen, then seal it with some clear fingernail polish :)
Did you know the wiring you see on either side of the PCB isn't the only wiring? A PCB is a layered thing, with each layer holding wiring...something conductive pens won't help with.

The whole "512-bit is more prone to damage" thing is just a mathematical probability. Of course, the vMem circuitry is a different issue, where more chips = more wear/tear, but it's understood that on a card with a 512-bit memory interface, the vMem is accordingly durable (high-grade components are used), something NVIDIA does on its reference G80 and G200 PCBs.
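To put "just a mathematical probability" into numbers, here's a minimal sketch; the per-trace defect probability and trace counts are arbitrary illustrative figures, not measured data for any real PCB:

```python
# Minimal sketch of the "more traces = higher failure odds" argument.
# The per-trace defect probability and trace counts are arbitrary
# illustrative numbers, not measured figures for any real PCB.

p = 1e-7              # assumed chance that any single trace is defective
traces_256bit = 4000  # assumed trace count for a simpler board
traces_512bit = 8000  # assumed trace count for a wider-bus board

def board_defect_prob(n_traces: int, p: float) -> float:
    """Probability that at least one of n independent traces is bad."""
    return 1 - (1 - p) ** n_traces

print(board_defect_prob(traces_256bit, p))  # ~0.0004
print(board_defect_prob(traces_512bit, p))  # ~0.0008 -- double, yet still tiny
```

Doubling the trace count roughly doubles the probability, but the absolute figure stays negligible, which is the point being made.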
#77
FudFighter
Yes, I know 6 and 8 layers are common, and I fully know you can't fix internal traces. I never said anything about fixing internal PCB traces...you act like I am a moron/uber noob...

Anything that's more complex will be more prone to problems. Look at Windows and PCs: they are more complex to deal with than a Mac. Mac hardware is limited, so you have fewer problems, but that doesn't make them better, and doesn't really make them worse either (the user base does that :P )

I think you get what I was talking about. I'm done trying to explain/justify/whatever; I'm gonna watch some Lost and look for some more stuff to dump on my Samsung players.
#78
btarunr
Editor & Senior Moderator
FudFighter: Anything that's more complex will be more prone to problems. Look at Windows and PCs: they are more complex to deal with than a Mac. Mac hardware is limited, so you have fewer problems, but that doesn't make them better, and doesn't really make them worse either (the user base does that :P )
...if there weren't any measures to make them accordingly durable, yes, but that's not the case. By that logic, a Core 2 Extreme QX9770 would be more prone to damage than an E5200 (again, notwithstanding overclocking), but that isn't the case, right? Probabilities always exist. Sometimes they're too small to manifest into anything real. I'm not doing anything other than having this discussion...thank you for it.
#79
FudFighter
No, but the rate of cores that make it into QX9770s vs. E5200s is far lower.

You do know they bin chips, don't you?

You do know that Intel's quads are still two Core 2 Duo dies on one package, don't you?

Do you know why Intel does this?

They do it because the fail/flaw rate of dual-core dies, due to their lower complexity, is lower than it would be with one solid die with 4 cores on it.

What I'm saying is your logic is flawed, that or you really don't know what you're talking about...
#80
btarunr
Editor & Senior Moderator
FudFighter: No, but the rate of cores that make it into QX9770s vs. E5200s is far lower. [...]
Ah...now use your logic against yourself:

"you do know they bin chips dont you?"

The durability of the components used in those complex graphics cards negates their mathematically higher probability of failure (higher merely because of their complexity). The probability is only mathematical, not real.
#81
[I.R.A]_FBi
FudFighter: No, but the rate of cores that make it into QX9770s vs. E5200s is far lower. [...]
why so much effort to try and show up bta?
#82
theJesus
[I.R.A]_FBi: why so much effort to try and show up bta?
Agreed, although the discussion is quite entertaining :p
#83
FudFighter
btarunr: Ah...now use your logic against yourself:

"You do know they bin chips, don't you?"

The durability of the components used in those complex graphics cards negates their mathematically higher probability of failure (higher merely because of their complexity). The probability is only mathematical, not real.
But it also raises cost. This is moving away from the original point of my post, and you and the other guy know it.

Binning chips and components and building more costly PCBs leads to higher costs, which leads to higher prices. I would like to know how high the fail rate of the PCBs themselves is in QA testing; each fail is $ wasted. So the point is that NVIDIA's costs are higher, as are their prices.

Just like back in the day: the 9700/9800 Pro/XT were 256-bit cards and the 9500 Pro/9800 SE (even on the 256-bit PCB) were 128-bit. Some old 9500s were just 9700/9800 Pro/XT boards with a BIOS to disable half the memory bus and/or pipes on the card (I have seen cards both ways); they also made native Pro versions that were 128-bit and FAR cheaper to make, with less complex PCBs.

Blah, that last bit was a bit of a ramble. Point being that ATI's way this time around, as in the past, is that they found a cheaper, more efficient way to do the same job.

GDDR5 on a 256-bit bus can have equivalent bandwidth to 512-bit GDDR3. Sure, the initial price of GDDR5 was higher, but I would bet by now the cost has come down a good bit (a lot of companies are making it, after all). I was reading that NVIDIA could, and likely will, move to GDDR5; they didn't use GDDR4 because of cost and the low supply available (it also wasn't that much better than GDDR3).


Blah, you treat me like a moron, and you use "flawed logic" to try and get around the situation.

You used the QX9770 (who the hell's gonna pay that kind of price for a CPU?) as an example. We couldn't get real 1:1 numbers on that because nobody sane buys those things (over $1k for a CPU...).

An example that can show you what I mean would be the K10s: there are quad-cores and tri-cores.

The tri-cores are either weak or failed quads; AMD found a way to make money off flawed chips. They still function just fine, but due to the complexity of a NATIVE quad-core, you ARE going to have higher fail rates than if you went with two dual-core dies on one package.

In that regard, Intel's method was smarter to a point (for the home-user market), since it was cheaper and had lower fail rates (they could always sell failed duals as lower single-core models). AMD even admitted that for the non-server market they should have done an Intel-style setup for the first run, then moved to native on the second batch.

And I have a feeling NVIDIA will end up moving to less complex PCBs with GDDR5 with their next real change (non-refresh).

We shall see. I just know that for price/performance I would take a 4800 over the GT200 or G92, that is, if I hadn't bought the card I got before the 3800s were out :)
#84
btarunr
Editor & Senior Moderator
FudFighter: But it also raises cost. This is moving away from the original point of my post, and you and the other guy know it.
Well, that's the whole reason why it's priced that way and caters to that segment of the market, right?
FudFighter: Binning chips and components and building more costly PCBs leads to higher costs, which leads to higher prices. I would like to know how high the fail rate of the PCBs themselves is in QA testing; each fail is $ wasted. So the point is that NVIDIA's costs are higher, as are their prices.
Apparently NVIDIA disagrees with you. The PCB has very little role to play in a graphics card's failure. It's always had something to do with the components, or the overclocker's acts. The quality of the components used makes up for the very slight probability of the PCB being the COD for a graphics card...in effect, the PCB is the last thing you'd point your finger at.
FudFighter: GDDR5 on a 256-bit bus can have equivalent bandwidth to 512-bit GDDR3. Sure, the initial price of GDDR5 was higher, but I would bet by now the cost has come down a good bit (a lot of companies are making it, after all). I was reading that NVIDIA could, and likely will, move to GDDR5; they didn't use GDDR4 because of cost and the low supply available (it also wasn't that much better than GDDR3).
I agree x-bit GDDR5 = 2x-bit GDDR3, but you have to agree that G200 PCBs have been in development for a long time; I'd say right after the G80 launch, after NVIDIA started work on 65nm GPUs. Just because the product happened to launch just before another one with GDDR5 came about, you can't say "they should have used GDDR5". Whether they cut costs or not, you end up paying the same; they make you. Don't you get a GeForce GTX 260 in the same price range as an HD 4870? So don't come to the conclusion that if they manage to cut costs they'll hand the benefit over to you by making you pay less; they'll benefit themselves.
FudFighter: Blah, you treat me like a moron, and you use "flawed logic" to try and get around the situation.
Whatever you're accusing others of is apparently all in your mind.
FudFighter: You used the QX9770 (who the hell's gonna pay that kind of price for a CPU?) as an example. We couldn't get real 1:1 numbers on that because nobody sane buys those things (over $1k for a CPU...).
Conclusions...conclusions. It's called "premium". People who can buy will buy, however smart/dumb they are. Catering to that very market are the $1000~1500 CPUs Intel sells, something AMD did in its day too.
#85
Hayder_Master
W1zzard: 256-bit -> 512-bit does the same thing that GDDR3 -> GDDR5 does: double the memory bandwidth. Apparently RV770 does not need that much bandwidth, or you would see a much bigger difference between the 4850 and the 4870.
Ohh, interesting, thanks W1zzard. Sure, you're right. But let's say we had a 4870 with a 512-bit bus, just like in the 4870 X2's case: we'd see high bandwidth in GPU-Z, and sure, it would give great performance, not because of a large memory size but because of high memory bandwidth. Am I right, or what is your tip?
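For reference, W1zzard's equivalence can be checked with a quick back-of-the-envelope calculation using the reference HD 4850/HD 4870 memory clocks (bandwidth = bus width in bytes × effective data rate):

```python
# Back-of-the-envelope check of the bandwidth equivalence above:
# bandwidth (GB/s) = bus width in bytes * effective data rate (MT/s) / 1000.
# Clocks are the reference HD 4850 / HD 4870 memory specs.

def bandwidth_gbs(bus_bits: int, effective_mtps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return (bus_bits / 8) * effective_mtps / 1000

hd4850 = bandwidth_gbs(256, 1986)  # GDDR3: 993 MHz x 2 transfers/clock
hd4870 = bandwidth_gbs(256, 3600)  # GDDR5: 900 MHz x 4 transfers/clock

print(f"HD 4850 (256-bit GDDR3): {hd4850:.1f} GB/s")  # 63.6 GB/s
print(f"HD 4870 (256-bit GDDR5): {hd4870:.1f} GB/s")  # 115.2 GB/s
```

Same 256-bit bus, nearly double the bandwidth from GDDR5 alone, which is exactly the equivalence being described.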
#86
theJesus
FudFighter, how is btarunr treating you like a moron? I think you're being way too defensive; all bta is doing is trying to debate with you about some of the things you've said, because he disagrees.
#87
wolf
Better Than Native
Agreed. btarunr has used nothing but logic, experience, and knowledge as the basis for what he has written, whether or not it's 100% factual.

There's no real sense arguing just to argue. You are well entitled to your opinion, but rest assured, nobody is calling you, or treating you like, a moron.

I do see some very valid points you raise, FudFighter, but statements like "you do know they bin chips, don't you?" directed at btarunr do not strengthen your position.

Play it cool, man; we just wanna discuss cool new hardware :)
#88
DarkMatter
FudFighter: [...]
You are mixing things up.

- First of all, we are arguing about the failure rates, not the price. Complex PCBs are undoubtedly more expensive, but you make them because they allow for cheaper parts in other places (i.e. GDDR3) or for improved performance. Which one is better is something much more complicated than comparing the PCBs. What has happened with GT200 and RV770 doesn't prove anything on this matter either, first because GT200 "failed" (they couldn't clock it as high as they wanted*), and second because when comparing prices you have to take into account that prices fluctuate and are related to demand. I have said this like a million times, but had Nvidia adopted GDDR5 the same day Ati did, the demand for GDDR5 would have been 3x what it has been**, at a time WHEN suppliers couldn't even meet Ati's demand. That would have made prices skyrocket. It's easy to look at the market today and say 256-bit + GDDR5 is cheaper (I'm still not so sure), but what would have happened if GDDR5 prices were 50-80% higher? Nvidia, because of the high market share they had, couldn't take the risk of finding that out. You can certainly thank Nvidia for RV770's success in price/performance, don't doubt this for a moment.

- We have told you that failure rates are indeed higher too (under the same conditions), but not as much as you make them out to be, nowhere near it really. AND that small increase is already taken into account BEFORE they manufacture them, and they take action to make up for that difference (the conditions are then different). In fact, a big part (probably the biggest) of the increased cost of manufacturing complex PCBs is because of that. In the end the final product is, in practice, as error-free as the simpler one, but at a higher cost. As I've discussed above, those increased costs might not matter; it just depends on your strategy and the surrounding market.

- Don't compare apples to oranges. Microchip manufacturing and PCBs are totally different things. I can't think of any other manufactured product with failure rates as high as microchips; it's part of the complexity of the process of making them. In chips, a failure-rate difference of 20% can easily happen between a simple design and a more complex one, but that's not the case with PCBs.

And also, don't take individual examples as proof of facts. I'm talking about K10. Just as GT200 "failed", K10 failed, and although those failures are related to their complexity, the nature of the failure surpassed ANY expectations. Although related, the failure was not due to the complexity but to issues with the manufacturing process. You can't take one example and make a point with it. What happens with Nehalem? It IS a native quad-core CPU, and they are not having the problems K10 had. Making a native quad is riskier than slapping two duals together, but the benefits of a native quad are evident. Even if failures are higher, a native quad is inherently much faster and makes up for the difference: if the native is 30% faster (just as an example), then to deliver the same performance you can afford to make each core on the native CPU 30% simpler. In the end it will depend on whether the architectural benefits can make up for the difference at manufacturing time. In K10 they didn't; in Nehalem and Shanghai they do. The conclusion is evident: in practice, native quads are demonstrating themselves to be the better solution.

* IMO that had been sorted out before launch, BUT not in time for when the specs were finalized. I can't otherwise understand how the chip they supposedly couldn't make run faster is on overclockers' top lists, with average overclocks above 16% on stock, and with little increase in heat and power consumption as a further sign...

** Nvidia sold 2 cards for every card Ati sold back then. They had to aim for the same market share AND the same number of sales. With both fighting for GDDR5 supply, that would have been impossible. A low supply of RAM would have hurt them more than what actually happened: they lost market share, but their sales didn't suffer as much. Ati, on the other hand, desperately needed to gain market share no matter what; they needed a moral win, and how much it could cost them didn't really matter. They took the risk, and with a little help from Nvidia, they succeeded.
#89
ocre
Everyone here is getting mixed up

I am late; maybe someone will read this.

To btarunr: FudFighter is speaking of production fail rates, parts which never make it to the shelf. The only impact on the consumer is the price tag, and that is because of everything he has explained, which I will not go into. He is not exactly saying that the quad-core CPU or video card you purchased is more likely to fail because it is more complex.

To FudFighter: btarunr missed your point and was talking entirely about end-user fail rates. While you were mostly clear about what you were saying, your post had nothing to do with what btarunr was posting, and you kept responding argumentatively; you two weren't even on the same subject.

Both of you are mostly correct; there was just a bit of a mix-up in communication. FudFighter is correct, and his point is valid here, when speaking of production fail rates causing higher production costs and lower profits for Nvidia or any company. If they aren't making money, then they aren't gonna keep selling cards. So maximizing quality with a higher percentage of acceptable yield is an absolute must for any GPU (or other) manufacturer to stay competitive. This article was about the AMD and Nvidia rivalry, so every bit of FudFighter's info applies to this. Now, btarunr is correct in his point, as he was trying to get across that just because a component is more complex doesn't mean it is gonna have a higher fail rate. And he is right, it absolutely doesn't, from an end-user standpoint. If QA does its job correctly, there should be no problems with the end results.

I have no enemies, and I don't mean to make any. You are both right in your own way.
#90
DarkMatter
ocre: I am late; maybe someone will read this. [...] You are both right in your own way.
Yes and no. At manufacturing time, a more complex product does not necessarily have a higher failure rate. Not to the point of affecting profitability, that's for sure. When you manufacture anything, you try to do it in the absolutely cheapest way, as long as it doesn't affect quality. That means using the cheapest materials that meet your requirements, taking just enough care that the product is well made and no more, etc. That's why the process of creating the simple thing is cheaper and the product itself ends up cheaper.

When you set out to create a more complex product, you use better, more expensive materials; you use better, slower manufacturing techniques; better and more workers look after the end product, etc. All those things make the resulting product more expensive, BUT the failure rate is maintained at a level close to that of a simple product. How much you pay (or how much it makes sense to spend) to maintain that low level of failures depends on many things and is up to the company to choose. In the end, it's a trade-off between paying more to have fewer failures, or "spending" that money on QA and on the materials/workforce of those failed products that will never reach the end of the pipeline.
#91
vagxtr
Zerofool: I actually doubt we'll see RV790 this year. The latest news about GT200b talks about yet another delay, to February '09 (The Inquirer). So RV790 cards will probably come out then (or whenever the NV cards do); they don't want to compete against their own cards now :).

Yes, most likely. These cards are already taped out. The best evidence is that the last remaining working RV770 chips were spun off into HD 4830 incarnations last month. So that rumor is more than a rumor; the only thing a sooner introduction depends on is buyer momentum. I'd say we'll see it by Christmas, or at least shortages of the same :laugh:
3dchipset: I'm just curious if they will bring them out this year. So far this is looking like the worst retail shopping season in 10 years due to the economy. I honestly would be shocked to see a new offering this year.
Well, it's not all about economic momentum. This is yet another revision (YAR) of RV770, and they can call it whatever they like, just like NV first makes the GT200b and renames it GT206 :D It's just a marketing gimmick; they need something to stay competitive, and the 55nm technology allows them improvements.... but it's more likely we'll see 800 SPs @ 40nm; it's more cost-effective, and 150 MHz+ is guaranteed :toast:
W1zzard: I doubt ATI will have adjustable shader clocks any time soon. This would be a HUGE design change.

I expect RV790 to be drop-in compatible with RV770. That means you could unsolder the GPU from an HD 4850/4870, solder on an RV790, and the card would work without any other change on the hardware or software side.
Yeah, yeah, they all announce drop-in compatibility, but AFAIR the only true drop-in was KT266A -> KT333, where boards didn't need a redesign; even the old-school nForce2 Ultra needed a new board with some tiny changes when the new Ultra 400, on the same 0.18µ process, came out. All in all, it's not all in the proclamations, and there will only be pinout compatibility, I guess :o
#92
DaMulta
My stars went supernova
I would like to point something out about binning chips, as talked about above...

When they do this, the center of the wafer is used for commercial markets such as Xeons, Opterons, FireGL, and so on. The next step out is the normal market, you and me.

As btarunr said:
"their mathematically higher probability of failure"
So the farther out you get, the worse it gets, but that is also why some cheap chips will run at the same speed as the more expensive ones: they beat the high probability of failure when they were made.

Now you might think this is crazy...let's just say an 8600 GT is the fastest-selling card on the market, and they run out of that bin for that card. What do they do, stop making them? Nope, they pull from the higher-up bin and continue on with production, because money is money. So if it was a really popular card, you could have a TOP-bin chip in a very low-end product, because they are selling them really fast and making their money back faster.

That really does happen with CPUs, and with video cards, binning what they sell.