# AMD to Give RV770 a Refresh, G200b Counterattack Planned



## btarunr (Nov 7, 2008)

The RV770 graphics processor changed AMD's fortunes in the graphics processor industry and put the company back in the race for supremacy with its larger rival, NVIDIA. The introduction of RV770-based products had a huge impact on the mid-range and high-end graphics card markets, taking NVIDIA by surprise; NVIDIA CEO Jen-Hsun Huang has been quoted as saying the company underestimated its competitor's latest GPU, referring to the RV770. While the Radeon HD 4870 graphics accelerator provided direct competition to the 192-shader GeForce GTX 260, the subsequent introduction of a 216-shader variant saw it lose ground, leaving a doubling of memory size to carve out a newer SKU, the Radeon HD 4870 1GB. Performance benchmarks of this card from all over the media have been mixed, but they show that AMD isn't giving up this chance at technological supremacy.

In Q4 2008, NVIDIA is expected to release two new graphics cards: the GeForce GTX 270 and GeForce GTX 290. The cards are based on the G200b, NVIDIA's refresh of the G200, which incorporates a new manufacturing technology to facilitate higher clock speeds and step up performance. This threatens the market position of AMD's RV770, since it is already established that a G200 overclocked to its stable limits outperforms an RV770 pushed to its limits. That leaves AMD worried: it cannot afford to lose the strong market position of its cash cow, the RV770, to an NVIDIA product that outperforms it by a significant margin in its price segment. The company's next-generation graphics processor, the RV870, still has some time left before it can be rushed in, since its introduction is tied to the constraints of foundry companies such as TSMC and their readiness with the required manufacturing process, 40 nm silicon lithography. While TSMC takes its time working on that, there is a fair stretch in which the RV770 must face NVIDIA alone, and given the circumstances, that looks like a lost battle. Is AMD going to do something about its flagship GPU? Will AMD make an effort to stay competitive before the next round of the battle for technological supremacy begins? The answer is tilting in favour of yes.







AMD will be giving the RV770 a refresh with a new graphics processor that could come out before the RV870. This graphics processor is codenamed RV790, while the new SKU name is kept under wraps for now. AMD would retain the exact same manufacturing process and machinery of the RV770, but rework certain parts of the GPU so that it can genuinely run at higher clock speeds, unleashing the best efficiency of all ten of its ALU clusters.

Déjà vu? AMD has already attempted something similar with its big plans for the Super-RV770 GPU, where the objective was the same: higher clock speeds. The approach, however, wasn't right. All AMD did back then was put batches of RV770 through binning, pick the best-performing parts, and use them on premium SKUs with improved cooling. The attempt evidently wasn't very successful: no AMD partner was able to sell graphics cards that ran stable out of the box at the clock speeds they set out to achieve, in excess of 950 MHz.

This time around, the objective remains the same: to make the machinery of the RV770 operate at very high clock speeds and bring out the best performance efficiency of its 800 stream processors. The approach, though, is different: to re-engineer parts of the GPU to facilitate those higher clocks. This aims to boost the shader compute power (SCP) of the GPU and push its performance. What gains are slated? Significant and sufficient: significant, with reference clock speeds raised beyond what the current RV770 can reach even when overclocked, and sufficient to make it competitive with G200b-based products.
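A rough sketch of what higher clocks buy in shader compute power (a minimal calculation assuming each of the 800 stream processors retires one multiply-add, i.e. two FLOPs, per clock; 750 MHz is the stock HD 4870 core clock, and 950 MHz is the Super-RV770 target mentioned above):

```python
def shader_tflops(stream_processors, core_mhz):
    """Peak shader compute power: SPs x 2 FLOPs per clock x clock rate."""
    return stream_processors * 2 * core_mhz * 1e6 / 1e12

stock   = shader_tflops(800, 750)  # HD 4870 at reference clocks: 1.2 TFLOPS
refresh = shader_tflops(800, 950)  # at the 950 MHz target: ~1.52 TFLOPS
```

Since the shader count stays fixed, compute power scales linearly with clock speed, which is why the refresh focuses entirely on clocks.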

With this, AMD looks to keep its momentum as it puts up great competition with NVIDIA, yielding great products from both camps at great prices, all in all propelling the fastest-growing segment of the PC hardware industry: graphics processors. This is going to be a merry Xmas shopping season for graphics card buyers.

*View at TechPowerUp Main Site*


----------



## [I.R.A]_FBi (Nov 7, 2008)

another one? nice


----------



## wolf2009 (Nov 7, 2008)

Why so many news threads about same thing in the last few days ?

You guys are doing more refreshes on your news than ATI is going to do on their RV770 GPU


----------



## btarunr (Nov 7, 2008)

wolf2009 said:


> Why so many news threads about same thing in the last few days ?



Find me another news thread about RV790. This is the world's first news thread on RV790.


----------



## jbunch07 (Nov 7, 2008)

Lovely, great article Bta!

And good call on ATi's part, Way to step it up, hope it works! 
Cant wait to see what the new clock speeds will be.


----------



## KainXS (Nov 7, 2008)

more than likely a die shrink and decreased prices


----------



## btarunr (Nov 7, 2008)

I can haz digg pl? http://digg.com/hardware/AMD_to_Give_RV770_a_Refresh_G200b_Counterattack_Planned


----------



## GFC (Nov 7, 2008)

All i can say is.. YEAY! I can't wait till i see 270GTX vs R790 review )


----------



## jbunch07 (Nov 7, 2008)

I dugg it!


----------



## btarunr (Nov 7, 2008)

KainXS said:


> more than likely a die shrink and decreased prices



Not a die-shrink, the 55nm process stays.


----------



## [I.R.A]_FBi (Nov 7, 2008)

dugg


----------



## petepete (Nov 7, 2008)

Very interesting article. Can't wait to see what happens in the next few months


----------



## H82LUZ73 (Nov 7, 2008)

this could be the shader clock and GPU clock increase that was left out of the RV770`s.......


----------



## erocker (Nov 7, 2008)

I'm hoping for adjustable shader clocks.  I digg the article bta!


----------



## jbunch07 (Nov 7, 2008)

erocker said:


> I'm hoping for adjustable shader clocks.  I digg the article bta!



Now that's what I'm talking about! I was hopping for that on the 4 series.


----------



## DaC (Nov 7, 2008)

I've registred on Digg just to Digg this on..... Dugg


----------



## PCpraiser100 (Nov 7, 2008)

Any samples and betas of the core and DX11?????


----------



## btarunr (Nov 7, 2008)

PCpraiser100 said:


> Any samples and betas of the core and DX11?????



No, it's just a RV770 refresh..DX 10.1


----------



## imperialreign (Nov 7, 2008)

I wonder how well the new BIOS for these GPUs would work with current R770s


----------



## DarkMatter (Nov 7, 2008)

imperialreign said:


> I wonder how well the new BIOS for these GPUs would work with current R770s



Not very well:



> What gains are slated to be brought about? Significant and sufficient. Significant, with the increase of reference clock-speeds beyond those of what the current RV770 can reach with overclocking, and sufficient for making it competitive with G200b based products.


----------



## JrRacinFan (Nov 7, 2008)

I say AMD should do an HD4890 with 960SP.


----------



## WarEagleAU (Nov 7, 2008)

Duggified. Thanks for this BTA. This makes my wait on the 4XXX series more worth it now. I hope the refresh will work much like nvidias does. To be honest, with GDDR5 getting out performed by GDDR3, makes me wonder if ATi isnt fully utilizing their memory...


----------



## rizla1 (Nov 8, 2008)

WarEagleAU said:


> Duggified. Thanks for this BTA. This makes my wait on the 4XXX series more worth it now. I hope the refresh will work much like nvidias does. To be honest, with GDDR5 getting out performed by GDDR3, makes me wonder if ATi isnt fully utilizing their memory...



the reason there gpu's are  out performed is cause of the  256 bit  mem controller . and the   nvidia 260 - 448 bit or somthing  and 280 gtx has  500 bit i think.

imagine a 4870 WITH A 500 BIT  MEMORY INTERFACE  it would be crazy fast , + 200 GB/s .


----------



## techie81 (Nov 8, 2008)

Dugg! I hope to still see competitive prices!


----------



## insider (Nov 8, 2008)

At last this is what you call real competition between AMD and nVidia, the global economy is fubar'd for the next 2-3+ years they are gonna have to fight hard on declining sales...


----------



## zithe (Nov 8, 2008)

YAY! A 4900 SERIES!? Or will it be 4890? XD


----------



## btarunr (Nov 8, 2008)

zithe said:


> YAY! A 4900 SERIES!? Or will it be 4890? XD



HD 4860/4890 suits the best guesswork, though it's not known about the naming as of now.


----------



## aj28 (Nov 8, 2008)

rizla1 said:


> the reason there gpu's are  out performed is cause of the  256 bit  mem controller . and the   nvidia 260 - 448 bit or somthing  and 280 gtx has  500 bit i think.
> 
> imagine a 4870 WITH A 500 BIT  MEMORY INTERFACE  it would be crazy fast , + 200 GB/s .



448/512-bit, yes, but that's a pretty major redesign you're talking right there, and wouldn't be real likely until the RV870 generation. Plus with the GDDR5 advantage going for ATi right now, that's not an overly necessary modification to make. Of course by the same token, they have all the more to gain by going to a wider bus... In any case, more than likely we'll just see higher clock speeds, some price drops, and maybe GDDR5 standard on the high-end.

Keep in mind, if this generation has taught us anything it's that a company doesn't need to have the fastest card out there, they just need to maintain the best price to performance ratio on the market. Having the fastest card always helps, but hey, there's always CrossFire =D


----------



## eidairaman1 (Nov 8, 2008)

a Minor Improvement, sort of like the R300 Gen, should be a good set of cards and possibly different cooling designs coming down the pipeline


----------



## theJesus (Nov 8, 2008)

I'm really excited for the new NV cards .  

I wish AMD/ATI would use their old naming scheme where numbers were in increments of either 100 or 50.  Like x1900xt, x1950xt, etc.  If they at least keep their current xx50 and xx70 scheme and don't add any other increments, I'll be happy, cuz at least it makes sense to use xx70 due to RV770.  idk, just stupid little things that bother me, plus I'm tired 

In any case, I look forward to some very competitive pricing this xmas


----------



## tkpenalty (Nov 8, 2008)

Isnt the RV870 going to use high-K instead of SOI?


----------



## Zerofool (Nov 8, 2008)

I actually doubt we'll see RV790 this year. Latest news about GT200b talk about yet another delay - to February 09 (the inquirer). So probably RV790 cards will come out then (or whenever NV cards do), they don't want to compete against their own cards now .



zithe said:


> Probably after the GT200b release just to attract some attention away. =P



Yes, most likely.


----------



## zithe (Nov 8, 2008)

Zerofool said:


> I actually doubt we'll see RV790 this year. Latest news about GT200b talk about yet another delay - to February 09 (the inquirer). So probably RV790 cards will come out then (or whenever NV cards do), they don't want to compete against their own cards .



Probably after the GT200b release just to attract some attention away. =P


----------



## lemonadesoda (Nov 8, 2008)

Dugg.

Hmm, interesting. From X1*9*xx to HD2*9*xx to HD 3*8*xx and 4*8*xx.  That naming convention leaves room for a 39xx and 49xx series. Only a 39xx made no sense since ATI was getting deeply pwned by NVIDIA so they had to jump straight to 48xx. Perhaps that's where the whole shader miscount came from? Perhaps on an EARLY ROADMAP there was a 39xx with 480 shaders after all.


----------



## DarkMatter (Nov 8, 2008)

lemonadesoda said:


> Dugg.
> 
> Hmm, interesting. From X1*9*xx to HD2*9*xx to HD 3*8*xx and 4*8*xx.  That naming convention leaves room for a 39xx and 49xx series. Only a 39xx made no sense since ATI was getting deeply pwned by NVIDIA so they had to jump straight to 48xx. *Perhaps that's where the whole shader miscount came from?* Perhaps on an EARLY ROADMAP there was a 39xx with 480 shaders after all.



Nope I don't think so. I saw many partners advertising the HD4xxx cards in their sites with 480 SPs until the launch day, one day earlier in fact. After that they corrected it the launch day. IMO there's no way they could get confused in that manner and only knew the true number the launch day. It was simply a distraction move by Ati.


----------



## btarunr (Nov 8, 2008)

Something close to that happened with the Radeon HD 4830. At first it was 480 SP / 192bit mem, then 480 SP / 256bit mem, finally 640 SP / 256bit.



Zerofool said:


> I actually doubt we'll see RV790 this year. Latest news about GT200b talk about yet another delay - to February 09 (the inquirer). So probably RV790 cards will come out then (or whenever NV cards do), they don't want to compete against their own cards now .



Whenever that does come out, this does. It's just that AMD doesn't want to be thrown way back by a GTX 270/290 and the subsequent GX2. So it could respond with RV790, RV790 X2, sideport enabled X2 boards (though gains (of sideport for X2) are predicted to be insignificant ATM).


----------



## 3dchipset (Nov 9, 2008)

Is their any word from IHV's about this? From what I've seen in the past from NVIDIA and ATI that we will not see any new cards for the rest of the year. With how the economy is, I can't really see NVIDIA and ATI rush a card out with minor improvements. Especially from ATI. ATI hasn't really made any noise on the 4850X2 at all.

ATI has to concentrate more on "wattage concerns" more then trying to add a couple of more frames to the mix. They already own the "single card" crown.

NVIDIA is in a state of flux. The conference call with NVIDIA did share some light on an actual revision of the GT200. But we have to understand that their Quarterly's are different then the calendar quarters. I will bet anything that we won't see any new cards until February of next year.

Unless IHV's are saying something otherwise, I don't believe any new card this year. Just look at the track records of both companies.


----------



## eidairaman1 (Nov 9, 2008)

lemonadesoda said:


> Dugg.
> 
> Hmm, interesting. From X1*9*xx to HD2*9*xx to HD 3*8*xx and 4*8*xx.  That naming convention leaves room for a 39xx and 49xx series. Only a 39xx made no sense since ATI was getting deeply pwned by NVIDIA so they had to jump straight to 48xx. Perhaps that's where the whole shader miscount came from? Perhaps on an EARLY ROADMAP there was a 39xx with 480 shaders after all.



That naming Convention leaves more Room say, 4850, 4855, 4857 etc.


----------



## Swansen (Nov 9, 2008)

*really???*

does any of this bother anyone other than me ??  What is up lately with AMD (ATI) and Nvidia coming out with a new card every month, just to gain tiny amounts of performance??  What happened to generations ??


----------



## ShadowFold (Nov 9, 2008)

theJesus said:


> I'm really excited for the new NV cards .
> 
> I wish AMD/ATI would use their old naming scheme where numbers were in increments of either 100 or 50.  Like x1900xt, x1950xt, etc.  If they at least keep their current xx50 and xx70 scheme and don't add any other increments, I'll be happy, cuz at least it makes sense to use xx70 due to RV770.  idk, just stupid little things that bother me, plus I'm tired
> 
> In any case, I look forward to some very competitive pricing this xmas



Yea I wish they would have kept it too. HD 3800's could become X2950XT and X2950PRO and HD 4800's X3950XT and X3950PRO.. Those look alot cooler than HD 3870/HD 4850 to me tbh..


----------



## eidairaman1 (Nov 9, 2008)

Swansen said:


> does any of this bother anyone other than me ??  What is up lately with AMD (ATI) and Nvidia coming out with a new card every month, just to gain tiny amounts of performance??  What happened to generations ??



they have been doing this since the Radeon X series and the GF6 series


----------



## eidairaman1 (Nov 9, 2008)

That is because the 4850X2 is a special card from Sapphire, only sapphire researched the design ideas.


3dchipset said:


> Is their any word from IHV's about this? From what I've seen in the past from NVIDIA and ATI that we will not see any new cards for the rest of the year. With how the economy is, I can't really see NVIDIA and ATI rush a card out with minor improvements. Especially from ATI. ATI hasn't really made any noise on the 4850X2 at all.
> 
> ATI has to concentrate more on "wattage concerns" more then trying to add a couple of more frames to the mix. They already own the "single card" crown.
> 
> ...


----------



## FudFighter (Nov 9, 2008)

ShadowFold said:


> Yea I wish they would have kept it too. HD 3800's could become X2950XT and X2950PRO and HD 4800's X3950XT and X3950PRO.. Those look alot cooler than HD 3870/HD 4850 to me tbh..



they wanted to devorce the 3800 from the 2900 because the 2900 had such a bad rep due to heat and power consumption, basickly it made sence for them to go up a model in this case, it wasnt just a die shrink, they also improoved the avivo support as well as some other componants.

we shal see how this situation flushes out, i can see a refresh for christmas honestly, just to hopefully catch some quick $ that or some kinda minor price drops again, to get higher sales on the cards for christmas.

my 8800gts 512 is plenty for now, probbly get something 4870 or better eventuly.


----------



## btarunr (Nov 9, 2008)

3dchipset said:


> Is their any word from IHV's about this? From what I've seen in the past from NVIDIA and ATI that we will not see any new cards for the rest of the year. With how the economy is, I can't really see NVIDIA and ATI rush a card out with minor improvements. Especially from ATI. ATI hasn't really made any noise on the 4850X2 at all.



Yes. And yes, RV790 is slated to follow GT206/G200b release, though it was intended to stay under the wraps. AMD has been very secretive lately.



3dchipset said:


> ATI has to concentrate more on "wattage concerns" more then trying to add a couple of more frames to the mix. They already own the "single card" crown.



The kind of changes AMD is planning will let the GPU run at high clock speeds without the thermal envelope an RV770 would hypothetically have at those speeds.



3dchipset said:


> NVIDIA is in a state of flux. The conference call with NVIDIA did share some light on an actual revision of the GT200. But we have to understand that their Quarterly's are different then the calendar quarters. I will bet anything that we won't see any new cards until February of next year.



NV knows it won't be able to have a greater impact on the market than it's having now, during the crucial Xmas shopping season, and the stakes are extremely high. There is an indication of GT206 launch within this year, as GT206 has already had its share of delays due to shader domain problems.


----------



## 3dchipset (Nov 9, 2008)

I'm just curious if they will bring them out this year. So far this is looking like the worse retail shopping in 10 years due to the economy. I honestly would be shocked to see a new offering this year.

If vendors like BFG, XFX, and the likes are the ones talking about it then it could be plausable, but they should start buying up the packages soon (GPU/PCB/MEMORY) If they want to get their cards out on the market for the X-mas season. It takes at least 2 to 3 weeks for design, logo, retail box design, etc... So if they don't have the cards by Thanksgiving, I'm calling a "Nope, not this year" comment.


----------



## btarunr (Nov 9, 2008)

That's right, we've to wait and see.


----------



## Hayder_Master (Nov 9, 2008)

thanx for this btarunr rally very interesting news and the first time i read about  , that's good from ati look like reaction from amd on new nvidia gtx270 and gtx290 , im want take an ati card so im think wait for new one , i hope it come with 512bit to make gddr5 really useful


----------



## Wile E (Nov 9, 2008)

FudFighter said:


> they wanted to devorce the 3800 from the 2900 because the 2900 had such a bad rep due to heat and power consumption, basickly it made sence for them to go up a model in this case, it wasnt just a die shrink, they also improoved the avivo support as well as some other componants.


Not so much Avivo improvement, but the inclusion of an actual UVD, which 2900 doesn't have.

Also, don't forget about the 2900's poor AA performance.


----------



## W1zzard (Nov 9, 2008)

erocker said:


> I'm hoping for adjustable shader clocks.  I digg the article bta!



i doubt ati will have adjustable shader clocks any time soon. this would be a HUGE design change.



imperialreign said:


> I wonder how well the new BIOS for these GPUs would work with current R770s



i expect rv790 to be drop in compatible with rv770. that means you could unsolder the gpu from a hd 4850/4870, solder on rv790 and the card would work without any other change on hardware or software side.


----------



## eidairaman1 (Nov 9, 2008)

hayder.master said:


> thanx for this btarunr rally very interesting news and the first time i read about  , that's good from ati look like reaction from amd on new nvidia gtx270 and gtx290 , im want take an ati card so im think wait for new one , i hope it come with 512bit to make gddr5 really useful



i expect the 5000 line will possibly have that, or even a odd bit bus, possibly even adj sp.


----------



## FudFighter (Nov 9, 2008)

Wile E said:


> Not so much Avivo improvement, but the inclusion of an actual UVD, which 2900 doesn't have.
> 
> Also, don't forget about the 2900's poor AA performance.



duno m8, i have seen some reviews that showed the performance of 3800 cards being quite notably better using apps that can use avivo such as powerdvd,windvd and the like where avivo can take load off the cpu running the video prosessing almost fully on the gpu.

I am still waiting for some mainstream apps/codecs to use nvidia and ati gpu's.


----------



## Frederik S (Nov 9, 2008)

Very nice article btarunr . Looking forward to seeing how they perform.


----------



## W1zzard (Nov 9, 2008)

hayder.master said:


> thanx for this btarunr rally very interesting news and the first time i read about  , that's good from ati look like reaction from amd on new nvidia gtx270 and gtx290 , im want take an ati card so im think wait for new one , i hope it come with 512bit to make gddr5 really useful



256 bit -> 512 bit does the same that gddr3 -> gddr5 does. double the memory bandwidth. apparently rv770 does not need that much bandwidth or you would see a much bigger difference between 4850 and 4870
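The point above can be checked with a quick back-of-the-envelope calculation (a sketch: the HD 4850/4870 memory clocks below are the reference figures of the day, and the 512-bit part is purely hypothetical):

```python
def bandwidth_gb_s(bus_width_bits, effective_mhz):
    """Peak memory bandwidth: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

# HD 4850: 256-bit GDDR3 at 993 MHz actual (1986 MT/s effective)
hd4850 = bandwidth_gb_s(256, 1986)   # ~63.6 GB/s
# HD 4870: 256-bit GDDR5 at 900 MHz actual (3600 MT/s effective)
hd4870 = bandwidth_gb_s(256, 3600)   # 115.2 GB/s
# hypothetical 512-bit GDDR5 card, as discussed in the thread
wide   = bandwidth_gb_s(512, 3600)   # 230.4 GB/s
```

Doubling the bus width and moving from GDDR3 to GDDR5 each double bandwidth on their own, so the HD 4870 already enjoys one doubling over the HD 4850 without any PCB redesign.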


----------



## wolf (Nov 9, 2008)

very cool article btarunr.... i can really see the potential in RV770 if they can clock the nuts off it


----------



## FudFighter (Nov 9, 2008)

W1zzard said:


> 256 bit -> 512 bit does the same that gddr3 -> gddr5 does. double the memory bandwidth. apparently rv770 does not need that much bandwidth or you would see a much bigger difference between 4850 and 4870



I would have pointed that out, but From his post I get the Impression that he wont listen/understand that.

its like trying to explain that moving to ddr3 for normal users is a dumb move, it costs more and offers lesser perf(for the cheaper stuff).

most people are better off getting more ram insted of "faster" ram.


----------



## eidairaman1 (Nov 9, 2008)

FudFighter, More Ram only helps with Monitors with Resolutions Larger than 1280x1024, otherwise if your playing that resolution 512 MB Ram is enough or even overkill.


----------



## FudFighter (Nov 9, 2008)

u miss understand, i ment system ram, alot of people think 1gb of ddr3(system ram) is better then 2 or even 4gb of ddr2(system ram) and will argue to the end about it........


----------



## eidairaman1 (Nov 9, 2008)

eventually bandwidth does overtake the latency drawback.


----------



## FudFighter (Nov 9, 2008)

but in the case of cheap ddr3 and 1gb(specly with vista) your still better off with cheap, quility ddr2 currently, IF you spend the money ddr3 for intel is a good move, but the cheap stuff at say 1333@9-9-9-xx is NOT going to give you a decent perf for avg user.

try yourself on vista, take 1gb of cheap ram, use vista for a while, then slap in a decent 2 or 4gb kit and watch the diffrance......the perf boost is DRASTIC even for desktop apps.

most joe sixpack type users would be better off with 2gb of cheap yet quility ddr2 then 1gb of cheap ass ddr3 or 4gb of quility ddr2 vs 2gb of cheap ass ddr3.

just a fact of how much memory apps and vista itself use up these days


----------



## eidairaman1 (Nov 9, 2008)

well obviously capacity has a play in Vista, and that is because Vista is more resource demanding than XP was, it seems Minimum Spec for Vista is like 768-1Gig of ram where Recommended is 2-4 Gigs. In General NT is resource demanding than the other Coding that MS has used for Windows.

Now for another Example, 1GB DDR2 vs 2GB DDR3, i say most will go with Capacity over speed due to fact of Vista Memory Demands. Beyond that When you want to Move up from 1 ram to another you have to Switch out motherboards (overall Cheaper than Having to swap CPUs) but aboveall lets get back on track with the Videocards Themselves, not System ram.


----------



## Disruptor4 (Nov 10, 2008)

FudFighter, don't even bother. They don't seem to understand what you're getting at lol.


----------



## FudFighter (Nov 10, 2008)

yeah, thats what i was getting at Disruptor4


----------



## btarunr (Nov 10, 2008)

FudFighter said:


> but in the case of cheap ddr3 and 1gb(specly with vista) your still better off with cheap, quility ddr2 currently, IF you spend the money ddr3 for intel is a good move, but the cheap stuff at say 1333@9-9-9-xx is NOT going to give you a decent perf for avg user.



Few things you need to know:

 GDDR3 ≠ DDR3

 Sure DDR3 gives you higher frequencies, and at latencies that look bad from a DDR2/DDR1 perspective, (eg: 1333 @ say 9-9-9-21), but the fact that the frequency is high(er), theoretically the clock cycle is short (since there's more _cycles_ per unit time), so latencies don't become as much of a problem there.
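The cycle-time argument works out numerically; here is a minimal sketch (the module speeds and CAS latencies are just typical retail parts of the time, not figures from this thread):

```python
def cas_latency_ns(cas_cycles, data_rate_mt_s):
    # DDR transfers twice per I/O clock, so the bus clock is half the data rate
    io_clock_mhz = data_rate_mt_s / 2
    return cas_cycles / io_clock_mhz * 1000  # cycles/MHz gives us, x1000 -> ns

ddr2_800_cl5  = cas_latency_ns(5, 800)    # 12.5 ns
ddr3_1333_cl9 = cas_latency_ns(9, 1333)   # ~13.5 ns
```

Despite CL9 looking far worse than CL5 on paper, the absolute access latency differs by about a nanosecond, because each DDR3 cycle is that much shorter.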


----------



## eidairaman1 (Nov 10, 2008)

i say its best to skip between ram generations, say go from DDR to DDR3/4 or even from DDR to FBDimm.


----------



## FudFighter (Nov 10, 2008)

yeah, i know btarunr, it was not about gddr or ddr2 or whatever, was more about people not understanding that just because something has a higher number dosnt make it better.

i should have used videocards as an example i guess since people dont get that i was talking about why its pointless to try and explain this stuff to some people.

re-explination 

some people think a 9600gt is better then an 8800gt because, 9600 is higher then 8800, when in reality the 8800gt is hands down the better card.

that make it clearer what i ment?

gddr5 runs at FAR higher clocks then gddr3, so the clocks outbalance the buss bit width, so ati can make the pcb cheaper and less complex(less failed cards) where nvidia pcb's cost alot more to make driving up cost and due to extra complexity they have more pcb's that failed to meet spec or have flaws that endup causing problems down the line(like a card that fails after a few months due to a bad trace burning out) 

sometimes cheaper is better!!!!


----------



## Wile E (Nov 10, 2008)

FudFighter said:


> duno m8, i have seen some reviews that showed the performance of 3800 cards being quite notably better using apps that can use avivo such as powerdvd,windvd and the like where avivo can take load off the cpu running the video prosessing almost fully on the gpu.
> 
> I am still waiting for some mainstream apps/codecs to use nvidia and ati gpu's.



You missed my point. The UVD is what handles video decode on these cards. 2900 didn't have it. That is what gave the 3800 cards their improvement. Thus, it wasn't so much of an Avivo improvement, as it was them actually including the UVD this time. (In other words, it was a shot at ATI.  )


----------



## btarunr (Nov 10, 2008)

FudFighter said:


> some people think a 9600gt is better then an 8800gt because, 9600 is higher then 8800, when in reality the 8800gt is hands down the better card.
> 
> that make it clearer what i ment?
> 
> gddr5 runs at FAR higher clocks then gddr3, so the clocks outbalance the buss bit width, so ati can make the pcb cheaper and less complex(less failed cards)



Higher freq? No..GDDR5 doesn't run at higher frequencies, but pushes ~2x data / unit time, and people choose to equate it to a high-frequency. The memory on a HD 4870 is 900 MHz (actual) while effectively 3600 MHz, whereas for GDDR3 to get there on the same bus width, it takes 1800 MHz (actual, something impossible), or 900 MHz (actual) on 2x the bus width.
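In numbers, taking the reference HD 4870 memory clock above (the 512-bit GDDR3 configuration is hypothetical, for comparison only):

```python
actual_mhz = 900
gddr5_effective = 4 * actual_mhz   # GDDR5 moves 4 bits per pin per clock: 3600 MT/s
gddr3_effective = 2 * actual_mhz   # GDDR3 is double data rate: 1800 MT/s

# identical peak bandwidth either way:
# 256-bit GDDR5 vs a hypothetical 512-bit GDDR3 at the same actual clock
assert 256 * gddr5_effective == 512 * gddr3_effective
```

The extra transfers per clock are what let GDDR5 match a bus twice as wide without the PCB complexity.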



FudFighter said:


> where nvidia pcb's cost alot more to make driving up cost and due to extra complexity they have more pcb's that failed to meet spec or have flaws that endup causing problems down the line(like a card that fails after a few months due to a bad trace burning out)



not sure where you got that from


----------



## erocker (Nov 10, 2008)

btarunr said:


> not sure where you got that from



I thought I heard about Nvidia's PCB's costing more right before the launch of the 2xxGTX series.  Larger memory bus on the PCB cost more.


----------



## Wile E (Nov 10, 2008)

erocker said:


> I thought I heard about Nvidia's PCB's costing more right before the launch of the 2xxGTX series.  Larger memory bus on the PCB cost more.



It does. That's part of the reason the 3870 was so much cheaper than the 2900, the other part was the die shrink.


----------



## btarunr (Nov 10, 2008)

Right, and about the "burnout" part?


----------



## Wile E (Nov 10, 2008)

btarunr said:


> Right, and about the "burnout" part?



Who knows? lol. While I can't say they'd be more prone to having a trace burn out, the odds of a bad pcb are higher.


----------



## btarunr (Nov 10, 2008)

Wile E said:


> Who knows? lol. While I can't say they'd be more prone to having a trace burn out, the odds of a bad pcb are higher.



Ah probability..the odds favour a horse over a giraffe to fly


----------



## spearman914 (Nov 10, 2008)

Most likely pricing will be BS.


----------



## FudFighter (Nov 11, 2008)

btarunr said:


> Ah probability..the odds favour a horse over a giraffe to fly



the more complext the pcb the more prone to flaws, just as the more complex the core/cpu the more prone to flaws, trace burnouts have happened due to flawed/damnaged internal traces, say the person laying the traces, twists/slitly tares the one thats being layed(or the machien does it) a flawd/damnaged trace could overheat and burn out, i have seen this in complex pcb's b4, its far more common then you may think, alot of cards that die under stress could easly be dieing from pcb errors not just flawed/bad caps/chips.

say normal trace is ============== thick and you endup with a trace thats like
                    this =======--======  wouldnt that small overly thin area be more likely to burn out then the trace thats layed properly?

now this can happen in any pcb, but the more complex something is the more chance somethings gonna be screwed up.

old adage "the simple plays the best plan" is true more times then not.


----------



## DarkMatter (Nov 11, 2008)

That kind of failure is not that common, and failure-rate numbers are always tricky. Imagine you have two models, A and B. A is much simpler than B, and because of that B has a failure rate 5x higher than A. Disaster, right? Not necessarily; we lack a lot of info. It often happens that the failure rate for A is smaller than 1%, so even with a much higher failure rate, B still ships more than 95% of its products successfully. This scenario is the most common one. From an engineering point of view that 5% of failures is certainly a lot, they are obviously not doing well, and any engineer would say so, but that doesn't mean the product is affected much, price-wise or otherwise.
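The scenario above in numbers (the rates are purely illustrative, not real yield data):

```python
rate_a  = 0.01           # simple board A: 1% failure rate
rate_b  = 5 * rate_a     # complex board B: 5x worse, i.e. 5%
yield_b = 1 - rate_b     # B still ships 95% good boards
```

A 5x jump in failure rate sounds dramatic, but starting from a low base it barely dents the share of good boards reaching customers.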


----------



## FudFighter (Nov 11, 2008)

Dunno, over the years I have seen a good number of bad traces, and as things get smaller and more complex I wouldn't expect that to disappear.

We used to fix cards with burnt/damaged/flawed surface traces using a conductive pen, then seal it with some clear fingernail polish.


----------



## btarunr (Nov 11, 2008)

FudFighter said:


> Dunno, over the years I have seen a good number of bad traces, and as things get smaller and more complex I wouldn't expect that to disappear.
> 
> We used to fix cards with burnt/damaged/flawed surface traces using a conductive pen, then seal it with some clear fingernail polish.



Did you know the wiring you see on either side of the PCB isn't the only wiring? A PCB is a layered thing, with each layer holding wiring... something conductive pens won't help with. 

The whole "512-bit is more prone to damage" thing is just a mathematical probability. Of course the vMem circuitry is a different issue, where more chips = more wear/tear, but it's understood that on a card with a 512-bit memory interface the vMem is accordingly durable (high-grade components used), something NVIDIA does use on its reference G80 and G200 PCBs.


----------



## FudFighter (Nov 11, 2008)

Yes, I know 6 and 8 layers are common, and I fully know you can't fix internal traces; I never said anything about fixing internal PCB traces... you act like I am a moron/uber noob.

Anything that's more complex will be more prone to problems. Look at Windows and PCs: they are more complex to deal with than a Mac. The hardware is limited, so you have fewer problems, but that doesn't make them better, and doesn't really make them worse either (the user base does that).

I think you get what I was talking about. I'm done trying to explain/justify/whatever; I'm gonna watch some Lost and look for some more stuff to dump on my Samsung players.


----------



## btarunr (Nov 11, 2008)

FudFighter said:


> Anything that's more complex will be more prone to problems. Look at Windows and PCs: they are more complex to deal with than a Mac. The hardware is limited, so you have fewer problems, but that doesn't make them better, and doesn't really make them worse either (the user base does that).



...if there aren't any measures to make them accordingly durable, yes, but that's not the case. By that logic, a Core 2 Extreme QX9770 is more prone to damage than an E5200 (again, notwithstanding overclocking), but that isn't the case, right? Probabilities always exist. Sometimes they're too small to manifest into anything real. I'm not doing anything other than having this discussion... thank you for it.


----------



## FudFighter (Nov 11, 2008)

No, but the rate of cores that make it into QX9770s vs. E5200s is far lower.

You do know they bin chips, don't you?

You do know that Intel's quads are still two Core 2 Duo dies on one package, don't you?

Do you know why Intel does this?

They do it because the fail/flaw rate of dual-core dies is lower, due to lower complexity, than it would be with one solid die with four cores on it.

What I'm saying is your logic is flawed, that or you really don't know what you're talking about...


----------



## btarunr (Nov 11, 2008)

FudFighter said:


> No, but the rate of cores that make it into QX9770s vs. E5200s is far lower.
> 
> You do know they bin chips, don't you?
> 
> ...



Ah... now use your logic against yourself: 

"You do know they bin chips, don't you?"

The durability of the components used on those complex graphics cards negates their mathematically higher probability of failing (merely on account of their complexity). The probability is only mathematical, not real.


----------



## [I.R.A]_FBi (Nov 11, 2008)

FudFighter said:


> No, but the rate of cores that make it into QX9770s vs. E5200s is far lower.
> 
> You do know they bin chips, don't you?
> 
> ...



why so much effort to try and show up bta?


----------



## theJesus (Nov 11, 2008)

[I.R.A]_FBi said:


> why so much effort to try and show up bta?


Agreed, although the discussion is quite entertaining


----------



## FudFighter (Nov 11, 2008)

btarunr said:


> Ah... now use your logic against yourself:
> 
> "You do know they bin chips, don't you?"
> 
> The durability of the components used on those complex graphics cards negates their mathematically higher probability of failing (merely on account of their complexity). The probability is only mathematical, not real.



But it also raises cost. This is moving away from the original point of my post, and you and the other guy know it.

Binning chips and components and building more costly PCBs leads to higher costs, which leads to higher prices. I would like to know how high the fail rate of the PCBs themselves is in QA testing; each fail is money wasted. So the point is that NVIDIA's costs are higher, as are their prices.

Just like back in the day: the 9700/9800 Pro/XT was a 256-bit card, and the 9500 Pro/9800 SE was 128-bit. Some old 9500s were just 9700/9800 Pro/XT boards with a BIOS that disabled half the memory bus and/or pipes on the card (I have seen cards both ways). They also had native Pro versions that were 128-bit and FAR cheaper to make, with less complex PCBs.

Blah, that last bit was a bit of a ramble. The point being that ATI's way, this time around as in the past, was to find a cheaper, more efficient way to do the same job.

GDDR5 on 256-bit can have bandwidth equivalent to 512-bit GDDR3. Sure, the initial price of GDDR5 was higher, but I would bet by now the cost has come down a good bit (a lot of companies are making it, after all). I was reading that NVIDIA could, and likely will, move to GDDR5; they didn't use GDDR4 because of cost and low supply (it also wasn't that much better than GDDR3).


Blah, you treat me like a moron, and you use "flawed logic" to try and get around the situation.

You used the QX9770 (who the hell is gonna pay that kind of price for a CPU?) as an example. We couldn't get real 1:1 numbers on that, because nobody sane buys those things (over $1K for a CPU...).

An example that can show you what I mean would be the K10s: there are quad cores and tri-cores.

The tri-cores are either weak or failed quads; AMD found a way to make money off flawed chips, and they still function just fine. But due to the complexity of a NATIVE quad core, you ARE going to have higher fails than if you went with multiple dual cores on one package.

In that regard Intel's method was smarter, to a point (for the home-user market), since it was cheaper and had lower fail rates (they could always sell failed duals as lower single-core models). AMD even admitted that for the non-server market they should have done an Intel-style setup for the first run, then moved to native on the second batch.

I have a feeling NVIDIA will end up moving to less complex PCBs with GDDR5 with their next real change (non-refresh).

We shall see. I just know that, price for performance, I would take a 4800 over the GT200 or G92; that is, if I hadn't bought the card I got before the 3800s were out.
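The bandwidth-equivalence claim above is easy to check with back-of-the-envelope numbers. A rough Python sketch, using the HD 4870's and GTX 280's published memory clocks (GDDR5 transfers data four times per command clock, GDDR3 twice):

```python
# Back-of-the-envelope peak-bandwidth check (published reference clocks).
def bandwidth_gbps(bus_bits: int, effective_mtps: float) -> float:
    """Peak bandwidth in GB/s: bus width in bytes x mega-transfers/s."""
    return bus_bits / 8 * effective_mtps / 1000

hd4870 = bandwidth_gbps(256, 900 * 4)    # HD 4870: 256-bit GDDR5 @ 900 MHz
gtx280 = bandwidth_gbps(512, 1107 * 2)   # GTX 280: 512-bit GDDR3 @ 1107 MHz
print(f"{hd4870:.1f} GB/s vs {gtx280:.1f} GB/s")  # 115.2 GB/s vs 141.7 GB/s
```

So a 256-bit GDDR5 bus lands in the same ballpark as a 512-bit GDDR3 one, on a far simpler board; the remaining gap comes from the specific clocks each vendor shipped.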


----------



## btarunr (Nov 11, 2008)

FudFighter said:


> But it also raises cost. This is moving away from the original point of my post, and you and the other guy know it.



Well, that's the whole reason why it's priced that way and caters to that segment of the market, right?



FudFighter said:


> Binning chips and components and building more costly PCBs leads to higher costs, which leads to higher prices. I would like to know how high the fail rate of the PCBs themselves is in QA testing; each fail is money wasted. So the point is that NVIDIA's costs are higher, as are their prices.



Apparently NVIDIA disagrees with you. The PCB has very little role to play in a graphics card's failure; it has always had something to do with the components, or the overclocker's acts. The quality of the components used makes up for the very slight probability of the PCB being the cause of death for a graphics card... in effect, the PCB is the last thing you'd point your finger at.




FudFighter said:


> GDDR5 on 256-bit can have bandwidth equivalent to 512-bit GDDR3. Sure, the initial price of GDDR5 was higher, but I would bet by now the cost has come down a good bit (a lot of companies are making it, after all). I was reading that NVIDIA could, and likely will, move to GDDR5; they didn't use GDDR4 because of cost and low supply (it also wasn't that much better than



I agree that x-bit GDDR5 = 2x-bit GDDR3, but you have to agree that G200 PCBs had been in development for a long time, I'd say right after the G80 launch, when NVIDIA started work on 65nm GPUs. Just because the product happened to launch shortly before another with GDDR5 came about, you can't say "they should have used GDDR5". Whether they cut costs or not, you end up paying the same; they make sure of it. Don't you get a GeForce GTX 260 in the same price range as an HD 4870? So don't jump to the conclusion that if they manage to cut costs, they'll hand the benefit over to you by making you pay less. They'll benefit themselves.



FudFighter said:


> Blah, you treat me like a moron, and you use "flawed logic" to try and get around the situation.



Whatever you're accusing others of is, apparently, all in your mind.



FudFighter said:


> You used the QX9770 (who the hell is gonna pay that kind of price for a CPU?) as an example. We couldn't get real 1:1 numbers on that, because nobody sane buys those things (over $1K for a CPU...).



Conclusions... conclusions. It's called a "premium". People who can buy will buy, however smart/dumb they are. The $1000~1500 CPUs Intel sells cater to that very market, something AMD did in its day too.


----------



## Hayder_Master (Nov 11, 2008)

W1zzard said:


> 256-bit -> 512-bit does the same thing that GDDR3 -> GDDR5 does: double the memory bandwidth. Apparently RV770 does not need that much bandwidth, or you would see a much bigger difference between the 4850 and 4870.



Ohh, interesting, thanks W1zzard. Sure, you're right, but let's say a 4870 with 512-bit, just like in the 4870 X2's case: we see high bandwidth in GPU-Z, and sure enough it gives great performance, not because of large memory size but because of high memory bandwidth. Am I right, or what's your take?


----------



## theJesus (Nov 11, 2008)

FudFighter, how is btarunr treating you like a moron?  I think you're being way too defensive; all bta is doing is trying to debate with you about some of the things you've said, because he disagrees.


----------



## wolf (Nov 11, 2008)

Agreed. btarunr has used nothing but logic, experience and knowledge as a basis for what he has written, whether or not it's 100% factual.

There's no real sense arguing just to argue. You are well entitled to your opinion, but rest assured, nobody is calling you, or treating you like, a moron.

I do see some very valid points you raise, FudFighter, but statements like "you do know they bin chips dont you?" to btarunr do not strengthen your position.

Play it cool, man. We just wanna discuss cool new hardware.


----------



## DarkMatter (Nov 11, 2008)

FudFighter said:


> ...



You are mixing things up.

- First of all, we are arguing about the failure rates, not the price. Complex PCBs are undoubtedly more expensive, but you make them because they allow for cheaper parts in other places (i.e. GDDR3) or for improved performance. Which is better is something much more complicated than comparing the PCBs. What has happened with GT200 and RV770 doesn't prove anything on this matter either: first because GT200 "failed" (they couldn't clock it as high as they wanted*), and second because when comparing prices you have to take into account that prices fluctuate and are related to demand. I have said this like a million times, but had Nvidia adopted GDDR5 the same day Ati did, the demand for GDDR5 would have been 3x what it has been**, WHEN suppliers couldn't even meet Ati's demand. That would have made prices skyrocket. It's easy to look at the market today and say 256-bit + GDDR5 is cheaper (I'm still not so sure), but what would have happened if GDDR5 prices were 50-80% higher? Nvidia, because of the high market share they had, couldn't take the risk of finding that out. You can certainly thank Nvidia for RV770's success in price/performance, don't doubt that for a moment.

- We have told you that failure rates are indeed higher (under the same conditions), but not by as much as you make them out to be, nowhere near it really. AND that small increase is already taken into account BEFORE they manufacture them, and they take actions to make up for that difference (the conditions are then different). In fact, a big part (probably the biggest) of the increased cost of manufacturing complex PCBs is because of that. In the end the final product is in practice as error-free as the simpler one, but at higher cost. As I've discussed above, those increased costs might not matter; it just depends on your strategy and the surrounding market.

- Don't mix apples with oranges. Microchip manufacturing and PCBs are totally different things. I can't think of any other manufactured product with failure rates as high as microchips; it's part of the complexity of the process of making them. In chips, a failure-rate difference of 20% can easily happen between a simple design and a more complex one, but that's not the case with PCBs. 

And also don't take individual examples as proof of facts. I'm talking about K10. Just as GT200 failed, K10 failed, and although those failures are related to their complexity, the magnitude of the failure surpassed ANY expectations. Although related, the failure was not due to the complexity, but to issues with the manufacturing process. You can't take one example and make a point with it. What happens with Nehalem? It IS a native quad-core CPU, and they are not having the problems K10 had. Making a native quad is more risky than slapping two duals together, but the benefits of a native quad are evident. Even if failures are higher, a native quad is inherently much faster and makes up for the difference: if the native is 30% faster (just as an example), then to deliver the same performance you can afford to make each core on the native CPU 30% simpler. In the end it depends on whether the architectural benefits can make up for the difference at manufacturing time. For K10 they didn't; for Nehalem they do; for Shanghai they do. The conclusion is evident: in practice, native quads are demonstrating themselves to be the better solution.

* IMO that had to be sorted out before launch, BUT not in time for when the specs were finalized. I can't understand otherwise how the chip they supposedly couldn't make run faster is on overclockers' top lists, with average overclocks above 16% on stock. With little increase in heat and power consumption, as further evidence...

** Nvidia sold two cards for every card Ati sold back then. They had to aim at the same market share AND the same number of sales. With both fighting for GDDR5 supply, that would have been impossible. A low supply of RAM would have hurt them more than what has actually happened: they lost market share, but their sales didn't suffer much. Ati, on the other hand, desperately needed to gain market share, no matter what; they needed a moral win, and what it could cost them didn't really matter. They took the risk, and with a little help from Nvidia, they succeeded.
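The native-quad vs. two-duals argument above is essentially a die-yield trade-off, and can be sketched with the classic Poisson yield model. The defect density and die area below are invented for illustration, not AMD's or Intel's real numbers:

```python
import math

# Classic Poisson die-yield model: Y = exp(-defect_density * area).
# The numbers below are assumptions for illustration only.
def die_yield(defects_per_cm2: float, area_cm2: float) -> float:
    """Fraction of dies that come out defect-free."""
    return math.exp(-defects_per_cm2 * area_cm2)

d = 0.5           # assumed defects per cm^2
dual_area = 1.4   # assumed cm^2 for one dual-core die

y_dual = die_yield(d, dual_area)        # each small die: ~49.7%
y_native = die_yield(d, 2 * dual_area)  # monolithic quad: ~24.7%

# An MCM quad pairs two independently-good duals, so a defect scraps
# only one small die; a native quad scraps the whole big die at once.
print(f"dual: {y_dual:.1%}, native quad: {y_native:.1%}")
```

Doubling die area roughly squares the yield under this model, which is why a native quad only makes sense if its architectural gains offset the extra scrapped silicon, exactly the point made above.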


----------



## ocre (Nov 17, 2008)

*Everyone here is getting mixed up*

I am late; maybe someone will read this.  

To btarunr: FudFighter is speaking of production fail rates, for parts which never make it to the shelf. The only impact on the consumer is the price tag, and that is because of everything he has explained, which I will not go into. He is not exactly saying that your purchased X4 CPU or video card is more likely to fail because it is more complex.

To FudFighter: btarunr missed your point and was totally talking about end-user fail rates. While you were mostly clear about what you were saying, your posts had little to do with what btarunr was posting, and you kept argumentatively responding when you weren't even on the same subject.

Both of you are mostly correct; there's just a bit of a mix-up in communication. FudFighter is correct, and his point is valid here, when speaking of production fail rates causing higher production costs and less profit for NVIDIA or any company. If they aren't making money, then they aren't gonna keep selling cards. So maximizing quality, with a higher percentage of acceptable yield, is an absolute must for any GPU (or other) manufacturer to stay competitive. This article was about the AMD and NVIDIA rivalry, so every bit of FudFighter's info applies to this. Now, btarunr is correct in his point, as he was trying to get across that just because a component is more complex doesn't mean it is gonna have a higher fail rate. And he is right: it absolutely doesn't, from an end-user standpoint. If QA does its job correctly, there should be no problems with the end result.  

I have no enemies, and I don't mean to make any. You are both right in your own way.


----------



## DarkMatter (Nov 17, 2008)

ocre said:


> I am late; maybe someone will read this.
> 
> To btarunr: FudFighter is speaking of production fail rates, for parts which never make it to the shelf. The only impact on the consumer is the price tag, and that is because of everything he has explained, which I will not go into. He is not exactly saying that your purchased X4 CPU or video card is more likely to fail because it is more complex.
> 
> ...



Yes and no. At manufacturing time a more complex product does not necessarily have a higher failure rate, certainly not to the point of affecting profitability. When you manufacture anything, you try to do it in the absolutely cheapest way possible, as long as it doesn't affect quality. That means using the cheapest materials that meet your requirements, taking just enough care that the product is well made and no more, etc. That's why the process of creating the simple thing is cheaper, and the product itself ends up being cheaper. 

When you set out to create a more complex product, you use better, more expensive materials; you use better, slower manufacturing techniques; better and more workers look after the future end product, etc. All those things make the resulting product more expensive, BUT the failure rate is maintained at a level close to that of a simple product. How much you pay (or how much it makes sense to spend) to maintain that low level of failures depends on many things and is up to the company. In the end it's a trade-off between paying more to have fewer failures, or "spending" that money in QA and paying for the materials/workforce of those failed products that will never get to the end of the pipeline.


----------



## vagxtr (Nov 18, 2008)

Zerofool said:


> I actually doubt we'll see RV790 this year. The latest news about GT200b talks about yet another delay, to February '09 (The Inquirer). So RV790 cards will probably come out then (or whenever the NV cards do); they don't want to compete against their own cards now.
> 
> 
> 
> Yes, most likely.



These cards have already been taped out. The telling fact is that the last remaining working RV770 chips were spun off into HD 4830 incarnations last month. So that rumor is more than a rumor; the only thing affecting a sooner introduction is buyer momentum. I'd say we'll see it by Christmas, or at least shortages of the same.



3dchipset said:


> I'm just curious if they will bring them out this year. So far this is looking like the worst retail shopping in 10 years, due to the economy. I honestly would be shocked to see a new offering this year.



Well, it's not all about economic momentum. This is yet another refresh of RV770, and they can call it whatever they like, just as NV first makes GT200b and renames it GT206. It's just a marketing play; they need something to stay competitive, and 55nm technology allows them improvements... but it's more likely we'll see 800 SPs @ 40nm; it's more cost-effective, with 150MHz+ guaranteed.



W1zzard said:


> I doubt ATI will have adjustable shader clocks any time soon; this would be a HUGE design change.
> 
> 
> 
> I expect RV790 to be drop-in compatible with RV770. That means you could unsolder the GPU from an HD 4850/4870, solder on an RV790, and the card would work without any other change on the hardware or software side.



Yeah, yeah, they all announce drop-in compatibility, but AFAIR the only true drop-in was KT266A -> KT333, where boards didn't need a redesign; even the old-school nForce2 Ultra needed a new board with some tiny changes when the new, same-0.18µ-process Ultra 400 came out. All in all, it's not all in the proclamations; there will only be pinout compatibility, I guess.


----------



## DaMulta (Nov 18, 2008)

I would like to point something out about binning chips, as discussed above...

When they bin, the center of the wafer goes to commercial markets such as Xeons, Opterons, FireGL, and so on. The next step out is the normal market: you and me.

As btarunr said


> their mathematically high probability to fail



So the farther out you get, the worse it gets, but that is also why some cheap chips will run at the same speed as the more expensive ones: they beat the high probability of failure when they were made.

Now you might think this is crazy... let's just say an 8600 GT is the fastest-selling card on the market, and they run out of the bin for that card. What do they do, stop making them? Nope, they pull from the higher-up bin and continue with production, because money is money. So if it was a really popular card, you could have a TOP-bin chip in a very low-end product, because they are selling them really fast and making their money back faster.

That really does happen with CPUs, and with video cards, binning what they sell.
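The bin-fallback behaviour described above can be sketched as a simple allocation loop. This is a hypothetical model; the bin names, stock levels, and numbers are all invented for illustration:

```python
# Hypothetical sketch of bin fallback: when demand for a popular SKU
# exhausts its own bin, production pulls from the next bin up rather
# than halting. Bin names and stock counts are invented.
def fill_order(demand: int, bins: dict, sku_bin: str) -> dict:
    """Fill `demand` chips for a SKU, pulling from better bins if needed."""
    order = ["low", "mid", "high"]      # worst silicon to best
    shipped = {}
    for name in order[order.index(sku_bin):]:
        take = min(demand, bins[name])  # own bin first, then step up
        if take:
            bins[name] -= take
            shipped[name] = take
            demand -= take
        if demand == 0:
            break
    return shipped

stock = {"low": 300, "mid": 500, "high": 200}
print(fill_order(450, stock, "low"))  # {'low': 300, 'mid': 150}
```

Here 450 units of the cheap SKU are demanded but only 300 low-bin chips exist, so 150 higher-bin chips get shipped in the budget product, which is exactly why some cheap parts overclock like premium ones.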


----------

