# GDDR5 Memory - Under the Hood



## HTC (May 28, 2008)

> In the graphics business, there's no such thing as too much memory bandwidth. Real-time graphics always wants more: more memory, more bandwidth, more processing power. Most graphics cards on the market today use GDDR3 memory, a graphics-card optimized successor to the DDR2 memory common in PC systems (it's mostly unrelated to the DDR3 used in PC system memory).
> 
> A couple years ago, ATI (not yet purchased by AMD) began promoting and using GDDR4, which lowered voltage requirements and increased bandwidth with a number of signaling tweaks (8-bit prefetch scheme, 8-bit burst length). It was used in a number of ATI graphics cards, but not picked up by Nvidia and, though it became a JEDEC standard, it never really caught on.
> 
> AMD's graphics division is at it again now with GDDR5. Working together with the JEDEC standards body, AMD expects this new memory type to become quite popular and eventually all but replace GDDR3. Though AMD plans to be the first with graphics cards using GDDR5, the planned production by Hynix, Qimonda, and Samsung speaks to the sort of volumes that only come with industry-wide adoption. Let's take a look at the new memory standard and what sets it apart from GDDR3 and GDDR4.



Source: ExtremeTech
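The GDDR4 tweaks the article mentions (8-bit prefetch, 8-bit burst) are what let the interface run faster than the DRAM core. A rough sketch of that relationship, with illustrative clock figures (the 4n/8n prefetch depths are the standard ones for GDDR3 and GDDR4/5 respectively):

```python
# How prefetch depth relates the DRAM core clock to the interface data rate:
# each core-clock cycle fetches `prefetch` bits per pin, which the interface
# then serializes out at core_clock * prefetch transfers per second.
def interface_rate_mt_s(core_clock_mhz: int, prefetch: int) -> int:
    return core_clock_mhz * prefetch

print(interface_rate_mt_s(500, 4))  # GDDR3 (4n prefetch): 2000 MT/s
print(interface_rate_mt_s(500, 8))  # GDDR4/5 (8n prefetch): 4000 MT/s
```

Same core clock, double the prefetch, double the data rate per pin.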


----------



## btarunr (May 28, 2008)

But what's the point when only one company uses it, and that too on its 'top-of-the-line' product (HD4870), while another company used GDDR3 across 5 generations of products?


----------



## spearman914 (May 28, 2008)

btarunr said:


> But what's the point when only one company uses it, and that too on its 'top-of-the-line' product (HD4870), while another company used GDDR3 across 5 generations of products?



Yea I know. But really, there's no real difference in gaming between GDDR3, 4, and 5 yet.


----------



## magibeg (May 28, 2008)

Just have to weigh the costs between things. I guess one of the big questions would be whether it's cheaper to go GDDR5 with a 256-bit bus or GDDR3 with a 512-bit bus.
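That question can at least be framed numerically. A minimal sketch (the clock figures are illustrative, not vendor specs): peak bandwidth is just the effective transfer rate times the bus width in bytes, so a 256-bit GDDR5 bus matches a 512-bit GDDR3 bus once its data rate is doubled.

```python
def peak_gb_s(effective_mt_s: float, bus_bits: int) -> float:
    # peak bandwidth = transfers/s * bytes moved per transfer
    return effective_mt_s * 1e6 * (bus_bits / 8) / 1e9

print(peak_gb_s(2000, 512))  # GDDR3-style: 128.0 GB/s
print(peak_gb_s(4000, 256))  # GDDR5-style: 128.0 GB/s on half the bus width
```

The cost question is then which is cheaper to build: the wider, more complex PCB, or the faster memory chips.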


----------



## HTC (May 28, 2008)

Have you dudes checked the page? There's more, you know!


----------



## [I.R.A]_FBi (May 28, 2008)

btarunr said:


> But what's the point when only one company uses it, and that too on its 'top-of-the-line' product (HD4870), while another company used GDDR3 across 5 generations of products?



Have you read the link?


----------



## Megasty (May 28, 2008)

> In the graphics business, there's no such thing as too much memory bandwidth.



Too true, the increased speed doesn't hurt either


----------



## Silverel (May 28, 2008)

That was a pretty good read. In the end they figure GDDR5 to be as cost effective as GDDR3, so why not just replace the whole lot of the stuff? It'd be overkill for weaker cards, but they'd probably get better pricing if they went exclusive GDDR5 with Samsung, Qimonda, and Hynix.

Every card I've owned has always benefited more from a higher memory clock than core. I'm down with a 4870 if they got the GDDR5, but I'd rather stick with a 4850 if they get the same stuff.


----------



## Darknova (May 28, 2008)

spearman914 said:


> Yea I know. But really, there's no real difference in gaming between GDDR3, 4, and 5 yet.



Says who? GDDR5 has never been seen in a real card.

And GDDR4 is better than GDDR3, it's just offset by the crappy bus widths nvidia and ATi use.

Also, it won't cost more. Think about it: GDDR5 costs slightly more than GDDR4, OK, but GPUs are getting smaller and they're fitting more on a wafer, so each one costs less. Counting everything overall, the graphics card will be cheaper to make.

And to btarunr: because ATi looks to the future (and generally fails lol), whereas nvidia is still stuck with its brute force method (bigger, badder GPUs).


----------



## Scheich (May 28, 2008)

There was some news where someone stated that it's more profitable to produce flash memory for wannabe SSDs instead of GDDR5, so this "shortage" might last for quite some time.


----------



## Silverel (May 28, 2008)

What shortage?

Switching to GDDR5 would mean smaller chips, fewer PCB layers, more efficient bus widths, and fewer "shortages".


----------



## HTC (May 29, 2008)

Darknova said:


> Says who? GDDR5 has never been seen in a real card.
> 
> And GDDR4 is better than GDDR3, it's just offset by the crappy bus widths nvidia and ATi use.
> 
> ...



Yeah: compare the die sizes of both nVidia's and ATI's next gen cards


----------



## btarunr (May 30, 2008)

Darknova said:


> And to btarunr: because ATi looks to the future (and generally fails lol), whereas nvidia is still stuck with its brute force method (bigger, badder GPUs).



I'm just looking at the near future of GDDR4/5 memory, and at the fact that a tiny minority of cards actually use them, and those from ATI, which commands less than half of the discrete graphics market share. Since NV makes powerful GPUs that end up faring better than the competition, they needn't use better memory, and they end up using GDDR3 that's dirt cheap these days....profit. ATI, meanwhile, uses GDDR4/5 more to build up aspirational value for its products. They need performance increments to come from wherever they can manage, so stronger, more expensive memory gets used. It's expensive because companies like Qimonda are pushed into making memory that's produced on a small scale, at lower profit.


----------



## Darknova (May 30, 2008)

btarunr said:


> I'm just looking at the near future of GDDR4/5 memory, and at the fact that a tiny minority of cards actually use them, and those from ATI, which commands less than half of the discrete graphics market share. Since NV makes powerful GPUs that end up faring better than the competition, they needn't use better memory, and they end up using GDDR3 that's dirt cheap these days....profit. ATI, meanwhile, uses GDDR4/5 more to build up aspirational value for its products. They need performance increments to come from wherever they can manage, so stronger, more expensive memory gets used. It's expensive because companies like Qimonda are pushed into making memory that's produced on a small scale, at lower profit.



Ok, but did you read the rest of the article? Yes, the memory itself is more expensive, but it allows for less complex PCB designs (lower cost), plus the die shrinks (lower cost) and the experience of producing dies at 55nm (lower cost).

So all in all, there won't be much of a price hike, if any.

Not only that, but GDDR5 is going to be a bigger performance jump than going from GDDR3 to GDDR4. Just because GDDR3 is dirt cheap doesn't make it better to use on your next generation of GPUs.

And you're wrong. You NEED to pair a strong GPU with stronger memory. With GPUs getting more and more powerful, they need bigger bandwidth, and GDDR5 is the next logical step.
It's just like in a PC: if you start bottlenecking the GPU, it doesn't matter how powerful you make it, because the GDDR3 will be holding it back.


----------



## Rebo&Zooty (May 30, 2008)

Exactly, DN; memory price will be higher, but the PCB price offsets that.

Also, the more complex the PCB, the more likely you are to have failures in the PCB itself due to production flaws.

The more complex something is, the more likely it is to fail. This has always been true.

Now, as to a strong GPU not needing strong RAM... I can't say what I'm thinking without getting infracted, so I'll put it a different way.

Only a fanboi would say that a strong GPU doesn't need good/strong RAM, and in this case I see a lot of nvidiots saying that kind of crap because nvidia is sticking with GDDR3. Honestly, the reason they're sticking with GDDR3 is that IT'S CHEAP and they have large contracts that make it even cheaper; not because it's the best tech for the job, not because it gives the best performance, but because they want to make as much per card as they can. Look at their card prices; they're always TOO DAMN HIGH. I have an 8800GT 512 (it's being replaced now...); it was an "OK" buy at the time, but the price most people were paying for them was 320 bucks, which is insane.

OK, the 9600GT and 8800GT/9600GSO are decently priced, BUT they're still high for what you're getting, in my view; the 3870 would be a better buy in that price range and far more future-proof.

Blah, I don't want to argue; I'm tired, it's late, and I need some rest.

Read the article, and understand that lower power and higher clocks/bandwidth mean you don't need to make an insanely complex card that costs a ton to build; you can build a cheaper card (PCB) and get the same or better performance.

Also note three makers are already on board, with more likely to follow suit. Can't wait to see this stuff in action.


----------



## btarunr (May 30, 2008)

Darknova said:


> And you're wrong. You NEED to pair a strong GPU with stronger memory. With GPUs getting more and more powerful, they need bigger bandwidth, and GDDR5 is the next logical step.
> It's just like in a PC: if you start bottlenecking the GPU, it doesn't matter how powerful you make it, because the GDDR3 will be holding it back.



Well, that's what NVidia chooses _not_ to do. They're making the GT200 use GDDR3, but part of the reason is also that the GPU itself is very expensive ($125/die, $150/package), so that's $150 for the GPU alone. More in this contentious article. So NVidia is using GDDR3 more for economic reasons. And if this is the scheme of things, they'll keep themselves away from GDDR4/5 for quite some time, even though they're already JEDEC-standard technologies.


----------



## Darknova (May 30, 2008)

btarunr said:


> Well, that's what NVidia chooses _not_ to do. They're making the GT200 use GDDR3, but part of the reason is also that the GPU itself is very expensive ($125/die, $150/package), so that's $150 for the GPU alone. More in this contentious article. So NVidia is using GDDR3 more for economic reasons. And if this is the scheme of things, they'll keep themselves away from GDDR4/5 for quite some time, even though they're already JEDEC-standard technologies.



Have you ever wondered WHY nvidia is making such an expensive GPU? As I've said before, it's just a brute force method of making a more powerful GPU. By not stopping, scrapping what they have, and creating a really efficient GPU architecture (like ATi did), they don't stand a cat in hell's chance against ATi this coming year.

Considering how powerful the 4870 is meant to be, I honestly don't see anyone with any knowledge of GPUs going for nvidia with that hefty a price tag...


----------



## candle_86 (May 30, 2008)

On paper the 2900XT should have crushed all comers; instead it barely put up a fight against the 8800GTS 640. AMD can look great on paper, but give me some proof they can compete. As for the 3870 being future-proof, I beg to differ. It has 5 groups of 64 shaders. Only 1 in each group can do complex shader work, 2 can do simple work, one does integer and the other does floating point. In the real world this means 128 of those shader units won't be used, if at all; the floating-point and integer units and the simple shaders go unused thanks to AMD's failure to supply a compiler for their cards. Let it look as good as you want, but if AMD can't supply a code compiler so code works right on their design, they're still screwed.


----------



## wiak (May 30, 2008)

btarunr said:


> But what's the point when only one company uses it, and that too on its 'top-of-the-line' product (HD4870), while another company used GDDR3 across 5 generations of products?


What's the point of Intel @ DDR3? 
Adopting new technologies is good.
Even the memory companies agree and will make GDDR5 a standard, so what's the problem?


----------



## candle_86 (May 30, 2008)

The problem is GDDR5 will suffer like GDDR4 did when it was new: insane latency. Also, Nvidia started work on the GT200 right after the G80 shipped; at the time GDDR4 wasn't viable and GDDR5 was unheard of. Do you expect Nvidia to stop working on their next gen just to include a new memory controller?


----------



## wiak (May 30, 2008)

candle_86 said:


> The problem is GDDR5 will suffer like GDDR4 did when it was new: insane latency. Also, Nvidia started work on the GT200 right after the G80 shipped; at the time GDDR4 wasn't viable and GDDR5 was unheard of. Do you expect Nvidia to stop working on their next gen just to include a new memory controller?


Insane what?
It's not ATI's problem that nvidia is lacking a proper memory controller ^^


----------



## btarunr (May 30, 2008)

candle_86 said:


> The problem is GDDR5 will suffer like GDDR4 did when it was new: insane latency. Also, Nvidia started work on the GT200 right after the G80 shipped; at the time GDDR4 wasn't viable and GDDR5 was unheard of. Do you expect Nvidia to stop working on their next gen just to include a new memory controller?



ATI started work on the R700 architecture at about the same time they released the HD 2900XT. Granted, GDDR5 was unheard of then, but the RV770 still ended up with a GDDR5 controller, didn't it? That goes to show that irrespective of when a company starts work on an architecture, something as modular as a memory controller can be added even weeks before the designs are handed over to the fabs for an ES and eventually mass production.

So when NV started work on the GT200 is a lame excuse.


----------



## candle_86 (May 30, 2008)

ATI made GDDR5, bta, didn't you get the memo? They have most likely been working on it just as long. I approve of what Nvidia is doing; using known tech with a wider bus is just as effective, and there is less chance of massive latency issues like there will be with GDDR5. I prefer tried and true. This will be the 2nd time AMD has tried something new with their graphics cards, and this will be the 2nd time they fail. I was dead right about the 2900XT failing, I said it would before it even went public, and I'll be right about this.


----------



## btarunr (May 30, 2008)

GDDR5 is a JEDEC standard. Irrespective of who _makes_ it, any licensed company can use it. HD4870 is more of something that will beat 9800 GTX and go close to GTX 260. It's inexpensive, cool, efficient. Don't try to equate HD4870 to GTX 280, you'll end up comparing a sub-400 dollar card to something that's 600+ dollars. The better comparison would be to HD4870 X2, which is supposed to be cheaper than GTX 280 and has win written all over it.


----------



## Rebo&Zooty (May 30, 2008)

btarunr said:


> GDDR5 is a JEDEC standard. Irrespective of who _makes_ it, any licensed company can use it. HD4870 is more of something that will beat 9800 GTX and go close to GTX 260. It's inexpensive, cool, efficient. Don't try to equate HD4870 to GTX 280, you'll end up comparing a sub-400 dollar card to something that's 600+ dollars. The better comparison would be to HD4870 X2, which is supposed to be cheaper than GTX 280 and has win written all over it.



Yes bta, exactly, but you gotta remember candle has an irrational hate for AMD/ATI; logic: he fails it....

Though I'm shocked to see you say the 4870/4870X2 has win written all over it..... did you forget your nvidia pillz today?


----------



## btarunr (May 30, 2008)

Rebo&Zooty said:


> Though I'm shocked to see you say the 4870/4870X2 has win written all over it..... did you forget your nvidia pillz today?



Wutha  how did you know?

It so happened that instead of the usual shipment of NVidiocy pills, they sent me a can of whoopass (that was supposed to go to Intel). Whoopass is a very tasty BBQ sauce.


----------



## Rebo&Zooty (May 30, 2008)

btarunr said:


> Wutha  how did you know?
> 
> It so happened that instead of the usual shipment of NVidiocy pills, they sent me a can of whoopass (that was supposed to go to Intel). Whoopass is a very tasty BBQ sauce.



Ah, not into excessively spicy, but it's better than those pills you've been taking   guess it's helping burn their effects out of your system


----------



## Darknova (May 30, 2008)

candle_86 said:


> ATI made GDDR5, bta, didn't you get the memo? They have most likely been working on it just as long. I approve of what Nvidia is doing; using known tech with a wider bus is just as effective, and there is less chance of massive latency issues like there will be with GDDR5. I prefer tried and true. This will be the 2nd time AMD has tried something new with their graphics cards, and this will be the 2nd time they fail. I was dead right about the 2900XT failing, I said it would before it even went public, and I'll be right about this.



I prefer better.

GDDR3 has been tried, tested, and beaten to death with a rather large stick. I was right about the 2900XT failing too, but I'm still with ATi. Why? Because I'm with a company that likes to look ahead rather than beat people to death with ever-increasing GPU dies.

nvidia refuses to even consider DX10.1, which is not a massive jump (and don't even bother with the whole "DX10 isn't used yet" argument; it's called the future, google it if you don't understand the concept), but is all about efficiency and performance improvement.

nvidia wants none of it, and that, to me, is poor company policy.

ATi beat nvidia with the X1950 series; they will do it again with the 48xx series.


----------



## kylew (May 30, 2008)

Darknova said:


> I prefer better.
> 
> GDDR3 has been tried, tested, and beaten to death with a rather large stick. I was right about the 2900XT failing too, but I'm still with ATi. Why? Because I'm with a company that likes to look ahead rather than beat people to death with ever-increasing GPU dies.
> 
> ...



Well, we've all got a good idea of why NV won't touch DX10.1: it seems they simply can't implement it, by the looks of things at least. I believe the DX10.1 we have now was what DX10 was originally intended to be, before parts of the specification were made optional instead of compulsory. Elite Bastards has a few good articles on the workings of DX10 and 10.1.

From those articles, it seems painfully obvious that NV threw a hissy fit over DX10 because of the memory virtualisation. If you remember those early rumours of the 2900, they were said to be coming in an X2 form, but they obviously never did. This also makes me think the RV670 was really meant to be R600, but they had to stop production due to the spec change in DX10. I reckon the 2900s are technically capable of DX10.1, but it's disabled or something.

In the end, if DX10.1 were as pointless as NV makes out, then Assassin's Creed wouldn't have got that large performance boost, and NV wouldn't be complaining to devs who support it, i.e., Assassin's Creed and 3DMark Vantage. This, and a few other reasons, is why I won't be touching an NV card for a long time to come.


----------



## wiak (May 30, 2008)

GDDR5 = fewer pins, higher performance, lower power, cheaper = cheaper graphics cards for the CUSTOMER

Say GDDR5 on a 256-bit bus is 70% faster than GDDR3 on a 512-bit bus, and the 256-bit bus costs 50% less; heck, it's less complex and can result in a cooler chip.


----------



## InnocentCriminal (May 30, 2008)

Interesting article; enjoyed that. ^^

Some rather interesting points, but some ring truer than others and instead of immature fanboi-isms, we'll just have to wait and see. Which I prefer more than arguing with small minded delinquents in forums. Not all of you are delinquents, obviously. 

I'm itching for Computex as we might get a little insight into the 4k series from ATi and maybe something from the green camp as well.

Even if GDDR5 brings a lot to the plate and can only mean good things, like I said, we'll have to see.


----------



## kylew (May 30, 2008)

candle_86 said:


> On paper the 2900XT should have crushed all comers; instead it barely put up a fight against the 8800GTS 640. AMD can look great on paper, but give me some proof they can compete. As for the 3870 being future-proof, I beg to differ. It has 5 groups of 64 shaders. Only 1 in each group can do complex shader work, 2 can do simple work, one does integer and the other does floating point. In the real world this means 128 of those shader units won't be used, if at all; the floating-point and integer units and the simple shaders go unused thanks to AMD's failure to supply a compiler for their cards. Let it look as good as you want, but if AMD can't supply a code compiler so code works right on their design, they're still screwed.



Funny you say that, since the 2900XT ended up being faster than an 8800GTS G80. This has been common knowledge for some time now. You really do hate on ATi. I don't really like NV, but at least I have good reasons, nothing to do with their performance or with making things up to suit my argument.

Remember that link I sent you regarding the relative performance difference between 3870s and 8800GTs? About how the 8800GT appears faster, yet between raising the res and adding AA and AF, the GT's frame drop is higher than the 3870's? Anyone who doesn't get what I'm saying, look at some of w1z's recent graphics reviews: for example, at 1024x768 an 8800GT will be getting 180FPS and a 3870 will be getting 150; move up to 1280x1024 with 2xAA and the 8800GT's frame rate drops to about 110 while the 3870 drops to about 100. We can tell the 8800GT is "faster", but its performance is hurt much more when increasing the res and adding AA. Realistically, the 8800GT is the poorer of the two because it doesn't take much to drop its framerate so much.

Yeah, the 8800GT is marginally faster overall, but those who argue that ATi's tech is crap and shader-based AA doesn't work need to actually look at things for themselves rather than hear something and repeat it parrot-fashion as if it were their own belief or opinion.  *cough*candle*cough*


----------



## kylew (May 30, 2008)

InnocentCriminal said:


> Interesting article; enjoyed that. ^^
> 
> Some rather interesting points, but some ring truer than others and instead of immature fanboi-isms, we'll just have to wait and see. Which I prefer more than arguing with small minded delinquents in forums. Not all of you are delinquents, obviously.
> 
> ...



When is Computex? I hope these do end up coming on June 16th!

PS: I'm not a delinquent, am I?


----------



## zOaib (May 30, 2008)

kewl


----------



## InnocentCriminal (May 30, 2008)

kylew said:


> When is Computex? I hope these do end up coming at June 16th!
> 
> PS, I'm not a delinquent am I?



Only if you want to be... 

Computex is next week.


----------



## kylew (May 30, 2008)

InnocentCriminal said:


> Only if you want to be...
> 
> Computex is next week.



I don't wanna be a delinquent! lol


----------



## Thermopylae_480 (May 30, 2008)

InnocentCriminal said:


> Interesting article; enjoyed that. ^^
> 
> Some rather interesting points, but some ring truer than others and instead of immature fanboi-isms, we'll just have to wait and see. Which I prefer more than arguing with small minded delinquents in forums. Not all of you are delinquents, obviously.
> 
> ...



Comments such as this cause just as many problems on this forum as "Fanboys."  If you have a problem with another user please use the report post button rather than inflaming situations with negative comments and attitude.


----------



## acperience7 (May 30, 2008)

I have to agree with candle 86(post #18). All of this new tech ATI is using is great and all, but the 2900XT made huge promises on paper as well. I think with all the new ideas ATI is implementing in the HD4xxx series they will be extremely competitive, but I also think that nVidia's "brute force" approach will continue to serve them well this generation as it has before, but maybe not as well as they hope.


----------



## kylew (May 30, 2008)

acperience7 said:


> I have to agree with candle 86(post #18). All of this new tech ATI is using is great and all, but the 2900XT made huge promises on paper as well. I think with all the new ideas ATI is implementing in the HD4xxx series they will be extremely competitive, but I also think that nVidia's "brute force" approach will continue to serve them well this generation as it has before, but maybe not as well as they hope.



I think this is as far as they can go now with the brute force method on the G80/G92 architecture unless they manage to sort out 55nm parts. Even when/if they do get to 55nm on GT200, there's only so much it can really do. I still think on 55nm, they've not really got many options left other than to try to redesign their core for the next next-gen.


----------



## candle_86 (May 30, 2008)

wiak said:


> GDDR5 = fewer pins, higher performance, lower power, cheaper = cheaper graphics cards for the CUSTOMER
> 
> Say GDDR5 on a 256-bit bus is 70% faster than GDDR3 on a 512-bit bus, and the 256-bit bus costs 50% less; heck, it's less complex and can result in a cooler chip.



Where do you get that idea? GDDR5 @ 3000 MHz on a 256-bit bus = 96 GB/s,

but GDDR3 @ 2000 MHz on a 512-bit bus = 128 GB/s.

There you have it; the ATI video memory would have to run at 4000 MHz on a 256-bit bus to tie a modern 512-bit GDDR3 bus.
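For what it's worth, the arithmetic in that post does check out once the unit is read as GB/s; a quick sketch:

```python
def peak_gb_s(effective_mhz: float, bus_bits: int) -> float:
    # effective MHz (mega-transfers/s per pin) * bus width in bytes
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(peak_gb_s(3000, 256))  # 96.0  GB/s
print(peak_gb_s(2000, 512))  # 128.0 GB/s
print(peak_gb_s(4000, 256))  # 128.0 GB/s -- the break-even rate for 256-bit
```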


----------



## Darknova (May 30, 2008)

candle_86 said:


> Where do you get that idea? GDDR5 @ 3000 MHz on a 256-bit bus = 96 GB/s,
> 
> but GDDR3 @ 2000 MHz on a 512-bit bus = 128 GB/s.
> 
> There you have it; the ATI video memory would have to run at 4000 MHz on a 256-bit bus to tie a modern 512-bit GDDR3 bus.



You actually didn't read the article did you?

Read the article, then try talking to us again.


----------



## candle_86 (May 30, 2008)

kylew said:


> Funny you say that, since the 2900XT ended up being faster than an 8800GTS G80. This has been common knowledge for some time now. You really do hate on ATi. I don't really like NV, but at least I have good reasons, nothing to do with their performance or with making things up to suit my argument.
> 
> Remember that link I sent you regarding the relative performance difference between 3870s and 8800GTs? About how the 8800GT appears faster, yet between raising the res and adding AA and AF, the GT's frame drop is higher than the 3870's? Anyone who doesn't get what I'm saying, look at some of w1z's recent graphics reviews: for example, at 1024x768 an 8800GT will be getting 180FPS and a 3870 will be getting 150; move up to 1280x1024 with 2xAA and the 8800GT's frame rate drops to about 110 while the 3870 drops to about 100. We can tell the 8800GT is "faster", but its performance is hurt much more when increasing the res and adding AA. Realistically, the 8800GT is the poorer of the two because it doesn't take much to drop its framerate so much.
> 
> Yeah, the 8800GT is marginally faster overall, but those who argue that ATi's tech is crap and shader-based AA doesn't work need to actually look at things for themselves rather than hear something and repeat it parrot-fashion as if it were their own belief or opinion.  *cough*candle*cough*



So Nvidia takes a bigger hit, but they stay faster in almost every benchmark and compete at AMD's price points, making AMD a bad buy right now. The 8800GS has the 3850 cornered, the 9600GT/9600GSO have the 3870 cornered, and quite frankly the 9800GX2 performs enough faster than the X2 that its price is justified. Just face it: right now AMD has nothing going for them.


----------



## candle_86 (May 30, 2008)

I've read the article, and honestly, even if they can transmit that much per pin it doesn't mean all that much. It's still DDR at heart, and increased speed plus reduced power always increases latency, so you can do all you want with the memory, but when latency is high you have to have features like these just to make it work. All this really says is that the memory can dump data quickly, but it still falls under the constraints of bus width and speed. No matter how many pins you have available, it's Double Data Rate, which means it transfers on both the rising and falling edges, and the memory speed and bus width determine actual performance. These improvements help negate the latency, but they had to add them because of it. GDDR3 did the same thing, yet DDR at 1000 MHz and GDDR3 at 1000 MHz were just as fast; GDDR3 was just cheaper to run at 1000 MHz. Quite honestly, there is more bandwidth to be had right now with a 512-bit bus than a 256-bit bus.


----------



## Rebo&Zooty (May 30, 2008)

Darknova said:


> You actually didn't read the article did you?
> 
> Read the article, then try talking to us again.



Might as well give up, he's just a hater. Hell, if btarunr has to slap him for it and he STILL doesn't listen, then you KNOW he's far from rational.


----------



## yogurt_21 (May 30, 2008)

btarunr said:


> ATI started work on the R700 architecture at about the same time they released the HD 2900XT. Granted, GDDR5 was unheard of then, but the RV770 still ended up with a GDDR5 controller, didn't it? That goes to show that irrespective of when a company starts work on an architecture, something as modular as a memory controller can be added even weeks before the designs are handed over to the fabs for an ES and eventually mass production.
> 
> So when NV started work on the GT200 is a lame excuse.



For the record, ATI started working on the R700 before the R480 (X850XT) launched. And I imagine the GT200 has been in development just as long. These designs don't roll out overnight, you know. lol


----------



## yogurt_21 (May 30, 2008)

candle_86 said:


> ATI made GDDR5, bta, didn't you get the memo? They have most likely been working on it just as long. I approve of what Nvidia is doing; using known tech with a wider bus is just as effective, and there is less chance of massive latency issues like there will be with GDDR5. I prefer tried and true. This will be the 2nd time AMD has tried something new with their graphics cards, and this will be the 2nd time they fail. I was dead right about the 2900XT failing, I said it would before it even went public, and I'll be right about this.



Increasing the bus size also increases the memory latency. I mean, seriously, it's like a bunch of schoolchildren arguing about who fired the first shot in the American Revolutionary War, as if they'd be the experts on that.

edit: and all hail candle, the world's most supreme expert on graphics cards. He knows all, sees all, and predicts the future!

Seriously, dude, don't get all high and mighty in front of your computer screen; you're not the expert on this subject, and you frequently show that. I'd think twice if I were you about making vast predictions on which cards will do well and which won't.


----------



## largon (May 30, 2008)

Darknova said:


> candle_86 said:
> 
> 
> > where do you get that idea, GDDR5 @ 3000mhz on a 256bit bus = 96*GB*/s
> ...


*Darknova*,
I'm not sure what you are trying to say; *candle_86*'s calculations are correct. And if you're referring to the following paragraph:





			
extremetech said:

> Bandwidth first: A system using GDDR3 memory on a 256-bit memory bus running at 1800MHz (effective DDR speed) would deliver 57.6 GB per second. Think of a GeForce 9600GT, for example. The same speed GDDR5 on the same bus would deliver 115.2 GB per second, or twice that amount.


This is just BS. Or, more likely, an unintentional lapse from the author. [edit: no, it isn't] It will soon be changed to something like [edit: no, it won't]:


			
extremetech said:

> Bandwidth first: A system using GDDR3 memory on a 256-bit memory bus running at 1800MHz (effective DDR speed) would deliver 57.6 GB per second. Think of a GeForce 9600GT, for example. A *double speed* GDDR5 *on a bus half as wide* would deliver *an equal amount*.
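The quoted figures are easy to reproduce if "same speed" is read as the same command clock, with GDDR3 moving two bits per pin per command-clock cycle and GDDR5 four. A sketch of the article's numbers (not a spec quote):

```python
def peak_gb_s(cmd_clock_mhz: float, bits_per_pin_per_clock: int,
              bus_bits: int) -> float:
    # transfers/s per pin * bytes across the whole bus per transfer
    transfers_per_s = cmd_clock_mhz * 1e6 * bits_per_pin_per_clock
    return transfers_per_s * (bus_bits / 8) / 1e9

# 256-bit bus, 900 MHz command clock ("1800 MHz effective DDR"):
print(peak_gb_s(900, 2, 256))  # GDDR3: 57.6 GB/s
print(peak_gb_s(900, 4, 256))  # GDDR5: 115.2 GB/s, twice that amount
```

Which is exactly why largon's rewording and the original paragraph describe the same relationship.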


----------



## Darknova (May 30, 2008)

Largon, that's the SAME thing, in different words.


----------



## largon (May 30, 2008)

Gah. Fixed.
Too much sun for me today. And it's almost 1AM here...


----------



## v-zero (May 30, 2008)

Memory bandwidth isn't important enough to garner such attention. Improvements in core architecture lead to much greater advances than memory technology. At the high end, even with 256-bit memory buses, GDDR3 at 2GHz (effective) still gives us 64GB/s of bandwidth, which 90%+ of the time is not fully utilized because the bottleneck lies in another part of the chip.
I guess my point is that on the current (and I'm guessing the next, but we'll have to wait and see) generation of products, the memory bandwidth advances supplied by GDDR4/5 over GDDR3 are negligible compared to the obvious bottlenecks in core GPU design.


----------



## imperialreign (May 30, 2008)

Jumping in late to the "argument", but I'd like to offer some thoughts.


Yes, GDDR5 will probably call for extremely high latencies, as did GDDR4 compared to GDDR3, and GDDR3 compared to GDDR2.

But what you forget to take into account is that GDDR5 should be able to move more information per clock cycle, just as GDDR4 did compared to GDDR3, and GDDR3 compared to GDDR2.

What that translates to is higher latencies, but more information being transferred, which nullifies any drawback of having to run higher latencies.

And coupled with the fact that newer memory designs allow for higher-clocked MEM and require less voltage, that makes them more efficient than their predecessors.


Just like with SYS MEM - DDR3 can move more information than DDR2 can, and runs faster as well.  Sure, it's possible that DDR2 clocked at 1200 will run 51ns latencies, but so can DDR3 at 1600MHz . . . and which standard transfers more information?  The more information that can be moved into and out of the DRAM matrix per clock cycle, the less time you spend waiting for things to load up.
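The trade-off imperialreign describes can be put in rough numbers. A minimal sketch, assuming illustrative CAS values (CL5 for DDR2-800, CL9 for DDR3-1600 - example timings, not quoted from any datasheet): absolute latency in nanoseconds is cycles divided by the command clock, so more cycles at a higher clock can come out even or ahead.

```python
def cas_latency_ns(cas_cycles, effective_mhz):
    # The command/IO clock runs at half the effective (DDR) data rate.
    io_clock_mhz = effective_mhz / 2
    return cas_cycles * 1000.0 / io_clock_mhz

# Hypothetical timings for illustration only:
print(cas_latency_ns(5, 800))    # DDR2-800 CL5  -> 12.5 ns
print(cas_latency_ns(9, 1600))   # DDR3-1600 CL9 -> 11.25 ns
```

So the DDR3 part "runs higher latencies" in cycles yet waits slightly less in wall-clock time, while moving more data per access.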


----------



## InnocentCriminal (May 31, 2008)

Thermopylae_480 said:


> Comments such as this cause just as many problems on this forum as "Fanboys."  If you have a problem with another user please use the report post button rather than inflaming situations with negative comments and attitude.



I didn't mean exclusively in this forum.


----------



## WarEagleAU (May 31, 2008)

Wow, a lot of useful information on here that I didn't know. And no, I'm not referring to the article; I'm actually speaking about how the memory latencies, speeds and such translate to bandwidth. I'll take a further look at this article, it seems like an awesome read.



----------



## FR@NK (May 31, 2008)

largon said:


> This is just BS. Or, more likely an unintentional lapse from the author. It will soon be changed to something like:


You don't understand how GDDR5 works...



			
extremetech, then twisted by largon, said:

> Bandwidth first: A system using GDDR3 memory on a 256-bit memory bus running at 1800MHz (effective DDR speed) would deliver 57.6 GB per second. Think of a GeForce 9600GT, for example. A double speed GDDR5 on a bus half as wide would deliver an equal amount.



GDDR3 memory on a 256-bit memory bus running at 1800MHz (effective DDR speed) would deliver 57.6 GB per second:

1800 effective-----256-bit--------2 bits per cycle

(900 MHz) * (256 bits/interface) * (2 bits / Hz) = 460800Mbit/s or 57.6GB/s


A double speed GDDR5 on a bus half as wide:

---Doubled-------half of 256-bit------4 bits per cycle------*STILL TWICE THE BANDWIDTH*

(900*2 MHz) * (128 bits/interface) * (4 bits / Hz) = 921600Mbit/s or 115.2GB/s
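FR@NK's arithmetic above can be sanity-checked with a quick sketch (the function name is mine; "GB" is decimal, as in the article):

```python
def bandwidth_gbs(clock_mhz, bus_bits, bits_per_clock_per_pin):
    # clock (MHz) x bus width (bits) x bits transferred per pin per clock,
    # converted from bits/s to decimal GB/s
    return clock_mhz * 1e6 * bus_bits * bits_per_clock_per_pin / 8 / 1e9

print(bandwidth_gbs(900, 256, 2))    # GDDR3, 256-bit               -> 57.6
print(bandwidth_gbs(1800, 128, 4))   # "double speed" GDDR5, 128-bit -> 115.2
```

Doubling the clock and the bits-per-clock while halving the bus comes out to exactly twice the GDDR3 figure, which is the point under dispute.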



			
extremetech said:

> Take any GDDR3 bandwidth on a given clock rate and bus width and double it, and you get GDDR5's bandwidth.



This is because GDDR3 can output 2 bits per clock cycle and GDDR5 can output 4 bits per clock cycle.



			
Qimonda GDDR5 Whitepaper said:

> GDDR5 operates with two different clock types. A differential command clock (CK) to where address and command inputs are referenced, and a forwarded differential write clock (WCK) where read and write data are referenced to. Being more precise, the GDDR5 SGRAM uses two write clocks, each of them assigned to two bytes. The WCK runs at twice the CK frequency. Taking a GDDR5 with 5 Gbps data rate per pin as an example, the CK clock runs with 1.25 GHz and WCK with 2.5 GHz. The CK and WCK clocks will be aligned during the initialization and training sequence. This alignment allows read and write access with minimum latency.
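The whitepaper's clock relationship is easy to sketch, assuming only what the quote states (data is DDR relative to WCK, so WCK is half the per-pin data rate, and CK is half of WCK):

```python
def gddr5_clocks_ghz(data_rate_gbps_per_pin):
    # WCK carries data on both edges -> WCK = data rate / 2;
    # the command clock CK runs at half of WCK.
    wck = data_rate_gbps_per_pin / 2
    ck = wck / 2
    return ck, wck

print(gddr5_clocks_ghz(5.0))  # (1.25, 2.5), matching the whitepaper's example
```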



EDIT:

Thanx for the good read HTC


----------



## Spirou (Jun 1, 2008)

candle_86 said:


> on paper the 2900XT should have crushed all comers; instead it barely put up a fight against the 8800GTS 640. AMD can look great on paper, but give me some proof they can compete.



Your wish is my command!

Samsung K4U52324QE-07 GDDR4 0.714ns at work on a Sapphire Atlantis 3870 Silent OC (card is not modified at all) in two of the most demanding games ever: 


 

To the left: CMR DiRT at maxed settings in 1280*1024 with 4xAA, 8xAF, AAA, VSync on, Mipmap quality max, image shows a complete run of Magneti Marelli Crossover.

To the right: Crysis (V1) at tweaked very high DX9 settings in 1280*1024 with 4xAA, 8xAF, AAA, VSync,  Mipmap quality max, image shows combat at the end of the road in first level.

Is there any Nvidia-card on the market that can come close to this?

PS: You can see up to 20% CPU limitation in my screens, so this is not the top speed of this card.


----------



## largon (Jun 1, 2008)

FR@NK said:


> A double speed GDDR5 on a bus half as wide:
> 
> ---Doubled-------half of 256-bit------4 bits per cycle------*STILL TWICE THE BANDWIDTH*
> 
> (900*2 MHz) * (128 bits/interface) * (4 bits / Hz) = 921600Mbit/s or 115.2GB/s


That's not correct either.
There is no such thing as 1800MHz GDDR5. In fact, you didn't understand how I understood it wrong. 
So the double-speed GDDR5 ("900*2 MHz" as you said) makes no sense and the real comparison is:

_900MHz GDDR5 * 128bit * 4bits/clk_ = same bandwidth as _900MHz GDDR3 (same frequency, de facto) * 256bit bus (doubled bus) * 2bits/clk_. 

GDDR5 is in fact "fake QDR"; I thought it was just ~double in frequency, like 2GHz real (DDR-4000). Damn those people that mix up _DDR-ratings with MHz_. There are tons of places on the net where you can see things like 4.0*GHz* GDDR5 - as if it were DDR-4000. Like Wikipedia (duh) - until I edited the wiki article. Maybe JEDEC should've simply called it QDR, not DDR; does it really matter how many datalinks (DQs) you use if it's actually 4 bits/clock = quad data rate by definition? For some reason it's kind of disappointing to know it's just wider, not faster in frequency. GDDR5 is sort of like a "dualcore RAM". 

Hmm... Or maybe GDDR5 should be called QDDR (quasi double data rate)... 




			
Spirou said:

> To the right [link]: Crysis (V1) at tweaked very high DX9 settings in 1280*1024 with 4xAA, 8xAF, AAA, VSync,  Mipmap quality max, image shows combat at the end of the road in first level.
> 
> Is there any Nvidia-card on the market that can come close to this?


Your video memory usage graph gives away that your results are with CCC-forced AA (= 0xAA). That would mean only built-in edge AA is applied. If you want AA actually applied, choose 4xAA from the in-game options. 
4xAA at 1280x1024 takes ~600MB.

Anyways, I'm running at stable 50FPS at the same settings as you.


----------



## Spirou (Jun 1, 2008)

largon said:


> Your video memory usage graph gives away that your results are with CCC-forced AA (= 0xAA). That would mean only built-in edge AA is applied. If you want AA actually applied, choose 4xAA from the in-game options.
> 4xAA at 1280x1024 takes ~600MB.



Actually I ran MSAA (Wide Tent Samples 8X), but there is no difference in memory usage at all. Anti-aliasing (like anisotropic filtering) is based on math functions that don't need memory at all when rendered properly. It can be emulated through large tables to reduce shader usage, reducing texturing bandwidth (using more TMUs) to address filtered data directly from memory, but that does not look like true AA and AF.

However: Crysis doesn't use more than 420 MB on HD 38x0 no matter which setting is chosen. Memory usage on Nvidia cards is much higher due to their specific rendering strategy and chip-design.



> Anyways, I'm running at stable 50FPS at the same settings as you.



You must be joking. Even extreme overclocking won't get you higher than 40 million tris per sec, which is the average amount for high(!) settings at 40 fps. Usually only SLI setups can go that far. With a single GPU you won't get much higher than 30 million tris per sec. I've seen a lot of Crysis benches, and as I write this no one has ever reached 55 million tris per sec. So your screenie simply does not show the same settings.

At 1280*1024 and very high settings plus 4xAA and 8xAF, an 8800 Ultra OC reaches 15 fps*, and I am very proud to get 18-22 fps from my card. With tweaked settings between high and very high, I doubt that any single-GPU setup can beat 30 fps with less than 600 GFlops and 85 GB/s of memory bandwidth (fully available, not affected by addressed filtering).

* http://www.tomshardware.com/de/fotostrecken/grafik_cpu_leistung2,0101-58000-0-14-15-1-jpg-.html#


----------



## largon (Jun 1, 2008)

*Spirou*,
I ran it again with the same DX9 very high tweak (all knobs @ max) + *in-game selected 4xAA* + driver forced 8xAF + trilinear filtering + vsync:
-> 30-35FPS

Too bad FSAA is so much heavier to run than those full screen & texture blurring tent AAs on Radeons.


----------

