# AMD Radeon HD 6700 Series ''Barts'' Specs Sheet Surfaces



## btarunr (Sep 27, 2010)

Here is the slide we've been waiting for: the specs sheet of AMD's next-generation Radeon HD 6700 series GPUs, based on a new, radically redesigned core codenamed "Barts". The XT variant denotes the Radeon HD 6770, and Pro denotes the HD 6750. AMD claims that the HD 6700 series will pack "Twice the Horsepower" of the previous-generation HD 5700 series. Compared to the "Juniper" die that went into making the Radeon HD 5700 series, Barts features twice the memory bandwidth thanks to its 256-bit wide high-speed memory interface, key components such as the SIMD arrays are split into two blocks (like on Cypress), and we're now learning that it uses a more efficient 4-D stream processor design. There are 1280 stream processors available to the HD 6770 (Barts XT), and 1120 stream processors to the HD 6750 (Barts Pro). Both SKUs use the full 256-bit memory bus width.

The most interesting specification here is the shader compute power. Barts XT churns out 2.3 TFLOP/s with 1280 stream processors and the GPU clocked at 900 MHz, while the Radeon HD 5870 manages 2.72 TFLOP/s with 1600 stream processors at 850 MHz. So indeed, the redesigned SIMD core is working its magic. Z/Stencil performance also shot up by more than 100% over the Radeon HD 5700 series. Both the HD 6770 and HD 6750 will be equipped with 5 GT/s memory chips, at least on the reference-design cards, which are technically capable of running at 1250 MHz (5 GHz effective), though they are clocked at 1050 MHz (4.20 GHz effective) on the HD 6770, and 1000 MHz (4 GHz effective) on the HD 6750. Although these design changes will inevitably result in a larger die compared to Juniper, it could still be smaller than Cypress, and hence more energy-efficient.
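For reference, the quoted peak-throughput figures follow directly from stream processor count × engine clock × 2 FLOPs (one multiply-add) per SP per cycle. A minimal sketch of that arithmetic, using the clocks and SP counts from the slide:

```python
# Theoretical single-precision peak: SPs x clock x 2 FLOPs (one MAD) per cycle.

def peak_tflops(stream_processors: int, clock_mhz: float) -> float:
    """Peak shader throughput in TFLOP/s for a given SP count and core clock."""
    gflops = stream_processors * (clock_mhz / 1000.0) * 2  # GFLOP/s
    return gflops / 1000.0                                 # -> TFLOP/s

print(peak_tflops(1280, 900))  # Barts XT (HD 6770): 2.304 TFLOP/s
print(peak_tflops(1600, 850))  # Cypress XT (HD 5870): 2.72 TFLOP/s
```

Both results line up with the slide's 2.3 and 2.72 TFLOP/s figures.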





*View at TechPowerUp Main Site*


----------



## DRDNA (Sep 27, 2010)

Nice ...thank you kind Sir!
Can't wait to see some benching!


----------



## MoonPig (Sep 27, 2010)

So the 6770 will be ever so slightly more powerful than the 5870? Must be why it needs 2x 6-pin.

The 5770 wasn't more powerful than the 4870... so maybe the 6970 will be stupidly powerful... haha


----------



## Roph (Sep 27, 2010)

Looks like my next card will be a 6700 series card. Good job AMD, my money is waiting for you


----------



## caleb (Sep 27, 2010)

Is the 6750 the next 5750 or 5850?
This naming scheme is starting to be confusing


----------



## btarunr (Sep 27, 2010)

caleb said:


> Is the 6750 a next 5750 or 5850 ?
> This naming scheme is starting to be confusing



6750 is the next 5750.


----------



## Loosenut (Sep 27, 2010)

Wonder if they improved scaling for xfire?


----------



## Zehnsucht (Sep 27, 2010)

These better rofl stomp the existing 58xx series, otherwise those will never go down in price 

Look at the lowest price (in SEK) for the 5850. It's higher now than a year ago.


----------



## btarunr (Sep 27, 2010)

Blind guess: if Cayman is 640(x4), keeping with the trend, it should feature 2560 stream processors.


----------



## de.das.dude (Sep 27, 2010)

thanks doc!


----------



## 983264 (Sep 27, 2010)

*Omg*

It looks like the 6770 OVERPOWERS the 5850, although the memory clock of the 6770 (1050MHz) is a bit lower than the 5770's (1200MHz)... But the core clock is much higher than the 5870's (850MHz vs. the 6770's 900MHz at stock)...

BTW, is the 6770 128bit or 256bit?


----------



## Tatty_One (Sep 27, 2010)

983264 said:


> It looks like the 6770 OVERPOWER the 5850, although the Memory clock of the 6770(1050MHz) is a bit lower than the 5770(1200MHz)... But the core clock is much higher than the 5870(850MHz vs. 6770's 900MHz in stock)...
> 
> BTW, is the 6770 is 128bit or 256bit?



he said 256bit.


----------



## yogurt_21 (Sep 27, 2010)

btarunr said:


> Blind guess, if Cayman is 640(x4) keeping up with trend, it should feature 2560 stream processors.



There is also a trend between the 5700's and 5800's of doubling the ROPs, memory bus width, and texture units.

Though that would create one monster card, and it seems like it would be too expensive.


----------



## MrMilli (Sep 27, 2010)

I'll quote:
_Northern Islands is the next generation of AMD GPU chips that will arrive in 2010 as a successor to Evergreen. Previously, it was assumed that a product codenamed Southern Islands would appear first using Evergreen shaders surrounded by new "Uncore" components, but more recent news suggest that AMD is jumping directly to their next generation shaders on the existing 40 nm TSMC fabrication process.

The biggest change is in the shaders, they have gone from a 4 simple + 1 complex arrangement to a 4 medium complexity arrangement. This should end up no slower than the old way for simple calculations, the overwhelming majority of the workload, but also be faster for most of the complex operations. [...] Since the shader count is 80% of the old grouping, there is some space saved, and on top of that AMD has had a lot of time to optimize area. On the down side, each shader is marginally bigger, but the end result is a cluster of four new shaders that is smaller than the old 4+1 group, and faster too.
— Charlie Demerjian_


----------



## HXL492 (Sep 27, 2010)

btarunr said:


> 6750 is the next 5750.



The 6750 may be the next 5750 but it will bring a huge performance increase. Just like how the 5770 performs like the 4870


----------



## btarunr (Sep 27, 2010)

HXL492 said:


> The 6750 may be the next 5750 but it will bring a huge performance increase. Just like how the 5770 performs like the 4870



Where did I say on the contrary?


----------



## wahdangun (Sep 27, 2010)

So it's just like Evergreen to RV770 all over again, and I'm glad AMD kept their promise to double everything every generation (performance-wise).


----------



## HXL492 (Sep 27, 2010)

btarunr said:


> Where did I say on the contrary?



Sorry let me reword that

The 6750 is the next 5750 plus it will bring a huge performance increase


----------



## Kitkat (Sep 27, 2010)

Sounds cool. I'm switching to water soon so I may skip, but if those numbers are true I might haveta look first lol.


----------



## JATownes (Sep 27, 2010)

This makes me happy that I held on to my 4850s for this long.  Skipping the 5000 series might not have been such a bad idea after all.


----------



## shb- (Sep 27, 2010)

I wonder what's the source of this info, e.g. where those dudes @ pc-in-life got it.


----------



## CDdude55 (Sep 27, 2010)

Very nice specs for a midrange performance card; doubling everything is never a bad thing.

Though I am really waiting to see what the 68** series is all about. And if they're good, hopefully I'll have a job by then to get a card in that series (and that's if it can beat my current GTX 470 by a good amount).


----------



## TheMailMan78 (Sep 27, 2010)

6770=5850. Ok WTF.


----------



## bear jesus (Sep 27, 2010)

TheMailMan78 said:


> 6770=5850. Ok WTF.



Does that mean 6870 = 5970? 
I must admit i would love that.


----------



## 20mmrain (Sep 27, 2010)

So from what I can tell the 6770 will be around a 5850/5870 performance area? And the 6750 will be around the 5830/5850 performance area?

With powerful enough tessellation (rumored) to take down the GTX 400 series. Man, if this is the case... why spend $599 for a 6870 when you could get two of these to trounce a 6870 and be able to play any game out there?

Hopefully they won't do it like Nvidia did and only allow two-way Xfire/SLI. Because if they allowed 3-way... while it might hurt 6870 sales... it would kill any GTX 460 sales for sure! I thought that was the whole idea of releasing these cards first anyway, wasn't it?

Ahhh just hoping...3 or 4 of these would be really fun!!!


----------



## LAN_deRf_HA (Sep 27, 2010)

I'd say nvidia is in trouble, as there's no way they'll have cards ready to counter this so soon after rolling out the 450/460.... but realistically they'll do fine. AMD and nvidia will just slot all their cards in between each other like they've been doing with the 4xx and 5xxx series. There won't be much of a price war now that they've figured out how to price fix without exchanging words. All that plus nvidia has a big enough fan base that they'll buy their products regardless of performance, either for fanboyism or legitimate driver preference.


----------



## CDdude55 (Sep 27, 2010)

20mmrain said:


> So from what I can tell the 6770 will be around a 5850/5870 performance area? And the 6750 will be around the 5830/5850 performance area?
> 
> With powerful enough tessellation (Rumored) to take down the GTX 400 series. Man if this is the case... why spend $599 for a 6870 when you could get two of these trounce a 6870 and be able to play any game out there.
> 
> ...



That's a good point, though I hope the 6870 isn't that expensive; $600 should be able to get me a dual GPU card lol.

Not really big on tessellation performance as it's not used in many games anyway, so it's definitely not something I would stress about. If real-world gaming performance is much better on a 68** series card then it has my attention.

I think it also depends a lot on the 6870's performance. Getting two 6770's may equal one 6870, but if the 6870 is powerful enough, I think it would be best to just get one of those and then another one later; two of those should be almost equivalent to 4 6770's, assuming one 6870 is about as powerful as two 6770's (which it probably is), and then you also get the benefit of better scaling with only two cards as opposed to something like 3 or 4.


----------



## mdsx1950 (Sep 27, 2010)

bear jesus said:


> Does that mean 6870 = 5970?
> I must admit i would love that.



Me too!

Just imagine the 6970!


----------



## bear jesus (Sep 27, 2010)

LAN_deRf_HA said:


> I'd say nvidia is in trouble, as there's no way they'll have cards ready to counter this so soon after rolling out the 450/460.... but realistically they'll do fine. AMD and nvidia will just slot all their cards in between each other like they've been doing with the 4xx and 5xxx series. There won't be much of a price war now that they've figured out how to price fix without exchanging words. All that plus nvidia has a big enough fan base that they'll buy their products regardless of performance, either for fanboyism or legitimate driver preference.



I would say from that Nvidia is not so much in trouble, it's just that AMD is looking pretty good. If Bulldozer and the 28nm GPUs do well then I would assume that AMD will start making some good profit over the next year or two, and hopefully we will see a lot more competition out of them.


----------



## TheMailMan78 (Sep 27, 2010)

LAN_deRf_HA said:


> I'd say nvidia is in trouble, as there's no way they'll have cards ready to counter this so soon after rolling out the 450/460.... but realistically they'll do fine. AMD and nvidia will just slot all their cards in between each other like they've been doing with the 4xx and 5xxx series. There won't be much of a price war now that they've figured out how to price fix without exchanging words. All that plus nvidia has a big enough fan base that they'll buy their products regardless of performance, either for fanboyism or legitimate driver preference.



It will always come down to price. Any of these cards run the ports we get nowadays. Get whatever is cheapest or fits your needs as raw power is useless anymore.


----------



## bear jesus (Sep 27, 2010)

mdsx1950 said:


> Me too!
> 
> Just imagine the 6970!



Or in your case two 6970's


----------



## CDdude55 (Sep 27, 2010)

TheMailMan78 said:


> Any of these cards run the ports we get nowadays. Get whatever is cheapest or fits your needs as raw power is useless anymore.



That's true unfortunately.:shadedshu


----------



## mdm-adph (Sep 27, 2010)

Maybe I'm out of the loop, but when did AMD start specifying their stream processors in terms of xxx(x4) instead of just the ridiculously huge numbers like 3000?  

And shouldn't it be x5, anyway?


----------



## LAN_deRf_HA (Sep 27, 2010)

TheMailMan78 said:


> It will always come down to price. Any of these cards run the ports we get nowadays. Get whatever is cheapest or fits your needs as raw power is useless anymore.



That's true, but sadly many many people do not think like that. They want that nvidia GTS 220 in their system because it's clearly the greatest card ever. They've known all their life that nvidia is the best. I mean it must be, that's what all their friends say and the logo is in every game. Now what's this 5970 you keep going on about? These idiots don't even know the name of nvidia's top cards but they'll swear by it just the same. God I hate my clientele.


----------



## bear jesus (Sep 27, 2010)

LAN_deRf_HA said:


> That's true, but sadly many many people do not think like that. They want that nvidia GTS 220 in their system because it's clearly the greatest card ever. They've known all their life that nvidia is the best. I mean it must be, that's what all their friends say and the logo is in every game. Now what's this 5970 you keep going on about? These idiots don't even know the name of nvidia's top cards but they'll swear by it just the same. God I hate my clientele.



I used to have to deal with the same; so many hours wasted trying to explain things to people and just receiving a blank stare in return, or an argument based on brand name. So glad I don't have to deal with that any more.


----------



## kid41212003 (Sep 27, 2010)

If only their driver department was better.


----------



## arroyo (Sep 27, 2010)

Maybe some day the creators of the OMEGA Drivers will rise up and create proper drivers for AMD/ATI Radeons. A few years ago their driver was far better than ATI's own.


----------



## Paintface (Sep 27, 2010)

kid41212003 said:


> If only their driver department was better.



that's so 2003


----------



## TheMailMan78 (Sep 27, 2010)

kid41212003 said:


> If only their driver department was better.





arroyo said:


> Maybe some day creators of OMEGA Drivers would rise and create proper drivers for AMD/ATI Radeons. Few years ago their driver was far better than ATI one.



And maybe one day people will learn how to properly uninstall and install their drivers instead of screaming "ATI DRIVERS SUCKS!" on every forum in the interwebz.


----------



## kid41212003 (Sep 27, 2010)

I haven't bought any ATI cards since the HD 2000 series, so I don't really know. It's just that I have been following the BFBC2 club room for quite a while, and it seems ATI users were having quite a few problems.


----------



## CDdude55 (Sep 27, 2010)

I personally don't see anything wrong with their drivers; then again, I don't really dabble much into the drivers and pick them apart to determine if they're shit or not.

Ran a 5770 and 4870 and they both performed fine with the drivers out at those times.


----------



## mdsx1950 (Sep 27, 2010)

bear jesus said:


> Or in your case two 6970's



Yes yes! 

 Or 4 6870s!


----------



## Jakeman97 (Sep 27, 2010)

Nice, a new series with a little increase in performance. Right now I'm runnin' 5770s, so am just gonna sit back and wait for all those that have to increase the size of their epeen and  then jump to the 5870/90 series at a much lower price than they are currently selling for.  I guess I'm just cheap 'cause I always stay one series behind the latest.


----------



## Anusha (Sep 27, 2010)

TheMailMan78 said:


> 6770=5850. Ok WTF.


I'm confused. You expected more or less?


----------



## TheMailMan78 (Sep 27, 2010)

kid41212003 said:


> I haven't bought any ATI cards since HD2000, so I don't really know. It's just i have been following BFBC2 club room for quite a while, and it seems ATI users were having quite a problem .



Sorry man. That wasn't a dig at you personally. It was just in general. There is nothing wrong with ATI drivers but the crossfire scaling.



Anusha said:


> I'm confused. You expected more or less?


No the new naming is retarded. It makes no sense.


----------



## the54thvoid (Sep 27, 2010)

kid41212003 said:


> I haven't bought any ATI cards since HD2000, so I don't really know.



Ya got it right there kid.

I run 5850s crossfired, and the prob I had was slow map loads in DX10/11, but that got fixed waaaaay back as far as I'm concerned.

I had a pile of pish running SLI'd 7950GTs, and my GTX 295 had a few hiccups.  Both driver teams have issues occasionally.  I'd say on the whole NV averages better drivers*, but it doesn't mean the ATI ones are shit.

I think it's more often flamebait, or used by 'enthusiastic' brand loyalists to put down the other team when it's doing well.

* - oh yeah, apart from this hiccup this year http://www.zdnet.com/blog/hardware/warning-nvidia-19675-drivers-can-kill-your-graphics-card/7551

But on topic: I wasn't expecting that level of performance increase (granted, it's only a slide). That is quite good. Hope the thermals and acoustics are good.


----------



## Lionheart (Sep 27, 2010)

Yes, but can it play Crysis?


----------



## CrystalKing (Sep 27, 2010)

Complete image!






But the name is still wrong!

According to nApoleon's latest confirmation, HD 68xx will be the final name.

Source: ChipHell


----------



## buggalugs (Sep 27, 2010)

There's going to be even less reason to go with SLI or CrossFire given the power of these cards. If Nvidia doesn't support 3 monitors with 1 card, lots more people will choose ATI for multi-monitor setups.


----------



## NdMk2o1o (Sep 27, 2010)

W00t, got my 5770's on ebay and think I am going to wait a few weeks and grab a 6770, as I was going to grab a 460 instead.

To the person who said the 5770 didn't beat a 4870, where have you been?

http://www.techpowerup.com/reviews/HIS/HD_5770/30.html

It beat it by only a few % and lost out at one of the resolutions by a few %, though the point is there was nothing in the overall performance; the new midrange cards are normally on par with the last-gen high end (single GPU).


----------



## CDdude55 (Sep 27, 2010)

CrystalKing said:


> Complete image!
> http://www.chiphell.com/data/attachment/forum/201009/27/100127roko70599pc7p45i.jpg
> 
> But name is still false!
> ...



Nice idle and load wattages. AMD/ATI always excels in that area from the looks of it recently.




NdMk2o1o said:


> To the person who said the 5770 didn't beat a 4870, where have you been?
> 
> http://www.techpowerup.com/reviews/HIS/HD_5770/30.html



They're about the same, with the 5770 getting a _very_ slight edge over it.


----------



## the54thvoid (Sep 27, 2010)

TheMailMan78 said:


> No the new naming is retarded. It makes no sense.



The 6770 (does not) = 5850.

It's about the family performance.  The 6770 is the 4th most powerful performer after: 6970(?) > 6870 > 6850 > *6770*

Whereas: 5970 > 5870 > 5850 > *5770*

They are both fourth in the family.  The relative power of it, i.e. 6770 = 5850 is utterly irrelevant.


----------



## wolf (Sep 27, 2010)

Awesome, good to see some more concrete specs surface, and from the look of things, these midrange cards are going to be beastly.

I for one can't wait to see what the spiritual successor to the legendary 5850 will be; Barts XT looks nice and all, but Cayman Pro is what my sights are set on.


----------



## TheMailMan78 (Sep 27, 2010)

the54thvoid said:


> The 6770 (does not) = 5850.
> 
> It's about the family performance.  The 6770 is the 4th most powerful performer after: 6970(?) > 6870 > 6850 > *6770*
> 
> ...



Performance wise it does. So you can guess what the prices will be and that is whats relevant.


----------



## the54thvoid (Sep 27, 2010)

CDdude55 said:


> Nice idle and load wattages. AMD/ATI always excels in that area from the looks of it recently.



I love you dude.  It's straight man love but love nonetheless.  You have a GTX 470 yet praise AMD.  You my friend are worthy of the *Angelically Unbiased Top Hat Trophy*.


----------



## the54thvoid (Sep 27, 2010)

TheMailMan78 said:


> Performance wise it does. So you can guess what the prices will be and that is whats relevant.



Yes.... the 6850 will be about two arms and an ankle.  For a 6870 it'll be quadriplegia.

I fookin' hope they don't hump us all.


----------



## Benetanegia (Sep 27, 2010)

LAN_deRf_HA said:


> I'd say nvidia is in trouble, *as there's no way they'll have cards ready to counter this so soon after rolling out the 450/460*.... but realistically they'll do fine. AMD and nvidia will just slot all their cards in between each other like they've been doing with the 4xx and 5xxx series. There won't be much of a price war now that they've figured out how to price fix without exchanging words. All that plus nvidia has a big enough fan base that they'll buy their products regardless of performance, either for fanboyism or legitimate driver preference.



If you want to skip my post, the bottom line is Nvidia is not in trouble at all; in fact it is in a much better position than it was with GF100.

Nvidia and AMD have several teams working on different chips and generations of chips, so the fact that one chip is late doesn't affect the others. It does shake the next releases a bit, but mostly from a marketing standpoint, as they first want to sell some high-end chips before they release the perf/price king (i.e. GF100->GF104 == G80->G92). The original schedule at Nvidia was GF100 in Q4 2009, GF104 in Q1 2010 and mainstream/entry in Q2, rinse and repeat with the next gen starting in Q4 2010. So basically GF100 was late by 6+ months, GF104 was late by 3 months and GF106 by 2 months or so. The next gen is not necessarily going to be late, or too late, i.e. a Q1 2011 release. Remember that Nvidia doesn't need any re-design at the moment; they just need to add clusters or SIMDs to GF104 to have a "winner" in comparison to GF100, and that should be enough to compete with HD6000 cards.

For example, without engineers thinking too much (nothing at all, actually), by adding one more cluster to GF104 you end up with a chip slightly smaller than GF100 (less than 3 billion transistors against the 3+ billion in GF100) but with significantly better specs:

Shaders: 480 SP -> 576 SP, 20% increase*
texture units: 64 TMU -> 96 TMU, 50% increase*
ROP: 48 same*
memory: 384 bit same*

* That's without taking into account that GF104 clocks much better than GF100; the new chip could easily be clocked at 800 MHz, and that would mean the new chip would be 30-40% faster than the GTX 480, soundly beating the HD 5970 and probably beating the HD 6870 by the same amount the GTX 480 beats the HD 5870, except this time Cayman is said to be 400 mm² and the NV chip would be a bit smaller than GF100.

On top of that, and considering that TSMC's 40nm finally has the same yields as 55nm, Nvidia could decide to take the risk and, instead of releasing a slightly smaller chip, go with a slightly bigger, but yummy yummy, chip. How? The same chip as mentioned above, except they'd add one more SIMD to the SMs (note how small a change this is and how easy it would be to engineer/release). GF104 is superscalar and its SMs have 3 SIMDs while having 2 schedulers, wasting one scheduler every odd clock cycle because it has no SIMD unit to talk to. The jump to 4 SIMDs is unavoidable at some point, so why not do it now, taking a small risk**? End result (compared to the GTX 480):

768 SPs (+60%), 96 TMU (+50%), 48 ROPs, 384 bit, 750 MHz...

** Small, because at this point 40nm yields are good, they know the process better, and the resulting chip I estimate would have 3.2 billion transistors and be smaller than GT200 on 65nm. That is, it wouldn't be the biggest chip Nvidia has made, and the benefits are enormous.
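A quick back-of-the-envelope check of the percentage gains quoted in the post above (all baseline and target figures are the post's own rumored specs, not official numbers):

```python
# Sanity check of the hypothetical scaled-up GF104 chip vs. the GTX 480
# figures quoted in this post. All numbers are rumors/estimates.

def pct_increase(old: float, new: float) -> float:
    """Percentage increase going from old to new."""
    return (new - old) / old * 100.0

# (spec name, post's GTX 480 baseline, post's hypothetical 4-SIMD chip)
specs = [
    ("stream processors", 480, 768),
    ("texture units",     64,  96),
    ("ROPs",              48,  48),
]

for name, old, new in specs:
    print(f"{name}: {old} -> {new} ({pct_increase(old, new):+.0f}%)")
```

The shader and TMU deltas come out to the +60% and +50% the post cites, with ROPs unchanged.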


----------



## TheMailMan78 (Sep 27, 2010)

the54thvoid said:


> Yes.... 6850 will be about two arms and an ankle.  For a 6870 it'll be quadriplegia.
> 
> I fookin' hope they dont hump us all.



And now you see the light.



Benetanegia said:


> If you want to avoid my post, bottom line is Nvidia is not in trouble at all, in fact it is in a much better position than it was with GF100.
> 
> Nvidia and AMD have several teams working on different chips and generations of chips, so the fact that one chip is late doesn't affect others. It does shake the next releases a bit but mostly from a marketing standpoint, as they first want to sell some high-end chips, before they release the perf/price king (i.e GF100->GF104 == G80->G92). The original schedule in Nvidia was GF100 in Q4 2009, GF104 in Q1 2010 and mainstream/entry in Q2, rinse and repeat with next gen starting in Q4 2010. So basically GF100 was late by 6+ months, GF104 was late by 3 months and GF106 by 2 months or so. Next gen is not going to be late necessarily or too late, i.e Q1 2011 release. Remember that Nvidia doesn't need any re-design at the moment, they just need to add clusters or SIMDs to GF104 and have a "winner" in comparison to GF100 and that should be enough to compete with HD6000 cards.
> 
> ...



So basically what you are saying is Nvidia is not in trouble but is one swing behind. Kinda like the 3870 to ATI.


----------



## wolf (Sep 27, 2010)

TheMailMan78 said:


> So basically what you are saying is Nvidia is not in trouble but is one swing behind. Kinda like the 3870 to ATI.



That's what it sounds like to me. They have quite a few options available for a Fermi refresh that has the potential to be ~50% faster than a GTX 480 in single-card form, IMO.

This next round of GPU wars is certainly going to be an entertaining one.


----------



## Benetanegia (Sep 27, 2010)

TheMailMan78 said:


> So basically what you are saying is Nvidia is not in trouble but is one swing behind. Kinda like the 3870 to ATI.



Yeah, kinda. On the schedule thing yes, like how Ati released RV670 soon after R600; but on the overall picture, not exactly, because I think Nvidia's chip will be faster (in the case of the 768 SP alternative it would be much faster...), though still severely lagging in the perf/mm^2 and perf/watt areas. In any case the situation is going to be much better than Fermi vs Cypress, much much much better.

That's why I think they will definitely not be in trouble. Unless you think they have been in real trouble for the past 2 quarters...


----------



## CDdude55 (Sep 27, 2010)

the54thvoid said:


> I love you dude.  It's straight man love but love nonetheless.  You have a GTX 470 yet praise AMD.  You my friend are worthy of the* Angelically Unbiased Top Hat Trophy*,



 Thanks.


----------



## douglatins (Sep 27, 2010)

OMG FUCKBUNDA! 6770 close to the 5870? Damn i will jizz for the 6870 and 6970

http://fudzilla.com/graphics/item/20315-radeon-hd-6700-detailed-on-slides


----------



## NdMk2o1o (Sep 27, 2010)

CDdude55 said:


> They're about the same, with the 5770 getting a very slight edge over it.



I edited while you were typing this; my point being a 5770 is on par with a 4870. I would say it actually edges it now with driver improvements.


----------



## the54thvoid (Sep 27, 2010)

douglatins said:


> OMG FUCKBUNDA! 6770 close to the 5870? Damn i will jizz for the 6870 and 6970



I think vendors would rather you just pay with cash.  Last time i jizzed for something i ended up in jail.


----------



## HossHuge (Sep 27, 2010)

So if the 6750 = 5850 and the 6770 = 5870, other than less power usage, what's the point in making these? It's not like they have DX12 or something. All they need to do is lower the price of the 5XXX series cards and just come out with the 68XX series.

That being said, anything that drives down prices, I'm in favour of.


----------



## btarunr (Sep 27, 2010)

HossHuge said:


> So if the 6750 = 5850 and the 6770 = 5870.  Other than less power usage, what's the point in making these?



Giving you HD 5870-like performance for $200~250, and HD 5850-like performance for $150~199.



douglatins said:


> OMG FUCKBUNDA! 6770 close to the 5870? Damn i will jizz for the 6870 and 6970



Cayman XT close to HD 5970, Cayman Pro close to (or competitive with) GeForce GTX 480, and Antilles (dual Cayman) matchless. Again, my expectations.


----------



## cheezburger (Sep 27, 2010)

CrystalKing said:


> Complete image!
> http://www.chiphell.com/data/attachment/forum/201009/27/100127roko70599pc7p45i.jpg
> 
> But name is still false!
> ...



Some days ago ChipHell rumored that they had Cayman XT on the bench, with a rumored 1920:120:32 config + 256-bit bus + 6.4 GT/s GDDR5 RAM, but later it turned out to be Barts (6770). So it's no surprise that all of these are merely *camouflage* they create with their partners, trying to confuse NV's next move and attract consumers' attention.



TheMailMan78 said:


> It will always come down to price. Any of these cards run the ports we get nowadays. Get whatever is cheapest or fits your needs as raw power is useless anymore.



I wonder if you are playing Super Smash Bros. 24/7....


----------



## HossHuge (Sep 27, 2010)

btarunr said:


> Giving you HD 5870-like performance for $200~250, and HD 5850-like performance for $150~199.



So I should be able to pick up a 5850 for under $150 soon. Sweet.


----------



## btarunr (Sep 27, 2010)

HossHuge said:


> So I should be able to pick up a 5850 for under $150 soon. Sweet.



Yeah, if AMD partners decide to clear their inventory (which they did not with the HD 4890, even after the HD 5800 series launch).


----------



## HossHuge (Sep 27, 2010)

Of course, depending on when and if, could we see the 6970 cards come out costing over a grand?


----------



## mdsx1950 (Sep 27, 2010)

HossHuge said:


> Of course depending on when and if, could we see the 6970 cards come out with a cost of over a grand?


I hope it costs less than a grand. At least less than $900.


----------



## btarunr (Sep 27, 2010)

HossHuge said:


> Of course depending on when and if, could we see the 6970 cards come out with a cost of over a grand?



Introduction of the HD 5970 did not drastically affect the HD 4890's price. Partners rely on CrossFire sales (people thinking it's better to buy a second card than to get rid of the first one and buy a new-generation card with that money, ending up with higher performance in current-generation applications). The price goes down a little, but not by much. Not even today will you find a brand new HD 4890 for $150.


----------



## TRIPTEX_CAN (Sep 27, 2010)

I'll be waiting for the 7xxx series. This does sound promising, but I hope Nvidia can hit the market with something competitive or ATI's prices will stay astronomical.


----------



## kid41212003 (Sep 27, 2010)

TRIPTEX_MTL said:


> I'll be waiting for the 7xxx series this does sound promising but I hope Nvidia can hit the market with something competitive or ATI's prices will stay astronomical.



Only for the top cards that have higher performance than the GTX 480, though. Anything below that should stay competitive.


----------



## Mindweaver (Sep 27, 2010)

I hope the specs are true or better.


----------



## TheMailMan78 (Sep 27, 2010)

cheezburger said:


> i wonder you are playing super smash bro 24/7....



Trolling me will equal fail for you.



Benetanegia said:


> Yeah, kinda, on the schedule thing yes, how Ati released RV670 soon after R600, but on the overall picture, not exactly, because I think Nvidia's chip will be faster (in the case of the 768 SP alternative it would be much faster...), but still severely lagging in the perf/mm^2 and perf/watt area, but in any case the situation is going to be much better than Fermi vs Cypress, much much much better.
> 
> That's why I think they will definatey not be in trouble. Unless you think they have been in real trouble in the past 2 quarters...


I don't agree with you 100% because well... it's you. However, I still agree with this.


----------



## Yellow&Nerdy? (Sep 27, 2010)

I see what AMD is doing here. Because the thermal performance on the current Nvidia top-cards is so bad, they can loosen up their standards too and concentrate on performance instead. So what the 68** series cards will probably be is a slightly smaller and slightly more power-efficient than GF100, but with better performance. As for the 6970 goes, nobody knows...

But personally, I don't think AMD will be able to pull off the 5850 -> 6770, 5970 -> 6870 and so on. Although all the uncore parts of the chips are new, it's still 40nm. I would expect the 6850 to be between the 5870 and the GTX 480/very close to the GTX 480, and the 6870 to be somewhere between the GTX 480 and the 5970. I just hope they don't go cuckoo bananas on the price...


----------



## douglatins (Sep 27, 2010)

I want a properly cooled dual card, like the rev2 of the GTX 295 or like the XFX 5970 gun one


----------



## SNiiPE_DoGG (Sep 27, 2010)

In the chart posted, the TFLOPs columns of the 5850/5870 are switched with the TFLOPs of the 6750/6770... no?

They are the only specs for each card that are out of line....


----------



## laszlo (Sep 27, 2010)

i expect disappointing performance considering it's almost equal to Cypress on paper, just my opinion


----------



## erocker (Sep 27, 2010)

laszlo said:


> i expect disappointing performance considering it's almost equal to Cypress on paper, just my opinion



These new cards on the chart are the mid-range models, replacing the 5750 and 5770 models. The 5850 and 5870 replacement specs are not known yet.

* @ Paintface  If the 6770 is $199 and the 6750 is $159, it will be a win. However, ATi erm.. AMD has been a little greedy with their pricing, so who knows.


----------



## Paintface (Sep 27, 2010)

now the big question is price. will we see Barts XT, performance-wise between the 5850 and 5870, for less than $200 at launch?


----------



## dj-electric (Sep 27, 2010)

wow those specs are really really amazing, i did not expect over a 33% increase in performance in this gpu gen


----------



## Completely Bonkers (Sep 27, 2010)

I would rather have seen a 70% improvement in performance and a 30% reduction in power, to a sub-80W card running nearly silently. Not only would it be more suitable for _my_ purposes, it would have been a great signal to the industry... low power is good.

Nonetheless, 6770 looks good.


----------



## cheezburger (Sep 27, 2010)

Benetanegia said:


> If you want to avoid my post, bottom line is Nvidia is not in trouble at all, in fact it is in a much better position than it was with GF100.
> 
> Nvidia and AMD have several teams working on different chips and generations of chips, so the fact that one chip is late doesn't affect others. It does shake the next releases a bit but mostly from a marketing standpoint, as they first want to sell some high-end chips, before they release the perf/price king (i.e GF100->GF104 == G80->G92). The original schedule in Nvidia was GF100 in Q4 2009, GF104 in Q1 2010 and mainstream/entry in Q2, rinse and repeat with next gen starting in Q4 2010. So basically GF100 was late by 6+ months, GF104 was late by 3 months and GF106 by 2 months or so. Next gen is not going to be late necessarily or too late, i.e Q1 2011 release. Remember that Nvidia doesn't need any re-design at the moment, they just need to add clusters or SIMDs to GF104 and have a "winner" in comparison to GF100 and that should be enough to compete with HD6000 cards.
> 
> ...




768 CUDA : 96 TMU : 48 ROPs, 384-bit bus and 750MHz core clock.....

i wouldn't want to imagine the die size of this monster....perhaps 600mm^2? seriously, both Cayman's and Fermi 2's shader counts have gone way too ridiculous....if Cayman is 640 ALUs in 484mm^2 of die space, i can't imagine Fermi 2 being any size below 600mm^2...



Paintface said:


> now big question is price , will we see barts XT performance wise between 5850 and 5870 for less than 200 at launch?



no, Barts Pro outpaces the 5870 already, and Barts XT may be competitive with the GTX 470/480, according to the benchmarks from Chiphell.


----------



## dalelaroy (Sep 27, 2010)

*Barts Positioning*



caleb said:


> Is the 6750 a next 5750 or 5850 ?
> This naming scheme is starting to be confusing



The Radeon HD 6750 is the new Radeon HD 5830. It is to be positioned against the GTX 460 768MB.

The Radeon HD 6670 (Turks) will be the new Radeon HD 5750. It will offer the DX 9/10 performance of the Radeon HD 4770 and DX 11 performance midway between that of the Radeon HD 5750 and Radeon HD 5770 at the $99 price of the Radeon HD 4770.

In short Turks will edge out the performance of the GTS 450 using less than the 75 watts of the PCIe slot while costing less than $100.


----------



## LAN_deRf_HA (Sep 27, 2010)

Don't want to start an argument, just pointing out that if you haven't encountered much in the way of ati driver issues try dealing in larger volumes. I used to always use nvidia cards in my builds because people preferred the brand and the experience was just slightly smoother on the low-mid end at the time. Then everyone, and I mean like 99% of clients, started hooking these things up to HDTVs. It looks like shit and they sit 2 ft away but w/e. So I switched to ati because the HDTV experience was much nicer with them. Plug and play and you even got sound out by default. I also got a noticeable increase in bugs. Bear in mind these were always fresh installs. If it wasn't flash incompatibilities crashing the whole system it was graphical errors in games. Luckily I've found they'd fix these issues with time. It just often took 4 driver releases to address some of these things. Yeah nvidia apparently has bugs, but I never ran into them. So comparing the two, I'd say they probably have the same amount of driver issues, it's just that nvidia's seem to be more obscure.


----------



## 3volvedcombat (Sep 27, 2010)

All these people defending ATI's drivers and trying to argue that they're good drivers.

I know there are problems that are ridiculous because of ATI drivers. When using a great DX11 5850 or 5870, people still have to take time out of their lives to go find fixes for some games, and probably future games.

It doesn't matter if it's just 1 game or 20 games having issues and needing refreshed hotfixes because not all cards are supported.

With all their flow of cash and rep, they need to completely rework their driver scheme.

I know I really enjoy just grabbing an Nvidia card, updating the drivers from Nvidia, and plugging it in, having it recognized and ready to push fps in games.

Never having to go into the control panel to edit AA settings, shut off extra video-processing settings for some old games, or deal with ridiculous forcing issues.

Nvidia's drivers are really solid. On the ATI side, I've seen the problems, so many of them. People go and download like 10 different 10.x drivers to see which one is the most stable and best performing.

I really never see that with Nvidia drivers, because they're all solid performing, reliable, easy to use, and dependably stable 85-95% of the time.

In ATI's case, that isn't so much the same.

My friend decided to ditch his perfectly good 1GB 4870s and begged me for my old GTX 260.

Many people come into my computer shop and say they've had to tweak something in CCC, or were forced to after googling the problem, just to play a game.


----------



## alwayssts (Sep 27, 2010)

20mmrain said:


> So from what I can tell the 6770 will be around a 5850/5870 performance area? And the 6750 will be around the 5830/5850 performance area?
> 
> With powerful enough tessellation (Rumored) to take down the GTX 400 series. Man if this is the case... why spend $599 for a 6870 when you could get two of these trounce a 6870 and be able to play any game out there.
> 
> Hope fully they won't make it like Nvidia did and only have it two way Xfire/SLI. Because if they allowed 3-way.... while it might hurt 6870 sales.... it would kill any GTX 460 sales for sure! I thought that was the whole Idea of releasing these cards first anyway's wasn't it?



The engineering sample pics show that's exactly what they're doing. This range is limited to 2-way CrossFire. Like you infer, 2-way will likely beat one 6800-series product for a similar amount of money, give or take the benefits of a single card versus CrossFire scaling and minimum frame rates. The question is: does AMD take that hit against the 6800 series, or do they price the 6700 series higher to avoid it? If they do price it higher, they risk giving GF104 parts room to breathe and letting them take those sales from the budget-conscious bang-for-buckers. I personally think they'll look at it this way: it's okay for 6700-series CrossFire to compare with/beat Cayman on bang-for-buck, but avoid cannibalizing 6800-series CrossFire configurations or the COAS (X2) part. Hence, only 2-way. Barts may start with a higher price tag, but I'll bet supply/demand forces them down to the ~$150/200 price range to annihilate the GTX 460.

I wonder when it'll be safe to assume Turks is 640sp/16R/32TMU/128-bit? Smart on AMD's part if they are going this route. Evergreen was 1/4-1/2-1/1 parts in a series, while NI looks to be 1/3, 2/3, 1/1 (granted, likely without the added ROPs and memory controller on Cayman).

Hope that each 640sp (8 SIMDs) cluster has its own setup engine to go along with such a possible divide. If they split tessellation up like that, Barts would be similar to GF104 and Turks similar to GF106, with 2 and 1 triangles per clock respectively. Cayman would be interesting. While GF100 supposedly does 4 triangles per clock, if the 6870 did 3 and was clocked at 900MHz, the GTX 480 and 6870 would essentially be equal in theoretical triangle output. [Math = (0.75 x 900)/700 = 96%]. Obviously implementation and technique come into play, but it's interesting that AMD may use fewer transistors and the clock/watt allowances of 40nm to perhaps achieve the same stock result with less power consumption.
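The throughput comparison above can be sketched in a few lines (a back-of-envelope check using the post's assumed rates: 4 triangles/clock at 700MHz for the GTX 480 versus a hypothetical 3 triangles/clock at 900MHz for Cayman):

```python
# Theoretical triangle setup rate: tris per clock times clock frequency.
def tris_per_second(tris_per_clock, clock_mhz):
    return tris_per_clock * clock_mhz * 1e6

gtx480 = tris_per_second(4, 700)   # 2.8e9 tris/s
cayman = tris_per_second(3, 900)   # 2.7e9 tris/s (hypothetical)
print(cayman / gtx480)             # ~0.964, i.e. the post's ~96%
```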


----------



## Benetanegia (Sep 27, 2010)

cheezburger said:


> 768 cuda: 96TMU:48rops 384bit bus and 750mhz core clock.....
> 
> i wouldn't imagine the die size of this monster....*perhaps 600mm^2*? serious either cayman and fermi 2's shader had gone way too ridiculous in number....if cayman is 640 ALU with 484mm^2 die space i can't imagine fermi 2 will be any size below 600mm^2...



IMO no, not at all. I was thinking of something like 560 mm^2 max (but I'm even questioning that after writing this post; it could actually be smaller!!). It's not GF100-based, but an evolution based on GF104. Remember how I came up with those numbers.

1- First of all, the only thing I did was add one more cluster to GF104. That already means 576 SP : 96 TMU : 48 ROPs : 384-bit. That is exactly 1.5x GF104, or 2.925 billion transistors. Compared to the 3+ billion on GF100, that's actually a 5% reduction. Let's call this one Prototype A.

2- GF104 has the same amount of TMUs and SFUs as GF100 and 75% of the CUDA cores; it also has 66% of the ROPs and memory bus. The end result is a chip with 66% as many transistors, meaning that the extra CUDA cores, TMUs and SFUs don't affect transistor count or die area much, if at all, as long as they are included in existing SMs. To come up with the 768 SP number, the only thing you have to do is add another 16-wide SIMD unit to each shader multiprocessor in Prototype A, which is exactly one of the things that was done between GF100 and GF104. That's why I said it would be slightly bigger than GF100, but TBH, after figuring out both 66% numbers above, how they seem to be related, and how adding all those extra TMUs, SFUs and CUDA cores didn't impact die area at all, I even have to question my first judgement on that. The more I think about it, the more I think Nvidia might be able to create that 768 SP monster in the same die area as GF100, or less!!
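A quick sanity check on the "Prototype A" arithmetic (assuming the widely reported ~1.95B transistors for GF104 and ~3.1B for GF100, figures not given in the post itself):

```python
# Assumed public transistor counts (not from the post).
gf104_transistors = 1.95e9
gf100_transistors = 3.1e9

# "Prototype A": GF104 plus one more cluster, i.e. 1.5x the chip.
prototype_a = 1.5 * gf104_transistors      # 2.925e9
reduction = 1 - prototype_a / gf100_transistors
print(prototype_a, reduction)              # 2.925e9, ~5-6% fewer than GF100
```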


----------



## cheezburger (Sep 27, 2010)

Benetanegia said:


> IMO no, not at all. I was thinking about something like 560 mm^2 max (but I'm even questioning that after writing this post, it could actually be smaller!!). It's not GF100 based, but an evolution based on GF104. Remember how I came up with those numbers.
> 
> 1- First of all the only thing that I did was to add one more cluster to GF104. That already means 576 SP: 96TMU: 48 ROPS: 384 bit. That is exactly 1.5x GF104 or 2.925 billion transistors. Compared to the 3+ billions on GF100, thats actually a 5% reduction. Let's call this one Prototype A.
> 
> 2- GF104 has same ammount of TMUs and SFUs as GF100 and 75% of the cuda cores, it also has 66% of the ROPs and memory bus. The end result is a chip that has 66% as many transistors, meaning that the extra cuda cores, TMUs and SFUs don't affect transistor count or die area too much, if at all, as long as they are included in existing SMs. To come up with the 768 SP number the only thing you have to do is add another 16 way SIMD unit to each Shader Multiprocessor in Prototype A, which is exactly one of the things of what was done between GF100 and GF104. That's why I said it would be slightly bigger than GF100, but TBH after figuring out both 66% numbers above, how they seem to be related, and how adding all those extra TMUs and SFUs and cuda cores didn't impact die area at all, I even have to question my first judgement on that. The more I think about it, the more I think that Nvidia might be able to create that 768 SP monster in the same die area or less!! than GF100.



we all know the CUDA cores take about 65-70% of die space on both GF100 and GF104. GF104 is 336 CUDA in about 367mm^2 of die space, so those 336 CUDA ALUs take 367mm^2 x 0.65 = 238.55mm^2..consider 768 is about 2.28x that space alone...without counting the transistors that form the SIMD clusters/ROPs/RAM bus and texture mapping units. the TMU/SIMD controllers on GF100/104 are about 10% of die space, which makes GF104's TMUs about 367mm^2 x 10% = 36.70mm^2. if we increase the TMUs from 60 to 96, about a 60% increase, 36.7mm^2 x 1.6 = 58.72mm^2..while if the ROPs/bus don't change, the die size will come out like below:

ROPs/bus = 20% of GF100 = 529mm^2 x 0.2 = 105.8mm^2

SIMD/TMU increase from 60 to 96 = 36.70mm^2 x 1.6 = 58.72mm^2

CUDA increase from 336 to 768 = 238.55mm^2 x 2.28 = 545.257mm^2

(105.8mm^2 + 58.72mm^2 + 545.257mm^2) x 105% (hard wiring) = *745.26mm^2*.....

that is huge.....pretty much the largest GPU ever to exist...not *slightly* but completely beefed up..

PS: under 28nm it will be another story....maybe it can only happen on 28nm??

745.26mm^2 x (28nm/40nm)^2 = 365.17mm^2

however, amd can do exactly the same with everything doubled up again...

Cayman in 28nm = 484mm^2 x (28nm/40nm)^2 = 237.16mm^2...so an hd 7878 with 128 ROPs would end up at 484mm^2 again in 28nm...
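The ideal node-shrink scaling used in those last lines is easy to express (this assumes die area scales with the square of the feature size, which real processes only approximate):

```python
# Ideal area scaling between process nodes: area ~ (feature size)^2.
def shrink_area(area_mm2, node_from_nm, node_to_nm):
    return area_mm2 * (node_to_nm / node_from_nm) ** 2

print(shrink_area(745.26, 40, 28))  # ~365.2 mm^2, matching the post
print(shrink_area(484.0, 40, 28))   # ~237.2 mm^2 for a Cayman-sized die
```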


----------



## bear jesus (Sep 27, 2010)

I have to admit all these "leaked" specs are making me hope that the reference coolers (at least for the high-end models) are all vapour-chamber-based, as I would assume that would help with cooling what I would expect to be hotter cards than the 5xxx.


----------



## cadaveca (Sep 27, 2010)

Pretty sure all high-end AMD GPUs will feature vapor-chamber coolers...didn't AMD help develop that tech, or buy it or something?

Cards _sound exciting_, it's just really hard for me to get excited about them at all.


----------



## Benetanegia (Sep 27, 2010)

cheezburger said:


> we all know cuda take about 65~70% of die space in both g100 and g104. which g104 is 336 cuda  with about 367mm^2 die space and 336 cuda ALU took 367mm^2 x 0.65 = 238.55mm^2..consider 768 is about 2.28x of space along...without counting the transistor that form SIMD cluster/rops/ram bus and texture mapping unit.  the tmu/SIMD controller from g100/104 is about 10% of die space which make a g104's tmu about 367mm^2 x 5% = 36.70mm^2. if we increase the tmu from 60 to 96..about 60% increase 36.7mm^2 x 1.6 = 58.72mm^2..while if rops/bus won't change the die size will be come like below:
> 
> rop/bus = 20% of g100 = 529mm^2 x 0.2 = 105.8mm^2
> 
> ...



Sorry, I stopped paying attention after the first line, because it would be pointless. GF104 has 384 CUDA cores, with one SM (48 SPs) being disabled.

Have you read my post at all? Why are you adding A LOT of die area based on linear SP/TMU/etc. increase?? Like I said, in GF104 Nvidia added many SPs and TMUs over the hypothetical 66% of a GF100 chip *and that did not add any transistors*.

I did my numbers too, and the resulting die area is *520mm^2*. Of course it's almost as arbitrary as yours, but at least it's based on the correct number of SPs/TMUs in GF104, and I'm not basing it on how much area each unit takes on GF100, *because it's not going to be based on GF100*... :shadedshu

And just to see how stupid your numbers are, let's calculate Barts and Cayman, shall we?

Barts: It's almost a Cypress, except the shaders are 4D instead of 5D. So the shader/TMU area is 80% that of Cypress, everything else being equal.

Cypress was 2xRV770 

http://img.chw.net/sitio/breves/200812/23_RV770_900SP.jpg 

and as you can see, the SP area is about 1/3 of the chip. So (336*2/3) + (0.8*336/3) = 313mm^2

Cayman is twice that (or so they say), so: 626mm^2. Man, that is HUGE!
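That back-of-envelope estimate can be written out (using the post's own admittedly arbitrary assumptions: ~336mm^2 for Cypress, shaders at 1/3 of the die, 4D shaders at 80% of the 5D area):

```python
# Arbitrary Barts estimate from the post: shrink only the shader third.
cypress_mm2 = 336.0
non_shader = cypress_mm2 * 2 / 3     # ROPs, memory bus, etc. unchanged
shader = cypress_mm2 / 3 * 0.8       # 4D shaders at 80% of 5D area
barts_est = non_shader + shader
print(barts_est, 2 * barts_est)      # ~313.6 and ~627.2 (the doubled "Cayman")
```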


----------



## bear jesus (Sep 27, 2010)

cadaveca said:


> Prertty sure all high-end AMD gpus will feature Vapor-Chamber coolers...didn't AMD help develop that tech, or buy it or something?
> 
> Cards _sound exciting_, it's just really hard for me to get excited about them at all.



I don't know who developed them, but with the 5xxx cards I thought it was said they would be using a new cooling tech, and then only the 5970 had a vapour chamber. If I remember correctly, though, leaked pictures of a low-end 6xxx card's passive cooler had one.

I have to admit I am excited, but not just for the 6xxx cards. I'm excited about my next upgrades, so that includes the 6xxx and 7xxx cards from AMD, the 580 and 680 (assuming) from Nvidia, Intel's Sandy Bridge and AMD's Bulldozer. There is so much next-gen hardware coming out over the next year or two that will be perfect to replace my current setup and move onto something insanely powerful, even if I don't need that much power, and then maybe do it again in about a year or so just for fun


----------



## Drac (Sep 28, 2010)

I can't imagine what the performance will be like below 40nm; this is just awesome.
My future mental list is a motherboard with a 16-core CPU and 32 GB of GDDR5 (cya DDR3) and a 7XXX xd


----------



## cadaveca (Sep 28, 2010)

bear jesus said:


> I have to admit i am excited but not just for the 6xxx cards, im excited upbout my next upgrades so that includes the 6xxx and 7xxx cards from amd, the 580 and 680 (assuming) from nvidia, intel's sandy bridge and amd's bulldozer, there is so much next gen hardware coming out over the next year or 2 that will be perfect to replace my current setup and move onto something insanly powerful even if i don't need that much power and then maybe do it again in about a year or so just for fun



It is just really shocking to me to see them exceed Moore's Law by cutting the time to double computational power in half.

It's almost too fast...software has issues keeping up as it is...

As it is, I hopped on the Eyefinity bandwagon at the launch of the 5-series, so I really cannot make any purchases until I see how Eyefinity performs, and whether some of the bugs that still exist now are gone...this damn corrupting cursor is a real pain in the ass.


----------



## jaredpace (Sep 28, 2010)

Barts = HD 6*8*00 series

NDA is Oct. 21


----------



## 983264 (Sep 28, 2010)

jaredpace said:


> Barts = HD 6*8*00 series
> 
> NDA is Oct. 21
> 
> ...



Is this true or not???????


----------



## EastCoasthandle (Sep 28, 2010)

I would like him to point out and quote specifically the portion of both pics that shows Barts as a 6800 series...


----------



## cheezburger (Sep 28, 2010)

Benetanegia said:


> Sorry I stopped paying attention after the first line, because it would be pointless. GF104 has 384 CUDA cores, with one SM, 48 SPs being disabled.
> 
> Have you read my post at all? Why are you adding A LOT of die area based on linear SP/TMU/etc. increase?? Like I said in GF104 Nvidia added many SPs and TMUs over the hypothetical 66% of a GF100 chip *and that did not add any transistor*.
> 
> ...



sry, maybe I was a little bit incorrect. ok, let's do it again: 384 CUDA take 70% of the die on GF104, 10% goes to SIMD/TMU and 20% to ROPs/bus. then we put these together and speculate how big Fermi 2 will be:

2(367x0.7) + (367x0.1)x1.5 + (367x0.2)x1.5 = 678.95mm^2 x 105% (hard wiring) = 713mm^2

cayman has 60% of die space filled with shader/ALU, 25% for ROPs/bus and 10% for TMU/SIMD

2(336x0.6x0.8) + 2(336x0.1) + 2(336x0.25) = 2 x 278.88 = 557.76mm^2 x 110% hard wiring (512-bit bus) = 613mm^2

result...these two are ridiculously big..........

but if cayman is 1920:96:64 + a 512-bit bus instead of doubled up, it will be

1.5(336x0.6x0.8) + 1.5(336x0.1) + (336x0.25) = 376.32mm^2 x 110% hard wiring for RAM/bus optimization (512-bit bus) = 413mm^2 for cayman

let's go back to Fermi 2: if its CUDA count is 576 instead of a crazy 768

1.2(367x0.7) + 1.5(367x0.1) + 1.5(367x0.2) = 473.43

which shows the ALUs are the reason why a GPU can get oversized...


----------



## cadaveca (Sep 28, 2010)

EastCoasthandle said:


> I would like for him to point out and quote specifically the portion of both pics that shows a bart as a 6800 series...


----------



## bear jesus (Sep 28, 2010)

cadaveca said:


> It is just really shocking to me for them to exceed Moore's Law by reducing the time to double computational power by half.
> 
> It's almost too fast...software has issues keeping up as it is...
> 
> As it is now, I hopped on the Eyefinity bandwagon on launch of the 5-series, so I really cannot make any purchases until I see how Eyefinity performs, and if some of the bugs that are left still existing now are gone...this damn corrupting cursor is a real pain in the ass.



I intended to jump on the Eyefinity bandwagon with a 5xxx card, but I have to admit I'm kind of glad I held off so long. I am hoping that a 6870 might be enough to do Eyefinity without going CrossFire, and that by waiting so long, a lot of the bugs and compatibility issues may have been worked out.


----------



## pantherx12 (Sep 28, 2010)

bear jesus said:


> I don't know who developed them but witht he 5xxx card's i thought it was said they woudl be using a new cooling tech but then only the 5970 had a vapour chamber but if i remember correctly leaked pictures of a low end 6xxx card's passie cooler had one.
> 
> I have to admit i am excited but not just for the 6xxx cards, im excited upbout my next upgrades so that includes the 6xxx and 7xxx cards from amd, the 580 and 680 (assuming) from nvidia, intel's sandy bridge and amd's bulldozer, there is so much next gen hardware coming out over the next year or 2 that will be perfect to replace my current setup and move onto something insanly powerful even if i don't need that much power and then maybe do it again in about a year or so just for fun




5770s had vapour plates too.

But a tiny crappy heatsink on top


----------



## bear jesus (Sep 28, 2010)

pantherx12 said:


> 5770s had vapour plates too.
> 
> But a tiny crappy heatsink ontop



ok, i admit a vapor chamber is useless unless it is connected to a good fin array. i liked the design of the 5970's vapour chamber and fins; maybe the same thing for a 6870, just with copper fins (yes, i know, very unlikely)


----------



## MrMilli (Sep 28, 2010)

Benetanegia said:


> Barts: It's almost a Cypress, except the shaders are 4D instead of 5D. So the shader/tmu area is 80% that of Cypress, everything else being equal.



That's not really correct.
They have gone from a 4 simple + 1 complex arrangement to a 4 medium complexity arrangement. So there's no way to know atm how much die area the new 4D ALU will take compared to the old 5D ALU.


----------



## jamsbong (Sep 28, 2010)

ATI's new processors are called Northern Islands, right? I mean, ATI has always had processors with 5-wide output, and now they're shifting their focus toward extra DP computing, thus the 4-wide output. You can read this (unclear) info at SemiAccurate.

http://www.semiaccurate.com/2010/09/06/what-amds-northern-islands/

What impresses me is that the chip is meant to be a mid-range card with the same number of processors as a 5870 and a 256-bit memory bus. The first thing that comes to mind is that the chip could be at least as big as the 5870's. But if ATI wants to sell this stuff cheaply and still make a good profit, they need to shrink the size while still using 40nm manufacturing.

So I reckon their new Northern Islands chip is even smaller than the previous generation of processors. Now that is something mighty impressive. SemiAccurate says something like 80-90% of the previous-gen processor's size. Possibly 334 x 90% = 301mm^2??? Still much bigger than the 170mm^2 of Juniper, though.

AMD's general direction has been to make smaller chips that maintain the same performance, which results in better energy efficiency and reduced production cost. Bulldozer is a good example, where they make 2 CPUs into 1.15x the size of 1 CPU with minimal compromise in speed.

Overall, I'm really excited about this... and I may be thinking of retiring my loyal 4890 if the 6xxx cards are worthy.


----------



## Benetanegia (Sep 28, 2010)

MrMilli said:


> That's not really correct.
> They have gone from a 4 simple + 1 complex arrangement to a 4 medium complexity arrangement. So there's no way to know atm how much die area the new 4D ALU will take compared to the old 5D ALU.



You missed the part where I said I was going to make an arbitrary calculation. Everything was intended, including basing it on the RV770, because he based his on GF100, etc.

Bottom line is that it's not as easy as saying double the SPs == double the die size, or double that part of the chip, whatever. Like I said, GF104 has much more than 66% of the working units in GF100 put together in 66% of the transistors, my point being that SPs themselves don't take a lot of space, and hence a comparatively small 768 SP Nvidia chip is feasible. Figure out how many they can add through parallelism* until they are close to, or even midway to, Ati's number of SPs.

*Ati has parallelism in the SPs (5D, 4D). Nvidia is adding parallelism with superscalar SIMDs, but it works the same way: it adds more throughput without adding a lot of transistors, at the expense of some inefficiency. That's been Ati's architecture for 5 years already, and they are continuing with it, except they are going with 4D now because it's been found over and over again that their average ALU utilization was around 3.6-4.5 all the time.
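The utilization argument is simple arithmetic (using the 3.6-4.5 average-lanes-busy figures quoted above; this rough model ignores any scheduling differences between the two designs):

```python
# If a shader averages `busy` of its lanes occupied, a narrower design
# wastes less hardware per issued instruction.
for busy in (3.6, 4.0, 4.5):
    util_5d = busy / 5            # utilization of a 5-wide (5D) ALU
    util_4d = min(busy, 4) / 4    # utilization of a 4-wide (4D) ALU
    print(busy, round(util_5d, 2), round(util_4d, 2))
# e.g. at 3.6 lanes busy: 72% utilization on 5D vs 90% on 4D
```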


----------



## wahdangun (Sep 28, 2010)

Benetanegia said:


> snip



please don't discuss nv speculation in here. if you want to make useless speculation with no real data, please create a new thread

too many nv fanboys in here want to derail the thread.


please stay on topic

back on topic: i hope Barts Pro won't be expensive, and i hope this will push developers to push eye candy a lot further,


----------



## CDdude55 (Sep 28, 2010)

wahdangun said:


> please don't discus nv speculation in here, if you wan to make any useless speculation with no real data please create new thread
> 
> too much nv fanboy in here that want to derail the thread.
> 
> ...



Doubt it, TPU = 88% AMD/ATI ''fans''. A percentage i pulled out of my ass, yet close to reality from what i have seen. (if anything, it's AMD fans trolling AMD fans lol)

And i agree, i hope it's cheap.


----------



## wolf (Sep 28, 2010)

oh Benetanegia, how I love reading your posts about chip architecture and the like, and no, that's not sarcasm, i genuinely mean it

really good discussion going on here IMO.


----------



## bear jesus (Sep 28, 2010)

CDdude55 said:


> if anything it's AMD fans trolling AMD fans



 I think you could be right.

To be honest i'm just hoping that the 6870 does not cost too much; there's not enough power in those 6770's (yes i know i don't have a clue how well they perform )


----------



## wahdangun (Sep 28, 2010)

CDdude55 said:


> Doubt it, TPU= 88% AMD/ATI ''fans''. A percentage i pulled out my ass, yet close to reality from what i have seen.(if anything it's AMD fans trolling AMD fans lol)
> 
> And i agree, i hope it's cheap.



yeah, but i'm tired of seeing posts about how the ATI drivers are crap, or hearing unrelated nvidia speculation that doesn't have any proof, in this thread. it just takes away the fun


----------



## bear jesus (Sep 28, 2010)

wahdangun said:


> yeah but i', tiered seeing about that Ati driver was crap, or heard non related nvdia speculation thats don't have any proof in this thread. its just take away the fun



I have to agree, as i have had a 4870 since release and not a single driver problem myself, yet i still know there are some problems with some cards and setups, just like with nvidia. and this is a thread about the Barts specs, not about nvidia's upcoming cards' specs or either company's driver issues.

One thing that is annoying me more, though, is the lack of specs on Cayman so far, as that is the chip that interests me the most. what makes it worse is that for several generations the top-end chip has been basically double the mid-range chip, and all these Barts specs make me wonder if it will be the same this time, or something that's cut down to keep power usage in check. i guess only time will tell.


----------



## wahdangun (Sep 28, 2010)

bear jesus said:


> I have to agree as i have had a 4870 since release and not a single driver problem for myself yet i still know there are some problems with some cards and setups just liek with nvidia and this is a thread about the barts spec spec not about nvidias upcoming cards spec or either companys driver issues.
> 
> One thing that is annoying me more though is the lack of spec on cayman so far as that is the chip that interests me the most, what makes it worse is for several generations the top end chip has been basicly double the mid range chip and all these barts specs make me wonder if it will be the same this time or something thats cut down to keep power usage in check, i geuss only time will tell.



yeah, but you are only partially right: the HD 4770/HD 4750 didn't have half the specs, they're 80% of the HD 4870/50

so maybe cayman just has a 20% increase in SPs. we can't predict it, because there are too many possibilities and combinations. i hope after Barts we can have some info leaked; after all, cayman was planned to be released in november, just a month away from Barts


----------



## bear jesus (Sep 28, 2010)

wahdangun said:


> yeah, but you are partially true, HD 4770/ HD 4750 didn't have half the spec, its 80% of HD 4870/50
> 
> so maybe the cayman just have 20 % increase in SP, we can't predict it because its too many possibilities and combination, i hope after the bart we can have some info leaked after all cayman was planing to be released on november its just a month a way from bart



Very true, and as well the 4670 was less than half of the 4870, so i don't think any chips were exactly half the 4870. i guess i kind of meant around half without knowing what i was saying 

But i think no matter what ati is doing for cayman, it will be a nice bump in speed/power and, if i'm lucky, will allow acceptable framerates using eyefinity on a single 6870 (i hope). now to just hope someone hurries up and starts leaking some specs.


----------



## wahdangun (Sep 28, 2010)

bear jesus said:


> Very true and as well the 4670 was less than half of the 4870 so i dont think any chips were exactly half the 4870, i geuss i kind of meant around half without knowing what i was saying
> 
> But i think no matter what ati is doing for caymen it will be a nice bump in speed/power and if i'm lucky allow acceptable framerates using eyefinity on a single 6870 (i hope), now to just hope someone hurrys up and starts leaking some specs.



yes, maybe the next advancement will be not in eye candy but in how many pixels you can get, because to be honest there are too many console ports; it's really a waste of money. 

shit, why did PC games end up like this: full of consolitis, with crap graphics and poorly written games, and on top of that draconian DRM


----------



## arroyo (Sep 28, 2010)

Why has nobody released an Xbox 360 graphics card yet?
It would be cool to have a PCI-E slot filled with Jasper and Xenos chips on a PCB. There would be no reason to port console games; we would be playing them on PC.


----------



## bear jesus (Sep 28, 2010)

wahdangun said:


> Yes, maybe the next advancement won't be in eye candy but in how many pixels you can push, because to be honest there are too many console ports; it's really a waste of money.
> 
> Shit, why do PC games have to end up like this: full of console-isms, with crap graphics, poorly written games, and on top of that draconian DRM.



Well, I mainly play Source engine games online with friends, so although there's not much eye candy (enough for me), it's very easy on GPUs. I would hope a 6870 would happily run any Source engine game (current and future) maxed out at 5670x1200, and because of that I don't normally play many console ports like everyone else seems to.




arroyo said:


> Why has nobody released an Xbox 360 graphics card yet?
> It would be cool to have a PCI-E slot filled with Jasper and Xenos chips on a PCB. There would be no reason to port console games; we would be playing them on PC.



I don't know; the idea of buying a GPU that's around the ATI/AMD R500/R600 era doesn't sound great to me when I'm thinking about the AMD R900 (Northern Islands), even if it has some eDRAM. I'm happy to just ignore most console ports and play games that are fun to me.


----------



## pantherx12 (Sep 28, 2010)

arroyo said:


> Why has nobody released an Xbox 360 graphics card yet?
> It would be cool to have a PCI-E slot filled with Jasper and Xenos chips on a PCB. There would be no reason to port console games; we would be playing them on PC.




Because no one wants to downgrade


----------



## CDdude55 (Sep 28, 2010)

wahdangun said:


> Yeah, but I'm tired of seeing posts about ATI drivers being crap, or hearing unrelated NVIDIA speculation without any proof in this thread. It just takes the fun away.



But isn't that what everyone is doing in this thread? Every other post is speculation or what people would like to see. When someone points out the shoddy drivers ATI/AMD ships, and it's a fact from what people are saying, then that's just the truth, big deal. Are you that much of a ''fan'' that as soon as anyone mentions a competitor's name in an AMD thread, you assume it's a fanboy trying to derail your ''fun''?


----------



## btarunr (Sep 28, 2010)

arroyo said:


> Why has nobody released an Xbox 360 graphics card yet?



Oh they did. It was called ATI Radeon X1800 XT, and was released in 2005.


----------



## mdsx1950 (Sep 28, 2010)

btarunr said:


> Oh they did. It was called ATI Radeon X1800 XT, and was released in 2005.



That's classic.


But very true at the same time.


----------



## bear jesus (Sep 28, 2010)

btarunr said:


> Oh they did. It was called ATI Radeon X1800 XT, and was released in 2005.



Exactly. I think most people would rather use current hardware to run even bad ports (bad as in, say, GTA IV, which needs way more power to run than it should) with brute force, and still have the power to max out true PC games.


----------



## yogurt_21 (Sep 28, 2010)

983264 said:


> Is this true or not???????


No, it is not true, and it's the most idiotic rumor I've seen across all new product releases. I've never seen a manufacturer release a new series under the naming scheme of its former high end that performed worse than it.

That'd be like Chevy announcing the new Corvette ZR1 and instead delivering the 425 hp V8 Camaro. Fast, sure, but not faster than the 638 hp Corvette the market was expecting.

Barts is the 6700 series, and anyone who says differently is in Charlie's pocket.



EastCoasthandle said:


> I would like for him to point out and quote specifically the portion of both pics that shows Barts as a 6800 series...



Exactly. All that says is that there is a document entitled "HD 6800 Series Launch Guidelines" in which more details on the NDA are listed. It has no bearing on product names, specs, or anything else; it just points you to another document.


----------



## bear jesus (Sep 28, 2010)

yogurt_21 said:


> No, it is not true, and it's the most idiotic rumor I've seen across all new product releases. I've never seen a manufacturer release a new series under the naming scheme of its former high end that performed worse than it.
> 
> Barts is the 6700 series, and anyone who says differently is in Charlie's pocket.



That has to be the worst rumor I've heard in a long time. So many people were quoting the "new" names without citing a source, so I carried on hoping that AMD would not be that stupid.

I for one am looking forward to the AMD 6870 Cayman chip.


----------



## wahdangun (Sep 28, 2010)

CDdude55 said:


> But isn't that what everyone is doing in this thread? Every other post is speculation or what people would like to see. When someone points out the shoddy drivers ATI/AMD ships, and it's a fact from what people are saying, then that's just the truth, big deal. Are you that much of a ''fan'' that as soon as anyone mentions a competitor's name in an AMD thread, you assume it's a fanboy trying to derail your ''fun''?



No, I'm not a fan of either side; I've even built several computers with NVIDIA. But it's hard to follow this thread when someone keeps derailing it with debates that have no relation to the topic, and yes, that ruins my "fun".


----------



## CDdude55 (Sep 28, 2010)

wahdangun said:


> No, I'm not a fan of either side; I've even built several computers with NVIDIA. But it's hard to follow this thread when someone keeps derailing it with debates that have no relation to the topic, and yes, that ruins my "fun".



I see what you're saying and I agree, but that ''fun'' is the same thing you're complaining about. Everyone is speculating and pushing out rumor talk, and if that's the case and you consider it not fun and ''derailified'' lol, then this thread has been derailed for a while now.


----------



## roast (Sep 28, 2010)

I'm liking these specs. Time to move out of the green camp.


----------



## bear jesus (Sep 28, 2010)

roast said:


> I'm liking these specs. Time to move out of the green camp.



With two GTX 285s, are you sure you wouldn't rather wait and see the AMD 6870 Cayman specs?

I can't help but keep bringing it up. Yes, I know the Barts specs are looking very nice, but it's still the Cayman chip that has me the most excited, as it's the chip that could stop me from buying a pair of GTX 460s.


----------



## CDdude55 (Sep 28, 2010)

bear jesus said:


> With two GTX 285s, are you sure you wouldn't rather wait and see the AMD 6870 Cayman specs?
> 
> I can't help but keep bringing it up. Yes, I know the Barts specs are looking very nice, but it's still the Cayman chip that has me the most excited, as it's the chip that could stop me from buying a pair of GTX 460s.



Agreed.

If you're impressed by Barts, then Cayman should be monstrous. Just hope it's decently priced.


----------



## bear jesus (Sep 28, 2010)

CDdude55 said:


> Just hope it's decently priced.



I admit, if it's powerful enough I wouldn't mind paying a reasonably steep price, but I wouldn't be too happy if it were a $600 card. As far as I'm concerned, the days of paying that much for a single-chip GPU should be well and truly dead.


----------



## mdsx1950 (Sep 28, 2010)

I wouldn't mind paying $1000 for a single-GPU card if it performs at least 20% better than the HD 5970.


----------



## WarEagleAU (Sep 28, 2010)

The 6770 and 6750 won't be their high end, will they?


----------



## bear jesus (Sep 28, 2010)

mdsx1950 said:


> I wouldn't mind paying $1000 for a single-GPU card if it performs at least 20% better than the HD 5970.



To be honest, in that situation I would rather it perform 20% faster than a 5970 that has the same clocks as the 5870 (i.e., perfect 100% scaling), be a 2 GB card, and easily clock to a 1 GHz core speed. But I wouldn't really want to pay over $800... OK, that's enough dreaming for now.


----------



## CDdude55 (Sep 28, 2010)

mdsx1950 said:


> I wouldn't mind paying $1000 for a single-GPU card if it performs at least 20% better than the HD 5970.



That's insane lol.

It's inevitable that we'll get single-GPU cards that beat the 5970, that's the way of technology, but I sure as hell wouldn't pay $1000 for one.

Then again, if you're rich and can spend that kind of money on hardware, have at it. I'll most likely be going with whatever is below that card. lol


----------



## jaredpace (Sep 28, 2010)

EastCoasthandle said:


> I would like for him to point out and quote specifically the portion of both pics that shows Barts as a 6800 series...


----------



## bear jesus (Sep 28, 2010)

I just took a look at nordichardware.com, got excited to see an article about the 6870 and 6850, then read it and deflated:

AMD Radeon HD 6870 and 6850 launches on October 18th

What do you all think about that?


----------



## erocker (Sep 28, 2010)

I think the only official-looking anything from AMD says these cards are the 6750 and 6770. Look at the chart in the first post. All of these websites claim all this other information with no sources. Either way, I couldn't care less what they're called.


----------



## bear jesus (Sep 28, 2010)

erocker said:


> I think the only official-looking anything from AMD says these cards are the 6750 and 6770. Look at the chart in the first post. All of these websites claim all this other information with no sources. Either way, I couldn't care less what they're called.



OK, I admit you're right. It really shouldn't matter what they call them, and the only minor concern is people thinking a 6870 beats a 5970, but I guess I shouldn't worry, as I wouldn't be one of those people.


----------



## cheezburger (Sep 28, 2010)

Cayman XT won't be double Barts, which supports the current speculation of 1920:96:64 with a 512-bit bus, rather than the previously assumed 2560:128:64 configuration that would be double Barts. Consider how incredibly huge the die would be with 640 ALUs.


----------



## BondExtreme (Sep 28, 2010)

Can't wait to see the prices on the 6000 series. I'm almost sure it will be better than NVIDIA's new series, but I'm not going to state that as fact. Yet.


----------



## cheezburger (Sep 28, 2010)

erocker said:


> Wonderful. Thanks for stating the same thing over and over again. Of course, most people think you're wrong.



Then bring some evidence to prove me wrong. And how many people think I'm wrong? o_0

Because Cayman will still be 32 ROPs and a 256-bit bus?

So much for 1920:120:32 or 2560:128:"32" + a 256-bit bus + 7 GT/s GDDR5 RAM, unless AMD just wants to make mainstream cards only, but that would be a different story.

Let me tell you something about buses: currently both NV and AMD use 32 bits per ring, which on a 512-bit bus (2900 XT, GTX 280) needs 16 RAM ICs to maintain the hard wiring in the PCB layout. However, if they can tweak the ring bus width from 32 bits to 64 bits per ring, then here we go: a 512-bit bus.



erocker said:


> Wonderful. Thanks for stating the same thing over and over again. Of course, most people think you're wrong. Bringing evidence against speculation will happen when AMD makes formal announcements. You don't need to "tell" me anything. Keep dreaming, though, if it makes you happy.



I'd like to wait for that announcement as well. How much do you want to bet on Cayman's official specification?


----------



## erocker (Sep 28, 2010)

Sounds great. I'll wait for launch. Enjoy your speculation.


----------



## btarunr (Sep 28, 2010)

cheezburger said:


> Cayman XT won't be double Barts, which supports the current speculation of 1920:96:64 with a 512-bit bus, rather than the previously assumed 2560:128:64 configuration that would be double Barts. Consider how incredibly huge the die would be with 640 ALUs.



AMD will not use a 512-bit wide memory interface. Wanna bet?


----------



## 1badtechdude (Sep 28, 2010)

OMG the 6770 is sweet, I think I found an upgrade to my 8800! VERY thankful I skipped the current gen cards. Now AMD just has to stop releasing a new gen every damn year!


----------



## btarunr (Sep 28, 2010)

1badtechdude said:


> OMG the 6770 is sweet, I think I found an upgrade to my 8800! VERY thankful I skipped the current gen cards. Now AMD just has to stop releasing a new gen every damn year!



They've gotten cozy with that schedule: autumn-winter is new AMD GPU time, spring-summer is new NVIDIA GPU time.


----------



## erocker (Sep 28, 2010)

btarunr said:


> AMD will not use a 512-bit wide memory interface. Wanna bet?



He gets my GT 240 if it does have a 512-bit bus. That's a promise... and if he/she wants it.


----------



## btarunr (Sep 28, 2010)

erocker said:


> He gets my GT 240 if it does have a 512-bit bus. That's a promise... and if he/she wants it.



What do I get if it doesn't?


----------



## erocker (Sep 28, 2010)

My GT 240.. if you want it. If you don't want it, you get a sense of satisfaction spurred by common sense.


----------



## dj-electric (Sep 28, 2010)

wait... HD6770 = 1280SP, HD6970 = oh god....


----------



## pantherx12 (Sep 28, 2010)

erocker said:


> My GT 240.. if you want it. If you don't want it, you get a sense of satisfaction spurred by common sense.




I'll take it man


----------



## cheezburger (Sep 28, 2010)

erocker said:


> He gets my GT 240 if it does have a 512-bit bus. That's a promise... and if he/she wants it.



No... even my aged reference 9600 GT can kick its ass... I want your 5850.


----------



## btarunr (Sep 28, 2010)

Dj-ElectriC said:


> wait... HD6770 = 1280SP, HD6970 = oh god....



Antilles? Since we're still stuck at 40 nm, AMD won't go bruteforce with its high-end GPU. All it has to do is outperform the GeForce GTX 480 512 SP (including at EVGA SSC speeds), maintain lower voltages/fan-noise/temperatures, and AMD is set for a long time. NVIDIA won't go beyond enabling the remaining 32 CUDA cores on its GTX 480, GF100 is a fail GPU with thermals. So don't expect NVIDIA to build a bigger GPU than GF100 on the existing 40 nm process. 

So, 1920 4-D stream processors (I'm beginning to doubt AMD will continue to call these "stream processors"), and 2 GB of memory over a 256-bit wide GDDR5 memory interface clocked at 6.40 GHz (1600 MHz), might just do the trick. No doubt 28 nm process will be ready by late Q1, early Q2 at TSMC, but NVIDIA surely won't make a GPU with higher transistor count than GF100 on it right away. So Cayman is going to have a very long stint.

So, 3840 cores on the Antilles. If AMD does decide to double Barts in the SIMD department for Cayman, you're looking at 5120 cores.



erocker said:


> My GT 240.. if you want it. If you don't want it, you get a sense of satisfaction spurred by common sense.



Or, I'll decide your avatar for a month.


----------



## pantherx12 (Sep 28, 2010)

cheezburger said:


> No... even my aged reference 9600 GT can kick its ass... I want your 5850.




Use it as a dedicated PhysX card, pow!


----------



## bear jesus (Sep 28, 2010)

An interesting turn of topic, betting hardware on hardware, although I guess it makes sense here.


----------



## erocker (Sep 28, 2010)

btarunr said:


> Or, I'll decide your avatar for a month.



Well, if that's the case (since I agree with you on the subject), I'll have to take the stance that I believe it will be a 384-bit bus. Yeah, that's it.. It will be 384-bit. If I'm wrong, send me the avatar of your choice.


----------



## yogurt_21 (Sep 28, 2010)

btarunr said:


> Antilles? Since we're still stuck at 40 nm, AMD won't go bruteforce with its high-end GPU. All it has to do is outperform the GeForce GTX 480 512 SP (including at EVGA SSC speeds), maintain lower voltages/fan-noise/temperatures, and AMD is set for a long time. NVIDIA won't go beyond enabling the remaining 32 CUDA cores on its GTX 480, GF100 is a fail GPU with thermals. So don't expect NVIDIA to build a bigger GPU than GF100 on the existing 40 nm process.
> 
> So, 1920 4-D stream processors (I'm beginning to doubt AMD will continue to call these "stream processors"), and 2 GB of memory over a 256-bit wide GDDR5 memory interface clocked at 6.40 GHz (1600 MHz), might just do the trick. No doubt 28 nm process will be ready by late Q1, early Q2 at TSMC, but NVIDIA surely won't make a GPU with higher transistor count than GF100 on it right away. So Cayman is going to have a very long stint.
> 
> ...



Beyond the 512 SP GTX 480 there's still the dual-GF104 rumor to contend with, and that would be a much more powerful card.


----------



## cheezburger (Sep 28, 2010)

pantherx12 said:


> Use it as a dedicated PhysX card, pow!



Not until I get his 5850 first; then I'll trade my 9600 GT to him for the GT 240 for PhysX.




btarunr said:


> Antilles? Since we're still stuck at 40 nm, AMD won't go bruteforce with its high-end GPU. All it has to do is outperform the GeForce GTX 480 512 SP (including at EVGA SSC speeds), maintain lower voltages/fan-noise/temperatures, and AMD is set for a long time. NVIDIA won't go beyond enabling the remaining 32 CUDA cores on its GTX 480, GF100 is a fail GPU with thermals. So don't expect NVIDIA to build a bigger GPU than GF100 on the existing 40 nm process.
> 
> So, 1920 4-D stream processors (I'm beginning to doubt AMD will continue to call these "stream processors"), and 2 GB of memory over a 256-bit wide GDDR5 memory interface clocked at 6.40 GHz (1600 MHz), might just do the trick. No doubt 28 nm process will be ready by late Q1, early Q2 at TSMC, but NVIDIA surely won't make a GPU with higher transistor count than GF100 on it right away. So Cayman is going to have a very long stint.
> 
> So, 3840 cores on the Antilles. If AMD does decide to double Barts in the SIMD department for Cayman, you're looking at 5120 cores.




I don't see any point in adding a ridiculous number of shaders on the existing 40 nm process. Based on my previous calculation, if Cayman is double Barts in everything except the ROPs/bus (as you mention), it will turn out as below if the spec is 2560:128:32 with a 256-bit bus.

Shader die space in Cypress is about 60%, a 4-D shader is about 80% the size of a 5-D shader, and the SIMD controllers and TMUs take about 15%. So: 2(334 x 0.6 x 0.8) + 2(334 x 0.15) + 334 x 0.25 = 320.64 + 100.2 + 83.5 = 504.34 mm^2 + hard wiring = ~510 mm^2.

That is a huge die, and a 510 mm^2 chip with only 32 ROPs? I don't see any reason why we'd need 640 ALUs. Folding@home? And you'd expect a 510 mm^2 chip to use a narrow 256-bit bus?

If the shader count turns out to be 5120 (1280 ALUs), then the die size will be:

4(334 x 0.6 x 0.8) + 4(334 x 0.15) + 334 x 0.25 = 641.28 + 200.4 + 83.5 = 925.18 mm^2 + hard wiring = ~940 mm^2...

Shaders like that are pointless if you don't have more ROPs to push them; G92 was bottlenecked by its 16 ROPs while it had 128 ALUs. And a Cayman with 1280 ALUs but 32 ROPs... that would be a big joke...

If the specification turns out to be 1920:96:64 with a 512-bit bus, the story is vastly different:

1.5(334 x 0.6 x 0.8) + 1.5(334 x 0.15) + 2(334 x 0.25) = 240.48 + 75.15 + 167 = 482.63 mm^2 + hard wiring = ~484 mm^2.

480 ALUs is what we need on the existing 40 nm; no need to go further...
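
The arithmetic above can be sanity-checked with a quick script. This is a minimal sketch of cheezburger's back-of-envelope model under his stated assumptions (Cypress at ~334 mm^2, shaders ~60% of the die, a 4-D shader ~80% the size of a 5-D one, SIMD control + TMUs ~15%, everything else ~25%); the function name and structure are ours, and none of these are official AMD figures:

```python
# Hypothetical die-area model from the post above (not AMD data).
CYPRESS_MM2 = 334.0
SHADER_FRAC = 0.60      # shader share of the Cypress die
VLIW4_SCALE = 0.80      # 4-D shader size relative to 5-D
SIMD_TMU_FRAC = 0.15    # SIMD controllers + TMUs
UNCORE_FRAC = 0.25      # ROPs, memory bus, everything else

def est_die_mm2(shader_mult, uncore_mult=1.0):
    """Scale the shader-related area by shader_mult and the uncore
    area by uncore_mult, then sum (hard-wiring overhead excluded)."""
    shaders = shader_mult * CYPRESS_MM2 * SHADER_FRAC * VLIW4_SCALE
    simd_tmu = shader_mult * CYPRESS_MM2 * SIMD_TMU_FRAC
    uncore = uncore_mult * CYPRESS_MM2 * UNCORE_FRAC
    return shaders + simd_tmu + uncore

print(round(est_die_mm2(2.0), 2))       # 2560:128:32, 256-bit -> 504.34
print(round(est_die_mm2(4.0), 2))       # 5120-SP case         -> 925.18
print(round(est_die_mm2(1.5, 2.0), 2))  # 1920:96:64, 512-bit  -> 482.63
```

The shader multiplier scales the SIMD-related area, while the uncore multiplier doubles only for the hypothetical 512-bit/64-ROP case.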


----------



## btarunr (Sep 28, 2010)

yogurt_21 said:


> Beyond the 512 SP GTX 480 there's still the dual-GF104 rumor to contend with, and that would be a much more powerful card.



Correct, GeForce GTX 49x and GeForce GTX 475 (single GF104, 384 SP, high-clocks, 2 GB?, 4-way SLI support). At best, the GTX 49x might be competitive with the HD 5970, but if a single Cayman XT performs on par with HD 5970, there's little scope for GTX 49x.

GTX 475 is aimed at Barts XT.


----------



## wahdangun (Sep 29, 2010)

erocker said:


> He gets my GT 240 if it does have a 512-bit bus. That's a promise... and if he/she wants it.



So can I join the bet? I really need that GT 240, hahaha.


----------



## AsRock (Sep 29, 2010)

Dj-ElectriC said:


> wait... HD6770 = $300?, HD6970 = oh god....


----------



## JATownes (Sep 29, 2010)

IDK if I like the bet. erocker without a Knight Rider avy would make me sad. Maybe just make him have an avy of Michael Knight.


----------



## wolf (Sep 29, 2010)

cheezburger said:


> Then bring some evidence to prove me wrong.



Or you could bring some evidence to prove yourself right, as opposed to the fruit of your imagination.

I enjoy reading it and all, and if you speculate enough, some part of what you say is bound to be right; it's like flipping a coin.


----------



## cheezburger (Sep 29, 2010)

wolf said:


> Or you could bring some evidence to prove yourself right, as opposed to the fruit of your imagination.
> 
> I enjoy reading it and all, and if you speculate enough, some part of what you say is bound to be right; it's like flipping a coin.



Well, first I was confused by the media about the spec of Barts, which was supposed to be 960 shaders (240 ALUs):48 TMUs:32 ROPs, but that later turned out to be Whistler's spec (which is higher than Barts and performs between Barts and Hemlock). The roadmap suggests that Cayman is double 960:48:32 = 1920:96:64 with a 512-bit bus, while Whistler is 1280:64:32 and Barts is 960:48:32. The 1280:64:32 now attributed to Barts was supposed to be Whistler, which was going to replace Cypress, with Barts as just the mid-range line; and Whistler is a different design rather than a cut-down version of Cayman. But now AMD is just confusing people more, and with some renaming camouflage mixed in, the whole thing gets even more... confusing... I'm sure Chiphell used either a Whistler or a Barts core in their benchmark.


Codenames of AMD's HD 6000 line:

Antilles (6970~6990???)
Cayman (6850/6870~6950/6970???)
Whistler (???~6850/6870???)
Barts (6750~6770??)
Blackcomb
Turks
Caicos


----------



## erocker (Sep 29, 2010)

Whistler? What the heck are you talking about? Show me some info on it? You are making less and less sense.


----------



## bear jesus (Sep 29, 2010)

cheezburger said:


> Well, first I was confused by the media about the spec of Barts, which was supposed to be 960 shaders (240 ALUs):48 TMUs:32 ROPs, but that later turned out to be Whistler's spec (which is higher than Barts and performs between Barts and Hemlock). The roadmap suggests that Cayman is double 960:48:32 = 1920:96:64 with a 512-bit bus, while Whistler is 1280:64:32 and Barts is 960:48:32. The 1280:64:32 now attributed to Barts was supposed to be Whistler, which was going to replace Cypress, with Barts as just the mid-range line; and Whistler is a different design rather than a cut-down version of Cayman. But now AMD is just confusing people more, and with some renaming camouflage mixed in, the whole thing gets even more... confusing... I'm sure Chiphell used either a Whistler or a Barts core in their benchmark.
> 
> 
> Codenames of AMD's HD 6000 line:
> ...



That confused me, but I blame the fact that it's 4:20 am here in Britland.

*edit*


erocker said:


> Whistler? What the heck are you talking about? Show me some info on it? You are making less and less sense.



I'm glad it's not just me who is confused by that post.


----------



## cheezburger (Sep 29, 2010)

erocker said:


> Whistler? What the heck are you talking about? Show me some info on it? You are making less and less sense.



http://en.wikipedia.org/wiki/Northern_Islands_(GPU_family)

Wiki may seem unreliable in some cases (like the comparison-of-graphics-processing-units pages), but this part has been protected by the community, so I'm sure the article is real; it hasn't changed since June/July.

http://wccftech.com/2010/08/27/upcoming-ati-hd-6000-series-codenames-revealed-catalyst-108/
http://www.nordichardware.com/news/...0-product-names-revealed-in-catalyst-108.html

These also prove its existence.


----------



## erocker (Sep 29, 2010)

Whistler is a lower end card coming out next year. Below Barts.


----------



## bear jesus (Sep 29, 2010)

cheezburger said:


> http://en.wikipedia.org/wiki/Northern_Islands_(GPU_family)
> 
> Wiki may seem unreliable in some cases (like the comparison-of-graphics-processing-units pages), but this part has been protected by the community, so I'm sure the article is real; it hasn't changed since June/July.



Wiki is ONLY useful when the information cites a reliable source, and none of the names have a source. I would assume they were sourced from the names within the drivers, but I have never seen any numbers (supposed leaks or otherwise) for anything other than Cayman and Barts, so I don't have a clue.

*edit*


erocker said:


> Whistler is a lower end card coming out next year. Below Barts.



Then I may have seen the picture of that, assuming it was the small passive card. I can't remember the name, as it's too late/early here; I should be asleep.


----------



## cheezburger (Sep 29, 2010)

erocker said:


> Whistler is a lower end card coming out next year. Below Barts.



We don't know... if there's anything other than Barts that's *visible*, it will be Cayman and Caicos, but both of them are still myth-like... We don't really know if these Barts specs are real, or belong to Whistler or even Blackcomb; it's all unknown. Even those prototype cards from Chiphell may have merely changed the cooler while the PCB is still Barts... (or something, idk)

Geez... AMD is acting more like J.J. Abrams as time goes on...



bear jesus said:


> Wiki is ONLY useful when the information cites a reliable source, and none of the names have a source. I would assume they were sourced from the names within the drivers, but I have never seen any numbers (supposed leaks or otherwise) for anything other than Cayman and Barts, so I don't have a clue.



Actually, everything on wiki is reliable except for some fanboyism-controversy areas, like product comparisons based on pure speculation. But once a source comes out, the page gets protected by the community and the fanboys get nowhere.


----------



## erocker (Sep 29, 2010)

Yet we're so darn sure Cayman will be 512-bit. It's not. This year there will be Barts, then Cayman, then the dual-GPU card. One of those names in the list could be for a different dual-GPU card as well.


----------



## cheezburger (Sep 29, 2010)

erocker said:


> Yet we're so darn sure Cayman will be 512 bit. It's not. This year there will be Barts, then Cayman then the dual GPU card. One of those names in the list could be for a different dual GPU card as well.



Then where are you going to put Antilles if Cayman is a dual GPU? Again, we don't know, yeah. That 512-bit was partly a sarcastic reply to the people who insist 32 ROPs + a 256-bit bus are just enough, or to comments like *"oh, I couldn't even fill out my 4870's potential, why get a powerful card anyway; most games are ports, so a cheap card is all that's needed!!"*. Seriously, technology keeps moving forward even when you don't need it, and technology is about performance, not what average people need, nor about efficiency (except the iPhone/iPad...). Average people don't think or act like the elite, and they can't be trusted.

Technology progresses as: ground-breaking brute force (new technology/architecture) => tweaked, efficient redesign (reconfigure/die shrink) => pushing performance further with brute force again. That is what Moore's law is about, and these average consumers are about to destroy it. However, the HD 6000 will be just as Moore's law predicts, and the whole IC industry will follow until the end of humanity! That won't change, ever.

Moore's law and technology only serve the elite, not the average Joe.


----------



## erocker (Sep 29, 2010)

cheezburger said:


> Then where are you going to put Antilles if Cayman is a dual GPU?



Antilles is above Cayman... just like 5870 --> 5970. The way things are going, Antilles is a dual-Cayman (be it XT or Pro) GPU card. Perhaps they'll make a dual-Barts GPU card as well. As for the rest of your post... blah blah, heard it before.


----------



## wahdangun (Sep 29, 2010)

erocker said:


> Well, if that's the case (since I agree with you on the subject), I'll have to take the stance that I believe it will be a 384-bit bus. Yeah, that's it.. It will be 384-bit. If I'm wrong, send me the avatar of your choice.



Then I'm betting on a 256-bit bus with high-speed GDDR5.


----------



## bear jesus (Sep 29, 2010)

wahdangun said:


> Then I'm betting on a 256-bit bus with high-speed GDDR5.



I'm sure I read somewhere that Hynix developed 1750 MHz (7 GHz effective) GDDR5 that was supposed to be in use/available by the end of this year. If that's right, anything at 5 GHz or above should give more than enough bandwidth on a 256-bit bus, even on a top-end chip.
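
For reference, the bandwidth math behind this claim is just bus width in bytes times the effective data rate. A small sketch (the helper name is ours; the HD 5870 comparison figure is its well-known stock spec):

```python
# Back-of-envelope GDDR5 bandwidth: bus width in bytes times the
# effective data rate in GT/s gives GB/s.
def bandwidth_gbps(bus_bits, effective_gtps):
    return (bus_bits / 8) * effective_gtps

# 256-bit bus with the 7 GT/s (1750 MHz command clock) Hynix chips
# mentioned above:
print(bandwidth_gbps(256, 7.0))            # 224.0 GB/s
# For comparison, an HD 5870 (256-bit, 4.8 GT/s effective):
print(round(bandwidth_gbps(256, 4.8), 1))  # 153.6 GB/s
```

So 7 GT/s chips on a 256-bit bus would carry roughly 46% more bandwidth than a stock HD 5870.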


----------



## Wile E (Sep 29, 2010)

TheMailMan78 said:


> And maybe one day people will learn how to properly uninstall and install their drivers instead of screaming "ATI DRIVERS SUCKS!" on every forum in the interwebz.



Maybe AMD will hire something other than monkeys to code their drivers and installers, then maybe I'll stop screaming "ATI DRIVERS SUCKS!" all the time. They even suck on a clean install.

This is the last AMD card I will have until they put out some decent drivers. The last good one was 10.4a. The last good one before that? 8.10

The hardware is great, software is garbage.


----------



## T3kl0rd (Sep 29, 2010)

The 5770 was meant to be roughly equal to the 4870, with marginal improvements and DX11 support. As I predicted, the new 6xxx series will double the power of the 5xxx series despite recycling the 40 nm process. Hopefully I can get another GTX 470 cheap soon. ^^


----------



## cheezburger (Sep 29, 2010)

wahdangun said:


> Then I'm betting on a 256-bit bus with high-speed GDDR5.



Do you honestly believe a high-end card will feature only 32 ROPs and a 256-bit bus while having a ridiculous number of shaders? This is no longer the RV670-to-RV770 transition, where you could add 2.5x the shaders to boost performance; that will not be the case this time. More shaders don't mean a huge boost in performance. Under the same ROPs/bus configuration, a balanced GPU design will outperform a GPU juiced up with more shader ALUs; when the 4830 competed with the 4870, it showed the 4830 was the more efficient of the two. All they need to do is enhance per-ALU performance rather than make cheaper, less complex shaders that need more space and greater numbers to make up the performance.

The rumor of 640 ALUs with 32 ROPs and a narrow 256-bit bus just sounds stupid if everything doubles except the ROPs/bus, since the original R600 design was already unbalanced. If they haven't learned from what happened with RV770, they're pretty much hopeless... and I doubt AMD is dumb enough as a company to follow such an unprofitable plan.

The fact is that adding ROPs/bus width is more profitable than adding ALUs.



> Shader die space in Cypress is about 60%, a 4-D shader is about 80% the size of a 5-D shader, and the SIMD controllers and TMUs take about 15%. So: 2(334 x 0.6 x 0.8) + 2(334 x 0.15) + 334 x 0.25 = 320.64 + 100.2 + 83.5 = 504.34 mm^2 + hard wiring = ~510 mm^2.
> 
> That is a huge die, and a 510 mm^2 chip with only 32 ROPs? I don't see any reason why we'd need 640 ALUs. Folding@home? And you'd expect a 510 mm^2 chip to use a narrow 256-bit bus?
> ...



What's most profitable, if you can't shrink the die because you put too many ALUs on it?

Hard fact: a 480:96:64 with a 512-bit bus would make a 640:128:32 with a 256-bit bus look like shit in terms of die space/power consumption/performance.


----------



## pantherx12 (Sep 29, 2010)

Is it not possible to have 64 ROPs AND a 256-bit bus?


----------



## Tatty_One (Sep 29, 2010)

pantherx12 said:


> Is it not possible to have 64 ROPs AND a 256-bit bus?



No. They come in clusters of 8 within the bus/ROP/SP/TMU relationship. Example: the GTX 480's 384-bit bus divided by 8 = 48 ROPs.
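
That rule of thumb can be written down directly; treat it as a heuristic, not a hardware law, since the ratio has varied between designs. The helper name here is ours:

```python
# Heuristic from the post above: ROP count tracks memory bus width,
# roughly one ROP per 8 bits of bus. Not a hard rule -- designs have
# shipped with fewer ROPs per bus bit (e.g. the 2900 XT's 16 ROPs
# on a 512-bit bus).
def rops_from_bus(bus_bits, bits_per_rop=8):
    return bus_bits // bits_per_rop

print(rops_from_bus(384))  # GTX 480: 48
print(rops_from_bus(256))  # 32
```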


----------



## pantherx12 (Sep 29, 2010)

Tatty_One said:


> No, Clusters of 8..... example, GTX 480.... 384bit bus divided by 8 = 48 ROP's.




But that is not set in stone; see the 2900 XT for past reference.

16 rops over a 512bit bus : ]


64/256 is possible I'm sure of it.


----------



## Tatty_One (Sep 29, 2010)

pantherx12 said:


> But that is not set in stone, see 2900xt for past reference.
> 
> 16 rops over a 512bit bus : ]
> 
> ...



Less is possible of course; more is not, as I understand it.... with more you get the limitations, with less you don't..... Think of a Jaguar S-Type car: it will take a 4.2 litre V8 engine, and the most common engine is the 3 litre V6, but it won't take the old Sovereign 6 litre V12, hence why Jag had to re-design their engines when they redesigned their new model range.....

Crap example I know but I just had to!


----------



## Tank (Sep 29, 2010)

So what's the real name then?

I sure hope AMD doesn't go the NV route of changing naming conventions so early.


----------



## Tatty_One (Sep 29, 2010)

cheezburger said:


> Technology progresses like this: ground-breaking brute force (new technology/architecture) => tweaked, efficient redesign (reconfigure/die shrink) => push performance upward again with brute force. That is what Moore's law is about, and these average consumers are about to destroy it. However, the HD 6000 will be just what Moore's law predicts, and the whole IC industry will follow until the end of humanity! That will never change.
> 
> Moore's law and technology only serve the elite, not the average joe.




Actually you are over-complicating things. I am old so I like it simple, such as..........

"_Developing (new) technology is simply offering more (performance)....... for less (production costs).... to increase profit (margins)_". Where retail costs increase because of that development, it is usually down to one of two factors: they didn't quite get it right, or simple greed.


----------



## bear jesus (Sep 29, 2010)

I honestly had no idea ROPs were tied to memory bus size. Well, a few things make a little more sense now, although I feel a little dumb for not knowing this before.


----------



## Tatty_One (Sep 29, 2010)

bear jesus said:


> I honestly had no idea rop's were tied to memory bus size, well a few things make a little more sense now although i feel a little dumb for not knowing this before



ROPs > TMUs > SPs (CUDA cores etc.) > bus width all form a relationship of sorts; it can get quite complicated. I am not an expert, but there are limitations in some of those relationships.


----------



## dalelaroy (Sep 29, 2010)

*Increasing ROPs*



Tatty_One said:


> less is possible of course, more is not as I understand it.... with more you get the limitations, with less you don't..... think of a Jaguar S Type car, it will take a 4.2 litre V8 engine, the most common engine however is the 3 litre V6   it won't take the old Sovereign V12 6 litre, hence why Jag had to re-design their engines when they redisgned thier new model range.....
> 
> Crap example I know but I just had to!



Saying that doubling the number of ROPs per bus is impossible is absurd. This would imply that, no matter how fast memory gets, the industry would have to go beyond a 512-bit bus to go beyond 64 ROPs. It might however be overkill. Cayman will likely only be 1.5x Barts, and might only be 1.25x Barts. It seems the Radeon HD 2900 GT had only 3 ROPs per memory controller, so an odd number would appear possible. This would imply that 12 ROPs per memory controller could be possible.

It is the Radeon HD 4730 versus the Radeon HD 4830 that demonstrates just what impact changing the number of ROPs can have. They were virtually identical in specs, except that the Radeon HD 4730 had half the ROPs and was clocked at 750 MHz, versus 575 MHz for the Radeon HD 4830. On average the Radeon HD 4830 beat the Radeon HD 4730 by a small margin. According to my estimate, the Radeon HD 5830 could be clocked at about 700 MHz and provide equivalent performance if it had its full complement of ROPs.

Assuming Barts is as described, and provides the same per-shader performance as Cypress, it should provide about a 4.3% increase in performance over Cypress LE, despite being clocked 9.375% lower. This would be due to Barts Pro having 81.25% more ROP performance than Cypress LE. And Barts XT should provide about an 18.685% increase over Cypress Pro despite having just a 10.345% increase in shader performance, because of a 24.138% increase in ROP performance.

With these assumptions, doubling the ROPs would increase the performance of Barts XT by about 10.25%. I don't think doubling the ROPs would be cost-effective with regard to die size, but if they could, increasing the ROPs per memory controller by 50% probably would be.
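The percentage figures in this post can be reproduced from the clocks and unit counts it assumes (the Barts numbers are rumored, and the 725 MHz Barts Pro clock is inferred from the "9.375% lower" claim, so treat all of these as hypothetical):

```python
def rel_gain(new, old):
    """Percentage gain of `new` over `old`."""
    return (new / old - 1.0) * 100.0

# (ROPs, core MHz, shader ALUs) -- figures assumed in the post
hd5830    = (16, 800, 1120)   # Cypress LE
hd5850    = (32, 725, 1440)   # Cypress Pro
barts_pro = (32, 725, 1120)   # rumored HD 6750
barts_xt  = (32, 900, 1280)   # rumored HD 6770

def rop_rate(gpu):    return gpu[0] * gpu[1]   # ROPs x clock
def shader_rate(gpu): return gpu[2] * gpu[1]   # ALUs x clock

print(rel_gain(rop_rate(barts_pro), rop_rate(hd5830)))       # 81.25
print(rel_gain(rop_rate(barts_xt), rop_rate(hd5850)))        # ~24.138
print(rel_gain(shader_rate(barts_xt), shader_rate(hd5850)))  # ~10.345
print(rel_gain(barts_pro[1], hd5830[1]))                     # -9.375 (clock deficit)
```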


----------



## Tatty_One (Sep 29, 2010)

dalelaroy said:


> Saying that doubling the number of ROPs per bus is impossible is absurd. This would imply that, no matter how fast memory gets, the industry would have to go to a 1024-bit bus to go beyond 64 ROPs. It might however be overkill. Cayman will likely only be 1.5x Barts, and might only be 1.25x Barts. It seems the Radeon HD 2900 GT had only 3 ROPs per memory controller, so an odd number would appear possible. This would imply that 12 ROPs per memory controller could be possible.
> 
> It is the Radeon HD 4730 versus the Radeon HD 4830 that demonstrates just what impact changing the number of ROPs can have. They were virtually identicle in specs, except the Radeon HD 4730 had half the ROPs and was clocked at 750MHz versus 575 MHz for the Radeon HD 4830. On average the Radeon HD 4830 beat the Radeon HD 4730 by a small margin. According to my estimate, the Radeon HD 5830 could be clocked at about 700 MHz and provide equivalent performance if it had its full complement of ROPs.
> 
> ...



Well, nothing technically is impossible..... even though your narrative makes assumptions that it might be possible or it might be impossible? As I have said, there are a number of other factors that determine performance, not just memory bandwidth and ROP count. I understand your logic, however I believe there is a limitation with the current architecture, and with that in mind I think it is not possible currently. I cannot speculate on what might be in the future; as it stands today, I don't believe it can be done (otherwise someone would have had a go). Possibly in the future, as you have indicated, but I can only make a call on what is, not what might be.... the question was "can it be doubled"....

Let's see if the ROP ratio does increase in relation to bus size on the new 6XXX series; if it does, you're sure to be right! And if it doesn't, you will probably just say that they chose not to.


----------



## yogurt_21 (Sep 29, 2010)

Tatty_One said:


> Well nothing technically is impossible..... even though your narrative makes assumptions that it might be possible or it might be impossible?  I understand your logic, however there is I beleive with the current architecure the limitation and with that in mind I think it is not possible currently, I cannot speculate on what might be in the future, as it stands.... today, I don't beleive it can be done (otherwide someone would have had a go), possibly in the future as you have indicated but I can only make a call on what is and not what might be.... the question was "can it be doubled" ....
> 
> Lets see if the ROP ratio does increase in relation to bus size on the new 6XXX series, if it does your sure to be right!   and if it don't you will probably just say that they chose not too.



Well, it's simply hard to believe that they'd keep the same ROP count from the mid-range to the high end. So if Barts does have 32 ROPs, I'd have to assume that Cayman has more; a shader increase alone isn't going to offer that great a performance boost, which would essentially cause a ton more people to buy Barts and overclock it rather than waste money on Cayman.

While 64 ROPs and a 512-bit memory bus are a little ridiculous cost-wise, the idea of 384-bit and 48 ROPs isn't, imo. So... running down that line:

| spec | Barts XT | Cayman XT |
| --- | --- | --- |
| ROPs | 32 | 48 |
| memory | 256-bit | 384-bit |
| shaders | 1280 | 1920 |
| TMUs | 64 | 96 |


----------



## de.das.dude (Sep 29, 2010)

I hope they develop the 6700 fast! Can't wait to see some pix!


----------



## Tatty_One (Sep 29, 2010)

yogurt_21 said:


> well it's simply hard to believe that they'd keep the same rop number from the mid range to the highend. so if barts does have 32 rop's, I'd have to assume that caymen has more shader increase alone isn't going to offer that great of a performance boost which would essentially cause a ton more people to buy barts and overclock it rather than waste money on caymen.
> 
> while 64 rop's and 512bit memory are a little ridculous cost wise, the idea of 384-bit and 48 rop's isn't imo.  soo... running down that line.
> 
> ...



What you (and dalelaroy) are saying makes sense and I am not arguing against the logic; the point is that the memory bus and ROP count are not the only factors determining performance. Cypress is a good example: the 5850 and 5870 have the same memory bus and the same ROP count (32). To create the performance segment differences, one is clocked higher, has a greater number of SPs AND more texture units, as well as (reference) faster memory. Now, if that architecture is significantly changed and the relationships between each part of the architecture change, it might be that what is considered the "norm" or the limitation now ceases to be one in the future, but again that is speculation.

Additionally, your comparison between "Barts" and "Cayman" is then surely little more than the comparison between the 5850 and the 5870, same bus size and therefore same ROP count? The typical performance differences within the market (let's say 15% between two models) can often be attained without having to increase bus size and/or ROP count, as Cypress has shown.

Although this does not deal with any limitations between bus width and ROP count that we have mentioned, it does explain very well how segments with the same bus size and ROP count can differ a fair bit in performance through other means. I know the link is from SemiAccurate, but this piece is not about speculation; it actually makes comparisons with real hardware and its architecture. If you scroll down to the chart about the 9600 and 9800 and read until the end of the page, it is quite interesting.....

http://www.semiaccurate.com/2010/09/20/northern-islands-barts/


----------



## TheMailMan78 (Sep 29, 2010)

Wile E said:


> Maybe AMD will hire something other than monkeys to code their drivers and installers, then maybe I'll stop screaming "ATI DRIVERS SUCKS!" all the time. They even suck on a clean install.
> 
> This is the last AMD card I will have until they put out some decent drivers. The last good one was 10.4a. The last good one before that? 8.10
> 
> The hardware is great, software is garbage.


----------



## Benetanegia (Sep 29, 2010)

Tatty_One said:


> What you (and Dalelaroy)are saying makes sense and I am not arguing against the logic, however "they" don't keep the same number of ROP's from the mid range to the high end, thats just the point, HD 5850 does not have the same amount of ROP's as the HD 5870 does even though they both have the same memory bus, why?  because as i said, each cards has its market segement as well as they both have different SP's, because there are links with TMU's, SP's and Rop's, the 5850 with it's lesser SP count is given



First of all, the HD 5850 and HD 5870 do have the same amount of ROPs. Second, you are right regarding the links: there is probably a close limit in the relation between ROPs and memory bandwidth, and personally I think this limit is mostly on z/stencil. The two main purposes of ROPs are to calculate z/stencil and to blend final pixels; either one requires writing to memory, so there is a strong relation between the two.

I stand to be corrected in what follows, as I'm in no way an expert, but it's what I understand from the things I do know or have heard about. Let's explain it with an example, and let's take the HD5870 numbers from the chart in the OP.

Memory bandwidth: 153.6 G*B*/s == 1228.8 G*b*/s
Pixel fillrate: 27.2 GPixel/s
Z/stencil: 108.8 G*Samples*/s

Now, for stencil the most commonly used value is one byte per pixel, while a Z sample in modern games is either 24-bit or 32-bit, because 16-bit creates artifacts.

Thus the average bit-length of samples is going to be between 8 and 24/32 bits; let's settle on 16-bit samples. Simple math from the specs tells us that 108.8 Gsamples/s × 16 bits per sample = 1740.8 Gb/s.

As you can see, the bandwidth required for z/stencil-only scenarios already exceeds the memory bandwidth limit, and it's worse in the cases where a Z test is being done. Of course the ROPs also have to write out pixels, which I understand is less taxing and makes up for the difference, because typical HDR pixels are 16 bits wide (per channel), so 27.2 GPixel/s × 16 bits* = 435.2 Gb/s, and the output of current games is 32-bit, so 870.4 Gb/s.

* Here I have to admit I don't know whether pixels are blended and written separately per channel or all together. In the latter case, the figure jumps to 1740.8 Gb/s (64 bits × 27.2 GPixel/s) again, and may actually reflect the relation better, as the average of the 32-bit and 64-bit outputs is 1305.6 Gb/s, quite similar to the actual memory bandwidth.

As you might have guessed already, doubling (or even increasing) the ROPs is not going to yield any substantial gains, even with the 25% speed increase of 7 GT/s GDDR5 modules, especially considering that the above numbers are only for write operations (and not all of them) and you still have to take read operations into account.
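The arithmetic in this post can be checked with a quick sketch (the throughput figures are the HD 5870 numbers quoted in this thread; the 16-bit average sample size is the post's own assumption):

```python
# HD 5870 figures quoted in the thread
mem_bw_gbytes = 153.6                 # GB/s memory bandwidth
mem_bw_gbits  = mem_bw_gbytes * 8     # 1228.8 Gb/s

z_rate_gsamples = 108.8               # GSamples/s z/stencil rate
avg_sample_bits = 16                  # assumed average of 8-bit stencil and 24/32-bit Z
z_bw_gbits = z_rate_gsamples * avg_sample_bits  # 1740.8 Gb/s, already over memory b/w

pix_rate_gpix = 27.2                  # GPixel/s fill rate
pix_bw_32bit = pix_rate_gpix * 32     # 870.4 Gb/s for 32-bit output pixels
pix_bw_64bit = pix_rate_gpix * 64     # 1740.8 Gb/s for 64-bit (16 bit/channel) HDR

print(z_bw_gbits > mem_bw_gbits)                 # True: z/stencil alone can saturate memory
print(round((pix_bw_32bit + pix_bw_64bit) / 2, 1))  # 1305.6, close to the memory bandwidth
```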

That being said, the above only covers theoretical throughput and the effective balance. In practice, I think that 32 ROPs are more than enough for the kind of performance we can expect from Cayman, and putting in 64 would be a waste of die area for little or no gain (something I could see Nvidia doing**, but not AMD). 48 would be ideal, I guess, but I don't think AMD is willing to use odd multiples, or they would have done it in the past with crippled 256-bit parts instead of making them 128-bit...

** This is another story, but the reason Nvidia "wastes" die area on 384-bit / 48 ROPs is that they are critical in the professional Quadro/Tesla cards, not because it brings any dramatic improvement or necessity on the desktop cards.


----------



## Tatty_One (Sep 29, 2010)

That's pretty much what I was getting at without wanting to get too technical. I just hate all this speculation; let's all vote for no leaks of partial info until the day the cards hit retail, lol. As I understand it, more games become TMU-constrained than ROP-constrained in the real world. I don't know what the answers will be for the 6XXX series, but cynical old me thinks that 256-bit/32 ROPs will be the norm. I might be wrong though.

PS: You quoted my deleted post lol (you will see I say the ROP count is the same). I was answering two different threads at the same time and messed one up..... I am too old to multi-task these days!


----------



## jasper1605 (Sep 29, 2010)

Wile E said:


> Maybe AMD will hire something other than monkeys to code their drivers and installers, then maybe I'll stop screaming "ATI DRIVERS SUCKS!" all the time. They even suck on a clean install.
> 
> This is the last AMD card I will have until they put out some decent drivers. The last good one was 10.4a. The last good one before that? 8.10
> 
> The hardware is great, software is garbage.



I've not once had a driver issue, except on 10.2 I think, where all of my speakers would randomly reconfigure on HDMI sound (my fronts went to the right side, my rears moved front, and my sides disappeared lol).


----------



## cadaveca (Sep 29, 2010)

jasper1605 said:


> I've not once had a driver issue except on 10.2 I think where all of my speakers would randomly reconfigure on the HDMI sound (my fronts went to the right side, my rears moved front, and my sides disappeared lol)



Just because you don't have any issues doesn't mean that it's impossible for anyone to have issues.

For example, I know that 90% of my issues are either related to Crossfire, Eyefinity, or both. According to your system specs, you have neither, so would probably never see any of the issues I have.

And because of these issues, I will be focusing entirely on how the 6-series behaves under similar conditions.

And this is important... specifically when dealing with Eyefinity... AMD has lauded how they chose a hardware solution for multi-monitor... yet handing the cursor off from one monitor to the next often corrupts the cursor.

The cursor issue has been around since day one, and AMD has said that they fixed it, it's a known issues, etc..._with a driver_. I'm not too sure that a driver can really fix a hardware problem, but AMD seems pretty confident, even though it's been a year without any real fix.

Until AMD starts being honest about issues like this (need I mention my cards overheating because the fan doesn't spin up correctly, due to the driver?), there will remain some real, legitimate claims that AMD's drivers are steadily declining in quality.

Better yet, guess how I can avoid the cursor corruption? Two ways...either use a single monitor...or not use the DisplayPort connector...

Granted, maybe I just got some bad cards. I'll be mailing yet another one away for RMA later today, and hopefully that might sort it...time will tell.


----------



## TheMailMan78 (Sep 29, 2010)

cadaveca said:


> Just because you dpn't have any issues, doesn't mean that it's impossible for anyone to have issues.
> 
> For example, I know that 90% of my issues are either related to Crossfire, Eyefinity, or both. According to your system specs, you have neither, so would probably never see any of the issues I have.
> 
> ...



It's just that you have bad cards. I ran Crossfire with 4850s for a very long time without issue.


----------



## wahdangun (Sep 29, 2010)

cheezburger said:


> Do you honestly believe a high-end card will feature only 32 ROPs and a 256-bit bus while having a ridiculous number of shaders? This is no longer the RV670-to-RV770 transition, where you could add 2.5x the shaders to boost performance. That won't be the case this time: more shaders don't mean a huge boost in performance. Under the same ROP/bus configuration, a balanced GPU design will outperform a GPU pumped full of extra shader ALUs; when the 4830 competed with the 4870, it showed the 4830 was more efficient than the 4870. All they need to do is enhance ALU performance, rather than make cheaper, less complex shaders that take more space and numbers to fill out the performance.
> 
> 
> The rumor of 640 ALUs with 32 ROPs and a narrow 256-bit bus just sounds stupid: everything doubles up except the ROPs/bus, when the original R600 design was already unbalanced. If they haven't learned from what happened with RV770 then they are pretty much hopeless... which I doubt; AMD is too smart a company to go through with such an unprofitable plan.
> ...




Are you afraid to bet? Let's see who the winner is.


----------



## yogurt_21 (Sep 29, 2010)

Tatty_One said:


> What you (and Dalelaroy)are saying makes sense and I am not arguing against the logic, the point is that the memory bus and ROP count are not the only factors determining the performance.  Cypress is a good example, the 5850 and 5870* have the same memory bus and the same ROP count (32)* to determine the performance segment differences one is clocked higher, it has a greater number of SP's AND more texture units as well as having (reference) faster memory.  Now if that architecture is significantly changed and the relationships between each process within the architecture changes it might be that what is considered the "norm" or the limitation now ceases to be in the future, but again that is speculation.
> 
> Additionally your comparision therefore between "Barts" and "Caymen" is little more than the comparison between the 5850 and the 5870 surely, less bus size and therefore ROP count?  The typical performance differences within the market often can be attained (lets say 15% between 2 models) without having to increase bus size and/or ROP count as Cypress has shown.
> 
> ...



Well, 1: that's the 5*8*50 vs the 5*8*70, yet you used it as a reason why the 6*7*70 and 6*8*70 would have the same number of ROPs. If you're going to use Cypress, you have to use Juniper-versus-Cypress as the comparison for Barts-versus-Cayman, not Cypress Pro vs Cypress XT; again, we're talking mid-range to high-end, not lower high-end to higher high-end.

so the gap has to be larger between the two to make sense in pricing and market positioning.


Second, overclock a 5850 to the 5870's clocks and it'll bench just a hair lower; overclock a 5850 past a 5870 and it'll bench higher. So while shaders do help, there are plenty of them on all modern GPUs. This is exactly why far more 5850s sold than 5870s: the performance was similar but the prices were not.

Plus, with the swap from 4 simple + 1 complex to 4 moderately complex, we're likely going to see more frames per shader out of the 6k series. So if we're talking the same ROPs and more shaders, it's unlikely that Cayman would be that much better than Barts. After all, the chart shows Barts with 1280 medium-complexity shaders; that should be a stark contrast with the 320 complex and 1280 simple on Cypress XT.

If you take a look at the 5770 vs the 5830, both have 16 ROPs, clocks are close (with the exception of memory clock), and the memory bus width is different, but the main difference is 800 shaders vs 1120 shaders (40% more); the difference averages out to 13% in W1z's reviews. Now, while I feel 256-bit vs 128-bit accounts for at least a couple of those frames, it's more than easy enough to make up that amount with overclocking.
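A rough sketch of that comparison, using the retail clocks and shader counts of the two cards (the ~13% figure is the review average cited above, not computed here):

```python
# (shader ALUs, core MHz) for the two cards compared in the post
hd5770 = (800, 850)
hd5830 = (1120, 800)

shader_gain = (hd5830[0] / hd5770[0] - 1) * 100                       # 40% more shaders
throughput_gain = (hd5830[0] * hd5830[1]) / (hd5770[0] * hd5770[1]) - 1

print(round(shader_gain, 1))           # 40.0
print(round(throughput_gain * 100, 1)) # ~31.8% more raw shader throughput...
# ...yet only ~13% more average fps in reviews: shaders alone don't scale frame rates.
```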

So if Cayman is only increasing shaders (and TMUs) by 50% while keeping the same ROPs, the performance won't scale the way 5770-to-5870 did, and we'll have a 6770 capable of taking sales away from the 6870, not just in price/performance but in performance in general.

Imo it would be a bad, bad move when they have the chance to repeat the success of the 5xxx series.


----------



## cadaveca (Sep 29, 2010)

TheMailMan78 said:


> Its just you have bad cards. I ran crossfire with 4850s for a very long time without issue.



Not like you'd know, running a single card. I had few problems too, with the 4850, 4870, or 4890... I actually kinda miss those cards... alas, they don't support Eyefinity.

Those cards serve as the basis for how bad my current cards actually are... the 4-series shows AMD can do better. Wonderful gen for AMD, that one... effective, and CHEAP. On the other hand, they also serve as the basis for my interest in the 6-series... I hope it's another 4-series.


----------



## Tatty_One (Sep 29, 2010)

yogurt_21 said:


> well 1 5*8*50 vs 5*8*70 yet you used it as a reason why the 6*7*70 and 6*8*70 would have the same number of rop's. If you're going to use the cypress you have to incorporate juniper as a comparison for barts to caymen, not cypress pro vs cypress xt. again were talking mid range to highend not lower highend to higher highend.
> 
> so the gap has to be larger between the two to make sense in pricing and market positioning.
> 
> ...



Lol, I didn't use the comparison as a "reason" they should be compared to Barts etc.; I used it because in your previous post you said you found it difficult to believe that mid- and high-end cards would have the same ROP count, and my example clearly shows that is not always the case, because the 5850 and 5870 do. All that you have said does not change the fact that currently, in order for the ROP count to be increased, the memory bus must also be increased. So unless you are sure that we will see some 512-bit bus versions, then whichever way you look at it, you are going to pay a huge premium for that. One of the main reasons ATI have been so competitive price-wise recently is that they have gone for the 256-bit bus; Nvidia's 384-bit+ bus widths cost more to produce, in PCB terms alone.
Using the comparison between the 5830 and the 5770 throws up some odd results. As well as what you have mentioned, despite having double the memory bus, the 5830 has the same ROP count as the 5770, and, were you aware, it is actually SLOWER in pixel fill rate than the 5770. Now, that's for a couple of reasons, but my point is that bus and ROP count are just ingredients in the overall performance. People seem to get too hung up on them; you can get to a point where too many ROPs actually strangle performance and show little improvement, where other ingredients can give a greater boost.

Now if we do see a 512bit bus..... and I am not saying we won't, then as you have said, there is more potential there, but with that comes a fairly large hike in prices, I have some doubts that AMD want to go down that route personally, although maybe on just the one top end card.......... my point all along has simply been 2 fold.......

1.  Currently I believe there are limitations on ROP count relative to memory bus size; you ain't gonna get 64 ROPs on a 256-bit wide bus.
2.   There are a lot more factors to overall performance than just bus size and ROP count.

Simple as that really.


----------



## dalelaroy (Sep 29, 2010)

*Increased ROPs Without Increased Memory bus*



Tatty_One said:


> All that you have said does not change the fact that currently, in order for the ROP count to be increased, the memory bus must also be increased,



Note that although both Redwood and Juniper have 128-bit memory buses, Redwood has 8 ROPs versus Juniper's 16 ROPs. It would not violate the pattern for Cayman to have twice the ROPs of Barts without an increase in memory width: it would simply be applying the Evergreen 128-bit pattern, in which Redwood and Juniper are the only families sharing a bus width, to the Northern Islands 256-bit width, with Barts and Cayman being the only families sharing that bus width.
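The counterexample is easy to see by tabulating bus width per ROP across the Evergreen retail parts:

```python
# (memory bus bits, ROPs) for Evergreen chips
evergreen = {
    "Redwood (HD 5670)": (128, 8),
    "Juniper (HD 5770)": (128, 16),
    "Cypress (HD 5870)": (256, 32),
}
for name, (bus_bits, rops) in evergreen.items():
    # Redwood gives 16 bus bits per ROP, Juniper and Cypress give 8:
    # the ratio is not fixed by the bus width alone.
    print(name, bus_bits // rops, "bus bits per ROP")
```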

I think it is more likely that Cayman will have a 384-bit memory bus, but I also think it might take less board real estate to simply double the ROPs per memory controller. As for the bandwidth argument, even though GF104 has less bandwidth than Cypress, it seems to have greater ROP performance. Doubling the ROPs may be overkill, but Cayman needs at least double the ROP performance of Barts to take on GF100 in those applications where ROPs are the limitation.


----------



## meran (Sep 29, 2010)

Did you see the news at http://guru3d.com/news/radeon-hd-6800-series/ ?


----------



## btarunr (Sep 29, 2010)

Nah, no Barts launch on the 18th ± 2 days, AFAIK. Also, I'd dismiss that new "we are right, they all were wrong" specs sheet some sites are sharing as an encore of "RV770 has 480 stream processors, not 800, as rumors claimed". If Hilbert got those specs from AMD (because that article is written more like a statement of fact than an inquiry), he'd also have an NDA over him.

In no way am I giving credibility to the information we have; I'm just saying that, at this point, that specs sheet is not one bit more credible.


----------



## bear jesus (Sep 29, 2010)

btarunr said:


> Nah, no Barts launch on 18th ± 2 days, AFAIK. Also, I'd dismiss that new "we are right, they all were wrong" specs sheet some sites are sharing as "RV770 has 480 stream processors, not 800, as rumors claimed" encore.



I have given up on trying to make sense of all the "information" on all the different tech sites; it's all lies.

To be honest, as it gets closer to release (whenever that may be), it's time to ignore all the "leaks" and just wait for AMD to say something official.


----------



## meran (Sep 29, 2010)

So, does it make more sense to build 2x Barts on one board than one huge chip? Am I right or not?


----------



## dalelaroy (Sep 29, 2010)

meran said:


> so ,it makes sense to built 2xbarts on one board than one huge chip am i right or



Only from the point of view of marketing. Unless....

I still think that Barts will have 1024 shaders, with Barts XT shipping with 960 shaders active. I think yields of Barts XT would be too low to justify completely replacing Cypress Pro with Barts without a defect-tolerant design. However, the yield of defect-free Barts GPUs would be adequate for fully functional GPUs to be used in a dual-GPU product. Along those same lines of logic, there should be too few Barts GPUs with defective ROPs to justify a mass-market product like Cypress LE, but those GPUs could still be salvaged for a dual-GPU product.

This could also explain the Radeon HD 6990. If Cayman XT is, like the GTX 480, a cut-down Cayman, and called the Radeon HD 6870, then if the dual-GPU variant uses fully functional GPUs, it would make sense to call it a Radeon HD 6990 to signify that it is more than a dual Radeon HD 6870.


----------



## cheezburger (Sep 29, 2010)

wahdangun said:


> are you afraid to bet? lets see who are the winner,



No, I'm not afraid of betting; it's just that I can't ignore the stupidity, that's all. AMD is not going to make a 500 mm² GPU die just to add more ALUs while featuring a 256-bit bus and 32 ROPs, when adding ALUs costs more die space. That is hard fact!

Just a question: what do you need so many shaders for, if your frame rate won't increase from 200 fps to 800 fps... just to be feature-rich? Folding@home is generally garbage for the vast majority of high-end gamers, and *"NO ONE WILL BUY A GFX CARD JUST TO RUN FOLDING@HOME TO SAVE MANKIND WHILE IT CAN'T DO SHIT FOR FRAME RATE"*. If humanity would die, then let them all die... simple.

I would personally rather throw 500 dollars into the water than save the human race.

Anyway, read the post below before you start to think that 32 ROPs and a 256-bit bus with a ridiculous 2560 shaders will hit the market with such a badly scaling design.



> Shader die space in Cypress is 60%, a 4-D shader is about 80% the size of a 5-D shader, and the SIMD controllers plus TMUs take about 15%. Doubling up gives 2(334 × 0.6 × 0.8) + 2(334 × 0.15) + 334 × 0.25 = 320.64 + 100.2 + 83.5 = 504.34 mm², plus hard wiring ≈ 510 mm².
> 
> That is a huge die, and such a 510 mm² chip only has 32 ROPs???? And I don't see any reason why we'd need 640 ALUs. Folding@home?
> And you expect a 510 mm² chip to use a narrow 256-bit bus?
> ...






yogurt_21 said:


> so if caymen is only increasing shaders by 50% and tmu's while keeping the same rop's, the performance won't be as scalable as the 5770 to 5870 and we'll have a 6770 capable of taking sales away from the 6870 not just in price/performance but performance in general.
> 
> imo it would be a bad bad move when they have the chance to repeat the success of the 5xxx series.



Hard fact; however, people just don't listen.



Tatty_One said:


> 1.  Currently I beleive there are limitations on ROP count against Memory Bus size, you aint gonna get 64 ROP's on a 256bit wide bus.
> 2.   There are a lot more factors to overall performance than just bus size and ROP count.
> 
> Simple as that really.



Of course you cannot boost performance just by adding ROPs/bus width; but you also can't just add ALUs without a major increase in ROPs/bus.


----------



## wahdangun (Sep 30, 2010)

cheezburger said:


> no i'm  not afraid of betting , it's just i can't ignore the stupidity that's all. amd is not going to make a 500mm^2 die gpu just to add more ALU and feature 256bit bus and 32 rops, when ading ALU will cost more die space? that is hard fact!
> 
> just a question. what do you need so many shader for if your frame rate won't increase from 200 fps to 800 fps... just being feature rich? folding@home is generally garbage for vast high end gamer and *"NO ONE WILL BUY A GFX JUST TO RUN FOLDING@HOME TO SAVE THE MANKIND WHILE CAN'T DO SHIT ON FRAME RATE"* if human would die then let them all die....simple.
> 
> ...



First of all, I don't give a shit about F@H. Second, we don't know for sure; it's useless to speculate right now. Just look at the HD 4870 launch: people speculated it would have 480 shaders, but in the end we got 800 shaders, more than twice the shaders of the HD 3870. And btw, maybe Cayman will only be 20% different in performance from Barts, and if this is a big GPU like NVIDIA's, ATI would like to cut the cost and use a 256-bit bus instead. Maybe that's also why Barts was launched earlier, waiting for that high-speed GDDR5 to be ready, just like the HD 4850 was launched earlier.


----------



## cheezburger (Sep 30, 2010)

wahdangun said:


> First of all, I don't give a shit about F@H. Second, we don't know for sure; it's useless to speculate right now. Just look at the HD 4870 launch: people speculated it would have 480 shaders, but in the end we got 800 shaders, more than twice the shaders of the HD 3870. And btw, maybe Cayman will only be 20% different in performance from Barts, and if this is a big GPU like NVIDIA's, ATI would like to cut the cost and use a 256-bit bus instead. Maybe that's also why Barts was launched earlier, waiting for that high-speed GDDR5 to be ready, just like the HD 4850 was launched earlier.



You haven't answered my question: why would AMD want to make a huge-die GPU by adding more ALUs/shaders if they knew adding shaders costs more die space? Why don't they just optimize their ALUs further and add ROPs/bus width instead?

This is no longer speculation, this is fact! We all know shaders cost 60% of the die space in the current Evergreen design, and adding more than twice the shaders is nonsense: it makes the GPU as big as Fermi with no frame-rate gain, and that bad scaling is just plain stupid. You could add more shaders on the 3870's successor because RV670 only has a die size of 179mm^2, against 282mm^2 for the 4870: an increase of roughly 60% while adding the extra 100 ALU/24 TMU & SIMD clusters. But if we apply that to Cayman, it will be 534mm^2 if you add ALUs the way RV770 did. You miss one thing: if Cayman is ONLY a 20% gain in performance over Barts, why would AMD bother to make it at all, if it's only 20% over a mid-range card while having a die size of 500mm^2?? A 480:96:64 will have better scaling and frame-rate burst than a 1280:(128)64:32.

Guess you don't know anything about how a GPU works. The ALUs in a GPU act as program decoders and material generators, while the ROPs (*Raster Operations Pipeline*, or *Render Output Units* at NVIDIA) load materials/textures and finalize the instructions processed by the shaders/ALUs. More ALUs don't ensure a performance boost; in extreme cases like the highest detail/AA/AF they keep the frame rate from dropping by a serious margin. For example, RV670 and RV770 don't see much difference at lower detail/lighting, and frame rates are mostly identical. But at extreme detail RV770 takes the advantage because of its shaders and drops less than RV670. Still, RV670 and RV770 have little difference in pixel fill rate, except that RV770 has a slightly higher effective throughput and gives a little more fps. So if you want more frame rate, you need more ROPs.
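The fill-rate point in that post can be made concrete. Theoretical pixel fill rate is simply ROP count times core clock; the ROP counts and reference clocks below are the specs as commonly listed for these cards, used here for illustration only:

```python
# Theoretical pixel fill rate = ROP count * core clock.
# ROP counts and reference clocks as commonly listed; illustrative only.
def pixel_fill_gps(rops: int, core_mhz: int) -> float:
    """Peak pixel fill rate in gigapixels per second."""
    return rops * core_mhz / 1000.0

cards = {
    "HD 3870 (RV670)": (16, 775),  # 16 ROPs @ 775 MHz
    "HD 4870 (RV770)": (16, 750),  # 16 ROPs @ 750 MHz
}
for name, (rops, mhz) in cards.items():
    print(f"{name}: {pixel_fill_gps(rops, mhz):.1f} GP/s")
# Nearly identical fill rates despite RV770 having 2.5x the ALUs,
# which is the comparison the post is drawing.
```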


----------



## yogurt_21 (Sep 30, 2010)

Tatty_One said:


> Lol, I didn't use the comparison as a "reason" they should be compared to Barts etc., I used it because in your previous post you said that you found it difficult to believe that mid and high end cards would have the same ROP count, and my example clearly shows that is not always the case, because the 5850 and 5870 do. All that you have said does not change the fact that currently, in order for the ROP count to be increased, the memory bus must also be increased, so unless you are sure that we will see some 512-bit bus versions, then whichever way you want to look at it, you are going to pay a huge premium for that. One of the main reasons ATI have been so competitive price-wise recently is because they have gone for the 256-bit bus; NVIDIA's 384-bit+ bus widths cost more to produce, in PCB terms alone.
> Using the comparison between the 5830 and the 5770 throws up some odd results, as well as what you have mentioned: despite having double the memory bus, the 5830 has the same ROP count as the 5770, but were you aware that, despite having double the memory bus, the 5830 is actually SLOWER in pixel fill rate than the 5770? Now that's for a couple of reasons, but my point is bus and ROP count are just ingredients in the overall performance. People seem to get too hung up on it; you can get to a point where too many ROPs actually strangle performance and show little improvement, where other ingredients can give a greater boost.
> 
> Now if we do see a 512-bit bus..... and I am not saying we won't, then as you have said, there is more potential there, but with that comes a fairly large hike in prices. I have some doubts that AMD want to go down that route personally, although maybe on just the one top end card.......... my point all along has simply been 2-fold.......
> ...





Again, the 5850 and 5870 are in the same range. To actually separate mid from high, or high from enthusiast, ATI/AMD has given vast spec differences, in fact double in the case of 5770 > 5870 > 5970. So I think the thing you're missing here is that I consider the 5850 a high-end part, not a mid-range one. To me, mid-range spans the $100-200 price point at launch, high-end $300-500, and enthusiast $500+. If you read that correctly, Fermi has no enthusiast single-GPU part in my mind, and only enters that realm in SLI.

And again, the 5850 and the 5870 have the same config, only different shader counts and clocks. What I referred to in my above post is that clocks make up 99% of the performance difference between the two cards; when you match their clock speeds on the same rig, the 5870 will barely edge out the 5850. That proves the shader difference between the two doesn't affect performance significantly.

Now, doubling the shader count might, but likely not enough to grant as much of a performance difference as there is between the 5870 and 5770, which regardless will skew purchase decisions away from the high-end parts. Given that high-end parts already sell less than mid-range and are more expensive to manufacture, it could be a costly decision.
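The clock-for-clock argument can be sketched numerically. Peak shader throughput scales as ALUs x 2 FLOPs (multiply-add) x clock, so at a matched clock the 5870's theoretical shader advantage over the 5850 is only the ratio of ALU counts (the 725 MHz matched clock below is a hypothetical value chosen for the illustration):

```python
# Peak single-precision shader throughput in GFLOPS = ALUs * 2 (MAD) * clock.
def gflops(alus: int, core_mhz: int) -> float:
    return alus * 2 * core_mhz / 1000.0

matched_clock = 725  # hypothetical clock applied to both cards for comparison
hd5870 = gflops(1600, matched_clock)  # HD 5870: 1600 ALUs
hd5850 = gflops(1440, matched_clock)  # HD 5850: 1440 ALUs
print(f"clock-matched advantage: {hd5870 / hd5850:.3f}x")
# ~1.111x on paper, and in practice even less of it shows up as fps,
# which is the post's point about shaders alone not scaling performance.
```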

Tatty_One said:


> despite having double the memory bus it has the same ROP count as the 5770 but were you aware, despite it having double the memory bus, the 5830 is actually SLOWER in pixel fill rate than the 5770

Don't know why you posted this, as it proves my point: since the 5770 has the same ROP/TMU/memory-bit-per-shader balance as the 5870, it has a nicely scalable architecture that, as you pointed out, has a better fill rate than the 5830, despite the 5830 having 40% more shaders. So... shaders again aren't enough on their own; they need the raw horsepower of the ROPs combined with the TMUs to get the job done. And no, your conclusion based on the data is incorrect: the 5830 has a SHADER bottleneck, not a ROP/TMU one. That's why the 5770, with 40% fewer shaders and 40% fewer TMUs, can have a higher fill rate (granted, the 200 MHz memory and 50 MHz core clock advantage on the 5770 might be helping the fill rate).
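The 5770-vs-5830 fill-rate oddity discussed above follows directly from the same formula, since both cards have 16 ROPs and only the core clock differs. The clocks below are the reference-card values as commonly listed; treat them as illustrative:

```python
# Fill-rate math behind the 5770 vs 5830 comparison.
# ROP counts and reference clocks as commonly listed; illustrative only.
def fill_rate_gps(rops: int, core_mhz: int) -> float:
    """Theoretical pixel fill rate in GP/s = ROPs * core clock."""
    return rops * core_mhz / 1000.0

hd5770 = fill_rate_gps(16, 850)  # 16 ROPs @ 850 MHz
hd5830 = fill_rate_gps(16, 800)  # 16 ROPs @ 800 MHz
print(f"HD 5770: {hd5770} GP/s, HD 5830: {hd5830} GP/s")
# The 5770 comes out ahead despite its narrower 128-bit bus and
# 40% fewer shaders, because fill rate is set by ROPs * clock.
```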

Based on what we know about ATI: though they cannot increase the ROP count per memory bit within a series, they can disable ROPs. The second thing we know is that Cypress was essentially two separate cores on a single die, and Juniper was one of those cores.

It is possible that ATI/AMD already have a working core with 64 ROPs on a 256-bit bus and we're seeing half of it in Barts. Another thing to keep in mind: a few years ago, 16 ROPs was the max ATI could do on a 256-bit bus, so at the time I could have argued that they couldn't put 32 ROPs on that bus width, and I would have been wrong.

Besides, I don't care if they have to go to a 384-bit bus width with 48 ROPs; Cayman needs to increase the ROP count as well as shaders and TMUs to fit in above Barts in the lineup, otherwise Barts will be the odd man out and steal the sales.


----------



## bear jesus (Sep 30, 2010)

yogurt_21 said:


> Besides, I don't care if they have to go to a 384-bit bus width with 48 ROPs; Cayman needs to increase the ROP count as well as shaders and TMUs to fit in above Barts in the lineup, otherwise Barts will be the odd man out and steal the sales.



After learning a little more about the limitations in GPU core design, I'm kind of hoping for a 384-bit bus, as it looks like the best option for increasing everything without pushing the die size too far. But then again, I am just a noob when it comes to GPU chip design.


----------



## dalelaroy (Sep 30, 2010)

cheezburger said:


> Not until I get his 5850 first; then I'll trade my 9600GT to him for a GT240 for PhysX
> 
> 
> 
> ...



First of all, I read an interview with an AMD engineer in which he stated that the shaders of Cypress take up 80% of the Cypress die. This was within the context of discussing SIMD pipelines, so he might have meant SIMD pipelines, which would be shaders plus TMUs plus SIMD logic, but even your 60% for shaders plus 15% for TMUs and SIMD logic does not add up to the 80% stated by this engineer. Where do you get your figures?

Second, while it is common to quote 1600 for the number of shaders in Cypress, Cypress actually has 1600 ALUs organized as 320 shaders, that are arranged in 20 SIMD pipelines having 16 shaders and 4 TMUs each. Each shader has 4 simple ALUs and 1 complex ALU. Barts/Cayman is supposed to have 4 moderate complexity ALUs per shader.
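The shader organization described above is simple arithmetic to check: 20 SIMD pipelines of 16 VLIW shaders each, with 5 ALUs (4 simple + 1 complex) per shader, yields the commonly quoted 1600:

```python
# Cypress shader organization as described in the post above.
simds = 20             # SIMD pipelines
shaders_per_simd = 16  # VLIW shader units per SIMD pipeline
alus_per_shader = 5    # 4 simple ALUs + 1 complex ALU (VLIW5)

shaders = simds * shaders_per_simd  # VLIW shader units
alus = shaders * alus_per_shader    # individual ALUs
print(shaders, alus)  # 320 1600, matching the post's breakdown
```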

Barts/Cayman are not derivatives of Juniper or Cypress. They were designed in parallel with Evergreen by the team(s) that designed RV7xx, including RV740. The engineer that was interviewed stated that the 4 ALU per shader design of Northern Islands took up slightly less space per shader than the 4+1 ALU design of Cypress while delivering between 1.5x to 1.8x the performance per shader of Cypress. The engineer might have meant 1.5x to 1.8x the performance per ALU, deliberately using the wrong term to make things clearer to the interviewer that often mentioned the 1600 shaders of Cypress.

The Radeon HD 5830 has the same number of ROPs and memory controllers as the Radeon HD 4870/4890, and falls between the two of them in average performance despite having 1.4x the number of SIMD pipelines. Chances are that it is not the performance of the individual shaders/TMUs that is crippling Cypress, but the SIMD control logic. My guess is that the NI design team went with a 4 moderate complexity ALU design for NI to simplify the control logic, thus enabling them to achieve at least the per shader performance of RV770 while implementing double precision floating point, as well as the DX11 features. Just getting NI to RV770 level per ALU performance would have given NI 12% higher performance per shader than Cypress. And it is possible that other improvements, including higher utilization of the ALUs due to fewer of them per shader and the number of ALUs per shader being a power of two, increased performance per shader to within 95% of the 4+1 ALU shaders. Thus the 1.5x to 1.8x figure quoted.

My guess is that, since the small die size strategy was well established at the time NI was being designed, and 32nm allows for just a bit over 56% more transistors per mm2 versus 40nm, and the 4 ALU shader design is only slightly smaller than the 4+1 ALU shader design, Turks was to be 1.6x Redwood, Barts 1.6x Juniper, and Cayman 1.6x Cypress with regards to shaders/SIMD pipelines. This would make Turks 128 shaders(512 ALUs), Barts 256 shaders (1024 ALUs), and Cayman 512 shaders (2048 ALUs). When 40nm was cancelled, only Cayman had to be cut down, and this was only to keep the TDP within the limits of what was needed to produce a dual GPU "Cayman".
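That 1.6x guess checks out numerically against the post's own figures. The Evergreen shader counts below are the VLIW5 unit counts (ALUs divided by 5), and the 1.6x factor and 4-ALU NI shader are this poster's hypothesis, not confirmed specs:

```python
# The 1.6x scaling hypothesis from the post, checked numerically.
# Evergreen counts are VLIW5 shader units; NI assumed to use 4 ALUs/shader.
evergreen_shaders = {"Redwood": 80, "Juniper": 160, "Cypress": 320}
ni_names = {"Redwood": "Turks", "Juniper": "Barts", "Cypress": "Cayman"}

for old, shaders in evergreen_shaders.items():
    ni_shaders = int(shaders * 1.6)  # hypothesized 1.6x scaling
    ni_alus = ni_shaders * 4         # 4 moderate-complexity ALUs per shader
    print(f"{ni_names[old]}: {ni_shaders} shaders ({ni_alus} ALUs)")
# Turks: 128 shaders (512 ALUs)
# Barts: 256 shaders (1024 ALUs)
# Cayman: 512 shaders (2048 ALUs)
```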

Bus width is primarily a function of die size, and since Barts would have had about the same die size as Juniper at 32nm, Barts would have started with a 128-bit bus. But with Barts having over 50% more core performance than Juniper, there would have been a push towards either increasing the number of ROPs per memory controller by at least 50% or increasing the memory width by 50%. If they went with the memory width solution, Barts would have had a 192-bit wide bus at 32nm. Cayman was probably not large enough for a 384-bit memory bus at 32nm, so my guess is that the number of ROPs per memory controller was increased.

If indeed the Radeon HD 2900 GT had 12 ROPs (presumably 16 total with 4 disabled), it is possible Cayman might have had 12 ROPs per memory controller at 32nm. Well, actually 16 ROPs per memory controller, organized as four clusters of 4 ROPs each, with one ROP cluster per memory controller serving as a spare. I estimate that, at the time the GTX 480 was introduced, approximately 14% of all Radeon HD 5850/5870 yield was being lost to defective ROP clusters. At the time the Radeon HD 5830 was introduced, this yield loss to defective ROP clusters would have been higher, thus the need to salvage a part with one ROP cluster per memory controller disabled. ATI probably anticipated similar yield problems at 32nm, and at least wanted one spare ROP cluster per memory controller available to improve yields. So the design could have been three ROP clusters per memory controller with the third serving only as a spare, but more likely, with the need for 50% higher ROP performance to match the 50% higher core performance, ROP clusters per memory controller were doubled, with the fourth ROP cluster per memory controller serving as a spare.

With 32nm being cancelled and NI reimplemented at 40nm, die size grew, and there was increased perimeter on which to implement edge pads, enabling Barts to grow from 192-bits to 256-bits, and perhaps Cayman can now be 384-bit instead of 256-bit. If not however, I do expect Cayman to have at least 50% more ROPs per memory controller.


----------



## cadaveca (Sep 30, 2010)

dalelaroy, I gotta agree with your thoughts about control logic. Given that NVIDIA has now said that this exact thing is what went wrong with Fermi in development, and given Huang's explanation, I feel it's safe to say that this is definitely a sore spot for the 40nm process. Also, AMD has previously mentioned that the dispatch processor would get a serious revamp.


----------



## wahdangun (Sep 30, 2010)

cheezburger said:


> You haven't answered my question: why would AMD want to make a huge-die GPU by adding more ALUs/shaders if they knew adding shaders costs more die space? Why don't they just optimize their ALUs further and add ROPs/bus width instead?
> 
> This is no longer speculation, this is fact! We all know shaders cost 60% of the die space in the current Evergreen design, and adding more than twice the shaders is nonsense: it makes the GPU as big as Fermi with no frame-rate gain, and that bad scaling is just plain stupid. You could add more shaders on the 3870's successor because RV670 only has a die size of 179mm^2, against 282mm^2 for the 4870: an increase of roughly 60% while adding the extra 100 ALU/24 TMU & SIMD clusters. But if we apply that to Cayman, it will be 534mm^2 if you add ALUs the way RV770 did. You miss one thing: if Cayman is ONLY a 20% gain in performance over Barts, why would AMD bother to make it at all, if it's only 20% over a mid-range card while having a die size of 500mm^2?? A 480:96:64 will have better scaling and frame-rate burst than a 1280:(128)64:32.
> 
> Guess you don't know anything about how a GPU works. The ALUs in a GPU act as program decoders and material generators, while the ROPs (*Raster Operations Pipeline*, or *Render Output Units* at NVIDIA) load materials/textures and finalize the instructions processed by the shaders/ALUs. More ALUs don't ensure a performance boost; in extreme cases like the highest detail/AA/AF they keep the frame rate from dropping by a serious margin. For example, RV670 and RV770 don't see much difference at lower detail/lighting, and frame rates are mostly identical. But at extreme detail RV770 takes the advantage because of its shaders and drops less than RV670. Still, RV670 and RV770 have little difference in pixel fill rate, except that RV770 has a slightly higher effective throughput and gives a little more fps. So if you want more frame rate, you need more ROPs.



Sorry, I don't know how to design a GPU; I'm just saying it based on the correlations between each GPU design.


----------



## bear jesus (Sep 30, 2010)

I have to admit all this is getting so confusing, i wish AMD would hurry up and start telling us something official about the cards.


----------



## jasper1605 (Sep 30, 2010)

bear jesus said:


> I have to admit all this is getting so confusing, i wish AMD would hurry up and start telling us something official about the cards.



Amen to that!  For someone who doesn't understand ultra-tech lingo to begin with, reading conflicting views on ROPs, SIMD lanes, ALUs, MEOW (just for kix), it gets very confusing.


----------



## bear jesus (Sep 30, 2010)

jasper1605 said:


> Amen to that!  For someone who doesn't understand ultra-tech lingo to begin with, reading conflicting views on ROPs, SIMD lanes, ALUs, MEOW (just for kix), it gets very confusing.



 
I have almost given up on trying to understand it all, although I admit it was a good excuse to read up on GPU design. Really, I'm only interested in how powerful a card is and how that translates into high fps at high resolution and detail, within a reasonable cost.

I damn AMD for being so quiet about it all. I guess all we can do is wait for the release, as I'm not expecting much official information before then; hopefully AMD has a nice surprise for us all.


----------



## Tatty_One (Sep 30, 2010)

yogurt_21 said:


> it is possible that ATI/AMD already have a working core with 64 rops on a 256bit bus and we're seeign half of that on barts. another thing to keep in mind is that a few years ago 16 rop's were the max ati could do on a 256-bit bus, so at the time I could have argued that they couldn't put 32 rops on that bus width, I would have been wrong.
> 
> besides the fact I don't care if they have to go to a 384-bit bus width with 48 rop's, caymen needs to increase the rop count as well as shaders and tmu's to fit in with barts in the lineup otherwise barts will be the odd man out and steal the sales.



We could disagree over individual points on this all day.... as it seems we are, and to be honest, I have lost the will to live!   So I will just reiterate my original point, which instigated this lengthy discussion, not just with you but with one or two others: current architecture prohibits more than 32 ROPs on a 256-bit memory bus. Not being an engineer or whatever, I don't know whether that's because it's technically impossible (because of the interlinked technology) or just totally impractical, which is precisely why NVIDIA have had to raise said bus to 384-bit to fit more ROPs on. Don't you, or anyone else, think that if 64 ROPs could be linked to a cheaper 256-bit bus without too much grief, then manufacturers would adopt that higher-performance, lower-cost option (assuming the cost would be lower, as no additional PCB layers would need to be added)?  I am not saying it is impossible; I am saying that both AMD's and NVIDIA's architectures, and the relationship between their memory controllers and ROPs, suggest strongly to me that this will not happen.

As I said earlier, I am quite prepared to stand up and proclaim I am wrong if more than 32 appear on a 256-bit bus.  I don't, and have never, argued against the benefits of a wider bus with a greater ROP count, just the point that there are many more elements to performance than that. If the 5870/5850 only show that to a small degree, it is probably simply because, in retail, AMD's easiest and cheapest option is just to raise core clocks; I am sure if they wanted to they could have increased performance some more without increasing the bus/ROP count.... but why would they want to, given the cards' positioning?  I simply think that Cayman may well have more ROPs than 32; I just don't think they will be on a 256-bit bus.   Just my thoughts and opinions.


----------



## yogurt_21 (Sep 30, 2010)

Tatty_One said:


> We could disagree over individual points on this all day.... as it seems we are, and to be honest, I have lost the will to live!   So I will just reiterate my original point, which instigated this lengthy discussion, not just with you but with one or two others: current architecture prohibits more than 32 ROPs on a 256-bit memory bus. Not being an engineer or whatever, I don't know whether that's because it's technically impossible (because of the interlinked technology) or just totally impractical, which is precisely why NVIDIA have had to raise said bus to 384-bit to fit more ROPs on. Don't you, or anyone else, think that if 64 ROPs could be linked to a cheaper 256-bit bus without too much grief, then manufacturers would adopt that higher-performance, lower-cost option (assuming the cost would be lower, as no additional PCB layers would need to be added)?  I am not saying it is impossible; I am saying that both AMD's and NVIDIA's architectures, and the relationship between their memory controllers and ROPs, suggest strongly to me that this will not happen.
> 
> As I said earlier, I am quite prepared to stand up and proclaim I am wrong if more than 32 appear on a 256-bit bus.  I don't, and have never, argued against the benefits of a wider bus with a greater ROP count, just the point that there are many more elements to performance than that. If the 5870/5850 only show that to a small degree, it is probably simply because, in retail, AMD's easiest and cheapest option is just to raise core clocks; I am sure if they wanted to they could have increased performance some more without increasing the bus/ROP count.... but why would they want to, given the cards' positioning?  I simply think that Cayman may well have more ROPs than 32; I just don't think they will be on a 256-bit bus.   Just my thoughts and opinions.



As always, none of us are engineers, so it's all speculation (and if there is an AMD engineer watching this thread: wtf? get back to work!). We'll see how it comes out; they could very well prove us all wrong and show such a strong improvement in shader power that we start seeing NVIDIA-style shader counts for all we know. lol


----------



## cheezburger (Sep 30, 2010)

dalelaroy said:


> First of all, I read an interview with an AMD engineer in which he stated that the shaders of Cypress take up 80% of the Cypress die. This was within the context of discussing SIMD pipelines, so he might have meant SIMD pipelines, which would be shaders plus TMUs plus SIMD logic, but even your 60% for shaders plus 15% for TMUs and SIMD logic do not add up to the 80% stated by this engineer. Where do you get your figures.
> 
> Second, while it is common to quote 1600 for the number of shaders in Cypress, Cypress actually has 1600 ALUs organized as 320 shaders, that are arranged in 20 SIMD pipelines having 16 shaders and 4 TMUs each. Each shader has 4 simple ALUs and 1 complex ALU. Barts/Cayman is supposed to have 4 moderate complexity ALUs per shader.
> 
> ...



That 80% already includes the TMUs/SIMD controllers. Consider that AMD's architecture ties the shaders/ALUs up with the TMUs/SIMD control in the same module, while separating the ROPs and bus into another section. So basically my calculation is close to it.

The HD 2900 GT was indeed 16 ROPs total with 4 disabled; consider that its die size and yield were completely identical to the XT/Pro versions. However, like the 5830, its bad scaling ended up generating more heat and far less performance than expected. Any cut-down version at 3/4, or going to odd numbers like Fermi, will cause bad scaling and performance loss. Especially on AMD's bus design, it is impossible to go to a 6/12 configuration rather than 8/16; their SIMD cluster and instruction pipeline prevent it. So logically it will either stay the same or double. 40 ROP/320-bit or 48 ROP/384-bit buses will not be possible in AMD's lineup, at least not this generation.


----------



## Wile E (Oct 1, 2010)

TheMailMan78 said:


> Its just you have bad cards. I ran crossfire with 4850s for a very long time without issue.



I've run single 4850, single 4870, crossfire 4850's, 4870+4850, crossfire 4870, 4870x2 + 4870, and finally just 4870X2.

Bugs in every single release past 8.10. Even on completely clean OS installs.


----------



## bear jesus (Oct 1, 2010)

Wile E said:


> I've run single 4850, single 4870, crossfire 4850's, 4870+4850, crossfire 4870, 4870x2 + 4870, and finally just 4870X2.
> 
> Bugs in every single release past 8.10. Even on completely clean OS installs.



To be honest, I'm sure one major reason why some people seem to have bugs and others don't is different hardware/OS setups, and also different choices in games.


----------



## Widjaja (Oct 1, 2010)

Wile E said:


> I've run single 4850, single 4870, crossfire 4850's, 4870+4850, crossfire 4870, 4870x2 + 4870, and finally just 4870X2.
> 
> Bugs in every single release past 8.10. Even on completely clean OS installs.



Bugs?

If there are I have not noticed them with my HD4850.


----------



## mdsx1950 (Oct 1, 2010)

Even my 5970s seem to be running without any driver problems. Currently running 10.8. 
Maybe it's because I haven't OCed the card and left it the way it is.


----------



## pantherx12 (Oct 1, 2010)

Not had driver issues since 9.3s myself.

Any more news about barts?


----------



## cadaveca (Oct 1, 2010)

mdsx1950 said:


> Even my 5970s seem to be running without any driver problems. Currently running 10.8.
> Maybe its because i haven't OCed the card and left it the way it is.


Same here, but Eyefinity has become the bane of my existence. Never mind that 5970 and Crossfire Eyefinity was supposed to be the only supported config... it took till almost February before the 5870 got Crossfire and Eyefinity.

Did you get that? The 5970 supported Crossfire Eyefinity, but no other Crossfire config supported Eyefinity. Should give you an idea of how truly poor these drivers are...

I dropped back to one card and one monitor, and barely have any issues. I'm still left with a card that overheats when it's 30C in the room, but I RMA'd one card, and maybe that will fix all my problems... but I am not confident it will, as I still get cursor corruption with Eyefinity and DP monitors. If I take out all the add-ons this gen has in comparison to the 4-series, the cards work GREAT! :shadedshu



pantherx12 said:


> Not had driver issues since 9.3s myself.
> 
> Any more news about barts?



Nope. With ~2 weeks left before the NDA expires, I don't think we'll hear very much. Reviewers should have already received their launch cards, so NDAs are tight on info right now.

What we need is pictures of the retail card, heatsink, etc... that should go a long way. Of course, this assumes the Oct 18th date is correct....


----------



## mdsx1950 (Oct 1, 2010)

cadaveca said:


> Same here, but eyefinity has become the bane of my existence. Nevermind that 5970 and Crossfire Eyefinity was supposed to be the only supported config...took till almost February before 5870 got Crossfire and Eyefinity.
> 
> Did you get that? 5970 supported Crossfire Eyefinity, but no other Crossfire config supported Eyefinity. Should give you an idea of how truly poor these drivers are...
> 
> ...



Drivers aren't the best, but they serve the purpose... at least for me.  And I owned an Eyefinity setup till about two months back.


----------



## Wrigleyvillain (Oct 1, 2010)

I don't have any ATI driver "problems" per se, and really never did. However, in terms of my overall gaming experience (and I hate to admit this, btw), I feel I simply get a better overall experience with Nvidia software, even with the relatively limited time I've used NV cards compared to ATI. I don't think it's any one reason, like so many games being Nvidia-sponsored TWIMTBP titles from the ground up, or the existence of great tools like nHancer, but rather a combination of things. And probably not everyone would feel the same, even with the same hardware and games as I've had.

Experiences like CaDaveCa's leave a bad taste in my mouth though.


----------



## wolf (Oct 1, 2010)

If reviewers have or are now starting to receive cards, no doubt something will leak somewhere and much jizzing will occur.


----------



## cadaveca (Oct 1, 2010)

Wrigleyvillain said:


> Experiences like CaDaveCa's leave a bad taste in my mouth though.




What's really the issue is that there hasn't been any fix for the outstanding issues for so long. I've had many cards, RMA'd cards, bought new parts, etc... it's that bit that is most frustrating.

But like I said... originally it was 5970s only for my chosen monitor config, and maybe if I had chosen my purchases better, I'd not have so many issues. In the end, I was hoping for tri-fire Eyefinity, one GPU per monitor: if one card works well @ 1920x1080 and 8xAA, then 3 should be ideal, right?


:shadedshu





If the 6-series does Eyefinity right, I'm gonna get 3x 40-inch LCD TVs for it. I was looking last night while shopping for a microwave, and I can get some decent panels for ~$700 locally. I even have the wife's approval...

But that damn corrupting-cursor issue has got to stop. And I know for sure I'm not the only one with the problem, as it's in the release notes. That's a deal-breaker for me.


----------



## Tatty_One (Oct 1, 2010)

I had HUGE problems with Crossfire on my old 4850s and 4890s for most of the early 9.xxx Catalyst releases, to the point where I actually ditched my 1GB 4850 Crossfire setup, which was a real shame, as they were Palit Sonics and were running at 880 MHz without voltage tweaks. I managed to keep hold of my 4890 Crossfire setup for quite some time, as ATI finally sorted the issues out around Catalyst 9.5.

Having said that, with the single 5850 I have had no issues whatsoever, period.


----------



## mdsx1950 (Oct 2, 2010)

cadaveca said:


> If the 6-series will do Eyefinity right, I'm gonna get 3x 40-inch LCD TVs for it. I was looking last night while shopping for a microwave, and I can get some decent panels for ~$700 locally.



Wow that will be one hell of a setup! 



> I even have the wife's approval...



 
Lucky.


----------



## wolf (Oct 3, 2010)

cadaveca said:


> If the 6-series will do Eyefinity right, I'm gonna get 3x 40-inch LCD TVs for it



I've considered the same thing, freakin EPIC man.



cadaveca said:


> I even have the wife's approval....





mdsx1950 said:


> Lucky.



Either that or crazy, or both.  I can picture the look on her face when she finds you for the first time in your nook of pixels, both jizzing and drooling simultaneously.


----------



## cheezburger (Oct 3, 2010)

Dude... it's already under NDA... so save your excitement or disappointment for later, when it comes to shelves.


----------



## CrystalKing (Oct 6, 2010)

Barts XT

http://www.chiphell.com/data/attachment/forum/201010/06/132614rz0re77ar55555u0.jpg

Source: Chiphell/HeavenPR


----------



## wolf (Oct 6, 2010)

CrystalKing said:


> Barts XT
> http://www.chiphell.com/data/attachment/forum/201010/06/132614rz0re77ar55555u0.jpg
> 
> Source: Chiphell/HeavenPR



simple, I like it.


----------



## cheezburger (Oct 6, 2010)

CrystalKing said:


> Barts XT
> http://www.chiphell.com/data/attachment/forum/201010/06/132614rz0re77ar55555u0.jpg
> 
> Source: Chiphell/HeavenPR



That cooler reminds me of the 9800 GTX.....


----------



## T3kl0rd (Oct 6, 2010)

CrystalKing said:


> Barts XT
> http://www.chiphell.com/data/attachment/forum/201010/06/132614rz0re77ar55555u0.jpg
> 
> Source: Chiphell/HeavenPR



Looks like every other GPU with a PCB-length fan, but thanks for the pic.


----------



## a_ump (Oct 6, 2010)

reminds me of 8800GTS. It'll have a sticker or something on it forsure, has to be like an eng sample


----------



## jaredpace (Oct 7, 2010)

cadaveca said:


>


http://pic.xfastest.com/z/AMD/2010/6800_Series/6800_2.jpg
----------



## cadaveca (Oct 7, 2010)

I don't see anything that says Barts....


----------



## JATownes (Oct 7, 2010)

jaredpace said:


> http://pic.xfastest.com/z/AMD/2010/6800_Series/6800_2.jpg



If this shot is real, wouldn't it say "Built by AMD" instead of "Built by ATI"?


----------



## Tatty_One (Oct 7, 2010)

JATownes said:


> If this shot is real, wouldn't it say "Built by AMD" instead of "Built by ATI"?



Good point!  The fog falls upon us once again.


----------



## jaredpace (Oct 8, 2010)

cadaveca said:


> I don't see anything that says Barts....



HAHA, okay now?


----------

