# AMD ''Barts'' GPU Detailed Specifications Surface



## btarunr (Sep 16, 2010)

Barely a week after pictures of AMD's "Barts" prototype surfaced, a specifications sheet has followed. The all-important slide from AMD's presentation to its add-in board partners made it to sections of the Chinese media. "Barts" is the successor to "Juniper", on which the Radeon HD 5750 and HD 5770 are based. The specs sheet reveals that while the GPU does indeed look physically larger, there are other factors that make it big:

*Memory Controller* 
Barts has a 256-bit wide memory interface, which significantly increases its pin-count and package size. The "Pro" and "XT" variants (which will go on to become the HD 6x50 and HD 6x70, respectively) have memory clocked at 1000 MHz and 1200 MHz, respectively, so that's a nearly 100% increase in memory bandwidth.
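The arithmetic behind that claim is straightforward. A quick sketch (the clocks are from the leaked slide; GDDR5's quad-pumped data rate is the standard assumption):

```python
# GDDR5 transfers 4 bits per pin per memory clock ("quad-pumped").
def gddr5_bandwidth_gb_s(mem_clock_mhz: int, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    effective_mt_s = mem_clock_mhz * 4                 # MT/s per pin
    return effective_mt_s * bus_width_bits / 8 / 1000  # bits -> bytes, MB/s -> GB/s

juniper_xt = gddr5_bandwidth_gb_s(1200, 128)  # HD 5770: 76.8 GB/s
barts_pro  = gddr5_bandwidth_gb_s(1000, 256)  # rumoured Pro: 128.0 GB/s
barts_xt   = gddr5_bandwidth_gb_s(1200, 256)  # rumoured XT: 153.6 GB/s
print(juniper_xt, barts_pro, barts_xt)
```

Doubling the bus width at the same memory clock doubles the bandwidth, which is where the "nearly 100%" figure comes from.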



*Tiny increase in SIMD count, but major restructuring*
Compared to Juniper, there seems to be an increase of only 20% in physical stream processor count. The XT variant has 960 stream processors, while the Pro variant has 800. AMD specifically mentioned SIMD block counts (10 enabled for the Pro, 12 for the XT). If you noticed the slide, it says the GPU is based on the "Cypress Dual Engine architecture", meaning these 10 and 12 SIMD units will be spread across two blocks of 5 (Pro) or 6 (XT) SIMDs each, just like Cypress had two blocks of 10 SIMDs each. 

*Other components*
The raster operations (ROP) count has been doubled to 32, while TMUs stand at 40 for the Pro and 48 for the XT. 

The design methodology is extremely simple. Juniper-based graphics cards already carry 8 memory chips to meet the 1 GB memory requirement using market-popular 1 Gbit GDDR5 chips, so why not just place those 8 chips across a 256-bit wide memory interface and double the memory bandwidth? The increased ROP count, coupled with an up to 20% increase in shader compute power, gives Barts the competitive edge it needs to face NVIDIA's reinvigorated GeForce 400 series after the introduction of the GeForce GTX 460. As for power draw, AMD projects the Pro variant to draw less than 150 W, with the XT drawing "over" 150 W. 

*Market Positioning*
AMD doesn't have huge expectations from this GPU. It has its work cut out: to compete with the GeForce GTX 460 768 MB and 1 GB models. While NVIDIA's variants are differentiated by memory amount and ROP count, AMD's are set apart by clock speeds and SIMD counts. It should then become obvious what these GPUs' pricing will look like.

*When?*
Usually when AMD gives out such a presentation to its AIB partners, a market release is about 3 months away.







----------



## saikamaldoss (Sep 16, 2010)

Wow that's really sweet


----------



## afw (Sep 16, 2010)

Nice ... hope the prices will be decent ...


----------



## werez (Sep 16, 2010)

At least this time AMD is not "sponsoring" game developers to integrate dx999 in future games, and maybe, just maybe, we will see some cheaper cards. Looking forward to the HD 6000 series. Go AMD! My GTX 260s are getting old


----------



## Paintface (Sep 16, 2010)

I'm hoping it will be priced around the current 5770, unless it's scratching 5850 performance. I'm afraid we will see 4890 performance (a small notch above the 5770) at a cost of $230, while I paid $190 for my 4890 a year ago.


----------



## halfwaythere (Sep 16, 2010)

This should beat the GF104 pretty badly, otherwise NVIDIA is going to leapfrog them in early 2011. I think AMD wants a very good next year before it moves to NI and the new fab process. If the green side doesn't have anything special up their sleeves, it's not going to be very hard for them.


----------



## TheLaughingMan (Sep 16, 2010)

I don't know.  I have mixed feelings about this move from AMD.  The increase in die size will bring more heat, but I don't see them changing the cooling specs, so these will just run hotter and eat more power.  I completely understand the move and why they decided to mark these as 6000 series (as per their own naming rules put in place during the 3000 era), but I don't think this will help their position much.  Lowering the price of the 5770 and 5830 seemed like a better move to me, so they could spend the development time/money for these new 5000++ chips on building a better weapon.  They also seem to be using this to avoid dual-chip variants, which historically have sold poorly and created a lot of heat-related failures (the only exception being the King of Gaming GPUs, as I call them, in the $500+ range from both camps).

I just think it would have made more sense to make only one of these, call it the 5790 (the 960 version), put it in the price range between the GTX 460 and GTX 465, and then discontinue the 5830.


----------



## bear jesus (Sep 16, 2010)

Dreams of 6870(or whatever it will be called) spec.

512 bit gddr5 at 1600mhz (6400mhz effective) 64 rop's and 96 tmu's and 1920 stream processors and 1000mhz core speed would be a beautiful card


----------



## cadaveca (Sep 16, 2010)

WOAH.






If....if this is 5770 replacement....


Then 5870 replacement is 1920 shaders !?!





3 months...November 17th retail? March 23rd for 6870, with BullDozer-based desktop chips, April retail, to coincide with Duke Nukem launch?


----------



## KainXS (Sep 16, 2010)

Depending on how AMD re-engineered the SPs, this thing could be faster than the 5850.

Damn, the 6870 could be a monster; makes me wonder if this is NI or SI.


"wonders if it's fake"


----------



## wolf (Sep 16, 2010)

bear jesus said:


> Dreams of 6870(or whatever it will be called) spec.
> 
> 512 bit gddr5 at 1600mhz (6400mhz effective) 64 rop's and 96 tmu's and 1920 stream processors and 1000mhz core speed would be a beautiful card



I think you live in a dream world, but one can only hope right.


----------



## bear jesus (Sep 16, 2010)

wolf said:


> I think you live in a dream world, but one can only hope right.




Damn straight I'm in a dream world. First off I would expect another 850 MHz core speed; I think 512-bit may be possible with current-speed GDDR5, or 256-bit with super fast GDDR5 (1600 MHz), but definitely not both. But I think the 1920 SPs are likely.


----------



## W1zzard (Sep 16, 2010)

bear jesus said:


> 512 bit gddr5 at 1600mhz (6400mhz effective) 64 rop's and 96 tmu's and 1920 stream processors and 1000mhz core speed would be a beautiful card



$1000 graphics card


----------



## wolf (Sep 16, 2010)

bear jesus said:


> Damn straight I'm in a dream world. First off I would expect another 850 MHz core speed; I think 512-bit may be possible with current-speed GDDR5, or 256-bit with super fast GDDR5 (1600 MHz), but definitely not both. But I think the 1920 SPs are likely.



I'd say 256-bit with 1600 MHz (6400 MHz) GDDR5; 850 MHz core is what they seem to like now for XT variants. Also, I agree 1920 SPs looks likely, but given the Barts core has 32 ROPs, I am very keen to see where they go ROP-wise. If it's 48 or more, you are looking at a serious increase in AA grunt right there, and they will have finally caught, if not surpassed, NV on that front.


----------



## xaira (Sep 16, 2010)

A 6870 with 64 ROPs, this thing will be a monster any way you slice it. I just hope they do a better job with the 6830 than the 5830, because that card is pointless.


----------



## xrealm20 (Sep 16, 2010)

Looking sweet! - I may just have to pick one of these up....


----------



## bear jesus (Sep 16, 2010)

W1zzard said:


> $1000 graphics card



Very good point. The only reason I have a 4870 is because of how cheap it was for the power. If the 6870 were the world's most powerful single-chip card at $1000, I'm sure I would be choosing between the 5xxx or 4xx cards.


----------



## CDdude55 (Sep 16, 2010)

Sounds great, but it's positioned against the GTX 460? So one of my GTX 470s would probably beat ''Barts'' to a fair extent.

Guess I'll wait for the 6870/6850.


----------



## btarunr (Sep 16, 2010)

If you'd like to save a kitten, and liked this story, please digg it.


----------



## Lionheart (Sep 16, 2010)

They better call this the HD 6770. If they change the naming scheme around and call it the HD 6870, that's just ridiculous, because the HD 5870 would be more powerful than an HD 6870, judging from those specs shown and the rumours about AMD's new naming scheme.


----------



## CDdude55 (Sep 16, 2010)

CHAOS_KILLA said:


> They better call this the HD 6770. If they change the naming scheme around and call it the HD 6870, that's just ridiculous, because the HD 5870 would be more powerful than an HD 6870, judging from those specs shown and the rumours about AMD's new naming scheme.



Barts is supposed to be the ''6750''/''6770'', it's meant to ''fight'' with the GTX 460.

So the 6870 should technically be better.


----------



## Lionheart (Sep 16, 2010)

CDdude55 said:


> Barts is supposed to be the ''6750''/''6770'', it's meant to ''fight'' with the GTX 460.
> 
> So the 6870 should technically be better.



Yeah, I figured it would be named that, but hearing all the rumours that the HD 6770 is going to be named the HD 6870, the HD 6870 changed to the HD 6970, and the dual-GPU card to the HD 6990, I just thought it was stupid.


----------



## CDdude55 (Sep 16, 2010)

CHAOS_KILLA said:


> Yeah, I figured it would be named that, but hearing all the rumours that the HD 6770 is going to be named the HD 6870, the HD 6870 changed to the HD 6970, and the dual-GPU card to the HD 6990, I just thought it was stupid.



That would suck. It would cause way too much confusion :shadedshu


----------



## $ReaPeR$ (Sep 16, 2010)

Glad to see things moving in the red camp. Since I got the 5830 though, I'll wait for the 7xxx or 8xxx series.


----------



## CDdude55 (Sep 16, 2010)

$ReaPeR$ said:


> Glad to see things moving in the red camp. Since I got the 5830 though, I'll wait for the 7xxx or 8xxx series.



Sell that 5830, pick up some extra cash, and buy a 5870/5970(or GTX 480) instead if you want to wait till the 7 or 8 series.


----------



## Semi-Lobster (Sep 16, 2010)

To be honest, I'm pretty disappointed by the high power consumption. For their, so far, short existence (starting with the revolutionary 4770), the x700 series have been excellent thanks to their low power consumption, which was great for entry-level gamers. But at 150 W, these cards are on par with the 5850 in power draw while not being as good as the 5850. At that rate, you might as well get a 5850, since prices will drop once the 6000 series hits store shelves.


----------



## cadaveca (Sep 16, 2010)

Semi-Lobster said:


> But at 150 W, these cards are on par with the 5850 in power draw while not being as good as the 5850. At that rate, you might as well get a 5850, since prices will drop once the 6000 series hits store shelves.



What if it's faster than 5850? If "rumour" is true, and the shader complexity has changed, those 960 shaders might be far better performing than the current design. Those 960 shaders might be equal to 1920 of today, in certain situations.

ATI's 4+1 shader design might now be 2+2. We might see far higher gpu utilization, and the rumour in the past of a vastly superior "ultra-threading dispatch processor" seems to point more in this direction.

Looking into the past...4770 basically = 3870, and 5770 basically = 4890. So, this 6770, should be somewhere around, in the least, 5850 to 5870 performance, if done right.


----------



## cheezburger (Sep 16, 2010)

Most of those rumours have failed XD. A few days ago nobody believed me, while making up their own specifications by just adding more shaders and keeping 16 ROPs/128-bit for Barts. Now those people need to grab guns and shoot themselves... Anyway, Barts is going to be 192 ALUs/48 TMUs/32 ROPs/256-bit bus, while Cayman will be 384 ALUs/96 TMUs/64 ROPs/512-bit bus. Believe it or not, AMD is heading to the high-end/professional market.

So much for 7 GT GDDR5 on a 256-bit bus / 1920 shaders / 120 TMUs / 32 ROPs priced at $299, LOL.



bear jesus said:


> Dreams of 6870(or whatever it will be called) spec.
> 
> 512 bit gddr5 at 1600mhz (6400mhz effective) 64 rop's and 96 tmu's and 1920 stream processors and 1000mhz core speed would be a beautiful card



Cayman is NOT going to have 480 ALUs (or 1920 shaders), even in a 4D-format complexity arrangement. Those ALU shaders are too costly and eat huge die space, and it would end up like Fermi.

6.4 GT GDDR5 also eats more power than lower-frequency RAM. It would be stupid for AMD to make that move...


----------



## Semi-Lobster (Sep 16, 2010)

cadaveca said:


> What if it's faster than 5850? If "rumour" is true, and the shader complexity has changed, those 960 shaders might be far better performing than the current design. Those 960 shaders might be equal to 1920 of today, in certain situations.
> 
> ATI's 4+1 shader design might now be 2+2. We might see far higher gpu utilization, and the rumour in the past of a vastly superior "ultra-threading dispatch processor" seems to point more in this direction.
> 
> Looking into the past...4770 basically = 3870, and 5770 basically = 4890. So, this 6770, should be somewhere around, in the least, 5850 to 5870 performance, if done right.



You're right. The only thing I'm not on board with is the 4890 = 5770! I've had both, and the 5770 is at best as good as the 4870. If you are right though, the next step down, the 6600 series (I wonder if AMD will release a 6660? ), will hopefully be as good as the 5770 and have lower power consumption.


----------



## cadaveca (Sep 16, 2010)

Semi-Lobster said:


> You re right, the only thing I'm not onboard with is the 4890=5770! I've had both and the 5770 is at best as good as the 4870. If you are right though, the next step down, the 6600 series (I wonder if AMD will release a 6660? ) will hopefully be as good as the 5770 and have lower power consumption



Yeah, it's not exact, and the 4770 was better than the 3870, but the 5770 kinda lacks the 4890's grunt, due to its 128-bit memory bus.

I think AMD may skip the "6600" series, as it's too close to old GeForce card naming, but maybe they will go with 64x0/65x0.

Late next month real details should be out, so I'm more than happy to wait and see what they bring to the table...

But I'm still sitting here waiting for a Crosshair IV Extreme for my 1090T, so I will also wait for next spring, and the high-end cards, before making any purchases, no matter how good these cards are...


----------



## Semi-Lobster (Sep 16, 2010)

cadaveca said:


> Yeah, it's not exact, and the 4770 was better than the 3870, but the 5770 kinda lacks the 4890's grunt, due to its 128-bit memory bus.
> 
> I think AMD may skip "6600" series, as is too close to old geforce card naming, but maybe they will go with 64x0/65x0.
> 
> ...



The ATI/AMD naming cycle has been in its current form since the 2000 series; the numbering system is to inform consumers about a video card's relation to other video cards. 800 is high performance, 700 (which hasn't been around for very long) is performance mainstream, 600 is mainstream, 500/400/300 are all budget, and 100 and 000 are usually (but not always) reserved for IGPs. Not using the 600 would leave a weird gap in the lineup for no reason. If AMD was going to do something that drastic, they would probably prefer to radically change the entire naming system, and we all know this series is going to be the 6000 series.


----------



## cadaveca (Sep 16, 2010)

Semi-Lobster said:


> The ATI/AMD naming cycle has been in its current form since the 2000 series, the number system is to inform consumers about the video card's relation with other video cards. 800 is high performance, 700 (which hasn't been around for very long) is more mainstream performance, 600 series is mainstream, 500/400/300 are all budget and the 100 and 000 are USUALLY (but not always) reserved for IGPs. Not using the 600 would leave a weird gap for no reason in the line up. If AMD was going to do something that drastic they would probably prefer to radically change the entire naming system and we all know that this series is going to be the 6000 series



Sure, I agree with that, but they introduced the 3870 X2... then the 4870 X2... but with the 5-series, they called the dual-GPU card the 5970... instead of the 5870 X2...

For that reason alone, I wouldn't put it past them to go outside long-standing naming conventions... You could even say that, now that they are AMD as a whole and not ATI/AMD, anything is possible...


----------



## cheezburger (Sep 17, 2010)

Semi-Lobster said:


> The ATI/AMD naming cycle has been in its current form since the 2000 series, the number system is to inform consumers about the video card's relation with other video cards. 800 is high performance, 700 (which hasn't been around for very long) is more mainstream performance, 600 series is mainstream, 500/400/300 are all budget and the 100 and 000 are USUALLY (but not always) reserved for IGPs. Not using the 600 would leave a weird gap for no reason in the line up. If AMD was going to do something that drastic they would probably prefer to radically change the entire naming system and we all know that this series is going to be the 6000 series



AMD's current naming scheme:

x900 - dual-GPU setup/enthusiast

x800 - high end/professional 

x700 - performance

x600 - mainstream

x500~x300 - budget


----------



## OneMoar (Sep 17, 2010)

do want  ...


----------



## cheezburger (Sep 17, 2010)

wolf said:


> id say 256 bit with 1600mhz (6400mhz) GDDR5, 850mhz core is what they seem to like now for XT variants. also I agree 1920 sp's looks likely, but given the Barts core has 32 ROPS, I am very keen to see where they go ROP wise, if its 48 or more you are looking at a serious increase in AA grunt right there, and they will have finally caught if not surpassed Nv on that front.



Again, do you have any proof that Cayman is going to be 6.4 GT GDDR5 with a 256-bit bus and 32 ROPs? Because it came from some Chinese/Korean site that has no evidence at all to show that benchmark was real.

This is what happens when GPU-Z can't utilize a 9600GT: (screenshot)

Do you think Cayman is going to be 256-bit because of a GPU-Z error? If Barts is 256-bit and half of Cayman's spec, then there's no reason Cayman can't be 64 ROPs and a 512-bit bus.


----------



## cadaveca (Sep 17, 2010)

cheezburger said:


> do you think cayman is going to be 256bit because of gpuz error? if Barts is 256bit and half of cayman's spec than there's no reason cayman cant be 64 rops and 512bit bus



Did the fact that this current info comes from an "official AMD slide" escape you?:shadedshu

ChipHell has been a pretty reliable source in the past.


----------



## cheezburger (Sep 17, 2010)

cadaveca said:


> Did the fact that this current info comes from an "official AMD slide" escape you?:shadedshu
> 
> ChipHell has been a pretty reliable source in the past.



They might be putting Barts in the benchmark rather than Cayman, since a Cayman prototype hasn't even been out yet. Some rumours say it will start testing this month, while Barts finished testing back in June, so in all likelihood they didn't even have Cayman yet when they leaked the photo. Also, we don't really know whether the 68xx naming position is for Barts or Cayman, because according to the same website, some of the HD 6000 line MAY BE rebranded from the existing HD 5000 line. So who knows? Plus, ChipHell wasn't always correct: they predicted Barts was going to be 1200 (5D format) ALUs:60 TMUs and 16 ROPs with a 128-bit bus, but today's news is like a palm slapping their face really hard...


----------



## cadaveca (Sep 17, 2010)

Except, of course, that they posted the new info, correcting themselves. The benchmarks don't matter... something as simple as a driver change makes benchmarks useless.

They may be making info up... they might be misled, even... it's really so unimportant. I don't understand why you think the sole source of info posting newer info that contradicts their earlier info is a bad thing?

Anyway, with only a month or so before launch, none of it matters, as the truth will come out very soon.


----------



## cheezburger (Sep 17, 2010)

cadaveca said:


> Except of course, that they posted the new info, correcting themselves. the benchmarks don't matter...something as simple as a driver change makes benchmarks useless.
> 
> they may be making info up...they might be misled, even...it's really so unimportant, I don't understand why you think the sole source of info posting newer info that contradicts thier earlier info, is a bad thing?
> 
> Anyway, with only a month or so before launch, none of it matters, as the truth will come out very soon.



The only thing I dislike about them is that they spread rumours about something that hasn't even been released as an engineering sample, which ends up misleading the general audience. Using Barts in the benchmark instead of Cayman is not a crime; however, if they already knew the sample they used was not Cayman, they shouldn't have told people it was Cayman with a 256-bit bus... but that's all based on IF they knew already.


----------



## OneMoar (Sep 17, 2010)

cheezburger said:


> The only thing I dislike about them is that they spread rumours about something that hasn't even been released as an engineering sample, which ends up misleading the general audience. Using Barts in the benchmark instead of Cayman is not a crime; however, if they already knew the sample they used was not Cayman, they shouldn't have told people it was Cayman with a 256-bit bus... but that's all based on IF they knew already.




SHHH


----------



## cadaveca (Sep 17, 2010)

cheezburger said:


> but that's all based on IF they knew already.


Sure, but it's their reputation, right? Who cares?

Nobody should believe a single thing when it comes to tech rumours, until real, official info comes out, through official channels.

AMD has been playing catch-up since R600 and Phenom I. Both were largely over-hyped, and under-delivered.

All these products are unimportant. They don't really offer anything new...just a bit more added on to what already exists. "Fusion" is where the real future is, and all these products, no matter who is making them, are merely stop-gaps to generate income until they get it RIGHT. And the programming needs work.

To me, it seems that AMD is making the proper moves behind the scenes to prepare for this shift. Since they bought ATI, they have been headed towards a specific goal..and it's not really that close, just yet.

I'm gonna buy a high-end 6-series card. In fact, I'll probably buy 4 or more. But that card isn't even gonna come this year...it doesn't make any sense, business-wise, to do so.

But this 6770, it has to come out. And it's got to be real good. AMD needs to keep nvidia down, and they need a new card to do that. GTX460 is just that good.

In the future, nvidia is screwed in the x86 marketplace. Take a look at their stock value over the past 8 months, and you'll see that investors agree. AMD is down 36% vs. nV's 44% YTD. 

Without 32nm, nobody should expect too much, either. If these cards are even 33% faster than 5-series, AMD has done a good job. If it's more than that...AMD really has killed nV.


The few benches that were shown don't say anything in regards to real-world performance. I'll take this info here today though. I mean really now...AMD's own marketing says it all..."The Future is Fusion". Um, Hello?


----------



## cheezburger (Sep 17, 2010)

cadaveca said:


> Sure, but it's thier reputation, right? Who cares?
> 
> Nobody should believe a single thing when it comes to tech rumours, until real, official info comes out, through official channels.
> 
> ...



Agreed. However, many people in this forum don't have any sense of reason and are easily rolled over by rumours. (Yeah, so much for 480 ALUs for Cayman... with only a 256-bit bus and 32 ROPs...)

Right now, unless NVIDIA can come out with another revolutionary architecture, like AMD is doing at this moment, they can only hope for the 28nm fab as soon as possible, since the GTX 460 is already far larger than Cypress. I don't think they can add any more features to it like AMD did with Cayman/Barts, not until NVIDIA gets rid of those bulky shaders and finally starts over... but if Barts already outperforms the GTX 480 by a 33% margin, I personally doubt NV has any hope on the current 40nm fab...

PS: Hell! Cayman is revealed to be only 10~15% larger than GF104, but GF104 is far outclassed.


----------



## toyo (Sep 17, 2010)

There used to be a time when you had to wait until the very last hour to know how a card would perform... and there were sweet surprises... like the HD 4800 series.

From my point of view, the Radeons have a huge disadvantage with their lack of CUDA support. Maybe supporting OpenCL will pay off, who knows.

And how could AMD let Nvidia get exclusive support from Adobe in the Mercury engine? I can't understand it. It's like they really want to position their cards as good only for gaming. Wake up, AMD.


----------



## Athlon2K15 (Sep 17, 2010)

So AMD is going to release these first rather than releasing their next top dawg GPU?


----------



## cadaveca (Sep 17, 2010)

AthlonX2 said:


> So AMD is going to release these first rather than releasing their next top dawg GPU?



Yeah, seems that way. I mean, that's how they used to do it too...new, smaller chip, on the new process, right?

So, same timeframe, but no new process. This means the new gen won't be all it could have been, but that's because of TSMC, not AMD, and it affects nV just as hard. I find it hard to fault AMD in this situation.


And if my theory on high-end GPU performance is right, they really need Bulldozer before they release a new high-end GPU, and also so that they can release an entire PLATFORM, rather than just a CPU and chipset, and then a GPU.

TSMC threw a big huge wrench in the gpu market, but I can honestly say I saw this coming for years...I have been saying for years that ATI should get away from using TSMC.


Imagine, if AMD had 28nm now, and nV didn't?



nVidia really would have to roll over and die. NO x86, no new fab process...AMD kinda missed out on that one.


----------



## cheezburger (Sep 17, 2010)

cadaveca said:


> Yeah, seems that way. I mean, that's how they used to do it too...new, smaller chip, on the new process, right?
> 
> So, same timeframe, but no new process. This means the new gen won't be all it could have been, but that's because of TSMC, not AMD, and effects nV just as hard. I find it hard to fault AMD in this situation.
> 
> ...


 

Don't worry, Intel is right behind them... they've wanted CUDA for so long... I wouldn't want to imagine Intel acquiring NVIDIA and coming out with a completely steroided Fermi II on a 22nm fab process... it would be hell for AMD.


----------



## Sasqui (Sep 17, 2010)

"Positioned against"...

that implies two things:


It will be similar (or better) performance
It will be similar (or lower) price

Doesn't that mean having only two competitors really *doesn't* keep the price in check?


----------



## cadaveca (Sep 17, 2010)

cheezburger said:


> don't worry, intel is to the back, they want cuda for so long....i wouldn't imagine if intel acquire nvidia and come out a completely steroided fermi II with 22nm fab process.....it will be hell for amd..



I think Intel would rather let AMD smash nV, and then pick up the bits and pieces later, for a lot less cost. AMD would be doing Intel a favor. Plus, I don't think Intel has the capacity for nV GPUs, so they would still be reliant on TSMC.




Sasqui said:


> Doesn't that mean having only two competitors really *doesn't* keep the price in check?




WAIT. You're just figuring this out now?

Anyway, I'm hoping for same price.


----------



## ToTTenTranz (Sep 17, 2010)

Well, this pretty much confirms the change from the old 5D shaders.
They're probably also bumping geometry performance, namely DX11 tessellation, along with the new shaders.

Nonetheless, I have no doubt that Barts will be a whole lot smaller than GF104, and thus cheaper to produce. Besides, since the HD 5830 can be made with a relatively small PCB, I have no doubt this card won't be much bigger than the HD 5770.


I do think they could just cut prices on their current HD 5000 line to stupidly low values (their yields should be sky-high by now) while holding off for the 32/28nm process. nVidia's underperforming Fermi architecture would allow them to do that.


----------



## CDdude55 (Sep 17, 2010)

ToTTenTranz said:


> nVidia's underperforming Fermi architecture would allow them to do that.



My GTX 470s say something completely different.


----------



## 20mmrain (Sep 17, 2010)

Well, the thing I can't help but notice is that the 5770 has 800 stream processors... and this card (supposedly the "6770") has 300 shaders. 

Now, since I don't really get into the lingo of what means what... I just know what's the most powerful at the time and how to overclock it well  

I thought stream processors were what ATI/AMD called their shader cores, correct? 

If I have that right... wouldn't this mean that this has to be a new architecture? Considering that the old stream processors were weaker than NVIDIA's "CUDA" shader cores? And now this card only has 300 compared to the 5770's 800?

If I have this understood correctly... this will be one hell of a series. We might finally be able to adjust shader clocks on ATI cards too!? I just can't wait to see what we have in store for this generation.

I will tell you what, though. Even if this card is meant to go against the GTX 460... the DX11 tessellation on these cards compared to Fermi (if the benchmarks are true) looks like this series will leave Fermi in the dust and nowhere to be seen. 

I will definitely sell my GTX 460s for a pair of these, if not go even higher up the ladder if the price is right.

That's not to mention the 960-shader version of this card. This thing should be crazy as hell! 

But to someone out there who said "we could see this card be on the 5870 level": I hope so for AMD, but I hope not for our sake. Because if that is the case... we are looking at a mid-range card for $400 each and a top card for $1000 or more, easy.


----------



## cheezburger (Sep 17, 2010)

ToTTenTranz said:


> Well, this pretty much confirms the change from the old 5D shaders.
> They're probably also bumping the geometry performance, namely DX11 tesselation, along with the new shaders.
> 
> Nonetheless, I have no doubt that Bart will be a whole lot smaller than GF104, thus cheaper to produce. Besides, since the HD5830 can be made with a relatively small PCB, I have no doubts this card won't be much bigger than the HD5770.
> ...




I doubt it; they may just discontinue it rather than keep it around, as it would be pointless: a card with a poor shader fill rate, having more shaders than a new series that easily outperforms it with fewer shaders/ALUs, and not even playable when it comes to tessellation. Also, the die isn't much smaller/cheaper than Northern Islands, which makes no profit at all as manufacturing continues. If a Barts can outperform the GTX 480 while keeping the price around $259~299, it will leave no room for Evergreen. Don't expect Evergreen to become another G92...



20mmrain said:


> Well the thing I can't help but notice is that the 5770 has 800 stream Processors.... and this Card (supposedly the "6770") has 300 Shaders.
> 
> Now since I don't really get into the lingo of what means what.... I just know what's the most powerful at the time and how to overclock it well
> 
> ...



From what I know, the 4D format is like a core that can send 4 data items in one clock, unlike the previous 5D format that had to be divided into 5 small dedicated shader pipelines (4 simple + 1 complex). This design (4-way complexity arrangement) is done by one shader core rather than splitting the core for multiple purposes. It's more like GDDR5, but inside the shader core. If that's correct, a shader's effective data rate would be "core clock x 4"; if Barts's core frequency is 900 MHz, the shader core would run at 3.6 GT/s (3600 MT/s) effectively.


----------



## TheLaughingMan (Sep 17, 2010)

20mmrain said:


> Well the thing I can't help but notice is that the 5770 has 800 stream Processors.... and this Card (supposedly the "6770") has 300 Shaders.
> 
> Now since I don't really get into the lingo of what means what.... I just know what's the most powerful at the time and how to overclock it well
> 
> ...



Everything you said there about the 6770 is completely wrong, and I am not sure where you got that number. The 67x0 GPUs to be released soon get double the memory bandwidth, double the ROPs (from 16 to 32), and ~20% more streaming processor cores for the XT.

6750 = 800 Streaming Processors             5750 = 720 Streaming Processors
6770 = 960 Streaming Processors             5770 = 800 Streaming Processors


----------



## dir_d (Sep 17, 2010)

Well, if that number is correct and they really did move this card to 4-wide cores, that means the 5770 had 160x5 (800) stream processors and this new card will have 300x4 (1200). This card may very well be the 6870 and be on par with the 5870, if the numbers are correct.

edit... if the shader count stays the same, 200x4 (800) is still a nice bump in performance while keeping the same number of shaders.
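dir_d's numbers are just SIMD-lane multiplication; a quick sketch of the three scenarios he mentions (all figures are the thread's speculation, not confirmed specs):

```python
# Stream processor count = number of shader clusters x lanes per cluster.
# VLIW5 (Evergreen-style) clusters have 5 lanes; the rumored 4D design, 4.
def stream_processors(clusters: int, lanes_per_cluster: int) -> int:
    return clusters * lanes_per_cluster

print(stream_processors(160, 5))  # 800: the HD 5770 read as 160 VLIW5 clusters
print(stream_processors(300, 4))  # 1200: dir_d's 4-wide reading of "300"
print(stream_processors(200, 4))  # 800: his edit, same SP count in 4-wide form
```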


----------



## 20mmrain (Sep 17, 2010)

TheLaughingMan said:


> Everything you said there about the 6770 is completely wrong, and I am not sure where you got that number. The 67x0 GPUs to be released soon get double the memory bandwidth, double the ROPs (from 16 to 32), and ~20% more streaming processor cores for the XT.
> 
> 6750 = 800 Streaming Processors             5750 = 720 Streaming Processors
> 6770 = 960 Streaming Processors             5770 = 800 Streaming Processors



Oops, LOL, that makes much more sense. Sorry, it is late here and I wasn't paying that close attention...

Here is what I saw.....







You see my circled red area... on my screen it looks like 300. That is why I thought something was weird for the next card down to go from 960 shaders to 300 shaders.

LOL goofy move on my part 

*So nevermind I retract my previous ideas and statements "Blush" *

*Still, on another note: 32 ROPs, 800 to 960 shaders, a 256-bit bus... it all looks badass to me. Wish I could better visualize what this all means, but after looking at comparable cards and specs today, it seems what others here have been saying might be true. This card could be very close to the 5850/5870 area.*


----------



## cadaveca (Sep 17, 2010)

Could be that the 6770 simply adds 160 more shaders plus a 256-bit bus, and there is no shader change at all. And to me, that would SUCK!!




It wouldn't be much faster than the current 4890, with DX11 tacked on.



Considering the "competition", there's no reason to expect more than that at all (uh, based on the Cypress dual engine, each engine now has one extra cluster, hello?). New shaders might even kill the 470 if clocked high enough, so I think that might be asking too much... but boy, would it ever be nice.


----------



## cheezburger (Sep 17, 2010)

cadaveca said:


> Could be that 6770 simply adds 160 more shaders, plus 256-bit bus, and there is no shader change at all. And to me, that would SUCK!!
> 
> 
> 
> ...



Then why would AMD go this far and put in double the ROPs and memory bus... if the shaders are still 5D? :shadedshu


----------



## cadaveca (Sep 17, 2010)

Because devs have been complaining about that exact thing for some time. If you go back about 5 years in my "microstutter" posts (yes, I've been looking at that stuff that long), texture performance (texture memory units) has been a big issue. One might even suppose that this is why NV's tech excels so greatly, although its math performance is much lower than AMD's.

I mean, realistically this is what makes the most sense. I think when 32nm existed, the shader change was planned, but because they are stuck on 40nm, the changes they can make are limited... higher-order shaders are going to generate more heat in the shader core, whereas simply enlarging the die and adding more shaders just means you need better overall cooling capacity from the already existing cooler.

This is business, after all, so the biggest impact with the smallest cost works best. New shaders might seem like they'd cost less, but the dev time costs a lot too. It would be far more practical to invest further time in any such changes and make small but effective ones that increase profits.


----------



## Benetanegia (Sep 17, 2010)

cheezburger said:


> then why would amd gone this far and putting double size of rops/ram bus.....if the shader is still 5D:shadedshu



They doubled the ROPs/bus width because the 5770 was already very limited there. Regardless of how much more shader/texture performance the new card has (10%, 20%, 50%...), more ROP power was necessary if a tangible improvement was to be seen. AMD is committed to only using bus widths that are powers of two, instead of intermediate numbers like Nvidia uses, so that's why the 256-bit figure comes in, even though it will probably be overkill on these cards. Don't expect Cayman to have more than 256 bits. It wouldn't be the first time in history that a midrange and a high-end chip shared the same bus width...
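The bandwidth-doubling argument is easy to check with a back-of-the-envelope calculation (a sketch of mine; the 1200 MHz figure is just the rumored XT memory clock from the leaked slide):

```python
# Peak GDDR5 bandwidth: bytes per transfer x effective transfer rate.
# GDDR5 performs 4 data transfers per command-clock cycle.
def gddr5_bandwidth_gbs(bus_bits: int, mem_clock_mhz: int) -> float:
    bytes_per_transfer = bus_bits / 8
    transfers_per_sec = mem_clock_mhz * 4 * 1e6
    return bytes_per_transfer * transfers_per_sec / 1e9

print(gddr5_bandwidth_gbs(128, 1200))  # 76.8  GB/s: Juniper-class 128-bit bus
print(gddr5_bandwidth_gbs(256, 1200))  # 153.6 GB/s: the rumored 256-bit Barts XT
```

Same memory clock, double the bus width, double the bandwidth, which is exactly the "nearly 100% increase" the slide claims.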


----------



## cheezburger (Sep 17, 2010)

Benetanegia said:


> They doubled the ROPs/bus width because the 5770 was already very limited there. Regardless of how much more shader/texture performance the new card has (10%, 20%, 50%...), more ROP power was necessary if a tangible improvement was to be seen. AMD is committed to only using bus widths that are powers of two, instead of intermediate numbers like Nvidia uses, so that's why the 256-bit figure comes in, even though it will probably be overkill on these cards. Don't expect Cayman to have more than 256 bits. It wouldn't be the first time in history that a midrange and a high-end chip shared the same bus width...



Really? Just a few days ago some of you said Barts's ROPs wouldn't go over 16 and that it would stick with a 128-bit bus... now things change. If Cayman is about twice the Barts spec, that makes 64 ROPs... I don't think a 256-bit bus can feed that many ROPs, so again, things will go differently than you think... Plus, faster RAM eats more power, runs hotter, and costs more than slower RAM, so don't get your hopes up for 7 GT/s GDDR5 either, because it would end up needing dual 8-pin PCIe on a card that's merely 32 ROPs/256-bit...

It would be a historic event if both the high end and the mainstream shared the same size bus... and don't tell me about G92... it was never meant to be high end, and wouldn't have been if GT200 hadn't been delayed...

Anyway, just wait and see Cayman's spec.


----------



## halfwaythere (Sep 17, 2010)

Folks, Barts is going to be the next 68xx cards. Cayman is going to be the 6950 and 6970, while Antilles, the dual-GPU card, will be the 6990. Half this thread is filled with misinformation and people talking nonsense.

Anyways new stuff:



> |              | HD 6850   | HD 5770    | HD 5830    | HD 5850     |
> |--------------|-----------|------------|------------|-------------|
> | Codename     | Barts Pro | Juniper XT | Cypress LE | Cypress Pro |
> | Technology   | 40 nm     | 40 nm      | 40 nm      | 40 nm       |
> | Stream Proc. | 1120      | 800        | 1120       | 1440        |
> 
> ...



http://webcache.googleusercontent.c...m/nyhet/12713-fler-detaljer-om-radeon-hd-6850


----------



## bear jesus (Sep 17, 2010)

I think many people (including myself) are still hoping that AMD is not stupid enough to mess up the naming that badly. There is no reason to change what has worked well for years.

Anything to do with the spec of the top-end chip is random guessing, or dreaming in my case. Until AMD releases the official spec, nothing can be said for sure, but I would rather dream up random specs than sit around waiting for the real info, and I think the same applies to most others here.


----------



## halfwaythere (Sep 17, 2010)

Seems pretty logical to me: they want to keep the x7xx naming scheme for the 128-bit parts, since Turks is going to be a tessellation-tweaked Juniper while Barts is a much more advanced Cypress derivative.

For potential buyers this means Barts with Cypress-like performance around the $200 mark, while Turks, an improved Juniper, sits below $150. What's there to complain about?


----------



## Animalpak (Sep 17, 2010)

Nice "new GPU every 6 months" policy.


----------



## bear jesus (Sep 17, 2010)

halfwaythere said:


> Seems pretty logical to me: they want to keep the x7xx naming scheme for the 128 bit parts. And since Turks is going to be a tessellation tweaked Juniper while Barts is a much more advanced Cypress derivate.
> 
> For the potential buyers this means Barts with Cypress like performance around the 200$ mark while Turks, an improved Juniper, below the 150$. Whats there to complain about?



Umm, it could be that I have no idea what is going on with the naming, but I thought people had been throwing around the idea that Barts would get x8xx names, as in 6870 and 6850, and the next level up (Cayman?) would be 6970 and 6950, with the top dual-chip card being a 6990. That's why it made no sense to me why they would change the naming to something like that... I think I'm just confused by all the rumors and false information floating around, as usual before hardware launches.

*edit* I think posting first thing in the morning is not a great idea for me. The real problem is that not knowing how the Cayman chips will be specced is what is confusing me, as normally they have been double the mid-range cards in recent years. If they are not this time around, I guess I can accept that the new naming makes some sense, but if a 5870 beats a 6870 then I would be going back to not understanding the change. I don't even know where everyone is getting these names from; is there a source?


----------



## largon (Sep 17, 2010)

I smell a _burger_ full o' crap here.


----------



## bear jesus (Sep 17, 2010)

largon said:


> I smell a _burger_ full o' crap here.



 I just wish all the 6xxx cards were out sooner so we had the official spec already.


----------



## meran (Sep 17, 2010)

Oh mama, my 8800GT needs to step down. It served me well for 2.5 years.


----------



## caleb (Sep 17, 2010)

Don't you think it's lame to write "it's faster than Nvidia" on such slides?


----------



## TheMailMan78 (Sep 17, 2010)

So what will be the 5850 equivalent? I'm confused by this new naming scheme. 6850? 6950?


----------



## pantherx12 (Sep 17, 2010)

If they do change the naming scheme, I'm going to channel the old-woman attitude of complaining via a letter, haha!

That will show them.

Anywho, I for one am not expecting 5850 performance from this card... mostly because I'd hate to see ATI pushing a $500+ GPU as their top single-GPU card...

But what the hell, I have a job now. Next year, assuming these aren't crap and Bulldozer isn't crap, I'm going to get an all-AMD rig and buy everything new for once as well : ]

aside from heatsinks.


----------



## KainXS (Sep 17, 2010)

largon said:


> I smell a _burger_ full o' crap here.



lol


----------



## yogurt_21 (Sep 17, 2010)

A 20% bump in shaders, OK, but doubling the ROPs? That seems unlikely. It would be awesome, but unlikely. It's far easier to add shaders than it is to add ROPs, *unless* we're looking at a crippled Cypress core here with a new name.


----------



## Paintface (Sep 17, 2010)

All I worry about is the price.

A 5770 goes for $140.
A 5850 goes for $260.

If it scratches 5850 performance, I hope it goes for around $200.

If it is merely a bit faster than the 5770, I hope it isn't a cent more expensive than $140.

I say this because I currently have a 4890 Vapor-X that I bought for $180 a year ago. Barts will be two generations newer; if I can't buy a card that performs close or equal to the 5850 for $200, then I am out of options to upgrade again, since the 5850 follow-up, whatever the name, will more than likely be a $300+ card.


----------



## yogurt_21 (Sep 17, 2010)

Paintface said:


> All i worry about is the price.
> 
> a 5770 goes for 140
> a 5850 goes for 260
> ...



Look for performance between the 5830 and the 5850, i.e. exactly where it's targeted.


----------



## IceCreamBarr (Sep 17, 2010)

*Better performance/W*

Wow, marketing is really stretching here!  Why not say "better performance/length of card", or "better performance than any other blue PCB", or any other comparison that is basically useless unless you own a server farm.  If the power draw is not taxing your PSU, does anyone care how many cycles they get per watt?

Barr


----------



## cheezburger (Sep 17, 2010)

yogurt_21 said:


> 20% bump in shaders ok, doubling the rop's ? that seems unlikely, would be awesome, but unlikely. It's far easier to add shaders than it is to add rop's *unless* we're looking at a cripled cypress core here with a new name.



More shaders don't always provide more performance, unless you're a big fan of the Unreal 3 engine... Adding more shaders is much easier for hardwiring and die design, but it also easily increases die space and generates *MORE HEAT* in the same die area. You'd wonder why a 256 mm^2 RV770 runs so ridiculously hot compared to the 240 mm^2, older-process G94... So don't expect AMD to just increase the shader count like RV670 to RV770. A ridiculous number of shaders doesn't help performance... it's the ROPs, Z-buffer, data bus width, and shader architecture we're talking about. In the extreme case, even giving Cypress 3200 shaders would still not keep pace with GF100; it would just generate more heat and eventually end up as another 2900 XT...


----------



## cadaveca (Sep 17, 2010)

largon said:


> I smell a _burger_ full o' crap here.






That's what happens when people speculate...nobody should be taking any of this seriously.


Wait a minute, I already said that. Funny...


----------



## cheezburger (Sep 17, 2010)

largon said:


> I smell a _burger_ full o' crap here.



I smell an Nvidia fan here too.


----------



## cadaveca (Sep 17, 2010)

I actually smell someone who usually knows what's up. Smells kinda like success...


----------



## NeSeNVi (Sep 17, 2010)

Semi-Lobster said:


> *To be honest, I'm pretty disappointed by the high power consumption.* For their, so far, short existence (starting with the revolutionary 4770), the X700 series have been excellent thanks to their low power consumption which was great for entry level gamers but at 150w, that is putting these cards on par with the 5850 while at the same time, not being as good as the 5850, at that rate, you might as well get a 5850 since the prices will drop once the 6000 series hits store shelves.


This is what I wanted to say after reading this news too. Totally agree.


----------



## Imsochobo (Sep 17, 2010)

CDdude55 said:


> My GTX 470's says something completely different.



Nvidia thinks different; they aren't making money on a big, expensive card that has to compete with cards half as complex...
Sorry mate, it's not a great video card.

It may serve you well on performance, though; if that's what matters, you got what you want.

The Fermi 470 and 480 are rubbish in their current state, but it's a generational change! Much like the 2900.

AMD has done well with efficient designs!



cheezburger said:


> more shader dont  always necessary provide more performance, unless you're a big fan of unreal 3 engine....adding more shader is much easier for hardwiring and die design but also it can easily increase die space and also generate *MORE HEAT* on same die area. you would wonder why a 256mm^2 r770 being so ridiculously hot compare to 240m^2 older process of g94...... so do not expecting amd going to just increase shader number like r670 to r770. ridiculous number of shader don't help performance....it's rops, z buffer, data bus width and shader architect we're talking about. in extreme case even giving 3200 shader to cypress will still unable to keep pace with g100 but generate more heat and eventually end up to be another 2900xt....



Ehm, shaders do a lot.
ATI has found a very magical ratio number; it has proven to be well balanced.
The only thing they actually needed vs. Nvidia was JUST shader power/tessellation power, where Fermi was superior; ATI has more ROPs, if I remember.

256-bit is enough for the 6870.
192-bit would be enough for the 6770, I guess, but yeah, odd memory numbers...

ATI just needs to improve the tessellation stuff. Their new arch may have this; we'll find out with the 6xxx, and for real with the 7xxx.

Excited to see what the future holds!


----------



## CDdude55 (Sep 17, 2010)

Imsochobo said:


> Nvidia think diffrent, they aint getting money for a big ass expensive card that have to compete with half as complex cards...
> Sorry mate, its not a great videocard.
> 
> It may serve you well on performance though, if thats what matters, you got what you want.
> ...



I agree in the sense that they are not very efficient cards compared to what AMD is currently running with.

But yes, on the performance side of things you are really getting a good treat at a nice price.


----------



## wolf (Sep 17, 2010)

Imsochobo said:


> It may serve you well on performance though, if thats what matters, you got what you want.



This statement is pure lol to me. Performance is always what I look at first; all other considerations are secondary, and in that respect GF100 rocks my socks.

I'd rather consider performance first than start with pain-in-the-ass things like power consumption and heat, assuming you've built a good enough rig to handle throwing in high-end cards.


----------



## CDdude55 (Sep 17, 2010)

wolf said:


> this statement is pure lol to me, performance is always what I look at first, all other considerations are secondary, and in that respect GF100 rocks my socks.
> 
> I'd rather consider better performance first than start with pain-in-the-ass things like power consumption and heat. assuming you've built a good enough rig to handle throwing in high end cards.



Exactly.


----------



## Imsochobo (Sep 17, 2010)

CDdude55 said:


> Exactly.



Hehe, performance was my previous goal too.
Now I run a 5850, and I'm borrowing a 2nd one.
I tried the 470; it overheated in my micro-ATX case... and the HDMI sound was horrible...

The 2nd card is a must for me, so ATI is onto something. The only way I see it is that Nvidia will be swallowed by someone, much like ATI, erm, AMD.
But the heat could be solved in some way, I guess. And the noise when watching movies was just not good at all; the 5850 was pretty much spot on for me.
I bought it at launch, and the price now is 33% higher, so I'm a very satisfied customer! I thought I would regret it, but nope!

Nvidia is focusing way too much on CUDA; instead, OpenCL and its performance should be what they go for.
PhysX isn't worth that much, really. I used to have a GeForce in my PC for it, but it got used maybe once every 3rd month, so what's the point?

I just hope OpenCL takes off. Coding for it is quite easy, in fact, so I don't see any don'ts for it.
And we can enjoy the apps on both AMD and Nvidia GPUs!

OpenCL, Fusion, Sandy, DX11: lots of things in motion now that benefit us all.
Anyway, back on track here... ATI is really pushing out quickly! I think this may be because of the problems with artifacts on some systems with the HD 5xxx.
The mouse pointer with multi-display, for example. I have the problem in StarCraft 2 every now and then; not a biggie, it's just a green line, and after a minute it returns to normal.


----------



## laszlo (Sep 17, 2010)

hmmm i smell a bart fart?


----------



## TheLaughingMan (Sep 17, 2010)

halfwaythere said:


> Folks Barts is going to be the next 68xx cards. Cayman is going to be 6950 and 6970 while Antilles, the dual gpu card, will be 6990. Half this thread is filled with miss-information and people talking non-sense.
> 
> Anyways new stuff:
> 
> ...



This article is just plain wrong. I am going to go with that.


----------



## wolf (Sep 17, 2010)

TheLaughingMan said:


> This article is just plan wrong.  I am going to go with that.



Sounds like it to me. It doesn't make any sense to deviate from their current naming scheme; exactly where the chips fit into it makes little difference, though.


----------



## cheezburger (Sep 18, 2010)

Imsochobo said:


> Amd have done well with effecient designs!
> 
> 
> 
> ...



How do shaders do well in performance? Sorry to disappoint you, but most modern games are more ROP/Z-buffer hungry than shader hungry. Like I mentioned before, the only game engine that demands shader power the way AMD provides it is Unreal 3, plus some crappy console games like Halo. Crysis/STALKER/Modern Warfare require more ROP/data bus power than AMD's 5D shaders offer. Of course you look at the 4xxx series because they are cheap, but they don't do very well in native PC games and only take advantage in some console ports (HAWX is also an Xbox port... but this will end up in another PC/console war, so I'd better stop here). Also, you won't get any extremely high 100+ frame rates at the highest settings; you're stuck with a "reasonable" frame rate in the 30s due to the lack of ROPs in RV770 and its poor data rate per ROP. Don't tell me you can just keep pushing the core frequency and GDDR5 clocks up and up.

Big cards don't make profit? Then what makes profit? People who don't use CAD or play video games wouldn't even bother installing a graphics card. Console gamers won't buy a graphics card either, as their PCs are not built by themselves and aren't used for gaming (duh, they are console gamers...). Entry-level gamers would rather have a laptop and play The Sims and other casual games. Sorry sir, Intel took that segment completely... The only market left for both NV and AMD is high-end gamers and professional users. Would you spend $200 on a card that only works great in console ports, or $400 on a card that handles any game? You can say whatever you want about how shitty GF100 is, but it does a pretty good job squashing Cypress in many games, despite drawing more power. So what if these gamers don't care about polar bears and global warming! Most people wouldn't care about this planet even if it died... anyway...

You guys keep talking about tessellation, but you have no idea about the structural design. Unlike Fermi's tessellation, which was integrated into its CUDA cores, AMD's design sits on the ROPs (look at the die picture...)!! The trade-off for this opposite design is Cypress's smaller die. The only way to improve this is to increase the ROPs or redesign the tessellation engine. The data bus can also affect tessellation performance. As a result, Cypress wasn't even 1/10 of GF100 in the Heaven benchmark. How do you improve tessellation without increasing something or doing a major redesign? Keeping the R600 architecture would be chronic suicide...

Again, Cayman will be 512-bit with 64 ROPs and cost $600+, whether you like it or not...


----------



## largon (Sep 19, 2010)

One fact about Cayman: 
There's at least one person here who will be grossly disappointed in it due to unreasonable expectations.


----------



## bear jesus (Sep 19, 2010)

largon said:


> One fact about Cayman:
> There's at least one person here who will be grossly disappointed in it due to unreasonable expectations.



Very true 

I just hope it is not me. *wishes really hard for 256-bit GDDR5 at 1600 MHz (6400 MHz effective), 64 ROPs, 96 TMUs, 1920 stream processors and an 850 MHz core*


----------



## cheezburger (Sep 19, 2010)

largon said:


> One fact about Cayman:
> There's at least one person here who will be grossly disappointed in it due to unreasonable expectations.







bear jesus said:


> Very true
> 
> I just hope it is not me *wishes really hard for 256 bit gddr5 at 1600mhz (6400mhz effective), 64 rop's, 96 tmu's, 1920 stream processors and 850mhz core*



It was reported that the bench photos of the HD 68xx from Chiphell were actually Barts XT, renamed from the previous codename HD 6770, and the photos were dated late July to early August, when they didn't even have Cayman yet. So stop saying that Cayman will have a 256-bit bus, because we don't even know what Cayman will bring us. The speculation of 2560:160:32, a 256-bit bus and 6.4 GT/s GDDR5 is completely wrong... like the earlier speculation about Barts, which was also full of false BS (1600:80:16 + a 128-bit bus... my ass).

You cannot increase shaders like RV670 to RV770 anymore, but some idiots just don't get it... shaders don't do that much in native PC games, especially AMD's inefficient 4+1 shaders. No matter how many you put on the die, it won't work as well as you think, and then you people will start whining about how NV does dirty tricks in the competition, blah blah blah. They just do a much better job by putting in everything they can, that's all... You never explain why AMD's cards can perform so close to relatively higher-priced NV cards in console ports. That's because most console ports come from the Xbox, and Xbox games favor AMD's 5D shaders, with overdone lighting effects on texture surfaces, poor detail and limited frame rates... What's bang for buck? I never play console ports, and these low-quality games are the reason AMD cards *sell like hot cakes*. Console games will destroy future technological invention, and this will happen soon enough.


----------



## bear jesus (Sep 19, 2010)

cheezburger said:


> it was reported that the benches photo of HD68xx from chiphell was actually  barts xt rather than cayman that was rename from previous coded name hd 6770 and the photo was date in late july to early august . which they didn't even have cayman yet back then so stop saying that cayman will be 256bit bus because we don't even know what cayman will bring to us. some speculation of 2560:160:32, 256bit bus and 6.4GT GDDR5 ram is complete wrong...like some speculation of barts earlier which is also full of false BS. (1600:80:16 + 128bit bus...my ass) you cannot increase shader like what r670 to r770 anymore but some idiot just don't get it...



I never said it would be any particular size or spec; I just keep saying I am hoping, wishing, or dreaming of random specs, and basing those wishes and dreams on the fact that it could be double the Barts spec listed here, since for multiple generations the high end has basically been double the mid-range. Although I admit I copy-pasted from the wrong post of mine; it should have said this:

"I just hope it is not me *wishes really hard for 512-bit GDDR5 at 1600 MHz (6400 MHz effective), 64 ROPs, 96 TMUs, 1920 stream processors and an 850 MHz core*"
And that is hardly serious; there is no logical reason to use 1600 MHz GDDR5 with a 512-bit bus unless the core is so crazy powerful it would need that much bandwidth, and I very much doubt that.


----------



## bear jesus (Sep 19, 2010)

cheezburger said:


> console game will destroy the future technology invention and this will happen soon enough.



I have to ask, what is it with you and consoles and console games? I'm not a fan of consoles and don't own any, but I don't complain about it.

There are many, many, many failed PC games with low-quality everything; does that mean that PC gaming is killing PC gaming and destroying the future of technology?

Just curious how relevant anything about consoles is to a thread about an upcoming GPU's spec. (Not trying to be an ass or anything, just curious.)


----------



## CDdude55 (Sep 19, 2010)

largon said:


> One fact about Cayman:
> There's at least one person here who will be grossly disappointed in it due to unreasonable expectations.



Agreed.


----------



## wolf (Sep 19, 2010)

CDdude55 said:


> Agreed.



That's for sure... it's better to have modest hopes and be pleasantly surprised by a card than to pull awesome numbers from nowhere and get disappointed when it doesn't happen, awesome though they may be.

Side note, CDdude: you're eligible for a custom title, bro.


----------



## cheezburger (Sep 19, 2010)

bear jesus said:


> I have to ask what is it with you and consoles and console games, im not a fan of consoles and don't own any but i don't complain about it
> 
> There is many many many fail pc games with low quality everything, does that mean that pc gaming is killing pc gaming and is destroying the future of technology?
> 
> Just curious how relative anything about consoles is to a thread about an upcoming gpu's spec.(not trying to be an ass or anything just curious)



Console games taking over the market didn't just happen yesterday. Since Crysis we haven't seen any hardware-killing title for 3 years... yes, three years! Heavyweight titles are what push hardware progression further. From Doom 3 and F.E.A.R. to Crysis, they drove much legendary hardware, such as the Athlon X2, NV43, Core 2 Duo and G80, and pushed the technology far beyond. That is, until console ports came out and started taking over via the casual market, who own a f***ing Dell PC, destroying hardcore gaming and the future development of hardware. Maybe there is only 1% of high-end power users compared to 99% average Joes, but that 1% is what pushed the technology we know today.

Console ports are why average users don't upgrade their parts, or buy cheap GPUs that don't perform better. For example, in a low-quality console title, a sub-$200 AMD card (RV770) gets 100 fps while a $400 NV card (GT200) offers 200+ fps. But here's the problem: average people will not see the difference in that performance gap and are satisfied with a minimum "playable" framerate. As a result, AMD cards sell better, because the average casual gamer doesn't need a high-end GPU, is happy with any fps above a "reasonable" 30, and doesn't need a $400 card that can push 200+ fps. The result is high-end technology going backward... game hardware requirements are totally *NO* different from 4 years ago. Our technology has stayed the same for 4~5 years!! And these average idiots are what caused it. Denying Nvidia, and denying any possibility of Cayman/Barts/new architectures, is denying future invention. You also deny your own future! :shadedshu


----------



## CDdude55 (Sep 19, 2010)

cheezburger said:


> console game took over the market is not just happen yesterday. since crysis we haven't see any hardware killing title for 3 years...yes three years! heavy weight title is the reason that push hardware progression further.  from doom 3, fear to crysis they created many of legendary hardware such as athlonx2, nv43, core 2 duo and g80. it push the technology far beyond. this is until console migration came out and start  taking over on these casual market who own a f***ing dell pc and destroy hardcore gaming and future development of hardware. maybe there are only 1% of high end power user compare to 99% general average joe. that 1% are what push what technology we know today.
> 
> console migration is why make average user not to upgrade their part or buy cheap gpu that does not perform better. for example a low graphic quality  console title will make a sub 200 bulks amd card(r770) have 100 fps while a $400 nv's card(gt200) offering 200+fps. however problem comes. average people will not see the dfference in performance gap and rather satisfy on minimum "playable" framerate. which result amd card sells better because average casaul gamer don't need high end gpu and enjoy fps that is higher than 30fps*reasonable* "performance" and don't need $400 dollor card than can push 200+fps. result high end technology going backward... we see the game hardware requirement is totally *NO* different from 4 years ago. our technology had stay the same for 4~5 years!! and these average idiot is what cause it. denied nvidia and denied any possibility of cayman/barts/new architecture is denied future invention. you also denied your own future! :shadedshu



You're right to an extent, first off there have been some pretty heavy hardware straining games out for PC int he past three years like Cryostasis, S.T.A.L.K.E.R. and Metro 2033 to name a few.I do think that games these days aren't being tailor made for the PC and are instead directly ported to the PC while leaving behind essentials that are important to us PC gamers. I don't think a game could ''push out new technology'' as you were stating, companies don't go ''OMG F.E.A.R. is coming out soon guys, time to make a chip that can run it!1!!!1'', now of course Intel does pay some of them to feature that whole ''Plays great on Core i7!'' logo in the games (like Crysis), but of course that doesn't mean it's tailor made for that specific game. I think the problem is in the developers themselves, console make more money for them due to the bigger base of people, making a PC version of that game is a total after thought these days. You go for the bigger fish first and then throw the line in later for the smaller ones. No matter where gaming is, technology will always move forward, whether it be a big or littler step, no matter what there is always some kind of ''innovation'' happening. Consoles aren't the problem, it's the developers that don't take the time out to optimize and make a proper PC game.

Also, what someone needs or wants for gaming depends on the person. Are you really surprised that it's the mainstream cards, computers and hardware that sell more? That's what those companies focus on, because that's where the most profit is; they aren't focused on that tiny percentage that is us. Whether or not someone needs a high-end GPU is all choice: does your average gamer need a 5970 with an overclocked i7? Do they need 2x GTX 480s? The needs of an ''average joe PC gamer'' are vastly different from ours, the small percentage. Devs see that and realize they can make money off those people by dumbing down our games, leaving us ''hardcore'' gamers and enthusiasts shunned. And why not shun us? We barely make them any money anyway; most of us spend more money on our systems than we'll ever spend on their games. The crappiest systems and parts make the most profit; the uninformed make the most profit for them.


----------



## wahdangun (Sep 20, 2010)

cheezburger said:


> it was reported that the benchmark photos of the HD 68xx from ChipHell were actually Barts XT rather than Cayman, renamed from the previous codename HD 6770, and the photos were dated late July to early August. they didn't even have Cayman back then, so stop saying that Cayman will have a 256-bit bus, because we don't even know what Cayman will bring us. the speculation of 2560:160:32, a 256-bit bus and 6.4 GT/s GDDR5 is completely wrong... like some of the earlier Barts speculation, which was also full of false BS (1600:80:16 + a 128-bit bus... my ass). you cannot keep scaling shaders like RV670 to RV770 anymore, but some idiots just don't get it... shaders don't do that much in native PC games, especially AMD's inefficient 4+1 shader; no matter how many you put on the die it won't work as well as you think, and then you people will start whining about how NV does dirty tricks in competition, blah blah blah. they just do much better by putting in everything they can, that's all... you never explain why AMD's cards can perform so close to relatively higher-priced NV cards in console ports. that's because most console ports come from the Xbox, and Xbox games favor AMD's 5D shader, with overdosed lighting effects on texture surfaces, poor detail and limited framerates... what's the bang for the buck? I never play console-port titles, and these low-quality games are the reason why AMD cards *sell like hot cakes*. console games will destroy future technology invention, and this will happen soon enough.



what the hell are you saying? do you know why AMD chose the 5D shader?

it's because AMD can fit in more shader processors, more efficiently, than Nvidia's big shaders. and if NATIVE PC games were crap on ATI, then why oh why does Crysis run superbly on ATI compared to its Nvidia counterpart?

and btw, I don't want to go back to when everything was EXPENSIVE; heck, I even remember seeing a P3 800 MHz cost a whopping $1000. but I want devs to push the hardware more. we want another Crysis.


----------



## cheezburger (Sep 21, 2010)

wahdangun said:


> what the hell are you saying? do you know why AMD chose the 5D shader?
> 
> it's because AMD can fit in more shader processors, more efficiently, than Nvidia's big shaders. and if NATIVE PC games were crap on ATI, then why oh why does Crysis run superbly on ATI compared to its Nvidia counterpart?
> 
> and btw, I don't want to go back to when everything was EXPENSIVE; heck, I even remember seeing a P3 800 MHz cost a whopping $1000. but I want devs to push the hardware more. we want another Crysis.



Same on the 5D shader: why didn't AMD go with 2 complex + 3 simple rather than 4 simple + only one complex? because AMD already optimizes for console ports, which tend to use simple shader instructions in their game engines. they are 5D alright, but most of them (the 4 simple units) become useless against complex instructions and more flexible code (such as PhysX or OpenCL), which leaves only the 1 complex port functional during gameplay. most native PC games use far more complex code than consoles, while Nvidia's BIG shaders are more flexible than AMD's 5D in every way. and about Crysis: under most settings even an HD 4890 has problems outpacing a 9800 GTX+ in benchmarks (do not bring up the Vapor-X version, that so-called 1.2 GHz super-overclocked edition that beats a stock-clocked GTX 260... don't give me that shit...). AMD loses every native PC title and wins only on console ports!! AMD's market share is nearly equal to console-port sales every year; that's why AMD was planning to stay with mid-range cards and wait for consoles to move to the next step. that's where most people prefer "the most profit spot".

and yeah, without that $1000 P3 800 MHz ten years ago you wouldn't even have had a P3 400 MHz at a cheaper price, and therefore you wouldn't have any powerful processor like Core 2, or a powerful GPU that can play decent graphics like Crysis. maybe your PC is a 486 and you play fubby island every day, I presume?


----------



## Frick (Sep 21, 2010)

Do you seriously want those times back? 

Because I just read your post and you're almost delusional.


----------



## wahdangun (Sep 21, 2010)

ok, first of all, it doesn't matter whether a game is a console port or not; the real deal is how the devs code their games. even if a game is a console port from the Xbox, that doesn't guarantee it runs better on ATI, same as native PC games.

just look at Crysis:







so even a stock HD 4870 beats the GTX 260.

or look at HAWX:






even though it's a console port, it's still faster on the Nvidia card


----------



## dalelaroy (Sep 21, 2010)

*Swag*



bear jesus said:


> umm, it could be that I have no idea what is going on with the naming, but I thought people had been throwing around the idea that Barts would get x8xx names, as in 6870 and 6850, and the next level up (Cayman?) would be 6970 and 6950, with the top dual-chip card being a 6990. that's why it made no sense to me why they would change the naming to something like that... I think I'm just confused by all the rumors and false information floating around, as usual before hardware launches.
> 
> *edit* I think posting first thing in the morning is not a great idea for me. the problem is that not knowing how the Cayman chips will be spec'd is what is really confusing me, as they have normally been double the mid-range cards in recent years. if they are not this time around, I guess I can accept that the new naming makes some sense; but if a 5870 beats a 6870, then I would be back to not understanding the change. I don't even know where everyone is getting these names from, is there a source?



I do not believe these specs are real. My best guess is that, with 32nm enabling 60% more transistors on the same die area as 40nm, Barts started out as a GPU with 60% more shader clusters than Juniper, and Cayman 60% more shader clusters than Cypress. With the shift from 4 simple + 1 complex shader clusters in Evergreen to 4 moderate-complexity shader clusters in Northern Islands, Cayman had 2048 shaders versus Barts' 1024 shaders. This change, together with the tessellation improvements, resulted in the die size of Cayman growing to about 400mm2 at 32nm. With the cancellation of 32nm, NI had to be implemented at 40nm, which would have resulted in NI being over 600mm2. This wouldn't be a problem for a single GPU, but would have made a dual-GPU variant of the high-end Cayman too hot. Thus Cayman was reduced from 2048 shaders to 1280 shaders, but Barts remained at 1024 shaders. After all, Barts was 80% of Cayman, the same ratio as the 40nm 4770 versus the 55nm 48xx (RV770), and the 32nm 5790 versus the 40nm 58xx (Cypress).

With Barts being 80% of Cayman, which is just south of 400mm2, Barts is nearly the same die size as Cypress. Best guess is that Barts is about 95% of the die size of Cypress. Originally Cayman LE was to replace the low-yield Radeon HD 5870, with the highest-binning Barts having 14 of 16 execution blocks active and clocked at 900 MHz, making it tolerant of up to two defects, and sufficiently high yielding despite its high clock rate. The performance per shader of Barts was 1.5x to 1.8x the performance per shader of Cypress depending on the application, with Barts showing the smallest improvement where Cypress is strongest relative to GF100, and the largest improvement where Cypress is weakest relative to GF100. Together with the bump to 900 MHz, the original Barts XT (Radeon HD 6770 with 896 shaders @ 900 MHz) would have delivered from 1.16 to 1.39 times the performance of the Radeon HD 5850 it would have been replacing.

Turks would have 512 shaders versus Barts XT's 896, providing the same shader ratio as the GF106 versus the GeForce GTX 460. Then nVidia threw a curve ball: GTS 450 would ship at 783 MHz versus just 675 MHz for GTX 460. Since Turks can't ship at more than 900 MHz, the clock of the Radeon HD 6770 had to be adjusted accordingly to maintain the same performance ratio between the Radeon HD 6670 and the Radeon HD 6770 as between GTS 450 and GTX 460. Thus the clock of the Radeon HD 6770 was dropped to 775 MHz, reducing it to between 0.997 and 1.197 times the performance of the Radeon HD 5850, but making room for a Barts core with 960 active shaders at 900 MHz to replace the Radeon HD 5870. This new Barts XT would have from 0.953 to 1.143 times the performance of the Radeon HD 5870, thus making the Cayman LE redundant. Thus Barts XT became the Radeon HD 6830, or at least this potential name change was discussed and leaked.
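The performance ratios quoted above follow from a simple shaders × clock × per-shader-efficiency model. A minimal sketch of that arithmetic, assuming the public reference specs for the comparison cards (HD 5850: 1440 shaders @ 725 MHz; HD 5870: 1600 shaders @ 850 MHz) and using the poster's own 1.5x–1.8x per-shader efficiency estimate:

```python
# Throughput model: performance is proportional to shaders * clock * efficiency.
# Reference cards (public specs): HD 5850 = 1440 shaders @ 725 MHz,
# HD 5870 = 1600 shaders @ 850 MHz. The 1.5-1.8x efficiency multiplier for
# Barts versus Cypress is the poster's estimate, not a confirmed figure.

def perf_ratio(shaders, mhz, ref_shaders, ref_mhz, efficiency):
    """Projected performance relative to a reference Cypress-based card."""
    return (shaders * mhz * efficiency) / (ref_shaders * ref_mhz)

for eff in (1.5, 1.8):
    # Original Barts XT plan: 896 shaders @ 900 MHz, versus the HD 5850
    print(round(perf_ratio(896, 900, 1440, 725, eff), 3))
    # Same part downclocked to 775 MHz, versus the HD 5850
    print(round(perf_ratio(896, 775, 1440, 725, eff), 3))
    # New Barts XT: 960 shaders @ 900 MHz, versus the HD 5870
    print(round(perf_ratio(960, 900, 1600, 850, eff), 3))
```

Running this reproduces the ranges in the post: roughly 1.16–1.39x of the HD 5850 at 900 MHz, 0.997–1.197x at 775 MHz, and 0.953–1.143x of the HD 5870 for the 960-shader bin.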

At the same time, it was realized that, while the yield of a fully functional Cayman part would be too low to justify launching a fully functional part single GPU card, dual GPU cards are sold in low enough volume that fully functional GPUs can be used in the dual GPU card. Thus the change from the Radeon HD 6970 to the Radeon HD 6990. Since the high end dual GPU card would have two fully functional GPUs, while the Radeon HD 6870 would have two execution blocks disabled, it didn't make sense to call the dual GPU card a Radeon HD 6970.

Additionally, realizing that the 40nm Barts die would not be significantly smaller than the Cypress die, ATI continued development of the Cypress-derived 1280-shader Radeon HD 5790 on 40nm to meet excess demand for Radeon HD 5830-class GPUs. However, the performance of the Radeon HD 5790 would have to be bumped up if it were to supplement the Radeon HD 6750 supply instead of the Radeon HD 5830 supply, reducing or even negating the need to run excess Barts wafers to meet Radeon HD 6750 demand. So ATI decided to rename the Radeon HD 5790 to the Radeon HD 5840, and to introduce a lower-binning Radeon HD 5820 to improve yields.

Thus the rumors for the name changes. The Barts XT will be a 15 execution block part instead of a 14 execution block part, and possibly assume the name of the Radeon HD 6830.
Antilles will have all 20 execution blocks active instead of just 18, and be called the Radeon HD 6990. And what was formerly going to be called the Radeon HD 5790 is going to ship as the Radeon HD 5820 and Radeon HD 5840, even as the original 58xx series is replaced by Barts.

Introductory pricing should be:
Barts XT @ $299
Barts Pro @ $219
Barts LE @ $179
Radeon HD 5840 @ $179
Radeon HD 5820 @ $149


----------



## cheezburger (Sep 21, 2010)

wahdangun said:


> ok, first of all, it doesn't matter whether a game is a console port or not; the real deal is how the devs code their games. even if a game is a console port from the Xbox, that doesn't guarantee it runs better on ATI, same as native PC games.
> 
> just look at Crysis:
> 
> ...



I call BS! in a previous review the NV GT200 gained a massive 40% lead over the HD 4890 in Crysis, but that was on an Nvidia 780 chipset. on an Intel chipset they slow down significantly, which makes it look like the HD 4890 takes the advantage over NV (Intel was f*** on NV for a while, so no surprise at such low performance on the i5/i7 platform). HAWX is an AMD-branded title and a console port, so I'm not surprised AMD would take such a lead...


----------



## largon (Sep 21, 2010)

(Referring to *CDdude55*'s post, which for some reason, got deleted.)
I'd say he (cheezburger) is in the right place, kinda, but he has _a lot_ of reading to do before his posts are worth reading, as at the moment most of what he writes is just horribly inaccurate or blatantly wrong.


----------



## btarunr (Sep 21, 2010)

Alright people, stay closer to the topic. I allow a broad scope for discussion because often interesting things come out of it. Bickering is not one of them.


----------



## wahdangun (Sep 22, 2010)

cheezburger said:


> I call BS! in a previous review the NV GT200 gained a massive 40% lead over the HD 4890 in Crysis, but that was on an Nvidia 780 chipset. on an Intel chipset they slow down significantly, which makes it look like the HD 4890 takes the advantage over NV (Intel was f*** on NV for a while, so no surprise at such low performance on the i5/i7 platform). HAWX is an AMD-branded title and a console port, so I'm not surprised AMD would take such a lead...



are you being sarcastic?




btw, I hope Cayman doesn't turn out to be a power-hungry monster like Fermi. and when exactly does Cayman get released? is it around October too?


----------



## pantherx12 (Sep 22, 2010)

Cheezburger, I imagine if it had a 40% lead then there could have been some driver fiddling to make the game run faster, rather than a fair test.

Because I had a 9800 GT (yes, I know it's not a GTX), an Asus Matrix edition, and well, it just didn't get close to my 4890 at all. lol

Especially when I ran my 4890 at 1 GHz on stock volts


----------



## wolf (Sep 22, 2010)

cheezburger said:


> ...in a previous review the NV GT200 gained a massive 40% lead over the HD 4890 in Crysis...



GT200 is a bit vague... this could mean anything from a GTX 260 192sp model right up to a GTX 285.



pantherx12 said:


> ...Because I had a 9800 GT (yes, I know it's not a GTX), an Asus Matrix edition, and well, it just didn't get close to my 4890 at all. lol...



A 9800 GT in reality is a fair bit behind a 4890; even a 9800 GTX+/GTS 250 is well behind.

-------------------

If Barts XT is as fast as or faster than a 5850, they have a winner on their hands IMO.


----------



## WarEagleAU (Sep 22, 2010)

Sounds nice, and it looks to be a refresh of Juniper.


----------

