Thursday, September 16th 2010
AMD "Barts" GPU Detailed Specifications Surface
Barely a week after pictures of AMD's "Barts" prototype surfaced, a specifications sheet has followed. The all-important slide from AMD's presentation to its add-in board partners made it to sections of the Chinese media. "Barts" is a successor to "Juniper", on which the Radeon HD 5750 and HD 5770 are based. The specs sheet reveals that while the GPU does indeed look physically larger, there are other factors that account for its size:
Memory Controller
Barts has a 256-bit wide memory interface, which significantly increases its pin count and package size. The "Pro" and "XT" variants (which will go on to become the HD 6x50 and HD 6x70, respectively) have memory clocked at 1000 MHz and 1200 MHz, respectively, which works out to nearly a 100% increase in memory bandwidth over Juniper.
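As a rough sanity check of that figure, here is a minimal Python sketch of the GDDR5 bandwidth arithmetic; it assumes the clocks from the slide and the HD 5770's 1200 MHz / 128-bit configuration as the Juniper reference point:

```python
# Back-of-the-envelope GDDR5 bandwidth comparison (assumed clocks from the slide).
# GDDR5 transfers 4 bits per pin per clock, so effective rate = 4 x memory clock.

def bandwidth_gbs(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    effective_rate = mem_clock_mhz * 4                  # MT/s per pin
    return effective_rate * bus_width_bits / 8 / 1000   # MB/s -> GB/s

juniper_xt = bandwidth_gbs(1200, 128)   # Radeon HD 5770: ~76.8 GB/s
barts_pro  = bandwidth_gbs(1000, 256)   # ~128.0 GB/s
barts_xt   = bandwidth_gbs(1200, 256)   # ~153.6 GB/s, i.e. double the HD 5770

print(f"HD 5770: {juniper_xt:.1f} GB/s, Barts Pro: {barts_pro:.1f} GB/s, "
      f"Barts XT: {barts_xt:.1f} GB/s")
```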
Tiny increase in SIMD count, but major restructuring
Compared to Juniper, the physical stream processor count increases by only 20%: the XT variant has 960 stream processors, while the Pro variant has 800. AMD specifically mentions the SIMD block count (10 enabled for the Pro, 12 enabled for the XT). The slide also says the GPU is based on the "Cypress Dual Engine architecture", meaning these 10 and 12 SIMDs will be spread across two blocks of 5 (Pro) or 6 (XT) SIMDs each, just as Cypress had two blocks of 10 SIMDs each.
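Those stream processor figures line up with AMD's existing VLIW5 layout, in which each SIMD holds 16 five-wide thread processors (80 stream processors per SIMD). A quick sanity check, assuming that layout carries over to Barts:

```python
# SIMD count -> stream processor count, assuming the Cypress/Juniper VLIW5
# layout of 16 thread processors x 5 ALUs per SIMD carries over to Barts.
SP_PER_SIMD = 16 * 5   # 80 stream processors per SIMD

for name, simds in [("Barts Pro", 10), ("Barts XT", 12), ("Cypress XT", 20)]:
    print(f"{name}: {simds} SIMDs -> {simds * SP_PER_SIMD} stream processors")

# Barts Pro: 800, Barts XT: 960, Cypress XT (HD 5870): 1600 -- matching the slide.
```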
Other components
The raster operations unit (ROP) count has been doubled to 32, while TMUs stand at 40 for the Pro and 48 for the XT.
The design methodology is remarkably simple. Juniper-based graphics cards already carry eight memory chips to reach 1 GB using market-popular 1 Gbit GDDR5 chips, so why not spread those eight chips across a 256-bit wide memory interface and double the memory bandwidth? The increased ROP count, coupled with an up to 20% increase in shader compute power, gives Barts the competitive edge it needs to face NVIDIA's reinvigorated GeForce 400 series after the introduction of the GeForce GTX 460. As for power draw, AMD projects the Pro variant to draw less than 150 W, with the XT drawing "over" 150 W.
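Put in numbers, the memory-layout trick looks something like the sketch below. It assumes standard 1 Gbit GDDR5 devices with a 32-bit interface (on Juniper's 128-bit bus the same eight chips would run in a narrower paired mode), which is the usual arrangement rather than anything confirmed by the slide:

```python
# The same eight 1 Gbit GDDR5 chips that give a 1 GB frame buffer can each
# drive their full 32-bit interface, yielding a 256-bit bus on Barts.
CHIPS = 8
CHIP_DENSITY_GBIT = 1
CHIP_IO_WIDTH_BITS = 32        # a GDDR5 device exposes a 32-bit interface

capacity_gb = CHIPS * CHIP_DENSITY_GBIT / 8   # 8 Gbit -> 1.0 GB
bus_width   = CHIPS * CHIP_IO_WIDTH_BITS      # 8 x 32-bit -> 256-bit

print(f"{CHIPS} chips -> {capacity_gb:.0f} GB over a {bus_width}-bit bus")
```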
Market Positioning
AMD doesn't have huge expectations from this GPU. Its task is cut out: to compete with the GeForce GTX 460 768 MB and 1 GB models. While NVIDIA carves out its variants by memory amount, AMD's are differentiated by clock speeds and SIMD counts. It should then be fairly obvious what these GPUs' pricing will look like.
When?
Usually when AMD gives out such a presentation to its AIB partners, a market release is about 3 months away.
Source:
ChipHell
110 Comments on AMD "Barts" GPU Detailed Specifications Surface
ATI's 4+1 shader design might now be 2+2. We might see far higher GPU utilization, and the earlier rumour of a vastly superior "ultra-threading dispatch processor" seems to point in this direction.
Looking into the past... the 4770 was basically a 3870, and the 5770 was basically a 4890. So this 6770 should land somewhere around 5850-to-5870 performance at the very least, if done right.
So much for 7 GT/s GDDR5 on a 256-bit bus / 1920 shaders / 120 TMUs / 32 ROPs priced at $299. LOL, Cayman is not going to have 480 ALUs (or 1920 shaders), even in a 4D arrangement. Those ALUs are too costly and eat up huge die space; it would end up like Fermi.
6.4 GT/s GDDR5 also eats more power than lower-frequency RAM. It would be stupid for AMD to make that move....
I think AMD may skip the "6600" series, as it's too close to old GeForce card naming; maybe they will go with 64x0/65x0 instead.
Late next month real details should be out, so I'm more than happy to wait and see what they bring to the table...
But I'm still sitting here waiting for the Crosshair IV Extreme for my 1090T, so I'll also wait for next spring and the high-end cards before making any purchases, no matter how good these cards turn out to be...
For that reason alone, I wouldn't put it past them to go outside long-standing naming conventions... You could even say that now that they are AMD as a whole, and not ATI/AMD, anything is possible...
x900- dual GPU setup/enthusiast
x800- high end/professional
x700- performance
x600- mainstream
x500~x300- budget
This is what happens when GPU-Z can't properly read a 9600 GT.
Do you think Cayman is going to be 256-bit because of a GPU-Z error? If Barts is 256-bit and half of Cayman's spec, then there's no reason Cayman can't have 64 ROPs and a 512-bit bus.
ChipHell has been a pretty reliable source in the past.
They may be making info up... they might be misled, even... it's really so unimportant. I don't understand why you think the sole source of info posting newer info that contradicts their earlier info is a bad thing?
Anyway, with only a month or so before launch, none of it matters, as the truth will come out very soon.
Nobody should believe a single thing when it comes to tech rumours until real, official info comes out through official channels.
AMD has been playing catch-up since R600 and Phenom I. Both were largely over-hyped, and under-delivered.
All these products are unimportant. They don't really offer anything new...just a bit more added on to what already exists. "Fusion" is where the real future is, and all these products, no matter who is making them, are merely stop-gaps to generate income until they get it RIGHT. And the programming needs work.
To me, it seems that AMD is making the proper moves behind the scenes to prepare for this shift. Since they bought ATI, they have been headed towards a specific goal... and it's not really that close, just yet.
I'm gonna buy a high-end 6-series card. In fact, I'll probably buy 4 or more. But that card isn't even gonna come this year...it doesn't make any sense, business-wise, to do so.
But this 6770, it has to come out. And it's got to be real good. AMD needs to keep nvidia down, and they need a new card to do that. GTX460 is just that good.
In the future, nvidia is screwed in the x86 marketplace. Take a look at their stock value over the past 8 months, and you'll see that investors agree. AMD is down 36% vs nV's 44% YTD.
Without 32nm, nobody should expect too much, either. If these cards are even 33% faster than 5-series, AMD has done a good job. If it's more than that...AMD really has killed nV.
The few benches that were shown don't say anything in regards to real-world performance. I'll take this info here today though. I mean really now...AMD's own marketing says it all..."The Future is Fusion". Um, Hello?
Right now, unless NVIDIA can come out with another revolutionary architecture like AMD is doing at the moment, they can only hope for the 28 nm process as soon as possible. Since the GTX 460 is already far larger than Cypress, I don't think they can add any more features to it like AMD did with Cayman/Barts, not until NVIDIA gets rid of those bulky shaders first and finally starts over... But if Barts already outperforms the GTX 480 by a 33% margin, I personally doubt NV has any hope on the current 40 nm process....
PS: hell! Cayman is revealed to be only 10~15% larger than GF104, but GF104 is far outclassed.
From my point of view, the Radeons have a huge disadvantage with their lack of CUDA support. Maybe supporting OpenCL will pay off, who knows.
And how could AMD let NVIDIA get exclusive support from Adobe in the Mercury engine? I can't understand it. It's like they really want to position their cards as good only for gaming. Wake up, AMD.
So, same timeframe, but no new process. This means the new gen won't be all it could have been, but that's because of TSMC, not AMD, and it affects nV just as hard. I find it hard to fault AMD in this situation.
And if my theory on high-end GPU performance is right, they really need Bulldozer before they release a new high-end GPU, so that they can release an entire PLATFORM rather than just a CPU and chipset, and then a GPU.
TSMC threw a big wrench into the GPU market, but I can honestly say I saw this coming... I have been saying for years that ATI should get away from using TSMC.
Imagine, if AMD had 28nm now, and nV didn't?
:roll:
nVidia really would have to roll over and die. NO x86, no new fab process...AMD kinda missed out on that one.
That implies two things:
- It will offer similar (or better) performance
- It will come at a similar (or lower) price
Doesn't that mean having only two competitors really doesn't keep the price in check? WAIT. You're just figuring this out now? :wtf:
Anyway, I'm hoping for same price.
They're probably also bumping the geometry performance, namely DX11 tessellation, along with the new shaders.
Nonetheless, I have no doubt that Barts will be a whole lot smaller than GF104, thus cheaper to produce. Besides, since the HD5830 can be made with a relatively small PCB, I have no doubt this card won't be much bigger than the HD5770.
I do think they could just cut the prices in their current HD5000 line to stupidly low values (their yields should be sky-high by now) while holding off for the 32/28nm process. nVidia's underperforming Fermi architecture would allow them to do that.
Now since I don't really get into the lingo of what means what.... I just know what's the most powerful at the time and how to overclock it well :)
I thought Stream Processors is what ATI/AMD calls their shader cores, correct?
If I have that right... wouldn't this mean that this has to be a new architecture? Considering that the old stream processors were weaker than NVIDIA's "CUDA" shader cores? And now this card only has 300 of them compared to the 5770's 800?
If I have this understood correctly... this will be one hell of a series. We might finally be able to adjust shader clocks on ATI cards too!? I just can't wait to see what's in store for this generation.
I will tell you what, though. Even if this card is meant to go against the GTX 460... the DX11 tessellation on these cards compared to Fermi (if the benchmarks are true) looks like this series will leave Fermi in the dust, nowhere to be seen.
I will definitely sell my GTX 460s for a pair of these, if not go even higher up the ladder if the price is right.
That's not to mention the 960 Shader version of this card. This thing should be crazy as hell! :rockout:
But to someone out there who said "we could see this card at 5870 levels": I hope so for AMD's sake, but not for ours. Because if that's the case... we are looking at a mid-range card for $400 each and a top card for $1000 or more, easy.