Thursday, September 16th 2010
AMD "Barts" GPU Detailed Specifications Surface
Barely a week after pictures of AMD's "Barts" prototype surfaced, a specifications sheet has followed. The all-important slide from AMD's presentation to its add-in board partners made it to sections of the Chinese media. "Barts" is the successor to "Juniper", the GPU behind the Radeon HD 5750 and HD 5770. The specs sheet reveals that while the GPU does look physically larger, there are other factors that make it big:
Memory Controller
Barts has a 256-bit wide memory interface, which significantly increases its pin count and package size. The "Pro" and "XT" variants (which will go on to become the HD 6x50 and HD 6x70, respectively) have memory clocked at 1000 MHz and 1200 MHz, respectively. With double the bus width of Juniper's 128-bit interface, that works out to nearly a 100% increase in memory bandwidth.
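To put a number on that claim, here is a quick back-of-the-envelope sketch in Python. It assumes GDDR5's usual quad data rate (4 bits per pin per memory clock) and uses the HD 5770 as the Juniper reference point; the formulas are ours, not from the slide:

```python
# GDDR5 moves 4 bits per pin per memory-clock cycle (quad data rate).
def gddr5_bandwidth_gbs(bus_width_bits: int, mem_clock_mhz: int) -> float:
    """Peak bandwidth in GB/s: pins x effective MT/s, converted bits -> bytes."""
    return bus_width_bits * (mem_clock_mhz * 4) / 8 / 1000

print(gddr5_bandwidth_gbs(128, 1200))  # Juniper XT (HD 5770): 76.8 GB/s
print(gddr5_bandwidth_gbs(256, 1000))  # Barts Pro: 128.0 GB/s
print(gddr5_bandwidth_gbs(256, 1200))  # Barts XT: 153.6 GB/s, double the HD 5770
```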
Tiny increase in SIMD count, but major restructuring
Compared to Juniper, the physical increase in stream processor count appears to be only 20%. The XT variant has 960 stream processors, while the Pro variant has 800. AMD specifically mentioned SIMD block counts (10 enabled for the Pro, 12 enabled for the XT). As the slide notes, the GPU is based on the "Cypress Dual Engine architecture", meaning these 10 and 12 SIMDs will be spread across two blocks of 5 (Pro) or 6 (XT) SIMDs each, just as Cypress had two blocks of 10 SIMDs each.
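The shader arithmetic is easy to check, assuming Barts keeps the VLIW5 layout of Cypress and Juniper, where each SIMD holds 16 VLIW5 units of 5 ALUs, i.e. 80 stream processors; a minimal sketch:

```python
# Each SIMD in AMD's VLIW5 designs holds 16 VLIW5 units x 5 ALUs = 80 stream processors.
SP_PER_SIMD = 16 * 5

configs = {"Juniper XT (HD 5770)": 10, "Barts Pro": 10, "Barts XT": 12}
for name, simds in configs.items():
    print(f"{name}: {simds} SIMDs -> {simds * SP_PER_SIMD} stream processors")

# Barts XT vs. Juniper XT: 960 / 800 - 1 = 0.20, i.e. the 20% increase mentioned above.
```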
Other components
The raster operations unit (ROP) count has been doubled to 32 (from Juniper's 16), while TMUs stand at 40 for the Pro and 48 for the XT.
The design methodology is extremely simple. Juniper-based graphics cards already carry eight market-popular 1 Gbit GDDR5 chips to reach the 1 GB memory amount, so why not just spread those eight chips across a 256-bit wide memory interface and double the memory bandwidth? The increased ROP count, coupled with the up to 20% increase in shader compute power, gives Barts the competitive edge it needs to face NVIDIA's reinvigorated GeForce 400 series following the introduction of the GeForce GTX 460. As for power draw, AMD projects the Pro variant to draw less than 150 W, with the XT drawing "over" 150 W.
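Here is a minimal sketch of that board-level arithmetic, assuming industry-standard 1 Gbit GDDR5 chips with 32-bit I/O (on Juniper's 128-bit bus, the same eight chips pair up two to a 32-bit channel):

```python
# Eight standard GDDR5 chips: 32-bit I/O and 1 Gbit density each
# (industry-typical figures, not from the slide).
CHIP_IO_BITS = 32
CHIP_DENSITY_GBIT = 1
CHIPS = 8

bus_width = CHIPS * CHIP_IO_BITS              # 8 x 32-bit = 256-bit interface
capacity_gb = CHIPS * CHIP_DENSITY_GBIT / 8   # 8 Gbit = 1 GB of memory
print(f"{bus_width}-bit bus, {capacity_gb:.0f} GB")
```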
Market Positioning
AMD doesn't have huge expectations from this GPU. Its task is cut out: to compete with the GeForce GTX 460 768 MB and 1 GB models. While NVIDIA carves out its variants by memory amount and ROP count, AMD's are carved out by clock speeds and SIMD counts. It should then be obvious what these GPUs' pricing will look like.
When?
Usually, when AMD gives such a presentation to its AIB partners, a market release is about three months away.
Source: ChipHell
110 Comments on AMD "Barts" GPU Detailed Specifications Surface
That's what happens when people speculate... nobody should be taking any of this seriously.
Wait a minute, I already said that. Funny...
:D
Sorry mate, it's not a great video card.
It may serve you well on performance, though; if that's what matters, you got what you want.
The Fermi 470 and 480 are rubbish in their current state, but it's a generational change! Much like the 2900.
AMD has done well with efficient designs! Ehm, shaders do a lot.
ATI has found a very magical ratio number; it has proven to be well balanced.
The only thing they actually needed vs. NVIDIA... was JUST shader power/tessellation power, where Fermi was superior; ATI has more ROPs, if I remember.
256-bit is enough for the 6870.
192-bit would be enough for the 6770, I guess, but yeah, odd memory numbers..
ATI just needs to improve the tessellation stuff; their new arch may have this. We'll find out with the 6xxx and for real with the 7xxx.
Excited to see what the future holds!
But yes, on the performance side of things you are really getting a good treat at a nice price. :)
I'd rather consider better performance first than start with pain-in-the-ass things like power consumption and heat, assuming you've built a good enough rig to handle throwing in high-end cards.
Now I run a 5850, and am loaning a 2nd one.
Tried the 470; it overheated in my microATX... and the HDMI sound was horrible...
The 2nd card is a must for me, so ATI is onto something. The only way I see it is that NVIDIA will be swallowed by someone, much like ATI, erm, AMD. :)
But the heat could be solved in some way, I guess. And the noise when watching movies was just not good at all; the 5850 was pretty much spot on for me :)
Bought it at launch; the price now is 33% higher, so I'm a very satisfied customer! I thought I would regret it, but nope! :D
NVIDIA is focusing way too much on CUDA; instead, OpenCL and its performance should be what they go for.
PhysX isn't worth that much, really. I used to have a GeForce in my PC for it, but it got used maybe once every 3rd month, so what's the point.
I just hope OpenCL will take off. Coding for it is quite easy, in fact, so I don't see any don'ts for it.
And we can enjoy the apps on both AMD and NVIDIA GPUs!
OpenCL, Fusion, Sandy, DX11; lots of things in motion now that benefit us all.
Anyways, back on track here... ATI is really pushing out quickly! I think this may be because of the problems with artifacts on some systems with the HD 5xxx.
The mouse pointer with multi-display, for example; I have the problem in StarCraft 2 every now and then. Not a biggie, it's just a green line, and after a minute it returns to normal.
Big cards don't make a profit? Then what makes a profit? People who don't use CAD or play video games wouldn't even bother installing a graphics card. Console gamers wouldn't buy a graphics card either, as their PCs are not built by themselves and not used for gaming (duh, they are console gamers...). Entry-level gamers would rather have a laptop and play The Sims and other casual games. Sorry sir, Intel took those segments handily... The only part left for both NV and AMD is the high-end gamer and the professional user. Would you spend 200 bucks on a card that only works great on console-migration games, or $400 on a card that can handle any game? You can say whatever you want about how shitty GF100 is, but it has done a pretty good job squashing Cypress in many games, despite drawing more power. So what; these gamers wouldn't care about polar bears and global warming anyway! Most people wouldn't care about this planet even if it dies... anyway...
You guys keep talking about tessellation, but you have no idea about the structural design. Unlike Fermi's tessellation, which is integrated into its CUDA cores, AMD's design sits with the ROPs (look at the die picture...)!! The trade-off of this opposite design is Cypress's smaller die. The only way to improve this is to increase the ROPs or redesign the tessellation engine. The data bus can also affect tessellation performance. As a result, Cypress wasn't even 1/10 of GF100 in the Heaven benchmark. How do you improve tessellation without increasing something or doing a major redesign? Keeping the R600 architecture would be chronic suicide...
Again, Cayman will be 512-bit with 64 ROPs and cost $600+, whether you like it or not.......
There's at least one person here who will be grossly disappointed in it due to unreasonable expectations.
I just hope it is not me. *wishes really hard for 512-bit GDDR5 at 1600 MHz (6400 MHz effective), 64 ROPs, 96 TMUs, 1920 stream processors, and an 850 MHz core* :D
"I just hope it is not me *wishes really hard for 512 bit gddr5 at 1600mhz (6400mhz effective), 64 rop's, 96 tmu's, 1920 stream processors and 850mhz core* :D"
And that is hardly serious; there is no logical reason to use 1600 MHz GDDR5 with a 512-bit bus unless the core is so crazy powerful it would need that much bandwidth, and I very much doubt that.
There are many, many, many failed PC games with low-quality everything; does that mean that PC gaming is killing PC gaming and destroying the future of technology? :p
Just curious how relevant anything about consoles is to a thread about an upcoming GPU's specs. (Not trying to be an ass or anything, just curious.)
Side note, CDdude: you're eligible for a custom title, bro. :toast:
Console migration is what makes the average user not upgrade their parts, or buy a cheap GPU that doesn't perform better. For example, in a low-graphics-quality console title, a sub-$200 AMD card (RV770) will get 100 FPS while a $400 NVIDIA card (GT200) offers 200+ FPS. However, here's the problem: average people will not see the difference in the performance gap and would rather settle for a minimum "playable" framerate. The result is that the AMD card sells better, because the average casual gamer doesn't need a high-end GPU, is happy with framerates above 30 FPS as reasonable "performance", and doesn't need a $400 card that can push 200+ FPS. The result: high-end technology goes backward... We see that game hardware requirements are totally NO different from 4 years ago. Our technology has stayed the same for 4-5 years!! And these average idiots are what caused it. Denying NVIDIA, and denying any possibility of Cayman/Barts/a new architecture, is denying future invention. You also deny your own future! :shadedshu
Also, it depends on the person and what they need or want for gaming. Are you really surprised that it's the mainstream cards and mainstream computers and hardware that sell more? That's what those companies focus on, because that's where the most profit is; they aren't focused on the tiny percentage that is us. Whether or not someone needs a high-end GPU is all choice. Does your average gamer need a 5970 with an overclocked i7? Do they need 2x GTX 480s? The needs of an "average Joe PC gamer" are vastly different from ours, the small percentage. Devs see that and realize they can make money off of those people by dumbing down our games, so that we, the "hardcore" gamers and enthusiasts, are shunned. And why not shun us? We barely make them any money anyway; most of us spend more money on our systems than we'll ever spend on their games. The crappiest systems and parts make the most profit; the uninformed make the most profit for them.
It's because AMD can pack in more shader processors, and more efficiently, than NVIDIA's big shaders. And IF native PC games are crap on ATI, then why oh why does Crysis run superbly on ATI compared to the NVIDIA counterpart?
And BTW, I don't want to go back to when everything was EXPENSIVE; heck, I even remember seeing a P3 800 MHz cost a whopping $1000. But I do want devs to push the hardware more; we want another Crysis.
And yeah, without a $1000 P3 800 MHz 10 years ago, you wouldn't even have a P3 400 MHz at a cheaper price, and therefore you wouldn't have any powerful processor like a Core 2, or a powerful GPU that can play decent graphics like Crysis. Maybe your PC is a 486 and you play Fubby Island every day, I presume?