Thursday, September 16th 2010

AMD "Barts" GPU Detailed Specifications Surface

Barely a week after pictures of AMD's "Barts" prototype surfaced, a specifications sheet has followed. The all-important slide from AMD's presentation to its add-in board partners made it to sections of the Chinese media. "Barts" is the successor to "Juniper", the GPU on which the Radeon HD 5750 and HD 5770 are based. The specs sheet reveals that while the GPU does indeed look physically larger, there are other factors that make it big:

Memory Controller
Barts has a 256-bit wide memory interface, which significantly increases its pin count and package size. The "Pro" and "XT" variants (which will go on to become the HD 6x50 and HD 6x70, respectively) have memory clocked at 1000 MHz and 1200 MHz, respectively, so at Juniper's memory clocks that works out to nearly a 100% increase in memory bandwidth.
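
As a quick sanity check of that figure: GDDR5 moves four data transfers per command clock, so peak bandwidth is bus width (in bytes) times the effective data rate. A back-of-the-envelope sketch, with the Juniper XT clocks taken from the retail HD 5770:

```python
# Peak GDDR5 bandwidth: bus width (bits) / 8 * effective data rate.
# GDDR5 is quad-pumped, so a 1200 MHz command clock moves 4800 MT/s.

def gddr5_bandwidth_gbps(bus_bits: int, mem_clock_mhz: int) -> float:
    """Peak memory bandwidth in GB/s."""
    effective_mts = mem_clock_mhz * 4            # quad data rate
    return bus_bits / 8 * effective_mts / 1000   # bytes/transfer * GT/s

print(gddr5_bandwidth_gbps(128, 1200))  # Juniper XT (HD 5770):  76.8 GB/s
print(gddr5_bandwidth_gbps(256, 1000))  # Barts Pro:            128.0 GB/s
print(gddr5_bandwidth_gbps(256, 1200))  # Barts XT:             153.6 GB/s
```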

Tiny increase in SIMD count, but major restructuring
Compared to Juniper, there is an increase of only 20% in physical stream processor count. The XT variant has 960 stream processors, while the Pro variant has 800. AMD specifically mentioned the SIMD block count (10 enabled for the Pro, 12 enabled for the XT). As the slide notes, the GPU is based on the "Cypress Dual Engine architecture", meaning that these 10 and 12 SIMD units are spread across two blocks of 5 (Pro) or 6 (XT) SIMDs each, just as Cypress had two blocks of 10 SIMDs each.

Other components
The raster operations (ROP) count has been doubled to 32, while TMUs stand at 40 for the Pro and 48 for the XT.
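
Those stream processor and TMU figures follow directly from the SIMD counts, assuming Barts keeps Cypress's arrangement of 16 VLIW5 units (80 stream processors) and four TMUs per SIMD; a quick sketch:

```python
# Each Cypress-style SIMD: 16 VLIW5 units = 80 stream processors (SPs),
# paired with 4 texture units (TMUs).
SP_PER_SIMD, TMU_PER_SIMD = 80, 4

for name, simds in (("Juniper XT", 10), ("Barts Pro", 10), ("Barts XT", 12)):
    print(f"{name}: {simds * SP_PER_SIMD} SPs, {simds * TMU_PER_SIMD} TMUs")
# -> Juniper XT: 800 SPs, 40 TMUs
# -> Barts Pro:  800 SPs, 40 TMUs
# -> Barts XT:   960 SPs, 48 TMUs
```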

The design methodology is extremely simple. Juniper-based graphics cards already carry eight memory chips to meet the 1 GB memory requirement using market-popular 1 Gbit GDDR5 chips, so why not simply spread those eight chips across a 256-bit wide memory interface and double the memory bandwidth? The increased ROP count, coupled with the up to 20% increase in shader compute power, gives Barts the competitive edge it needs to face NVIDIA's reinvigorated GeForce 400 series after the introduction of the GeForce GTX 460. As for power draw, AMD projects the Pro variant to draw less than 150 W, with the XT drawing "over" 150 W.
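
The arithmetic behind that: eight 1 Gbit chips give 1 GB, and since each GDDR5 chip exposes a 32-bit interface, eight of them can populate a 256-bit bus on their own (on Juniper's 128-bit bus, the same eight chips presumably pair up in 16-bit clamshell mode). A minimal sketch:

```python
# Eight 1 Gbit GDDR5 chips: total capacity and the widest bus they span.
chips = 8
density_gbit = 1        # per chip
io_width_bits = 32      # each GDDR5 chip has a 32-bit interface

print(chips * density_gbit / 8, "GB")    # 1.0 GB of memory
print(chips * io_width_bits, "bit bus")  # 256-bit with one chip per channel
```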

Market Positioning
AMD doesn't have huge expectations here. Its task is cut out: to compete with the GeForce GTX 460 768 MB and 1 GB models. While memory amount and ROP count carve out NVIDIA's variants, AMD's are carved out by clock speeds and SIMD counts. It should then become obvious what these GPUs' pricing will look like.

When?
Usually, when AMD gives such a presentation to its AIB partners, market release is about three months away.
Source: ChipHell

110 Comments on AMD "Barts" GPU Detailed Specifications Surface

#101
Frick
Fishfaced Nincompoop
Do you seriously want those times back? :wtf:

Because I just read your post and you're almost delusional.
#102
Unregistered
OK, first of all, it doesn't matter whether the game is a console port or not; the real deal is how the devs code their games. So even if a game is a console port from the Xbox, that doesn't guarantee it runs better on ATI, same as with native PC games.

Just look at Crysis:

tpucdn.com/reviews/ASUS/GeForce_GTS_450_TOP_DirectCU/images/crysis_1920_1200.gif

So even a stock HD 4870 beats the GTX 260.

Or look at HAWX:

tpucdn.com/reviews/ASUS/GeForce_GTS_450_TOP_DirectCU/images/hawx_1920_1200.gif

Even though it's a console port, it's still faster on the NVIDIA card.
#103
dalelaroy
Swag
bear jesus: Umm, it could be that I have no idea what is going on with the naming, but I thought people had been throwing around the idea that Barts would get x8xx names, as in 6870 and 6850, the next level up (Cayman?) would be 6970 and 6950, and the top dual-chip card would be a 6990. That's why it made no sense to me why they would change the naming to something like that... I think I'm just confused by all the rumors and false information floating around, as usual before hardware launches.

*edit* I think posting first thing in the morning is not a great idea for me :p The problem is not knowing how the Cayman chips will be spec'd; that's what is really confusing me. Normally they have been double the mid-range cards in recent years, but if they are not this time around, I guess I can accept that the new naming makes some sense. But if a 5870 beats a 6870, then I'd go back to not understanding the change. I don't even know where everyone is getting these names from; is there a source?
I do not believe these specs are real. My best guess is that, with 32 nm enabling 60% more transistors in the same die area as 40 nm, Barts started out as a GPU with 60% more shader clusters than Juniper, and Cayman with 60% more shader clusters than Cypress. With the shift from 4 simple + 1 complex shaders per cluster in Evergreen to 4 moderate-complexity shaders per cluster in Northern Islands, Cayman had 2048 shaders versus Barts' 1024 shaders. This change, together with the tessellation improvements, grew Cayman's die size to about 400 mm² at 32 nm. With the cancellation of 32 nm, NI had to be implemented at 40 nm, which would have made NI over 600 mm². This wouldn't be a problem for a single GPU, but it would have made a dual-GPU variant of the high-end Cayman too hot. Thus Cayman was reduced from 2048 shaders to 1280 shaders, but Barts remained at 1024 shaders. After all, Barts was 80% of Cayman, the same ratio as the 40 nm 4770 versus the 55 nm 48xx (RV770), and the 32 nm 5790 versus the 40 nm 58xx (Cypress).

With Barts being 80% of Cayman, which is just south of 400 mm², Barts is nearly the same die size as Cypress; my best guess is that Barts is about 95% of the die size of Cypress. Originally, Cayman LE was to replace the low-yield Radeon HD 5870, with the highest-binning Barts having 14 of 16 execution blocks active and clocked at 900 MHz, making it tolerant of up to two defects and sufficiently high-yielding despite its high clock rate. The performance per shader of Barts was 1.5x to 1.8x that of Cypress depending on the application, with Barts showing the smallest improvement where Cypress is strongest relative to GF100, and the largest improvement where Cypress is weakest relative to GF100. Together with the bump to 900 MHz, the original Barts XT (Radeon HD 6770, with 896 shaders @ 900 MHz) would have delivered 1.16 to 1.39 times the performance of the Radeon HD 5850 it would have been replacing.

Turks would have 512 shaders versus Barts XT's 896, providing the same shader ratio as the GF106 versus the GeForce GTX 460. Then NVIDIA threw a curve ball: the GTS 450 would ship at 783 MHz versus just 675 MHz for the GTX 460. Since Turks can't ship at more than 900 MHz, the clock of the Radeon HD 6770 had to be adjusted accordingly to maintain the same performance ratio between the Radeon HD 6670 and the Radeon HD 6770 as between the GTS 450 and the GTX 460. Thus the clock of the Radeon HD 6770 was dropped to 775 MHz, reducing it to between 0.997 and 1.197 times the performance of the Radeon HD 5850, but making room for a Barts core with 960 active shaders at 900 MHz to replace the Radeon HD 5870. This new Barts XT would have 0.953 to 1.143 times the performance of the Radeon HD 5870, making the Cayman LE redundant. Thus Barts XT became the Radeon HD 6830, or at least this potential name change was discussed and leaked.
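
For the curious, those ratios fall out of a simple shaders × clock × per-shader-uplift model; a minimal sketch, assuming the stock HD 5850/5870 specs of 1440 shaders @ 725 MHz and 1600 shaders @ 850 MHz:

```python
# Relative performance estimate: shaders * clock * per-shader uplift,
# normalized to the card being replaced. Assumed stock specs:
# HD 5850 = 1440 shaders @ 725 MHz, HD 5870 = 1600 shaders @ 850 MHz.

def perf_ratio(shaders, mhz, uplift, base_shaders, base_mhz):
    return shaders * mhz * uplift / (base_shaders * base_mhz)

for uplift in (1.5, 1.8):  # per-shader uplift vs. Cypress (app-dependent)
    print(f"uplift {uplift}x:")
    print("  original 6770 (896 @ 900) vs HD 5850:",
          round(perf_ratio(896, 900, uplift, 1440, 725), 3))   # 1.159 / 1.390
    print("  reclocked 6770 (896 @ 775) vs HD 5850:",
          round(perf_ratio(896, 775, uplift, 1440, 725), 3))   # 0.998 / 1.197
    print("  new Barts XT (960 @ 900) vs HD 5870:",
          round(perf_ratio(960, 900, uplift, 1600, 850), 3))   # 0.953 / 1.144
```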

At the same time, it was realized that, while the yield of a fully functional Cayman would be too low to justify launching a fully functional single-GPU card, dual-GPU cards sell in low enough volume that fully functional GPUs can be used there. Thus the change from Radeon HD 6970 to Radeon HD 6990: since the high-end dual-GPU card would carry two fully functional GPUs, while the Radeon HD 6870 would have two execution blocks disabled, it didn't make sense to call the dual-GPU card a Radeon HD 6970.

Additionally, realizing that the 40 nm Barts die would not be significantly smaller than the Cypress die, ATI continued development of the Cypress-derived 1280-shader Radeon HD 5790 on 40 nm to meet excess demand for Radeon HD 5830-class GPUs. However, the performance of the Radeon HD 5790 would have to be bumped up if it were to supplement the Radeon HD 6750 supply instead of the Radeon HD 5830 supply (reducing, or even negating, the need to run excess Barts wafers to meet Radeon HD 6750 demand), so ATI decided to rename the Radeon HD 5790 to Radeon HD 5840 and introduce a lower-binning Radeon HD 5820 to improve yields.

Hence the rumored name changes. Barts XT will be a 15-execution-block part instead of a 14-execution-block part, and may assume the name Radeon HD 6830. Antilles will have all 20 execution blocks active instead of just 18, and be called the Radeon HD 6990. And what was formerly going to be called the Radeon HD 5790 will instead ship as the Radeon HD 5820 and Radeon HD 5840, even as the original 58xx series is replaced by Barts.

Introductory pricing should be:
Barts XT @ $299
Barts Pro @ $219
Barts LE @ $179
Radeon HD 5840 @ $179
Radeon HD 5820 @ $149
#104
cheezburger
wahdangun: OK, first of all, it doesn't matter whether the game is a console port or not; the real deal is how the devs code their games. So even if a game is a console port from the Xbox, that doesn't guarantee it runs better on ATI, same as with native PC games.

Just look at Crysis:

tpucdn.com/reviews/ASUS/GeForce_GTS_450_TOP_DirectCU/images/crysis_1920_1200.gif

So even a stock HD 4870 beats the GTX 260.

Or look at HAWX:

tpucdn.com/reviews/ASUS/GeForce_GTS_450_TOP_DirectCU/images/hawx_1920_1200.gif

Even though it's a console port, it's still faster on the NVIDIA card.
I call BS! In a previous review the NV GT200 gained a massive 40% lead in Crysis over the HD 4890, but that was on NVIDIA's 780 chipset. Under an Intel chipset they slow down significantly, making it look like the HD 4890 has the advantage over NV (Intel has been f***ing NV over for a while, so no surprise at such low performance on the i5/i7 platform). HAWX was an AMD-branded title and a console port, so I'm not surprised AMD would take such a lead...
#105
largon
(Referring to CDdude55's post, which, for some reason, got deleted.)
I'd say he (cheezburger) is in the right place, kinda, but he has a lot of reading to do before his posts are worth reading, as at the moment most of what he writes is just horribly inaccurate or blatantly wrong.
#106
btarunr
Editor & Senior Moderator
Alright people, stay closer to the topic. I allow a broad scope for discussion because often interesting things come out of it. Bickering is not one of them.
#107
Unregistered
cheezburger: I call BS! In a previous review the NV GT200 gained a massive 40% lead in Crysis over the HD 4890, but that was on NVIDIA's 780 chipset. Under an Intel chipset they slow down significantly, making it look like the HD 4890 has the advantage over NV (Intel has been f***ing NV over for a while, so no surprise at such low performance on the i5/i7 platform). HAWX was an AMD-branded title and a console port, so I'm not surprised AMD would take such a lead...
Are you being sarcastic?

BTW, I hope Cayman doesn't turn out to be a power-hungry monster like Fermi. And when exactly does Cayman get released? Is it around October too?
#108
pantherx12
Cheezburger, I imagine if it had a 40% lead then there could have been some driver fiddling to make the game run faster, rather than a fair test.

Because I had a 9800 GT (yes, I know it's not a GTX), the ASUS Matrix edition, and well, it just didn't get close to my 4890 at all. lol

Especially when I ran my 4890 at 1 GHz on stock volts :D
#109
wolf
Better Than Native
cheezburger: ...In a previous review the NV GT200 gained a massive 40% lead in Crysis over the HD 4890...
GT200 is a bit vague... this could mean anything from a GTX 260 192 SP model right the way up to a GTX 285.
pantherx12: ...Because I had a 9800 GT (yes, I know it's not a GTX), the ASUS Matrix edition, and well, it just didn't get close to my 4890 at all. lol...
A 9800 GT is in reality a fair bit behind a 4890; even a 9800 GTX+/GTS 250 is well behind.

-------------------

If Barts XT is as fast as or faster than a 5850, they have a winner on their hands, IMO.
#110
WarEagleAU
Bird of Prey
Sounds nice, and it looks to be a refresh of Juniper.