Monday, September 6th 2010

Picture of AMD "Cayman" Prototype Surfaces

Here is the first picture of a working prototype of the AMD Radeon HD 6000 series "Cayman" graphics card. This particular card is reportedly the "XT" variant, or what will go on to become the HD 6x70, the top single-GPU SKU based on AMD's next-generation "Cayman" performance GPU. The picture reveals a card roughly the size of a Radeon HD 5870, with a slightly more complex-looking cooler. The PCB is red, and the display output differs slightly from the Radeon HD 5800 series: there are two DVI, one HDMI, and two mini-DisplayPort connectors. The specifications of the GPU remain largely unknown, except that it is reportedly built on TSMC's 40 nm process. The refreshed Radeon HD 6000 series GPU lineup, coupled with next-generation Bulldozer-architecture CPUs and Fusion APUs, is sure to make AMD's 2011 lineup an interesting one.

Update (9/9): A new picture of the reverse side of the PCB reveals 8 memory chips (a 256-bit-wide memory bus), a 6+2-phase VRM, and 6-pin + 8-pin power inputs.
Source: ChipHell

118 Comments on Picture of AMD "Cayman" Prototype Surfaces

#101
cadaveca
My name is Dave
erocker: I'm thinking that is exactly what they are doing with the 6 series.
I can't help but be a bit excited by that. I mean, sure, I'm only coming to these conclusions NOW about the HD 5 series, but I'm sure AMD has been aware of this for some time now.

And maybe that driver change was a pre-emptive strike in preparation for these cards...:twitch:


I'm still more interested in Bulldozer-based Fusion chips, though. The combination of that CPU plus these add-in cards (if less complex, but with higher-order math) might be the huge boost that pushes AMD back into the performance lead when it comes to 3D.
wahdangun: But actually the 5D shader is more die-efficient than the NVIDIA counterpart (I was reading an AnandTech article from when RV770 came out), and that's why 160 cores in the HD 4870 can compete with 192 CUDA cores in the GTX 260. And if a game can fully utilize the 5D shaders, it can be a lot more powerful.
To me, it seems that only really suits the HPC crowd and older-style game programming, though. Largon's mention of Furmark illustrates that very well, IMHO. No game pushes the HD 5 series like Furmark... the math in Furmark is very simple, and the math in games is not.
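
For illustration, here's that utilization point as a quick Python sketch (the slot-fill figures are assumptions, not measurements):

# Effective throughput of a VLIW5 design depends on how many of the
# 5 slots per unit the compiler can actually fill each clock.
# Utilization figures below are illustrative assumptions only.
VLIW_UNITS = 160       # HD 4870: 160 VLIW5 units = 800 stream processors
SLOTS_PER_UNIT = 5

for fill in (1.0, 0.8, 0.6, 0.4):
    effective = VLIW_UNITS * SLOTS_PER_UNIT * fill
    print(f"{fill:.0%} slot fill -> {effective:.0f} effective SPs per clock")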

I'm looking for a few other specific changes, and AMD really might have a huge winner here...I guess time will tell.
Posted on Reply
#102
cheezburger
largon: Yep. All Volterra.

Wider memory bus doesn't take more die area?
That's just wrong. Memory bus width has a huge impact on die size.
On RV770, the 256-bit memory controller with I/O pads takes 14% of the die size, around 36 mm².
On R600, the 512-bit controller takes ~35-40% of the total die size; that's a whopping 125-170 mm².
Also, 512-bit takes twice the number of memory chips, so cost goes up due to many, many factors.
125-170 mm² on 80 nm doesn't mean it will take as much die space at 40 nm.

512-bit bus at 40 nm: 170 mm² / (80 nm / 40 nm)² = 42.5 mm²
256-bit bus at 40 nm: 36 mm² / (55 nm / 40 nm)² ≈ 19.0 mm²

So overall, a 512-bit bus would only take about 13% of the current Cypress 334 mm² die. It isn't really that big.
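
That arithmetic as a short Python sketch (assuming ideal area scaling with the square of the linear feature-size ratio, which, as largon notes below, is optimistic for I/O):

# Ideal die-area scaling: a block shrinks with the square of the
# linear feature-size ratio. Figures are the rough estimates quoted
# in this thread, not official die measurements.
def scaled_area(area_mm2, old_nm, new_nm):
    return area_mm2 / (old_nm / new_nm) ** 2

bus_512 = scaled_area(170, 80, 40)  # R600's 512-bit controller, 80 nm -> 40 nm
bus_256 = scaled_area(36, 55, 40)   # RV770's 256-bit controller, 55 nm -> 40 nm
CYPRESS_DIE = 334                   # mm2, Cypress (HD 5870)

print(f"512-bit at 40 nm: {bus_512:.1f} mm2 ({bus_512 / CYPRESS_DIE:.0%} of Cypress)")
print(f"256-bit at 40 nm: {bus_256:.1f} mm2")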
Posted on Reply
#103
cadaveca
My name is Dave
cheezburger: 125-170 mm² on 80 nm doesn't mean it will take as much die space at 40 nm.

512-bit bus at 40 nm: 170 mm² / (80 nm / 40 nm)² = 42.5 mm²
256-bit bus at 40 nm: 36 mm² / (55 nm / 40 nm)² ≈ 19.0 mm²

So overall, a 512-bit bus would only take about 13% of the current Cypress 334 mm² die. It isn't really that big.
13% is a lot if it's not needed. And given that we know there are just 8 memory chips, it's basically impossible for it to be truly 512-bit... that would require 16 RAM ICs. Memory bandwidth isn't the issue for Cypress... so there would be no need for such a drastic change.
Posted on Reply
#104
cheezburger
cadaveca: 13% is a lot if it's not needed. And given that we know there are just 8 memory chips, it's basically impossible for it to be truly 512-bit... that would require 16 RAM ICs. Memory bandwidth isn't the issue for Cypress... so there would be no need for such a drastic change.
13% is a lot? A 256-bit bus already took about 15% of the die on the HD 4870...

The 5770 is also an 8-RAM-chip card, but it only has a 128-bit bus, so basically chip count has nothing to do with bus width, unlike NVIDIA's designs. Just to remind you, the 2900 XT ALSO has only 8 chips. What you said about 16 or 12 RAM ICs is NVIDIA's exclusive architecture; AMD's cards can add as much RAM as possible without worrying about the RAM bus and RAM controller.
Posted on Reply
#105
cadaveca
My name is Dave
cheezburger: 13% is a lot? A 256-bit bus already took about 15% of the die on the HD 4870...

The 5770 is also an 8-RAM-chip card, but it only has a 128-bit bus, so basically chip count has nothing to do with bus width, unlike NVIDIA's designs. Just to remind you, the 2900 XT ALSO has only 8 chips. What you said about 16 or 12 RAM ICs is NVIDIA's exclusive architecture; AMD's cards can add as much RAM as possible without worrying about the RAM bus and RAM controller.
First, your point about the 5770 only illustrates my point. GDDR5 only works in so many configurations... and there are only so many types of IC available (the 5770 gets 8 ICs on a 128-bit bus the same way the 5870 can get 2 GB on 256-bit). Together, with that info, I CAN draw those conclusions. NVIDIA's 384-bit and narrower buses work on the same principle... there are several 64-bit buses, and each bus can only host certain configurations of RAM ICs... in effect, Fermi has 2x more 64-bit controllers and, as such, needs those extra ICs.

The 2900 XT, truly, is only 256-bit. It was considered 512-bit because it had 256 bits to the "ringstops", and then 256 bits from ringstop to memory ICs. Because these two buses could operate independently, both could have data in flight, so it was effectively credited with 512 bits of data transfer... but the memory bus is NOT truly 512-bit.

You are ignoring that AMD is a business, and as such, profitability is concern #1. Changes that increase pricing must have a real, tangible benefit, or they will be cut from the design... Cypress, at first, was a much larger chip than the one we got, for exactly this reason. With that in mind, they can make better, more PRICE-EFFECTIVE use of that die space than adding a 512-bit memory controller.

So, I can say that the pictured card is 256-bit only... due to the ICs... the only other option, based on available parts, is a 128-bit GPU, and that would not suffice for a high-end SKU.
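
A minimal sketch of that counting argument (assuming standard 32-bit-wide GDDR5 ICs; clamshell mode pairs two chips per 32-bit channel):

# In a standard layout each 32-bit GDDR5 chip drives its own channel,
# so bus width is simply chip count x 32; clamshell halves the channels.
GDDR5_CHIP_WIDTH = 32  # bits per IC

def bus_width(chip_count, clamshell=False):
    channels = chip_count // 2 if clamshell else chip_count
    return channels * GDDR5_CHIP_WIDTH

print(bus_width(8))                  # 256-bit: matches the 8 chips pictured
print(bus_width(16))                 # 512-bit would need 16 ICs
print(bus_width(8, clamshell=True))  # 128-bit: the only other 8-chip option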
Posted on Reply
#106
wolf
Better Than Native
Really, all it needs is faster memory chips; 512-bit is useless the way GDDR5 speeds are soaring.
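
The trade-off in numbers, as a rough sketch (the data rates are illustrative):

# Peak memory bandwidth (GB/s) = bus width (bits) x data rate (GT/s) / 8.
def bandwidth(bus_bits, data_rate_gtps):
    return bus_bits * data_rate_gtps / 8

print(bandwidth(256, 4.8))  # 153.6 GB/s: HD 5870-class GDDR5
print(bandwidth(512, 2.4))  # 153.6 GB/s: the same throughput from a wider, slower bus
print(bandwidth(256, 6.0))  # 192.0 GB/s: faster chips on the same 256-bit bus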
Posted on Reply
#107
largon
GDDR5 will get a nice speed bump when differentially clocked chips and GPU memctrls start appearing. And I reckon Cayman wields a controller capable of differential IO...
Prepare to say hello to 5-10GHz GDDR5 chippery.
cheezburger: 125-170 mm² on 80 nm doesn't mean it will take as much die space at 40 nm.

512-bit bus at 40 nm: 170 mm² / (80 nm / 40 nm)² = 42.5 mm²
256-bit bus at 40 nm: 36 mm² / (55 nm / 40 nm)² ≈ 19.0 mm²

So overall, a 512-bit bus would only take about 13% of the current Cypress 334 mm² die. It isn't really that big.
Only in theory. Reality is less ideal.
Scratch that. You're in the wrong ballpark entirely.

You're comparing a GDDR3/4 memctrl with GDDR5. GDDR5 uses some ~20% more pins (area), so a 512-bit GDDR5 controller is larger than a 512-bit GDDR3/4 controller. And what's worse, MEMIO does not scale linearly with the fab process. It actually scales hardly at all on today's processes. The problem is the I/O pads: the solder balls between the IC and the chip carrier are sized the way they are, and there's no way to shrink the distance between 'em. The difference between the I/O pads on, say, a 90 nm chip and a 32 nm chip is nowhere near as large as one would think.
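
A toy model of that pad-limit point (the 60/40 logic-to-pad split is an assumption for illustration, not a measured figure):

# Only the logic portion of a memory controller scales with process;
# the I/O pad area stays essentially fixed. The 40% pad fraction is
# an illustrative assumption.
def memctrl_area(total_mm2, old_nm, new_nm, pad_fraction=0.4):
    logic = total_mm2 * (1 - pad_fraction) / (old_nm / new_nm) ** 2
    pads = total_mm2 * pad_fraction  # pads barely shrink
    return logic + pads

ideal = 170 / (80 / 40) ** 2  # the ideal-scaling figure from above
print(f"ideal: {ideal:.1f} mm2, pad-limited: {memctrl_area(170, 80, 40):.1f} mm2")
# ideal: 42.5 mm2 vs pad-limited: 93.5 mm2, much closer to "scales hardly at all"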
cheezburger: Just to remind you, the 2900 XT ALSO has only 8 chips. What you said about 16 or 12 RAM ICs is NVIDIA's exclusive architecture; AMD's cards can add as much RAM as possible without worrying about the RAM bus and RAM controller.
That's just plain wrong.
Bus width is dictated by the number of memory chips and the bit width of those chips. Any and all GDDR3/4/5 chips come only up to 32 bits wide. The HD 2900 XT has 16 chips. And that's a fact.
I have two HD 2900 XT cards here, so I can point 'em out to you if you want to argue.
cadaveca: The 2900 XT, truly, is only 256-bit. It was considered 512-bit because it had 256 bits to the "ringstops", and then 256 bits from ringstop to memory ICs. Because these two buses could operate independently, both could have data in flight, so it was effectively credited with 512 bits of data transfer... but the memory bus is NOT truly 512-bit.
Not true. R600 was as true 512-bit as can be.
There were eight ringstops, each a dual-channel controller (64-bit), and each ringstop connected to two neighboring stops with a 1024-bit bidirectional bus (512-bit˄ + 512-bit˅).
Posted on Reply
#108
cadaveca
My name is Dave
largon: Not true. R600 was as true 512-bit as can be.
There were eight ringstops, each a dual-channel controller (64-bit), and each ringstop connected to two neighboring stops with a 1024-bit bidirectional bus (512-bit˄ + 512-bit˅).
Yeah, you know where I screwed up the math. :laugh:

But that isn't even 100% true, as there is also a ringstop for PCI-E and a ringstop for the CrossFire connector. But yes, 8 for memory control.
Posted on Reply
#109
wolf
Better Than Native
largon always knows best, you can take that to the bank.
Posted on Reply
#110
mastrdrver
Huh, so the 5D shader is gone.

4D is what is coming on the 6 series.

Like I said in the 6k thread, expect a shader setup like what NVIDIA changed to for Fermi (not exact, but similar). Shaders will be grouped with parts of the DX11 pipeline, including the tessellation stage, since triangles/clock is what will define a DX11 GPU. Of course, you will need to group shaders with these so they work as units together.

Kind of like a multi-core CPU, but not really.
Posted on Reply
#111
inferKNOX
wahdangun: Man, I hope AMD ditches DVI altogether, uses DisplayPort, and bundles a DisplayPort-to-DVI adapter instead; DVI is gigantic and takes up a lot more space than DisplayPort.
That's quite an interesting idea. I wonder if AMD has that planned for future cards.
It is said that DP is more flexible and so on, plus royalty-free, so it seems quite possible.
Posted on Reply
#112
crazyeyesreaper
Not a Moderator
I would prefer they don't, as it would essentially fuck up my entire setup here, and I'm not a fan of DisplayPort at all.
Posted on Reply
#114
Super XP
AMD put a lot of time and effort into improving the infrastructure and enhancing performance.
Cayman XT is going to obliterate the competition. These have been good times for ATI/AMD over the past year or so. Or should I say for us gamers, and for competition as a whole.
Posted on Reply
#115
JATownes
The Lurker
Wow... thread necro from a few months ago... way to bring a Cayman thread back from the dead.
Posted on Reply
#116
CrystalKing
The Cayman XT HD 6970 is very, very long.

DVI×2 + HDMI + mini-DP×2, 6+8-pin power, 2 GB GDDR5 at 860 MHz.
Posted on Reply
#117
TheMailMan78
Big Member
Release it already. Enough with this bullshit.
Posted on Reply
#118
overclocking101
Yeah, those pics of a "long" card aren't working for me, but I assume the 69XX cards will be long, probably around 10.5-12 inches just like the last ones, with a 13-14 inch dual-GPU card.
Posted on Reply