Monday, September 6th 2010

Picture of AMD "Cayman" Prototype Surfaces
Here is the first picture of a working prototype of the AMD Radeon HD 6000 series "Cayman" graphics card. This particular card is reportedly the "XT" variant, or what will go on to be the HD 6x70, the top single-GPU SKU based on AMD's next-generation "Cayman" performance GPU. The picture reveals a card roughly the size of a Radeon HD 5870, with a slightly more complex-looking cooler. The PCB is red, and the display output differs slightly from the Radeon HD 5800 series: there are two DVI, one HDMI, and two mini-DisplayPort connectors. The specifications of the GPU remain largely unknown, except that it is reportedly built on TSMC's 40 nm process. The refreshed Radeon HD 6000 series GPU lineup, coupled with next-generation Bulldozer architecture CPUs and Fusion APUs, is sure to make AMD's lineup for 2011 quite an interesting one.

Update (9/9): A new picture of the reverse side of the PCB reveals 8 memory chips (a 256-bit wide memory bus), a 6+2 phase VRM, and 6-pin + 8-pin power inputs.
Source: ChipHell
118 Comments on Picture of AMD "Cayman" Prototype Surfaces
Maybe if this was a board scan... but really?
I'll be very glad if the 6850 has 2 GB and that DP-to-DVI adapter AMD has just released; it'll be perfectly ready for Eyefinity out of the box! :rockout:
It's making me drool for it so bad! :twitch:
If this is a new card, would it still be branded "ATI"?
And every generation of graphics cards improves on one aspect or another, but mainly aims to increase total performance compared to the previous gen. So to answer your question: yes, it is pretty awesome, and NEEDZ MOAR!!
With the architecture changing and the die-shrink process continuing to evolve, there's no reason to stick with a narrow bus, especially since there's still plenty of room for a 512-bit bus at 40 nm, and much of the die area in the HD 5000 series has been "wasted" by the "5D" shader structure, since 5D requires more hard wiring than 4D. Cypress has 1600 shader pipelines that can only form 320 shader blocks; in a 4D architecture, forming the same 320 blocks would require only 1280 shader pipelines, saving the die space of 320 unused shader units (1600 - 1280 = 320) plus a lot of the unnecessary hard wiring inherited from that bad R600-descended architecture. As for 512-bit, yes, the X2900 XT fell hard when it introduced it, but that was three years ago, when fabrication was still at 90/80 nm. I mean, how big can a 512-bit memory controller be? 40 nm is good enough to fit a 512-bit bus, even with Cypress's current 5D structure, while still staying within 400 mm².
So why put in a 512-bit bus? Because there is a speed limit on GDDR5. High-speed RAM also comes with greater latency and more instability, which shortens the RAM chip's life cycle, and looser RAM timings cause a huge performance hit as well. 7 GT/s GDDR5 doesn't exist! That would require a base clock of 1750 MHz, beyond the roughly 1400 MHz physical limit of the RAM clock (according to Tom's Hardware). Unless AMD can bring out a next-gen GDDR6 with an octal ("x8") data rate, it is impossible to make 7 GT/s RAM with the existing quad-data-rate GDDR5.
This is why AMD has to move forward from 256-bit to 512-bit.
Fwiw, why would they need a bus larger than 256-bit? If they do put those 7 GT/s GDDR5 memory chips on these boards, then N.I. will have a busload of bandwidth compared to Fermi. If those GPU-Z shots are correct, then they increased the bandwidth by ~33% and achieved higher bandwidth than Fermi with less bus width and probably a cheaper board, making the end product we buy at retail cheaper.
2. 7 GT/s GDDR5 doesn't exist; the highest you can go is 5 GT/s (a 1250 MHz base clock).
3. High-frequency RAM comes with higher latency compared to lower-frequency RAM, and high clock rates make RAM unstable and generate heat.
4. No matter how much a 512-bit layout complicates the PCB, Cayman would still be far cheaper than GF100 in production due to the difference in die size (400 mm² vs 576 mm²); a larger die demands more from the PCB layout than a wider bus does (see the wafer arithmetic below).
2. What?! :confused:
3. Graphics don't care about latency. Bandwidth matters with GPUs, not latency.
4. The real question is: why make the PCB cost more when you can achieve the same thing with less bus width?
Another reason there is no need for a 512-bit bus with 7 GT/s RAM is that you'd have the R600 all over again, with excess cost going to something that isn't ever going to be fully utilized. Why not save some money (and die space) for something that is going to be more beneficial, or just pocket the savings altogether and pass them along to the end user in the retail price?
I've read that GDDR5 can reach 7 GT/s, but anyway, you have your opinion; I think you're wrong, and it will be 256-bit.
On a different topic: with all those connections, it looks like we might have Eyefinity without the need for active adapters.
EDIT: Thanks, I knew I read that somewhere.
2. The 7 GT/s GDDR5... it is not stable. The news was announced back in 2008; where are the chips?
3. Graphics cards don't care about latency... hmm, I guess you've never tried NiBiTor and nvflash. A standard GDDR3 cycle timing is 35; at the same clock rate I turned it up to 50, and interesting things happened: when I ran 3DMark06 it ended up with artifacts (it wasn't really a hardware issue, more like the RAM couldn't keep up with the texture fill rate) and spike lag. Now you're telling me latency is not important? (See the quick arithmetic after this list.)
4. Market position: more bus width means more flexibility to counter the competition's next gen, and a bigger bus also pays off at AA/AF/MSAA settings.
"Hynix had announced its plans to introduce 7 GT/s GDDR5 chips back in November 2008. The company is known to commence volume production of the 7 GT/s chip by the end of Q2 2009."
And even if they do have this high-speed RAM, won't NVIDIA just get the same with their new, improved GF104? Plus they have a bigger bus than anything in AMD's current line. Don't tell me about bang for the buck; the high-end market doesn't care about that little money. Hell, they're still buying Fermi without caring how many polar bears die each day! The mainstream market? That sounds like the screaming from AMD's CPUs, which were beaten so badly by Intel's line. Remember: the high-end market may look small compared to mainstream, especially after the Great Recession, but a high-end product represents the engineering leadership crown that catches investors' eyes. Why is NVIDIA still around after seven straight quarters of losses? Because many investors still back NVIDIA up, while AMD/ATI has comparatively little support. If they want more funding, they had better bring a flagship line, like Intel with its 980X.
PS: that GPU-Z shot is fake.
Basically, what I'm saying is you can't win this argument, so you dropped it and tried to pick a different way to do the same thing. Troll elsewhere; this is an AMD GPU thread, let's keep it as such.
But on the topic of NVIDIA's investors: if they're so well off, why did XFX jump to AMD and say F off to the 400 series? Why is BFG bankrupt and gone the way of the dodo? Those are some serious losses from what I can see. They're still around for the same reason ATI is still around: it takes more than a few losses to cause a company to fail.
For example, AMD has been in the hole or playing second fiddle since around 2005, and they're still here; ATI had been playing catch-up to NVIDIA for years until the 4000 series. The point is, just because they have losses doesn't mean jack shit. There's a thing called a credit line, and these huge corporations have huge, huge lines of credit to keep moving forward and keep their doors open. "You don't get rich saving money." "You can't make money without first spending money."
Oh, another tidbit: last I remember, AMD's stock was going up, up, up, and NVIDIA's was on the decline, meaning more investor confidence in ATI/AMD and less in NVIDIA.
Add to that the memory company wanting to move old stock first, and now we are in late 2010 with 7 GT/s memory available and a new design ready to use it. It's not so much about money; it's about power consumption and temperatures. AMD understands we don't want hot and power-hungry GPUs. NVIDIA hasn't listened to us yet.
"You cant make money without first spending money" that was what intel was doing while amd enjoy its success that is what happen where it destroy amd in 2006. they wanted to SAVE money on R&D and slow the development and enjoy and stay their success while just try to get the cash from market. until core 2 came up amd merely had any backup plan because the idea of saving money!made them loss both market share and investor. if a company really want to save
money. cut the employee benefit first. amd has long history of lavish treatment to their employee and company spent billions just for lunch... same time intel would forced layoff any based engineer that's over 45 yrs old. no lavish spent and well organized. that is why intel is on the top. like 3dfx in the past these european manage style have to change. if you talking about saving money american style company like intel/nvidia would rather spend all of fund on project development than employee's lunch list... i also remeber they say these to ati back in fx era...but nvidia back up and slam ati really hard with nv40 and g72. how do you define the term of "power consumption"? and how hot can a 400mm^2 gpu be? well it is still far better than gtx200 and fermi's 576mm^2 (even in extreme case cayman would have 2/3 smaller than g100 and still have 512bit bus)a 64 rops 80 tmu and 1280 shader cayman will comsume more power than cypress indeed but will still be far better than gtx 480
True, and I mentioned that already: ATI was behind from the 6000 series all the way up to NVIDIA's GT200 series, that's 5 product cycles, yet ATI is still here for the most part, just as NVIDIA will be.
And I still call bullshit on the lunch-vs-product argument: if NVIDIA spent more on product development, they wouldn't need a GPU that uses 320 W to rival an ATI GPU that uses 212 W.
Also, it doesn't matter whether the GT200 has GDDR5 or not, because performance wouldn't benefit in the least.
Also, again, a 512-bit bus is extremely costly, and the extra bandwidth would do NOTHING to make the GPU faster. A GPU is a whole package: a 512-bit bus gives more bandwidth, but if the GPU can't make use of what it already has, giving it more doesn't do a damn thing.
And it doesn't matter much: a GTX 460 still uses more power than a 5850, and the 1 GB variants use nearly as much power as a 5870, yet both are still slower at their respective stock configurations.
Let's face a few facts: none of this really means jack shit.
Currently, NVIDIA is behind in market share. They were 8 months late to market with anything DX11, and they still have yet to finish their DX11 lineup, while ATI is already moving onward with their 2nd-gen DX11 cards. In the meantime, that lets ATI test parts of their next series, the HD 7000, meaning they're basically getting real-world performance estimates on parts of a future architecture while NVIDIA is still trying to finish the 400 series product lineup.
And again, a 512-bit bus won't do a goddamn thing. People said the same shit about the 5870 being memory-bandwidth starved, and it's not; it's the ROP count. So I highly doubt the 6000 series needs any more bandwidth than the 5000 series provides, but it gets it anyway in the form of faster memory speeds. And again, we have no concrete info, so basically I see a bunch of assumptions based on FUD with no real source.