# Picture of AMD ''Cayman'' Prototype Surfaces



## btarunr (Sep 6, 2010)

Here is the first picture of a working prototype of an AMD Radeon HD 6000 series "Cayman" graphics card. This particular card is reportedly the "XT" variant, or what will go on to become the HD 6x70, the top single-GPU SKU based on AMD's next-generation "Cayman" performance GPU. The picture reveals a card roughly the size of a Radeon HD 5870, with a slightly more complex-looking cooler. The PCB is red, and the display output configuration differs slightly from the Radeon HD 5800 series: there are two DVI, one HDMI, and two mini-DisplayPort connectors. The specifications of the GPU remain largely unknown, except that it is reportedly built on TSMC's 40 nm process. The refreshed Radeon HD 6000 series GPU lineup, coupled with next-generation Bulldozer-architecture CPUs and Fusion APUs, is sure to make AMD's 2011 lineup quite an interesting one.



 



*Update (9/9):* A new picture of the reverse side of the PCB reveals 8 memory chips (256-bit wide memory bus), 6+2 phase VRM, and 6-pin + 8-pin power inputs.
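The bus-width inference from the chip count can be sketched quickly: each GDDR5 chip has a 32-bit interface, so 8 chips imply a 256-bit bus (assuming a plain, non-clamshell layout with all chips on one side; the function below is purely illustrative):

```python
# Infer memory bus width from the number of GDDR5 chips on the PCB.
# Each GDDR5 chip exposes a 32-bit interface; in a plain (non-clamshell)
# layout every chip contributes its full width to the bus.
BITS_PER_GDDR5_CHIP = 32

def bus_width_bits(chip_count: int) -> int:
    """Total memory bus width for a non-clamshell GDDR5 layout."""
    return chip_count * BITS_PER_GDDR5_CHIP

print(bus_width_bits(8))   # 256 -- the 256-bit bus reported for this card
print(bus_width_bits(16))  # 512 -- how many chips a 512-bit bus would need
```

A 512-bit bus would therefore need 16 chips per side under the same assumption.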

*View at TechPowerUp Main Site*


----------



## Crazykenny (Sep 6, 2010)

I honestly can't wait for them to be released and benchmarked to sh*t. I'm especially interested to see if AMD can take the single-GPU performance throne back, meaning they have to beat the GTX 480.


----------



## _JP_ (Sep 6, 2010)

Looks like they really aren't looking to bring out a completely new card; they just want to improve on the HD 5000 series. The cooler seems a bit more evolved from the HD 5000 reference design, but beyond that I can't make out much difference from this shot. 
Christmas doesn't come fast enough. 
By the way, XT? Are those coming back?


----------



## rodneyhchef (Sep 6, 2010)

_JP_ said:


> Looks like they really aren't looking to bring out a completely new card; they just want to improve on the HD 5000 series. The cooler seems a bit more evolved from the HD 5000 reference design, but beyond that I can't make out much difference from this shot.
> Christmas doesn't come fast enough.
> By the way, XT? Are those coming back?



Pro/XT is still used internally by ATi to denote the 50/70 series respectively.


----------



## DaedalusHelios (Sep 6, 2010)

Photo looks to be partially computer-generated. The brightness and grainy look around the fan placement give it away, IMO.


----------



## btarunr (Sep 6, 2010)

DaedalusHelios said:


> Photo looks to be partially computer generated. Around the fan placement the brightness and grainy look gives it away IMO.



It's cropped from a larger picture of the card installed in a system. You can see that a monitor is already connected to one of the DVI ports, and the power connectors are in place. They just blacked out the parts of the picture outside the card, and did a fairly ordinary job of it, so it's not CGI.


----------



## caleb (Sep 6, 2010)

What are those tiny ports near the HDMI?


----------



## _JP_ (Sep 6, 2010)

They're mini-DisplayPorts.
Not widely used or well known, because most screens nowadays don't support the connector.


----------



## DaedalusHelios (Sep 6, 2010)

btarunr said:


> It's cut out from a larger picture of the card installed on a system. You can see that a monitor is already connected to one of the DVI ports, and power connectors are in place. They just blacked-out parts of the picture outside the card. They just did an ordinary job blacking out, so it's not CGI.



I said partially, as in it's a prototype where they _altered the fan only_ in the photo. It looks grainy, like an alteration to make it angled. It could just be a cheap digital camera struggling with low light, but it looks altered to me. They might have other photos that show it better. I wouldn't think they'd need to Photoshop the fan into the photo, but it just looks that way to me. lol


----------



## pantherx12 (Sep 6, 2010)

Could have made the heatsink about an inch longer.


----------



## Atom_Anti (Sep 6, 2010)

Crazykenny said:


> I honestly can't wait for them to be released and benchmarked to sh*t. I'm especially interested to see if AMD can take the single-GPU performance throne back, meaning they have to beat the GTX 480.



It is probably a GTX 480 killer; see some leaked benchmarks here:
http://www.fudzilla.com/graphics/gr...-hd-6800-series-performance-benchmarks-leaked


----------



## _JP_ (Sep 6, 2010)

Atom_Anti said:


> http://www.fudzilla.com/graphics/gr...-hd-6800-series-performance-benchmarks-leaked


FUD is worth as much as you paid for it. Nothing new here, at least nothing we haven't already seen on TPU...


----------



## HossHuge (Sep 6, 2010)

Are they rolling out a complete line from 6450 to 6970?

Here are some specs according to WIKI.

http://en.wikipedia.org/wiki/Compar...g_units#Southern_Islands_.28HD_6xxx.29_series


----------



## Atom_Anti (Sep 6, 2010)

_JP_ said:


> FUD is worth as much as you paid for it, nothing new here, at least nothing that we haven't seen in TPU...



I have not seen anything similar here on TPU.


----------



## _JP_ (Sep 6, 2010)

According to the wiki, 2 GB of RAM seems to be arriving on high-end cards while few games can even fill 1 GB. They seem to want to standardize it.
Also, nice to see possible 512-bit memory buses. Last time I saw one of those was on a 2900 XT (wow, long time ago).
The good thing is that the new HD 6870 is supposedly only going to consume 10 W more than the current one, presumably from the extra RAM.
But none of this has been confirmed yet, so it's FUD... It'll be good if it happens... I'll patiently wait anyway...



Atom_Anti said:


> I have not seen any similar here in TPU.


Start messing with the Search feature.
See here and here.


----------



## mastrdrver (Sep 6, 2010)

HossHuge said:


> Are they rolling out a complete line from 6450 to 6970?
> 
> Here are some specs according to WIKI.
> 
> http://en.wikipedia.org/wiki/Compar...g_units#Southern_Islands_.28HD_6xxx.29_series



Ah yes, Wikipedia. The other fud.


----------



## gumpty (Sep 6, 2010)

_JP_ said:


> Looks like they really aren't looking to bring out a completely new card, they just want to improve from the HD 5000.



I think they were originally planning to bring out a new architecture for the 6000 series, built around TSMC's 32 nm process. But then TSMC canned 32 nm and went straight to a smaller node. AMD were stuck, so they bolted a few of the next-gen features onto the previous generation's architecture and built it on 40 nm as a stopgap (although performance will still be better).

Something like that anyway.


----------



## HossHuge (Sep 6, 2010)

mastrdrver said:


> Ah yes, Wikipedia. The other fud.



Again, according to WIKI. I didn't say I believed it.  Don't shoot the messenger...


----------



## Atom_Anti (Sep 6, 2010)

_JP_ said:


> Start messing with the Search feature.



Then read the wiki mess, because it won't have a 512-bit memory bus.


----------



## _JP_ (Sep 6, 2010)

Atom_Anti said:


> Then read the wiki mess, because it won't have a 512-bit memory bus.





_JP_ said:


> But none of this has been confirmed yet, so it's FUD... It'll be good if it happens... I'll patiently wait anyway...


Have I made myself clear?
And because it is FUD, your statement backfires: how do you know it won't have a 512-bit bus?


----------



## pantherx12 (Sep 6, 2010)

Wouldn't a 512-bit bus with GDDR5 equate to bat-shit insane memory bandwidth?


----------



## LAN_deRf_HA (Sep 6, 2010)

After seeing what a big temp difference the EVGA high-flow bracket makes, I really think ATI needs to stop with that little two-thirds grating. If I got this card, the first thing I'd do is either snip out those grill struts or just cut the whole bracket down the center. Also, I'm pretty positive it's still using a 256-bit bus, just paired with what I'd call proper GDDR5: running at the high speeds manufacturers have advertised in press releases but never shipped in products, until now. Oh, and yay for getting rid of that happy-meal plastic racing stripe... EDIT: nvm, I see it's on the side now.


----------



## _JP_ (Sep 6, 2010)

pantherx12 said:


> Wouldn't a 512-bit bus with GDDR5 equate to bat-shit insane memory bandwidth?


Most likely, yes. More than 200 GB/s. I can only see it being useful for Eyefinity users (all six screens); for single screens it would be overkill. BUT the thing is, it's all FUD for now...


LAN_deRf_HA said:


> Oh and yay for getting rid of that happy meal plastic racing stripe.


I don't know, I kinda liked that. But I guess they had to innovate somewhere else other than the card's specs.


----------



## btarunr (Sep 6, 2010)

There is no 512-bit memory interface. It's 256-bit, but making use of 7 GT/s memory chips (so one can expect a 30~35% increase in memory bandwidth over Cypress). Don't refer to Wikipedia for unannounced products without any citations. Those entries are usually some fanboy's wet dreams.


----------



## _JP_ (Sep 6, 2010)

btarunr said:


> There is no 512-bit memory interface. It's 256-bit, but making use of 7 GT/s memory chips (so one can expect 30~35% increase in memory bandwidth over Cypress).


Yeah, I guess I remember reading that somewhere....


btarunr said:


> Don't refer to Wikipedia for unannounced products without any citations. They're usually some fanboy's wetdreams.



Duly noted. But I wasn't citing the wiki as fact, just relaying what's written there and comparing its worth against what's written on Fudzilla... it's all rumors until something official actually comes out, announced or released. I am very aware of that.


----------



## Atom_Anti (Sep 6, 2010)

_JP_ said:


> Have I made myself clear?
> And because it is FUD, your sentence backfires. As in, how do you know it won't have 512-bit bus width?



Because 512-bit would drive the cost up significantly, and nobody wants that to happen. It already makes 204.8 GB/s of bandwidth with a 256-bit bus and 1600 MHz GDDR5. That's pretty awesome, isn't it? Or do you need more?
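The arithmetic behind the 204.8 GB/s figure checks out: GDDR5 transfers 4 bits per pin per command-clock cycle, so a 1600 MHz command clock gives 6.4 GT/s per pin across a 256-bit bus. A quick sketch (the Cypress comparison assumes the HD 5870's stock 1200 MHz memory clock):

```python
# GDDR5 bandwidth: bytes moved per transfer cycle times the effective
# transfer rate. GDDR5 moves 4 bits per pin per command-clock cycle.
def gddr5_bandwidth_gbps(bus_bits: int, command_clock_mhz: float) -> float:
    transfers_per_sec = command_clock_mhz * 1e6 * 4   # quad data rate
    return (bus_bits / 8) * transfers_per_sec / 1e9   # bytes/s -> GB/s

print(gddr5_bandwidth_gbps(256, 1600))  # 204.8 GB/s, the figure quoted above
print(gddr5_bandwidth_gbps(256, 1200))  # 153.6 GB/s, HD 5870 (Cypress) stock
```

204.8 over 153.6 is the roughly one-third bandwidth increase over Cypress mentioned earlier in the thread.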


----------



## JATownes (Sep 6, 2010)

Bring on Cayman. My bank account is waiting to liquidate some cash.


----------



## Phxprovost (Sep 6, 2010)

ohh boy a side shot of a blank cooler covered in watermarks, color me excited 

maybe if this was a board scan....but really?


----------



## inferKNOX (Sep 6, 2010)

_JP_ said:


> According to the wiki, *2 GB of RAM seems to be arriving on high-end cards* while few games can even fill 1 GB. *They seem to want to standardize it.*
> Also, nice to see possible 512-bit memory buses. Last time I saw one of those was on a 2900 XT (wow, long time ago).
> The good thing is that the new HD 6870 is supposedly only going to consume 10 W more than the current one, presumably from the extra RAM.
> But none of this has been confirmed yet, so it's FUD... It'll be good if it happens... I'll patiently wait anyway...
> ...




I think that if that's the case, it's to get better AA performance in Eyefinity setups. Memory becomes the limiting factor at Eyefinity's higher resolutions, so it would make sense for them to add more. I was looking for a 2 GB 5870 to run 24/7 Eyefinity on 3x 1920x1200, but found the cards going for relatively silly prices.
I'll be very glad if the 6850 gets 2 GB along with that DP-to-DVI adapter AMD has just released; it'll be perfectly ready for Eyefinity out of the box!
It's making me drool for it _so_ bad!


----------



## LAN_deRf_HA (Sep 6, 2010)

I think you'll still be out of luck even if it does come with 2 GB. I'm betting the 6870 will launch at $400 and very quickly get jacked up to $450-500. AMD will have no competition for a long time, and just as with the 5000 series, they're going to take advantage of it. The 5850 cards still haven't come down to their original launch price. Those first buyers got one hell of a deal.


----------



## BazookaJoe (Sep 6, 2010)

I thought AMD had retired the "ATI" brand.

If this is a new card, would it still be branded "ATI"?


----------



## crazyeyesreaper (Sep 6, 2010)

Indeed I did, woot woot. And yeah, love the pictures with 50k watermarks, lol. I should Photoshop them out for shits and giggles.


----------



## _JP_ (Sep 6, 2010)

Atom_Anti said:


> Because 512-bit would drive the cost up significantly, and *nobody wants that to happen*. It already makes 204.8 GB/s of bandwidth with a 256-bit bus and 1600 MHz GDDR5. That's pretty awesome, isn't it? Or do you need more?


By "nobody" you must be referring to the consumers, because the manufacturers don't care how much it costs as long as it translates into profit (HD 5970s go for up to $1.2k, so meh).
And every generation of graphics cards improves on one aspect or another, but mainly total performance compared to the previous gen. So to answer your questions: yes, it is pretty awesome, and NEEDZ MOAR!!


----------



## WarEagleAU (Sep 6, 2010)

Sweet, some specs and eventual reviews will be lovely.


----------



## DrPepper (Sep 6, 2010)

Wow these pictures give me a semi


----------



## overclocking101 (Sep 6, 2010)

DrPepper said:


> Wow these pictures give me a semi
> 
> http://www.spacecraftmfg.com/images/10-8-02 semi exterior MH.jpg



That's the best post in the entire thread! To me it just seems odd, with the size jump and all from the 4xxx to the 5xxx series, but keeping it the same for the 6xxx?? These pics look like nothing but Photoshop jobs to me. But Chiphell was the first with leaked photos of the 5xxx cards. And AMD going back to a RED PCB?? Yuck.


----------



## Taskforce (Sep 6, 2010)

A *RED* PCB? Yuck! I hope not.


----------



## mtosev (Sep 6, 2010)

btarunr said:


> There is no 512-bit memory interface. It's 256-bit, but making use of 7 GT/s memory chips (so one can expect 30~35% increase in memory bandwidth over Cypress). Don't refer to Wikipedia for unannounced products without any citations. They're usually some fanboy's wetdreams.



Are the launch dates on Wikipedia correct, or are they also inaccurate?


----------



## cheezburger (Sep 6, 2010)

btarunr said:


> There is no 512-bit memory interface. It's 256-bit, but making use of 7 GT/s memory chips (so one can expect 30~35% increase in memory bandwidth over Cypress). Don't refer to Wikipedia for unannounced products without any citations. They're usually some fanboy's wetdreams.




Last time I heard this kind of reply was from some nvidiot commenting on HD 5xxx specs a few years ago (2008?). Most of them said "AMD is in financial trouble, they wouldn't bother adding more ROPs on RV870; the only thing they'd do is add more shaders." Those nvidiots ended up disappointed, as Cypress got 32 ROPs rather than the previously predicted 16. But then they just switched to criticizing Cypress as a dual-core GPU rather than a completely standalone GPU die... and the endless argument continues even today.

When the architecture changes and die shrinks keep evolving, there's no reason to stick with a narrow bus, especially since there's still plenty of room for a 512-bit bus at 40 nm, and much of the die area in the HD 5000 was "wasted" by the "5D" shader structure, since 5D requires more hard wiring than 4D. Cypress's 1600 shader pipelines can only form 320 shader blocks, which equals 320 shader cores in the 5D architecture; with 4D it would only take 1280 shader pipelines to form 320 shader blocks, saving the die space of the unused 320 shader units (1600 - 1280 = 320) and much of the unnecessary hard wiring from the bad architecture descended from R600. As for 512-bit, the 2900 XT did fall hard when it introduced it, but that was three years ago, when fabrication was still at 90/80 nm. I mean, how big can a 512-bit RAM controller be? 40 nm is good enough to contain a 512-bit bus, even with Cypress's current 5D structure, while still remaining within 400 mm².

So why put in a 512-bit bus? Because there is a speed limit in GDDR5; high-speed RAM also comes with greater latency, is more unstable (which reduces the RAM chips' life cycle), and looser RAM timings cause a big performance hit as well. 7 GT/s GDDR5 doesn't exist! That would mean each data rate has to be 1750 MHz, and the physical limit of single-rate RAM speed is 1400 MHz (according to Tom's Hardware). Unless AMD can bring up next-gen GDDR6 (octal "x8" data rate), it will be impossible to make 7 GT/s RAM with the existing quad-data-rate GDDR5.
This is why AMD has to move from 256-bit to 512-bit.
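Part of the disagreement in this thread is clock terminology: a "7 GT/s" GDDR5 rating is the effective per-pin transfer rate, which is four times the command clock and twice the write clock (WCK), so it implies a 1750 MHz command clock rather than a 1750 MHz cell rate. A conversion sketch, assuming the standard quad-pumped GDDR5 interface:

```python
# GDDR5 clock domains: the effective (marketing) rate in GT/s is 4x the
# command clock (CK) and 2x the write clock (WCK).
def gddr5_clocks(effective_gt_per_s: float) -> dict:
    return {
        "effective_gt_s": effective_gt_per_s,
        "write_clock_mhz": effective_gt_per_s * 1000 / 2,
        "command_clock_mhz": effective_gt_per_s * 1000 / 4,
    }

print(gddr5_clocks(7.0))  # 7 GT/s -> 3500 MHz WCK, 1750 MHz command clock
print(gddr5_clocks(6.4))  # 6.4 GT/s -> the "1600 MHz" spec discussed above
```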


----------



## mastrdrver (Sep 7, 2010)

_JP_ said:


> By "nobody" you must be referring to the consumers, because the manufacturers don't care how much it costs, as long as it translates in profit (the HD 5970s go up to $1.2k, so m'eh).
> And every generation of graphic cards improve on one aspect or another, but mainly to increase the total performance, compared to a previous gen. So to answer your questions, yes it is pretty awesome and NEEDZ MOAR!!



Since manufacturers care about profit, it benefits them if ATI only uses a 256-bit bus, because the board will be less complicated and therefore cheaper to make. So yes, they do care how much it costs. Why do you think low-end boards are so cheap, if they don't care about costs?

FWIW, why would they need a bus wider than 256-bit? If they do put those 7 GT/s GDDR5 memory chips on these boards, then N.I. will have a busload of bandwidth compared to Fermi. If those GPU-Z shots are correct, they increased bandwidth by ~33% and achieved more bandwidth than Fermi with a narrower bus and probably a cheaper board, making the end product we buy at retail cheaper.


----------



## cheezburger (Sep 7, 2010)

Atom_Anti said:


> Because 512-bit would drive the cost up significantly, and nobody wants that to happen. It already makes 204.8 GB/s of bandwidth with a 256-bit bus and 1600 MHz GDDR5. That's pretty awesome, isn't it? Or do you need more?





mastrdrver said:


> Since manufacturers care about profit, it benefits them if ATI only uses a 256 bus because the board will be less complicated and therefore cheaper to make. So yes, they do care how much it costs. Why do you think low end boards are so cheap if they don't care about costs?
> 
> Fwiw why would they need a bus larger than 256? If they do put those 7 Gt/s GDDR5 memory chips on these board then N.I. will have a bus load of bandwidth compared to Fermi. If those GPUz shots are correct, then they increased the bandwidth by ~33% and achieved a bandwidth higher than Fermi with less bus width and probably a cheaper board making the end product we buy at retail cheaper.




1. Cayman is exclusively for the high-end market, so they don't really care about PCB cost
2. 7 GT/s GDDR5 doesn't exist; the highest you can go is 5 GT/s (1250 MT per rate)
3. High-frequency RAM comes with higher latency compared to lower-frequency RAM, and high clock rates make RAM unstable and generate heat
4. No matter how much a 512-bit layout complicates the PCB, Cayman would still be far cheaper to produce than G100 due to the die-size difference (400 mm² vs. 576 mm²); a larger die demands more PCB layout than a wider bus does


----------



## mastrdrver (Sep 7, 2010)

cheezburger said:


> 1. cayman is exclusive for high end market so they don't really care about pcb layout
> 2. 7GT GDDR5 don't exist the highest you can go is 5GT(1250mt per rate)
> 3. high frequency ram comes with higher latency compare to lower frequency ram and high clockrate will make ram unstable and generate heats.
> 4: no matter how complicate that 512bit layout would effecting on PCB layout it would make cayman far cheaper than g100 in production due to the difference of die size. (400mm^2 vs 576mm^2) larger die require more layout on pcb board than what bandwidth bus impact in pcb design.



1. And Barts is exclusively for the middle of the market, so what? Just because it's high end doesn't mean you can blow money on things that will go to waste
2. What?!
3. Graphics don't care about latency. Bandwidth matters with GPUs, not latency.
4. The real question is why make the PCB cost more when you can achieve the same thing with less bus width?

Another reason there's no need for a 512-bit bus with 7 GT/s RAM is that you'd have the R500 all over again, with excess cost going to something that is never going to be fully utilized. Why not save the money (and die space) for something more beneficial, or just pocket the savings altogether and pass them to the end user in the retail price?


----------



## buggalugs (Sep 7, 2010)

cheezburger said:


> 2. 7GT GDDR5 don't exist the highest you can go is 5GT(1250mt per rate)
> .



You keep saying that, but I don't think it's true, and it doesn't mesh with the GPU-Z screenshot. If it were 512-bit, there's no way it would be running at 1600 MHz.

I've read GDDR5 can reach 7 GT/s, but anyway, you have your opinion; I think you're wrong and it will be 256-bit.

On a different topic: with all those connections, it looks like we might get Eyefinity without the need for active adapters.

EDIT: 



mastrdrver said:


> 1.
> 2. What?!



Thanks, I knew I read that somewhere.


----------



## cheezburger (Sep 7, 2010)

buggalugs said:


> You keep saying that but i dont think its true and it doesnt mesh with the GPUz screenshot. It it were 512 bit theres no way it would be running at 1600Mhz.
> 
> I've read GDDR5 can reach 7GT/s but anyway you have your opinion but i think you're wrong and it will be 256bit.
> 
> On a different topic with all those connections it looks like we might have eyefinity without the need for active adapters.



It's well known that older GPU-Z versions can't read newer GPUs properly; I don't need to bring up examples. That GDDR5 would exceed its limit, unless you're telling me someone has tweaked it to 1.6 GT per rate, which is just absurd...



mastrdrver said:


> 1. and Barts is exclusive for middle of the market so what? Just because its high end doesn't mean you can blow money on things that are going to go to waste
> 2. What?!
> 3. Graphics don't care about latency. Bandwidth matters with gpus not latency.
> 4. The real question is why make the pcb cost more when you can achieve the same thing with less bus width?
> ...



1. Barts IS provided exclusively for the mid-range market, if you look at the die size and the other specs such as ROPs/TMUs/shaders/RAM bandwidth/bus. Most importantly, it supports a 256-bit bus with a crazy RAM speed of 1.3 GT per rate. Now you're telling me Cayman is also 256-bit? And about PCB layout: a standard reference 5870 PCB is capable of containing a 512-bit bus (yes, 12 layers! some non-reference PCBs may even be made of 15 layers) while costing far less than the GTX 480's 10-layer PCB. So a 512-bit controller isn't going to add production cost; the real factor is the GPU die, and that is why G100 is so screwed in this regard.

2. The 7 GT/s GDDR5... it is not stable. The news was announced back in 2008; where is it?

3. Graphics cards don't care about latency... hmm, guess you never tried NiBiTor and nvflash. A standard GDDR3 cycle timing is 35; at the same clock rate I turned it to 50, and then interesting things happened. When I ran 3DMark06 it ended up with artifacts (it wasn't really a hardware issue, more like the RAM couldn't keep up with the texture fill rate) and spike lag. Now you're telling me latency is not important?

4. Market position: more bus width means more flexibility to counter the competitor's next gen, and a bigger bus also helps at AA/AF/MSAA settings


----------



## crazyeyesreaper (Sep 7, 2010)

Look, no matter what, no one really gives a shit, because it's all smoke and mirrors with images that have logos plastered all over them. Everyone just needs to sit down, shut the **** up, and wait for more info. Arguing about specs that don't exist yet, for a GPU that isn't available for us to purchase, is asinine. Let's just wait for official info from a source that's not FUD and go from there.


----------



## buggalugs (Sep 7, 2010)

cheezburger said:


> it's well known that older gpuz cant utilize newer gpu, I don't need to bring any example. GDDR5 exceed its limit unless you telling me that someone had tweak it to 1.6gt per rate or this is just absurd..



Obviously you don't read links.

"Hynix had announced its plans to introduce 7 GT/s GDDR5 chips back in November 2008. The company is known to commence volume production of the 7 GT/s chip by the end of Q2 2009."


----------



## cheezburger (Sep 7, 2010)

buggalugs said:


> Obviously you dont read links.
> 
> "Hynix had announced its plans to introduce 7 GT/s GDDR5 chips back in November 2008. The company is known to commence volume production of the 7 GT/s chip by the end of Q2 2009."



End of Q2 2009. So where are they?

And even if they do have this high-speed RAM, won't Nvidia just get the same for their new, improved G104? Plus they have a bigger bus than any of AMD's current line. Don't tell me about bang for buck; the high-end market doesn't care about that little money. Hell, people still buy Fermi without caring how many polar bears die each day! Mainstream market? That sounds like the screaming from AMD's CPUs, which were beaten so badly by Intel's line. Remember: the high-end market might look small compared to mainstream, especially after the recession, but a high-end product represents the engineering-leadership crown that catches investors' eyes. Why is Nvidia still around after seven straight quarters of losses? Because many investors still back Nvidia up, while AMD/ATI has comparatively little support. If they want more funding, they'd better bring a flagship line, like Intel with its 980X.

PS: that GPU benchmark is fake.


----------



## crazyeyesreaper (Sep 7, 2010)

Uh, how does Nvidia have a bigger bus? Nvidia's top GPUs are already out, and the 320-bit+ bus doesn't really slaughter their 256-bit competition. The 475 and 485 won't be THAT much better than what they have now, probably more efficient, yes, but if those two were way better, Nvidia couldn't move old stock. Fact is, any card Nvidia comes out with right now won't use that RAM, because they're releasing LOW-END CARDS to shore up their market share. And how does AMD/ATI have no support? Last I checked, they now control more of the GPU market than Nvidia, meaning they're currently top dog in sales, market share, everything. And the only GF104 available is the 460, and oh wait, it's only on PAR with a 275/285 in most games, and DX11 is still pretty much a joke currently, as much as I love the features it offers.

Basically what I'm saying is you can't win this argument, so you dropped it and tried to pick a different way to do the same thing. Troll elsewhere; this is an AMD GPU thread, let's keep it as such.

But on top of Nvidia's investors: if they're so well off, why did XFX jump to AMD and say F off to the 400 series? Why is BFG bankrupt and gone the way of the dodo? Those are some serious losses from what I can see. They're still around because ATI is still around; it takes more than a few losses to make a company fail.

Example: AMD has been in the hole or playing second fiddle since around 2005, and they're still here. ATI had been playing catch-up to Nvidia for years until the 4000 series. Point is, just because they have losses doesn't mean jack shit. There's a thing called a credit line, and these huge corporations have huge lines of credit to keep moving forward and keep their doors open. "You don't get rich saving money." "You can't make money without first spending money."

Oh, another tidbit: last I remember, AMD's stock was going up, up, up and Nvidia's was on the decline, meaning more investor confidence in ATI/AMD and less in Nvidia.


----------



## buggalugs (Sep 7, 2010)

cheezburger said:


> end of Q2 2009. now where are they?
> 
> .



The 5xxx series was released in 2009, so it was designed in probably 2008, long before 7 GT/s memory was available. Nvidia's GPUs are much the same.

Add to that the memory company wanting to move old stock first, and now we're in late 2010 with 7 GT/s memory available and a new design ready to use it.



cheezburger said:


> the high end market don't care about these little money..
> .



It's not so much about money. It's about power consumption and temps. AMD understands we don't want hot, power-hungry GPUs. Nvidia hasn't listened to us yet.


----------



## cheezburger (Sep 7, 2010)

crazyeyesreaper said:


> uh how does Nvidia have bigger bus nvidias top gpus are already out and the 320bit + bus dosent really slaughter there 256bit competition the 475 and 485 wont be THAT much better then what they have probably more effiecient yes but if the above 2 are way better they cant move old stock  fact is any card nvidia comes out with right now wont use the ram because there releasing LOW END CARD to shore up there market share and how does AMD/ati have no support last i checked they now control more of the GPU market then Nvidia meaning there top dog in sales market share everything currently. and the only GF 104 avaible is a 460 and oh wait its only on PAR with a 275/285 in most games and DX11 is still pretty much a joke currently as much as i love the features it offers.
> 
> Basically what im saying is your cant win this argument so you dropped it and tried to pick a different way to do the same thing troll elsewhere this is AMD gpu thread lets keep it as such.
> 
> ...




If the 275/285 had GDDR5, they would smoke the GTX 460. And XFX was kicked out by Nvidia for violating the NDA agreement, not because XFX likes AMD...

"You can't make money without first spending money": that is what Intel was doing while AMD enjoyed its success, and it's what destroyed AMD in 2006. AMD wanted to SAVE money on R&D, slow development, sit on their success, and just pull cash from the market. By the time Core 2 came out, AMD barely had any backup plan, because the idea of saving money made them lose both market share and investors. If a company really wants to save money, cut the employee benefits first. AMD has a long history of lavish treatment of its employees, and the company spent billions just on lunch... meanwhile Intel would force layoffs on any engineer over 45. No lavish spending, well organized: that is why Intel is on top. Like 3dfx in the past, this European management style has to change. If you're talking about saving money, American-style companies like Intel/Nvidia would rather spend all their funds on project development than on the employees' lunch list...



> oh another tid bit last i remember AMDs stock was going up up up and Nvidia was on the decline meaning more investor confidence in ATi/AMD and less in Nvidia



I also remember people saying the same about ATI back in the FX era... but Nvidia came back and slammed ATI really hard with NV40 and G72.



> Its not so much about money. Its about power consumption and  temps. AMD understand we dont want hot and power hungry GPUs. Nvidia havent listened to us yet.



How do you define "power consumption"? And how hot can a 400 mm² GPU be? It would still be far better than the GTX 200 series' and Fermi's 576 mm² (even in the extreme case, Cayman would be only about two-thirds the size of G100 while still having a 512-bit bus). A 64-ROP, 80-TMU, 1280-shader Cayman will indeed consume more power than Cypress, but it will still be far better than the GTX 480.


----------



## crazyeyesreaper (Sep 7, 2010)

^ Source material, or I call FUD on the employee-treatment BS.

True, and I mentioned that already: ATI was behind from the 6000 series all the way up to Nvidia's GT200 series, that's 5 product cycles, yet ATI is still here, for the most part, just as Nvidia will be.

And I still call bullshit on the lunch-vs-product thing. If Nvidia spent more on product development, they wouldn't need a GPU that uses 320 W to rival an ATI GPU that uses 212 W.

Also, it doesn't matter whether the GT200 had GDDR5 or not, because performance wouldn't benefit in the least.

Also, again, a 512-bit bus is extremely costly, and the extra bandwidth would do NOTHING to make the GPU faster. A GPU is a whole package: a 512-bit bus gives more bandwidth, but if the GPU can't make use of what it already has, giving it more doesn't do a damn thing.

And it doesn't matter much: a GTX 460 still uses more power than a 5850, and the 1 GB variants use nearly as much power as a 5870, but are still slower in their respective stock configurations.

Let's face a few facts: none of this really means jack shit.

Currently Nvidia is behind in market share; they were 8 months late to market with anything DX11, and they still have yet to finish their DX11 lineup. ATI is already moving onward with their 2nd-gen DX11 cards, and in the meantime it lets them test parts of their next series, the HD 7000, meaning they're basically getting real-world performance estimates on parts of a future architecture while Nvidia is still trying to finish the 400 series lineup.

And again, a 512-bit bus won't do a goddamn thing. People said the same shit about the 5870 being memory-bandwidth starved, and it's not; it's the ROP count. So I highly doubt the 6000 series needs any more bandwidth than the 5000 series provides, but it gets it anyway in the form of faster memory. And again, we have no concrete info, so basically I see a bunch of assumptions based on FUD with no real source.


----------



## buggalugs (Sep 7, 2010)

haha Cheezburger give it up man. No 512bit memory bus for you!!


----------



## LAN_deRf_HA (Sep 7, 2010)

That speed of GDDR5 has existed for a while; it just wasn't cost-effective to mass-produce it immediately. It is now. AMD is a company intent on making money, not running around like a chicken with its head cut off. That's why the 6 series flagship will have a 256-bit bus; a 512-bit bus is moronic. It would raise the price, reduce sales, and not increase profit margin, not to mention provide far more bandwidth than the core could utilize. An utter waste, your irrational dream is.


----------



## cheezburger (Sep 7, 2010)

crazyeyesreaper said:


> ^ source material or i call fud on the employee treatment bs
> 
> true and i mentioned that already ati was behind from the 6000 series all the way up till nvidias gt 200 series thats  5 product cycles yet ATi is still here for the most parts just as nvidia will be
> 
> ...



Source? Go google it...

Fermi consumes 320 W because it added a lot of non-gaming parts (general computing), which wasted a huge amount of die size. If they could take that off, the power draw would drop 30%...

Tell me first why 512-bit costs a lot. It might have cost a lot back in the R600 days with the pathetic 80 nm fab, but today that is mostly solved thanks to the 40 nm process. R600 also only had 16 ROPs, so most of its bus width was wasted, but a Cypress has 32 ROPs, so things are different. Each ROP only gets 8 bits of width, so 32 ROPs still (barely) fit a 256-bit bus. But what happens with a GPU that has 64 ROPs? It will cause a bottleneck in communication between the GPU and RAM. Overall bandwidth doesn't matter if most of the data gets stuck at the ROPs due to narrow bus width. For example, a Cypress XT is supposed to be double an RV770 XT in every spec but ends up only 55% faster, while RV770 XT was about double an RV670 in every benchmark. Why the difference? The answer: the bus width can't feed enough data to the GPU's ROPs/shaders. A 512-bit bus is necessary for future GPUs that have more ROPs/shaders.



buggalugs said:


> haha Cheezburger give it up man. No 512bit memory bus for you!!



source? 



LAN_deRf_HA said:


> That speed of GDDR5 has existed for awhile, it just wasn't cost effective to immediately start mass producing it. It is now. AMD is a company intent on making money, not running around like a chicken with it's head cut off. That's why the 6 series flagship will have a 256 bit bus, a 512 bus is moronic. It will raise the price, reduce sales, and not increase profit margin. Not to mention provide far more bandwidth than the core could utilize. An utter waste; your irrational dream is.



Again, tell me why 512-bit costs a lot. Because of the bad experience with R600?

Do you think $599 is expensive for a high-end card?


----------



## LAN_deRf_HA (Sep 7, 2010)

cheezburger said:


> again tell me why 512bit cost a lot? because of bad experience from r600?
> 
> do you think a a $599 card for high end is expansive?



I sweep away most of your points then you respond by asking me questions not even related to what I said? I didn't say 512 bit costs a lot, I said it costs more. And what on earth are you trying to say with the second question? For someone so into AMD you don't seem to think highly of them. No way are they going to do something as moronic as release a $600 single gpu card. The 6 series is meant to replace the 5 series, not coexist at some absurd price point above it. Your logic lacks logic.


----------



## crazyeyesreaper (Sep 7, 2010)

It's simple:

A larger bus means more pins. More pins means more complexity; the more complex, the more likely a failure. A higher failure rate means lower yield, and lower yield means less profit.

Also, the pins can only get so small before they're too brittle to solder to a PCB, so 512-bit = more pins, and since the pins can only shrink so far, a 512-bit bus takes more space.

So all a 512-bit bus effectively does with today's GDDR5 is:

* give bandwidth the GPU can't use
* make a more complex PCB design
* carry a higher risk of failure due to complexity

Source: read this article and learn something
http://www.extremetech.com/article2/0,2845,2309870,00.asp

A perfect example of why 512-bit isn't needed:

4850 vs 4870: despite double the bandwidth going from GDDR3 to GDDR5, the 4870 is only 20% faster. Neither the RAM type nor the RAM speed made the difference; it was the higher core clock, as the two cards used the same GPU, just with different RAM, to no real benefit.

The same can be said of 256-bit vs 512-bit: a 4870 could have a 512-bit interface, but with GDDR3 it would only equal GDDR5 at 256-bit. That means the 512-bit version would be more complex to produce, with no benefit over the cheaper bus paired with faster, lower-voltage memory.

What this means is that even if you double the bus width and the bandwidth, it won't make a 6870 enough faster to warrant the cost of the design. You can have your opinion, but last I checked you weren't an engineer working for Nvidia or ATI, and you seem to have no understanding of this subject sufficient to form a decent, well-informed opinion on it, nor are you able to see the big picture in the design of the GPU.
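The bandwidth equivalence claimed above is simple arithmetic: peak bandwidth is bus width times per-pin data rate, so doubling the bus while halving the per-pin rate changes nothing. A quick sketch, using hypothetical clock figures rather than any card's real spec:

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: number of pins times per-pin rate, in bytes."""
    return bus_width_bits / 8 * data_rate_gbps

# 256-bit bus with GDDR5 at an assumed 3.6 Gbps per pin
gddr5_256 = bandwidth_gb_s(256, 3.6)
# 512-bit bus with GDDR3 at half that per-pin rate (1.8 Gbps)
gddr3_512 = bandwidth_gb_s(512, 1.8)
print(gddr5_256, gddr3_512)  # both come out the same (115.2 GB/s)
```

On these numbers the wider bus buys no extra bandwidth at all, only a more complex package and PCB, which is exactly the argument being made.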


"Just because you can doesn't mean you should" is what comes to mind for 256-bit vs 512-bit.

Another way to see it: say you lose 10% of overall wafer space, so you get 10% fewer GPUs per wafer but gain 5% in performance. That 5% gain doesn't make you more money, because the gain from 256-bit to 512-bit is something they can get in a cheaper, more cost-effective way. So if a wafer provides 100 GPUs at 256-bit and 95% performance, while 512-bit offers 90 GPUs at 100% performance, then on the manufacturing end it might only be $20 per card, but if those are wafers of $700 GPUs, the extra 10 GPUs just earned the company an extra $7,000 per wafer at 256-bit versus 512-bit. That's why you won't see 512-bit.

Because 1,000,000 GPUs vs 900,000, if all of them are fully functional, is a huge profit difference for these companies, not to mention the 1M GPUs at 256-bit will most likely have a higher yield than the 512-bit ones in terms of usable GPUs.

Of the 1M, maybe 800,000 can be used, while of the 900K maybe only 600K are usable. So by the time you run the numbers, your precious 512-bit bus costs millions in profit. These companies are not here to hold your hand (that's your mother's job); they're here to make money. 512-bit won't make them any more money than 256-bit; in fact it costs more, and since ATI/AMD is trying to maintain a positive cash flow, they're going to take the tiny, insignificant 1 fps loss in Crysis between 256 and 512 and reap an extra $20 instead.

Note my math is hypothetical; I don't know the actual manufacturing costs of the GPU itself and all its components, but I'm sure most around here will agree it's the logic that matters, and it's solid.
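The hypothetical wafer arithmetic in the post can be laid out explicitly; every number here (100 vs 90 sellable dies, $700 per GPU) is the poster's own illustration, not real manufacturing data:

```python
# Poster's hypothetical scenario: the wider bus eats ~10% of wafer area.
price_per_gpu = 700          # assumed selling price, $
gpus_256bit = 100            # sellable dies per wafer, 256-bit design
gpus_512bit = 90             # sellable dies per wafer, 512-bit design (10% fewer)

revenue_256 = gpus_256bit * price_per_gpu   # $70,000 per wafer
revenue_512 = gpus_512bit * price_per_gpu   # $63,000 per wafer
print(revenue_256 - revenue_512)  # 7000: the $7,000-per-wafer gap cited above
```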


----------



## btarunr (Sep 7, 2010)

BazookaJoe said:


> I thought AMD had retired the"ATI" Brand.
> 
> If this is a new card would it still be branded "ATI" ?



Prototype may have been made long before AMD announced ATI's brand dissolution. It's a prototype, and ATI has been using that exact fan since Radeon HD 2900 Series.


----------



## crazyeyesreaper (Sep 7, 2010)

Aww shucks, I was hoping BTA would curb-stomp my posts and make the epic "you can't deny my logic" post to save us all from the 512-bit discussion.


----------



## xtremesv (Sep 7, 2010)

AMD Cayman will be a refreshed Cypress meant to regain the most-powerful-GPU crown. I don't expect revolutionary architecture changes, maybe something creative with the tessellator(s?). I bet Cayman will have around 2400 stream processors, 100 TUs and 48 ROPs, with a GPU clock between 850 and 950 MHz.

This thread turned into a memory bus discussion. IMHO, the bus alone doesn't matter: you don't gain anything from, say, a 512-bit bus paired with 400 MHz DDR2 memory. What matters here is the GDDR5 clock, so AMD can stick to a cheaper 256-bit bus and clock the memory higher. I'd say GDDR5 clocked at 1500 MHz on a 256-bit bus (192 GB/s) is my safe bet.
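The 192 GB/s figure follows directly from GDDR5's signaling, which moves 4 bits per pin per cycle of the 1500 MHz base clock. As a sanity check:

```python
# GDDR5 is effectively quad-pumped relative to its 1500 MHz base clock,
# giving 6.0 Gbps per pin; a 256-bit bus is 256 pins.
bus_bits = 256
data_rate_gbps = 1.5 * 4                 # 6.0 Gbps per pin
bandwidth = bus_bits / 8 * data_rate_gbps
print(bandwidth)  # 192.0 GB/s, the figure quoted in the post
```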


----------



## cheezburger (Sep 7, 2010)

crazyeyesreaper said:


> its simple
> 
> larger bus means more pins more pins means more complex the more complex the more likely of failure higher failure rate of a die means lower yield lower yield means less profit
> 
> also the pins can only be so small before there to brittle to solder to a PCB meaning a 512bit = more pins and the pins can only get so small before they cant be shrunk down meaning 512bit takes more space



Well, I'll only reply to this part, because the rest is pretty much the same argument...

A larger bus does require more pins on the BGA package that carries the GPU, but it doesn't really increase the die and has nothing to do with dies per wafer. The board just becomes more complex, with more PCB layers and a larger GPU footprint, but not a larger die. Your source also didn't say it causes lower yield on the GPU die; mostly it just makes the board harder for the graphics card manufacturer to design. AMD wouldn't lose any profit because this would be for the high-end part exclusively, and eventually neither Nvidia nor AMD can go without a bigger bus in the future.



> it is you lose 10% overall wafer space so you get 10% less GPUs per wafer but gain 5% in performance that 5% gain dosent make you more money because the 5% gain from 256bit to 512bit is something they can get in a cheaper more cost effective way. So basically if a wafer provides 100gpus at 256bit and 95% performance and 512bit offers 90gpus at 100% performance if we count that in terms of products to market sure it might only be $20 on the manufacturing end but if those are wafers of $700 gpus those extra 10gpus just earned said company an extra $7000 per 100gpus at 256bit vs 90 at 512bit thats why you wont see 512bit.



Then remove some unnecessary design like the 5D shader, and stop adding more shaders like they did in R700... They wasted far more die space by putting in those extra floating-point features (again, like Fermi... for that stupid general compute and that stupid Folding@home). If they cut that off, they could save a lot of die space to stuff in more features for pure performance... though I think they did make some new tweaks in Southern Islands by trying to remove as many of those useless features as possible and bring back what a graphics card is supposed to do: render graphics. R600's massive shader architecture was one of the worst ways to improve performance.


xtremesv said:


> AMD Cayman will be a Cypress refreshed to regain the most powerful GPU crown again, I don't expect revolutionary architecture changes, maybe something creative with the tessellator(s?). I bet that Cayman will have around 2400 stream processors, 100 TU's and 48 ROP's with a GPU clock between 850 and 950 MHz.
> 
> This thread turned into a memory bus discussion. IMHO, the bus doesn't matter, you don't get anything having for example a 512 bit bus with a 400MHz DDR2 memory, what matters is the GDDR5 clock in this case, so AMD can stick to a cheaper 256 bit bus and clock the memory higher. Then I'd say GDDR5 clocked to 1500 MHz on a 256 bit bus (192 GB/s) is my safe bet.



48 ROPs means a "half-note" design... that is why Fermi fell so hard.


----------



## crazyeyesreaper (Sep 7, 2010)

And sure, they can go without it. There's GDDR, GDDR2, GDDR3, GDDR4, GDDR5; what's to say GDDR6 doesn't double the bandwidth again, much like GDDR3 vs GDDR5, hmm??? Either way, it doesn't matter. I'm walking away from this; you can believe in 512-bit all you want, but we're discussing the HD 6800 series and it won't have 512-bit.

After all, why would they make the 6000 series even more expensive to produce when it will most likely only be around for 10 months to a year, much like the 5K cards, before being replaced by the 7000 series, which will be an all-new architecture from the ground up? It makes no sense for a stop-gap GPU cycle to cost any more than is needed to hold their market share and keep Nvidia in check.


----------



## wahdangun (Sep 7, 2010)

Wow, chill out guys. It doesn't matter if it's 512-bit or even 256-bit; what matters is whether the card plays Crysis at 100+ FPS. That's the most important thing, along with not costing an arm and a leg.


Wow, I never expected this card to come out so quickly.


----------



## cheezburger (Sep 7, 2010)

crazyeyesreaper said:


> a it makes no sense to have a stop gap gpu cycle cost any more then is needed to hold there market share and keep Nvidia in check




That is exactly what ATI was thinking when they thought their R300 would last forever and people would be satisfied with current performance and not want more. Nvidia may be stuck for a bit, but it will come back and prove more valuable than a 3rd-rate company like AMD, while people like you stick around and enjoy that little success. At this rate it's likely to happen just like in the old days. It will be a big checkmate if Nvidia makes a 512-bit, 64/128-ROP, 512-CUDA-core part while removing all the GPGPU features. It has happened before (NV40 was said to be nothing, but it hit ATI hard when it released).

An ancient wisdom: "if you can't make history, you will be abandoned by history."

PS: Right after this discussion, just when I was about to bring evidence of 512-bit from Wikipedia, somebody had to erase it and mess up the whole article. Wow. Crazy, if you are part of a hack group you are in serious trouble; Wikipedia is under investigation. I didn't realize someone just can't take the truth, lol.


----------



## mastrdrver (Sep 7, 2010)

Dude, I'll put down money that no single gpu cayman card will come with a 512 bit bus. They may come with something more than 256 (which I very highly doubt) but I would bet a large amount of money that no card will come with a 512 bit memory bus.

AMD has said several times why they won't do it. You should read the two Anandtech articles about the 4 and 5 series if you haven't.


----------



## inferKNOX (Sep 7, 2010)

buggalugs said:


> On a different topic with all those connections it looks like we might have eyefinity without the need for active adapters.



How do you figure that?
To use 3 monitors, you'd still have to convert the mini-DP to DVI, since using 2 DVI ports disables the HDMI, leaving only the 2 DP ports.


----------



## meran (Sep 7, 2010)

It will be 256-bit with 6400 MHz GDDR5, so stop arguing. Nvidia went with 384-bit because they can't make GDDR5 touch 5000. Also, a lot of people think more bits is better, but the real thing is that more memory speed is a lot better. You get rid of:
1: 2x the wiring on the PCB
2: 2x the memory chips
3: EMI, which will let you hit higher clocks
4: so it's a lot cheaper, with a less complex PCB
See why the 460 can hit higher memory speeds than the 480: it's less complex on the memory controller side and the PCB side.
And if they made it 384-bit, it wouldn't hit 6400 MHz easily, would it??


----------



## KainXS (Sep 7, 2010)

AMD is highly against increasing the memory bus on their cards, which is why they waited until the 5870 to do it, while Nvidia was doing it 3 series before them; they had no choice, and they're not going to do it again for a while. I would rather know more about the architecture itself, and whether or not more tessellators were added, than sit here and whine about the memory bus. Without knowing anything about the architecture, talking about needing a bigger memory bus doesn't really mean much at this point.


----------



## meran (Sep 7, 2010)

KainXS said:


> AMD is highly against increasing the memory bus on their cards, which is why they waited till the 5870 to do it, while nvidia was doing it 3 series before them, they had no choice, and they're not going to do it again for a while, I would rather want to know more about the architecture itself and know whether or not more tesselators were added than sit here and whine about the memory bus, without knowing anything about the architecture, talking about needing a bigger memory bus dosen't really mean much at this point.



im with ya 

check this out; if it's real it will hurt Nvidia:
http://forums.anandtech.com/showpost.php?p=30402647&postcount=497


----------



## cheezburger (Sep 7, 2010)

meran said:


> it will be 256bit with 6400mhz gddr5 so stop arguing,nvidia went with 384bit cuz they cant make gddr5 touch 5000 also alot of people think that more bit is better but the real thing is more memory speed is allot better ,u get rid of:
> 1: 2x wiring on the PCB
> 2: 2x memory chips
> 3: get rid of EMI,which will make u hit higher clocks,
> ...



Higher clock rate and even higher clock rate... where did I hear that before? Oh, Netburst from Intel! Do you really think clock rate is important? No, it's IPC and DPR that matter. Fermi failed because it added too many features for scientific calculation and general computing (yeah, suck it Folding@home; most high-end users don't even care how many people die of cancer every year...). A 2 GHz Radeon with 12 GT/s GDDR RAM would not perform any better than a fully specced Fermi II that is no longer a GPGPU. From die size to layout/wiring cost, Radeon doesn't have any advantage anymore; maybe NV would cost $20-30 more? As for the RAM and the 384-bit bus: if they could do a complete die shrink and get rid of unnecessary parts like general computing, they would have a lot more headroom for those DDR speeds. Then again, 1.6 GHz or above isn't that necessary. However, some games like Crysis and STALKER take more advantage of bus/ROPs than of shaders/RAM speed. Most AMD fans don't know this and just keep blaming Crytek for optimizing the game engine for NV cards, but that is how Radeon cards are today: high core frequency with poor instructions per cycle, a tiny cache, relatively few ROPs, and an inefficient 5D shader that takes up a lot of die space. Hell, the 4870 still falls behind the 8800 GTS 640 in Crysis, FEAR and Quake Wars. In most benchmarks, clock rate does little to nothing for real-time gaming, and shaders only do well with the right architecture (like G80) or in games well optimized for a shader like AMD's 5D (like HAWX and Unreal Tournament). Maybe only 3DMark favors a higher clock rate!?

It has been said many times that Cayman is for the high-end market, so it has to be 64-128 ROPs, 3D/4D shaders and a 512-bit bus, and it will cut many of the unused shaders from the Cypress design. PCB layout and wiring cost would not be a consideration, unless you want to buy a crappy "high end" card like the 4870 that couldn't even compete with the 8800 GTX in 70% of games. Those useless shaders only work in games based on the Unreal 3 engine, but then the Unreal 3 engine is shit and console-exclusive anyway. Like I said before, if they get rid of the 5D shader, make it 4D or even 3D, and turn it into a pure gaming card, they will save plenty of space.

The GTX 460 hit higher RAM clocks because Nvidia wanted to grab back some market share, which forced them to do so, mostly via overclocking and overvolting, because they can't get faster RAM. It's not because "it's less complex on the memory controller side and the PCB side"; it's because Nvidia hasn't yet been licensed to integrate faster GDDR5, and as long as AMD and Hynix hold the GDDR5 patents and AMD doesn't authorize it, Nvidia and the GTX 480 will never get faster RAM. Those memory controller/PCB layout costs only matter on mid-range boards, so obviously the GTX 460 made the correct move, but it wouldn't be correct on an upcoming GTX 485... If Hynix could license NV faster RAM like AMD's, then with its wider bus Fermi would destroy any future line of Radeons that still continues that pathetic R600 design.


----------



## buggalugs (Sep 7, 2010)

inferKNOX said:


> How do you figure that?
> To use 3 monitors, you'd still have to convert the mini-DP to DVI, since using 2 DVI ports disables the HDMI, leaving only the 2 DP ports.



Not necessarily; that's just the way AMD designed it for the 5xxx series. It doesn't mean it's impossible to do.

Sapphire has a special 5770 Flex model that can drive 3 DVI monitors with no active adapters.

http://www.pccasegear.com/index.php?main_page=product_info&cPath=193_962&products_id=15368


----------



## crazyeyesreaper (Sep 7, 2010)

lol, I smell an Nvidia fanboy here. Let's face it: the green team was late, they're still late, and they won't compete until 28 nm. The 6000 series gives ATI the lead for another year, meaning 2 years where ATI, now AMD, has been top dog in GPUs. You can spin it however you want; it doesn't change the fact that even the stripped-down, gaming-oriented GTX 460 STILL consumes more power than a 5850 and close to a 5870, but is 20-40% slower in single-card configs, and the 480, while the fastest single-GPU card, is still only the 2nd fastest card overall.

And you seem to like talking about R600 and R300; what about G92, lol, in use for nearly 4 years with no real improvement, whereas ATI has scaled their design to offer better performance. You can argue your points all you want, because the fact is that the biggest, baddest GPU Nvidia had, the 512-shader GTX 480, was only 5% faster than the 480-shader current high end; both companies have room for improvement. And how is the R600 design pathetic? Last I checked, the 5850, 5870 and 5970 are highly competitive with Nvidia's offerings while generally using 35-40% less power, and if it's truly a failure, then why did Nvidia lose so much market share to this "pathetic" design?


----------



## buggalugs (Sep 7, 2010)

cheezburger said:


> no wonder amd/ati are always be a secondary company...



AMD has been on top since the 5xxx series came out in 2009. They've sold millions of them and they continue to sell like hotcakes.


----------



## cheezburger (Sep 7, 2010)

crazyeyesreaper said:


> lol i smell nvidia fanboy here lets face it the green team was late there still late they wont compete till 28nm the 6000 series gives ati the lead for another year meaning 2 years where ATi now AMD has been top dog in terms of GPUs you can spin it however you want it dosent change the fact even the stripped down gtx460 for gaming STILL consumes more power then a 5850 and close to a 5870 but is 20-40% slower in single card configs and the 480 while being the fastest single gpu card is still only 2nd fastest in terms of single card overall
> 
> and you seem to like talking about r 600 and r 300 what about the g92 lol been in use for nearly  4 years with no re improvement where as ati has scaled there design to offer better performance you can argue your points all you want because fact is the biggest baddest gpu nvidia had which was the 512 shader gtx 480 was only 5% faster then the 480 shader current high end both companies have room for improvement. and how is the r600 design pathetic? last i checked the 5850 5870 and 5970 are highly competitive compared to nvidias offerings and in general use nearly 35-40% less power to do so and if its truly fail then why did Nvidia lose so much market share to this pathetic design?



Read, read, read my previous post first!

Fermi consumes 40% more power with a 50% larger die because of too many non-gaming add-on features (GPGPU). If they could remove them, AMD would be in serious trouble: if the GTX 480 were no longer a GPGPU, the die would shrink at least 40% while keeping the same spec. You don't seem to understand how big a performance hit ROPs and the bus can cause, do you? Yeah, of course the GTX 480 is still second place among single-card solutions, but don't forget this: a dual-GPU PCB costs about twice what a GTX 480's can. The wiring is far, far more complex than any single-GPU board's. Let me tell you, a GTX 480 board is far cheaper than a Hemlock XT board; the only disadvantage is die size, that's all. If AMD insists on making only dual-GPU boards, they'd have to weigh more layout cost than an "UNNECESSARY" 512-bit board would. The GTX 460's case is more like the 5830: it cut off many features without optimizing the transistors, making its die less efficient (a 324 mm² "crippled" die that draws the power of a 400 mm² one, which seems logical).

You keep talking about how good the R600 architecture is while ignoring the fact that they sacrificed performance to save production cost. G92 came out in November 2007, which isn't even 3 years ago; where did you get 4 years? Even though G92 is old, the HD 4800/5770 still can't outpace G94 by a tremendous margin, the GTS 250 competes with the 4850 on the side, and AMD still couldn't field a better midstream product to outcast the old G92 line. Where was AMD all these years? Nvidia lost share only because of the discontinuation of the G92/G94 line with no product to replace it; they only lost a bit because of the back-to-school season when they had nothing to sell, not because of Fermi. And R600 IS a pathetic design whether you agree or not.


----------



## DrPepper (Sep 7, 2010)

cheezburger said:


> higher clockrate and even higher clockrate....where did i heard that before? oh netburst from intel! do you really think clockrate is important? no it is IPC and DPR that are important.



High clock speeds do not equal Netburst. The clock speeds for i7 are just as high as they were for Netburst (except Netburst could go higher), yet it can do much more.



> 4870 is still fall behind of 8800gts 640 in crysis and fear,quake war. in most of test bench clockrate do little to nothing to real time gaming and shader only do well when it's right architecture(like g80) or game that's well optimized to buzzard shader like amd's 5D (like hawk and unreal tournament ). may be only 3dmark favor higher clockrate!?



Total garbage.



> unless you want to buy a crappy "high end" like 4870 that couldn't even compete 8800gtx in 70% of games. these useless shader only work on game that based on unreal 3 engine, but again unreal 3 engine is shit and only exclusive for console. like i said before if they get rip of 5D shader and make it 4D or even 3D shader and turn it to pure gaming card will save plenty of space.



More garbage.


----------



## btarunr (Sep 7, 2010)

Stick to the topic, people.


----------



## vMG (Sep 7, 2010)

_JP_ said:


> They are DisplayPorts.
> Not very used and known, because most screens nowadays don't support the connector.



It's meant for Apple products.


----------



## mastrdrver (Sep 7, 2010)

_JP_ said:


> They are DisplayPorts.
> Not very used and known, because most screens nowadays don't support the connector.



Actually are they not mini display ports?

If so you'd need an adapter to even use a normal display port on a monitor.


----------



## pantherx12 (Sep 7, 2010)

vMG said:


> It's meant for Apple products.



And all the rest: NEC, HP, DELL, LENOVO, EIZO.

It's just not a widely accepted connector type yet.

I imagine due to its size quite a lot of manufacturers may use it, that or mini HDMI or something.

Regarding CAYMAN XT, I want more pics!


----------



## cheezburger (Sep 7, 2010)

crazyeyesreaper said:


> uh last i checked the 4890 runs with the gtx 275 which = the 280 so a 4890 isnt really fail since for most part in that generation all ati cards were cheaper then there nvidia counterparts and oh yea i still love the $330 price tag on the 285 when the 5850 was $259 on release eitherway more on topic pic looks intresting i prefer the black and red with no stupid stickers



The only time the HD 4890 had a chance to walk alongside the GTX 275 was on the Unreal 3 engine, as no other game engine takes advantage of the R600 architecture... I don't need to mention the rest of the results, as you all know what's coming. Go look at HD 4890 benchmarks on AnandTech and Tom's Hardware first; they will give you more detail on how much of a failure the R600 architecture was. Oh, the HD 4890 may be cheap, but it is a midstream card, and mid-range cards are SUPPOSED to be cheap ($150-229 for mid-range), but AMD just overpriced it ($299). :shadedshu


----------



## crazyeyesreaper (Sep 7, 2010)

http://www.techpowerup.com/reviews/Sapphire/HD_4890_Toxic_Vapor-X/9.html
http://www.techpowerup.com/reviews/Sapphire/HD_4890_Toxic_Vapor-X/14.html
http://www.techpowerup.com/reviews/Sapphire/HD_4890_Toxic_Vapor-X/18.html
http://www.techpowerup.com/reviews/Sapphire/HD_4890_Toxic_Vapor-X/7.html

Source material: now go read a real GPU review. All of the above use different game engines, and what's this? At low res the 4890 is slower, but at higher res, oh wait, it trades blows with the 275, 280 and 285.

So from the material above, it performed better and was cheaper at the time. Oh snap.

The 6000-series Cayman looks interesting; a tweaked, more efficient design has my attention, as I do plan to buy a few 6-series cards, and if they all look as sexy as the one pictured, I'm in.


----------



## erocker (Sep 7, 2010)

btarunr said:


> Stick to the topic, people.





cheezburger said:


> the only moment hd 4890 had chance to walk side with gtx 275 was on unreal 3 engine as there won't be any game engine would take advantage on r600 art.... i dont need to mention the rest of result as you all know what's coming. go look hd 4890 bench on anandtech and tom's hardware first they will give you more detail how failure r600 art was like. oh hd 4890 may be cheap but it is mid stream card and mid range card SUPPOSE to be cheap(150~229 for mid range), but amd just over priced it($299). :shadedshu





crazyeyesreaper said:


> http://www.techpowerup.com/reviews/Sapphire/HD_4890_Toxic_Vapor-X/9.html
> http://www.techpowerup.com/reviews/Sapphire/HD_4890_Toxic_Vapor-X/14.html
> http://www.techpowerup.com/reviews/Sapphire/HD_4890_Toxic_Vapor-X/18.html
> http://www.techpowerup.com/reviews/Sapphire/HD_4890_Toxic_Vapor-X/7.html
> ...



You guys see Bta's post up there? I suggest you follow his advice as this is your last warning.


----------



## KainXS (Sep 7, 2010)

there are rumors that AMD will redesign the SP's for more performance

the Sp's on the 2XXX-5XXX series





the Sp's on the 6XXX series




the 6 series will probably have more Sp's also

Edit

oops didn't see your post erocker sry


----------



## erocker (Sep 7, 2010)

KainXS said:


> just give it up, he's in his own world, mr 128 rops on a 512bit bus
> 
> there are rumors that AMD will redesign the SP's for more performance
> 
> ...



So delete your post then? You added pictures after you say "oops?" Please stop.


----------



## pantherx12 (Sep 7, 2010)

erocker said:


> So delete your post then? You added pictures after you say "oops?" Please stop.




Those pictures at least are relevant to the discussion; the Cayman XT will be using this new setup. Handy visual explanation, really : ]


----------



## meran (Sep 8, 2010)

crazyeyesreaper said:


> lol i smell nvidia fanboy here lets face it the green team was late there still late they wont compete till 28nm the 6000 series gives ati the lead for another year meaning 2 years where ATi now AMD has been top dog in terms of GPUs you can spin it however you want it dosent change the fact even the stripped down gtx460 for gaming STILL consumes more power then a 5850 and close to a 5870 but is 20-40% slower in single card configs and the 480 while being the fastest single gpu card is still only 2nd fastest in terms of single card overall
> 
> and you seem to like talking about r 600 and r 300 what about the g92 lol been in use for nearly  4 years with no re improvement where as ati has scaled there design to offer better performance you can argue your points all you want because fact is the biggest baddest gpu nvidia had which was the 512 shader gtx 480 was only 5% faster then the 480 shader current high end both companies have room for improvement. and how is the r600 design pathetic? last i checked the 5850 5870 and 5970 are highly competitive compared to nvidias offerings and in general use nearly 35-40% less power to do so and if its truly fail then why did Nvidia lose so much market share to this pathetic design?



+1 im with ya


----------



## mastrdrver (Sep 8, 2010)

Does that look like two PCIe 6-pins connected to the card?

I think it will be quite impressive if the card only needs that for a chip of ~400 mm². One thing that makes me want to keep my 5870s is that they run cool and don't pull a lot of power, especially at idle. I don't want to go back to something like my 4870X2. Sure, it was powerful for a single card, but I had to run the fan at 50-60% to keep it cool.


----------



## a_ump (Sep 8, 2010)

I'm surprised there aren't any game bench leaks besides Crysis. For me the hype isn't really about the release itself; we all expect the family to perform at least 20% faster overall.

What I'm hyped for is to see what Nvidia says and how they counter, if they can at all. AMD has also gained a lot more GPU market share over the past two years. There was a graph showing it was actually at 51% as of 2010, and that's only going to grow with this blow to Nvidia. Amazing how the tides turn, because I sure as hell never expected AMD to be back on top over Nvidia.

EDIT: Now that I think about it, quite a few people were saying Nvidia was getting lazy and not being inventive enough, and they were right lol


----------



## Wile E (Sep 8, 2010)

I hate when they put display connectors in the second slot spacing. I like having the option of going single-slot with a full-cover block.


----------



## pantherx12 (Sep 8, 2010)

I'd like to see more ATi cards without that stacked dual-DVI block; I'd rather have two DVI ports next to each other and a mini DP or something.


----------



## wahdangun (Sep 8, 2010)

Man, I hope AMD ditches DVI altogether, uses DisplayPort, and bundles a DisplayPort-to-DVI adapter instead. DVI is gigantic and takes up a lot more space than DisplayPort.


----------



## wolf (Sep 8, 2010)

mastrdrver said:


> Actually are they not mini display ports?
> 
> If so you'd need an adapter to even use a normal display port on a monitor.



I believe you are correct, sir; they look like minis. Here's hoping the card ships with at least one adapter.


----------



## CrystalKing (Sep 8, 2010)

PCB pics

Cayman XT sample cards. The PCB looks like the HD 5870's; the core is at 900 MHz, but I don't know if that is the final frequency. 6+8-pin power.


----------



## cadaveca (Sep 8, 2010)

Looks like 256-bit, to me...


----------



## erocker (Sep 8, 2010)

8+6 pin PCI-E connectors. 8 memory chips.


----------



## cadaveca (Sep 8, 2010)

6-phase GPU power, 2-phase mem? So, say, 230 W? Here's hoping 2x DP and 1x DVI works for Eyefinity...


----------



## largon (Sep 8, 2010)

cadaveca said:


> 6 phase gpu power, 2-phase mem?


Yep. All Volterra. 



cheezburger said:


> larger bus require more pin on the bga board that contain gpu. indeed but it doesn't really give any further die increase and nothing to do with die/wafer. just the board become more complex and more layer for pcb board and increase size of gpu footprint, but not die size.


Wider memory bus doesn't take more die area? 
That's just wrong. Memory bus width has a huge impact on die size. 
On RV770, the 256-bit memory controller with I/O pads takes 14% of the die size, around 36 mm². 
On R600, the 512-bit bus takes ~35-40% of total die size; that's a whopping 125-170 mm². 
Also, it takes twice the number of memory chips for 512-bit, so cost goes up for many, many reasons. 


> then remove some unnecessary design such as 5D shader and stop adding more shader like what they did in r700...they were wasted far more die space by putting these additional float point feature


Huh? The added shaders made up the majority of the performance increase we saw in RV670->RV770. 







> r600's massive shader architecture was one of worst way to improve performance.


----------



## cadaveca (Sep 8, 2010)

largon said:


> Huh? The added shaders made up the majority of the performance increase we saw in RV670->RV770.



Based on testing I've been doing recently, I really think they need to lower shader counts and make the actual shaders MORE complex. The "Ultra-Threaded" nature of ATi's design is TOO THREADED... so they need to deal with that in some way. Those pics up above seem to say the same thing, and that makes me quite happy.

However, if they launch without Bulldozer...

I really don't want to see the 6870 yet. I do not have confidence that the rest of the market is ready for this chip. ATi wants to spank nV... but I think they are just seeing RED.


----------



## erocker (Sep 8, 2010)

cadaveca said:


> Based on testing I've been doing recently, I really think they need to lower shader counts and make the actual shaders MORE complex.



I'm thinking that is exactly what they are doing with the 6 series.


----------



## largon (Sep 8, 2010)

I do agree they can't go on bloating the SIMD core anymore, as Cypress demonstrates the returns are indeed diminishing. But the RV670->RV770 transition was a smooth move; of course they also nearly doubled RBE throughput, but most of the perf gain was in the expansion of the shader core.


----------



## wahdangun (Sep 8, 2010)

cadaveca said:


> Based on tsting I've been doing recently, I really think they need to lower shader numbers, and make the actual shaders MORE complex. The "Ultra-Threaded" nature of ATi's design is TOO THREADED...so they need to deal with that in some way. those pics up above seem to say that same thing, and that makes me quite happy.
> 
> However, if they launch without Bulldozer...\
> 
> I really don't want to see 6870 yet. I do not have confidence that thet rest of the market is ready for this chip. ATi wants to spank nV...but I tihnk they are just seeing RED.



But actually the 5D shader is more die-efficient than NVIDIA's counterpart (I was reading an AnandTech article from when RV770 came out), and that's why 160 cores in the HD 4870 can compete with 192 CUDA cores in the GTX 260. And if a game can fully utilize the 5D shaders, they can be a lot more powerful.


----------



## largon (Sep 8, 2010)

For example, 5D shaders totally steamroll in Furmark, which isn't even optimized for them.


----------



## cadaveca (Sep 8, 2010)

erocker said:


> I'm thinking that is exactly what they are doing with the 6 series.



I can't help but be a bit excited by that. I mean, sure, I'm only coming to these conclusions NOW about the HD5 series, but I'm sure AMD has been aware of this for some time.

And maybe that driver change was a pre-emptive strike in preparation for these cards...


I'm still more interested in Bulldozer-based Fusion chips, though. The combination of that CPU plus these add-in cards (if less complex, but with higher-order math) might be the huge boost that pushes AMD back into the performance lead when it comes to 3D.



wahdangun said:


> but actually 5D shader is more die efficient than NVDIA counter part (i'm reading some anandtech article when RV770 was out), and thats why 160 core in HD 4870 can compete with 192 cuda core in GTX 260. and if the game can fully utilize the 5D shader then it can be a lot more powerful



To me, it seems that only really suits the HPC crowd and older-style game programming, though. Largon's mention of Furmark illustrates that very well, IMHO. No game pushes the HD5 series like Furmark... the math in Furmark is very simple, and games' math is not.

I'm looking for a few other specific changes, and AMD really might have a huge winner here...I guess time will tell.


----------



## cheezburger (Sep 8, 2010)

largon said:


> Yep. All Volterra.
> 
> Wider memory bus doesn't take more die area?
> That's just wrong. Memory bus width has a huge impact on die size.
> ...



125-170 mm^2 on 80 nm doesn't mean it will take as much die space at 40 nm:

512-bit bus at 40 nm: 170 mm^2 / (80 nm / 40 nm)^2 = 42.5 mm^2
256-bit bus at 40 nm: 36 mm^2 / (55 nm / 40 nm)^2 ≈ 19.0 mm^2

So overall, a 512-bit bus would take only about 12-13% of Cypress's current 334 mm^2 die. It isn't really that big.
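The shrink arithmetic above is easy to reproduce; here's a quick sketch assuming ideal 2D scaling (area divided by the square of the node ratio), using the die-size figures quoted in the post. The Cypress die size of 334 mm² is also taken from this thread:

```python
# Ideal-case die-area scaling for a memory controller block.
# Assumes area shrinks with the square of the process node ratio,
# which is optimistic for I/O-heavy blocks (pads barely shrink).

def scaled_area(area_mm2, old_node_nm, new_node_nm):
    """Scale a block's area assuming an ideal 2D process shrink."""
    return area_mm2 / (old_node_nm / new_node_nm) ** 2

# 512-bit controller: ~170 mm^2 on R600's 80 nm process -> 40 nm
bus512 = scaled_area(170, 80, 40)   # 42.5 mm^2

# 256-bit controller: ~36 mm^2 on RV770's 55 nm process -> 40 nm
bus256 = scaled_area(36, 55, 40)    # ~19.0 mm^2

cypress_die = 334  # mm^2, Cypress die size as given in the thread
print(f"512-bit: {bus512:.1f} mm^2 ({bus512 / cypress_die:.1%} of Cypress)")
print(f"256-bit: {bus256:.1f} mm^2 ({bus256 / cypress_die:.1%} of Cypress)")
```

Note this is the best-case figure; as pointed out later in the thread, pad-limited I/O scales far worse than this.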


----------



## cadaveca (Sep 8, 2010)

cheezburger said:


> 125-170mm^2 on 80nm doesn't mean it will take as much die space in 40nm
> 
> 512bit bus in 40nm: 170mm^2/(80nm/40nm)^2 = 42.5mm^2 ~10%
> 256bit bus in 40nm: 36mm^2/(55nm/40nm)^2 = 19.4mm^2
> ...



12% is a lot if it's not needed. And given that we know there are just 8 memory chips, it's basically impossible for it to be truly 512-bit... that would require 16 RAM ICs. Memory bandwidth isn't the issue for Cypress... so there would be no need for such a drastic change.


----------



## cheezburger (Sep 8, 2010)

cadaveca said:


> 12% is alot if it's not needed. And given that we know there are just 8 memory chips, it's basically impossible for it to be truly 512-bit...that would require 16x ram ICs. Memory bandwidth isn't the issue for Cypress...so there would be no need for such a drastic change.



12% is a lot? The 256-bit bus already took about 15% of the die on the HD 4870...

The 5770 is also an 8-chip card, but it's only 128-bit, so chip count by itself has nothing to do with bus width the way it does on Nvidia's designs. Just to remind you, the X2900XT ALSO has only 8 chips. The 16x/12x RAM IC arrangement you mention is Nvidia's exclusive architecture; AMD's cards can add as much RAM as possible without worrying about the bus or the RAM controller.


----------



## cadaveca (Sep 8, 2010)

cheezburger said:


> 12% is a lot? 256bit bus already took about 15% of die on hd 4870 already...
> 
> 5770 is also a 8 ram chip card but it only had 128bit so basically chip number is nothing to do with bus like nvidia's design for its card. just to remind you x2900xt is ALSO have only 8 chip as well. for what you said about 16x 12x ram IC was nvidia's exclusive architecture which amd's card can add as much ram as possible without concerning ram bus and ram controller.



First, your point about the 5770 only illustrates my point. GDDR5 only works in so many configurations... and there are only so many types of IC available (the 5770 gets 8 ICs on 128-bit the same way the 5870 can get 2 GB on 256-bit). Together with that info, I CAN draw those conclusions. nVidia's 384-bit and lower works on the same principle... there are several 64-bit busses, and each bus can only contain certain configurations of RAM ICs... in effect, Fermi has 2x more 64-bit controllers, and as such needs those extra ICs.

The 2900XT, truly, is only 256-bit. It was considered 512-bit because it had 256 bits to the "ringstops", and then 256 bits from ringstop to memory ICs. Because those two busses could operate independently, both could have data in flight, so it was effectively credited with 512 bits of data transfer... but the memory bus is NOT truly 512-bit.

You are ignoring that AMD is a business, and as such, profitability is concern #1. Changes that increase pricing must have a real, tangible benefit, or they will be cut from the design... Cypress, at first, was a much larger chip than what we got, for exactly this reason. With that in mind, they can make better, more PRICE-EFFECTIVE use of that die space than adding 512-bit memory control.

So I can say that the pictured card is 256-bit only... due to the ICs... the only other option, based on available parts, is a 128-bit GPU, and that would not suffice for a high-end SKU.


----------



## wolf (Sep 8, 2010)

Really, all it needs is faster memory chips; 512-bit is useless the way GDDR5 speeds are soaring.


----------



## largon (Sep 8, 2010)

GDDR5 will get a nice speed bump when differentially clocked chips and GPU memory controllers start appearing. And I reckon Cayman wields a controller capable of differential I/O... 
Prepare to say hello to 5-10 GHz GDDR5 chippery. 


cheezburger said:


> 125-170mm^2 on 80nm doesn't mean it will take as much die space in 40nm
> 
> 512bit bus in 40nm: 170mm^2/(80nm/40nm)^2 = 42.5mm^2 ~10%
> 256bit bus in 40nm: 36mm^2/(55nm/40nm)^2 = 19.4mm^2
> ...


Only in theory. Reality is less ideal. 
Scratch that. You're in the wrong ballpark entirely. 

You're comparing a GDDR3/4 memctrl with GDDR5. GDDR5 uses some ~20% more pins (area), so a 512-bit GDDR5 controller is larger than a 512-bit GDDR3/4 controller. And what's worse, MEMIO does not scale linearly with fab process; it hardly scales at all on today's processes. The problem is the I/O pads: the solder balls between the IC and the chip carrier are sized what they are, and there's no way to shrink the distance between 'em. The difference between I/O pads on, say, a 90 nm chip and a 32 nm chip is nowhere near as large as one would think. 
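The ideal-shrink numbers a few posts up change considerably once the pad ring is held fixed. A toy model of that effect, where only the logic portion of the controller scales with process; the 60/40 logic-to-pad split is purely an assumed figure for illustration, not something from the thread:

```python
# Toy model of pad-limited scaling: logic shrinks with the square of
# the node ratio, while the I/O pad ring is pitch-limited and is held
# constant. pad_fraction = 0.4 is an illustrative assumption.

def shrink_memctrl(total_mm2, old_nm, new_nm, pad_fraction=0.4):
    logic = total_mm2 * (1 - pad_fraction) / (old_nm / new_nm) ** 2
    pads = total_mm2 * pad_fraction  # pads barely scale; held fixed here
    return logic + pads

ideal = 170 / (80 / 40) ** 2              # naive full shrink: 42.5 mm^2
realistic = shrink_memctrl(170, 80, 40)   # pad-limited: 93.5 mm^2
print(f"ideal shrink: {ideal:.1f} mm^2, pad-limited: {realistic:.1f} mm^2")
```

Even with a made-up split, the gap shows why a naive node-ratio calculation understates what a wide bus costs on a new process.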


cheezburger said:


> just to remind you x2900xt is ALSO have only 8 chip as well. for what you said about 16x 12x ram IC was nvidia's exclusive architecture which amd's card can add as much ram as possible without concerning ram bus and ram controller.


That's just plain wrong. 
Bus width is dictated by the number of memory chips and the bit width of those chips. Any and all GDDR3/4/5 chips come only up to 32 bits wide. The HD2900XT has 16 chips. And that's a fact. 
I have two HD2900XT cards here, so I can point 'em out to you if you want to argue. 
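Largon's rule can be sanity-checked in a few lines: total bus width is just chip count times per-chip I/O width, where GDDR3/4/5 chips top out at 32 bits. A minimal sketch using the card configurations mentioned in this thread; treating the HD 5770's 8 chips as running in 16-bit (clamshell) mode is an assumption for illustration:

```python
# Bus width from memory chip count, per largon's rule above.
# GDDR3/4/5 chips expose at most a 32-bit interface; in clamshell
# (x16) mode two chips share one 32-bit channel at 16 bits each.

def bus_width(num_chips: int, bits_per_chip: int = 32) -> int:
    """Total memory bus width in bits."""
    return num_chips * bits_per_chip

print(bus_width(16))     # HD 2900 XT: 16 chips x 32-bit -> 512
print(bus_width(8))      # pictured Cayman XT: 8 chips x 32-bit -> 256
print(bus_width(8, 16))  # HD 5770: 8 chips in x16 mode -> 128 (assumed)
```

This is why 8 visible memory chips caps the pictured card at 256-bit unless it were running a narrow per-chip mode, which would make no sense on a high-end SKU.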








cadaveca said:


> 2900XT, truly, is only 256-bit. It was considered 512-bit becuase it had 256-bit to the "ringstops", and then 256-bit from "ringstop" to mem IC's. because these two busses could operate independantly, both can have data in-flight, it was effectively given 512-bits of data transfer...but the memory bus, is NOT truly 512bit.


Not true. R600 was as true 512-bit as can be. 
There were eight ringstops, each a dual-channel (64-bit) controller, and each ringstop connected to two other stops via a 1024-bit bidirectional bus (512-bit up + 512-bit down).


----------



## cadaveca (Sep 8, 2010)

largon said:


> Not true. R600 was as true 512bit as can be.
> There was eight ringstops of which each was a dualchannel controller (64bit) and each ringstop connected to two other stops with 1024bit bidirectional bus (512bit˄ + 512bit˅).




Yeah, you know where I screwed up the math.

But that isn't even 100% true, as there is also a ringstop for PCI-E and a ringstop for the CrossFire connector. But yes, eight for memory control.


----------



## wolf (Sep 9, 2010)

largon always knows best, you can take that to the bank.


----------



## mastrdrver (Sep 9, 2010)

Huh, the 5D shader is gone.

4D is what's coming in the 6 series.

Like I said in the 6k thread, expect a shader setup like what nVidia changed to for Fermi (not exact, but similar). Shaders will be grouped with parts of the DX11 pipeline, including tessellation, since triangles/clock is what will define a DX11 GPU. Of course, you need to group shaders with these so they work together as units.

Kind of like a multi-core CPU, but not really.


----------



## inferKNOX (Sep 9, 2010)

wahdangun said:


> man, but *i hope AMD ditch DVI all together, and use Displayport*, and bundle displayport to DVi instead, DVi is gigantic and take a lot more space than displayport


That's quite an interesting idea. I wonder if AMD has that planned for future cards.
DP is said to be more flexible and so on, plus royalty-free, so it seems quite possible.


----------



## crazyeyesreaper (Sep 9, 2010)

I would prefer they don't, as it would essentially fuck up my entire setup here, and I'm not a fan of DisplayPort at all.


----------



## overclocking101 (Sep 9, 2010)

any new pics yet?


----------



## Super XP (Nov 4, 2010)

> AMD put a lot of time and effort to improve infrastructure and enhance performance.


Cayman XT is going to obliterate the competition. These have been good times for ATI/AMD this past year or so. Or should I say for us gamers and for competition as a whole.


----------



## JATownes (Nov 5, 2010)

Wow... thread necro from a few months ago... Way to bring a Cayman thread back from the dead.


----------



## CrystalKing (Nov 5, 2010)

*Cayman XT HD 6970: it's very, very long*

DVI ×2 + HDMI + mini-DP ×2, 6+8-pin power, 2 GB GDDR5 at 860 MHz


----------



## TheMailMan78 (Nov 5, 2010)

Release it already. Enough with this bullshit.


----------



## overclocking101 (Nov 6, 2010)

Yeah, those pics of a "long" card are not working for me, but I assume the 69XX cards will be long, probably around 10.5-12 inches just as the last ones were, with a 13-14 inch dual-GPU card.


----------

