# AMD to Skip 20 nm, Jump Straight to 14 nm with "Arctic Islands" GPU Family



## btarunr (Apr 23, 2015)

AMD's next-generation GPU family, codenamed "Arctic Islands" and planned for launch some time in 2016, will see the company skip the 20 nanometer silicon fab process entirely, jumping from 28 nm straight to 14 nm FinFET. Whether the company will stick with TSMC, which is facing crippling hurdles implementing its 20 nm node for GPU vendors, or hire a new fab, remains to be seen. Intel and Samsung are currently the only fabs whose 14 nm nodes have attained production capacity: Intel is manufacturing its Core "Broadwell" CPUs, while Samsung is manufacturing its Exynos 7 (refresh) SoCs. Intel's joint venture with Micron Technology, IM Flash, is manufacturing NAND flash chips on 14 nm.

Named after islands in the Arctic Circle (possibly a hint at the low TDP of chips benefiting from 14 nm), "Arctic Islands" will be led by "Greenland," a large GPU that will implement the company's most advanced stream processor design and HBM2 memory, which offers 57% higher memory bandwidth at just 48% of the power consumption of GDDR5. Korean memory manufacturer SK Hynix is ready with its HBM2 chip designs.
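
As a rough sanity check of claims like these, peak memory bandwidth is simply total bus width times per-pin data rate. The figures below are commonly quoted ballpark specs for a Hawaii-class GDDR5 card and first-generation HBM, assumed here purely for illustration:

```python
# Peak memory bandwidth = total bus width (bits) x per-pin data rate (Gbps) / 8.
# The example figures below are commonly quoted specs, assumed for illustration.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

# Hawaii-class GDDR5 card: 512-bit bus, 5 Gbps per pin
gddr5 = bandwidth_gbs(512, 5.0)        # 320.0 GB/s

# Four first-gen HBM stacks: 4 x 1024-bit, 1 Gbps per pin
hbm1 = bandwidth_gbs(4 * 1024, 1.0)    # 512.0 GB/s

print(gddr5, hbm1, hbm1 / gddr5)
```

HBM gets there by going very wide at a low clock and voltage, which is also where the power-per-bit savings quoted above come from.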

*View at TechPowerUp Main Site*


----------



## matar (Apr 23, 2015)

Not an AMD fan, but I have to say that's a smart move, AMD.


----------



## Noel.VSL (Apr 23, 2015)

14nm NAND flash memory, really!?


----------



## HumanSmoke (Apr 23, 2015)

Can't say it comes as a surprise, but all those people yelling from the rooftops about the 390X being a 20nm design must be getting acid indigestion about now.


matar said:


> Not an AMD fan, but I have to say that's a smart move, AMD.


20nm from either GloFo or TSMC is wholly unsuited for large power-budget ICs. Bit of a no-brainer that both AMD and the competition would skip the process node. With Nvidia already looking to cover their bases by sourcing from both TSMC and Samsung, AMD really couldn't commit to a late (and by all accounts, underperforming) 20nm.


----------



## john_ (Apr 23, 2015)

HumanSmoke said:


> Can't say it comes as a surprise, but all those people yelling from the rooftops about the 390X being a 20nm design must be getting acid indigestion about now.


I thought the 20nm big GPU idea died months ago. I didn't know there were still debates about that.


----------



## micropage7 (Apr 23, 2015)

Nice move, but seriously, AMD should offer more. If they can't pass their competitor, they should offer something new.


----------



## HumanSmoke (Apr 23, 2015)

john_ said:


> I thought the 20nm big GPU idea died months ago. I didn't know there were still debates about that.


Well, I wasn't pointing at TPU in particular, but some people still refuse to believe that both TSMC's CLN20SOC and GloFo's 20LPM were never intended for the large power-budget ICs that characterise GPUs, as their own respective literature spells out.


----------



## john_ (Apr 23, 2015)

HumanSmoke said:


> Well, I wasn't pointing at TPU in particular, but some people still refuse to believe that both TSMC's CLN20SOC and GloFo's 20LPM were never intended for the large power-budget ICs that characterise GPUs, as their own respective literature spells out.


The only rumors I know of that still persist about big chips at 20nm have to do with the Xbox One and PS4 APUs. Everything else about 20nm is either about Nolan/Amur or other ARM SoCs. But discrete GPUs on 20nm for desktops were abandoned as an idea many months ago by most people/publications. It would have been great if Granada were a 20nm Hawaii instead of just a rebrand, but it's not happening.


----------



## Caring1 (Apr 23, 2015)

micropage7 said:


> Nice move, but seriously, AMD should offer more. If they can't pass their competitor, they should offer something new.


AMD are already comparable on performance; a power reduction is all they need to get the edge. Combined with easy overclocking of cores and memory, they should come out in front.


----------



## GreiverBlade (Apr 23, 2015)

micropage7 said:


> Nice move, but seriously, AMD should offer more. If they can't pass their competitor, they should offer something new.


Passing? They are toe to toe... technically.

Well, power consumption is a letdown, but as an example my 290 still keeps up with the brand-new 970 that a friend has in a setup quite similar to mine... and the 980 is not too far above, so that even a 290X can keep it in check.

OK, now there are the Titan X and the "upcoming" 980 Ti, but... the Titan X is a steal and the 980 Ti is a "bend over, here it comes again" scenario.

Reducing the manufacturing node means reduced power needs, if I am not mistaken? So maybe 14nm will be the feature that lets Arctic Islands have lowered power consumption.

(Even if Volcanic Islands will still be on 28nm, it will still be a competitor for Maxwell and the upcoming nV cards until Arctic Islands is released into the wild.)

I guess I will keep my 290 until the next next gen...


----------



## dorsetknob (Apr 23, 2015)

Noel.VSL said:


> 14nm NAND flash memory, really!?


Re-read the OP, or:



btarunr said:


> Named after islands in the Arctic circle, and a possible hint at the low TDP of the chips, benefiting from 14 nm, "Arctic Islands" will be led by "Greenland," a large GPU that will implement the company's most advanced stream processor design, and implement HBM2 memory, which offers 57% higher memory bandwidth at just 48% the power consumption of GDDR5. Korean memory manufacturer SK Hynix is ready with its HBM2 chip designs.





btarunr said:


> Samsung is manufacturing its Exynos 7 (refresh) SoCs. Intel's joint-venture with Micron Technology, IMFlash, is manufacturing NAND flash chips on 14 nm.



There was no mention of AMD using NAND flash memory.


----------



## micropage7 (Apr 23, 2015)

GreiverBlade said:


> Passing? They are toe to toe... technically.
> 
> Well, power consumption is a letdown, but as an example my 290 still keeps up with the brand-new 970 that a friend has in a setup quite similar to mine... and the 980 is not too far above, so that even a 290X can keep it in check.
> 
> ...


Yeah, 14nm may reduce power consumption; as usual the top of the line has the basic problems: heat and high power draw.

Personally, I wonder about the real-world performance of the new 14nm parts.


----------



## Ferrum Master (Apr 23, 2015)

I just want a new GPU... quit the teasing already... my 7970 is starting to cough up blood...


----------



## Lionheart (Apr 23, 2015)

micropage7 said:


> Nice move, but seriously, AMD should offer more. If they can't pass their competitor, they should offer something new.



HBM memory is new...


----------



## 64K (Apr 23, 2015)

Then we should see a considerable leap in performance from the 390X to the 490X. I also think AMD has codenamed it Arctic Islands because they have found a way to design a GPU that doesn't draw as much power as their current chips do to compete with Nvidia. I hope their flagship is still around a 250-watt card design, though; using that much wattage with the improved efficiency of the 14nm process should make for a beast.


----------



## buggalugs (Apr 23, 2015)

AMD should have dumped TSMC long ago, although there aren't that many choices. AMD should try to do a deal with Samsung.


----------



## the54thvoid (Apr 23, 2015)

Caring1 said:


> AMD are already comparable on performance; a power reduction is all they need to get the edge. Combined with easy overclocking of cores and memory, they should come out in front.



Depends on how you define comparable.  Perf/price, they're better than Nvidia.  Perf/watt, they are way behind.  On pure performance you can argue Titan X -> GTX 980 -> Titan Black -> GTX 780 Ti -> (or =) R9 290X.  All single-GPU of course.  The 295X2 beats them all when Crossfire scaling works.

AMD need the 390X to shift the market.  By 2016 Nvidia will be on their Pascal designs.  June can't come soon enough for me, let alone 2016.


----------



## Naito (Apr 23, 2015)

All I can say is that this will be a very interesting time; not that I'll be getting 4K any time soon, but will be keen to see what tech on such a fab is capable of. That, and DX12.


----------



## JMccovery (Apr 23, 2015)

buggalugs said:


> AMD should have dumped TSMC long ago, although there aren't that many choices. AMD should try to do a deal with Samsung.



Who was able to compete with TSMC 'long ago', hmm?

Absolutely no one.

Samsung wasn't even a blip on the radar until recently.


----------



## jabbadap (Apr 23, 2015)

Hmh, there's no upcoming 14nm process at TSMC. TSMC's roadmaps show the next node after 16nm FF+ is 10nm FF. So if AMD is truly going to 14nm, it must be on GloFo/Samsung's node (I doubt Intel will sell them its 14nm capacity).


----------



## Kaotik (Apr 23, 2015)

Actually GlobalFoundries has 14nm in production already, too. 
Their biggest owner, Mubadala Development, announced early this month that GloFo is already ramping up 14nm production for a client (meaning it's not even test chips)


----------



## Jorge (Apr 23, 2015)

Kaotik said:


> Actually GlobalFoundries has 14nm in production already, too.
> Their biggest owner, Mubadala Development, announced early this month that GloFo is already ramping up 14nm production for a client (meaning it's not even test chips)



Exactly. And GloFo will be delivering a variety of 14 nm chips: CPUs, APUs, and GPUs for said customers.


----------



## Casecutter (Apr 23, 2015)

btarunr said:


> _Named after islands in the Arctic circle, and a possible hint at the low TDP of the chips, benefiting from 14 nm, "Arctic Islands" will be led by "Greenland," a large GPU that will implement the company's most advanced stream processor design, and implement HBM2 memory, which offers 57% higher memory bandwidth at just 48% the power consumption of GDDR5._ Korean memory manufacturer SK Hynix is ready with its HBM2 chip designs.


Whew... that was a three-breath sentence.

But it's that last line that surprises: all of a sudden rumors say 1st-gen HBM is constrained, even though SK Hynix indicated client shipments started in January 2015.  While this says SK Hynix is "ready" with HBM2, it's surely not near production, but it appears on track.

What's more in question is where TSMC is with 16 nm FinFET, as some rumors say others have been "investigating options" or "keeping an open mind" for their next shrink.  Some speculate TSMC might not have full production for large power-budget ICs until Q3 2016.  Such a lapse might give AMD the window to get Arctic Islands parts solidly vetted at GloFo and still be ready by this time next year.


----------



## Bjorn_Of_Iceland (Apr 23, 2015)

GreiverBlade said:


> Passing? They are toe to toe... technically.
> 
> Well, power consumption is a letdown, but as an example my 290 still keeps up with the brand-new 970 that a friend has in a setup quite similar to mine... and the 980 is not too far above, so that even a 290X can keep it in check.


So is my GTX 780... a 2-year-old card... and the 980 is not too far above, so that even a 780 Ti can keep it in check.

AMD is lagging so much that they needed to skip 20nm just to stay competitive.


----------



## Casecutter (Apr 23, 2015)

Bjorn_Of_Iceland said:


> So is my GTX 780... a 2-year-old card.


Could almost be... the GTX 780 launched May 23, 2013.


----------



## alwayssts (Apr 23, 2015)

Casecutter said:


> Whew... that was a three-breath sentence.
> 
> But it's that last line that surprises: all of a sudden rumors say 1st-gen HBM is constrained, even though SK Hynix indicated client shipments started in January 2015.  While this says SK Hynix is "ready" with HBM2, it's surely not near production, but it appears on track.
> 
> ...



I get the impression it is the 2x1GB stacks that are constrained; everything points to that imho.

First, and for a long time, we heard 'Fiji' was only going to be 4GB (4x1GB).  Then we heard murmurs AMD was internally battling with offering an 8GB design, even though it might hold up production and raise the price over $700.  Then we got that slide deck that included what appeared to be info fresh off the line about making 2x1GB stacks (likely meaning the bandwidth of a single 1GB stack with two connected stacks, or 2x chips in a stack)...something that nobody really saw coming (HBM1 was going to be 4-hi 1GB, HBM2 up to 8-hi 4GB).  I have little doubt this was a last-second addition/decision as they noticed people's concerns with 4GB per gpu (especially in crossfire) for such an expensive investment.  This can be noticed by the frantic 'dx12 can combine ram from multi gpus into a single pool' coming across the AMD PR bow.

AMD really seems in a tough place with that.  4GB is likely (optimally) not enough for the 390x, especially with multi-gpu in the current landscape, but 8GB is likely a little too much (and expensive) for a single card (and I bet 390 non-x will be perfectly fine with 4GB aimed at 1440p)...it's the reason a 6GB similar-performance design from nvidia makes sense....that's just about the peak performance we can realistically expect from a single gpu on 28nm.

One more time with gusto:  28nm will get us ~3/4 of the way to 4k/8GB making sense on the whole.  14nm will pick up the slack..the rest is just gravy (in performance or power savings).

While I want 4k playability as much as anyone in demanding titles (I'm thinking a dual config on 14nm is in my future, depending on how single cards + dx12 handle the situation), I can't help but wonder if the cards built for 1440p60+ will be the big winners this go-round, as the value gap is so large.  That is to say, 390 (non-x, 4GB), perhaps a cheaper gtx 980, and/or a similarly-priced salvage GM200.


----------



## TheoneandonlyMrK (Apr 23, 2015)

Interesting that they are breaking news on this while the 390X isn't out yet; they must be sure of its performance, imho.

I'd take the implied constraint on HBM memory at face value. I mean, was it possible for them to make enough? Not in one plant. That stuff's gonna be a hot potato for a few years yet, and pricing will confirm this.


----------



## alwayssts (Apr 23, 2015)

theoneandonlymrk said:


> Interesting that they are breaking news on this while the 390X isn't out yet; they must be sure of its performance, imho.
> 
> I'd take the implied constraint on HBM memory at face value. I mean, was it possible for them to make enough? Not in one plant. That stuff's gonna be a hot potato for a few years yet, and pricing will confirm this.




It's surely a weird situation with HBM1.  Hynix has exactly one customer, and that one customer from all accounts has had their product ready for some time but refuses to launch it on account of older products in the channel, as well as supposedly massively optimizing drivers before release.  With such a floating target, as well as uncertainty of sales (given the high price, unknown competitive landscape etc)...I couldn't really blame Hynix for keeping supply tight (if 1GB is indeed 'constrained' as well).


----------



## TheoneandonlyMrK (Apr 23, 2015)

Where are you getting your info that Hynix has one customer? That's just odd. I have no proof to the contrary, but no business I've ever heard of bets all its eggs on one basket.


----------



## alwayssts (Apr 23, 2015)

theoneandonlymrk said:


> Where are you getting your info that Hynix has one customer? That's just odd. I have no proof to the contrary, but no business I've ever heard of bets all its eggs on one basket.



Perhaps that is over-reaching in assumption... point taken, but it seems pretty obvious they are the first, and every other product coming later appears to use HBM2.  It's not unheard of (Samsung's GDDR4 says 'hi'), especially given the technology will evolve in a very obvious way (essentially moving from stacking currently common low-density ddr3 to more recent higher-density ddr4 as manufacturing of that style of memory proliferates).

AFAIK the main customers will be AMD (GPUs, APUs) and nVIDIA (at least GPUs).  We know nvidia isn't jumping on until HBM2 (Pascal), and it can be assumed by the approximate dates on roadmaps APUs will also use HBM2.  We know Arctic Islands will use HBM2.

There may be others, but afaict HBM1 is more or less a trial product... a risk version of the technology... developed not only by Hynix but also by AMD, for a very specific purpose: AMD needed bandwidth while keeping their die size and power consumption in check for a 28nm gpu product.  The realistic advantages over GDDR5, with a gpu on a smaller core process that can accommodate it (say 8ghz gddr5 on 14nm), aren't gigantic for HBM1, but it truly blooms with HBM2.  The fact is they needed that high level of efficient bandwidth now to be competitive given their core technology... hence it seems HBM1 is essentially stacking 2Gb DDR3, while the mass commercial product will be stacking more-relevant (and by then cheaper) 4-8Gb DDR4.


----------



## TheoneandonlyMrK (Apr 23, 2015)

So, largely just your opinion and outlook on it then; fair enough.

I personally don't think that AMD and Hynix co-operating on this tech precludes its use in other markets for Hynix, with imaging sensors, FPGAs, and some less well-known networking and instrumentation chips being candidates for its use (while not hindering AMD's use of it).

Is Nvidia going to use this on Pascal? Or could that be some other variant, like Micron/Intel's? The point is, with 3D/HBM/3DS we're going to be seeing the same high-bandwidth memory standards (JEDEC) used in various different propositions over the next few years, so I don't think any co-op tie-ins are going to last that long, if they happen at all, and exclusivity won't last but a year at best.


----------



## hojnikb (Apr 23, 2015)

dorsetknob said:


> Re-read the OP, or:
> 
> 
> 
> ...


That's beside the point.

The article got it wrong, as there is no such thing as 14nm flash memory.


----------



## dorsetknob (Apr 23, 2015)

hojnikb said:


> The article got it wrong, as there is no such thing as 14nm flash memory.




Really? Then this is a daydream:
https://www.google.co.uk/search?q=1...rce=univ&ei=Yk05Vev6DtDUaveTgYgC&ved=0CEsQsAQ


----------



## alwayssts (Apr 23, 2015)

theoneandonlymrk said:


> So, largely just your opinion and outlook on it then; fair enough.
> 
> I personally don't think that AMD and Hynix co-operating on this tech precludes its use in other markets for Hynix, with imaging sensors, FPGAs, and some less well-known networking and instrumentation chips being candidates for its use (while not hindering AMD's use of it).
> 
> Is Nvidia going to use this on Pascal? Or could that be some other variant, like Micron/Intel's? The point is, with 3D/HBM/3DS we're going to be seeing the same high-bandwidth memory standards (JEDEC) used in various different propositions over the next few years, so I don't think any co-op tie-ins are going to last that long, if they happen at all, and exclusivity won't last but a year at best.




You're right, and it's certainly possible.  That said, other companies seem set in their ways of wider buses, proprietary cache (as you mentioned), and/or denser or cheaper alternatives to HBM1.  HBM2 certainly could, and likely will, be widely adopted.

It's my understanding nvidia will use HBM2 in Pascal.  Their latest roadmap essentially gave their plan away: the biggest chip will use 12GB of RAM at 768GB/s, iirc.  That means 3x 4GB HBM2 stacks.

I think an interesting way for nvidia to prove a point about HBM1 is to simply do the following:


A GM204 shrunk to ~1/2 its size on 14/16nm (so essentially 200-some mm²), with 4/8GB of 8ghz GDDR5, running at something like 1850/8000

vs

FijiXT

Hypothetically...who wins?
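
The "12GB/768GB/s means three HBM2 stacks" inference above is quick arithmetic to check; the per-stack figures here are the commonly quoted HBM2 numbers (4GB capacity, 1024-bit interface at 2 Gbps per pin), assumed for the sketch:

```python
# Assumed per-stack HBM2 figures (commonly quoted, not confirmed specs)
STACK_CAPACITY_GB = 4
STACK_BANDWIDTH_GBS = 1024 * 2.0 / 8   # 256.0 GB/s per stack

stacks = 3
capacity = stacks * STACK_CAPACITY_GB      # 12 GB, the rumored Pascal figure
bandwidth = stacks * STACK_BANDWIDTH_GBS   # 768.0 GB/s, matching the roadmap
print(capacity, bandwidth)
```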


----------



## Casecutter (Apr 23, 2015)

alwayssts said:


> I get the impression it is the 2x1GB stacks that are constrained; everything points to that imho.
> 
> First, and for a long time, we heard 'Fiji' was only going to be 4GB (4x1GB).  Then we heard murmurs AMD was internally battling with offering an 8GB design, even though it might hold up production and raise the price over $700.  Then we got that slide deck that included what appeared to be info fresh off the line about making 2x1GB stacks (likely meaning the bandwidth of a single 1GB stack with two connected stacks, or 2x chips in a stack)...something that nobody really saw coming (HBM1 was going to be 4-hi 1GB, HBM2 up to 8-hi 4GB).  I have little doubt this was a last-second addition/decision as they noticed people's concerns with 4GB per gpu (especially in crossfire) for such an expensive investment.  This can be noticed by the frantic 'dx12 can combine ram from multi gpus into a single pool' coming across the AMD PR bow.
> 
> ...


 
_Always_ good info, and I honed in on you saying, _“just about the peak performance we can realistically expect from a single gpu on 28nm”._

As to the issue of 4GB not being enough, or needing 8GB... isn't it more that the amount of memory is almost meaningless if you don't have the processing power to support it?  I thought I read 8GB of HBM will offer up to 1 TB/s of bandwidth, so wouldn't it be a waste for AMD to add extra memory if GPU designs on 28nm physically prevent a die size that could exploit all that?  Would Fiji, with 4096 SPs, not lack the oomph and watch 50% of such 1 TB/s of bandwidth go unused?

You made a good point when saying, _"This can be noticed by the frantic 'dx12 can combine ram from multi gpus into a single pool' coming across the AMD PR bow."_  But isn't that a good thing, as a single 390X is not going to offer excellent 4K, but a Crossfire pair would, with all 8GB (2x 4GB) acting as one?  Also, is any of the color compression (memory) of Tonga able to be factored into what Fiji might exploit? I mean, Tonga was made for Apple's 5K Retina display; could that provide an advantage for 4K panels?


----------



## TheoneandonlyMrK (Apr 23, 2015)

Be nice to find out, eh? I'd obviously vote AMD there, jk.

I haven't a clue. The maths is easy, but imho it's too hypothetical, too clean, too easy, and chips don't bin that way. Not many nodes have panned out exactly how they were scripted to, and it's that which makes this cat-and-mouse chip game so worthy of debate.


----------



## HumanSmoke (Apr 23, 2015)

alwayssts said:


> It's surely a weird situation with HBM1.  Hynix has exactly one customer, and that one customer from all accounts has had their product ready for some time but refuses to launch it on account of older products in the channel, as well as supposedly massively optimizing drivers before release.  With such a floating target, as well as uncertainty of sales (given the high price, unknown competitive landscape etc)...I couldn't really blame Hynix for keeping supply tight (if 1GB is indeed 'constrained' as well).


HBM, by the accounts I've seen, is not being aggressively ramped, probably due to manufacturing costs and defect rates needing to be passed on to the product's end price. Manufacturing a GPU+HBM on an interposer has its own yield/manufacturing-defect and tolerance issues (complexity and overall size, which could well top out at larger than 800mm²). Xilinx has been shipping 2.5D for a couple of years or more, and has just started production of 3D FPGAs. Neither small, nor cheap, nor easy to manufacture, as this article on the original Virtex-7 concludes. On the plus side, the price for these 3D chips drops rapidly once the yield/manufacturing issues are under control (the $8K price is roughly half what it was a year ago).


----------



## arbiter (Apr 23, 2015)

alwayssts said:


> It's surely a weird situation with HBM1.  Hynix has exactly one customer, and that one customer from all accounts has had their product ready for some time but refuses to launch it on account of older products in the channel, as well as supposedly massively optimizing drivers before release.  With such a floating target, as well as uncertainty of sales (given the high price, unknown competitive landscape etc)...I couldn't really blame Hynix for keeping supply tight (if 1GB is indeed 'constrained' as well).


I doubt that's the case, that they refuse to launch because of other products in the channel. AMD isn't in a position to delay the launch of a new product given their spot $-wise. It's more likely they've got issues they're working out on the product.

Part of the reason I believe AMD is competitive with Nvidia is the higher memory bandwidth that keeps their GPUs there. AMD likely fears the day Nvidia switches to HBM.


----------



## HumanSmoke (Apr 23, 2015)

arbiter said:


> I doubt that's the case, that they refuse to launch because of other products in the channel.


Most definitely not. HBM is a catalogue product for any vendor to integrate, but I suspect that, like all 2.5D/3D stacked ICs, the manufacturing cost needs to be justified by the end product's return on investment.
AMD fulfil the launch-customer requirement, but I suspect that many other vendors are waiting to see how 2.5D pricing aligns with product maturity, and how 3D pricing/licencing and standards shake out. AFAIA, the HBM spec, while ratified by JEDEC, is still part of an ongoing (and not currently resolved) test/validation and spec-finalization process, such as the IEEE P1838 spec that (I assume) will provide a common test/validation platform for 3D heterogeneous die stacking across HBM, HMC, Wide I/O 2, etc.


arbiter said:


> AMD isn't in a position to delay the launch of a new product given their spot $-wise. It's more likely they've got issues they're working out on the product.


Would seem logical. AMD's R&D is probably stretched pretty thin considering the number of projects they have on their books. I'm also guessing that a huge GPU (by AMD's standards) incorporating a new memory technology, one that needs a much more sophisticated assembly process than just slapping BGA chips onto a PCB, presents its own problems.


----------



## alwayssts (Apr 23, 2015)

theoneandonlymrk said:


> Be nice to find out, eh? I'd obviously vote AMD there, jk.
> 
> I haven't a clue. The maths is easy, but imho it's too hypothetical, too clean, too easy, and chips don't bin that way. Not many nodes have panned out exactly how they were scripted to, and it's that which makes this cat-and-mouse chip game so worthy of debate.



It certainly is in jest...but here's my theoretical:

980 needs 7ghz/256-bit at 1620mhz (yes, it's that over-specced).  At 8ghz it could support up to 1850mhz.  Samsung's tech *should* give around a ~30% performance boost (my brain seems to think it'll be 29.7%).  Currently, Maxwell clocks around 22.5% better than other 28nm designs...which run at around, or slightly less than, 1v = 1ghz.  Extra bandwidth gives the 980 roughly a 4% performance boost going on a typical clock of 1291.5mhz (according to wizard's review), if you wish to do the scaling that way.  Since I matched them, we don't need that.


1850*(2048sp+512sfu)/4096 = 1156.25: Fiji at matched bw/clock....

...but Fiji has 33% more bandwidth than it needs (or should get ~5.33% performance from the extra bw), so...

1050*1.05333 = 1106mhz 'real' performance

If you want to get SUPER technical, Fiji's voltage could be 1.14v, matching the lowest voltage of HBM (which operates at 1.14-1.26v), and it should overclock some.   Theoretically that gm214 would need to be around 1.164v (1850/1.297/1.225 = 1.164)....which just so happens to be the accepted best voltage/power-consumption scaling point on 28nm.  Weird, that.

You could go even further, assuming Fiji could take up to 1.26v, as could the HBM....and that HBM is going to be at least proportional to 1600mhz ddr3 at 1.35v...squaring those averages all away (and assuming Fiji scales like most chips; not Hawaii), you could end up with something like a 1240mhz/1493mhz Fiji comparing to a ~2100/9074 (yes, that could actually happen) gm214.  It wouldn't be much different from how the 770 was set up and clocked, proportionally (similar to gk104 at around 1300mhz; a small design at high voltage, if not a pipeline adjusted to do so at a lower voltage).  Given that nvidia clearly took their pipeline/clockspeed cues from ARM designs (which are 2ghz+ on 14nm), and their current memory controllers are over-volted (1.6 vs 1.5v spec)....it's possible (if totally unlikely)!


*TLDR:*  Depending on how you look at it, they would be really, really damn close...and it would be interesting to see just for kicks.  That's not to say they won't just go straight to Pascal...which I have to assume will be something like 32/64/96-rop designs scaled to 1/2/3 HBM stacks, similar to the setup of maxwell (.5/1/1.5).

Yeah yeah...it's all just speculation...but I find the similarities in the possibilities of design scaling (versus the previous gen) quite uncanny.  There are really only so many ways to correlatively skin a cat (between units, clockspeeds, and bw), and these companies plan their way forward years ahead of time (hoping nodes will somewhat fit what they designed)...and one such as that makes a lot of sense.

I'm getting into the crazy talk and writing a sentence every ten minutes between doing other stuff....must be time to sleep.
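
For what it's worth, the two headline numbers in the post above come from simple ratios; here is that back-of-envelope arithmetic spelled out (every input is the post's own speculative figure, not measured data):

```python
# All inputs are the post's speculative figures.
gm214_clock = 1850            # hypothetical 14nm GM204-shrink clock (MHz)
gm214_units = 2048 + 512      # shaders + SFUs, counted as the post does
fiji_units = 4096             # Fiji stream processors

# Fiji clock that matches the shrink at equal per-unit throughput
matched = gm214_clock * gm214_units / fiji_units    # 1156.25 MHz

# Or: Fiji at 1050 MHz, credited ~5.333% for its surplus bandwidth
effective = 1050 * (1 + 0.16 / 3)                   # ~1106 MHz

print(matched, effective)
```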


----------



## alwayssts (Apr 23, 2015)

arbiter said:


> I doubt that's the case, that they refuse to launch because of other products in the channel. AMD isn't in a position to delay the launch of a new product given their spot $-wise. It's more likely they've got issues they're working out on the product.




Except... didn't they say exactly that in their last earnings call?

I am in no disagreement that putting 4/8 (16? How does 2x1GB work?) stacked RAM dies + 1/2 GPUs on a (probably) 832 or 1214mm² interposer is likely a huge pain in the ass... it just seemed that was at least *part* of the issue.



Casecutter said:


> _Always_ good info, and I honed in on you saying, _“just about the peak performance we can realistically expect from a single gpu on 28nm”._
> 
> As to the issue of 4GB not being enough, or needing 8GB... isn't it more that the amount of memory is almost meaningless if you don't have the processing power to support it?  I thought I read 8GB of HBM will offer up to 1 TB/s of bandwidth, so wouldn't it be a waste for AMD to add extra memory if GPU designs on 28nm physically prevent a die size that could exploit all that?  Would Fiji, with 4096 SPs, not lack the oomph and watch 50% of such 1 TB/s of bandwidth go unused?
> 
> You made a good point when saying, _"This can be noticed by the frantic 'dx12 can combine ram from multi gpus into a single pool' coming across the AMD PR bow."_  But isn't that a good thing, as a single 390X is not going to offer excellent 4K, but a Crossfire pair would, with all 8GB (2x 4GB) acting as one?  Also, is any of the color compression (memory) of Tonga able to be factored into what Fiji might exploit? I mean, Tonga was made for Apple's 5K Retina display; could that provide an advantage for 4K panels?



Lot of Q's there.

Buffer size and bandwidth are two different things.  Sure, they could swap things out of the buffer with faster bandwidth, but that's generally impractical (and why extra bandwidth doesn't give much more performance).  A larger tangible buffer for higher-res textures is absolutely necessary if you have the processing power to support it, which I think Fiji does (greater than 60fps at 1440p, requiring ~4GB).

I do not believe AMD's (single-card) 8GB setup will be 1280GB/s; I think that is the distinction made by '2x1GB'.  I believe it will be 640, just like the 4GB model.  I would love to be wrong, as that would provide a fairly healthy boost to performance just based on the scale.
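
A sketch of the stack-counting arithmetic behind that guess, using the post's own 640GB/s-for-four-stacks figure (160GB/s per stack is the post's assumption, not a confirmed spec):

```python
PER_STACK_GBS = 160.0   # the post's assumed figure: 4 stacks -> 640 GB/s

def total_bandwidth(stacks: int) -> float:
    """Total bandwidth scales with the number of stack interfaces, not capacity."""
    return stacks * PER_STACK_GBS

# 8 GB via denser '2x1GB' stacks keeps four interfaces, so bandwidth is unchanged:
same = total_bandwidth(4)      # 640.0 GB/s
# 8 GB via eight separate 1 GB stacks would double it instead:
doubled = total_bandwidth(8)   # 1280.0 GB/s
print(same, doubled)
```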

I personally believe (scaling between 1440p->2160p) is where the processing power of 390x will lie.  Surely some games will run great at 30->60fps at 4k, but on the whole I think we're just starting to nudge over 30 at 4k....it's generally a correlation to the consoles (720p xbox, 900p ps4).  I personally don't think 4k60 will be a consistent ultra-setting reality until 14nm and dual gpus....hopefully totalling 16GB in dx12.  Buffer requirement could even go higher, and if there's room, PC versions can always use more effects to offset whatever scaling differences.

I'm not at all saying the improvements in dx12 don't matter, they absolutely do, only that for the lifespan of this card they cannot be depended upon (yet)...and in the future, worst-case, they may still not be.  How many dx9 (ports) titles do we still see?

When you smush everything together into a box, I personally believe these cards avg out making sense around ~3200x1800 and 6GB.  Obviously ram amount will play a larger factor later on, as it becomes feasible to scale textures from consoles making the most of their capabilities.  That means more xbox games will be 720p, more ps4 games slightly higher, rather than the current 1080p.  Currently the most important scaling factor is raw performance (from those inflated resolutions on the consoles).

There are certainly a lot of factors to consider, and obviously even more unknowns.  I can only go on the patterns we've seen.

For instance, I use a 60fps metric.  Just like performance dx12 may bring, perhaps we will all quickly adopt some form of adaptive sync making that moot.  As it currently sits though, I personally can only draw from the worst-case/lowest common denominator, as nothing else is currently widely applicable.


----------



## Wshlist (Apr 24, 2015)

matar said:


> Not an AMD Fan but I have to say that's a smart move AMD



When they first moved to 28 nm, it turned out the fabs had huge issues with it; as I recall, many chips on each wafer failed, leading to high prices.
So it's a risky move: if 14nm production doesn't go well, you are in deep shit.


----------



## net2007 (Apr 24, 2015)

alwayssts said:


> I get the impression it is the 2x1GB stacks that are constrained; everything points to that imho.
> 
> First, and for a long time, we heard 'Fiji' was only going to be 4GB (4x1GB).  Then we heard murmurs AMD was internally battling with offering an 8GB design, even though it may hold up production and raise the price over $700.  Then, we got that slide deck that included what appeared to be info fresh off the line about making 2x1GB stacks (likely meaning the bandwidth of a single 1GB stack with two connected stacks or 2x chips in a stack)...something that nobody really saw coming (HBM1 was going to be 4hi 1GB, HBM2 up to 8hi 4GB).  I have little doubt this was a last-second addition/decision as they noticed peoples' concerns with 4GB per gpu (especially in crossfire) for such an expensive investment.  This can be noticed by the frantic 'dx12 can combine ram from multi gpus into a single pool' coming across the AMD PR bow.
> 
> ...





If it's true about the dx12 stacking... man.. 970's ftw. 290x ftw.


----------



## arbiter (Apr 24, 2015)

net2007 said:


> If it's true about the dx12 stacking... man.. 970's ftw. 290x ftw.



Don't know how that stacking will really work, if it does. It might not work as well as people expect if one card has to go through the PCI-e bus to talk to the other card's memory, or whatever it ends up doing.


----------



## lilhasselhoffer (Apr 24, 2015)

I'm seeing plenty of people talking about DX12, and I don't get it.  There is no plan out there which states DX12 will only appear on these new cards, and in fact Nvidia has stated that their current line-up is DX12 capable (though what this means in real terms is anyone's guess).  Basing wild assumptions off of incomplete and inconsistent data is foolish in the extreme.

"Arctic Islands" is a fun name, but why exactly does everyone think the cards will be so much cooler?  Heat transfer from a surface is a function of the area, when looking at a simplistic model of a chip.  When you decrease the manufacturing size by half, you lose 75% of the surface area.  Yes, you'll also have to decrease voltage inside the chip, but if you look at a transistor as a very poor resistor you'll see that power = amperage * voltage = amperage^2 * resistance.  To decrease the power flowing through the transistor, just to match the same thermal limits of the old design, you need to either half the amperage or quarter the resistance.  While this is possible, AMD has had the tendency to not do this.


HBM is interesting as a concept, but we're still more than 8 months from seeing anything using it.  Whether AMD or Nvidia will use the technology better, I cannot say.  I'm willing to simply remain silent until actual numbers come out.  Any speculation about a completely unproven technology is just foolish.



TL;DR:
All of this discussion is random speculation.  People are arguing about things they've got no business arguing about.  Perhaps, just once, we can wait and see the actual performance, rather than being disappointed when our wild speculations don't match what we actually get.  I'm looking forward to whatever AMD offers, because it generally competes with Nvidia on some level, and keeps GPU prices from getting ridiculous.


----------



## GhostRyder (Apr 24, 2015)

Well, I think it became pretty clear at some point that the delay in this top card was not so much because of 20 nm, as rumored, but because they were waiting on HBM to be available in higher quantities.  I mean, they realized 4 GB will only satisfy people's hunger for a short time, especially with the amount of leaked/rumored/hinted-at performance from these GPUs.  One thing AMD has had going for it for a long while has been memory size, which has always helped it at higher resolutions, and they need to keep at least that to be competitive in the high-end market, where most people come with high-end needs (higher refresh rates, resolutions, etc.).

At this point, them skipping it was inevitable, as it was not good for high-end performance.  Let's just hope 14 nm is a great success when it arrives.


----------



## 64K (Apr 24, 2015)

lilhasselhoffer said:


> "Arctic Islands" is a fun name, but why exactly does everyone think the cards will be so much cooler?



I'm guessing AMD chose that code name because they have found a way to take advantage not only of the improved efficiency of the 14nm process but also of a more efficient architecture on top of that, like Nvidia did with Maxwell: same 28nm process as Kepler, but more efficient, so it used fewer watts.

AMD knows that they currently have a reputation for designing GPUs that run too hot and use too many watts for the same performance as an Nvidia GPU. I'm not saying they deserve that reputation, but it does exist; over and over I see people citing those two reasons as why they won't buy an AMD card. As far as the extra watts go, it doesn't amount to much on an electricity bill for an average gamer playing 15-20 hours a week, unless you live in an area where electricity is ridiculously expensive or you're running your card at max 24/7 for Folding or mining. For me the difference would be about 8 cents a month on my power bill between a reference GTX 780 Ti (peak 269 watts) and a reference R9 290X (peak 282 watts), from W1zzard's reviews of the last generation's flagship cards. Even if AMD used 100 watts more than Nvidia it still wouldn't amount to much: 65 cents a month difference at 10 cents per kWh.

AMD is already the brunt of many jokes about heat/power issues. I don't think they would add fuel to the fire by releasing a hot inefficient GPU and calling it Arctic Islands.
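The arithmetic above checks out; a minimal sketch, using the post's own figures (a 13 W delta, the low end of 15-20 gaming hours a week, $0.10/kWh):

```python
# Sanity check of the power-bill arithmetic in the post above.
# Figures come from the post itself: 282 W - 269 W = 13 W delta,
# 15 gaming hours per week, and $0.10 per kWh.

def extra_monthly_cost(delta_watts, hours_per_week, usd_per_kwh=0.10):
    """Added cost per month for drawing delta_watts extra while gaming."""
    kwh_per_month = delta_watts * hours_per_week * (52 / 12) / 1000.0
    return kwh_per_month * usd_per_kwh

print(round(extra_monthly_cost(282 - 269, 15), 2))  # 0.08 -> about 8 cents/month
print(round(extra_monthly_cost(100, 15), 2))        # 0.65 -> 65 cents/month
```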


----------



## Casecutter (Apr 24, 2015)

64K said:


> AMD knows that they currently have a reputation for designing GPUs that run too hot...  I don't think they would add fuel to the fire by releasing a hot inefficient GPU and calling it Arctic Islands.


 
I'd just remind those folks that it wasn't until AMD did GCN and/or 28 nm that being poor on power/heat became the narrative, and even then they weren't far out of line with Kepler.

Maxwell is good, and saving power while gaming is commendable, but the "vampire" load during sleep compared to AMD ZeroCore is noteworthy over a month's time.

I ask: why did Apple go with AMD's Tonga for their iMac 5K Retina display?  Sure, it could be that Apple and Nvidia just didn't care to, or need to, "partner up".  It might have been a timing thing, or the specs for GM206 didn't provide the oomph, while a GTX 970M (GM204) wasn't the right fit in specs/price for Apple.

Still, business is business, and keeping the competition from any win enhances one's "cred".  Interestingly, we don't see an MXM version of the GM206 from Nvidia, do we?


----------



## GhostRyder (Apr 24, 2015)

64K said:


> AMD knows that they currently have a reputation for designing GPUs that run too hot and use too many watts for the same performance as an Nvidia GPU. I'm not saying they deserve that reputation but it does exist.


The irony still baffles me: when Nvidia does it, it's OK, but if AMD does it, it's the greatest sin in all of the computing world.



Casecutter said:


> I ask why Apple went with the AMD Tonga for their iMac 5K Retina display?  Sure it could’ve been either Apple/Nvidia just didn’t care to or need to "partner up".  It might have been a timing thing, or more the spec's for GM206 didn’t provide the oomph, while a GTX 970M (GM204) wasn't the right fit spec's/price for Apple.
> 
> Still business is business and keeping the competition from any win enhances one's "cred".  Interestingly, we don’t see that Nvidia has MXM version of the GM206?


AMD was chosen by Apple most times because they are more flexible than Nvidia is.  They allow Apple to make modifications as necessary to their designs to fit within their requirements.  Not to mention, I am sure there are areas where AMD cuts them some slack, especially in pricing, to make it more appealing.


----------



## Casecutter (Apr 24, 2015)

GhostRyder said:


> AMD was chosen by Apple most times because they are more flexible than NVidia is.  They allow Apple to make more modifications as necessary to their designs to fit within their spectrum.  Not to mention I am sure there are areas AMD cuts them some slack especially in pricing to make it more appealing.


Sure, it probably was AMD's "willingness/flexibility" to design and tape out a custom part like Tonga to hit Apple's requirements and suitably drive their 5K Retina display.  Providing appropriate graphics while upholding a power envelope for such an AIO construction was paramount, and the energy saving during sleep was a feather in the cap for both total efficiency and thermal management when idle.

For the R9 285, a "gelding" from such a design-constrained process, it came away fairly respectable.


----------



## arbiter (Apr 24, 2015)

GhostRyder said:


> AMD was chosen by Apple most times because they are more flexible than NVidia is. They allow Apple to make more modifications as necessary to their designs to fit within their spectrum. Not to mention I am sure there are areas AMD cuts them some slack especially in pricing to make it more appealing.



I am sure a lot also had to do with Apple being able to get the chip super cheap, to keep their insanely high margins on all the products they slap their logo on. In some of the non-butt-kissing reviews of that 5K iMac, that GPU has a hard time pushing that resolution even in normal desktop work; you can see stuttering when desktop animations are running. Even a 290X/980 would be hard-pressed to push that many pixels.


----------



## lilhasselhoffer (Apr 25, 2015)

64K said:


> I'm guessing AMD chose that code name because they have found a way to not only take advantage of the improved efficiency of the 14nm process but also a more efficient architecture on top of that. Like Nvidia did with Maxwell. Same 28nm process as Kepler but more efficient so it used less watts.
> 
> AMD knows that they currently have a reputation for designing GPUs that run too hot and use too many watts for the same performance as an Nvidia GPU. I'm not saying they deserve that reputation but it does exist. Over and over I see people citing those two reasons as why they won't buy an AMD card. As far as the extra watts used it doesn't amount to anything much on an electricity bill for an average gamer playing 15-20 hours a week unless you live in an area where electricity is ridiculously expensive or you're running your card at max 24/7 for Folding or Mining. For me the difference would be about 8 cents a month on my power bill between a reference  GTX 780 Ti (peak 269 watts) and a reference  R9 290X (peak 282 watts) from W1zzard's reviews based on the last generations flagship cards. Even if AMD used 100 watts more than Nvidia it still wouldn't amount to much. 65 cents a month difference at 10 cents per kWh.
> 
> AMD is already the brunt of many jokes about heat/power issues. I don't think they would add fuel to the fire by releasing a hot inefficient GPU and calling it Arctic Islands.



You really haven't answered the question posed here.

Yes, GCN has had a bit of a negative image due to heat production.  Do you also propose that the reason they called the last generation "Volcanic Islands" was because they generated heat?  If I were in marketing, and had the choice to name the project after a feature of the hardware, naming it after excess heat production would demonstrate substantial stupidity.

We can conjecture that they'll be cooler, or we could make them cooler with a mild underclock.  We could also design a stock cooler that wasn't absolute crap (read: so many of the 2xx series coolers were custom because the stock cooler from AMD was terribad).  AMD chose to push performance numbers by hitting the edges of their thermal envelope, and to save money by designing a cooler that met those base requirements.  This isn't a design driven off of a name for the project.  If it was, the next CPU core would be called "Intel killer."  All of this funnels back into my statement that any conclusions drawn now are useless.  No facts and no knowledge mean any conclusion can be as easily dismissed as stated.


----------



## HumanSmoke (Apr 25, 2015)

Casecutter said:


> I'd just remind those, it wasn't until AMD did GCN and/or 28 nm, that being poor on power/heat became the narrative


You were in a coma during the whole Fermi/Thermi frenzy? 34 pages of it on the GTX 480 review alone. Even AMD were falling over themselves pointing out that heat+noise = bad....although I guess AMD now have second thoughts on publicizing that sort of thing


Casecutter said:


> I ask why Apple went with the AMD Tonga for their iMac 5K Retina display?  Sure it could’ve been either Apple/Nvidia just didn’t care to or need to "partner up".  It might have been a timing thing, or more the spec's for GM206 didn’t provide the oomph, while a GTX 970M (GM204) wasn't the right fit spec's/price for Apple.


Probably application, timing, and pricing. Nvidia provides the only discrete graphics for Apple's MBP, which is a power/heat-sensitive application. GM206 probably wasn't a good fit for Apple's timeline, and Nvidia probably weren't prepared to price the parts at break-even margins. As I have noted before, AMD supplies FirePros to Apple. The D500 (a W7000/W8000 hybrid) to D700 (W9000) upgrade for the Mac Pro costs $300 per card. The difference between those retail FirePro SKUs is ~$1800 per card, a $1500 gap. If Apple can afford to offer a rebranded W9000 for $300 over the cost of a cut-down W8000, and still apply their margins for profit and amortized warranty, how favourable is the contract pricing for Apple?


Casecutter said:


> Maxwell is good, and saving while gaming is commendable, but the "vampire" load during sleep compared to AMD ZeroCore is noteworthy over a months' time.


8-10W is noteworthy??? What does that make 3D, GPGPU, and HTPC video usage scenarios then?


Casecutter said:


> Still business is business and keeping the competition from any win enhances one's "cred".


A win + a decent contract price might matter more in a business environment. People have a habit of seeing through purchased "design wins". Intel and Nvidia's SoC programs don't look that great when the financials are taken into account - you don't see many people lauding the hardware precisely because many of the wins are bought and paid for.


Casecutter said:


> Interestingly, we don’t see that Nvidia has MXM version of the GM206?


It wouldn't make any kind of sense to use GM 206 for mobile unless the company plan on moving GM 107 down one tier in the hierarchy- and given the number of "design wins" that the 850M/860M/950M/960M is racking up, that doesn't look likely.
From an engineering/ROI viewpoint what makes sense? Using a full die GM 206 for mobile parts, or using a 50% salvage GM 204 ( the GM 204 GTX 965M SKU has the same logic enabled as the GM 206) that has the same (or a little better) performance-per-watt and a larger heat dissipation heatsink?


----------



## WhoDecidedThat (Apr 25, 2015)

lilhasselhoffer said:


> I'm seeing plenty of people talking about DX12, and I don't get it.  There is no plan out there which states DX12 will only appear on these new cards, and in fact Nvidia has stated that their current line-up is DX12 capable (though what this means in real terms is anyone's guess).


I think they are talking about what DX12 _software_ i.e. games will bring to the table. It is just as exciting a prospect as a new GPU coming in.


----------



## GhostRyder (Apr 26, 2015)

arbiter said:


> I am sure a lot also had to do with Apple being able to get the chip super cheap, to keep their insanely high margins on all the products they slap their logo on. In some of the non-butt-kissing reviews of that 5K iMac, that GPU has a hard time pushing that resolution even in normal desktop work; you can see stuttering when desktop animations are running. Even a 290X/980 would be hard-pressed to push that many pixels.


Well, that is what I said: it comes down to money and the OEM being flexible, which is why Apple chooses them.  As far as pushing 5K, it can handle the basics, but it was never meant to be the ultimate performer, as nothing we have could offer decent performance at 5K without using multiple GPUs.



lilhasselhoffer said:


> You really haven't answered the question posed here.
> 
> Yes, GCN has has a bit of a negative image due to heat production.  Do you also propose that the reason they called the last generation fire islands was because they generated heat?  If I was in marketing, and had the choice to name the project after a feature of the hardware, naming it after excess heat production would demonstrate substantial stupidity.
> 
> We can conjecture that they'll be cooler, or we could maker them cooler with a mild underclock.  We could also design a stock cooler that wasn't absolute crap (read: so many of the 2xx series coolers were custom because the stock cooler from AMD was terribad).  AMD chose to push performance numbers by hitting the edges of their thermal envelop, and save money by designing a cooler that met these base requirements.  This isn't a design driven off of a name for the project.  If it was, the next CPU core would be called "Intel killer."  All of this funnels back into my statement that any conclusions drawn now are useless.  No facts and no knowledge mean any conclusion can be as easily dismissed as stated.


AMD got more flak for this than Nvidia did for the same thing... The problem also was not that they designed a bad heatsink so much as that they did not make a better one; it was really just meant to be inexpensive, on the theory that most people on the high end want something more anyway and will probably handle cooling it themselves.  Obviously this was a mistake, which they realized, hence why we are getting something different this time.

I think as far as DX12 is concerned, all we hear at this point is conjecture, filled with a lot of what-ifs and I-thinks instead of fact.  Until we see it in the open, we will not know what being DX12-ready actually means.


----------



## the54thvoid (Apr 26, 2015)

GhostRyder said:


> Well that is what I said, it comes down to money and the OEM being flexible which is why Apple chooses them.  But as far as pushing 5k, well it can handle the basics but was never meant to be the ultimate performance as nothing we have could offer decent performance at 5k without using multiple GPU's.
> 
> 
> AMD got more flak on this than NVidia did for the same thing...



No. Fermi was a heat fiasco for Nvidia and they were mocked relentlessly for it. ATI used it (as did their brand owners - see what term I didn't use there!) to their advantage.
The problem is, when you mock someone's failing and then do it yourself, it's a marketing and PR disaster.  The GTX 480 was righted by the surprise release of a hitherto "can't be done" GTX 580 that managed to include the previously fused-off cores.
Hopefully (if the naming conjecture is true) next year's card will be cool, but the flip side of pumping up Arctic Islands is the implication that the 390X will be a furnace.

I bloody hope it isn't.


----------



## HumanSmoke (Apr 26, 2015)

the54thvoid said:


> No. Fermi was a heat fiasco for Nvidia and they were mocked relentlessly for it.


As they were with the FX 5800U...


the54thvoid said:


> ATI used it (as did their brand owners -see what term I didn't use there!) to their advantage.
> Problem is when you mock someone's failing and then do it yourself, its a marketing and PR disaster.


At least with the FX 5800U, Nvidia actually had the balls and sense of humour to laugh at their own failings. No amount of marketing could save NV30 from the obvious negative traits, so the company had fun with it.









Not something many companies would actually put together to announce their _mea culpa_. They may have done something similar with Fermi had AMD, their loyal followers, and shills not begun getting creative first.

Two things stand out. Nvidia's videos mocking themselves are much funnier and original than AMD's efforts, and the NV30 became a byword for hot'n'loud because of its staggeringly high 74 watt (full load) power consumption. What a difference a dozen years makes in GPU design.


----------



## rruff (Apr 26, 2015)

64K said:


> As far as the extra watts used it doesn't amount to anything much on an electricity bill for an average gamer playing 15-20 hours a week unless you live in an area where electricity is ridiculously expensive or you're running your card at max 24/7 for Folding or Mining. For me the difference would be about 8 cents a month on my power bill between a reference  GTX 780 Ti (peak 269 watts) and a reference  R9 290X (peak 282 watts) from W1zzard's reviews based on the last generations flagship cards. Even if AMD used 100 watts more than Nvidia it still wouldn't amount to much. 65 cents a month difference at 10 cents per kWh..



Compare a reference GTX 970 to an R9 290 at idle (7W more), playing a video (60W), or gaming on average (76W). Any way you slice it, the FPS/$ advantage of the AMD card disappears pretty fast if you actually use it. If it's on all the time, and you spend 6 hrs per week watching video and 20 hrs a week gaming, you will spend ~$20/yr more on electricity in the US.

http://www.techpowerup.com/reviews/Colorful/iGame_GTX_970/25.html
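Those deltas can be rolled up into a rough yearly figure (the hours are the post's scenario; the $0.13/kWh rate is an assumed average US residential price):

```python
# Rough check of the ~$20/yr claim, using the deltas quoted above:
# GTX 970 vs R9 290 draws +7 W at idle, +60 W playing video, +76 W gaming.

HOURS_PER_WEEK = 24 * 7
video_h, gaming_h = 6, 20
idle_h = HOURS_PER_WEEK - video_h - gaming_h  # "on all the time" otherwise

weekly_wh = idle_h * 7 + video_h * 60 + gaming_h * 76
yearly_kwh = weekly_wh * 52 / 1000.0

USD_PER_KWH = 0.13  # assumed average US residential rate
print(round(yearly_kwh * USD_PER_KWH, 2))  # 19.43 -> roughly $20 a year
```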


----------



## arbiter (Apr 26, 2015)

HumanSmoke said:


> At least with the FX 5800U, Nvidia actually had the balls and sense of humour to laugh at their own failings. No amount of marketing could save NV30 from the obvious negative traits, so the company had fun with it.
> Not something many companies would actually put together to announce their _mea culpa_. They may have done something similar with Fermi had AMD, their loyal followers, and shills not begun getting creative first.
> Two things stand out. Nvidia's videos mocking themselves are much funnier and original than AMD's efforts, and the NV30 became a byword for hot'n'loud because of its staggeringly high 74 watt (full load) power consumption. What a difference a dozen years makes in GPU design.



Those AMD "fixer" videos are pretty sad. One the first ones they tried to compare what was at time a nvidia gtx650 (was easy to tell by the design of the ref cooler. Guy said it doesn't run his game well, then the fixer guy hands him a 7970 like they were compareing low-midrange card vs their top of line card at the time. It was pretty sad marking attempt by them. I know some people wouldn't looked in to what card they claimed wouldn't run his game well but when you looked it was pretty sad comparing. Would be like comparing a gtx980 to a r7 260x in performance now.


----------



## lilhasselhoffer (Apr 26, 2015)

GhostRyder said:


> ...
> AMD got more flak on this than NVidia did for the same thing...The problem also was not that they designed a bad heatsink, it was more they did not make a better one as it really just meant to be inexpensive as they push that more people on the high end do/want something more anyways and are probably going to handle cooling it themselves.  Obviously this was a mistake they realized hence why we are getting something different this time.
> 
> I think as far as DX12 is concerned, all we hear is conjecture at this point and filled with a lot of what if's/I think's instead of pure fact.  Until we see it in the open we will not know what being DX12 ready actually means.



Did you read my entire post?  Perhaps if you had, you wouldn't have restated what I said.

AMD designed the cheapest cooler that would meet the thermal limitations of their card.  This meant a lower priced final product, but the performance was "terribad."  You couldn't overclock, the cards put out a bunch of heat, and worst of all they were noisy.  AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers.  The custom coolers rolled out, and AMD based GPUs actually had a chance.  When a custom cooler can drop temperatures, decrease noise, and increase performance all at once you have to admit the initial cooler was a twice baked turd.      

Additionally, GPUs are sold with a cooler, and removing it voids any warranties related to that card.  Do you really want to argue that AMD assumed most people would void their warranties to bring their GPUs to noise/heat parity with the Nvidia offerings?  That's insane.



I'm not saying that Nvidia can do no wrong.  Fermi was crap; it existed because GPU computing was all the rage and Nvidia "needed" to compete with AMD's performance at the time.  I'm not saying there are any viable excuses, just that there is no proof that Arctic Islands means a cooler chip.  Arguing that the name, history, or anything else ensures that is foolish.  We won't have an answer until these GPUs start appearing, and discussion before that is speculation at best.  Arguing over wild speculation is pointless.


----------



## arbiter (Apr 26, 2015)

lilhasselhoffer said:


> AMD designed the cheapest cooler that would meet the thermal limitations of their card. This meant a lower priced final product, but the performance was "terribad." You couldn't overclock, the cards put out a bunch of heat, and worst of all they were noisy. AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers. The custom coolers rolled out, and AMD based GPUs actually had a chance. When a custom cooler can drop temperatures, decrease noise, and increase performance all at once you have to admit the initial cooler was a twice baked turd.



They didn't even design that cooler; they just tossed on the cooler from the last-gen cards and shipped it.

As the person you quoted touched on, DX12 allows more use of the hardware's full power. It makes me wonder, if AMD uses a cheap cooler again, how much the heat issue will be amplified by DX12 letting the GPU run closer to 100% than was possible before. The same could apply on the Nvidia side, but their reference cooler isn't half bad.



GhostRyder said:


> AMD got more flak on this than NVidia did for the same thing...The problem also was not that they designed a bad heatsink, it was more they did not make a better one as it really just meant to be inexpensive as they push that more people on the high end do/want something more anyways and are probably going to handle cooling it themselves.



Did that heat cripple performance by 20% on the Nvidia card, or did it still run pretty much as it was meant to?  Really, AMD took the most heat because they sold the cards as "up to ####MHz"; when a vendor uses that wording, it usually means you won't get that top end most of the time.


----------



## GhostRyder (Apr 27, 2015)

the54thvoid said:


> No. Fermi was a heat fiasco for Nvidia and they were mocked relentlessly for it. ATI used it (as did their brand owners -see what term I didn't use there!) to their advantage.
> Problem is when you mock someone's failing and then do it yourself, its a marketting and PR disaster.  The GTX 480 was righted by the surprise release of a hitherto "can't be done" GTX 580 that managed to include the previously fused off cores.
> Hopefully (if the naming conjecture is true) next years card will be cool but the flip side of pumping up Arctic Islands is that 390X will be a furnace.
> I bloody hope it isn't.


Mocked for it, maybe, but not nearly as badly as some people (including some on this forum) claim, at least from what I saw during those times on other sites.  I ran into more people who still said it was a great card and pointed out the many ways to alleviate the problem, same as with the R9 290/X.  The problem is, I have seen many of those same people then ridicule the same traits on the AMD side, claiming it should have been better...  Personally, at the end of the day it does not matter; the heat is easy to alleviate, and most of us could find a way around it on any of the coolers.  AMD mocking it back in those days was a little idiotic, but no matter what AMD says, they are always wrong in some people's eyes...


arbiter said:


> Those AMD "fixer" videos are pretty sad. One the first ones they tried to compare what was at time a nvidia gtx650 (was easy to tell by the design of the ref cooler. Guy said it doesn't run his game well, then the fixer guy hands him a 7970 like they were compareing low-midrange card vs their top of line card at the time. It was pretty sad marking attempt by them. I know some people wouldn't looked in to what card they claimed wouldn't run his game well but when you looked it was pretty sad comparing. Would be like comparing a gtx980 to a r7 260x in performance now.


Was it stupid? Yes, but it's just a mocking video with an attempt at humor.  I doubt they put any thought into which Nvidia card it was, beyond it being an Nvidia card; the videos were more focused on being a quick bit of humor.


lilhasselhoffer said:


> Did you read my entire post?  Perhaps if you did you wouldn't have restated what I said.
> AMD designed the cheapest cooler that would meet the thermal limitations of their card.  This meant a lower priced final product, but the performance was "terribad."  You couldn't overclock, the cards put out a bunch of heat, and worst of all they were noisy.  AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers.  The custom coolers rolled out, and AMD based GPUs actually had a chance.  When a custom cooler can drop temperatures, decrease noise, and increase performance all at once you have to admit the initial cooler was a twice baked turd.
> Additionally, GPUs are sold with a cooler, and removing it voids any warranties related to that card.  Do you really want to argue that AMD assumed most people would void their warranties to bring their GPUs to noise/heat parity with the Nvidia offerings?  That's insane.
> I'm not saying that Nvidia can do no wrong.  Fermi was crap; it existed because GPU computing was all the rage and Nvidia "needed" to compete with AMD's performance at the time.  I'm not saying there are any viable excuses, just that there is no proof that Arctic Islands means a cooler chip.  Arguing that the name, history, or anything else ensures that is foolish.  We won't have an answer until these GPUs start appearing, and discussion before that is speculation at best.  Arguing over wild speculation is pointless.


I was agreeing with you, not making a retort to your post... sorry if it came off wrong.


arbiter said:


> On the Nvidia card, did that heat cripple performance by 20%? Or did the Nvidia card still run pretty much as it was meant to?  Really, AMD took the most heat because they sold the cards with "up to ####MHz" clocks. Wording like that usually means you won't get the top end most of the time.


It went up to 105°C and could just as well cause issues.  The solution is the same as with the AMD card: a better-cooled case or some form of airflow to keep the heat from stagnating inside the card.  AMD's driver update, which changed how the fan profile was handled, helped the issue, and good airflow kept temps down easily in both cases.
Either way, both NVIDIA and AMD heard the cries and have decided to alleviate the issue on both ends.


----------



## lilhasselhoffer (Apr 27, 2015)

GhostRyder said:


> ...
> I was agreeing with you not making a retort at your post...Sorry if it came off wrong.
> ...



My misunderstanding.  My apologies.


----------



## Aquinus (Apr 27, 2015)

lilhasselhoffer said:


> Yes, you'll also have to decrease voltage inside the chip, but if you look at a transistor as a very poor resistor you'll see that power = amperage * voltage = amperage^2 * resistance. To decrease the power flowing through the transistor, just to match the same thermal limits of the old design, you need to either half the amperage or quarter the resistance. While this is possible, AMD has had the tendency to not do this.


That's not how resistors or circuits in a CPU work with respect to the parts operating as logic. Since we're talking about clock signals, not constant voltage, we're talking about impedance rather than resistance, because a clock signal can technically be described as an AC circuit. As a result, it's not as simple as you think. On top of that, reducing the size of the die can very well affect the gap in a transistor. A smaller gap means a smaller electric potential is required to open or close it, and a smaller gap means less impedance, even if the voltage stays nearly as high (maybe 0.1 V lower). So while you're correct that resistance increases in the regular circuitry because the wires are smaller, that does not mean a transistor's impedance to a digital signal is higher. In fact, transistor impedance has continued to go down as smaller manufacturing nodes are used.

Lastly, the impedance of a transistor depends on how strong the driving voltage difference is between the emitter and the base for an NPN transistor, versus grounding the base for a PNP transistor, to open it up.

Also, you made a false equivalency. You assume resistance doubles when circuit size is halved, which is not true. Resistance might increase, but not at that kind of rate; it depends on a lot of factors.
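For anyone who wants rough numbers behind this: the usual first-order model for switching power in CMOS logic is P ≈ α·C·V²·f, which is why the lower capacitance and voltage of a smaller node matter far more than wire resistance. A minimal sketch with invented values (none of these numbers describes any real GPU):

```python
# First-order CMOS dynamic (switching) power model: P ~ alpha * C * V^2 * f.
# Every value here is invented for illustration; none describes a real GPU.

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Switching power: activity factor * switched capacitance * V^2 * frequency."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Hypothetical 28 nm baseline: 1.10 V at 1 GHz.
base = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=1.10, f_hz=1e9)

# A node shrink helps mainly through lower capacitance and lower voltage;
# the voltage term pays off quadratically.
shrunk = dynamic_power(alpha=0.2, c_farads=0.6e-9, v_volts=0.90, f_hz=1e9)

print(f"relative power after shrink: {shrunk / base:.2f}")  # ~0.40
```

With a 40% cut in switched capacitance and a 0.2 V drop, power falls by roughly 60% at the same clock, without resistance entering the picture at all.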


----------



## progste (Apr 27, 2015)

Imagine the irony if the stock versions of these cards hit 90°C XD


----------



## 64K (Apr 27, 2015)

progste said:


> Imagine the irony if the stock versions of these cards hit 90°C XD



AMD haters would be merciless in ridiculing that card if it did happen. The joke would be so obvious that I can't believe AMD would have chosen "Arctic Islands" if it's going to be a hot, inefficient GPU. AMD needs to be getting everything right for the near future. Their stock has fallen 20% in the last week and a half. I wish them the best, but mistakes are a luxury they can't afford right now.


----------



## Vlada011 (Apr 27, 2015)

Next news: AMD delays the R9-390X to go straight to 14 nm...
Five or six months ago I was convinced they would launch at 20 nm, 20% stronger than GM200, with 3D HBM memory, extreme bandwidth, incredible fps,
a card made for 4K resolution... typical for AMD. I don't read their news any more, only the topic and maybe a few words...
I don't want to read anything before the R9-390X shows up, because they send out news only to move attention away from the main questions:
the specification and performance of the R9-390X,
its distance from the GTX 980 and TITAN X,
temperatures, noise, power consumption.
The last time customers waited this long, AMD made a miracle; they almost beat TITAN.
They didn't beat it; TITAN was the better card, with less heat, better OC, a better gaming experience, and more video memory, but they made a miracle, because nobody expected the same performance as NVIDIA's premium card. The main problem is that AMD still has no better card than that Hawaii model, which is almost the same as the crippled GK110. But now it's the middle of 2015, and TITAN launched at the beginning of 2013. NVIDIA has since had four stronger models (TITAN Black, GTX 780 Ti, GTX 980, TITAN X), and a fifth, the GTX 980 Ti, is finished and only needs a few weeks to get chips installed on boards and sent to vendors when the time comes.
The gap between NVIDIA and AMD is huge now, and it's time for AMD to make something good and force down the price of the GTX 980 Ti.


----------



## Casecutter (Apr 27, 2015)

HumanSmoke said:


> You were in a coma during the whole Fermi/Thermi frenzy?
> 8-10W an hour is noteworthy???


 
I meant the power/heat "narrative" is new as being directed toward AMD, not the topic in general.

While not a significant cost for an individual computer, when you have three sleeping machines, as I do, it is worth being aware of. We should be looking at all such "non-beneficial" loads, or "vampire usage," on everything. This idle draw should be regarded as just as wasteful to your household (if not more so, since nothing is actually happening) as the upfront efficiencies these products are marketed around, and the same goes for its effect on a community-wide basis and on the regional power grid.

I'm astounded by your need to deliver "point by point" discord; I didn't mean to rile you personally.


----------



## lilhasselhoffer (Apr 27, 2015)

Aquinus said:


> That's not how resistors or circuits in a CPU work with respect parts that are operating as logic. Since we're talking clock signals, not constant voltage, we're talking about impedance not resistance because technically a clock signal can be described as an AC circuit. As a result, it's not a simple as you think it is. On top of that, reducing the size of die very well can impact the gap in a transistor. Smaller gaps means a smaller electric potential is required to open or close it. Less gap means less impedance, so even if voltage might be as high (maybe a little lower, 0.1 volts?) So while you're correct that resistance increases on the regular circuitry because the wires are smaller, it does not mean transistors' impedance to a digital signal is more. In fact, impedance on transistors have continued to go down as smaller manufacturing nodes are used.
> 
> Lastly, impedance on a transistor depends on how strong the driving voltage difference is between the emitter and the base for an NPN transistor versus grounding the base for PNP transistors to open them up.
> 
> Also you made a false equivalency. You assume resistance doubles when circuit size is halved which is not true. Resistance might increase, but it's not that kind of rate. It depends on a lot of factors.



One, I started by stating that a transistor can be approximated as a poor resistor.  While incorrect, this is the only way I know to estimate the bled-off energy (electrical to thermal) without resorting to immensely complicated mathematics beyond my ken.  It also makes calculating heat transfer a heck of a lot easier.

Two, I said exactly that.  In a simple circuit, voltage can be expressed as amperage multiplied by resistance, and power as amperage multiplied by voltage.  I took the extra step and removed the voltage term from the equation because transistors generally have a fixed operational voltage depending on size.  As that is difficult, at best, to determine, I didn't want it to muddy the water.

Third, where exactly did I say resistance doubles?  I cannot find it in any of my posts.  What I did find was a reference to circuit size being halved, which quarters the available surface area to conduct heat.  Perhaps this is what you are referring to?  I'd like clarification, because if I did say that, I'd like to correct the error.


All of this is complicated by a simplistic model, but it doesn't take away from my point.  None of the math, or the assumed changes, means that the Arctic Islands chips will run cool, or even cooler than the current "Volcanic Islands" silicon.  Yes, AMD may use a 75% space-saving process to increase the transistor count by only 50%; yes, the decreased transistor size could well allow a much smaller gate voltage; and yes, the architecture may have been altered to be substantially more efficient (thus requiring fewer clock cycles to perform the same work).  All of this is speculation.  Until I can buy a card, or see some plausibly factual test results, anything said is wild speculation.
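For what it's worth, the back-of-envelope model I'm using can be sketched in a few lines. This only illustrates the P = I²R approximation and the surface-area argument above; every value is invented for illustration:

```python
# Back-of-envelope model from the post: treat a transistor as a poor
# resistor, so P = I^2 * R, then compare heat flux (power per unit area)
# when the linear dimensions of the circuit are halved.
# All values are invented for illustration.

def power(current_a, resistance_ohm):
    return current_a ** 2 * resistance_ohm

base_p = power(1.0, 2.0)      # 2.0 W through a hypothetical element
half_i = power(0.5, 2.0)      # halving the current quarters the power
quarter_r = power(1.0, 0.5)   # quartering the resistance does the same

# Halving linear size quarters the area available to shed heat,
# so the same power means 4x the heat flux.
area, shrunk_area = 1.0, 0.25
flux_ratio = (base_p / shrunk_area) / (base_p / area)

print(half_i, quarter_r, flux_ratio)  # 0.5 0.5 4.0
```

Which is exactly the point: unless current or effective resistance comes down hard, a shrink alone concentrates the same heat into a quarter of the area.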


----------



## HumanSmoke (Apr 27, 2015)

Casecutter said:


> I meant the power/heat "narrative" is new as being directed toward AMD, not the topic in general.


Isn't that always the case with tech as emotive as the core components of a system? The failings, perceived or real, of any particular architecture/SKU are always held up against their contemporaries and historical precedent. When one company drops the ball, people are only too eager to fall upon it like a pack of wolves. Regarding heat/power, the narrative shifts every time an architecture falls outside the norm. The HD 2900XT (an AMD product) was pilloried in 2007, the GTX 480/470/465 received the attention three years later, and GCN in its large-die, compute-oriented form comes in for attention now. The primary difference between the present and the past is that in previous years excessive heat and power were just a negative that could be ameliorated by outright performance, and there are plenty of examples I can think of, from the 3dfx Voodoo 3 to the aforementioned FX 5800U and GeForce 6800 Ultra/Ultra Extreme. The present day sees temperature and input power limit performance through throttling, which makes the trade-off less acceptable for many.


Casecutter said:


> I'm astounded ... your need to deliver "point by point" discord, didn’t mean to rile you personally.


Well, I'm not riled. You presented a number of points and I commented on them individually for the sake of clarity, and to lessen the chances of anyone here taking my comments out of context. I also had three questions regarding your observations. Loading them into a single paragraph lessens their chances of being answered, although I note that splitting them up as individual points fared no better in that regard... so there's that rationale mythbusted.


----------



## crazyeyesreaper (Jun 15, 2015)

Bjorn_Of_Iceland said:


> So is my GTX 780, a two-year-old card, and the 980 is not too far above it, so even a 780 Ti can keep it in check.
> 
> AMD is lagging so much that they need to skip 20 nm entirely just to stay competitive.



A heavily overclocked GTX 780 keeps up with the 970 just fine, while an overclocked 780 Ti can get close to the 980.

As such, Nvidia did a lot of R&D to push performance up just enough to counter the overclocked previous generation by a few percentage points. The Titan X and 980 Ti offer what Fury from AMD will offer, so they are relatively similar in performance for now. Nothing has really changed that much.

W1zz managed a 17% performance boost on the GTX 780 with overclocking;
on the 780 Ti he got a further 18% performance boost.

So if we assume 10% extra performance across the board via overclocking, then yes, the GTX 780 compares to the 970 while the 780 Ti compares to a 980.

Add 10% to the 780 and 10% to the 780 Ti and they have no issues keeping up with the 970 and 980 for the most part. It is game dependent, but even in the averaged scenario across a multitude of games the result remains the same.
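The arithmetic above is simple enough to sketch. The uplift percentages are the ones quoted in the post; the baseline relative-performance scores are placeholders, not actual review data:

```python
# Apply the overclocking uplifts quoted above to a baseline score.
# Baseline scores are placeholders, not actual review results.

def overclocked(score, uplift_pct):
    """Relative performance after an overclock of uplift_pct percent."""
    return score * (1 + uplift_pct / 100)

gtx_780_stock = 100.0                          # placeholder baseline
gtx_780_oc = overclocked(gtx_780_stock, 17)    # W1zzard's 17% figure
gtx_780ti_oc = overclocked(120.0, 18)          # placeholder stock score, 18% OC

print(round(gtx_780_oc, 1), round(gtx_780ti_oc, 1))  # 117.0 141.6
```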


----------



## xfia (Jun 15, 2015)

buggalugs said:


> AMD should have dumped TSMC long ago although there isn't that many choices. AMD should try to do a deal with Samsung.


They have been working with Samsung for years, and both are founding members of the HSA Foundation. They have also been in talks for some time about using 14 nm for AMD's new GPUs and CPUs.


----------

