Thursday, April 23rd 2015
AMD to Skip 20 nm, Jump Straight to 14 nm with "Arctic Islands" GPU Family
AMD's next-generation GPU family, codenamed "Arctic Islands" and planned for launch some time in 2016, will see the company skip the 20 nanometer silicon fab process, jumping straight from 28 nm to 14 nm FinFET. Whether the company will stick with TSMC, which is facing crippling hurdles implementing its 20 nm node for GPU vendors, or hire a new fab, remains to be seen. Intel and Samsung are currently the only fabs whose 14 nm nodes have attained production capacity: Intel is manufacturing its Core "Broadwell" CPUs, while Samsung is manufacturing its Exynos 7 (refresh) SoCs. Intel's joint venture with Micron Technology, IM Flash, is manufacturing NAND flash chips on 14 nm.
Named after islands in the Arctic Circle, possibly hinting at the low TDP of the chips benefiting from 14 nm, "Arctic Islands" will be led by "Greenland," a large GPU that will implement the company's most advanced stream processor design and HBM2 memory, which offers 57% higher memory bandwidth at just 48% the power consumption of GDDR5. Korean memory manufacturer SK Hynix is ready with its HBM2 chip designs.
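Taken at face value, those two figures imply roughly a 3.3x improvement in bandwidth per watt. A quick back-of-the-envelope sketch, where the GDDR5 baseline values are illustrative placeholders rather than figures from the report:

    # Back-of-the-envelope check of the HBM2 claim: 57% more bandwidth
    # at 48% of the power of GDDR5. Baseline numbers are illustrative
    # placeholders, not figures from the source report.
    gddr5_bandwidth_gbps = 320.0   # hypothetical GDDR5 subsystem, GB/s
    gddr5_power_w = 30.0           # hypothetical GDDR5 subsystem power, W

    hbm2_bandwidth_gbps = gddr5_bandwidth_gbps * 1.57   # +57% bandwidth
    hbm2_power_w = gddr5_power_w * 0.48                 # 48% of the power

    # Bandwidth per watt improves by 1.57 / 0.48, about 3.27x,
    # regardless of the baseline chosen.
    ratio = (hbm2_bandwidth_gbps / hbm2_power_w) / (gddr5_bandwidth_gbps / gddr5_power_w)
    print(f"{ratio:.2f}x bandwidth per watt")   # -> 3.27x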
Source: Expreview
71 Comments on AMD to Skip 20 nm, Jump Straight to 14 nm with "Arctic Islands" GPU Family
Yes, GCN has a bit of a negative image due to heat production. Do you also propose that they called the last generation "Fire Islands" because it generated heat? If I were in marketing and had the choice to name a project after a feature of the hardware, naming it after excess heat production would demonstrate substantial stupidity.
We can conjecture that they'll be cooler, or we could make them cooler with a mild underclock. We could also design a stock cooler that wasn't absolute crap (read: so many of the 2xx series coolers were custom because the stock cooler from AMD was terribad). AMD chose to push performance numbers by hitting the edges of their thermal envelope, and to save money by designing a cooler that met those base requirements. This isn't a design driven by a project name. If it were, the next CPU core would be called "Intel killer." All of this funnels back into my statement that any conclusions drawn now are useless. With no facts and no knowledge, any conclusion can be as easily dismissed as stated.
From an engineering/ROI viewpoint, what makes sense: using a full-die GM206 for mobile parts, or using a 50% salvage GM204 (the GM204-based GTX 965M SKU has the same logic enabled as the GM206) that has the same, or slightly better, performance-per-watt and a larger heat-dissipation heatsink?
I think as far as DX12 is concerned, all we hear at this point is conjecture, filled with a lot of what-ifs and I-thinks instead of pure fact. Until we see it in the open, we will not know what being DX12-ready actually means.
Problem is, when you mock someone's failing and then do it yourself, it's a marketing and PR disaster. The GTX 480 was righted by the surprise release of a hitherto "can't be done" GTX 580 that managed to include the previously fused-off cores.
Hopefully (if the naming conjecture is true) next year's card will be cool, but the flip side of pumping up Arctic Islands is that the 390X will be a furnace.
I bloody hope it isn't.
Not something many companies would actually put together to announce their mea culpa. They may have done something similar with Fermi had AMD, their loyal followers, and shills not begun getting creative first.
Two things stand out. Nvidia's videos mocking themselves are much funnier and original than AMD's efforts, and the NV30 became a byword for hot'n'loud because of its staggeringly high 74 watt (full load) power consumption. What a difference a dozen years makes in GPU design.
www.techpowerup.com/reviews/Colorful/iGame_GTX_970/25.html
AMD designed the cheapest cooler that would meet the thermal limitations of their card. This meant a lower-priced final product, but the performance was "terribad": you couldn't overclock, the cards put out a bunch of heat, and, worst of all, they were noisy. AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers. The custom coolers rolled out, and AMD-based GPUs actually had a chance. When a custom cooler can drop temperatures, decrease noise, and increase performance all at once, you have to admit the initial cooler was a twice-baked turd.
Additionally, GPUs are sold with a cooler, and removing it voids any warranties related to that card. Do you really want to argue that AMD assumed most people would void their warranties to bring their GPUs to noise/heat parity with the Nvidia offerings? That's insane.
I'm not saying that Nvidia can do no wrong. Fermi was crap that existed because GPU computing was all the rage and Nvidia "needed" to compete with AMD's performance at the time. I'm not saying there are any viable excuses, just that there is no proof that Arctic Islands means a cooler chip. Arguing that the name, history, or anything else ensures that is foolish. We won't have an answer until these GPUs start appearing, and discussion before that is speculation at best. Arguing over wild speculation is pointless.
As the person you quoted touched on, DX12 allows more use of the hardware's full power. I kinda wonder, if AMD pulls the cheap-cooler move again, how much the heat issue will be amplified with DX12 letting the GPU run much closer to 100% than was allowed before. The same could apply on the NVIDIA side, but their reference cooler isn't half bad. On the NVIDIA card, did that heat cripple performance by 20%, or did the card still run pretty much as it was meant to? Really, AMD took the most heat because they sold the cards as "up to ####MHz". When a vendor uses that wording, it usually means you won't get that top end most of the time.
Either way, both NVidia and AMD heard the cries and have decided to alleviate the issue on both ends.
Lastly, the impedance of a transistor depends on how strongly it is driven: for an NPN transistor, on the voltage difference between the base and the emitter; for a PNP transistor, on pulling the base low toward ground to open it up.
Also, you made a false equivalence. You assume resistance doubles when circuit size is halved, which is not true. Resistance might increase, but not at that rate; it depends on a lot of factors.
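To illustrate why the rate depends on the assumptions, here is a minimal sketch using the textbook wire-resistance formula R = ρL/A; the dimensions and scale factors are made up for illustration, not process data:

    # Minimal sketch: wire resistance R = rho * L / A, where A = W * H.
    # Dimensions and scale factors are made-up illustrations, not process data.
    def wire_resistance(rho, length, width, height):
        return rho * length / (width * height)

    rho = 1.68e-8   # resistivity of copper, ohm-meters
    R0 = wire_resistance(rho, length=1e-3, width=100e-9, height=200e-9)

    # Naive isotropic shrink: every dimension halves, so R doubles,
    # because L halves but the cross-section quarters.
    R_iso = wire_resistance(rho, length=0.5e-3, width=50e-9, height=100e-9)

    # Real nodes scale dimensions unequally; e.g. if the wire height is
    # kept constant, resistance stays the same instead.
    R_uneq = wire_resistance(rho, length=0.5e-3, width=50e-9, height=200e-9)

    print(R_iso / R0, R_uneq / R0)   # -> 2.0, 1.0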
Five or six months ago I was convinced they would launch 20 nm, 20% stronger than GM200, with 3D HBM memory, extreme bandwidth, incredible fps,
a card made for 4K resolution... typical for AMD. I don't read their news any more, only the headline and maybe a few words more...
I don't want to read anything before the R9-390X shows up, because they send out news only to move attention away from the main questions:
Specification and performance of the R9-390X,
Distance from the GTX 980 and TITAN X,
Temperatures, noise, power consumption.
The last time customers waited this long, AMD made a miracle: they almost beat the TITAN.
They didn't beat it; the TITAN was the better card: less heat, better OC, better gaming experience, more video memory. But they made a miracle; nobody expected the same performance as NVIDIA's premium card. The main problem is that AMD still has no better card than that Hawaii model, which is almost the same as the crippled GK110. But now it's the middle of 2015, and the TITAN launched at the beginning of 2013. NVIDIA has since shipped four stronger models (TITAN Black, GTX 780 Ti, GTX 980, TITAN X), and a fifth, the GTX 980 Ti, is finished, needing only a few weeks to install chips on boards and ship to vendors when the time comes.
The gap between NVIDIA and AMD is huge now, and it's time for AMD to make something good and drive the price of the GTX 980 Ti down.
While not a significant cost for an individual computer, when you have three of them sleeping, as I do, it is worth being aware of. We should be looking at all such "non-beneficial" loads, or "vampire usage," on everything. This should be just as disquieting, and regarded as almost as wasteful to your household (if not more so, since nothing is happening), as the advertised up-front efficiencies that products are marketed around; the same goes for its effect on a community-wide basis and on the regional power grid.
I'm astounded... by your need to deliver "point by point" discord. I didn't mean to rile you personally. o_O
Two, I said exactly that. In a simple circuit, voltage can be expressed as amperage multiplied by resistance, and power can be expressed as amperage multiplied by voltage. I took the extra step and removed the voltage term from the equation, because transistors generally have a fixed operational voltage depending upon size. As that is difficult, at best, to determine, I didn't want it to muddy the water.
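For reference, the substitution being described is just V = I·R plugged into P = I·V, giving P = I²·R. A toy check with made-up numbers:

    # The substitution described above: V = I * R into P = I * V
    # gives P = I**2 * R, eliminating the voltage term.
    # Numbers are made up purely for illustration.
    I = 2.0    # amperage, A
    R = 5.0    # resistance, ohms

    V = I * R          # Ohm's law: 10 V
    P_iv = I * V       # power as amperage * voltage: 20 W
    P_i2r = I**2 * R   # same power with voltage eliminated: 20 W

    assert P_iv == P_i2r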
Third, where exactly did I suggest resistance doubles? I cannot find it in any of my posts. What I did find was a reference to circuit size being halved, which quarters the available surface area to conduct heat. Perhaps this is what you are referring to? I'd like clarification, because if I did say this, I'd like to correct the error.
All of this is complicated by a simplistic model, but it doesn't take away from my point. None of the math, or assumed changes, means that the Arctic Islands chips will run cool, or even cooler than the current Fire Islands silicon. Yes, AMD may be using a 75% space-saving process to increase the transistor count by only 50%; yes, the decreased transistor size could well allow a much smaller gate voltage; and yes, the architecture may have been altered to be substantially more efficient (thus requiring fewer clock cycles to perform the same work). All of this is speculation. Until I can buy a card, or see some plausibly factual test results, anything said is wild speculation.
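Purely to illustrate how those hypothetical factors would combine (every input below is a made-up placeholder, not a known spec):

    # Sketch of how the speculative factors above would combine.
    # All inputs are hypotheticals from the paragraph, not known specs.
    area_per_transistor = 0.25   # "75% space saving" per transistor
    transistor_count = 1.50      # "+50% transistors"
    power_per_transistor = 0.70  # made-up figure for a lower gate voltage

    relative_die_area = area_per_transistor * transistor_count         # 0.375
    relative_total_power = power_per_transistor * transistor_count     # 1.05
    relative_power_density = relative_total_power / relative_die_area  # 2.8

    # Even with big per-transistor savings, power *density* can rise,
    # which is why none of this guarantees a cooler-running chip.
    print(relative_die_area, relative_total_power, relative_power_density)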
As such, Nvidia did a lot of R&D to push performance up enough to counter the overclocked previous gen by a few percentage points. The TITAN X and 980 Ti offer what Fury from AMD will offer, so they are relatively similar for now in terms of performance. Nothing's really changed that much.
W1zz managed to get a 17% performance boost on the GTX 780 with overclocking;
on the 780 Ti he got an 18% performance boost.
So if we assume a 10% performance gain across the board via overclocking, then yes, the GTX 780 compares to the 970 while the 780 Ti compares to a 980.
Add 10% to the 780 and 10% to the 780 Ti and they have no issues keeping up with the 970 and 980 for the most part. It is game dependent, but even in the averaged scenario across a multitude of games the result remains the same.
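A trivial sketch of that across-the-board math; the stock relative-performance indices below are arbitrary placeholders, and only the uplift percentages come from the posts above:

    # Sketch of the "10% across the board" overclocking math. The stock
    # performance indices are arbitrary placeholders; only the uplift
    # percentages come from the discussion above.
    stock_index = {"GTX 780": 100.0, "GTX 780 Ti": 120.0}   # placeholder indices
    measured_oc = {"GTX 780": 0.17, "GTX 780 Ti": 0.18}     # W1zz's OC gains
    assumed_oc = 0.10                                       # conservative figure

    for card, index in stock_index.items():
        conservative = index * (1 + assumed_oc)
        measured = index * (1 + measured_oc[card])
        print(f"{card}: {conservative:.0f} assumed, {measured:.0f} measured")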