Thursday, April 23rd 2015

AMD to Skip 20 nm, Jump Straight to 14 nm with "Arctic Islands" GPU Family

AMD's next-generation GPU family, codenamed "Arctic Islands" and planned for launch some time in 2016, will see the company skip the 20 nanometer silicon fab process entirely, jumping straight from 28 nm to 14 nm FinFET. Whether the company will stick with TSMC, which is facing crippling hurdles in implementing its 20 nm node for GPU vendors, or move to a new fab, remains to be seen. Intel and Samsung are currently the only fabs whose 14 nm nodes have attained production capacity. Intel is manufacturing its Core "Broadwell" CPUs on 14 nm, while Samsung is manufacturing its Exynos 7 (refresh) SoCs. Intel's joint-venture with Micron Technology, IM Flash, is manufacturing NAND flash chips on 14 nm.

Named after islands in the Arctic Circle, and a possible hint at the low TDP the chips will gain from 14 nm, "Arctic Islands" will be led by "Greenland," a large GPU that will implement the company's most advanced stream processor design and HBM2 memory, which offers 57% higher memory bandwidth at just 48% the power consumption of GDDR5. Korean memory manufacturer SK Hynix is ready with its HBM2 chip designs.
Source: Expreview

71 Comments on AMD to Skip 20 nm, Jump Straight to 14 nm with "Arctic Islands" GPU Family

#51
lilhasselhoffer
64K: I'm guessing AMD chose that code name because they have found a way not only to take advantage of the improved efficiency of the 14 nm process but also to build a more efficient architecture on top of it, like Nvidia did with Maxwell: same 28 nm process as Kepler, but more efficient, so it used fewer watts.

AMD knows that they currently have a reputation for designing GPUs that run too hot and use too many watts for the same performance as an Nvidia GPU. I'm not saying they deserve that reputation, but it does exist. Over and over I see people citing those two reasons as why they won't buy an AMD card. As far as the extra watts go, it doesn't amount to much on an electricity bill for an average gamer playing 15-20 hours a week, unless you live in an area where electricity is ridiculously expensive or you're running your card at max 24/7 for Folding or Mining. For me the difference would be about 8 cents a month on my power bill between a reference GTX 780 Ti (peak 269 watts) and a reference R9 290X (peak 282 watts), going by W1zzard's reviews of the last generation's flagship cards. Even if AMD used 100 watts more than Nvidia it still wouldn't amount to much: 65 cents a month difference at 10 cents per kWh.

AMD is already the butt of many jokes about heat/power issues. I don't think they would add fuel to the fire by releasing a hot, inefficient GPU and calling it Arctic Islands.
You really haven't answered the question posed here.

Yes, GCN has a bit of a negative image due to heat production. Do you also propose that the reason they called the last generation "Volcanic Islands" was because they generated heat? If I were in marketing, and had the choice to name the project after a feature of the hardware, naming it after excess heat production would demonstrate substantial stupidity.

We can conjecture that they'll be cooler, or we could make them cooler with a mild underclock. We could also design a stock cooler that wasn't absolute crap (read: so many of the 2xx series coolers were custom because the stock cooler from AMD was terribad). AMD chose to push performance numbers by hitting the edges of their thermal envelope, and to save money by designing a cooler that met these base requirements. This isn't a design driven off of a name for the project. If it was, the next CPU core would be called "Intel killer." All of this funnels back into my statement that any conclusions drawn now are useless. No facts and no knowledge mean any conclusion can be as easily dismissed as stated.
#52
HumanSmoke
Casecutter: I'd just remind those, it wasn't until AMD did GCN and/or 28 nm that being poor on power/heat became the narrative
You were in a coma during the whole Fermi/Thermi frenzy? 34 pages of it on the GTX 480 review alone. Even AMD were falling over themselves pointing out that heat + noise = bad... although I guess AMD now have second thoughts on publicizing that sort of thing
Casecutter: I ask why Apple went with AMD's Tonga for their iMac with 5K Retina display? Sure, it could've been that either Apple or Nvidia just didn't care to, or need to, "partner up". It might have been a timing thing, or more that the specs for GM206 didn't provide the oomph, while the GTX 970M (GM204) wasn't the right fit in specs/price for Apple.
Probably application, timing and pricing. Nvidia provide the only discrete graphics for Apple's MBP, which is a power/heat sensitive application. GM 206 probably wasn't a good fit for Apple's timeline, and Nvidia probably weren't prepared to price the parts at break-even margins. As I have noted before, AMD supply FirePros to Apple. The D500 (a W7000/W8000 hybrid) to D700 (W9000) upgrade for the Mac Pro costs $300 per card. The retail price difference between those FirePro SKUs is ~$1800 per card, some $1500 more than what Apple charges. If Apple can afford to offer a rebranded W9000 for $300 over the cost of a cut-down W8000, and still apply their margins for profit and amortized warranty, how favourable is the contract pricing for Apple?
Casecutter: Maxwell is good, and saving power while gaming is commendable, but the "vampire" load during sleep compared to AMD's ZeroCore is noteworthy over a month's time.
8-10W an hour is noteworthy??? What does that make 3D, GPGPU, and HTPC video usage scenarios then?
Casecutter: Still, business is business, and keeping the competition from any win enhances one's "cred".
A win + a decent contract price might matter more in a business environment. People have a habit of seeing through purchased "design wins". Intel and Nvidia's SoC programs don't look that great when the financials are taken into account - you don't see many people lauding the hardware precisely because many of the wins are bought and paid for.
Casecutter: Interestingly, we don't see that Nvidia has an MXM version of the GM206?
It wouldn't make any kind of sense to use GM 206 for mobile unless the company plans on moving GM 107 down one tier in the hierarchy - and given the number of "design wins" that the 850M/860M/950M/960M is racking up, that doesn't look likely.
From an engineering/ROI viewpoint, what makes sense? Using a full-die GM 206 for mobile parts, or using a 50% salvage GM 204 (the GM 204 GTX 965M SKU has the same logic enabled as the GM 206) that has the same (or a little better) performance-per-watt and a larger heat-dissipation heatsink?
#53
WhoDecidedThat
lilhasselhoffer: I'm seeing plenty of people talking about DX12, and I don't get it. There is no plan out there which states DX12 will only appear on these new cards, and in fact Nvidia has stated that their current line-up is DX12 capable (though what this means in real terms is anyone's guess).
I think they are talking about what DX12 software i.e. games will bring to the table. It is just as exciting a prospect as a new GPU coming in.
#54
GhostRyder
arbiter: I am sure a lot also had to do with Apple being able to get the chip for super cheap to keep their insanely high margins on all the products they slap their logo on. In some of the non-butt-kissing reviews of that 5K iMac, that GPU has a hard time pushing that resolution just in normal desktop work; you can see stuttering when desktop animations are running. Even a 290X/980 would be hard pressed to push that many pixels.
Well, that is what I said: it comes down to money and the OEM being flexible, which is why Apple chooses them. But as far as pushing 5K, it can handle the basics but was never meant to deliver ultimate performance, as nothing we have could offer decent performance at 5K without using multiple GPUs.
lilhasselhoffer: You really haven't answered the question posed here.

Yes, GCN has a bit of a negative image due to heat production. Do you also propose that the reason they called the last generation "Volcanic Islands" was because they generated heat? If I were in marketing, and had the choice to name the project after a feature of the hardware, naming it after excess heat production would demonstrate substantial stupidity.

We can conjecture that they'll be cooler, or we could make them cooler with a mild underclock. We could also design a stock cooler that wasn't absolute crap (read: so many of the 2xx series coolers were custom because the stock cooler from AMD was terribad). AMD chose to push performance numbers by hitting the edges of their thermal envelope, and to save money by designing a cooler that met these base requirements. This isn't a design driven off of a name for the project. If it was, the next CPU core would be called "Intel killer." All of this funnels back into my statement that any conclusions drawn now are useless. No facts and no knowledge mean any conclusion can be as easily dismissed as stated.
AMD got more flak on this than Nvidia did for the same thing... The problem also was not that they designed a bad heatsink; it was more that they did not make a better one, as it was really just meant to be inexpensive, since they figure that most people on the high end want something more anyway and are probably going to handle cooling it themselves. Obviously this was a mistake they realized, hence why we are getting something different this time.

I think, as far as DX12 is concerned, all we hear at this point is conjecture filled with a lot of "what ifs" and "I thinks" instead of pure fact. Until we see it in the open we will not know what being DX12-ready actually means.
#55
the54thvoid
Super Intoxicated Moderator
GhostRyder: Well, that is what I said: it comes down to money and the OEM being flexible, which is why Apple chooses them. But as far as pushing 5K, it can handle the basics but was never meant to deliver ultimate performance, as nothing we have could offer decent performance at 5K without using multiple GPUs.


AMD got more flak on this than Nvidia did for the same thing...
No. Fermi was a heat fiasco for Nvidia and they were mocked relentlessly for it. ATI used it (as did their brand owners - see what term I didn't use there!) to their advantage.
Problem is, when you mock someone's failing and then do it yourself, it's a marketing and PR disaster. The GTX 480 was righted by the surprise release of a hitherto "can't be done" GTX 580 that managed to include the previously fused-off cores.
Hopefully (if the naming conjecture is true) next year's card will be cool, but the flip side of pumping up Arctic Islands is that the 390X will be a furnace.

I bloody hope it isn't.
#56
HumanSmoke
the54thvoid: No. Fermi was a heat fiasco for Nvidia and they were mocked relentlessly for it.
As they were with the FX 5800U...
the54thvoid: ATI used it (as did their brand owners - see what term I didn't use there!) to their advantage.
Problem is, when you mock someone's failing and then do it yourself, it's a marketing and PR disaster.
At least with the FX 5800U, Nvidia actually had the balls and sense of humour to laugh at their own failings. No amount of marketing could save NV30 from the obvious negative traits, so the company had fun with it.

Not something many companies would actually put together to announce their mea culpa. They may have done something similar with Fermi had AMD, their loyal followers, and shills not begun getting creative first.

Two things stand out. Nvidia's videos mocking themselves are much funnier and original than AMD's efforts, and the NV30 became a byword for hot'n'loud because of its staggeringly high 74 watt (full load) power consumption. What a difference a dozen years makes in GPU design.
#57
rruff
64K: As far as the extra watts go, it doesn't amount to much on an electricity bill for an average gamer playing 15-20 hours a week, unless you live in an area where electricity is ridiculously expensive or you're running your card at max 24/7 for Folding or Mining. For me the difference would be about 8 cents a month on my power bill between a reference GTX 780 Ti (peak 269 watts) and a reference R9 290X (peak 282 watts), going by W1zzard's reviews of the last generation's flagship cards. Even if AMD used 100 watts more than Nvidia it still wouldn't amount to much: 65 cents a month difference at 10 cents per kWh.
Compare a reference GTX 970 to an R9 290 at idle (7 W more), playing a video (60 W more), or gaming on average (76 W more). Any way you slice it, the FPS/$ advantage of the AMD card disappears pretty fast if you actually use it. If it's on all the time, and you spend 6 hrs per week watching video and 20 hrs a week gaming, you will spend ~$20/yr more on electricity in the US.

www.techpowerup.com/reviews/Colorful/iGame_GTX_970/25.html
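If anyone wants to sanity-check that estimate, here is a minimal back-of-the-envelope sketch using the power deltas quoted above; the 13 ¢/kWh rate is my own assumption for a rough US average, not a number from the review:

```python
# Rough annual electricity-cost delta between a reference R9 290 and GTX 970,
# using the per-scenario differences quoted above: 7 W idle, 60 W video,
# 76 W average gaming. The 13 cents/kWh rate is an assumed US average,
# not a figure from the review.
HOURS_PER_WEEK = 24 * 7
video_h, gaming_h = 6, 20
idle_h = HOURS_PER_WEEK - video_h - gaming_h           # card idles the rest of the time

extra_watts = {"idle": 7, "video": 60, "gaming": 76}   # R9 290 minus GTX 970
extra_hours = {"idle": idle_h, "video": video_h, "gaming": gaming_h}

wh_per_week = sum(extra_watts[s] * extra_hours[s] for s in extra_watts)
kwh_per_year = wh_per_week * 52 / 1000
cost_per_year = kwh_per_year * 0.13                    # assumed $0.13 per kWh

print(f"~{kwh_per_year:.0f} kWh/yr extra -> ~${cost_per_year:.0f}/yr")  # ~149 kWh, ~$19
```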
#58
arbiter
HumanSmoke: At least with the FX 5800U, Nvidia actually had the balls and sense of humour to laugh at their own failings. No amount of marketing could save NV30 from the obvious negative traits, so the company had fun with it.
Not something many companies would actually put together to announce their mea culpa. They may have done something similar with Fermi had AMD, their loyal followers, and shills not begun getting creative first.
Two things stand out. Nvidia's videos mocking themselves are much funnier and original than AMD's efforts, and the NV30 became a byword for hot'n'loud because of its staggeringly high 74 watt (full load) power consumption. What a difference a dozen years makes in GPU design.
Those AMD "fixer" videos are pretty sad. In one of the first ones, they tried to compare what was at the time an Nvidia GTX 650 (it was easy to tell by the design of the reference cooler). The guy said it doesn't run his game well, then the "fixer" hands him a 7970, like they were comparing a low-to-midrange card against their top-of-the-line card at the time. It was a pretty sad marketing attempt by them. I know some people wouldn't have looked into which card they claimed wouldn't run his game well, but when you did, it was a pretty sad comparison. It would be like comparing a GTX 980 to an R7 260X in performance now.
#59
lilhasselhoffer
GhostRyder: ...
AMD got more flak on this than Nvidia did for the same thing... The problem also was not that they designed a bad heatsink; it was more that they did not make a better one, as it was really just meant to be inexpensive, since they figure that most people on the high end want something more anyway and are probably going to handle cooling it themselves. Obviously this was a mistake they realized, hence why we are getting something different this time.

I think, as far as DX12 is concerned, all we hear at this point is conjecture filled with a lot of "what ifs" and "I thinks" instead of pure fact. Until we see it in the open we will not know what being DX12-ready actually means.
Did you read my entire post? Perhaps if you had, you wouldn't have restated what I said.

AMD designed the cheapest cooler that would meet the thermal limitations of their card. This meant a lower priced final product, but the performance was "terribad." You couldn't overclock, the cards put out a bunch of heat, and worst of all they were noisy. AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers. The custom coolers rolled out, and AMD based GPUs actually had a chance. When a custom cooler can drop temperatures, decrease noise, and increase performance all at once you have to admit the initial cooler was a twice baked turd.

Additionally, GPUs are sold with a cooler and removing it violates any warranties related to that card. You really want to argue that AMD assumed most people would void their warranties to bring their GPUs to noise/heat parity with the Nvidia offerings? That's insane.



I'm not saying that Nvidia can do no wrong. Fermi was crap; it existed because GPU computing was all the rage and Nvidia "needed" to compete with the performance of AMD at the time. I'm not saying there are any viable excuses, just that there is no proof that Arctic Islands means a cooler chip. Arguing that the name, history, or anything else ensures that is foolish. We won't have an answer until these GPUs start appearing, and discussion before that is speculation at best. Arguing over wild speculation is pointless.
#60
arbiter
lilhasselhoffer: AMD designed the cheapest cooler that would meet the thermal limitations of their card. This meant a lower priced final product, but the performance was "terribad." You couldn't overclock, the cards put out a bunch of heat, and worst of all they were noisy. AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers. The custom coolers rolled out, and AMD based GPUs actually had a chance. When a custom cooler can drop temperatures, decrease noise, and increase performance all at once you have to admit the initial cooler was a twice baked turd.
They didn't even design that cooler; they just tossed on the cooler from the last-gen cards and shipped it.

As the person you quoted touched on, DX12 allows more use of the hardware's full power. I kinda wonder, if AMD pulls the cheap-cooler move again, how much the heat issue will be amplified with DX12 letting the GPU run closer to 100% than was allowed before. It could be the same on the Nvidia side, but their reference cooler isn't half bad.
GhostRyder: AMD got more flak on this than Nvidia did for the same thing... The problem also was not that they designed a bad heatsink; it was more that they did not make a better one, as it was really just meant to be inexpensive, since they figure that most people on the high end want something more anyway and are probably going to handle cooling it themselves.
On the Nvidia card, did that heat cripple performance by 20%? Or did the Nvidia card still run pretty much as it was meant to? Really, the reason AMD took the most heat is more due to the fact that they sold the cards as "up to #### MHz". When you use that wording, it usually means you won't get that top end most of the time.
#61
GhostRyder
the54thvoid: No. Fermi was a heat fiasco for Nvidia and they were mocked relentlessly for it. ATI used it (as did their brand owners - see what term I didn't use there!) to their advantage.
Problem is, when you mock someone's failing and then do it yourself, it's a marketing and PR disaster. The GTX 480 was righted by the surprise release of a hitherto "can't be done" GTX 580 that managed to include the previously fused-off cores.
Hopefully (if the naming conjecture is true) next year's card will be cool, but the flip side of pumping up Arctic Islands is that the 390X will be a furnace.
I bloody hope it isn't.
Mocked for it, maybe, but not nearly as badly as some people (including some of the people on this forum) do, at least from what I saw during those times on other sites. I ran into more people who still said it was a great card and pointed out the many different ways to alleviate the problem, as there were plenty, same as with the R9 290/X. The problem is I have seen many of those people then ridicule the same traits on the AMD side, claiming it should have been better... Personally, it does not matter at the end of the day; it's easy to alleviate and something most of us could find a way around on any of the coolers. But them mocking it (AMD, during those days) was a little idiotic, though no matter what AMD says they are always wrong in some people's eyes...
arbiter: Those AMD "fixer" videos are pretty sad. In one of the first ones, they tried to compare what was at the time an Nvidia GTX 650 (it was easy to tell by the design of the reference cooler). The guy said it doesn't run his game well, then the "fixer" hands him a 7970, like they were comparing a low-to-midrange card against their top-of-the-line card at the time. It was a pretty sad marketing attempt by them. I know some people wouldn't have looked into which card they claimed wouldn't run his game well, but when you did, it was a pretty sad comparison. It would be like comparing a GTX 980 to an R7 260X in performance now.
Was it stupid? Yes it was, but it's just a mocking video with an attempt at humor. I doubt they put any more thought into which Nvidia card it was than the fact that it was an Nvidia card, and were more focused on it being a quick bit of humor.
lilhasselhoffer: Did you read my entire post? Perhaps if you had, you wouldn't have restated what I said.
AMD designed the cheapest cooler that would meet the thermal limitations of their card. This meant a lower priced final product, but the performance was "terribad." You couldn't overclock, the cards put out a bunch of heat, and worst of all they were noisy. AMD cut cost, upped heat, and didn't put out an appreciably better product when it came to raw numbers. The custom coolers rolled out, and AMD based GPUs actually had a chance. When a custom cooler can drop temperatures, decrease noise, and increase performance all at once you have to admit the initial cooler was a twice baked turd.
Additionally, GPUs are sold with a cooler and removing it violates any warranties related to that card. You really want to argue that AMD assumed most people would void their warranties to bring their GPUs to noise/heat parity with the Nvidia offerings? That's insane.
I'm not saying that Nvidia can do no wrong. Fermi was crap; it existed because GPU computing was all the rage and Nvidia "needed" to compete with the performance of AMD at the time. I'm not saying there are any viable excuses, just that there is no proof that Arctic Islands means a cooler chip. Arguing that the name, history, or anything else ensures that is foolish. We won't have an answer until these GPUs start appearing, and discussion before that is speculation at best. Arguing over wild speculation is pointless.
I was agreeing with you, not making a retort to your post... Sorry if it came off wrong.
arbiter: On the Nvidia card, did that heat cripple performance by 20%? Or did the Nvidia card still run pretty much as it was meant to? Really, the reason AMD took the most heat is more due to the fact that they sold the cards as "up to #### MHz". When you use that wording, it usually means you won't get that top end most of the time.
It went up to 105°C and could just as well cause issues. The solution is the same as with the AMD card: have a better-cooled case or use some form of airflow to keep the heat from stagnating inside the card. AMD's driver update, which changed how the fan profile was handled, helped the issue, and adding some nice airflow helped keep the temps down easily in both cases.
Either way, both NVidia and AMD heard the cries and have decided to alleviate the issue on both ends.
#62
lilhasselhoffer
GhostRyder: ...
I was agreeing with you, not making a retort to your post... Sorry if it came off wrong.
...
My misunderstanding. My apologies.
#63
Aquinus
Resident Wat-man
lilhasselhoffer: Yes, you'll also have to decrease voltage inside the chip, but if you look at a transistor as a very poor resistor you'll see that power = amperage * voltage = amperage^2 * resistance. To decrease the power flowing through the transistor, just to match the same thermal limits of the old design, you need to either halve the amperage or quarter the resistance. While this is possible, AMD has had a tendency not to do this.
That's not how resistors or circuits in a CPU work with respect to parts that are operating as logic. Since we're talking clock signals, not constant voltage, we're talking about impedance, not resistance, because technically a clock signal can be described as an AC circuit. As a result, it's not as simple as you think it is. On top of that, reducing the die size can very well impact the gap in a transistor. Smaller gaps mean a smaller electric potential is required to open or close it. Less gap means less impedance, even if the voltage is almost as high (maybe a little lower, 0.1 volts?). So while you're correct that resistance increases on the regular circuitry because the wires are smaller, it does not mean transistors' impedance to a digital signal is higher. In fact, impedance on transistors has continued to go down as smaller manufacturing nodes are used.

Lastly, impedance on a transistor depends on how strong the driving voltage difference is between the emitter and the base for an NPN transistor versus grounding the base for PNP transistors to open them up.

Also, you made a false equivalence. You assume resistance doubles when circuit size is halved, which is not true. Resistance might increase, but not at that kind of rate. It depends on a lot of factors.
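As a side note on the power discussion: the usual first-order model for switching power in CMOS logic is dynamic power P ≈ α·C·V²·f rather than a plain I²R resistor. Below is a minimal sketch of how a node shrink's lower capacitance and supply voltage can cut power even at the same clock; the 28 nm vs. 14 nm scaling factors in it are illustrative assumptions, not AMD or foundry figures:

```python
# First-order CMOS switching power: P = alpha * C * V^2 * f
# alpha = activity factor, C = switched capacitance, V = supply voltage, f = clock.
# The 28 nm -> 14 nm scaling factors below are illustrative assumptions only.
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts ** 2 * f_hz

p_28nm = dynamic_power(alpha=0.2, c_farads=1.0e-9, v_volts=1.20, f_hz=1.0e9)
p_14nm = dynamic_power(alpha=0.2, c_farads=0.6e-9, v_volts=1.00, f_hz=1.0e9)  # assume ~40% less C, lower Vdd

print(f"28 nm-ish block: {p_28nm:.2f} W, 14 nm-ish block: {p_14nm:.2f} W "
      f"({p_14nm / p_28nm:.0%} of before)")
```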
#64
progste
Imagine the Irony if the stock versions of these cards get to 90°C XD
#65
64K
progste: Imagine the Irony if the stock versions of these cards get to 90°C XD
AMD haters would be merciless in ridiculing that card if it did happen. The joke would be so obvious that I can't believe AMD would have chosen Arctic Islands if it's going to be a hot, inefficient GPU. AMD needs to be getting everything right for the near future. Their stock has fallen 20% in the last week and a half. I wish them the best, but mistakes are a luxury they can't afford right now.
#66
Vlada011
The next news will be that AMD delays the R9 390X and goes right to 14 nm...
Five or six months ago I was convinced they would launch on 20 nm, 20% stronger than GM200, with 3D HBM memory, extreme bandwidth, incredible fps,
a card made for 4K resolution... typical for AMD. I don't read their news any more, only the headline and maybe a few words more...
I don't want to read anything before the R9 390X shows up, because they send out news only to shift attention away from the main questions:
the specifications and performance of the R9 390X,
the distance from the GTX 980 and TITAN X,
and the temperatures, noise, and power consumption.
The last time customers waited this long, AMD made a miracle; they almost beat TITAN.
They didn't beat it; TITAN was the better card, with less heat, better OC, a better gaming experience, and more video memory, but they made a miracle; nobody expected the same performance as NVIDIA's premium card. The main problem is that AMD still has no better card than that Hawaii model, which is almost the same as a crippled GK110. But now it's the middle of 2015, and TITAN was launched at the beginning of 2013. And NVIDIA has since had four stronger models, TITAN Black, GTX 780 Ti, GTX 980, and TITAN X, and a fifth, the GTX 980 Ti, is finished and only needs a few weeks to install chips on boards and send them to vendors when the time comes.
The gap between NVIDIA and AMD is huge now, and it's time for AMD to make something good and drive down the price of the GTX 980 Ti.
#67
Casecutter
HumanSmoke: You were in a coma during the whole Fermi/Thermi frenzy?
8-10W an hour is noteworthy???
I meant the power/heat "narrative" is new in being directed toward AMD, not the topic in general.

While not specifically a cost concern for an individual computer, when you have three that are sleeping, as I do, it is worth being aware of. We should be looking at all such "non-beneficial" loads, or "vampire usage," on everything. This should be just as disquieting, and regarded as almost as wasteful (if not more so, since nothing is actually happening), for your household as the up-front efficiencies such products are marketed around, and the same goes for its effect on a community-wide basis and on the regional power grid.

I'm astounded... by your need to deliver "point by point" discord; I didn't mean to rile you personally. o_O
#68
lilhasselhoffer
Aquinus: That's not how resistors or circuits in a CPU work with respect to parts that are operating as logic. Since we're talking clock signals, not constant voltage, we're talking about impedance, not resistance, because technically a clock signal can be described as an AC circuit. As a result, it's not as simple as you think it is. On top of that, reducing the die size can very well impact the gap in a transistor. Smaller gaps mean a smaller electric potential is required to open or close it. Less gap means less impedance, even if the voltage is almost as high (maybe a little lower, 0.1 volts?). So while you're correct that resistance increases on the regular circuitry because the wires are smaller, it does not mean transistors' impedance to a digital signal is higher. In fact, impedance on transistors has continued to go down as smaller manufacturing nodes are used.

Lastly, impedance on a transistor depends on how strong the driving voltage difference is between the emitter and the base for an NPN transistor versus grounding the base for PNP transistors to open them up.

Also, you made a false equivalence. You assume resistance doubles when circuit size is halved, which is not true. Resistance might increase, but not at that kind of rate. It depends on a lot of factors.
One, I've started by stating that a transistor is approximated as a poor resistor. While incorrect, this is the only way I know of to figure out bled off energy (electrical to thermal) without resorting to immensely complicated mathematics that are beyond my ken. It also makes calculation of heat transference a heck of a lot easier.

Two, I said exactly that. In a simple circuit, voltage can be expressed as amperage multiplied by resistance. Power can be expressed as amperage multiplied by voltage. I took the extra step and removed the voltage term from the equation, because transistors generally have a fixed operational voltage depending upon size. As that is difficult, at best, to determine, I didn't want it to muddy the water.

Third, where exactly did I suggest resistance doubles? I cannot find it in any of my posts. What I did find was a reference to circuit size being halved, which quarters the available surface area to conduct heat. Perhaps this is what you are referring to? I'd like clarification, because if I did say this I'd like to correct the error.


All of this is complicated by a simplistic model, but it doesn't take away from my point. None of the math, or assumed changes, means that the Arctic Islands chips will run cool, or even cooler than the current Volcanic Islands silicon. Yes, AMD may be using a 75% space-saving process to increase the transistor count by only 50%; yes, the decreased transistor size could well allow a much smaller gate voltage; and yes, the architecture may have been altered to be substantially more efficient (thus requiring fewer clock cycles to perform the same work). All of this is speculation. Until I can buy a card, or see some plausibly factual test results, anything said is wild speculation.
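To make the simplified resistor model in this exchange concrete, here is a small sketch that treats a transistor as a crude resistor (P = I²R) and shows what halving the feature size does to the area available to shed that heat; all numbers are illustrative assumptions, not measurements of any AMD part:

```python
# Crude "transistor as a poor resistor" model from the exchange above:
# P = I^2 * R, with that heat spread over the device's surface area.
# All numbers are illustrative assumptions, not measurements of any real GPU.
def power_and_density(current_a, resistance_ohm, edge_um):
    power_w = current_a ** 2 * resistance_ohm   # P = I^2 * R
    area_um2 = edge_um ** 2                     # halving the edge quarters the area
    return power_w, power_w / area_um2

# Same assumed current and resistance, before and after halving the feature size.
p_old, d_old = power_and_density(current_a=1e-4, resistance_ohm=1e3, edge_um=2.0)
p_new, d_new = power_and_density(current_a=1e-4, resistance_ohm=1e3, edge_um=1.0)

print(f"power unchanged: {p_new == p_old}, power density x{d_new / d_old:.0f}")
```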
#69
HumanSmoke
Casecutter: I meant the power/heat "narrative" is new in being directed toward AMD, not the topic in general.
Isn't that always the case with tech as emotive as the core components of a system? The failings - perceived or real - of any particular architecture/SKU are always held up in comparison with their contemporaries and historical precedent. When one company drops the ball, people are only too eager to fall upon it like a pack of wolves. Regarding heat/power, the narrative shifts every time an architecture falls outside of the norm. The HD 2900XT (an AMD product) was pilloried in 2007, the GTX 480/470/465 received the attention three years later, and GCN, in its large-die, compute-oriented incarnation, comes in for attention now. The primary difference between the present and the past is that in previous years, excessive heat and power were just a negative point that could be ameliorated by outright performance - and there are plenty of examples I can think of, from the 3dfx Voodoo 3 to the aforementioned FX 5800U and GeForce 6800 Ultra/Ultra Extreme. The present day sees temps and input power limit performance due to throttling, which makes the trade-off less acceptable for many.
Casecutter: I'm astounded... by your need to deliver "point by point" discord; I didn't mean to rile you personally. o_O
Well, I'm not riled. You presented a number of points and I commented upon them individually for the sake of clarity, and to lessen the chances that anyone here might take my comments out of context. I also had three questions regarding your observations. Loading them into a single paragraph lessens their chances of being answered - although I note that splitting them up as individual points fared no better in that regard :laugh: ....so there's that rationale mythbusted.
#70
crazyeyesreaper
Not a Moderator
Bjorn_Of_Iceland: So is my GTX 780... a 2-year-old card... and the 980 is not too far above, so even a 780 Ti can keep it in check.

AMD is lagging that much; they needed to skip 20 nm just to be competitive.
A heavily overclocked GTX 780 keeps up with the 970 just fine; meanwhile an overclocked 780 Ti can get close to the 980.

As such, Nvidia did a lot of R&D to push performance up enough to counter the overclocked previous gen by a few percentage points. Titan X and 980 Ti offer what Fury from AMD will offer, so they are relatively similar for now in terms of performance. Nothing's really changed that much.

W1zz managed to get a 17% performance boost on the GTX 780 with overclocking; on the 780 Ti he got a further 18% performance boost.

So if we say 10% performance across the board via overclocking, then yes, the GTX 780 compares to the 970 while the 780 Ti compares to a 980.




Add 10% to the 780 and 10% to the 780 Ti and they have no issues keeping up with the 970 and 980 for the most part. It is game dependent, but even in the averaged scenario across a multitude of games the result remains the same.
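A rough sketch of that relative-performance arithmetic is below; the baseline performance indices are made-up placeholders just to show the math, not figures from W1zzard's reviews:

```python
# Relative-performance arithmetic from the post above. Baseline indices are
# made-up placeholders to show the math, not figures from W1zzard's reviews.
baseline = {"GTX 780": 100, "GTX 780 Ti": 115, "GTX 970": 110, "GTX 980": 125}
oc_gain = 0.10  # the assumed "10% across the board via overclocking"

for old_card, new_card in [("GTX 780", "GTX 970"), ("GTX 780 Ti", "GTX 980")]:
    overclocked = baseline[old_card] * (1 + oc_gain)
    print(f"{old_card} +10% = {overclocked:.0f} vs stock {new_card} = {baseline[new_card]}")
```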
#71
xfia
buggalugs: AMD should have dumped TSMC long ago, although there aren't that many choices. AMD should try to do a deal with Samsung.
They have been working with Samsung for years, and they are both founding members of the HSA Foundation. They have also been in talks about using 14 nm for AMD's new GPUs and CPUs for some time.