# NVIDIA GM204 and GM206 to Tape-Out in April, Products to Launch in Q4?



## btarunr (Apr 21, 2014)

It looks like things are going horribly wrong with the 20 nm manufacturing process at TSMC, NVIDIA and AMD's principal foundry partner, which is throwing a wrench into the works at NVIDIA, forcing it to re-engineer an entire lineup of "Maxwell" GPUs on the existing 28 nm process. Either that, or NVIDIA is confident of delivering an efficiency leap with Maxwell on the existing, mature 28 nm process, saving costs in the process. NVIDIA is probably drawing comfort from the excellent energy efficiency demonstrated by its Maxwell-based GeForce GTX 750 series. According to a 3DCenter.org report, NVIDIA's next mainline GPUs, the GM204 and GM206, will be built on the 28 nm process and the "Maxwell" architecture, and will tape out later this month. Products based on the two, however, can't be expected before Q4 2014, as late as December, or even January 2015. 

GM204 succeeds GK104 as the company's next workhorse performance-segment silicon, which could power graphics card SKUs ranging all the way from US $250 to $500. An older report suggests that it could feature as many as 3,200 CUDA cores. The GM204 could be taped out in April 2014, and the first GeForce products based on it could launch no sooner than December 2014. The GM206 is the company's next mid-range silicon, which succeeds GK106. It will tape out in April, alongside the GM204, but products based on it will launch only in January 2015. The GM200 is a different beast altogether. There's no mention of which process the chip will be based on, but it will succeed the GK110, and should offer performance increments worthy of being a successor. For that, it has to be based on the 20 nm process. It will tape out in June 2014, and products based on it will launch only in or after Q2 2015.

*View at TechPowerUp Main Site*


----------



## matar (Apr 21, 2014)

28nm I am not buying.


----------



## Razorfang (Apr 21, 2014)

Or it could be a conspiracy to extend the product line on both sides.


----------



## LAN_deRf_HA (Apr 21, 2014)

Would this then explain the specs for the "880" not being as impressive as expected?


----------



## JTristam (Apr 21, 2014)

If this is true then it sucks. Q4 2014/Q1 2015 is way too long. I was expecting Maxwell to be released at least this summer.


----------



## mroofie (Apr 21, 2014)

waaaat ??????

What's going to be released this year? This article is not making sense -_-

And TSMC can go *** themselves.
NVIDIA should look for a new partner, because this is ridiculous.


----------



## mroofie (Apr 21, 2014)

Razorfang said:


> Or it could be a conspiracy to extend the product line on both sides.


The GTX 750/750 Ti has been a success, so I'm not sure where you're getting this idea about a conspiracy.


----------



## seronx (Apr 21, 2014)

My numbers:

28 nm GM104 silicon
~7 billion transistors
3,072 CUDA cores
192 TMUs
48 ROPs
6.1 single-precision TFLOP/s - 2.8 double-precision TFLOP/s

384-bit wide GDDR5 memory interface
6 GB standard memory amount
384 GB/s memory bandwidth
Clock speeds of 900 MHz core, 1000 MHz GPU Boost, 8 GHz memory
250W board power


28 nm GM106 silicon
~5 billion transistors
1,792 CUDA cores
112 TMUs
32 ROPs
3.9 single-precision TFLOP/s - 0.9 double-precision TFLOP/s
256-bit wide GDDR5 memory interface
4 GB standard memory amount
224 GB/s memory bandwidth
Clock speeds of 1000 MHz core, 1100 MHz GPU Boost, 7 GHz memory
150W board power
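
For anyone wanting to sanity-check those throughput figures, the back-of-envelope math is simple (a minimal sketch; the core counts and clocks above are speculation, not confirmed specs):

```python
# Peak-rate check for the speculated "GM104"/"GM106" numbers above.
# SP TFLOP/s = CUDA cores * 2 FLOPs per clock * boost clock (GHz) / 1000
# Bandwidth (GB/s) = bus width (bits) / 8 * effective memory clock (GHz)

def peak_sp_tflops(cuda_cores: int, boost_ghz: float) -> float:
    """Theoretical peak single-precision throughput in TFLOP/s."""
    return cuda_cores * 2 * boost_ghz / 1000

def bandwidth_gbs(bus_bits: int, mem_ghz: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_bits / 8 * mem_ghz

print(peak_sp_tflops(3072, 1.0), bandwidth_gbs(384, 8.0))  # ~6.1 TFLOP/s, 384 GB/s
print(peak_sp_tflops(1792, 1.1), bandwidth_gbs(256, 7.0))  # ~3.9 TFLOP/s, 224 GB/s
```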


----------



## mroofie (Apr 21, 2014)

seronx said:


> My numbers:
> 
> 28 nm GM104 silicon
> ~7 billion transistors
> ...


150 W and 250 W?
Please go look again at the 750 Ti
and comment back with the correct results.


----------



## seronx (Apr 21, 2014)

mroofie said:


> 150 W and 250 W?
> Please go look again at the 750 Ti
> and comment back with the correct results.


GK106 -> 140 Watts
GM107 -> 60 Watts

GK104 -> 230 Watts
GM106 -> 150 Watts

GK110 = GM104


----------



## Relayer (Apr 21, 2014)

Better tape out soon if it's going to be this month.


----------



## HumanSmoke (Apr 21, 2014)

seronx said:


> My numbers:
> 
> 28 nm GM104 silicon
> ~7 billion transistors
> ...


Not sure how you arrived at that calculation. It's highly unlikely that Nvidia would offer a 1:2 rate for FP64 on the GM204, any more than it did with GK104 and GF114/104 before it. Double precision 1. is unneeded for the gaming segment, 2. adds to the power budget, and 3. adds die space.
If the GM204 is an analogue of the previous 104 boards then FP64 will be culled. It was 1:12 in the GF114/104, and 1:24 in GK104. Keeping the FP64 ability at a nominal level would also protect Nvidia's margins on the existing Titan/K6000/K20/K40 product lines - and more to the point, keep them relevant, since there's no way Nvidia makes a GK110 replacement on 28nm - which means holding out for the 16nm FinFET node (20nm BEOL + 16nm FEOL) for a successor.


----------



## seronx (Apr 21, 2014)

HumanSmoke said:


> Not sure how you arrived at that calculation. It's highly unlikely that Nvidia would offer a 1:2 rate for FP64 on the GM204, any more than it did with GK104 and GF114/104 before it. Double precision 1. is unneeded for the gaming segment, 2. adds to the power budget, and 3. adds die space.
> If the GM204 is an analogue of the previous 104 boards then FP64 will be culled. It was 1:12 in the GF114/104, and 1:24 in GK104.


GM107 => 1/8th
GM106 => 1/4th
GM104 => 1/2
GM200 => Full DP.

The future is compute shading, which will be reliant on 64-bit maths.


----------



## hardcore_gamer (Apr 21, 2014)

matar said:


> 28nm I am not buying.



Does the process node matter if the card delivers good performance and power efficiency?


----------



## MxPhenom 216 (Apr 21, 2014)

seronx said:


> GM107 => 1/8th
> GM106 => 1/4th
> GM104 => 1/2
> GM200 => Full DP.
> ...



You are pulling so much of this out of your ass. Unless you have some insider info.


----------



## mroofie (Apr 21, 2014)

Relayer said:


> Better tape out soon if it's going to be this month.


 9 days left lol then we have to wait until next year for mid-range


----------



## HumanSmoke (Apr 21, 2014)

seronx said:


> GM107 => 1/8th
> GM106 => 1/4th
> GM104 => 1/2
> GM200 => Full DP.
> The future is compute shading, which will be reliant on 64-bit maths.


Really? I always thought that compute shading tended to only use FP64 for professional simulations and the like. Gaming compute - ambient occlusion, global illumination, motion blur, particle/water/smoke/fog effects, depth of field, etc. - was almost entirely single precision based. If it were double precision based, then wouldn't it stand to reason (as an example) that an R9 290X's (704 GFLOPS FP64) ability at applying compute shader image quality options would be markedly inferior to the HD 7970's (1075 GFLOPS)?
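
Quick back-of-envelope check on those two figures (a sketch; the shader counts and clocks are the public reference specs, and the 1/8 and 1/4 FP64 rates are the published ones for Hawaii and Tahiti):

```python
# Peak FP64 GFLOPS = shaders * 2 FLOPs per clock * clock (GHz) / FP64 rate divisor

def fp64_gflops(shaders: int, clock_ghz: float, rate_divisor: int) -> float:
    """Theoretical peak double-precision throughput in GFLOPS."""
    return shaders * 2 * clock_ghz / rate_divisor

print(fp64_gflops(2816, 1.00, 8))  # R9 290X, 1/8 FP64 rate: ~704 GFLOPS
print(fp64_gflops(2048, 1.05, 4))  # HD 7970 GHz Ed. clocks, 1/4 rate: ~1075 GFLOPS
```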


mroofie said:


> 9 days left lol then we have to wait until next year for mid-range


FWIW, the original forum post this article is based on is dated 15th April.


----------



## LAN_deRf_HA (Apr 21, 2014)

This happened not long ago, where we were stuck on a node for a while and it sucked for us, but the efficiency of Maxwell might make up for it. The thing that really sucks is this Q4 nonsense.


----------



## mroofie (Apr 21, 2014)

HumanSmoke said:


> Really? I always thought that compute shading tended to only use FP64 for professional simulations and the like. Gaming compute - ambient occlusion, global illumination, motion blur, particle/water/smoke/fog effects, depth of field, etc. - was almost entirely single precision based. If it were double precision based, then wouldn't it stand to reason (as an example) that an R9 290X's (704 GFLOPS FP64) ability at applying compute shader image quality options would be markedly inferior to the HD 7970's (1075 GFLOPS)?
> 
> FWIW, the original forum post this article is based on is dated 15th April.


April 15th? For what, the tape-out or the release? xD


----------



## HumanSmoke (Apr 21, 2014)

mroofie said:


> April 15th? For what, the tape-out or the release? xD


15th April is the date of the original post (16th April local time - my time zone is 10 hours ahead of Germany) stating tape-out this month.
So, if the tape-out hadn't happened by that stage, it left 15 days in the month for it to happen....assuming it hadn't already occurred - then you're in the realms of trying to prove a negative.


----------



## pjl321 (Apr 21, 2014)

I know this could probably never happen, but wouldn't it be amazing if either nVidia or AMD (even less likely) signed up to use Intel's foundries, giving us Maxwell or Pirate Islands at a pretty much ready 14nm!

It actually makes a lot of sense for all parties: Intel needs more of a reason than its own chips to really push forward with 14nm, and for nVidia and AMD it's a highly advanced and relatively mature/tested process.

Win, freakin win baby!


----------



## TheBrainyOne (Apr 21, 2014)

I will literally drop a Hydrogen bomb on TSMC's Foundries if even one of their spokespeople says, "Moore's Law is still being followed today."

Edit: AMD too will stick to 28 nm this year.


----------



## xenocide (Apr 21, 2014)

pjl321 said:


> I know this could probably never happen, but wouldn't it be amazing if either nVidia or AMD (even less likely) signed up to use Intel's foundries, giving us Maxwell or Pirate Islands at a pretty much ready 14nm!
> 
> It actually makes a lot of sense for all parties: Intel needs more of a reason than its own chips to really push forward with 14nm, and for nVidia and AMD it's a highly advanced and relatively mature/tested process.
> 
> Win, freakin win baby!


 
That will never happen.  Hell, Intel can barely get their 14nm process running correctly, and they have the best engineers in the industry.  Not to mention Intel has literally nothing to gain from opening their top-of-the-line fabs to competitors.  Business aside, you can't just take a microprocessor design and slap it on a process node that's 30% smaller; it doesn't work like that.  They would have to spend a few months redesigning and testing it to ensure it's functioning correctly, efficient, and cost effective.


----------



## librin.so.1 (Apr 21, 2014)

pjl321 said:


> I know this could probably never happen, but wouldn't it be amazing if either nVidia or AMD (even less likely) signed up to use Intel's foundries, giving us Maxwell or Pirate Islands at a pretty much ready 14nm!
> 
> It actually makes a lot of sense for all parties: Intel needs more of a reason than its own chips to really push forward with 14nm, and for nVidia and AMD it's a highly advanced and relatively mature/tested process.
> 
> Win, freakin win baby!




>implying CPUs and GPUs use the same kind of process

The transistors/chips for CPUs and for GPUs are made in different ways, to cater to how each of these kinds of ICs works.

As a very good example to illustrate this, if you know/remember, this was a major hindrance for AMD when they made their latest APUs and tried to keep the GPU part good – using a process meant for CPUs would have non-trivially harmed the performance of the GPU side, and vice versa. So they had to compromise, which is also the reason the CPU part on their latest APUs doesn't OC as well any more, compared to their previous APUs.
So yeah, using Intel's fabs for those GPUs could actually mean worse performance and power efficiency, despite being 14nm.


----------



## refillable (Apr 21, 2014)

TheBrainyOne said:


> I will literally drop a Hydrogen bomb on TSMC's Foundries if even one of their spokespeople says, "Moore's Law is still being followed today."


Lol that was ridiculously funny.


----------



## HumanSmoke (Apr 21, 2014)

TheBrainyOne said:


> I will literally drop a Hydrogen bomb on TSMC's Foundries if even one of their spokespeople says, "Moore's Law is still being followed today."


Why would TSMC say that now, considering they know full well that the processes they timelined for are falling behind schedule due to litho tools and energy demands slipping?
Back when people were a little more confident of EUV's ramp - a year or more ago - people might have seen a business-as-usual scenario, but ASML's delays in wafer and validation tooling (which caused an influx of funding from their customers), as well as TSMC's own well-publicised recent false start, have certainly stopped any talk of the continuation of transistor density per dollar.


----------



## pjl321 (Apr 21, 2014)

xenocide said:


> That will never happen.  Hell, Intel can barely get their 14nm process running correctly, and they have the best engineers in the industry.  Not to mention Intel has literally nothing to gain from opening their top-of-the-line fabs to competitors.  Business aside, you can't just take a microprocessor design and slap it on a process node that's 30% smaller; it doesn't work like that.  They would have to spend a few months redesigning and testing it to ensure it's functioning correctly, efficient, and cost effective.



Sure, it wouldn't be straightforward, but I think the finished product would justify the time/effort/money needed to achieve this. In say 3-6 months they would gain 2-4 years' worth of waiting for TSMC to get there.
As for Intel, I am pretty sure 14nm is ready and has been for some time, but they are delaying purely from a financial standpoint: why spend billions on new facilities to save millions on smaller chips?

But if they had other big players paying to use their fabs then it makes financial sense again.

As for the competition standpoint, Intel is not competing in the discrete gaming graphics card world, so that shouldn't come into play.

Hell, I think it would be really cool if Intel just bought nVidia outright! We would get amazing on-board graphics, with excellent drivers, and some absolutely monstrous discrete graphics chips, as everything is in-house on the most advanced processes on the planet.


----------



## TheBrainyOne (Apr 21, 2014)

HumanSmoke said:


> Why would TSMC say that now, considering they know full well that the processes they timelined for are falling behind schedule due to litho tools and energy demands slipping?
> Back when people were a little more confident of EUV's ramp - a year or more ago - people might have seen a business-as-usual scenario, but ASML's delays in wafer and validation tooling (which caused an influx of funding from their customers), as well as TSMC's own well-publicised recent false start, have certainly stopped any talk of the continuation of transistor density per dollar.


Can't take a joke, can you?

BTW, considering that NVIDIA has had some experience with Maxwell (GM107) and has had lots and lots of experience with 28 nm (3 years' worth of experience at least), GM104 and GM106 should be a worthwhile upgrade. Even if NVIDIA is one year late to the 20 nm party, it won't matter, because 20 nm production will be in full swing by then.



pjl321 said:


> But if they had other big players paying to use their fabs then it makes financial sense again.



It doesn't. Right now, people in the PC space are ready to buy Intel's GPUs (or their SoCs for smartphones and tablets) because their process advantage compensates for their architecture disadvantage. If Intel shares their process with NVIDIA or AMD, they lose their business in those markets. The CPU market is declining, so it makes no sense for AMD to use Intel's fabs.


----------



## HumanSmoke (Apr 21, 2014)

pjl321 said:


> As for the competition standpoint, Intel is not competing in the discrete gaming graphics card world, so that shouldn't come into play.


Not necessarily.  Intel's Xeon Phi competes directly with Nvidia's Tesla, and to a lesser degree AMD's FirePro server boards, in the math co-processor (GPGPU) market.


pjl321 said:


> Hell, I think it would be really cool if Intel just bought nVidia outright! We would get amazing on-board graphics, with excellent drivers, and some absolutely monstrous discrete graphics chips, as everything is in-house on the most advanced processes on the planet.


The idea has been raised before, but Intel seems committed to x86. Nvidia's existing IP, used in a diminishing number of Intel products, might save them a few bucks on licenses, but Intel already have a roadmap in place for professional parallelization. Intel have no interest in gaming, have their own baseband IP, and an ARM architectural license. Add in Nvidia's stock buy-back program, and Nvidia might cost more than it's worth - especially if Jen-Hsun required a high-profile position at Intel as part of the deal.


TheBrainyOne said:


> Can't take a joke, can you?


Certainly....if they're funny.


----------



## WhoDecidedThat (Apr 21, 2014)

TheBrainyOne said:


> I will literally drop a Hydrogen bomb on TSMC's Foundries if even one of their spokespeople says, "Moore's Law is still being followed today."


----------



## buggalugs (Apr 21, 2014)

This is sad.


----------



## TheHunter (Apr 21, 2014)

It's been mentioned at 3,200 cores (more "official"); another leak @ VideoCardz said 2,560 cores with 64 ROPs, both with a 256-bit bus.


----------



## MxPhenom 216 (Apr 21, 2014)

2,560 and 64 ROPs sounds better. Though I'd like to see a 512-bit bus, with Nvidia GPUs that would be a big die. I can see Nvidia saving that for their GM210, once 20nm is available.


----------



## Casecutter (Apr 21, 2014)

Let's hope they can reduce die size on these 28nm Maxwells and still provide a decent bump in performance, while lowering the price… along with power.  If Nvidia doesn't offer a GM204 chip that performs more toward GTX 780 levels while holding well below $400, does lower power alone really justify a move for those with GK104s at this point?  I can't see GTX 770 owners doing the switch for a <20% increase while anteing up more cash, even with, IDK, say 30%-40% better efficiency.  Is this going to have the right mix (price/perf/efficiency) to move Kepler owners?

It would be a super upgrade for anyone still on a 570/580 Fermi, but even an original GTX 680 owner would have a tough call if 20nm might end up showing up, say, 14 months from now.  Or is 20nm even further away?


----------



## HumanSmoke (Apr 21, 2014)

Casecutter said:


> If Nvidia doesn't offer a GM204 chip that performs more toward GTX 780 levels while holding well below $400, does lower power alone really justify a move for those with GK104s at this point?  I can't see GTX 770 owners doing the switch for a <20% increase while anteing up more cash, even with, IDK, say 30%-40% better efficiency.  Is this going to have the right mix (price/perf/efficiency) to move Kepler owners?


Depends upon:
1. Pricing of current cards at the time of launch
2. Anything AMD may have as an answer
3. Whether the architecture tweaks produce a tangible benefit over the previous cards in the pricing segment. I went from an overclocked GTX 670 (for all intents and purposes a GTX 680) to a GTX 780 based solely upon needing a cheap, solid performer at 2560x1440. The graphs tell me that the difference between the two cards is 31% (Palit GTX 670 Jetstream / EVGA GTX 780 SC), but the reality is that the 670 just isn't cut out for that resolution, which becomes more apparent when overclocking is factored into the equation.


Casecutter said:


> It would be a super upgrade for anyone still on a 570/580 Fermi, but even an original GTX 680 owner would have a tough call if 20nm might end up showing up, say, 14 months from now.  Or is 20nm even further away?


Who knows? Possibly not even TSMC. By 20nm I presume you mean TSMC's CLN16FF process, since the planar 20nm (CLN20SOC) isn't suitable for high-power GPUs, and neither Nvidia nor AMD is using the process - at least not for GPUs.
So you have a choice: design your next architecture around the next process node and hope the ramp of TSMC's process is smooth, or use the existing process to tune the architecture in readiness for a process change. The latter gives you proof of concept at minimal risk whilst introducing new SKUs (sales and marketing). AMD are already on record as saying that they won't be using 20nm this year, so have obviously come to the same conclusion.


----------



## alwayssts (Apr 21, 2014)

Wow.  This is indeed sad news.  

nVidia I can understand, as Huang has been openly bitching and moaning about the price/transistor curve of 20nm for a long time, to which TSMC responded by saying it was a blip that would not hold true in later nodes.  Also remember that nVidia came quite a bit later to 28nm, and this may be a carbon copy of that situation going on three years later, whereas AMD launched in late 2011, and nVidia a quarter or so later. 

Lisa Su stated in Q413 that AMD was taping out a 20nm chip last quarter (and referenced 14nm at CPA as this quarter).  It would seem awfully strange to suddenly abandon ship at this late stage, as they must have known the prices and realistic production schedule.  I always assumed they would tape out designs at the initial fab doing production (the one Apple is using) last quarter, and start production in ~May when TSMC is expanding production to other facilities and truly going to be doing mass production.  My hope was that whatever issues came out of the initial tape-out/samples could be figured out before that mass-production window, as that seemed a logical scenario and would mesh with a late-year release of products (~May/June + ~6 months).  Anyone expecting any kind of availability of new-generation chips before that was, with all due respect, crazy.  TBH, I don't think Lisa Su saying they are 'in the design phase' really goes against that thinking, nor does saying this year will be 28nm (as this would be end of year at the earliest, and probably not in huge availability).  I could see it going either way (Q414 or early 2015), but for all intents and purposes it makes sense to call it a 2015 process.

As for nVidia going to 28nm for another round of big chips, I really don't see the point.  Yeah, there are some efficiency improvements to be made versus GK104 and GK110 that could probably make sense on 28nm (like getting a 770-like product under 225W, a native chip to compete with Pitcairn, or a more efficient 48-ROP design than GK110), but the overall difference, the price to create all those chips, and their overall lifespan seem like a losing battle.  When you know going in that you'd be buying a new product at full price on an old 28nm process (which already has efficient products that are getting cheaper by the day), and that we'll be seeing 16nm in a year or so... it seems like a really iffy proposition.


----------



## alwayssts (Apr 21, 2014)

HumanSmoke said:


> Depends upon:
> 2. Anything AMD may have as an answer...
> 
> Who knows? Possibly not even TSMC. By 20nm I presume you mean TSMC's CLN16FF process, since the planar 20nm (CLN20SOC) isn't suitable for high-power GPUs, and neither Nvidia nor AMD is using the process - at least not for GPUs.
> So you have a choice: design your next architecture around the next process node and hope the ramp of TSMC's process is smooth, or use the existing process to tune the architecture in readiness for a process change. The latter gives you proof of concept at minimal risk whilst introducing new SKUs (sales and marketing). AMD are already on record as saying that they won't be using 20nm this year, so have obviously come to the same conclusion.



AMD has a decently efficient product in Hawaii, especially for its die size.  All they really need to do is use 1.35V 5.5GHz (1.5-1.55V 7GHz) RAM and up the default core clock a little.  Outside that, what else is there for them to do until 20nm?

Who says 20nm isn't suitable for GPUs and that they are not using it?  Just because it is aimed at lower voltage/less leakage/lower clocks doesn't mean a crapload of transistors could not run at a relatively low clock... GPUs being so parallel.  Even if they do run it at a higher voltage, it's still a 1.2x or so clock gain in the same power envelope, up to wherever the voltage/power curve tops out - granted, probably lower than on 28nm.  The reason it's aimed at SoCs is that at a lower voltage (~0.9V) it is supposedly around 1.3x more efficient, hence the greatest benefits will be in low-voltage chips.  Given how much logic will be needed to get a decent chip size for the bus width (even with cache to supplement the small die sizes, they may want 6GB, or a 384-bit bus) while not having a lot of power savings, low clocks could very well make sense (1.9x density, 1.2-1.3x power savings depending on clock/voltage).
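
To put rough numbers on that clock/voltage trade-off, here's a sketch using the first-order dynamic-power relation (P scales with C·V²·f; the ~0.8x switched-capacitance figure for the shrink is an assumption for illustration, and leakage is ignored):

```python
# First-order CMOS dynamic power: P ~ C * V^2 * f (leakage ignored).

def relative_power(cap_scale: float, volt_scale: float, freq_scale: float) -> float:
    """Board power relative to a 28nm baseline of 1.0."""
    return cap_scale * volt_scale ** 2 * freq_scale

# Shrink to 20nm (assumed ~0.8x capacitance), drop to ~0.9x voltage, same clock:
print(relative_power(0.8, 0.9, 1.0))  # ~0.65x power - the low-voltage SoC sweet spot

# Keep the voltage and spend the headroom on a ~1.2x clock bump instead:
print(relative_power(0.8, 1.0, 1.2))  # ~0.96x - i.e. +20% clocks in the same envelope
```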


----------



## HisDivineOrder (Apr 21, 2014)

Used to be, fabs loved the GPU makers.  Nowadays, the fabs see those same GPU makers as nobodies compared to the huge markets for mobile device chips.  So I fully expect any actual 20nm products that make it out the door to be prioritized for Qualcomm, excess Samsung demand, or even nVidia Tegra chips, rather than GPUs.

Did you really think that 750 Ti was a fluke?  It was a test run.  It was their beta test to see if Maxwell at 28nm would offer any benefit.  Looks like it did.  Expect a full transition for the next generation of cards to begin at once.  It'll affect the overall clock speeds and it'll probably make the chips bigger than nVidia likes (with a few cuts to their feature sets), but the real meat and potatoes of Maxwell was always performance per watt anyway, so being a bit bigger shouldn't hurt it as much as it has earlier products.

Also, remember nVidia announced (relatively) recently that they were focusing on building mobile device GPUs first and then scaling up from there, instead of the reverse.  Prioritizing 20nm for Tegra while pushing discrete GPUs to 28nm again would just be that strategy taking shape.

Not surprised.  Disappointed, yes.  I'm curious to see what they release in May/June to go with Intel's latest releases.  nVidia doesn't usually let a big Intel launch go by without at least hinting at a new product refresh/launch.

I'm expecting a bunch of rebrands.  AMD did it a few months back, so why can't nVidia get away with the same, right?  This is what happens when AMD doesn't compete.  Nobody else does, either.  Intel and nVidia both doing refreshes would be really indicative of that.


----------



## arbiter (Apr 21, 2014)

alwayssts said:


> AMD has a decently efficient product in Hawaii, especially for its die size.  All they really need to do is use 1.35V 5.5GHz (1.5-1.55V 7GHz) RAM and up the default core clock a little.  Outside that, what else is there for them to do until 20nm?



"hawaii" is was at its limit when amd released it. Most overclocks only net around 100mhz, 10% higher then stock. So AMD will have to make up a new gpu where as Nvidia already has one in maxwell.


----------



## Casecutter (Apr 21, 2014)

HumanSmoke said:


> I went from an overclocked GTX 670 (for all intents and purposes a GTX 680) to a GTX 780 based solely upon needing a cheap, solid performer at 2560x1440.


I suppose if at 1920x1080 with a GTX 770; but someone now looking at 2560x1440 may be compelled, especially if a GTX 780 can't get below $500. I think Nvidia would let it go EoL before marking down that GK110 price much/any further. Then $400 is kind of a "push" (a used 770 might earn $300) and the efficiency is a bonus.



HumanSmoke said:


> Who knows? Possibly not even TSMC.


Exactly: is 20nm out more than 14 months from now? Or is 20nm even further away?  These products would appear about six months off; if the bulk of 20nm GPUs then arrive some eight months later, that's a short life for this product line.  That's the real question I'm trying to unearth.


----------



## HumanSmoke (Apr 21, 2014)

alwayssts said:


> AMD has a decently efficient product in Hawaii, especially for its die size.  All they really need to do is use 1.35V 5.5GHz (1.5-1.55V 7GHz) RAM and up the default core clock a little


Usage still needs to be validated by AMD for their memory controllers. If it were a simple matter of using 7GHz effective memory ICs, don't you think that at least one AIB, if not AMD, would have already added them? No doubt validation is in the works, and AMD could conceivably stand pat with their lineup, although by the time the December holiday season rolls around, and if Nvidia are aiming to launch a new batch of silicon, what do you think AMD's reply will be? Nothing? A game bundle? Special Mistletoe Editions?


alwayssts said:


> Who says 20nm isn't suitable for GPUs and that they are not using it?  Just because it is aimed at lower voltage/less leakage/lower clocks doesn't mean a crapload of transistors could not run *at a relatively low clock*...


Kind of answered your own question. What I meant was GPUs in the performance/enthusiast bracket, where a low clock/low power budget isn't going to cut it. For entry/OEM/low-end GPUs? Sure, but what's the point? Nvidia's latest SKU (the GT 705) is using a recycled GF119 GPU, and AMD's own R7 240 traces its lineage back four generations of cards.
Now, given the lead-in time between design > mask tooling > tape-out, how long have Nvidia and AMD both known that CLN20SOC wasn't going to meet their requirements? Or have AMD and Nvidia just decided not to use the process node by choice - which would be a first as far as I can recall.


> So in terms of product and technology selection, certainly we need to be at the leading-edge of the technology roadmap. So what we've said in the past is certainly this year all of our products are in 28-nanometer across both, you know, graphics client and our semi-custom business. We are, you know, actively in the design phase for 20-nanometer and that will come to production. And then clearly we'll go to FinFET. So that would be the progression of it. - *Lisa Su, AMD, Q1 2014 CC*





alwayssts said:


> Even if they do run it at a higher voltage, it's still a 1.2x or so clock gain in the same power envelope, up to wherever the voltage/power curve tops out - granted, probably lower than on 28nm.  The reason it's aimed at SoCs is that at a lower voltage (~0.9V) it is supposedly around 1.3x more efficient, hence the greatest benefits will be in low-voltage chips.  Given how much logic will be needed to get a decent chip size for the bus width (even with cache to supplement the small die sizes, they may want 6GB, or a 384-bit bus) while not having a lot of power savings, low clocks could very well make sense (1.9x density, 1.2-1.3x power savings depending on clock/voltage).


With all these supposed gains, it must come as a real surprise that no one is particularly interested in CLN20SOC for GPUs, then. A mobile-orientated GPU of low power/good efficiency per watt would seem an ideal fit, and is something that is obviously missing from AMD's lineup. So, ideally suited to CLN20SOC, yet AMD have already poured cold water on GPUs at 20nm for this year. Strange, no?
I'm pretty sure Apple hasn't gobbled up all of TSMC's 20nm capacity.


----------



## Relayer (Apr 22, 2014)

Hawaii's O/C potential seems to be more limited by cooling (and voltage) than by a particular limitation of the silicon itself.

I swear I saw the Cryovenom 290 reviewed by Ocaholic as well, and it also managed 1300MHz (with extra voltage), but I can't find the review???


----------



## Vlada011 (Apr 22, 2014)

Hawaii is a far inferior chip to GK110.
Higher temps, not-so-good overclocking, and a weaker chip at default. Expected clocks for Hawaii are 1100-1200MHz; over 1150 you need water.

Never mind; for GK110 owners, especially people who have the fully unlocked chip, this news is not so bad. We have the performance and the time to wait, even until the end of 2016 and a premium 20nm Maxwell. Whoever bought Titan SLI a year ago will play games for two years on a premium chip. Others who need performance, maybe it's time to think about a GK110 with 2880 CUDA cores instead of the first Maxwell successor to GK104, especially if they still have Fermi or something else. It's not smart to wait for something when you don't have a scheduled date within 2-3-4 weeks and a known specification on the table.


----------



## vega22 (Apr 22, 2014)

Vlada011 said:


> over 1150 you need water.



mine does 1180 on air :thumb:

hoping for 1250+ when it gets wet 

but I agree, it is cooling limited.


----------



## Relayer (Apr 22, 2014)

Deleted.
Actually, sorry, this discussion is getting off-topic.


----------



## arbiter (Apr 22, 2014)

Relayer said:


> Hawaii's O/C potential seems to be more limited by cooling (and voltage) than by a particular limitation of the silicon itself.



I am going by what most review sites end up with; not many get over 1150MHz stable on their review cards, hence why I said what I said.


----------



## Casecutter (Apr 22, 2014)

This information tells us Nvidia will have mainstream parts ("Maxwell on 28nm") out before Christmas, with "mainstream on 20nm" in the market when... summer 2015?  I couldn't see Nvidia making this investment if they believed "mainstream on 20nm" was less than six months out from Q4/14.


----------



## HumanSmoke (Apr 22, 2014)

Casecutter said:


> This information tells us Nvidia will have mainstream parts ("Maxwell on 28nm") out before Christmas, with "mainstream on 20nm" in the market when... summer 2015?  I couldn't see Nvidia making this investment if they believed "mainstream on 20nm" was less than six months out from Q4/14.


I'd view the GM204 and GM206 as holiday cash cows, and to a lesser extent, a marketing necessity. By the time the Christmas/New Year holiday season rolls around, Nvidia's performance-segment cards (GTX 770/760) will be over six months old. If they don't hit that timeframe then Chinese New Year effectively kills any further additions until March 2015 - which is uncomfortably close to a year without an update.
As for 20nm GPUs, the process is one factor, but I'm also guessing that full DirectX 12 compliance is another, as is the choice of which memory controller to use and validate - as I'm pretty certain that launching a 20/16nm GPU with GDDR5 comes under the heading "last resort".


----------



## xenocide (Apr 23, 2014)

Since Kepler, Nvidia has been able to basically follow Intel's release cadence, because they are able to market what used to be mid-range GPUs against AMD's top end.  It's a win-win for Nvidia at least, and I have no complaints as long as the performance is there - discounting GPGPU, GK104 was a definite improvement over GF110.  I wouldn't be surprised to see a whole line of GM104s on 28nm with the refresh of GM110-117 being on 20nm.


----------



## Pholostan (Apr 23, 2014)

Nothing is going wrong at TSMC. *Their 20nm planar process was never intended for high-performance silicon.* It was made for low power, like small ARM SoCs and such. There is a reason Intel developed their tri-gate for 22nm. We will pretty much require a FinFET process below 28nm; planar will probably never work for high-performance chips like big GPUs. TSMC's 20nm planar is fine; it was not made for high performance. This is not news.


----------



## HumanSmoke (Apr 23, 2014)

Pholostan said:


> Nothing is going wrong at TSMC. *Their 20nm planar process was never intended for high-performance silicon.* It was made for low power, like small ARM SoCs and such. There is a reason Intel developed their tri-gate for 22nm. We will pretty much require a FinFET process below 28nm; planar will probably never work for high-performance chips like big GPUs. TSMC's 20nm planar is fine; it was not made for high performance. This is not news.


YES. Very much so.
TSMC's own roadmap prior to the cancellation of the high-performance CLN20G process actually makes it pretty clear that CLN20SOC is low-power optimized.


----------



## Casecutter (Apr 23, 2014)

HumanSmoke said:


> I'd view the GM204 and GM206 as holiday cash cows, and to a lesser extent, a marketing necessity.


Let's just hope they're not just "cows" intended to bring home the currency...

Add in the fact that AMD has already shown its ability to move Pitcairn and Tahiti pricing to some exceptional extents.  Nvidia has known the walls were narrowing: a GTX 750 Ti, while nice on power, is no 1080p gamer stunner, and I don't believe they can go to war on price with GK104. We can pretty much tell that mainstream on 20nm is looking to be out until summer 2015, so yes, Nvidia is getting the word out that you should wait six months and they'll have new submissions.
Please stand by.


----------

