
NVIDIA GM204 and GM206 to Tape-Out in April, Products to Launch in Q4?

Joined
Sep 7, 2011
Messages
2,785 (0.57/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
I will literally drop a hydrogen bomb on TSMC's foundries if even one of their spokespeople says, "Moore's Law is still being followed today."
Why would TSMC say that now, considering they know full well that processes they timelined for are falling behind schedule due to litho tools and energy demands slipping?
Back when people were a little more confident of EUV's ramp - a year or more ago, people might have seen a business as usual scenario, but ASML's delays in wafer and validation tooling (which caused an influx of funding from their customers), as well as TSMC's own well publicised false start recently have certainly stopped any talk of the continuation of transistor density per dollar.
 
Joined
May 19, 2009
Messages
224 (0.04/day)
That will never happen. Hell, Intel can barely get their 14nm process running correctly, and they have the best engineers in the industry. Not to mention Intel has literally nothing to gain from opening their top-of-the-line fabs to competitors. Business aside, you can't just take a microprocessor design and slap it on a process node that's 30% smaller; it doesn't work like that. They would have to spend a few months redesigning and testing it to ensure it functions correctly and is efficient and cost effective.

Sure, it wouldn't be straightforward, but I think the finished product would justify the time/effort/money needed to achieve this. In say 3-6 months they would gain 2-4 years' worth of waiting for TSMC to get there.
As for Intel, I am pretty sure 14nm is ready and has been for some time, but they are delaying purely from a financial standpoint: why spend billions on new facilities to save millions on smaller chips?

But if they had other big players paying to use their fabs then it makes financial sense again.

As for the competition standpoint, Intel is not competing in the discrete gaming graphics card world, so that shouldn't come into play.

Hell, I think it would be really cool if Intel just bought nVidia outright! We would get amazing onboard graphics, with excellent drivers, and some absolutely monstrous discrete graphics chips, as everything would be in-house on the most advanced processes on the planet.
 
Joined
Mar 30, 2014
Messages
105 (0.03/day)
Location
India
System Name Sony Xperia L
Processor Qualcomm Snapdragon MSM8930 @ 1.2 GHz
Memory 1 GB LPDDR2
Video Card(s) Qualcomm Adreno 305
Storage 8 GB inbuilt + 32 GB microSD
Display(s) 4.3" 480*854 TN Display
Power Supply 1750 mAh Li-Ion Battery
Software Android 4.2.2
Why would TSMC say that now, considering they know full well that processes they timelined for are falling behind schedule due to litho tools and energy demands slipping?
Back when people were a little more confident of EUV's ramp - a year or more ago, people might have seen a business as usual scenario, but ASML's delays in wafer and validation tooling (which caused an influx of funding from their customers), as well as TSMC's own well publicised false start recently have certainly stopped any talk of the continuation of transistor density per dollar.
Can't take a joke, can you?

BTW, considering that NVIDIA has had some experience with Maxwell (GM107) and has had lots and lots of experience with 28nm (3 years' worth of experience at least), GM104 and GM106 should be a worthwhile upgrade. Even if NVIDIA is one year late to the 20nm party, it won't matter because 20nm production will be in full swing by then.

But if they had other big players paying to use their fabs then it makes financial sense again.

It doesn't. Right now, people in the PC space are ready to buy Intel's GPUs (or their SoCs for smartphones and tablets) because their process advantage compensates for their architecture disadvantage. If Intel shares their process with NVIDIA or AMD, they lose their business in those markets. The CPU market is declining, so it makes no sense for AMD to use Intel's fabs.
 
Joined
Sep 7, 2011
Messages
2,785 (0.57/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
As for the competition standpoint, Intel is not competing in the discrete gaming graphics card world, so that shouldn't come into play.
Not necessarily. Intel's Xeon Phi competes directly with Nvidia's Tesla and, to a lesser degree, AMD's FirePro server boards in the math co-processor (GPGPU) market.
Hell, I think it would be really cool if Intel just bought nVidia outright! We would get amazing onboard graphics, with excellent drivers, and some absolutely monstrous discrete graphics chips, as everything would be in-house on the most advanced processes on the planet.
The idea has been raised before, but Intel seems committed to x86. Nvidia's existing IP used in a diminishing number of Intel products might save them a few bucks on licenses, but Intel already have a roadmap in place for professional parallelization. Intel have no interest in gaming, have their own baseband IP, and hold an ARM architectural license. Add in Nvidia's stock buyback program and Nvidia might cost more than it's worth - especially if Jen-Hsun required a high-profile position at Intel as part of the deal.
Can't take a joke, can you?
Certainly....if they're funny.
 
Joined
Jul 19, 2008
Messages
1,180 (0.20/day)
Location
Australia
Processor Intel i7 4790K
Motherboard Asus Z97 Deluxe
Cooling Thermalright Ultra Extreme 120
Memory Corsair Dominator 1866Mhz 4X4GB
Video Card(s) Asus R290X
Storage Samsung 850 Pro SSD 256GB/Samsung 840 Evo SSD 1TB
Display(s) Samsung S23A950D
Case Corsair 850D
Audio Device(s) Onboard Realtek
Power Supply Corsair AX850
Mouse Logitech G502
Keyboard Logitech G710+
Software Windows 10 x64
This is sad.
 
Joined
Apr 10, 2012
Messages
1,400 (0.30/day)
Location
78°55' N, 11°56' E
System Name -aLiEn beaTs-
Processor Intel i7 11700kf @ 5.055Ghz
Motherboard MSI Z490 Unify
Cooling Corsair H115i Pro RGB
Memory G.skill Royal Silver 4400 cl17 @ 4403mhz
Video Card(s) Zotac GTX 980TI AMP!Omega Factory OC 1418MHz
Storage Intel SSD 330, Crucial SSD MX300 & MX500
Display(s) Samsung C24FG73 144HZ
Case CoolerMaster HAF 932 USB3.0
Audio Device(s) X-Fi Titanium HD @ 2.1 Bose acoustimass 5
Power Supply CoolerMaster 850W v2 gold atx 2.52
Mouse Razer viper 8k
Keyboard Logitech G19s
Software Windows 11 Pro 21h2 64Bit
Benchmark Scores ► ♪♫♪♩♬♫♪♭
It's been mentioned as 3200 cores (the more "official" figure); another leak at videocardz said 2560 cores with 64 ROPs, both with a 256-bit bus.
 

MxPhenom 216

ASIC Engineer
Joined
Aug 31, 2010
Messages
13,009 (2.49/day)
Location
Loveland, CO
System Name Ryzen Reflection
Processor AMD Ryzen 9 5900x
Motherboard Gigabyte X570S Aorus Master
Cooling 2x EK PE360 | TechN AM4 AMD Block Black | EK Quantum Vector Trinity GPU Nickel + Plexi
Memory Teamgroup T-Force Xtreem 2x16GB B-Die 3600 @ 14-14-14-28-42-288-2T 1.45v
Video Card(s) Zotac AMP HoloBlack RTX 3080Ti 12G | 950mV 1950Mhz
Storage WD SN850 500GB (OS) | Samsung 980 Pro 1TB (Games_1) | Samsung 970 Evo 1TB (Games_2)
Display(s) Asus XG27AQM 240Hz G-Sync Fast-IPS | Gigabyte M27Q-P 165Hz 1440P IPS | LG 24" IPS 1440p
Case Lian Li PC-011D XL | Custom cables by Cablemodz
Audio Device(s) FiiO K7 | Sennheiser HD650 + Beyerdynamic FOX Mic
Power Supply Seasonic Prime Ultra Platinum 850
Mouse Razer Viper v2 Pro
Keyboard Corsair K65 Plus 75% Wireless - USB Mode
Software Windows 11 Pro 64-Bit
2560 and 64 ROPs sounds better. I'd like to see a 512-bit bus, but with Nvidia GPUs that would mean a big die. I can see Nvidia saving it for their GM210, once 20nm is available.
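For a rough sense of what those rumored bus widths mean, here's a minimal back-of-envelope sketch. It assumes plain GDDR5 at 7 GT/s effective (a GTX 770-class figure, not something the leaks specify), so treat the numbers as illustrative only.

```python
# Back-of-envelope GDDR5 bandwidth: bus width (bits) / 8 * effective rate (GT/s) = GB/s.
def gddr5_bandwidth_gbs(bus_width_bits: int, effective_gtps: float) -> float:
    """Theoretical peak bandwidth in GB/s for a GDDR5 interface."""
    return bus_width_bits / 8 * effective_gtps

if __name__ == "__main__":
    # 7 GT/s effective is an assumption (what GTX 770 ships with), not a leaked spec.
    for bus in (256, 384, 512):
        print(f"{bus}-bit @ 7 GT/s: {gddr5_bandwidth_gbs(bus, 7.0):.0f} GB/s")
    # 256-bit: 224 GB/s, 384-bit: 336 GB/s, 512-bit: 448 GB/s
```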
 
Joined
Apr 19, 2011
Messages
2,198 (0.44/day)
Location
So. Cal.
Let's hope they can reduce die size on these 28nm Maxwells and still provide a decent bump in performance, while lowering the price… along with power. If Nvidia doesn't offer a GM204 chip that performs closer to GTX 780 levels while holding well below $400, does just lower power really justify a move for those with GK104s considering a change at this point? I mean, I can't see GTX 770 owners doing the switch for a <20% increase while anteing up more cash, even with, IDK, say 30%-40% better efficiency. Is this going to have the right mix (price/perf/eff) to move Kepler owners?

It would be a super upgrade for anyone still on a 570/580 Fermi, but even an original GTX 680 owner would have a tough call if 20nm might end up showing, say, 14 months from now. Or is 20nm even further away?
 
Joined
Sep 7, 2011
Messages
2,785 (0.57/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
If Nvidia doesn't offer a GM204 chip that performs closer to GTX 780 levels while holding well below $400, does just lower power really justify a move for those with GK104s considering a change at this point? I mean, I can't see GTX 770 owners doing the switch for a <20% increase while anteing up more cash, even with, IDK, say 30%-40% better efficiency. Is this going to have the right mix (price/perf/eff) to move Kepler owners?
Depends upon:
1. Pricing of current cards at the time of launch
2. Anything AMD may have as an answer
3. Whether the architecture tweaks produce a tangible benefit over the previous cards in the pricing segment. I went from an overclocked GTX 670 (for all intents and purposes a GTX 680) to a GTX 780 based solely upon needing a cheap, solid performer at 2560x1440. The graphs tell me that the difference between the two cards is 31% (Palit GTX 670 Jetstream / EVGA GTX 780 SC), but the reality is that the 670 just isn't cut out for that resolution, which becomes more apparent when overclocking is factored into the equation.
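As a quick illustration of why that 31% on paper still wasn't enough, here's the raw pixel arithmetic; it assumes per-frame workload scales roughly with pixel count, which is of course a simplification.

```python
# Rough pixel-count comparison between 1080p and 1440p.
# Assumes per-frame workload scales roughly with pixel count (a simplification).
res_1080p = 1920 * 1080   # 2,073,600 pixels
res_1440p = 2560 * 1440   # 3,686,400 pixels

extra = res_1440p / res_1080p - 1
print(f"2560x1440 pushes {extra:.0%} more pixels than 1920x1080")  # ~78%
# So a card that is only ~31% faster has far less headroom left at 1440p.
```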
It would be a super upgrade for anyone still on a 570/580 Fermi, but even an original GTX 680 owner would have a tough call if 20nm might end up showing, say, 14 months from now. Or is 20nm even further away?
Who knows? Possibly not even TSMC. By 20nm I presume you mean TSMC's CLN16FF process, since the planar 20nm (CLN20SOC) isn't suitable for high-power GPUs, and neither Nvidia nor AMD are using the process - at least not for GPUs.
So you have a choice: design your next architecture around the next process node and hope the ramp of TSMC's process is smooth, or use the existing process to tune the architecture in readiness for a process change. The latter gives you proof of concept at minimal risk whilst introducing new SKUs (sales and marketing). AMD are already on record as saying that they won't be using 20nm this year, so they have obviously come to the same conclusion.
 
Joined
May 13, 2008
Messages
762 (0.13/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Wow. This is indeed sad news.

nVidia I can understand, as Huang has been openly bitching and moaning about the price/transistor curve of 20nm for a long time, to which TSMC responded by saying it was a blip that would not hold true in later nodes. Also remember that Nvidia came quite a bit later to 28nm, and this may be a carbon copy of that situation going on three years later, whereas AMD launched in late 2011 and Nvidia a quarter or so later.

Lisa Su stated in the Q4'13 call that AMD was taping out a 20nm chip last quarter (and referenced 14nm at CPA as this quarter). It would seem awfully strange to suddenly abandon ship at this late stage, as they must have known the prices and realistic production schedule. I always assumed they would tape out designs at the initial fab doing production (the one Apple is using) last quarter, and start production in ~May when TSMC is expanding production to other facilities and truly going to be doing mass production. My hope was that whatever issues came out of the initial tape-out/samples could be figured out before that mass-production window, as it seemed a logical scenario and would mesh with a late-year release of products (~May/June + ~6 months). Anyone who was expecting any kind of availability of new-generation chips before that was, with all due respect, crazy. TBH, I don't think Lisa Su saying they are 'in the design phase' really goes against that thinking, nor does saying this year will be 28nm (as this would be end of year at earliest, and probably not in huge availability). I could see it going either way (Q4'14 or early 2015), but for all intents and purposes it makes sense to call it a 2015 process.

As for nvidia going to 28nm for another round of big chips, I really don't see the point. Yeah, there are some efficiency improvements to be made versus GK104 and GK110 that could probably make sense on 28nm (like getting a 770-like product under 225W, a native part to compete with Pitcairn, or a more efficient 48-ROP design than GK110), but the overall difference, the price to create all those chips, and their overall lifespan seem like a losing battle. When you know going in you'd be buying a new product at full price on an old 28nm process (which already has efficient products that are getting cheaper by the day) and we'll be seeing 16nm in a year or so... it seems like a really iffy proposition.
 
Joined
May 13, 2008
Messages
762 (0.13/day)
System Name HTPC whhaaaat?
Processor 2600k @ 4500mhz
Motherboard Asus Maximus IV gene-z gen3
Cooling Noctua NH-C14
Memory Gskill Ripjaw 2x4gb
Video Card(s) EVGA 1080 FTW @ 2037/11016
Storage 2x512GB MX100/1x Agility 3 128gb ssds, Seagate 3TB HDD
Display(s) Vizio P 65'' 4k tv
Case Lian Li pc-c50b
Audio Device(s) Denon 3311
Power Supply Corsair 620HX
Depends upon:
2. Anything AMD may have as an answer...

Who knows? Possibly not even TSMC. By 20nm I presume you mean TSMC's CLN16FF process, since the planar 20nm (CLN20SOC) isn't suitable for high-power GPUs, and neither Nvidia nor AMD are using the process - at least not for GPUs.
So you have a choice: design your next architecture around the next process node and hope the ramp of TSMC's process is smooth, or use the existing process to tune the architecture in readiness for a process change. The latter gives you proof of concept at minimal risk whilst introducing new SKUs (sales and marketing). AMD are already on record as saying that they won't be using 20nm this year, so they have obviously come to the same conclusion.

AMD has a decently efficient product in Hawaii, especially for its die size. All they really need to do is use 1.35V 5.5GHz (1.5-1.55V 7GHz) RAM and up the default core clock a little. Outside of that, what else is there for them to do until 20nm?

Who says 20nm isn't suitable for GPUs and that they are not using it? Just because it is aimed at lower voltage/less leakage/lower clocks doesn't mean a crapload of transistors could not run at a relatively low clock... GPUs being so parallel. Even if they do run it at a higher voltage, it's still a 1.2x or so clock gain in the same power envelope up to wherever the voltage/power curve tops out, which granted is probably lower than on 28nm. The reason it's aimed at SoCs is that at a lower voltage (~0.9V) it is supposedly around 1.3x more efficient, hence the greatest benefits will be in low-voltage chips. Given how much logic will be needed to get a decent chip size for the bus width (even with cache to supplement the small die sizes, they may want 6GB, or a 384-bit bus) while not having a lot of power savings, low clocks could very well make sense (1.9x density, 1.2-1.3x power savings depending on clock/voltage).
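To put those claimed scaling factors in perspective, here's a minimal what-if sketch that just applies the 1.9x density and ~1.2-1.3x power numbers quoted above to a Hawaii-sized chip. The scaling factors are the post's claims rather than published TSMC figures, the example die/power numbers are only illustrative, and real designs never shrink this cleanly.

```python
# Naive what-if: apply the claimed scaling factors to an existing design at the same clocks.
# density_gain and power_gain are the figures quoted in the post above, not TSMC data.
def shrink_estimate(area_mm2, power_w, density_gain=1.9, power_gain=1.25):
    """Very rough die area and power after a shrink, assuming everything scales linearly."""
    return area_mm2 / density_gain, power_w / power_gain

if __name__ == "__main__":
    # Hawaii-class example figures (~438 mm^2, ~250 W board power), purely for illustration.
    area, power = shrink_estimate(438, 250)
    print(f"~{area:.0f} mm^2, ~{power:.0f} W")  # roughly 231 mm^2 and 200 W
```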
 
Joined
Aug 23, 2013
Messages
585 (0.14/day)
Used to be, fabs loved the GPU makers. Nowadays, the fabs see those same GPU makers as nobodies compared to the huge market that is mobile device chips. So I fully expect any actual 20nm products that make it out the door to be prioritized for Qualcomm, or excess Samsung need, or even nVidia Tegra chips, rather than GPUs.

Did you really think the 750 Ti was a fluke? It was a test run - their beta test to see if Maxwell at 28nm would offer any benefit. Looks like it did. Expect a full transition for the next generation of cards to begin at once. It'll affect the overall clock speeds and it'll probably make the chips bigger than nVidia likes (with a few cuts to their feature sets), but the real meat and potatoes of Maxwell was always performance per watt anyway, so being a bit bigger shouldn't hurt it as much as it has earlier products.

Also, remember nVidia announced (relatively) recently that they were focusing on building mobile device GPUs first and then scaling up from there instead of the reverse. Prioritizing 20nm for Tegra while pushing discrete GPUs to 28nm again would just be that strategy taking shape.

Not surprised. Disappointed, yes. I'm curious to see what they release in May/June to go with Intel's latest releases. nVidia doesn't usually let a big Intel launch go by without at least hinting at a new product refresh/launch.

I'm expecting a bunch of rebrands. AMD did it a few months back, so why can't nVidia get away with the same, right? This is what happens when AMD doesn't compete. Nobody else does, either. Intel and nVidia both doing refreshes would be really indicative of that.
 
Joined
Jun 13, 2012
Messages
1,409 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual Geforce RTX 4070 Super ( 2800mhz @ 1.0volt, ~60mhz overlock -.1volts)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Audio Device(s) Logitech Z906 5.1
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
AMD has a decently efficient product in Hawaii, especially for its die size. All they really need to do is use 1.35V 5.5GHz (1.5-1.55V 7GHz) RAM and up the default core clock a little. Outside of that, what else is there for them to do until 20nm?

"hawaii" is was at its limit when amd released it. Most overclocks only net around 100mhz, 10% higher then stock. So AMD will have to make up a new gpu where as Nvidia already has one in maxwell.
 
Joined
Apr 19, 2011
Messages
2,198 (0.44/day)
Location
So. Cal.
I went from an overclocked GTX 670 (for all intents and purposes a GTX 680) to a GTX 780 based solely upon needing a cheap, solid performer at 2560x1440.
I suppose if you're at 1920x1080 with a GTX 770, but now looking at 2560x1440, someone may be compelled, especially if a GTX 780 can't get below $500. I think Nvidia would let it go EOL before marking down that GK110 price much (if any) further. Then $400 is kind of a "push" (a used 770 might earn $300) and the efficiency is a bonus.

Who knows? Possibly not even TSMC.
Exactly - is 20nm out more than 14 months from now, or is 20nm even further away? These products would appear about six months off; if the bulk of 20nm GPUs then arrive some eight months later, that's a short life for this product line. That's the real question I'm trying to unearth.
 
Joined
Sep 7, 2011
Messages
2,785 (0.57/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
AMD has a decently efficient product in Hawaii, especially for its die size. All they really need to do is use 1.35V 5.5GHz (1.5-1.55V 7GHz) RAM and up the default core clock a little
Its use still needs to be validated by AMD for their memory controllers. If it were a simple matter of using 7GHz-effective memory ICs, don't you think that at least one AIB, if not AMD, would have already added them? No doubt validation is in the works, and AMD could conceivably stand pat with their lineup, although by the time the December holiday season rolls around, and if Nvidia are aiming to launch a new batch of silicon, what do you think AMD's reply will be? Nothing? A game bundle? Special mistletoe editions?
Who says 20nm isn't suitable for GPUs and that they are not using it? Just because it is aimed at lower voltage/less leakage/lower clocks doesn't mean a crapload of transistors could not run at a relatively low clock...
Kind of answered your own question. What I meant was GPUs in the performance/enthusiast bracket, where a low clock/low power budget isn't going to cut it. For entry/OEM/low end GPUs? Sure, but what's the point? Nvidia's latest SKU (the GT 705) is using a recycled GF 119 GPU, and AMD's own R7 240 traces its lineage back four generations of cards.
Now, given the lead-in time between design > mask tooling > tape out, how long have Nvidia and AMD both known that CLN20SOC wasn't going to meet their requirements, or have AMD and Nvidia just decided to not use the process node by choice - which would be a first as far as I can recall.
So in terms of product and technology selection, certainly we need to be at the leading-edge of the technology roadmap. So what we've said in the past is certainly this year all of our products are in 28-nanometer across both, you know, graphics client and our semi-custom business. We are, you know, actively in the design phase for 20-nanometer and that will come to production. And then clearly we'll go to FinFET. So that would be the progression of it - Lisa Su, AMD, Q1 2014 CC
Even if they do run it at a higher voltage, it's still a 1.2x or so clock gain in the same power envelope up to wherever the voltage/power curve tops out, which granted is probably lower than on 28nm. The reason it's aimed at SoCs is that at a lower voltage (~0.9V) it is supposedly around 1.3x more efficient, hence the greatest benefits will be in low-voltage chips. Given how much logic will be needed to get a decent chip size for the bus width (even with cache to supplement the small die sizes, they may want 6GB, or a 384-bit bus) while not having a lot of power savings, low clocks could very well make sense (1.9x density, 1.2-1.3x power savings depending on clock/voltage).
With all these supposed gains, it must come as a real surprise that no one is particularly interested in CLN20SOC for GPUs then. A mobile-oriented GPU with low power and good efficiency per watt would seem an ideal fit, and is something that is obviously missing from AMD's lineup. So, ideally suited to CLN20SOC, yet AMD have already poured cold water on GPUs at 20nm for this year. Strange, no?
I'm pretty sure Apple hasn't gobbled up all of TSMC's 20nm capacity.
 
Joined
Jan 13, 2009
Messages
424 (0.07/day)
Hawaii's O/C potential seems to be more limited by cooling (and voltage) than a particular limitation of the silicon itself.



I swear I saw the Cryovenom 290 reviewed by Ocaholic as well and it also managed 1300MHz (with extra voltage), but I can't find the review???
 
Joined
Sep 22, 2012
Messages
1,010 (0.23/day)
Location
Belgrade, Serbia
System Name Intel® X99 Wellsburg
Processor Intel® Core™ i7-5820K - 4.5GHz
Motherboard ASUS Rampage V E10 (1801)
Cooling EK RGB Monoblock + EK XRES D5 Revo Glass PWM
Memory CMD16GX4M4A2666C15
Video Card(s) ASUS GTX1080Ti Poseidon
Storage Samsung 970 EVO PLUS 1TB /850 EVO 1TB / WD Black 2TB
Display(s) Samsung P2450H
Case Lian Li PC-O11 WXC
Audio Device(s) CREATIVE Sound Blaster ZxR
Power Supply EVGA 1200 P2 Platinum
Mouse Logitech G900 / SS QCK
Keyboard Deck 87 Francium Pro
Software Windows 10 Pro x64
Hawaii is a far inferior chip to GK110: higher temps, not as good at overclocking, and a weaker chip at stock. Expected clocks for Hawaii are 1100-1200MHz, and over 1150 you need water.

Never mind - for GK110 owners, especially those lucky enough to have the fully unlocked chip, this news is not so bad. We have the performance and the time to wait, even until the end of 2016, for a premium 20nm Maxwell. Whoever bought Titan SLI a year ago will play games for two years on a premium chip. Others who need performance now might think about a GK110 with 2880 CUDA cores instead of the first Maxwell successor to GK104, especially if they still have Fermi or something older. It's not smart to wait for something when there is no scheduled date within 2-3-4 weeks and no known specifications on the table.
 
Joined
Jul 18, 2007
Messages
2,693 (0.42/day)
System Name panda
Processor 6700k
Motherboard sabertooth s
Cooling raystorm block<black ice stealth 240 rad<ek dcc 18w 140 xres
Memory 32gb ripjaw v
Video Card(s) 290x gamer<ntzx g10<antec 920
Storage 950 pro 250gb boot 850 evo pr0n
Display(s) QX2710LED@110hz lg 27ud68p
Case 540 Air
Audio Device(s) nope
Power Supply 750w superflower
Mouse g502
Keyboard shine 3 with grey, black and red caps
Software win 10
Benchmark Scores http://hwbot.org/user/marsey99/
over 1150 you need water.

mine does 1180 on air :thumb:

hoping for 1250+ when it gets wet :D

but I agree it is cooling limited.
 
Joined
Jun 13, 2012
Messages
1,409 (0.31/day)
Processor i7-13700k
Motherboard Asus Tuf Gaming z790-plus
Cooling Coolermaster Hyper 212 RGB
Memory Corsair Vengeance RGB 32GB DDR5 7000mhz
Video Card(s) Asus Dual Geforce RTX 4070 Super ( 2800mhz @ 1.0volt, ~60mhz overlock -.1volts)
Storage 1x Samsung 980 Pro PCIe4 NVme, 2x Samsung 1tb 850evo SSD, 3x WD drives, 2 seagate
Display(s) Acer Predator XB273u 27inch IPS G-Sync 165hz
Audio Device(s) Logitech Z906 5.1
Power Supply Corsair RMx Series RM850x (OCZ Z series PSU retired after 13 years of service)
Mouse Logitech G502 hero
Keyboard Logitech G710+
Hawaii's O/C potential seems to be more limited by cooling (and voltage) than a particular limitation of the silicon itself.

I am going by what most review sites end up with; not many get over 1150MHz stable on their review cards, hence why I said what I said.
 
Joined
Apr 19, 2011
Messages
2,198 (0.44/day)
Location
So. Cal.
This information tells us Nvidia will have mainstream parts ("Maxwell on 28nm") out before Christmas, while "mainstream on 20nm" hits the market when... summer 2015? I couldn't see Nvidia making this investment if they believed "mainstream on 20nm" was less than six months from Q4 2014.
 
Joined
Sep 7, 2011
Messages
2,785 (0.57/day)
Location
New Zealand
System Name MoneySink
Processor 2600K @ 4.8
Motherboard P8Z77-V
Cooling AC NexXxos XT45 360, RayStorm, D5T+XSPC tank, Tygon R-3603, Bitspower
Memory 16GB Crucial Ballistix DDR3-1600C8
Video Card(s) GTX 780 SLI (EVGA SC ACX + Giga GHz Ed.)
Storage Kingston HyperX SSD (128) OS, WD RE4 (1TB), RE2 (1TB), Cav. Black (2 x 500GB), Red (4TB)
Display(s) Achieva Shimian QH270-IPSMS (2560x1440) S-IPS
Case NZXT Switch 810
Audio Device(s) onboard Realtek yawn edition
Power Supply Seasonic X-1050
Software Win8.1 Pro
Benchmark Scores 3.5 litres of Pale Ale in 18 minutes.
This information tells us Nvidia will have mainstream parts ("Maxwell on 28nm") out before Christmas, while "mainstream on 20nm" hits the market when... summer 2015? I couldn't see Nvidia making this investment if they believed "mainstream on 20nm" was less than six months from Q4 2014.
I'd view GM204 and GM206 as holiday cash cows, and to a lesser extent, a marketing necessity. By the time the Christmas/New Year holiday season rolls around, Nvidia's performance segment cards (GTX 770/760) will be over six months old. If they don't hit that timeframe, then Chinese New Year effectively kills any further additions until March 2015 - which is uncomfortably close to a year without an update.
As for 20nm GPUs, the process is one factor, but I'm also guessing that full DirectX 12 compliance is another, as is the choice of what memory controller to use and validate - as I'm pretty certain that launching a 20/16nm GPU with GDDR5 comes under the heading of "last resort".
 
Joined
Mar 24, 2011
Messages
2,356 (0.47/day)
Location
VT
Processor Intel i7-10700k
Motherboard Gigabyte Aurorus Ultra z490
Cooling Corsair H100i RGB
Memory 32GB (4x8GB) Corsair Vengeance DDR4-3200MHz
Video Card(s) MSI Gaming Trio X 3070 LHR
Display(s) ASUS MG278Q / AOC G2590FX
Case Corsair X4000 iCue
Audio Device(s) Onboard
Power Supply Corsair RM650x 650W Fully Modular
Software Windows 10
Since Kepler, Nvidia has been able to basically follow Intel's release cadence because they are able to market what used to be mid-range GPUs against AMD's top end. It's a win-win for Nvidia at least, and I have no complaints as long as the performance is there - discounting GPGPU, GK104 was a definite improvement over GF110. I wouldn't be surprised to see a whole line of GM104s on 28nm with the refresh of GM110-117 being on 20nm.
 
Joined
Sep 3, 2013
Messages
11 (0.00/day)
Nothing is going wrong at TSMC. Their 20nm planar process was never intended for high-performance silicon; it was made for low power, like small ARM SoCs and such. There is a reason Intel developed their tri-gate transistors for 22nm. We will pretty much require a FinFET process below 28nm; planar will probably never work for high-performance chips like big GPUs. TSMC's 20nm planar is fine, it just was not made for high performance. This is not news.
 