Monday, September 17th 2018

NVIDIA GTX 1060 and GTX 1050 Successors in 2019; Turing Originally Intended for 10nm

NVIDIA may not launch successors to its GeForce GTX 1060 series and GTX 1050 series until 2019, according to a statement by an ASUS representative speaking with PC Watch. This could mean that the high-end RTX 2080 Ti, RTX 2080, and RTX 2070 will be NVIDIA's only new SKUs for Holiday 2018, alongside cut-rate GeForce GTX 10-series SKUs. The delay could stem from a combination of swelling inventories of 10-series GPUs and insufficient volumes of mid-range RTX 20-series chips, should NVIDIA even decide to extend real-time ray-tracing to mid-range graphics cards.

NVIDIA designed the RTX 2070 around the physically smaller TU106 chip instead of TU104, which leads us to believe that NVIDIA could carve the GTX 1060-series successor out of the same chip: the RTX 2070 maxes out TU106, and NVIDIA needs to do something with imperfect dies. An even smaller chip (perhaps half a TU104?) could power the GTX 1050-series successor.
The PC Watch interview also states that NVIDIA's "Turing" architecture was originally designed for Samsung's 10-nanometer silicon fabrication process, but delays forced a redesign for the 12 nm process. This partially explains why NVIDIA hasn't kept up with the generational power-draw reduction curve of the previous four generations. NVIDIA has left the door open for a future optical shrink of Turing to the 8 nm silicon fabrication node, an extension of Samsung's 10 nm node with smaller transistors.
Sources: PC Watch, PCOnline.com.cn, Dylan on Reddit

39 Comments on NVIDIA GTX 1060 and GTX 1050 Successors in 2019; Turing Originally Intended for 10nm

#27
Captain_Tom
Well damn, if they are looking at 8nm Samsung then it really does sound like AMD has TSMC's 7nm to itself in 2019 (besides Apple and ASICs, of course).
#28
Vayra86
Frick: How do you know how the RTX reception will be when the NDA isn't lifted yet?
The initial reception of RTX is already history; it 'happened' during and after the NVIDIA keynote where it was announced, which ended with a 30 fps, super blurry dancing bloke, where the price points were set, and which led to performance analyses of RTX cards using the new features. Furthermore, you can look around this forum in any RTX topic to recognize the lukewarm reception.

Hell, RTX ON/OFF is even a meme now. It's being ridiculed... and it's not rocket science to consider why. It falls squarely into the same corner as VR, stereoscopic 3D, and all those other gimmicks that won't last: high cost, uncanny results, low benefit, and a virtually zero adoption rate, which creates the eternal chicken/egg situation many new technologies die from.

Also, people seem to forget that much of the performance is known and can be calculated - you don't need Nvidia slides to tell you this. And the reality is that only in a select few use cases does Turing improve on perf/dollar AT ALL. In most cases it's complete stagnation or worse. That already cuts most Pascal owners out of a decent deal. And do you really think those who skipped Pascal are going to spend big on features they never need? En masse? Naaah - Pascal on discount is a far better deal for them, and has a much friendlier price tag too.
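
To make that calculation concrete, here's a rough perf/dollar sketch in Python - all performance and price figures below are hypothetical placeholders, not benchmark data from this thread:

# Minimal perf/dollar sketch; all figures are hypothetical, not benchmarks.
def perf_per_dollar(relative_perf, price_usd):
    """Relative performance units per dollar spent."""
    return relative_perf / price_usd

# Hypothetical: a new card ~35% faster than the old one at a ~55% higher price.
old_card = perf_per_dollar(relative_perf=1.00, price_usd=450)
new_card = perf_per_dollar(relative_perf=1.35, price_usd=700)

change_pct = (new_card / old_card - 1) * 100
print(f"perf/dollar change: {change_pct:+.1f}%")  # negative = stagnation or worse

With those placeholder numbers the result is about -13%, i.e. the price hike outruns the performance gain.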

Nvidia is in a very strange position right now, and they've kinda dug their own hole.
#29
Steevo
SDR82: Just the 2070 is fully enabled (so no 2070 Ti). The 2080 and 2080 Ti are cut down to leave space for the 2080 Plus and the Titan X.
I bet they intended the 2070 die to be the 2080 Ti and 2080 die, but process failures made it turn out this way.

I hear the spin cycle starting and it's shaking hard.
#30
Vayra86
Steevo: I bet they intended the 2070 die to be the 2080 Ti and 2080 die, but process failures made it turn out this way.

I hear the spin cycle starting and it's shaking hard.
Yup - the icing on the cake is the bumped-up TDP for the "Founders" cards. In other words, they need extra watts for the extra performance; binning these cards is unreliable yield-wise. So now you really just get a boosted BIOS at an increased price tag... instead of a throttling blower cooler. Sounds like an awesome deal at +100~150 bucks :D
#31
Fx
Vya Domus: It's hilarious we even consider discussing what the "reception" will be like when all this RTX stuff is going to be virtually nonexistent at launch. I think BFV is the only game that's supposed to come out with day-one RTX support, and Shadow of the Tomb Raider will receive a patch "later on".

That will be the RTX reception for ya when the NDA lifts, basically nothing to even talk about.
I have to disagree. If the cards didn't cost so much, I would be buying one, and the one title where I would be looking for ray tracing is Shadow of the Tomb Raider.

That world is already beautiful, and adding more immersion to it would be even better. It also helps that it isn't a competitive title, so the FPS could be a tad on the lower but tolerable side compared to BFV.
#32
yotano211
TheinsanegamerN: A parsec is a measure of distance, not time.
Inches is a measure of pleasure, not length.
#33
bug
Ferrum Master: Rubbish, you simply cannot compare die sizes of such scale with minuscule mobile die sizes. They are different processes, goals and usages, lines and plants. They do not compete with each other.
Mwahahahahaha! Hahaha! Ha!
Thanks for the best laugh of the day.
#34
Vayra86
bug: Mwahahahahaha! Hahaha! Ha!
Thanks for the best laugh of the day.
How so? It was clear as day that we were stuck on 28nm for so long because the smaller nodes simply didn't offer the same characteristics for high performance, high power budget components. There is also a clear reason Nvidia's 16nm TSMC Pascal clocks noticeably higher than competitors on a different node...
#35
bug
Vayra86: How so? It was clear as day that we were stuck on 28nm for so long because the smaller nodes simply didn't offer the same characteristics for high performance, high power budget components. There is also a clear reason Nvidia's 16nm TSMC Pascal clocks noticeably higher than competitors on a different node...
I only laughed @ mobile silicon not competing for fab capacity with GPU silicon.
#36
Vayra86
bug: I only laughed @ mobile silicon not competing for fab capacity with GPU silicon.
Gotcha - we read his comment differently. To me he is speaking solely about die sizes/process nodes and not the market, in which case he's 100% correct.
#37
efikkan
Vayra86: Also, people seem to forget that much of the performance is known and can be calculated - you don't need Nvidia slides to tell you this. And the reality is that only in a select few use cases does Turing improve on perf/dollar AT ALL. In most cases it's complete stagnation or worse. That already cuts most Pascal owners out of a decent deal. And do you really think those who skipped Pascal are going to spend big on features they never need? En masse? Naaah - Pascal on discount is a far better deal for them, and has a much friendlier price tag too.

Nvidia is in a very strange position right now, and they've kinda dug their own hole.
How do you "know" the performance? Which benchmarks do you base this on?
What specifically proves that Turing is stagnating or worse than Pascal?
Which hole is Nvidia in right now? The only valid complaint so far for Turing is pricing, everything else is grumpy AMD fans.
#38
bug
efikkan: How do you "know" the performance? Which benchmarks do you base this on?
What specifically proves that Turing is stagnating or worse than Pascal?
Which hole is Nvidia in right now? The only valid complaint so far for Turing is pricing, everything else is grumpy AMD fans.
The compute part is similar to Volta; I'm guessing that's what he's talking about. Anand has an article on Turing (as they always do). It is mostly Volta, save for the RT cores, but even then there are twists that make them different.
#39
danwat1234
Anybody think we'll see 10nm nVidia GPUs by early 2020? I thought Intel was behind, but I guess nVidia may be last in the shrinkage race.