Wednesday, March 15th 2017

NVIDIA to Build "Volta" Consumer GPUs on TSMC 12 nm Process

NVIDIA's next-generation "Volta" GPU architecture got its commercial debut in the most unlikely class of products, with the Xavier autonomous car processor. The actual money-spinners based on the architecture, consumer GPUs, will arrive sometime in 2018. The company will be banking on its old faithful fab TSMC to build those chips on a new 12 nanometer FinFET node that's currently under development. TSMC's current frontline process is the 16 nm FFC, which debuted in mid-2015, with mass production following in 2016. NVIDIA's "GP104" chip is built on this process.

This also means NVIDIA could slug it out against AMD with its current GeForce GTX 10-series "Pascal" GPUs throughout 2017-18, even as AMD threatens to disrupt NVIDIA's sub-$500 lineup with its Radeon Vega series, scheduled for Q2 2017. NVIDIA's "Volta" architecture could see stacked DRAM technologies such as HBM2 gain more mainstream exposure, although competing memory standards such as GDDR6 aren't too far behind.
Sources: Commercial Times (Taiwan), TechReport

25 Comments on NVIDIA to Build "Volta" Consumer GPUs on TSMC 12 nm Process

#1
bug
I believe TSMC's frontline process is still FF+. FFC is only a variant for more cost-conscious clients.
Posted on Reply
#2
Vayra86
Watch this: Volta will be running GDDR6 in the end, as we are in 2018 and still looking at AMD to release a competitor for the top segment and Nvidia has no urge to up the ante.
Posted on Reply
#3
renz496
Vayra86: Watch this: Volta will be running GDDR6 in the end, as we are in 2018 and still looking at AMD to release a competitor for the top segment and Nvidia has no urge to up the ante.
if GDDR6 can cover the things they do then why not use it?
Posted on Reply
#4
Vayra86
renz496: if GDDR6 can cover the things they do then why not use it?
That's the point, it'll only cover what they want to do if AMD doesn't compete in the top end segment.
Posted on Reply
#5
the54thvoid
Super Intoxicated Moderator
Using HBM isn't magic sauce. If GDDR5X and upwards provides more than enough bandwidth, it's more cost-effective and better for shareholders as well (cheaper to use, can sell high).
It's actually counterproductive to potentially limit stock and incur other cost penalties just to use cutting-edge technologies first.
Posted on Reply
#6
bug
Vayra86: That's the point, it'll only cover what they want to do if AMD doesn't compete in the top end segment.
You're making a big deal out of nothing. HBM never translated into any significant performance advantage to begin with. Implying that GDDR6 (which we know nothing about at this point) will be a handicap in 2018 is... "creative"?
Posted on Reply
#7
FordGT90Concept
"I go fast!1!11!1!"
So they're just Pascal with an HBM2 controller and a tiny die shrink (16 nm -> 12 nm)? Not entirely sure why TSMC is bothering with 12 nm. You'd think cost versus reward wouldn't pay off.

And how is NVIDIA going to manage 6 DRAM stacks on the interposer when AMD could barely fit four? Kind of suggests the GPU is relatively small which, in turn, suggests it is memory (as in compute) centric rather than graphics centric.

Have to wait and see.
Posted on Reply
#8
bug
FordGT90Concept: So they're just Pascal with an HBM2 controller and a tiny die shrink (16nm -> 12nm)? Not entirely sure why TSMC is bothering with 12nm. You'd think cost versus reward wouldn't pay off.
A square with side 16 has an area of 256 square units; with side 12, it's 144 (little more than half).
At the same time, this is a rumour about Volta being built on 12 nm. I'm not sure how you infer from that that Volta is "just Pascal with an HBM2 controller and a tiny die shrink".
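That scaling arithmetic can be sanity-checked with a two-line sketch (treating the node names as literal linear dimensions, which marketing labels like "12 nm" don't strictly guarantee, so this is a best-case shrink):

```python
def area_ratio(old_nm: float, new_nm: float) -> float:
    """Relative area after a linear shrink: area scales with the square of the feature size."""
    return (new_nm / old_nm) ** 2

print(area_ratio(16, 12))  # 0.5625 -> "little more than half"
```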
Posted on Reply
#9
FordGT90Concept
"I go fast!1!11!1!"
bug: I'm not sure how you infer from that that Volta is "just Pascal with an HBM2 controller and a tiny die shrink".
Pretty much what Polaris amounted to but you're right, not sure what Volta exclusively implies.
Posted on Reply
#10
efikkan
Vayra86: Watch this: Volta will be running GDDR6 in the end, as we are in 2018 and still looking at AMD to release a competitor for the top segment and Nvidia has no urge to up the ante.
HBM is not going to be mainstream anytime soon from Nvidia, and why should they waste money on it when it's not needed? I wish AMD also prioritized real world value over PR value.
bug: A square with the side 16 is 256 square units. At 12, it's 144 (little more than half). At the same time, this is a rumour about Volta being built on 12nm.
TSMC's "12 nm node" is not a real node shrink, but another refinement of 20 nm + FinFET. Nvidia might choose to increase transistor density on the refined process, though.
Posted on Reply
#11
Vayra86
bug: You're making a big deal out of nothing. HBM never translated into any significant performance advantage to begin with. Implying that GDDR6 (which we know nothing about at this point) will be a handicap in 2018 is... "creative"?
So why is there still 'stacked DRAM' in Volta slides then, seeing as they won't need it anyway?
Posted on Reply
#12
bug
Vayra86: So why is there still 'stacked DRAM' in Volta slides then, seeing as they won't need it anyway?
Same reason we have HBM in Pascal maybe?
Nobody said HBM is not needed. Rather, for consumers it makes little difference other than cost in its current form.
Posted on Reply
#13
efikkan
Vayra86: So why is there still 'stacked DRAM' in Volta slides then, seeing as they won't need it anyway?
It will be used for GV100; how many consumer products get it, we don't know.
Posted on Reply
#14
Vayra86
efikkan: It will be used for GV100, we don't know how many consumer products though.
Back to my original statement on Volta: it will launch for that market WITHOUT the stacked DRAM, just like Pascal, because the performance ceiling won't increase enough to make HBM necessary for gaming. That was my prediction, and for some reason I need four posts to get that point across :)

It fits a pattern that started with Maxwell and the roadmap we had at that time. Nvidia keeps pushing architectural changes further out because the competition doesn't compete. I'm saying Volta will continue along that line.
Posted on Reply
#16
Liviu Cojocaru
This sounds good, I am looking forward to this upgrade. Hopefully I won't give in to the temptation to upgrade to 1080ti first :(
Posted on Reply
#17
renz496
TheGuruStud: V is for vaporware.
Vega? :P
Posted on Reply
#18
TheGuruStud
renz496: Vega? :P
No, it has a launch time and hasn't been blabbed about since 2015.
Posted on Reply
#19
kruk
efikkan: I wish AMD also prioritized real world value over PR value.
AMD must have a good reason to push HBM, even more so with the low R&D budget they have. Vega, with HBC, will already be using it much better than Fury did, and in Navi (maybe multiple small chips on an interposer?) it could play an even more important role.
Posted on Reply
#20
efikkan
kruk: AMD must have a good reason to push HBM, even more so with the low R&D budget they have. Vega with HBC will already be using it much better than Fury, but in Navi (maybe multiple small chips on an interposer?) it could play an even more important role.
Using a fancy new technology certainly gains attention, but in the end real world value matters. Sticking with a more pragmatic solution would have yielded better profit margins for AMD. HBM offers no significant benefits for a consumer GPU at this point.
Posted on Reply
#21
TheGuruStud
efikkan: Using a fancy new technology certainly gains attention, but in the end real world value matters. Sticking with a more pragmatic solution would have yielded better profit margins for AMD. HBM offers no significant benefits for a consumer GPU at this point.
B/c it's for enterprise. Why they didn't use gddr on consumer, idk. Maybe they really couldn't make two chips (lack of funds and all that).
Posted on Reply
#22
chodaboy19
efikkan: Using a fancy new technology certainly gains attention, but in the end real world value matters. Sticking with a more pragmatic solution would have yielded better profit margins for AMD. HBM offers no significant benefits for a consumer GPU at this point.
Couldn't agree more. Real-life solutions that work are all that matters.
Posted on Reply
#23
Jism
efikkan: HBM is not going to be mainstream anytime soon from Nvidia, and why should they waste money on it when it's not needed? I wish AMD also prioritized real world value over PR value.
Praise AMD for developing new standards, if you ask me. :) GDDR5, GDDR5X or GDDR6 takes up space on an already compact card, and to have a wide memory bus you need lots of chips to create a 300+ bit or even wider bus.

HBM is more practical: it requires less space, uses less power and offers more bandwidth compared to GDDR. The downside of HBM1 was that it could only address up to 4 GB of video RAM, while HBM2 no longer has that limitation (up to 16 GB).

Since AMD had a role in developing that interposer, AMD will have an advantage in access to HBM and HBM2 chips in the future, while Nvidia and others have to wait in line first.

This was an important deal, and this is why you don't see Nvidia HBM cards yet. The Fury X with HBM is still an excellent all-round graphics card if you ask me. The HBM1 overclocking is also sick: from a base 500 MHz up to 1 GHz, offering a stunning 1024 GB/s of memory bandwidth. You don't see numbers like that in the GDDR camp.
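Those bandwidth figures check out as a back-of-the-envelope sketch, assuming Fury X's HBM1 configuration of four stacks with 1024-bit, double-data-rate interfaces (the values below are that assumption, not vendor data):

```python
def hbm_bandwidth_gbs(stacks: int, bits_per_stack: int, clock_mhz: float) -> float:
    """Peak bandwidth in GB/s for a double-data-rate HBM configuration."""
    bus_bits = stacks * bits_per_stack          # total bus width in bits
    gbps_per_pin = clock_mhz * 2 / 1000         # DDR: two transfers per clock
    return bus_bits * gbps_per_pin / 8          # bits -> bytes

print(hbm_bandwidth_gbs(4, 1024, 500))   # 512.0 GB/s at stock clocks
print(hbm_bandwidth_gbs(4, 1024, 1000))  # 1024.0 GB/s at the 1 GHz OC above
```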
Posted on Reply
#24
efikkan
Jism: Praise AMD for developing new standards, if you ask me. :) GDDR5, GDDR5X or GDDR6 takes up space on an already compact card, and to have a wide memory bus you need lots of chips to create a 300+ bit or even wider bus.
I don't give them credit for something they didn't invent.
Jism: HBM is more practical, and requires less space, less power and offers more bandwidth compared to GDDR. The downside of HBM1 was that it was only able to address up to 4 GB of video RAM, while HBM2 does not have that limitation anymore (up to 16 GB).
HBM is very expensive, supply is limited, and so are the size configurations. The higher bandwidth offers no benefit to consumers at this point.
Jism: Since AMD had a role in developing that interposer, AMD will have an advantage in access to HBM and HBM2 chips in the future, while Nvidia and others have to wait in line first.
Completely untrue. Nvidia was, by the way, the first to ship an HBM2-based product.
Jism: This was an important deal and this is why you don't see Nvidia HBM cards yet. The Fury X with HBM is still an excellent all-round graphics card if you ask me. The HBM1 overclocking is also sick, from a base 500 MHz up to 1 GHz, offering a stunning 1024 GB/s of memory bandwidth. You don't see numbers like that in the GDDR camp.
In your dreams. Fury X was beaten by the GTX 980 Ti, and overclocking the memory wouldn't help there. Even though Fury X is the most powerful graphics card made by AMD, it's not even available anymore.
Posted on Reply
#25
medi01
btarunr: AMD threatens to disrupt NVIDIA's sub-$500 lineup
So a 500 mm² Vega (bigger than Titan, which, miraculously, has been rendered irrelevant by the $699 1080 Ti) threatens only the sub-$500 lineup, really?
Posted on Reply