Monday, November 13th 2017
NVIDIA "Volta" Architecture Successor Codenamed "Ampere," Expected GTC 2018
NVIDIA has reportedly codenamed the GPU architecture that succeeds its upcoming "Volta" architecture after the 19th century French physicist and pioneer of electromagnetism André-Marie Ampère, after whom the unit of electric current is named. The new NVIDIA "Ampere" GPU architecture, which succeeds "Volta," will make its debut at the 2018 GPU Technology Conference (GTC), hosted by NVIDIA. As with the company's recent GPU architecture launches, one can expect an unveiling of the architecture, followed by preliminary technical presentations by NVIDIA engineers, with actual products launching a little later and consumer-grade GeForce products launching much later.
NVIDIA has yet to launch GeForce products based on its upcoming "Volta" architecture, as its current "Pascal" architecture turns 18 months old in the consumer graphics space. Should NVIDIA continue the four-digit model number scheme of its GeForce 10-series "Pascal" family, one can expect "Volta"-based products to make up the GeForce 20-series, and "Ampere" the GeForce 30-series. NVIDIA has yet to disclose the defining features of the "Ampere" architecture; we'll probably have to wait until March 2018 to find out.
Source:
Heise.de
Comments
Previously both were on TSMC 28 nm, and the Fury Nano was close to Maxwell in efficiency, partly due to HBM of course.
In this case we are looking at roughly a 2x and then 1.6x density improvement, and a 1.2x and then 1.15x clock-speed improvement, across the two node steps.
And I just don't think we will see 7 nm just yet, as it represents too big of a leap forward: it would mean the 484 mm² Titan Xp being shrunk to ~150 mm² while running at 2.5 GHz / 75 W, and who the hell would release such a monstrosity? A bigger chip of that type would run 4K at 120 Hz.
Also, a node usually gives you ~1.x the clock speed at the same power or ~0.x the power at the same clocks, not both at the same time (rough math sketched below).
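To make the arithmetic above concrete, here is a minimal Python sketch of that back-of-the-envelope scaling. The factors (2x, then 1.6x density; 1.2x, then 1.15x clocks) are taken straight from the comment, and the Titan Xp figures are approximate public specs; none of this is an official projection.

```python
# Back-of-the-envelope node-shrink math, using the commenter's
# assumed scaling factors (not official figures).

die_area_mm2 = 484            # GP102 "Titan Xp" die size on 16 nm
boost_clock_ghz = 1.58        # approximate Titan Xp boost clock

density_gain = 2.0 * 1.6      # two node steps: ~2x, then ~1.6x density
clock_gain = 1.2 * 1.15       # ~1.2x, then ~1.15x clock speed

shrunk_area = die_area_mm2 / density_gain    # ~151 mm^2
scaled_clock = boost_clock_ghz * clock_gain  # ~2.18 GHz

print(f"Estimated 7 nm die area: {shrunk_area:.0f} mm^2")
print(f"Estimated boost clock:   {scaled_clock:.2f} GHz")
```

Note that this compounds the density and clock gains simultaneously; as the comment itself points out, foundry claims are typically either/or, so a real product would land somewhere below these numbers.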
HBM is better than GDDR so it's the natural evolution.
I think GDDR6 will be used.
But OK, we don't know for sure.
The fact of the matter is that AMD is giving you their most expensive silicon for less than half of what Nvidia charges for the equivalent, while Nvidia is giving you their second-grade silicon; that's a fact independent of performance metrics. In this industry, profit and success aren't driven exclusively by having a better product; they're dictated largely by having a product that is less expensive to manufacture than what the competition offers at that price point. Recall all the major releases from the last 10 years and you'll see this is in fact the case.
This leads to one and only one outcome: the party that is winning isn't giving you their best.
I don't care if what Nvidia offers is enough; it could have been more. It boggles my mind why, as a consumer, you would be fine with that.
Does this say more about NVIDIA or more about AMD? On one hand, company A isn't able to put out a high-end product for gaming but does better in compute; company B, with its 'second grade' silicon, easily takes the performance crown and the performance-per-watt metric but falls behind in compute.
I mean, use the right tool for the job, right? Good for you, a self-described 1%. Clearly your choice is made for you. Again: use the right tool for the job.
Each one excels in different ways. ;)