Monday, March 18th 2019
NVIDIA GTC 2019 Kicks Off Later Today, New GPU Architecture Tease Expected
NVIDIA will kick off the 2019 GPU Technology Conference later today, at 2 PM Pacific time. The company is expected to either tease or unveil a new graphics architecture succeeding "Volta" and "Turing." Not much is known about this architecture, but it's highly likely to be NVIDIA's first designed for the 7 nm silicon fabrication process. This unveiling could be the earliest stage of the architecture's launch cycle, which could see market availability only by late-2019 or mid-2020, if not later, given that the company's RTX 20-series and GTX 16-series have only been unveiled recently. NVIDIA could leverage 7 nm to increase transistor densities and bring its RTX technology to even more affordable price points.
99 Comments on NVIDIA GTC 2019 Kicks Off Later Today, New GPU Architecture Tease Expected
After the Turing launch we've seen some performance gains, but no improvement in performance per dollar, and mediocre RTX performance. I've been telling everyone who isn't building a completely new rig, or who is already on a GTX 1070 or faster GPU, not to bother buying Turing; unless you're a super-enthusiast, but even then, the RTX 2080 Ti at that price... yikes.
So if they announce an RTX 3000 series today, on 7 nm Ampere with a boatload more CUDA cores and second-generation RT logic, it will be funny to read and watch the reactions throughout the tech press and tech forums.
I wish we could squeeze some info out of Intel as to where they are heading, too. AFAIK they are still planning a gaming GPU launch sometime next year. There can be only one. :)
In addition to that, the cost of using a smaller process has been increasing over the last few generations. A few process steps ago, producing a chip on a new, smaller process cost close to the same as on the old one, automatically bringing better cost efficiency (and mostly lower consumer prices along with it). This was not quite the case for the shrink from 22 nm to 16 nm, and the cost difference between 16/14/12 nm and 7 nm is even worse. A smaller process is still worth it for its performance and especially power efficiency, but not necessarily cost.
Also, yields and manufacturing costs do not rise linearly with die size. AMD's slide was for a 250 mm² die. The current 7 nm flagship GPU, Vega 20 in the Radeon VII and MI cards, is a little over 30% larger than that example at 331 mm². There is a reason it competes in price with the 545 mm² TU104 built on 12 nm.
Edit: Just to be clear, the intention was not to compare the Radeon VII and RTX 2080 or start a discussion on that. Both GPUs in them, Vega 20 and TU104, have 13.x billion transistors and about the same compute performance. They are as good a 12 nm vs 7 nm comparison as we are going to get right now.
Now, take a long look at Turing die sizes ;)
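The density comparison above can be put into rough numbers. This is a back-of-the-envelope sketch using approximate public transistor counts for the two chips (13.2 billion for Vega 20 and 13.6 billion for TU104 are my assumed figures, not from the post above):

```python
# Rough transistor-density comparison: 7 nm Vega 20 vs 12 nm TU104.
# Transistor counts are approximate public specs, assumed for illustration.
vega20_transistors = 13.2e9   # Radeon VII (Vega 20), 7 nm
vega20_area_mm2 = 331

tu104_transistors = 13.6e9    # RTX 2080 (TU104), 12 nm
tu104_area_mm2 = 545

# Density in millions of transistors per mm^2
vega20_density = vega20_transistors / vega20_area_mm2 / 1e6
tu104_density = tu104_transistors / tu104_area_mm2 / 1e6

print(f"Vega 20: {vega20_density:.1f} MTr/mm^2")   # ~39.9
print(f"TU104:   {tu104_density:.1f} MTr/mm^2")    # ~25.0
print(f"Density ratio (7 nm vs 12 nm): {vega20_density / tu104_density:.2f}x")
```

So under these assumed figures, 7 nm packs roughly 1.6x the transistors per mm², which is why a much smaller die can land in the same price bracket as a big 12 nm one.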
PS
Teasing the upcoming teasing. And in parallel, income skyrockets, curiously.
Isn't this going that way?
Just to add, on Turing die size: yes, I get it, but isn't it also faster than a 1080 Ti, for example? Not sure about the difference in die size between the two.