Monday, March 18th 2019
NVIDIA GTC 2019 Kicks Off Later Today, New GPU Architecture Tease Expected
NVIDIA will kick off the 2019 GPU Technology Conference later today, at 2 PM Pacific time. The company is expected to either tease or unveil a new graphics architecture succeeding "Volta" and "Turing." Not much is known about this architecture, but it is highly likely to be NVIDIA's first designed for the 7 nm silicon fabrication process. This unveiling could be the earliest stage of the architecture's launch cycle, which could see market availability only by late 2019 or mid-2020, if not later, given that the company's RTX 20-series and GTX 16-series were unveiled only recently. NVIDIA could leverage 7 nm to increase transistor densities and bring its RTX technology to even more affordable price points.
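For a rough sense of what a 7 nm shrink could buy in transistor density, here is a minimal back-of-the-envelope sketch. It treats the node names as literal feature sizes, which they are not (they are largely marketing labels), so the result is an idealized upper bound rather than a real foundry figure:

```python
# Idealized (quadratic) area scaling between two process nodes.
# Node names are marketing labels, so this is an upper bound, not a real figure.

def ideal_area_scaling(old_nm: float, new_nm: float) -> float:
    """Ideal shrink factor: area scales with the square of the feature size."""
    return (old_nm / new_nm) ** 2

print(f"12 nm -> 7 nm ideal density gain: {ideal_area_scaling(12, 7):.2f}x")
# ~2.94x in the ideal case; foundries typically quote smaller real-world gains.
```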
99 Comments
This will change in time. Or perhaps has changed, considering the time it takes to develop and manufacture a GPU.
Performance gain is an open question. For Nvidia GPUs on 16/14 nm, the efficiency curve goes to hell a little over 2 GHz; 12 nm gets to about 2.1 GHz. We don't really know whether that is a process limit or an architecture limit, but it's probably a bit of both. AMD's Vegas get power-limited very fast but seem to have gained a couple hundred MHz from the shrink. That's not bad, but not that much either. Power consumption will go down noticeably, which is a good thing, but is that good enough by itself?
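To put numbers on why the efficiency curve goes to hell past a certain clock, here is a hedged sketch of the usual dynamic-power reasoning. It assumes the textbook relation P ∝ C·V²·f, plus voltage rising roughly linearly with frequency near the top of the curve, so power grows roughly with the cube of the clock; the 2.0 GHz baseline is an illustrative assumption, not a measured figure:

```python
# Sketch: why power scales much faster than clock speed near the limit.
# Assumes dynamic power P ~ C * V^2 * f with V rising linearly with f,
# which gives P ~ f^3. The 2.0 GHz baseline is an illustrative assumption.

def relative_power(f_ghz: float, f_base: float = 2.0) -> float:
    """Relative dynamic power vs. the baseline clock, assuming V scales with f."""
    return (f_ghz / f_base) ** 3

for f in (2.0, 2.1, 2.2, 2.3):
    print(f"{f:.1f} GHz: {f / 2.0:.2f}x clock for {relative_power(f):.2f}x power")
```

A ~5% clock bump past the knee costs ~16% more power under these assumptions, which is why a shrink that mostly lowers voltage helps more than chasing frequency.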
www.techpowerup.com/gpu-specs/geforce-rtx-2080-ti.c3305
And this is using 250 W or more, so it is also TDP-'capped' for the GeForce range (historically; never say never).
The headroom that does remain simply isn't enough to push out another generation or refresh. There is too little to gain, and it would result in even larger dies, which also means higher power consumption. That won't work for the 2080 Ti without extreme measures in cooling or otherwise. So where does that leave all the products below it? They have nowhere to reposition to...
As for AMD, they simply have to jump straight to 7 nm, because Vega 64 already touches 300 W and has the exact same problem. And that is even with HBM already implemented, which is marginally more power efficient than conventional memory.
www.techpowerup.com/forums/threads/amd-radeon-vii-detailed-some-more-die-size-secret-sauce-ray-tracing-and-more.251444/post-3974556
Turing is still being ramped up. Or down, in this case, as TU116 is true midrange material. Announcing an RTX 3000 series now would be unexpected. Even if RTX 2000 turns out to be a one-year thing, an RTX 3000 announcement is more likely at Gamescom in August.
Eventually, though, it depends on what AMD has up its sleeve. If AMD comes out with a competitive enough Navi in August, as rumors currently say, Nvidia will need to answer. AMD and Nvidia know pretty well what the other is working on and generally have a good idea of what the other will announce and release. There are details like final clocks and final performance that they have to estimate, but those estimates are not far off.
Either way, with my budget of 200 quid or less for a GPU, the anxiety of buyer's remorse over this is largely diminished (much lower than if I had spent £1000+ on a 2080 Ti, lol). The 1660 I just bought is a great little card; I will most definitely be keeping it till 7 nm comes in at the same price point and gives me 2x perf/watt in F@H ^^
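For what it's worth, that 2x perf/watt break-even is simple ratio arithmetic; the PPD and wattage below are made-up placeholders, not measured GTX 1660 figures:

```python
# Break-even check for a "2x perf/watt" upgrade rule in Folding@home terms.
# PPD (points per day) and wattage are assumed placeholders, not measurements.

def perf_per_watt(ppd: float, watts: float) -> float:
    """Points per day delivered per watt of board power."""
    return ppd / watts

current = perf_per_watt(ppd=400_000, watts=120)  # hypothetical GTX 1660 numbers
print(f"Current: {current:,.0f} PPD/W; upgrade-worthy at {2 * current:,.0f} PPD/W")
```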
Ironically, there seems to be a bit more hype surrounding the recent CryEngine demo. And that is not just me looking through my tinted glasses... we also know Crytek is a studio that isn't in the best position right now, and they could just be fishing for exposure. But even so, their vision of the ray-traced future looks a whole lot better, IMO.
GTX 980 - September 2014
GTX 980 Ti - June 2015
GTX 1080 - May 2016
GTX 1080 Ti - March 2017
RTX 2080 Ti - September 2018
Bottom line doesn't really change: we've been looking at 'Pascal performance' for far too long now, and Turing barely changes that.
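Running the dates above through a quick sketch shows the cadence (only month-level dates were given, so the 1st of each month is assumed):

```python
# Gap, in months, between each launch listed above.
# Day-of-month is not given in the list, so the 1st is assumed throughout.
from datetime import date

launches = [
    ("GTX 980", date(2014, 9, 1)),
    ("GTX 980 Ti", date(2015, 6, 1)),
    ("GTX 1080", date(2016, 5, 1)),
    ("GTX 1080 Ti", date(2017, 3, 1)),
    ("RTX 2080 Ti", date(2018, 9, 1)),
]

for (prev, d1), (nxt, d2) in zip(launches, launches[1:]):
    print(f"{prev} -> {nxt}: ~{(d2 - d1).days / 30.44:.0f} months")
```

The 1080 Ti to 2080 Ti gap comes out to roughly 18 months, nearly double the earlier cadence.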
I am of the opinion, though I can't prove it, that if Turing hadn't needed die space for the RT and Tensor cores, there would have been more CUDA cores, and we would have seen the kind of performance increase Pascal had over Maxwell.
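Reading that hypothetical as arithmetic, a minimal sketch might look like the following. The die-area fractions are illustrative guesses, not figures from any die-shot analysis; only the 4352 CUDA core count for the 2080 Ti comes from the spec sheet:

```python
# Hypothetical: reallocate Turing's RT/Tensor die area to more CUDA cores.
# The area fractions are guesses for illustration, not measured values.

def hypothetical_cuda_count(cores: int, rt_tensor_frac: float,
                            cuda_frac: float) -> int:
    """Core count if RT/Tensor area were spent on additional SMs instead."""
    return int(cores * (1 + rt_tensor_frac / cuda_frac))

# RTX 2080 Ti ships 4352 CUDA cores; assume ~15% of the die on RT/Tensor
# and ~50% on the SM/CUDA arrays (both fractions are guesses).
print(hypothetical_cuda_count(4352, rt_tensor_frac=0.15, cuda_frac=0.50))
# -> 5657 under these assumed fractions
```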
Fixed.
Meanwhile, what has actually happened in the real world (with the launch dates of both the 20 series and the Radeon VII marked):
Regardless, nice try.