Monday, March 18th 2019

NVIDIA GTC 2019 Kicks Off Later Today, New GPU Architecture Tease Expected

NVIDIA will kick off the 2019 GPU Technology Conference later today, at 2 PM Pacific time. The company is expected to either tease or unveil a new graphics architecture succeeding "Volta" and "Turing." Not much is known about this architecture, but it is highly likely to be NVIDIA's first designed for the 7 nm silicon fabrication process. This unveiling could be the earliest stage of the architecture's launch cycle, which could see market availability only by late 2019 or mid-2020, if not later, given that the company's RTX 20-series and GTX 16-series were only recently unveiled. NVIDIA could leverage 7 nm to increase transistor densities and bring its RTX technology to even more affordable price points.

99 Comments on NVIDIA GTC 2019 Kicks Off Later Today, New GPU Architecture Tease Expected

#26
ratirt
Vayra86Yeah you got that right, but now you need to still factor in the actual cost of moving fabs to a smaller node, adjusting the processes and machinery etc. And then all you've got is the same product that uses a bit less power - and has headroom for further improvement. That on its own is not enough to compete. You go smaller so you can go 'bigger' :)
Yes, I agree with that; I just wanted to make a point about die size and yields. On the other hand, if what you say is true, AMD or NV wouldn't go that direction but would improve upon the current manufacturing process until they hit the limit. At the beginning it might be more expensive, but over time both companies cut costs with new, smaller nodes. That's also what they are after, and of course they can go bigger and even faster. I think it all goes hand in hand: smaller, less power, faster, and of course they can go bigger, which gives even more performance. This gives the companies the possibility to go bigger (in comparison to the 12 or 16 nm nodes), and they will need to evaluate how big they can reasonably go on the 7nm node.
Posted on Reply
#27
Vayra86
ratirtOn the other hand, if what you say is true, AMD or NV wouldn't go that direction but would improve upon the current manufacturing process until they hit the limit.
They did! 28nm was dragged out long past its expiry date, and Pascal was little more than a shrink of Maxwell with some bonus features (that is why it was dubbed "Paxwell"). 12nm is just 16++ in a sense, and on this node Nvidia has already pushed die sizes to an absolute maximum with the 2080 Ti. On the other side of the fence, AMD's Vega was already scaled beyond sensible power/temp targets, and prior to it, Fury X also maxed out 28nm - it even required HBM and watercooling to round it off.
Posted on Reply
#28
londiste
Current state of things indicates that 7nm is not ready for prime time with big dies like GPUs. Vega10 at 14nm and TU104 at 12nm are about 55-60% larger than Vega20 at 7nm. If 7nm is more than 50% more expensive (taking both manufacturing costs and yields into account), it does not make financial sense to go for a shrink. I did a quick Google search but could not find the AMD slide I mentioned earlier, but twice the cost for a 250 mm² die at the end of 2017 is cause for some pessimism. Vega20 is already a third larger than that. I would expect Vega20's production cost to be about on par with the same GPU (with a 55-60% larger die) made on 12nm today. For AMD, the choice to shrink was a no-brainer - they really need the power efficiency, and judging from Radeon VII, the choice was justified. Nvidia felt they did not need that for Turing and stayed on 12nm for now.

This will change in time. Or perhaps it already has, considering the time it takes to develop and manufacture a GPU.
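
As a rough illustration of the cost argument above, the sketch below estimates cost per good die from wafer price, die area, and a simple Poisson defect-yield model. The wafer prices and defect densities are invented purely for illustration (real AMD/TSMC figures are not public); only the approximate die areas for Vega20 (~331 mm²) and Vega10 (~495 mm²) come from public spec listings.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Standard approximation: gross dies per wafer minus edge losses."""
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    return int(wafer_area / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Simple Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)

def cost_per_good_die(die_area_mm2, wafer_cost_usd, defect_density_per_cm2):
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defect_density_per_cm2)
    return wafer_cost_usd / good_dies

# Hypothetical inputs: a 7nm wafer at twice the price and a higher defect
# density than a mature 12/14nm wafer. Die areas approximate Vega20 (7nm)
# and Vega10 (14nm); the cost and defect figures are made up for illustration.
print(f"7nm, 331 mm²:  ${cost_per_good_die(331, 12000, 0.30):.0f} per good die")
print(f"14nm, 495 mm²: ${cost_per_good_die(495, 6000, 0.10):.0f} per good die")
```

With these particular made-up inputs, the smaller 7nm die still comes out roughly twice as expensive per good die, which is the trade-off the post describes; better 7nm yields or lower wafer prices would flip the conclusion.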
Posted on Reply
#29
ratirt
Vayra86They did! 28nm was dragged out long past its expiry date, and Pascal was little more than a shrink of Maxwell with some bonus features (that is why it was dubbed "Paxwell"). 12nm is just 16++ in a sense, and on this node Nvidia has already pushed die sizes to an absolute maximum with the 2080 Ti. On the other side of the fence, AMD's Vega was already scaled beyond sensible power/temp targets, and prior to it, Fury X also maxed out 28nm - it even required HBM and watercooling to round it off.
Ok. So you think that 12nm is already maxed out since NV is going 7nm? I think there's still headroom, and yet NV is moving to 7nm. The question is: is it because AMD is doing it, or do they have doubts about 12nm yields or performance gains? The crown for the first 7nm chip has already gone to AMD, so what's the point in rushing it?
Posted on Reply
#30
londiste
ratirtOk. So you think that 12nm is already maxed out since NV is going 7nm? I think there's still headroom, and yet NV is moving to 7nm. The question is: is it because AMD is doing it, or do they have doubts about 12nm yields or performance gains? The crown for the first 7nm chip has already gone to AMD, so what's the point in rushing it?
12nm is maxed out. TU102 is 754 mm², very close to TSMC's reticle limit. They simply cannot manufacture a larger die without workarounds (and very considerable additional cost). GV100 was a bit larger at 815 mm², but that is at the very limit, and cards with these chips sold for $3k at the cheapest. If Nvidia wants more transistors, they have to go to 7nm.

The performance gain is an open question. For Nvidia GPUs on 16/14nm, the efficiency curve goes to hell at a little over 2GHz; 12nm gets to about 2.1GHz. We really do not know whether that is a process limit or an architecture limit, but it is probably a bit of both. AMD's Vegas get power-limited very fast but seem to have gained a couple hundred MHz from the shrink. That is not bad, but not that much either. Power consumption will go down noticeably, which is a good thing, but is it good enough by itself?
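
To put the reticle-limit point above in perspective, here is a small sketch of how defect-limited yield scales with die area on a single node. The die areas are from public spec listings; the ~858 mm² single-exposure reticle limit is the commonly cited figure, and the defect density is a made-up illustrative number. The simple Poisson model also ignores harvesting into cut-down SKUs, so treat the yields as directional only.

```python
import math

RETICLE_LIMIT_MM2 = 26 * 33   # commonly cited ~858 mm² single-exposure limit
D0_PER_CM2 = 0.10             # hypothetical defect density for a mature node

def poisson_yield(area_mm2: float, d0_per_cm2: float = D0_PER_CM2) -> float:
    """Y = exp(-A * D0): defect-limited yield falls exponentially with die area."""
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

for name, area_mm2 in [("TU106", 445), ("TU104", 545), ("TU102", 754), ("GV100", 815)]:
    print(f"{name} {area_mm2} mm²: {area_mm2 / RETICLE_LIMIT_MM2:5.1%} of reticle, "
          f"est. yield {poisson_yield(area_mm2):5.1%}")
```

Every extra square millimetre both eats reticle headroom and compounds the yield hit, which is why going past TU102/GV100 on 12nm is not a realistic option.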
Posted on Reply
#31
Vayra86
ratirtOk. So you think that 12nm is already maxed out since NV is going 7nm? I think there's still headroom, and yet NV is moving to 7nm. The question is: is it because AMD is doing it, or do they have doubts about 12nm yields or performance gains? The crown for the first 7nm chip has already gone to AMD, so what's the point in rushing it?
Maxed as in: has no future for their product stack. Look at this die shot

www.techpowerup.com/gpu-specs/geforce-rtx-2080-ti.c3305

And this is using 250W or more, so it is also TDP 'capped' for the GeForce range. (Historically; never say never.)

The headroom that does remain simply isn't enough to push out another gen or refresh. There is too little to gain, and it would result in even larger dies, which also means higher power consumption. That won't work for the 2080 Ti without extreme measures in cooling or otherwise. So where does that leave all the products below it? They have nowhere to reposition to...

As for AMD, they simply have to jump straight to 7nm, because Vega 64 already touches 300W and has the exact same problem. And for them, that is even with HBM already implemented, which is marginally more power efficient than conventional memory.
Posted on Reply
#32
Fluffmeister
londisteCurrent state of things indicates that 7nm is not ready for prime time with big dies like GPUs. Vega10 at 14nm and TU104 at 12nm are about 55-60% larger than Vega20 at 7nm. If 7nm is more than 50% more expensive (taking both manufacturing costs and yields into account), it does not make financial sense to go for a shrink. I did a quick Google search but could not find the AMD slide I mentioned earlier, but twice the cost for a 250 mm² die at the end of 2017 is cause for some pessimism. Vega20 is already a third larger than that. I would expect Vega20's production cost to be about on par with the same GPU (with a 55-60% larger die) made on 12nm today. For AMD, the choice to shrink was a no-brainer - they really need the power efficiency, and judging from Radeon VII, the choice was justified. Nvidia felt they did not need that for Turing and stayed on 12nm for now.

This will change in time. Or perhaps it already has, considering the time it takes to develop and manufacture a GPU.
Voila!

www.techpowerup.com/forums/threads/amd-radeon-vii-detailed-some-more-die-size-secret-sauce-ray-tracing-and-more.251444/post-3974556
Posted on Reply
#33
EarthDog
cucker tarlsonThey need more RTX games, it's been just two and one is an MP shooter. This technology will be dead if this continues.
They aren't really behind. If you recall, when these were released, IIRC, the only things slated for 2018 were BF V and SOTR, with the latter's RT being delayed. We should see more this year... the sooner the better.
Posted on Reply
#34
Space Lynx
Astronaut
CrackongSo RTX 2000 series obsolete in less than a year?
no... gtx 1080 ti and rtx 2080 ti were 2 1/2 years apart from launch dates... Nvidia knows silicon has its limits so they probably will move rtx 2080 ti to a 3 year cycle... unless competition arrives, but it won't so....
Posted on Reply
#35
londiste
Nvidia has other things they can talk about. Ampere. RTX. Whatever piece of software they want to hype, probably directed at devs, not consumers.

Turing is still being ramped up. Or down, in this case, as TU116 is true midrange material. Announcing an RTX 3000 series now would be unexpected. Even if RTX 2000 turns out to be a one-year thing, an RTX 3000 announcement is more likely to come at Gamescom in August.

Eventually though, it depends on what AMD has up its sleeve. If AMD comes out with a competitive enough Navi in August, as rumors currently say, Nvidia will need to answer. AMD and Nvidia know pretty well what the other is working on and generally have a pretty good idea of what the other will announce and release. There are details like final clocks and final performance that they have to estimate, but those estimates are not far off.
Posted on Reply
#36
AmioriK
Welp. Didn't expect them to potentially announce a Turing successor so soon, lol. I was thinking next-gen would be a 'Turing refresh' on 7nm, using the increased density to throw more CUDA cores at the problem and ramp up the clocks a bit. Turing, IMO, is a fantastic uArch with some very forward-looking features (dual INT/FP execution could be huge; look at games like Wolfenstein 2 and how they run on Turing cards). Anyway, I never thought the 20 series was a good investment. They are great cards, and if you've got the cash, sure, get one. But you are paying a significant early-adopter tax, not that it matters to real hardware enthusiasts though.

Either way, with my budget of 200 quid or less on a GPU, the anxiety of buyer's remorse due to this is largely diminished (much lower than if I had spent £1000+ on a 2080 Ti, lol). The 1660 I just bought is a great little card; I will most definitely be keeping it till 7nm comes in at the same price point and gives me 2x perf/watt in F@H^^
Posted on Reply
#37
Vayra86
EarthDogThey aren't really behind. If you recall, when these were released, IIRC, the only things slated for 2018 were BF V and SOTR, with the latter's RT being delayed. We should see more this year... the sooner the better.
As much as I understand that these things take time, the initial hype has already died down, and for a new tech that needs adoption, that is not a good thing.

Ironically, there seems to be a bit more hype surrounding the recent CryEngine demo. And that is not just me looking through my tinted glasses... we also know Crytek is a studio that isn't in the best position at this time, and they could just be fishing for exposure. But even so, their vision of the ray-traced future looks a whole lot better IMO.
Posted on Reply
#38
kings
GTC is a conference focused on AI and ML. Pretty sure nothing will be said regarding gaming cards!
Posted on Reply
#39
Moldysaltymeat
CrackongSo RTX 2000 series obsolete in less than a year?
RTX 2000 series was obsolete from the time the first benchmarks launched. Way overpriced with little performance benefit over Pascal. Not to mention that the flagship, the $1,200 2080 Ti, was dying on people shortly after launch. This RTX experiment was courageous, but it caused a massive loss in share value. Investors are demanding an upgrade to Pascal for a cost-to-performance ratio that makes sense.
Posted on Reply
#40
Vya Domus
londisteEventually though, it depends on what AMD has up its sleeve. If AMD comes out with a competitive enough Navi in August, as rumors currently say, Nvidia will need to answer. AMD and Nvidia know pretty well what the other is working on and generally have a pretty good idea of what the other will announce and release. There are details like final clocks and final performance that they have to estimate, but those estimates are not far off.
No high-end part from AMD is in sight, so it will all come down to whether or not the RTX series is selling well enough. They have no reason to compete with their own products; I suspect Turing was a money pit and they will likely not let go of it any time soon.
Posted on Reply
#41
Naito
Vya DomusI suspect Turing was a money pit and they will likely not let go of it any time soon.
This. It'll probably be a shrink and tweak at most. Would be surprised to see anything more at this stage, if we see anything at all.
Posted on Reply
#42
64K
lynx29no... gtx 1080 ti and rtx 2080 ti were 2 1/2 years apart from launch dates... Nvidia knows silicon has its limits so they probably will move rtx 2080 ti to a 3 year cycle... unless competition arrives, but it won't so....
According to the GPU database here the 1080 Ti and 2080 Ti launches were 1 1/2 years apart.
Posted on Reply
#43
MetXallica
Until a card comes out with 2x the performance of the 1080 Ti, I'm waiting. Upgrading is a waste of money unless you can at least double your performance. I miss the golden age of computing, when CPUs and GPUs at least doubled in performance every generation and prices weren't so ridiculous...
Posted on Reply
#44
jabbadap
Yeah, it's GTC, so a successor to Volta at most. Maybe some new bits about the Tegra Orin automotive part too. Vega20 is quite a compelling HPC compute GPU on paper, so perhaps Nvidia focuses its next 7nm effort on a full-FP64 GPU. GV100 is still a two-year-old GPU that got an updated memory config a year ago. Not to mention it lacks some of the mixed-precision compute that Turing has.
Posted on Reply
#45
londiste
64KAccording to the GPU database here the 1080 Ti and 2080 Ti launches were 1 1/2 years apart.
...
GTX 980 - September 2014
GTX 980Ti - June 2015
GTX 1080 - May 2016
GTX 1080Ti - March 2017
RTX 2080Ti - September 2018
Posted on Reply
#46
Vayra86
64KAccording to the GPU database here the 1080 Ti and 2080 Ti launches were 1 1/2 years apart.
I think it's more accurate to use the Gx104 to compare generation release moments. Nvidia plays around with its big die and launches it at the time it will have the greatest impact - or not at all (Kepler). For Turing, that was right at the beginning, as it was the only card that offered a performance gain over Pascal. But for Pascal, it was right at the end, because the rest of the stack could carry everything fine.

The bottom line doesn't really change: we've been looking at 'Pascal performance' for far too long now, and Turing barely changes that.
Posted on Reply
#47
64K
Vayra86I think it's more accurate to use the Gx104 to compare generation release moments. Nvidia plays around with its big die and launches it at the time it will have the greatest impact - or not at all (Kepler). For Turing, that was right at the beginning, as it was the only card that offered a performance gain over Pascal. But for Pascal, it was right at the end, because the rest of the stack could carry everything fine.

The bottom line doesn't really change: we've been looking at 'Pascal performance' for far too long now, and Turing barely changes that.
Good point, but I disagree about the big-die Kepler. They eventually did release the 780 Ti, which had more cores than the Titan and was faster than it until the Titan Black was released, but the 780 Ti wasn't good for compute.

I am of the opinion, though I can't prove it, that if Turing didn't need die space for the RT and Tensor cores, then there would have been more CUDA cores and we would have seen the kind of performance increase Pascal had over Maxwell.
Posted on Reply
#48
wolf
Better Than Native
MoldysaltymeatRTX 2000 series was obsolete from the time the first benchmarks launched. Way overpriced with little performance benefit over Pascal. Not to mention that the flagship, the $1,200 2080 Ti, was dying on people shortly after launch. This RTX experiment was courageous, but it caused a massive loss in share value. Investors are demanding an upgrade to Pascal for a cost-to-performance ratio that makes sense.
Radeon 7 was obsolete from the time the first benchmarks launched. Way overpriced with little performance benefit over Vega. Not to mention that it's noisy, power hungry and can't consistently match the product it competes against. This MI50 experiment was courageous, but the cards are sold for virtually no profit. Investors are demanding an upgrade to GCN for a cost-to-performance ratio that makes sense.

Fixed.
Posted on Reply
#49
Vya Domus
wolfInvestors are demanding an upgrade to GCN for a cost-to-performance ratio that makes sense.
Your comment is cute.

Meanwhile, what has actually happened in the real world (launch dates of both the 20 series and Radeon 7 marked):

[chart not reproduced]

Regardless, nice try.
Posted on Reply
#50
P4-630
kingsGTC is a conference focused on AI and ML. Pretty sure nothing will be said regarding gaming cards!
This^^
Posted on Reply