Monday, March 18th 2019

NVIDIA GTC 2019 Kicks Off Later Today, New GPU Architecture Tease Expected

NVIDIA will kick off the 2019 GPU Technology Conference later today, at 2 PM Pacific time. The company is expected to either tease or unveil a new graphics architecture succeeding "Volta" and "Turing." Not much is known about this architecture, but it's highly likely to be NVIDIA's first to be designed for the 7 nm silicon fabrication process. This unveiling could be the earliest stage of the architecture's launch cycle, which could see market availability only by late-2019 or mid-2020, if not later, given that the company's RTX 20-series and GTX 16-series were only unveiled recently. NVIDIA could leverage 7 nm to increase transistor densities and bring its RTX technology to even more affordable price points.

99 Comments on NVIDIA GTC 2019 Kicks Off Later Today, New GPU Architecture Tease Expected

#1
cucker tarlson
They need more RTX games, it's been just two and one is an MP shooter. This technology will be dead if this continues.
#2
Crackong
So RTX 2000 series obsolete in less than a year?
#3
kastriot
It's a trend, like new mobile phones every 1-2 years, milking customers more frequently.
#4
sutyi
Crackong: So RTX 2000 series obsolete in less than a year?
They rolled out Turing last October on TSMC 12nm, which is more like 16nm++, and we basically knew that 7nm would be available for mass production from 2019Q1, more or less. I personally did not see the point of launching the new RTX series on this node. I think they did it because they knew they could and AMD had nothing to counter with, since they are all-in on 7nm.

After the Turing launch we've seen some performance gains, but no better performance per dollar and meh RTX performance. I've even told everyone who isn't building a completely new rig, or who is already sitting on a GTX 1070 or faster GPU, not to bother buying Turing, unless you are a super enthusiast, but even then an RTX 2080 Ti at that price... yikes.

So if they announce an RTX 3000 series on 7nm Ampere with a boatload more CCs and 2nd-generation RT logic today, it will be funny to see the reactions throughout the tech press and tech forums.
#5
Ravenlord
Crackong: So RTX 2000 series obsolete in less than a year?
Like I expected. It was known that 7nm production for a wide range of products from many companies would launch around mid-year. The RTX 2000 series was a simple placeholder on improved 12nm to milk as many people as possible; if 7nm had come faster we probably wouldn't even have seen this RTX 2000 on 12nm. Delays of 7nm and some AMD firecrackers forced them to release new GPUs. Now we will see either a real new GPU generation, or bullshit - the same shrunk GPU with a new name.
#6
londiste
Ravenlord: Like I expected. It was known that 7nm production for a wide range of products from many companies would launch around mid-year.
An AMD presentation from December 2017 noted that a 250 mm² chip at 7nm costs twice as much as a 250 mm² chip at 12nm. Hopefully this has improved by now.
sutyi: I personally did not see the point of launching the new RTX series on this node. I think they did it because they knew they could and AMD had nothing to counter with, since they are all-in on 7nm.
As with any new technology, they need to get around the chicken-or-egg problem: RTRT games/software/APIs and something to support RTRT in hardware. Whether the new hardware implementation is technically good or bad, the first generation usually does not succeed because the games are not there, and the games will not be there if there is no hardware.
#7
Vayra86
I do wonder, how many of those leather jackets does Huang have? I imagine a wardrobe that contains only that item.
#8
64K
I'm expecting good things from Nvidia on 7nm. Hopefully the prices will be more reasonable as well.

I wish we could squeeze some info out of Intel as to where they are heading too. AFAIK they are still planning a gaming GPU launch sometime next year.
Vayra86: I do wonder, how many of those leather jackets does Huang have? I imagine a wardrobe that contains only that item.
There can be only one. :)
#9
PerfectWave
I like the word "affordable" from NVIDIA LUL
#10
ratirt
londiste: An AMD presentation from December 2017 noted that a 250 mm² chip at 7nm costs twice as much as a 250 mm² chip at 12nm. Hopefully this has improved by now.
As with any new technology, they need to get around the chicken-or-egg problem: RTRT games/software/APIs and something to support RTRT in hardware. Whether the new hardware implementation is technically good or bad, the first generation usually does not succeed because the games are not there, and the games will not be there if there is no hardware.
I don't think that's totally correct. Shrinks bring better yields: you get more chips on one wafer, with fewer of them affected by defects. This means it is cheaper, or at least supposed to be cheaper.
#11
yeeeeman
ratirt: I don't think that's totally correct. Shrinks bring better yields: you get more chips on one wafer, with fewer of them affected by defects. This means it is cheaper, or at least supposed to be cheaper.
Shrinks are done via improvements to the optics, materials, etc. These cost a fortune and are usually not very reliable in the beginning. Hence the much higher cost of the same chip size on a newer process, for a chip that sells for the same amount of money as the previous generation.
#12
renz496
sutyi: They rolled out Turing last October on TSMC 12nm, which is more like 16nm++, and we basically knew that 7nm would be available for mass production from 2019Q1, more or less. I personally did not see the point of launching the new RTX series on this node. I think they did it because they knew they could and AMD had nothing to counter with, since they are all-in on 7nm.

After the Turing launch we've seen some performance gains, but no better performance per dollar and meh RTX performance. I've even told everyone who isn't building a completely new rig, or who is already sitting on a GTX 1070 or faster GPU, not to bother buying Turing, unless you are a super enthusiast, but even then an RTX 2080 Ti at that price... yikes.

So if they announce an RTX 3000 series on 7nm Ampere with a boatload more CCs and 2nd-generation RT logic today, it will be funny to see the reactions throughout the tech press and tech forums.
I can guarantee you with absolute confidence there will be no RTX 3000-series talk at this event.
#13
londiste
ratirt: I don't think that's totally correct. Shrinks bring better yields: you get more chips on one wafer, with fewer of them affected by defects. This means it is cheaper, or at least supposed to be cheaper.
A shrink, in the early part of a smaller process's life, brings lower yields. Factual data on yields is hard to come by, but usually, in the first year of mass production, yields are noticeably worse than on an old, well-matured process.

In addition to that, the cost of using a smaller process has been increasing over the last few generations. A few process steps ago, producing a chip on a new, smaller process cost close to the same as on the old one, automatically bringing better cost efficiency (and mostly lower prices to consumers along with it). This was not exactly the case with the shrink from 22nm to 16nm, and the cost difference between 16/14/12nm and 7nm is even worse. A smaller process is still worth it for its performance and especially its power efficiency, but not necessarily for cost.

Also, yields and manufacturing costs do not scale linearly with die size. AMD's slide was for 250 mm². The current 7nm flagship GPU, Vega 20 on the Radeon VII and MI cards, is a little over 30% larger than that example at 331 mm². There is a reason it competes in price with TU104, which is 545 mm² at 12nm.
Edit: Just to be clear, the intention was not to compare the Radeon VII and RTX 2080 or start a discussion on that. Both GPUs in them, Vega 20 and TU104, have 13.x billion transistors and about the same compute performance. They are as good a comparison for 12nm vs 7nm as we are going to get right now.
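For anyone who wants to play with the numbers, here is a minimal back-of-the-envelope sketch of that cost argument using the die sizes above. The defect densities and wafer prices in it are purely illustrative assumptions, not published figures.

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Common approximation: wafer area / die area, minus an edge-loss term.
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    # Simple Poisson yield model: Y = exp(-A * D0), with A in cm^2.
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

def cost_per_good_die(die_area_mm2, defects_per_cm2, wafer_cost_usd):
    good_dies = gross_dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_cm2)
    return wafer_cost_usd / good_dies

# Die areas are the ones quoted above; the defect densities and wafer prices
# are made-up values standing in for a young 7nm node vs a mature 12nm node.
vega20_like = cost_per_good_die(331, defects_per_cm2=0.20, wafer_cost_usd=10000)
tu104_like = cost_per_good_die(545, defects_per_cm2=0.10, wafer_cost_usd=5500)
print(f"331 mm^2 die on a young 7nm node:  ~${vega20_like:.0f} per good die")
print(f"545 mm^2 die on a mature 12nm node: ~${tu104_like:.0f} per good die")
```

With these made-up inputs the 331 mm² chip on the young, pricier node lands in the same ballpark per good die as the much larger chip on the mature node; nudge the defect density or wafer price and the balance tips either way, which is why a shrink alone does not guarantee a cheaper chip.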
#14
renz496
ratirt: I don't think that's totally correct. Shrinks bring better yields: you get more chips on one wafer, with fewer of them affected by defects. This means it is cheaper, or at least supposed to be cheaper.
In the past that might have been true. The reason NVIDIA did not wait for 7nm and used a "refined" 16nm process instead is the cost issue. In fact they already talked about this more extensively during the 28nm generation. Going forward, shrinking does not guarantee lower cost, because the process itself has become very expensive due to how hard it is to get right.
#15
Naito
I'll stick with my 1070 Tis for now, thanks - not overly impressed by the RTX series when factoring in the cost. Not enough games to sway me either...
#16
Robcostyle
Crackong: So RTX 2000 series obsolete in less than a year?
It IS already - the World War Z devs announced Radeon Rays support. Can't wait for more AAA announcements.
#17
64K
londiste: A shrink, in the early part of a smaller process's life, brings lower yields. Factual data on yields is hard to come by, but usually, in the first year of mass production, yields are noticeably worse than on an old, well-matured process.
In addition to that, the cost of using a smaller process has been increasing over the last few generations. A few process steps ago, producing a chip on a new, smaller process cost close to the same as on the old one, automatically bringing better cost efficiency (and mostly lower prices to consumers along with it). This was not exactly the case with the shrink from 22nm to 16nm, and the cost difference between 16/14/12nm and 7nm is even worse. A smaller process is still worth it for its performance and especially its power efficiency, but not necessarily for cost.
That's what I've heard as well. The GPUs are getting more and more expensive to manufacture as the process node gets smaller.
#18
ebivan
Bringing RTX to an even more affordable price point.... lol
#20
tvamos
Vayra86: I do wonder, how many of those leather jackets does Huang have? I imagine a wardrobe that contains only that item.
I like to think he has them labelled with Monday, Tuesday and so on. So at least 7?
#21
Vayra86
ratirt: I don't think that's totally correct. Shrinks bring better yields: you get more chips on one wafer, with fewer of them affected by defects. This means it is cheaper, or at least supposed to be cheaper.
No no. Smaller dies bring better yields. Shrinks do not necessarily: every time you go smaller, the margin for error decreases and the chance of errors increases, because there are more masking/lithography steps and accuracy needs to be higher.

Now, take a long look at Turing die sizes ;)
#22
medi01
btarunr: Not much is known about this architecture, but it's highly likely
Haha.

PS
Teasing the upcoming teasing.
64K: That's what I've heard as well. The GPUs are getting more and more expensive to manufacture as the process node gets smaller.
And in parallel, income skyrockets, curiously.
#23
ratirt
renz496: In the past that might have been true. The reason NVIDIA did not wait for 7nm and used a "refined" 16nm process instead is the cost issue. In fact they already talked about this more extensively during the 28nm generation. Going forward, shrinking does not guarantee lower cost, because the process itself has become very expensive due to how hard it is to get right.
Vayra86: No no. Smaller dies bring better yields. Shrinks do not necessarily: every time you go smaller, the margin for error decreases and the chance of errors increases, because there are more masking/lithography steps and accuracy needs to be higher.

Now, take a long look at Turing die sizes ;)
I hear what you guys are saying. Anyway, if you take a Turing die and move it from 12 to 7nm, keeping the same amount of cores, shaders etc., the die will be smaller and you'll get more of them on one wafer. That's roughly how I look at it. What it means for me is that you get the same performance (because it is the same chip) but it uses less power and is smaller due to the shrink.
Isn't it going that way?

Just to add: Turing die size, yes, I get it, but isn't Turing also faster than a 1080 Ti, for example? Not sure about the difference in die size between the two.
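As a rough sketch of that mental model: the numbers below assume a purely illustrative 0.6x area scaling factor for a 12nm-to-7nm shrink (the real factor depends on how much of the die is logic versus SRAM and I/O, which shrink far less).

```python
# TU104 is about 545 mm^2 on 12nm, the figure quoted earlier in the thread.
# The 0.6x area scaling for a hypothetical 12nm -> 7nm shrink of the same
# design is an assumption for illustration only.
area_12nm_mm2 = 545
scaling = 0.6
area_7nm_mm2 = area_12nm_mm2 * scaling

print(f"Hypothetically shrunk die: ~{area_7nm_mm2:.0f} mm^2")
# To first order, candidate dies per wafer scale with the inverse of die area,
# so the same design gives roughly 1/scaling times as many dies per wafer.
print(f"Dies per wafer: roughly {1 / scaling:.1f}x as many as at 12nm")
```

More candidate dies per wafer and lower power for the same performance, but whether the chip actually ends up cheaper still depends on the 7nm wafer price and early yields discussed above.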
#24
Fluffmeister
They will launch a high-end HPC/DL AI part first regardless; the Radeon VII competes with a 1080 Ti, so they have bigger fish to fry.
#25
Vayra86
ratirt: I hear what you guys are saying. Anyway, if you take a Turing die and move it from 12 to 7nm, keeping the same amount of cores, shaders etc., the die will be smaller and you'll get more of them on one wafer. That's roughly how I look at it. What it means for me is that you get the same performance (because it is the same chip) but it uses less power and is smaller due to the shrink.
Isn't it going that way?
Yeah, you got that right, but you still need to factor in the actual cost of moving fabs to a smaller node, adjusting the processes and machinery, etc. And then all you've got is the same product that uses a bit less power - and has headroom for further improvement. That on its own is not enough to compete. You go smaller so you can go 'bigger' :)