I wonder how big the die-size difference between the 5700 XT and the 2080 Ti is?
31mm x 25mm for the 2080 Ti, but I can't seem to find dimensions for the 5700 XT other than 251 mm².
I was just reading an article that Big Navi could have as much as 16 billion transistors, up from the 10.3 billion in the 5700 XT.
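A quick sanity check on those numbers: if Big Navi kept the same transistor density as the 5700 XT (both figures taken from the posts above; the real density will differ, so this is only a rough sketch):

```python
# Rough estimate: if Big Navi kept the 5700 XT's transistor density,
# how big would a 16-billion-transistor die be?
navi10_transistors = 10.3e9   # 5700 XT transistor count (from the article cited above)
navi10_area_mm2 = 251.0       # 5700 XT die size in mm²
big_navi_transistors = 16e9   # rumoured Big Navi transistor count

density = navi10_transistors / navi10_area_mm2       # transistors per mm²
big_navi_area_mm2 = big_navi_transistors / density   # implied die size

print(f"{big_navi_area_mm2:.0f} mm²")
```

That lands at roughly 390 mm² at iso-density, before any architectural or process changes.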
The more interesting question, I believe, is how big a 7nm Turing part (with RT) with performance equivalent to the 5700 XT would be. I believe I saw that calculation somewhere at some point... bear with me....
EDIT: can't find it again. But if we consider a die-area reduction of around 50%... which is generous, because we've seen 60% as well for 7nm EUV... ballpark half the size. That puts the ~751 mm² die at a comfortable ~375 mm² for 2080 Ti performance. I reckon they can make do with about 200-240 mm² for 5700 XT equivalents.
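The shrink math above can be sketched in a few lines (the 50-60% area reductions are the ballpark figures from this post, not vendor data, and the 751 mm² starting point is the figure used above):

```python
# Back-of-envelope die shrink: scale a ~751 mm² Turing die (2080 Ti class)
# by an assumed area reduction from moving to 7nm.
turing_area_mm2 = 751.0  # starting die size used in the post above

def shrunk_area(area_mm2: float, area_reduction: float) -> float:
    """Die area after shrinking by the given fractional area reduction."""
    return area_mm2 * (1.0 - area_reduction)

for reduction in (0.5, 0.6):
    print(f"-{reduction:.0%} area -> {shrunk_area(turing_area_mm2, reduction):.0f} mm²")
```

A 50% reduction gives ~375 mm², a 60% reduction ~300 mm², which is where the "comfortable 375" ballpark comes from.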
It's not far off... but the 5700 XT WITHOUT RT already weighs in at 251 mm². So it's a bit of a stretch: what RDNA2 will need to do is gain a bit of performance AND add RT at a similar die size. That's a pretty big assignment for architecture alone. The outlook is that, once again, Nvidia will be getting more chips out of a wafer here, and that is even in a worst-case scenario where Ampere is no improvement over Turing.
Transistor count isn't really the right metric, because the cards don't share a feature set or a node. It's similar to TFLOPS: you can't compare outside the same generation. But die size vs. overall performance is universal.
He really painted himself into a corner there: first stating that AMD came to 7nm from a vastly inferior node to Nvidia's current 12nm, then claiming the latter is the same as 16nm, lol.
I do sometimes wonder how it must be to be this out of touch.
In November 2013, TSMC became the first foundry to begin 16nm Fin Field Effect Transistor (FinFET) risk production. In addition, TSMC became the first foundry that produced the industry's first 16nm FinFET fully functional networking processor for its customer. Following the success of its 16nm...
www.tsmc.com
I wonder why they don't talk about 12nm and 16nm separately, and why it's always referred to as "16/12nm". Hmm, no particular reason, probably.
Context... I mean, I hope you can see I'm at least not here to ridicule you or your statements, but rather to provide insight and argumentation... but the above is putting your head in the sand, is it not? It's okay to admit we were wrong sometimes. Happens to me all the time... still alive and kicking.
I also underlined how you missed this with the shot of the VII versus Vega in perf/watt gaps. Please don't respond saying 'but the VII is larger, so more efficient'.
But, more substance, because I like that: here is another pointer suggesting Nvidia will do just a little bit more than shrink.
Nvidia could take an off-the-shelf process node, but sees no need as its 12nm chips still outperform AMD's 7nm GPUs
www.pcgamesn.com
I'm not going to say everything Huang says is a golden rule, but... there's that.