The "Ti" versions are stop-gaps normally launched much later - RTX 3080 Ti in May 2021, while original RTX 3080 much earlier in 2020.
This lineup looks very strange: the performance estimates simply do not look right, and the stack is heavily rearranged / rebalanced - the RTX 4090 will pull ahead and the gap to the rest of the lineup will be huge.
RTX 4090 = RTX 3090 + 56% more shaders.
RTX 4080 = RTX 3080 + 18% more shaders.
RTX 4070 = RTX 3070 + 21% more shaders.
Meanwhile, the RTX 3090 was only ~10-15% faster on average than the RTX 3080, and the RTX 3080 was about 30% faster than the RTX 3070.
If the shaders do not bring higher IPC, then the RTX 4070 will end up about the same as, or slower than, the September 2020 10 GB RTX 3080.
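To make that concrete, here is a quick back-of-the-envelope sketch in Python. The Ampere performance gaps and rumoured shader deltas are the ones quoted above; linear scaling with shader count and the example clock gain are purely my own assumptions:

```python
# Rough check of the shader-count argument above.
# Shader deltas and Ampere perf gaps are from the post; linear scaling
# with shaders * clocks * IPC is an assumption (optimistic in practice).

ampere_rel_perf = {           # RTX 3070 = 1.00 baseline
    "RTX 3070": 1.00,
    "RTX 3080": 1.30,         # "3080 was 30% faster than 3070"
    "RTX 3090": 1.30 * 1.12,  # 3090 ~10-15% over 3080, take ~12%
}

shader_gain = {               # rumoured Ada vs Ampere shader increases
    "RTX 4070": 0.21,
    "RTX 4080": 0.18,
    "RTX 4090": 0.56,
}

def ada_estimate(model, ampere_base, clock_gain=0.0, ipc_gain=0.0):
    """Naive estimate: performance scales linearly with shaders, clocks, IPC."""
    return (ampere_rel_perf[ampere_base]
            * (1 + shader_gain[model])
            * (1 + clock_gain)
            * (1 + ipc_gain))

# With no IPC or clock gains, the 4070 lands below the 10 GB 3080:
print(ada_estimate("RTX 4070", "RTX 3070"))                    # ~1.21 < 1.30
# A hypothetical ~25% clock bump alone would push it clearly past the 3080:
print(ada_estimate("RTX 4070", "RTX 3070", clock_gain=0.25))   # ~1.51
```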
Usually yes, regarding when Ti versions launch, but there are exceptions - for example, the 3060 Ti launched 2 months and 1 week after the 3090, the 2080 Ti launched at the same time as the 2080, etc.
And after all, the model numbering could be different from the table you quoted. (We could have, for example, an RTX 4090 Ultra for a full AD102 version, who knows?)
Irrespective of model numbering, which can change, there are many examples (though not always) where the most cut-down part of a GPU die comes at an earlier stage of the GPU's lifecycle (concurrently, or with a small gap like the 3060 Ti). As yields improve over the lifetime of the product, either we get a refresh or, if not, the manufacturer can slightly limit the availability of the most cut-down part if it makes sense based on yields.
Another reason for AD103 not to be pushed too hard is potentially the SM count. Full AD102 (192 SM) will be served by a 384-bit bus and 24 Gbps GDDR6X (possibly in limited availability in order to support many SKUs). To have the same bandwidth per SM with 21 Gbps GDDR6X on a 256-bit bus, AD103 must be 112 SM instead of the more orthodox 128 SM. If the Ada design is bandwidth-limited (I think it will be, just like Pascal), there is no reason to push AD103's frequency too much, since the gains will be limited anyway - but this is just my speculation, we will see.
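For reference, here is the bandwidth-per-SM arithmetic behind that 112 SM figure, using the rumoured bus widths, memory speeds and SM counts from above (everything else is just standard GDDR bandwidth math):

```python
# Bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
def bandwidth_gbps(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

ad102_bw = bandwidth_gbps(384, 24)   # 1152 GB/s on full AD102
bw_per_sm = ad102_bw / 192           # 6 GB/s per SM

ad103_bw = bandwidth_gbps(256, 21)   # 672 GB/s on AD103
print(ad103_bw / bw_per_sm)          # 112 SMs keep the same bandwidth per SM
```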
On top of the specs, additional insight can come from looking at what Nvidia has historically done when launching a new lineup, and at what (lower) price points it brought last generation's top performance level.
Try to imagine Nvidia's CEO on stage announcing the $500 Ada part, correlate that with the corresponding Ada model/specs from your point of view, consider what performance that part must at least reach in order to generate the minimum buzz, and then extrapolate from there.
Without even taking the specs into account, how likely is it, in your view, for the $499 part (whatever the name) to be slower than the 3080? (Imo it will logically at least match the 3080 12GB.) And at that point, how much faster will Navi 33 be, if at all, since everybody is saying it will be slower than the 6900 XT at 4K due to its 128-bit bus (probably it will land around 6800 XT 4K performance?).
Yeah, the N4 process should be 50-60% faster than Samsung 8N at the same wattage.
Also, the clocks must be much higher. 2.5 GHz? 2.8 GHz AD103?
The question is - are the shaders smaller with lower IPC or the same size but slightly faster?
N4 has 63% clock speed increase potential for logic vs N16 according to TSMC, and N16 can hit very similar frequencies to Samsung's 8nm.
Logically, there will eventually be OC Ada models close to 3 GHz, if the Ada architecture is designed for high clock throughput.
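As a rough sanity check on that 3 GHz figure: taking a typical Ampere boost clock on Samsung 8N as the baseline (the ~1.9 GHz value is my own assumption, not from the post) and applying TSMC's quoted +63% logic clock potential gives roughly the same ballpark:

```python
# Very rough ceiling implied by TSMC's "+63% logic clocks vs N16" figure,
# under the assumption above that Samsung 8N clocks are roughly on par with N16.
ampere_boost_ghz = 1.9          # typical RTX 30-series boost on 8N (assumed)
n4_logic_clock_gain = 0.63      # TSMC's quoted potential vs N16

print(ampere_boost_ghz * (1 + n4_logic_clock_gain))   # ~3.1 GHz upper bound
```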