I agree - surely you too can see that it's near impossible to see the jumps forward some people speak of...
I'm a bit more conservative on that. But the specs really aren't that theoretical, and if you or anyone else really thinks it's plausible for Nvidia to massively increase IPC without touting it in the keynote (which they didn't; it was RTX and DLSS front to back), well... that's silly.
What we have is very very clear:
- overinflated and misleading Nvidia keynote statements on performance increases, placing Pascal next to Turing in tasks Pascal was never designed for
- higher TDP to cater for the additional resources on die (RT and tensor cores)
- a marginally smaller node (16 nm → 12 nm)
- subtle changes to the shaders themselves
- lower base and boost clocks
- only slightly higher shader counts
There is just no way this is going to massively outperform Pascal per shader. If you then factor in the price and shader counts, it's easy to arrive at conservative numbers. And it's wishful thinking to expect 50% performance increases - unless all you look at is the 2080 Ti next to a non-Ti 1080. But you can buy two 1080s at that price.
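To make that concrete, here's a rough back-of-envelope peak-FP32 comparison using the public reference spec sheets (shader counts and reference boost clocks; actual boost behavior varies by card, and raw TFLOPS is obviously not a direct proxy for game performance - treat it as a sanity check, not a benchmark):

```python
# Peak FP32 throughput estimate: 2 FLOPs per shader per clock (FMA)
# x shader count x boost clock. Reference spec-sheet values, not
# measured in-game clocks.
def peak_tflops(shaders, boost_ghz):
    return 2 * shaders * boost_ghz / 1000

gtx_1080    = peak_tflops(2560, 1.733)   # ~8.9 TFLOPS
gtx_1080_ti = peak_tflops(3584, 1.582)   # ~11.3 TFLOPS
rtx_2080_ti = peak_tflops(4352, 1.545)   # ~13.4 TFLOPS

print(f"2080 Ti vs 1080:    +{(rtx_2080_ti / gtx_1080 - 1) * 100:.0f}%")
print(f"2080 Ti vs 1080 Ti: +{(rtx_2080_ti / gtx_1080_ti - 1) * 100:.0f}%")
```

Even on paper, the ~50% jump only shows up against the non-Ti 1080; against the 1080 Ti it's closer to 20%, before assuming any per-shader gains.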
You're not the only one. But the real question should be: what's the point of such a big die just to cater for some fancy extra effects that are completely detrimental to overall performance in its primary use case? The hands-on with a 2080 Ti is telling: minor effects cause massive performance hits. Nvidia can only do this because they own the market, which brings us back to the no-AMD-competition point, just from a different angle.