That's right, pathetic.
35% is quite a lot, I would say. It's a big difference; maybe you just don't count well. For example, the 3060 Ti holds up very decently against the 3070, while the 4060 Ti next to the already dubious 4070 looks like a tattered piece of junk.
I don't count well? What's that supposed to mean?
5888 shaders on the 4070,
4352 shaders on the 4060Ti,
5888 / 4352 ≈ 1.35, which is about 35% more.
I didn't do the counting, a calculator did.
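For anyone who wants to redo that arithmetic for both generations, here's a quick Python sketch using the published CUDA core counts (3060 Ti: 4864, 3070: 5888, 4060 Ti: 4352, 4070: 5888; worth double-checking against the spec sheets):

```python
# Shader-count gap between the Ti card and the next tier up, per generation.
pairs = {
    "3070 vs 3060 Ti": (5888, 4864),
    "4070 vs 4060 Ti": (5888, 4352),
}

for name, (upper, lower) in pairs.items():
    gap = (upper / lower - 1) * 100
    print(f"{name}: {gap:.0f}% more shaders")

# Output:
# 3070 vs 3060 Ti: 21% more shaders
# 4070 vs 4060 Ti: 35% more shaders
```

That 21% vs 35% gap is exactly the widening per-tier difference being argued about here.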
If you're bothered by the performance you get for the tier name, then you're a bit late to the party. The entire community has been moaning about this shrinking silicon configuration per tier for years already. It gets mentioned by someone in almost every thread about Nvidia because it's such a hard pill to swallow. In this thread, it was me,
post #25.
The 4070 Ti Super got a wider memory bus, and what did that give it? Nothing, a whopping 8%. Which comes from the difference in those same cores.
It's power-limited. That's 8% more performance from 0% more power. Nvidia restricted the 4070TiS to the exact same 285W limit as the 4070Ti to keep the 4070TiS from stealing sales from the 4080.
If you look at TechPowerUp's reviews of the 4070TiS you'll see that the Gigabyte model with a max power limit of 320W (exactly the same as a 4080) gets a much better 19% improvement over the base 4070Ti performance, and is only 6% behind the 4080.
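To put the power-limit point in numbers, here's a back-of-the-envelope perf-per-watt calculation on those same figures (the 1.08x and 1.19x uplifts are the review numbers quoted above, not fresh measurements):

```python
# Rough perf-per-watt arithmetic, treating the 4070 Ti at 285 W as the 1.00 baseline.
baseline_power = 285  # W, shared by the 4070 Ti and the reference 4070 TiS

cards = {
    "4070 TiS @ 285 W": (1.08, 285),  # reference model, +8% over 4070 Ti
    "4070 TiS @ 320 W": (1.19, 320),  # Gigabyte model, +19% over 4070 Ti
}

for name, (rel_perf, power) in cards.items():
    perf_per_watt = rel_perf / (power / baseline_power)
    print(f"{name}: {rel_perf:.2f}x perf, {perf_per_watt:.2f}x perf/W")

# Output:
# 4070 TiS @ 285 W: 1.08x perf, 1.08x perf/W
# 4070 TiS @ 320 W: 1.19x perf, 1.06x perf/W
```

Even with the power limit raised to 4080 levels, the TiS still comes out ahead on efficiency; the stock 285W cap is just leaving performance on the table.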
Also, if you look at the overall results by resolution (https://www.techpowerup.com/review/gigabyte-geforce-rtx-4070-ti-super-gaming-oc/32.html), you'll see that the 256-bit 4070TiS scales better with resolution than the 192-bit 4070Ti, since 4K needs more bandwidth and the 192-bit card starts to struggle there.
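For reference, the bus-width difference translates directly into peak memory bandwidth. A quick sketch, assuming both cards run their GDDR6X at the stock 21 Gbps (spec numbers, worth double-checking):

```python
# Peak memory bandwidth behind the 192-bit vs 256-bit comparison.
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return (bus_width_bits / 8) * data_rate_gbps

print(f"4070 Ti  (192-bit): {bandwidth_gbs(192, 21):.0f} GB/s")  # 504 GB/s
print(f"4070 TiS (256-bit): {bandwidth_gbs(256, 21):.0f} GB/s")  # 672 GB/s, about 33% more
```

That extra ~33% of bandwidth is what keeps the TiS from falling off at 4K the way the 192-bit card does.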