Friday, December 30th 2022
NVIDIA France Accidentally Reveals GeForce RTX 4070 Ti Specs
With less than a week to go until the official launch of the GeForce RTX 4070 Ti, NVIDIA France has gone and spoiled things by revealing the official specs of the upcoming GPU. The French division of NVIDIA appears to have posted the full product page, but it has since been pulled. That didn't prevent Twitter leaker @momomo_us from snapping a couple of screenshots, including one of NVIDIA's official performance numbers.
There aren't any real surprises here though, as we already knew the CUDA core count and the memory size, courtesy of the RTX 4070 Ti having been the RTX 4080 12 GB until NVIDIA changed its mind. It's interesting to see that NVIDIA compares the RTX 4070 Ti to the RTX 3080 12 GB in the three official benchmarks, as it makes the RTX 4070 Ti look a lot better than it is in reality, at least based on the rumoured MSRP of US$800-900. One of the three benchmarks is Cyberpunk 2077 using ray tracing, where NVIDIA suggests the RTX 4070 Ti is around 3.5 times faster than the RTX 3080, but it's worth reading the fine print. We'll know next week how the RTX 4070 Ti actually performs, as well as where the official pricing and actual retail pricing end up.
Sources:
NVIDIA France (reverted to older page), via @momomo_us
102 Comments on NVIDIA France Accidentally Reveals GeForce RTX 4070 Ti Specs
And again, I'll repeat myself: the pricing is insane regardless, across the entire market except for used cards. To me it seems obvious that both teams just want to keep the margins they've had for the last two years, and there's no way in hell I'm going to trust their stories when they just don't line up with the rest of the hardware industry.
At this point we can expect the 9070 Ti to cost the same as a new Tesla.
Proof: RTX 3080 10 GB VRAM usage warnings and the issue with VRAM pool sizes: the compromise of 4K gaming | ResetEra
Once again, it's the GTX 970 3.5 GB fiasco or the AMD Radeon Fury X 4 GB HBM limitation all over again.
Even at 1440p, the RTX 3070 gets an impressive 16 FPS, and if we estimate performance based on the RTX 3090, assuming no memory bottleneck, we would get a massive 6-7 FPS at 4K. Even the RTX 3060 with 12 GB scores a breathtaking 3 FPS!
So this is very far off from a smooth 60 FPS. No one will play games like this; it's a pretty slideshow, not a playable game. And as you can see, the cards are not powerful enough to game at these settings anyway, so the VRAM limit is irrelevant here. Computational performance becomes the bottleneck long before VRAM size does, and with ray tracing it's usually compute in particular.
Problem solved, no need to thank me.
Guys, please stop looking at VRAM usage in monitoring apps and treating it as a requirement. It's not!
Kind of like when you put more system memory in your PC, your idle RAM usage rises. Currently, my main PC sits at 5.3 GB used with only Chrome open. Does Windows 10 work with 4 GB RAM? Absolutely.
Look at your performance. When your GPU usage drops, and the game starts to stutter massively, that's when you're hitting a VRAM (or CPU) limit. VRAM usage being at 100% doesn't mean anything. I have a solution, too. Play the original Portal that actually makes sense as a game.
It will inevitably shrink GPU shipments to levels at which economies of scale no longer work, and a multi-billion-dollar industry will die off. No, I do not recommend Windows 10 with 4 GB - it runs very slowly.
4 GB is good for Windows 7 or Windows XP, though.
I could have compared having 8 and 32 GB of system RAM - the allocation you see in Task Manager will differ greatly.
My point is: Just because you see all of your VRAM used up in a game, it doesn't mean that you couldn't run it with less.
End of off-topic on my part. :)
Because I see the 12 GB VRAM 3060 beating the 8 GB VRAM 3070 - OK, by 1 FPS, but still.
I also see the 3080 10 GB doing about half the FPS of the 3090 24 GB.
Sooo yeah, I don't know, man. Again, you can believe whatever you want to believe, but I think the VRAM amount on these new cards is too low, and that's probably on purpose, so you buy the new stuff sooner.
1. RTX 3080 10 GB disabled shaders vs RTX 3090 24 GB all shaders
2. RTX 4080 16 GB crippled second-tier chip vs RTX 4090 24 GB almost full first-tier chip
Both approaches lead to almost the same result: the 80-class cards can be beaten badly, and it is clear market segmentation.
2020, RTX 3080 - $700
2022, RTX 4080 - $1200 <- WE ARE HERE
2024, RTX 5080 - $2040
2026, RTX 6080 - $3468
2028, RTX 7080 - $5896
2030, RTX 8080 - $10022
2032, RTX 9080 - $17038
2034, GTX 1080 - $28965
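The extrapolation above is just compounding the 3080-to-4080 price jump: each generation is assumed to cost roughly 1.7x the previous one ($700 -> $1200 was about x1.71). A quick sketch of that arithmetic, with the x1.7 factor inferred from the figures, reproduces the list to within a dollar of rounding:

```python
# Toy price extrapolation: assume every new xx80 card costs 1.7x the last one.
# Starting point: RTX 4080 at $1200 in 2022 (the x1.7 factor is an assumption
# inferred from the post's figures, not anything NVIDIA has announced).
price = 1200.0
for year, name in [(2024, "RTX 5080"), (2026, "RTX 6080"), (2028, "RTX 7080"),
                   (2030, "RTX 8080"), (2032, "RTX 9080")]:
    price *= 1.7
    print(f"{year}, {name} - ${round(price)}")
```

Running this prints $2040 for 2024, $3468 for 2026, and $5896 for 2028, matching the list above; later entries differ by at most a dollar depending on rounding.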
2024 is TSMC N3
2026 is TSMC N2
2028 is TSMC N1
...
and then what?
2030 is TSMC P800 :laugh:
Most benchmarks show the performance of the 3070 and 3090 in non-memory-bottlenecked scenarios (for the reasons stated in my prior post), so if the performance differential between the two cards changes drastically, it's safe to assume the bottleneck lies elsewhere.
As seen in the provided TechPowerUp Portal RTX benchmark, there is a pretty clear advantage for cards with more VRAM, with the vastly less powerful 3060 12 GB beating the 3080 10 GB. You can see this trend extend throughout NVIDIA's entire lineup in this benchmark. Oh, he is almost certainly arguing in bad faith at this point. The fact that the 3060 12 GB is beating the 10 GB 3080 is an extremely clear indication of a memory bottleneck. The game uses 16 GB of VRAM; anything over the card's VRAM allotment is stored in main system memory. This means the 3060 12 GB is storing 4 GB in system memory while keeping higher-priority data in VRAM. The 3070 is pushing 8 GB into system memory, but unfortunately for it, some critical data cannot fit because the VRAM is already filled with equal-priority data, resulting in a much lower level of performance compared to scenarios where it is not VRAM-bound.
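The spillover arithmetic in that argument can be sketched as a toy model. The 16 GB working set comes from the post; the simple "everything that doesn't fit spills to system RAM" rule is an assumption for illustration, as real drivers page data at much finer granularity:

```python
# Toy model of VRAM spillover: any part of the working set that doesn't fit
# in VRAM is assumed to spill into system RAM over PCIe (a simplification;
# real drivers page resources in and out at finer granularity).
def spillover_gb(vram_gb, working_set_gb=16):
    """Gigabytes pushed into system memory when the working set exceeds VRAM."""
    return max(0, working_set_gb - vram_gb)

for card, vram in [("RTX 3060 12 GB", 12), ("RTX 3080 10 GB", 10),
                   ("RTX 3070 8 GB", 8)]:
    print(f"{card}: {spillover_gb(vram)} GB spilled to system memory")
```

Under this model the 3060 spills 4 GB, the 3080 spills 6 GB, and the 3070 spills 8 GB - and the performance hit depends on how hot the spilled data is, which is why the ordering doesn't track raw compute power.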
Mind you, it shouldn't take such an obvious example of VRAM-size bottlenecking to be a wake-up call here. You don't see this very often precisely because the performance penalty of not having enough VRAM is so heavy (not just average FPS but frame timing as well); devs can't ship games where newer video cards stutter, run at low FPS, or have inconsistent frame timing. I really don't get the logic behind defending the practice, aside from blindly defending everything NVIDIA does.
First, I was being very generous in assuming a two-year cadence for N3 -> N2 -> N1. What if the story goes the way of Intel's now-infamous 14 nm with its numerous pluses: 14nm+, 14nm++, 14nm+++, 14nm+++(+), and so on?
No one can guarantee that anything after N3 will work.
Likewise, go from the non-Ti to the highest non-Ti skipping everything else.