Friday, December 30th 2022

NVIDIA France Accidentally Reveals GeForce RTX 4070 Ti Specs

With less than a week to go until the official launch of the GeForce RTX 4070 Ti, NVIDIA France has gone and spoiled things by revealing the official specs of the upcoming GPU. The French division of NVIDIA appears to have posted the full product page, which has since been pulled. That didn't prevent Twitter leaker @momomo_us from snapping a couple of screenshots, including one of the official performance numbers from NVIDIA.

There aren't any real surprises here though, as we already knew the CUDA core count and the memory size, courtesy of the RTX 4070 Ti having been the RTX 4080 12 GB until NVIDIA changed its mind. It's interesting to see that NVIDIA compares the RTX 4070 Ti to the RTX 3080 12 GB in the three official benchmarks, as it makes the RTX 4070 Ti look a lot better than it is in reality, at least based on the rumoured MSRP of US$800-900. One of the three benchmarks is Cyberpunk 2077 with ray tracing, where NVIDIA suggests the RTX 4070 Ti is around 3.5 times faster than the RTX 3080, but it's worth reading the fine print. We'll know next week how the RTX 4070 Ti actually performs, as well as where the official pricing and actual retail pricing end up.
Sources: NVIDIA France (reverted to older page), via @momomo_us

102 Comments on NVIDIA France Accidentally Reveals GeForce RTX 4070 Ti Specs

#51
N/A
The 4080 has 20% more teraflops, 42% more bandwidth, and 40% more ROPs, around 33% faster overall, resulting in 150 FPS for the Ti and 125 for the non-Ti.
Posted on Reply
#52
ARF
With this 12 GB VRAM it will die quite fast, turning into a one-time/hit wonder.
Posted on Reply
#53
Why_Me
ARFWith this 12 GB VRAM it will die quite fast, turning into a one-time/hit wonder.
The RTX 3080 Ti seemed to do fine with 12 GB of VRAM. BTW, that card had a $1,200 MSRP.

Posted on Reply
#54
Dristun
efikkanMy objection is prejudging a product before we know the product's performance and price.
This card may very well end up at a similar performance per Dollar range as AMD's latest cards, so will you criticize them as harshly as you criticize this product then?
Of course - I couldn't care less about AMD either, and I posted that they're disappointing when their respective reviews dropped. How did you even decide that I like AMD, haha, I'm literally one of the few people in the entire forum with an all-Intel rig! :D
And again, I'll repeat myself: the pricing is insane regardless, across the entire market except for used cards. To me it feels obvious that both teams just want to keep the margins they had for the last two years, and there's no way in hell I'm going to trust their stories when they just don't line up with the rest of the hardware industry.
Posted on Reply
#55
Bomby569
"The problem with the 4070Ti: Even at $799, the performance/price ratio is just about on par with "normal" RTX30 SKUs. Thus, a generation leap is not present at all. Ada Lovelace is simply a continuation of Ampere from P/P's point of view."

at this point we can expect the 9070ti to cost the same as a new Tesla.
Posted on Reply
#57
efikkan
evernessince16 GB is the amount Portal RTX uses at 4K, not just allocates: www.techpowerup.com/review/portal-with-rtx/3.html

Performance doesn't immediately drop when you run out of VRAM. It depends on the game but usually you can go 30% above available VRAM and the GPU will do a decent job of swapping between the VRAM and main system memory. The problem is, the instant something that needs to be fetched often is sent to the main system memory when VRAM is full, performance tanks.

It's not just an annoyance, it renders the game unplayable. The 3070 gets 1 FPS at 4K, and even in less extreme scenarios where you "just" get poor frame timing or stuttering, it's easy to see why people want more VRAM. There's really no excuse other than planned obsolescence, because it would not have been expensive for Nvidia to add more.
Thanks for making my case.
Even at 1440p, the RTX 3070 gets an impressive 16 FPS, and if we estimate performance based on the RTX 3090, assuming no memory bottleneck, we would get a massive 6-7 FPS at 4K. Even the RTX 3060 with 12 GB scores a breathtaking 3 FPS!
So this is very far off from a smooth 60 FPS. No one will play games like this; it's a pretty slideshow, not a playable game. As you can see, the cards are not powerful enough to game like this, so the VRAM limit is irrelevant: computational performance is bottlenecking long before VRAM size here, and with ray tracing it's often compute in particular.
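As an aside, the mechanism both sides are arguing about can be put into a back-of-the-envelope model. This is a hedged sketch, not how any driver actually schedules memory: the bandwidth figures and the per-frame traffic are assumptions, but it shows why even a modest spill over the PCIe bus dominates frame time.

```python
# Toy model (not a measurement) of why spilling VRAM into system RAM
# hurts so much. All numbers below are illustrative assumptions:
# ~448 GB/s for RTX 3070 GDDR6, ~32 GB/s for a PCIe 4.0 x16 link.

def effective_frame_time(working_set_gb, vram_gb, frame_traffic_gb,
                         vram_bw=448.0, pcie_bw=32.0):
    """Estimate per-frame memory time in milliseconds.

    Assumes the fraction of per-frame traffic served over PCIe equals
    the fraction of the working set that doesn't fit in VRAM.
    """
    spill = max(0.0, working_set_gb - vram_gb) / working_set_gb
    t_vram = frame_traffic_gb * (1.0 - spill) / vram_bw   # seconds
    t_pcie = frame_traffic_gb * spill / pcie_bw           # seconds
    return (t_vram + t_pcie) * 1000.0

# 16 GB working set (the Portal RTX figure quoted above), with an
# assumed 4 GB of memory traffic per frame:
print(effective_frame_time(16, 8, 4))    # 8 GB card: ~67 ms on memory alone
print(effective_frame_time(16, 12, 4))   # 12 GB card: ~38 ms
print(effective_frame_time(16, 24, 4))   # no spill: ~9 ms
```

Because the PCIe link is roughly 14x slower than GDDR6 in this sketch, even a partial spill swamps the frame budget, which is consistent with both the 3070's collapse and the 12 GB cards suffering far less.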
Posted on Reply
#58
Bomby569
efikkanThanks for making my case.
Even at 1440p, the RTX 3070 gets an impressive 16 FPS, and if we estimate performance based on the RTX 3090, assuming no memory bottleneck, we would get a massive 6-7 FPS at 4K. Even the RTX 3060 with 12 GB scores a breathtaking 3 FPS!
So this is very far off from a smooth 60 FPS. No one will play games like this; it's a pretty slideshow, not a playable game. As you can see, the cards are not powerful enough to game like this, so the VRAM limit is irrelevant: computational performance is bottlenecking long before VRAM size here, and with ray tracing it's often compute in particular.
I have a solution: play with all the shiny pretty rays traced at 540p.

Problem solved, no need to thank me.
Posted on Reply
#59
AusWolf
efikkanVRAM allocated isn't the same as VRAM needed. Many buffers and textures are heavily compressed on the fly. The true judge of VRAM requirement is benchmarking the performance; if the card runs out of VRAM, performance will drop sharply. If, on the other hand, performance keeps scaling, then there is no issue.
This!

Guys, please stop looking at VRAM usage in monitoring apps and treating it as a requirement. It's not!

Kind of like when you put more system memory in your PC, your idle RAM usage rises. Currently, my main PC sits at 5.3 GB used with only Chrome open. Does Windows 10 work with 4 GB RAM? Absolutely.

Look at your performance. When your GPU usage drops, and the game starts to stutter massively, that's when you're hitting a VRAM (or CPU) limit. VRAM usage being at 100% doesn't mean anything.
Bomby569I have a solution: play with all the shiny pretty rays traced at 540p.

Problem solved, no need to thank me.
I have a solution, too. Play the original Portal that actually makes sense as a game.
Posted on Reply
#60
Bomby569
AusWolfI have a solution, too. Play the original Portal that actually makes sense as a game.
it does look amazing, you can't ignore it. But it's definitely not worth it, and the game is amazing even in low settings on a potato.
Posted on Reply
#61
AusWolf
Bomby569it does look amazing, you can't ignore it. But it's definitely not worth it, and the game is amazing even in low settings on a potato.
Exactly. It's a game about puzzles with some witty humor mixed in. It was never about shiny rays, and adding them doesn't make the game better. It just looks different.
Posted on Reply
#62
ARF
Bomby569"The problem with the 4070Ti: Even at $799, the performance/price ratio is just about on par with "normal" RTX30 SKUs. Thus, a generation leap is not present at all. Ada Lovelace is simply a continuation of Ampere from P/P's point of view."

at this point we can expect the 9070ti to cost the same as a new Tesla.
This is a very dangerous trend which, if not fixed, will have dire consequences for all the makers involved!
It will inevitably shrink GPU shipments to levels at which the economies of scale no longer work, and an industry worth billions will die off.
AusWolfThis!

Guys, please stop looking at VRAM usage in monitoring apps and treating it as a requirement. It's not!

Kind of like when you put more system memory in your PC, your idle RAM usage rises. Currently, my main PC sits at 5.3 GB used with only Chrome open. Does Windows 10 work with 4 GB RAM? Absolutely.
No, I do not recommend Windows 10 with 4 GB - it runs very slowly.
4 GB is good for Windows 7 or Windows XP, though.
Posted on Reply
#63
AusWolf
ARFNo, I do not recommend Windows 10 with 4 GB - it runs very slowly.
4 GB is good for Windows 7 or Windows XP, though.
I'm not saying that I recommend it - I'm saying that it works. ;) I have a laptop with a dual-core Celeron (that's basically an Atom), and 4 GB RAM. It's ok for light browsing.

I could have compared having 8 and 32 GB of system RAM - the allocation you see in Task Manager will differ greatly.

My point is: Just because you see all of your VRAM used up in a game, it doesn't mean that you couldn't run it with less.
Posted on Reply
#64
ARF
AusWolfI'm not saying that I recommend it - I'm saying that it works. ;) I have a laptop with a dual-core Celeron (that's basically an Atom), and 4 GB RAM. It's ok for light browsing.

I could have compared having 8 and 32 GB of system RAM - the allocation you see in Task Manager will differ greatly.

My point is: Just because you see all of your VRAM used up in a game, it doesn't mean that you couldn't run it with less.
I think the minimum for running Windows 10 is 6 GB and an SSD.
Posted on Reply
#65
AusWolf
ARFI think the minimum for running Windows 10 is 6 GB and an SSD.
I'd still say 4 GB is OK. Heck, I even ran it on a Compute Stick with only 2 GB. It wasn't pleasant, but it worked.

End of off-topic on my part. :)
Posted on Reply
#66
ZoneDymo
efikkanThanks for making my case.
Even at 1440p, the RTX 3070 gets an impressive 16 FPS, and if we estimate performance based on the RTX 3090, assuming no memory bottleneck, we would get a massive 6-7 FPS at 4K. Even the RTX 3060 with 12 GB scores a breathtaking 3 FPS!
So this is very far off from a smooth 60 FPS. No one will play games like this; it's a pretty slideshow, not a playable game. As you can see, the cards are not powerful enough to game like this, so the VRAM limit is irrelevant: computational performance is bottlenecking long before VRAM size here, and with ray tracing it's often compute in particular.
are we looking at the same chart?


Because I see the 12 GB VRAM 3060 beating the 8 GB VRAM 3070, OK, only by 1 FPS, but still.
I also see the 3080 10 GB doing about half the FPS of the 3090 24 GB.

Sooo yeah, I don't know, man. You believe whatever you want to believe, but I think the VRAM amount on these new cards is too low, and again, that is probably on purpose so you buy the new stuff sooner.
Posted on Reply
#67
ARF
I think there is not much difference between the two approaches:
1. RTX 3080 10 GB with disabled shaders vs. RTX 3090 24 GB with all shaders
2. RTX 4080 16 GB on a crippled second-tier chip vs. RTX 4090 24 GB on an almost full first-tier chip

Both approaches lead to almost the same end result: the 80-class cards can be beaten badly, and it is clear market segmentation.
Posted on Reply
#68
Bwaze
Bomby569"The problem with the 4070Ti: Even at $799, the performance/price ratio is just about on par with "normal" RTX30 SKUs. Thus, a generation leap is not present at all. Ada Lovelace is simply a continuation of Ampere from P/P's point of view."

at this point we can expect the 9070ti to cost the same as a new Tesla.
Nah, it will barely top $15,000, unless we get the “real price increases”. Remember, right now people are still defending the Ada cards, saying this is nothing out of the ordinary:

2020, RTX 3080 - $700
2022, RTX 4080 - $1200 <- WE ARE HERE
2024, RTX 5080 - $2040
2026, RTX 6080 - $3468
2028, RTX 7080 - $5896
2030, RTX 8080 - $10022
2032, RTX 9080 - $17038
2034, GTX 1080 - $28965
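For what it's worth, the list above is internally consistent: it's plain compound growth at 1.7x per generation (roughly the $700 to $1,200 jump from the RTX 3080 to the RTX 4080), applied every two years. A quick sketch reproduces it to within a dollar of rounding:

```python
# The satirical price list is just compounding: x1.7 per generation,
# carried unrounded between steps, starting from the $1,200 RTX 4080.
price = 1200.0
for year, gen in [(2024, "5080"), (2026, "6080"), (2028, "7080"),
                  (2030, "8080"), (2032, "9080")]:
    price *= 1.7
    print(f"{year}, RTX {gen} - ${price:,.0f}")
```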
Posted on Reply
#69
ARF
BwazeNah, it will barely top $15,000, unless we get the “real price increases”. Remember, right now people are still defending the Ada cards, saying this is nothing out of the ordinary:

2020, RTX 3080 - $700
2022, RTX 4080 - $1200 <- WE ARE HERE
2024, RTX 5080 - $2040
2026, RTX 6080 - $3468
2028, RTX 7080 - $5896
2030, RTX 8080 - $10022
2032, RTX 9080 - $17038
2034, GTX 1080 - $28965
There are not that many remaining manufacturing nodes, though :roll:
2024 is TSMC N3
2026 is TSMC N2
2028 is TSMC N1
...

and then what?
Posted on Reply
#70
eidairaman1
The Exiled Airman
Why_MeA lot of Nvidia owners spend more for quality. Kind of like those peeps that purchase high end Mercedes.

www.tomshardware.com/news/amd-responds-to-rx-7900-xtx-hotspot-fiasco
Mercedes, high quality? Lmao, cheap steaming pile.
Why_MeThe RTX 3080 Ti seemed to do fine with 12GB of VRAM. btw that card has a $1200 MSRP.

Which is still a ridiculous price, no thanks.
Posted on Reply
#71
Dan.G
ARFThere are not that many remaining manufacturing nodes, though :roll:
2024 is TSMC N3
2026 is TSMC N2
2028 is TSMC N1
...

and then what?
Picometre? :)
2030 is TSMC P800 :laugh:
Posted on Reply
#72
evernessince
efikkanThanks for making my case.
Even at 1440p, the RTX 3070 gets an impressive 16 FPS, and if we estimate performance based on the RTX 3090, assuming no memory bottleneck, we would get a massive 6-7 FPS at 4K. Even the RTX 3060 with 12 GB scores a breathtaking 3 FPS!
So this is very far off from a smooth 60 FPS. No one will play games like this; it's a pretty slideshow, not a playable game. As you can see, the cards are not powerful enough to game like this, so the VRAM limit is irrelevant: computational performance is bottlenecking long before VRAM size here, and with ray tracing it's often compute in particular.
1 FPS is less than 10% of 11 FPS, and the 3070 is not 10% of a 3090's performance; that should go without saying.

Most benchmarks show performance of the 3070 and 3090 in non-memory bottlenecked scenarios (for the reasons stated in my prior post), therefore if the performance differential between the two cards changes drastically it's safe to assume that the bottleneck lies elsewhere.

As seen in the provided TechPowerUp Portal RTX benchmark, there is a pretty clear advantage for cards with more VRAM, with the vastly less powerful 3060 12 GB beating the 3080 10 GB. You can see this trend extend throughout Nvidia's entire lineup in this benchmark.
ZoneDymoare we looking at the same chart?


Because I see the 12 GB VRAM 3060 beating the 8 GB VRAM 3070, OK, only by 1 FPS, but still.
I also see the 3080 10 GB doing about half the FPS of the 3090 24 GB.

Sooo yeah, I don't know, man. You believe whatever you want to believe, but I think the VRAM amount on these new cards is too low, and again, that is probably on purpose so you buy the new stuff sooner.
Oh, he is almost certainly arguing in bad faith at this point. The fact that the 3060 12 GB is beating the 10 GB 3080 is an extremely clear indication of a memory bottleneck. The game uses 16 GB of VRAM; anything over a card's VRAM allotment is stored in main system memory. This means the 3060 12 GB is storing 4 GB in system memory while keeping higher-priority data in VRAM. The 3070 has to push 8 GB into system memory, and unfortunately some frequently accessed data cannot fit, as the VRAM is already filled with equally high-priority data, resulting in much lower performance than in scenarios where it is not VRAM-bound.

Mind you, it shouldn't take such an obvious example of VRAM bottlenecking to serve as a wake-up call. You don't see this very often precisely because the performance penalty of running out of VRAM is so heavy (not always just in averages, but in frame timing as well); devs cannot have newer video cards stuttering, running at low FPS, or delivering inconsistent frame times. I really don't get the logic behind defending the practice, aside from blindly defending everything Nvidia does.
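The frame-timing point deserves emphasis: the standard way to catch this kind of stutter in a benchmark is to look at 1% lows rather than averages, since a handful of long frames barely moves the mean. A minimal sketch (the frame-time traces are made-up numbers):

```python
# Compute "1% low" FPS from a frame-time trace in milliseconds:
# take the worst 1% of frames and convert their mean back to FPS.

def one_percent_low_fps(frame_times_ms):
    n = max(1, len(frame_times_ms) // 100)   # worst 1% of frames
    worst = sorted(frame_times_ms)[-n:]
    return 1000.0 / (sum(worst) / len(worst))

smooth = [16.7] * 100               # steady ~60 FPS
spiky  = [16.7] * 99 + [100.0]      # same game, one 100 ms hitch

print(one_percent_low_fps(smooth))   # ~59.9
print(one_percent_low_fps(spiky))    # 10.0
```

The average FPS of the two traces differs by only a few percent, but the 1% low drops sixfold, which is exactly the VRAM-overflow signature described above.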
Posted on Reply
#73
Tartaros
CrackongWow we are so not surprised.

Didn't Jensen shit on the board partners because of things like this? Is he going to shit on his employees too?
Posted on Reply
#74
ARF
Dan.GPicometre? :)
2030 is TSMC P800 :laugh:
There is no scientific proof that these will exist.
First, I was being generous in assuming a two-year cadence for N3 -> N2 -> N1. What if the story goes like Intel's now-infamous 14 nm with its numerous pluses? 14nm+, 14nm++, 14nm+++, 14nm+++(+), etc.

No one can guarantee that anything after N3 will work.
Posted on Reply
#75
QUANTUMPHYSICS
In my opinion, you're best off just waiting for the 4090 Ti and then upgrading from Ti to Ti, skipping everything in between.

Likewise, go from one top non-Ti to the next, skipping everything else.
Posted on Reply