Monday, December 16th 2024
NVIDIA GeForce RTX 5070 Ti Leak Tips More VRAM, Cores, and Power Draw
It's an open secret by now that NVIDIA's GeForce RTX 5000 series GPUs are on the way, with an early 2025 launch on the cards. Now, preliminary details about the RTX 5070 Ti have leaked, revealing an increase in both VRAM and TDP and suggesting that the new upper mid-range GPU will finally address the increased VRAM demand of modern games. According to the leak from Wccftech, the RTX 5070 Ti will have 16 GB of GDDR7 VRAM, up from 12 GB on the RTX 4070 Ti, as we previously speculated. Corroborating earlier leaks, the new sources also state that the 5070 Ti will use the cut-down GB203 chip, although they point to a significantly higher TBP of 350 W. The new memory configuration will supposedly use a 256-bit memory bus running at 28 Gbps, for a total memory bandwidth of 896 GB/s, a significant boost over the RTX 4070 Ti.
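For context, the quoted bandwidth figure follows directly from the leaked bus width and data rate. A minimal back-of-the-envelope check (the RTX 4070 Ti comparison uses that card's known 192-bit bus at 21 Gbps, which is not part of this leak):

```python
# Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 28))  # RTX 5070 Ti (leaked): 896.0 GB/s
print(bandwidth_gbs(192, 21))  # RTX 4070 Ti (known spec): 504.0 GB/s, a ~78% increase
```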
Supposedly, the RTX 5070 Ti will also see a bump in total CUDA cores, from 7680 in the RTX 4070 Ti to 8960 in the RTX 5070 Ti. The new card will also switch to the revised 12V-2x6 power connector, replacing the 16-pin 12VHPWR connector used by the 4070 Ti. NVIDIA is expected to announce the RTX 5000 series graphics cards at CES 2025 in early January, with the RTX 5070 Ti supposedly the third card in the 5000-series launch cycle. That said, leaks suggest that the 5070 Ti will still launch in Q1 2025, meaning we may see an indication of specs at CES 2025, although pricing is still unclear.
Update Dec 16th: Kopite7kimi, the prolific hardware leaker, has since responded to the RTX 5070 Ti leaks, stating that 350 W may be on the higher end for the RTX 5070 Ti: "...the latest data shows 285W. However, 350W is also one of the configs." This suggests that a 350 W TBP is possible, although perhaps only on certain board-partner models, in response to strong competition, or in certain boost scenarios.
Sources:
Wccftech, Kopite7kimi on X
160 Comments on NVIDIA GeForce RTX 5070 Ti Leak Tips More VRAM, Cores, and Power Draw
@Cpt.Jank The 4070Ti Super has 8448 cores, the 4070Ti has 7680 cores.
Upgrade seems a bit mid. If we compare it against (and assume it's similarly priced to) the 4070Ti, the uplift should be less than 25%.
The additional VRAM is nice but the extra memory speed is a bit of a meme.
Does make it somewhat comparable to a 4080 though.
But I think this thing will get an MSRP of 850 USD.
The "leaked" specs have been leaked for about the third time, so no surprise at all, just nearly a confirmation. The card seems to be no big bump at all, with only 6% more cores, 33% more bandwith and 65W TDP (the 4070TiS was not TDP-limited at all!). If core clock doesn't go up quite a bit, which I doubt given the nearly identical fabrication process, increased performance will have to come mainly from architectural improvements or DLSS4.x. It is the same with 5080 and 5070, which offer similarly small spec-bumps over their predecessors.
If they don't cost more, I don't care, but I doubt that.
- Pascal (10) was a breakout hit and was priced insanely well across the board, seeing volumes of 1060s, 1070s, and 1080/1080Tis moving. Even the 1050Ti sold like hotcakes as a slot-only card.
- Turing (20/16) came out sorely overpriced and underperforming, being mostly an experimental platform for 1st-gen RT and matrix math. The 16 series cut the crap and focused on price appeal, to much success with SIs and OEMs. Seen as an afterthought nowadays...
- Ampere (30) was, like the 10 series, so good that literally everything sold. Again. Even the RTX 3050, a laughably bad model in the lineup, was moving units. Everyone wanted a piece of that fresh and functional 2nd gen RT and the true fruit of all those fancy arch additions—DLSS. Good DLSS. It was the crypto boom that ruined prices after launch.
- Ada (40) follows the same leapfrogging pattern that was forming the last few generations, being an overpriced, underperforming software showcase for early adopters only. DLSS framegen, ray reconstruction, AI toolkit—yadda yadda no one friggin' cares.
Naturally, we should expect the 50 series to be a return to form for Nvidia, another lineup of home-runs that makes everyone happy, but the forecast isn't looking great. Higher power draw, barely improving on shader count from Ada/Ampere, and a lame leg for VRAM on the lower end. They're putting less and less into impressing across the board, and more and more into the ultra-mega-super-expensive chips, from the 4090 to the—what, B6000? A6000 Blackwell? The margins are getting to their head, and the shareholders are only gonna stop huffing nitrous when the stock price eventually, inevitably, crashes to what Nvidia is ACTUALLY worth as an asset.
You don't feel screwed? And now you're ready to jump on the successor, again, too early and paying far too much, and you'll still be saying 'you don't get what you used to for your money'? If you had waited a few months and gone 4070ti S in the last round, there would have been ZERO reason to even put this 5070ti on your radar to begin with. You'll be gaining, what, 10-15% perf and 4 GB VRAM; it's almost a sidegrade.
It's strange how that logic works wrt spending on hardware, honestly. Just be honest: the 4070ti was a shit buy. I'm going to be honest too: I regret buying the 7900XT at the price it had at the time. 250 bucks over what it costs now. Worst timing I've had on a GPU. Could've had a free B580 for that today.
We should do better :) Same node, a few more shaders, and a higher voltage to extract another 3% you could've had from an OC yourself.
Nvidia is going to sell this card to everyone who didn't buy the 4070ti Super, basically, just stalling at virtually the same level of performance right there, but calling it new. What? Ampere was good? What are you talking about? It was heavily underspecced and overpriced, and baked on the worst node we've seen on Nvidia cards in decades. The worst perf/W as well, which is a big reason why Ada looks so good now. If you just look at the hardware in isolation (not the pricing; mining at the time did inflate things badly, and Nvidia actually positioned the initial MSRPs quite alright), it's really not a good generation at all. What sold cards at the time was mining, and it barely left scraps for gamers. I think you're right about DLSS. But that's not really related to the GPUs, is it; it's artificial segmentation. I think people would have gobbled up the GPUs anyway.
10GB on a 3080 (the most interesting card in the lineup, everything else was overpriced or halfway obsolete on release, see x60/x70 performance in stuff like Hogwarts for example) was a fucking joke, and the lower tiers were nothing better. Double-VRAM cards came too late and looked strange relative to core power, so basically one half of the stack was underspecced and the replacements were overkill on VRAM. The whole release of Ampere was a horrible mess, and the failure rate wasn't exactly the lowest either. I think Ampere may go down in history as the generation with the lowest usable lifetime of Nvidia's GPU stacks, all things considered. What also didn't and still doesn't help is the development in games, where new engine advances simply slaughter these GPUs. Ampere has already been plagued with issues in various games. To illustrate, a 4090 is almost twice the GPU a 3090 is at this moment, and that's all core/shader performance; neither card is VRAM-constrained.
Even the most demanding titles barely tickled the 8-gig barrier in the 30-series era, and if you were exceeding it, it was because you were trying to play 4K High/Ultra on games that had just come out. That was barely in reach for the 3080, never mind a 3070 (or 3060Ti, which iirc sold way better). The only reason it's an issue now is because easy-bake 'cinematic quality' has become a trend while cinematic framerates and studio-spec GPUs have not.
As for the failure rate debacle, I hardly remember anything of the sort (and what inklings I do have point to problems specifically with the first batch or two of boards). I do remember the power draw; Ampere guzzled gas and put out heat, but it was worth it for what ya got, and you could usually squeeze much better voltages or more worthwhile performance out of a card if you gave even a fart about it, speaking from experience.
'No one expected' is the wrong term for it imho. They were warned but ignored it, or have the perspective that they will upgrade before that moment occurs anyway. But the latter is of course cognitive dissonance. If you have no need, you wouldn't upgrade; they simply pre-empted their disappointment with a lowered expectation. This is the 'money is no object' crowd that then proceeds to complain about pricing and buys it anyway.
It's only exaggerated by the fact that Nvidia went full retard with prices on Ada, thinking that if gullible gamers are willing to pay thousands for a graphics card just because it comes in a green box, then maybe they should. The saddest part of it all is that time proved Nvidia right.
I'm also making good on buying the DLSS-majigger now that I'm at 1440p, too. Cyberpunk doesn't look that bad on Ultra Quality.
Yet it just mentions the 4070 Ti, while mixing specs from the non-Super (12 GB VRAM) and the Super (8448 CUDA cores; the non-Super had 7680).
So in fact, it does not have more VRAM, because the 4070 Ti Super already had 16 GB.
Seems tech stagnation will go on for a while yet.
Also, with AMD not looking only at Nvidia's pricing but also having to worry about Intel's, that could push them to ignore their fears and start with lower MSRPs. Until now they were afraid that a much lower MSRP than Nvidia's equivalent card would trigger a response from Nvidia. Now they will have to think differently if they don't want to get squeezed from two directions.
Make it 250W maximum.
Buy what makes you happy, don't worry about what I am doing.
I use the hardware that pleases me, I don't give a fuck about the name on the box.
Edit:
How well do AMD GPUs run F@H?
Like shit, that is one of the reasons I am not running one.
The worst thing is that you are a staff member and I can't put you on my ignore list (yes, I checked that). But you did start following me just 35 minutes ago. I guess you are preparing your BAN hammer for any future posts by me, because, well, you are a staff member, so I guess you can do that. Yep, it's a wonderful game. I do agree that I am tempted to buy an Nvidia card to play it. But don't tell anyone.