Monday, December 16th 2024

NVIDIA GeForce RTX 5070 Ti Leak Tips More VRAM, Cores, and Power Draw

It's an open secret by now that NVIDIA's GeForce RTX 5000 series GPUs are on the way, with an early 2025 launch on the cards. Now, preliminary details about the RTX 5070 Ti have leaked, revealing an increase in both VRAM and TDP and suggesting that the new upper mid-range GPU will finally address the increased VRAM demands of modern games. According to the leak from Wccftech, the RTX 5070 Ti will have 16 GB of GDDR7 VRAM, up from 12 GB on the RTX 4070 Ti, as we previously speculated. The new sources also corroborate earlier reports that the 5070 Ti will use the cut-down GB203 chip, although they point to a significantly higher TBP of 350 W. The new memory configuration will supposedly sit on a 256-bit memory bus and run at 28 Gbps, for a total memory bandwidth of 896 GB/s, a significant boost over the RTX 4070 Ti.
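That bandwidth figure follows directly from the leaked bus width and data rate; here is a quick back-of-the-envelope check (a minimal Python sketch, using the RTX 4070 Ti's published specs for comparison):

  # Bandwidth = (bus width in bits / 8 bits per byte) * per-pin data rate
  bus_width_bits = 256        # leaked RTX 5070 Ti bus width
  data_rate_gbps = 28         # leaked GDDR7 speed, Gbps per pin
  print(bus_width_bits / 8 * data_rate_gbps)  # 896.0 GB/s, matching the leak

  # RTX 4070 Ti for comparison: 192-bit bus with 21 Gbps GDDR6X
  print(192 / 8 * 21)         # 504.0 GB/s, so roughly 78% more bandwidth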

Supposedly, the RTX 5070 Ti will also see a bump in total CUDA cores, from 7680 in the RTX 4070 Ti to 8960 in the RTX 5070 Ti. The new card will also move to the updated 12V-2x6 power connector, a revision of the 16-pin 12VHPWR connector used by the 4070 Ti. NVIDIA is expected to announce the RTX 5000 series graphics cards at CES 2025 in early January, but the RTX 5070 Ti will supposedly be the third card in the launch cycle. That said, leaks suggest that the 5070 Ti will still launch in Q1 2025, meaning we may see an indication of specs at CES, although pricing is still unclear.

Update Dec 16th: Kopite7kimi, the ubiquitous hardware leaker, has since responded to the RTX 5070 Ti leaks, stating that 350 W may be on the higher end for the RTX 5070 Ti: "...the latest data shows 285W. However, 350W is also one of the configs." This suggests that a 350 W TBP is possible, but perhaps only on certain board-partner models, in certain boost scenarios, or if competition proves strong.
Sources: Wccftech, Kopite7kimi on X

160 Comments on NVIDIA GeForce RTX 5070 Ti Leak Tips More VRAM, Cores, and Power Draw

#76
freeagent
john_Maybe we are jealous of you? Just a thought.
Very doubtful, we are all big boys here.
john_The worst thing is that you are a staff member and I can't put you on my ignore list (yes, I checked that). But you did start following me just 35 minutes ago. I guess you are preparing your BAN hammer for any future posts by me, because, well, you are a staff member, so I guess you can do that.
I recently started following people who have appeared on my radar, just using tools that are provided to me.
Posted on Reply
#77
Onasi
I see that this thread is going about as well as such things usually do.

Anyway, rumors are all well and good, but what will matter is performance and price. I am not hugely optimistic; NV essentially has a captive market and can price at whatever the hell they think said market will bear, but we'll see. Not too enthused about a potential TDP jump. I do realize that this is inevitable nowadays as a means to scrape out every little bit of performance, but it's not to my preference. It will probably end up being that you can limit the power significantly without losing much, but still.
Posted on Reply
#78
Krit
freeagentIt means that 5070Ti will smoke 4070Ti.
It will be faster, but the gains won't be as big as the RTX 4070 Ti's over the RTX 3070 Ti. You will pay more for less gain! This is currently Nvidia's signature and pride. :)
Posted on Reply
#79
freeagent
KritIt will be faster, but the gains won't be as big as the RTX 4070 Ti's over the RTX 3070 Ti. You will pay more for less gain! This is currently Nvidia's signature and pride. :)
So, I have 2 kids and 3 computers; I am not upgrading just to blow money.
Posted on Reply
#80
95Viper
Let's stick to the topic.
Stop the bickering/insults/snide remarks
Posted on Reply
#81
TheDeeGee
More than enough for 1440p120 which this card is aimed at.
Posted on Reply
#82
rv8000
N/ARTX 5070 Ti is very special and brings 4090 performance down to $800. At least in 1440p it should land much closer to the 4090 than to the 4080.
I hope this is sarcasm. Nvidia will never give that sort of performance at such a price when they can ream their customers, who have no regard for themselves and keep coming back for more.
Posted on Reply
#83
close
freeagentMy 4070Ti smokes my 3070Ti in every possible way. Lots of guys hate Nvidia, and that's ok :)
freeagentIt means that 5070Ti will smoke 4070Ti.

And many of the comments in this thread are from guys running AMD GPU's lol..
P.S. A kink in your theory: I'm on the same page with the "guys running AMD GPU's lol" except I'm not running AMD. Also, I have no idea how F@H runs these days on anything, haven't tried it in over 20 years, and the power consumption isn't a selling point for me.

Nvidia is the new Intel, with price and power envelope progress that greatly outpaces the performance progress.
Posted on Reply
#84
Hecate91
TheDeeGeeMore than enough for 1440p120 which this card is aimed at.
If that is the case, then this card isn't much of an upgrade over a 4070 Ti Super. Anything over $500 should be capable of 4K, because good 4K monitors are very affordable.
Posted on Reply
#85
dismuter
closeNvidia is the new Intel, with price and power envelope progress that greatly outpaces the performance progress.
I'm not sure what that means. Intel's prices have been pretty good in the past 2-3 years compared to their all-around performance (especially for the i5), and NVIDIA has the best power efficiency.
Posted on Reply
#86
Krit
350 W of power draw is a lot for a mid-range GPU; it could also mean that the actual architectural updates may not be so great.
Posted on Reply
#87
95Viper
Last warning! Stop the off-topic drama or there will be consequences!
Posted on Reply
#88
Hecate91
closeP.S. A kink in your theory: I'm on the same page with the "guys running AMD GPU's lol" except I'm not running AMD. Also, I have no idea how F@H runs these days on anything, haven't tried it in over 20 years, and the power consumption isn't a selling point for me.

Nvidia is the new Intel, with price and power envelope progress that greatly outpaces the performance progress.
I could be a guy running an Nvidia GPU and absolutely hate it because of the things Nvidia has been pulling since the RTX 2000 series; not everyone has to fawn over everything the leather jacket man is selling. Nvidia is very much like Apple in how its marketing works, and Nvidia has been the new Intel since the 3000 series: pricing and power consumption went too high, and performance gains slowed down because Nvidia wants to force ray tracing on everyone even though the tech still isn't ready after three generations.
freeagentI have no "power" in here.

I have no power at all, I volunteer for free, try to keep threads tidy in certain sections, etc.

Just a regular guy.

Many childish people in this thread, a little disappointed to be honest, but not at all surprising.

Keep up the good work fellas :)
It depends on what you mean by power then, I guess, since you put power in quotation marks.
To a normal pleb like me, you have plenty of "tools" I don't. I might be misreading your post, but if you want to insist that those who disagree with you are childish, then cool, good for you.
I think a lot of the hostility in GPU threads comes from just how bad the GPU market has gotten; maybe people are taking it too seriously, myself included. I shouldn't be taking a GPU topic here seriously anyway, not after seeing a review list the lack of DLSS as a con. I didn't expect to see that again, but there it was in the B580 review.
Posted on Reply
#89
gt362gamer
Vayra86(...) 10GB on a 3080 (the most interesting card in the lineup, everything else was overpriced or halfway obsolete on release, see x60/x70 performance in stuff like Hogwarts for example) was a fucking joke, and the lower tiers were nothing better. Double-VRAM cards came too late and looked strange relative to core power, so basically one half of the stack was underspecced and the replacements were overkill on VRAM. The whole release of Ampere was a horrible mess, and the failure rate wasn't exactly the lowest either. I think Ampere may go down in history as the generation with the lowest usable lifetime of Nvidia's GPU stacks, all things considered. (...)
I'm not sure about that; the 2 GiB GTX 960 might beg to differ. The GTX 700 gen also aged quite ungracefully, I think, even if you got the higher-VRAM variants. Hopefully Intel can help make 10-12 GiB of VRAM the expected minimum for an affordable mid-range GPU with their new lineup.
Posted on Reply
#90
Vayra86
dismuterThe comparisons in this article are messed up. The 4070 Ti was superseded by the 4070 Ti Super and has been discontinued, so there's no point mentioning or comparing to the 4070 Ti non-Super.
Yet it just mentions 4070 Ti, while mixing specs from the non-Super (12 GB VRAM) and Super (8448 CUDA cores, the non-Super had 7680).
So in fact, it does not have more VRAM, because the 4070 Ti Super already had 16 GB.
I think the reasoning here in the article for using the 4070 Ti non-S is that we are again at the same moment in the launch cycle - we're not in Super release time, but in vanilla release time. Super is a refresh, much like the one Blackwell might get a year or two from now.
gt362gamerI'm not sure about that; the 2 GiB GTX 960 might beg to differ. The GTX 700 gen also aged quite ungracefully, I think, even if you got the higher-VRAM variants. Hopefully Intel can help make 10-12 GiB of VRAM the expected minimum for an affordable mid-range GPU with their new lineup.
The 960 did come to mind after posting indeed lol, nice one. 700 series kinda had the same issue that Ampere has in terms of timing of its release, prior to a move forward in console gaming and increasing demands on... VRAM! But the gen itself was quite good, it was what Kepler should have been right away - that stack only went up to x104, 700 added the big chip and Nvidia dragged it out for quite a while. Maxwell, though, was also just an extremely good gen, rivalling Pascal - the first iteration of delta compression came with it, alongside a much leaner core.
Posted on Reply
#91
dismuter
Vayra86I think the reasoning here in the article for using the 4070 Ti non-S is that we are again at the same moment in the launch cycle - we're not in Super release time, but in vanilla release time. Super is a refresh, much like the one Blackwell might get a year or two from now.
I don't think that reasoning is useful. It would make much more sense to compare what's currently on the market with what's going to replace it. But it seems that not much thought was put into it anyway, considering that the CUDA core count mentioned for the 4070 Ti is actually that of the 4070 Ti Super.
Posted on Reply
#92
Makaveli
Outback BronzeQuite happy to sacrifice my new born atm…
Haha, my sister has two kids under 5.

Those first 5 years are tough.
AusWolf16 GB on a 350 W card. Am I supposed to be impressed or something? :wtf:
NV's power consumption was very good in the 4000 series after they ditched that Samsung node for TSMC.

Even with a TBP of 350 W, it will most likely draw less power while gaming.
Posted on Reply
#93
TheDeeGee
Krit350 W of power draw is a lot for a mid-range GPU; it could also mean that the actual architectural updates may not be so great.
My 4070 Ti runs at 200 W (power-limited via nvidia-smi) and loses only 5-6% performance across various games.

No doubt the 5070 Ti can run at 275 W just fine.
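For context on how such a cap is applied, here is a minimal sketch (Python wrapping the stock nvidia-smi CLI; it assumes the NVIDIA driver is installed and the script has admin rights, and uses the 200 W figure quoted above purely as an example):

  import subprocess

  # Read the current power draw and limit (standard nvidia-smi query fields).
  report = subprocess.run(
      ["nvidia-smi", "--query-gpu=power.draw,power.limit", "--format=csv"],
      capture_output=True, text=True, check=True,
  )
  print(report.stdout)

  # Cap board power at 200 W; this needs elevated privileges and does not
  # persist across reboots unless persistence mode is enabled.
  subprocess.run(["nvidia-smi", "-pl", "200"], check=True)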
Posted on Reply
#94
gt362gamer
Vayra86The 960 did come to mind after posting indeed lol, nice one. 700 series kinda had the same issue that Ampere has in terms of timing of its release, prior to a move forward in console gaming and increasing demands on... VRAM! But the gen itself was quite good, it was what Kepler should have been right away - that stack only went up to x104, 700 added the big chip and Nvidia dragged it out for quite a while. Maxwell, though, was also just an extremely good gen, rivalling Pascal - the first iteration of delta compression came with it, alongside a much leaner core.
I think I might have the definitive answer to the worst Nvidia GPUs... the FX 5000 series. Those may be the worst of them. I had an FX 5500, but for me it seemed OK since it was an upgrade from a Voodoo 3, along with upgrading the system RAM to a "whopping" 192 MiB. But this is ancient history relatively speaking, and maybe a tad too far off-topic, so I'll leave it there.
Posted on Reply
#95
freeagent
TheDeeGeeMy 4070 Ti runs at 200 W (power-limited via nvidia-smi) and loses only 5-6% performance across various games.

No doubt the 5070 Ti can run at 275 W just fine.
With an OC I can see ~305 W on the core, and board power at ~400 W, running certain workloads in F@H.

MSFS is a pretty good load too: ~300 W on the core, ~350-380 W board power at times.
Posted on Reply
#96
Scircura
First, beating a dead horse: I predict there won't be any benefit from the node process change, at all.
  1. 40 series is N4, 50 series is N4P.
  2. TSMC says N4->N4P is +6% perf, N4P->N4X is +4% perf. (source: Wikipedia)
  3. Zen 4 is N4P, Zen 5 is N4X.
  4. There was no clock speed improvement for Zen 5. Advertised boost clocks barely changed, and measured boost clocks were either the same or worse than Zen 4. (source: TPU, 9900x clocks vs 7900x clocks and 9700x clocks vs 7700x clocks)
  5. TSMC's perf claims are at 1.2 V, and GPUs run at lower voltages than CPUs, so any hypothetical benefit will shrink further.
Next, the TDP boost won't improve max clocks by more than 10%. I don't have a solid source for this, since TPU OC tests are run at stock board power instead of max (in which case I could point you to the review of the 4070 TiS Strix, which has +28% max board power). But my general impression from undervolting tests, for example this 4080S test on Reddit, is that a 50% change in power results in a 15% change in clocks and a 10% change in performance. You can also eyeball the voltage/frequency plots in a TPU review, extrapolate to ~1.2 V (rule of thumb: power scales with the square of voltage, so this would be ~25% more power than the 40 series), and see that there's barely 200 MHz gained on the projected curve.
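As a sanity check, those rules of thumb can be applied directly to the leaked TBP figures (a minimal Python sketch; the 285 W and 350 W numbers come from the article above, and the ratios are the post's own):

  # Rule of thumb above: a 50% change in board power moves clocks ~15%
  # and performance ~10%.
  CLOCKS_PER_POWER = 15 / 50
  PERF_PER_POWER = 10 / 50

  power_change = 350 / 285 - 1                 # leaked configs: ~+22.8%
  print(f"clocks: +{power_change * CLOCKS_PER_POWER:.1%}")   # ~+6.8%
  print(f"perf:   +{power_change * PERF_PER_POWER:.1%}")     # ~+4.6%
  # Both comfortably under the 10% ceiling claimed above.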

Finally, the higher memory bandwidth will help slightly. Promisingly, a 4090 with memory OC'd from 21 Gbps to 26 Gbps supposedly achieved 13% more perf from that 24% clock boost. But the 5070 Ti has half as many cores and probably won't see as much benefit. As I mentioned upthread, the 4070 TiS has a 33% wider bus than the 4070 Ti, 10% more cores, and 3% lower core clocks; the actual performance gain was about 10% at 4K, less at lower resolutions. I'll guesstimate 15% better perf at most from the upgrade from 21 Gbps GDDR6X to 28 Gbps GDDR7.
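The memory side gets the same back-of-the-envelope treatment (again a sketch; the 4090 OC datapoint is the one cited above, and linear scaling is an assumption made purely for illustration):

  # 4090 datapoint: +24% memory clock -> +13% perf, i.e. perf scales at
  # roughly 13/24 = 0.54x the bandwidth gain on a core-heavy card.
  elasticity = 13 / 24
  bw_gain = 28 / 21 - 1                        # 21 -> 28 Gbps: +33%
  print(f"naive upper bound: +{bw_gain * elasticity:.1%} perf")  # ~+18%
  # The 5070 Ti has about half a 4090's cores and less bandwidth pressure,
  # hence the more conservative ~15%-at-most guess above.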

Overall I predict the 5070 Ti will perform 15-20% better than the 4070 TiS, which will put it slightly above the 4080S. It will probably be priced below the 4080S's $1,000 (my bet: $849 MSRP, $975 street price) and have similar perf/W. I'm also expecting DisplayPort UHBR20 and PCIe 5.0 support, which will improve these cards' longevity.

Re: PCIe 5.0, I would love to see the cards make use of PCIe bifurcation, because I'd rather have 8 more lanes of PCIe connectivity for NVMe than the <1% performance benefit of the extra graphics bandwidth, but I'm not hopeful that card or motherboard manufacturers will make this possible for 5070/5080-series cards.
Posted on Reply
#97
N/A
rv8000I hope this is sarcasm. Nvidia will never give that sort of performance at such a price when they can ream their customers, who have no regard for themselves and keep coming back for more.
Except they do, every single time. Isn't the 3070 Ti as good as the 2080 Ti, and the 4070 Ti as good as the 3080 Ti? Same thing here. It's entirely possible that the 5070 Ti lands in close range of the 4090 at 1440p, while the 4090 remains better at 4K; that's the saving grace for 4090 owners who are coming back for a 5090 this time and are now calculating the best time to sell. It really is a 3-4 year investment. Asking $100 more isn't making or breaking the deal.
Posted on Reply
#98
yfn_ratchet
N/AExcept they do, every single time. Isn't the 3070 Ti as good as the 2080 Ti, and the 4070 Ti as good as the 3080 Ti? Same thing here. It's entirely possible that the 5070 Ti lands in close range of the 4090 at 1440p, while the 4090 remains better at 4K; that's the saving grace for 4090 owners who are coming back for a 5090 this time and are now calculating the best time to sell. It really is a 3-4 year investment. Asking $100 more isn't making or breaking the deal.
But that doesn't make any sense. That prediction would be in line with 3070 Ti ~ Titan RTX and 4070 Ti ~ 3090 Ti... but they're not. With the holding pattern you describe, the 5070 Ti would be in place to touch tips with... the 4080 SUPER.

And I counter that the pattern holds in name only. GeForce model numbers and their expected configs saw a backslide for the 40 series; the 4080 SUPER is closer to what a launch-model 4080 should have been, with an uncomfortable gap in shaders between the 4080 and 4090 where a 4080 Ti would be expected to fit. Never mind that we never got a full-fat Ada flagship in GeForce, no 4090 Ti with a fully intact AD102... nothing.

I project the 50 series to be maybe, MAYBE, a 15-20% uplift across the board between a new node/arch and higher power limits. And mind you, that would still see the 5070 Ti neck and neck with the launch 4080, not bodyshotting a theoretical 4080 Ti. It's GB203 up against AD103. It'd be an upset if Blackwell lost.
Posted on Reply
#99
Krit
So, approximately RTX 4080 Super performance plus ~2-7% uplift, and ~$100 off.

It's basically stagnation, but the average Nvidia buyer will swallow it easily without blinking. Even at the same price, a ~15-20% uplift is a small step for a next-gen GPU if we look at GPU generational history. The RTX 4060, 4060 Ti, RTX 4070, and RTX 4070 Ti were and are selling very well despite poor price-to-performance value.
Posted on Reply
#100
SRS
Until someone from the tech community decides to step up and take Nvidia on in the serious consumer GPU segment, all of the energy people spend in posting comments is wasted.

It is not at all impossible. It will, though, take a lot of money. "Oh, Apple won't be able to beat Intel. Intel has decades of expertise and even has its own leading fabs. Apple should stick to the Intel contract. The idea of using ARM designs for high performance is laughable and pitiable. What kind of expertise does Apple have in CPU design? Zero."

AMD could switch its role from enabling Nvidia to set prices to actually competing. That's the biggest barrier facing a would-be serious competitor... AMD's intentional sandbagging. Even so, AMD will still want to allocate as many of its wafers to enterprise as possible. There is space, right now, for a serious competitor in the segment that AMD vacated and hasn't occupied for many years. Claims that there isn't enough of a market aren't supported when discontinued GPUs sell out so quickly, regardless of whether something like a mining craze is happening. The cards sell. If there were no market, they wouldn't.

It strikes me as weak that people are so excited about the 9800X3D, even though it's mostly an overclocked (increased power budget) variant of the 7800X3D, when what people really need are more affordable, powerful GPUs. Oh boy... a faster CPU to use with massively overpriced GPUs. What value!

There are so many comments claiming that people buy Nvidia cards because of the branding. That's not true. The main reason people buy them is that the alternatives aren't as good on a technical level. If I were given a chunk of Elon's fortune to create the Potato GPU corporation, releasing a card with Vicki Lawrence's Mama on a striped and polka-dotted box, fake flowers and potpourri in the box with the GPU, influencer videos from me as the CEO mocking people for buying them (saying all their friends will make fun of them), and LAME! printed on the GPU shroud, the cards would sell out. Why? Because they'd offer better performance for less money than what Nvidia is offering, without the shortcomings. How?

1) More VRAM than Nvidia at each tier.
2) Competitive gaming performance per watt.
3) Better gaming performance per dollar.
4) Clearer naming strategy. No more Ti, Super, XT/XTX, products with the same name but different specs, products with a bigger number but worse performance, etc.
5) Each generation would be much better than the previous one in performance, not going down in performance per dollar especially.
6) Possibly moving the AI-oriented/RT-oriented hardware to a separate GPU, for a dual-GPU setup for those who care about those things. Possibly involving a new form factor to reduce latency.
7) Longer driver support than both Nvidia and AMD.
8) Better drivers out of the gate than Intel.
9) Top-end performance that's, at minimum, no lower than whatever Nvidia's top consumer card offers.
10) Serious commitment to performance in AI workloads.
11) Excellent Linux support, not just Windows support.
12) Quiet cooling.



One doesn't need smoke and mirrors to sell a superior product.

There are endless comments trying to justify AMD's refusal to compete, which is AMD's method of letting Nvidia set prices (soft collusion that also benefits Sony and MS by keeping "consoles" relevant). They claim that there aren't enough customers to justify creating the products, even though the 4090 was sold out for a long time. The argument that "consoles" are so good now (compared to the pathetic Jaguar generation) has some merit, but the video game market continues to expand, not contract. I would like to see good data showing that the serious ("enthusiast") PC gaming market is too small for a company to make a profit while undercutting Nvidia, and that the market wouldn't expand if people could purchase better-value PC gaming equipment at the enthusiast level. Instead, what I've seen are comments that could have been written by AMD and Nvidia: "Oh... woe is us... there's nothing we can do... here's my money..." fatalism.

Enthusiasts are the people who care about hardware specs. The claim that they're blinded by "team" this and that has been shown to be untrue. Enthusiasts are not Dell, chained to one vendor. When a truly superior product becomes available, they will abandon everything else unless they're being paid to use the competition's. Enthusiasts are debating the 7800X3D vs. the 9800X3D for gaming. They aren't blinded by Intel's history of better performance (particularly Sandy Bridge through Skylake).

Pointing to historical situations in which Nvidia was able to outsell AMD/ATI with inferior products seems to point to inadequate marketing. But even then, ATI and AMD cards had drawbacks, like inadequate coolers. The cooler AMD used for the 290X was embarrassingly underpowered, and I believe I recall that ASUS released a Strix version that was defectively designed. The current state of the Internet makes it very easy to get the word out about a superior product. A certain tech video review site, for instance, has millions of YT followers. Don't tell me people aren't going to learn about the superior product and will instead buy blindly. I don't buy it. I also don't think serious gamers care about what generic imagery is on the box, and that includes the brand logo and color scheme.

If it were my company, I would ditch the archaic ATX form factor, so that the form factor would make cooling GPUs, by far the highest-wattage components, the #1 design priority. Let's have some actual innovation (serious and committed) for once, instead of endless iteration of copy-cat products.

Anyway... my 1 cent. That's how much I have to rub together to get Potato GPU corporation off the ground. I'm not friends with the guys who build flaming moats.
Posted on Reply