Monday, December 16th 2024
NVIDIA GeForce RTX 5070 Ti Leak Tips More VRAM, Cores, and Power Draw
It's an open secret by now that NVIDIA's GeForce RTX 5000 series GPUs are on the way, with an early 2025 launch on the cards. Now, preliminary details about the RTX 5070 Ti have leaked, pointing to increases in both VRAM and power draw and suggesting that the new upper mid-range GPU will finally address the growing VRAM demands of modern games. According to the leak from Wccftech, the RTX 5070 Ti will have 16 GB of GDDR7 VRAM, up from 12 GB on the RTX 4070 Ti, as we previously speculated. The new sources also corroborate earlier leaks that the 5070 Ti will use the cut-down GB203 chip, although they point to a significantly higher TBP of 350 W. The new memory configuration will supposedly sit on a 256-bit memory bus and run at 28 Gbps, for a total memory bandwidth of 896 GB/s, a significant boost over the RTX 4070 Ti.
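For reference, the quoted bandwidth figure follows directly from the leaked bus width and data rate; a minimal back-of-envelope sketch (the 4070 Ti numbers are its known 192-bit, 21 Gbps configuration):

```python
# Back-of-envelope memory bandwidth from the leaked figures.
# bandwidth (GB/s) = bus width (bits) / 8 * data rate (Gbps per pin)
bus_width_bits = 256      # leaked 5070 Ti bus width
data_rate_gbps = 28       # leaked GDDR7 speed per pin

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"RTX 5070 Ti (leaked): {bandwidth_gbs:.0f} GB/s")   # 896 GB/s

# Same math for the RTX 4070 Ti (192-bit bus, 21 Gbps GDDR6X)
print(f"RTX 4070 Ti:          {192 / 8 * 21:.0f} GB/s")     # 504 GB/s
```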
Supposedly, the RTX 5070 Ti will also see a bump in total CUDA cores, from 7680 in the RTX 4070 Ti to 8960 in the RTX 5070 Ti. The new card will also move to the revised 12V-2x6 power connector, compared to the 16-pin 12VHPWR connector used by the 4070 Ti. NVIDIA is expected to announce the RTX 5000 series graphics cards at CES 2025 in early January, but the RTX 5070 Ti will supposedly be the third card in the 5000-series launch cycle. That said, leaks suggest that the 5070 Ti will still launch in Q1 2025, meaning we may see an indication of specs at CES 2025, although pricing is still unclear.
Update Dec 16th: The ubiquitous hardware leaker Kopite7kimi has since responded to the RTX 5070 Ti leaks, stating that 350 W may be on the higher end for the RTX 5070 Ti: "...the latest data shows 285W. However, 350W is also one of the configs." This suggests that a 350 W TBP remains possible, though perhaps only on certain graphics card models, if competition is strong, or in certain boost scenarios.
Sources:
Wccftech, Kopite7kimi on X
160 Comments on NVIDIA GeForce RTX 5070 Ti Leak Tips More VRAM, Cores, and Power Draw
Anyway, rumors are all well and good, but what will matter is performance and price. I am not hugely optimistic; NV essentially has a captive market and can price at whatever the hell they think said market will bear, but we'll see. Not too enthused about a potential TDP jump, either. I do realize that this is inevitable nowadays as a means to scrape out every little bit of performance, but it's not to my preference. It will probably end up being that you can limit the power significantly without losing much, but still.
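To illustrate the power-limiting point, a minimal sketch of capping board power with the standard nvidia-smi tool (the 250 W cap is purely an illustrative value, and setting the limit typically requires admin/root rights):

```python
# Hypothetical sketch: cap GPU board power and read the draw back,
# using the standard nvidia-smi CLI.
import subprocess

TARGET_WATTS = 250  # illustrative cap, not an official spec

# Set the power limit on GPU 0 (needs elevated privileges)
subprocess.run(["nvidia-smi", "-i", "0", "-pl", str(TARGET_WATTS)], check=True)

# Read back the current draw and the enforced limit
result = subprocess.run(
    ["nvidia-smi", "-i", "0",
     "--query-gpu=power.draw,power.limit",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "212.34 W, 250.00 W"
```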
Stop the bickering/insults/snide remarks
Nvidia is the new Intel, with price and power-envelope growth that greatly outpaces the performance progress.
To a normal pleb like me, you have plenty of "tools" I don't. I might be misreading your post, but if you want to insist that those who disagree with you are childish, then cool, good for you.
I think a lot of the hostility in GPU threads comes from just how bad the GPU market has gotten; maybe people are taking it too seriously, myself included. I shouldn't be taking a GPU topic here seriously, not after I saw a review with "not having DLSS" listed as a con. I didn't expect to see that again, but there it was in the B580 review.
Those first 5 years are tough. NV's power consumption was very good in the 4000 series after they ditched that Samsung node for TSMC.
Even with a TBP of 350 W, it will most likely see lower power draw while gaming.
No doubt the 5070 Ti can run at 275W just fine.
MSFS is a pretty good load: ~300 W on the core, ~350-380 W board power at times.
- 40 series is N4, 50 series is N4P.
- TSMC says N4->N4P is +6% perf, N4P->N4X is +4% perf. (source: Wikipedia)
- Zen 4 is N4P, Zen 5 is N4X.
- There was no clock speed improvement for Zen 5. Advertised boost clocks barely changed, and measured boost clocks were either the same or worse than Zen 4. (source: TPU, 9900x clocks vs 7900x clocks and 9700x clocks vs 7700x clocks)
- TSMC perf claims are at 1.2V, and GPUs run at lower voltages than CPUs, so any hypothetical benefit will be further shrunk.
Next, the TDP boost won't improve max clocks by more than 10%. I don't have a solid source for this, since TPU OC tests are run at stock board power instead of max (in which case I could point you to the review of the 4070 TiS Strix, which has +28% max board power). But my general impression from undervolting tests, for example this 4080S test on Reddit, is that a 50% change in power results in a 15% change in clocks and a 10% change in performance. You can also eyeball the voltage/frequency plots in a TPU review, extrapolate to ~1.2 V (rule of thumb: power scales with the square of voltage, so this would be ~25% more power than the 40 series), and see that there's barely 200 MHz gained on the projected curve.
Finally, the higher memory bandwidth will help slightly. Promisingly, a 4090 with memory OC'd from 21 Gbps to 26 Gbps supposedly achieved 13% more perf for that 24% clock boost. But the 5070 Ti has half as many cores and probably won't see as much benefit. As I mentioned upthread, the 4070 TiS has a 33% wider bus than the 4070 Ti, 10% more cores, and 3% lower core clocks; the actual performance gain was about 10% at 4K, less at lower resolutions. I'll guesstimate 15% better perf at most from the upgrade from 21 Gbps GDDR6X to 28 Gbps GDDR7.
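Putting that back-of-envelope reasoning into numbers, a rough sketch (the scaling factors are the rule-of-thumb estimates above, not measured data, and the 285 W baseline is the 4070 TiS's board power):

```python
# Rough sketch of the scaling argument above; all factors are
# rule-of-thumb estimates from the comment, not measurements.
import math

# Power-vs-clock rule of thumb: +50% power -> ~+15% clocks -> ~+10% perf,
# so perf scales roughly with (power ratio) ** 0.235 on this hand-wavy model.
power_ratio = 350 / 285          # leaked 350 W TBP vs. the 4070 TiS's 285 W
perf_exponent = math.log(1.10) / math.log(1.50)
perf_from_power = power_ratio ** perf_exponent
print(f"Perf gain from extra power: ~{(perf_from_power - 1) * 100:.0f}%")  # ~5%

# Memory: 21 Gbps GDDR6X -> 28 Gbps GDDR7 is +33% bandwidth, but the
# guesstimate above is that at most ~15% of that shows up as performance.
bw_gain = 28 / 21 - 1
print(f"Bandwidth gain: +{bw_gain * 100:.0f}%, assumed perf gain: <=15%")
```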
Overall I predict the 5070 Ti will perform 15-20% better than the 4070 TiS, which will put it slightly above the 4080S. It will probably be priced below the 4080S's $1000 (my bet: $849 MSRP, $975 street price) and have similar perf/W. I'm also expecting DisplayPort UHBR20 and PCIE 5.0 support, which will improve these cards' longevity.
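For what it's worth, those numbers imply only a modest perf-per-dollar improvement; a quick sketch (all inputs are the guesses above, not confirmed specs or prices, and "slightly above the 4080S" is assumed to mean ~+2%):

```python
# Quick perf-per-dollar comparison using the guessed figures only.
rtx_4080s_price = 1000          # 4080 SUPER MSRP
rtx_5070ti_price = 849          # guessed 5070 Ti MSRP
rtx_5070ti_perf = 1.02          # "slightly above the 4080S" (assumed ~+2%)

perf_per_dollar_gain = (rtx_5070ti_perf / rtx_5070ti_price) / (1.0 / rtx_4080s_price) - 1
print(f"Perf/$ vs 4080S at MSRP: +{perf_per_dollar_gain * 100:.0f}%")  # ~+20%
```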
Re: PCIE 5.0, I would love to see the cards make use of PCIE bifurcation because I'd rather have 8 lanes more PCIE connectivity for NVMe than the <1% performance benefit of extra graphics bandwidth, but I'm not hopeful that card or motherboard manufacturers will make this possible for 5070/5080 series cards.
And I counter that the pattern holds in name only. GeForce model numbers and their expected configs saw a backslide for the 40 series; the 4080 SUPER is closer to what a launch-model 4080 should have been, with an uncomfortable gap in shaders between the 4080 and 4090 where a 4080 Ti would be expected to fit. Never mind that we never got a full-fat Ada flagship in GeForce, no 4090 Ti with a fully intact AD102... nothing.
I project the 50 series to be maybe, MAYBE, a 15-20% uplift across the board between a new node/arch and higher power limits. And mind you, that would still see the 5070 Ti neck and neck with the launch 4080, not bodyshotting a theoretical 4080 Ti. It's GB203 up against AD103. It'd be an upset if Blackwell lost.
It's basically stagnation, but the average Nvidia buyer will swallow it without blinking. Even at the same price, a ~15-20% uplift is a pretty small step for a next-gen GPU if we look at GPU generation history. The RTX 4060, 4060 Ti, RTX 4070, and RTX 4070 Ti were and are selling very well despite poor price-to-performance value.
It is not at all impossible. It will, though, take a lot of money. "Oh, Apple won't be able to beat Intel. Intel has decades of expertise and even has its own leading fabs. Apple should stick to the Intel contract. The idea of using ARM designs for high performance is laughable and pitiable. What kind of expertise does Apple have in CPU design? Zero."
AMD could switch its role from enabling Nvidia to set prices to actually competing. That's the biggest barrier facing a would-be serious competitor... AMD's intentional sandbagging. However, even with that, AMD will still want to allocate as many of its wafers to enterprise as possible. There is space, right now, for a serious competitor which AMD has vacated and hasn't occupied for many years. Claims that there isn't enough market aren't supported when discontinued GPUs sell out so quickly, regardless of whether or not something like a mining craze is happening. The cards sell. If there were no market, they wouldn't.
It strikes me as weak that people are so excited about the 9800X3D, even though it's mostly an overclocked (increased power budget) variant of the 7800X3D and what people really need are more affordable powerful GPUs. Oh boy... a faster CPU to use with massively overpriced GPUs. What value!
There are so many comments claiming that people buy Nvidia cards because of the branding. That's not true. The main reason people buy them is that the alternatives aren't as good on a technical level. If I were given a chunk of Elon's fortune to create the Potato GPU corporation, releasing a card with Vicki Lawrence's Mama on a striped and polka-dotted box, fake flowers and potpourri in the box with the GPU, influencer videos from me as the CEO mocking people for buying them (saying all their friends will make fun of them), and "LAME!" printed on the GPU shroud, they would still sell out. Why? Because they'd offer better performance for less money than what Nvidia is offering, without the shortcomings. How?
1) More VRAM than Nvidia at each tier.
2) Competitive gaming performance per watt.
3) Better gaming performance per dollar.
4) Clearer naming strategy. No more Ti, Super, XT/XTX, products with the same name but different specs, products with a bigger number but worse performance, etc.
5) Each generation would be much better than the previous one in performance, and especially would never regress in performance per dollar.
6) Possibly moving the AI-oriented/RT-oriented hardware to a separate GPU, for a dual-GPU setup for those who care about those things. Possibly involving a new form factor to reduce latency.
7) Longer driver support than both Nvidia and AMD.
8) Better drivers out of the gate than Intel.
9) Top-end performance that's, at minimum, no lower than whatever Nvidia's top consumer card offers.
10) Serious commitment to performance in AI workloads.
11) Excellent Linux support, not just Windows support.
12) Quiet cooling.
One doesn't need smoke and mirrors to sell a superior product.
There are endless comments trying to justify AMD's refusal to compete, which is AMD's method of letting Nvidia set prices. (Soft collusion that also benefits Sony and MS by keeping "consoles" relevant.) They claim that there aren't enough customers to justify creating the products, even though the 4090 was sold out for a long time. The argument that "consoles" are so good now (compared to the pathetic Jaguar generation) has some merit, but the video game market continues to expand, not contract. I would like to see good data showing that the serious ("enthusiast") PC gaming market is too small for a company to make a profit whilst undercutting Nvidia, and that the market wouldn't expand if people were able to purchase better-value PC gaming equipment at the enthusiast level. Instead, what I've seen are comments that could have been written by AMD and Nvidia: "Oh... woe is us... there's nothing we can do... Here's my money..." fatalism.
Enthusiasts are the people who care about hardware specs. The claim that they're blinded by "team" this and that has been shown to be untrue. Enthusiasts are not Dell, chained to one vendor. When a truly superior product becomes available, they will abandon everything else unless they're being paid to use the competition's. Enthusiasts are debating the 7800X3D vs. 9800X3D for gaming. They aren't blinded by Intel's history of better performance (particularly Sandy Bridge through Skylake).
Pointing to historical situations in which Nvidia outsold AMD/ATI with inferior products seems, if anything, to point to inadequate marketing. But even then, ATI and AMD cards had drawbacks, like inadequate coolers. The cooler AMD used for the 290X was embarrassingly underpowered, and I believe I recall that ASUS released a Strix version that was defectively designed. The current state of the Internet makes it very easy to get the word out about a superior product. A certain tech video review site, for instance, has millions of YT followers. Don't tell me people aren't going to learn about the superior product and will instead buy blindly. I don't buy it. I also don't think serious gamers care about what generic imagery is on the box, and that includes the brand logo and color scheme.
If it were my company, I would ditch the archaic ATX form factor so that GPUs, which are by far the highest-wattage components, get a form factor designed around cooling them efficiently as the #1 priority. Let's have some actual innovation (serious and committed) for once, instead of endless iteration of copy-cat products.
Anyway... my 1 cent. That's how much I have to rub together to get Potato GPU corporation off the ground. I'm not friends with the guys who build flaming moats.