Friday, November 22nd 2024

NVIDIA GeForce RTX 5070 Ti Specs Leak: Same Die as RTX 5080, 300 W TDP

Recent leaks have unveiled specifications for NVIDIA's upcoming RTX 5070 Ti graphics card, suggesting an increase in power consumption. According to industry leaker Kopite7kimi, the RTX 5070 Ti will feature 8,960 CUDA cores and operate at a 300 W TDP. In a departure from the previous generation, the RTX 5070 Ti will reportedly share the same GB203 die with its higher-tier sibling, the RTX 5080. This differs from the RTX 40-series lineup, where the 4070 Ti and 4080 utilized different dies (AD104 and AD103, respectively). The shared-die approach could help keep NVIDIA's manufacturing costs down. Performance-wise, the RTX 5070 Ti shows promising improvements over its predecessor: the leaked specifications indicate a 16% increase in CUDA cores compared to the RTX 4070 Ti, though this advantage shrinks to 6% when measured against the RTX 4070 Ti Super.

Power consumption sees a modest 5% increase to 300 W, suggesting improved efficiency despite the enhanced capabilities. The memory configuration remains unconfirmed, but speculation points to 16 GB on a 256-bit interface, distinguishing the card from the RTX 5080's rumored 24 GB configuration. The RTX 5070 Ti's positioning within the 50-series stack appears carefully calculated, with the RTX 5080's 10,752 CUDA cores sitting 20% above its 8,960. This larger gap between tiers contrasts with the previous generation's approach, potentially indicating a more defined product hierarchy in the Blackwell lineup. NVIDIA is expected to unveil its Blackwell gaming graphics cards at CES 2025, with the RTX 5090, 5080, and 5070 series leading the announcement.
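Treating every figure above as unconfirmed leak data, the quoted percentage gaps fall straight out of the core counts; a quick sketch:

```python
# Percentage gaps implied by the leaked core counts (all unconfirmed).
CUDA_CORES = {
    "RTX 4070 Ti":       7_680,
    "RTX 4070 Ti Super": 8_448,
    "RTX 5070 Ti":       8_960,   # leaked
    "RTX 5080":          10_752,  # leaked
}

def pct_increase(base: str, new: str) -> float:
    """Percent increase in CUDA cores going from `base` to `new`."""
    return (CUDA_CORES[new] - CUDA_CORES[base]) / CUDA_CORES[base] * 100

print(f"5070 Ti over 4070 Ti:       +{pct_increase('RTX 4070 Ti', 'RTX 5070 Ti'):.1f}%")        # +16.7%
print(f"5070 Ti over 4070 Ti Super: +{pct_increase('RTX 4070 Ti Super', 'RTX 5070 Ti'):.1f}%")  # +6.1%
print(f"5080 over 5070 Ti:          +{pct_increase('RTX 5070 Ti', 'RTX 5080'):.1f}%")           # +20.0%
print(f"TDP increase over 4070 Ti:  +{(300 - 285) / 285 * 100:.1f}%")                           # +5.3%
```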
Source: VideoCardz

88 Comments on NVIDIA GeForce RTX 5070 Ti Specs Leak: Same Die as RTX 5080, 300 W TDP

#26
Zazigalka
Prima.VeraMaybe you are too young to remember, but nVidia definitely wasn't always better.
Just for your homework, search for the AMD Radeon HD 5870. It was so good that it was almost beating nVidia's dual-GPU card while wiping the floor with the whole nVidia generation. The 5850 was a monster too, and could work in a pair with the 5870. I remember that was my last CrossFire setup ever, but it was a blast. Good ol' times.
www.techpowerup.com/review/ati-radeon-hd-5870/30.html
While this is true, it was friggin ages ago. Reminiscing about what happened 15 years ago doesn't sell me cards worth buying today.
3valatzyNothing too exciting except RTX 5090 which will be crazy expensive - maybe $4000 for the GB202 die that is 744 mm^2. That is at the reticle size limit! :kookoo:
$1999
#27
3valatzy
Zazigalka$1999
Even the 4090 doesn't sell for that.
#28
Zazigalka
3valatzyEven the 4090 doesn't sell for that.
Not the ROG/Suprim models.
The 5090 will realistically sell for $2,500 with that $1,999 MSRP. Still, not $4,000. A 4090 equivalent will be around $1,500.
#29
AnotherReader
Vayra86So pray tell where it went wrong then. They were 'beating' Nvidia with what? Tech that met the end of its dev cycle. They had every opportunity to obtain true leadership, but AMD was thinking 'meh, we're good, this is fine, we don't need to chase the cutting edge continuously, 50% market is all we can do'? And then they thought, 'beating Nvidia': 'Let's release Nvidia's 970 two years after the fact and kill this market!' I mean... what?! They weren't beating Nvidia at all. They traded punches, but never answered Nvidia's Titan, and Hawaii XT failed miserably - a way too hungry dead end forcing them into Fury X and the capital loss against Maxwell. The death of AMD's GCN happened somewhere between the great release of the 7970 and the birth of Tonga, which proved the arch was a dead end, yet they pushed out the 290(X) anyway on a whopping 512-bit bus. And then Fury had to happen, because how else do you go above and beyond 'moar VRAM on a 512-bit bus'? And then they got their 4096-bit HBM ass kicked by a 384-bit 980 Ti.

AMD made Mantle, which became Vulkan. And then what? What is the overarching strategy here, console access? We can applaud their many successes, but the key to such wins is that you use them to increase your market share and control, to the detriment of other key players. That's commerce.

It's one thing to make the occasional 'good card' (which is really nothing more than pricing a product correctly / in a way people will buy it!) that sells; it's another to actually execute on a strategy. Over several decades of AMD GPUs I haven't discovered what it is. If we go by the marketing, it's some wild mix of making fun of the others while failing yourself ('Poor Volta' and a string of other events), going unified arch first and then not, and then, yes, we might as well unify this again after dropping under 20% share convincingly; going 'midrange with Polaris' to lose key market share and brand recognition earned on GCN (which had a few 'good cards'), only to claw back into the high end with RDNA2/3 and then back to midrange again?

There's just no rhyme or reason to it, and that is why it can't ever get consistently good.


I was in a console phase in those years; for some reason it was the PS3 at that point, not PC :D
The 290X was fine and, in fact, it was the last time that AMD beat Nvidia on the same node. The 512-bit bus is countered by the fact that its die was significantly smaller than the one used for the 780 Ti: GK110. It was Maxwell that took them by surprise, and they never really recovered from that. In addition, it's clear that the gaming division doesn't get as much investment as the CPU division. People think that RDNA is inferior as well, but RDNA is far more competitive than GCN was after Pascal.
#30
phints
A 300 W TDP is quite high. Without knowing anything else, you can tell the TSMC lithography jump is smaller this time :ohwell:
#32
Visible Noise
closeOf course the competition committed suicide. Journalists and reviewers banged the drum of how much better Nvidia is in terms of performance even when it was just edging out AMD, and banged another drum of how ray tracing is "the future" (TM). People gobbled that up and went for Nvidia even when AMD was a perfectly decent (almost equivalent) or cheaper alternative. The demand for either cheaper (at the mid/low end) or more efficient (at the high end) GPUs (I think AMD never had both together in a single product) dried up in favor of "Nvidia at every end".

Only now that Nvidia is squeezing every $ it can with last year's overclocked products, from a market they ended up dominating, have many of the same journalists and reviewers realized the consequences. I started seeing articles about how "Nvidia's GPUs don't get better, they just trade more power for more performance", and users started voting in polls that they care about raster performance, not ray tracing.
Lol, it's never AMD's fault for having sub-par products, is it?
ThomasKWere you even around when the HD 5870 came out and Nvidia's answer was the Fermi toaster?
That was 15 years ago, dude, and the 480 was still faster. You also seem to have forgotten the quick respin - 7 months - when Nvidia got power under control and crushed AMD to the point it took AMD two generations to reach parity.
#33
3valatzy
Visible NoiseThat was 15 years ago, dude, and the 480 was still faster. You also seem to have forgotten the quick respin - 7 months - when Nvidia got power under control and crushed AMD to the point
AMD had dual-GPU solutions back then to combat Nvidia's cards. It had the Radeon HD 5970, which was faster than the GTX 580.
Visible Noiseit took AMD two generations to reach parity.
Two generations later, the 7970 smashed the GTX 580.
#34
Baba
3valatzyEven the 4090 doesn't sell for that.
#35
3valatzy
^^^ The RTX 5090 will be $4,000; maybe $3,900 after discounts...
#36
Marcus L
3valatzyAMD had dual-GPU solutions back then to combat Nvidia's cards. It had the Radeon HD 5970, which was faster than the GTX 580.
Two generations later, the 7970 smashed the GTX 580.
Why are you lying through your teeth?

Sorry, reading comprehension fail on my part; ignore.
#37
wheresmycar
ZazigalkaA 4090 equivalent will be around $1,500.
Yep, this seems likely. Perhaps less at the alluded MSRP during a limited launch, with post-launch price normalisation then hitting $1,500+.

Scarcity-driven hype - Baarstids!!

I'm defo up for an upgrade - I'd be satisfied with a 4080S or 7900XTX equivalent (or better) from the 50-series lineup, maybe a 16 GB 5070 Ti/S in the ~$800 range. Alternatively, I'll see if the AMD 8000 series offers something more compelling. Worst-case scenario, certainly not the worst outcome, I might settle for a (hopefully) discounted 4080S to feed the dedicated G-SYNC panel.

The piss-take: every generation stubbornly pushing the envelope, not just in performance, but in how much these toss-pots can squeeze out of our wallets.
#38
Visible Noise
3valatzyAMD had dual-GPU solutions back then to combat Nvidia's cards. It had the Radeon HD 5970, which was faster than the GTX 580.
Two generations later, the 7970 smashed the GTX 580.
Read what I said. The 7970 reached parity with the 680 two years later.

And are you really going to bring up janky CrossFire on a single board? There's a reason why that was quickly abandoned. But hey, if "two AMD GPUs beat one Nvidia GPU for a $200 price increase ($275 inflation-adjusted)" works for you, I'm not going to say you're wrong. But then we would bring up the GTX 690, which beat everything that AMD would produce for the next 5 years - probably a record.

Funny, inflation-adjusted, the 5970 was a $1K card, and nobody screamed about the price then.
#39
dartuil
The 5090? I doubt that thing will be less than $2,500.
#40
N/A
300 W is not a problem, as a 66% power limit is always on the table.
If it keeps the full L3$, it's not bad at all at any price within reason. I just don't care at this point; I'm hungry for performance.
One thing I can't turn a blind eye to is the N4 node instead of N3. I guess it's not mature enough for big dies, but whatever.
#41
nguyen
Could the 5070 Ti be about as fast as the 4090, I wonder? At 300 W it would use about 100 W less than the 4090.
#42
The Quim Reaper
DavenLooks like the upper part of the line-up is coming into focus

5070 $600 250W Slightly higher performance than 4070 Super
5070 Ti $800 300W Slightly higher performance than 4070 Ti Super
5080 $1000 400W Slightly higher performance than 4080 Super
5090 $2000 600W 40% higher performance than 4090

Nothing too exciting given the same 4 nm die process except for the 5090. I have no idea how this thing is going to work at 600W if your rig isn't perfectly up to snuff.

As for the lower part of the line-up, Nvidia is definitely waiting to see how Battlemage and RDNA4 perform.
It's not too exciting to current 40xx owners, 4090 aside, but current 40xx owners aren't the target market for these 50xx cards.

..Owners of 30xx cards, or older, looking for a huge upgrade are the target. And these cards will deliver that.
#43
Quicks
The Quim ReaperIt's not too exciting to current 40xx owners, 4090 aside, but current 40xx owners aren't the target market for these 50xx cards.

..Owners of 30xx cards, or older, looking for a huge upgrade are the target. And these cards will deliver that.
Well, if they skipped the 40xx because it didn't have enough of a performance improvement, then even the 50xx will disappoint, and they can probably wait another 2-3 years before upgrading.

Unless they cut prices big time and make it worth upgrading.
#44
3valatzy
dartuil5090 I doubt that thing is less than 2500$
32 GB of new-generation, super-rare and expensive GDDR7 VRAM won't be cheap.

QuicksWell, if they skipped the 40xx because it didn't have enough of a performance improvement, then even the 50xx will disappoint, and they can probably wait another 2-3 years before upgrading.

Unless they cut prices big time and make it worth upgrading.
The extremely high power consumption will require new PC cases and new power supply units, which would render the whole initiative a no-go.
At least they will, for the first time, use DisplayPort 2.1... :twitch: :rolleyes: :shadedshu:
#45
Ruru
S.T.A.R.S.
"suggesting an increase in power consumption"

Oh, why am I not surprised. Gone are the 900/1000-series days when they concentrated on efficiency yet still managed a significant performance uplift.
Random_UserThere was also the Radeon 9600 Pro. A great, affordable low-end card that made gaming possible for a lot of people. And it had quite a bit of OC headroom. It was like the "2500+ Barton" of video cards.
The 9550 was even better since it was even cheaper, being just an underclocked 9600. My Club3D 9550 256M card in my stash OCs to Pro levels easily, IIRC. You just had to make sure to get the 128-bit version, not the SE.
#46
Daven
nguyenCould 5070 Ti be about as fast as 4090 I wonder, at 300W it would use about 100W less than 4090
Explain how 8,960 CUDA cores with 256-bit 16 GB matches the performance of 16,384 CUDA cores with 384-bit 24 GB. Oh, and both GPUs are made on the same process node.
#47
nguyen
DavenExplain how 8960 CUDA cores with 256-bit 16 GB matches the performance of 16384 CUDA cores with 384-bit 24GB. Oh and both GPUs are made on the same die process.
4070 Ti: 7,680 CUDA cores, 192-bit bus
3090: 10,496 CUDA cores, 384-bit bus

Pretty much every xx70 GPU ties with the previous gen xx80 Ti/xx90 (1070 = 980 Ti, 2070 Super = 1080 Ti, 3070 = 2080 Ti, 4070 Ti = 3090).
#48
Outback Bronze
Prima.VeraAMD Radeon HD 5870
Yeah, that card kicked arse. One of the all-time greats, along with the ATi 9700/9800 Pro/XT.

They need to bring those days back to get NVidia off their high horse.

Unfortunately, I'll be looking into a 5090/80 :(
#49
Daven
nguyen4070 Ti: 7,680 CUDA cores, 192-bit bus
3090: 10,496 CUDA cores, 384-bit bus

Pretty much every xx70 GPU ties with the previous gen xx80 Ti/xx90 (1070 = 980 Ti, 2070 Super = 1080 Ti, 3070 = 2080 Ti, 4070 Ti = 3090).
So you are saying there will be a 100% increase in IPC going from Ada to Blackwell? That would mean the 5090 would have almost three times the performance of the 4090.

By the way, the 4070 Ti has over a 50% clock increase over the 3090, which was possible going from Samsung 8LPP to TSMC 4N. Blackwell is on the same node as Ada.

Just multiply CUDA cores by max clocks:

3090 Ti: 10.7k cores × 1.86 GHz = 19.9
4070 Ti: 7.7k cores × 2.6 GHz = 20.0

That's why those two GPUs have the same performance. It was the node change. We don't have that this time.

But just for fun: the 5070 Ti would need 4.6 GHz to match the 4090 at the same IPC. That's not happening at 300 W on TSMC 4N.
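As a quick sanity check, here is a minimal Python sketch of that cores × clock back-of-envelope, using boost clocks from public spec listings (the 5070 Ti core count is the leaked, unconfirmed figure):

```python
# Crude throughput proxy: CUDA cores (in thousands) x boost clock (GHz).
# Boost clocks per public spec listings; the 5070 Ti count is leaked.
CARDS = {
    "RTX 3090 Ti": (10_752, 1.86),
    "RTX 4070 Ti": (7_680, 2.61),
    "RTX 4090":    (16_384, 2.52),
}

for name, (cores, ghz) in CARDS.items():
    print(f"{name}: {cores / 1000 * ghz:.1f}")
# 3090 Ti ~20.0 vs 4070 Ti ~20.0: parity, courtesy of the node jump.

# Boost clock the leaked 8,960-core 5070 Ti would need to match a
# 4090 at identical IPC:
cores_4090, ghz_4090 = CARDS["RTX 4090"]
print(f"required: ~{cores_4090 * ghz_4090 / 8_960:.1f} GHz")  # ~4.6 GHz
```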
#50
mb194dc
Seems really meh; there won't be much of a generational increase. I'm pretty convinced the next-gen cards are going to flop hard.

There just isn't the demand for ultra-expensive cards anymore.

Compute is what's been driving Nvidia's sales, even for the gaming division - people trying to use consumer cards for their LLMs.

Without a massive breakthrough on the front end, it's going to end very badly.