Friday, November 22nd 2024

NVIDIA GeForce RTX 5070 Ti Specs Leak: Same Die as RTX 5080, 300 W TDP

Recent leaks have unveiled specifications for NVIDIA's upcoming RTX 5070 Ti graphics card, suggesting an increase in power consumption. According to industry leaker Kopite7kimi, the RTX 5070 Ti will feature 8,960 CUDA cores and operate at a 300 W TDP. In a departure from previous generations, the RTX 5070 Ti will reportedly share the GB203 die with its higher-tier sibling, the RTX 5080. This differs from the RTX 40-series lineup, where the 4070 Ti and 4080 used different dies (AD104 and AD103, respectively). The shared-die approach could help keep NVIDIA's manufacturing costs lower. Performance-wise, the RTX 5070 Ti shows promising improvements over its predecessor: the leaked specifications indicate a 16% increase in CUDA cores compared to the RTX 4070 Ti, though this advantage shrinks to 6% when measured against the RTX 4070 Ti Super.

Power consumption sees a modest 5% increase to 300 W, suggesting improved efficiency despite the enhanced capabilities. Memory configurations remain unconfirmed, but speculation suggests the card could feature 16 GB of memory on a 256-bit interface, distinguishing it from the RTX 5080's rumored 24 GB configuration. The RTX 5070 Ti's positioning within the 50-series stack appears carefully calculated, with its 8,960 CUDA cores sitting approximately 20% below the RTX 5080's 10,752 cores. This larger gap between tiers contrasts with the previous generation's approach, potentially indicating a more defined product hierarchy in the Blackwell lineup. NVIDIA is expected to unveil its Blackwell gaming graphics cards at CES 2025, with the RTX 5090, 5080, and 5070 series leading the announcement.
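The percentage deltas quoted above can be checked with a few lines of arithmetic. A minimal sketch: the 5070 Ti and 5080 core counts are the leaked (unconfirmed) figures from this article, while the Ada core counts are NVIDIA's published specs.

```python
# Quick check of the core-count deltas cited in the article. The 50-series
# figures are leaked and unconfirmed; the Ada figures are published specs.
specs = {
    "RTX 4070 Ti":       7680,
    "RTX 4070 Ti Super": 8448,
    "RTX 5070 Ti":       8960,   # leaked, unconfirmed
    "RTX 5080":          10752,  # leaked, unconfirmed
}

def pct_increase(new, old):
    """Percentage increase of card `new` over card `old`, by CUDA core count."""
    return (specs[new] / specs[old] - 1) * 100

print(f"{pct_increase('RTX 5070 Ti', 'RTX 4070 Ti'):.1f}")        # → 16.7 (article rounds to 16%)
print(f"{pct_increase('RTX 5070 Ti', 'RTX 4070 Ti Super'):.1f}")  # → 6.1 (the ~6% figure)
print(f"{pct_increase('RTX 5080', 'RTX 5070 Ti'):.1f}")           # → 20.0 (the ~20% tier gap)
```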
Source: VideoCardz

88 Comments on NVIDIA GeForce RTX 5070 Ti Specs Leak: Same Die as RTX 5080, 300 W TDP

#76
Craptacular
TechBuyingHavoc: I expect both to suck as well. AMD is distracted with AI and not focused on gaming, despite what they are saying, and Intel may *wish* to gain market share with Battlemage, but they have no money left and Celestial looks dead in the water.
How are you coming to the conclusion that Celestial is dead in the water?
Posted on Reply
#77
tugrul_LordOfDrinks
People should factor in the power budget too. The RTX 5070 is said to have a 250 W TDP, which is 25% higher than the 4070, along with more cores. So it could be considerably better than the 4070, not the exact same thing. The core count is also a multiple of 64, which exactly matches workloads processed in chunks of 64 items; at 1440p, for example, each scanline is 40 × 64 pixels wide. I'd guess the 5070 will have some other architectural bonuses too, such as lower cache latency or bigger caches. In the end, it might even deliver 30% more performance than a 4070 limited to 200 W, like the Ventus 2X.

There's also the 5070 Ti vs the 5080: 33% more power budget with only 20% more cores. That power headroom should give the 5080 an edge over the 5070 Ti in per-core performance when an algorithm is compute-bound rather than bandwidth-bound.

Even against the 5090, the 5080 has 50% fewer cores but only about 33% less power budget.

The RTX 5080 has the most power budget per CUDA core. <---- I think Nvidia is counting on the RTX 5080 the most. It's cheaper than the 5090, has more power per CUDA core, possibly an optimal memory size, and perhaps just enough memory bandwidth to balance its compute performance (not unbalanced like a 660 Ti, etc.).
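The watts-per-core comparison can be sketched numerically. Note that only the 5070 Ti's figures and the 5080's core count come from the leak in the article; the 5080/5090 TDPs and the 5090 core count below are separate rumors circulating at the time, so every number here is speculative.

```python
# Sketch of the watts-per-CUDA-core comparison from the post above.
# All figures are leaks/rumors, not confirmed specifications.
rumored = {
    #               (CUDA cores, TDP in watts)
    "RTX 5070 Ti": (8960, 300),
    "RTX 5080":    (10752, 400),   # rumored TDP
    "RTX 5090":    (21760, 575),   # rumored core count and TDP
}

for name, (cores, watts) in rumored.items():
    # Milliwatts of power budget per CUDA core
    print(f"{name}: {watts / cores * 1000:.1f} mW/core")
# → RTX 5070 Ti: 33.5 mW/core
# → RTX 5080:    37.2 mW/core  (highest of the three, as the post argues)
# → RTX 5090:    26.4 mW/core
```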
Posted on Reply
#78
close
3valatzy: There is no such consensus. There is too much Nvidia sponsorship in reviews, which are misleading and lack true data.

It was more a matter of luck that they got the right man in the right place: Jim Keller, who designed the Zen architecture that is saving AMD for now. The thing is that Nvidia is focused on a market that made it a multi-trillion-dollar company, while AMD is nowhere near that, especially with today's threat of exiting GPU competition altogether. One or two weak Radeon GPU generations and AMD will be out of the business, which would be fatal for the company.
Keller was also at Intel and Tesla without managing the same miraculous turnaround for either of them. So I think it's uncharitable to say that the success that put AMD on an upward trend in the CPU business was just the result of luck and randomness, or even of just one person (Zen wasn't a one-person effort, nor did it rely *only* on Keller's genius for everything that mattered). Also, Nvidia is 'almost' tied with Apple for the most valuable public company in the world; no other silicon-related company is even close, not just AMD. But that's exactly my point: AMD chose a battle they had a better shot at fighting, not even talking about winning. I doubt Radeon failing would sink AMD; they'd most likely cut those losses and keep going with the CPU+APU business (or whatever integrated portfolio they need to keep going).
Posted on Reply
#79
Draconis
close: (Zen wasn't a one person effort, nor relied *only* on Keller's genius for everything that mattered), putting AMD on an upward trend in the CPU business.
Agreed. It was a group effort, but Mike Clark is acknowledged as the "Father of Zen". Quote from Jim Keller's interview with Ian Cutress, link below:
"I found people inside the company, such as Mike Clark, Leslie Barnes, Jay Fleischman, and others."
3valatzy: his name is Jim Keller who designed the Zen architecture which for now saves AMD.
Keller's role was more managerial, but he did have architecture input; see the interview here if you are interested.
Posted on Reply
#80
Random_User
3valatzy: AMD had dual-GPU solutions back then to combat Nvidia's cards.
And they were an absolute atrocity and hot garbage, much like CrossFire, which only worked half of the time (in gaming). I learned that the hard way.
Daven: So you are saying there will be a 100% increase in IPC going from Ada to Blackwell? That would mean the 5090 would be almost three times the performance of the 4090.

By the way, the 4070 Ti has over a 50% clock increase over the 3090, which was possible going from Samsung 8LPP to TSMC 4N. Blackwell is on the same node as Ada.

Just multiply CUDA cores by max clocks:

3090 Ti: 10.75k cores × 1.86 GHz ≈ 20.0
4070 Ti: 7.68k cores × 2.61 GHz ≈ 20.0

That's why those two GPUs have the same performance. It was the node change. We don't have that this time.

But just for fun, the 5070 Ti would need 4.6 GHz to match the 4090 at the same IPC. That's not happening at 300 W on TSMC 4N.
Lol, so true. Even Ada delivered less than the declared 30% in native, non-frame-generated performance gains.
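For anyone who wants to reproduce the cores-times-clocks estimate from the quoted post, here is a minimal sketch. It assumes identical per-core throughput (same IPC) between Ada and Blackwell, uses the RTX 4090's published specs (16,384 cores, 2.52 GHz boost), and the leaked 5070 Ti core count.

```python
# Back-of-envelope version of the cores-times-clocks estimate: assuming
# IPC parity, what boost clock would the leaked 8,960-core 5070 Ti need
# to match a stock RTX 4090?
cores_4090, boost_4090_ghz = 16384, 2.52   # published RTX 4090 specs
cores_5070ti = 8960                        # leaked, unconfirmed

# Throughput proxy: cores x clock (arbitrary units)
target = cores_4090 * boost_4090_ghz
required_clock_ghz = target / cores_5070ti

print(f"{required_clock_ghz:.2f} GHz")  # → 4.61 GHz, matching the post's ~4.6 GHz
```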
Posted on Reply
#81
mkppo
Vayra86: But... Nvidia was always better. Simple as that. Even during GCN: AMD drew more power for slightly better bang for buck, offered more VRAM for slightly better bang for buck. And that's all AMD wrote. Not ONCE did they take the leading position, either in feature set or in software altogether. The driver situation has been spotty. GPU time to market has no real fixed cadence; it's 'whatever happens at AMD' every single time, and it never happens to be a smooth launch. The list of issues goes on and on and on.

The only thing to applaud is AMD brought RDNA2/3 to a good, stable situation. Too bad the products don't sell. Because AMD chose to price them in parity with Nvidia... So at that point, they didn't have the consistency, nor the trust factor or brand image, nor the bang for buck price win.... and guess what. RDNA4 is a bugfix and then they're going back to the drawing board. Again: no consistency, they even admitted themselves that they failed now.
When GCN (HD 7970) came out, it was much faster than Nvidia's offering because the GTX 680 hadn't come out yet.

When it did, it was barely faster with slightly lower power consumption. But all the reviewers praised it to hell and back, mentioning how it achieved the holy trifecta, yada yada, and I sat there thinking, "isn't GCN just a superior architecture that overclocks much better? Why is no one mentioning this?" Turns out I was correct: the GTX 680 aged like shit, and within a few years the 7970 was so far ahead of it that it was in a different category altogether. The 7970 was also just heavily downclocked.

There were other times too: 9700 Pro, 9800 Pro, 290X. "Always" isn't really correct here.
Dawora: it seems u dont know much, u underate all the performance those cards will have? u trolling or serious?
Given just how terrible the low/mid-range of the 40 series was, it's easy to conclude that the 50 series will be no different. How do you know he's underrating the performance when you don't know it yourself?
Posted on Reply
#82
Vayra86
mkppo: When GCN (HD 7970) came out, it was much faster than nvidia because the GTX 680 didn't come out yet. […] How do you know that he's underrating the performance when you don't know it yourself?
I'm 100% in agreement when it comes to the 7970 and its budget sibling the 7950, which was the killer budget high-end card to buy in those days. Nobody bought a 680; everyone bought a 670 instead, or a 7950. And everyone who bought a 660 and/or 660 Ti was screwed, because both GPUs were too weak in the VRAM department and there was an issue with a certain MSI Power Edition. I did my stupid there and earned experience points putting two 660s in SLI :D Silly me; it didn't work in half the games I played, and in the other half it was a stutter fest, 150 FPS notwithstanding :roll::roll:
Dawora: it seems u dont know much, u underate all the performance those cards will have? u trolling or serious?
It seems to me you're about 15-16 years old and have much to learn, young gwasshoppa.

You can use TPU to learn things, or you can keep going on your current trajectory, which is likely to result in a swift banhammer, because this isn't a bottom-barrel YouTube chat or some social media feed. Try to discuss like a normal human being, using normal speech. People here support what they say (including myself) with factual arguments; you'd do well to try that too. And if you do, you will discover there are very few adamant 'fans' of specific brands here, and those types don't tend to survive here for very long either. Most people here have used and will keep using whatever brand is best for them at any given time. People are individuals. Treat them as such ;)
Dawora: There is no way to know is 5000 series worthy buy or not, But Nvidia know what they do so be ready to suprise.

And comparing shader count alone makes u look stupid
This is the only real statement I could find in your posts besides flaming, and it's something I'll respond to; perhaps we can get to something meaningful that way?

Comparing shader count within the same generation/architecture is reasonable. If the architecture doesn't change, shader counts map almost directly onto the real performance gaps between those cards.

Now, for the 5000 series, we already know there are little to no architectural changes compared to Ada (because Blackwell for enterprise is already (pre-)released/ordered and in delivery soon, starting this December in fact). The nodes are also similar: 4 nm on both, with slight refinements in Blackwell. So it's fair to compare shader counts. Will Nvidia have an ace up their sleeve? Who knows, but so far all we've heard is pricing and product placement, with no mention of a killer (software) feature yet. And I think Nvidia really doesn't have to, because everyone's screaming for chips and they'll sell them anyway.

So yes, the 5000 series will look grim AF. It's not as if Ada was a fantastic thing to buy; you do it because you have to or because you don't care about money. RDNA3 was no different, by the way; AMD priced it way out of its league.
Posted on Reply
#83
Dawora
Vayra86: […] It seems to me you're about 15-16 years old and have much to learn, young gwasshoppa. […]
Why do you care how old random people are? I could be 10, 20, 30, 40, or 95; it shouldn't matter.
I think u are around 13, max 14 years old.

Some people can't accept how badly AMD is doing in GPUs and how well Nvidia is doing, so they get very negative about Nvidia.
Posted on Reply
#84
Prima.Vera
So do we have the exact specs of the 5070, 5070 Ti and 5080 cards yet?
Posted on Reply
#85
ToxicTaZ
Let's see how the RTX 5000 series, with higher TDPs and higher prices than the RTX 4000 series, performs.

Let's see how they perform against my...

MSI RTX 4080 Super 16G SUPRIM X card:

Fully unlocked AD103
320 W TDP
2640 MHz boost
16 GB GDDR6X, 256-bit, 23 Gbps
10,240 CUDA cores
320 Tensor cores
320 TMUs
80 RT cores
80 SMs

Hmm, let's see how well the RTX 5000 series performs.

Cheers
Posted on Reply
#86
karomba
Legacy-ZA: I have my eye on the 5070 Ti; however, if it arrives with 12 GB of VRAM, there is no way. Also, the MSRP should be reasonable. You can't just keep pushing prices up, Nvidia; the world doesn't work that way. Salaries don't just go up ad infinitum; eventually you will price yourself out of the tier/market you used to target.
Yeah, I might upgrade to a 5070 Ti. But a 12 GB version seems impossible, because Nvidia never downgrades VRAM (except on the 4060), so it's a very safe bet it won't have 12 GB of VRAM.
Posted on Reply
#87
Vayra86
karomba: Yeah, I might upgrade to a 5070Ti. But a 12 GB version would be impossible because Nvidia never downgrades VRAM (except the 4060) so it would be very safe to bet it would not have 12 GB of VRAM.
Nvidia has downgraded VRAM several times between Pascal and Ada, across the entire stack except the x90 tier, my friend: either in capacity or in bandwidth. You have wonderful cache now to fix all your problems. Except the ones that don't fit in the framebuffer.
Posted on Reply