
NVIDIA GeForce RTX 5070 Ti Specs Leak: Same Die as RTX 5080, 300 W TDP

Looks like the upper part of the line-up is coming into focus

5070 $600 250W Slightly higher performance than 4070 Super
5070 Ti $800 300W Slightly higher performance than 4070 Ti Super
5080 $1000 400W Slightly higher performance than 4080 Super
5090 $2000 600W 40% higher performance than 4090

Nothing too exciting given the same 4 nm process, except for the 5090. I have no idea how this thing is going to work at 600 W if your rig isn't perfectly up to snuff.

As for the lower part of the line-up, Nvidia is definitely waiting to see how Battlemage and RDNA4 perform.

Nothing too exciting except the RTX 5090, which will be crazy expensive - maybe $4000 for the GB202 die at 744 mm^2. That is close to the ~858 mm^2 reticle limit! :kookoo:

The good news is that AMD will have a chance to survive after this, because most of the RTX 5000 series won't be worth buying.

RTX 4070 - 5888 shaders -> RTX 5070 - 6400 shaders
RTX 4070S - 7168 shaders
RTX 4070 Ti - 7680 shaders
RTX 4070 TiS - 8448 shaders -> RTX 5070 Ti - 8960 shaders
RTX 4080 - 9728 shaders
RTX 4080S - 10240 shaders -> RTX 5080 - 10752 shaders
RTX 4090 - 16384 shaders -> RTX 5090 - 21760 shaders
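
Quick arithmetic on those counts, as a sketch over the rumored (unconfirmed) figures:

```python
# Gen-over-gen shader increases from the leaked counts above (rumored, not confirmed).
pairs = {
    "5070 vs 4070":        (6400, 5888),
    "5070 Ti vs 4070 TiS": (8960, 8448),
    "5080 vs 4080S":       (10752, 10240),
    "5090 vs 4090":        (21760, 16384),
}
for name, (new, old) in pairs.items():
    print(f"{name}: {100 * (new / old - 1):+.1f}% shaders")
# 5070 vs 4070:        +8.7%
# 5070 Ti vs 4070 TiS: +6.1%
# 5080 vs 4080S:       +5.0%
# 5090 vs 4090:        +32.8%
```

Only the 5090 gets a meaningful bump on paper, which matches the sentiment above.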
 
Maybe you're too young to remember, but Nvidia definitely wasn't always better.
Just for your homework, search for the AMD Radeon HD 5870. It was so good that it nearly matched Nvidia's dual-GPU card while wiping the floor with Nvidia's entire single-GPU lineup that generation. The 5850 was a monster too, and could pair with the 5870 in CrossFire. That was my last multi-GPU setup ever, but it was a blast. Good ol' times.
While this is true, it was friggin ages ago. Reminiscing about what happened 15 years ago doesn't sell me cards worth buying today.

Nothing too exciting except the RTX 5090, which will be crazy expensive - maybe $4000 for the GB202 die at 744 mm^2. That is close to the ~858 mm^2 reticle limit! :kookoo:
$1999
 

Even the 4090 doesn't sell for that.


 
So pray tell where it went wrong then. They were 'beating' Nvidia with what? Tech that had reached the end of its dev cycle. They had every opportunity to take true leadership, but AMD was thinking 'meh, we're good, this is fine, we don't need to chase the cutting edge continuously, 50% market share is all we can do'? And then 'beating Nvidia' meant: 'Let's release Nvidia's 970 two years after the fact and kill this market!' I mean... what?! They weren't beating Nvidia at all. They traded punches but never answered Nvidia's Titan, and Hawaii XT failed miserably - a way-too-hungry dead end that forced them into Fury X and the capital loss against Maxwell. The death of GCN happened somewhere between the great release of the 7970 and the birth of Tonga, which proved the arch was a dead end, yet they pushed out the 290(X) anyway on a whoppin 512-bit bus. And then Fury had to happen, because how else do you go above and beyond moar VRAM on 512-bit? And then they got their 4096-bit HBM ass kicked by a 384-bit 980 Ti.

AMD made Mantle, which became Vulkan. And then what? What is the overarching strategy here, console access? We can applaud their many successes but the key to those events is that you use them to increase your market share and control, to the detriment of other key players. That's commerce.

It's one thing to make the occasional 'good card' (which is really nothing more than pricing a product correctly / in a way people will buy it!) that sells; it's another to actually execute on a strategy. Over several decades of AMD GPUs I haven't discovered what it is. If we go by the marketing, it's some wild mix of making fun of the others while failing yourself ('Poor Volta' and a string of other events), going unified arch first and then not, and then, yes, we might as well unify this again after dropping under 20% share convincingly; going 'midrange with Polaris' to lose key market share and the brand recognition earned on GCN (which had a few 'good cards'), only to claw back into the high end with RDNA2/3 and then retreat to midrange again?

There's just no rhyme or reason to it, and that is why it can't ever get consistently good.


I was in a console phase in those years; for some reason it was PS3 at that point, not PC :D
The 290X was fine and, in fact, it was the last time AMD beat Nvidia on the same node. The 512-bit bus is countered by the fact that its die was significantly smaller than the 780 Ti's GK110. It was Maxwell that took them by surprise, and they never really recovered from that. In addition, it's clear that the gaming division doesn't get as much investment as the CPU division. People think RDNA is inferior as well, but RDNA is far more competitive than GCN was after Pascal.
 
300 W TDP is quite high. Without knowing anything else, you can tell the TSMC lithography jump is smaller this time :ohwell:
 
Of course the competition committed suicide. Journalists and reviewers banged the drum of how much better Nvidia is in terms of performance even when it was just edging out AMD, and banged another drum of how ray tracing is "the future" (TM). People gobbled that up and went for Nvidia even when AMD was a perfectly decent (almost equivalent) or cheaper alternative. The demand for either cheaper (at the mid/low end) or more efficient (at the high end) GPUs (I think AMD never had both together in a single product) dried up in favor of "Nvidia at every end".

Only now that Nvidia is squeezing every $ it can, with last year's overclocked products, out of a market it ended up dominating have many of the same journalists and reviewers realized the consequences. I've started seeing articles about how "Nvidia's GPUs don't get better, they just trade more power for more performance", and users voting in polls that they care about raster performance, not ray tracing.

Lol, it's never AMD's fault for having sub-par products, is it?

Were you even around when the HD 5870 came out and Nvidia's answer was the Fermi toaster?

That was 15 years ago dude, and the 480 was still faster. You also seem to have forgotten the quick respin - 7 months - when Nvidia got power under control and crushed AMD to the point it took AMD two generations to reach parity.
 
That was 15 years ago dude, and the 480 was still faster. You also seem to have forgotten the quick respin - 7 months - when Nvidia got power under control and crushed AMD to the point

AMD had dual-GPU solutions back then to combat Nvidia's cards. It had the Radeon HD 5970, which was faster than the GTX 580.

it took AMD two generations to reach parity.



Two generations later, the 7970 smashed the GTX 580.
 

A 4090 equivalent will be around $1500.

Yep, this seems likely. Perhaps less at the alluded MSRP for a limited launch run, with post-launch price normalisation hitting $1500+.

Scarcity driven hype - Baarstids!!

I'm defo up for an upgrade - I'd be satisfied with a 4080S or 7900XTX equivalent or better from the 50-series lineup, maybe a 16 GB 5070 Ti/S in the ~$800 range. Alternatively, see if the AMD 8000-series offers something more compelling. Worst case scenario (certainly not the worst outcome), I might settle for a (hopefully) discounted 4080S to feed the dedicated GSYNC panel.

The piss-take - every generation stubbornly pushing the envelope, not just in performance, but in how much these toss-pots can squeeze out of our wallets.
 
AMD had dual-GPU solutions back then to combat Nvidia's cards. It had the Radeon HD 5970, which was faster than the GTX 580.




Two generations later, the 7970 smashed the GTX 580.

Read what I said. The 7970 reached parity with the 680 two years later.

And are you really going to bring up janky CrossFire on a single board? There's a reason that was quickly abandoned. But hey, if "two AMD GPUs beat one Nvidia GPU for a $200 price increase ($275 inflation adjusted)" works for you, I'm not going to say you're wrong. But then we would bring up the GTX 690, which beat everything AMD would produce for the next 5 years - probably a record.

Funny, inflation adjusted the 5970 was nearly a $1K card, and nobody screamed about the price then.
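
For what it's worth, a rough check of that inflation figure; both inputs (the 5970's $599 launch MSRP and a ~1.46x CPI factor for 2009 to 2024) are my assumptions, not from this thread:

```python
# Rough inflation check on the HD 5970's launch price (assumed $599 MSRP,
# assumed ~1.46x cumulative CPI from late 2009 to late 2024).
msrp_2009 = 599
cpi_factor = 1.46
print(f"~${msrp_2009 * cpi_factor:.0f} in 2024 dollars")  # ~$875
```

So "nearly $1K" is in the right ballpark, if a bit generous.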


 
The 5090? I doubt that thing will be less than $2500.
 
300 W is not a problem, as a 66% power limit is always on the table.
If it keeps the full L3$, it's not bad at all at any price within reason. I just don't care at this point; I'm hungry for performance.
One thing I can't turn a blind eye to is the N4 node instead of N3. I guess N3 isn't mature enough for big dies, but whatever.
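
As a sketch of what that 66% limit works out to, assuming the rumored 300 W figure (the nvidia-smi commands are real but illustrative; each board exposes its own min/max range):

```python
# What a 66% power limit means on a rumored 300 W card (assumption, not a spec).
tdp_watts = 300
limit_watts = round(tdp_watts * 0.66)
print(f"66% of {tdp_watts} W = {limit_watts} W")  # 198 W
# Applying it (needs admin rights; check the supported range first):
#   nvidia-smi -q -d POWER   # shows min/max enforceable limits
#   nvidia-smi -pl 198       # sets the limit in watts
```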
 
Could the 5070 Ti be about as fast as the 4090, I wonder? At 300 W it would use about 100 W less than the 4090.
 
Looks like the upper part of the line-up is coming into focus

5070 $600 250W Slightly higher performance than 4070 Super
5070 Ti $800 300W Slightly higher performance than 4070 Ti Super
5080 $1000 400W Slightly higher performance than 4080 Super
5090 $2000 600W 40% higher performance than 4090

Nothing too exciting given the same 4 nm process, except for the 5090. I have no idea how this thing is going to work at 600 W if your rig isn't perfectly up to snuff.

As for the lower part of the line-up, Nvidia is definitely waiting to see how Battlemage and RDNA4 perform.

It's not too exciting to current 40xx owners, 4090 aside, but current 40xx owners aren't the target market for these 50xx cards.

..Owners of 30xx cards, or older, looking for a huge upgrade are the target. And these cards will deliver that.
 
It's not too exciting to current 40xx owners, 4090 aside, but current 40xx owners aren't the target market for these 50xx cards.

..Owners of 30xx cards, or older, looking for a huge upgrade are the target. And these cards will deliver that.

Well, if they skipped the 40xx because it didn't offer enough of a performance improvement, then even the 50xx will disappoint, and they can probably wait another 2-3 years before upgrading.

Unless they cut the prices big time and make it worth upgrading.
 
The 5090? I doubt that thing will be less than $2500.

32 GB of new-generation, super rare and expensive GDDR7 VRAM won't be cheap.



Well, if they skipped the 40xx because it didn't offer enough of a performance improvement, then even the 50xx will disappoint, and they can probably wait another 2-3 years before upgrading.

Unless they cut the prices big time and make it worth upgrading.



The extremely high power consumption will require new PC cases and new power supply units, which would render the whole initiative a no-go.
At least they'll use DisplayPort 2.1 for the first time... :twitch: :rolleyes: :shadedshu:
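
For the PSU side, rough math as a sketch; every figure below is an assumption for illustration, not a measured value:

```python
# Back-of-the-envelope PSU sizing around a hypothetical 600 W card.
gpu_w    = 600   # rumored 5090 TDP
cpu_w    = 250   # high-end desktop CPU under load (assumption)
rest_w   = 100   # motherboard, RAM, storage, fans (assumption)
headroom = 1.3   # rule-of-thumb margin for transients/efficiency (assumption)
print(f"Suggested PSU: ~{round((gpu_w + cpu_w + rest_w) * headroom)} W")  # ~1235 W
```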
 
"suggesting an increase in power consumption"

Oh, why am I not surprised. Gone are the 900/1000-series days, when they concentrated on efficiency yet still managed a significant performance uplift.

There was also the Radeon 9600 Pro. A great, affordable low-end card that made gaming possible for a lot of people. And it had quite nice OC headroom. It was the "2500+ Barton" of video cards.
The 9550 was even better since it was even cheaper but just an underclocked 9600. My Club3D 9550 256M card in my stash OCs to Pro levels easily, IIRC. Just had to make sure to get the 128-bit version, not the SE.
 
Could the 5070 Ti be about as fast as the 4090, I wonder? At 300 W it would use about 100 W less than the 4090.
Explain how 8960 CUDA cores with a 256-bit bus and 16 GB match the performance of 16384 CUDA cores with a 384-bit bus and 24 GB. Oh, and both GPUs are made on the same process node.
 
Explain how 8960 CUDA cores with a 256-bit bus and 16 GB match the performance of 16384 CUDA cores with a 384-bit bus and 24 GB. Oh, and both GPUs are made on the same process node.

4070 Ti: 7680 CUDA cores, 192-bit bus
3090: 10496 CUDA cores, 384-bit bus
[TPU chart: average FPS at 3840×2160]


Pretty much every xx70 GPU ties with the previous gen xx80 Ti/xx90 (1070 = 980 Ti, 2070 Super = 1080 Ti, 3070 = 2080 Ti, 4070 Ti = 3090).
 
AMD Radeon HD 5870

Yeah, that card kicked arse. One of the all-time greats, as well as the ATi 9700/800 pro/xt's.

They need to bring those days back to get NVidia off their high horse.

Unfortunately, I'll be looking into a 5090/80 :(
 
4070 Ti: 7680 CUDA cores, 192-bit bus
3090: 10496 CUDA cores, 384-bit bus

Pretty much every xx70 GPU ties with the previous gen xx80 Ti/xx90 (1070 = 980 Ti, 2070 Super = 1080 Ti, 3070 = 2080 Ti, 4070 Ti = 3090).
So you're saying there will be a 100% increase in IPC going from Ada to Blackwell? That would mean the 5090 would be almost three times the performance of the 4090.

By the way, the 4070 Ti has an over-50% clock increase over the 3090, which was possible going from Samsung 8LPP to TSMC 4N. Blackwell is on the same node as Ada.

Just multiply CUDA cores by max clocks:

3090 Ti: 10.7k cores times 1.86 GHz = 19.9
4070 Ti: 7.7k cores times 2.6 GHz = 20.0

That’s why those two GPUs have the same performance. It was the node change. We don’t have that this time.

But just for fun: the 5070 Ti would need about 4.6 GHz to match the 4090 at the same IPC. That's not happening at 300 W on TSMC 4N.
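
The same estimate in code form, as a sketch; it's pure arithmetic on the numbers above, ignores IPC, memory bandwidth, and cache, and the 4090 boost clock is my assumption:

```python
# Cores x clock as a crude throughput proxy (ignores IPC, bandwidth, cache).
def proxy(cores, ghz):
    return cores * ghz / 1000  # arbitrary units

print(proxy(10_700, 1.86))  # 3090 Ti -> ~19.9
print(proxy(7_700, 2.6))    # 4070 Ti -> ~20.0

# Clock a rumored 8960-core 5070 Ti would need to match a 4090
# (16384 cores at ~2.52 GHz boost; the boost clock is my assumption):
print(f"{16384 * 2.52 / 8960:.2f} GHz")  # ~4.61 GHz
```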
 