
NVIDIA GeForce RTX 50 Technical Deep Dive

Finally we have a clear and singular idea of what a "CUDA core" is.
Until they change it again. It doesn't matter, as there never was an actual definition, and it's not a "core" anyway.
 
Until they change it again. It doesn't matter, as there never was an actual definition, and it's not a "core" anyway.
At least we know that it's an INT/FP unit (now).
 
At least we know that it's an INT/FP unit (now).

It was always best to just go by SM count with Nvidia or CU count with AMD.
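If you want that number programmatically, the CUDA runtime reports the SM count directly; the marketing "CUDA core" figure is just SMs multiplied by the FP32 lanes per SM. A minimal sketch (my own example, with 128 lanes per SM assumed for recent consumer architectures):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop{};
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::printf("No CUDA device found\n");
        return 1;
    }
    // multiProcessorCount is the SM count; the advertised "CUDA core" number
    // is SMs * FP32 lanes per SM (assumed 128 here, as on Ada-class parts).
    const int fp32PerSm = 128;
    std::printf("%s: %d SMs, ~%d CUDA cores\n",
                prop.name, prop.multiProcessorCount,
                prop.multiProcessorCount * fp32PerSm);
    return 0;
}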
 
So was ray tracing years ago, and yet here we are, still trying to figure out ways of patching out issues in image quality and performance. It did not turn out how people expected.
It's not like pure raster was perfect on day one either. New techniques have been added for better visuals and performance as well.

Even offline 3D renderers are still improving on IQ and performance. They went from being fairly complex to work with to being really user-friendly, where it's very simple to make something that looks good. Big animation studios have a team dedicated to performance optimization.

It's a tech that is still very young, trying to do in real time what took decades to achieve offline. But unlike raster, it's being compared to something that had decades of maturity, research, shared knowledge, and fast-growing hardware.
 
Seems like the 5080 will be ~10% faster compared to the 4080 Super. Funny how, for Nvidia, the Super cards don't exist now. The 50 series looks like a software update to the 40 series, maybe with the 5090 being the exception.
What a waste on AMD's side that they haven't yet figured out an MCM design for consumer GPUs. If the performance leaks for the 9070 XT are somewhat correct, they could just glue two of those together and have a 9090 XT to compete with the 5090. At least they would beat them in the number of nines in the name.
 
Does anyone have a breakdown of those shader and tensor cores into more basic components?
 
Does anyone have a breakdown of those shader and tensor cores into more basic components?
Nvidia doesn't like to disclose those things. I haven't seen a single more detailed diagram for either the 30 series or the 40 series. It's just high-level blocks.
 
Is the 5090 worse at pure raster performance than the 4090? In other words, did they gimp it to fit in all the extra AI cores?
Looks like it's around 50% faster than the 4090 in pure raster at 4K, just by looking at the specs.

Performance differences cannot be calculated from the CUDA core numbers alone. Different structures. But still, I'd think a 5090 would be anywhere from 20% to 25% faster in raster performance.
That's the same underestimation I have seen in many tech forums when someone speaks about Nvidia 5xxx performance.

So +20-25%?
Then it's only 10-15% faster than my overclocked (but since sold) 4090.
Looking at the 5090 specs, it's just stupid to think it's only 20-25% faster vs a stock 4090.

The 5090 is +40-50% faster at 4K vs the 4090.

5% improvement... what a joke
This is the RTX 50 topic, not Ryzen X3D.
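For what it's worth, the back-of-the-envelope spec math both camps are implicitly doing looks like this. Core counts and bandwidth figures below are the commonly quoted specs (assumed here, not confirmed measurements); real games land somewhere between the two bounds, which is why the estimates in this thread diverge so much:

#include <cstdio>

int main() {
    // Commonly quoted specs (assumptions, not measurements):
    // RTX 4090: 16384 CUDA cores, ~1008 GB/s GDDR6X
    // RTX 5090: 21760 CUDA cores, ~1792 GB/s GDDR7
    const double cores4090 = 16384.0, bw4090 = 1008.0;
    const double cores5090 = 21760.0, bw5090 = 1792.0;

    const double computeRatio   = cores5090 / cores4090;  // ~1.33x, ignoring clock differences
    const double bandwidthRatio = bw5090 / bw4090;         // ~1.78x

    std::printf("Compute-bound bound:   +%.0f%%\n", (computeRatio - 1.0) * 100.0);
    std::printf("Bandwidth-bound bound: +%.0f%%\n", (bandwidthRatio - 1.0) * 100.0);
    // That on-paper spread (+33% to +78%) is exactly why real-world estimates
    // in this thread range from ~20-25% up to ~40-50%.
    return 0;
}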
 
The 5090 is +40-50% faster at 4K vs the 4090.
You are completely talking out of your rear end. There's zero indication that it's going to be 50% faster, and in fact you're the only one that's even suggested such nonsense.

Edit!: lol you're the same member in another thread that said 1200 watts being dumped in a room had no impact on the room's temperature.
 
Gaming is no different than any other hobby nowadays. How far you want to take your experience depends on how much you want to spend.

I think people forget this, or haven't had other hobbies :confused:
While the prices are kind of crazy, and the 'traditional' uplift this gen is looking small, we've seen much like this before, certainly crazy high prices for crazy halo products. The biggest bother is that we're slowly whittling away at what portion of top-dog performance we get as we go down the stack; it's slowly becoming more proportionate to price.

Still, I expect certain products in the stack to punch above their weight relative to the physical hardware differences. For instance, the 4070 Ti has 47% of the cores but 64% of the performance of a 4090. Likewise, the 4080 has 59% of the cores but 80% of the performance. I'd predict this to follow a similar, if slightly disproportionate, trend in Blackwell. For instance, a 5090 is ~double a 5080, but I'd expect it to be closer to 50% faster.
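As a rough sanity check on that prediction, you can fit a crude power law (perf ~ cores^alpha) through those two Ada data points and apply it to the 5090/5080 core ratio. This is just my own curve fit on the percentages quoted above, nothing official:

#include <cmath>
#include <cstdio>

int main() {
    // Ada data points quoted above: (core ratio vs 4090, perf ratio vs 4090)
    const double c1 = 0.47, p1 = 0.64;  // 4070 Ti
    const double c2 = 0.59, p2 = 0.80;  // 4080

    // Fit perf ~ cores^alpha through each point, then average the exponents.
    const double alpha1 = std::log(p1) / std::log(c1);  // ~0.59
    const double alpha2 = std::log(p2) / std::log(c2);  // ~0.42
    const double alpha  = 0.5 * (alpha1 + alpha2);      // ~0.51

    const double coreRatio = 2.0;  // 5090 has roughly double the cores of a 5080
    std::printf("Predicted 5090 vs 5080: +%.0f%%\n",
                (std::pow(coreRatio, alpha) - 1.0) * 100.0);
    // Comes out around +42%: double the cores, roughly half again the performance.
    return 0;
}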

As to the hobby cost, yeah I gotta agree there, as a motorcyclist that isn't cheap at all, even once you own the bike. Good tyres relatively often, servicing, fuel, insurance, gear etc, it adds up very fast. Know anyone with a project vehicle? RIP their wallet.
 
The 5070 will be faster than a 5090, slower than a 4070 Super :roll:
So you think it's only 10% faster than the 4070..

Yeah, they use GDDR7 just to be 10% faster than the 4070, sure sure.
We could just take a 4070 and OC it +10%.

It's great when the reviews come out; no need to read all this BS, and the trolls go back to their caves.

You are completely talking out of your rear end. There's zero indication that it's going to be 50% faster, and in fact you're the only one that's even suggested such nonsense.

Edit!: lol you're the same clown in another thread that said 1200 watts being dumped in a room had no impact on the room's temperature.
You are an AMD fan and you don't like to see any performance uplift in Nvidia GPUs, so you are blind to the facts and the 5090 specs vs the 4090.

And yes, I have great AC; even in summer, room temps are the same.
Maybe in your country you don't have AC at all? I don't know.
I can run 4x 5090 24/7 in summer and my room temp is still 20°C if I want to..
 
So you think it's only 10% faster than the 4070..

Yeah, they use GDDR7 just to be 10% faster than the 4070, sure sure.
We could just take a 4070 and OC it +10%.

It's great when the reviews come out; no need to read all this BS, and the trolls go back to their caves.
It's worth paying attention to the 250 watts of the 5070 with such pathetic specs. There is no reason for it to need that much, except that otherwise it will not beat the 4070 Super :D I shudder to imagine at what frequency it will have to run to need so much.
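To put a number on the "high TDP means it's clocked past its sweet spot" intuition: with the usual dynamic-power approximation P ~ C * V^2 * f, and voltage having to rise with frequency, power grows much faster than clocks. All figures below are made up purely for illustration, not real 5070 numbers:

#include <cstdio>

int main() {
    // Toy model: dynamic power ~ C * V^2 * f. Illustrative numbers only.
    const double baseFreq = 2.5, baseVolt = 1.00, basePower = 180.0;  // GHz, V, W
    const double pushFreq = 2.9, pushVolt = 1.10;

    const double scale = (pushFreq / baseFreq) * (pushVolt / baseVolt) * (pushVolt / baseVolt);
    std::printf("+%.0f%% clock -> +%.0f%% power (%.0f W -> %.0f W in this toy model)\n",
                (pushFreq / baseFreq - 1.0) * 100.0,
                (scale - 1.0) * 100.0,
                basePower, basePower * scale);
    // ~16% more clock costs ~40% more power once the voltage bump is included,
    // which is why a high TDP on a narrow chip suggests it is clocked hard.
    return 0;
}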
 
I don't get the 5080. The whole lineup except for the 5090 has been shifted down to a lower tier, but is being sold at the same price as, or really close to, its previous counterparts. It is an even more aggressive and clearer example than the Ada shift from Ampere.
They want more people to buy the 5090, and screw those (for the most part) who can't.

It is a deliberate decision to cut the lower models that much; it's got nothing to do with the silicon (in the lower segment).
They can come out with a halo product asking this much, having built almost all their slides around it as if everyone could afford it, and they cannot even give the expected generational leap in performance on the others?

They could have made the GB203 die bigger, with more cores added. They did not want to.
The 5070 in particular is so weak that a couple of generations back it possibly wouldn't have been considered a 60 Ti tier GPU, but more like a 60.
And later on, a year or so from now, they will say: how great that gamers could buy the 4070 Super, which is 10-15% faster, for x amount of money!, when that card would have been the 5060 Ti at a lower price to begin with.
 
While the prices are kind of crazy, and the 'traditional' uplift this gen is looking small
I think there is more to it than meets the eye.. but only time will tell.

I am not a proponent of high prices, make no mistake.. but like CUDA, I think software is being written specifically to take advantage of the new hardware capabilities.

I could just be talking out of my ass though, because I have done almost no real research on it :D
 
"AI for games". Did they mean NPC logic will now be handled on the GPU instead of the CPU endpoint?
 
Nvidia marketing pretending to be an article, though.

Noticed my YouTube is full of it today also.

What people actually want to see, we still can't see.

GPU manufacturers should send GPUs to reviewers before the unveiling/announcement, so that when they are finished with their marketing "blah blah" we can see the results of actual said "blah blah" because we know they are lying in one way or another.

It's extremely annoying.
 
GPU manufacturers should send GPUs to reviewers before the unveiling/announcement, so that when they are finished with their marketing "blah blah" we can see the results of actual said "blah blah" because we know they are lying in one way or another.

It's extremely annoying.

That'll be the day; they want to get the PR spin out to the general consumer, who will eat it up like chocolate and peanut butter on their favorite Insta models @&#%%#
 
So, how are the reviewers gonna pin the accolades on the RTX 5080, considering it has about ZERO performance uplift in raster and ray tracing compared to the RTX 4080 Super, for the same price?

Change the benchmarking format, relegate the non-DLSS performance numbers to an irrelevant past, and embrace the fake frames?

Ignore the mid-generational uplift, compare just the $1,200 RTX 4080 to the $999 RTX 5080, and somehow sell a 15% performance increase as groundbreaking, hoping that people have the memory of a goldfish?

Focus on the usability of AI gaming features, something that is as much in a vague future as ray tracing was for the RTX 2080 - by the time game creators learned to use it, the card was too slow to actually enjoy it?

Fully embrace that Moore's Law is truly dead now: if you want more performance, you pay more, not wait two years?
 
"Nvidia says more than 80 percent of RTX GPU owners activate its DLSS upscaling."
Even so, how many people really enjoyed frame doubling in the previous generation? Even after all the updates, Flight Simulator still showed so many artifacts that it was unusable to most, except for benchmark bragging.
 
Even so, how many people really enjoyed frame doubling in the previous generation? Even after all the updates, Flight Simulator still showed so many artifacts that it was unusable to most, except for benchmark bragging.
True. Similar to streaming compression, how many people care? Probably the wrong place to ask that question. I fondly remember games being glorified slideshows in my youth. My home computer was the exception that proved the rule of the Doppler effect in Falcon 4.0, for example. I tend to think of the "shifting baselines" theory and how it pays to be the market leader and shift the baseline to where it suits you. Image quality is better, but now I have over 400 frames. Some people won't even know what came before. (Ties into that thread about someone worrying about the proliferation of AI.)
 
It's a tech that is still very young
It's not young at all; 6 years is a lot, and raster graphics advanced significantly in that span of time.

"AI for games". Did they mean NPC logic will now be handled on the GPU instead of the CPU endpoint?
Of course not, AI is ironically used as anything but actual AI.
 
There is indeed vague talk about using "AI" for creating NPC interactions, but considering it's all still so vague, I imagine it would first be used with a ChatGPT-like server. And maybe they'll artificially limit access in certain applications to only those with an RTX 50x0, even though none of that AI would run locally.
 
This is the second generation in which Nvidia releases anemic gen-on-gen cards (at least in anything below the xx90 tier). Can't wait for the midrange AMD card that will slaughter the 5080 at $449.
 
Does anyone have a breakdown of those shader and tensor cores into more basic components?
Nvidia doesn't like to disclose those things. I haven't seen a single more detailed diagram for either the 30 series or the 40 series. It's just high-level blocks.
Depends, what exactly are you looking for? There are whitepapers available for released architectures.

Consumer Blackwell is not released yet, so detailed stuff isn't public yet either.

It's not young at all; 6 years is a lot, and raster graphics advanced significantly in that span of time.
How?
Shader-unit dual issue got improved, from FP + INT to FP/INT + FP/INT today. Does anything else significant come to mind?
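For anyone wondering what that dual-issue change means in practice, here's a toy kernel (my own illustration, not from any whitepaper) where every iteration mixes INT32 index math with FP32 FMAs. On Ada the integer work goes down the dedicated INT32 path beside the FP32 path; with both datapaths able to execute FP or INT, either pipe can pick up whichever instruction is ready:

#include <cuda_runtime.h>
#include <cstdio>

// Toy kernel mixing INT32 address math with FP32 FMAs in every iteration,
// the kind of instruction mix the dual FP/INT datapaths are meant to help.
__global__ void mixed_int_fp(const float* in, float* out, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // INT32 work
    if (idx >= n) return;
    float acc = 0.0f;
    for (int i = 0; i < 64; ++i) {
        int j = (idx + i * 31) % n;              // more INT32 work
        acc = fmaf(in[j], 1.0009765625f, acc);   // FP32 FMA
    }
    out[idx] = acc;
}

int main() {
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    mixed_int_fp<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    std::printf("out[0] = %f\n", out[0]);

    cudaFree(in);
    cudaFree(out);
    return 0;
}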
 