
NVIDIA GeForce RTX 5090 Features 16+6+7 Phase Power Delivery on 14-Layer PCB

If the 5080 matches the 4090, it will be a win. I'm concerned about the distance between the 4090 and the 4080. How will the lower-tier 5000 series cards stack up there?

Yes I am. I'll buy you a 5080 if it's faster in pure raw raster average performance at 4K than the 4090 (on the condition that the leaked shader count of 10752 SUs for the 5080 is correct), and vice versa.
I'm not so sure, man. It can be faster. The odds are it will be, but the question is by how much? Is 2% faster enough to uphold the bet?
 
I'm not so sure, man. It can be faster. The odds are it will be, but the question is by how much? Is 2% faster enough to uphold the bet?
No. 2% is inside the margin of error and the choice of games used. It must be at least 5% coming from various outlets. Let's say TechPowerUp, Hardware Unboxed and Gamers Nexus. I trust these guys not to manipulate benchmarks.
 
No. 2% is inside the margin of error and the choice of games used. It must be at least 5% coming from various outlets. Let's say TechPowerUp, Hardware Unboxed and Gamers Nexus. I trust these guys not to manipulate benchmarks.
Ok, so I guess we are setting the rules for the bet. 5% is still not a lot. I understand, it has to be across the board, not just a few games.
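For what it's worth, here's a toy sketch in Python of one way that criterion could be scored. Reading "at least 5% from various outlets" as a per-outlet requirement is my own interpretation, and every number below is a made-up placeholder rather than real review data:

```python
# Toy sketch of one possible reading of the bet criterion: the 5080 has to
# lead the 4090 in average 4K raster by at least 5% at every outlet listed.
# All numbers are hypothetical placeholders, not real review results.
THRESHOLD_PCT = 5.0

# relative 4K raster performance, RTX 4090 normalised to 100 (made-up values)
results = {
    "TechPowerUp":      {"RTX 4090": 100.0, "RTX 5080": 106.0},
    "Hardware Unboxed": {"RTX 4090": 100.0, "RTX 5080": 104.0},
    "Gamers Nexus":     {"RTX 4090": 100.0, "RTX 5080": 107.0},
}

leads = {
    outlet: (r["RTX 5080"] / r["RTX 4090"] - 1.0) * 100.0
    for outlet, r in results.items()
}

for outlet, lead in leads.items():
    print(f"{outlet:17s} 5080 leads by {lead:+.1f}%")

# "Across the board" means the weakest outlet result decides the bet.
print("bet won" if all(l >= THRESHOLD_PCT for l in leads.values()) else "bet lost")
```

The point is simply that with an across-the-board rule, the lowest result from any of the three outlets decides the outcome, not the best one.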
 

How much would you be willing to spend on the design of such a board, even just on properly calculating the placement and characteristics of the components, so that they don't drown in induced currents and there are no short circuits or eddy currents? Even mounting the power components this close together is a difficult manufacturing problem.
Most correct, and worth the extra '5090' premium on top of the current '4090' premium.
At least to some..
 
No. 2% is inside the margin of error and the choice of games used. It must be at least 5% coming from various outlets. Let's say TechPowerUp, Hardware Unboxed and Gamers Nexus. I trust these guys not to manipulate benchmarks.
Ok, so I guess we are setting the rules for the bet. 5% is still not a lot. I understand, it has to be across the board, not just a few games.
Are we now inventing new rules based on things I have not said? May I repeat myself:
Sure, what are you willing to bet? I am confident in my assessment. A 5080 that can’t catch the 4090 just doesn’t make sense stack-wise. It will match it or be faster.
That was my assessment. I haven’t made any percentile claims.

Another thing:
Yes I am. I'll buy you a 5080 if it's faster in pure raw raster average performance at 4K than the 4090 (on the condition that the leaked shader count of 10752 SUs for the 5080 is correct), and vice versa.
I would have taken this if I actually had any idea how it could work. I am fairly certain that @RedelZaVedno and I are not in the same economic zone, let alone country, so the logistics are kind of eluding me. I'm also not sure anyone is actually willing to spend at least a thousand bucks plus taxes and shipping on a forum bet, but whatever you say. I was thinking more along the lines of a gentleman's bet involving a game of choice on Steam or something because, you know, sanity.
 
Are we now inventing new rules based on things I have not said? May I repeat myself:
I want to understand the rules since these were not stated. Being faster is relative. Just want to understand how much faster is actually faster.
 
High end as we knew it is dead. It's either "HI-FI" or "MID-FI" if we compare GPUs to headphones or speakers. Either you pay A LOT to get true high end (5090), or just a lot and get mid end advertised as high end (5080). There is nothing in between, and that's by design. Nvidia wants to be a luxury brand. I would have laughed at anyone writing that a GPU could be a luxury item 10 years back, but here we are :confused:
Nah. People just want too much.

Games run fine at 1080p and 1440p on what we always perceived as 'high end cards', and these cards last many years.
The last high end card I bought was a GTX 1080, and all the way into 2024 it would play anything I threw at it. Not at stellar FPS, but also not at unplayable FPS, and still at medium-high settings, too, at a resolution that was barely a thing when the card was released. Right now I'm seeing upwards of 70 or 90 FPS in most games at 3440x1440 (above 1440p) with everything maxed out (a lot of it hardly necessary or even worth it) on a 7900XT, which is x70 Ti ~ x80 territory, ergo the high end you feel is 'mid fi', whatever that is.

Hi-fi even in audio has turned out to be bullshit. It was a thing when most audio gear was subpar. Now everything is hi-fi or pretending to be, and most things sound just fine. That's progress. It's not a reason to denote something 'even better' as the new hi-fi; it's just better, so it's a new thing. Enthusiast, if you will.

That's quite a lot better than what the old high end offered us, I think. People forget too easily that 4K is just a massive performance hog for a meagre benefit. It's their loss. But yes, if that is your perspective, and if you add on top of that the idea that you must use RT because Nvidia said so... then yes, you are nudged towards the x90 every time.

They call that a fool and his money being parted. You need to check your perspective, I think. Are you a fool? Or just going along with marketing and peer pressure?

The x90 isn't there because you need to buy it. It's there so you can buy it. To game properly at 'maxed' settings, you don't even need half that amount of GPU. You just need to be smarter about the display you choose to buy instead, in this case and in every other... never forget that companies will always create new demand when the old paradigm of demand is gone - and for pure raster perf, that paradigm is gone. Mid range will slaughter it too. Between RT and upscaling, a new paradigm was found. This is what Nvidia is selling you now. It's not a 5090. It's DLSS and RT.
 
I wonder if the new versions of the benchmarks will favor lower-precision calculations, just so they can give an advantage to GPUs designed primarily for cutting-edge LLM training.
 
Will it be expensive, then? /s
 
I want to understand the rules since these were not stated. Being faster is relative. Just want to understand how much faster is actually faster.
Match or faster. Any amount of faster. The whole shebang is based on me saying that the 5080 will be the second fastest consumer GPU on release. For that, by definition, if we assume the 5090 will take the crown (it will), it has to at least be equal with the 4090. That’s it. That’s my claim.

The x90 isn't there because you need to buy it. It's there so you can buy it. To game properly at 'maxed' settings, you don't even need half that amount of GPU. You just need to be smarter about the display you choose to buy instead, in this case and in every other...
Another point to make is whether one even needs maxed settings. The diminishing returns are, in many cases, absurd. Tanking your framerate by half for visual improvements one is unlikely to even notice in actual gameplay is a poor proposition, in my opinion. But, as I said, if one falls for the fallacy of wanting everything cranked all the time at the highest of resolutions, well, it's a self-inflicted wound. And don't even get me started on people who then go "that's just now, enthusiasts used to play without compromises in the past". Yeah, no. This was always a thing. See how well flagships of the time ran Doom 3 or Crysis or any game of that type at high-for-the-time resolutions. Yeah. Peak performance, right?
(attached screenshot)

Oh…
 
(attached image)


or is it use the power??
 
so this thing is most likely gonna cost $3000 min?

I still remember when $700 got you the top dog; now it won't even get you a mid-range xx70 Ti class card.
 
Beast mode.

This is not made for most people who visit this site, just saying :)
 
Hmm... I'm interested in the pricing of this card.
Probably >$3k. Considering it's more professional than enthusiast material, it isn't surprising TBH.
 
I wonder if the new versions of the benchmarks will favor lower-precision calculations, just so they can give an advantage to GPUs designed primarily for cutting-edge LLM training.
I mean, this is mostly a gaming-focused forum, and the usual benchmark media focus on that crowd as well, so you won't be seeing much of that. Even TPU's "AI Suite" is pretty basic and doesn't properly make use of the underlying hardware.
You may see such benchmarks in the likes of r/localLlama on Reddit, or in other more ML-focused places/blogs.
Training is also often done in FP16; the smaller data types are more relevant for inference.
 
I mean, this is mostly a gaming-focused forum, and the usual benchmark media focus on that crowd as well, so you won't be seeing much of that. Even TPU's "AI Suite" is pretty basic and doesn't properly make use of the underlying hardware.
You may see such benchmarks in the likes of r/localLlama on Reddit, or in other more ML-focused places/blogs.
Training is also often done in FP16; the smaller data types are more relevant for inference.
This may be true, but for at least a year now Nvidia has been pushing FP8, and is even trying FP4 and INT4. Of course, we are talking about last-generation compute cards for several tens of thousands of dollars each. But I wonder how much of this remains enabled in the RTX 50 series and could be a target for pushing? It is hardly a coincidence that AMD will merge its graphics and compute architectures again. I am not satisfied with the cost-cutting explanation. It is too simple.
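As an aside on why those smaller formats keep getting pushed, here's a minimal sketch assuming a fairly recent PyTorch build (2.1 or newer, where torch.float8_e4m3fn exists); FP4/INT4 packing isn't a stock PyTorch dtype, so it's left out:

```python
# Per-element storage cost of the data types discussed above.
# Assumes PyTorch 2.1+ for the FP8 dtype; FP4/INT4 packing is not shown
# because it isn't a stock PyTorch dtype.
import torch

for dtype in (torch.float32, torch.float16, torch.float8_e4m3fn, torch.int8):
    t = torch.empty(1, dtype=dtype)
    print(f"{str(dtype):24} {t.element_size()} byte(s) per element")
```

Fewer bytes per weight means more model fits in the same VRAM and more values move per unit of bandwidth, which is a large part of why FP8 and below get pushed for inference in particular.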
 
They call that a fool and his money being parted. You need to check your perspective, I think. Are you a fool? Or just going along with marketing and peer pressure?

The x90 isn't there because you need to buy it. It's there so you can buy it. To game properly at 'maxed' settings, you don't even need half that amount of GPU. You just need to be smarter about the display you choose to buy instead, in this case and in every other... never forget that companies will always create new demand when the old paradigm of demand is gone - and for pure raster perf, that paradigm is gone. Mid range will slaughter it too. Between RT and upscaling, a new paradigm was found. This is what Nvidia is selling you now. It's not a 5090. It's DLSS and RT.

Recently I've started to think PC graphics and audiophile-grade gear look more and more like the same snake oil, trying to make you spend an insane amount of money for what? For not having a few light leaks on less than 1% of your actual frame because you had to use probe-based GI instead of mighty RT in your game settings. Nonsense, let's cut your frame rate in half and force you to upgrade so you can enjoy your frames in 100% perfection /s.

All those influencers (DF and co.) showing 5x zoom at slow-mo speed to make sure all of us can appreciate the "huge" difference... if you think about it, they look a lot like those audiophile journalists trying to sell you silver cables that supposedly bring out the details without the harshness in the highs of your setup, even if you're over fifty and can't physically hear them...
 
This may be true, but for at least a year now Nvidia has been pushing FP8, and is even trying FP4 and INT4. Of course, we are talking about last-generation compute cards for several tens of thousands of dollars each. But I wonder how much of this remains enabled in the RTX 50 series and could be a target for pushing? It is hardly a coincidence that AMD will merge its graphics and compute architectures again. I am not satisfied with the cost-cutting explanation. It is too simple.
Even my previous 2060 Super had support for INT8 and INT4, so it's not news and not exclusive to "last-generation compute cards for several tens of thousands of dollars each". FP8 was added with Ada, yeah, and is pretty good; that's the major feature difference from the previous generations. I can't see the 5000 series getting rid of any of those.
Still, I don't see how this would change, and it's not hard to write some GEMM code that manages to reach the theoretical limit that Nvidia usually claims in their whitepapers.

Also, what would be a good benchmark for such a thing? The major relevance of those data types is for stuff like machine learning and running LLMs, hence why I referred to the LocalLlama sub-reddit. Most users here won't be trying to run their own stuff locally, and thus such tests would be moot.
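On the GEMM point above, here's a minimal sketch of what such a throughput check could look like, assuming PyTorch and a CUDA-capable NVIDIA GPU; the matrix size and iteration count are arbitrary illustration values, not a rigorous benchmark methodology:

```python
# Rough FP16 GEMM throughput check on the default CUDA device.
# Assumes PyTorch with CUDA support and an NVIDIA GPU are available.
import time
import torch

def measure_tflops(n=8192, dtype=torch.float16, iters=50):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)

    # Warm-up so kernel selection doesn't skew the timing.
    for _ in range(5):
        torch.matmul(a, b)
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    # One n x n GEMM is roughly 2 * n^3 floating-point operations.
    flops = 2 * n**3 * iters
    return flops / elapsed / 1e12

if __name__ == "__main__":
    print(f"FP16 GEMM throughput: {measure_tflops():.1f} TFLOPS")
```

With matrices this large, cuBLAS should dispatch tensor-core kernels, so the reported figure should land in the ballpark of the FP16 tensor throughput quoted in the whitepapers; the same loop can be pointed at other dtypes wherever matmul supports them.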
 
I mean, this is mostly a gaming-focused forum, and the usual benchmark media focus on that crowd as well, so you won't be seeing much of that. Even TPU's "AI Suite" is pretty basic and doesn't properly make use of the underlying hardware.
You may see such benchmarks in the likes of r/localLlama on Reddit, or in other more ML-focused places/blogs.
Training is also often done in FP16; the smaller data types are more relevant for inference.
Is FP16 useless for game graphics?
 