Friday, December 27th 2024

NVIDIA GeForce RTX 5090 Features 16+6+7 Phase Power Delivery on 14-Layer PCB

Fresh details have surfaced about NVIDIA's upcoming flagship "Blackwell" graphics card, the GeForce RTX 5090, suggesting notable power delivery and board design changes compared to its predecessors. According to Benchlife, the Blackwell-based GPU will feature a 16+6+7 power stage design, departing from the RTX 4090's 20+3 phase configuration. The report also corroborates earlier speculation about the card's power requirements, indicating a TGP of 600 watts. That figure covers the complete power allocation for the graphics subsystem, so the actual TDP of the GB202 chip itself may be lower. The RTX 5090 will ship with 32 GB of next-generation GDDR7 memory and use a 14-layer PCB, possibly due to the increased complexity of GDDR7 memory modules and the power delivery; GPUs usually top out at 12 layers, even on high-end overclocking designs.

The upcoming GPU will fully embrace modern connectivity standards, featuring a PCI Express 5.0 x16 interface and a 12V-2×6 power connector. We spotted an early PNY RTX 5090 model with 40 capacitors but an unclear power delivery setup. With additional power phases and more PCB layers, NVIDIA is pushing the power delivery and signal integrity boundaries for its next-generation flagship. While these specifications paint a picture of a powerful gaming and professional graphics solution, questions remain about the broader RTX 50 series lineup. How the 12V-2×6 connector will be implemented across different models, particularly those below 200 W, remains unclear, so we will have to wait for the rumored CES launch.
Sources: Benchlife.info, via VideoCardz

101 Comments on NVIDIA GeForce RTX 5090 Features 16+6+7 Phase Power Delivery on 14-Layer PCB

#26
RedelZaVedno
ratirt: I'm not so sure, man. It can be faster, and the odds are it will be, but the question is by how much? Is 2% faster enough to uphold the bet?
No. 2% is within the margin of error and depends on the choice of games used. It must be at least 5% across multiple outlets; let's say TechPowerUp, Hardware Unboxed, and Gamers Nexus. I trust these guys not to manipulate benchmarks.
Posted on Reply
#27
ratirt
RedelZaVedno: No. 2% is within the margin of error and depends on the choice of games used. It must be at least 5% across multiple outlets; let's say TechPowerUp, Hardware Unboxed, and Gamers Nexus. I trust these guys not to manipulate benchmarks.
OK, so I guess we are setting the rules for the bet. 5% is still not a lot. I understand; it has to be across the board, not just a few games.
Posted on Reply
#28
Dirt Chip
TumbleGeorge:

How much would you like to spend on the design of such a board, even just on properly calculating the placement and characteristics of the components, so that they don't drown in induced currents and so that there are no short circuits or eddy currents? Even mounting the power components this close together is a difficult manufacturing problem.
Most correct, and worth the extra '5090' premium on top of the current '4090' premium.
At least to some...
Posted on Reply
#29
Onasi
RedelZaVedno: No. 2% is within the margin of error and depends on the choice of games used. It must be at least 5% across multiple outlets; let's say TechPowerUp, Hardware Unboxed, and Gamers Nexus. I trust these guys not to manipulate benchmarks.
ratirt: OK, so I guess we are setting the rules for the bet. 5% is still not a lot. I understand; it has to be across the board, not just a few games.
Are we now inventing new rules based on things I have not said? May I repeat myself:
Onasi: Sure, what are you willing to bet? I am confident in my assessment. A 5080 that can’t catch the 4090 just doesn’t make sense stack-wise. It will match it or be faster.
That was my assessment. I haven’t made any percentage claims.

Another thing:
RedelZaVedno: Yes I am. I'll buy you a 5080 if it's faster in pure raw raster average performance at 4K than the 4090 (on the condition that the leaked 5080 shader count of 10,752 SUs is correct), and vice versa.
I would have taken this if I actually had any idea how it could work. I am fairly certain that @RedelZaVedno and I are not in the same economic zone, let alone country, so the logistics are kind of eluding me. I am also not sure that anyone is actually willing to spend at least a thousand bucks plus taxes and shipping on a forum bet, but whatever you say. I was thinking more along the lines of a gentleman’s bet involving a game of choice on Steam or something because, you know, sanity.
Posted on Reply
#30
ratirt
Onasi: Are we now inventing new rules based on things I have not said? May I repeat myself:
I want to understand the rules, since they were not stated. Being faster is relative. I just want to understand how much faster is actually faster.
Posted on Reply
#31
Vayra86
RedelZaVedno: High end as we knew it is dead. It's either "HI-FI" or "MID-FI" if we compare GPUs to headphones or speakers. Either you pay A LOT to get true high end (5090), or just a lot and get mid end advertised as high end (5080). There is nothing in between, and that's by design. Nvidia wants to be a luxury brand. I would have laughed at anyone writing that a GPU could be a luxury item 10 years back, but here we are :confused:
Nah. People just want too much.

Games run fine at 1080p and 1440p on what we always perceived as 'high-end cards', and these cards last many years.
The last high-end card I bought was a GTX 1080, and all the way into 2024 it would play anything I threw at it. Not at stellar FPS, but not at unplayable FPS either, and still at medium-high settings, at a resolution that was barely a thing when the card was released. Right now I'm seeing upwards of 70 to 90 FPS in most games at 3440x1440 (above 1440p) with all settings maxed out (a lot of them hardly necessary or even worth it) on a 7900 XT, which is x70 Ti ~ x80 territory, ergo the high end that you feel is 'mid-fi', whatever that is.

Hi-fi even in audio has turned out to be bullshit. It was a thing when most audio gear was subpar. Now everything is hi-fi or pretending to be, and most things sound just fine. That's progress. It's not a reason to denote something 'even better' as the new hi-fi; it's just better, so it's a new thing. Enthusiast, if you will.

That's quite a lot better than what the old high end offered us, I think. People forget too easily that 4K is just a massive performance hog for a meagre benefit. It's their loss. But yes, if that is your perspective, and if you add on top of that the idea that you must use RT because Nvidia said so... then yes, you are nudged towards the x90 every time.

They call that a fool and his money being parted. You need to check your perspective, I think. Are you a fool, or just going along with marketing and peer pressure?

The x90 isn't there because you need to buy it. It's there so you can buy it. To game properly at 'maxed' settings, you don't even need half that amount of GPU. You just need to be smarter about the display you choose to buy instead, in this case, and in every other... never forget that companies will always create new demand when the old paradigm of demand is gone - and for pure raster perf, that paradigm is gone. Mid range will slaughter it too. Between RT and upscaling, a new paradigm was found. This is what Nvidia is selling you now. It's not a 5090. It's DLSS and RT.
Posted on Reply
#32
TumbleGeorge
I wonder if new versions of the benchmarks will favor the performance of lower-precision calculations, just so they can give an advantage to GPU designs aimed primarily at cutting-edge LLM training.
Posted on Reply
#33
AusWolf
Will it be expensive, then? /s
Posted on Reply
#34
Onasi
ratirt: I want to understand the rules, since they were not stated. Being faster is relative. I just want to understand how much faster is actually faster.
Match or faster. Any amount of faster. The whole shebang is based on me saying that the 5080 will be the second-fastest consumer GPU on release. For that, by definition, if we assume the 5090 will take the crown (it will), it has to at least be equal to the 4090. That’s it. That’s my claim.
Vayra86: The x90 isn't there because you need to buy it. It's there so you can buy it. To game properly at 'maxed' settings, you don't even need half that amount of GPU. You just need to be smarter about the display you choose to buy instead, in this case, and in every other...
Another point to make is whether one even needs maxed settings. The diminishing returns are, in many cases, absurd. Tanking your framerate by half for visual improvements one is unlikely to even notice in actual gameplay is a poor proposition, in my opinion. But, as I said, if one falls for the fallacy of wanting everything cranked all the time at the highest of resolutions, well, it’s a self-inflicted wound. And don’t even get me started on people who then go “that’s just now, enthusiasts used to play without compromises in the past”. Yeah, no. This was always a thing. See how well the flagships of the time ran Doom 3 or Crysis or any game of that type at high-for-the-time resolutions. Yeah. Peak performance, right?

Oh…
Posted on Reply
#36
Naifm92
So this thing is most likely gonna cost $3,000 minimum?

I still remember when $700 got you the top dog; now it won't even get you a mid-range xx70 Ti-class card.
Posted on Reply
#37
freeagent
Beast mode.

This is not made for most people who visit this site, just saying :)
Posted on Reply
#38
Ruru
S.T.A.R.S.
SIGSEGV: For those who don't have any better choice to buy. :laugh:
"the more you buy, the more you save" -leather jacket man :D
Posted on Reply
#39
Darmok N Jalad
Dirt Chip: That's an extra $1,000 for those 2 layers. Thank you for your purchase.
'Tis the season for 7-layer dip, times two.
Posted on Reply
#40
Veseleil
mtosev: Hmm... I'm interested in the pricing of this card.
Probably >$3K. Considering it's more professional than enthusiast material, it isn't surprising, TBH.
Posted on Reply
#41
Dawora
Veseleil: Probably >$3K. Considering it's more professional than enthusiast material, it isn't surprising, TBH.
Or $1,999.
Posted on Reply
#42
Veseleil
Dawora: Or $1,999.
Your optimism is heartwarming.
Posted on Reply
#43
igormp
TumbleGeorge: I wonder if new versions of the benchmarks will favor the performance of lower-precision calculations, just so they can give an advantage to GPU designs aimed primarily at cutting-edge LLM training.
I mean, this is mostly a gaming-focused forum, and your usual benchmark media focus on that crowd as well, so you won't be seeing much of that. Even TPU's "AI Suite" is pretty basic and doesn't properly make use of the underlying hardware.
You may be seeing such benchmarks in the likes of r/localLlama on reddit, or some other more ML-focused places/blogs.
Training is also often done in FP16, the smaller data types are more relevant for inference.
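To make that last point concrete, here is a minimal sketch of the inference-side use of smaller data types, assuming PyTorch is installed; the toy model and sizes are made up for illustration, not anything a reviewer actually runs. Post-training dynamic quantization stores the Linear weights as INT8 while the original FP32 model is kept around as a reference:

```python
# Minimal sketch (assumes PyTorch): compare an FP32 toy model with a
# dynamically INT8-quantized copy, the kind of data type used for inference.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
x = torch.randn(1, 512)

with torch.no_grad():
    ref = model(x)  # FP32 reference output

# Post-training dynamic quantization: weights stored as INT8, activations
# quantized on the fly. Only the Linear layers are converted here.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    out = qmodel(x)

print("max abs difference vs FP32:", (ref - out).abs().max().item())
```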
Posted on Reply
#44
TumbleGeorge
igormp: I mean, this is mostly a gaming-focused forum, and your usual benchmark media focus on that crowd as well, so you won't be seeing much of that. Even TPU's "AI Suite" is pretty basic and doesn't properly make use of the underlying hardware.
You may be seeing such benchmarks in the likes of r/localLlama on reddit, or some other more ML-focused places/blogs.
Training is also often done in FP16, the smaller data types are more relevant for inference.
This may be true, but for at least a year now Nvidia has been pushing FP8, and is even trying FP4 and INT4. Of course, we are talking about last-generation compute cards for several tens of thousands of dollars each. But I wonder how much of this remains enabled in the RTX 50 series and could be a target for pushing? It is hardly a coincidence that AMD will merge its graphics and compute architectures again. I am not satisfied with the cost-cutting explanation. It is too simple.
Posted on Reply
#45
sbacc
Vayra86: They call that a fool and his money being parted. You need to check your perspective, I think. Are you a fool, or just going along with marketing and peer pressure?

The x90 isn't there because you need to buy it. It's there so you can buy it. To game properly at 'maxed' settings, you don't even need half that amount of GPU. You just need to be smarter about the display you choose to buy instead, in this case, and in every other... never forget that companies will always create new demand when the old paradigm of demand is gone - and for pure raster perf, that paradigm is gone. Mid range will slaughter it too. Between RT and upscaling, a new paradigm was found. This is what Nvidia is selling you now. It's not a 5090. It's DLSS and RT.
Recently I started to think PC graphics and audiophile-grade stuff look more and more like the same snake oil, trying to make you spend insane amounts of money, and for what? For not having a few light leaks on less than 1% of your actual frame because you had to use probe-based GI instead of mighty RT in your game settings. Nonsense; let's cut your frame rate in half and force you to upgrade so you can enjoy your frames in 100% perfection /s.

All those influencers (DF and co.) showing 5x zooms at slow-mo speed to be sure all of us can appreciate the "huge" difference look a lot, if you think about it, like those audiophile journalists trying to sell you silver cables that help bring out the detail without the harshness of the highs in your setup, even if you're over fifty and physically can't hear them...
Posted on Reply
#46
igormp
TumbleGeorge: This may be true, but for at least a year now Nvidia has been pushing FP8, and is even trying FP4 and INT4. Of course, we are talking about last-generation compute cards for several tens of thousands of dollars each. But I wonder how much of this remains enabled in the RTX 50 series and could be a target for pushing? It is hardly a coincidence that AMD will merge its graphics and compute architectures again. I am not satisfied with the cost-cutting explanation. It is too simple.
Even my previous 2060 Super had support for INT8 and INT4, so it's not news and not exclusive to "last-generation compute cards for several tens of thousands of dollars each". FP8 was added with Ada, yeah, and is pretty good; that's the major feature difference from the previous generations. I can't fathom the 5000 series getting rid of any of those.
Still, I don't see how this would change, and it's not hard to write some GEMM code that manages to reach the theoretical limit that Nvidia usually claims in their whitepapers.

Also, what would be a good benchmark for such a thing? The major relevance of those data types is for stuff like machine learning and running LLMs, hence why I referred to the LocalLlama subreddit. Most users here won't be trying to run their own stuff locally, and such tests would thus be moot.
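For what it's worth, the kind of GEMM throughput check mentioned above can be as simple as timing a large half-precision matmul. The sketch below assumes PyTorch and a CUDA-capable GPU; the matrix size and iteration count are arbitrary choices for illustration, not any vendor's official methodology:

```python
# Rough sketch of a GEMM throughput check (assumes PyTorch + a CUDA GPU).
import torch

n = 8192
a = torch.randn(n, n, device="cuda", dtype=torch.float16)
b = torch.randn(n, n, device="cuda", dtype=torch.float16)

# Warm-up so cuBLAS kernel selection and GPU clocks settle before timing.
for _ in range(3):
    torch.matmul(a, b)
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

iters = 10
start.record()
for _ in range(iters):
    torch.matmul(a, b)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000  # elapsed_time() reports milliseconds
tflops = 2 * n**3 * iters / seconds / 1e12  # 2*n^3 FLOPs per n x n matmul
print(f"FP16 GEMM throughput: {tflops:.1f} TFLOP/s")
```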
Posted on Reply
#47
Makaveli
Dawora: Or $1,999.
In Canada a 4090 costs like $2,500.

So this will be 3k easy.
Posted on Reply
#49
Wirko
igormp: I mean, this is mostly a gaming-focused forum, and your usual benchmark media focus on that crowd as well, so you won't be seeing much of that. Even TPU's "AI Suite" is pretty basic and doesn't properly make use of the underlying hardware.
You may be seeing such benchmarks in the likes of r/localLlama on reddit, or some other more ML-focused places/blogs.
Training is also often done in FP16, the smaller data types are more relevant for inference.
Is FP16 useless for game graphics?
Posted on Reply
#50
Visible Noise
Wirko: Is FP16 useless for game graphics?
Not entirely, but its use requires careful consideration due to its low precision. Unreal Engine added support in version 5.3.
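As a quick, hedged illustration of the precision ceiling involved (assuming NumPy; the numbers are purely demonstrative): half precision has a 10-bit mantissa, so values can stop resolving surprisingly early, which is roughly why it tends to be reserved for quantities with a narrow dynamic range.

```python
# Tiny demo of FP16's limits (NumPy). With a 10-bit mantissa, integers above
# 2048 and many small fractions can no longer be represented exactly.
import numpy as np

print(float(np.float16(2048) + np.float16(1)))         # 2048.0 -- the +1 is lost entirely
print(f"{float(np.float16(0.1)):.6f}")                 # 0.099976 -- already ~0.02% off
print(float(np.float32(16_777_216) + np.float32(1)))   # 16777216.0 -- FP32 hits the same wall, just much later
```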
Posted on Reply