Friday, December 27th 2024

NVIDIA GeForce RTX 5090 Features 16+6+7 Phase Power Delivery on 14-Layer PCB

Fresh details have surfaced about NVIDIA's upcoming flagship "Blackwell" graphics card, the GeForce RTX 5090, suggesting power delivery and board design changes compared to its predecessors. According to Benchlife, the Blackwell-based GPU will feature a 16+6+7 power stage design, departing from the RTX 4090's 20+3 phase configuration. The report confirms earlier speculation about the card's power requirements, indicating a TGP of 600 watts. This figure refers to the complete power allocation for the graphics subsystem, though the actual TDP of the GB202 chip might be lower. The RTX 5090 will ship with 32 GB of next-generation GDDR7 memory and use a 14-layer PCB, possibly due to the increased complexity of GDDR7 memory modules and power delivery. GPU boards usually max out at 12 layers, even for high-end overclocking designs.
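For a rough sense of what such a budget implies for the board's VRM, the short sketch below splits the reported 600 W TGP across the rumored 16+6+7 phase groups and works out the input-side current per phase. The 75/15/10 split between core, memory, and auxiliary rails is an assumption for illustration only; only the 600 W figure and the phase counts come from the report.

```python
# Back-of-the-envelope VRM load estimate for the rumored RTX 5090 configuration.
# Assumptions: the entire 600 W TGP is drawn from the 12 V input, and the
# 75/15/10 split between core, memory, and auxiliary rails is a guess for
# illustration; only the TGP and the 16+6+7 phase counts come from the report.

INPUT_VOLTAGE_V = 12.0
TGP_W = 600.0

phase_groups = {
    "GPU core (16 phases)": (16, 0.75),
    "memory (6 phases)":    (6,  0.15),
    "auxiliary (7 phases)": (7,  0.10),
}

for name, (phases, share) in phase_groups.items():
    group_power_w = TGP_W * share
    input_current_a = group_power_w / INPUT_VOLTAGE_V   # total current at 12 V
    per_phase_a = input_current_a / phases               # rough per-phase share
    print(f"{name}: ~{group_power_w:.0f} W -> ~{input_current_a:.0f} A total, "
          f"~{per_phase_a:.1f} A per phase (input side)")
```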

The upcoming GPU will fully embrace modern connectivity standards, featuring a PCI Express 5.0 x16 interface and a 12V-2×6 power connector. We spotted an early PNY RTX 5090 model with 40 capacitors but an unclear power delivery setup. With additional power phases and more PCB layers, NVIDIA is pushing the power delivery and signal integrity boundaries for its next-generation flagship. While these specifications paint a picture of a powerful gaming and professional graphics solution, questions remain about the broader RTX 50 series lineup. How the 12V-2×6 connector will be implemented across different models, particularly those below 200 W, remains unclear, so we will have to wait for the rumored CES launch.
Sources: Benchlife.info, via VideoCardz

101 Comments on NVIDIA GeForce RTX 5090 Features 16+6+7 Phase Power Delivery on 14-Layer PCB

#1
Legacy-ZA
Yes well, I watch a lot of videos from "NorthWestRepair" on YouTube; many of the AIBs could learn to put some damn fuses on their boards, near the core/memory and the power connectors. Seems like only MSi does it with the 4000 series.

@W1zzard I am sure I won't be the only one who would appreciate it if you could mention which PCBs have them in the upcoming reviews. They expect us to pay top dollar; we should at least expect good warranties (5 years) with components that last.
#2
Dirt Chip
That's an extra 1000$ for those 2 layers. Thank you for your purchase.
#3
TumbleGeorge
Dirt Chip said: That's an extra 1000$ for those 2 layers. Thank you for your purchase.
How much would you like to spend on the design of such a board, even just for proper calculations of the locations and characteristics of the elements, so that they do not drown in induction currents, so that there are no short circuits and eddy currents? Even installing the power elements so close together is a difficult manufacturing problem.
#4
Timbaloo
PCIe 5.0 x16 does not surprise me for the top dog. However I hope this does not mean solutions like 5.0 x8 (or 4x even) for lower spec cards, as this might be an issue with the (low) adoption of PCIe 5.0 motherboards...
#5
mtosev
Hmm... I'm interested in the pricing of this card.
#6
SIGSEGV
For those who don't have any better choice to buy. :laugh:
#7
nguyen
Man Nvidia just goes overkill on everything huh: die size, VRAM, PCB layers, VRM and very possibly prices :cool:
#8
windwhirl
Timbaloo said: PCIe 5.0 x16 does not surprise me for the top dog. However I hope this does not mean solutions like 5.0 x8 (or 4x even) for lower spec cards, as this might be an issue with the (low) adoption of PCIe 5.0 motherboards...
I doubt that. Even if the gpu right after the 5090 (5080, 5080 Ti or whatever Nvidia calls it) is 4090 level, you'd be hard pressed to notice a difference between 4.0 x16 and x8.
#10
R0H1T
TumbleGeorge said: How much would you like to spend on the design of such a board, even just for proper calculations of the locations and characteristics of the elements, so that they do not drown in induction currents, so that there are no short circuits and eddy currents? Even installing the power elements so close together is a difficult manufacturing problem.
I would break it down into chiplets o_O

It's not that bad, try making a Tokamak :nutkick:
#11
RedelZaVedno
High end as we knew it is dead. It's either "HI-FI" or "MID-FI" if we compare GPUs to headphones or speakers. Either you pay A LOT to get true high end (5090), or just a lot and get mid end, advertised as high end (5080). There is nothing in between and that's by design. Nvidia wants to be a luxury brand. I would have laughed at anyone writing that a GPU could be a luxury 10 years back, but here we are :confused:
#12
BorisDG
nguyen said: Man Nvidia just goes overkill on everything huh: die size, VRAM, PCB layers, VRM and very possibly prices :cool:
They are stuck on 4nm (5nm) and brute force everything, so they have some kind of good perf. boost over the past generation. I was looking forward to upgrading, but I'm somewhat sceptical right now.
#13
Legacy-ZA
nguyen said: very possibly prices :cool:
"Possibly" :roll:

I did get a good chuckle, I'll give you that. ^_^

Where is that awesome cat avatar you used to have? Every time I saw it, I wanted to pinch those cheeks. :P
#14
Asni
Given the number of power stages dedicated to the memory, GDDR7 efficiency needs at least to be questioned, coming from a 3-phase design on the 4090.
#15
Onasi
RedelZaVedno said: High end as we knew it is dead. It's either "HI-FI" or "MID-FI" if we compare GPUs to headphones or speakers. Either you pay A LOT to get true high end (5090), or just a lot and get mid end, advertised as high end (5080). There is nothing in between and that's by design. Nvidia wants to be a luxury brand. I would have laughed at anyone writing that a GPU could be a luxury 10 years back, but here we are :confused:
Wat. GPUs were always a luxury, today more than ever. One doesn’t need a dGPU, especially a powerful one, for most of their needs.

And no, 5080 would not be a “mid end”, lol. It will be the second fastest GPU in the world on release. That’s by no meaning of the word a “mid” product. It’s absolutely flagship performance. It’s just that the 5090 is a ridiculous halo product, essentially a Titan and the dual GPU card replacement (with some compromises, true) that straddles the line between consumer and pro products. Nobody needs a 5090 to play games, whatever the unhinged enthusiasts for whom it’s “Ultra with PT at 4K or nothing” would tell you.
#16
Bomby569
It can't be cheap to manufacture what's going on above and under all those layers, a work of art.
#17
RedelZaVedno
Onasi said: Wat. GPUs were always a luxury, today more than ever. One doesn't need a dGPU, especially a powerful one, for most of their needs.

And no, 5080 would not be a “mid end”, lol. It will be the second fastest GPU in the world on release. That’s by no meaning of the word a “mid” product. It’s absolutely flagship performance. It’s just that the 5090 is a ridiculous halo product, essentially a Titan and the dual GPU card replacement (with some compromises, true) that straddles the line between consumer and pro products. Nobody needs a 5090 to play games, whatever the unhinged enthusiasts for whom it’s “Ultra with PT at 4K or nothing” would tell you.
Well it won't be. Given the number of shaders on the same node, with only the memory having 30% more bandwidth, the 4090 will be faster in rasterization unless Ngreedia learned how to defy the laws of physics. The 5080 is a shitshow: an $800-value GPU price-gouged to $1400 or more.
#18
Onasi
@RedelZaVedno
It absolutely will be. No reason to think otherwise. 4080 was faster than 3090Ti. 3080 was faster than Titan RTX. And so it goes for every generation. There is less than 25% delta between 4080/4080S and 4090. 5080 with less than 25-30% uplift just will not make sense. I am willing to actually bet on it.
#19
Wirko
Timbaloo said: You're right, one "tier" lower does not yet make a noticeable difference, I rechecked my assumptions here: www.techpowerup.com/review/nvidia-geforce-rtx-4090-pci-express-scaling/28.html
I'm sure W1zzard has a PCIe 5-4-3-2-1 test in his plans for the 5090. Maybe down to Gen 1 x1, haha. And maybe over USB4 connection too. Will be very interesting, especially for those who intend to use it as an eGPU.
#20
RedelZaVedno
Onasi said: @RedelZaVedno
It absolutely will be. No reason to think otherwise. 4080 was faster than 3090Ti. 3080 was faster than Titan RTX. And so it goes for every generation. There is less than 25% delta between 4080/4080S and 4090. 5080 with less than 25-30% uplift just will not make sense. I am willing to actually bet on it.
I'd accept the bet. Math doesn't lie. The 4080S had 10240 shading units vs the 3090 Ti's 10752, with the 3090 Ti being on a shitty Samsung node. The 5080 has 10752 shading units vs the 4090's 16384 on similar nodes. No way can GDDR7 speed close that gap in rasterization. Sure it will have better RT (who really cares?) and maybe support new frame gen tech in DLSS (again, who really cares), but in raw performance the 4090 will be the 2nd best, only behind the 5090.
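Taking the shader counts quoted in this post at face value, the ratios behind the argument work out as in the quick sketch below; this is purely arithmetic on the leaked/announced figures, not a performance prediction.

```python
# Ratio check on the shader counts quoted above; the 5080 figure is a leak
# taken at face value purely for illustration.
comparisons = {
    "4080 Super vs 3090 Ti": (10240, 10752),
    "5080 (leaked) vs 4090": (10752, 16384),
}

for label, (newer, older) in comparisons.items():
    print(f"{label}: {newer} vs {older} shading units -> "
          f"the newer card has {newer / older:.0%} of the older card's count")
```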
#21
Onasi
RedelZaVedno said: Well it won't be. Given the number of shaders on the same node, with only the memory having 30% more bandwidth, the 4090 will be faster in rasterization unless Ngreedia learned how to defy the laws of physics.
Shieeet, I guess we all hallucinated when the 970 was faster than 780 with less everything on the same node. Or when the 2070S was a bit faster than the 1080Ti with less everything on the same node (16 and 12 were the same node). Guess NV regularly breaks the laws of physics.
RedelZaVedno said: I'd accept the bet. Math doesn't lie. The 4080S had 10240 shading units vs the 3090 Ti's 10752, with the 3090 Ti being on a shitty Samsung node. The 5080 has 10752 shading units vs the 4090's 16384 on similar nodes. No way can GDDR7 speed close that gap in rasterization. Sure it will have better RT (who really cares?) and maybe support new frame gen tech in DLSS (again, who really cares), but in raw performance the 4090 will be the 2nd best, only behind the 5090.
Sure, what are you willing to bet? I am confident in my assessment. A 5080 that can’t catch the 4090 just doesn’t make sense stack-wise. It will match it or be faster.
#22
RedelZaVedno
Onasi said: Sure, what are you willing to bet? I am confident in my assessment. A 5080 that can't catch the 4090 just doesn't make sense stack-wise. It will match it or be faster.
Ngreedia doesn't need to advertise the 5080 as being slower. The black leather jacket one will come on stage and brag that the 5080 is x% faster than the 4090 when using super duper new DLSS frame generation the 4090 won't support. Same goes for RT and new tensor cores. But he'll conveniently forget to mention that in raw rasterization performance the 4090 still wipes the floor with the 5080. And raw rasterization really is all that counts at the high end. I'm not spending $1500+ to use frame gen mess.
#23
Onasi
@RedelZaVedno
Cool, cool. I don’t care for ravings about NGreedia, leather jackets and fake frames. I heard that before. I am sticking to my point. Again. WHAT. ARE. YOU. WILLING. TO. BET?
#24
RedelZaVedno
Onasi said: @RedelZaVedno
Cool, cool. I don’t care for ravings about NGreedia, leather jackets and fake frames. I heard that before. I am sticking to my point. Again. WHAT. ARE. YOU. WILLING. TO. BET?
Yes I am. I'll buy you a 5080 if it's faster in pure raw raster average performance at 4K than the 4090 (on the condition that the leaked shader count of the 5080, 10752 SUs, is correct) and vice versa.
#25
ratirt
If the 5080 matches the 4090 it will be a win. I'm concerned about the distance between the 4090 and the 4080. How will the lower-tier 5000 series cards stack up there?
RedelZaVedno said: Yes I am. I'll buy you a 5080 if it's faster in pure raw raster average performance at 4K than the 4090 (on the condition that the leaked shader count of the 5080, 10752 SUs, is correct) and vice versa.
I'm not so sure man. It can be faster. The odds are it will be, but the question is by how much? Is 2% faster enough to uphold the bet?