Friday, December 27th 2024

NVIDIA GeForce RTX 5090 Features 16+6+7 Phase Power Delivery on 14-Layer PCB
Fresh details have surfaced about NVIDIA's upcoming flagship "Blackwell" graphics card, the GeForce RTX 5090, pointing to changes in power delivery and board design compared to its predecessor. According to Benchlife, the new Blackwell-based GPU will feature a 16+6+7 power stage design, departing from the RTX 4090's 20+3 phase configuration. The report also corroborates earlier speculation about the card's power requirements, indicating a TGP of 600 watts. That figure covers the complete power allocation for the graphics subsystem, so the actual TDP of the GB202 chip itself might be lower. The RTX 5090 will ship with 32 GB of next-generation GDDR7 memory and use a 14-layer PCB, possibly due to the added complexity of GDDR7 memory modules and the denser power delivery. GPU boards usually top out at 12 layers, even on high-end overclocking designs.
The upcoming GPU will fully embrace modern connectivity standards, featuring a PCI Express 5.0 x16 interface and a 12V-2×6 power connector. We previously spotted an early PNY RTX 5090 model with 40 capacitors but an unclear power delivery setup. With additional power phases and more PCB layers, NVIDIA is pushing the power delivery and signal integrity boundaries for its next-generation flagship. While these specifications paint a picture of a powerful gaming and professional graphics solution, questions remain about the broader RTX 50 series lineup. How the 12V-2×6 connector will be implemented across different models, particularly those below 200 W, remains unclear, so we will have to wait for the rumored CES launch.
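As a rough sanity check on those numbers (our own back-of-envelope arithmetic, not something from the report), a 600 W TGP drawn entirely through a single 12V-2×6 connector works out to roughly 8 A on each of its six 12 V pins:

```python
# Back-of-envelope estimate: current through the 12V-2x6 connector at the
# rumored 600 W TGP, ignoring the up-to-75 W the PCIe slot can also supply.
TGP_W = 600        # rumored total graphics power
RAIL_V = 12        # nominal 12 V rail
POWER_PINS = 6     # 12V-2x6 carries six 12 V pins (plus six grounds)

total_current_a = TGP_W / RAIL_V                   # 50 A across the connector
per_pin_current_a = total_current_a / POWER_PINS   # ~8.3 A per pin

print(f"Total 12 V current: {total_current_a:.1f} A")
print(f"Per-pin current:    {per_pin_current_a:.2f} A")
```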
Sources:
Benchlife.info, via VideoCardz
101 Comments on NVIDIA GeForce RTX 5090 Features 16+6+7 Phase Power Delivery on 14-Layer PCB
At least to some..
Another thing: I would have taken this if I actually had any idea how it could work. I am fairly certain that @RedelZaVedno and I are not in the same economic zone, let alone country, so the logistics kind of elude me. I'm also not sure anyone is actually willing to spend at least a thousand bucks plus taxes and shipping on a forum bet, but whatever you say. I was thinking more along the lines of a gentleman's bet involving a game of choice on Steam or something because, you know, sanity.
Games run fine at 1080p and 1440p on what we always perceived as 'high end' cards, and those cards last many years.
The last high-end card I bought was a GTX 1080, and all the way into 2024 it would play anything I threw at it. Not at stellar FPS, but not at unplayable FPS either, still at medium-high settings, and at a resolution that was barely a thing when the card was released. Right now I'm seeing upwards of 70-90 FPS in most games at 3440x1440 (above 1440p) with every setting maxed out (a lot of them hardly necessary or even worth it) on a 7900 XT, which is x70 Ti ~ x80 territory, ergo the high end you consider 'mid-fi', whatever that is.
Hi-fi, even in audio, has turned out to be bullshit. It was a thing when most audio gear was subpar. Now everything is hi-fi or pretending to be, and most things sound just fine. That's progress. It's not a reason to label something 'even better' as the new hi-fi; it's just better, so it's a new thing. Enthusiast, if you will.
That's quite a lot better than what the old high end offered us, I think. People forget too easily that 4K is just a massive performance hog for a meagre benefit. It's their loss. But yes, if that is your perspective, and if you add on top of that the idea that you must use RT because Nvidia said so... then yes, you are nudged towards the x90 every time.
They call that a fool and his money soon parted. You need to check your perspective, I think. Are you a fool? Or just going along with marketing and peer pressure?
The x90 isn't there because you need to buy it. It's there so you can buy it. To game properly at 'maxed' settings, you don't even need half that much GPU. You just need to be smarter about the display you choose to buy instead, in this case and in every other... never forget that companies will always create new demand when the old paradigm of demand is gone - and for pure raster performance, that paradigm is gone. Mid-range will slaughter it too. Between RT and upscaling, a new paradigm was found. This is what Nvidia is selling you now. It's not a 5090. It's DLSS and RT.
Oh…
or is it use the power??
I still remember when $700 got you the top dog; now it won't even get you a mid-range xx70 Ti-class card.
This is not made for most people who visit this site, just saying :)
You may be seeing such benchmarks in the likes of r/localLlama on reddit, or some other more ML-focused places/blogs.
Training is also often done in FP16; the smaller data types are more relevant for inference.
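As a rough illustration (assuming a hypothetical 70-billion-parameter model, not a measured figure), the weight footprint alone shows why the smaller data types matter mostly for inference, and why they decide what even fits in a 32 GB card:

```python
# Rough weight-memory math for a hypothetical 70B-parameter model at common
# precisions; activations, KV cache, and optimizer state are ignored.
PARAMS = 70e9  # assumed parameter count (Llama-70B-class)

bytes_per_param = {
    "FP32": 4.0,   # full precision, mostly for training
    "FP16": 2.0,   # common training and inference precision
    "FP8":  1.0,   # inference-oriented
    "INT4": 0.5,   # aggressive quantization for local inference
}

for dtype, size in bytes_per_param.items():
    print(f"{dtype}: ~{PARAMS * size / 1e9:.0f} GB of weights")
```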
All those influencers (DF and co.) showing 5x zoom at slow-mo speed to make sure all of us can appreciate the "huge" difference... if you think about it, they look a lot like those audiophile journalists trying to sell you silver cables that bring out the detail without the harshness in the highs of your setup, even if you're over fifty and physically can't hear them...
Still, I don't see how this would change, and it's not hard to write some GEMM code that manages to reach the theoretical limit that Nvidia usually claims in their whitepapers.
Also, what would be a good benchmark for such a thing? The major relevance of those data types is for stuff like machine learning and running LLMs, hence why I referred to the LocalLlama subreddit. Most users here won't be trying to run their own stuff locally, and thus such tests would be moot.
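For what it's worth, a minimal sketch of that kind of GEMM check in PyTorch; the matrix size, iteration count, and FP16 choice are my own assumptions, not any standard benchmark:

```python
# Time a large FP16 matmul on the GPU and report achieved TFLOPS, to compare
# by hand against the peak figure in the vendor's whitepaper.
import time

import torch

assert torch.cuda.is_available(), "needs a CUDA-capable GPU"

M = N = K = 8192  # large enough that the GEMM is compute-bound
a = torch.randn(M, K, device="cuda", dtype=torch.float16)
b = torch.randn(K, N, device="cuda", dtype=torch.float16)

# Warm-up so kernel selection and clock ramp-up don't pollute the timing.
for _ in range(10):
    torch.matmul(a, b)
torch.cuda.synchronize()

iters = 50
start = time.perf_counter()
for _ in range(iters):
    torch.matmul(a, b)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

flops = 2 * M * N * K * iters  # a GEMM costs roughly 2*M*N*K FLOPs
print(f"Achieved: {flops / elapsed / 1e12:.1f} TFLOPS (FP16)")
```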
So this will be 3k easy.
This generation won't even be marketed for gamers. AI acceleration, that's the name of the game, not toying.