Wednesday, December 25th 2024
NVIDIA GeForce RTX 5090 PCB Pictured, Massive GPU Die and 16-Chip Memory Configuration
NVIDIA's GeForce RTX 5090 graphics card printed circuit board has allegedly been pictured in the flesh, revealing the memory layout and some interesting engineering choices. The custom PCB variant (non-Founders Edition) houses more than 40 capacitors, which is perhaps not standard on the FE reference board, and 16 GDDR7 memory modules. The leaked PCB, which extends beyond standard dimensions and traditional display connector configurations, is reportedly based on NVIDIA's PG145 reference design. While it lacks the characteristic NVIDIA branding of a Founders Edition card, a small marking indicates that this is a PNY custom design. The memory modules are distributed systematically: five on the left, two below, five on the right, and four above the GPU die. The interface is PCIe 5.0 x16.
As NVIDIA has reportedly designated a 32 GB GDDR7 memory capacity for these cards, this translates into 16 x 2 GB GDDR7 memory modules. At the heart of the card lies what sources claim to be the GB202 GPU, measuring 24×31 mm within a 63×56 mm package. Regarding power delivery, PNY has seemingly adopted NVIDIA's practice of using a 16-pin 12V-2x6 power connector. The entire PCB features only a single power connector, so the 16-pin 12V-2x6, updated under the PCIe 6.0 CEM specification, is the logical choice.
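To put the claimed numbers in perspective, here is a minimal Python sketch tallying the module count and the die/package footprint; all dimensions and capacities are the leaked figures quoted above, not confirmed specifications.

```python
# Figures implied by the leaked RTX 5090 PCB claims above
# (unconfirmed leak numbers, not official NVIDIA specifications).
die_mm = (24, 31)            # claimed GB202 die dimensions
package_mm = (63, 56)        # claimed package dimensions
modules = 5 + 2 + 5 + 4      # memory chips: left, below, right, above the die
module_gb = 2                # 2 GB (16 Gbit) GDDR7 devices

print(f"Die area:     {die_mm[0] * die_mm[1]} mm^2")          # 744 mm^2
print(f"Package area: {package_mm[0] * package_mm[1]} mm^2")  # 3528 mm^2
print(f"Memory chips: {modules}")                             # 16
print(f"Total VRAM:   {modules * module_gb} GB")               # 32 GB
```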
Sources:
Chiphell, @9550pro, via VideoCardz
74 Comments on NVIDIA GeForce RTX 5090 PCB Pictured, Massive GPU Die and 16-Chip Memory Configuration
If you were a pro and bought these, it was a steal; but if you were a gamer and did it, you were dumb, even for a gamer, and that's saying a lot given gamers.
If you viewed it as a gaming product, you painted the target on your head, shot yourself, and wandered around naked afterwards, all of it your own doing.
The issue is not Nvidia, it's PC gamers demanding to be treated in a special manner above and beyond any other consumer segment, and then stomping their footsies when it doesn't happen. It's so common and so isolated to that one group that it's a punch line to everyone outside of it. Every new Nvidia release brings the wailing of the incels, as does every new game release that isn't right-wing fantasy. As someone who's going to buy four of these and never game on them, the price is dirt cheap for what it is, and it won't make a dent in my bank account. I'll give the 4090s to someone else who is not going to game on them. If I am going to game on the PC, there's a NUC-style box with a mobile Intel 1400-series CPU, a mobile 4070, 64 GB of RAM, and dual 4 TB SSDs to play Quake 1, Doom 2, and some odd indie games on. Or the Switch and the PS5.
These aren't really gaming cards. The gobs of VRAM are for handling various models and other things. The embargo on high-end cards isn't because of Chinese WoW gold farmers or whatever; it's because of the actual work they are made to do. People need to accept that.
That some Nvidia owners feel the need to come out and say they aren't having issues anytime someone mentions the issue shows that it is, in fact, a problem. After all, if it was a claim without merit, it would warrant no response. It's a combination of these people knowing it's an issue and feeling compelled to defend their purchase.
And just a couple months ago, we saw this report on Blackwell AI racks with insane cooling requirements.
It should be no surprise that the gamer-oriented 5090 will face the same challenges.
Same with ELSA in Japan, and I'm sure some other brands elsewhere.
www.leadtek.com/eng/products/AI_HPC(37)
PAM3 vs PAM4.
Typical 8-pin connectors are rated at 150 W, but they can actually handle 288 W (even more with HCS terminals); that is a proper amount of safety overhead.
Whereas the 16-pin is rated by PCI-SIG for 600 W and is only capable of 684 W. A 14% safety margin is simply WAY too little for this sort of use case.
Also, this is an enthusiast site, where people like overclocking; I am not exactly thrilled about the prospect of having to replace the connector outright if I want to push my GPU.
The smart thing to do would have been to replace the 8-pin PCIe connector with the existing 8-pin EPS connector, which has an additional power conductor and can handle 384 W; that would also reduce part count, which would simplify things and reduce costs.
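As a rough sanity check on that headroom argument, here is a minimal Python sketch using the wattage figures quoted above (the commenter's numbers, not official PCI-SIG derating data):

```python
# Safety-headroom comparison using the wattage figures quoted in this thread
# (not official PCI-SIG ratings).
connectors = {
    "8-pin PCIe":     {"rated_w": 150, "capable_w": 288},
    "16-pin 12V-2x6": {"rated_w": 600, "capable_w": 684},
}

for name, c in connectors.items():
    headroom = c["capable_w"] / c["rated_w"] - 1.0
    print(f"{name}: rated {c['rated_w']} W, capable {c['capable_w']} W, "
          f"headroom {headroom:.0%}")

# 8-pin PCIe: rated 150 W, capable 288 W, headroom 92%
# 16-pin 12V-2x6: rated 600 W, capable 684 W, headroom 14%
```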
1 - What??
2 - What makes you think the new standard PCIe 6.0 CEM connector is going to melt? Are you a reviewer at PCI-SIG?
3 - Wrong PCB size? Can you be more specific? If all the required elements fit, and crosstalk and such are correctly dealt with, then it's the right size. Sure, it takes a lot of layers to make such a PCB, but that's because of the 512-bit memory bus in the first place.
4 - You realize you need 16 chips because it's a 512-bit bus and 64-bit chips don't exist??
Not to mention that available sizes have power-of-2 depths... so 3 GB chips don't exist... Edit: Apparently, and it's the first time I've seen such a thing, those are actually planned, my bad here. But you still need 16 chips anyway; it's just that in the near future we'll be able to have 48 GB on those, and maybe even 64 GB, and the memory size is very valuable for such cards (of course not for gaming). You have zero idea what you're talking about, but you're literally trying to prove that you're smarter than Nvidia engineers... Stay humble a bit more, maybe?
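For illustration, a minimal sketch of the chip-count arithmetic being argued here, assuming 32-bit-wide GDDR7 devices (which is why 16 chips are unavoidable on a 512-bit bus):

```python
# Why a 512-bit bus means 16 memory chips, and the total VRAM each chip
# density would give (3 GB devices are planned, larger ones hypothetical).
BUS_WIDTH_BITS = 512
CHIP_WIDTH_BITS = 32                         # GDDR7 devices are 32 bits wide

chips = BUS_WIDTH_BITS // CHIP_WIDTH_BITS    # -> 16

for density_gb in (2, 3, 4):
    print(f"{chips} chips x {density_gb} GB = {chips * density_gb} GB total")

# 16 chips x 2 GB = 32 GB total
# 16 chips x 3 GB = 48 GB total
# 16 chips x 4 GB = 64 GB total
```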
This has always been the case. We've had games and engines with hard caps even on FPS - so if you put a game like Fallout or Elden Ring on your benchmark suite as a reviewer, there's a certain weight of that leaking into the relative performance number; a weight that says 'the x60 can reach the same 60 FPS as an x90' at one or more resolutions.
In the end though, as time passes, hardware is what its specs say it is; it's very transparent like that. This is exactly what defines a high(er)-end piece of hardware - it ain't always the performance relative to cards in the rest of a review; it should be viewed with the perspective that a review is a snapshot of its performance at a certain moment in time. The raw performance is really there. But it's not a given you will unlock all of it in the lifetime of the product, and/or you will only see it unlock its potential after you've upgraded the rest of your stuff.
That is also why I've always kind of laughed at people saying 'you cannot future proof' - it's bullshit. Hardware doesn't progress quite so fast, and raw performance is raw performance, as long as there are no radical changes in the landscape around that hardware - for example, the move to a new API. As long as the hardware is playing on the same ruleset, so to speak, it's very easily comparable, and yes, there are IPC increases gen over gen, but they'll never change the game radically; it's just an iterative, small step forward. And this is also why high-end GPUs have generally kept their value quite well; they're so powerful, they can keep up with the mainstream movement several gens ahead of themselves. A similar thing counts for CPUs: you don't need to upgrade them constantly if you get something decent; the moment games will optimize around the performance you've got is years ahead of you.
Just because you buy GeForce cards to run some commercial software and don't play games doesn't mean that's how other people use them. People need to accept that.
Nvidia went for aesthetics over functionality with the 12VHPWR connector. I agree with you on this; it isn't gamers' fault that games are trash, and AAA studios losing money, shutting down, or getting acquired is evidence people aren't buying overpriced junk.
And GeForce cards are for gaming whether people want to accept it or not, although ever since the xx90 replaced the Titan it has been obvious Nvidia doesn't care about selling them for playing games. The "PC master race" market isn't all of the PC gaming market, as they're always implying. Though the "PCMR" crowd are the ones ruining PC gaming for everyone else, demanding things like RGB on everything, glass fishtank cases, mice full of holes, and silly keyboards with no function keys that have little practical usefulness outside of games.
The Molex 6+2 and 8-pin have a loud click and you can feel when they're plugged in right; not so much with the new one. The other issue is the physics of pins being too small to handle high current; add in manufacturing tolerances and some vendors wanting to go cheap, and you end up with less current capacity than 2x 8-pin connectors.
Not what I got from that loud group of PC elitists, but sure. To me they seem to be the same people complaining about the price of an xx90 card but buying it anyway and showing it off in a glass box case.
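To put the "pins too small for high current" point in rough numbers, here is a minimal sketch assuming a 12 V rail, three power pins on an 8-pin PCIe connector and six on the 16-pin (rated wattages as quoted earlier in the thread, not official derating figures):

```python
# Rough per-pin current at 12 V, using the rated wattages quoted in this thread
# and the usual pin counts (3 power pins on 8-pin PCIe, 6 on the 16-pin).
RAIL_V = 12.0

cases = {
    "8-pin PCIe (150 W / 3 pins)":     (150, 3),
    "16-pin 12V-2x6 (600 W / 6 pins)": (600, 6),
}

for name, (watts, pins) in cases.items():
    print(f"{name}: {watts / RAIL_V / pins:.1f} A per power pin")

# 8-pin PCIe (150 W / 3 pins): 4.2 A per power pin
# 16-pin 12V-2x6 (600 W / 6 pins): 8.3 A per power pin
```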
The people who do complain are the ones with no intention at all of buying one. Though they may want it.
Have you not seen how the AMD engineers do it? I can show you.
RX 7900 XTX PCB size: [image]
This size: [image]
Maybe they could put in more L3 cache and stay on a 384-bit bus. Not to mention that both a 512-bit bus with GDDR7 and 32 GB of VRAM are overkill and stupidity. Send them my greetings, and tell them to learn more.
cablemod/comments/175dfs6
www.digitaltrends.com/computing/nvidia-rtx-4090-cracked-pcb/
hackaday.com/2022/10/28/nvidia-power-cables-are-melting-this-may-be-why/ Exactly.
For prosumers? It's the opposite of overkill.
The RTX 5090 is only capable of gaming by accident; it should really be considered a GPU for real maths.