Wednesday, December 25th 2024

NVIDIA GeForce RTX 5090 PCB Pictured, Massive GPU Die and 16-Chip Memory Configuration

NVIDIA's GeForce RTX 5090 graphics card's printed circuit board has allegedly been pictured in the flesh, revealing the memory layout and some interesting engineering choices. The custom PCB variant (non-Founders Edition) houses more than 40 capacitors, a count that is perhaps not standard for the FE reference board, and 16 GDDR7 memory modules. The leaked PCB, which extends beyond standard dimensions and traditional display-connector configurations, is reportedly based on NVIDIA's PG145 reference design. While it lacks the characteristic NVIDIA branding of a Founders Edition card, a small marking indicates that this is a PNY custom design. The memory modules are distributed systematically around the GPU die: five on the left, two below, five on the right, and four above. The interface is PCIe 5.0 x16.

As NVIDIA has reportedly designated a 32 GB GDDR7 memory capacity for these cards, this translates into 16 x 2 GB GDDR7 memory modules. At the heart of the card lies what sources claim to be the GB202 GPU, measuring 24×31 mm within a 63×56 mm package. Regarding power delivery, PNY has seemingly adopted NVIDIA's practice of using a 16-pin 12V-2x6 power connector. The entire PCB features only a single power connector, so the 16-pin 12V-2x6, updated with the PCIe 6.0 CEM specification, is the logical choice.
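
As a rough sanity check, the reported figures work out as follows (a quick Python sketch using only the numbers above; variable names are illustrative):

# Rough sanity check using only the figures reported above (not official specifications).
modules = 16
module_capacity_gb = 2
total_vram_gb = modules * module_capacity_gb          # 16 x 2 GB = 32 GB

die_w_mm, die_h_mm = 24, 31                           # reported GB202 die dimensions
pkg_w_mm, pkg_h_mm = 63, 56                           # reported package dimensions
die_area_mm2 = die_w_mm * die_h_mm                    # 744 mm^2
pkg_area_mm2 = pkg_w_mm * pkg_h_mm                    # 3528 mm^2

print(f"Total VRAM: {total_vram_gb} GB")
print(f"Die: {die_area_mm2} mm^2 in a {pkg_area_mm2} mm^2 package "
      f"({die_area_mm2 / pkg_area_mm2:.0%} of the package area)")
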
Sources: Chiphell, @9550pro, via VideoCardz

74 Comments on NVIDIA GeForce RTX 5090 PCB Pictured, Massive GPU Die and 16-Chip Memory Configuration

#26
SOAREVERSOR
igormpGotta give it to them, all those years giving us products saying "hey, buy this toy and play some games on it, but also take a look at what else you can do with it" to get us hooked on their ecosystem during our professional lifetime, just like drug dealers lol
Most people never needed what these cards could do because they didn't use them for work. They bought them for e-peen. CUDA and other software, however, has a stranglehold on the pro market.

If you were a pro and bought these, it was a steal, but if you were a gamer and did it, you were dumb even for a gamer, and that's saying a lot given gamers.

If you viewed it as a gaming product, you painted the target on your own head, shot yourself, and wandered around naked afterward, all of your own doing.

The issue is not Nvidia; it's PC gamers demanding to be treated in a special manner, above and beyond any other consumer segment, and then stomping their footsies when it doesn't happen. It's so common and so isolated to that one group that it's a punch line to everyone outside of it. Every new Nvidia release brings the wailing of the incels, as does every new game release that isn't right-wing fantasy.
bonehead123Sooooo to summarize (without all the techno-babble/marketing hype):

Big chip...
Big socket.....
Big PCB.......

And last but not least.....

wait for it...

wait for it....

A GIGANTIC, HUMONGOUS, bank account-BUSTING PRICE !

Looks like Jacket Man will be getting LOTS of new jackets next year, hehehehe

As someone who's going to buy four of these and never game on them, the price is dirt cheap for what it is, and it won't make a dent in my bank account. I'll give the 4090s to someone else who is not going to game on them either. If I am going to game on a PC, there's a NUC-style box with a mobile Intel 1400-series CPU, a mobile 4070, 64 GB of RAM, and dual 4 TB SSDs to play Quake 1, Doom 2, and some odd indie games on. Or the Switch and the PS5.

These aren't really gaming cards. The gobs of VRAM are for handling various models and other things. The embargo on high-end cards is not because of Chinese WoW gold farmers or whatever; it's because of the actual work these cards are made to do. People need to accept that.
Posted on Reply
#27
igormp
SOAREVERSORThe issue is not Nvidia; it's PC gamers demanding to be treated in a special manner, above and beyond any other consumer segment, and then stomping their footsies when it doesn't happen.
The "PC master race" market for sure is one of the most entitled ones :laugh:
Posted on Reply
#28
Hankieroseman
GoldenTigerIt's gorgeous... Looks very well designed; anyone can tell that except salty fans of admittedly mediocre devices :).
:toast:
3valatzyQuite a low-quality design. The thermal density will be high: 600 watts in such a small area will be tough to keep cool.

1. The PCB will melt;
2. The single power connector will melt;
3. Wrong PCB size;
4. Too many memory chips - this needs either 3 GB or 4 GB chips.

Overall, given the $3000-4000 price tag - it is a meh. Don't buy.
My experience with ASUS RTX 3090 OC24 and 4090 OC24 Gamer Editions says you're so full of ______ your eyes are brown! :kookoo:
Posted on Reply
#29
N/A
3valatzyThe thermal density will be high: 600 watts in such a small area will be tough to keep cool.
It's reminiscent of the prior compute cards. A vapor chamber takes care of everything, I guess. It's very symmetrical if you notice: two PCBs connected on the top side and the GPU in the center. And you can always limit the power to 450 W without losing much performance; the 2080 Ti, a similarly sized chip, works even at 50-60% power. We can clearly see the trend: power has more than doubled in 6 years, up from 250 W.
Posted on Reply
#30
evernessince
Does GDDR7 have stringent signaling requirements like GDDR6X? I noticed the memory is once again very close to the GPU core.
Geofrancisthat's called survivorship bias.
Yep, plus it's not a common issue. The problem people have with the connector is that it is simply worse than the last one. Any increase in failure rates is a net negative.

That some Nvidia owners feel the need to come out and say they aren't having issues anytime someone mentions the issue shows that it is in fact a problem. After all, if it were a claim without merit, it would warrant no response. It's a combination of these people knowing it's an issue and feeling compelled to defend their purchase.
Posted on Reply
#31
ir_cow
Time to order a waterblock :)
Posted on Reply
#32
theglaze
Thermal concerns have been raised since the never-released 4090 Ti.

And just a couple of months ago, we saw this report on Blackwell AI racks with insane cooling requirements.

There should be no surprise that the gamer-focused 5090 will face the same challenges.




Posted on Reply
#33
TheLostSwede
News Editor
ncrsI was under the impression that PNY was one of the biggest vendors of NVIDIA professional/datacenter cards.
www.pny.com/professional/hardware/nvidia-data-center-gpus
www.pny.com/professional/hardware/nvidia-professional-products
They're a distributor of sorts; they don't make them, as Nvidia controls all those cards and then ships them out to their partners to sell.
Same with Elsa in Japan, and I'm sure some other brands elsewhere.
Posted on Reply
#34
ncrs
TheLostSwedeThey're a distributor of sorts; they don't make them, as Nvidia controls all those cards and then ships them out to their partners to sell.
Same with Elsa in Japan, and I'm sure some other brands elsewhere.
Do you know who does manufacture it in the end? Obviously they can't make the chips or RAM, but PCBs and the whole assembly, packaging and testing would fit their facilities, at least according to what's written on their site.
Posted on Reply
#35
Geofrancis
evernessinceYep, plus it's not a common issue. The problem people have with the connector is that it is simply worse than the last one. Any increase in failure rates is a net negative.

That some Nvidia owners feel the need to come out and say they aren't having issues anytime someone mentions the issue shows that it is in fact a problem. After all, if it were a claim without merit, it would warrant no response. It's a combination of these people knowing it's an issue and feeling compelled to defend their purchase.
You just need to look at the size and rating of the connectors; there is zero margin on them. Anything other than a perfect connection ends badly.
Posted on Reply
#36
JustBenching
ScrizzSame. I plugged in my card years ago, and it's fine. Same with my CPU...
Guess we are doing it wrong; let's take advice on how to plug it in from non-Nvidia owners, they sure know how to make them melt, apparently.
GeofrancisYou just need to look at the size and rating of the connectors; there is zero margin on them. Anything other than a perfect connection ends badly.
You realize that the power rating is much higher on the 12VHPWR connector than it is on the 6- and 8-pins, RIGHT?
Posted on Reply
#37
TheLostSwede
News Editor
ncrsDo you know who does manufacture it in the end? Obviously they can't make the chips or RAM, but PCBs and the whole assembly, packaging and testing would fit their facilities, at least according to what's written on their site.
Not sure, but as you can see, Leadtek has exactly the same cards and I believe they do the same job in much of APAC (excluding Japan) as PNY does in North America for Nvidia.
www.leadtek.com/eng/products/AI_HPC(37)
Posted on Reply
#38
londiste
evernessinceDoes GDDR7 have stringent signaling requirements like GDDR6X? I noticed the memory is once again very close to the GPU core.
Stringent, yes, though slightly more forgiving than GDDR6X.
GDDR7 uses PAM3 signaling versus GDDR6X's PAM4.
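
For context, a rough Python sketch of that signaling difference (assuming the commonly cited encodings: GDDR6X at 2 bits per PAM4 symbol, GDDR7 encoding 3 bits across 2 PAM3 symbols):

import math

pam4_levels, pam3_levels = 4, 3
pam4_bits_per_symbol = math.log2(pam4_levels)   # 2.0 bits/symbol
pam3_bits_per_symbol = 3 / 2                    # 1.5 bits/symbol effective (log2(3) ~ 1.58 theoretical)

print(f"PAM4: {pam4_levels} levels, {pam4_bits_per_symbol:.2f} bits/symbol")
print(f"PAM3: {pam3_levels} levels, {pam3_bits_per_symbol:.2f} bits/symbol")
# Fewer voltage levels means more margin between them, which is why PAM3's
# signal-integrity requirements are somewhat more forgiving than PAM4's.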
Posted on Reply
#39
Enzarch
JustBenchingYou realize that the power rating is much higher on the 12VHPWR connector than it is on the 6- and 8-pins, RIGHT?
They are talking about the safety margin: the difference between what the connector is rated for by PCI-SIG and the actual current capacity rated by the manufacturer (Molex).
Typical 8-pin connectors are rated at 150 W but can actually handle 288 W (even more with HCS terminals); that is a proper amount of safety overhead.
The 16-pin, by contrast, is rated by PCI-SIG for 600 W yet is only capable of 684 W. A 14% safety margin is simply WAY too little for this sort of use case.
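
In rough numbers, a quick Python sketch using the figures above:

# Safety-margin check (cited figures: PCI-SIG rating vs. what the connector can carry).
connectors = {
    "8-pin PCIe":     {"rated_w": 150, "capable_w": 288},   # more with HCS terminals
    "16-pin 12V-2x6": {"rated_w": 600, "capable_w": 684},
}

for name, c in connectors.items():
    headroom = c["capable_w"] / c["rated_w"] - 1
    print(f"{name}: rated {c['rated_w']} W, capable {c['capable_w']} W, headroom {headroom:.0%}")
# 8-pin PCIe: headroom 92%; 16-pin 12V-2x6: headroom 14%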

Also, this is an enthusiast site where people like overclocking; I am not exactly thrilled about the prospect of having to replace the connector outright if I want to push my GPU.

The smart thing to do would have been to replace the 8-pin PCIe connector with the existing 8-pin EPS connector, which has an additional power conductor and can handle 384 W; that would also reduce part count, which would simplify things and reduce costs.
Posted on Reply
#40
TokyoQuaSaR
3valatzyQuite a low-quality design. The thermal density will be high: 600 watts in such a small area will be tough to keep cool.

1. The PCB will melt;
2. The single power connector will melt;
3. Wrong PCB size;
4. Too many memory chips - this needs either 3 GB or 4 GB chips.

Overall, given the $3000-4000 price tag - it is a meh. Don't buy.
It's a tough job for sure, but what do you mean by "low quality design"? I am an electronics engineer, and I don't think you can tell just by looking at this picture whether this is a low-quality design or not. What is sure is that it's a huge footprint for the GPU; it's been a long time since we've seen a 512-bit memory bus on a consumer GPU.
1. What??
2. What makes you think the new standard PCIe 6.0 CEM connector is going to melt? Are you a reviewer at PCI-SIG?
3. Wrong PCB size? Can you be more specific? If all the required elements fit, and crosstalk and such are correctly dealt with, then it's the right size. Sure, it takes a lot of layers to make such a PCB, but that's because of the 512-bit memory bus in the first place.
4. You realize you need 16 chips because it's a 512-bit bus, each GDDR7 chip has a 32-bit interface, and 64-bit chips don't exist?? Not to mention that available capacities have historically been powers of two... so 3 GB chips don't exist... Edit: Apparently, and it's the first time I've seen such a thing, those are actually planned, my bad. But you still need 16 chips anyway; it's just that in the near future we'll be able to get 48 GB on these, and maybe even 64 GB, and the memory size is very valuable for such cards (of course not for gaming).
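
A quick Python back-of-the-envelope check of that chip-count math (assuming 32-bit-wide GDDR7 devices):

bus_width_bits = 512
bits_per_device = 32
devices = bus_width_bits // bits_per_device     # 16 chips, regardless of per-chip density

for density_gb in (2, 3, 4):
    print(f"{devices} x {density_gb} GB = {devices * density_gb} GB total")
# 16 x 2 GB = 32 GB, 16 x 3 GB = 48 GB, 16 x 4 GB = 64 GB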

You have zero idea what you're talking about, but you're literally trying to prove that you're smarter than Nvidia engineers... Stay a bit more humble, maybe?
Posted on Reply
#41
Vayra86
Prima.VeraIndeed. Double the shaders, double the VRAM, double the price, double the fun, 30% more performance.
Makes total sense.


That was 448 bit and 352-bit. NOT 256-bit :laugh: :laugh: :laugh:
Diminishing returns are real, and even more apparent is the fact that games and engines are simply not built for ever-increasing FPS. There's game logic to be handled, and it ain't the GPU doing it. We saw the same thing with the 4090: when it got released, most CPUs simply bottlenecked the card. Now that there are faster CPUs, we see the 4090 edge further ahead as the rest of the hardware and pipeline catches up, and as games in general become harder to run, meaning less pressure on the rest of the pipeline relative to the GPU.

This has always been the case. We've had games and engines with hard caps even on FPS - so if you put a game like Fallout or Elden Ring on your benchmark suite as a reviewer, there's a certain weight of that leaking into the relative performance number; a weight that says 'the x60 can reach the same 60 FPS as an x90' at one or more resolutions.

In the end though, as time passes, hardware is what its specs say it is; it's very transparent like that. This is exactly what defines a high(er)-end piece of hardware: it isn't always the performance relative to other cards in a review, and it should be viewed with the perspective that a review is a snapshot of performance at a certain moment in time. The raw performance is really there. But it's not a given you will unlock all of it in the lifetime of the product, and/or you will only see it unlock its potential after you've upgraded the rest of your stuff.

That is also why I've always kind of laughed at people saying 'you cannot future proof' - it's bullshit. Hardware doesn't progress quite so fast, and raw performance is raw performance, as long as there are no radical changes in the landscape around that hardware, for example the move to a new API. As long as the hardware is playing on the same ruleset, so to speak, it's very easily comparable; and yes, there are IPC increases gen over gen, but they'll never change the game radically, it's just an iterative, small step forward.

This is also why high-end GPUs have generally kept their value quite well; they're so powerful, they can keep up with the mainstream movement several gens ahead of themselves. A similar thing goes for CPUs. You don't need to upgrade them constantly if you get something decent; the moment games optimize around the performance you've got is years ahead of you.
Posted on Reply
#42
TheinsanegamerN
3valatzyQuite a low-quality design. The thermal density will be high: 600 watts in such a small area will be tough to keep cool.
Oh boy, we got a qualified PCB designer here.
3valatzy1. The PCB will melt;
The 3090 Ti didn't melt.
3valatzy2. The single power connector will melt;
Not if you plug it in right.
3valatzy3. Wrong PCB size;
So it's simultaneously too cramped but also too big?
3valatzy4. Too many memory chips - this needs either 3 GB or 4 GB chips.
Why, so you can cut the bus width in half and kneecap the chip's performance?
3valatzyOverall, given the $3000-4000 price tag - it is a meh. Don't buy.
You heard him, this big GPU is too big, don't even think about it!
SOAREVERSORMost people never needed what these cards could do because they didn't use them for work. They bought them for e-peen. CUDA and other software, however, has a stranglehold on the pro market.

If you were a pro and bought these, it was a steal, but if you were a gamer and did it, you were dumb even for a gamer, and that's saying a lot given gamers.

If you viewed it as a gaming product, you painted the target on your own head, shot yourself, and wandered around naked afterward, all of your own doing.

The issue is not Nvidia; it's PC gamers demanding to be treated in a special manner, above and beyond any other consumer segment, and then stomping their footsies when it doesn't happen. It's so common and so isolated to that one group that it's a punch line to everyone outside of it. Every new Nvidia release brings the wailing of the incels, as does every new game release that isn't right-wing fantasy.


As someone who's going to buy four of these and never game on them, the price is dirt cheap for what it is, and it won't make a dent in my bank account. I'll give the 4090s to someone else who is not going to game on them either. If I am going to game on a PC, there's a NUC-style box with a mobile Intel 1400-series CPU, a mobile 4070, 64 GB of RAM, and dual 4 TB SSDs to play Quake 1, Doom 2, and some odd indie games on. Or the Switch and the PS5.

These aren't really gaming cards. The gobs of VRAM are for handling various models and other things. The embargo on high-end cards is not because of Chinese WoW gold farmers or whatever; it's because of the actual work these cards are made to do. People need to accept that.
If you see complaints about poor-quality games and hardware as the work of "right-wing incels," it may be time for you to go outside and touch grass.

Just because you buy GeForce cards to run some commercial software and don't play games doesn't mean that's how other people use them. People need to accept that.
Posted on Reply
#43
Hecate91
TheinsanegamerNNot if you plug it in right.
If it has to be plugged in "right," then it's a design issue; the 6+2 and 8-pin connectors never had that issue because it was easy to tell when they were plugged in all the way.
Nvidia went for aesthetics over functionality with the 12VHPWR connector.
TheinsanegamerNIf you see complaints about poor-quality games and hardware as the work of "right-wing incels," it may be time for you to go outside and touch grass.

Just because you buy GeForce cards to run some commercial software and don't play games doesn't mean that's how other people use them. People need to accept that.
I agree with you on this; it isn't gamers' fault games are trash. AAA studios losing money, shutting down, or getting acquired is evidence people aren't buying overpriced junk.
And GeForce cards are for gaming regardless of whether people want to accept it or not, although ever since the xx90 replaced the Titan, it has been obvious Nvidia doesn't care about selling them for playing games.
igormpThe "PC master race" market for sure is one of the most entitled ones :laugh:
The "pc master race" market isn't all of the pc gaming market as they're always implying. Though the "pcmr" crowd are the ones ruining pc gaming for everyone else, demanding things like RGB on everything, glass fishtank cases, mice full of holes, and silly keyboards with no function keys that have little practical usefulness outside of games.
Posted on Reply
#44
JustBenching
Hecate91, the 6+2 and 8 pin connector never had that issue
Of course they did; lots of burned 6- and 8-pins.
Hecate91Nvidia went for aesthetics over functionality with the 12VHPWR connector.
Nvidia didn't go for aesthetics over functionality, because Nvidia didn't design the connector.
Posted on Reply
#45
freeagent
Hecate91If it has to be plugged in "right," then it's a design issue
Plugged in right means once you hear the click, you are there, just like the old style.
Hecate91Though the "pcmr" crowd are the ones ruining pc gaming for everyone else,
That is supposed to be a pc vs console thing, but whatever you say.
Posted on Reply
#46
Hecate91
freeagentPlugged in right means once you hear the click, you are there, just like the old style.
What click? Every tech channel was saying to just cram the connector in until you can't see the edge of the connector itself; MSI was the only one who did something to help, by making the connector side yellow.
The Molex 6+2 and 8-pin have a loud click and you can feel when it's plugged in right; not so much with the new one. The other issue is the physics of pins that are too small to handle high current; add in manufacturing tolerances and some vendors going cheap, and you end up with less current headroom than two 8-pin connectors.
freeagentThat is supposed to be a pc vs console thing, but whatever you say.
Not what I got from that loud group of PC elitists, but sure. To me they seem to be the same people complaining about the price of an xx90 card but buying it anyway and showing it off in a glass-box case.
Posted on Reply
#47
freeagent
Hecate91What click?
They make a "click" sound when fully inserted.
Hecate91Every tech channel was saying to just cram the connector in until you can't see the edge of the connector itself; MSI was the only one who did something to help, by making the connector side yellow.
What can I say, there are a lot of people who don't understand how this stuff works, and just talk a lot of shit. Who knows. Noobs making videos? Dunno.
Hecate91The Molex 6+2 and 8-pin have a loud click and you can feel when it's plugged in right
Don't act like I have never used one before :D
Hecate91not so much with the new one
You can feel it, and hear it. Maybe people are using crappy power supplies or something, don't know what to say.
Hecate91To me they seem to be the same people complaining about the price of an xx90 card but buy it anyway and show it off in a glass box case.
The xx90 users don't complain about the price; anyone who buys a flagship knows the price of admission, and whether it is within your pay grade is up to you. Don't forget, people have credit, so it isn't like they are just dumping all that cash at once, though a lot of people do, and they still don't complain, because they know.

The people who do complain are the ones with no intention at all of buying one. Though they may want it.
Posted on Reply
#48
3valatzy
TokyoQuaSaRIt's a tough job for sure, but what do you mean by "low quality design"?
It means that the design violates common sense.
TokyoQuaSaR3. Wrong PCB size? Can you be more specific? If all the required elements fit, and crosstalk and such are correctly dealt with, then it's the right size. Sure, it takes a lot of layers to make such a PCB, but that's because of the 512-bit memory bus in the first place.
Sorry, but your goal is not to "fit," but to have a PCB of approximately the size of the heatsink above it.

Have you not seen how the AMD engineers do it? I can show you.

RX 7900 XTX PCB size [image] vs. this card's size [image]

TokyoQuaSaR4. You realize you need 16 chips because it's a 512-bit bus, each GDDR7 chip has a 32-bit interface, and 64-bit chips don't exist?? Not to mention that available capacities have historically been powers of two... so 3 GB chips don't exist...
Maybe they could have put in more L3 cache and stayed on a 384-bit bus. Not to mention that both 512-bit GDDR7 and 32 GB of VRAM are overkill and stupidity.
TokyoQuaSaRYou have zero idea what you're talking about, but you're literally trying to prove that you're smarter than Nvidia engineers... Stay a bit more humble, maybe?
Send them many greetings, and tell them to learn more.

cablemod/comments/175dfs6
www.digitaltrends.com/computing/nvidia-rtx-4090-cracked-pcb/

hackaday.com/2022/10/28/nvidia-power-cables-are-melting-this-may-be-why/
Hecate91What click? Every tech channel was saying to just cram the connector in until you can't see the edge of the connector itself; MSI was the only one who did something to help, by making the connector side yellow.
The Molex 6+2 and 8-pin have a loud click and you can feel when it's plugged in right; not so much with the new one. The other issue is the physics of pins that are too small to handle high current; add in manufacturing tolerances and some vendors going cheap, and you end up with less current headroom than two 8-pin connectors.
Exactly.

Posted on Reply
#49
JustBenching
freeagentThey make a "click" sound when fully inserted.
They do, but why let facts get in the way of talking crap about nvidia?
Posted on Reply
#50
Macro Device
3valatzyboth 512-bit over GDDR7 and 32 GB of VRAM are overkill and stupidity.
For gaming? Could be.
For prosumers? It's the opposite of overkill.

RTX 5090 is capable of gaming by accident and it should only be considered a GPU for real maths.
Posted on Reply