Wednesday, April 23rd 2025

Parts of NVIDIA GeForce RTX 50 Series GPU PCB Reach Over 100°C: Report

Igor's Lab has run independent testing and thermal analysis of NVIDIA's latest GeForce RTX 50-series graphics cards, including add-in card partner designs of the RTX 5080, 5070 Ti, and 5060 Ti, which are now attracting attention for surprising thermal "hotspots" on the back of their PCBs. These hotspots are simply areas on the PCB that get hot under load, not the "Hot Spot" sensor NVIDIA removed with the RTX 50 series. Infrared tests have shown temperatures climbing above 100°C in the power delivery region, even though the GPU die stays below 80°C. This isn't a problem with the silicon but with concentrated heating in clusters of thin copper planes and via arrays. Card makers like Palit, PNY, and MSI have all seen the same issue, since they closely follow NVIDIA's reference PCB layout and use similar cooler mounting. A big part of the trouble comes down to how PCB designers and cooler engineers work separately.
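
Why a cluster of vias matters so much can be seen from a quick thermal-resistance estimate for the copper barrels that carry heat through the board. The sketch below is purely illustrative; the board thickness, via geometry, and via counts are generic assumptions, not numbers from Igor's Lab or from NVIDIA's documentation.

    import math

    # Rough estimate of how well a cluster of plated through-vias conducts heat
    # from the VRM side of the board to the backplate side. Every dimension and
    # the via counts are generic assumptions, not values from the report.

    K_CU = 385.0              # thermal conductivity of copper, W/(m*K)
    BOARD_THICKNESS = 1.6e-3  # m, typical multi-layer board
    DRILL_DIA = 0.3e-3        # m, via hole diameter
    PLATING = 25e-6           # m, copper plating thickness in the barrel

    # Copper cross-section of one via barrel (a thin annulus).
    barrel_area = math.pi * DRILL_DIA * PLATING

    r_one_via = BOARD_THICKNESS / (K_CU * barrel_area)  # K/W through one via
    for n in (10, 50, 200):
        print(f"{n:3d} vias in parallel: {r_one_via / n:6.1f} K/W")

Pushing even a few watts through a sparse via field therefore costs tens of degrees across the board, which is consistent with rear-side readings far exceeding the die temperature.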

NVIDIA's Thermal Design Guide gives AIC partners detailed power-loss budgets, listing worst-case dissipation for the GPU, memory, NVVDD and FBVDDQ rails, inductors, MOSFETs, and other components, and it recommends ideal thermal interface materials and mounting pressures. The guide assumes even heat spreading and ideal wind-tunnel airflow, but actual consumer PCs don't match those conditions. Multi-layer PCBs force high currents through 35 to 70 µm copper layers, which converge at tight via clusters under the VRMs. Without dedicated thermal bridges or reinforced vias, these areas become bottlenecks where heat builds up, and the standard backplate-plus-heat-pipe layout can't pull it away fast enough.
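
To put the copper-thickness numbers in perspective, a back-of-the-envelope sheet-resistance estimate shows how much heat a high-current path can dump into a thin inner layer. The current and path length below are assumptions for illustration, not values from NVIDIA's guide or Igor's Lab measurements.

    # Back-of-the-envelope Joule-heating estimate for a thin inner copper layer.
    # The current, path length, and layer thicknesses are assumptions for
    # illustration, not figures from NVIDIA's guide or Igor's Lab measurements.

    RHO_CU = 1.72e-8  # resistivity of copper at ~20 degC, ohm*m

    def sheet_resistance(thickness_um: float) -> float:
        """Sheet resistance of a copper layer, in ohms per square."""
        return RHO_CU / (thickness_um * 1e-6)

    def plane_loss(current_a: float, thickness_um: float, squares: float) -> float:
        """I^2 * R loss in watts for a current path spanning 'squares' of plane."""
        return current_a**2 * sheet_resistance(thickness_um) * squares

    for t_um in (35, 70):
        # Assume roughly 50 A funnelling through a neck of plane about two
        # squares long on its way to a cluster of VRM phases.
        print(f"{t_um} um copper: {sheet_resistance(t_um) * 1e3:.2f} mOhm/sq, "
              f"~{plane_loss(50, t_um, 2):.1f} W dissipated in the neck")

A couple of watts sounds trivial, but concentrated in a patch a few millimetres across with no direct path to the cooler, it can be enough to push local board temperatures well past the GPU die reading.
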
This hotspot issue isn't limited to the newest Blackwell GPUs. Even the previous-generation GeForce RTX 4090, which was engineered for up to 600 W of heat dissipation with multiple vapor chambers and a three-slot cooler, showed a similar pattern. Thermographic snapshots of prototype Ada architecture boards revealed rear-side VRM zones reaching the mid-70s Celsius while the GPU die sat in the low 60s. Parts of NVIDIA's guide were redacted to protect internal details, and it made no special note about reinforcing the backplate in that spot. As a result, partners assumed the standard backplate contact area was sufficient and didn't add any extra cooling measures. Fortunately, simple tweaks can make a big difference. In Igor's Lab tests, placing a thin thermal pad or a small amount of conductive putty between the hotspot area and the backplate dropped peak VRM temperatures by 8 to 12 °C under the same load.
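
The reported 8 to 12 °C improvement is plausible from a simple one-dimensional conduction estimate. The sketch below, using assumed pad properties, contact area, and gap size (none of which come from the report), compares the thermal resistance of a small air gap behind the VRM area with that of a 1 mm pad.

    # Minimal sketch comparing conduction through a thermal pad with conduction
    # through a thin air gap behind the VRM area. Pad conductivity, gap size,
    # and contact area are illustrative assumptions, not report data.

    def conduction_resistance(thickness_m: float, k: float, area_m2: float) -> float:
        """One-dimensional conduction resistance in K/W: R = t / (k * A)."""
        return thickness_m / (k * area_m2)

    AREA = 0.02 * 0.03  # ~20 mm x 30 mm patch behind the VRM cluster

    r_air = conduction_resistance(0.5e-3, 0.026, AREA)  # 0.5 mm air gap, k ~0.026 W/(m*K)
    r_pad = conduction_resistance(1.0e-3, 6.0, AREA)    # 1 mm pad, k ~6 W/(m*K)

    print(f"0.5 mm air gap:   {r_air:6.1f} K/W")
    print(f"1 mm thermal pad: {r_pad:6.2f} K/W")

The air gap is roughly two orders of magnitude worse as a conduction path, so without the pad the heat has to spread sideways through the board instead; once the pad is in place the backplate can take over, which is consistent with the double-digit drop Igor's Lab measured.
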
Sources: Igor's Lab, via Tom's Hardware

78 Comments on Parts of NVIDIA GeForce RTX 50 Series GPU PCB Reach Over 100°C: Report

#1
Wirko
Wow, that's a very high-resolution thermal image, with every solder pad visible (and apparently it's not combined with a visible-light image).
Posted on Reply
#2
Legacy-ZA
There was a reason they disabled this sensor on the newer GPUs.

My Palit RTX 3070 Ti used to have a hotspot of 107°C; suffice it to say, it broke soon after. I got it replaced with an ASUS RTX 3070 Ti TUF, which never went above 88°C. That was much better, although still a bit high. My brother has one too, same thing, but his hasn't broken yet; it just suffers extreme performance throttling.

The ASUS 5070 Ti should be fine from what I saw.

Not to take a dump on your reviews @W1zzard, as they are very thorough, but this is why we requested FLIR imaging in your reviews before; thankfully, at the moment we can get that on Guru3D. For most of the RTX 5000 AIBs, the PCBs look very cool under operation, but let's face it, AIBs don't send every model they make to reviewers.

Anyway, I do expect extremely high hotspot temps on Founders Edition models, especially the 5090; with so much circuitry to cool in such a small surface area, it's to be expected.
Posted on Reply
#3
W1zzard
Legacy-ZA: There was a reason they disabled this sensor on the newer GPUs.
These are two different kinds of hotspots
Posted on Reply
#4
Legacy-ZA
W1zzard: These are two different kinds of hotspots
Interesting, I was only aware of the one "hotspot" sensor before, and if I am not mistaken, it was near the memory/GPU so we could more easily see if there was a misalignment of the cooler on the core/memory. Anyway, we need more sensors in GPUs, and definitely a higher class of voltage controller so we can measure voltage/power in real time without causing performance degradation during testing/gameplay.
Posted on Reply
#5
W1zzard
Legacy-ZA: Interesting, I was only aware of the one "hotspot" sensor before, and if I am not mistaken, it was near the memory/GPU so we could more easily see if there was a misalignment of the cooler on the core/memory.
So the hotspot in the context of "NVIDIA removed it with Blackwell" is a reading within the GPU chip that monitors a large number of temperature sensors inside the rectangular piece of silicon (not on the PCB, not on the card). The hottest of all these sensors is reported; this is what GPU engineers call the "hotspot".

What Igor is reporting are PCB temperatures, and in this context "hotspot" means "hottest measurement in the FLIR image"; it has nothing to do with the GPU or the sensors in it.

Hope that makes sense
Posted on Reply
#6
evernessince
Legacy-ZA: There was a reason they disabled this sensor on the newer GPUs.
You are thinking of the GPU die hotspot; in this instance it's a hotspot in the power delivery system on the card itself.

I wouldn't discount the probability that GPU die hotspot will kill some cards though, given that customers are no longer able to see that info and there are always cards that come out of the factory with paste or mounting issues.
Legacy-ZA: Interesting, I was only aware of the one "hotspot" sensor before, and if I am not mistaken, it was near the memory/GPU so we could more easily see if there was a misalignment of the cooler on the core/memory. Anyway, we need more sensors in GPUs, and definitely a higher class of voltage controller so we can measure voltage/power in real time without causing performance degradation during testing/gameplay.
It's a bad choice of words. Most people thinking of hotspot temps will indeed think of the GPU die hotspot. The title should be amended to include "VRM" before the word "hotspot" to make it clear.
Posted on Reply
#7
Legacy-ZA
W1zzard: So the hotspot in the context of "NVIDIA removed it with Blackwell" is a reading within the GPU chip that monitors a large number of temperature sensors inside the rectangular piece of silicon (not on the PCB, not on the card). The hottest of all these sensors is reported; this is what GPU engineers call the "hotspot".

What Igor is reporting are PCB temperatures, and in this context "hotspot" means "hottest measurement in the FLIR image"; it has nothing to do with the GPU or the sensors in it.

Hope that makes sense
evernessince: You are thinking of the GPU die hotspot; in this instance it's a hotspot in the power delivery system on the card itself.

I wouldn't discount the probability that GPU die hotspot will kill some cards though, given that customers are no longer able to see that info and there are always cards that come out of the factory with paste or mounting issues.



It's a bad choice of words. Most people thinking of hotspot temps will indeed think of the GPU die hotspot. The title should be amended to include "VRM" before the word "hotspot" to make it clear.
The more you know, thank you for the clarification. <3
Posted on Reply
#8
Rightness_1
Just planned obsolescence via a thermally induced failure not long after the 2-year warranty expires in the EU. Many of these cards won't make it to 4 years old at those heat levels.

Just think: those temps are taken outside of a case, with no extra radiated heat from your CPU/motherboard VRMs and SSD affecting the card, and it isn't covered in 2 years' worth of dust either. And we won't mention the whole thing running in a 27°C room either...
Posted on Reply
#9
TheDeeGee
The gift that keeps on giving.
Posted on Reply
#10
CosmicWanderer
I now consider myself lucky that I wasn't able to secure an RTX 5090 on launch day.

Hopefully AMD makes a strong enthusiast-tier comeback with UDNA, because we definitely need more options to choose from.
Posted on Reply
#11
_roman_
I'm not sure if I want to see 52°C on the connector. There is a lot of thermal mass near the connector.
The hardware needs to last until the warranty is over anyway.

Those repair YouTube channels which I sometimes watch have very new cards on the repair table. Everything over €600 should last much longer when it is a graphics card.

I'd like to see the marketing hoax for that particular graphics card. Every graphics card advertises how fabulous and awesome its cooling solution is.
Posted on Reply
#12
Breit
Anyone still remember the original 3090 FE? That card had memory modules mounted on the backside of the PCB, and that was the super hot first-gen GDDR6X memory. It easily got above 100°C on the backplate. It seems like these kinds of temperatures are OK for Nvidia and they are designing for this?! At the very least, they did not learn from their mistakes and do something about it, it seems.
Posted on Reply
#13
TSiAhmat
Breit: Anyone still remember the original 3090 FE? That card had memory modules mounted on the backside of the PCB, and that was the super hot first-gen GDDR6X memory. It easily got above 100°C on the backplate. It seems like these kinds of temperatures are OK for Nvidia and they are designing for this?! At the very least, they did not learn from their mistakes and do something about it, it seems.
The 5090 FE has similar temps on the VRAM (94°C in the TPU review) again...
Posted on Reply
#14
piloponth
Atleast cables aren’t burning in that thermo photo.
Posted on Reply
#15
PerfectWave
when you want to save on pcb length :roll:
Posted on Reply
#16
Breit
PerfectWave: when you want to save on pcb length :roll:
Unfortunately, that is not the problem.
Posted on Reply
#17
N/A
There is no proper VRM cooling. An L-shaped profile touches the thin fins, albeit flattened. A 1-2 mm thermal pad doesn't help.

Posted on Reply
#18
jmcosta
_roman_: I'm not sure if I want to see 52°C on the connector. There is a lot of thermal mass near the connector.
The hardware needs to last until the warranty is over anyway.

Those repair YouTube channels which I sometimes watch have very new cards on the repair table. Everything over €600 should last much longer when it is a graphics card.

I'd like to see the marketing hoax for that particular graphics card. Every graphics card advertises how fabulous and awesome its cooling solution is.
Looking at the thermal image, the cables are at around 40°C; the 52°C spot is from heat being transferred from the GPU and MOSFETs through the copper plane.
I agree with you that these expensive cards should have higher build quality and wider safety tolerances, but unfortunately we now live in a world where profit commands.
Posted on Reply
#19
Sammoonryong
This is getting better and better… my old 3080 has an 8-degree delta between the core and the hotspot, and memory tops out at 90°C (it's a Zotac Trinity, so not exactly the bottom of the barrel, but it is the lower-tier model). Nvidia should partner with insurance companies to offer house fire insurance with their newest cards /facepalm
Posted on Reply
#20
Wirko
jmcosta: Looking at the thermal image, the cables are at around 40°C; the 52°C spot is from heat being transferred from the GPU and MOSFETs through the copper plane.
Yeah, heat doesn't seem to be generated in the connector. Ain't that great?
Posted on Reply
#21
N/A
We need water cooling, full cover. It's the only way.
Posted on Reply
#22
Sol_Badguy
From Igor's review I see that the PCB hotspot of the 5080 tested was not as horrible and worrisome as in the case of the 5070.
The 5070 has fewer VRM phases than the 5080, and relative to the load they are stressed more.
Of course this is just one variable, but probably one of the most important, if not the most important one.

So, extrapolating from this, it seems that lower-tier GPUs (5070 and downwards) are more likely to have undersized VRMs relative to their power draw compared to higher-tier GPUs (5070 Ti and upwards).
Going further, it seems that the 5070 Ti is probably the most fortunate of this lineup.

There are 5070 Ti variants which (apparently) have the same PCB as the corresponding 5080 variants, meaning the same number of phases for a lower power draw. One example is the ASUS TUF -> 5070 Ti & 5080.
Other variants are even better: not only the same PCB but also the same cooler, which, as shown by the review, gives better results on the 5070 Ti. One example is the Palit GameRock -> 5070 Ti & 5080.

Of course the AIBs are still to blame: looking at TPU reviews for 5070 Tis and 5080s, the usual increase over MSRP is about $250. Yes, this does include other stuff like RGB, dual BIOS, factory OC, whatever. But the biggest reason for the increase is the cooling solution, and not everyone is including thermal pads on the backplate, when it would cost literally pennies to do so. The profit margins are sufficient to allow the whole back of the PCB to be padded.
The greed knows no limits.
Posted on Reply
#23
Breit
Sol_Badguy: From Igor's review I see that the PCB hotspot of the 5080 tested was not as horrible and worrisome as in the case of the 5070.
The 5070 has fewer VRM phases than the 5080, and relative to the load they are stressed more.
Of course this is just one variable, but probably one of the most important, if not the most important one.

So, extrapolating from this, it seems that lower-tier GPUs (5070 and downwards) are more likely to have undersized VRMs relative to their power draw compared to higher-tier GPUs (5070 Ti and upwards).
Going further, it seems that the 5070 Ti is probably the most fortunate of this lineup.

There are 5070 Ti variants which (apparently) have the same PCB as the corresponding 5080 variants, meaning the same number of phases for a lower power draw. One example is the ASUS TUF -> 5070 Ti & 5080.
Other variants are even better: not only the same PCB but also the same cooler, which, as shown by the review, gives better results on the 5070 Ti. One example is the Palit GameRock -> 5070 Ti & 5080.

Of course the AIBs are still to blame: looking at TPU reviews for 5070 Tis and 5080s, the usual increase over MSRP is about $250. Yes, this does include other stuff like RGB, dual BIOS, factory OC, whatever. But the biggest reason for the increase is the cooling solution, and not everyone is including thermal pads on the backplate, when it would cost literally pennies to do so. The profit margins are sufficient to allow the whole back of the PCB to be padded.
Regarding your comment about the AIB prices: the thermal solution is definitely not the reason AIB cards are more expensive than the FE cards. Nvidia's thermal solution is by far the most expensive one, at least for the 5080/5090.
Posted on Reply
#24
blinnbanir
Is it just me? Everything negative that was said about AMD cards is now afflicting Nvidia with Blackwell. I could go on, but you just have to read or wait for news releases.
Posted on Reply
#25
JustBenching
Legacy-ZA: There was a reason they disabled this sensor on the newer GPUs.

My Palit RTX 3070 Ti used to have a hotspot of 107°C; suffice it to say, it broke soon after. I got it replaced with an ASUS RTX 3070 Ti TUF, which never went above 88°C. That was much better, although still a bit high. My brother has one too, same thing, but his hasn't broken yet; it just suffers extreme performance throttling.

The ASUS 5070 Ti should be fine from what I saw.

Not to take a dump on your reviews @W1zzard, as they are very thorough, but this is why we requested FLIR imaging in your reviews before; thankfully, at the moment we can get that on Guru3D. For most of the RTX 5000 AIBs, the PCBs look very cool under operation, but let's face it, AIBs don't send every model they make to reviewers.

Anyway, I do expect extremely high hotspot temps on Founders Edition models, especially the 5090; with so much circuitry to cool in such a small surface area, it's to be expected.
The hotspots Nvidia removed have nothing to do with these; these are on the back of the PCB and in the power delivery, and you never had exposed sensors on those things.
Posted on Reply