– Hotspot vs. Average Measured Temperature
To understand the true maximum, you first need to understand the
difference between the hotspot and the average measured temperature.
Your GPU has numerous temperature sensors spread across the die, and what you see as the
GPU hotspot temp is simply the highest reading among them.
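To make the distinction concrete, here is a minimal sketch with made-up sensor readings: the hotspot is just the maximum across the grid, and the average is the mean.

```python
# Hypothetical per-sensor readings; real GPUs expose only the final
# hotspot and average numbers, not the raw grid.
readings_c = [68.0, 74.5, 81.0, 79.5, 88.0, 72.0]

hotspot_c = max(readings_c)                    # hottest single sensor
average_c = sum(readings_c) / len(readings_c)  # mean across the grid

print(f"Hotspot: {hotspot_c:.1f}C, Average: {average_c:.1f}C")
```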
Older GPUs had only one temperature sensor, placed right in the middle of the chip,
and engineers would characterize how hot the rest of the
silicon got relative to that single reading.
Here, it’s important to know that the temperature on the chip
should never cross 125C, or it will start to degrade.
Keeping this in mind, GPU engineers calculated that when the hottest spots on the silicon chip reached 125C,
the central sensor would read about 95C.
So the GPU’s maximum safe temperature was set at 95C: as long as you stayed below that, you could be sure your GPU wouldn’t overheat.
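As a quick illustration of that derating logic, using only the figures from the text (so the 30C gradient is inferred from them, not a published spec):

```python
# 125C at the hottest spot corresponded to roughly 95C at the center
# sensor, implying a ~30C gradient between the two.
DIE_LIMIT_C = 125
SENSOR_TO_HOTSPOT_GRADIENT_C = 30  # inferred: 125C hotspot - 95C sensor

safe_sensor_limit_c = DIE_LIMIT_C - SENSOR_TO_HOTSPOT_GRADIENT_C
print(f"Max safe sensor reading: {safe_sensor_limit_c}C")  # 95
```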
Today, GPUs have a grid of sensors, which gives manufacturers both a
hotspot and an
average measured temperature.
The readings are still taken from the chip’s surface, though, so there are spots deep inside the silicon that run hotter than even the reported
hotspot temp.
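If you want to see both values yourself, here is a sketch for Linux with an AMD card, where the amdgpu driver exposes the conventional GPU temperature as "edge" and the hotspot as "junction" through hwmon. It assumes such a card is present; on NVIDIA hardware the hotspot is typically surfaced by vendor tools like GPU-Z or HWiNFO instead.

```python
# Read the amdgpu hwmon sensors: "edge" (the usual GPU temp),
# "junction" (the hotspot), and "mem" where available.
from pathlib import Path

def read_amdgpu_temps() -> dict[str, float]:
    temps = {}
    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        name_file = hwmon / "name"
        if not name_file.exists() or name_file.read_text().strip() != "amdgpu":
            continue
        for label_file in hwmon.glob("temp*_label"):
            label = label_file.read_text().strip()
            input_file = hwmon / label_file.name.replace("_label", "_input")
            temps[label] = int(input_file.read_text()) / 1000  # millidegrees -> C
    return temps

print(read_amdgpu_temps())  # e.g. {'edge': 62.0, 'junction': 78.0, 'mem': 70.0}
```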
And the process is also the same: GPU designers determine the
max safe silicon temp (typically 120C) and note what the hotspot and average sensors read at that point.
Depending on the GPU, the resulting safe limit for the average temperature sensor falls somewhere between
92 and
97C, while the safe limit for the hotspot sensor sits around
110C.
The latter is closer to the real value that you should worry about, but it’s only half the picture.
The bottom line: as long as your GPU’s temperature stays
below 95C and the hotspot temp stays
below 110C, everything should be fine.
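If you monitor your card programmatically, those two rules of thumb are easy to encode. Below is a simple sketch; the function name and the exact limits are illustrative, and the readings could come from any source, such as the hwmon snippet above.

```python
# Warn when either reading crosses the rule-of-thumb limits from the text.
AVG_LIMIT_C = 95
HOTSPOT_LIMIT_C = 110

def check(gpu_temp_c: float, hotspot_c: float) -> None:
    if gpu_temp_c > AVG_LIMIT_C:
        print(f"WARNING: GPU temp {gpu_temp_c}C exceeds {AVG_LIMIT_C}C")
    if hotspot_c > HOTSPOT_LIMIT_C:
        print(f"WARNING: hotspot {hotspot_c}C exceeds {HOTSPOT_LIMIT_C}C")

check(88.0, 104.0)   # within limits, no output
check(97.0, 112.0)   # both warnings fire
```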