
NVidia now HIDING hot spot temperature? A great problem IMO.

Dropping a bin is fine. I've seen cards fluctuate at crazy clocks when they reach throttling point, though (I think 83 °C on Pascal and Turing).
83 on this one too I think.. I have it hidden in HWiNFO lol..

Yeah, that's how GPU Boost n+1 works: it starts throttling around 50 or 60 °C depending on which gen.
Right? So for instance Superposition will start off at ~3045-3030 MHz depending on my ambient, but will finish off at 3015 MHz.
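(As a toy model of the bin dropping, not the actual algorithm: the ~15 MHz step is the commonly observed bin size on NVIDIA cards, and the temperature thresholds below are made up purely for illustration.)

```python
# Toy model of GPU Boost dropping ~15 MHz bins as temperature thresholds are crossed.
BIN_MHZ = 15
THRESHOLDS_C = [50, 60, 70, 80]  # assumed example temps where a bin is dropped

def boost_clock(max_boost_mhz: int, gpu_temp_c: float) -> int:
    bins_dropped = sum(1 for t in THRESHOLDS_C if gpu_temp_c >= t)
    return max_boost_mhz - BIN_MHZ * bins_dropped

print(boost_clock(3045, 45))  # cool card: full 3045 MHz
print(boost_clock(3045, 72))  # warmed up: 3000 MHz after three bins dropped
```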
 
Maybe AMD should have done this when they launched the 7000 series; a lot of bad press would have been avoided when those coolers had that small issue of missing coolant, correct? :D;)
 
I had the loudest and hottest-running 1060 in existence (I'll never forgive you for that, Asus). It was running around 83-86°C (90+ on hotspot, and I can only imagine how much on the memory without heatsinks) with 100% fan speed, but it is still alive and turning 7 years old this October. (Not 7 years gaming on it, but probably above 3.)

If Nvidia is not tying their boost algorithm / throttling point / fan speed to the hotspot temp, then is it really that important?
 
I'm not sure Gigabyte would have granted an RMA for my 7900 XTX if not for the hotspot sensor, since the edge temp was running perfectly fine at 70 °C max; only the hotspot reached 110. Fans were going crazy.
I mean, you could argue that fans constantly screaming like a vacuum cleaner would be enough for an RMA, but you never know with AIBs. With the hotspot sensor it was an easy case, as the max temp reported by AMD was 110, so I just needed to point that out.

If AIBs use PTM instead of some cheap paste, I don't see it ever becoming a problem, but I have a feeling that the MSRP cards will use the cheapest available stuff even though the 5090 costs $2,000 and the 5080 $1,000.
 
Because on a 600 W card with 900 W spikes, the hot spot will often be 100 °C+...

That'll freak people out even if the design can handle it, for the warranty length anyway.

These cards may not last much longer after that, though?
 
Really? I specifically bought a GPU with an oversized cooler because I wanted the card to cool well, and the hot spot temperature tells me when something in the area I care about has started failing. So at least for me, it provides useful information.

Fair enough, and comparing the "normal" and hotspot temps gives a good answer if you worry about contact between the die and the cooler. For example, my 6700 XT with an aftermarket cooler has a somewhat noticeable difference between those, but nothing too worrying. Maybe the cooler's surface just isn't flat enough. No throttling or overheating anyway.
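If you want to do that comparison over a whole session instead of eyeballing it, a minimal sketch like this works on a HWiNFO-style CSV log. The column names here are assumptions; rename them to whatever your log actually calls the edge and hotspot sensors.

```python
import csv

# Assumed column names -- adjust to match your own HWiNFO / GPU-Z log.
EDGE_COL = "GPU Temperature [°C]"
HOTSPOT_COL = "GPU Hot Spot Temperature [°C]"

def worst_delta(log_path: str) -> float:
    """Largest hotspot-minus-edge gap seen anywhere in the log."""
    worst = 0.0
    with open(log_path, newline="", encoding="utf-8", errors="ignore") as f:
        for row in csv.DictReader(f):
            try:
                delta = float(row[HOTSPOT_COL]) - float(row[EDGE_COL])
            except (KeyError, ValueError):
                continue  # skip summary lines and malformed rows
            worst = max(worst, delta)
    return worst

# Rough rule of thumb only: deltas well past ~20 °C under sustained load are
# commonly read as a sign of poor die/cooler contact or a bad paste job.
print(f"worst hotspot-to-edge delta: {worst_delta('gpu_log.csv'):.1f} °C")
```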
 
Stop defending nvidia for removing it lol. More data points are always welcome for those who know what they are.

Hotspot denotes the hottest point of the GPU die. Core temperature is an average of many sensors. At least that's how it worked in the past, and if nvidia removed it, it's actually a semi-big deal for people who are going to put theirs on water and whatnot.
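As a rough illustration of the distinction (toy numbers, not NVIDIA's actual implementation): the reported core temperature behaves like an aggregate over the die's sensor grid, while the hotspot is simply the worst single reading.

```python
# Toy example only -- real GPUs read many on-die sensors through the driver.
sensor_readings_c = [62.5, 64.0, 61.8, 71.2, 63.3, 65.9]  # made-up per-region temps

core_temp = sum(sensor_readings_c) / len(sensor_readings_c)  # "GPU temp": an average
hot_spot = max(sensor_readings_c)                            # "hot spot": the single hottest reading

print(f"core: {core_temp:.1f} °C, hot spot: {hot_spot:.1f} °C")
```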

Out of the 6 times I disassembled the 3090's block, there was one time the core temperatures were fine but the GPU kept throttling in a weird way. Turns out my hotspot was going crazy because I missed a spot when applying LM. It was actually a useful sensor in that scenario.

I wish they gave a straight answer as to why they removed it. Nvidia's communications with the media, full of incorrect statements or borderline lies (just see Der8auer's correspondence with nvidia), are making them look worse than AMD. And that's saying something.

Geee I wonder who knows more about what constitutes the health status of a GPU. Is it you, or NVIDIA engineers?
Me, because when I put the GPU on a waterblock I bet nvidia doesn't know how terribly I put it on.

Hotspot can help tell though.
 
I wish they gave a straight answer as to why they removed it.
I think it is because the cooler is too small and it overheats badly on a FE card:


It would not make sense to remove it from ALL 50 series cards just because one of them overheats.
 
I think it is because the cooler is too small and it overheats badly on a FE card:


It would not make sense to remove it from ALL 50 series cards just because one of them overheats.
Can you stop posting conspiracy theory rumours when W1zz has done an actual review of the actual card and determined it runs just fine? We get it, you don't like NVIDIA and you don't like the 5090 FE, but nobody asked you for your opinion and we especially didn't ask you to start a new thread every time you have a new anti-NVIDIA thought.
 
Can you stop posting conspiracy theory rumours when W1zz has done an actual review of the actual card and determined it runs just fine?
I think he did not see the actual die temps. Because they hid them. If you think that the chip is cool when even a distant part of the cooler in airflow has 65°C, you may not have a good sense for technical stuff.
 
I think he did not see the actual die temps. Because they hid them. If you think that the chip is cool when even a distant part of the cooler in airflow has 65°C, you may not have a good sense for technical stuff.
That depends on how big the difference between the hotspot and that distant part is, which is influenced by many things. Without having the hotspot reading, you'll never know.
 
The other problem is, most reviews are on an open test bench. Once people start putting 570 watts inside the case with a cooler that has 96 °C memory temp on an open bench, I can bet you there will be some throttling for sure down the line.

Oh and that 9950X3D will throttle along with it too unless you find a way to dump that heat away from the CPU cooler/rad.

With these power levels, open vs closed test bench will make a sizeable difference.
 
putting 570 watts inside the case with a cooler that has 96 °C memory temp on an open bench, I can bet you there will be some throttling for sure down the line.
I'd be happy to test it, just send me 5090 and the smallest case it can fit :D
 
Engineers know everything, they never make mistakes, and certainly his opinion as an employee is above everyone else, safety > profit. Stop complaining. After all, there is no precedent for planes crashing, cars exploding, smartphone batteries catching fire and CPUs degrading. None of this was caused by human error, I promise.
I suggest you educate yourself on the concept of "balance of probability". Once you've done that, please answer the following question:

Is it more likely that NVIDIA engineers removed this sensor reading
  • because they discovered it to be unnecessary and causing unnecessary concern among customers, who can't be bothered to educate themselves on what this reading actually means?
  • or, for nefarious purposes?
If the latter, please explain your reasoning.

Stop defending nvidia for removing it lol.
I'm not defending it, I'm responding to the claim made by the OP that this is somehow "a great problem", given their complete failure to provide any concrete evidence that it is, in fact, a problem.
 
I suggest you educate yourself on the concept of "balance of probability". Once you've done that, please answer the following question:

Is it more likely that NVIDIA engineers removed this sensor reading
  • because they discovered it to be unnecessary and causing unnecessary concern among customers, who can't be bothered to educate themselves on what this reading actually means?
  • or, for nefarious purposes?
If the latter, please explain your reasoning.
It could be the first option, but paired with a failure to recognise the importance of this reading during a re-paste or waterblock installation - you might call that negligence.
Or maybe they don't want you to know how successful your re-paste is, potentially resulting in overheating, or GPU damage and refused RMA requests - so it's a mixture of the first and second options.

It might also be the case that the card doesn't have as many sensors as Ada or Ampere did, although I don't know what good that would do (I don't think they cost too much).

Of course, I know nothing, I'm just creating theories here.
 
No die has sensors all over covering every mm of the surface. But it was always handy to know what the highest temperature is within the area the sensors do cover. If that's 'inaccurate' then the core temperature is just as inaccurate, because it's an average of many sensors.

It has been useful for me in the past, and I'm sure it has been for others too.

This removal is better for nvidia in the sense that uninformed consumers won't freak out over their temperatures, I suppose. But it's not good for people replacing their coolers / installing waterblocks and such, and there are no two ways about it.
 
Geee I wonder who knows more about what constitutes the health status of a GPU. Is it you, or NVIDIA engineers?

An Appeal to Authority fallacy is no substitute for an argument. As Denver pointed out, being a specialist in a field does not make you above reproach.

If Nvidia is not tying their boost algorithm / throttling point / fan speed to the hotspot temp, then is it really that important?

Depends on what the hotspot is measuring. If it's merely the hottest part of the card, and that happens to always be outside of the cores, it may not have an impact on the card's ability to boost. In that instance, though, it's still useful for the card and the end user as a failsafe against other potential issues.

In addition, I feel like it'd be an engineering failure if the card doesn't properly measure the hottest points of the various parts of the card. Hence why Nvidia is still measuring this info but choosing not to disclose it.

Nvidia thinks that showing hotspot causes a perceptual issue and that its users are not smart enough to understand what the data means. Clearly the data it provides is useful; they are keeping the sensor but hiding the reading. The comparisons to Apple are dead on.
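If the sensor really is still there and merely hidden, what enthusiasts lose is the ability to script their own failsafe on top of it. A minimal sketch of the idea, with read_hotspot_c() as a purely hypothetical placeholder for whatever sensor interface your monitoring tool happens to expose:

```python
import time

HOTSPOT_LIMIT_C = 105.0  # example threshold -- pick it per card and per vendor spec

def read_hotspot_c() -> float:
    """Hypothetical placeholder -- swap in whatever sensor API your tooling provides."""
    raise NotImplementedError

def hotspot_watchdog(poll_seconds: float = 2.0) -> None:
    """Warn (or hook in your own fan curve / power limit) when the hotspot runs away."""
    while True:
        temp = read_hotspot_c()
        if temp >= HOTSPOT_LIMIT_C:
            print(f"hotspot at {temp:.0f} °C -- check the mount/paste or back off the power limit")
        time.sleep(poll_seconds)
```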
 
No die has sensors all over covering every mm of the surface.
Vega 20 (a.k.a. Radeon VII) had plenty of on-die temp sensors:
[image: Vega 20 GPU on-die sensor map]


Also, can "removal" be due to small change of how hot spot reports it's temp ?
(and current software simply needs a patch to make it appear again)
 
Are we going to endorse the lack of features (however useless some people think they are), just because some customers might be upset by high temp readings that are irrelevant to them (is that really an official explanation by nvidia, or)? Just like the omission of q-code displays even on high-end mobos nowadays, when they were a standard feature for many midrange mobos before?

It's a step back for PC enthusiasts, and I really can't see any reason to be silent about it, especially on a forum that used to be an enthusiast's place to go... I guess that's not the deal anymore.

Why cheap out on such a feature? It is a very handy temp reading for the people who like to tinker with their investment by modding cooling solutions, changing the thermal interface, making sure their WB is seated properly, or whatever.

If you hate the forum activities of @BoggledBeagle so much, use the damn ignore button. And this isn't the first time people have piled on this particular forum member, either.

Which is better for the average browsing Joe: seeing this discussion as a first search result on the topic, or going straight to tom's or whatever garbage site is out there? Your contribution to the topic matters, so do it the right way and be respectful to members of the community. Cheers.
 
Are we going to endorse the lack of features (however useless some people think they are), just because some customers might be upset by high temp readings that are irrelevant to them (is that really an official explanation by nvidia, or)? Just like the omission of q-code displays even on high-end mobos nowadays, when they were a standard feature for many midrange mobos before?

It's a step back for PC enthusiasts, and I really can't see any reason to be silent about it, especially on a forum that used to be an enthusiast's place to go... I guess that's not the deal anymore.

Why cheap out on such a feature? It is a very handy temp reading for the people who like to tinker with their investment by modding cooling solutions, changing the thermal interface, making sure their WB is seated properly, or whatever.

If you hate the forum activities of @BoggledBeagle so much, use the damn ignore button. And this isn't the first time people have piled on this particular forum member, either.

Which is better for the average browsing Joe: seeing this discussion as a first search result on the topic, or going straight to tom's or whatever garbage site is out there? Your contribution to the topic matters, so do it the right way and be respectful to members of the community. Cheers.

To be clear, I personally don't think Beagle is in the wrong for questioning anything... but I certainly do question the utility of the fearmongering, as this thread has shown it's... of very poor quality. Mud-flinging, playing the victim, accusations thrown left and right.

I propose a restart to all who are posting here.
 
To be clear, I personally don't think Beagle is in the wrong for questioning anything... but I certainly do question the utility of the fearmongering, as this thread has shown it's... of very poor quality. Mud-flinging, playing the victim, accusations thrown left and right.

I propose a restart to all who are posting here.

Saying that the sensor is not needed because nvidia engineers know best is also of very poor quality.
Let's just remove all the sensors - why bother, since nvidia engineers know what they are doing.
Maybe just one reading, 1 or 0, depending on whether the card is running or not.
 
To be clear, I personally don't think Beagle is in the wrong for questioning anything... but I certainly do question the utility of the fearmongering, as this thread has shown it's... of very poor quality. Mud-flinging, playing the victim, accusations thrown left and right.

I propose a restart to all who are posting here.
I'm seeing the same pattern again and again, and even though I might be overreacting a bit, these kinds of discussions always end up being of very poor quality indeed. I'm kinda tired of it; maybe I should take a break or something.
 
I suggest you educate yourself on the concept of "balance of probability". Once you've done that, please answer the following question:

Is it more likely that NVIDIA engineers removed this sensor reading
  • because they discovered it to be unnecessary and causing unnecessary concern among customers, who can't be bothered to educate themselves on what this reading actually means?
  • or, for nefarious purposes?
If the latter, please explain your reasoning.

Only individuals who are running a hardware monitoring program are going to see hotspot temps in the first place. I'd argue that inherently puts them above 99% of average PC users in terms of education.

Mind you, the logic that you can't show hotspot data is bunk. It relies on the idea that a person understands what is and isn't a hot GPU temperature in the first place but somehow doesn't understand an acceptable hotspot temp. The probability of someone having an issue understanding just the hotspot temp is low.

Since when did PC gamers start condoning hiding info because someone MIGHT misinterpret it? That is a dangerous precedent to set that if allowed unmitigated will turn PCs into locked-down Apple clones.
 