
110°C Hotspot Temps "Expected and Within Spec", AMD on RX 5700-Series Thermals

Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Vega isn't much bigger; we are talking 330 mm^2 vs 250 mm^2, and keep in mind Radeon 7 has some shaders disabled. In the end they're pretty close. But that doesn't even matter: the transistor density is pretty much the same.

As someone else said before, Nvidia does not expose these hotspot temperatures, so we can't compare them or know with certainty whether Nvidia deals with this as well.
 
Joined
Sep 17, 2014
Messages
22,447 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
So you're pretending that only good products get to be winners & all bad products or companies lose (customers) :rolleyes:

Must've missed the P4 or various Nvidia GPUs then; brand name & market position are just as important as, if not more important than, the actual product in many cases!

You have got to stop omitting half the text you quote to make your point, because it's simply invalid. In the very same sentence you can read that AMD is doing it well for CPUs. And yes, that is how it works: if you repeatedly 'lose', at some point you're gone. AMD does not repeatedly lose, but GPUs have been trouble for them ever since acquiring ATI. They did have some decent releases, but they are not very recent, so it's about damn time - and Navi so far 'is not it'. It's more a case of barely hanging on, and that only flies because Nvidia chose to waste time on RTX.

Vega isn't much bigger; we are talking 330 mm^2 vs 250 mm^2, and keep in mind Radeon 7 has some shaders disabled. In the end they're pretty close. But that doesn't even matter: the transistor density is pretty much the same.

As someone else said before, Nvidia does not expose these hotspot temperatures, so we can't compare them or know with certainty whether Nvidia deals with this as well.

Still not getting the memo - GN's review shows us that hitting 110°C is totally unnecessary. It does not make sense to assume that Nvidia cards, which are on a larger node and run cooler, are showing similar behaviour. In fact, that is just weak deflection.
 

deu

Joined
Apr 24, 2016
Messages
493 (0.16/day)
System Name Bo-minator (my name is bo)
Processor AMD 3900X
Motherboard Gigabyte X570 AORUS MASTER
Cooling Noctua NH-D15
Memory G-SkiLL 2x8GB RAM 3600Mhz (CL16-16-16-16-36)
Video Card(s) ASUS STRIX 1080Ti OC
Storage Samsung EVO 850 1TB
Display(s) ACER XB271HU + DELL 2717D
Case Fractal Design Define R4
Audio Device(s) ASUS Xonar Essence STX
Power Supply Antec HCP 1000W
Mouse G403
Keyboard CM STORM Quick Fire Rapid
Software Windows 10 64-bit Pro
Benchmark Scores XX
Guys, please understand the topic before you comment: this does not actually say whether it is hot or cold compared to Nvidia, since the way of measuring is different (some would say more precise). Put on edge, Nvidia could be probing up your a** and get an overall temp of 37.8C; the temperature you get depends entirely on the placement of the probe. IF these sensors are placed "correctly", it is a super smart way to optimize the boost of a GPU; if done wrong, it is a super optimized way to melt a GPU. I'm pretty sure AMD goes for the first option, since they A: want to stay in the market and B: want a fault rate under 50%. In short: it would not make sense to f*** your own GPU over, but in reality we don't know if this is good or bad, since we clearly can't compare the two methods.
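To illustrate what I mean (hypothetical sensor values, nobody's actual firmware): the number you call "GPU temp" depends entirely on whether you read one edge diode or the hottest of many junction sensors.

```python
# Hypothetical illustration of the point above: the temperature you "see"
# depends on which sensor you read. All values are made up, not measured.

edge_sensor_c = 78          # single diode near the die edge (older-style reading)
junction_sensors_c = [82, 96, 104, 110, 91, 88]   # array of sensors across the die

gpu_temp = edge_sensor_c                 # what a single-sensor card would report
hotspot_temp = max(junction_sensors_c)   # what a Navi-style "junction temp" reports

THROTTLE_POINT_C = 110   # the limit AMD quotes for Navi's junction temperature

def boost_allowed(hotspot: float, limit: float = THROTTLE_POINT_C) -> bool:
    """Boost only while the hottest point on the die stays below the limit."""
    return hotspot < limit

print(f"edge temp {gpu_temp} C, hotspot {hotspot_temp} C, "
      f"boost allowed: {boost_allowed(hotspot_temp)}")
```

Same die, two very different headline numbers; only the one watching the hotspot can safely push boost right up to the limit.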
 
Joined
Apr 12, 2013
Messages
7,529 (1.77/day)
You have got to stop omitting half the text you quote to make your point, because it's simply invalid. In the very same sentence you can read that AMD is doing it well for CPUs. And yes, that is how it works: if you repeatedly 'lose', at some point you're gone. AMD does not repeatedly lose, but GPUs have been trouble for them ever since acquiring ATI.
That still doesn't explain half the point you omitted about bad products getting good $, does it? You also conveniently sidestepped the good points of AMD GPUs, or do you believe they have none? There is no product without compromises; with AMD you just have to compromise more, again depending on what you do, and it's not like the AIB cards are "horrible" either.
 
Joined
Sep 17, 2014
Messages
22,447 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
That still doesn't explain half the point you omitted about bad products getting good $, does it? You also conveniently sidestepped the good points of AMD GPUs, or do you believe they have none? There is no product without compromises; with AMD you just have to compromise more, again depending on what you do, and it's not like the AIB cards are "horrible" either.

You really need to clarify whether you actually have a point or just want to keep this slow chat going with utter bullshit. The numbers speak for themselves; what are you really arguing against? That AMD is a sad puppy not getting enough love?

Grow up

And yes, AIB cards are not horrible; if you care to read back, I just about repeated that every other post. That is the whole god damn point. :roll:
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
It does not make sense to assume that Nvidia cards, which are on a larger node and run cooler, are showing similar behaviour.

And it does not make sense to assume that they don't, as you clearly insinuated. Why do you people not understand that your definition of "cooler" is really, really primitive? Your shiny RTX Titan may show, through its one sensor reading exposed to software, that it runs at 75C while some parts of the die might in fact hit over 100C. You don't know that, but it's safe to assume that this does happen because all ICs behave like this. Equally, maybe some parts of a Navi 10 die hit more than 110C, maybe that is within spec, maybe it's not. AMD knows best, more than you and me.

Point is, AMD uses a different set of sensors and a different way to measure temperatures; this can't directly translate into "Nvidia cards run cooler", nor does it mean that this must make them better products. That's the memo.
 
Joined
Sep 17, 2014
Messages
22,447 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
And it does not make sense to assume that they don't, as you clearly insinuated. Why do you people not understand that your definition of "cooler" is really, really primitive? Your shiny RTX Titan may show, through its one sensor reading exposed to software, that it runs at 75C while some parts of the die might in fact hit over 100C. You don't know that, but it's safe to assume that this does happen because all ICs behave like this. Equally, maybe some parts of a Navi 10 die hit more than 110C, maybe that is within spec.

Point is, AMD uses a different set of sensors and a different way to measure temperatures; this can't directly translate into "Nvidia cards run cooler". That's the memo.

Did you catch the line about AIB cards running much cooler yet? Even on AMD's revolutionary sensor placement? And staying well clear of 110C?

Simple case of connected dots here... if you feel confident this 110C is a guarantee for longevity, power to you. I don't.

I might be a stubborn idiot but this is clear as day, sorry.
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Simple case of connected dots here...

And there is a discontinuity among those dots that you conveniently ignored: Radeon 7.

A card with a more than decent cooler that still reports these "hella scary" temperatures.

if you feel confident this 110C is a guarantee for longevity, power to you. I don't.

It's not a guarantee for anything, because I don't have a bloody clue what that 110C figure is supposed to tell me. I am trying really hard to understand how it is that you people are so convinced that these numbers have some negative implication when in reality you have absolutely no reference point. You simply insist on believing AMD is doing something wrong, with no proof.

The Sapphire Pulse model is an astonishing 2% faster than reference; all this talk about how crappy AMD's cooler and temperatures are would have led me to believe things would be a lot different.
 
Joined
Sep 17, 2014
Messages
22,447 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
And there is a discontinuity among those dots that you conveniently ignored: Radeon 7.

A card with a more than decent cooler that still reports these "hella scary" temperatures.

We are going in circles because I covered that one already; Radeon 7 has a much higher TDP and is a bigger die requiring more power, while ALSO being on a stock AMD cooler. We don't know if AIBs would do better, but it's very, very likely. You should look at similar-TDP Nvidia cards that you like to think get just as hot. Here's a hint: compare the vcore curves they use, and how GPU Boost 3.0 works. I also, already, went into that one. Nvidia's boost simply does not allow the GPU to get that hot; you just lose a few hundred MHz in the worst-case scenario. AMD's Navi just keeps bumping into its throttle point ad infinitum.
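To sketch the behavioural difference I mean (made-up curves and numbers, not either vendor's real algorithm): one controller sheds clocks a little at a time as temperature rises past a soft target, the other holds full clocks until it slams into a hard limit.

```python
# Made-up illustration of the two throttle styles described above; neither
# curve is Nvidia's or AMD's actual boost algorithm.

def gradual_boost(temp_c: float, base_mhz: int = 1900) -> int:
    """Shed ~5 MHz per degree above a soft 60 C target (gradual style)."""
    soft_target = 60
    if temp_c <= soft_target:
        return base_mhz
    return max(base_mhz - int((temp_c - soft_target) * 5), 1500)

def hard_throttle(temp_c: float, base_mhz: int = 1900) -> int:
    """Hold full clocks until a hard 110 C junction limit, then cut hard."""
    return base_mhz if temp_c < 110 else 1600

for t in (60, 75, 90, 105, 110):
    print(f"{t:>3} C  gradual: {gradual_boost(t)} MHz   hard limit: {hard_throttle(t)} MHz")
```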

The Sapphire Pulse model is an astonishing 2% faster than reference; all this talk about how crappy AMD's cooler and temperatures are would have led me to believe things would be a lot different.

This was never about being able to hit higher clocks... this is about the temps while getting those clocks. But keep deflecting, all is well.

At the same time this only confirms my idea that AMD pushed Navi out of the box right up into the danger zone and slapped a blower on top for good measure. It's practically OC'd out of the box, without a cooler to match.

On one hand, there are hotspots on GPUs and exposing that reading for monitoring externally is definitely a good thing. I do not doubt for a second that Nvidia has similar sensor readings internally available, just not exposed.

On the other hand, 110°C being expected and in spec is a suspicious statement because we know these GPUs throttle at that exact 110°C point.
It is like saying Ryzen 3000 running at 95°C is expected and in spec. It is technically correct...

Ah my shining beacon of wisdom and clarity. Thank you.
 
Joined
Aug 9, 2019
Messages
226 (0.12/day)
Yes, improved. But know that Vega with HBM was 'prone' to cracking if the mounting pressure was too high; the interposer or HBM would simply fail when the pressure was too tight. That's why AMD is going for the safe route. Every GPU you see these days is mounted with a certain force, but not too tight, if you know what I mean. Any GPU could be made to run cooler if you start adding washers to it. It's no secret sauce either.

"- AMD is known for several releases with above average temperature-related long term fail rates."

I don't really agree. As long as the product works within spec, with no failure occurring, or at least survives its warranty period, what is wrong with that? It's not like you're going to use your video card for longer than 3 years. You could always tweak the card to have lower temps; I simply slap on an AIO water cooler and call it a day. GPU hardware is designed to run 'hot'. Haven't you seen the small heatsinks they are applying to the FirePro series? Those are single-slot coolers with small fans like you would see in laptops and such.

Why wouldn't you use the card for more than 3 years? I guess you throw your parts in the trash after 3 years? I give mine away at the very least; this isn't a disposable world we are living in like you think! If your car died the day after the warranty expired, would that be OK with you? Or do you even live in the real world?

I'm waiting for HDMI 2.1 cards that come out and don't run at 100C :) I guess I don't play games very often and only recently upgraded from an i7 3930K from 8 years ago. We all choose to spend our money in different ways. I'm not a big eat out / fast food kind of guy; I'd rather buy the T-bone for $12 and cook it myself than pay 120 for it cooked already.
 
Joined
Oct 1, 2006
Messages
4,931 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
The reference starts thermal throttling at 90-91C (from 1900 MHz to very unstable clocks below 1800 MHz) and even shuts down while gaming after a while if it's fully utilized (Linus and other reviewers have mentioned this).
The reason you don't see a significant boost is that the gain from pushing the frequency is poor in Navi (maybe a driver issue?). A 15% overclock on this chip results in a <4% performance gain.
How much of that is due to the cooler and how much of it was due to unstable drivers?
The fact is, all the recent reviews show that the Sapphire Pulse barely outperforms the reference card.
As for the overclock results, the reference card's GPU actually overclocked better than the Sapphire Pulse on W1zzard's sample.
Let me remind you that the officially given "game clock" is 1755 MHz, so saying the card running below 1900 MHz is throttling is just not true.

How do you explain this?
https://www.techpowerup.com/review/sapphire-radeon-rx-5700-xt-pulse/34.html

View attachment 129164

It is not just TPU reviews; even GN's reviews show that the non-reference card performs almost the same as the reference design.
So it takes more than just "the cooler card must be better, the hotter card must be running out of spec and losing performance" to prove it.
It is all speculation and GN's own opinion on what is too hot, while even his own data cannot prove the reference card is losing significant clock speed or performance.
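To put rough numbers on the game-clock point (the clocks are the ones quoted in this thread; the scaling factor is just my assumption to show why a 15% clock bump can yield under 4% more performance):

```python
# Rough numbers from the thread; the 25% scaling efficiency is an assumption,
# not measured data, used only to illustrate the <4% gain from a 15% OC.

game_clock_mhz = 1755       # AMD's officially quoted "game clock" for the 5700 XT
observed_mhz = 1850         # reference card hovering below 1900 MHz under load
overclock_pct = 15
scaling_efficiency = 0.25   # assumed fraction of extra clock that becomes extra FPS

print("running above the official game clock:", observed_mhz > game_clock_mhz)
print(f"expected perf gain from a {overclock_pct}% OC: "
      f"~{overclock_pct * scaling_efficiency:.1f}%")
```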
 
Joined
Dec 28, 2012
Messages
3,878 (0.89/day)
System Name Skunkworks 3.0
Processor 5800x3d
Motherboard x570 unify
Cooling Noctua NH-U12A
Memory 32GB 3600 mhz
Video Card(s) asrock 6800xt challenger D
Storage Sabarent rocket 4.0 2TB, MX 500 2TB
Display(s) Asus 1440p144 27"
Case Old arse cooler master 932
Power Supply Corsair 1200w platinum
Mouse *squeak*
Keyboard Some old office thing
Software Manjaro
I am amazed by these claims; how the hell do you know that? What is normal, and how do you know that's supposed to be normal? Are you by any chance working in chip design and know this stuff better than us or AMD?
Common sense and physics. Use your brain.

A device has a max rated limit. This is the max it can take before IMMEDIATE damage occurs. Long-term damage does not play by the same rule. Whenever you are dealing with a physical product, you NEVER push it to its 100% limit constantly and expect it to last. This applies to air conditioners, jacks, trucks, computers, tables, fans, anything you use on a daily basis. Like I said, my car can do 155 MPH. But if I were to push it that fast constantly, every day, the car wouldn't last very long before experiencing mechanical issues, because it isn't designed to SUSTAIN that speed.

Every time the GPU heats up and cools down, the solder connections experience expansion and contraction. Over time, this can result in the solder connections cracking internally, resulting in a card that does not work properly. The greater the temperature variance, the faster this occurs. This is why many GPUs now shut the fans off under 50C: cooling the card all the way down to 30C increases the variance the GPU experiences.

What AMD is doing here is allowing the GPU to run at max junction temp for extended periods of time and calling this acceptable. Given the GPU also THROTTLES at this temp, AMD is admitting it designed a GPU that can't run at full speed during typical gaming workloads. Given AMD also releases GPUs that can be tweaked to both run faster and consume less voltage rather reliably, it would seem a LOT of us know better than RTG engineers.

Would you care to explain how AMD's silicon is magically no longer affected by physical expansion and contraction from temperatures? I'd love to hear about this new technology.
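For reference, the solder-fatigue argument above is usually described with a Coffin-Manson style relation: cycles to failure scale with the inverse of the temperature swing raised to some exponent. Here is a rough sketch with assumed constants, not real solder data:

```python
# Coffin-Manson style estimate of relative solder-joint life vs temperature
# swing: N ~ (dT_ref / dT)^m. The exponent and reference swing are
# illustrative assumptions only, not measured values for any GPU.

def relative_cycle_life(delta_t: float, ref_delta_t: float = 40.0,
                        exponent: float = 2.5) -> float:
    """Cycles-to-failure relative to a reference temperature swing."""
    return (ref_delta_t / delta_t) ** exponent

for swing in (30, 40, 60, 80):
    print(f"dT = {swing:>2} C  ->  {relative_cycle_life(swing):.2f}x the reference life")
```

Under those assumptions, doubling the temperature swing cuts the expected cycle life by far more than half, which is the intuition behind "the greater the temperature variance, the faster this occurs".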
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
We don't know if AIBs would do better, but it's very, very likely.

Really? What would they do with it? Put a liquid cooler on it? Because I can't think of anything else they could do to improve the cooler; it already has a hefty heatsink with three fans, and GN already showed you can't really do much about the TIM and mounting.

We are going in circles because you are trying really, really hard to dismiss evidence that you don't like.

Given the GPU also THROTTLES at this temp, AMD is admitting it designed a GPU that can't run at full speed during typical gaming workloads.

As I said above, the Sapphire Pulse model is a mere 2% faster than reference, so this argument is stupid. The reference model runs fine during typical gaming workloads, speed-wise.

Navi shows one of the smallest gaps between reference and AIB models that we've seen in the last few generations. How the hell does that work, if AMD made a shitty GPU that can't run at full speed due to thermal throttling, when the AIB models supposedly eliminate this possibility?
 
Joined
Sep 17, 2014
Messages
22,447 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Really? What would they do with it? Put a liquid cooler on it? Because I can't think of anything else they could do to improve the cooler; it already has a hefty heatsink with three fans, and GN already showed you can't really do much about the TIM and mounting.

We are going in circles because you are trying really, really hard to dismiss evidence that you don't like.



As I said above, the Sapphire Pulse model is a mere 2% faster than reference, so this argument is stupid. The reference model runs fine during typical gaming workloads.

And we arrive once again at your assumption versus mine, and I say: power to you, buy more, save more, go go. You're doing exactly the same with 'evidence' (limited to Radeon 7 'also having a hot spot', versus overwhelming evidence that other cards run much cooler and even Navi can), and this will go nowhere.

It's times like these that common sense gets you places. Try it someday. Calling the argument stupid because you cannot quantify things is not usually a good idea.
 
Joined
Oct 8, 2006
Messages
173 (0.03/day)
Back with the RX 280s (mining), I RMA'd 5 of them: I ran a game to push each one at a time and manually slowed the fans down to heat them up. If they had artifacts before 85C, I would send them back. Then I tested the new ones and sent back another 3. They are just confirming to us that the card is defective if it cannot reach those temps without errors. My personal way of binning cards :p
 
Joined
Oct 1, 2006
Messages
4,931 (0.74/day)
Location
Hong Kong
Processor Core i7-12700k
Motherboard Z690 Aero G D4
Cooling Custom loop water, 3x 420 Rad
Video Card(s) RX 7900 XTX Phantom Gaming
Storage Plextor M10P 2TB
Display(s) InnoCN 27M2V
Case Thermaltake Level 20 XT
Audio Device(s) Soundblaster AE-5 Plus
Power Supply FSP Aurum PT 1200W
Software Windows 11 Pro 64-bit
Every time the GPU heats up and cools down, the solder connections experience expansion and contraction. Over time, this can result in the solder connections cracking internally, resulting in a card that does not work properly. The greater the temperature variance, the faster this occurs. This is why many GPUs now shut the fans off under 50C: cooling the card all the way down to 30C increases the variance the GPU experiences.
The reason for shutting off the fan at idle is just noise; it has nothing to do with reducing the temperature gradient at all.
The fact is, older GPUs do not have this feature at all, and all of them ran fine and did not prematurely die because of it.

Also, starting and stopping the fans more often than otherwise is actually slightly detrimental to the lifespan of the fans.
For motors, the ideal condition is actually to run them at a steady state.
This is the same reason why you don't want to start and stop your HDD motor too often.
 
Joined
Jan 8, 2017
Messages
9,436 (3.28/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
The reason for shutting off the fan at idle is just noise; it has nothing to do with reducing the temperature gradient at all.
The fact is, older GPUs do not have this feature at all, and all of them ran fine and did not prematurely die because of it.
Also, starting and stopping the fans more often than otherwise is actually slightly detrimental to the lifespan of the fans.

Wasn't this contracting and expanding the same nonsense some site tried to pass off as the reason why Intel didn't use solder? I can't remember who made an article on this.

And even if that were the case, it's not just the temperature delta that matters; the frequency of those deltas is what really may have an effect on the material. And thankfully, GPUs usually run at a high constant temperature for extended periods of time and idle at a low constant temperature for the rest of the time.
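To sketch that point, here is a toy cycle counter over two invented temperature traces; neither trace is a measurement, it just shows how the number of large swings, not their mere size, is what piles up:

```python
# Toy illustration: count large thermal swings in a temperature trace.
# Both traces are invented, not measured GPU data.

def count_big_swings(trace, threshold=30):
    """Count excursions where the temperature moves by more than `threshold` C."""
    swings, last_extreme = 0, trace[0]
    for t in trace[1:]:
        if abs(t - last_extreme) >= threshold:
            swings += 1
            last_extreme = t
    return swings

steady_gaming = [35] + [80] * 200 + [35]   # heat up once, stay hot, cool once
stop_and_go   = [35, 80] * 100             # constant hot/cold cycling

print("steady session swings:", count_big_swings(steady_gaming))   # -> 2
print("stop-and-go swings:   ", count_big_swings(stop_and_go))     # -> 199
```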
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
I love how the most offended users actually own NV cards... ^))))

Given that those temps are reached on the ref card, and that today we see AIBs drop card temperatures by a good 25+ degrees, could we find another reason to get offended? Like the lack of CrossFire or something?

Calling the argument stupid because you cannot quantify things...
He literally chewed it up for you; let me repeat the relevant part, perhaps you'll get it on the second go: had thermals been a problem, the gap between AIB and ref cards would be much bigger than the 3-5% that we see now (especially taking into account the much lower temps on AIBs).
 
Joined
Jul 10, 2015
Messages
754 (0.22/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
Newer unreleased / not even announced GPU demolishes older GPUs, such insight much wow. :roll:
Well, the 7 nm Radeon just matches 3-year-old 16 nm Pascal in perf/W. Nvidia should just make a 7 nm Pascal; why bother with Turing?
 
Joined
Jun 16, 2016
Messages
409 (0.13/day)
System Name Baxter
Processor Intel i7-5775C @ 4.2 GHz 1.35 V
Motherboard ASRock Z97-E ITX/AC
Cooling Scythe Big Shuriken 3 with Noctua NF-A12 fan
Memory 16 GB 2400 MHz CL11 HyperX Savage DDR3
Video Card(s) EVGA RTX 2070 Super Black @ 1950 MHz
Storage 1 TB Sabrent Rocket 2242 NVMe SSD (boot), 500 GB Samsung 850 EVO, and 4TB Toshiba X300 7200 RPM HDD
Display(s) Vizio P65-F1 4KTV (4k60 with HDR or 1080p120)
Case Raijintek Ophion
Audio Device(s) HDMI PCM 5.1, Vizio 5.1 surround sound
Power Supply Corsair SF600 Platinum 600 W SFX PSU
Mouse Logitech MX Master 2S
Keyboard Logitech G613 and Microsoft Media Keyboard
In my opinion, we have to trust that AMD knows what they're doing with the max temperature. If they have done engineering tests at these temperatures and aren't worried about degradation, then the cards will probably be fine throughout their designed lifespan. 110 degrees seems like a lot, but part of that is that we were trained to watch temps from one sensor. I'm guessing that setting a temperature limit of 92 degrees or so on older GPUs was a way of using that one sensor to try to extrapolate the maximum temperature from one source.

If it is a problem, then these cards will start failing and people will complain about it. If we subscribe to the bathtub model of component failure, a large percentage of a product's total failures should occur early on, due to defective cards or, if this heat really is a problem, heat-induced deaths, so it shouldn't take too long to tell if the GPU is immolating itself. It's not like every 5700 will last 3 years and 1 month and then burn up after the warranty is through. If the heat is a problem, we'll hear about it soon, and people will still be under warranty.
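A toy sketch of that bathtub shape (hazard rate per year; all constants are invented purely to show the shape, not real failure data):

```python
import math

# Illustrative bathtub-curve hazard rate: infant mortality + constant random
# failures + wear-out. Every constant here is made up to show the shape only.

def hazard_rate(t_years: float) -> float:
    infant_mortality = 0.20 * math.exp(-4.0 * t_years)          # defects die fast
    random_failures = 0.02                                       # flat mid-life rate
    wear_out = 0.005 * math.exp(0.8 * max(t_years - 3.0, 0.0))   # ramps after ~3 years
    return infant_mortality + random_failures + wear_out

for year in (0.1, 0.5, 1, 2, 3, 4, 5):
    print(f"year {year:>3}: hazard ~ {hazard_rate(year):.3f}")
```

The point is the same as above: if 110°C operation really hurt these dies, the early part of that curve would be visibly elevated well inside the warranty window.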
 
Joined
Sep 17, 2014
Messages
22,447 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
I love how the most offended users actually own NV cards... ^))))

Given that those temps are reached on the ref card, and that today we see AIBs drop card temperatures by a good 25+ degrees, could we find another reason to get offended? Like the lack of CrossFire or something?


He literally chewed it up for you; let me repeat the relevant part, perhaps you'll get it on the second go: had thermals been a problem, the gap between AIB and ref cards would be much bigger than the 3-5% that we see now (especially taking into account the much lower temps on AIBs).

Bought one yet? You were waiting and they're out, what's keeping you? After all, ref is 'just fine' ;)

Also, this line is a bit of a head scratcher:
"the relevant part, perhaps you'll get it on the second go: had thermals been a problem, the gap between AIB and ref cards would be much bigger than the 3-5%"

Actually... not having headroom while still having lower temps is a clear sign the card is clocked straight to the limit out of the box, and this also echoes in the GN review. @TheinsanegamerN worded it nicely: the ref design is like a car running at top speed, fully in the red zone, all the time, and considering that normal is a rather weird approach. The GN review also handily points out that the memory ICs are a hair below running out of spec. Now, imagine what happens with a bit of dust and wear and tear over time - or, in fact, in most use cases outside the review bench. The throttling will get worse, and that peak temp won't be any lower for it.
 
Joined
May 7, 2014
Messages
59 (0.02/day)
How much of that is due to the cooler and how much of it was due to unstable drivers?
The fact is, all the recent reviews show that the Sapphire Pulse barely outperforms the reference card.
As for the overclock results, the reference card's GPU actually overclocked better than the Sapphire Pulse on W1zzard's sample.
Let me remind you that the officially given "game clock" is 1755 MHz, so saying the card running below 1900 MHz is throttling is just not true.

How do you explain this?
https://www.techpowerup.com/review/sapphire-radeon-rx-5700-xt-pulse/34.html

View attachment 129164
It is not just TPU reviews; even GN's reviews show that the non-reference card performs almost the same as the reference design.
So it takes more than just "the cooler card must be better, the hotter card must be running out of spec and losing performance" to prove it.
It is all speculation and GN's own opinion on what is too hot, while even his own data cannot prove the reference card is losing significant clock speed or performance.

Yeah, it could be the driver or the architecture... we don't know as of now, but the performance gain from overclocking is poor in Navi; this is the reason you see the premium cards with higher clocks being close to the reference.
Check the clock speeds page and compare the two: the frequency on the reference is all over the place once it starts to reach 91C, and as I said above, there are cases where some of them, in warm environments, even shut down.

AMD cheaped out on their cooler, that is a fact, even knowing about the thermal density issue... and now they come with the "oh, it's fine".
They did the same in the CPU department.
It's all about profits with these corporations.
We are living in a time when truth has been so diminished in value that even those at the top are quite comfortable with truth being whatever they can convince people to believe.
 
Joined
Jul 9, 2015
Messages
3,413 (1.00/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
Bought one yet? You were waiting and they're out, what's keeping you? After all, ref is 'just fine' ;)
No. Thanks for asking.
I need to complete a woodworking project for there to even be a place for a PC with a monitor (my current something is hooked to a TV, and that's not the way I'd like to play games).
Besides, AIBs are not really available yet.

Now, imagine what happens with a bit of dust...
Clearly nothing, but who cares about ref cards anyway.

Actually... not having headroom while still having lower temps is a clear sign the card is clocked straight to the limit out of the box...
Actually, the talk was about the thermal design and the horrors that Nvidia GPU owners feel, for some reason, on behalf of 5700 XT ref GPU owners.

Now that we've covered that, NV's 2070 AIBs (I didn't check others) aren't great OCers either; the difference between ref and AIB performance is similarly small across brands.
 
Joined
Jun 28, 2016
Messages
3,595 (1.17/day)
What BS, you're making it sound like AMD GPUs are unusable garbage & Nvidia not only outstrips them across the board but also in every price bracket, in every game you can think of! Which is of course BS as well :rolleyes:
By all means, AMD GPUs aren't unusable. That's not what I said.
But these GPUs aren't mainstream. To be mainstream, they have to offer more than just performance/price ratio. There's so much to improve in thermals, efficiency and stability. In marketing and support as well.
Nvidia's cards are so much more attractive, because Nvidia sells a polished, complete product. AMD sells a DIY project.

This becomes obvious when you look at what some of AMD's custom GPU clients can achieve. Apple, Sony, Microsoft and soon Samsung - they're offering AMD's chips in a much easier to digest form.
Of course AMD could make more robust products. They could do better pre-launch testing, improve compatibility and drivers. And work on relations with partners to deliver AIB cards and OEM systems on day of launch (like Nvidia and Intel do). But that would raise costs and - at least for now - AMD wants to remain the cheaper alternative. It's a conscious decision.

Also, starting and stopping the fans more often than otherwise is actually slightly detrimental to the lifespan of the fans.
First of all: is this your intuition or are there some publications to support this hypothesis? :)

Second: you seem a bit confused. The passive cooling does not increase the number of times the fan starts. The fan is not switching on and off during gaming.
If the game applies a lot of load, the fan will be on during the whole session. Otherwise the fan is off.
So the number of starts and stops is roughly the same. It's just that your fan starts during boot and mine during game launch. So I don't have to listen to it when I'm not gaming (90% of the time).

In fact it actually decreases the number of starts for those of us who don't play games every day.
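A tiny hysteresis sketch of what I mean (the thresholds are my assumptions, not any vendor's firmware): the fan gains exactly one start per gaming session, regardless of zero-RPM mode.

```python
# Minimal zero-RPM fan hysteresis sketch. Threshold values are assumptions,
# not taken from any actual GPU firmware.

FAN_ON_C = 55    # start the fan above this temperature
FAN_OFF_C = 45   # stop it only once well below, so it never rapidly toggles

def count_fan_starts(trace):
    """Return how many times the fan starts over a temperature trace."""
    fan_on, starts = False, 0
    for temp in trace:
        if not fan_on and temp >= FAN_ON_C:
            fan_on, starts = True, starts + 1
        elif fan_on and temp <= FAN_OFF_C:
            fan_on = False
    return starts

# idle desktop -> one long gaming session -> idle again
session = [35] * 50 + [70] * 500 + [35] * 50
print("fan starts during the session:", count_fan_starts(session))   # -> 1
```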
 

eidairaman1

The Exiled Airman
Joined
Jul 2, 2007
Messages
42,184 (6.64/day)
Location
Republic of Texas (True Patriot)
System Name PCGOD
Processor AMD FX 8350@ 5.0GHz
Motherboard Asus TUF 990FX Sabertooth R2 2901 Bios
Cooling Scythe Ashura, 2×BitFenix 230mm Spectre Pro LED (Blue,Green), 2x BitFenix 140mm Spectre Pro LED
Memory 16 GB Gskill Ripjaws X 2133 (2400 OC, 10-10-12-20-20, 1T, 1.65V)
Video Card(s) AMD Radeon 290 Sapphire Vapor-X
Storage Samsung 840 Pro 256GB, WD Velociraptor 1TB
Display(s) NEC Multisync LCD 1700V (Display Port Adapter)
Case AeroCool Xpredator Evil Blue Edition
Audio Device(s) Creative Labs Sound Blaster ZxR
Power Supply Seasonic 1250 XM2 Series (XP3)
Mouse Roccat Kone XTD
Keyboard Roccat Ryos MK Pro
Software Windows 7 Pro 64
I wouldn't run furmark on this card unless I want to cook breakfast

Furmark is trash on any card

Yeah, no thanks AMD, I don't wish to get hearing damage because of your poor cooler design.
Please give me a break with that crap. Try a server fan.

Tbf, all cards use crap thermal compound/pads. Why? It's cheap in bulk.
 