
110°C Hotspot Temps "Expected and Within Spec", AMD on RX 5700-Series Thermals

Joined
Aug 9, 2019
Messages
226 (0.12/day)
Honestly, I don't care about this 'issue', and I don't believe for a second that Nvidia or Intel don't have the same stuff going on anyway.

In the past ~10+ years I've only had 2 cards die on me, and both were Nvidia cards, so there's that.

I don't care about reference/blower cards either; whoever buys those should know what they're buying instead of waiting a while for the 'proper' models.

I'm planning to buy a 5700 but I'm not in a hurry; I can easily wait till all of the decent models are out and then buy one of them (Nitro/Pulse/Giga G1, probably).

110 °C is too hot. Hopefully they come out with a 5800 that solves it, but I don't think they care, as long as they have your money until the warranty expires.

I've only had Nvidia cards die too, but that's because they are always the best bang for the buck. (I only bought a few AMD GPUs: a 9800 that could unlock shaders? Or a 9700 vanilla? I forget. And the 1950xtxtxtx? Still have it on the wall of my garage. Most deaths are from a simple cap that I could have replaced, but by the time they die I'd rather hang them on the wall than repair and use them. Maybe 30 motherboards, some with CPUs and coolers intact, on my wall, and 20 graphics cards over the years.)

It's strange to me why people want a 5700 anyway; the 1080 Ti has been out for how long? I purchased two of them used long ago for 450 and 500 (just about 2 years ago to the day), and they seem to run better than the new 5700 XT in every scenario. So is it people that love the AMD brand and are hoping for a better future?

If I were to purchase a card today it would be an open-box 2080; I think they run 550? Too bad nothing has HDMI 2.1, so I will just sit and wait for next gen after next gen; still so slow and overpriced. (I'd be happy with 8K@120 Hz, hehehe.)
 
Joined
Jun 28, 2016
Messages
3,595 (1.17/day)
I've never seen that claim. And yes, if that is what the spec says.
No? You've never seen a topic where people criticize Intel for 80°C+ and praise Ryzen for being cooler? Maybe some bad TIM discussion? Anything? :-D
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
19,582 (2.86/day)
Location
Piteå
System Name White DJ in Detroit
Processor Ryzen 5 5600
Motherboard Asrock B450M-HDV
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Kingston Fury 3400mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston A400 240GB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Plantronics 5220, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Cherry MX Board 1.0 TKL Brown
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
No? You've never seen a topic where people criticize Intel for 80°C+ and praise Ryzen for being cooler? Maybe some bad TIM discussion? Anything? :-D

TIM discussions, but not straight up "heat is murdering Intel CPUs". But then I don't pay much attention. :)
 

las

Joined
Nov 14, 2012
Messages
1,693 (0.38/day)
System Name Meh
Processor 7800X3D
Motherboard MSI X670E Tomahawk
Cooling Thermalright Phantom Spirit
Memory 32GB G.Skill @ 6000/CL30
Video Card(s) Gainward RTX 4090 Phantom / Undervolt + OC
Storage Samsung 990 Pro 2TB + WD SN850X 1TB + 64TB NAS/Server
Display(s) 27" 1440p IPS @ 360 Hz + 32" 4K/UHD QD-OLED @ 240 Hz + 77" 4K/UHD QD-OLED @ 144 Hz VRR
Case Fractal Design North XL
Audio Device(s) FiiO DAC
Power Supply Corsair RM1000x / Native 12VHPWR
Mouse Logitech G Pro Wireless Superlight + Razer Deathadder V3 Pro
Keyboard Corsair K60 Pro / MX Low Profile Speed
Software Windows 10 Pro x64
The RTX 2060 uses more power than an RX 5700 in gaming on average while performing worse. So what did you want to say?

The 5700 uses more power than the 2060.

It's clear that AMD maxed these chips completely out, just like the Ryzen chips, to look good in reviews, but there's no OC headroom as a result. That's why custom versions perform pretty much identically to reference and overclock ~1.5% on average.
 
Joined
Jul 9, 2015
Messages
3,413 (0.99/day)
System Name M3401 notebook
Processor 5600H
Motherboard NA
Memory 16GB
Video Card(s) 3050
Storage 500GB SSD
Display(s) 14" OLED screen of the laptop
Software Windows 10
Benchmark Scores 3050 scores a good 15-20% lower than average, despite ASUS's claims that it has uber cooling.
We are never sure until after the fact. I'll word it differently.
Well, actually, no: we are pretty sure that if someone falls from a height of 100 meters onto a concrete surface, he/she will inevitably die.

Your example of "it makes everything hotter" is moot here, as we are talking about only 1 out of 64 sensors reporting that temp.
The overall temp of the chip in TPU's tests of the reference card was 79 °C, +4 degrees if OCed.
Nowhere near 110.
Only 6 degrees higher than the 2070's (blower reference vs. AIB-ish reference).
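To put a number on that, here's a toy sketch in Python (made-up readings, not AMD's firmware or real telemetry) of how a max-of-64-sensors "junction" readout can sit ~30 degrees above the averaged "GPU temp" on the very same die:

# Hypothetical sensor grid: 63 ordinary readings plus one hot spot.
import random
random.seed(0)
sensors = [random.uniform(74.0, 82.0) for _ in range(63)]
sensors.append(108.0)  # a single hot spot near the shader engines

gpu_temp = sum(sensors) / len(sensors)  # roughly what "GPU temp" reports
junction = max(sensors)                 # what the "hotspot"/Tjunction reports
print(f"GPU temp {gpu_temp:.1f} C, junction {junction:.1f} C")
# -> GPU temp ~78 C, junction 108 C: same die, two very different numbers.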

AIB cards are late to the party
I don't see it that way. No matter who, what, and where, for the first couple of months (or longer) there are shortages and price gouging, regardless of when AIBs come.

110 °C is too hot. Hopefully they come out with a 5800 that solves it
There is nothing to fix, besides people's perception.
We are talking about a 79 °C temp overall, with one out of a gazillion "spot" sensors reporting that particular temp.
We have no idea what those temps would be in NV's case, but likely also over 100.
 
Joined
Apr 12, 2013
Messages
7,536 (1.77/day)
You ought to scroll back a bit; I covered this at length. Memory ICs reach 100 °C, for example, which is definitely not where you want them. That heat affects other components, and none of this helps chip longevity. The writing is on the wall. To each his own what he thinks of that, but it's not looking comfy to me.

By the way, your 7700K link kind of underlines that we know about the 'hot spots' on Intel processors; otherwise you wouldn't have that reading. But these Navi temps are not 'spikes'. They are sustained.
These are hotspots, not the entire die's temp! Did you even read what the blog post said?
Paired with this array of sensors is the ability to identify the 'hotspot' across the GPU die. Instead of setting a conservative, 'worst case' throttling temperature for the entire die, the Radeon™ RX 5700 series GPUs will continue to opportunistically and aggressively ramp clocks until any one of the many available sensors hits the 'hotspot' or 'Junction' temperature of 110 degrees Celsius. Operating at up to 110C Junction Temperature during typical gaming usage is expected and within spec. This enables the Radeon™ RX 5700 series GPUs to offer much higher performance and clocks out of the box, while maintaining acoustic and reliability targets.



We provide users with both measurements – the average GPU Temperature and Junction Temperature – so that they have the best and most complete visibility into the operation of their Radeon™ RX 5700 series GPUs.
No it doesn't; the forum poster who claimed the "same" about Intel was doing this after a (full) bare-die mount (or delid?) and water cooling, IIRC. These are hotspots. I mean, are you intentionally trying to be obtuse, or do you have an axe to grind here? :rolleyes:
 
Last edited:
Joined
Apr 21, 2010
Messages
578 (0.11/day)
System Name Home PC
Processor Ryzen 5900X
Motherboard Asus Prime X370 Pro
Cooling Thermaltake Contac Silent 12
Memory 2x8gb F4-3200C16-8GVKB - 2x16gb F4-3200C16-16GVK
Video Card(s) XFX RX480 GTR
Storage Samsung SSD Evo 120GB -WD SN580 1TB - Toshiba 2TB HDWT720 - 1TB GIGABYTE GP-GSTFS31100TNTD
Display(s) Cooler Master GA271 and AoC 931wx (19in, 1680x1050)
Case Green Magnum Evo
Power Supply Green 650UK Plus
Mouse Green GM602-RGB ( copy of Aula F810 )
Keyboard Old 12 years FOCUS FK-8100
Some people have no idea how hotspot temp is measured. If the overall temp reaches 100 °C, that means the hotspot temp is above 120 °C. Each GPU chip has multiple layers, and because of how heat transfers, the sensor array in the middle layer always reports the highest temp. The top and bottom layers run at lower temps than the junction temp. See that?
According to GlobalFoundries:


Standard temperature range: -40°C to 125°C

AMD set it to 110 °C. So no processing unit's temp in any layer may exceed 110 °C, and you guys scream, like kids, that the house is on fire?
What's the junction temp for a Turing card? I bet Nvidia doesn't want to reveal it.
 
Joined
Jun 27, 2019
Messages
2,109 (1.06/day)
Location
Hungary
System Name I don't name my systems.
Processor i5-12600KF 'stock power limits/-115mV undervolt+contact frame'
Motherboard Asus Prime B660-PLUS D4
Cooling ID-Cooling SE 224 XT ARGB V3 'CPU', 4x Be Quiet! Light Wings + 2x Arctic P12 black case fans.
Memory 4x8GB G.SKILL Ripjaws V DDR4 3200MHz
Video Card(s) Asus TuF V2 RTX 3060 Ti @1920 MHz Core/@950mV Undervolt
Storage 4 TB WD Red, 1 TB Silicon Power A55 Sata, 1 TB Kingston A2000 NVMe, 256 GB Adata Spectrix s40g NVMe
Display(s) 29" 2560x1080 75Hz / LG 29WK600-W
Case Be Quiet! Pure Base 500 FX Black
Audio Device(s) Onboard + Hama uRage SoundZ 900+USB DAC
Power Supply Seasonic CORE GM 500W 80+ Gold
Mouse Canyon Puncher GM-20
Keyboard SPC Gear GK630K Tournament 'Kailh Brown'
Software Windows 10 Pro
110 °C is too hot. Hopefully they come out with a 5800 that solves it, but I don't think they care, as long as they have your money until the warranty expires.

I've only had Nvidia cards die too, but that's because they are always the best bang for the buck. (I only bought a few AMD GPUs: a 9800 that could unlock shaders? Or a 9700 vanilla? I forget. And the 1950xtxtxtx? Still have it on the wall of my garage. Most deaths are from a simple cap that I could have replaced, but by the time they die I'd rather hang them on the wall than repair and use them. Maybe 30 motherboards, some with CPUs and coolers intact, on my wall, and 20 graphics cards over the years.)

It's strange to me why people want a 5700 anyway; the 1080 Ti has been out for how long? I purchased two of them used long ago for 450 and 500 (just about 2 years ago to the day), and they seem to run better than the new 5700 XT in every scenario. So is it people that love the AMD brand and are hoping for a better future?

If I were to purchase a card today it would be an open-box 2080; I think they run 550? Too bad nothing has HDMI 2.1, so I will just sit and wait for next gen after next gen; still so slow and overpriced. (I'd be happy with 8K@120 Hz, hehehe.)

Sorry for the late reply.

In my case it's because I don't buy 'high end' hardware; I'm more of a budget/mid-range user, so I never really considered the 1080 and cards around that range when they were new/expensive.

I pretty much always use my cards for 2-3 years before upgrading, and this 5700 will be my biggest/most expensive upgrade yet; it will be used for 3 years at least.
I don't mind playing at 45-50 fps and dropping settings to ~medium when needed, so I can easily last that long. I probably wouldn't even bother upgrading from my RX 570 yet if I still had my 1920x1080 monitor, but this 2560x1080 res is more GPU heavy and some new games are kinda pushing it already.
If Borderlands 3 runs alright on the 570, I might even delay that purchase, since it will be my main game for a good few months at least.

+ Problem is that I don't want to buy a card with 6GB of VRAM, because I prefer to keep the texture setting at ~high at least, and with 3-4 years in mind that's gonna be a problem (I already ran into this issue with my previous cards).
Atm all of the 8GB Nvidia cards are out of my budget (2060S), and I'm not a fan of used cards, especially when I plan to keep it for long (having no warranty is a dealbreaker for me).
Dual-fan 2060S models start around ~$500 here with tax included, a blower 5700 non-XT at ~410, so even the aftermarket models will be cheaper, and that's the max I'm willing to spend.


My cards were like this, at least what I can remember:

AMD ('Ati') 9600 Pro, 6600 GT, 7800 GT, 8800 GT (died after 2.5 years), GTS 450 which was my warranty replacement, GTX 560 Ti (died after 1 year, had no warranty on it...), AMD 7770, GTX 950 Xtreme, and now the RX 570.
That 950 is still running fine at my friend's, who bought it from me; it's almost 4 years old now.

My bro had more AMD cards than me, now that I think of it; he even had a 7950 CrossFire system for a while, and that ran 'hot'. :D
If I recall correctly, his only dead card was an 8800 GTX; all of his AMD cards survived somehow.
 
Last edited:
Joined
Sep 17, 2014
Messages
22,465 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
TIM discussions, but not straight up "heat is murdering Intel CPUs". But then I don't pay much attention. :)

I found myself dialing back my OC multiple times due to temps on Intel CPUs. It's not pretty, and it's a sign of the times as performance caps out. And it's STILL not pretty on a hot summer day: still seeing over 85 °C on the regular. Some years back the consensus was that 80 °C was just about the max 'safe' temp. Go higher continuously and you may suffer noticeable degradation in the useful life of the chip. 'In spec' is not the same as 'safe'. Maybe 'murder' should be rephrased to 'a slow, painful death' ;)

These are hotspots, not the entire die's temp! Did you even read what the blog post said?

Do YOU even read? You say we don't know about hotspots on Intel CPUs, and in the same sentence you linked that 7700K result with hotspot readings. I also pointed out that Intel has reported TJunction for quite a while now.

Gotta stop acting like AMD is doing something radically new. It's clear as day: the GPU has no headroom, it constantly pushes itself to the max temp limit, and while doing so, heat at the memory ICs gets to the max 'specced' value as well. So what if the die is cooler? It still won't provide any headroom to push the chip further. The comparisons with Nvidia therefore fall completely flat as well, because Nvidia DOES have that headroom, and does not suffer from the same heat levels elsewhere on the board.

It's not my problem you cannot connect those dots, and you can believe whatever you like to believe... to which the follow-up question is: did you buy one yet? After all, they're fine and AIB cards don't clock higher, so you might as well... GPU history is full of shitty products, and this could well be another one (on reference cooling).
 
Last edited:
Joined
Jan 8, 2017
Messages
9,438 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
What's junction temp for Turning card ? I bet Nvidia doesn't want to reveal it.

Admittedly, it's a smart choice. This way the simple-minded folk won't be bothered by scary numbers that they don't understand.

AMD has always been the most transparent about what their products are doing under the hood, but by the same token this drives away people who don't know what to do with this information. It's a shame.
 
Last edited:

INSTG8R

Vanguard Beta Tester
Joined
Nov 26, 2004
Messages
8,042 (1.10/day)
Location
Canuck in Norway
System Name Hellbox 5.1(same case new guts)
Processor Ryzen 7 5800X3D
Motherboard MSI X570S MAG Torpedo Max
Cooling TT Kandalf L.C.S.(Water/Air)EK Velocity CPU Block/Noctua EK Quantum DDC Pump/Res
Memory 2x16GB Gskill Trident Neo Z 3600 CL16
Video Card(s) Powercolor Hellhound 7900XTX
Storage 970 Evo Plus 500GB 2xSamsung 850 Evo 500GB RAID 0 1TB WD Blue Corsair MP600 Core 2TB
Display(s) Alienware QD-OLED 34” 3440x1440 144hz 10Bit VESA HDR 400
Case TT Kandalf L.C.S.
Audio Device(s) Soundblaster ZX/Logitech Z906 5.1
Power Supply Seasonic TX~’850 Platinum
Mouse G502 Hero
Keyboard G19s
VR HMD Oculus Quest 3
Software Win 11 Pro x64
Admittedly, it's a smart choice. This way the simple-minded folk won't be bothered by scary numbers that they don't understand.
As a Vega owner I learned not to look at the hotspot; it just makes you sad. That said, I run a custom fan curve on my Nitro+ and keep mine around 90 °C.
 
Joined
Apr 21, 2010
Messages
578 (0.11/day)
System Name Home PC
Processor Ryzen 5900X
Motherboard Asus Prime X370 Pro
Cooling Thermaltake Contac Silent 12
Memory 2x8gb F4-3200C16-8GVKB - 2x16gb F4-3200C16-16GVK
Video Card(s) XFX RX480 GTR
Storage Samsung SSD Evo 120GB -WD SN580 1TB - Toshiba 2TB HDWT720 - 1TB GIGABYTE GP-GSTFS31100TNTD
Display(s) Cooler Master GA271 and AoC 931wx (19in, 1680x1050)
Case Green Magnum Evo
Power Supply Green 650UK Plus
Mouse Green GM602-RGB ( copy of Aula F810 )
Keyboard Old 12 years FOCUS FK-8100

NVIDIA GPUs are designed to operate reliably up to their maximum specified operating temperature. This maximum temperature varies by GPU, but is generally in the 105C range (refer to the nvidia.com product page for individual GPU specifications). If a GPU hits the maximum temperature, the driver will throttle down performance to attempt to bring temperature back underneath the maximum specification. If the GPU temperature continues to increase despite the performance throttling, the GPU will shutdown the system to prevent damage to the graphics card. Performance utilities such as EVGA Precision or GPU-Z can be used to monitor temperature of NVIDIA GPUs. If a GPU is hitting the maximum temperature, improved system cooling via an added system fan in the PC can help to reduce temperatures.

If one spot is under 105 °C then it's OK until it throttles. This article doesn't refer to the entire die's temp either; rather, to any one spot hitting it.
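Read as control logic, the FAQ above describes a two-stage ladder: throttle first, shut down only if the temperature keeps climbing anyway. A minimal Python sketch of that policy (thresholds and names are invented for illustration, not Nvidia's driver code):

MAX_TEMP_C = 105.0  # "generally in the 105C range", per the FAQ

def thermal_step(temp_c, still_rising, clock_mhz):
    """One control tick: return (new_clock_mhz, shutdown)."""
    if temp_c < MAX_TEMP_C:
        return clock_mhz, False             # within spec: no intervention
    if not still_rising:
        return int(clock_mhz * 0.9), False  # stage 1: throttle performance
    return 0, True                          # stage 2: shut down to prevent damage

print(thermal_step(98.0, False, 1800))   # (1800, False)
print(thermal_step(106.0, False, 1800))  # (1620, False) -- throttling
print(thermal_step(108.0, True, 1620))   # (0, True) -- protective shutdown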
 
Last edited:
Joined
Aug 8, 2019
Messages
430 (0.22/day)
System Name R2V2 *In Progress
Processor Ryzen 7 2700
Motherboard Asrock X570 Taichi
Cooling W2A... water to air
Memory G.Skill Trident Z3466 B-die
Video Card(s) Radeon VII repaired and resurrected
Storage Adata and Samsung NVME
Display(s) Samsung LCD
Case Some ThermalTake
Audio Device(s) Asus Strix RAID DLX upgraded op amps
Power Supply Seasonic Prime something or other
Software Windows 10 Pro x64
rubbing eyes

So how many people around here are still running 7970s/R9 2xx cards, which are 6-8 years old?

I have a system that dailies an HD 5830, and a bunch of HD 5450s floating around. The other half has a 650 Ti in their system. My T61 has an HD 5450 in the dock and an NVS 140M on its mobo.

I know a guy who still games on his R9 290, with a 390X BIOS.

I run a Fury X, and it was among the first 99 boards made. It's running a modified BIOS that lifts the power limit, undervolts, tightens the HBM timings, and performs far better than stock.

The Fury series, like the Vegas, needs water cooling to perform its best. Vega64/V2/56 on air is just disappointing, because they are loud and/or throttle everywhere.

I have had a few GPUs that were bitten by the NV soldergate...

Toshiba laptop with a 7600 GT: replaced, plus the increased-clamp-pressure mods they directed us to use.

ThinkPad T61 and its Quadro NVS 140M: Lenovo made Nvidia remake the GPU with proper solder. I hunted one down and acquired it for myself.

But ATI/AMD aren't exempt...

My most notorious death card was a PowerColor 9600 XT... that card died within 2-3 weeks every time, and I had to RMA it 3 times. I still refuse to use anything from TUL/PowerColor because of the horrible RMA process, horrible customer service, and their insistence on using slow UPS, so I got nailed with a $100 brokerage bill every time. I sold it cheap after the last RMA; the guy messaged me, angry, a month later that it died on him.

My uncle got 2 years out of a 2900 XT... It was BBA... lol
 
Last edited:
Joined
Sep 17, 2014
Messages
22,465 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000



If one spot is under 105 °C then it's OK until it throttles. This article doesn't refer to the entire die's temp either; rather, to any one spot hitting it.

One of the spots we know of is not the hottest point in an Nvidia GPU, because they throttle way earlier than that; but more importantly, they throttle more rigorously than AMD's Navi does. The 'throttle point' for an Nvidia GPU is 84 °C on the die sensor. Will there be hotter spots on it? Sure. But when 84 °C is sustained and GPU Boost cannot lower it reliably with small voltage drops and by dropping boost bins in increments of 13 MHz, it will go down hard on the voltage and kick off half, or all, of your boost clock until things settle down. On top of that, it takes away a few boost bins from your highest clock; it did that already, because temperature also makes it lose boost bins.

Now, enter Navi: if you don't adjust the fan profile, the card will simply continuously bump into the red zone, right up to max spec. There is no safeguard to kick it down a notch consistently. Like a mad donkey it will bump its head into that same rock every time, all the time.

The way the two boost mechanisms work is still quite different, and while AMD finally managed to get a form of boost going that can utilize the available headroom, it relies on cooling far more than GPU Boost does; what's more, it also won't boost higher if you give it temperature headroom. Bottom line: they've still got a very 'rigid' way of boosting versus a highly flexible one.

If you had to capture it in one sentence: Nvidia's boost wants to stay as far away from the throttle point as it can to do its best, while AMD's boost doesn't care how hot it gets in order to maximize performance, as long as it doesn't melt.
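A toy Python model of the contrast being drawn here (step sizes and limits are assumptions for illustration; neither vendor publishes its actual boost algorithm):

# Invented numbers, illustration only: two ways to chase a thermal limit.
def nvidia_style(temp_c, clock_mhz):
    # Back away from an 84 C die-sensor target in small 13 MHz boost bins.
    return clock_mhz - 13 if temp_c >= 84 else clock_mhz + 13

def navi_style(hotspot_c, clock_mhz):
    # Ramp opportunistically until any sensor touches the 110 C junction cap.
    return clock_mhz - 25 if hotspot_c >= 110 else clock_mhz + 25

print(nvidia_style(85, 1900))  # 1887: sheds a bin at the 84 C target
print(navi_style(110, 1950))   # 1925: bounced off the 110 C junction cap
# One holds a buffer below its limit; the other rides its limit flat out.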
 
Joined
Apr 21, 2010
Messages
578 (0.11/day)
System Name Home PC
Processor Ryzen 5900X
Motherboard Asus Prime X370 Pro
Cooling Thermaltake Contac Silent 12
Memory 2x8gb F4-3200C16-8GVKB - 2x16gb F4-3200C16-16GVK
Video Card(s) XFX RX480 GTR
Storage Samsung SSD Evo 120GB -WD SN580 1TB - Toshiba 2TB HDWT720 - 1TB GIGABYTE GP-GSTFS31100TNTD
Display(s) Cooler Master GA271 and AoC 931wx (19in, 1680x1050)
Case Green Magnum Evo
Power Supply Green 650UK Plus
Mouse Green GM602-RGB ( copy of Aula F810 )
Keyboard Old 12 years FOCUS FK-8100
One of the spots we know of is not the hottest point in an Nvidia GPU, because they throttle way earlier than that; but more importantly, they throttle more rigorously than AMD's Navi does. The 'throttle point' for an Nvidia GPU is 84 °C on the die sensor. Will there be hotter spots on it? Sure. But when 84 °C is sustained and GPU Boost cannot lower it reliably with small voltage drops and by dropping boost bins in increments of 13 MHz, it will go down hard on the voltage and kick off half, or all, of your boost clock until things settle down. On top of that, it takes away a few boost bins from your highest clock; it did that already, because temperature also makes it lose boost bins.

Please provide a source for 84 °C. It's the first time I've heard of it.
 
Joined
Jan 8, 2017
Messages
9,438 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
as long as it doesn't melt.

Your assumption continues to be that AMD must not know what they are doing and that they have set limits which are not safe, even though they have explicitly stated that they are. If their algorithm figures out that it's a valid move to keep clocks up, and power usage and temperature do not keep rising, then an equilibrium has been reached; this conclusion is elementary.

I do not understand at all how you conclude that their algorithm must be worse because it does not make frequent adjustments like Nvidia's. If anything, this is proof that their hardware is more balanced and no large adjustments are needed to keep the GPU at its desired operating point.

There is no safeguard to kick it down a notch consistently.

Again, if their algorithm figures out that it's a valid move to do that, then an equilibrium has been reached. There is no need for any additional interventions. The only safeguards needed after that are for thermal shutdown and whatnot, and I am sure those work just fine; otherwise the cards would all burn away from the moment they are turned on.

Do not claim their cards do not have safeguards in this regard; it's simply untrue. You know better than this, come on.

If you had to capture it in one sentence: Nvidia's boost wants to stay as far away from the throttle point as it can to do its best, while AMD's boost doesn't care how hot it gets in order to maximize performance, as long as it doesn't melt.

You are simply wrong and I am starting to question whether or not you really understand how these things work.

They both seek to maximize performance while staying as far away from the throttle point as possible, but only if that's the right thing to do. If you go back and look at the reference models of Pascal cards, they all immediately hit their temperature limit and stay there, just the same way the 5700 XT does. Does that mean they didn't care how hot those got?

Of course, the reason I brought up Pascal is that those have the same blower coolers. They don't use those anymore, but let's see what happens when Turing GPUs do have that kind of cooling:

[Attachment 129319: Turing blower-card temperatures]


What a surprise: they also hit their temperature limit. So much for Nvidia wanting to stay as far away from the throttle point, right?

This is not how these things are supposed to work. Their goal is not just to stay as far away from the throttle point as possible; if you do that, you're going to have a crappy boost algorithm. Their main concern is to maximize performance, even if that means staying right at the throttling point.
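For what it's worth, the equilibrium argument is easy to show with a toy closed loop in Python (all coefficients made up): a controller that only nudges clocks up below the limit and down above it settles right at the limit, and that is the intended operating point:

# Toy feedback loop: clocks settle where heating balances cooling,
# i.e. pinned at the temperature limit. Coefficients are invented.
def simulate(limit_c=110.0, ambient_c=30.0, cooling=0.9, heat_per_mhz=0.04):
    clock, temp = 1600.0, ambient_c
    for _ in range(200):
        temp += heat_per_mhz * clock - cooling * (temp - ambient_c)
        clock += 10 if temp < limit_c else -10  # same simple rule both ways
    return clock, temp

clock, temp = simulate()
print(f"settles near {clock:.0f} MHz at {temp:.0f} C")  # hovers at the limit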
 
Last edited:
Joined
Sep 17, 2014
Messages
22,465 (6.03/day)
Location
The Washing Machine
Processor 7800X3D
Motherboard MSI MAG Mortar b650m wifi
Cooling Thermalright Peerless Assassin
Memory 32GB Corsair Vengeance 30CL6000
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Lexar NM790 4TB + Samsung 850 EVO 1TB + Samsung 980 1TB + Crucial BX100 250GB
Display(s) Gigabyte G34QWC (3440x1440)
Case Lian Li A3 mATX White
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse Steelseries Aerox 5
Keyboard Lenovo Thinkpad Trackpoint II
Software W11 IoT Enterprise LTSC
Benchmark Scores Over 9000
Your assumption continues to be that AMD must not know what they are doing and that they have set limits which are not safe, even though they have explicitly stated that they are. If their algorithm figures out that it's a valid move to keep clocks up, and power usage and temperature do not keep rising, then an equilibrium has been reached; this conclusion is elementary.

I do not understand at all how you conclude that their algorithm must be worse because it does not make frequent adjustments like Nvidia's. If anything, this is proof that their hardware is more balanced and no large adjustments are needed to keep the GPU at its desired operating point.



Again, if their algorithm figures out that it's a valid move to do that, then an equilibrium has been reached. There is no need for any additional interventions. The only safeguards needed after that are for thermal shutdown and whatnot, and I am sure those work just fine; otherwise the cards would all burn away from the moment they are turned on.

Do not claim their cards do not have safeguards in this regard; it's simply untrue. You know better than this, come on.



You are simply wrong and I am starting to question whether or not you really understand how these things work.

They both seek to maximize performance while staying as far away from the throttle point as possible, but only if that's the right thing to do. If you go back and look at the reference models of Pascal cards, they all immediately hit their temperature limit and stay there, just the same way the 5700 XT does. Does that mean they didn't care how hot those got?

Of course, the reason I brought up Pascal is that those have the same blower coolers. They don't use those anymore, but let's see what happens when Turing GPUs do have that kind of cooling:

[Attachment 129319: Turing blower-card temperatures]

What a surprise: they also hit their temperature limit. So much for Nvidia wanting to stay as far away from the throttle point, right?

This is not how these things are supposed to work. Their goal is not just to stay as far away from the throttle point as possible; if you do that, you're going to have a crappy boost algorithm. Their main concern is to maximize performance, even if that means staying right at the throttling point.

You missed the vital part where I stressed that Navi does not clock further when you give it temperature headroom, which destroys the whole theory about 'equilibrium'. The equilibrium does not max out performance at all; it just boosts to a predetermined cap that you cannot even manually OC beyond. 0.7%: that is margin of error.

And the reference cooler is balanced out so that, in ideal (test bench) conditions, it can remain just within spec without burning itself up too quickly. I once again stress the memory IC temps, which are easily glossed over but very relevant here with respect to longevity. AIB versions then confirm the behaviour, because all they really manage is a temp drop with no perf gain.

And, ehm... about AMD not knowing what they're doing... we are in Q2 2019 and they finally managed to get their GPU boost to 'just not quite as good as' Pascal. You'll excuse me if I lack confidence in their expertise with this. Ever since GCN they have been struggling with power state management. Please, we are WAY past giving AMD the benefit of the doubt when it comes to their GPU division. They've stacked mishap upon failure for years, and resources are tight. PR, strategy, timing, time to market, technology... none of it was good, and even Navi is not a fully revamped arch; it's always 'in development', like an eternal beta... and it shows.

Here is another graph to drive the point home that Nvidia's boost is far better.

NAVI:
Note the clock spread while the GPU keeps on pushing 1.2 V, and not just at 1.2 V but at each interval. It's a mess, and it underlines that voltage control is not as directly linked to GPU clock as you'd want.

There is also still an efficiency gap between Navi and Pascal/Turing, despite a node advantage. This is where part of that gap comes from.

Ask yourself this: where do you see an equilibrium here? This 'boost' runs straight into a heat wall and then panics all the way down to 1800 MHz, while losing out on good ways to drop temp, like dropping volts. And note: this is an AIB card.
[Attachment 129360: Navi voltage/clock scatter]




Turing:

You can draw up a nice curve here to capture a trend that relates voltage to clocks, all the way up to the throttle target (and never beyond it, under normal circumstances; GPU Boost literally keeps it away from the throttle point before engaging in actual throttling). At each and every interval, GPU Boost finds the optimal clock to settle at. No weird searching and no voltage overkill for the given clock at any point in time. Result: lower temps, higher efficiency, maximized performance, and (OC) headroom if temps and voltages allow.
[Attachment 129343: Turing voltage/clock curve]


People frowned upon Pascal when it launched for its 'minor changes' compared to Maxwell, but what they achieved there was pretty huge; it was Nvidia's XFR. Navi is not AMD's GPU XFR, and if it is, it's pretty shit compared to their CPU version.

And... surprise for you, apparently, but that graph you linked contains a 2080 in OC mode doing... 83 °C. 1 °C below throttle, settled at the max achievable clock speed WITHOUT throttling.

Please provide a source for 84 °C. It's the first time I've heard of it.


[Attachment 129345: Nvidia GPU throttle point vs. maximum temperature]


So, as you can see, Nvidia GPUs throttle 6 °C before they reach 'out of spec', or permanent damage. In addition, they throttle such that they stop exceeding the throttle target from then onwards under a continuous load. Fun fact: the Titan X is on a blower too... with a 250 W TDP.
 
Last edited:
Joined
Aug 6, 2009
Messages
1,162 (0.21/day)
Location
Chicago, Illinois
So, I realize this thread has died down a bit, but I just thought of something and possibly realized something.

Long gone are the days when a new generation of video cards or processors offered big, or even relatively impressive, performance gains at the price of the hardware it's replacing. Nowadays it seems like all they do is give just enough of an upgrade to justify the existence of said products, or at least in their (AMD/Intel/Nvidia) minds.

Sad times.
 
Joined
Jul 10, 2015
Messages
754 (0.22/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
Don't let AMD fool you. 7 nm GeForce will bring what you are missing.
 
Joined
Aug 6, 2009
Messages
1,162 (0.21/day)
Location
Chicago, Illinois
Don't let AMD fool you. 7 nm GeForce will bring what you are missing.

I sure hope so. I've been waiting so long to be wowed the way we used to be in the old days. Like when we went from AGP to PCI Express. Remember those days? What a time to be into computers. Hell... when every issue of Maximum PC seemed amazing and full of great content. I really miss those days; things have become so stale. What really sucks is that so many of us trash each other on this forum, and this seems to be the best place we have for us. It really makes me sad (not trolling), and not many things do.
 
Joined
Jan 8, 2017
Messages
9,438 (3.27/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
Don't let AMD fool you. 7 nm GeForce will bring what you are missing.

The more you buy the more you save.

Do you, by any chance, own a leather jacket, and is your favorite color green?
 
Joined
Jul 10, 2015
Messages
754 (0.22/day)
Location
Sokovia
System Name Alienation from family
Processor i7 7700k
Motherboard Hero VIII
Cooling Macho revB
Memory 16gb Hyperx
Video Card(s) Asus 1080ti Strix OC
Storage 960evo 500gb
Display(s) AOC 4k
Case Define R2 XL
Power Supply Be f*ing Quiet 600W M Gold
Mouse NoName
Keyboard NoNameless HP
Software You have nothing on me
Benchmark Scores Personal record 100m sprint: 60m
No, there's no need to buy one to see perf/W graphs; W1zzard does that for us.

Th3pwne3, I feel you. Those days are gone, but I am sure the jump from 16 nm to 7 nm will bring much bigger differences than RDNA 1.0 showed. Ampere and/or RDNA 2.0 to the rescue in 2020. 28 nm to 14/16 nm brought impressive results: cards like the 480 and 1060 for 250 EUR with 500-EUR 980 perf. Well, the 280-EUR Turing 1660 Ti almost mimics that, with 450-EUR 1070 perf, 16 nm vs 12 nm. FFS, 7 nm Navi is 400 EUR and only offers 500-EUR ultra-OCed 2070 perf.
 
Last edited:
Joined
Jan 29, 2020
Messages
59 (0.03/day)
So why then did XFX decide to work on their cards' cons, if those temps are "normal"? Bunch of cr@p!
 

ArrowMarrow

New Member
Joined
Feb 14, 2020
Messages
1 (0.00/day)
So why then did XFX decide to work on their cards' cons, if those temps are "normal"? Bunch of cr@p!

As one can read above, it's an issue of mentality; nothing is running too hot. The VII, with the hotspot maxing at 110°, VDDC at ~90°, and GPU temp at around 80-85° (under full load, at stock settings, with sufficient airflow through an average case), is behaving as expected and stated.
There is no point in discussing readouts as "Nvidia vs AMD". It's obvious and known that Radeons generally run at a higher temperature level.
Important here is to mention, or to repeat, as has been pointed out by many others here: temperature is relative and therefore has to be evaluated accordingly. Meaning: as far as I know, the VIIs are laid out to run at ~125°C (don't ask which part of the card; answer: probably the parts mentioned above, as they are the ones generating the most heat).
So again, temperatures should be compared between devices with the same or derivative architectures. I mean, of course one can compare (almost) anything, but more as a basis for discussion and opinion. For example: the tires and wheels of a sports car get much hotter than a family sedan's. Are high temperatures bad for the wheels? Yes and no. But for a sports car it's needed, no question (brakes, tire grip...). So again, it's relative, a matter of perspective.
The last point goes with the one above: the discussion about the sensors/readout points. I want to point out that I don't have any qualified knowledge or education on this subject per se, but it's actually simple logic: the nearer you get with the sensors to where energy transfers to heat, the higher the readout you will get. Simple enough, right? As mentioned above, if other cards have architectures, and within those, sensors that are related/derivative/similar, one could compare. But how can anybody compare readouts of different sensors at different places? In short: the higher the delta between the near-core temperatures and the package temperature, the better. With AMD's sensors/sensor placement and their readouts, users possess more detailed info about heat generation and travel.
So, from what I've read in the posts before, and according to what I've put together above, the VIIs (at least) have more accurate readouts.
And finally, our concern is the package temp; the rest one should check every now and then to keep a healthy card.
And finally, about buying a GPU and wanting to keep it... 3-4 or even 7 years, some had written.
BS. If we're talking about high-end cards here, it's very simple: for 90% of us the life span is 1-2 years, for 5% it's 0.5-1 year, and for 5% it's 3 years max. Any GPU you buy now (except for the Titans & Co., maybe) will in 3 years be good enough for your 8-9 year old brother to play Minecraft... that's a fact. To the ones complaining/worrying and making romantic comments about wanting to keep them 7 years and so on, my answer is: then keep them. Have fun. And keep yourself busy complaining and being toxic about the cards that became slower than when you purchased them. (*headshake* People who write crap like that, and sadly think like that, need and always will need something to complain and fight about; it's not an issue of knowledge or even opinion. It's an issue of character...)

Finally: I bought a Strix 1080 Ti in Q1 last year (before that, 2x Sapphire RX 550+ Nitro+ SE in CrossFire). Two days later I went to buy the VII instead. Why? I have the LG 43" monitor centered and 2 AOC monitors left and right in portrait orientation, and I realized... AMD's multi-monitor software is better than Nvidia's... because Nvidia's doesn't support that setup. So because of AMD's "Radeon Adrenalin" app it was a simple decision, and to this day I've not regretted it nor had any issues with it.
 
Joined
Aug 20, 2007
Messages
21,476 (3.40/day)
System Name Pioneer
Processor Ryzen R9 9950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage Intel 905p Optane 960GB boot, +2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches + PBT DS keycaps
Software Gentoo Linux x64 / Windows 11 Enterprise IoT 2024
So why then did XFX decide to work on their cards' cons, if those temps are "normal"? Bunch of cr@p!

That was memory temp, not even die...
 