# 110°C Hotspot Temps "Expected and Within Spec", AMD on RX 5700-Series Thermals



## btarunr (Aug 13, 2019)

AMD this Monday in a blog post demystified the boosting algorithm and thermal management of its new Radeon RX 5700 series "Navi" graphics cards. These cards are beginning to become available in custom designs from AMD's board partners, but were sold only as reference-design cards for over a month after their July 7 launch. The thermal management of these cards spooked many early adopters accustomed to seeing temperatures below 85 °C on competing NVIDIA graphics cards, with the Radeon RX 5700 XT posting GPU "hotspot" temperatures well above 100 °C, regularly hitting 110 °C, and sometimes even touching 113 °C with stress-testing applications such as FurMark. In its blog post, AMD stated that 110 °C hotspot temperatures under "typical gaming usage" are "expected and within spec."

AMD also elaborated on what constitutes the "GPU hotspot," aka "junction temperature." Apparently, the "Navi 10" GPU is peppered with an array of temperature sensors at different physical locations across the die. The maximum temperature reported by any of those sensors becomes the hotspot; in that sense, the hotspot isn't a fixed location on the GPU. Legacy "GPU temperature" measurements on past generations of AMD GPUs relied on a thermal diode at a fixed location on the die which AMD predicted would become the hottest under load. Starting with "Polaris" and "Vega," AMD moved toward picking the hottest value from a network of diodes spread across the GPU and reporting it as the hotspot.
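AMD's description boils down to a simple max-reduction over the sensor array. A minimal sketch (illustrative only; the sensor counts, locations, and readings below are invented, not AMD firmware values):

```python
# Illustrative sketch of a max-over-sensors hotspot readout.
# Sensor locations and readings are invented, not AMD firmware values.
from dataclasses import dataclass

@dataclass
class DieSensor:
    location: tuple  # (x, y) position on the die, arbitrary units
    temp_c: float    # current reading in degrees Celsius

def hotspot(sensors):
    """Return the hotspot temperature and the location currently reporting it."""
    hottest = max(sensors, key=lambda s: s.temp_c)
    return hottest.temp_c, hottest.location

# Three sensors; the hotspot is simply whichever reading is highest right now.
readings = [
    DieSensor((0, 0), 92.0),
    DieSensor((3, 1), 110.0),
    DieSensor((1, 4), 101.5),
]
temp, loc = hotspot(readings)  # temp == 110.0, loc == (3, 1)
```

Because the hotspot is whichever sensor reads highest at that instant, its physical location can wander around the die as the workload shifts.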

On Hotspot, AMD writes: "Paired with this array of sensors is the ability to identify the 'hotspot' across the GPU die. Instead of setting a conservative, 'worst case' throttling temperature for the entire die, the Radeon RX 5700 series GPUs will continue to opportunistically and aggressively ramp clocks until any one of the many available sensors hits the 'hotspot' or 'Junction' temperature of 110 degrees Celsius. Operating at up to 110C Junction Temperature during typical gaming usage is expected and within spec. This enables the Radeon RX 5700 series GPUs to offer much higher performance and clocks out of the box, while maintaining acoustic and reliability targets."

AMD also commented on the significantly increased granularity of clock speeds that improves the GPU's power management. The company transitioned from fixed DPM states to a highly fine-grained clock-speed management system that takes into account load, temperatures, and power to push out the highest possible clock speeds for each component. "Starting with the AMD Radeon VII, and further optimized and refined with the Radeon RX 5700 series GPUs, AMD has implemented a much more granular 'fine grain DPM' mechanism vs. the fixed, discrete DPM states on previous Radeon RX GPUs. Instead of the small number of fixed DPM states, the Radeon RX 5700 series GPU have hundreds of Vf 'states' between the bookends of the idle clock and the theoretical 'Fmax' frequency defined for each GPU SKU. This more granular and responsive approach to managing GPU Vf states is further paired with a more sophisticated Adaptive Voltage Frequency Scaling (AVFS) architecture on the Radeon RX 5700 series GPUs," the blog post reads.
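The quoted mechanism can be caricatured in a few lines: instead of a handful of fixed DPM states, the controller picks the highest of many V/f points whose estimated power fits the budget while the junction stays under 110 °C. This is a hypothetical sketch; the function name, the V/f numbers, and the power model are invented for illustration and are far simpler than AMD's actual AVFS controller.

```python
# Hypothetical "fine grain DPM" caricature: among many V/f points, pick the
# highest clock whose estimated power fits the budget while the junction
# temperature stays under the 110 °C limit. States and limits are invented.
def pick_vf_state(states, power_limit_w, tjunction_c, tjunction_max_c=110.0):
    """states: (clock_mhz, voltage_v, est_power_w) tuples, ascending by clock."""
    best = states[0]  # always fall back to the lowest state
    for state in states:
        clock_mhz, voltage_v, est_power_w = state
        if est_power_w <= power_limit_w and tjunction_c < tjunction_max_c:
            best = state
    return best

# A real curve would have hundreds of states between idle and Fmax;
# this coarse handful is just for illustration.
vf_curve = [(800, 0.75, 90), (1400, 0.90, 140), (1750, 1.00, 180), (1980, 1.20, 230)]
chosen = pick_vf_state(vf_curve, power_limit_w=185, tjunction_c=95)
# chosen == (1750, 1.00, 180): the highest state inside the 185 W budget
```

With hundreds of such states the card can ride right up against its power and temperature limits instead of snapping between a few coarse steps.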

*View at TechPowerUp Main Site*


----------



## er557 (Aug 13, 2019)

Radeons have always run hot, but this is ludicrous.


----------



## Zubasa (Aug 13, 2019)

er557 said:


> Radeons have always run hot, but this is ludicrous.


It is hard to compare to the competition, because nVidia GPUs do not have a TJunction sensor at all.
Without knowing where the temp sensor on nVidia GPUs is located, there really is no valid comparison.
The edge temp on AMD GPUs, aka the "GPU" readout, is much closer to what you typically expect.

Edit: It is not a single TJunction sensor; the TJunction / hotspot readout is just the highest reading out of many different sensors spread across the die.
In the case of the Radeon VII there are 64 of them. It is not necessarily the same area of the GPU die that is getting hot all the time.


----------



## spnidel (Aug 13, 2019)

er557 said:


> Radeons have always run hot, but this is ludicrous.


it's not; your post implies that only Radeon GPUs can reach up to 110 °C at a certain point in the silicon, which doesn't make any sense and isn't the case


----------



## Jism (Aug 13, 2019)

er557 said:


> Radeons have always run hot, but this is ludicrous.



A GPU or CPU is so complex that you cannot have one fixed temperature for the complete core. There's always a certain part of the core or actual chip that runs hotter than the rest. It's designed to withstand 110 degrees.

You don't tell me that an Nvidia GPU or Intel CPU doesn't have a hotspot either. If HW tools were able to capture the data from those sensors, then we could see in real time which part of the GPU is getting hotter, and thus improve thermals by, for example, reworking the thermal paste between the cooler and chip.


----------



## er557 (Aug 13, 2019)

In that case I would only buy such a card from third-party AIBs with killer cooling.


----------



## las (Aug 13, 2019)

Zubasa said:


> It is hard to compare to the competition, because nVidia GPUs do not have a TJunction sensor at all.
> Without knowing where the temp sensor on nVidia GPUs is located, there really is no valid comparison.
> The edge temp on AMD GPUs, aka the "GPU" readout, is much closer to what you typically expect.



Most Nvidia cards are cool and quiet for a reason, lower temps overall.


----------



## Zubasa (Aug 13, 2019)

er557 said:


> In that case I would only buy such card from 3rd party AIB's with killer cooling


The reviews are out; go read them yourself.
The reference cards are not actually overheating / throttling.



las said:


> Most Nvidia cards are cool and quiet for a reason, lower temps overall.


That reason being? As long as the GPU chip is consuming similar power, it is putting out a similar amount of heat energy.
The cooler / thermal transfer is all there is to it.


----------



## las (Aug 13, 2019)

Zubasa said:


> The reviews are out; go read them yourself.
> The reference cards are not actually overheating / throttling.
> 
> 
> ...



The GPU is not the only thing using power...

*Galax GeForce RTX 2060 Super EX Review (www.techpowerup.com):* "The Galax GeForce RTX 2060 Super EX retails for the same price as the NVIDIA Founders Edition, yet comes overclocked out of the box, and features a much better cooler. The card is actually the coolest RTX 2060 Super we tested so far, and it includes idle fan stop, too."


The 5700 XT uses more power than the 2070 Super in gaming on average, while performing worse. The 5700 XT is slower, hotter, and louder.


----------



## er557 (Aug 13, 2019)

I wouldn't run FurMark on this card unless I want to cook breakfast


----------



## Zubasa (Aug 13, 2019)

las said:


> The GPU is not the only thing using power...
> 
> 
> 
> ...


We are in a post about the hotspot, which is a sensor reading on the GPU die.
The VRM efficiency etc. affects the cooling, not the GPU die itself.



er557 said:


> I wouldn't run furmark on this card unless I want to cook breakfast


Why would you want to run FurMark on any card except to heat it up?
FYI, even if you put a waterblock on a stock GPU, it still puts out a similar amount of heat despite running up to 40 °C cooler.


----------



## yeeeeman (Aug 13, 2019)

I like how all the noobs that run these sites and simple customers try to dissect what real engineers have developed and question their decisions. The fu**? If you think you are better engineers, get a job at AMD and start improving things...
First we had the stupid articles about 1.5 V on Ryzen CPUs being out of spec, blablabla. Do you all think AMD has hired monkeys to make chips?
Please stop being smart asses and play the fuc**** games you bought these CPUs and GPUs for.


----------



## Jism (Aug 13, 2019)

yeeeeman said:


> I like how all the noobs that run these sites and simple customers try to dissect what real engineers have developed and question their decisions. The fu**? If you think are better engineers get a job at AMD and start improving things...
> First we had the stupid articles of 1.5V on Ryzen CPU that is out of spec, blablabla. Do you all think AMD has hired monkeys to make chips?
> Please stop being smart asses and play the fuc**** games you bought these cpus and gpus for.



Yes. But this FUD is literally generated by news websites to generate more clicks; the same thing goes on on YouTube. Cards are tested before being shipped as an actual product: they are put into ovens at a constant 40 to 60 degrees and run under high load, and they are thrown into worst-case scenarios to guarantee stability. Cards are designed to have the VRM running at 100 degrees. Chips have hardware protection to prevent them from being fried the moment someone starts their PC without a heatsink attached to the GPU.

Boost clocks are similar technology to Ryzen CPUs'. The currents (power limit), temperatures (thermals), and all that are constantly monitored. Undervolting is not needed; however, due to chip-to-chip variation as seen in the Vega series, undervolting can help where it lets base / boost clocks be sustained compared to stock.

Give me one reason why anyone needs a 12-phase VRM for their CPU or GPU. The thing is, you won't find a real-world situation where you need that 12-phase VRM. Even under LN2 it is still sufficient (even without heatsinks, too) to deliver the power the GPU or CPU needs. Sick and tired of those news posts.

It would be cool though, @Wizzard, to have software able to read out all those tiny sensors as well, preferably with a location on the chip, so we could see in real time which part of the chip is hottest. Don't y'all think?


----------



## ZoneDymo (Aug 13, 2019)

las said:


> The GPU is not the only thing using power...
> 
> 
> 
> ...



and it's 100-150 dollars cheaper... so why are you comparing the two?
If anything you should compare it to the RTX 2060 Super (like in your link... was the 2070 a typo?), and then the 5700 XT is overall the better option.


----------



## Anymal (Aug 13, 2019)

Nvidia's 7 nm, or even 7 nm+ or 5 nm, will demolish first-gen Navi.


----------



## Vayra86 (Aug 13, 2019)

Jism said:


> Yes. But this fud is litterally generated by news websites as well to generate more clicks. The same boat goes on onto youtube. Cards are tested before being shipped as a actual product. Cards are being put into ovens at a constant 40 to 60 degrees and have it running a high load. Cards are being tested and thrown in the worst case scenarios to guarantee stability and working. Cards are designed to have a VRM running on 100 degrees. Chips have certain hardware protection to prevent it from being fried the moment someone starts their PC without a heatsink attached to their GPU.
> 
> Boost clocks are simular technology as Ryzen CPU's. The current(s) (power limit), temperatures (thermals) and all that are constant monitored. Undervolt is not needed, however, due to a production of different chips that was seen in the Vega series, undervolt could help in a situation where base / boost clocks are sustained compared to the original.
> 
> ...



And yet...
- Radeon VII hotspot was fixed with some added mounting pressure, or at least, substantially improved upon
- Not a GPU gen goes by without launch (quality control) problems, be it from a bad batch or small design errors that get fixed through software (Micron VRAM, 2080 Ti space invaders, bad fan idle profiles, GPU power modes not working correctly, drawing too much power over the PCIe slot, etc. etc.)
- AMD is known for several releases with above average temperature-related long term fail rates

As long as companies are not continuously delivering perfect releases, we have reason to question everything out of the ordinary, and 110C on the die is a pretty high temp for silicon and the components around it aren't a fan of it either. It will definitely not _improve_ the longevity of this chip, over, say, a random Nvidia chip doing 80C all the time. You can twist and turn that however you like but we are talking about the same materials doing the same sort of work. And physics don't listen to marketing.


----------



## cucker tarlson (Aug 13, 2019)

1.5 V spikes at idle, 110-degree hotspots, all seems fine for AMD.


----------



## Vayra86 (Aug 13, 2019)

cucker tarlson said:


> 1.5v spikes in idle,110 degree hotspots,all seems fine for amd.



No you misunderstand, none of this is true and everybody does this, you just never saw it because AMD is the only one doing temp sensors right...


Seriously people.


----------



## Jism (Aug 13, 2019)

Vayra86 said:


> And yet...
> - Radeon VII hotspot was fixed with some added mounting pressure, or at least, substantially improved upon
> - Not a GPU gen goes by without launch (quality control) problems, be it from a bad batch or small design errors that get fixed through software (Micron VRAM, 2080ti space invaders, bad fan idle profiles, gpu power modes not working correctly, etc etc.)
> - AMD is known for several releases with above average temperature-related long term fail rates.



Yes, improved. But know that Vega with HBM was prone to cracking if the mounting pressure was too high; the interposer or HBM would simply fail. That's why AMD went the safe route. Every GPU you see these days is mounted with a certain force, but not too tight, if you know what I mean. Any GPU could be brought to "better" temperatures by adding washers. It's no secret sauce either.

"- AMD is known for several releases with above average temperature-related long term fail rates."

I do not really agree. As long as the product works within spec and no failure occurs, or it at least survives its warranty period, what is wrong with that? It's not like you're going to use your video card for longer than 3 years. You could always tweak the card for lower temps; I simply slap on an AIO water cooler and call it a day. GPU hardware is designed to run "hot". Haven't you seen the small heatsinks they apply to the FirePro series? Those are single-slot coolers with small fans that you would see in laptops and such.


----------



## Zubasa (Aug 13, 2019)

Anymal said:


> Nvidias 7nm or even 7nm+ or 5nm wil demolish first Navi.


Newer unreleased / not even announced GPU demolishes older GPUs, such insight much wow.


----------



## Vayra86 (Aug 13, 2019)

Jism said:


> at least survives it's warranty period what is wrong with that? *It's not like your going to use your videocard for longer then 3 years*. You could always tweak the card to have lower temps. I simply slap on a AIO watercooler and call it a day. GPU hardware is designed to run 'hot'. Have'nt you seen the small heatsinks that they are applying to the Firepro series? Those are single-slotted coolers with small fans that you would see back in laptops and such.



LOL. You can keep your own weak definition of quality at home then; I'll take GPUs that last 5-7 years at the very least, tyvm. But I get it, AMD only releases midrange far too late in the cycle these days, so yes, you'll definitely upgrade in 3 years' time that way. I guess it's a nice race to the bottom you've got going on. By the way, that AIO isn't free either. Might as well just get a higher-tier card instead, no?

Seriously, people. What the hell are you saying. AMD damage control squad in full effect here, and its preposterous as usual.

This 110C is just as 'in spec' as Intel's K CPUs doing 100 C if you look at them funny. Hot chips are never a great thing.



Zubasa said:


> It is hard to compare to the competition, because nVidia GPUs do not have a TJunction sensor at all.
> Without knowing where the Temp sensor on nVidia GPUs is located, there really is no valid comparison.
> The edge temp on AMD gpus aka "GPU" read out is much closer to what you typically expects.



No, it's not hard; that is why some reviews contain FLIR cam shots, and temps above 100 °C are not unheard of, but right on the die is quite a surprise. We've also seen multiple examples over time where hot cards had much higher return/fail rates; heat radiates out and not just through the heatsink, and VRAM, for example, really is not a big fan of high temps.

Keep in mind the definition of 'in spec' is subject to change and as performance gets harder to extract, goal posts are going to be moved. And it won't benefit longevity, ever. The headroom we used to have, is now used out of the box, for example.


----------



## Zubasa (Aug 13, 2019)

Vayra86 said:


> No its not hard, that is why some reviews contain FLIR cam shots, and temps above 100 C are not unheard of, but right on the die is quite a surprise. We've also seen multiple examples over time where hot cards would have much higher return/fail rates, heat does radiate out and not just through the heatsink, VRAM for example really is not a big fan of high temps.


The fallacy of this argument is that you are treating the GPU die as a 2D object.
A thermal camera measures the surface temperature of the back of the die.
The working transistors are actually on the side bonded to the substrate, facing the PCB. You are assuming the hotspot is just in the middle of the die, close to the visible back side.
In reality the chip has thickness, and even grinding the die down a fraction of a millimeter (0.2 mm) can drop the temperature by a few (5) degrees.
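The thickness point can be checked with a back-of-envelope Fourier conduction estimate. All numbers below are illustrative assumptions (the hotspot power, area, and die thickness are guesses, not measured Navi 10 values):

```python
# Back-of-envelope 1-D Fourier conduction across the die's thickness:
# ΔT = q'' · t / k, with bulk silicon k ≈ 150 W/(m·K).
# Power, hotspot area, and thickness below are illustrative guesses.
def delta_t_across_die(power_w, hotspot_area_m2, thickness_m, k_si=150.0):
    heat_flux = power_w / hotspot_area_m2  # W/m^2
    return heat_flux * thickness_m / k_si

# Assume ~40 W concentrated in a 20 mm^2 hotspot region under 0.5 mm of silicon:
dt = delta_t_across_die(power_w=40, hotspot_area_m2=20e-6, thickness_m=0.5e-3)
# dt ≈ 6.7 °C between the transistors and the back of the die, which is why a
# thermal camera aimed at the package can read noticeably cooler than the
# junction sensors, and why thinning the die shrinks that gradient.
```

Local power density at the hotspot can be several times the die-wide average, which scales this gradient up accordingly.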


----------



## er557 (Aug 13, 2019)

Xuper said:


> The level of Noob/troll in this topic is unbelievable.....that's why I'm more active in anandtech forum.


I didn't see any noobing here, only people explaining what they understand of the OP, and opinions.
You think that posting here that you like another forum better is not trolling?


----------



## Jism (Aug 13, 2019)

Vayra86 said:


> LOL. You can keep your own weak definition of quality home with you then, I'll take GPUs that last 5-7 years at the very least, tyvm. But I get it, AMD only releases midrange far too late in the cycle these days so yes you'll definitely upgrade in 3 years time that way. I guess its a nice race to the bottom you got going on. By the way, that AIO isn't free either. Might as well just get a higher tier card instead, no?
> 
> Seriously, people. What the hell are you saying. AMD damage control squad in full effect here, and its preposterous as usual.



I'm sure that a lot of GPUs last out 5 years as well on stock without any undervolt, even with a dusty heatsink and fan combination. But who's going to play on a GPU that's 5 years old or even older? I'll buy products now, use 'em, and replace 'em, just like a car, just like my kitchen, just like whatever is designed to be replaceable. If that wasn't the case, you would all still be running your Pentium 1s and AMD K5s with their Voodoo chips around them.

As for your fancy heat story, VRMs are designed to withstand a 110-degree operating temperature. It's not really the VRMs that suffer but things like the capacitors sitting right next to them; they have an estimated lifespan based on thermals, and the hotter they run, the shorter their MTBF basically is. I wouldn't recommend playing on a card with a 100-degree VRM with GDDR chips right next to it either, but it works, and there are cards that last many, many years before giving their last frame ever.
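The capacitor-lifespan point can be made concrete with the common "10-degree rule" for electrolytic capacitors: expected life roughly halves for every 10 °C above the rated temperature. A rule of thumb with made-up example ratings, not a datasheet guarantee:

```python
# The "10-degree rule" of thumb for electrolytic capacitors: expected life
# roughly halves for every 10 °C above the rated temperature (and doubles
# for every 10 °C below it). A heuristic, not a datasheet guarantee.
def cap_life_hours(rated_hours, rated_temp_c, actual_temp_c):
    """L = L0 * 2 ** ((T_rated - T_actual) / 10)"""
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A hypothetical 5000 h @ 105 °C capacitor:
life_hot = cap_life_hours(5000, 105, 105)   # 5000.0 h at its rated temp
life_cool = cap_life_hours(5000, 105, 85)   # 20000.0 h when run 20 °C cooler
```

Which is exactly why a VRM area sitting 20 degrees cooler pays off in longevity even when everything is technically "within spec."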

It becomes more and more difficult to cool a small die area. It's why Intel and AMD use an IHS: not just to protect the die from being crushed by overly tight heatsinks or waterblocks, but to distribute the heat more evenly. That's also why GPUs these days come with a copper baseplate, which extracts heat from the chip faster than a material like aluminium does. AMD could release a stock video card with a great cooler, but what's the purpose of that if the chip is designed to run at the 80-degree mark? The fan ramps up anyway if that is the case, and you can set that up in the driver settings as well. Big deal.


----------



## cucker tarlson (Aug 13, 2019)

Jism said:


> but who's going to play with a GPU that's 5 years old or even older? Ill buy products now, use 'm, and replace 'm, just as a car, just as my kitchen, just as whatever thats designed to be replaceable. If that was'nt the case anyone of you would still be running their Pentium 1's and AMD K5's with their Voodoo chips around it.


rubbing eyes

so how many ppl are still running 7970s/R9 2xx cards around here, which are 6-8 years old?


----------



## Vayra86 (Aug 13, 2019)

Jism said:


> I'm sure that alot of GPU's tend to last out 5 years as well on stock without any undervolt and even with a dusty heatsink and fan combination. But who's going to play with a GPU that's 5 years old or even older? Ill buy products now, use 'm, and replace 'm, just as a car, just as my kitchen, just as whatever thats designed to be replaceable. If that was'nt the case anyone of you would still be running their Pentium 1's and AMD K5's with their Voodoo chips around it.
> 
> As for your fancy heat story, VRM's are designed to withstand 110 degrees operating temperature. It's not really the VRM's that suffer but more things like the capacitors sitting right next to it. They have a estimated lifespan based on thermals. The hotter the shorter their mbtf basicly is. I woud'nt recommend playing on a card with a 100 degree vrm where GDDR chips are right next to it either, but it works and there are cards that last out many many years before giving their last frame ever.
> 
> It's becomes just more and more difficult, to cool a small die area. It's why Intel and AMD are using IHS. Not just to protect it from being crushed by too tense heatsinks or waterblock but to more evenly distribute the heat. That's why all the GPU's these days come with a copper baseplate that extracts heat from the chip faster then a material like aluminium does. AMD is able to release a stock videocard with a great cooler, but what's the purpose of that if the chip is designed to run in the 80 degree mark? The fan ramps up anyway if that is the case. And you can set that up in driver settings as well. Big deal.



The only conclusion then is: time will tell.

I'm staying far away, regardless.

My GTX 1080 is now running into 3 years post-release and I can easily see myself getting another year out of it. After that, I will probably sell it for close to 100-150 EUR because it still works perfectly fine. If you buy high-end cards, 3 years is short and a great moment to start thinking about an upgrade WITH a profitable sale of the old GPU.

You can compare the resale value of Nvidia vs AMD cards over the last five to seven years and you'll understand my point. It's almost an Apple vs Android comparison: AMD cards lose value much faster, and there's a reason they do. It's too easy to chalk that up to "branding" alone.


----------



## R0H1T (Aug 13, 2019)

Vayra86 said:


> As long as companies are not continuously delivering perfect releases, we have reason to question everything out of the ordinary, and 110C on the die is a pretty high temp for silicon and the components around it aren't a fan of it either. It will definitely not _improve_ the longevity of this chip, over, say, a random Nvidia chip doing 80C all the time. You can twist and turn that however you like but we are talking about the same materials doing the same sort of work. And physics don't listen to marketing.


Are you conflating engineering with designing chips? While it's arguably true that Intel and Nvidia make better (engineered) chips than AMD, it's certainly not because AMD is incompetent. It could be a myriad of factors beyond their control, like uarch & of course the node which they've chosen. An argument could be made if you had 2 chips made on the exact same node; even then it would boil down to the uarch ~ which isn't as simple as fixing your home.


Xuper said:


> The level of Noob/troll in this topic is unbelievable.....that's why I'm more active in anandtech forum.


Interesting, were you there like 3 years back (before Zen) when nearly every AMD supporter was labelled a shill or fanboi? Come 2019 & the IDF has taken an indefinite hiatus, not unlike Hunter X Hunter


----------



## cucker tarlson (Aug 13, 2019)

my gas stove can reach 300 Celsius and it's fine, you guys are spreading FUD


----------



## Vayra86 (Aug 13, 2019)

R0H1T said:


> Are you conflating engineering with designing chips? While it's arguably true that Intel, Nvidia make better (engineered) chips than AMD, it's certainly not because AMD is incompetent. It could be a myriad of factors beyond their control, like uarch & of course the node which they've chosen. An argument could be made if you had 2 chips made on the same exact node, even then it would boil down to the uarch which isn't as simple as fixing your home.



No, I don't conflate anything. I'm a consumer buying a product and I've got a pretty good sense of what's quality and what's questionable. Experience, if you will... whether they designed it wrong or whether it was a bad batch or an unlucky combination of circumstances is entirely not my concern. Neither is having to do all sorts of tweaking to get a product to work as intended or "comfortably"; this is the reason I still can't see myself buying an AMD GPU these days. Unfortunately, I might add. I'm just not seeing the dedication I'd want and require of a GPU vendor. Because it goes a lot further than the GPU, this is also about continued support, legacy support, how well older APIs and exotic applications work, etc. etc. etc. AMD is doing the bare minimum and it shows. Every time, in everything they do. It's always late, not quite perfect, or a promise they still need to deliver upon.

The misguided idea that 'because a company engineered and released it' it must be okay has been proven numerous times to be just that - misguided. Never underestimate what the pressure of commercial targets and shareholders will mean for end users.



> Interesting, were you there like 3 years back (before Zen) when nearly every AMD supporter was labelled a shill or fanboi? Come 2019 & the IDF has taken an indefinite hiatus, not unlike Hunter X Hunter


Haha indeed lol. Anandtech comment section still isn't pretty btw.


----------



## Zubasa (Aug 13, 2019)

Jism said:


> As for your fancy heat story, VRM's are designed to withstand 110 degrees operating temperature. It's not really the VRM's that suffer but more things like the capacitors sitting right next to it. They have a estimated lifespan based on thermals. The hotter the shorter their mbtf basicly is. I woud'nt recommend playing on a card with a 100 degree vrm where GDDR chips are right next to it either, but it works and there are cards that last out many many years before giving their last frame ever.


A point to add on this: even on the reference cards, the VRMs are not reaching anywhere near 110 °C; they are around a modest 78 °C.
Also, the most common points of failure are the solder joints or the capacitors of the VRMs.
The GPU and memory ICs themselves are rarely the first to fail unless they have been overclocked heavily / subjected to extremely high voltage.


----------



## R0H1T (Aug 13, 2019)

Vayra86 said:


> I'm a consumer buying a product and I've got a pretty good sense of what's quality and what's questionable. Experience, if you will... whether they designed it wrong or whether it was a bad batch or an unlucky combination of circumstances is entirely not my concern. Neither is having to do all sorts of tweaking to get a product to work as intended or 'comfortably' - this is the reason I still can't see myself buying an AMD GPU these days. Unfortunately - I might add. I'm just not seeing the dedication I'd want and require of a GPU vendor. Because it goes a lot further than the GPU, this is also about continued support, legacy support, how well older APIs and exotic applications work, etc etc etc.


Well then as consumers, not just you per se, how about supporting AMD with more $, especially when they release a competitive (perf/$) GPU? I see many forum dwellers complain about AMD not doing enough against Intel or Nvidia, then they go on & spend $200~400 more for 5~25% performance; how do you suppose AMD will make money to then make better products ~ magic? AMD has perennially been the budget brand, even when it was superior to Nvidia & Intel, except for a brief period with the FX chips last decade! Even now people complain about gaming as if people buy $2000 rigs just for that! If you wanna change the world you have to start with yourself; this applies in every walk of life, not just what we're talking about ~ *short term pain vs long term gain*.


----------



## Jism (Aug 13, 2019)

cucker tarlson said:


> rubbing eyes
> 
> so how many ppl are still running 7970s/r9 2xx cards around here,which are 6-8 years old.



Uh yeah, so the product does work then, and the failure rate isn't as bad as it's being made to sound in this thread.



Vayra86 said:


> You can compare resale value of Nvidia vs AMD cards over the last five to seven years and you'll understand my point. Its almost an Apple vs Android comparison, AMD cards lose value much faster and this is the reason they do. Its too easy to chalk that up to 'branding' alone.



Yes, when the mining craze was going on, AMD cards were always favored over Nvidia. But I ain't going to buy a 3-year-old, dated, used card. I never buy used cards; I'm kind of done with that, to be honest. Everything in my system is new when I upgrade.



Zubasa said:


> A point to add on this: even on the reference cards, the VRMs are not reaching anywhere near 110 °C; they are around a modest 78 °C.
> Also, the most common points of failure are the solder joints or the capacitors of the VRMs.
> The GPU and memory ICs themselves are rarely the first to fail unless they have been overclocked heavily / subjected to high voltage.



It's really old news, this. I feel like a lot of websites and channels are rebranding old news that was on the net before. Really. VRMs CAN sustain 110 degrees, and they will still run perfectly fine. Here: https://www.techpowerup.com/review/amd-ryzen-9-3900x-tested-on-cheap-b350-motherboard/3.html

A $50 motherboard combined with a high-end, 12-core, even overclocked CPU. It runs. And it will probably run for another year or so if the build quality is just right.

The reason it runs, and you see this mentioned nowhere, is that AMD requires this of mobo vendors. It doesn't want the FX era all over again, where certain boards throttled with a 125 W CPU.


----------



## cucker tarlson (Aug 13, 2019)

Jism said:


> Uh yeah, so the product does work then, and the failure rate isn't as bad as it's being made to sound in this thread.


what ?


----------



## Vayra86 (Aug 13, 2019)

Jism said:


> Uh yeah, so the product does work then, and the failure rate isn't as bad as it's being made to sound in this thread.
> 
> 
> 
> Yes, when the mining craze was going on, AMD cards were always favored over Nvidia. But I ain't going to buy a 3-year-old, dated, used card. I never buy used cards; I'm kind of done with that, to be honest. Everything in my system is new when I upgrade.



Yes, you have already explained why, because the stuff you buy is not going to last longer anyway, so why would you expect that from a second hand purchase.

Meanwhile, I get about 150-200 EUR returned on every GPU upgrade which allows me to buy into same or higher tier without 'spending more' than I did on the previous card. Every time. I've made about 1200 EUR on GPU sales for personal use. You enjoy your 3 year cards, to each his own, its good these furnaces still have a market I guess.

Do check out that GN review of the Sapphire though; it nicely underlines the point. Even the memory ICs get to boiling point, which is definitely not where you want them. I vividly remember the EVGA GTX 1070 FTW - another one of those cards 'that was just fine', until EVGA deemed it necessary to supply thermal pads after all and revise their product line and shroud entirely.

Anyway, a non-issue, because it was already clear that you had to stay far away from the reference designs.



R0H1T said:


> how about supporting AMD



That's not how commerce works; that is how _charity_ works. And not a single charity exists to solve problems - rather they preserve them, to keep cashing in.

If AMD can't compete, we need a new player. I hear Intel is working on something. And if AMD GPU business falls flat (which it will eventually if they keep at it like this) someone will buy the IP and take over the helm. I'm not worried and I don't root for multinationals.


----------



## Marecki_CLF (Aug 13, 2019)

Hello Everyone,

My $0.02:
Please find below a GPU-Z screenshot taken after Fire Strike Extreme Stress Test run on my reference (Sapphire) RX5700XT.
The card is set in Wattman to boost up to 1980MHz at 1006mV (with the GDDR6 mildly OCed from 875MHz to 900MHz). It runs just fine with these settings. Performance is very satisfactory (I have a 2560x1440 144Hz display), temps are in check, and fan noise is barely audible. Just so you know, I have a mATX case, and it does not have very good airflow.
I have no idea why the RX 5700 XT runs at 1203mV by default. IMHO this is very high and is the culprit behind reference cards running hot, loud, and power hungry. From what I've seen so far, all reference cards can be undervolted by a huge margin, which resolves all the heat/noise/power consumption issues.
For the sake of comparison, the reference RX 5700 (non-XT) runs at 1025mV.
Food for thought.
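As a rough sanity check on why that undervolt helps so much: dynamic CMOS power scales approximately with V²·f. A minimal sketch using the poster's Wattman figures (1203 mV stock vs. 1006 mV undervolted, same 1980 MHz boost); the scaling law is an approximation and ignores static/leakage power:

```python
# Rough dynamic-power comparison for the undervolt described above.
# Dynamic CMOS power ~ C * V^2 * f; static/leakage power is ignored.
def dynamic_power_ratio(v_new_mv, v_old_mv, f_new_mhz, f_old_mhz):
    """Ratio of new to old dynamic power for a voltage/clock change."""
    return (v_new_mv / v_old_mv) ** 2 * (f_new_mhz / f_old_mhz)

# 1203 mV stock vs. 1006 mV undervolt, both at a 1980 MHz boost clock
ratio = dynamic_power_ratio(1006, 1203, 1980, 1980)
print(f"~{(1 - ratio) * 100:.0f}% less dynamic power")  # roughly 30% lower
```

Which lines up with why the undervolted card runs so much cooler and quieter at the same clock.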


----------



## Jism (Aug 13, 2019)

The reason they set the voltages higher than usual, or than what seems to be the sweet spot for those chips, is binning. A higher voltage allows more usable chips to be extracted from a single wafer. So you might be lucky and have a chip with better density that requires a lower overall voltage than the rest. But it's always within silicon spec. They won't release a GPU that runs beyond what it's capable of and outside what is considered the safe zone.

My RX 590 goes from 1150mV down to 1110mV. It can do 1090mV, but at the cost of Radeon ReLive crashing. So I stick with 1110mV at a core clock of around 1450MHz, which is still very good.


----------



## cucker tarlson (Aug 13, 2019)

UV requires binning just as much as OC does. I've seen Vega users saying their cards crash if they so much as touch undervolting, yet it's commonplace to see people say all Radeons can undervolt substantially. Well, if they all could, there'd be no reason for AMD to set a higher voltage in the first place.
I'd rather take a shot at overclocking a card that runs great out of the box than undervolting one that badly needs it.


----------



## lZKoce (Aug 13, 2019)

Vayra86 said:


> The misguided idea that 'because a company engineered and released it' it must be okay has been proven numerous times to be just that - misguided. Never underestimate what the pressure of commercial targets and shareholders will mean for end users.



+1, and you don't want to know what happens in the automotive industry... just ask Ford about the transmissions on Fiestas, or Volvo about that plastic piece on the fuel lines.


----------



## Vya Domus (Aug 13, 2019)

Please refrain from talking about things you simply do not know anything about. I don't expect TPU to be brimming with academics, but not complete ignorance either.

Dies do not have uniform thermals across their surface, and certain spots, such as where the FPUs sit, can indeed reach well over 100C. This has gotten worse over the years as the thermal density of chips keeps rising; whether TDPs range from 10W to 1000W, these hotspots will not go away. AMD being on 7nm, again, makes this worse.

Let's spell it out in the simplest of terms so that everyone gets it :

You have two dies, each uses 100W, and each benefits from the same amount of cooling, *but* one of them is half the size. This inevitably means it will have a higher thermal density and will run at higher temperatures; there is no way around it. This is of course taken into account when these things are designed, but you can only minimize the effect so much.

Again, this is about thermal density, not TDP, not cooling, not architecture, and you can't really do anything about it. Do not believe for a second that Nvidia, Intel or anyone else isn't dealing with this. This hot-and-power-hungry meme should die; it has run its course. Now you just look like you don't have a damn clue what you're talking about.
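The half-size-die argument above is just arithmetic on power density. A toy illustration with made-up numbers (100 W spread over 500 mm² vs. 250 mm²):

```python
# Toy illustration of the thermal-density point above: same power,
# half the area, twice the heat flux. All numbers are illustrative.
def power_density(watts, area_mm2):
    """Average heat flux over the die in W/mm^2."""
    return watts / area_mm2

big_die = power_density(100, 500)    # 0.2 W/mm^2
small_die = power_density(100, 250)  # 0.4 W/mm^2: same 100 W in half the area
print(small_die / big_die)  # 2.0, i.e. the smaller die must shed heat twice as fast
```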


----------



## Dave65 (Aug 13, 2019)

Not that it is our job to fix factory-built goods, but the washer mod, new thermal pads and some Kryonaut do wonders for these cards. My neighbor said those stock thermal pads are the cheapest, lowest-cost garbage you can put on hot components.
If you go down one size on the memory thermal pads, from 1.5mm to 1mm, it closes the gap between the die and the cooler. It really does work.


----------



## Midland Dog (Aug 13, 2019)

las said:


> Most Nvidia cards are cool and quiet for a reason, lower temps overall.


less heat = less leakage = more efficiency


----------



## las (Aug 13, 2019)

ZoneDymo said:


> and its 100 - 150 dollars cheaper.... so why are you comparing the two?
> If anything you should compare it to the RTX2060 Super (like in your link...was the 2070 a typo?) and then the 5700XT is overall the better option.



Custom vs custom, the 2060 Super and 5700 XT perform pretty much the same. Not sure why you think the 5700 XT is the overall better option; that entirely depends on the games played. On average the 2060 Super overclocks better than the 5700 XT. The 5700 XT looks to have next to zero OC headroom, just like AMD's CPUs.

AMD officially said they will max out all their chips instead of leaving something in the tank for the "few percent" that overclock. This looks to hold true for both Ryzen and the 5700 XT custom cards: 2-3% performance gained with a max OC. The Asus Strix gained 0.7%...



londiste said:


> 5700XT is cheaper.



Not really... You can get a reference 5700 XT for 10 bucks less than a custom 2060 Super... You need hearing protection with the 5700 XT ref, though.

You also get Control and Wolfenstein with all 2060 Supers, which can easily be sold.


----------



## londiste (Aug 13, 2019)

las said:


> Custom vs Custom and 2060 Super and 5700 XT performs pretty much the same. Not sure why you think 5700 XT is the overall better option.


I wanted to write that RX 5700 XT is cheaper but it turns out right now RTX 2060 Super has a slight edge in prices.


----------



## las (Aug 13, 2019)

londiste said:


> I wanted to write that RX 5700 XT is cheaper but it turns out right now RTX 2060 Super has a slight edge in prices.



A friend of mine bought custom 2060 Super for $399 with free delivery and sold the gamekeys for 50 bucks


----------



## Dave65 (Aug 13, 2019)

las said:


> A friend of mine bought custom 2060 Super for $399 with free delivery and sold the gamekeys for 50 bucks


It's always "a friend"


----------



## cucker tarlson (Aug 13, 2019)

Dave65 said:


> It's always, a friend


isn't that what they cost?

the cheapest non-reference 5700 XT here is 2200 PLN for the Pulse; a 2060 S is 1800 PLN for a Zotac/PNY/Gainward dual-fan, plus 2 games worth 300 PLN total


----------



## TheoneandonlyMrK (Aug 13, 2019)

Vayra86 said:


> No you misunderstand, none of this is true and everybody does this, you just never saw it because AMD is the only one doing temp sensors right...
> 
> 
> Seriously people.


No, surely not... no, yeah, you're spot on. Temp offset has been a thing for ages (10 years). Now just imagine the real temp of that 9900K die's T-junction running at 5GHz, eh? 85-100, yeah right.


----------



## cucker tarlson (Aug 13, 2019)

theoneandonlymrk said:


> No, surely not... no, yeah, you're spot on. Temp offset has been a thing for ages (10 years). Now just imagine the real temp of that 9900K die's T-junction running at 5GHz, eh? 85-100, yeah right.


oh look,a squirrel!


----------



## R0H1T (Aug 13, 2019)

Vayra86 said:


> That's not how commerce works, that is how _charity _works. And not a single charity exists to solve problems, but rather to preserve them to cash in even more.


That's exactly how things work, & there's nothing about charity in my post. You have a choice between the 3700X, 3800X & 9900K for, let's say, gaming. You chose 5~15 fps for an extra ~$150, so next time you don't get to ask why AMD still can't match Intel's clocks or gaming performance! *Likewise, when you want things cheap*, you don't get to complain that your jobs are shipped overseas. This is how things work & always will in a profit-driven world


----------



## Space Lynx (Aug 13, 2019)

I won't have to worry about this with my 3-fan 5700 XT arriving Friday. It should also run around a 2100 core, matching the 2070 Super in most games across the board (if the Asus 3-fan version review on Guru3D is anything to go by).

Also, that Asus 3-fan 5700 XT comes within 10 fps of the 2080 SUPER in a few games, Sekiro at 1440p being one. My Gigabyte 3-fan should do the same; not bad for $420.


----------



## Axaion (Aug 13, 2019)

Yeah, no thanks AMD, I don't wish to get hearing damage because of your poor cooler design


----------



## notb (Aug 13, 2019)

R0H1T said:


> Well then as consumers, not just you per se, how about supporting AMD with more $ especially when they release a competitive (perf/$) GPU?


LOL. And what next? AMD GPUs on Kickstarter?

How about AMD makes attractive, complete products - not just in benchmarks, but also in real life (quiet, cool, easy to setup, tinker-free and well supported by OEMs)?
Maybe then they'll be able to sell more?

They're making products aimed at enthusiasts - willingly focusing on a group that is more inclined to pay "$200~400 more for 5~25% performance". I mean: how much do people on this forum spend on OC?

If AMD lacks the money to polish their GPUs, they can do an FPO like any normal listed company would.


----------



## Space Lynx (Aug 13, 2019)

Axaion said:


> Yeah no thanks AMD, i dont wish to have hearing damage because of your poor cooler design



or you could buy a two- or three-fan design; they were just released this week and only cost $20 more... but mmk


----------



## R0H1T (Aug 13, 2019)

notb said:


> LOL. And what next? AMD GPUs on Kickstarter?
> 
> How about AMD makes attractive, complete products - not just in benchmarks, but also in real life (quiet, cool, easy to setup, tinker-free and well supported by OEMs)?
> Maybe then they'll be able to sell more?
> ...


What BS. You're making it sound like *AMD GPUs are unusable garbage* & that Nvidia not only outstrips them across the board but in every price bracket and every game you can think of! Which is of course BS as well


----------



## IceShroom (Aug 13, 2019)

er557 said:


> Radeons have always run hot, but this is ludicrous.


Nvidia cards run so cool that they need 3-slot, 3-fan coolers to keep them that way.


----------



## laszlo (Aug 13, 2019)

at least AMD admits the hotspot; Nvidia surely has one too, but keep waiting for them to tell you...


----------



## TheinsanegamerN (Aug 13, 2019)

Microsoft also claimed that the 95C temps the Xbox 360 reached were perfectly normal and nothing to worry about... right up until the hardware started dropping like flies.

Sorry, but just because the max temp of the silicon may be 110C does NOT mean it should reach that normally. This would be like driving my car at 155 MPH every single day with the heat pegged out. AMD is just making excuses for their ludicrously junk cooler design. Reaching such high temps and then cooling off when not gaming is going to prematurely wear out these chips, especially their solder connections.


----------



## er557 (Aug 13, 2019)

IceShroom said:


> Nvidia cards run so cool that they need 3-slot, 3-fan coolers to keep them that way.


no they don't; they run fine with a blower. The three-fan designs are AIB designs for overclockability and higher-end cooling


----------



## randomUser (Aug 13, 2019)

My HD 4850 reference-design GPU ran at 90C idle and 110C when gaming. I don't think that was the silicon temp though; it might have been tCase.
So silicon at 120-130C?


----------



## Space Lynx (Aug 13, 2019)

er557 said:


> no they don't; they run fine with a blower. The three-fan designs are AIB designs for overclockability and higher-end cooling



Nvidia's blower was much better designed: vapor chamber, etc. AMD really shouldn't have launched with a blower. I think internally they know this and probably won't make the same mistake with the 5800 XT, but eh, who knows.


----------



## londiste (Aug 13, 2019)

On one hand, there are hotspots on GPUs and exposing that reading for monitoring externally is definitely a good thing. I do not doubt for a second that Nvidia has similar sensor readings internally available, just not exposed.

On the other hand, "110°C is expected and in spec" is a suspicious statement, because we know these GPUs throttle at that exact 110°C point.
It is like saying a Ryzen 3000 running at 95°C is expected and in spec. It is technically correct...
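For what it's worth, the mechanism AMD describes in the blog post (ramp clocks opportunistically until any one on-die sensor hits the 110 °C junction limit) boils down to a max-over-sensors control loop. A minimal sketch; the sensor values and the 25 MHz step are made up:

```python
# Minimal sketch of a hotspot-driven boost loop as AMD describes it:
# the "hotspot" is just the hottest of many on-die sensors, and clocks
# ramp opportunistically until any sensor reaches the junction limit.
HOTSPOT_LIMIT_C = 110

def hotspot(sensor_temps_c):
    """Hotspot/junction temperature: the max across all die sensors."""
    return max(sensor_temps_c)

def next_clock(clock_mhz, sensor_temps_c, step_mhz=25):
    """Boost while below the limit, back off once any sensor hits it."""
    if hotspot(sensor_temps_c) < HOTSPOT_LIMIT_C:
        return clock_mhz + step_mhz
    return clock_mhz - step_mhz

print(next_clock(1900, [92, 101, 108]))  # 1925 (headroom left, keep boosting)
print(next_clock(1900, [92, 101, 110]))  # 1875 (at the limit, throttle back)
```

Which is exactly why a card governed this way will sit at its throttle point under sustained load: the loop is designed to find it.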


----------



## jmcosta (Aug 13, 2019)

This reminds me of the GTX 480, but at least Nvidia put some effort into making a decent cooling solution.

"This temperature is fine" - but the fan noise and throttling aren't...

And I'm aware that AMD's partners have fixed this issue; unfortunately they came a little late. A month late.


----------



## Space Lynx (Aug 13, 2019)

jmcosta said:


> This reminds me of the GTX480 but at least Nvidia put some effort making a decent cooling solution
> 
> "This temperature is fine" but fan noise and throttling isn't...
> 
> and Im aware that AMD partners have fixed this issue, unfortunally they come a little late, a month late.



with a slightly higher fan curve above stock, blower fans do just fine on temps. This is just the stock blower fan curve. Yeah, most users won't run a custom fan curve, but I always have, even with Nvidia. /shrug

let's just hope AMD finally learned their lesson and does better coolers for the 5800 XT


----------



## Zubasa (Aug 13, 2019)

jmcosta said:


> "This temperature is fine" but fan noise and throttling isn't...


I can understand the fan noise argument.
But where is your evidence of the card actually throttling?
Because if the reference design were really throttling and not boosting to its full potential, the Sapphire Pulse wouldn't perform only marginally better even with a factory overclock.


----------



## las (Aug 13, 2019)

Dave65 said:


> It's always, a friend



$399 is MSRP...


----------



## Vya Domus (Aug 13, 2019)

TheinsanegamerN said:


> Sorry, but just because the max temp of the silicon may be 110C does NOT mean it should reach that normally.



I am amazed by these claims. How the hell do you know that? What is normal, and how do you know what's supposed to be normal? Are you by any chance working in chip design and know this stuff better than we or AMD do?


----------



## Space Lynx (Aug 13, 2019)

Vya Domus said:


> I am amazed by these claims. How the hell do you know that? What is normal, and how do you know what's supposed to be normal? Are you by any chance working in chip design and know this stuff better than we or AMD do?



I think a lot of people may be taking this out of context. This isn't the GPU temp, folks. It's similar to how, on a lot of cheap motherboards, the CPU can sit at good temps while there's a hotspot somewhere on the VRM at 92 Celsius, which is not a huge deal if you're not overclocking. I think this is similar: the GPU itself will never get that hot; it's just a hotspot on a specific part that is normally always a bit hotter than the GPU core.

At least that is my line of thought, anyway. Still glad I got the 3-fan Gigabyte version for only $20 more though


----------



## IceShroom (Aug 13, 2019)

er557 said:


> no they dont, they run fine with blower, it's aib design for three fans for overclockability and higher-end cooling


An Nvidia blower card like the GTX 1080 ran at 84°. The RX 5700 XT blower ran from 76° to 82° depending on the website, with one outlier: TPU at 92° (I don't know whether that's junction or not).
So the RX 5700 XT is chilling compared to the Nvidia blower. And don't forget Thermi, unless you were born yesterday.
Here is the Nvidia blower temperature: https://www.guru3d.com/articles_pages/amd_radeon_rx_5700_and_5700_xt_review,8.html


----------



## jmcosta (Aug 13, 2019)

Zubasa said:


> I can understand the fan noise argument.
> But where is your evidence of the card actually throttling?
> Because if the reference design is really throttling and not boosting to full potential, the Sapphire Pulse wouldn't perform only marginally better even with a factory overclock.



The reference card starts thermal throttling at 90-91C (from 1900MHz down to very unstable clocks below 1800MHz), and even shuts down while gaming after a while if it's fully utilized (Linus and other reviewers have mentioned this).
The reason you don't see a significant boost is that the gain from pushing the frequency is poor on Navi (maybe a driver issue?). A 15% overclock on this chip results in a <4% performance gain.


----------



## Vayra86 (Aug 13, 2019)

Vya Domus said:


> Please refrains yourselves from talking about things that you just simply do not know anything about. I don't expect TPU to be brewing with academicians but not complete ignorance either.
> 
> Dies do not have uniform thermals across their surface and on certain spots such as where the FPUs sit indeed can reach well over 100C. This has gotten worse over the years as the thermal density of chips keeps rising, you can have TDPs ranging from 10W to 1000W these hotspots will not go away. AMD being on 7nm, again, makes this worse.
> 
> ...



Cool story, but the GN Sapphire review proves you wrong. AMD just designed a shit cooler for the heat Navi produces at stock, end of story. Take note of the memory IC temp as well. Red line.






Nuff said, I would say.

You're not wrong about thermal density; it's been a problem starting with Ivy Bridge's 22nm. I vividly remember Tom's Hardware making remarks on it as an explanation for the crappy heat transfer off the die, yet everyone insisted on complaining about shitty TIM instead. We know better now that Intel solders its high-end range and it still reaches boiling point.



theoneandonlymrk said:


> No surely not, no yeah , your spot on , temp offset has been a thing for ages(10 years), now just imagine the real temp of that 9900K die T junction running at 5Ghz eh ,85-100 yeah right.



The real temp of Tjunction on Intel has been known for years. I'm not sure what you're trying to say here, other than that those K models get really hot, which is absolutely true. But not 110C.

You're also not convincing me that as nodes (and thus transistor size/material thickness) get smaller, they can readily handle more heat. I'd say it is quite the opposite.









Intel® Core™ i9 Processor - Features, Benefits and FAQs (www.intel.com)

Thermal management for Intel 6th/7th Generation (Tcase vs. Tjunction) (forums.intel.com)






R0H1T said:


> That's exactly how things work & there's nothing about charity in my post. You have a choice between 3700x, 3800x & 9900k for let's say gaming. You chose 5~15 fps for ~150$ so the next time you don't get to say why AMD still can't match Intel's clocks or gaming performance! *Likewise when you want things for cheap*, you don't get to complain that your jobs are shipped overseas. This is how things work & will always do in a profit "driven" world



They call that a loser's strategy: begging people to keep coming to the rescue. A winner's strategy is what AMD is doing for CPUs right now. They know how it works. Only the hardened AMD fanbase seems to have trouble grasping that.


----------



## FordGT90Concept (Aug 13, 2019)

las said:


> The GPU is not the only thing using power...
> 
> 
> 
> ...


The 2070 Super also has 3.3 billion more transistors. The 5700 XT is being pushed to the limit while the 2070 Super isn't, so the 5700 XT ends up using more power. The 5700 is closer to Navi 10's nominal perf/watt. The 5700 XT is a direct response to NVIDIA's RTX launch and AMD's lack of a bigger chip to compete.


----------



## las (Aug 13, 2019)

FordGT90Concept said:


> 2070 Super also has 3.3 billion more transistors.  5700 XT is being pushed to the limit while 2070 Super isn't, so 5700 XT ends up using more power.  5700 is closer to where nominal Navi 10 perf/watt.  5700 XT is a direct response to NVIDIA RTX launch and AMD's lack of having a bigger chip to compete.



And considering Nvidia uses 12nm, things look bad for AMD GPUs... It won't be pretty when Ampere launches on Samsung 7nm EUV or better.

I hope AMD is right about the "Nvidia Killer" they are working on... I'll believe it when I see it. Would be awesome.


----------



## Vya Domus (Aug 13, 2019)

Vayra86 said:


> Cool story but the GN Sapphire review proves you wrong. AMD just designed a shit cooler for the heat Navi produces at stock, end of story. Take note of the memory IC temp as well. Red line.
> 
> View attachment 129159
> 
> ...



How am I wrong, and about what? I didn't link this in any way to how shitty AMD's coolers might be; I said this is a problem that can arise irrespective of cooling. The Radeon VII is proof of that: you wouldn't call its cooler shitty, but it still goes over 100C. And you can also find this on GN, where they found that screwing around with the cooler didn't really make a difference to the Tjunction temperature, which still went above 100C.



Vayra86 said:


> We know better now that Intel solders its high end range and still reaches boiling point.



That also validates my point: this can happen no matter how good the cooling is.


----------



## Vayra86 (Aug 13, 2019)

Vya Domus said:


> How am I wrong and about what ? I didn't link this in away with how shitty AMD's coolers might be, I said this is a problem that can arise irrespective of cooling. Radeon 7 is proof of that where you wouldn't call it's cooler shitty but it still goes over 100C. And you can also find this on GN where they found out that screwing around with the cooler didn't really make a difference with respects to the Tjunction temperature which still went above 100C.
> 
> 
> 
> That also validates my point that this can happen no matter how good the cooling is.



The Radeon VII is a big die and it is only available with... an AMD stock cooler. The common denominator looks pretty clear, I think... The simple fact that AIBs can get mid- and high-TDP cards to temps as much as 15C lower tells us the truth. Another writing on the wall is every Nvidia card from Maxwell onwards: even their NVTTM shroud doesn't hit this temp, even as it throttles. It's simply not pushed as far. And for Turing we noticed the blower was suddenly gone in favor of more direct cooling.

Also, simply look at and compare vcore. Nvidia readily drops vcore as it reaches higher temps; AMD is much more liberal with it. And when you hit the throttle point on an Nvidia card (84C, when dropping boost bins won't suffice), you get bumped back rigorously, with vcore dropping below 0.9V.



Vya Domus said:


> That also validates my point that this can happen no matter how good the cooling is.



Yes, but one does not exclude the other, and you *can* run a 9900K in spec at stock, and even a little beyond, without needing custom water. Why do you think Intel doesn't include a boxed cooler?


----------



## R0H1T (Aug 13, 2019)

Vayra86 said:


> They call that a loser's strategy, begging for people to keep coming to the rescue. A winner's strategy is what AMD does for CPU right now. They know how it works. Only the hardened AMD fanbase seems to have trouble grasping that.


So you're pretending that only good products get to be winners & all bad products or companies lose (customers)?

Must've missed the P4, the Atoms or various Nvidia GPUs then. Brand name & market position are just as important as the actual product, if not more so, in many cases!


----------



## Vya Domus (Aug 13, 2019)

Vega isn't much bigger; we are talking 330 mm² vs 250 mm², and keep in mind the Radeon VII has some shaders disabled. In the end they're pretty close. But that doesn't even matter; the transistor density is pretty much the same.

As someone else said before, Nvidia does not expose these hotspot temperatures, so we can't compare them, even though Nvidia almost certainly deals with this as well.


----------



## Vayra86 (Aug 13, 2019)

R0H1T said:


> So you're pretending that only good products get to be winners & all bad products or  companies lose (customers)
> 
> Must've missed the P4 or various Nvidia GPUs then, brand name & Market position are just as important if not more than the actual product in many cases!



You have got to stop omitting half the text you quote to make your point, because it's simply invalid. In the very same sentence you can read that AMD is doing it well for CPUs. And yes, that is how it works: if you repeatedly 'lose', at some point you're gone. AMD does not repeatedly lose, but GPUs have been trouble for them ever since acquiring ATI. They did have some decent releases, but none very recent, so it's about damn time, and Navi so far 'is not it'. It's more a case of barely hanging on, and that only flies because Nvidia chose to spend time on RTX.



Vya Domus said:


> Vega isn't much bigger, we are talking 330 mm^2 vs 250 mm^2 and keep in mind Radeon 7 has some shaders disabled. In the end they're pretty close. But that doesn't even matter, the transistor density is pretty much the same.
> 
> As someone else said before Nvidia does not expose these hotspot temperatures so we can't compare them and know with certainty that Nvidia does deal with this as well.



Still not getting the memo: GN's review shows us that hitting 110C is totally unnecessary. It does not make sense to assume that Nvidia cards, which are on a larger node and run cooler, show similar behaviour. In fact, that is just weak deflection.


----------



## deu (Aug 13, 2019)

Guys, please understand the topic before you comment: this does not actually tell us whether it runs hot or cold compared to Nvidia, since the way of measuring is different (some would say more precise). Put to the extreme, Nvidia could be probing up your a** and get an overall temp of 37.8C; what temperature you get depends entirely on the placement of the probe. IF these sensors are placed "correctly", it is a super smart way to optimize the boost of a GPU; if done wrong, it is a super optimized way to melt a GPU. I'm pretty sure AMD goes for the first option, since they A: want to stay in the market and B: want a fault rate under 50%. In short: it would not make sense to f*** your own GPU over, but in reality we don't know whether this is good or bad, since we clearly can't compare the two methods


----------



## R0H1T (Aug 13, 2019)

Vayra86 said:


> You have got to stop omitting half the text you quote to make your point, because it's simply invalid. In the very same sentence you can read that AMD is doing it well for CPUs. And yes, that is how it works: if you repeatedly 'lose', at some point you're gone. AMD does not repeatedly lose, but GPUs have been trouble for them ever since acquiring ATI.


That still doesn't explain the half of my point you omitted, about bad products getting good $, does it? You also conveniently sidestepped the good points of AMD GPUs, or do you believe they have none? There is no product without compromises; with AMD you just have to compromise more, depending on what you do, & it's not like the AIB cards are "horrible" either.


----------



## Vayra86 (Aug 13, 2019)

R0H1T said:


> That still doesn't explain half the point you omitted about bad products getting good $ does it? You also conveniently sidestepped the good points of AMD GPUs or do you believe they have none? There is no product without compromises, just with AMD you have to compromise more, again depending on what you do & it's not like AIB cards are "horrible" as well.



You really need to clarify whether you actually have a point or just want to keep this slow chat going with utter bullshit. The numbers speak for themselves. What are you really arguing against? That AMD is a sad puppy not getting enough love?

Grow up.

And yes, the AIB cards are not horrible; if you care to read back, I've repeated that just about every other post. That is the whole goddamn point.


----------



## Vya Domus (Aug 13, 2019)

Vayra86 said:


> It does not make sense to assume Nvidia cards that are on a larger node and run cooler are showing similar behaviour.



And it does not make sense to assume that they don't, like you clearly insinuated. Why do you people not understand that your definition of "cooler" is really, really primitive? Your shiny RTX Titan may show, through the one sensor reading exposed to software, that it runs at 75C while some parts of the die might in fact hit over 100C. You don't know, but it's safe to assume this happens, because all ICs behave like this. Equally, maybe some parts of a Navi 10 die hit more than 110C; maybe that is within spec, maybe it's not. AMD knows best, more than you and me.

Point is, AMD uses a different set of sensors and a different way of measuring temperatures; that can't directly translate into "Nvidia cards run cooler", nor does it mean this must make them better products. That's the memo.


----------



## Vayra86 (Aug 13, 2019)

Vya Domus said:


> And it does not make sense to assume that they don't, like you clearly insinuated. Why do you people not understand that your definition of "cooler" is really, really primitive? Your shiny RTX Titan may show, through the one sensor reading exposed to software, that it runs at 75C while some parts of the die might in fact hit over 100C. You don't know, but it's safe to assume this happens, because all ICs behave like this. Equally, maybe some parts of a Navi 10 die hit more than 110C; maybe that is within spec.
> 
> Point is, AMD uses a different set of sensors and a different way of measuring temperatures; that can't directly translate into "Nvidia cards run cooler". That's the memo.



Did you catch the line about AIB cards running much cooler yet? Even with AMD's revolutionary sensor placement? And staying well clear of 110C?

A simple case of connecting the dots here... if you feel confident this 110C is a guarantee of longevity, power to you. I don't.

I might be a stubborn idiot, but this is clear as day, sorry.


----------



## Vya Domus (Aug 13, 2019)

Vayra86 said:


> Simple case of connected dots here...



And there is a discontinuity among those dots that you conveniently ignored: the *Radeon VII.*

A card with a more than decent cooler that still reports these "hella scary" temperatures.



Vayra86 said:


> if you feel confident this 110C is a guarantee for longevity, power to you. I don't.



It's not a guarantee for anything because I don't have a bloody clue what that 110C figure is supposed to tell me. I am trying really hard to understand how is it that you people are so convinced that these numbers have some negative implication when in reality you have absolutely no reference point. You simply insist to believe AMD is doing something wrong with no proof.

The Sapphire Pulse model is an astonishing 2% faster than reference; all this talk about how crappy AMD's cooler and temperatures are would have led me to believe things would be a lot different.


----------



## Vayra86 (Aug 13, 2019)

Vya Domus said:


> And there is a discontinuity among those dots that you conveniently ignored: the *Radeon VII.*
> 
> A card with a more than decent cooler that still reports these "hella scary" temperatures.



We are going in circles because I covered that one already; the Radeon VII has a *much higher TDP* and is a bigger die requiring more power, while ALSO being on a stock AMD cooler. We don't know if AIBs would do better, but it's very, very likely. You should look at similar-TDP Nvidia cards that you like to think get just as hot. Here's a hint: compare the vcore curves they use, and how GPU Boost 3.0 works. I also, already, went into that one. Nvidia's boost simply does not _allow the GPU to get that hot._ You just lose a few hundred MHz in the worst-case scenario. AMD's Navi just keeps bumping into its throttle point ad infinitum.



Vya Domus said:


> The Sapphire Pulse model is an astonishingly 2% faster than reference, all this talk about how crappy AMD's cooler and temperatures are would have led me to believe things would have been a lot more different.



This was never about being able to hit higher clocks... this is about the temps while getting those clocks. But keep deflecting, all is well.

At the same time, this only confirms my idea that AMD pushed Navi right up into the danger zone out of the box and slapped a blower on top for good measure. It's practically OC'd out of the box, without a cooler to match.



londiste said:


> On one hand, there are hotspots on GPUs and exposing that reading for monitoring externally is definitely a good thing. I do not doubt for a second that Nvidia has similar sensor readings internally available, just not exposed.
> 
> On the other hand, 110°C being expected and in spec is a suspicious statement because we know these GPUs throttle at that exact 110°C point.
> It is like saying Ryzen 3000 running at 95°C is expected and in spec. It is technically correct...



Ah my shining beacon of wisdom and clarity. Thank you.


----------



## killster1 (Aug 13, 2019)

Jism said:


> Yes, improved. But know that Vega with HBM was 'prone' to crack if the mounting pressure was too high. The interposer or HBM would simply fail when the pressure was too tight. It's why AMD is going the safe route. Every GPU you see these days is mounted with a certain force, but not too tight, if you know what I mean. Any GPU could be brought to 'better' temperatures if you start adding washers to it. It's no secret sauce either.
> 
> "- AMD is known for several releases with above average temperature-related long term fail rates."
> 
> I do not really agree. As long as the product works within spec and no failure occurs, or it at least survives its warranty period, what is wrong with that? It's not like you're going to use your video card for longer than 3 years. You could always tweak the card for lower temps. I simply slap on an AIO watercooler and call it a day. GPU hardware is designed to run 'hot'. Haven't you seen the small heatsinks they are applying to the FirePro series? Those are single-slot coolers with small fans like you would see in laptops and such.



Why wouldn't you use the card for more than 3 years? I guess you throw your parts in the trash after 3 years? I give mine away at the very least; this isn't a disposable world we are living in like you think! If your car died the day after the warranty expired, would that be OK with you? Or do you even live in the real world?

I'm waiting for the HDMI 2.1 cards that come out and don't run at 100C. I guess I don't play games very often and only recently upgraded from an i7 3930K from 8 years ago. We all choose to spend our money in different ways. I'm not a big eat-out / fast-food kind of guy; I'd rather buy the T-bone for $12 and cook it myself than pay $120 for it already cooked.


----------



## Zubasa (Aug 13, 2019)

jmcosta said:


> The reference starts thermal throttling at 90-91C (from 1900MHz down to very unstable clocks below 1800MHz) and even shuts down while gaming after a while if it's fully utilized (Linus and other reviewers have mentioned this).
> The reason you don't see a significant boost is that the gain from pushing the frequency is poor in Navi (maybe a driver issue?). This chip having an overclock of 15% results in a <4% performance gain.


How much of that is due to the cooler, and how much of that was due to unstable drivers?
The fact is that all the recent reviews show the Sapphire Pulse barely outperforms the reference card.
As for the overclock results, the reference card's GPU actually overclocked better than the Sapphire Pulse on W1zzard's sample.
Let me remind you that the official "game clock" is 1755MHz, so the claim that the card running below 1900MHz is throttling is just not true.

How do you explain this?
https://www.techpowerup.com/review/sapphire-radeon-rx-5700-xt-pulse/34.html





It is not just TPU's reviews; even GN's reviews show that the non-reference card performs almost the same as the reference design.
So it takes more than just "cooler card must be better, hotter card must be running out of spec and losing performance" to prove the point.
It is all speculation and GN's own opinion on what is too hot, while even his own data cannot prove the reference card is losing significant clock speed or performance.
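The clock-scaling argument in this exchange is just arithmetic; here is a minimal sketch using the numbers quoted in the thread (a 15% overclock for a <4% gain; the figures are the posters' claims, not measurements made here):

```python
def scaling_efficiency(clock_gain, perf_gain):
    """Fraction of a relative clock increase that shows up as performance.
    1.0 means perfect scaling; near 0 means the clock bump is wasted."""
    return perf_gain / clock_gain

# Numbers quoted in the thread (illustrative, not measured here):
# a 15% overclock yielding about 4% more performance.
eff = scaling_efficiency(0.15, 0.04)
print(f"{eff:.0%}")  # prints "27%": barely a quarter of the extra clock becomes FPS
```

Under that arithmetic, a big reference-vs-AIB clock gap would still produce only a small performance gap, which is the point being argued about the Sapphire Pulse numbers.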


----------



## TheinsanegamerN (Aug 13, 2019)

Vya Domus said:


> I am amazed by these claims. How the hell do you know that? What is normal, and how do you know that's supposed to be normal? Are you by any chance working in chip design and know this stuff better than we or AMD do?


Common sense and physics. Use your brain.

A device has a max rated limit. This is the max it can take before IMMEDIATE damage occurs. Long-term damage does not play by the same rule. Whenever you are dealing with a physical product, you NEVER push it to its 100% limit constantly and expect it to last. This applies to air conditioners, jacks, trucks, computers, tables, fans, anything you use on a daily basis. Like I said, my car can do 155 MPH. But if I were to push it that fast constantly, every day, the car wouldn't last very long before experiencing mechanical issues, because it isn't designed to SUSTAIN that speed.

Every time the GPU heats up and cools down, the solder connectors experience expansion and contraction. Over time, this can result in the solder connections cracking internally, resulting in a card that does not work properly. The greater the temperature variance, the faster this occurs. This is why many GPUs now shut the fans off under 50C, because cooling it all the way down to 30C increases the variance the GPU experiences.

What AMD is doing here is allowing the GPU to run at max junction temp for extended periods of time and calling this acceptable. Given the GPU also THROTTLES at this temp, AMD is admitting it designed a GPU that can't run at full speed during typical gaming workloads. Given AMD also releases GPUs that can be tweaked to both run faster and consume less voltage rather reliably, it would seem a LOT of us know better than RTG engineers.

Would you care to explain how AMD's silicon is magically no longer affected by physical expansion and contraction from temperatures? I'd love to hear about this new technology.
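The solder-fatigue mechanism described above is commonly modeled with a Coffin-Manson-style relation, where cycle life falls off as a power of the temperature swing. A rough sketch with placeholder constants (the exponent and scale factor are assumptions for illustration, not characterized values for any GPU):

```python
def cycles_to_failure(delta_t, c=1e9, exponent=2.0):
    """Coffin-Manson-style estimate: N_f = C / (dT ** m).
    c and exponent are placeholder constants, not measured values."""
    return c / (delta_t ** exponent)

# Comparing two hypothetical idle-to-load swings:
small_swing = cycles_to_failure(40)  # e.g. fans-off idle at 50C, load at 90C
large_swing = cycles_to_failure(80)  # e.g. cold idle at 30C, load at 110C
print(large_swing / small_swing)  # prints 0.25: doubling dT quarters cycle life at m=2
```

The shape of the relation is why the size of the temperature swing, not just the peak, appears in both sides of this argument.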


----------



## Vya Domus (Aug 13, 2019)

Vayra86 said:


> We don't know if AIBs would do better, but its very very likely.



Really? What would they do with it? Put a liquid cooler on it? Because I can't think of anything else they could do to improve the cooler; it already has a hefty heatsink with three fans, and GN already showed you can't really do much with the TIM and mounting.

We are going in circles because you are trying really, really hard to dismiss evidence that you don't like.



TheinsanegamerN said:


> Given the GPU also THROTTLES at this temp, AMD is admitting it designed a GPU that cant run at full speed during typical gaming workloads.



As I said above, the Sapphire Pulse model is a mere 2% faster than reference; *this argument is stupid*. The reference model runs fine during typical gaming workloads, speed-wise.

Navi shows one of the smallest gaps between reference and AIB models that we've seen in the last few generations. How the hell does that work, if AMD made a shitty GPU that can't run at full speed due to thermal throttling, when the AIB models supposedly eliminate that possibility?


----------



## Vayra86 (Aug 13, 2019)

Vya Domus said:


> Really ? What would they do with it ? Put a liquid cooler on it, because I can't think of anything else that they could do to improve the cooler, it already has a hefty heatsink with three fans and GN already showed you can't really do much to the TIM and mounting.
> 
> We are going circles because you are trying really, really hard to dismiss evidence that you don't like.
> 
> ...



And we arrive once again at your assumption versus mine, and I say: power to you, buy more, save more, go go. You're doing the exact same thing wrt 'evidence' (limited to the Radeon VII 'also having a hot spot', versus overwhelming evidence that other cards run much cooler and even Navi can) and this will go nowhere.

It's times like these that common sense gets you places. Try it someday. Calling the argument stupid because you cannot quantify things is not usually a good idea.


----------



## dinmaster (Aug 13, 2019)

Back with the RX 280s, I RMA'd 5 of them (mining) when I ran a game to push each of them one at a time and manually slowed the fans down to heat them up. If they had artifacts before 85C, I would send them back. Then I tested the new ones and sent back another 3. They are just confirming to us that the card is defective if it cannot reach these temps without errors. That's my personal way of binning cards.


----------



## Zubasa (Aug 13, 2019)

TheinsanegamerN said:


> Every time the GPU heats up and cools down, the solder connectors experience expansion and contraction. Over time, this can result in the solder connections cracking internally, resulting in a card that does not work properly. The greater the temperature variance, the faster this occurs. This is why many GPUs now shut the fans off under 50C, because cooling it all the way down to 30C increases the variance the GPU experiences.


The reason for shutting off the fan at idle is just noise; it has nothing to do with reducing the temperature gradient at all.
The fact is that older GPUs do not have this feature at all, and all of them ran fine and did not prematurely die because of it.

Also starting and stopping the fans more often than otherwise is actually slightly detrimental to the life span of the fans.
For motors the ideal condition is actually to run them at a steady state.
This is the same reason why you don't want to start and stop your HDD motor too often.


----------



## Vya Domus (Aug 13, 2019)

Zubasa said:


> The reason for shutting off the fan at idle is just noise; it has nothing to do with reducing the temperature gradient at all.
> The fact is that older GPUs do not have this feature at all, and all of them ran fine and did not prematurely die because of it.
> Also, starting and stopping the fans more often than otherwise is actually slightly detrimental to the life span of the fans.



Wasn't this contracting-and-expanding business the same nonsense some site tried to pass off as the reason why Intel didn't use solder? I can't remember who made an article on this.

And even if that were the case, it's not just the temperature delta that matters; the frequency of those deltas is what really may have an effect on the material. And thankfully, GPUs usually run at a high constant temperature for extended periods of time and idle at a low constant temperature the rest of the time.


----------



## medi01 (Aug 13, 2019)

I love how the most offended users actually own NV cards... ^))))

Given that these temps are reached on the ref card, and that today we see AIBs drop card temperatures by a good 25+ degrees, could we find another reason to get offended? Like the lack of CrossFire or something?



Vayra86 said:


> Calling the argument stupid because you cannot quantify things...


He literally chewed it up for you; let me repeat the relevant part, perhaps you'll get it on the second go: had thermals been a problem, the gap between AIB and ref cards would be much bigger than the 3-5% we see now (especially taking into account the much lower temps on AIBs).


----------



## Anymal (Aug 13, 2019)

Zubasa said:


> Newer unreleased / not even announced GPU demolishes older GPUs, such insight much wow.


Well, the 7nm Radeon just matches 3-year-old 16nm Pascal in perf/W. Nvidia should just make a 7nm Pascal; why bother with Turing?


----------



## danbert2000 (Aug 13, 2019)

In my opinion, we have to trust that AMD knows what they're doing with the max temperature. If they have done engineering tests at these temperatures and aren't worried about degradation, then the cards will probably be fine throughout their designed lifespan. 110 degrees seems like a lot, but part of that is that we were trained to watch temps from one sensor. I'm guessing that setting a temperature limit of 92 degrees or so for older GPUs was a way of using that one sensor to extrapolate the maximum temperature from a single source.

If it is a problem, then these cards will start failing and people will complain about it. If we subscribe to the bathtub model of component failure, a large percentage of a product's total failures should occur early on, due to defective cards or, if this heat really is a problem, heat damage, so it shouldn't take too long to tell if the GPU is immolating itself. It's not like every 5700 will last for 3 years and 1 month and then burn up after the warranty is through. If the heat is a problem, we'll hear about it soon and people will still be under warranty.
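The "bathtub curve" reasoning in this post can be sketched numerically; the rates below are made-up illustrative constants, not failure data for any card:

```python
import math

def failure_rate(t_years, infant=0.10, constant=0.01, wearout_start=5.0):
    """Toy bathtub curve: a fast-decaying infant-mortality term, a low
    constant mid-life term, and a wear-out term that rises late in life.
    All coefficients are illustrative placeholders."""
    early = infant * math.exp(-4.0 * t_years)  # infant mortality decays quickly
    late = 0.0 if t_years < wearout_start else 0.02 * (t_years - wearout_start)
    return early + constant + late

# If heat-induced defects are real, the infant term dominates: failures
# cluster early, while cards are still under warranty.
print(failure_rate(0.1) > failure_rate(2.0))  # prints True
```

That is the logic of the post: a systemic heat problem should show up as elevated early failures rather than a cliff just past the warranty date.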


----------



## Vayra86 (Aug 13, 2019)

medi01 said:


> I love, how the most offended users actually own NV cards,... ^))))
> 
> Given that temps are reached on the ref card and that today we see know AIBs drop card temperatures by good 25+ degrees, could we find another reason to get offended? Like lack of cross fire or something?
> 
> ...



Bought one yet? You were waiting and they're out, what's keeping you? After all, ref is 'just fine' 

Also, this line is a bit of a head-scratcher:
_"the relevant part, perhaps you'd get it in second go: had thermals been a problem, gap between AIB and ref cards would be much bigger than 3-5%"_

Actually... not having headroom while still having lower temps is a clear sign the card is clocked straight to the limit out of the box, and this also echoes in the GN review. @TheinsanegamerN worded it nicely: the ref design is like a car running at top speed, fully in the red zone, all the time, and considering that normal is a rather weird approach. The GN review also handily points out the memory ICs are a hair below running out of spec. Now, imagine what happens with a bit of dust and wear and tear over time, or in fact in most use cases outside the review bench. The throttling will get worse, and that peak temp won't be lower for it.


----------



## jmcosta (Aug 13, 2019)

Zubasa said:


> How much of that is due to the cooler and how much of that was due to unstable drivers?
> 
> 
> Spoiler: Zubasa reply
> ...



Yeah, it could be the driver or the architecture... we don't know as of now, but the performance gain from overclocking is poor in Navi; this is the reason you see the premium cards with higher clocks being close to the reference.
Check the clock speeds page and compare the two: the frequency on the reference card is all over the place once it starts to reach 91C, and as I said above, there are cases where some of them, in warm environments, even shut down.

AMD cheaped out on their cooler, that is a fact, even knowing about the thermal density issue... and now they come out with "oh, it's fine."
They did the same in the CPU department.



http://imgur.com/a/XJwc1dx


It's all about profits with these corporations.
We are living in a time when truth has been so diminished in value that even those at the top are quite comfortable with truth being whatever they can convince people to believe.


----------



## medi01 (Aug 13, 2019)

Vayra86 said:


> Bought one yet? You were waiting and they're out, what's keeping you? After all, ref is 'just fine'


No. Thanks for asking.
I need to complete a woodworking project, for there to even be a place for a PC with monitor (my current something is hooked to a TV and that's not the way I'd like to play games).
Besides, AIBs are not really available yet.



Vayra86 said:


> Now, imagine what happens with a bit of dust...


Clearly nothing, but who cares about ref cards anyway.



Vayra86 said:


> Actually... not having headroom while still having lower temps is a clear sign the card is clocked straight to the limit out of the box...


Actually, the talk was about thermal design and the horrors that Nvidia GPU owners feel, for some reason, on behalf of 5700 XT ref GPU owners. 

Now that we've covered that, NV's 2070 AIBs (I didn't check others) aren't great OCers either; the diff between ref and AIB performance is also similar between brands.


----------



## notb (Aug 13, 2019)

R0H1T said:


> What BS, you're making it sound like* AMD GPUs are unusable garbage* & Nvidia not only outstrips it across the board but also in every price bracket, every game you can think of! Which is of course BS as well


By all means, AMD GPUs aren't unusable. That's not what I said.
But these GPUs aren't mainstream. To be mainstream, they have to offer more than just performance/price ratio. There's so much to improve in thermals, efficiency and stability. In marketing and support as well.
Nvidia's cards are so much more attractive, because Nvidia sells a polished, complete product. AMD sells a DIY project.

This becomes obvious when you look at what some of AMD's custom GPU clients can achieve. Apple, Sony, Microsoft and soon Samsung - they're offering AMD's chips in a much easier to digest form.
Of course AMD could make more robust products. They could do better pre-launch testing, improve compatibility and drivers. And work on relations with partners to deliver AIB cards and OEM systems on the day of launch (like Nvidia and Intel do). But that would raise costs, and - at least for now - AMD wants to remain the cheaper alternative. It's a conscious decision.



Zubasa said:


> Also starting and stopping the fans more often than otherwise is actually slightly detrimental to the life span of the fans.


First of all: is this your intuition or are there some publications to support this hypothesis? 

Second: you seem a bit confused. The passive cooling does not increase the number of times the fan starts. The fan is not switching on and off during gaming.
If the game applies a lot of load, the fan will be on during the whole session. Otherwise the fan is off.
So the number of starts and stops is roughly the same. It's just that your fan starts during boot and mine during game launch. So I don't have to listen to it when I'm not gaming (90% of the time).

In fact it actually decreases the number of starts for those of us who don't play games every day.


----------



## eidairaman1 (Aug 13, 2019)

er557 said:


> I wouldn't run furmark on this card unless I want to cook breakfast



Furmark is trash on any card



Axaion said:


> Yeah no thanks AMD, i dont wish to have hearing damage because of your poor cooler design


Please give me a break with that crap. Try a server fan.

Tbf all cards use crap thermal compound/pads, why? Cheap in bulk.


----------



## killster1 (Aug 13, 2019)

TheinsanegamerN said:


> Common sense and physics. Use your brain.
> 
> A device has a max rated limit. This is the max it can take before IMMEDIATE damage occurs. Long term damage does not play by the same rule. Whenever you are dealing with a physical product, you NEVER push it to 100% limit constantly and expect it to last. This applies to air conditioners, jacks, trucks, computers, tables, fans, anything you use on a daily basis. Like I said, my car can do 155 MPH. But if I were to push it that fast constantly, every day, the car wouldnt last very long before experiencing mechanical issues, because it isnt designed to SUSTAIN that speed.
> 
> ...



Really, what damage would happen to your car at 155 MPH daily? Do you perhaps have 3 gears? A small motor struggling to get to 155? I'd say letting a car sit idle would do more damage than running most cars at 155.


Getting from 0 to 100 MPH is where you're going to be doing the most 'damage': if you do it in a quarter mile, you're really stressing the car, but if you take 20 miles to get to that speed, your wear and tear is much less, due to less torque. Once you get to that speed, it doesn't much matter if you're driving a muscle car or a Prius, as long as the overdrive gear is set up to sip fuel (or pull juice from the battery) just enough to overcome the drag at 100 MPH.


----------



## efikkan (Aug 13, 2019)

I don't care what excuses they come up with, any sustained temperatures in the range of 100-110°C can't be good for long term reliability of the product. And this goes for any brand, not just AMD.

We have to remember that most reviews are conducted on open test benches or in open cases, while all customers will run these in closed cases, and even the best of us will not keep them completely dust-free. That's why it's important that any product has some thermal headroom when reviewed under ideal circumstances, since real-world conditions will always be slightly worse.


----------



## Vayra86 (Aug 13, 2019)

medi01 said:


> Actually, talk was about thermal design and horrors that nvidia GPU owners feel, for some reason, for 5700 XT ref GPU owners.
> 
> Now that we've covered that, NV's 2070s (I didn't check others) AIBs aren't great OCers either, diff between Ref and AIB performance is also similar between brands.



You're right, Turing clocks right to the moon out of the box as well, but still gets an extra 3-6% across the whole line - its minor, but its there, and it says something about how the card is balanced at stock. The actual 'OC' on Nvidia cards is very liberal, because boost also always punches above the specced number. And I have to say, the AIB Navi's so far look pretty good too, on a similar level even - sans the OC headroom - AMD squeezed out every last drop at the expense of a bit of efficiency, and it shows. Was that worth it? I don't know.

The problem here is that AMD once again managed to release ref designs that visibly suck, and it's not good for their brand image; it does not show the kind of dedication to their GPUs that Nvidia's managed releases do. The absence of AIB cards at launch makes that problem a bit more painful. And it's not a first; it goes on, and on. In the meantime, we are looking at a 400-dollar card. It's not strange to expect a bit more.

Oh, and by the way, I said similar stuff about the Nvidia Founders Edition when Pascal launched, but the difference there was that Pascal and GPU Boost operated at much lower temps. And even then the FEs still limited performance a bit.


----------



## jmcosta (Aug 13, 2019)

Vayra86 said:


> You're right, Turing clocks right to the moon out of the box as well, but still gets an extra 3-6% across the whole line - its minor, but its there, and it says something about how the card is balanced at stock. The actual 'OC' on Nvidia cards is very liberal, because boost also always punches above the specced number. And I have to say, the AIB Navi's so far look pretty good too, on a similar level even - sans the OC headroom - AMD squeezed out every last drop at the expense of a bit of efficiency, and it shows. Was that worth it? I don't know.



The Turing chips have small OC headroom, but the performance gain is almost 10%, which is the opposite of what you see with Navi.


----------



## Sithaer (Aug 13, 2019)

Honestly, I don't care about this 'issue', and I don't believe for a second that Nvidia or Intel doesn't have the same stuff going on anyway.

In the past ~10+ years I've only had 2 cards die on me, and both were Nvidia cards, so there's that.

I don't care about ref/blower cards either; whoever buys those should know what they are buying instead of waiting some time to get the 'proper' models.

I'm planning to buy a 5700 but I'm not in a hurry; I can easily wait till all of the decent models are out and then buy one of them (Nitro/Pulse/Giga G1, probably).


----------



## Aerpoweron (Aug 14, 2019)

Every Vega has the T-junction temp sensors. GPU-Z showed them, which confused a lot of people, so for one GPU-Z version they chose not to show them by default, though you could enable that. 

And don't forget there is still the usual GPU temperature, which shows the temperatures we are used to.

We could only compare Nvidia and AMD cards if we had a sensor with a lot of temp zones that we could put between the GPU die and the cooler. Then we could see the hot spots no matter who built the card.


----------



## Th3pwn3r (Aug 14, 2019)

Ultimately, what I and others should care about is the power-consumption-to-performance ratio, BUT with AMD being on a smaller node than Nvidia, I hoped for and expected much better results. Oh well.


----------



## Zubasa (Aug 14, 2019)

notb said:


> First of all: is this your intuition or are there some publications to support this hypothesis?
> 
> Second: you seem a bit confused. The passive cooling does not increase the number of times the fan starts. The fan is not switching on and off during gaming.
> If the game applies a lot of load, the fan will be on during the whole session. Otherwise the fan is off.
> ...


First of all, motors draw the maximum amount of current when they start, and this heats up the wire windings in the motor.
Extra starts and stops mean extra thermal cycles for the wires. This is similar to the concern other members have raised about the solder joints of the GPU.
Then there is the wear on the bearings, depending on the type. Rifle bearings and fluid dynamic bearings require a certain speed to get the lubricant flowing.
This means that at start-up there are parts of the bearing with very little lubrication, which causes extra wear on the bearing.

Now, because the fan blades are rather light loads, the motor gets up to speed quickly and the effects are minimal.
That is why I said it is only slightly detrimental to fan life span.
Shutting the fans off at idle is for noise reasons and nothing else; that is exactly what I said in my post, thanks for repeating my point.

No, not a fact: it certainly doesn't decrease the number of starts, as the GPU fans will spin up at least once on boot.
Also, depending on the design of the card, some GPUs will start the fans on video playback due to the GPU heating up under the hardware-acceleration load.
So the best-case scenario is the same number of start cycles.


----------



## R0H1T (Aug 14, 2019)

Vayra86 said:


> You really need to clarify whether you actually have a point or just want to keep this slowchat going with utter bullshit. The numbers speak for themselves, what are you really arguing against? That AMD is a sad puppy not getting enough love?


So that's your argument, huh? What's your data point for a 110°C "hotspot" temperature being (always) bad, given we have absolutely no reference, especially at 7nm, nor do we know if 110°C is sustained for any given length of time? Do you, for instance, have data about *hotspots* on a 9900K @5GHz or above? And how about this ~ Temperature Spikes Reported on Intel's Core i7-7700, i7-7700K Processors

So you come with absolutely no data and lots of assumptions, then ignore historical trends & call whatever I'm saying utter BS, great 

Could AMD have done a better job with the cooling ~ sure. Do we know that the current solution will fail in the medium to long term? You have absolutely 0 basis to claim that, unless you know more than us about this "issue" or any other on similar products from the competitors.


----------



## Bytales (Aug 14, 2019)

Zubasa said:


> It is hard to compare to the competition, because nVidia GPUs do not have a TJunction sensor at all.
> Without knowing where the Temp sensor on nVidia GPUs is located, there really is no valid comparison.
> The edge temp on AMD GPUs, aka the "GPU" readout, is much closer to what you typically expect.
> 
> ...



Yeah, people never bother to read the whole text; they only see "AMD, 110 degrees" and then start complaining.
You moronic faqs, why bother wasting energy commenting at all! You are in no position to understand squat, so please go do something else with your life instead of spamming us on the forums here!


----------



## Anymal (Aug 14, 2019)

Almost no OC headroom since the 7970 GHz Edition. The OC dream, the watercooled Fury X? No OC AIB cards. Brand-new 7nm Navi, and here we are: underwhelming, already OC'd straight from the production lines. RDNA 2.0 to the rescue in 2020.


----------



## medi01 (Aug 14, 2019)

Vayra86 said:


> You're right, Turing clocks right to the moon out of the box as well, but still gets an extra 3-6% across the whole line - its minor, but its there, and it says something about how the card is balanced at stock.


When I checked it was 4% for ASUS and 3% for MSI.



Vayra86 said:


> The actual 'OC' on Nvidia cards is very liberal, because boost also always punches above the specced number.


Boost doesn't count as OC in my book. It's part of the standard package, and W1zzard keeps explicitly stating the clock range in recent reviews.
The performance we get at the end of the day, includes that boost.
You can't count it as something being added on top.




Vayra86 said:


> The problem here is AMD once again managed to release ref designs that visibly suck, and its not good for their brand image, it does not show dedication to their GPUs much like Nvidia's releases are managed.


More of a PR issue than anything practical. We don't even know what the "spot" temps of NV cards are.




Vayra86 said:


> The absence of AIB cards at launch makes that problem a bit more painful. And its not a first - it goes on, and on. In the meantime, we are looking at a 400 dollar card. Its not strange to expect a bit more.


That's simply caused by playing a catch-up game.
And, frankly, I'd rather learn what's coming 1-2 months in advance than wait for ref and AIB cards to hit together. (I don't even get what ref cards are for, other than that.)



Vayra86 said:


> Oh and by the way, I said similar stuff about the Nvidia Founders when Pascal launched, but the difference there was that Pascal and GPU Boost operated at much lower temps. And even thén the FE's still limited performance a bit.


Ok, let me re-state this again:
1) AMD used a blower type (stating that is the only way they can guarantee the thermals)
2) Very small perf diff between AIB and Ref proves that *even ref 5700 XT is not doing excessive throttling, despite being 20+ degrees hotter*.
3) "Spot temperature" is just a number that makes sense only for ref cards (who buys them?), and even there it is not causing practical problems, although I admit that @efikkan has a point and it might have a bad impact on the card's longevity. Still: "ref card, who cares."

In short: possibly bad impact on card longevity, but we are not sure. Definitely not having serious performance impact. We don't even know what values are for NV, as there is no exposed sensor.


----------



## BorgOvermind (Aug 14, 2019)

That's exactly what was said about the 1070 GPU, which indeed could exceed 100C; in notebooks that in turn overheated the CPU too much, so maintenance was due anyway.



er557 said:


> Radeons have always ran hot, but this is ludicrous.


And nVs didn't?
lol


----------



## Frick (Aug 14, 2019)

efikkan said:


> I don't care what excuses they come up with, any sustained temperatures in the range of 100-110°C can't be good for long term reliability of the product. And this goes for any brand, not just AMD.
> 
> We have to remember that most reviews are conducted on open test benches or in open cases, while all customers will run these in closed cases, and even the best of us will not keep it completely dust free. That's why it's important that any product have some thermal headroom when reviewed under ideal circumstances, since real world conditions will always be slightly worse.



The thing is, we don't know that. If these cards start to drop dead in a year or so and it is confirmed to be temperature related (as opposed to sloppy manufacturing), then we'll know. Until then it's more or less qualified guesswork. And modern chips rarely sustain any load; the voltage regulation and boost thingies are way too sophisticated for that.


----------



## B-Real (Aug 14, 2019)

las said:


> The GPU is not the only thing using power...
> 
> 
> 
> ...



The RTX 2060 uses more power than an RX 5700 in gaming on average while performing worse. So what did you want to say?


----------



## notb (Aug 14, 2019)

Frick said:


> The thing is, we don't know that. If these cards start to drop dead in a year or so and it is confirmed to be temperature related (as opposed to sloppy manufacturing), then we'll know. Until then it's more or less qualified guesswork. And modern chips rarely sustain any load; the voltage regulation and boost thingies are way too sophisticated for that.


Well, many people on this forum are convinced that high temperatures are killing Intel CPUs. Do you want to tell them that AMD GPUs are magically resistant to 100°C? :-D


----------



## Xuper (Aug 14, 2019)

Read this:

> Why 110-Degree Temps Are Normal for AMD's Radeon 5700, 5700 XT - ExtremeTech (www.extremetech.com)
> AMD's new 5700 XT and 5700 can hit higher temps than some people are comfortable with, but the company changed how and where it measures temperatures with Navi.




This 110 °C hotspot was there long before Navi/Radeon VII, but people didn't understand it. Here, someone put it very clearly:



> Under the old way of measuring things, AMD had one value to work with. It established a significant guard band around its measurements and left headroom in the card design to avoid running too close to the proverbial ragged edge.
> 
> Using this new method, AMD is able to calibrate its cards differently. They don't need to leave as much margin on the table, because they have a much more accurate method of monitoring temperature. The GPU automatically adjusts its own voltage and frequencies depending on the specific characteristics of each individual GPU rather than preprogrammed settings chosen by AMD at the factory.
> 
> It is *also* possible that pre-Navi AMD GPUs hit temperatures above 95C in other places but that this is not reported to the end-user because there's only one sensor on the die. AMD did not say if this was the case or not. All it said is that they measured in one place and based their temperature and frequency adjustments on this single measurement as opposed to using a group of measurements.



I hope one day Intel/Nvidia/AMD follow this path and allow us to see the temps of the whole sensor array.









> Views of Pluto Through the Years (www.nasa.gov)
> This animation combines various observations of Pluto over the course of several decades.
				



This is the best example for those who don't understand: from 1995 to 2015, it took them 20 years to get a SHARP image of the planet Pluto.
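The max-of-sensors idea the ExtremeTech quote describes can be sketched in a few lines. This is a toy illustration with made-up sensor readings, not AMD's actual firmware logic:

```python
# Toy illustration: "hotspot"/junction temp is just the max over a network of
# die sensors, while the legacy "GPU temperature" was a single fixed diode.
# Readings below are made up for the example.
sensor_readings_c = [78.0, 84.5, 91.0, 110.0, 96.5, 88.0]

legacy_gpu_temp_c = sensor_readings_c[0]  # one fixed-location diode
hotspot_c = max(sensor_readings_c)        # hottest sensor anywhere on the die

JUNCTION_LIMIT_C = 110.0
throttle = hotspot_c >= JUNCTION_LIMIT_C  # clocks ramp until ANY sensor hits 110 °C

print(legacy_gpu_temp_c, hotspot_c, throttle)  # 78.0 110.0 True
```

Note how the single fixed diode would report a comfortable 78 °C while one corner of the die is already at the limit; that gap is exactly why the two numbers confuse people.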


----------



## Aerpoweron (Aug 14, 2019)

medi01 said:


> Boost doesn't count as OC in my books. It's part of the standard package, and wizard keeps explicitly stating the clock range in recent reviews.
> The performance we get at the end of the day, includes that boost.
> You can't count it as something being added on top.



I think the differences between Boost and Overclock have been blending together for some time now. With its junction temp handling and Precision Boost, AMD can get close to a good overclock with its boost features alone.
I find it amazing that they have 32 sensors on Vega and 64 on the Radeon VII. I wonder how many the Navi GPUs have. If we got a tool that rendered the temps as a colored 2D heatmap, we could see where and when each part of the GPU is utilized.
And think about the huge GPU dies of the Nvidia RTX cards. They likely have a lot of headroom with a junction-temp optimization.

People complain about the "not so good" 7 nm GPUs, but it is a new process, and it will take some time to get the best out of that manufacturing node. We will see how well Nvidia's architecture scales on 7 nm when it is released.


----------



## Frick (Aug 14, 2019)

notb said:


> Well. Many people on this forum are convinced that high temperatures are killing Intel CPUs. Do you want to tell them that AMD GPUs are magically resistant to 100*C? :-D



I've never seen that claim. And yes, if that is what the spec says.


----------



## ssdpro (Aug 14, 2019)

110 seems pretty hot. Not so hot it dies within warranty though.


----------



## ZoneDymo (Aug 14, 2019)

cucker tarlson said:


> rubbing eyes
> 
> so how many ppl are still running 7970s/r9 2xx cards around here,which are 6-8 years old.



My sisters are still running my old HD6950's sooo yeah, oh and a friend uses an HD7950 still today purely because the prices are ridiculous so upgrading does not make sense.


----------



## Vayra86 (Aug 14, 2019)

R0H1T said:


> So that's your argument huh? What's your data point for a 110°C "hotspot" temperature being (always) bad, given we have absolutely no reference, especially at 7nm, nor do we know if 110°C is sustained for any length of time? Do you for instance have data about *hotspots* on a 9900K @5GHz or above? And how about this ~ Temperature Spikes Reported on Intel's Core i7-7700, i7-7700K Processors
> 
> So you come with absolutely no data and lots of assumptions, ignore historical trends, and call whatever I'm saying utter BS. Great.
> 
> Could AMD have done a better job with the cooling? Sure. Do we know that the current solution will fail medium to long term? You have absolutely zero basis to claim that, unless you know more than us about this "issue" or any other on similar products from the competitors.



You oughta scroll back a bit; I covered this at length. Memory ICs reach 100 °C, for example, which is definitely not where you want them. That heat affects other components, and none of this helps chip longevity. The writing is on the wall. To each his own what he thinks of that, but it's not looking comfy to me.

By the way, your 7700K link kinda underlines that we do know about the 'hot spots' on Intel processors, otherwise you wouldn't have that reading. But these Navi temps are not 'spikes'. They are sustained.

We can keep going in circles about this, but the idea that Nvidia ref cards also hit these temps is the same guesswork; meanwhile we do have much better temp readings from all the other sensors on a ref FE board - including memory ICs. And note: FEs throttle too, but I've seen GPU Boost in action and it does the job a whole lot better; as in, it will rigorously manage voltages and temps instead of 'pushing for the limit' like we see on these Navi boards. This is further underlined by the OC headroom those cards still have. There are more than enough 'data points' available...

Besides, nothing is really new here - AMD's ref cards have always been complete junk.



medi01 said:


> "ref card, who cares"
> 
> In short: possibly bad impact on card longevity, but we are not sure. Definitely not having serious performance impact. We don't even know what values are for NV, as there is no exposed sensor.



We are never sure until after the fact. I'll word it differently. The current state of affairs does not instill confidence. And no, I don't 'care' about ref cards either, but I pointed that out earlier; AMD should, especially when AIB cards are late to the party. These kinds of events kill their momentum for any GPU launch, and it keeps repeating itself.


----------



## John Naylor (Aug 14, 2019)

ZoneDymo said:


> and its 100 - 150 dollars cheaper.... so why are you comparing the two?
> If anything you should compare it to the RTX2060 Super (like in your link...was the 2070 a typo?) and then the 5700XT is overall the better option.



To my eyes the 5700 XT should be compared with the 2070, and the 5700 with the 2060... no Supers.
With both cards overclocked, the AIB MSI 2070 Gaming Z (not Super) is still about 5% faster than the MSI Evoke 5700 XT... so if the price difference is deemed big enough (-$50) I can see the attraction... but the 5700 XT being 2.5 times as loud is a deal breaker at any price. The Sapphire is slower still, but it's significantly quieter.

MSI 2070 = 30 dBA
MSI 5700 XT = 43 dBA ... +13 dBA = 2.46 times as loud
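The 2.46× figure follows the common psychoacoustic rule of thumb that perceived loudness doubles for every +10 dB; a quick sketch (illustrative only, not a measurement standard):

```python
def loudness_ratio(quiet_dba: float, loud_dba: float) -> float:
    """Perceived loudness roughly doubles per +10 dB (rule of thumb)."""
    return 2 ** ((loud_dba - quiet_dba) / 10)

print(round(loudness_ratio(30, 43), 2))  # 2.46
```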



			https://tpucdn.com/review/msi-radeon-rx-5700-xt-evoke/images/relative-performance_2560-1440.png
		


MSI 5700 XT = 100%
Reference 2070 = 96%



			https://tpucdn.com/review/msi-radeon-rx-5700-xt-evoke/images/overclocked-performance.png
		

MSI Evoke Gain from OC = 100% x (119.6 / 115.1) = 103.9



			https://tpucdn.com/review/msi-geforce-rtx-2070-gaming-z/images/overclocked-performance.png
		

MSI 2070 Gain from overclocking = 96% x (144.5 / 128.3) = 108.1

108.1 / 103.9 = + 4.1%

The Gaming Z is $460; the Evoke is suggested at $430... and will likely be higher for the first few months.

If we ask, "Is a ~5% increase in performance worth a 7% increase in price?", it would be to me. But with a $1,200 build versus a $1,230 build, that's a ~5% increase in speed for a 2.5% increase in price, and that's the more appropriate comparison, as the whole system is faster and the card doesn't deliver any fps sitting on your desk. However, the 800-pound gorilla in the room is the 43 dBA, 2.5-times-as-loud thing.

I think the issue here is that, from what we have seen so far, most of the 5700 XT cards are not true AIB cards but more like the EVGA Black series... pretty much a reference PCB with an AIB cooler. Asus went out and beefed up the VRMs with an 11/2+1 design versus the 7/2 reference. They didn't cool nearly as well as the MSI 5700 XT or 2070. They did a lot better on "outta the box" performance, but OC headroom was dismal. As the card was so aggressively OC'd out of the box, manual OC'ing added just 0.7% performance.


Asus 5700 XT Strix = 100% x  (118.3 / 117.4) = 100.77
MSI 2070 Gaming Z = 95% x  (144.5 / 128.3) = 107.00

107.00 / 100.77 = + 6.18 %
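The arithmetic above can be reproduced in a couple of lines (numbers taken from the TPU charts linked; `oc_score` is just an illustrative helper name):

```python
def oc_score(base_rel_pct: float, oc_fps: float, stock_fps: float) -> float:
    """Scale a card's relative-performance index by its overclocking gain."""
    return base_rel_pct * (oc_fps / stock_fps)

evoke = oc_score(100, 119.6, 115.1)    # MSI 5700 XT Evoke  -> ~103.9
gaming_z = oc_score(96, 144.5, 128.3)  # MSI 2070 Gaming Z  -> ~108.1
strix = oc_score(100, 118.3, 117.4)    # Asus 5700 XT Strix -> ~100.8

print(round(evoke, 1), round(gaming_z, 1), round(strix, 1))
```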

Interesting, tho, that Asus went all out spending money on the PCB redesign, when MSI (and Sapphire) look like they used a cheaper memory controller than the reference card, and yet MSI hit 119.6 in the OC test whereas Asus only hit 118.3. Still, tho it will surely cost closer to what the premium AIB 2070s cost due to the PCB redesign, and tho it's 7 °C hotter and 6% slower than the MSI 2070... it's only 6 dBA louder (performance BIOS). To get lower (+2 dBA), the performance drops and temps go up to 82 °C.

Tho the Asus is 6% slower and the MSI is 5% slower than the MSI 2070... if I couldn't get a 2070 and was looking to choose a 5700 XT, it would have to be the Asus... but not at $440.

As for the hot spots, I'm kinda betwixt and between... Yes, I'm inclined to figure that I have neither the background nor the experience to validate or invalidate what they are saying... but in this era, lying to everybody seems to be common practice. In recent memory we have AMD doing the "it was designed that way" routine when the 6-pin 480s were creating fireworks... and then they issued a soft fix, followed by a move to 8-pin cards. EVGA said "we designed it that way" when 1/3 of the heat sink missed the GPU on the 970... and again, shortly thereafter they issued a redesign. Yet again, when the EVGA 1060s thru 1080s started smoking, the "we designed it that way" mantra was the first response, and then there was the recall / do-it-yourself kit / redesign with thermal pads.

All I can say is "I don't know... I'm in no position to judge. Ask me again in 6 months after we get user feedback." But I'm also old enough to remember AMD having fun at Nvidia's expense, frying an egg on the GTX 480.


----------



## Prima.Vera (Aug 14, 2019)

Just admit it AMD. You overclock and overvolt those GPUs like crazy just to gain 5-6% more performance in order to _barely_ compete with nVidia. 
How much is the power consumption when those GPUs pass 100°C ??


----------



## Zubasa (Aug 15, 2019)

Prima.Vera said:


> Just admit it AMD. You overclock and overvolt those GPUs like crazy just to gain 5-6% more performance in order to _barely_ compete with nVidia.
> How much is the power consumption when those GPUs pass 100°C ??


They don't. The GPUs on average reach high 80s to 90-ish degrees, and the power consumption figures are in the reviews.
Every modern GPU has a target set in the BIOS; without any overclock it just reaches the power target and stays there.
It is not like the old days, where GPUs ran themselves into the ground when you ran Furmark.




----------



## killster1 (Aug 15, 2019)

Sithaer said:


> Honestly I don't care about this 'issue' and I don't belive it for a second that Nvidia or Intel doesn't have the same stuff going on anyway.
> 
> In the past ~10+ years I only had 2 cards die on me and both were Nvidia cards so theres that.
> 
> ...



110 °C is too hot. Hopefully they come out with a 5800 that solves it, but I don't think they care, as long as they have your money till the warranty is expired.

I have *only had Nvidia cards die too*, but that's because they are always the best bang for buck. (I only bought a few AMD GPUs: a 9800 that could unlock shaders? or a 9700 vanilla? I forget, and the 1950xtxtxtx? Still have it on the wall of my garage. Most deaths are from a simple cap that I could have replaced, but by the time they die I would rather hang them on the wall than repair and use them. Maybe 30 motherboards, some with CPUs and coolers intact, on my wall, and 20 gfx cards over the years.)

It's strange to me why people want a 5700 anyway; the *1080 Ti* has been out for how long? I purchased two of them used long ago for *$450 and $500 (just about 2 years ago to the day)* and they seem to run better than the new 5700 XT in every scenario. So it's people that love the AMD brand and are hoping for a better future?

If I were to purchase a card today it would be an open-box 2080; I think they run $550? Too bad nothing has HDMI 2.1, so I will just sit and wait; next gen after next gen is still so slow and overpriced. (I'd be happy with 8K@120Hz hehehe)


----------



## notb (Aug 15, 2019)

Frick said:


> I've never seen that claim. And yes, if that is what the specs says.


No? You've never seen a topic where people criticize Intel for 80*C+ and praise Ryzen for being cooler? Maybe some bad TIM discussion? Anything? :-D


----------



## Frick (Aug 15, 2019)

notb said:


> No? You've never seen a topic where people criticize Intel for 80*C+ and praise Ryzen for being cooler? Maybe some bad TIM discussion? Anything? :-D



TIM discussions, but not straight-up "heat is murdering Intel CPUs". But then I don't pay much attention.


----------



## las (Aug 15, 2019)

B-Real said:


> The RTX 2060 uses more power than an RX 5700 in gaming on average while performing worse. So what did you want to say?



The 5700 uses more power than the 2060.

It's clear that AMD maxed these chips out completely, just like the Ryzen chips, to look good in reviews, but there's no OC headroom as a result. Which is why custom versions perform pretty much identically to reference and overclock by about 1.5% on average.


----------



## medi01 (Aug 15, 2019)

Vayra86 said:


> We are never sure until after the fact. I'll word it differently.


Well, actually no. We are pretty sure that if someone falls from a height of 100 meters onto a concrete surface, he/she will inevitably die.

Your "it makes everything hotter" example is moot here, as we are talking about only 1 out of 64 sensors reporting that temp.
The overall temp of the chip in TPU's tests of the ref card **was 79 °C**, +4 degrees if OCed.
Nowhere near 110.
Only 6 degrees higher than the 2070's (blower ref vs AIB-ish ref).



Vayra86 said:


> AIB cards are late to the party


I don't see it that way. No matter who, what, and where, for the first couple of months (or longer) there are shortages and price gouging, regardless of when AIBs come.



killster1 said:


> 110 °C is too hot. Hopefully they come out with a 5800 that solves it


There is nothing to fix besides people's perception.
We are talking about a 79 °C temp overall, with one out of a gazillion "spot" sensors reporting that particular temp.
We have no idea how high those temps would be in NV's case, but likely also over 100.


----------



## R0H1T (Aug 15, 2019)

Vayra86 said:


> You oughta scroll back a bit, I covered this at length - Memory ICs reach 100C for example, which is definitely not where you want them. That heat affects other components and none of this helps chip longevity. The writing is on the wall. To each his own what he thinks of that, but its not looking comfy to me.
> 
> By the way,* your 7700K link kinda underlines that we know about the 'hot spots' on Intel processors*, otherwise you wouldn't have that reading. But these Navi temps are not 'spikes'. They are sustained.


These are hotspots, not the entire die's temp! Did you even read what the blog post said?





> Paired with this array of sensors is the ability to identify the 'hotspot' across the GPU die. *Instead of setting a conservative, 'worst case' throttling temperature for the entire die, the Radeon™ RX 5700 series GPUs will continue to opportunistically and aggressively ramp clocks until any one of the many available sensors hits the 'hotspot' or 'Junction' temperature of 110 degrees Celsius. Operating at up to 110C Junction Temperature during typical gaming usage is expected and within spec*. This enables the Radeon™ RX 5700 series GPUs to offer much higher performance and clocks out of the box, while maintaining acoustic and reliability targets.
> 
> 
> 
> *We provide users with both measurements – the average GPU Temperature and Junction Temperature – so that they have the best and most complete visibility into the operation of their Radeon™ RX 5700 series GPUs*.


No it doesn't. The forum poster who claimed the "same" about Intel was doing this with an (*entire*) bare *die* (or delid?) and *water cooling*, IIRC. These are *hotspots*. I mean, are you intentionally trying to be obtuse, or do you have an axe to grind here?


----------



## Xuper (Aug 15, 2019)

Some people have no idea how the hotspot temp is measured. *If the overall temp reaches 100 °C, this means the hotspot temp is above 120 °C.* Each GPU chip has multiple layers, and because of how heat transfers between them, the sensor array in the middle layer always reports the highest temp. The top and bottom layers run cooler than the junction temp. See that?
According to GlobalFoundries:



			https://www.globalfoundries.com/sites/default/files/product-briefs/product-brief-14lpp-14nm-finfet-technology.pdf
		


Standard temperature range: -40 °C to 125 °C

AMD set it to 110 °C, so no processing unit's temp in any layer may exceed 110 °C, and you guys scream like kids, as if the house is on fire?
What's the junction temp for a Turing card? I bet Nvidia doesn't want to reveal it.


----------



## Sithaer (Aug 15, 2019)

killster1 said:


> 110 °C is too hot. Hopefully they come out with a 5800 that solves it, but I don't think they care, as long as they have your money till the warranty is expired.
> 
> I have *only had Nvidia cards die too*, but that's because they are always the best bang for buck. (I only bought a few AMD GPUs: a 9800 that could unlock shaders? or a 9700 vanilla? I forget, and the 1950xtxtxtx? Still have it on the wall of my garage. Most deaths are from a simple cap that I could have replaced, but by the time they die I would rather hang them on the wall than repair and use them. Maybe 30 motherboards, some with CPUs and coolers intact, on my wall, and 20 gfx cards over the years.)
> 
> ...



Sorry for the late reply.

In my case it's because I don't buy 'high end' hardware; I'm more of a budget-to-midrange user, so I never really considered the 1080 and cards around that range when they were new/expensive.

I pretty much always use my cards for 2-3 years before upgrading, and this 5700 will be my biggest/most expensive upgrade yet; it will be used for 3 years at least.
I don't mind playing at 45-50 fps and dropping settings to ~medium when needed, so I can easily last that long. I probably wouldn't even bother upgrading from my RX 570 yet if I still had my 1920x1080 monitor, but this 2560x1080 res is more GPU heavy and some new games are kinda pushing it already.
If Borderlands 3 runs alright with the 570 I might even delay that purchase, since it will be my main game for a good few months at least.

+Problem is that I don't want to buy a card with 6GB VRAM, because I prefer to keep the texture setting at ~high at least, and with 3-4 years in mind that's gonna be a problem (already ran into this issue with my previous cards).
Atm all of the 8GB Nvidia cards are out of my budget (2060S), and I'm not a fan of used cards, especially when I plan to keep it for long (having no warranty is a dealbreaker for me).
Dual-fan 2060S models start around ~$500 here with tax included, blower 5700 non-XT at ~$410, so even the aftermarket models will be cheaper, and that's the max I'm willing to spend.


My cards were like this, at least what I can remember:

AMD 'Ati' 9600 Pro, 6600 GT, 7800 GT, 8800 GT (died after 2.5 years), GTS 450 which was my warranty replacement, GTX 560 Ti (died after 1 year, had no warranty on it...), AMD 7770, GTX 950 Xtreme, and now the RX 570.
That 950 is still running fine at my friend's, who bought it from me; it's almost 4 years old now.

My bro had more AMD cards than me, now that I think of it; he even had a 7950 Crossfire system for a while, and that ran 'hot'.
If I recall correctly, his only dead card was an 8800 GTX; all of his AMD cards survived somehow.


----------



## Vayra86 (Aug 15, 2019)

Frick said:


> TIM discussions, but not straigt up "heat is murdering intel CPU's". But then I don't pay much attention.



I've found myself dialing back my OC multiple times due to temps on Intel CPUs. It's not pretty, and it's a sign of the times as performance caps out. And it's STILL not pretty on a hot summer day - still seeing over 85 °C regularly. Some years back the consensus was that 80 °C was just about the max 'safe' temp; go higher continuously and you may suffer noticeable degradation in the useful life of the chip. 'In spec' is not the same as 'safe'. Maybe "murder" should be rephrased to "a slow, painful death".



R0H1T said:


> These are hotspots, not the entire die's temp! Did you even read what the blog post said?



Do YOU even read? You say we don't know about hotspots on Intel CPUs, and in the same sentence you link that 7700K result _with hotspot readings_. I also pointed out that Intel has _already reported TJunction_ for quite a while now.

Gotta stop acting like AMD is doing something radically new. It's clear as day: the GPU has no headroom, it constantly pushes itself to the max temp limit, and while doing so, heat at the memory ICs gets to max 'specced' as well. So what if the die is cooler; it still won't provide any headroom to push the chip further. The comparisons with Nvidia therefore fall flat completely as well, because Nvidia DOES have that headroom, and does not suffer from the same heat levels elsewhere on the board.

It's not my problem you cannot connect those dots, and you can believe whatever you like to believe... to which the follow-up question is: did you buy one yet? After all, they're fine and AIB cards don't clock higher, so you might as well... GPU history is full of shitty products, and this could well be another one (on ref cooling).


----------



## Vya Domus (Aug 15, 2019)

Xuper said:


> What's junction temp for Turning card ? I bet Nvidia doesn't want to reveal it.



Admittedly, it's a smart choice. This way the simple-minded folk won't be bothered by scary numbers that they don't understand.

AMD has always been the most transparent about what their products are doing under the hood, but by the same token this drives away people that don't know what to do with this information. It's a shame.


----------



## INSTG8R (Aug 15, 2019)

Vya Domus said:


> Admittedly, it's a smart choice. This way the simple minded folk wont be bothered by scary numbers that they don't understand.


As a Vega owner I learned not to look at the Hotspot, it just makes you sad. That said I run a custom fan curve in my Nitro+ and keep mine around 90C


----------



## Xuper (Aug 15, 2019)

NVIDIA GPU maximum operating temperature and overheating | NVIDIA (nvidia.custhelp.com)






> NVIDIA GPUs are designed to operate reliably up to their maximum specified operating temperature. This maximum temperature varies by GPU, but is generally in the *105C *range (refer to the nvidia.com product page for individual GPU specifications). If a GPU hits the maximum temperature, the driver will throttle down performance to attempt to bring temperature back underneath the maximum specification. If the GPU temperature continues to increase despite the performance throttling, the GPU will shutdown the system to prevent damage to the graphics card. Performance utilities such as EVGA Precision or GPU-Z can be used to monitor temperature of NVIDIA GPUs. If a GPU is hitting the maximum temperature, improved system cooling via an added system fan in the PC can help to reduce temperatures.



If one spot is under 105 °C then it's OK until throttling kicks in. This article doesn't talk about all the spots together, but rather about any one spot.


----------



## gamefoo21 (Aug 15, 2019)

cucker tarlson said:


> rubbing eyes
> 
> so how many ppl are still running 7970s/r9 2xx cards around here,which are 6-8 years old.



I have a system that dailies an HD 5830, and a bunch of HD 5450s floating around. The other half has a 650 Ti in their system. My T61 has an HD 5450 in the dock and an NVS 140M on its mobo.

I know a guy who still games on his R9 290, with 390X bios.

I run a Fury X, and it was among the first 99 boards made. It's running a modified BIOS that lifts the power limit, undervolts, tightens the HBM timings, and performs far better than stock.

The Fury series, like the Vegas, needs water cooling to perform its best. Vega 64/V2/56 on air is just disappointing, because they are loud and/or throttle everywhere.

I have had a few GPUs that were bitten by the NV soldergate...

The Toshiba lappy's 7600 GT: replaced, plus the increased-clamp-pressure mods they directed us to use.

The Thinkpad T61 and its Quadro NVS 140M: Lenovo made Nvidia remake the GPU with proper solder. I hunted down and acquired one for myself.

But ATI/AMD aren't exempt...

My most notorious death card was a PowerColor 9600 XT... that card died within 2-3 weeks every time, and I had to RMA it 3 times. I still refuse to use anything from TUL/PowerColor because of the horrible RMA process, horrible customer service, and their insistence on using slow UPS, so I got nailed with a $100 brokerage bill every time. I sold it cheap after the last RMA; the guy messaged me, angry, a month later that it had died on him.

My uncle got 2 years out of a 2900 XT... It was BBA... lol


----------



## Vayra86 (Aug 15, 2019)

Xuper said:


> NVIDIA GPU maximum operating temperature and overheating | NVIDIA
> 
> 
> 
> ...



One of the spots we do know about is not the hottest point in an Nvidia GPU, because they throttle way earlier than that - but more importantly, they throttle more rigorously than AMD's Navi does. The 'throttle point' for an Nvidia GPU is 84 °C on the die sensor. Will there be hotter spots on it? Sure. But when 84 °C _is maintained_ and GPU Boost cannot lower it reliably with small voltage drops and by dropping boost bins in increments of 13 MHz, it will go down hard on the voltage and kick off half, or all, of your boost clock until things settle down. On top of that, it takes away a few boost bins from your highest clock - which it had already done, because temperature also makes it lose boost bins.

Now, enter Navi: if you don't adjust the fan profile, the card will simply keep bumping into the red zone, right up to max spec. There is no safeguard to kick it down a notch consistently. Like a mad donkey, it will bump its head into that same rock every time, all the time.

The way the two boost mechanisms work is still quite different. While AMD finally got a form of boost going that can utilize the headroom available, it relies on cooling far more than GPU Boost does - and what's more, it also won't boost higher if you give it temperature headroom. Bottom line, they've still got a very 'rigid' way of boosting versus a highly flexible one.

If you had to capture it in one sentence: Nvidia's boost wants to stay as far away from the throttle point as it can to do its best, while AMD's boost doesn't care how hot it gets to maximize performance, as long as it doesn't melt.
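The contrast described in this post can be caricatured as two throttle policies. This is a toy model with assumed thresholds and step sizes (84 °C, 110 °C, 13 MHz bins, as discussed in the thread); the real vendor algorithms are far more involved:

```python
BIN_MHZ = 13  # GPU Boost adjusts clocks in roughly 13 MHz "bins"

def gpu_boost_step(clock_mhz, die_temp_c, throttle_c=84.0):
    """Nvidia-style caricature: back off early and rigorously, well before the hard limit."""
    if die_temp_c >= throttle_c + 3:   # sustained overshoot: cut the clock hard
        return clock_mhz - 4 * BIN_MHZ
    if die_temp_c >= throttle_c:       # shed one boost bin at a time
        return clock_mhz - BIN_MHZ
    return clock_mhz + BIN_MHZ         # headroom left: keep boosting

def navi_step(clock_mhz, hotspot_c, junction_c=110.0):
    """Navi-style caricature: ramp until any sensor reports the 110 °C junction limit."""
    return clock_mhz + BIN_MHZ if hotspot_c < junction_c else clock_mhz - BIN_MHZ
```

In this sketch the Nvidia-style policy starts reacting 26+ degrees below its hard limit, while the Navi-style one only reacts at the limit itself, which is the "bumps into the red zone" behaviour described above.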


----------



## Xuper (Aug 15, 2019)

Vayra86 said:


> One of the spots we do know about is not the hottest point in an Nvidia GPU, because they throttle way earlier than that - but more importantly, they throttle more rigorously than AMD's Navi does. The 'throttle point' for an Nvidia GPU is 84 °C on the die sensor. Will there be hotter spots on it? Sure. But when 84 °C _is maintained_ and GPU Boost cannot lower it reliably with small voltage drops and by dropping boost bins in increments of 13 MHz, it will go down hard on the voltage and kick off half, or all, of your boost clock until things settle down. On top of that, it takes away a few boost bins from your highest clock - which it had already done, because temperature also makes it lose boost bins.



Please provide a source for that 84 °C figure. It's the first time I've heard it.


----------



## Vya Domus (Aug 15, 2019)

Vayra86 said:


> as long as it doesn't melt.



Your assumption continues to be that AMD must not know what they are doing and has set limits which are not safe, even though they have explicitly stated that they are. If their algorithm figures out it's a valid move to keep clocks up, and the power usage and temperature do not keep rising, this means an equilibrium has been reached; this conclusion is elementary.

I do not understand at all how you conclude that their algorithm must be worse because it does not make frequent adjustments like Nvidia's. If anything, this is proof their hardware is more balanced and no large adjustments are needed to keep the GPU in its desired operating point.



Vayra86 said:


> There is no safeguard to kick it down a notch consistently.



Again, if their algorithm figures out it's a valid move to do that, *this means an equilibrium has been reached*. There is no need for any additional interventions. The only safeguards needed after that are for thermal shutdown and whatnot, and I am sure they work just fine; otherwise they would all burn up from the moment they are turned on.

Do not claim their cards have no safeguards in this regard; it's simply untrue. You know better than this, come on.



Vayra86 said:


> If you had to capture it one sentence; Nvidia's boost wants to stay as far away from the throttle point as it can to do best, while AMD's boost doesn't care how hot it gets to maximize performance as long as it doesn't melt.



You are simply wrong, and I am starting to question whether or not you really understand how these things work.

They both seek to maximize performance while staying away from the throttle point as far as possible, but only if that's the right thing to do. If you go back and look at reference models of Pascal cards, they all immediately hit their temperature limit and stay there, in just the same way the 5700 XT does. Does that mean they didn't care how hot those got?

Of course, the reason I brought up Pascal is that those have the same blower coolers. Nvidia doesn't use those anymore, but let's see what happens when Turing GPUs do have that kind of cooling:






What a surprise, they also hit their temperature limit. So much for Nvidia wanting to stay as far away from the throttle point as possible, right?

*This is not how these things are supposed to work.* The goal is not just to stay as far away from the throttle point as possible; if you do that, you're going to have a crappy boost algorithm. The main concern is to maximize performance, even if that means sitting right at the throttling point.
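To put that in numbers with a deliberately crude model (a linear steady-state thermal model with invented constants, not any vendor's real behaviour): the sustained clock a given cooler can hold is higher when the controller is allowed to park the die at the limit than when it keeps a safety margin below it.

```python
# Toy comparison: sustained clock when the controller rides the junction
# limit vs. when it keeps a 10 C margin below it. The linear thermal model
# and both constants are fictional, chosen only to make the point visible.

T_AMBIENT = 40.0   # case air temperature, degrees C (fictional)
K = 0.04           # steady-state C of rise per MHz (fictional)

def sustained_clock(t_target):
    """Highest clock whose steady-state temperature equals t_target."""
    return (t_target - T_AMBIENT) / K

at_limit = sustained_clock(110.0)      # ride the throttle point
with_margin = sustained_clock(100.0)   # stay 10 C away from it

print(at_limit, with_margin)  # 1750.0 vs 1500.0 MHz in this toy model
```

In this sketch the margin-keeping policy simply leaves 250 MHz of sustained clock on the table, which is the whole argument for riding the limit.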


----------



## Vayra86 (Aug 16, 2019)

Vya Domus said:


> Your assumption continues to be that AMD must not know what they are doing and have set limits which are not safe, even though they have explicitly stated that they are. If their algorithm decides it's a valid move to keep clocks up, and power usage and temperature do not keep rising, then an equilibrium has been reached; that conclusion is elementary.
> 
> I do not understand at all how you conclude that their algorithm must be worse because it does not make frequent adjustments like Nvidia's. If anything, this is proof their hardware is more balanced and no large adjustments are needed to keep the GPU at its desired operating point.
> 
> ...



You missed the vital part where I stressed that Navi does not clock further when you give it temperature headroom, which destroys the whole theory about 'equilibrium'. The equilibrium does not max out performance at all; it just boosts to a predetermined cap that you cannot even manually OC beyond. 0.7%, which is margin of error.

And the reference cooler is balanced so that, in ideal (test bench) conditions, it can remain just within spec without burning itself up too quickly. I once again stress the memory IC temps, which, once again, are easily glossed over but very relevant here with regard to longevity. AIB versions then confirm the behaviour, because all they really manage is a temperature drop with no performance gain.

And ehm... about AMD not knowing what they're doing... it is Q3 2019 and they finally managed to get their boost to 'just not quite as good as' Pascal. You'll excuse me if I lack confidence in their expertise here. Ever since GCN they have been struggling with power state management. Please, we are WAY past giving AMD the benefit of the doubt when it comes to their GPU division. They've stacked mishap upon failure for years and resources are tight. PR, strategy, timing, time to market, technology... none of it was good, and even Navi is not a fully revamped arch; it's always 'in development', like an eternal beta... and it shows.

Here is another graph to drive the point home that Nvidia's boost is far better.

NAVI:
Note the clock spread while the GPU keeps pushing 1.2 V, and not just at 1.2 V but at each interval. It's a mess, and it underlines that voltage control is not as directly linked to GPU clock as you'd want.

There is also still an efficiency gap between Navi and Pascal/Turing, despite a node advantage. This is where part of that gap comes from.

Ask yourself this: where do you see an equilibrium here? This 'boost' runs straight into a heat wall and then panics all the way down to 1800 MHz, while losing out on a good way to drop temperature: dropping volts. And note, this is an AIB card.












MSI Radeon RX 5700 XT Evoke Review (www.techpowerup.com): "The MSI Radeon RX 5700 XT Evoke is a completely new line of graphics cards by MSI. Visually, this factory-overclocked board pleases with a champagne-gold cooler and matching backplate. Temperatures of the triple-slot, dual-fan card are excellent, and idle fan stop is included, too."
				





Turing:

You can draw a nice curve here that relates voltage to clocks, all the way up to the throttle target (and never beyond it under normal circumstances; GPU Boost literally keeps the card away from the throttle point before engaging in _actual throttling_). At each and every interval, GPU Boost finds the optimal clock to settle at. No weird searching and no voltage overkill for the given clock at any point in time. Result: lower temps, higher efficiency, maximized performance, and (OC) headroom if temps and voltages allow.
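The "optimal clock per interval" idea can be sketched as a table lookup: from a voltage/frequency table, pick the highest bin whose predicted temperature stays under the throttle target. The table, the thermal proxy, and every constant below are invented for illustration; real boost algorithms weigh many more inputs (power, current, per-workload telemetry) than this sketch does.

```python
# Hypothetical V/F table sketch: choose the highest clock/voltage bin whose
# predicted temperature stays under the throttle target. All numbers are
# invented; this is not any vendor's actual table or model.

VF_TABLE = [  # (clock MHz, voltage V), lowest to highest bin (fictional)
    (1500, 0.800), (1600, 0.850), (1700, 0.900),
    (1800, 0.975), (1900, 1.050), (2000, 1.093),
]
THROTTLE_C = 84.0   # throttle target, degrees C (fictional)
T_AMBIENT = 35.0    # intake air temperature, degrees C (fictional)

def predicted_temp(clock, volts):
    # Crude power proxy (P ~ f * V^2) mapped to a temperature rise.
    return T_AMBIENT + 0.025 * clock * volts * volts

def pick_bin():
    """Highest bin whose predicted temperature stays under the target."""
    best = VF_TABLE[0]
    for clock, volts in VF_TABLE:  # table is sorted ascending
        if predicted_temp(clock, volts) < THROTTLE_C:
            best = (clock, volts)
    return best

clock, volts = pick_bin()
```

The point of the sketch is the shape of the decision, not the numbers: each bin pairs a clock with the voltage it needs, so the controller never holds more voltage than the chosen clock requires.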




People frowned upon Pascal when it launched for its 'minor changes' compared to Maxwell, but what Nvidia achieved there was pretty huge; it was Nvidia's XFR. Navi is not AMD's GPU XFR, and if it is, it's pretty shit compared to their CPU version.

And... surprise for you apparently, but that graph you linked contains a 2080 in OC mode doing... 83 °C. 1 °C below throttle, settled at the maximum achievable clock speed WITHOUT throttling.



Xuper said:


> Please provide a source about 84 °C. It's the first time I hear of it.





https://www.nvidia.com/en-us/geforce/forums/discover/236948/titan-x-pascal-max-temperatures/






So as you can see, Nvidia GPUs throttle 6 °C before they reach 'out of spec', i.e. permanent damage. In addition, they throttle such that they *stop exceeding* the throttle target from then onwards under a continuous load. Fun fact: the Titan X is on a blower too... with a 250 W TDP.


----------



## Th3pwn3r (Aug 24, 2019)

So I realize this thread has died down a bit, but I just thought about it and possibly realized something.

Long gone are the days when a new generation of video cards or processors offered big, or even relatively impressive, performance gains over the hardware they're replacing at the same prices. Nowadays it seems like all they do is give us just enough of an upgrade to justify the existence of said products, at least in their (AMD/Intel/Nvidia) minds.

Sad times.


----------



## Anymal (Aug 24, 2019)

Don't let AMD fool you. 7nm GeForce will bring what you are missing.


----------



## Th3pwn3r (Aug 24, 2019)

Anymal said:


> Don't let AMD fool you. 7nm GeForce will bring what you are missing.



I sure hope so. I've been waiting so long to be wowed the way we used to be in the old days, like when we went from AGP to PCI Express. Remember those days? What a time to be into computers. Hell... when every issue of Maximum PC seemed amazing and full of great content. I really miss those days; things have become so stale. What really sucks is that so many of us here trash each other on this forum, and this seems to be the best place we have. It really makes me sad (not trolling), and not many things do.


----------



## Vya Domus (Aug 24, 2019)

Anymal said:


> Don't let AMD fool you. 7nm GeForce will bring what you are missing.


 
The more you buy the more you save.

Do you by any chance own a leather jacket, and is your favorite color green?


----------



## Anymal (Aug 24, 2019)

No, no need to buy to see perf/W graphs; W1zzard does it for us.

Th3pwn3r, I feel you. Those days are gone, but I am sure the jump from 16 nm to 7 nm will bring much bigger differences than RDNA 1.0 showed. Ampere and/or RDNA 2.0 to the rescue in 2020. 28 nm to 14/16 nm brought impressive results: cards like the 480 and 1060 for 250 EUR with 500-EUR 980 performance. The 280-EUR Turing 1660 Ti almost mimics that with 450-EUR 1070 performance, 16 nm vs 12 nm. FFS, 7nm Navi for 400 EUR only matches a 500-EUR ultra-OCed 2070.


----------



## R33mba (Jan 29, 2020)

So why did XFX then decide to work on their cards' cons, if those temps are "normal"? Bunch of cr@p!


----------



## ArrowMarrow (Feb 15, 2020)

R33mba said:


> So why did XFX then decide to work on their cards' cons, if those temps are "normal"? Bunch of cr@p!



As one can read above, it's an issue of mentality: nothing is running too hot. The VII, with hotspot maxing at 110 °C, VDDC around 90 °C, and GPU temp around 80-85 °C (under full load, at stock settings, with sufficient airflow through an average case), behaves exactly as expected and stated.
There is no point in discussing readouts as "Nvidia vs AMD". It's obvious and well known that Radeons generally run at a higher temperature level.
Important to mention, or to repeat, as many others have pointed out previously: temperature is relative and has to be evaluated accordingly. Meaning: as far as I know, the VIIs are laid out to run at ~125 °C (don't ask which part of the card; probably the parts mentioned above, as they are the ones generating the most heat).
So again, temperatures should be compared between devices with the same or derivative architectures. Of course one can compare (almost) anything, but rather as a basis for discussion and opinion. For example: the tires and wheels of a sports car get much hotter than a family sedan's. Are high temperatures bad for the wheels? Yes and no. But for a sports car they're needed, no question (brakes, tire grip...). So again, it's relative, a matter of perspective.
The last point goes with the above: the discussion about the sensors/readout points. I want to point out that I have no qualified knowledge or education on this subject per se, but it's actually simple logic: the nearer you place a sensor to where energy turns into heat, the higher the readout you will get. Simple enough, right? As mentioned above, if other cards have related/derivative/similar architectures, and within those, similar sensors, one could compare. But how can anybody compare readouts of different sensors in different places? In short: the higher the delta between the "near-core" temperatures and the package temperature, the more informative the reading. With AMD's sensors, their placement, and their readouts, users get more detailed info on heat generation and heat travel.
So from what I've read in the posts before, and according to what I've put together above, the VIIs (at least) have more accurate readouts.
And finally, our main concern is the package temp; the rest one should check every now and then to keep a healthy card.
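The sensor-network idea discussed here is easy to sketch in code: hotspot, as AMD describes it in the blog post, is just the maximum over many on-die readings, and the delta to the single edge/package reading shows how steep the internal gradient is. All readings below are invented example values:

```python
# Hotspot as AMD's blog post describes it: the maximum over a network of
# on-die temperature sensors. The readings here are invented examples.

junction_sensors = [92.0, 101.5, 110.0, 97.0, 104.5, 88.0]  # C, across die
edge_temp = 83.0  # the single "GPU temperature" a fixed diode would report

hotspot = max(junction_sensors)   # what monitoring tools show as "hotspot"
delta = hotspot - edge_temp       # junction-to-edge gradient, 27 C here
```

Note that which sensor delivers the maximum can change from moment to moment, which is why the hotspot is not a fixed location on the die.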
And finally, about buying a GPU and wanting to keep it 3-4 or even 7 years, as some had written:
BS. If we are talking about high-end cards, it's very simple: for 90% of us the lifespan is 1-2 years, for 5% it's 0.5-1 year, and for 5% it's 3 years max. Any GPU you buy now (except maybe the Titans & Co.) will in 3 years be good enough for your 8-9-year-old brother to play Minecraft. That's a fact. To the ones complaining, worrying, and making romantic comments about wanting to keep cards for 7 years: my answer is, then keep them, have fun, and keep yourselves busy complaining about cards being slower than when you purchased them. (*headshake* People who write and sadly think like that will always need something to complain or fight about; it's not an issue of knowledge or even opinion, it's an issue of character...)

Finally: in Q1 last year I bought a Strix 1080 Ti (before that, 2x Sapphire RX 550 Nitro+ SE in CrossFire), and two days later went and bought the VII instead. Why? I have an LG 43" monitor centered and two AOC monitors left and right in portrait orientation, and I realized... AMD's multi-monitor software is better than Nvidia's, because Nvidia's doesn't support that setup. So because of AMD's Radeon Adrenalin app it was a simple decision, and to this day I have not regretted it nor had any issues with it.


----------



## R-T-B (Feb 15, 2020)

R33mba said:


> So why did XFX then decide to work on their cards' cons, if those temps are "normal"? Bunch of cr@p!



That was memory temp, not even die...


----------



## Aquinus (Feb 15, 2020)

I see that this thread is full of armchair engineers.

People seem to have a really hard time understanding that the closer to the transistors you put a temperature sensor, the hotter its reading is going to be. If you could measure the temperature inside a transistor gate, I'm sure it would be even higher. This is why the temperature limit for the junction is different from the limit for the edge.

If temperatures are within spec and the card maintains them, who cares? It literally means this is the range at which AMD certifies their hardware. I'd be more concerned if it were regularly exceeding the spec.
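One way to see why the two limits differ is a back-of-the-envelope thermal-resistance calculation: the junction sits above the edge reading by roughly the dissipated power times the junction-to-edge thermal resistance. The resistance and power figures below are invented for illustration, not datasheet numbers:

```python
# Why junction limits differ from edge limits: the die has internal thermal
# resistance, so at a given power the junction sits above the edge reading.
# Both figures below are made-up illustrations, not datasheet values.

power_w = 225.0      # power being dissipated, watts (example)
r_theta_je = 0.12    # C per watt, junction-to-edge (fictional)
edge_temp = 83.0     # temperature measured at the die edge, C (example)

junction_temp = edge_temp + power_w * r_theta_je  # 110.0 C in this sketch
```

With these made-up numbers an entirely unremarkable 83 °C edge reading corresponds to a 110 °C junction, which is the kind of gap the thread keeps arguing about.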


----------



## R33mba (Feb 15, 2020)

R-T-B said:


> That was memory temp, not even die...


So, having had a bug put in my head by Steve from the GN YouTube channel about my XFX temps, I decided to do what he did: remove the plastic parts that don't do anything, and repaste my GPU/CPU. After that I ran 2 hours of stress testing in FurMark, and my hotspot memory temp was 98 °C; the other GPU temps remained under 90 °C. I don't know what I managed to mess up, but then again I have never left a FurMark test running that long. I was planning to sell my GPU, but I don't have any real problems with it, except that heat bug in my head.


----------



## eidairaman1 (Feb 15, 2020)

R33mba said:


> So, having had a bug put in my head by Steve from the GN YouTube channel about my XFX temps, I decided to do what he did: remove the plastic parts that don't do anything, and repaste my GPU/CPU. After that I ran 2 hours of stress testing in FurMark, and my hotspot memory temp was 98 °C; the other GPU temps remained under 90 °C. I don't know what I managed to mess up, but then again I have never left a FurMark test running that long. I was planning to sell my GPU, but I don't have any real problems with it, except that heat bug in my head.



Stop running FurMark/Kombustor; abusing hardware doesn't help you.

Run 3DMark, Unigine Valley/Heaven, and games.


----------



## R33mba (Feb 16, 2020)

eidairaman1 said:


> Stop running FurMark/Kombustor; abusing hardware doesn't help you.
> 
> Run 3DMark, Unigine Valley/Heaven, and games.


I will, thank you for the advice.


----------



## R33mba (Feb 18, 2020)

Is this normal?


----------



## KatanQ (Jun 3, 2020)

cucker tarlson said:


> rubbing eyes
> 
> so how many people are still running 7970s/R9 2xx cards around here, which are 6-8 years old?


Up until recently I was rolling with an R9 270X, which was used by my roommate until I got it and is now being fully utilized in a Hackintosh. I don't know what you guys are doing to your cards, or whether I've just been really lucky or you unlucky, but I have yet to have an AMD card fail on me, even though I've overclocked nearly every one I've had.


----------



## cucker tarlson (Jun 3, 2020)

KatanQ said:


> Up until recently I was rolling with an R9 270X, which was used by my roommate until I got it and is now being fully utilized in a Hackintosh. I don't know what you guys are doing to your cards, or whether I've just been really lucky or you unlucky, but I have yet to have an AMD card fail on me, even though I've overclocked nearly every one I've had.


It's not about failing; I ditched my 7870 GHz Edition in 2014 because it was too slow for 1080p.



R33mba said:


> Is this normal?


What?


----------



## R33mba (Sep 1, 2020)

cucker tarlson said:


> It's not about failing; I ditched my 7870 GHz Edition in 2014 because it was too slow for 1080p.
> 
> 
> What?



Temps...

My HWiNFO is showing that my hotspot temp is 115 °C.


----------



## HeavyHemi (Apr 6, 2021)

Just for giggles, I thought I'd report that my 980 Ti is still being used daily by my nephew, along with the 980X @ 4.3 GHz system I built. He leaves it on pretty much 24/7. My 1080 Ti is still running a perfect OC of 2050 MHz @ 1.043 V on a Gen2 EVGA Hybrid cooler in my current X99 6850K system running at 4.5 GHz. Folks meatshielding poor quality by claiming nobody uses 'X' beyond a certain amount of time are dullards. I have audiophile-grade gear I collected overseas in the '80s that is still in use daily and in pristine condition. My daily driver is a Ranger 4x4 I bought new in '88. You guessed it: I still drive it daily and it still looks like new.


----------



## R33mba (Apr 7, 2021)

Thank God for mining booming again. I sold my RX 5700 for more money than I paid for it, and bought an RX 6800.


----------

