Your assumption continues to be that AMD must not know what they are doing and have set limits that are not safe, even though they have explicitly stated that they are. If their algorithm figures out it's a valid move to keep clocks up, and power usage and temperature stop rising, then an equilibrium has been reached; this conclusion is elementary.
I do not understand at all how you conclude that their algorithm must be worse because it does not make frequent adjustments like Nvidia's. If anything, this is proof their hardware is more balanced and no large adjustments are needed to keep the GPU in its desired operating point.
Again, if their algorithm figures out it's a valid move to do that, it means an equilibrium has been reached. There is no need for any additional interventions. The only safeguards needed after that are for thermal shutdown and the like, and I am sure those work just fine; otherwise these cards would all burn up the moment they are turned on.
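To illustrate what I mean by equilibrium, here is a toy thermal model (assumed numbers, obviously not AMD's actual firmware): heat going in from the GPU balances against heat the cooler removes, and the temperature settles on its own.

```python
# Toy thermal model: power in vs. cooling out. All numbers are assumptions
# for illustration, not real card data and not AMD's algorithm.
ambient = 25.0        # room temperature, deg C
temp = ambient        # GPU starts cold
power = 225.0         # watts dissipated at the sustained boost clock
cooling = 3.75        # watts removed per deg C above ambient (assumed)
heat_capacity = 50.0  # joules per deg C of the heatsink assembly (assumed)

for _ in range(600):  # simulate 10 minutes in 1 s steps
    heat_out = cooling * (temp - ambient)
    temp += (power - heat_out) / heat_capacity

# Steady state is ambient + power / cooling = 85 C; it gets there and stays.
print(f"temperature after 10 minutes: {temp:.1f} C")
```

A flat temperature line at the limit is exactly that equilibrium, not a malfunction.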
Do not claim their cards do not have safeguards in this regard; it's simply untrue. You know better than this, come on.
You are simply wrong and I am starting to question whether or not you really understand how these things work.
"They both seek to maximize performance while staying away from the throttle point as far as possible"

Only if that's the right thing to do. If you go back and look at reference models of Pascal cards, they all immediately hit their temperature limit and stay there, just the same way the 5700 XT does. Does that mean Nvidia didn't care how hot those got?
Of course, the reason I brought up Pascal is that those have the same blower coolers. They don't use those anymore, but let's see what happens when Turing GPUs do have that kind of cooling:
[Attachment 129319: clock/temperature graph of a blower-cooled Turing card]
What a surprise, they also hit their temperature limit. So much for Nvidia wanting to stay as far away from the throttle point, right?
This is not how these things are supposed to work. Their goal is not simply to stay as far away from the throttle point as possible; if you do that, you're going to have a crappy boost algorithm. Their main concern is to maximize performance, even if that means sitting right at the throttling point.
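To make that concrete, here is the kind of limit-seeking policy I mean, as a rough sketch (my own simplification with assumed limits and bin size, not Nvidia's or AMD's actual controller):

```python
# Rough sketch of a limit-seeking boost policy: raise the clock one bin at a
# time until a limit binds, then back off. Limits and bin size are assumed.
def boost_step(clock_mhz, temp_c, power_w,
               temp_target=84.0, power_limit=250.0, bin_mhz=15.0):
    if temp_c >= temp_target or power_w >= power_limit:
        return clock_mhz - bin_mhz  # a limit binds: back off one bin
    return clock_mhz + bin_mhz      # headroom left: take it
```

With a blower cooler, temperature is almost always the limit that binds first, so a well-behaved card ends up parked right at its temperature target. That is the maximum-performance outcome, not a failure mode.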
You missed the vital part where I stressed that Navi does not clock further when you give it temperature headroom, which destroys the whole theory about 'equilibrium'. The equilibrium does not max out performance at all; it just boosts to a predetermined cap that you cannot even manually OC beyond. 0.7% is margin of error.
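In pseudocode, the behaviour the measurements show looks like this (my interpretation of the reviews, not AMD's actual code, and the cap value is illustrative):

```python
# My reading of the observed Navi behaviour: a hard frequency cap clamps the
# result before thermal headroom can matter. The fmax value is illustrative.
def navi_effective_clock(thermally_allowed_mhz, fmax_mhz=2044.0):
    return min(thermally_allowed_mhz, fmax_mhz)

print(navi_effective_clock(2150.0))  # -> 2044.0: better cooling buys nothing
```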
And the reference cooler is balanced so that, in ideal (test bench) conditions, it can remain just within spec without burning itself up too quickly. I once again stress the memory IC temps, which, once again, are easily glossed over but very relevant here with regard to longevity. AIB versions then confirm the behaviour, because all they really manage is a temperature drop with no performance gain.
And ehm... about AMD not knowing what they're doing... we are in Q2 2019 and they finally managed to get their answer to GPU Boost to 'just not quite as good as' Pascal. You'll excuse me if I lack confidence in their expertise with this. Ever since GCN they have been struggling with power state management. Please, we are WAY past giving AMD the benefit of the doubt when it comes to their GPU division. They've stacked mishap upon failure for years, and resources are tight. PR, strategy, timing, time to market, technology... none of it was good, and even Navi is not a fully revamped arch; it's always 'in development', like an eternal beta... and it shows.
Here is another graph to drive the point home that Nvidia's boost is far better.
NAVI:
Note the clock spread while the GPU keeps on pushing 1.2 V, and not just at 1.2 V but at each interval. It's a mess, and it underlines that voltage control is not as directly linked to GPU clock as you'd want.
There is also still an efficiency gap between Navi and Pascal/Turing, despite a node advantage. This is where part of that gap comes from.
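Some napkin math on why that voltage overkill matters: dynamic power scales roughly with V² × f, so holding 1.2 V at a clock that would also run at, say, 1.05 V burns about 30% extra power for zero extra performance (the 1.05 V figure is an assumption for the example):

```python
# Back-of-envelope dynamic power: P ~ C * V^2 * f. At the same clock, f and
# the capacitance C cancel out in the ratio. Voltages here are assumed.
def power_ratio(v_actual, v_sufficient):
    return (v_actual / v_sufficient) ** 2

print(power_ratio(1.20, 1.05))  # ~1.31 -> roughly 31% extra heat, same clock
```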
Ask yourself this: where do you see an equilibrium here? This 'boost' runs straight into a heat wall and then panics all the way down to 1800 MHz, while passing up a good way to drop temperature: dropping volts. And note: this is an AIB card.
(Source: TechPowerUp review of the MSI Radeon RX 5700 XT Evoke, a factory-overclocked triple-slot AIB card: www.techpowerup.com)
Turing:
You can draw up a nice curve here that relates voltage to clocks, all the way up to the throttle target (and never beyond it under normal circumstances; GPU Boost literally keeps the card away from the throttle point before ever engaging in actual throttling). At each and every interval, GPU Boost finds the optimal clock to settle at. No weird searching and no voltage overkill for the given clock at any point in time. The result: lower temps, higher efficiency, maximized performance, and (OC) headroom if temps and voltages allow.
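For what I mean by 'optimal clock at each interval', picture a walk up a voltage/frequency table, taking the highest point that still fits the power and temperature budget. The table entries and the power model below are assumptions for illustration, not Nvidia's real tables:

```python
# Sketch of a V/F table walk: take the highest (voltage, clock) pair that
# still fits the budget. Table entries and the power model are assumptions.
VF_TABLE = [(0.80, 1500), (0.90, 1700), (1.00, 1850), (1.05, 1950)]  # (V, MHz)

def pick_point(power_budget_w, temp_c, temp_target=83.0, base_power_w=120.0):
    best = VF_TABLE[0]
    for volts, mhz in VF_TABLE:
        est_power = base_power_w * (volts / 0.80) ** 2 * (mhz / 1500.0)
        if est_power <= power_budget_w and temp_c < temp_target:
            best = (volts, mhz)
    return best  # one voltage maps to one clock: no spread, no overkill

print(pick_point(power_budget_w=270.0, temp_c=75.0))  # -> (1.05, 1950)
```

One voltage, one clock, one settle point per interval; that is the difference from the Navi graph above.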
People frowned upon Pascal when it launched for its 'minor changes' compared to Maxwell, but what Nvidia achieved there was pretty huge; it was their XFR. Navi is not AMD's GPU-side XFR, and if it is, it's pretty shit compared to their CPU version.
And... surprise for you apparently, but that graph you linked contains a 2080 in OC mode doing... 83°C. 1°C below the throttle point, settled at the maximum achievable clock speed WITHOUT throttling.
"Please provide a source for the 84°C figure. It's the first time I've heard of it."
So, as you can see, Nvidia GPUs throttle 6°C before they reach 'out of spec', i.e. permanent damage. In addition, they throttle such that they stop exceeding the throttle target from then onwards under a continuous load. Fun fact: the Titan X is on a blower too... with a 250 W TDP.