
AMD "Navi 12" Silicon Powering the Radeon Pro 5600M Rendered

AMD has no efficiency advantage in the mobile space.
I've seen reviews of a ~100W 5600M losing to a 90W RTX 2060 by 10% or so.


The video you linked clearly shows the 2060 system consuming more power. TDP does not equal power consumption. I'm tired of repeating this.

In addition, you cherry-picked your 10% number. That's 10% at maximum detail settings, which in many games (this being a laptop GPU) isn't a good experience. At high settings the lead is closer to 6%, and at medium about 2%.

On top of that, the GPU being discussed in the article is the Radeon Pro 5600M, which is a completely different piece of silicon. If Navi 12 is based on RDNA2, which is highly likely, it will be significantly more power efficient than existing chips. AMD is claiming a 50% increase in performance per watt with Navi 2. Of course, just having HBM instead of GDDR means power savings as well.
 
On top of that, the GPU being discussed in the article is the Radeon Pro 5600M, which is a completely different piece of silicon. If Navi 12 is based on RDNA2, which is highly likely, it will be significantly more power efficient than existing chips. AMD is claiming a 50% increase in performance per watt with Navi 2. Of course, just having HBM instead of GDDR means power savings as well.
Navi 12 is RDNA1, not RDNA2. RDNA2 GPUs will use the Navi 2x naming scheme.
 
The video you linked clearly shows the 2060 system consuming more power. TDP does not equal power consumption. I'm tired of repeating this.

In addition, you cherry-picked your 10% number. That's 10% at maximum detail settings, which in many games (this being a laptop GPU) isn't a good experience. At high settings the lead is closer to 6%, and at medium about 2%.

On top of that, the GPU being discussed in the article is the Radeon Pro 5600M, which is a completely different piece of silicon. If Navi 12 is based on RDNA2, which is highly likely, it will be significantly more power efficient than existing chips. AMD is claiming a 50% increase in performance per watt with Navi 2. Of course, just having HBM instead of GDDR means power savings as well.

Total system power consumption is different because those are different laptops with many different parts. I can't believe I have to explain that.
And yes, TDP mostly equals power consumption.
I didn't say anything about this specific Apple GPU, I just said AMD has no efficiency advantage.
 
What a shame it's Apple exclusive. This would make such a lovely ITX card...
 
Total system power consumption is different because those are different laptops with many different parts. I can't believe I have to explain that.
And yes, TDP mostly equals power consumption.
I didn't say anything about this specific Apple GPU, I just said AMD has no efficiency advantage.
The video shows the 2060 laptop consuming more power than the 5600M one. The CPU is the same, so how is the 2060 more efficient than the 5600M when the 2060 laptop consumes more power?
 
The video shows the 2060 laptop consuming more power than the 5600M one. The CPU is the same, so how is the 2060 more efficient than the 5600M when the 2060 laptop consumes more power?

Because the tested RTX 2060 has a 90W power limit while the 5600M consumes up to 100W, and the 2060 still performs better. That's how.
 
Because the tested RTX 2060 has a 90W power limit while the 5600M consumes up to 100W, and the 2060 still performs better. That's how.
How did he measure that? Currently you can't do that on a SmartShift laptop, as it reports combined power consumption for the CPU and GPU.
And if the 2060 consumes less power, why does the ASUS one show higher total power consumption?
 
If there's one thing that many years of leaving comments on the internet have taught me, it's not to engage with people like you.

Yeah, right. Why would you want to engage with people like me, who provide reasoned criticism to correct your misconceptions... when it's much easier to stay in your denial bubble!

You forgot about HBM2, which is more power efficient than GDDR6.

Yep.
 
Total system power consumption is different because those are different laptops with many different parts. I can't believe I have to explain that.
And yes, TDP mostly equals power consumption.
I didn't say anything about this specific apple GPU, I just said AMD has no efficiency advantage.


You are straight up wrong.

As Gamers Nexus described it, it's a made-up number used to beat down forum users over which processor has the lower TDP, when in reality it isn't supposed to represent power consumption, let alone be accurate at what it is supposed to indicate (thermal power dissipation).
 
<TDP talk>

As Gamers Nexus described it, it's a made-up number used to beat down forum users over which processor has the lower TDP, when in reality it isn't supposed to represent power consumption, let alone be accurate at what it is supposed to indicate (thermal power dissipation).

Yep. AMD's current 65W models (the 3600 or 3700X) pull 85-90W at stock with PBO boosting enabled. Intel's current i5-10500 pulls about 130W when boosting.

A laptop 15W TDP translates to anywhere from 10W to about 45W of actual draw, depending on vendor and configuration.

GPU TDP is actually more closely constrained by AMD and Nvidia, simply because their entire product is a single board, so they have far more control over power delivery than AMD or Intel have with CPUs, which rely on third-party motherboard manufacturers to handle it.
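The CPU figures above make the point with simple arithmetic. The 90W and 130W numbers are the rough observations from this post, not lab measurements:

```python
# Quick arithmetic on the figures quoted above (thread numbers, not
# measurements): how far actual draw exceeds the rated 65 W TDP.
chips = {
    "Ryzen 3600/3700X (PBO)": (65, 90),     # (rated TDP W, observed draw W)
    "Core i5-10500 (boosting)": (65, 130),
}
for name, (tdp_w, draw_w) in chips.items():
    overshoot_pct = (draw_w / tdp_w - 1) * 100
    print(f"{name}: {draw_w} W vs {tdp_w} W TDP (+{overshoot_pct:.0f}%)")
```

That's roughly a 38% overshoot for the AMD parts and a full 100% for the Intel one, which is why the rated number on its own says so little about efficiency.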
 
AMD has no efficiency advantage in the mobile space.
I've seen reviews of a ~100W 5600M losing to a 90W RTX 2060 by 10% or so.


Objectively, I agree that the TDP comparison is moot. Generally, the chip will not stick to its stated TDP, whether it's a GPU or a CPU. At this point, CPUs are the biggest offenders.

In addition, it is very difficult to make an apples-to-apples performance comparison between laptops, because the specs, cooling solutions and BIOS configurations differ widely. Even if we can find two with more or less identical specs, the cooling solution and laptop configuration will largely determine performance.

Unlike on a desktop, where you can afford a huge cooling solution, the cooling in laptops is absolutely minimal. So if a maker cuts costs and scrimps on the cooling solution, the result may be a hotter and/or slower laptop. As for the configuration, it depends on how aggressive the laptop maker wants to be in allowing a longer boost, higher power, higher temps, etc. All of this is preset in the BIOS, to which we have no or limited access in laptops.
 
GPU TDP is the power limit for it and has been for a long while now.
CPU situation is different.
 
AMD has no efficiency advantage in the mobile space.
I've seen reviews of a ~100W 5600M losing to a 90W RTX 2060 by 10% or so.

That isn't the same GPU ...

The GPU in question here is the Navi 12-based Radeon Pro 5600M (with HBM2).

The GPU in your video is the Navi 10-based Radeon RX 5600M (with GDDR6).

Yes, the naming is confusingly similar, but they are different product lines (RX is mainstream consumer/gaming, Pro is productivity/workstation), so the similar naming just indicates similar positioning in the performance/product stack.
If this was used by anyone other than Apple I would expect a higher clocked version named RX 5700M or some such.
Are we really expecting the full 2560 (40CU) configuration in something that's wearing the 5600 moniker?
Yes. Considering that AMD just sent out a press release saying this in clear text, yes, that is exactly what is happening.


I have to say, I would love for them to make this into a premium SFF desktop GPU ... push it to 75W, stick it on a 2-slot HHHL card, wow, it would knock the socks off anything else available in that form factor. The price would obviously be high, but there are quite a few SFF enthusiasts out there willing to pay that premium.
 
Yes. Considering that AMD just sent out a press release saying this in clear text, yes, that is exactly what is happening.

I have to say, I would love for them to make this into a premium SFF desktop GPU ... push it to 75W, stick it on a 2-slot HHHL card, wow, it would knock the socks off anything else available in that form factor. The price would obviously be high, but there are quite a few SFF enthusiasts out there willing to pay that premium.
Oh okay, I hadn't seen the press release when I asked that.

Agree with you on the premium SFF GPU. I intentionally paid extra for a 5700XT and, other than a brief exercise in seeing what it was capable of, have never run it at speeds that would even beat a stock 5700. My daily-driver undervolt barely spins the fans beyond their minimum rpm, and in benchmarks I'm giving up about 15% of the stock performance.

Yes, it's not great performance/$, but I'm happy to pay the premium for performance/Watt because although I don't care about electricity costs, I do care about it being damn-near silent.
 
Oh okay, I hadn't seen the press release when I asked that.

Agree with you on the premium SFF GPU. I intentionally paid extra for a 5700XT and, other than a brief exercise in seeing what it was capable of, have never run it at speeds that would even beat a stock 5700. My daily-driver undervolt barely spins the fans beyond their minimum rpm, and in benchmarks I'm giving up about 15% of the stock performance.

Yes, it's not great performance/$, but I'm happy to pay the premium for performance/Watt because although I don't care about electricity costs, I do care about it being damn-near silent.
If one has the money for that, that's a good approach, particularly with AMD cards that for the past near decade have been pushed too far up their DVFS curves.

I'm just imagining a ... let's call it RX 5700 Nano - why not? It would sure be a worthy successor to the near-legendary R9 Nano - at 75W, two slot HHHL form factor, probably ~1300MHz or possibly a bit more (given that this does 1035 at 50W). That would absolutely destroy the current highest performance HHHL GPU, the GTX 1650. They could even make it a harvested die with a couple of CUs cut off, letting Apple run off with the best chips. It could still deliver ~1660 Ti performance unless the frequency scaling falls off a cliff at low clocks (considering how low the MBP version is clocked, there should be plenty of headroom without getting notably inefficient). It might not be the highest volume product ever - not by any means given the premium pricing something like this would demand - but it would be the darling of the SFF crowd for years to come.
 
It should be easy enough to test. Get a vanilla 5700, set the power limit to -50% (so 90W max), tune the voltage curve with a bit of quick graph-plotting, and then pick your 75W point on the curve. I reckon the HBM2 makes this considerably more efficient, so 75W with GDDR6 may actually mean only around 1GHz.
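Before doing the actual graph-plotting, you can get a rough starting guess from a simple cube-law model (power scales with f·V², and voltage roughly with frequency, so P ∝ f³), anchored to the 50W / 1035MHz figure mentioned earlier in the thread. Both the exponent and the assumption that memory power scales along with the core are loose simplifications:

```python
# Back-of-the-envelope DVFS estimate: assume P ~ f^3 (from P ~ f*V^2, V ~ f).
# Reference point taken from this thread: ~1035 MHz at a 50 W limit (HBM2).
# Memory power is ignored, so a GDDR6 card would land somewhat lower.

def estimate_clock_mhz(p_target_w, p_ref_w=50.0, f_ref_mhz=1035.0):
    """Scale the reference clock by the cube root of the power ratio."""
    return f_ref_mhz * (p_target_w / p_ref_w) ** (1.0 / 3.0)

print(round(estimate_clock_mhz(75)))  # ~1185 MHz at a 75 W limit
```

That ~1185MHz lands between the ~1GHz and ~1300MHz guesses in the posts above, which is about what you'd expect from a model this crude.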
 