Thursday, June 18th 2020

AMD Radeon Pro 5600M with HBM2 Benchmarked

Benchmarks of the new Apple-exclusive AMD Radeon Pro 5600M graphics solution by Max Tech reveal that the new GPU is about 50% faster than the Radeon Pro 5500M, and within striking distance of the Radeon Pro Vega 48 found in Apple's 5K iMacs. The Pro 5600M is an Apple-exclusive solution by AMD based on the "Navi 12" silicon, which pairs a 7 nm GPU die using the RDNA graphics architecture with two 4 GB HBM2 memory stacks over a 2048-bit interface. The GPU die features 2,560 stream processors, but is clocked differently from Radeon Pro discrete graphics cards based on the "Navi 10" ASIC, which uses conventional GDDR6.

In Geekbench 5 Metal, the Radeon Pro 5600M was found to be 50.1 percent faster than the Radeon Pro 5500M (another Apple-exclusive SKU, found in 16-inch MacBook Pros), and just 12.9 percent behind the Radeon Pro Vega 48. The Vega 56 found in the iMac Pro is still ahead. Unigine Heaven sees the Pro 5600M 48.1% faster than the Pro 5500M and, interestingly, 11.3% faster than the Vega 48. With 2,560 RDNA stream processors you'd expect more performance, but this chip was designed to meet a stringent 50 W power limit, and has significantly lower clock speeds than "Navi 10"-based Radeon Pro graphics cards (a 1035 MHz maximum boost engine clock, versus 1930 MHz at a 205 W TDP for the Pro W5700). Find more commentary in the Max Tech video presentation.
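As a back-of-the-envelope check, the peak FP32 throughput implied by these specifications can be computed; this is a sketch assuming the usual 2 FLOPS per stream processor per clock (fused multiply-add), using only the figures quoted above:

```python
# Peak FP32 throughput = stream processors x 2 FLOPS/clock (FMA) x boost clock.
def peak_fp32_tflops(stream_processors, boost_mhz):
    return stream_processors * 2 * boost_mhz * 1e6 / 1e12

# Figures from the article: 2,560 stream processors at a 1035 MHz max boost.
pro_5600m = peak_fp32_tflops(2560, 1035)
print(f"Radeon Pro 5600M peak FP32: {pro_5600m:.2f} TFLOPS")

# The Pro W5700's 1930 MHz boost clock is almost 1.9x higher, illustrating
# how much clock speed is sacrificed to fit the 50 W power limit.
print(f"W5700 / 5600M clock ratio: {1930 / 1035:.2f}")
```

This puts the Pro 5600M at roughly 5.3 TFLOPS peak FP32, despite carrying the same shader count as the desktop "Navi 10" parts.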
Source: VideoCardz

52 Comments on AMD Radeon Pro 5600M with HBM2 Benchmarked

#26
Midland Dog
mtcn77: 50 vs. 150w tdp.
could have been ~20w nvidia AYYYYYYYYYYYYMDUMB
Posted on Reply
#27
mtcn77
Midland Dog: could have been ~20w nvidia AYYYYYYYYYYYYMDUMB
/doubt.
Posted on Reply
#28
Valantar
When current 20W Nvidia performance barely (20-30%) beats an APU with DDR4 in a shared 15W TDP, that seems... less than likely.
Posted on Reply
#29
T4C Fantasy
CPU & GPU DB Maintainer
mtcn77: He is the lord gaben.
W1zzard is the man, the master coder. But I am the only one who does the GPU database xD
Posted on Reply
#30
mtcn77
T4C Fantasy: W1zzard is the man, the master coder. But I am the only one who does the GPU database xD
Hail the sun god.
Posted on Reply
#31
Pixrazor
so a r9 290 with 8gb hbm2 and 7nm
Posted on Reply
#32
Valantar
Pixrazor: so a r9 290 with 8gb hbm2 and 7nm
Except it's RDNA and not GCN?
Posted on Reply
#33
ARF
Valantar: Except it's RDNA and not GCN?
"RDNA" is only a marketing term for GCN 1.5.0 (Navi, or Bermuda ?). R9 290 Hawaii (or Ibiza) is GCN 1.1 which is, yeah, the very same basic architecture with minor tweaks here and there.


clrx.nativeboinc.org/

This "Navi 12" is GCN 1.5.1.
Posted on Reply
#34
mtcn77
ARF"RDNA" is only a marketing term for GCN 1.5.0 (Navi, or Bermuda ?). R9 290 Hawaii (or Ibiza) is GCN 1.1 which is, yeah, the very same basic architecture with minor tweaks here and there.


clrx.nativeboinc.org/
Yes, master:
Posted on Reply
#35
Valantar
ARF"RDNA" is only a marketing term for GCN 1.5.0 (Navi, or Bermuda ?). R9 290 Hawaii (or Ibiza) is GCN 1.1 which is, yeah, the very same basic architecture with minor tweaks here and there.


clrx.nativeboinc.org/

This "Navi 12" is GCN 1.5.1.
Don't be daft. Backwards compatibility being baked in does not make it the same architecture.
Posted on Reply
#36
mtcn77
Valantar: Don't be daft.
Dankness is a facility some consider to be unnatural. Personally, I embrace our null hypothesis overlords.
Posted on Reply
#37
ARF
Valantar: Don't be daft. Backwards compatibility being baked in does not make it the same architecture.
Look at the diagrams; you will see that the architecture is the same, just some units have new positions, left or right.



If "rDNA" is so straightforward compatible with all the previous GCN architectures, why is it so difficult for AMD to release stable drivers ?
Why is the last driver version 20.5.1 which is normally released in early May and effectively they stopped releasing new drivers ?

What is so wrong with AMD's "rDNA" so it doesn't work flawlessly yet?
Posted on Reply
#38
Vayra86
ARF: Look at the diagrams; you will see that the architecture is the same, just some units have new positions, left or right.



If "rDNA" is so straightforward compatible with all the previous GCN architectures, why is it so difficult for AMD to release stable drivers ?
Why is the last driver version 20.5.1 which is normally released in early May and effectively they stopped releasing new drivers ?

What is so wrong with AMD's "rDNA" so it doesn't work flawlessly yet?
Company culture and policy. It's not a technical problem, it's about allocation of resources. Discrete GPU is somewhere at the bottom of that ladder.
Posted on Reply
#39
medi01
Midland Dog: could have been ~20w nvidia AYYYYYYYYYYYYMDUMB
Yeah, and even much faster. Remind me how the Nintendo Switch performance/battery-life expectations panned out.
Posted on Reply
#40
Valantar
ARF: Look at the diagrams; you will see that the architecture is the same, just some units have new positions, left or right.



If "rDNA" is so straightforward compatible with all the previous GCN architectures, why is it so difficult for AMD to release stable drivers ?
Why is the last driver version 20.5.1 which is normally released in early May and effectively they stopped releasing new drivers ?

What is so wrong with AMD's "rDNA" so it doesn't work flawlessly yet?
You're contradicting yourself. If RDNA is just another revision of GCN, why does it need large driver changes at all? Also, looking at two somewhat similar (but by no means identical) block diagrams and saying "these are the same architecture" just shows that you are looking at this way too superficially. A block diagram is a high-level overview, and tells us next to nothing about how the parts in the diagram are constructed or how they work. You can make a thousand different architectures that could be illustrated with identical block diagrams yet be fundamentally incompatible. Any modern GPU architecture could be illustrated by a reasonably similar block diagram.
Posted on Reply
#41
Pixrazor
Valantar: Except it's RDNA and not GCN?
Yeah, if you want, but for me RDNA is still GCN, just tweaked for the better - not something life-changing like the jump from TeraScale VLIW to GCN SIMD.
But to come back to what I said: same number of shaders (2,560), same clock speed (~1000 MHz) - hence my reference to the R9 290.
Posted on Reply
#42
ARF
Valantar: You're contradicting yourself. If RDNA is just another revision of GCN, why does it need large driver changes at all? Also, looking at two somewhat similar (but by no means identical) block diagrams and saying "these are the same architecture" just shows that you are looking at this way too superficially. A block diagram is a high-level overview, and tells us next to nothing about how the parts in the diagram are constructed or how they work. You can make a thousand different architectures that could be illustrated with identical block diagrams yet be fundamentally incompatible. Any modern GPU architecture could be illustrated by a reasonably similar block diagram.
Look, it is designated as GCN 1.5.0 and 1.5.1. This means it is a GCN evolution. Even during the official presentation, they said it is a "hybrid" between GCN and another new architecture.
1.5.0 doesn't mean "backwards" compatibility - it is the version of the architecture itself.
Posted on Reply
#43
Valantar
mtcn77: Dankness is a facility some consider to be unnatural. Personally, I embrace our null hypothesis overlords.
I am ever more convinced that you live in some parallel universe in which your statements are deeply insightful and eloquent. Until I find a portal to that universe however, I remain unable to understand what you mean whatsoever.
ARF: Look, it is designated as GCN 1.5.0 and 1.5.1. This means it is a GCN evolution. Even during the official presentation, they said it is a "hybrid" between GCN and another new architecture.
1.5.0 doesn't mean "backwards" compatibility - it is the version of the architecture itself.
Or it means that the backwards-compatibility mode is itself not identical to any version of GCN and requires some adaptations, making it reasonable to present it as GCN 1.5. That RDNA 1 in backwards compatibility mode is recognized as GCN 1.5 does not mean that the overall architecture is GCN 1.5.
Pixrazor: Yeah, if you want, but for me RDNA is still GCN, just tweaked for the better - not something life-changing like the jump from TeraScale VLIW to GCN SIMD.
But to come back to what I said: same number of shaders (2,560), same clock speed (~1000 MHz) - hence my reference to the R9 290.
"For you" - what does that mean? Does your opinion change the physical properties of the die? I certainly don't think so, and the undeniable fact is that RDNA - even in its not fully realized 1.0 form, where parts of GCN are carried over wholesale - handles instructions and other crucial aspects of GPU operation in very different ways from GCN. Just because there are commonalities does not mean they are the same. Do you also say that all Ford cars are the same, just because they share a brand, and some even share engines and drivetrains? All modern GPUs operate in similar ways, as there are standards in place that they must follow, and there is a limited number of possible ways of following them. Also, it is pretty much a universal truth that the longer something is developed (such as GPUs, or cars, or toasters, etc.) changes over time will diminish simply due to the fact that the possibility of finding a better design among the limited possible designs fulfilling the desired function becomes increasingly difficult. There are always developments that break this trend, but over time it all normalizes into a downward slope in terms of the scope of changes. In other words: it is entirely natural that RDNA is less of a break from GCN than GCN was from VLIW, but that doesn't mean that it no longer qualifies as a new architecture. By that logic I might as well say all GPU architectures with unified shaders are the same. You see how that is problematic, right?
Posted on Reply
#44
ARF
Valantar: You're contradicting yourself. If RDNA is just another revision of GCN, why does it need large driver changes at all?
Well, I do not know the reasons in detail, but my guess is that the Vega drivers - and the previous drivers in general - had been optimised for Vega's compute features.
For instance, Vega 10 has FP32 single 12.66 TFlops, FP64 double 791.6 GFlops.
While Navi is stripped of some of the computing oriented units, so it's more gaming oriented, and has FP32 single of only 9.754 TFlops and FP64 double of 609.6 Gflops.
Posted on Reply
#45
Valantar
ARF: Well, I do not know the reasons in detail, but my guess is that the Vega drivers - and the previous drivers in general - had been optimised for Vega's compute features.
For instance, Vega 10 has FP32 single 12.66 TFlops, FP64 double 791.6 GFlops.
While Navi is stripped of some of the computing oriented units, so it's more gaming oriented, and has FP32 single of only 9.754 TFlops and FP64 double of 609.6 Gflops.
That doesn't demonstrate any reduction in compute beyond the reduction in CUs.

The RX 5700 XT is rated at 9.754 TFLOPS due to having 2560 shaders capable of 2 FLOPS/clock running at a nominal ~1900 MHz. Similarly, the Vega 64 has 4096 shaders capable of 2 FLOPS/clock running at a nominal ~1550 MHz, for 12.66 TFLOPS. Both have a 16:1 ratio between FP32- and FP64-capable hardware. The difference between the two is that the former manages to outperform the latter in games despite its apparent overall compute power deficit. In other words, the RX 5700 has architectural changes that allow it to utilize its compute power significantly better in gaming applications.
Posted on Reply
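The arithmetic in the post above can be reproduced with a short sketch. The exact boost clocks used here (1905 MHz for the RX 5700 XT, 1545 MHz for the Vega 64) are assumed values chosen to match the quoted "~1900 MHz" and "~1550 MHz"; the results land within rounding of the TFLOPS and GFLOPS figures cited in the thread:

```python
# Peak throughput = shaders x 2 FLOPS/clock (FMA) x boost clock.
# Clocks of 1905/1545 MHz are assumptions, not figures from the thread.
def fp32_tflops(shaders, boost_mhz):
    return shaders * 2 * boost_mhz * 1e6 / 1e12

rx_5700_xt = fp32_tflops(2560, 1905)
vega_64 = fp32_tflops(4096, 1545)

# FP64 throughput is 1/16 of FP32 on both parts (16:1 ratio).
print(f"RX 5700 XT: {rx_5700_xt:.3f} TFLOPS FP32, {rx_5700_xt / 16 * 1000:.1f} GFLOPS FP64")
print(f"Vega 64:    {vega_64:.2f} TFLOPS FP32, {vega_64 / 16 * 1000:.1f} GFLOPS FP64")
```

The takeaway is that the TFLOPS gap is entirely explained by shader count and clocks, not by stripped-out compute units.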
#46
Valantar
While one source is less than ideal, this is definitely promising in terms of perf/W: reportedly in the performance ballpark of an RTX 2060 (max-P, not max-Q), which has a TGP of 80W.
www.notebookcheck.net/50-W-AMD-Radeon-Pro-5600M-in-the-MacBook-Pro-16-offers-gaming-performance-equivalent-to-that-of-an-80-W-RTX-2060-non-Max-Q-laptop-GPU.476935.0.html

I find it very weird that the reviewer opted to use resolution scaling to test different settings rather than just changing the output resolution (does Boot Camp not support non-native resolutions, maybe?), but nonetheless the perf/W on display is quite impressive. It certainly bodes well for RDNA 2, and it has definitely brought back my hope that "Big Navi" might use HBM2.
Posted on Reply
#47
mtcn77
Valantar: While one source is less than ideal, this is definitely promising in terms of perf/W: reportedly in the performance ballpark of an RTX 2060 (max-P, not max-Q), which has a TGP of 80W.
www.notebookcheck.net/50-W-AMD-Radeon-Pro-5600M-in-the-MacBook-Pro-16-offers-gaming-performance-equivalent-to-that-of-an-80-W-RTX-2060-non-Max-Q-laptop-GPU.476935.0.html

I find it very weird that the reviewer opted to use resolution scaling to test different settings rather than just changing the output resolution (does Boot Camp not support non-native resolutions, maybe?), but nonetheless the perf/W on display is quite impressive. It certainly bodes well for RDNA 2, and it has definitely brought back my hope that "Big Navi" might use HBM2.
Resolution scaling is the correct method; these chips have no scaling difference between drawn and textured resolution. Past GPUs do, however.
Posted on Reply
#48
Valantar
mtcn77: Resolution scaling is the correct method; these chips have no scaling difference between drawn and textured resolution. Past GPUs do, however.
No. Resolution scaling (generally) does not allow you to set specific resolutions, only ratios or percentages of the display resolution. In other words, it makes true 1:1 comparisons impossible. As in this case, instead of testing at a standard 1080p, or at least the 16:10 equivalent 1200p, the reviewer used various scaling factors to arrive at silly non-standard rendering resolutions like 1843x1152 and 2427x1517 for rough 1080p and 1440p equivalents. This means that any comparison made will at best be approximate, which is just unnecessary when the point is to compare performance between different parts.
Posted on Reply
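A quick check shows where those odd resolutions come from, assuming the 16-inch MacBook Pro's 3072x1920 native panel; the 60% and 79% scaling factors are inferred here to reproduce the review's numbers, not stated in the thread:

```python
# The 16-inch MacBook Pro's native panel is 3072x1920 (16:10).
# Scaling factors below are inferred assumptions that reproduce the
# review's rendering resolutions; they are not stated in the thread.
native_w, native_h = 3072, 1920

for factor in (0.60, 0.79):
    w = round(native_w * factor)
    h = round(native_h * factor)
    print(f"{factor:.0%} scaling -> {w}x{h}")
# -> 1843x1152 and 2427x1517; neither matches the standard
#    1920x1080 or 2560x1440 used by typical benchmark databases.
```

This is exactly the mismatch being complained about: percentage scaling of a 16:10 panel can only approximate the 16:9 resolutions other laptops are tested at.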
#49
mtcn77
Valantar: As in this case, instead of testing at a standard 1080p, or at least the 16:10 equivalent 1200p, the reviewer used various scaling factors to arrive at silly non-standard rendering resolutions like 1843x1152 and 2427x1517 for rough 1080p and 1440p equivalents.
You know, doing things at point scale is a driver choice which OGSSAA is akin to. You might enjoy the greater textural quality, however the method is not without its disclaimer: ordered sampling introduces its grid pattern into the sampling distribution. So, it skews the sampling distribution rather than filtering it. RGSSAA provides more(root^2) multisample antialiasing at xy axis, in relation to diagonal antialiasing(-root^2).
All things said, it is not without cause; antialiasing is for filtering, not sampling aliases.
So, these aren't just preferential differences why the fraction is not set to be divisible.
Posted on Reply
#50
Valantar
mtcn77: You know, doing things at point scale is a driver choice which OGSSAA is akin to. You might enjoy the greater textural quality, however the method is not without its disclaimer: ordered sampling introduces its grid pattern into the sampling distribution. So, it skews the sampling distribution rather than filtering it. RGSSAA provides more(root^2) multisample antialiasing at xy axis, in relation to diagonal antialiasing(-root^2).
All things said, it is not without cause; antialiasing is for filtering, not sampling aliases.
So, these aren't just preferential differences why the fraction is not set to be divisible.
As usual, your posts are needlessly technical gibberish that generally doesn't relate to the topic. "Point scale"? "Textural quality"? "Sampling distribution"? I'm talking about being able to compare two things with the same metric. For gaming performance, that generally means the same game, at the same resolution, at the same settings, with the same driver, and with as few background applications as possible. What you are talking about has no relevance to this whatsoever.
Posted on Reply