
AMD Radeon Pro 5600M with HBM2 Benchmarked

It is the perfect version. 50 W is the killer deal.

It's nice, a low-power GPU.

But I was referring more to the tier list in those benches... we're looking at benchmarks that run ancient content, and "omg look at those fps!" In reality, though, this is just a tiny slice of the performance that's realistically available. A bit of a distortion of reality.
 
When current 20 W Nvidia performance barely (20-30%) beats an APU with DDR4 on a shared 15 W TDP, that seems... less than likely.
 
So, an R9 290 with 8 GB of HBM2 on 7 nm?
 
Except it's RDNA and not GCN?


"RDNA" is only a marketing term for GCN 1.5.0 (Navi, or Bermuda ?). R9 290 Hawaii (or Ibiza) is GCN 1.1 which is, yeah, the very same basic architecture with minor tweaks here and there.



This "Navi 12" is GCN 1.5.1.
 
"RDNA" is only a marketing term for GCN 1.5.0 (Navi, or Bermuda ?). R9 290 Hawaii (or Ibiza) is GCN 1.1 which is, yeah, the very same basic architecture with minor tweaks here and there.


This "Navi 12" is GCN 1.5.1.
Don't be daft. Backwards compatibility being baked in does not make it the same architecture.
 
Don't be daft. Backwards compatibility being baked in does not make it the same architecture.

Look at the schematics; you will see that the architecture is the same, just with some units moved left or right.

[attachment: GCN and RDNA block diagrams]


If "rDNA" is so straightforward compatible with all the previous GCN architectures, why is it so difficult for AMD to release stable drivers ?
Why is the last driver version 20.5.1 which is normally released in early May and effectively they stopped releasing new drivers ?

What is so wrong with AMD's "rDNA" so it doesn't work flawlessly yet?
 
Look at the schematics; you will see that the architecture is the same, just with some units moved left or right.

[attachment: GCN and RDNA block diagrams]

If "RDNA" is so straightforwardly compatible with all the previous GCN architectures, why is it so difficult for AMD to release stable drivers?
Why is the latest driver version still 20.5.1, which was released back in early May - have they effectively stopped releasing new drivers?

What is so wrong with AMD's "RDNA" that it doesn't work flawlessly yet?

Company culture and policy. It's not a technical problem; it's about allocation of resources. Discrete GPUs are somewhere near the bottom of that ladder.
 
could have been ~20 W Nvidia AYYYYYYYYYYYYMDUMB
Yeah, and even much faster. Remind me how the Nintendo Switch performance/battery-life expectations panned out.
 
Look at the schematics; you will see that the architecture is the same, just with some units moved left or right.

[attachment: GCN and RDNA block diagrams]

If "RDNA" is so straightforwardly compatible with all the previous GCN architectures, why is it so difficult for AMD to release stable drivers?
Why is the latest driver version still 20.5.1, which was released back in early May - have they effectively stopped releasing new drivers?

What is so wrong with AMD's "RDNA" that it doesn't work flawlessly yet?
You're contradicting yourself. If RDNA is just another revision of GCN, why does it need large driver changes at all? Also, looking at two somewhat similar (but by no means identical) block diagrams and saying "these are the same architecture" just shows that you are looking at this far too superficially. A block diagram is a high-level overview, and tells us next to nothing about how the parts in the diagram are constructed or how they work. You could make a thousand different architectures that could be illustrated with identical block diagrams yet be fundamentally incompatible. Any modern GPU architecture could be illustrated by a reasonably similar block diagram.
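To make "handles instructions in very different ways" concrete: per AMD's public RDNA whitepaper, GCN issues one instruction per 64-wide wavefront over four cycles on 16-lane SIMDs, while RDNA executes a 32-wide wavefront on 32-lane SIMDs in a single cycle. A minimal sketch of that difference (wave sizes and SIMD widths are the publicly stated figures; the function name is just for illustration):

```python
# Sketch: instruction issue latency per wavefront, GCN vs. RDNA.
# Wave sizes and SIMD widths are from AMD's RDNA whitepaper.

def issue_cycles(wave_size: int, simd_width: int) -> int:
    """Cycles to issue one vector instruction across a full wavefront."""
    return wave_size // simd_width

gcn = issue_cycles(wave_size=64, simd_width=16)   # 4 cycles per instruction
rdna = issue_cycles(wave_size=32, simd_width=32)  # 1 cycle per instruction

print(f"GCN  (wave64 on SIMD16): {gcn} cycles/instruction")
print(f"RDNA (wave32 on SIMD32): {rdna} cycles/instruction")
```

That four-to-one difference in issue latency alone changes how a shader compiler has to schedule work, which is exactly the kind of change a block diagram doesn't show.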
 
Except it's RDNA and not GCN?
Yeah, if you want, but for me RDNA is still GCN, just tweaked for the better. Not something life-changing like the move from TeraScale VLIW to GCN SIMD.
But to come back to what I said: we see the same number of shaders (2560) and a similar clock speed (~1000 MHz), hence my reference to the R9 290.
 
You're contradicting yourself. If RDNA is just another revision of GCN, why does it need large driver changes at all? Also, looking at two somewhat similar (but by no means identical) block diagrams and saying "these are the same architecture" just shows that you are looking at this far too superficially. A block diagram is a high-level overview, and tells us next to nothing about how the parts in the diagram are constructed or how they work. You could make a thousand different architectures that could be illustrated with identical block diagrams yet be fundamentally incompatible. Any modern GPU architecture could be illustrated by a reasonably similar block diagram.


Look, it is designated as GCN 1.5.0 and 1.5.1. That means it is a GCN evolution. Even during the official presentation, they said it is a "hybrid" between GCN and a new architecture.
1.5.0 doesn't mean "backwards compatibility" - it is the version of the architecture itself.
 
Dankness is a facility some consider to be unnatural. Personally, I embrace our null hypothesis overlords.
I am ever more convinced that you live in some parallel universe in which your statements are deeply insightful and eloquent. Until I find a portal to that universe, however, I remain unable to understand what you mean whatsoever.
Look, it is designated as GCN 1.5.0 and 1.5.1. That means it is a GCN evolution. Even during the official presentation, they said it is a "hybrid" between GCN and a new architecture.
1.5.0 doesn't mean "backwards compatibility" - it is the version of the architecture itself.
Or it means that the backwards-compatibility mode is itself not identical to any version of GCN, and thus requires some adaptations, which makes presenting it as GCN 1.5 a reasonable thing. That RDNA 1 in backwards-compatibility mode is recognized as GCN 1.5 does not mean that the overall architecture is GCN 1.5.
Yeah, if you want, but for me RDNA is still GCN, just tweaked for the better. Not something life-changing like the move from TeraScale VLIW to GCN SIMD.
But to come back to what I said: we see the same number of shaders (2560) and a similar clock speed (~1000 MHz), hence my reference to the R9 290.
"For you" - what does that mean? Does your opinion change the physical properties of the die? I certainly don't think so, and the undeniable fact is that RDNA - even in its not fully realized 1.0 form, where parts of GCN are carried over wholesale - handles instructions and other crucial aspects of GPU operation in very different ways from GCN. Just because there are commonalities does not mean they are the same. Do you also say that all Ford cars are the same, just because they share a brand, and some even share engines and drivetrains? All modern GPUs operate in similar ways, as there are standards in place that they must follow, and there is a limited number of possible ways of following them. Also, it is pretty much a universal truth that the longer something is developed (such as GPUs, or cars, or toasters, etc.) changes over time will diminish simply due to the fact that the possibility of finding a better design among the limited possible designs fulfilling the desired function becomes increasingly difficult. There are always developments that break this trend, but over time it all normalizes into a downward slope in terms of the scope of changes. In other words: it is entirely natural that RDNA is less of a break from GCN than GCN was from VLIW, but that doesn't mean that it no longer qualifies as a new architecture. By that logic I might as well say all GPU architectures with unified shaders are the same. You see how that is problematic, right?
 
You're contradicting yourself. If RDNA is just another revision of GCN, why does it need large driver changes at all?

Well, I don't know the reasons in detail, but my guess is that the Vega drivers, and the previous drivers in general, had been optimised for Vega's compute features.
For instance, Vega 10 has 12.66 TFLOPS of FP32 (single precision) and 791.6 GFLOPS of FP64 (double precision).
Navi, meanwhile, is stripped of some of the compute-oriented units, so it's more gaming-oriented, with only 9.754 TFLOPS of FP32 and 609.6 GFLOPS of FP64.
 
Well, I don't know the reasons in detail, but my guess is that the Vega drivers, and the previous drivers in general, had been optimised for Vega's compute features.
For instance, Vega 10 has 12.66 TFLOPS of FP32 (single precision) and 791.6 GFLOPS of FP64 (double precision).
Navi, meanwhile, is stripped of some of the compute-oriented units, so it's more gaming-oriented, with only 9.754 TFLOPS of FP32 and 609.6 GFLOPS of FP64.
That doesn't demonstrate any reduction in compute beyond the reduction in CUs.

The RX 5700 XT is rated at 9.754 TFLOPS due to having 2560 shaders capable of 2 FLOPS/clock running at a nominal ~1900 MHz. Similarly, the Vega 64 has 4096 shaders capable of 2 FLOPS/clock running at a nominal ~1550 MHz, for 12.66 TFLOPS. Both have a 16:1 ratio between FP32- and FP64-capable hardware. The difference is that the former manages to outperform the latter in games despite its apparent overall compute deficit. In other words, the RX 5700 XT has architectural changes that allow it to utilize its compute power significantly better in gaming applications.
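For anyone who wants to check those numbers, the rated figures fall straight out of shader count × 2 FLOPS/clock (one FMA) × boost clock. A quick sanity check (the ~1905 MHz and ~1546 MHz boost clocks are the published nominal values):

```python
# Back-of-the-envelope check of the TFLOPS figures quoted above.
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    # Each shader retires one FMA (2 FLOPS) per clock.
    return shaders * 2 * clock_ghz / 1000.0

rx_5700_xt = fp32_tflops(2560, 1.905)  # ~9.75 TFLOPS
vega_64    = fp32_tflops(4096, 1.546)  # ~12.66 TFLOPS

# Both parts run FP64 at 1/16 the FP32 rate:
for name, tf in [("RX 5700 XT", rx_5700_xt), ("Vega 64", vega_64)]:
    print(f"{name}: {tf:.2f} TFLOPS FP32, {tf / 16 * 1000:.1f} GFLOPS FP64")
```

The FP64 outputs land exactly on the 609.6 and 791.6 GFLOPS figures quoted earlier, which is the point: the compute ratios are unchanged, and only the unit counts and clocks differ.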
 
While one source is less than ideal, this is definitely promising in terms of perf/W: reportedly in the performance ballpark of an RTX 2060 (max-P, not max-Q), which has a TGP of 80 W.

I find it very weird that the reviewer opted to use resolution scaling to test different settings rather than just changing the output resolution (does Boot Camp not support non-native resolutions, maybe?), but nonetheless the perf/W on display is quite impressive. It certainly bodes well for RDNA 2, and it has definitely brought back my hope that "Big Navi" might use HBM2.
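Worth spelling out what that comparison implies numerically. If the 5600M really does land in RTX 2060 (max-P) territory, the perf/W gap is easy to ballpark, assuming roughly equal performance at each card's TGP (50 W and 80 W are the figures cited above):

```python
# Rough perf/W ratio, assuming roughly equal performance at each card's TGP.
radeon_pro_5600m_tgp_w = 50
rtx_2060_maxp_tgp_w = 80

print(f"~{rtx_2060_maxp_tgp_w / radeon_pro_5600m_tgp_w:.1f}x perf/W")  # ~1.6x
```

A ~1.6x efficiency edge over Turing at the same performance would be a notable result for RDNA 1, HBM2 assist or not.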
 
While one source is less than ideal, this is definitely promising in terms of perf/W: reportedly in the performance ballpark of an RTX 2060 (max-P, not max-Q), which has a TGP of 80 W.

I find it very weird that the reviewer opted to use resolution scaling to test different settings rather than just changing the output resolution (does Boot Camp not support non-native resolutions, maybe?), but nonetheless the perf/W on display is quite impressive. It certainly bodes well for RDNA 2, and it has definitely brought back my hope that "Big Navi" might use HBM2.
Resolution scaling is the correct method; these chips have no scaling difference between drawn and textured resolution. Past GPUs do, however.
 
Resolution scaling is the correct method; these chips have no scaling difference between drawn and textured resolution. Past GPUs do, however.
No. Resolution scaling (generally) does not allow you to set specific resolutions, only ratios or percentages of the display resolution. In other words, it makes true 1:1 comparisons impossible. As in this case: instead of testing at a standard 1080p, or at least the 16:10-equivalent 1200p, the reviewer used various scaling factors, arriving at silly non-standard rendering resolutions like 1843x1152 and 2427x1517 as rough 1080p and 1440p equivalents. This means that any comparison made will at best be approximate, which is just unnecessary when the point is to compare performance between different parts.
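To illustrate why those resolutions look so odd: the Radeon Pro 5600M ships in the 16-inch MacBook Pro, whose panel is 3072x1920, so percentage scaling lands on whatever fraction of that the slider allows. A small sketch (the 60% and 79% factors are inferred from the resolutions the reviewer reported):

```python
# Why percentage-based resolution scaling produces non-standard render targets.
NATIVE = (3072, 1920)  # 16-inch MacBook Pro panel (16:10)

def render_res(scale: float) -> tuple[int, int]:
    return (round(NATIVE[0] * scale), round(NATIVE[1] * scale))

print(render_res(0.60))  # (1843, 1152): close to 1080p, but not 1080p
print(render_res(0.79))  # (2427, 1517): close to 1440p, but not 1440p

# Pixel-count mismatch versus the standard resolutions other cards run at:
print(f"{1843 * 1152 / (1920 * 1080):.3f}x the pixels of true 1080p")  # ~1.024x
print(f"{2427 * 1517 / (2560 * 1440):.3f}x the pixels of true 1440p")  # ~0.999x
```

So the "1080p-equivalent" run actually pushes ~2% more pixels than real 1080p, and at 16:10 rather than 16:9, which is why cross-card comparisons can only ever be approximate.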
 
As in this case: instead of testing at a standard 1080p, or at least the 16:10-equivalent 1200p, the reviewer used various scaling factors, arriving at silly non-standard rendering resolutions like 1843x1152 and 2427x1517 as rough 1080p and 1440p equivalents.
You know, rendering at a point-scaled resolution is a driver choice, one that is akin to OGSSAA. You might enjoy the greater textural quality, but the method is not without its caveats: ordered sampling introduces its own grid pattern into the sampling distribution, so it skews the distribution rather than filtering it. RGSSAA, by comparison, provides more (√2×) antialiasing along the x/y axes at the expense of diagonal antialiasing (1/√2).
All things said, it is not without cause; antialiasing is for filtering, not for sampling aliases.
So there are reasons beyond preference why the scaling fraction is not set to be evenly divisible.
 