Friday, January 29th 2021
Samsung Exynos SoC with AMD RDNA GPU Destroys Competition, Apple A14 Bionic SoC Kneels
Some time ago, Samsung and AMD announced that they would be building a mobile processor that uses the AMD RDNA architecture for graphics processing. Samsung is readying its Exynos 2100 SoC and today we get to see its performance results in the first leaked benchmark. The new SoC design has been put through a series of GPU-only benchmarks that stress just the AMD RDNA GPU. First, in the Manhattan 3 benchmark the Exynos SoC scored 181.8 FPS. It then scored 138.25 FPS in Aztec Normal and 58 FPS in Aztec High. If we compare those results to the Apple A14 Bionic chip, which scored 146.4 FPS in Manhattan 3, 79.8 FPS in Aztec Normal, and 30.5 FPS in Aztec High, the Exynos design is roughly 25% to 90% faster. Of course, given that this is only a leak, all information should be taken with a grain of salt.
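For reference, the relative gains implied by those leaked figures can be worked out directly. The short Python sketch below simply redoes the arithmetic with the FPS numbers quoted above; nothing in it comes from the benchmark tool itself.

```python
# Relative gains implied by the leaked GFXBench figures quoted above.
# FPS values are taken straight from the leak; everything else is simple arithmetic.
results = {
    "Manhattan 3":  (181.8, 146.4),   # (Exynos + RDNA GPU, Apple A14 Bionic)
    "Aztec Normal": (138.25, 79.8),
    "Aztec High":   (58.0, 30.5),
}

for test, (exynos_fps, a14_fps) in results.items():
    gain = (exynos_fps / a14_fps - 1) * 100
    print(f"{test}: {exynos_fps} vs {a14_fps} FPS -> ~{gain:.0f}% faster")

# Prints roughly 24%, 73% and 90%, i.e. the "25% to 90%" range mentioned above.
```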
Source:
Tom's Hardware
43 Comments on Samsung Exynos SoC with AMD RDNA GPU Destroys Competition, Apple A14 Bionic SoC Kneels
As I read somewhere, the Qualcomm GPU is based on IP bought from AMD many, many years ago, so everything in the mobile phone space is based on old AMD IP that got refreshed with whatever money Qualcomm (and probably Apple) put into R&D.
Samsung made a smart move updating its GPU to the best architecture available, except for ray-tracing :D
This score leaked in May 2020.
It was claimed that they could leave most of the chip intact and just swap the cores.
RDNA2 cores going into mobile chips will be disruptive.
I hope someone rolls out an OLED notebook with a Ryzen 6000-series chip (doesn't need to be 4K) for under 2000 Euro.
Amd/comments/l01p4t
AMD haven't prioritised IGP performance at all for the entirety of Zen Mobile. My 2700U is still close enough in graphics performance to the latest Cezanne 5000-series, according to early reviews. No matter how much you dress it up with faster RAM and higher clocks, Vega 8 was a downgrade from the Vega 10 in the first-gen Ryzen mobiles - and that's assuming you could even find a Vega 8. There were so few 4800U units that even many of the popular reviewers gave up waiting and reviewed the 4700U instead! With CL14 DDR4-2400 and a (temporarily) unrestricted power budget, my three-year-old 2700U was pretty comparable to the 4700U, simply because it has 10 CUs to the 4700U's 7. Also, at 1080p and under, bandwidth is less of an issue than absolute latency, and most of the DDR4-3200 Renoir models use cheap, high-latency RAM :(
The one and only improvement AMD have made to their IGPs in the last 3.5 years is moving to 7nm. All that brings to the table is higher clocks within any given cooling envelope, and I have to limit my 14nm Raven Ridge laptop to 22.5W for any long-term usage; the 35W needed to reach 1100MHz+ on the graphics will overwhelm the cooling in under 10 minutes. The performance isn't exactly rocket-science maths, though:
    Raven Ridge Vega 10: 10 CU x 1100 MHz = 11000
    Cezanne Vega 7: 7 CU x 1800 MHz = 12600
That's an incredible (/s) ~14.5% improvement over three years and three generations of the x700U SKU.
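As a rough illustration of the CU-count-times-clock proxy used above (a crude estimate that ignores IPC, bandwidth and power limits, not an official metric), the same sums in Python:

```python
# Back-of-the-envelope IGP throughput proxy used in the comment above:
# CU count x GPU clock. This ignores IPC, memory bandwidth and power limits,
# so it is only the commenter's rough estimate, not a real benchmark.
def igp_proxy(cu_count: int, clock_mhz: int) -> int:
    return cu_count * clock_mhz

raven_ridge = igp_proxy(10, 1100)   # Raven Ridge Vega 10 -> 11000
cezanne = igp_proxy(7, 1800)        # Cezanne Vega 7      -> 12600

print(f"Raven Ridge: {raven_ridge}, Cezanne: {cezanne}")
print(f"Gain: ~{(cezanne / raven_ridge - 1) * 100:.1f}%")  # ~14.5% over three generations
```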
RDNA2 is sorely needed in AMD's APUs. They also need to focus on less die area for CPU cores and more die area for graphics. Nobody using a 15W laptop is going to care about having 16 threads running - especially since the 15W limit is likely to dial back the clocks of those threads hard anyway. What would make more sense would be to drop Rembrandt (6000-series) down to six cores and potentially free up enough die area to boost the graphics from 8 Vega CUs to 12-15 Navi CUs. Each CPU core is easily as big as three Vega CUs on the existing dies.
This will change with DDR5, but low-end GPUs will already be better by then.
Sure, at higher resolutions bandwidth is a real problem, but laptop IGPs don't have the graphics horsepower to run at higher resolutions in the first place. We're aiming for 720p60 or 1080p30, where bandwidth doesn't make a huge amount of difference so long as you're running reasonably low absolute RAM latency and have enough bandwidth to shift the bottleneck over to the graphics CUs. Dual-channel DDR4-3200 is close enough in performance to both my DDR4-2400 CL14 and LPDDR4X-4266 that clearly a near-doubling of bandwidth doesn't get anywhere close to a doubling of performance - 15-25% at best, CU-for-CU and clock-for-clock.
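For context on the bandwidth figures being compared there, the theoretical peaks work out as below. This sketch assumes a 128-bit (dual-channel) bus for each configuration, which is typical for these laptops but is an assumption rather than a spec for any particular model.

```python
# Theoretical peak bandwidth = transfer rate (MT/s) x effective bus width (bytes).
# A 128-bit (16-byte) dual-channel bus is assumed for all three setups - typical
# for the laptops discussed above, but an assumption, not a measured figure.
BUS_BYTES = 128 // 8

configs_mts = {
    "DDR4-2400":    2400,
    "DDR4-3200":    3200,
    "LPDDR4X-4266": 4266,
}

for name, mts in configs_mts.items():
    gbps = mts * BUS_BYTES / 1000   # MT/s x bytes = MB/s, then convert to GB/s
    print(f"{name}: ~{gbps:.1f} GB/s peak")

# ~38.4, ~51.2 and ~68.3 GB/s - a ~78% jump from DDR4-2400 to LPDDR4X-4266, which,
# per the comment above, does not translate into a comparable FPS gain at 720p/1080p.
```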
They probably bought ATI/AMD IP many years ago, then built their tech up on it. AMD's current IP is a "little" different from the IP Qualcomm bought, so I can bet a Qualcomm/Samsung CPU with a new AMD GPU will totally destroy any Apple SoC with an Imagination Technologies GPU.
www.techpowerup.com/266530/samsung-amd-radeon-gpu-for-smartphones-is-reportedly-beating-the-competition
Why make the same article 10 months later?
We're not talking about enough of a performance jump to go to much higher resolutions; we're just trying to get barely playable games that run at 20-30 fps to run at maybe 30-50 fps. Even a 50% performance jump is well within the limits of current DDR4.