Wednesday, November 22nd 2023

Intel Core Ultra 7 155H iGPU Outperforms AMD Radeon 780M, Comes Close to Desktop Intel Arc A380

Intel is slowly preparing to launch its next-generation Meteor Lake mobile processor family, dropping the Core i brand name in favor of Core Ultra. Today, we are witnessing some early Geekbench v6 results with the latest leak of the Core Ultra 7 155H processor, boasting an integrated Arc GPU featuring 8 Xe-Cores—the complete configuration expected in the GPU tile. This tile is also projected to be part of the more potent Core Ultra 9 185H CPU. The Intel Core Ultra 7 155H processor has been benchmarked in the new ASUS Zenbook 14, which houses a 16-core, 22-thread hybrid CPU configuration capable of boosting up to 4.8 GHz. Paired with 32 GB of memory, the configuration was well equipped to supply both the CPU and GPU with sufficient memory space.

Perhaps the most interesting information from the submission is the OpenCL score of the GPU. Clocking in at 33948 points in Geekbench v6, the GPU outpaces AMD's Radeon 780M found in APU solutions like the Ryzen 9 7940HS and Ryzen 9 7940U, which scored 30585 and 27345 points in the same benchmark, respectively. The GPU tile also comes within 10% of the desktop Intel Arc A380 discrete GPU, which scored 37105 points. The Xe-LPG architecture is delivering interesting performance for an integrated GPU, which suggests Intel's Meteor Lake SKUs will bring more performance per watt than ever.
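For context, here is the quick arithmetic behind those comparisons; a back-of-the-envelope sketch using only the leaked scores quoted above:

```python
# Leaked Geekbench v6 OpenCL scores quoted in this article
scores = {
    "Core Ultra 7 155H iGPU (Arc, 8 Xe-Cores)": 33948,
    "Radeon 780M (Ryzen 9 7940HS)": 30585,
    "Radeon 780M (Ryzen 9 7940U)": 27345,
    "Arc A380 (desktop, discrete)": 37105,
}

base = scores["Core Ultra 7 155H iGPU (Arc, 8 Xe-Cores)"]
for name, score in scores.items():
    delta = (score - base) / base * 100
    print(f"{name}: {score} ({delta:+.1f}% vs the 155H iGPU)")
# The A380 lands roughly 9% ahead, while the 7940HS's 780M trails by roughly 10%.
```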
Source: VideoCardz

39 Comments on Intel Core Ultra 7 155H iGPU Outperforms AMD Radeon 780M, Comes Close to Desktop Intel Arc A380

#26
watzupken
I won't take a synthetic test result to conclude this is a better product. When Intel first launched its Xe iGPU, it was fast in benchmarks; in games, that was another story. I think Intel has been focusing more on its iGPU, which is a good thing for competition. I do like Intel's solution because it is feature rich, but performance consistency still needs improvement. Furthermore, I believe Intel's updated iGPU has more GPU cores than AMD's solution, so there may be a knock-on impact on power requirements. Looking forward to seeing the final product.
Posted on Reply
#27
Darmok N Jalad
Curious what power envelope we can expect, and by that, I mean actual power consumed and not a rating that the APU laughs at as it speeds by.
AssimilatorGeekbench results are about as useful as pissing in the wind.
Hey, if it's downwind, it can be an even better benchmark.
Posted on Reply
#28
Jism
TumbleGeorgeIt is already known that AMD Phoenix models with 12 CUs are severely hampered in terms of graphics performance by the low RAM speed. So, in the niche of integrated graphics, the top models seem to be competing to see who can best take advantage of the slow memory and mitigate the bandwidth shortage as much as possible with a large and fast cache...
IGPs with shared memory have never been a success in terms of performance. It's best to design an APU with its own CPU CCX and its own GPU + HBM, or something along those lines.

You would be looking at good 1440p stuff right there. And if you want to upgrade, you just swap out the CPU in its entirety.

And you can do the same tricks as on a console vs. a desktop CPU/GPU - a 35 W power limit for the CPU and a 105 W power limit for the GPU.
Posted on Reply
#29
AnarchoPrimitiv
JismIGPs with shared memory have never been a success in terms of performance. It's best to design an APU with its own CPU CCX and its own GPU + HBM, or something along those lines.

You would be looking at good 1440p stuff right there. And if you want to upgrade, you just swap out the CPU in its entirety.

And you can do the same tricks as on a console vs. a desktop CPU/GPU - a 35 W power limit for the CPU and a 105 W power limit for the GPU.
Yeah, unfortunately with AI basically gobbling up any and all HBM capacity for the foreseeable future, I highly doubt we'll see any APUs with integrated HBM for the next year or two at the earliest, which is a shame. I've been saying for years that I would 100% buy something akin to the Xbox Series X APU if it had 8 GB of integrated HBM2e, even if the package was as big as Threadripper. I think it'd make an awesome platform, but I'm probably one of the few interested in such a product, because if AMD had determined there was mass appeal for it, I'm sure they would have tried to release it already.
Posted on Reply
#30
R0H1T
JismIGPs with shared memory have never been a success in terms of performance.
And that changed with Apple's M-series chips back in 2020(?), so no, you're wrong about that.
Posted on Reply
#31
ToTTenTranz
R0H1TIf it does come to mass market it certainly won't be competing with anything Intel for at least 2 years from now; its competition is Apple's M-series or Snapdragon X Elite(?) with top-of-the-line specs & margins! I think ultimately its future would depend a lot on how AMD provisions capacity/margins for its desktop/server chips and then laptops, because the other two are basically interchangeable; this is a monolithic die IIRC.
Strix Halo is supposedly a chiplet solution.
TumbleGeorgeAnd it's motherboard with 4 channels for Strix Halo is invisible because no exists.
Its (sans apostrophe) motherboard probably exists in a lab.
There are plenty of 4-channel x86 motherboards out there, but Strix Halo won't be socketed anyway. And if it's not socketed, it doesn't matter whether the LPDDR width is 128, 256, 512, or even 1024 bits.
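As a rough illustration of why the bus width matters regardless of socketing; a back-of-the-envelope sketch, with the LPDDR5X speed picked purely for the example rather than taken from any confirmed Strix Halo spec:

```python
# Peak theoretical bandwidth = bus width (bits) / 8 * transfer rate (MT/s)
# Assumed speed: LPDDR5X-8533 (illustrative only, not a confirmed figure)
mt_per_s = 8533

for width_bits in (128, 256, 512, 1024):
    gb_per_s = width_bits / 8 * mt_per_s / 1000
    print(f"{width_bits:>4}-bit LPDDR5X-8533: ~{gb_per_s:.0f} GB/s peak")
```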
GodisanAtheist- I've wondered why AMD doesn't throw iGPUs an Infinity Cache bone. When I saw IC as a concept my first thought was "this is perfect for APUs, their whole deal is being memory constrained" and yet here we are years later, no IC on any APUs.

It seems like a very simple solution to squeeze additional performance out of the existing arch, since adding more CUs wouldn't take AMD anywhere and who knows how much efficiency there is to extract after three revisions of RDNA.
I used to think that way, but large on-die caches seem to have trouble powering down to sub-10W levels, and that might also be why AMD's APUs have smaller L3 caches on the CPU front as well.
JismIGPs with shared memory have never been a success in terms of performance.
Console APUs from the last two decades say hi.
Posted on Reply
#32
theouto
mechtechhmmm

Well if it wasn't for AMD's half-decent iGPU, Intel would probably still be shipping a Prescott-era iGPU. So good for competition. Maybe this will spur a decent iGPU race...
Inb4 the iGPU race efforts bleed into the GPU market and we get two very competitive markets, like the CPU market has been ever since both AMD and Intel realized they could make good products.
Posted on Reply
#33
TumbleGeorge
ToTTenTranzIts (sans apostrophe) motherboard probably exists in a lab.
There are plenty of 4-channel x86 motherboards out there, but Strix Halo won't be socketed anyway. And if it's not socketed, it doesn't matter whether the LPDDR width is 128, 256, 512, or even 1024 bits.
Yes, it's nice to theorize about how an APU miraculously works without a motherboard just because it wouldn't be socketed. The RAM still has to go in somehow, the persistent storage too, plus space for peripheral pins and sockets, the various controllers, and the BIOS chip, which are not part of the APU. Everything must be accounted for. Haha, even smartphones and other small gadgets that run systems-on-a-chip still have motherboards.
Posted on Reply
#34
ymdhis
ToTTenTranzI used to think that way, but large on-die caches seem to have trouble powering down to sub-10W levels, and that might also be why AMD's APUs have smaller L3 caches on the CPU front as well.
I thought the L3 cache was halved simply to save space, since they have to put the CPU, the IO unit and the iGPU all on the same monolithic die. The fact that everything is monolithic already helps power usage a lot, since less power is wasted on chiplet-to-chiplet interconnects. The IO unit on my 5600G can idle at less than half the power of the one on my previous 3600 (down to sub-5 W from the previous 10 W).
Posted on Reply
#35
ToTTenTranz
ymdhisI thought the L3 cache was halved simply to save space, since they have to put the CPU, the IO unit and the iGPU all on the same monolithic die.
I used to think that way as well, but the fact is that you can't partially power down Last Level Cache like you can with CPU cores (and their own individual caches).
That means if you want to use only one core out of 8, you can power down 7 cores' worth of ALUs, registers, and L0, L1 and L2 caches. But the whole L3 must stay powered up at all times.

And the system can't just use half the L3. It needs to always use all the cells in parallel.
Posted on Reply
#36
Six_Times
When will Meteor Lake be available in laptops?
Posted on Reply
#37
Tek-Check
BorcHawk Point is a refresh - same GPU+CPU and same process node. What do you expect from this? Strix Point is scheduled for H2 2024, which means it comes late 2024 to early 2025 in meaningful volume. We can basically add 6 months to the release date of a new mobile generation from AMD. Arrow Lake will be around the corner when this comes out.
I expect a similar 9-10% uplift over Phoenix, like Rembrandt had over Cezanne. Just about enough to trade blows with Meteor Lake throughout 2024.
AMD knows, and we know now, that Meteor Lake is not any revolution in mobile performance. MTL will certainly, and finally, improve power efficiency, but performance-wise it's not miles ahead of Raptor Lake. We are hearing that an increasing number of OEMs are not happy with it either.
www.techspot.com/review/2487-amd-ryzen-6800h/

Volume is another story... Let's not conflate two things here. I do think that AMD should not announce products 4-6 months before they become available, or at least announce at CES and say directly there that laptops would land in May. Let's not forget that Intel briefed OEMs that Meteor Lake would be ready for the back-to-school season. As we know now, it will be for Xmas. I waited for my laptop model with the 6800H around 6 months. I was OK with that as I did not need it urgently and I was targeting a 4K OLED laptop from Asus for media consumption. Those were delayed a bit due to new Samsung displays needing more tuning. It's not always the CPU that is the reason for delays across all lines of machines; there are a lot of moving parts that need to come together. My impression is that the marketing teams of several tech companies are sometimes too eager, and also under pressure, to make announcements that fit into quarterly deadlines and reports for shareholders.
TumbleGeorgeHuh, if all that's talked about is food. All sorts of nonsense is mentioned about the Strix, including 4-channel RAM access. Which is not going to happen. It's not that it isn't technically feasible, it's just that someone has to make a whole new platform just for Strix.
I am not talking about the rumoured Strix Halo; mainstream Strix Point on Zen 5 was on AMD's official roadmap.
If AMD decides to compete with Apple, they will need to design a new platform with four channels. All cards are on the table.
TumbleGeorgeIt is already known that AMD Phoenix models with 12 CUs are severely hampered in terms of graphics performance by the low RAM speed. So, in the niche of integrated graphics, the top models seem to be competing to see who can best take advantage of the slow memory and mitigate the bandwidth shortage as much as possible with a large and fast cache...
Let's not expect miracles from APUs on two-channel memory systems. Besides, faster SO-DIMMs at 6400 MT/s are available for those who want to squeeze out more performance. For whoever wants more memory bandwidth, Apple is available too. Good luck with gaming, though.
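To put that two-channel ceiling in numbers; a quick, back-of-the-envelope sketch (the four-channel figure is illustrative, not tied to any announced product):

```python
# Peak theoretical bandwidth = channels * 64-bit channel width / 8 * transfer rate (MT/s)
def ddr5_bandwidth_gb_s(channels: int, mt_per_s: int) -> float:
    return channels * 64 / 8 * mt_per_s / 1000

print(ddr5_bandwidth_gb_s(2, 5600))  # ~89.6 GB/s, typical dual-channel DDR5-5600
print(ddr5_bandwidth_gb_s(2, 6400))  # ~102.4 GB/s with the faster SO-DIMMs mentioned
print(ddr5_bandwidth_gb_s(4, 6400))  # ~204.8 GB/s, what a four-channel platform would buy
```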
Posted on Reply
#38
TumbleGeorge
Tek-CheckLet's not expect miracles
Yes, we agree. As you can see, I tried to calm the agitation of the colleague I was answering to: "You can't just pull a piece of silicon out of your sleeve and have it magically work while you're holding it in your hands".
Posted on Reply
#39
Tek-Check
DristunIt's a shame that integrated graphics in desktop CPUs will still be getting the most cut-down piece of junk imaginable instead of a decent unit like here.
You can always buy a mobile CPU/APU in a desktop-form system.
There will be 8000G desktop APUs from January if you need a better iGPU.

It's all about necessity and functionality. AMD introduced its first iGPU on regular desktop CPUs last year. It's simply for monitor connection and the media engine; for anything else, we have discrete GPUs. In fact, the iGPU has a more powerful display engine than OEMs are willing to expose. The iGPU on Raphael supports both DP 2.1 at 40 Gbps and HDMI 2.1 at 48 Gbps. None of the motherboard vendors have exposed those ports to their full capability. None. Only ASRock launched boards with HDMI 2.1 at 32 Gbps for newer monitors.
FlankerInteresting. Now I want to see what framerates they get in games
Don't get your hopes up too high. We know what Arc is (in)capable of.
john_Intel GPUs do nicely in benchmarks; how about games?
That being said, we expect AMD to push Nvidia for better GPUs, and we expect Intel to push AMD for better iGPUs.
There's more to those games between manufacturers.

AMD launched the first GPUs with a DisplayPort 2.1 video signal: client cards at 54 Gbps (UHBR13.5) and the PRO W series with the full 80 Gbps (UHBR20). Nvidia will not have any GPUs with DP 2.1 until 2025. Now, Intel wants to launch a discrete Thunderbolt 5 controller next year for halo products. Currently, no iGPU supports a DP 2.1 signal at 80 Gbps, so Thunderbolt 5 cannot take advantage of the video signal from there.

The only products Thunderbolt 5 could take a full DP 2.1 signal from at the moment are AMD's PRO W7000 cards. So, for any wider adoption, Intel must wait for the new Nvidia 5000 cards to implement a DP 2.1 video signal, or take it from its own Battlemage GPUs if those support the full DP 2.1 signal.
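For a sense of why the full UHBR20 link matters; a rough sketch of the payload math, where the blanking overhead and the example display mode are assumptions for illustration, not figures from this thread:

```python
# DP 2.1 uses 128b/132b encoding, so usable payload is ~96.97% of the raw link rate
links = {"UHBR13.5 (54 Gbps raw)": 54.0, "UHBR20 (80 Gbps raw)": 80.0}

# Example mode: 4K (3840x2160) at 240 Hz, 10-bit RGB (30 bpp), uncompressed
h, v, hz, bpp = 3840, 2160, 240, 30
blanking_overhead = 1.12  # rough reduced-blanking timing overhead (assumed)
needed_gbps = h * v * hz * bpp * blanking_overhead / 1e9

for name, raw in links.items():
    payload = raw * 128 / 132
    fits = "fits" if needed_gbps <= payload else "needs DSC or a lower mode"
    print(f"{name}: ~{payload:.1f} Gbps payload, mode needs ~{needed_gbps:.1f} Gbps -> {fits}")
```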
TumbleGeorgeAnd it's motherboard with 4 channels for Strix Halo is invisible because no exists.
We don't know this. It's a tech rumour. We will find out next year.
Posted on Reply