Tuesday, June 7th 2022
Intel Arc A730M Tested in Games, Gaming Performance Differs from Synthetic
The Intel Arc A730M "Alchemist" discrete GPU made headlines yesterday, when a notebook featuring it achieved a 3DMark Time Spy score of 10,138 points, which would put its performance in the same league as the NVIDIA GeForce RTX 3070 Laptop GPU. The same source has since taken the time to play some games, and came up with performance numbers that would put the A730M in a category lower than the RTX 3070 Laptop GPU.
The set of games tested is rather small: F1 2020, Metro Exodus, and Assassin's Creed Odyssey, but the three are fairly "mature" games (they've been around for a while). The A730M manages 70 FPS at 1080p and 55 FPS at 1440p in Metro Exodus. In F1 2020, we're shown 123 FPS (average) at 1080p and 95 FPS average at 1440p. In Assassin's Creed Odyssey, the A730M yields 38 FPS at 1080p and 32 FPS at 1440p. These numbers roughly translate to the A730M being slightly faster than the desktop GeForce RTX 3050 but slower than the desktop RTX 3060, or in the league of the RTX 3060 Laptop GPU. Intel is already handing out stable versions of its Arc "Alchemist" graphics drivers, and the three are fairly old games, so this might not be a case of bad optimization.
Sources:
Golden Pig Update (Weibo), VideoCardz
66 Comments on Intel Arc A730M Tested in Games, Gaming Performance Differs from Synthetic
Probably not. Intel is really good at dropping chips fast; we've seen it over and over against AMD. So dropping a GPU for mining-hungry people is not out of their playbook.
But chips are a hell of a lot smaller than a GPU.
A good sign imho, feature parity is a win.
EDIT: 12GB GDDR6? Yeah. That sounds great for compute purposes.
If Arc had come out on time, by now they'd probably have gotten some decent sales and developed the drivers a lot more. Facing heavily discounted Ampere and RDNA2, and going up against Lovelace and RDNA3, will cause it no end of grief. Intel doesn't do cheap hardware; they'll have to heavily discount Arc to garner interest, and I'll bet they can't bring themselves to do it. It might be 10-15% cheaper than the established players at best.
Drivers aren't going to be a problem, imo. At least I haven't had any issues with Intel drivers for a while now. Unless first gen Arc is meant to be only a test run like the 5700 XT was, which wouldn't surprise me.
This generation failed. The delay is purely due to underwhelming expectations. Can't be anything else.
With the GPU market turning into a gold mine in recent years, I'm surprised Apple didn't join the party. Who knows, maybe they're on it.
I'm telling you, the guy is a genius... Time for another sabbatical.
It wouldn't be so bad. More availability for us with other vendors :)
Raja probably has a way to simulate some sort of CUDA. Makes some sense too. They don't need it at full performance/feature parity.
Besides, have you seen how close to PhysX the stuff in, say, UE5 is? Physics calculations aren't rocket science, and they can emulate things.
Note also how other technologies, notably the ones that say 'I need a tensor/RT core' are still absent.
Another option is that what we're reading below is just utter bullshit. Or maybe that 9 FPS dip is a PhysX effect :D
Intel has a different vision for those kinds of computational needs: oneAPI and friends. They're going the other way around, though: you write your program against oneAPI, and it can then be compiled to CUDA in order to run on NVIDIA GPUs. Obviously it can target CPUs and AMD and Intel GPUs/accelerators as well.

Oh, and from what I've read, PhysX in UE5 is deprecated in favor of Unreal Chaos. Unity also supports Havok and Unity Physics alongside PhysX (and Box2D for 2D simulations). I don't think it's an important competitive advantage for NVIDIA any more; its last big move was open-sourcing the SDK. That said, NVIDIA is known for very good relations with developers, so maybe Metro is using something special.

Intel Arc will have tensor computational capabilities with its Matrix Engines, plus RT acceleration of some sort (it remains to be seen whether they go with more specialized units like NVIDIA or more generic ones like AMD). But beyond the common APIs like Direct3D/OpenGL/Vulkan, I don't think they'll provide anything compatible with, for example, NVIDIA OptiX.

That was my thought as well: it might be the game getting confused somehow, or the screenshot being faked. We'll have to wait for official reviews ;)
If TDP/chip size is right, this could be quite damning for NV in the notebook market.
CPU, GPU, chipset, network, Wi-Fi: every last controller chipset and doodad made by Intel for max profits.
And soon enough it'll leak out that manufacturers using these get nice big discounts and are penalised for selling mixed-vendor products (since, y'know, it's happened before).