Tuesday, June 7th 2022

Intel Arc A730M Tested in Games, Gaming Performance Differs from Synthetic

The Intel Arc A730M "Alchemist" discrete GPU made headlines yesterday, when a notebook featuring it achieved a 3DMark TimeSpy score of 10,138 points, which would put its performance in the same league as the NVIDIA GeForce RTX 3070 Laptop GPU. The same source has since taken the time to play some games, and came up with performance numbers that put the A730M a category below the RTX 3070 Laptop GPU.

The set of games tested is rather small (F1 2020, Metro Exodus, and Assassin's Creed Odyssey), but all three are fairly "mature" games that have been around for a while. The A730M scores 70 FPS at 1080p and 55 FPS at 1440p in Metro Exodus. In F1 2020, we're shown 123 FPS (average) at 1080p and 95 FPS (average) at 1440p. In Assassin's Creed Odyssey, the A730M yields 38 FPS at 1080p and 32 FPS at 1440p. These numbers roughly translate to the A730M being slightly faster than the desktop GeForce RTX 3050 and slower than the desktop RTX 3060, or in the league of the RTX 3060 Laptop GPU. Intel is already handing out stable versions of its Arc Alchemist graphics drivers, and the three are fairly old games, so this might not be a case of bad optimization.
Sources: Golden Pig Update (Weibo), VideoCardz

66 Comments on Intel Arc A730M Tested in Games, Gaming Performance Differs from Synthetic

#26
ThrashZone
bugThey've already built mining accelerators for that, so that's not it.
Hi,
Probably not; Intel is really good at dropping chips fast. I've seen it over and over against AMD, so dropping a GPU for the mining-hungry crowd is not out of their playbook.
But chips are a hell of a lot smaller than a GPU.
Posted on Reply
#27
Xajel
DavenThere’s a good chance no one will be able to buy the first generation as an AIB. These GPUs might only be available in complete OEM systems and released in a limited number of countries.
AFAIK, these are mobile GPUs, so no AIBs, only OEMs.
Posted on Reply
#28
Daven
XajelAFAIK, these are mobile GPUs, so no AIBs, only OEMs.
Sorry, I was referring to the upcoming launch of the desktop versions that is supposed to happen this quarter. I thought the original commenter was referring to the price of the desktop boards.
Posted on Reply
#29
Vayra86
Did you guys notice the Nvidia Gameworks technologies are 'ON' in this Metro benchmark? Hairworks and PhysX.

A good sign imho, feature parity is a win.
Posted on Reply
#30
ncrs
Vayra86Did you guys notice the Nvidia Gameworks technologies are 'ON' in this Metro benchmark? Hairworks and PhysX.

A good sign imho, feature parity is a win.
Don't both of them work regardless of the GPU vendor? Even in the NVIDIA driver settings you can force CPU-only PhysX.
Posted on Reply
#31
Fouquin
bugThey have also had i740 which was decent and Larrabee (which sucked balls)
The i740 was terrible; it only existed as a feature presentation for AGP 1.0. Not competitive at all. Larrabee never actually released, and the demos showed it being close to competitive while offering some interesting features such as ray tracing and real-time path tracing. You can't claim Larrabee was good or bad, though; nobody outside Intel has ever collected performance numbers.
Posted on Reply
#32
aQi
Chrispy_That's not true - the Xe architecture has been running in laptops for a few years now and Intel absolutely has been working with game developers to optimise drivers, game profiles, and fix bugs. Forza is one of the engines that Xe drivers initially struggled with at launch of Tiger Lake about 2 years ago.
Yes, I did know that, but if these discrete GPUs are already in the hands of gamers, then the developers would already be optimising their titles, just as they do for the green and red teams.
Posted on Reply
#33
dragontamer5788
I mean, if they're high-compute but terrible at gaming performance, that sounds ideal for a "GPU-compute" build. (assuming the price drops low enough)

EDIT: 12GB GDDR6? Yeah. That sounds great for compute purposes.
Posted on Reply
#34
AusWolf
This is positive news. I just wish there was more about desktop cards, including release dates.
Posted on Reply
#35
Minus Infinity
Intel has to start somewhere, and I think hardware isn't their problem; it's drivers. I'm sure it will be a bit rough initially, which is why I don't buy first-gen products (look at the problems Alder Lake faced due to the little cores), but ultimately more competition is great news. The only real problem I have is that yet again Intel failed miserably on their delivery timeline. And if anyone believes their BS timeline for Raptor Lake and beyond, I have a nice bridge in Sydney I can sell you. Raptor Lake will struggle to be out by December going by the latest reports, and the idea that Meteor Lake is shipping on desktop in less than 12 months is ludicrous.

If Arc had come out on time, by now they'd probably have gotten some decent sales and developed the drivers a lot more. Now it'll be facing heavily discounted Ampere and RDNA2 cards, and going up against Lovelace and RDNA3 will cause it no end of grief. Intel don't do cheap hardware; they'll have to heavily discount Arc to garner interest, and I'll bet they can't bring themselves to do it. It might be 10-15% cheaper than the established players at best.
Posted on Reply
#36
AusWolf
Minus InfinityIntel has to start somewhere, and I think hardware isn't their problem; it's drivers. I'm sure it will be a bit rough initially, which is why I don't buy first-gen products (look at the problems Alder Lake faced due to the little cores), but ultimately more competition is great news. The only real problem I have is that yet again Intel failed miserably on their delivery timeline. And if anyone believes their BS timeline for Raptor Lake and beyond, I have a nice bridge in Sydney I can sell you. Raptor Lake will struggle to be out by December going by the latest reports, and the idea that Meteor Lake is shipping on desktop in less than 12 months is ludicrous.

If Arc had come out on time, by now they'd probably have gotten some decent sales and developed the drivers a lot more. Now it'll be facing heavily discounted Ampere and RDNA2 cards, and going up against Lovelace and RDNA3 will cause it no end of grief. Intel don't do cheap hardware; they'll have to heavily discount Arc to garner interest, and I'll bet they can't bring themselves to do it. It might be 10-15% cheaper than the established players at best.
But if they do discount it, it might be a decent offering. It's not that the masses need 3090-level performance in their rigs anyway.

Drivers aren't going to be a problem, imo. At least I haven't had any issues with Intel drivers for a while now. Unless first-gen Arc is meant to be only a test run like the 5700 XT was, which wouldn't surprise me.
Posted on Reply
#37
Jism
Chrispy_EXACTLY as I called it here yesterday:
I suspect Intel is shaving off the edges in terms of drivers and any popular benchmark.

This generation failed. The delay is purely due to underwhelming expectations. Can't be anything else.
Posted on Reply
#38
watzupken
Actually, after reading the article, I can't really tell if the result is good or bad. In the first place, we are looking at the A730M, which is not the flagship A770M, and which seems to roughly match the RTX 3060 Mobile in terms of specs. The conclusion seems to imply that the A730M performs at an RTX 3060 Mobile level, so it does not sound bad. I can't wrap my head around why the comparison is made with the RTX 3050 and RTX 3060 desktop versions in the first place, since it is clear that desktop components have more power headroom and almost no thermal restrictions compared to a laptop GPU. Also, the faster desktop CPUs will most likely contribute to better performance results at, say, 1080p, and to a minimal extent, 1440p.
Posted on Reply
#40
Prima.Vera
This is what Raja came up with after all those years of development?? :laugh::laugh::laugh:
I'm telling you, the guy is a genius... Time for another sabbatical.
Posted on Reply
#41
ratirt
I wonder how good those Intel products are at mining. Maybe Intel will become a new mining kingpin with them graphics :)
It wouldn't be so bad. More availability for us with the other vendors :)
Posted on Reply
#42
Vayra86
ncrsDon't both of them work regardless of the GPU vendor? Even in the NVIDIA driver settings you can force CPU-only PhysX.
Advanced PhysX is GPU PhysX
Posted on Reply
#43
ncrs
Vayra86Advanced PhysX is GPU PhysX
I find it highly unlikely that Intel licensed this from NVIDIA. According to the PhysX SDK, GPU acceleration on Windows requires CUDA, and that is something NVIDIA is never going to license out.
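For anyone curious what that dependency looks like in code, here's a minimal sketch against the open-source PhysX 4.x SDK (simplified, error handling omitted; the setup is my illustration, not anything confirmed about Metro's integration). The CPU dispatcher path works on any vendor's hardware, while GPU rigid bodies are gated behind a valid CUDA context:

```cpp
// Minimal PhysX 4.x scene setup: CPU simulation is the default;
// GPU dynamics are opt-in and require a CUDA context manager,
// which only initializes on NVIDIA hardware.
#include <PxPhysicsAPI.h>
using namespace physx;

static PxDefaultAllocator gAllocator;
static PxDefaultErrorCallback gError;

int main() {
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gError);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    PxSceneDesc desc(physics->getTolerancesScale());
    desc.gravity = PxVec3(0.0f, -9.81f, 0.0f);
    desc.cpuDispatcher = PxDefaultCpuDispatcherCreate(4); // vendor-agnostic CPU path
    desc.filterShader = PxDefaultSimulationFilterShader;

    // GPU path: only taken when a CUDA-capable (i.e. NVIDIA) device is present.
    PxCudaContextManagerDesc cudaDesc;
    PxCudaContextManager* cuda = PxCreateCudaContextManager(*foundation, cudaDesc);
    if (cuda && cuda->contextIsValid()) {
        desc.cudaContextManager = cuda;
        desc.flags |= PxSceneFlag::eENABLE_GPU_DYNAMICS;
        desc.broadPhaseType = PxBroadPhaseType::eGPU;
    }

    PxScene* scene = physics->createScene(desc);
    scene->simulate(1.0f / 60.0f); // step the simulation once
    scene->fetchResults(true);
    return 0;
}
```

So a benchmark can legitimately show "PhysX: ON" on any GPU; it just silently takes the CPU path on non-NVIDIA cards.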
Posted on Reply
#44
Vayra86
ncrsI find it highly unlikely that Intel licensed this from NVIDIA. According to the PhysX SDK, GPU acceleration on Windows requires CUDA, and that is something NVIDIA is never going to license out.
And yet, the bench says it is being run and they produce a number of FPS with it 'ON'.

Raja probably has a way to simulate some sort of CUDA. Makes some sense too. They don't need it at full performance/feature parity.
Besides, have you seen how close to PhysX the stuff in, say, UE5 is? Physics calculations aren't rocket science, and they can emulate things.

Note also how other technologies, notably the ones that say 'I need a tensor/RT core' are still absent.

Another option is that what we're reading below is just utter bullshit. Or maybe that 9 FPS dip is a PhysX effect :D

Posted on Reply
#45
ncrs
Vayra86And yet, the bench says it is being run and they produce a number of FPS with it 'ON'.

Raja probably has a way to simulate some sort of CUDA. Makes some sense too. They don't need it at full performance/feature parity.
Unfortunately, it's not simple to "simulate CUDA"; there have been attempts, but none very successful. It's a huge advantage for NVIDIA, and it's what built their near-monopoly in certain sectors.
Intel has a different vision for those kinds of computational needs: oneAPI & co. However, they are going the other way: you write your program in oneAPI tech, and it then gets compiled to CUDA in order to run on NVIDIA GPUs. Obviously, it can also target CPUs and AMD and Intel GPUs/accelerators.
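To make that concrete, a trivial SYCL 2020 sketch (assuming Intel's DPC++ compiler and Codeplay's CUDA plugin for the NVIDIA backend; the exact flags below are illustrative, not gospel). The same source can target an Intel GPU, a CPU, or be lowered to PTX for an NVIDIA GPU:

```cpp
// Minimal SYCL 2020 vector add. With the oneAPI toolchain, one source can be
// compiled for different backends, e.g. (flag per Intel/Codeplay docs):
//   icpx -fsycl -fsycl-targets=nvptx64-nvidia-cuda vecadd.cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q; // picks whatever device the runtime exposes
    {
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));
        q.submit([&](sycl::handler& h) {
            sycl::accessor xa(ba, h, sycl::read_only);
            sycl::accessor xb(bb, h, sycl::read_only);
            sycl::accessor xc(bc, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(n),
                           [=](sycl::id<1> i) { xc[i] = xa[i] + xb[i]; });
        });
    } // buffer destruction writes results back to the host vectors

    std::cout << q.get_device().get_info<sycl::info::device::name>()
              << ": c[0] = " << c[0] << "\n"; // expect 3
}
```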
Vayra86Besides, have you seen how close to PhysX the stuff in, say, UE5 is? Physics calculations aren't rocket science, and they can emulate things.
Oh yes, from what I've read, PhysX in UE5 is deprecated in favor of Unreal Chaos. Unity also supports Havok and Unity Physics alongside PhysX (and Box2D for 2D simulations). I don't think it's an important competitive advantage for NVIDIA any more; its last big thing was open-sourcing the SDK. However, NVIDIA is known for very good developer relations, so maybe Metro is using something special.
Vayra86Note also how other technologies, notably the ones that say 'I need a tensor/RT core' are still absent.
Intel Arc will have tensor computational capabilities with Matrix Engines and RT acceleration of some sort (it remains to be seen if they go with more specialized units like NVIDIA or more generic ones like AMD). But again, apart from common APIs like Direct3D/OpenGL/Vulkan I don't think they will provide support compatible with, for example, NVIDIA OptiX.
Vayra86Another option is that what we're reading below is just utter bullshit. Or maybe that 9 FPS dip is a PhysX effect :D
That was my thought as well. It might be the game being confused somehow, or the screenshot is faked; we'll have to wait for official reviews ;)
Posted on Reply
#46
medi01
Given how wildly unpredictable the Ampere GPU lineup is, I'm shocked someone is using them as reference.

If TDP/chip size is right, this could be quite damning for NV in the notebook market.
Posted on Reply
#47
Mussels
Freshwater Moderator
It's good to have a triopoly and see a new competitor, but we all know these are meant to be budget GPUs at high prices, intended so Intel can sell OEM systems and laptops that are entirely 100% Intel hardware.

CPU, GPU, chipset, network, Wi-Fi: every last controller chipset and doodad made by Intel for max profits.

And soon enough it'll leak out that manufacturers using these get nice big discounts, and get penalised for selling mixed-vendor products (since, y'know, it's happened before).
Posted on Reply
#48
AusWolf
MusselsIt's good to have a triopoly and see a new competitor, but we all know these are meant to be budget GPUs at high prices, intended so Intel can sell OEM systems and laptops that are entirely 100% Intel hardware.

CPU, GPU, chipset, network, Wi-Fi: every last controller chipset and doodad made by Intel for max profits.

And soon enough it'll leak out that manufacturers using these get nice big discounts, and get penalised for selling mixed-vendor products (since, y'know, it's happened before).
I can see that happening, though I hope you're wrong. Personally, I'm just waiting for the desktop release to see how they stack up against NVIDIA and AMD. What bothers me is that there's a lot of news about laptops, but not much about desktop cards, which gives way to the suspicion that your theory may be right.
Posted on Reply
#49
Vayra86
MusselsIt's good to have a triopoly and see a new competitor, but we all know these are meant to be budget GPUs at high prices, intended so Intel can sell OEM systems and laptops that are entirely 100% Intel hardware.

CPU, GPU, chipset, network, Wi-Fi: every last controller chipset and doodad made by Intel for max profits.

And soon enough it'll leak out that manufacturers using these get nice big discounts, and get penalised for selling mixed-vendor products (since, y'know, it's happened before).
Makes sense; this is, and historically has been, Intel's ticket to keep those chips rolling off the shelves, regardless of whatever they do really. Put a sticker on it, brand it, sell it.
Posted on Reply
#50
ghazi
watzupkenActually, after reading the article, I can't really tell if the result is good or bad. In the first place, we are looking at the A730M, which is not the flagship A770M, and which seems to roughly match the RTX 3060 Mobile in terms of specs. The conclusion seems to imply that the A730M performs at an RTX 3060 Mobile level, so it does not sound bad. I can't wrap my head around why the comparison is made with the RTX 3050 and RTX 3060 desktop versions in the first place, since it is clear that desktop components have more power headroom and almost no thermal restrictions compared to a laptop GPU. Also, the faster desktop CPUs will most likely contribute to better performance results at, say, 1080p, and to a minimal extent, 1440p.
It's hard to make a direct comparison because this chip is so heavily cut down, but rumors were saying the full desktop version would compete with the RTX 3070 Ti, and it turns out this cut-down part is GA106-tier. The spec difference between this variant and the full chip is SMALLER than the difference between the 3060 and the 3060 Ti. So we can easily infer from this that first-gen Xe will offer performance similar to the 3060 Ti on desktop, falling short of expectations. So yes, the result is bad, quite bad.
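As a rough sanity check on that scaling claim (using the commonly leaked shader counts, which Intel hasn't confirmed): A770M vs. A730M is 4096 vs. 3072 ALUs, about a 1.33x step, while RTX 3060 Ti vs. RTX 3060 is 4864 vs. 3584 CUDA cores, about 1.36x. So the gap to the full die is indeed the smaller of the two.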
Posted on Reply