Thursday, August 11th 2022

Intel Arc A750 Trades Blows with GeForce RTX 3060 in 50 Games

Intel earlier this week released its own performance numbers for as many as 50 benchmarks spanning the DirectX 12 and Vulkan APIs. From our testing, the Arc A380 underperforms its rivals in games based on the DirectX 11 API. Intel tested the A750 at 1080p and 1440p, comparing its performance numbers against the NVIDIA GeForce RTX 3060. Broadly, the testing reveals the A750 to be about 3% faster than the RTX 3060 in DirectX 12 titles at 1080p and about 5% faster at 1440p; in Vulkan titles it is about 4% faster at 1080p and about 5% faster at 1440p.

All testing was done without ray tracing, and performance enhancements such as XeSS or DLSS weren't used. The small set of six Vulkan API titles shows a more consistent performance lead for the A750 over the RTX 3060, whereas the DirectX 12 titles see the two trade blows, with results varying among game engines. In "Dolmen," for example, the RTX 3060 scores 347 FPS to the Arc's 263 FPS, while in "Resident Evil VIII" the Arc scores 160 FPS to the GeForce's 133 FPS. Such variations among the titles pull the average in favor of the Intel card. Intel stated that the A750 is on course to launch "later this year," without being any more specific than that. The individual test results can be seen below.
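For a sense of how per-title swings like those feed into a single average, here is a rough sketch of the relative-performance math using only the two FPS pairs quoted above; Intel's full per-game data and exact aggregation method aren't public, so this is purely illustrative.

```python
# Illustrative only: averaging per-title relative performance.
# Uses the two FPS pairs quoted in the article; Intel's full data set and
# exact aggregation method (arithmetic vs. geometric mean) are not stated.
from math import prod

results = {
    "Dolmen":             {"a750": 263, "rtx3060": 347},
    "Resident Evil VIII": {"a750": 160, "rtx3060": 133},
}

ratios = [r["a750"] / r["rtx3060"] for r in results.values()]
geo_mean = prod(ratios) ** (1 / len(ratios))

for title, r in results.items():
    print(f"{title}: A750 at {r['a750'] / r['rtx3060']:.0%} of the RTX 3060")
print(f"Geometric mean across these two titles: {geo_mean:.0%}")
```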
The testing notes and configuration follow.

Source: Intel Graphics

85 Comments on Intel Arc A750 Trades Blows with GeForce RTX 3060 in 50 Games

#26
Dragokar
Well, I thought they were going the Larrabee route, but this looks like Vega, and within that another Koduri masterpiece. I guess they are shipping it soon to get rid of the first-gen inventory and will try to rescue it with Battlemage somehow. Investors must be screaming right now.
#27
Jism
chstamos: What I don't get is how Apple could make a decent-performing GPU - integrated, at that - seemingly out of nowhere, while Intel has been developing this debacle since 2017 (check it out - that's when Xe discrete graphics was first announced, half a decade ago) and still ended up with... this clusterf__k.

I'm not trolling for Apple, honestly. I'd like some kind of explanation for that. Is it that the Apple iGPU is not required to support as many games, for example, considering the relative scarcity of gaming on the Mac? What is it? I do get how difficult it is to develop a brand-new architecture, so how did Apple do it?
It really is the same as what Raja attempted with Vega, versus Intel with its Arc now: a compute-based GPU (initially designed for compute) derived from and turned into a gaming version. Those chips usually didn't pass quality standards, but with "some tricks" you can utilize them as gaming GPUs.

The tradeoff is that it might underperform in games but excel in compute, often at the cost of more power compared to a traditional, real "gaming" GPU, as you see with NVIDIA's GeForce vs. Quadro, or RDNA vs. CDNA.

The drivers from Intel just suck. First reviews couldn't get it stable: many glitches, artifacts, underperformance or even crashes. If you buy one now and attempt to play the latest or newest games, the chances are it just won't work, or its performance will lag significantly.

The delays of over a year have nothing to do with COVID or anything else. The first batch just didn't cut it. It probably had bugs, the performance was whack, and they had to respin it to get it right, or at least working. These GPUs simply don't match up, consume more power, and games will be a gamble for the next few years in terms of drivers and/or performance.

I mean, Vega was in its way a good card: it excelled at compute, did games, and was over time equal to a 1080 Ti; it just required a tad more power, and both it and Polaris were clocked beyond their efficiency curves. Polaris was also bandwidth-starved in terms of performance. For example, the memory was rated to only "feed" the GPU up to a 1000 MHz GPU clock; anything above was just a waste of power.

But they kind of had to, in order to still compete with the 1060. The price tag of $250, however, made it good, and it was the best 1080p card at that time.
#28
john_
chstamos: What I don't get is how Apple could make a decent-performing GPU
They probably.... borrowed some ideas from Imagination Technologies, hired some very talented and experienced people, and then threw a huge amount of money at the problem.
I really wonder why Intel didn't try to buy out Imagination. It would probably have been "cheap," and Imagination could have offered experienced, talented personnel and patents that could have helped Intel start producing better stuff sooner. After 2017, Imagination was probably begging someone to come and buy them.
#29
Dristun
john_: Intel is probably lucky that AMD and Nvidia will probably not offer a cheap next-gen product early. Nvidia will start with the 4080 and up. Even if they throw a 4070 into the market, it's not going to be cheap. They lowered their profit margin from about 65% to 45% and started offering GPUs to their partners at much lower prices. Nvidia needs money to go forward, and with by far the strongest brand and 80% of the market, and knowing that Intel is far behind and that AMD needs to keep focusing on EPYC because of capacity restrictions, they will not price low. I am expecting Nvidia to start with products costing over $800 in the first 4-6 months (just a random prediction to give an idea) and AMD to also start at higher-than-expected prices, like $600 for their cheapest RX 7000 option. So Intel will have plenty of time and room to play under $500. Fun fact: Raja is reliving his Polaris times. He just doesn't have a Vega equivalent yet.
They just had a second mining jackpot in 6 years and you think they need money to go forward? I don't think Huang's bookkeepers and analysts are so dumb they expected the crazy times to roll on forever, especially considering that even back when the 2020 boom started, the ETH PoS switch was already on the map.

Here's my calculation: $500 adjusted for 20% inflation, plus $50 towards a new leather jacket for Jensen. MSRP is going to be $650.
#30
Chrispy_
After discovering that Intel's previous GPU performance slides bore little resemblance to real game performance, how is anyone expected to trust these Intel slides?

Intel, you were caught lying, and you haven't addressed that yet. Pull yourselves together FFS.
#31
john_
Dristun: They just had a second mining jackpot in 6 years and you think they need money to go forward? I don't think Huang's bookkeepers and analysts are so dumb they expected the crazy times to roll on forever, especially considering that even back when the 2020 boom started, the ETH PoS switch was already on the map.

Here's my calculation: $500 adjusted for 20% inflation, plus $50 towards a new leather jacket for Jensen. MSRP is going to be $650.
They were dumb. That's why they ordered huge amounts of wafers from Samsung, produced huge amounts of RTX 3000 cards, and are now discounting them like there's no tomorrow just to sell them. They were dumb enough to order a huge amount of 5 nm wafers from TSMC that they now wish to take delivery of 6 months later than the agreed date, dumb enough to have to announce about $1.4 billion less revenue, and to suffer a huge drop in profit margin from about 65% to about 45%. That's even lower than Intel's profit margin in Intel's current situation, and lower than AMD's profit margin, considering AMD is the little guy among the three.

Now, if you think that a 12GB card on a much more expensive process, like TSMC's 5 nm, with performance close to the 3090 Ti, will start selling at $650, then with all my heart I wish you end up correct. But if in their current position, after all those price reductions, they are at a 45% profit margin, then selling a card with those characteristics at $650 would be suicidal. Not to mention the price reductions on unsold RTX 3000 cards we would have to witness. Can you imagine an RTX 3090 selling for $550? An RTX 3070 for $300? I doubt we are in a GTX 970 era. Not to mention inflation.
And it doesn't matter how much money they have in their bank accounts. For companies of this size, trying to be on top in new markets like AI, with competition from huge companies like Intel, Amazon, Google, Alibaba, etc., the amount of money they have will never be enough.
#32
efikkan
I'm curious to see details about framerate consistency across games.

Also, is it just me, or are the "heavier" games mostly on the lower end of that scale when compared to the RTX 3060?
john_: Optimization was always a problem; it just seems to be bigger today because games are huge, developers try to market them as fast as possible, and frankly a 10+ core CPU and modern GPUs are huge carpets to hide any performance problem underneath.
Adding cores will not solve any performance problem in a game. In a highly synchronized workload like a game, the returns from using more threads diminish very quickly, and more threads can quickly turn into unreliable performance or even glitching. What you're saying here is just nonsense.
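To put rough numbers on the diminishing returns (assuming, purely for illustration, that 70% of the per-frame work can run in parallel - a made-up figure, not a measurement from any real game), a quick Amdahl's-law sketch:

```python
# Amdahl's-law sketch: speedup vs. thread count when only a fraction p
# of the per-frame work is parallelizable. p = 0.7 is an arbitrary
# illustrative figure, not a measured value from any real game.

def amdahl_speedup(p: float, n: int) -> float:
    """Theoretical speedup with n threads when fraction p runs in parallel."""
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    p = 0.7
    for n in (1, 2, 4, 8, 16, 32):
        print(f"{n:2d} threads -> {amdahl_speedup(p, n):.2f}x")
    # The curve flattens toward 1 / (1 - p) = 3.33x no matter how many
    # cores you add, and real games also pay synchronization overhead.
```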
john_: Also, an unoptimised game will sell more CPUs and GPUs than an optimized one, meaning not only can you market it faster, you can also get nice sponsor money from Nvidia, AMD and Intel by partially optimizing for their architecture instead of for everyone's.
Firstly, no modern PC game is optimized for a specific GPU architecture; that would require using the GPU's low-level API instead of DirectX/Vulkan/OpenGL and bypassing the driver (because translating APIs is the primary task of the driver).

Your claims are approaching conspiracy territory. No game developer wants their game to perform poorly; that would make the gameplay less enjoyable for the majority of their customers. Game developers don't get a cut of GPU sales either, and in the cases of GPU makers "sponsoring" games, that has more to do with technical assistance and marketing. Even if they were to receive any funds, those would be drops in the bucket compared to the budgets of big game titles.

Many games today are bloated and poorly coded for a host of reasons:
- Most use off-the-shelf game engines, writing little or no low-level code themselves; instead they interface with the engine. This also means these engines have generic rendering pipelines designed to render arbitrary objects, not tuned to the specific game.
- Companies want quick returns, often resulting in short deadlines, changing scopes and last minute changes.
- Maintenance is often not a priority, as the code is often hardly touched after launch, leading programmers to rush to meet requirements instead of writing good code. This is the reason why game code is known as some of the worst in the industry.
Jism: I mean, Vega was in its way a good card: it excelled at compute, did games, and was over time equal to a 1080 Ti; it just required a tad more power, and both it and Polaris were clocked beyond their efficiency curves. Polaris was also bandwidth-starved in terms of performance. For example, the memory was rated to only "feed" the GPU up to a 1000 MHz GPU clock; anything above was just a waste of power.

But they kind of had to, in order to still compete with the 1060. The price tag of $250, however, made it good, and it was the best 1080p card at that time.
Which Vega cards are you talking about?
Vega 56 performed slightly over GTX 1070 but cost $500 (with a $100 "value" of games).

And how was Polaris bandwidth starved?
RX 480 had 224/256 GB/s vs. GTX 1060's 192 GB/s.

Both Polaris and Vega underperformed due to poor GPU scheduling, yet they performed decently in some compute workloads, as some of them are easier to schedule.
#33
chstamos
john_: They probably.... borrowed some ideas from Imagination Technologies, hired some very talented and experienced people, and then threw a huge amount of money at the problem.
I really wonder why Intel didn't try to buy out Imagination. It would probably have been "cheap," and Imagination could have offered experienced, talented personnel and patents that could have helped Intel start producing better stuff sooner. After 2017, Imagination was probably begging someone to come and buy them.
They had some shares... up to 16% I see.

"On 22 June 2017, Imagination Technologies' board of directors announced it was putting the entire company up for sale[33] and, on 25 September 2017, they announced that the company was being acquired by Canyon Bridge, a private equity fund ultimately owned by the Chinese government.[34][35] In November 2017 the sale to Canyon Bridge was approved in a transaction which valued the business at £550 million (£1.82 per share)."

And you got the begging part right, too.
#34
john_
efikkan: What you're saying here is just nonsense.
Why do people have to add this after posting their opinion? You said something, it looked logical, and then at the end you had to throw in an insult. Do you feel smarter by adding that sentence? Are you a game developer at a multi-billion-dollar company? Am I? This is a forum. Give your opinion and keep the "nonsense" comment in your mind. Don't post it. And if you are NOT a game developer at a multi-billion-dollar company, how do you know that having more cores doesn't help? Today 4 cores, even with Hyper-Threading, are not enough. 12 threads are considered the minimum. That will become 16 threads in a year or two, and we might move to 24 or 32 threads as a minimum in 5-10 years to avoid performance problems. How do you explain that? That today we play with 12 or 16 threads to get the best performance? If this post were 5 years old, your post would have been the same, with the only difference being me talking about 6+ core CPUs. And your arguments would be the same.

I totally ignored the rest of your post. You might have some good points there, but I really don't post here to get upset by every "I know better, you talk nonsense" individual.

PS: I just remembered you. You are that "I know everything, you know nothing" person.
OK, time to expand the ignore list. You are free to post whatever you like as a reply to my posts. Don't care.
OK, time to expand the ignore list. You are free to post whatever you like as a reply to my posts. Don't care.
#35
ModEl4
Chrispy_: After discovering that Intel's previous GPU performance slides bore little resemblance to real game performance, how is anyone expected to trust these Intel slides?

Intel, you were caught lying, and you haven't addressed that yet. Pull yourselves together FFS.
Which of Intel's Arc claims are you referring to, exactly?
Regarding the A380, they said that based on their Chinese SRP of 1030 yuan and AMD's RX 6400 at 1199 yuan, they had 25% better performance per price, which means that in the games they tested the RX 6400 was about 6.87% slower than the A380, and the TPU test results were quite close to that, correct?
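Spelling out the arithmetic behind that figure (the only inputs are the two SRPs and Intel's claimed 25% performance-per-price advantage):

```python
# Back out the implied performance gap from Intel's perf-per-price claim.
# Inputs are the two Chinese SRPs quoted above plus Intel's 25% figure.
a380_price = 1030    # yuan
rx6400_price = 1199  # yuan
perf_per_price_advantage = 1.25  # A380 claimed to have 25% better perf/price

# (perf_a380 / a380_price) = 1.25 * (perf_rx6400 / rx6400_price)
relative_perf_a380 = perf_per_price_advantage * a380_price / rx6400_price
rx6400_vs_a380 = 1.0 / relative_perf_a380 - 1.0

print(f"A380 vs RX 6400: {relative_perf_a380 - 1.0:+.1%}")  # about +7.4%
print(f"RX 6400 vs A380: {rx6400_vs_a380:+.1%}")            # about -6.9%
```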


Regarding the A750, they previously showed how it performed in 5 titles vs. the RTX 3060 (+6% up to +17%), saying at the same time that in some newer titles with DX12/Vulkan-based engines the performance delta could reach those levels if the game suits Arc's architecture, while also clarifying that the A750 won't look as good in all games and that in many DX11 games it will have low performance, for the reasons they explained.
Now they have tested 50 games and the performance claim is 3-5% higher than the RTX 3060.
I expect the TPU results to be very close to these claims, just like in the A380's case!
#36
bonehead123
btarunr: Intel earlier this week released its own performance numbers
^^THIS^^

Let's see some real, INDEPENDENT, 3rd-party test results; then perhaps we can actually decide if these cards are OK, or just DOA :)

Come on TPU, surely someone here can convince team blue to give up a review sample ASAP, yes?
#37
Unregistered
ZoneDymo: Be sure to take us along for your ride; I find this product interesting, but I have enough issues to figure out as is :p
Also, as of yet DX11 performance is apparently bad, and seeing as one of the games I play the most is DX11, this card does not seem like it's for me atm.
DX12 games are plagued with stutters due to shader compilation.
#38
Jimmy_
Hmmm, finally the number of games has increased from the last leak. Now they are calling this the special edition or something. Is their driver stable enough, or are they still doing workarounds?
These performance numbers seem pretty average :)
Waiting for the spec sheets of these special edition Arc cards, and also wondering what happened to their Arc 3 & 5 series cards. As of now, they have jumped straight to the Arc 7xx cards!
How will these cards play a vital role in the revenue game for Intel when the graphics division is at -$500mn?

The marketing team is doing well enough to get these numbers out, and everything with the YouTubers too. Waiting for reviews of these special edition cards (FE vs. OEM).
#39
64K
bonehead123: ^^THIS^^

Let's see some real, INDEPENDENT, 3rd-party test results; then perhaps we can actually decide if these cards are OK, or just DOA :)

Come on TPU, surely someone here can convince team blue to give up a review sample ASAP, yes?
IMO Intel is hesitant to release review samples because they are concerned that a thorough review, like the one it would get on this site, wouldn't make their GPUs look very good.
#40
DeeJay1001
Wasn't the A7XX card supposed to compete with the 3070? There were "leaks" posted nearly a year ago saying they were neck and neck with a 3070. I think we all know the fate of Arc. Intel is just going to milk this lame cow to get some scraps out of this failed project.
#41
DeathtoGnomes
btarunr: All testing was done without ray tracing, and performance enhancements such as XeSS or DLSS weren't used.
Every gamer I know would use these enhancements whenever possible. Typical Intel, changing the parameters to create a fake favorable result.
#42
john_
DeeJay1001: Wasn't the A7XX card supposed to compete with the 3070? There were "leaks" posted nearly a year ago saying they were neck and neck with a 3070. I think we all know the fate of Arc. Intel is just going to milk this lame cow to get some scraps out of this failed project.
Rumors were saying 3080, then 3070, then above the 3060 Ti, for the top model.

Intel only needs problem-free hardware and software that offers some level of performance and compatibility that will not force consumers to return their systems to manufacturers/sellers. Intel can keep bleeding money and keep selling Arc or whatever future GPUs to OEMs at cost, or even below cost, if those GPUs are bought together with a big enough quantity of CPUs and chipsets. As long as Intel is improving its designs and software, it can keep losing money. In the end it's an investment that can start generating billions in revenue or even profit for Intel in a few years. If they abandon GPUs, getting into supercomputers will become more and more difficult in the future. And if that happens, what are they going to do? Become AMD in CPUs and try to stay alive through manufacturing?
#43
DeathtoGnomes
natr0n: I hope they don't try that shit where you have to buy a K/unlocked version down the road to overclock your GPU.
An interesting twist that would indeed be sketchy.
#44
Chrispy_
ModEl4: Which of Intel's Arc claims are you referring to, exactly?
Maybe I'm missing something? Every site and streamer I read/follow (TPU, G3D, KG, GN, HUB, hell - even LTT) were all massively disappointed by the A380 because it failed to live up to claims. I vaguely remember Intel saying that the A380 was originally supposed to be about par with a GTX 1060. It's so late to market that Pascal was the relevant competition!

It turns out that yes, the A380 does actually match a 1060 (or falls somewhere between a 6400 and a 6500 XT), but only with ReBAR enabled, only in a modern motherboard with a PCIe 4.0 slot, only in games that support ReBAR, only in DX12 or modern Vulkan titles, and only if the drivers actually work at all, which is not a given under any circumstances. Add to that the fact that Intel clearly optimised for misleading synthetic benchmarks, as the 3DMark score is way out of line with the performance it demonstrates in any game titles.

The media being unanimously disappointed with Arc is not because of unrealistic expectations. Those expectations were set by Intel themselves, and the fact that post-launch (in China) Intel adjusted their claims to be more in line with how it actually performs (including the whole laundry list of non-trivial caveats) is kinda just confirmation of that disappointment. It's even worse than it seems, too - because the target market for an A380 buyer isn't a brand-new machine playing modern AAA DX12/Vulkan titles. It's going to be someone looking to cheaply upgrade an old PCIe 3.0 board and likely playing older games, because the A380 isn't really good enough to get a great experience in the ReBAR-optimised AAA DX12 titles that Intel actually has acceptable performance in.

Let's see if the results from independent reviewers match these official Intel graphs for the games shown when we actually have an official Arc A750 launch....
#45
HD64G
We know that DX12 and Vulkan performance isn't totally bad. The thing is, for OpenGL, DX9-11 and power draw, they keep the data close to their chest. Those will make or break their market share for higher-performance GPUs, even with a better price than their competitors' GPUs.
#46
Aretak
Considering that it's likely to be a disaster in DirectX 9/10/11 and OpenGL titles, they can't price this thing at more than $200. At that price it might be worth taking a risk on, even if I expect Intel to abandon it in terms of driver support pretty quickly, because Linux and Mesa make that somewhat irrelevant. Any more than $200 and there's zero reason to consider it over an AMD or Nvidia alternative.
#47
JalleR
Let me buy that...... oh.......
#48
HD64G
Aretak: Considering that it's likely to be a disaster in DirectX 9/10/11 and OpenGL titles, they can't price this thing at more than $200. At that price it might be worth taking a risk on, even if I expect Intel to abandon it in terms of driver support pretty quickly, because Linux and Mesa make that somewhat irrelevant. Any more than $200 and there's zero reason to consider it over an AMD or Nvidia alternative.
Methinks they will try to sell these GPUs for $50 less than the GPUs they match in performance in DX12/Vulkan games. And if they don't sell well, they will be discounted to a lower price.
#49
RedelZaVedno
I'm buying Arc even if it's total crap, just to keep Intel's dGPU division alive. God knows we need a big 3rd player in the dGPU market soooo badly.
Lisa & Huang must be kept in check or their profit margins will keep going up and up and up...
#50
Mistral
Impressive numbers... Now, how are the visual quality, stability and crashes?