Thursday, August 11th 2022
Intel Arc A750 Trades Blows with GeForce RTX 3060 in 50 Games
Intel earlier this week released its own performance numbers for as many as 50 benchmarks spanning the DirectX 12 and Vulkan APIs. From our testing, the Arc A380 performs below par against its rivals in games based on the DirectX 11 API. Intel tested the A750 at 1080p and 1440p, and compared the numbers with the NVIDIA GeForce RTX 3060. Broadly, the testing reveals the A750 to be 3% faster than the RTX 3060 in DirectX 12 titles at 1080p; about 5% faster at 1440p; about 4% faster in Vulkan titles at 1080p; and about 5% faster at 1440p.
All testing was done without ray tracing, and performance enhancements such as XeSS or DLSS weren't used. The small set of six Vulkan API titles shows a more consistent performance lead for the A750 over the RTX 3060, whereas the DirectX 12 titles see the two trade blows, with results varying across game engines. In "Dolmen," for example, the RTX 3060 scores 347 FPS to the Arc's 263. In "Resident Evil VIII," the Arc scores 160 FPS to the GeForce's 133 FPS. Such variations among the titles pull the average up in favor of the Intel card. Intel stated that the A750 is on course to launch "later this year," without being any more specific than that. The individual test results can be seen below. The testing notes and configuration follow.
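Intel does not say how the per-game results were averaged; relative-performance summaries like these are typically computed as a geometric mean of per-game FPS ratios, along the lines of the minimal sketch below (only the two FPS pairs quoted above are real; Intel's full 50-game data set and exact method are not reproduced here).

```python
from math import prod

# Minimal sketch of a geometric-mean performance average. The two FPS pairs are
# the ones quoted in the article ("Dolmen", "Resident Evil VIII"); the rest of
# Intel's 50-game data set is not included.
results = [(263, 347), (160, 133)]  # (Arc A750 FPS, RTX 3060 FPS)

ratios = [arc / geforce for arc, geforce in results]
geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"A750 relative performance: {geo_mean:.2f}x the RTX 3060")
```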
Source:
Intel Graphics
The tradeoff is that it might underperform in games but excel in compute, often at the cost of more power compared to a traditional, purpose-built "gaming" GPU, as you see with NVIDIA's GeForce versus Quadro split, or AMD's RDNA versus CDNA.
Intel's drivers just suck. Early reviews couldn't get the cards stable: plenty of glitches, artifacts, underperformance, or even crashes. If you buy one now and attempt to play the latest games, chances are it just won't work, or its performance will lag significantly.
The delay of over a year has nothing to do with COVID or anything else. The first batch just didn't cut it. It probably had bugs, the performance was whack, and they had to respin it to get it right, or at least working. These GPUs simply don't match the competition, consume more power, and games will be a gamble for the next few years in terms of drivers and/or performance.
I mean, Vega was in its own way a good card: it excelled at compute, handled games, and over time was equal to a 1080 Ti. It just required a tad more power, and both it and Polaris were clocked beyond their efficiency curves. Polaris was also bandwidth-starved in terms of performance; for example, the memory was only rated to "feed" the GPU up to a 1000 MHz core clock, and anything above that was just a waste of power.
But they kind of had to in order to still compete with the 1060. The $250 price tag, however, made it good value, and it was the best 1080p card at the time.
I really wonder why Intel didn't try to buy out Imagination. It would probably have been "cheap", and Imagination could have offered experienced, talented personnel and patents that would have helped Intel start producing better stuff sooner. After 2017, Imagination was probably begging for someone to come and buy them.
Here's my calculation: $500 adjusted for 20% inflation, plus $50 towards a new leather jacket for Jensen. The MSRP is going to be $650.
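Spelled out, with every number being a guess rather than official pricing:

```python
# Back-of-the-envelope MSRP guess (all inputs are assumptions, not official pricing)
base_msrp = 500      # the $500 reference price
inflation = 0.20     # assumed 20% cumulative inflation
jacket_fund = 50     # the leather-jacket surcharge

print(base_msrp * (1 + inflation) + jacket_fund)  # -> 650.0
```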
Intel, you were caught lying, and you haven't addressed that yet. Pull yourselves together FFS.
Now, if you think that a 12 GB card on a much more expensive process, like TSMC's 5 nm, with performance close to a 3090 Ti will start selling at $650, then with all my heart I wish you end up correct. But if, in their current position after all those price reductions, they are at a 45% profit margin, then selling a card with those characteristics at $650 would be suicidal. Not to mention the price reductions on unsold RTX 3000 cards we would have to witness. Can you imagine an RTX 3090 selling for $550? An RTX 3070 for $300? I doubt we are in a GTX 970 era. Not to mention inflation.
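As a rough sketch of the margin argument (the 45% figure is from above; the build cost below is purely hypothetical, not NVIDIA's actual cost structure):

```python
# Gross-margin arithmetic: price needed to hold a given margin is cost / (1 - margin).
def price_at_margin(cost, margin):
    return cost / (1 - margin)

# Hypothetical example: a card that costs $400 to build and ship would need to
# sell for about $727 to keep a 45% gross margin, so a $650 MSRP would mean
# accepting a much thinner margin (or a loss).
print(round(price_at_margin(400, 0.45)))  # -> 727
```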
And it doesn't matter how much money they have in their bank accounts. For companies of this size, trying to stay on top in new markets like AI against competition from huge companies like Intel, Amazon, Google, Alibaba, etc., the amount of money they have will never be enough.
Also, is it just me, or are the "heavier" games mostly on the lower end of that scale when compared to the RTX 3060?
Adding cores will not solve any performance problem in a game. In a highly synchronized workload like a game, the returns from using more threads diminish very quickly and can quickly turn into unreliable performance or even glitching. What you're saying here is just nonsense. Firstly, no modern PC game is optimized for a specific GPU architecture; that would require using the GPU's low-level API instead of DirectX/Vulkan/OpenGL and bypassing the driver (because translating APIs is the primary task of the driver).
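To put the threading point above in numbers, a minimal Amdahl's-law sketch with an illustrative (not measured) 80% parallelizable fraction:

```python
# Amdahl's law: speedup is capped by the serial fraction of the work.
# The 80% parallel fraction below is illustrative, not measured from any game.
def speedup(parallel_fraction, workers):
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / workers)

for n in (2, 4, 8, 16, 32):
    print(n, round(speedup(0.80, n), 2))
# -> 1.67, 2.5, 3.33, 4.0, 4.44: each doubling of threads buys less and less.
```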
Your claims are approaching conspiracy territory. No game developer wants their game to perform poorly, that would make the gameplay less enjoyable for the majority of their customers. Game developers don't get a cut of GPU sales either, and in the cases of GPU makers "sponsoring" games, that has more to do with technical assistance and marketing, and even if they were to receive any funds, that would be drops in the bucket compared to the budget of the big game titles.
Many games today are bloated and poorly coded for a host of reasons:
- Most use off-the-shelf game engines, writing little or no low-level code themselves; instead, they interface with the engine. This also means these engines have generic rendering pipelines designed to render arbitrary objects, not specifically tuned to the game at hand.
- Companies want quick returns, often resulting in short deadlines, changing scopes, and last-minute changes.
- Maintenance is often not a priority, as the code is hardly touched after launch, leading programmers to rush to meet requirements instead of writing good code. This is why game code is known as some of the worst in the industry.
Which Vega cards are you talking about?
Vega 56 performed slightly above the GTX 1070 but cost $500 (with a games bundle "valued" at $100).
And how was Polaris bandwidth starved?
RX 480 had 224/256 GB/s vs. GTX 1060's 192 GB/s.
Both Polaris and Vega underperformed due to poor GPU scheduling, yet they performed decently in some compute workloads, as some of them are easier to schedule.
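For context on where those bandwidth figures come from, the usual arithmetic is bus width times effective per-pin data rate (the GDDR5 rates below are the commonly cited specs for these SKUs; double-check the exact variants):

```python
# Memory bandwidth = (bus width in bits / 8) * effective data rate per pin (Gbps)
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(bandwidth_gbs(256, 7))  # RX 480 4 GB: 256-bit @ 7 Gbps -> 224 GB/s
print(bandwidth_gbs(256, 8))  # RX 480 8 GB: 256-bit @ 8 Gbps -> 256 GB/s
print(bandwidth_gbs(192, 8))  # GTX 1060:    192-bit @ 8 Gbps -> 192 GB/s
```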
"On 22 June 2017, Imagination Technologies' board of directors announced it was putting the entire company up for sale[33] and, on 25 September 2017, they announced that the company was being acquired by Canyon Bridge, a private equity fund ultimately owned by the Chinese government.[34][35] In November 2017 the sale to Canyon Bridge was approved in a transaction which valued the business at £550 million (£1.82 per share)."
And you got the begging part right, too.
I totally ignored the rest of your post. You might have some good points there, but I really don't post here to get upset by every "I know better, you say nonsense" individual.
PS: I just remembered you. You're that "I know everything, you know nothing" person.
OK, time to expand the ignore list. You are free to post whatever you like as a reply to my posts. Don't care.
Regarding the A380, they said that based on its Chinese SRP of 1030 yuan and the RX 6400's 1199 yuan, it had 25% better performance per price, which means that in the games they tested the RX 6400 was 6.87% slower than the A380; and in the TPU test the results were the following, so quite close to TPU, correct?
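Working backwards from those quoted numbers, a quick sketch of the arithmetic behind the 6.87% figure:

```python
# Intel's claim: 25% better performance per price at the quoted Chinese SRPs.
a380_price, rx6400_price = 1030, 1199   # yuan
perf_per_price_advantage = 1.25

# Relative RX 6400 performance, with the A380 normalized to 1.0
rx6400_rel = rx6400_price / (a380_price * perf_per_price_advantage)
print(f"{(rx6400_rel - 1) * 100:.2f}%")  # -> -6.87%, i.e. the RX 6400 is ~6.87% slower
```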
Regarding the A750, they previously showed how it performed in 5 titles vs. the RTX 3060 (+6% up to +17%), saying at the same time that in some newer titles with DX12/Vulkan-based engines the performance delta could reach those levels if the game suits Arc's architecture, and also clarifying that the A750 won't look as good in all games and that in many DX11 games it will have low performance for the reasons they explained.
Now they have tested 50 games, and the performance claim is 3-5% higher than the RTX 3060.
I expect the TPU results to be very close to these claims just like in A380's case!
Let's see some real, INDEPENDENT, third-party test results; then perhaps we can actually decide if these cards are OK, or just DOA :)
Come on TPU, surely someone here can convince team blue to give up a review sample ASAP, yes?
These performance numbers seem pretty average :)
Waiting for the spec sheets of these special-edition Arc cards, and also wondering what happened to their Arc 3 and 5 series cards. As of now, they've jumped straight to Arc 7xx cards!
How will these cards play a vital role in the revenue game for Intel when the graphics division is at -$500 mn?
The marketing team is doing well enough to get these numbers out and to work with YouTubers too; waiting for reviews of these special-edition cards (FE vs. OEM).
Intel only needs problem-free hardware and software that offers some level of performance and compatibility that won't force consumers to return their systems to manufacturers/sellers. Intel can keep bleeding money and keep selling Arc or whatever future GPUs to OEMs at cost, or even below cost, if those GPUs are bought together with a big enough quantity of CPUs and chipsets. As long as Intel keeps improving its designs and software, it can keep losing money; in the end, it's an investment that could start generating billions in income or even profit for Intel in a few years. If they abandon GPUs, getting into supercomputers will become more and more difficult in the future. And if that happens, what are they going to do? Become AMD in CPUs and try to stay alive through manufacturing?
It turns out that yes, the A380 does actually match a 1060 (or falls somewhere between a 6400 and a 6500 XT), but with a long list of caveats: only with ReBAR enabled, only in a modern motherboard with a PCIe 4.0 slot, only in games that support ReBAR, only in DX12 or modern Vulkan titles, and only if the drivers actually work at all, which is not a given under any circumstances. That's on top of the fact that Intel clearly optimised for misleading synthetic benchmarks, as the 3DMark score is way out of line with the performance it demonstrates in any game title.
The media being unanimously disappointed with Arc is not because of unrealistic expectations. Those expectations were set by Intel themselves, and the fact that post-launch (in China) Intel adjusted their claims to be more in line with how it actually performs (including the whole laundry list of non-trivial caveats) is just confirmation of that disappointment. It's even worse than it seems, too, because the target market for an A380 buyer isn't a brand-new machine playing modern AAA DX12/Vulkan titles. It's going to be someone looking to cheaply upgrade an old PCIe 3.0 board, likely playing older games, because the A380 isn't really good enough to deliver a great experience in the ReBAR-optimised AAA DX12 titles in which Intel actually has acceptable performance.
Let's see if the results from independent reviewers match these official Intel graphs for the games shown when we actually have an official Arc A750 launch....
Lisa & Huang must be kept in check or their profit margins will keep going up and up and up...