Thursday, August 11th 2022
Intel Arc A750 Trades Blows with GeForce RTX 3060 in 50 Games
Intel earlier this week released its own performance numbers for as many as 50 benchmarks spanning the DirectX 12 and Vulkan APIs. From our own testing, the Arc A380 performs below par against its rivals in games based on the DirectX 11 API. Intel tested the A750 at the 1080p and 1440p resolutions and compared the results with the NVIDIA GeForce RTX 3060. Broadly, the testing shows the A750 to be about 3% faster than the RTX 3060 in DirectX 12 titles at 1080p, about 5% faster at 1440p, about 4% faster in Vulkan titles at 1080p, and about 5% faster at 1440p.
All testing was done without ray tracing, and performance enhancements such as XeSS or DLSS weren't used. The small set of six Vulkan API titles shows a more consistent performance lead for the A750 over the RTX 3060, whereas in the DirectX 12 titles the two trade blows, with results varying from one game engine to another. In "Dolmen," for example, the RTX 3060 scores 347 FPS compared to the Arc's 263. In "Resident Evil VIII," the Arc scores 160 FPS compared to the GeForce's 133 FPS. Such variations among the titles pull the average up in favor of the Intel card. Intel stated that the A750 is on course to launch "later this year," without being any more specific than that. The individual test results can be seen below. The testing notes and configuration follow.
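As a rough illustration of how per-title results like these are condensed into a single "x% faster" figure, the sketch below converts each game's FPS into a ratio versus the RTX 3060 and averages the ratios with a geometric mean (a common choice, since it keeps a single outlier title from dominating the result). This is not Intel's stated methodology, which isn't detailed here; only the "Dolmen" and "Resident Evil VIII" numbers come from the article, and the remaining entries are hypothetical placeholders.

```python
# Minimal sketch: averaging per-title FPS ratios into one relative-performance figure.
# Only "Dolmen" and "Resident Evil VIII" use the FPS values quoted in the article;
# "Title C" and "Title D" are made-up placeholders for illustration.

from math import prod

# FPS pairs: (Arc A750, GeForce RTX 3060)
results = {
    "Dolmen":             (263, 347),
    "Resident Evil VIII": (160, 133),
    "Title C":            (120, 115),   # hypothetical
    "Title D":            ( 95,  92),   # hypothetical
}

# Per-title ratio: >1.0 means the A750 is ahead in that game.
ratios = [a750 / rtx3060 for a750, rtx3060 in results.values()]

# Geometric mean of the ratios gives the overall relative-performance figure.
geo_mean = prod(ratios) ** (1 / len(ratios))
print(f"A750 vs RTX 3060 (geometric mean of ratios): {geo_mean:.3f}")
```

With this kind of averaging, a large deficit in one title (like "Dolmen") and large leads in others can still net out to a single-digit percentage advantage overall.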
Source:
Intel Graphics
85 Comments on Intel Arc A750 Trades Blows with GeForce RTX 3060 in 50 Games
We need Intel, and as long as we see them trying, we can at least safely assume they're not throwing in the towel.
Also, as of yet, DX11 performance is apparently bad, and seeing as one of the games I play the most is DX11, this card does not seem like it's for me atm.
Nothing regarding Intel Arc is "on course". It has been postponed for almost two years now, and it looks like it will directly compete with the late 2022 / early 2023 generation from Nvidia and AMD.
And it doesn't seem to be the price-lowering force either. We'll get the price/performance equivalent of 2020 Nvidia and AMD cards, but with all the driver problems, non-working features, bugs... You'll pay more for the privilege of helping beta test the Intel product!
If I actually manage to buy one at a fairly OK price, I'll post my experiences, for sure. :)
Don't speak like that. Intel otherwise will get upset and leave us.
Not to mention that in reality, 90% or an even higher percentage of people are still using Full HD monitors. For Full HD resolution, the RTX 20xx is already overkill, let alone the RTX 30xx and AMD RX 6xxx.
What I don't like is this: we get newer and stronger GPUs one or two years after the previous release, which is awesome of course, but there is a big but... nowadays game makers spit on optimization and just hope that raw GPU power will be enough to handle the game.
But even if we just get the RTX 4070 and 4080 from Nvidia, it will still cause a price adjustment across most of the range. Even a very expensive RTX 4070 - if it's really the equivalent of the RTX 3090 Ti - could be offered at $1100 (with no price/performance increase; we saw that at the RTX 20x0 launch). But it will probably be well below that - at least theoretically; they can still claim various difficulties later and raise the price.
Optimization was always a problem; it just seems bigger today because games are huge, developers try to get them to market as fast as possible, and frankly a 10+ core CPU and modern GPUs are huge carpets to hide any performance problem under. Also, an unoptimized game will sell more CPUs and GPUs than an optimized one, meaning not only can you market it faster, you can also get nice sponsor money from Nvidia, AMD and Intel by partially optimizing for their architecture instead of for everyone's. I was reading all over the internet about the 4070 being a $500-$600 card. So for some people it's not a given. Probably they just try to justify waiting 2 years for a brand new GPU. Don't know.
But I don't expect it to be over $1000. RTX 2000 pricing was a result of the lack of competition and the ray tracing marketing. The RTX 4070 is not the top model, and there is competition. But who knows. Someday we will definitely get an x070 for over $1000 anyway. It might be now.
never seen one yet in store, but already limited .. wth
www.tomshardware.com/news/intel-gpu-division-losses-estimated-at-3-5-billion-usd
Intel can put benchmarks out or in context all they want. Generally, they perform worse at considerably higher power consumption compared to Nvidia or AMD. And both camps are about to release their next gen, which would make Intel look like low-low-end.
I'm not trolling for Apple, honestly. I'd like some kind of explanation for that. Is it that the Apple iGPU is not required to support as many games, for example, considering the relative scarcity of gaming on Mac? What is it? I do get how difficult it is to develop a brand-new architecture, so how did Apple do it?