Wednesday, June 22nd 2022
Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks
Intel's Arc A380 desktop graphics card is generally available in China, and real-world gaming benchmarks of the card by independent media paint a vastly different picture than the one synthetic benchmarks had led us to believe. The entry-mainstream graphics card, selling for the equivalent of under $160 in China, is shown beating the AMD Radeon RX 6500 XT and RX 6400 in the 3DMark Port Royal and Time Spy benchmarks by a significant margin. The gaming results, however, see it lose to even the RX 6400 in each of the six games tested by the source.
The tests in the graph below are, in order: League of Legends, PUBG, GTA V, Shadow of the Tomb Raider, Forza Horizon 5, and Red Dead Redemption 2. In the first three tests, which are based on DirectX 11, the A380 is 22 to 26 percent slower than the NVIDIA GeForce GTX 1650 and the Radeon RX 6400. The gap narrows in the DirectX 12 titles SoTR and Forza Horizon 5, where it is within 10% of the two cards. The card's best showing is in the Vulkan-powered RDR2, where it's 7% slower than the GTX 1650 and 9% behind the RX 6400. The RX 6500 XT performs in a different league altogether. With these numbers, and given that GPU prices are cooling down in the wake of the 2022 cryptocalypse, we're not entirely sure what Intel is trying to sell at $160.
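As a quick sanity check on what an "X% slower" figure means in frame rates, here is a minimal sketch. The 100 FPS baseline is purely illustrative (an assumption for the example, not a published result):

```python
def slower_fps(baseline_fps: float, percent_slower: float) -> float:
    """Frame rate of a card that trails the baseline by `percent_slower` percent."""
    return baseline_fps * (1 - percent_slower / 100)

# Hypothetical baseline: assume the GTX 1650 renders 100 FPS in a DX11 title.
baseline = 100.0
print(round(slower_fps(baseline, 26), 1))  # worst DX11 gap for the A380 -> 74.0
print(round(slower_fps(baseline, 7), 1))   # best (Vulkan) gap for the A380 -> 93.0
```

In other words, a "26% slower" result turns a 100 FPS experience into roughly 74 FPS, which is the difference between comfortably smooth and borderline.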
Sources:
Shenmedounengce (Bilibili), VideoCardz
190 Comments on Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks
Admittedly, point 3 sometimes works by crippling the competition instead, as famously demonstrated by ICC...
Anyone with a rudimentary understanding of what debugging and profiling tools do for development will see through this. And no, these tools do not optimize the engine code, the developer does that.
The latter alone is enough to gather a lot of goodwill. Now add people's tendency to defend their choices and purchases no matter the cost, and the inherent need to feel accepted among their peers, and you'll find that the AMD vs. NVIDIA war is no different from iOS vs. Android or Pepsi vs. Coke: it's just people perpetuating lies and hearsay and spreading FUD about it :oops:
If you knew how debugging and profiling tools worked, you wouldn't come up with something like that. These tools will not optimize (or sabotage) the code; the code is still written by the programmer.
And BTW, AMD offers comparable tools too.
Performance optimization for specific hardware in modern PC games is a myth.
As of now, Nvidia has stronger RT capabilities, so games which utilize RT more heavily will scale better on Nvidia hardware. Once AMD releases a generation with similar capabilities, they will perform just as well, perhaps even better.
Firstly, as mentioned earlier, in order to optimize for e.g. Nvidia, we would have to write code targeting specific generations (e.g. Pascal, Turing, Ampere…), as the generations change a lot internally; there could be no universal Nvidia optimization vs. AMD optimization, since newer GPUs from competitors might be more similar to each other than to their own GPUs from two or three generations ago. This means the game developer would need to maintain multiple code paths to ensure their Nvidia chips outperform their AMD counterparts. But this all hinges on the existence of a GPU-specific low-level API to use. Does any such API exist publicly? If not, the whole idea of specific optimizations is dead. (The closest you will find is experimental features (extensions to OpenGL and Vulkan), but these are high-level API functions and are usually new features, and I've never seen them used in games. Nor are they exclusive, as anyone can implement them if needed.)
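The point about extensions being high-level and non-exclusive can be illustrated by how Vulkan names them: vendor-authored extensions carry a vendor tag (VK_NV_*, VK_AMD_*, VK_INTEL_*), while cross-vendor ones use the VK_KHR_ or VK_EXT_ prefix. A small sketch (the tag list is partial, and in a real application the names would come from vkEnumerateDeviceExtensionProperties rather than being hard-coded):

```python
# Partial list of registered Vulkan vendor tags; KHR and EXT are cross-vendor.
VENDOR_TAGS = {"NV", "AMD", "INTEL", "ARM", "QCOM", "IMG"}

def is_vendor_specific(extension_name: str) -> bool:
    """Classify a Vulkan extension name by its author tag (second component)."""
    parts = extension_name.split("_")
    return len(parts) > 1 and parts[1] in VENDOR_TAGS

print(is_vendor_specific("VK_NV_mesh_shader"))            # True
print(is_vendor_specific("VK_KHR_ray_tracing_pipeline"))  # False
```

Even the vendor-tagged ones are exposed through the same public Vulkan API surface, which is exactly why they are not a secret low-level optimization channel.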
Secondly, optimizing for future or recently released GPU architectures would be virtually impossible. Game engines are often written or rewritten 2-3 years ahead of a game's release date, and even top game studios rarely have access to engineering samples more than ~6 months ahead of a GPU launch. And we all know how badly game studios screw up when they try to patch in some big new feature at the end of the development cycle.
Thirdly, most games use third-party game engines, which means the game studio doesn't even write any low-level rendering code. The big popular game engines may have many advanced features, but their rendering code is generic and not hand-tailored to the specific needs of the objects in a specific game. So any optimized game would have to use a custom game engine without bloat and abstractions.
As for proof, point 1 is provable to the extent that these mystical GPU-specific APIs are not present on Nvidia's and AMD's developer websites. Point 2 is a logical deduction. Point 3 is provable in a broad sense, as few games use custom game engines. The rest would require disassembly to prove 100%, but that is pointless unless you disprove point 1 first.
Where have you been? Can AMD cards run RTX code? No.
No rant here, though I disagree, and your opinion isn't enough to change that; it's just an opinion.
The only vendor-specific code I can think of is GameWorks. If you enable Advanced PhysX in a Metro game, an Nvidia card will be OK, but AMD just dies.
Other than that, why and how would games be optimized for a vendor (rather than an architecture)?
It will take years for them to make something competitive.
As for their drivers, it will take them forever.
I've said it before and I'll say it again: as a gamer, I will never buy their GPUs.
But they will probably come in handy for office computers without integrated graphics :D
"RTX" is a marketing term for their hardware, which, as you can clearly see, uses DXR or Vulkan as the API front-end.
Direct3D 12 ray-tracing details: docs.microsoft.com/en-us/windows/win32/direct3d12/direct3d-12-raytracing
The Vulkan ray tracing spec: VK_KHR_ray_tracing_pipeline is not Nvidia specific, and includes contributions from AMD, Intel, ARM and others.
And as you can see from Nvidia's DirectX 12 tutorials and the Vulkan tutorial, this is vendor-neutral, high-level API code. I haven't looked into how Intel's Arc series compares in ray tracing support level vs. Nvidia and AMD.
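As a sketch of how vendor-neutral this is in practice: a Vulkan application asks for the same KHR ray-tracing extensions regardless of which vendor made the GPU. The reported extension list below is hypothetical; a real application would obtain it from vkEnumerateDeviceExtensionProperties for the chosen physical device:

```python
# The KHR ray-tracing pipeline extension and its required companions,
# as registered in the Vulkan specification (vendor-neutral names).
RT_EXTENSIONS = {
    "VK_KHR_ray_tracing_pipeline",
    "VK_KHR_acceleration_structure",
    "VK_KHR_deferred_host_operations",
}

def supports_khr_ray_tracing(device_extensions: set) -> bool:
    """True if the device advertises the full KHR ray-tracing extension set."""
    return RT_EXTENSIONS.issubset(device_extensions)

# Hypothetical list a device might report, regardless of vendor.
reported = {
    "VK_KHR_swapchain",
    "VK_KHR_ray_tracing_pipeline",
    "VK_KHR_acceleration_structure",
    "VK_KHR_deferred_host_operations",
}
print(supports_khr_ray_tracing(reported))  # True
```

The same check passes on any GPU whose driver implements these KHR extensions, which is the whole point: the game's ray-tracing code path does not branch on vendor.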
So, in conclusion, again: modern PC games are not optimized for specific hardware. Some games may feature optional special effects which are vendor-specific, but these are not low-level hardware-specific optimizations, and they are not the basis for comparing performance between products. If an Nvidia card performs better in a game than AMD or Intel, it's not because the game is optimized for that Nvidia card. Claiming it's an optimization would be utter nonsense.

GameWorks is the large suite of developer tools, samples, etc. that Nvidia provides for game developers. It has some special effects that may only work on Nvidia hardware, but the vast majority is plain DirectX/OpenGL/Vulkan.
AMD has its own developer tool suite, which is pretty much the same deal, complete with some unique AMD features.
But you're pulling Vulkan and the like out.
Parity may have been achieved now, but Microsoft worked with Nvidia first on DXR, so gains were made, and used.
So believe what you want.
MS announced DXR at GDC in March 2018, as the front-end to Nvidia's RTX, which was announced at the same conference. So it was DXR long before Turing launched later the same year. The initial API draft may have been a little different from the final version, but that's irrelevant for the games which shipped with DXR support much later; drafts and revisions are how graphics APIs are developed.
The games which use ray tracing today use DXR (or Vulkan, if there are any). So this bogus claim that these games are optimized for Nvidia hardware should be put to rest once and for all. Please stop spreading misinformation, as you clearly don't comprehend this subject.
On October 10, 2018, DXR came out with Windows update 1809.
Hmmnnn.
I have one, and I assure you, the card is fine (and extremely quiet).