Wednesday, June 22nd 2022
Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks
Intel's Arc A380 desktop graphics card is now generally available in China, and real-world gaming benchmarks of the card by independent media paint a vastly different picture than what synthetic benchmarks had led us to expect. The entry-mainstream graphics card, selling for the equivalent of under $160 in China, beats the AMD Radeon RX 6500 XT and RX 6400 in the 3DMark Port Royal and Time Spy benchmarks by a significant margin. In actual gaming, however, it loses to even the RX 6400 in each of the six games tested by the source.
The tests in the graph below are in the following order: League of Legends, PUBG, GTA V, Shadow of the Tomb Raider, Forza Horizon 5, and Red Dead Redemption 2. In the first three tests, which are based on DirectX 11, the A380 is 22 to 26 percent slower than the NVIDIA GeForce GTX 1650 and the Radeon RX 6400. The gap narrows in the DirectX 12 titles SoTR and Forza Horizon 5, where it's within 10% of the two cards. The card's best showing is in the Vulkan-powered RDR 2, where it's 7% slower than the GTX 1650 and 9% behind the RX 6400. The RX 6500 XT performs in a different league altogether. With these numbers, and given that GPU prices are cooling down in the wake of the 2022 cryptocalypse, we're not entirely sure what Intel is trying to sell at $160.
Sources:
Shenmedounengce (Bilibili), VideoCardz
190 Comments on Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks
I honestly see no use for consumers in looking at synthetic benchmarks at all, except perhaps to get a sneak peek at new technologies. Most gamers have many misconceptions about how optimization works.
Drivers are not (normally) optimized for specific applications, games, or benchmarks, except for the few times they are cheating*.
If we see games scale one way and synthetics scale another, then it's mostly down to hardware resources. We should not expect games to scale badly if they use the same graphics APIs, and if there are exceptions, those would probably boil down to certain function parameters causing extra work; such cases should be identified and fixed quickly. But if we see general performance differences, then it's not due to "immature drivers".
*) Drivers have been known to cheat in certain synthetic benchmarks and sometimes even in games, for example by replacing shaders to gain performance with "minor" degradation in visual quality. Overriding API calls is also possible, but it causes driver overhead, so this is super rare. One very old documented example of cheating is this.
More like infinitely better! :P
Games are not optimized for specific hardware, at least not the way you think.
Doing so would require either 1) exploiting very specific low-level hardware differences unique to one hardware maker, or 2) using separate code paths and low-level APIs targeting each specific GPU family.
(1) is "never" done intentionally, and (2) is only used in very specific cases; a sketch of what such a code path could look like follows below.
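As a toy illustration of (2), here is a minimal sketch of an engine branching on the PCI vendor ID it gets back from the graphics API. The vendor IDs are the standard PCI assignments; the function and path names are made up for this example and are not from any real engine:

```python
# Hypothetical sketch: selecting a vendor-specific render path by PCI vendor ID.
# The vendor IDs are the well-known PCI assignments; everything else is invented.

VENDOR_NVIDIA = 0x10DE
VENDOR_AMD    = 0x1002
VENDOR_INTEL  = 0x8086

def select_render_path(vendor_id: int) -> str:
    """Pick a code path tuned for a particular GPU family (illustrative only)."""
    if vendor_id == VENDOR_NVIDIA:
        return "nv_path"       # e.g. prefer vendor extensions known to be fast on GeForce
    if vendor_id == VENDOR_AMD:
        return "amd_path"      # e.g. tune wave sizes / async compute usage for RDNA
    if vendor_id == VENDOR_INTEL:
        return "generic_path"  # a brand-new vendor typically lands on the generic path
    return "generic_path"

if __name__ == "__main__":
    # In a real engine the ID would come from the graphics API
    # (e.g. the physical device properties); here it is hard-coded.
    print(select_render_path(0x8086))  # -> generic_path
```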
Practically all games these days use very bloated and abstracted engines. Most games today contain little to no low-level code at all, and quite often rely on an abstracted game engine (often third-party).
It is equally unfortunate to see that clubism seems to be alive and well, to the extent that the entry of a third player is seen as hostile and something to be disregarded, when we as consumers should be cheering a new product from a new manufacturer. I wish Intel well and Raja Koduri's team success in developing Arc into a fine series of dedicated graphics cards, and if I can, I will definitely be purchasing one, even if just a basic model.
It is a sidegrade or downgrade which no one needs or has ever asked for.
Self-conviction is all that's left when a product is such a blatant scam on various levels, like:
- a $200 price (what trash)
- 4 GB of VRAM on a 64-bit bus (for $200; trash again)
- severely cut-down PCIe lanes (meanwhile, other products in the price range have x16 lanes, and the company's own previous product, the RX 5500 XT, came with x8 lanes; trash again)
- cut-down encode and decode capabilities (meanwhile, other products in the price range have encode and decode, and the company's own previous RX 5500 XT had them too; trash again)
Back to the topic of the A380's gaming performance: I think Intel needs to launch Arc now to try to improve the drivers with user feedback.
With proper pricing according to its actual gaming performance, if the price stays between $120 and $130 at most, it will be a very interesting product, because it offers 6 GB of VRAM on a 96-bit bus, x8 PCIe lanes, and complete decode and encode capabilities, including the AV1 format.
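For a rough sense of what those bus widths mean, here is a back-of-the-envelope bandwidth comparison; the per-pin GDDR6 data rates below are assumed typical values for illustration, not confirmed specifications:

```python
# Back-of-the-envelope peak memory bandwidth:
#   bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbit/s) / 8
# Bus widths come from the discussion above; the data rates are assumptions.

def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits * data_rate_gbps / 8

cards = {
    "Arc A380 (96-bit, assuming 15.5 Gbps GDDR6)": (96, 15.5),
    "RX 6500 XT (64-bit, assuming 18 Gbps GDDR6)": (64, 18.0),
}

for name, (width, rate) in cards.items():
    print(f"{name}: ~{peak_bandwidth_gbps(width, rate):.0f} GB/s")
# Prints roughly 186 GB/s for the 96-bit card and 144 GB/s for the 64-bit card.
```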
:)
3 for low-end, 5 for mid-range, and 7 for high-end. I guess the 8 stands for the top offering in the corresponding tier, in this case the 3 series.
The 6600 is already barely enough at 128 bits, because a very small die size means noticeably less Infinity Cache, and there is a minimum cache size required for that near-doubling of performance per memory tick (see RX 6900 XT vs. 5700 XT)!
The existing 128-bit RDNA cards are faster than the 6500 XT! If the cache actually worked on a card this small, the performance would be in the range of the RTX 3050, not 10% slower than an RX 5500 XT!
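To illustrate the Infinity Cache point, here is a crude effective-bandwidth model: if a fraction of memory traffic is served by the on-die cache, DRAM only has to handle the misses, so effective bandwidth scales roughly as raw bandwidth divided by the miss rate. The raw bandwidth and hit rates below are invented for illustration, not AMD's figures:

```python
# Crude model: effective bandwidth ~= raw DRAM bandwidth / (1 - cache hit rate).
# All numbers here are illustrative assumptions, not measured or published values.

def effective_bandwidth_gbps(raw_gbps: float, hit_rate: float) -> float:
    """Bandwidth the GPU effectively sees when 'hit_rate' of traffic hits the on-die cache."""
    return raw_gbps / (1.0 - hit_rate)

raw_128bit = 224.0  # assumed 128-bit GDDR6 at 14 Gbps

print(effective_bandwidth_gbps(raw_128bit, 0.50))  # large cache, ~50% hit rate -> ~448 GB/s
print(effective_bandwidth_gbps(raw_128bit, 0.25))  # small cache, ~25% hit rate -> ~299 GB/s
```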
And I am really curious about the actual performance, because there's no reason the performance gap between synthetics and games should be this big. Either Intel is somehow cheating in tests (not proven so far; cheating usually affects image quality), or the potential is actually there, but for some reason it isn't realized in games so far.
They just need to work hard.
We need more actual products, not fancy hype train PPTs.
That game is optimized for NVIDIA GPUs. I had an HD 7870 at first and then upgraded to an R9 Nano back in the day, and AMD drivers were always struggling to keep gameplay smooth and consistent everywhere in that AAA-rated and supremely popular game. Now anyone can criticise how crappy the game engine was in the first place, but that's getting into another argument that isn't relevant to this thread.