Wednesday, June 22nd 2022
![Intel ARC](https://tpucdn.com/images/news/intelarc-v1734446766296.png)
Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks
Intel's Arc A380 desktop graphics card is generally available in China, and real-world gaming benchmarks of the card by independent media paint a vastly different picture than the one synthetic benchmarks had led us to believe. The entry-mainstream graphics card, selling for the equivalent of under $160 in China, is shown beating the AMD Radeon RX 6500 XT and RX 6400 in 3DMark Port Royal and Time Spy by a significant margin. The gaming results, however, see it lose to even the RX 6400 in each of the six games tested by the source.
The tests in the graph below are in the order: League of Legends, PUBG, GTA V, Shadow of the Tomb Raider, Forza Horizon 5, and Red Dead Redemption 2. In the first three tests, which are based on DirectX 11, the A380 is 22 to 26 percent slower than the NVIDIA GeForce GTX 1650 and Radeon RX 6400. The gap narrows in the DirectX 12 titles SoTR and Forza Horizon 5, where it's within 10% of the two cards. The card's best showing is in the Vulkan-powered RDR 2, where it's 7% slower than the GTX 1650 and 9% behind the RX 6400. The RX 6500 XT performs in a different league altogether. With these numbers, and given that GPU prices are cooling down in the wake of the 2022 cryptocalypse, we're not entirely sure what Intel is trying to sell at $160.
Sources:
Shenmedounengce (Bilibili), VideoCardz
190 Comments on Intel Arc A380 Desktop GPU Does Worse in Actual Gaming than Synthetic Benchmarks
But when someone is trying to be competitive in the discrete GPU market, they can't avoid situations like this. They will have to fix the bugs, and they will have to optimize performance. While Intel has been building GPUs, and drivers for them, for decades, I doubt they have thrown the necessary resources at optimization and bug fixing. That "heavy optimization and fixing ALL bugs" situation is probably brand new for Intel's graphics department.
My point is, Intel's architecture is not fundamentally new and they have a working driver from their integrated graphics, so if they have problems with scalability then it's a hardware issue.
I'm not saying there can't be minor bugs and tweaks to the driver, but the bigger problem lies in hardware, and will probably take them a couple more iterations to sort out.
Don't buy a product expecting the drivers to suddenly add performance later, that has not panned out well in the past.
Let's try to explain it with an example (pardon my English).
Let's say that Intel produces only iGPUs, and those iGPUs perform poorly in game title A and also have a bug (image corruption) with graphics setting X in that game.
Do you throw resources at optimizing the driver for game title A, to move fps from 20 to 22, and also at fixing graphics setting X, especially when enabling that setting means dropping the framerate from 20 fps to 12 fps? Probably not. If that game is a triple-A title you might spend resources to optimize it, but at the same time the solution for graphics setting X will simply be to ask gamers to keep it disabled (if the bug is difficult to fix). If it's a less-publicized game, you probably wouldn't even spend resources to move that fps counter from 20 to 22.
Let's say that Intel now produces discrete GPUs and targets at least the mid-range market against AMD and Nvidia. Well, now you have to hire more programmers for your driver department, and now optimization in game title A will probably move fps from 50 to 60. You also need to achieve this optimization, because you are competing with other discrete GPUs. Nor can you go out and tell gamers "please keep setting X disabled, because it does not work properly with Arc." No. You will have to throw resources at fixing that bug, or sales of your discrete GPUs will fall. People can ignore low performance and bugs from an iGPU that comes for "free" with the CPU. It's a different situation for a discrete GPU that people paid $150-$400 for. People expect top performance and bugs fixed.
I wasn't describing a scaling problem. I was saying that building graphics drivers for low-performing iGPUs is probably very different from building drivers for discrete GPUs. You can bypass or ignore some driver issues when you support "free" and slow iGPUs; you can't when you support expensive discrete GPUs.
Most of you in here attribute way too much to drivers in general, when the driver really does as little as possible; anything the driver spends CPU time on adds overhead, so it's a trade-off. So let me explain how a driver works for rendering. While this holds true for DirectX/OpenGL/Vulkan and others, I will use OpenGL as an example, since it's the simplest to understand and I've used it for nearly two decades.
The main responsibility of the driver is to take generic API calls and translate them into the low-level API of the GPU architecture. This is not done one API call at a time, but on queues of operations. A typical block of code to render an "object" in OpenGL would look something like this:
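(A representative sketch, not from the original post. The names `shaderProgram`, `vao`, `texture`, `mvpLoc`, `mvp` and `indexCount` are placeholders, and a live GL context with those objects already created is assumed, so this fragment is illustrative rather than runnable on its own.)

```c
/* Typical per-object draw block in modern OpenGL (C).
 * Assumes an active GL context and previously created objects. */
glUseProgram(shaderProgram);                  /* select the shader pipeline    */
glBindVertexArray(vao);                       /* bind vertex attribute state   */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);        /* bind the object's texture     */
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, mvp); /* upload the transform matrix   */
glDrawElements(GL_TRIANGLES, indexCount,      /* queue the actual draw call    */
               GL_UNSIGNED_INT, 0);
```

Each of these calls only records state changes or queues work; nothing forces the GPU to execute immediately, which is exactly why the driver can batch and translate whole queues of operations at once.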
What kind of low-level operations this is translated into will vary depending on the GPU architecture, but it will be the same whether the GPU is integrated or a high-end card. And to make it clear, the driver operates the same regardless of whether the application is an AAA game title or a hobby project.
And to your point that Intel didn't have to prioritize performance or overall driver quality for integrated GPUs the way it does for dedicated GPUs, I strongly disagree, and I have some solid arguments why:
1) AMD has offered horrible OpenGL support for ages, while Intel's support has been mostly fine. And while it took a while for Intel to catch up on OpenGL 4.x features, the ones they've implemented have seemingly worked. AMD's support has been really bad; around ~10 years ago they even managed to ship two drivers in a row that mostly broke GLSL shader compilation (essentially breaking nearly all OpenGL games and applications).
2) The overall quality and stability of Intel's drivers have been better than AMD's for years. Graphics APIs are not just used for games; today they are used by the desktop environment itself, CAD/modelling applications, photo and video editing, and even some multimedia applications. And it's not just in the forums that we hear about far more issues with AMD than with the others; those who do graphics development quickly get a feel for driver quality from how little "misbehaving code" is needed to crash the system. While this is of course totally anecdotal, none of my main systems run AMD graphics for this very reason; it's quite annoying to get something done when systems crash up to several times per day during development.
Now to answer even more specifically: drivers aren't really optimized for specific games, at least not the way you think. When you see driver updates offer up to X% more performance in <selected title>, it's usually tweaking the game profiles or sometimes overriding shader programs. These aren't so much optimizations as "cheating": reducing image quality very slightly to get a few percent more performance in benchmarks.
When they do real performance optimizations, it's usually one of these:
a) General API overhead (tied to the internal state machine of an API) - will affect anything that uses that API.
b) Overhead of a specific API call or parameter - will affect anything that uses that API call.
So therefore, I reject your premise of optimizing performance for a specific title.
The graphics APIs have a spec, and the driver's responsibility is to behave according to that spec. If, e.g., Nvidia wanted to deviate from that spec to boost the performance of a particular game, that would add bloat and overhead to the driver and would risk introducing bugs. On top of that, if the API no longer behaves according to the spec, game programmers are likely to introduce "bugs" which are very hard to track down and waste a lot of the developers' time.
The driver developers don't know the game's internal state, and they don't know the assumptions of the programmers who wrote the game. All the driver sees is a stream of API calls; it doesn't have the context to optimize differently from frame to frame.
So this idea of the driver doing all kinds of wizardry to gain performance is just utter nonsense. As I've said, the driver does as little as possible to quickly translate a queue of API calls to the native instructions of the GPU, the GPU scheduler internally does the heavy lifting.
Most people in forums like this think Nvidia's advantage is mostly due to game optimization and drivers optimized for those games, when in reality these optimizations are a myth. Nvidia has achieved most of its upper hand over AMD thanks to better scheduling of its GPUs' resources, which is why it has often managed to extract more performance out of fewer computational resources (Tflops, fillrate, etc.). When I say the following, I mean it in a loving way: please try to get this into your heads, when something performs better, it's usually because it's actually better. Stop using optimizations (or the lack thereof) as an excuse when there isn't evidence to support that.
Anyway let's keep questions simple here.
Why does Arc perform on par with the competition in 3DMark but lose badly in games?
Why are most bugs in Arc ones that lead to application crashes or texture corruption? In AMD's and Nvidia's driver FAQs you read about strange behavior when doing very specific things. In the Arc FAQ, half the bugs are application crashes or corrupted textures just from running the game.
I might be wrong, but these are my observations of GPU behaviour. Does it really do that? Do you have sources? If so, I believe it must be some bug in the driver that can be ironed out, and not an issue of optimisation. But I'm curious about a proper answer, as I don't know much about driver code myself.
But let's wait for a proper answer from someone who knows more than I do.
To add to that: games usually try to render things with reasonable efficiency, while synthetic benchmarks try to simulate "future" gaming workloads. They usually end up stressing the GPU much more than a normal game would, but honestly I don't think the performance scores here have any use to consumers; I use them for stress testing after setting up a computer. I do think synthetics can be useful for driver developers, though, to try to provoke bugs. If there is texture corruption across multiple games, and the same games don't have that problem on other hardware, then the driver doesn't behave according to spec. Finding the underlying reason would require more details, though; it could be either the driver or the hardware. This might surprise you, but when it comes to software bugs it's actually better if a bug occurs across many use cases. That usually means the bug is easier to reproduce and precisely locate. Such bugs are usually caught and fixed once there are enough testers. A rare and obscure bug is in many ways worse, as it will lead to very poor bug reports, which in turn demand large efforts to track down.
downloadmirror.intel.com/733544/ReleaseNotes_101.1736.pdf What doesn't surprise me is how the glass is half empty or half full, depending on the situation.
AMD CPUs have been famous for being better at productivity apps, while Intel is (or used to be) better at games. Is this due to some driver magic as well? No one said that there can't be bugs in the driver-API communication. AMD is notorious for leaving bugs in for a long time. The argument was that these bugs in no way mean that games are "optimised" for a certain architecture or, god forbid, manufacturer.
Having said that, let's see why I said that. A driver does play a role; it's not a myth. When a new driver fixes performance in a game or multiple games, then something was changed in that driver. What was it? I am NOT a driver developer. Are you? A lack of knowledge doesn't mean that the phrase "nothing to suggest" has any real value here. A man from 100 BC would insist that there is "nothing to suggest" that a 10-tonne helicopter stays in the air by pushing air down with its rotor blades, lacking all the necessary knowledge of physics.*

AMD CPUs have been famous for being better at productivity apps because they had more cores, until Alder Lake. On the other hand, Intel almost always had the advantage in IPC, and many apps were also optimized for Intel CPUs, not AMD CPUs. I am not going to comment on the "notorious" AMD; it's boring after so many years of reading the same stuff. People's need to bash AMD even while using its products is not my area of expertise, and I am not going to play word games with someone who will never accept anything different. I have been reading for decades, even from Intel/AMD/Nvidia representatives, about app/game optimizations and apps/games being developed on specific platforms, and I have seen how Nvidia's perfect image was ruined for a year or two, somewhere around 2014 I think, when games were optimized for the consoles (meaning GCN) and the PC versions had a gazillion problems, especially those games paid by Nvidia to implement GameWorks in their PC versions.
So, I am stopping here. There is no reason to lose more time with people who insist that it is not A, it is B, without ANY REAL arguments for why it is B and not A.
Have a nice day.
PS * Just remembered Carl Sagan
2. Who said that you can't write a game to favour the hardware resources of a certain architecture? It's not the same thing as "optimising" a new driver for a game that's already been made.
OK, that's more than enough from me, having said I would stop and not make another post. Especially when the other person keeps moving the goalposts.
stackoverflow.com/questions/12170575/using-nvidia-fxaa-in-my-code-whats-the-licensing-model
All it requires is a bit of code so the GPU knows what to do. In the end, it's a processing unit working through an API, and the API just serves stuff to translate. If you have the full vocabulary on your GPU, you can have it translated. If not, you'll resort to something that does the same thing but slower. Or not at all, because it is somehow locked.
The end result might be the same, but the reasons are different, and the REASONS are the core of the GameWorks argument. There is absolutely nothing stopping AMD from providing GameWorks-like solutions and support, and it hasn't stopped them either. The real question is: what features do you really need, and how do they help gaming? The ones we can really use definitely get copied, and you're not missing much between having GameWorks support or not. FXAA is a great example of that. The fact that we don't have ready-made answers, only guesses, for these questions is quite simply because we don't know for sure. Perhaps there are monkeys disguised as humans writing their code. Perhaps they have hardware issues they are working around as we speak. Workarounds are going to be inefficient.
A benchmark is reproducible; games are more variable in what they want at any given point in time. It does. Here's a car analogy: the driver is the DRIVER, and the car is the car. The car has limits; it can accelerate to 100 in a defined number of seconds. But if the driver of the car is bad at shifting gears, it certainly won't meet that spec. A better driver (or, to use the at-one-point-implemented shader cache as an example, a more experienced driver who has driven the car a few times in that situation) will know exactly when to shift gears and therefore meets the spec.
Now, let's consider the car and the driver on a new road (a new game). The job at hand is to accelerate as fast as possible, then hit the brakes to come to a full stop as fast as possible, so he can accelerate again to full speed (clocks/boost!). One driver has experience on fresh roads, knowing they can be smoother and more slippery, so he applies different braking action, while the other is oblivious to road types. The brakes on the cars are identical. The driver determines when to hit them and how hard.
So yes. Drivers play a role, and so does experience. Experience is pretty much scheduling: using the hardware resources in the best possible way at the best possible time. The other part of drivers, where they apply trickery to hit bigger numbers, usually comes at the cost of image quality. That could be called optimization, but that's a choice of semantics; the reality is, you render less, so you produce more frames.
So what does that all mean? It means that if a driver update tells you it suddenly got a major performance boost in a select number of applications, you should be on the lookout for what work it's no longer doing. And if the driver update tells you there is a major increase in performance across the board, scheduling likely improved.
Calling either of those optimization is not really accurate, is it? The first is cheating; the latter is basically dev work on your GPU (hardware) that wasn't done prior to its release. And bugs... are bugs, again, a matter of experience with the hardware. How does it behave, and why? Intel seems to have arrived at a point where they have documented the how but haven't quite found the why for most situations. When they find that why, there will be numerous much smaller hows and whys underneath, for those very specific situations you speak of in AMD/Nvidia driver FAQs. Refinement happens over time.
These videos were made before there was any indication that Intel's Arc 3 would be worse than the RX 6400.
These videos were made before Nvidia introduced the GTX 1630, which is far worse than even the RX 6400.
If any of those Steves makes a review of the GTX 1630 - they will not, for one obvious reason - they will have to change their conclusion about the RX 6500 XT. But for now, YOU can change your post, because it is totally incorrect. A month ago it would have been correct, but today the RX 6500 XT is a huge upgrade over the A380 and GTX 1630, two DESKTOP GPUs that are coming AFTER the RX 6500 XT's introduction and availability. It is literally NOT, BY FAR, the worst GPU money can buy right now.
Before you know it, a reality catches on with crowds that simply isn't reality, but some weird combination of 'bits we heard'. That is the vast majority of YouTube and social media 'press' right now and over the past ten to twenty years. The facts are clear: we see tons of individuals believing the most diverse bullshit like it's a new Bible, from Pizzagate to all sorts of nonexistent threats to keto diets. It's all static, and it serves no one. On our forum, the closest example is how people flash a BIOS to gain some performance. They heard on 'Tube it was a good thing, and they follow like lemmings.
In terms of our subject, there is a pretty big difference between optimization and 'just producing a product as it should be'. And yes, it's difficult to avoid the difference between camps here. Team Green has much higher driver quality on release, and Team Red gets there eventually, bar exceptions in both a positive and a negative sense. Somehow, pop media started calling the latter 'Fine Wine' ;) But when you read the above about optimization, is it really the best way to describe what it truly is?
A question, not a verdict :)