What a remarkable, fact-based and well-formed response! You know, resorting to cursing defeats any kind of logical argument.
Technically speaking, it's not hard to analyze and detect overhead. There are profiling tools which can pinpoint timing and resource allocation pretty precisely. It's not like developers rely on the Ballmer peak and crystal balls to optimize code; contrary to popular opinion, software development is surprisingly methodical, rational and deductive in nature.
So, if there were major driver overhead, this would be easily detectable. Not only would GPU performance be severely bottlenecked by the CPU, but we would expect this bottleneck to grow with GPU power (assuming a faster GPU is used to raise frame rates, not detail levels), so we should expect an A770 to be significantly more bottlenecked by it than an A380. As I've said, Intel Arc performs poorly in real-world gaming compared to synthetic benchmarks, which points to hardware-level scheduling issues, not driver overhead.
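To make the scaling argument concrete, here is a toy calculation with invented numbers (a deliberately simplified model where the CPU submits and then the GPU renders serially; real engines overlap the two, but the proportions still illustrate the point):

```cpp
// Toy model: a fixed CPU-side driver cost per frame hurts a fast GPU
// proportionally more than a slow one. All numbers are invented.
#include <cstdio>

int main() {
    const double driver_ms = 4.0;          // hypothetical fixed driver cost per frame
    const double gpu_ms[]  = {16.0, 8.0};  // hypothetical slow vs. fast GPU render cost
    const char*  name[]    = {"slow GPU", "fast GPU"};

    for (int i = 0; i < 2; ++i) {
        double frame = driver_ms + gpu_ms[i]; // simplified: CPU submit, then GPU render
        double ideal = gpu_ms[i];             // frame time if the driver cost were zero
        std::printf("%s: %.1f fps vs. %.1f fps ideal (%.0f%% lost to the driver)\n",
                    name[i], 1000.0 / frame, 1000.0 / ideal,
                    100.0 * (1.0 - ideal / frame));
    }
    return 0;
}
```

With the same 4 ms driver cost, the slow GPU loses 20% of its potential frame rate while the fast one loses 33%. If Arc's problem really were driver overhead, that widening gap is exactly what A380 vs. A770 comparisons should show.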
I don't think you grasp how massive a driver overhead issue would have to be to hold back ~20% of performance. Whether a whole API or just a few individual API calls were causing it, it would be glaringly evident in a profiling tool. And remember, to unleash major gains this trend would have to persist across virtually every workload, so it should be easy to find. If a few API calls were eating up too much of the frame time and leaving the GPU undersaturated, that is exactly the sort of thing profiling exposes. Yet Intel has been struggling since the early engineering samples last year to squeeze out even a tiny bit more performance, and they can't, because there isn't anything significant left to gain on the driver side. So it's very unlikely that Intel will suddenly stumble across something that unleashes 20% more performance, and I'm not talking about a single edge case here, but 20% across the board.
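For illustration, this is roughly what such a measurement looks like. A minimal sketch, not tied to any real graphics API: submit_draw_calls() and wait_for_gpu() are hypothetical stand-ins with fake timings, and real tools (Nsight, GPUView, etc.) do this far more precisely:

```cpp
// Minimal sketch of CPU-vs-GPU frame profiling. Both functions below are
// hypothetical stand-ins; the sleeps fake CPU-side driver work and GPU work.
#include <chrono>
#include <cstdio>
#include <thread>

using clk = std::chrono::steady_clock;

static void submit_draw_calls() {  // stand-in for CPU-side API/driver calls
    std::this_thread::sleep_for(std::chrono::milliseconds(2));
}

static void wait_for_gpu() {       // stand-in for waiting on a fence/swapchain
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}

int main() {
    for (int frame = 0; frame < 5; ++frame) {
        auto t0 = clk::now();
        submit_draw_calls();       // driver overhead, if any, shows up here
        auto t1 = clk::now();
        wait_for_gpu();            // GPU-bound portion of the frame
        auto t2 = clk::now();

        auto ms = [](auto d) {
            return std::chrono::duration<double, std::milli>(d).count();
        };
        double cpu = ms(t1 - t0), total = ms(t2 - t0);
        // A consistently high CPU share with an undersaturated GPU is the
        // textbook signature of a driver bottleneck.
        std::printf("frame %d: %.1f ms in API calls of %.1f ms total (%.0f%%)\n",
                    frame, cpu, total, 100.0 * cpu / total);
    }
    return 0;
}
```

If a signature like that persisted across every game, Intel's own engineers would have spotted it on day one.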
Then lastly, there is history:
Those who remember the launches of Polaris and Vega will recall that not only forums but also some reviewers claimed that driver optimizations would make them age like "fine wine" and turn out to be better investments than their Nvidia counterparts. Some even suggested the RX 480 would compete in the GTX 1070/1080 range once the drivers matured after "a few months". Well, did it happen? Not yet, but I'm sure it will happen any day now!
And there are not many examples of driver "miracles". The biggest pretty much across-the-board driver optimization I can recall was done by Nvidia shortly after the release of DirectX 12, when they brought most of their DirectX 12-related driver improvements over to their DirectX 9/10/11 and OpenGL implementations, and even that massive overhaul achieved something like ~10%. And this was overhead they had been well aware of for years.
Another recent example is AMD's OpenGL implementation rewrite, which yielded some significant gains (and some regressions). And this was an issue OpenGL devs had known about since the early 2000s: AMD's (formerly ATI's) OpenGL implementation was always buggy and underperforming, and it simply wasn't prioritized for over a decade.
So my point here is, we should stop making excuses for poorly performing hardware by blaming "immature" drivers. DirectX 10/11/12 are high-priority APIs, so if there were major bottlenecks in their driver implementations, Intel would know, no matter how "stupid" you think their engineers are.
And isn't it funny that for years "immature drivers" have been the excuse whenever AMD (and now Intel) has released an underperforming product, but never Nvidia? I smell bias…
Contributing to something and claiming A became B are not the same thing. And since you are twisting words, I'm going to use your own words against you:
- "You mean like how
Mantle became Vulkan"
- "Mantle was influential in low level APIs, and literally commited swaths of
code to Vulkan"
Both of these claims are untrue, no matter how you try to twist it or split hairs.
Khronos developed Vulkan based on input from numerous contributors, including AMD with their Mantle, and built it on top of their SPIR-V architecture and the groundwork done by the AZDO ("approaching zero driver overhead") initiative for OpenGL. While there may be some surface-level similarities between Mantle and Vulkan, Vulkan is far more fully featured and has much more state management than Mantle ever had, so these are not the same thing, even though many in the tech press don't know the difference.