I'm curious to see details about framerate consistency across games.
Also, is it just me, or are the "heavier" games mostly on the lower end of that scale compared to the RTX 3060?
Optimization has always been a problem; it just seems bigger today because games are huge, developers try to get them to market as fast as possible, and frankly a 10+ core CPU and a modern GPU make a huge carpet to sweep any performance problem under.
Adding cores will not solve any performance problem in a game. In a highly synchronized workload like a game, the returns from using more threads diminish very quickly, and can easily turn into unreliable performance or even glitching. What you're saying here is just nonsense.
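To put rough numbers on the diminishing returns (my own illustration, not anyone's measurement): Amdahl's law caps the speedup at 1 / ((1 - p) + p / n), where p is the fraction of frame time that actually parallelizes. The 70% parallel fraction below is a hypothetical figure chosen for illustration:

```cpp
#include <cstdio>

// Amdahl's law sketch: speedup(n) = 1 / ((1 - p) + p / n).
// p = 0.7 is an assumed parallelizable fraction of a frame,
// picked purely for illustration.
int main() {
    const double p = 0.7;
    const int core_counts[] = {2, 4, 8, 16, 32};
    for (int n : core_counts) {
        double speedup = 1.0 / ((1.0 - p) + p / n);
        std::printf("%2d cores -> %.2fx speedup\n", n, speedup);
    }
    return 0;
}
```

With those numbers, going from 8 to 32 cores buys you barely half an "x" more, and even infinite cores top out at about 3.3x; real synchronization overhead eats into that further.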
Also, an unoptimized game will sell more CPUs and GPUs than an optimized one, meaning not only can you market it faster, you can also get nice sponsor money from Nvidia, AMD and Intel by partially optimizing for their architecture instead of for everyone's.
Firstly, no modern PC game is optimized for a specific GPU architecture; that would require using the GPU's low-level API instead of DirectX/Vulkan/OpenGL and bypassing the driver (translating those APIs is the primary task of the driver).
Your claims are approaching conspiracy territory. No game developer wants their game to perform poorly; that would make the gameplay less enjoyable for the majority of their customers. Game developers don't get a cut of GPU sales either, and when GPU makers "sponsor" games, that has more to do with technical assistance and marketing. Even if developers did receive funds, they would be a drop in the bucket compared to the budget of a big game title.
Many games today are bloated and poorly coded for a host of reasons:
- Most studios use off-the-shelf game engines, writing little or no low-level code themselves; instead they interface with the engine. It also means these engines have generic rendering pipelines designed to render arbitrary objects, not tuned to any specific game.
- Companies want quick returns, often resulting in short deadlines, changing scopes, and last-minute changes.
- Maintenance is often not a priority, since the code is hardly touched after launch, so programmers rush to meet requirements instead of writing good code. This is why game code has a reputation as some of the worst in the industry.
I mean, Vega was in its own way a good card: it excelled at compute, handled games, and over time was equal to a 1080 Ti; it just required a tad more power, and both it and Polaris were clocked beyond their efficiency curves. Polaris was also bandwidth starved in terms of performance; for example, the memory was rated to "feed" the GPU only up to a 1000 MHz GPU clock. Anything above that was just a waste of power.
But they kind of had to, in order to still compete with the 1060. The $250 price tag, however, made it good value, and it was the best 1080p card at the time.
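As a rough illustration of the "clocked beyond its efficiency curve" point (my own toy model, not a measured Polaris figure): dynamic power goes roughly as C·V²·f, and near the frequency limit the voltage also has to rise roughly in step with the clock, so power grows roughly with the cube of frequency. The 1120 MHz "sweet spot" below is a hypothetical reference point:

```cpp
#include <cstdio>

// Toy model of dynamic power past the sweet spot: P ~ C * V^2 * f.
// Assuming V rises roughly linearly with f near the limit, P grows ~ f^3.
// The 1120 MHz baseline is an assumed sweet spot for illustration only.
int main() {
    const double f_base = 1120.0; // MHz, assumed efficiency sweet spot
    const double clocks[] = {1120.0, 1200.0, 1266.0, 1340.0};
    for (double f : clocks) {
        double r = f / f_base;
        double rel_power = r * r * r;
        std::printf("%4.0f MHz -> %+5.1f%% clock, %+6.1f%% power\n",
                    f, (r - 1.0) * 100.0, (rel_power - 1.0) * 100.0);
    }
    return 0;
}
```

Under that crude model, the last ~13% of clock costs over 40% more power, which is the shape of the tradeoff AMD accepted to keep pace with the 1060.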
Which Vega cards are you talking about?
Vega 56 performed slightly above the GTX 1070 but cost $500 (with a $100 "value" in bundled games).
And how was Polaris bandwidth starved?
The RX 480 had 224/256 GB/s (4 GB/8 GB models) vs. the GTX 1060's 192 GB/s.
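For what it's worth, a quick bytes-per-FLOP estimate (my own back-of-the-envelope math using the cards' reference boost clocks) puts the two at essentially the same bandwidth-to-compute ratio:

```cpp
#include <cstdio>

// Back-of-the-envelope bytes-per-FLOP comparison, not from the original
// posts. FP32 throughput = 2 * shaders * clock (FMA counts as two ops).
// Shader counts, reference boost clocks, and bandwidths are the public
// reference specs; real cards vary.
int main() {
    struct Card { const char* name; double shaders, clock_ghz, bw_gbs; };
    const Card cards[] = {
        {"RX 480 (8 GB)", 2304, 1.266, 256.0},
        {"GTX 1060",      1280, 1.708, 192.0},
    };
    for (const Card& c : cards) {
        double tflops = 2.0 * c.shaders * c.clock_ghz / 1000.0;
        double bytes_per_flop = c.bw_gbs / (tflops * 1000.0);
        std::printf("%-14s %.2f TFLOPS, %3.0f GB/s -> %.3f B/FLOP\n",
                    c.name, tflops, c.bw_gbs, bytes_per_flop);
    }
    return 0;
}
```

Both land around 0.044 bytes per FLOP, so by that crude metric the RX 480 was no more bandwidth-starved than the 1060.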
Both Polaris and Vega underperformed due to poor GPU scheduling, yet they performed decently in some compute workloads, as some of them are easier to schedule.