Thanks for addressing the problem I was pointing out.
They can't fix the shader compilation stutter & traversal stutter. Not with the new render type & TAA. It's already been 8 to 9 years since DX12 came out; it launched around the end of 2015 / beginning of 2016. Problems like this stutter would've been fixed already if they were possible to fix.
It's funny that developers were the ones who wanted a more "low level" API like DX12. Yet they can't even use it properly, because it requires too much coding knowledge, recoding, redesigning & recompiling, and too much time to do right. None of that was needed with the older DX11: a lot of the work was done for developers by the drivers and compilers, and a lot of that tooling doesn't carry over to DX12.
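To make the stutter mechanism concrete: under explicit APIs like DX12/Vulkan, a pipeline/shader variant that was never compiled before gets compiled the first time the game needs it, and if that happens mid-gameplay instead of during a loading screen, you feel it as a hitch. Here's a minimal conceptual sketch in Python (not real graphics API code; the 50 ms compile cost and the state names are made-up illustrations):

```python
import time

def compile_pipeline(state_key):
    """Stand-in for an expensive driver-side shader/pipeline compile.
    (Hypothetical fixed cost; real compiles range from ms to hundreds of ms.)"""
    time.sleep(0.05)  # simulate 50 ms of compilation work
    return f"binary-for-{state_key}"

class PipelineCache:
    def __init__(self):
        self._cache = {}

    def get(self, state_key):
        # First use of a new state compiles on the spot: that's the hitch.
        if state_key not in self._cache:
            self._cache[state_key] = compile_pipeline(state_key)
        return self._cache[state_key]

    def warm(self, state_keys):
        # Precompiling at load time moves the cost out of gameplay.
        for key in state_keys:
            self.get(key)

cache = PipelineCache()
cache.warm(["opaque", "shadow"])       # paid during a loading screen

start = time.perf_counter()
cache.get("opaque")                    # already cached: effectively free
hit_free = time.perf_counter() - start

start = time.perf_counter()
cache.get("transparent_foliage")       # new state mid-game: ~50 ms stall
hit_stall = time.perf_counter() - start

print(f"cached lookup: {hit_free*1000:.2f} ms, first-use compile: {hit_stall*1000:.1f} ms")
```

The hard part in real engines is knowing the full set of pipeline states to warm up front, which is exactly the work DX11 drivers used to hide from developers.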
I don't believe it's harder to code for multiple GPUs either, now that the newer APIs were designed around the idea. Especially since Vulkan uses a similar method to DX12 for multi-GPU. It's pretty simple to add support for it in newer Vulkan builds. The trouble is when game developers purposely remove it from their engines' Vulkan paths, or don't bother to update their Vulkan base code for it. Vulkan is better so far, but it's barely used by any developers, and so are the tools that do a lot of the work.
I mean, we're almost 10 years into DX12 and we've got fewer than around 500 DX12 games. On DX11, in two years we had 700 games.
The rate difference is that DX11 was putting out around 350 games a year to DX12's current ~50 games a year; that's roughly 7 times as many for DX11.
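The release-rate comparison above is simple arithmetic on the figures from the post (700 games in 2 years of DX11, ~500 games in ~10 years of DX12):

```python
dx11_games, dx11_years = 700, 2      # "in two years we had 700 games"
dx12_games, dx12_years = 500, 10     # "almost 10 years", "fewer than around 500"

dx11_rate = dx11_games / dx11_years  # games per year on DX11
dx12_rate = dx12_games / dx12_years  # games per year on DX12
ratio = dx11_rate / dx12_rate

print(dx11_rate, dx12_rate, ratio)   # 350.0 50.0 7.0
```

So on these numbers DX11's adoption rate works out to about 7x DX12's, not 8x.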
"Crunch time" for developers is controlled by the release dates publishers put forth, and a lot of those deadlines are unrealistic. Then there are the advertisers pushing hype that gamers get too eager over. Gamers complain about the game taking too long; the game gets a release date; then maybe a bug is found and it gets pushed back; gamers get mad; the publisher feels the heat from community complaints, and the game gets pushed hard to release anyway.
In the end, DX12 is a big stinking pile of M$ broken promises.
DXR does not help it either, since most general consumers assume ray tracing = RTX. They have no clue that's Nvidia branding, nor any idea what DXR means.
Being stuck with a single card as the only choice means they can charge whatever they want for the top-tier card. You have no choice but to buy it if you want the maximum. (I remember someone on the Xtreme Systems forum 10 to 15 years ago saying that if it got down to single-card GPUs only, Nvidia would charge $1,200 for the GPU. Here we are today talking about a GPU [RTX 5090] for $2,000 to $2,500.)
Losing choices is never a good thing.
Sure they can: Black Myth Wukong loses it entirely after the first couple of hours played. It's buttery smooth, even at 50 FPS. Similarly, Darktide (UE4) has it mostly fixed at this point; "mostly" because it's not really possible to determine where the very rare leftover stutter actually comes from, but it doesn't feel like traversal stutter, and it never detracts from gameplay anymore. Many games have similar stuff going on.
On the second point, I think we're actually seeing that since DX12, CPU bottlenecking has been largely alleviated; games use more threads. Also, it's important to keep in mind we're still living in a DX11 world in terms of GPU market share. Games are still being shipped with DX11 modes, so it's not surprising the DX12 API isn't always fully explored. That's not because devs don't want to move away from DX11; it's simply because studios want to sell games, and excluding everyone without a DX12-capable GPU isn't exactly helping your sales.
Third point... we have over a decade of living proof that it's definitely harder to code well for mGPU / CrossFire / SLI, for the simple reason that you are adding complexity to the pipeline. Frames must be sent to one GPU or the other, or both, so there is overhead you do not have with a single GPU; and all of that extra overhead leans on VRAM data transport, which is exactly the resource GPUs try to use better to get faster, because VRAM bandwidth is expensive. These are simple facts, and they add milliseconds of latency to everything you do. This is why SLI setups could spit out fantastic FPS while their frametimes were (virtually) always all over the place: you literally feel the impact of that extra overhead on every frame. This problem was never fixed, and when it was worked around, it always came at an immense scaling cost, effectively killing the advantage of mGPU on its own.
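The FPS-up / frame-pacing-down effect can be illustrated with a toy alternate-frame-rendering (AFR) model. All the numbers below are made up for illustration (a hypothetical 20 ms single-GPU render and an uneven per-slot sync cost); the point is only that splitting frames across GPUs can lower the *average* frametime while widening the *spread* between frames:

```python
render_ms = 20.0          # hypothetical single-GPU render time (50 FPS)
transfer_ms = [1.0, 6.0]  # hypothetical uneven copy/sync cost per GPU slot

def single_gpu_frames(n):
    # One GPU: every frame costs the same, so pacing is perfectly even.
    return [render_ms for _ in range(n)]

def afr_frames(n):
    frames = []
    for i in range(n):
        # Two GPUs working in parallel roughly halve the render time...
        # ...but each frame pays an alternating transfer/sync overhead.
        frames.append(render_ms / 2 + transfer_ms[i % 2])
    return frames

def stats(frames):
    avg = sum(frames) / len(frames)
    spread = max(frames) - min(frames)  # best-to-worst frametime gap
    return avg, spread

single_avg, single_spread = stats(single_gpu_frames(8))
afr_avg, afr_spread = stats(afr_frames(8))

print(f"single GPU: avg {single_avg:.1f} ms, spread {single_spread:.1f} ms")
print(f"AFR:        avg {afr_avg:.1f} ms, spread {afr_spread:.1f} ms")
```

In this toy model the AFR average frametime drops (higher FPS) while the frame-to-frame spread grows from zero to several milliseconds, which is the "great FPS, awful frametimes" pattern described above.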
It's definitely a big, complex die, but beyond that, no idea.
They can make it, no doubt in my mind, because they make larger GPUs too. The only real reason Nvidia positions its cards the way it does is to maximize profits. Some combination of yield, margin, 'market demand', and their overall strategy determines what we get. Everything is just a little knob they can turn to tweak their offering AND prepare customers for the next round of GPUs. Nvidia has learned, and has the position, to look beyond the current gen, and has for quite a while now. That's something they sucked at during the Kepler / refresh stage up until Maxwell, when they were just focused on developing faster chips than AMD and it was a close call every time. I think Pascal was the first gen where Nvidia was confident it had market dominance, and acted upon it: they reined in direct sales with Founders Editions and started elevating price points per tier. The cadence between generations slowed down considerably too, and they started pushing RTX. All of those moves signal an Nvidia that has gained a lot of wiggle room to plan ahead and 'be safe' doing so.
It's also part of the reason why AMD's GPU gen cadence is still messy: they're reactive, not proactive.