I think people have a hard time remembering because, prior to RDNA2, AMD struggled to compete at the high end after 2013... They made a bad bet on HBM that crippled the Fury X, which might have ended up a decent product had it shipped with 8 GB, and after that they gave up on the high end until RDNA2 shipped.
4 GB of VRAM wasn't the biggest problem at the time; Nvidia's counterparts, the 980 and 980 Ti, had 4 and 6 GB. AMD's biggest problem was availability: these cards were virtually nowhere to be found for a long time. Heat and power management were also problems on them.
Two points nobody has mentioned: if these fail hard, they are going to be worth $$$ some years in the future.
Only if Intel has already stopped producing the chips and only a few hundred thousand units will ever be made, but I suspect it's too late for that.
What's more likely is that these will be sold in volume to East Asian and OEM markets at a lower price.
Also, concerns about driver quality are a double-edged sword. Sure, the drivers might not be great, but that also means performance is only going to increase from here.
You fall into the trap of thinking that driver optimizations can compensate for any shortcoming, but they can't.
The underlying problem with this architecture is clearly showcased when synthetic benchmarks scale much better than real games; that tells us the problem is scheduling inside the GPU, which is not done by the driver at all. This happens because synthetics are typically designed to be computationally intensive, while games are designed to render efficiently.
Intel's problem is essentially very similar to the one AMD has had for years, just a fair bit worse. If you remember back when the RX 400/500 series was competing with Pascal, AMD typically needed 40-50% more FLOPS, more memory bandwidth, etc. to produce the same performance as Nvidia. This led many AMD fans to claim that these GPUs had huge untapped potential to unlock with driver optimizations, thinking there would be 30-40+% more to gain. Even some reviewers claimed these cards would perform better over time. So did the RX 400/500 series ever get that ~30-40% performance gain over Nvidia's counterparts? No, because the gap was never caused by driver overhead.
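To put rough numbers on that gap, here is a back-of-the-envelope sketch comparing the RX 580 and GTX 1060 6GB from their published shader counts, approximate boost clocks, and memory bandwidth. The specific card pairing and clock figures are my own illustration, not something from the comment above:

```python
# Back-of-the-envelope comparison of paper specs for two roughly
# performance-equivalent cards of that era (approximate published figures).

def fp32_tflops(shaders: int, boost_ghz: float) -> float:
    """Peak FP32 throughput: shaders * 2 ops per clock (FMA) * clock."""
    return shaders * 2 * boost_ghz / 1000

rx_580 = fp32_tflops(2304, 1.34)    # ~6.2 TFLOPS, 256 GB/s memory bandwidth
gtx_1060 = fp32_tflops(1280, 1.71)  # ~4.4 TFLOPS, 192 GB/s memory bandwidth

print(f"RX 580:   {rx_580:.1f} TFLOPS")
print(f"GTX 1060: {gtx_1060:.1f} TFLOPS")
print(f"FLOPS advantage:     {rx_580 / gtx_1060 - 1:.0%}")  # ~40% more FLOPS
print(f"Bandwidth advantage: {256 / 192 - 1:.0%}")          # ~33% more bandwidth
# ...yet in contemporary game benchmarks the two cards traded blows,
# which is exactly the utilization gap described above.
```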
Intel can certainly fix their driver bugs, and perhaps squeeze out a few percent here and there if they find some overhead, but the overall performance level compared to competing products will remain roughly the same. Think about it: going from RTX 3060 performance to RTX 3060 Ti is about ~30%, and to RTX 3070 about ~45%; there is no way removing driver overhead is going to yield gains of that size. And never mind competing with Nvidia's high-end products, which have roughly twice the performance.
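As a rough illustration of why (my own framing, not something claimed above): even in the best case for driver work, a fully CPU-limited game, removing driver overhead entirely is capped by how large a share of the frame time the driver actually occupies, in an Amdahl's-law-style bound:

```python
# Ceiling on FPS gains from removing driver overhead, assuming the frame is
# entirely CPU-limited (the most favorable case for driver optimization)
# and the driver accounts for a fraction `f` of that CPU frame time.

def max_speedup(driver_fraction: float) -> float:
    """Upper bound on FPS gain if driver overhead were eliminated completely."""
    return 1 / (1 - driver_fraction) - 1

for f in (0.05, 0.10, 0.20, 0.30):
    print(f"driver = {f:.0%} of frame time -> at most +{max_speedup(f):.0%} FPS")

# driver = 5% of frame time  -> at most +5% FPS
# driver = 10% of frame time -> at most +11% FPS
# driver = 20% of frame time -> at most +25% FPS
# driver = 30% of frame time -> at most +43% FPS
# In GPU-bound scenarios the realistic gain is far smaller still, because the
# GPU, not the driver, is what sets the frame time.
```

So to explain a whole performance tier, the driver would have to be eating a quarter or more of every frame in a CPU-bound game, which is not what we see.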
Lastly, there is no guarantee that performance will improve in every case. There is a very real chance that some cases get slightly worse with bug fixes, especially if Intel ends up shipping quick fixes just to eliminate bugs.