I'm well aware, but I mean, I have a 5950X and know what it can and can't do. In this case there's still a little more oomph to it vs. the 5900X (two full CCDs, so more data access pathways and the extra bit of associated L1/L2 from the extra four cores), but you know how pre-release first-party benchmarks go. I trust reputable reviewers (such as W1zz) and first-hand experience from actual owners over AMD (or Intel, or NVIDIA, or whoever) marketing slides... if AMD claims a median improvement of 20%, then 9% falls more in line with what I personally expect.
I think it will be a great processor, and it certainly heralds an innovation that will lead to wild successors in the future. But that is mostly because the 5800X itself is a great processor, and this is just a taste test for an upcoming packaging technology that is sure to revolutionize how we see the common desktop processor.
I think you're misjudging things here. While I entirely agree that we shouldn't blindly trust first party benchmarks, these are pretty conservative overall. They're also saying "~15%" average if you look at the slide, not 20, FWIW. IMO, the inclusion of examples with no improvement speaks to a degree of honesty in the benchmarks - though that is obviously also what they want to convey, so it still can't be taken at face value. Still, I see these as slightly more plausible than most first party benchmarks.
As for your 5950X comparison, there are some holes there. First off, L1 and L2 caches on Zen3 are per-core and don't affect the performance of other cores at all. Unless those extra cores are actually being utilized, there is no advantage there - and arguably a minor disadvantage, as the same L3 is shared between more cores (though that mainly matters in heavy MT loads). Still, the advantages of the 5950X in gaming mainly come down to clocks and the ability to keep more high-performance threads on the same CCX thanks to the extra cores. I don't know what you mean by "data access pathways" - the Infinity Fabric of each die is active no matter what, and the full L3 is accessible to all cores (the only difference is that two stops on the ring bus are disabled), so there's no real difference there (beyond the aforementioned advantage of more cores keeping workloads local, meaning less need to shuttle data over the IF).
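To make that "keep threads on the same CCX" bit concrete - and to be clear, this is just a rough illustrative sketch of manual CCX pinning on Linux/pthreads, not what the Windows scheduler or any game actually does, and the assumption that logical CPUs 0-7 map to CCX0 is mine (check lscpu on your own box, numbering varies with SMT and topology):

```c
// Rough sketch: pin a worker thread to the cores of a single CCX so the
// hot threads share one L3 slice and don't bounce traffic across the
// Infinity Fabric to the other CCD.
// Assumes Linux + glibc; logical CPUs 0-7 = CCX0 is a hypothetical mapping.
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    /* latency-sensitive game/engine work would run here */
    return NULL;
}

int main(void)
{
    pthread_t t;
    cpu_set_t ccx0;

    CPU_ZERO(&ccx0);
    for (int cpu = 0; cpu < 8; cpu++)   /* hypothetical: CPUs 0-7 = CCX0 */
        CPU_SET(cpu, &ccx0);

    pthread_create(&t, NULL, worker, NULL);

    int err = pthread_setaffinity_np(t, sizeof(ccx0), &ccx0);
    if (err != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);

    pthread_join(t, NULL);
    return 0;
}
```

Point being: whether it's the OS scheduler or explicit affinity, the win comes from the hot threads sharing one L3 rather than talking over the IF - the extra silicon on the second CCD doesn't make the cores on the first one any faster.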
But again: 9% in GB tells us nothing at all about gaming. It might be 9%, it might be -10%, it might be 15% - Geekbench does not give a reliable indication of gaming performance. Period. Heck, even AMD's own not-exactly-trustworthy data shows a range from 0% to 40%, with the average sitting toward the low end of that range. So we can't know, and as you say, we need to see third party benchmarks. Skepticism is good, but you're latching onto an irrelevant comparison, apparently because it confirms your skepticism, and that's a bad habit. Whether or not AMD's numbers turn out to be accurate, I'd recommend not arguing so hard for the validity of data that is verifiably irrelevant just because it happens to align with your expectations.
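And just to put toy numbers on the "average sits toward the low end of the range" point - these figures are made up for illustration, not AMD's actual per-title data:

```c
// Toy illustration (made-up numbers, NOT AMD's per-title results):
// a spread of 0% to 40% can still average well under 20% when most
// titles cluster toward the bottom of the range.
#include <stdio.h>

int main(void)
{
    double uplift_pct[] = { 0.0, 4.0, 7.0, 9.0, 12.0, 15.0, 18.0, 40.0 };
    int n = sizeof uplift_pct / sizeof uplift_pct[0];
    double sum = 0.0;

    for (int i = 0; i < n; i++)
        sum += uplift_pct[i];

    printf("average uplift: %.1f%%\n", sum / n);  /* ~13.1% for this set */
    return 0;
}
```

A single 40% outlier drags the top of the range way up without moving the average much.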
LOL.
This chip is Zen 3 at its base, except with deity knows how many tweaks to make the 3D cache bit work. Do you know how many hardware bugs picked up over the course of Zen 3's lifetime would have been fixed in this silicon at the same time? Do you know how many lessons learned from Zen 4 they would've belatedly applied to Zen 3 to try to squeeze some extra oomph out of it? Do you know how much it's benefited from literally years of process node refinements?
Sounds to me like you're overestimating the silicon changes made to a chip throughout its production run. Yes, tweaks and bug fixes happen, but in general those are quite small undertakings. And Zen3 has had the connection points for this extra cache since the first engineering samples, after all. It's taken time to get it to market, but this is not "new" in that sense. It's been in the works since the first iterations of the architecture.