3 channels of DDR5 would be nice to see from Intel or AMD…
To some extent, but since both of them have a two-tier high-end workstation lineup, it would make more sense to take the lower tier and scale it down to a more compact socket (~2500 pins) to reduce costs and add standard cooler compatibility. E.g.:
Threadripper: 48+24 PCIe lanes, 4-channel memory vs. Threadripper Pro: 128 PCIe lanes, 8-channel memory
Sapphire Rapids WS: 2000 series: 64 PCIe lanes, 4-channel memory vs. 3000 series: 112 PCIe lanes, 8-channel memory.
Nor would it be wise to scale up the respective mainstream platforms to 3 memory channels and a few more PCIe lanes, as the amount needed to make it worthwhile would approach the high-end workstation platforms anyway, and it would only drive up costs for basic office users and pure gamers.
If anything, I would prefer to lower the entry point for high-end workstation and position it like the old HEDT lineups, and actually lower the capabilities of the mainstream platforms next gen, basically "moving" the customers of the >$400 chips over to the new HEDT platform and making mainstream just a cheap ~100 W TDP platform with fewer PCIe lanes.
… or even Nvidia, if they ever convincingly started making interesting custom ARM desktops that rival x86 dominance, which is difficult due to the software scope of x86.
Perhaps for specific server workloads, but not for general desktop use, as ARM will never be able to compete with x86 when it comes to logically dense code, which matters a lot since user applications naturally contain more logic than pure math. Logic and memory operations require more instructions on ARM. And x86 has, since the Pentium Pro, featured instructions such as conditional move, which greatly reduces the amount of actual branching in the code, which in turn results in fewer pipeline stalls and has been essential for making good "responsive" desktop CPUs for ages. Over the years x86 has picked up various small additions, such as faster memory copying for larger chunks, all of which add up to either make the code more computationally dense or simplify the control flow, both of which help the CPU front-end achieve higher throughput. There is also much more extensive SIMD support on x86.
ARM designs, on the other hand, rely much more heavily on accelerated features (ASICs) to "compete" with x86. That's why your cell phone can browse the Internet or a Mac can do video editing. While ARM can come reasonably close in pure math, it falters with more complex logic. Accelerated features may be more energy efficient in some cases, but they require customized software to utilize, and the hardware very quickly becomes obsolete.
Intel is working on various advancements, incl. APX, which promises to further reduce branching logic in compiled code, and in turn leave less code that can cause any kind of pipeline stall. If successful, this will unlock a lot of potential for many applications, although it will require recompilation. It would also make it easier to feed many more execution ports (like >2x of today), resulting in massive IPC gains. I don't know whether these advancements are enough to harness all the potential that I see, and sooner or later something will eventually replace x86, but it's not going to be ARM; that would be taking a couple of steps backwards.
But one thing needs to be said: any significant IPC gain is going to scale far better than throwing more cores into CPUs for user-interactive workloads (only async/batch workloads scale "indefinitely" with core count), and we will continue to see diminishing returns with higher core counts. So logically, architectural improvements should excite users more than increased core counts. From a theoretical standpoint, there is far more potential left to be utilized in instruction-level parallelism than in multithreading.
Edit:
I found an ASUS Pro WS TRX50-SAGE WIFI + AMD 7970X CPU on eBay for about $2000, but I don't trust the seller. Too risky since they can't confirm any parts are working. My hopes are dashed for now.
You made the right call. If they have the required parts yet can't confirm it's 100% working, then run far away. They do this so you can't demand a refund when you find something wrong, whether general instability or a specific defective feature on the motherboard, or at the very least they suspect something is wrong.
Statistically speaking, it's far more likely the motherboard is bad than the CPU, so when buying used, never pay a lot for a motherboard, or take into account that you might need to source another one.
You can certainly find great deals, especially if you don't have to add VAT and expensive shipping, but buy from reputable sellers. Such deals aren't as plentiful as they were back in the LGA2066 days, since the sales volume is much lower now.