You can find B-series AM5 motherboards for $110 nowadays. At the same price as the Ryzen 7600, the best you can get from Intel is the 6P+4E 14400F. E-cores might as well not exist unless you are doing heavily threaded work that specifically benefits from them; otherwise they are a detriment to games if a game thread ends up on one.
The point of going AM5 is that the CPU doesn't need to last for 10 years. Neither the Intel nor the AMD option will be performant in 10 years, but on AM5 that doesn't matter, because you get a ton of upgrade options with minimal effort. The same does not apply if you go with the Intel platform.
No, Intel still uses more power in mixed workloads and in games that only load a limited number of cores. The issue for Intel is two-fold: 1) they push their CPUs too hard out of the box, and 2) their architecture is not as efficient. If you look at the scaling graphs between Zen 4 and Intel 14th gen, Zen 4 gets much closer to stock performance at lower wattage values. Intel's power consumption can be decent when properly tuned, but to say they are more efficient than Zen is laughable.
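To make the scaling point concrete, here's a rough Python sketch of the comparison reviewers do when they re-run a multi-threaded benchmark at lower package-power limits. All numbers here are illustrative placeholders I made up, not measured results; only the shape of the comparison matters:

```python
# Hypothetical (power limit in watts, fraction of stock nT performance)
# pairs -- illustrative placeholders, NOT measured data.
zen4_scaling = [(230, 1.00), (142, 0.97), (88, 0.90), (65, 0.82)]
rpl_scaling = [(253, 1.00), (142, 0.88), (88, 0.76), (65, 0.66)]

def perf_per_watt(points):
    """Relative performance per watt at each power limit."""
    return [(watts, round(perf / watts, 4)) for watts, perf in points]

def retained_at(points, limit):
    """Fraction of stock performance retained at a given power limit."""
    return dict(points)[limit]

# With these made-up curves, at a matched 88 W limit the Zen 4 part keeps
# 90% of stock performance versus 76% for the Raptor Lake part -- that is
# what "hits much closer to stock performance at lower wattage" means.
print(retained_at(zen4_scaling, 88), retained_at(rpl_scaling, 88))
```

The same framing explains the "limit TDP with nearly zero impact" claim: if the top of the curve is nearly flat, cutting the power limit costs very little performance.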
Every thread, you keep pushing the same cherry-picked narrative as if people can't read the reviews. First you imply that ST is the only metric that matters, then you cherry-pick the most efficient Intel chips and present them as representative of the whole stack, which is misleading to say the least. X3D chips are efficient because of the cache, not because they are UV'd; X3D chips get even more efficient if you actually UV them. Non-X3D chips are pushed out of the box for maximum performance, which is why you can limit their TDP with nearly zero impact on performance.
And yes, Intel has a slight ST advantage with its top-tier processors, but that does not hold down the entire stack. You seem to be implying that OP will somehow benefit from the 14900K's slightly higher ST performance, when a much lower-clocked, lower-cache 14400F or an older Intel processor has nothing to do with that. It's not relevant to the discussion, just like focusing on ST performance is silly when applications are not single-threaded anymore, which is why most review outlets focus on mixed and full workloads, both synthetic and real. We can all see the charts showing Intel and AMD trading blows depending on the benchmark; the situation is far more nuanced than you are pretending.
This is another great example of cherry picking while ignoring the scenario presented in this thread. First off, OP is not using a 4090 with a 13900K. Not even close. The differences presented in this chart will be much smaller, or non-existent, on lower-end hardware.
Second, the graph was captured at low graphics settings at 1080p, which is intended to minimize GPU bottlenecks, not to represent a realistic scenario. How many people are playing BG3 at 1080p low with a 4090 and a 13900K? Excluding reviewers, almost no one.
Third, depending on when this chart was made and in which area of the game, it likely represents neither current game performance nor average game performance. Given that it's 1080p low settings, I'm going to assume they also picked the most CPU-demanding part of the game (the city, Baldur's Gate), which stuttered regardless of memory or CPU until many patches in.
Someone who upgraded from a 1600 to a 5600X3D would see a massive performance uplift for less than they would have spent on a single 9900K.
Buying good value parts more frequently is far better than spending big on a single product.
Where to begin.
Someone who used a 1600 would have had to live with that Zen 1 crap for six years before spending more money on another six-core that is slightly faster (5%) in single-core and slightly slower (10%) in multi-core than a stock 9900K, with the 5600X3D being around ~20-30% faster in games due to the 3D V-Cache. And only for the six months since its release, not the previous six years, assuming you even managed to buy one of those limited-availability CPUs. Or they could have just bought the 9900K and still be enjoying that good performance now, as they would have been from day one.
You can go on about the "future proof" AM5 socket, with its rocky launch and EXPO issues, but we're only guaranteed ~one more year of CPU support (2025). Besides, I think the CPU upgradability of the motherboard is pretty much irrelevant for someone desiring a platform upgrade every ten years.
If you want to complain about Intel K-series CPUs using 100-150 W in gaming (with more than twice the cores, paired with a 4090, so the CPU is at full throttle) versus 50-100 W for Zen 4 X3D (which starts at about 25% more money than the Intel CPUs being suggested here, and which is itself a "cherry picked" example of Zen 4 efficiency, the non-X3D chips being much less efficient), then look at a non-K 14700 instead. It isn't pushed to the limit out of the box, and its voltage curve compares to the K series the same way the X3D Zen chips compare to the X chips: much more efficient out of the box at the cost of around 500 MHz of max boost. But if 50 W really matters that much to you, then you might as well write off the entire RDNA3 lineup against the more efficient Ada cards, and I've never seen you do that.
You're welcome to draw whatever conclusion you want from reviews, but please don't misquote me or imply that an all-core synthetic load on a power-limit-unlocked KS CPU somehow means a non-KS chip will be inefficient in gaming.
If you can't infer how improved minimum FPS is relevant in a CPU-limited scenario (100 FPS is well within the capabilities of a 4070-class card), regardless of the GPU being used, then I suggest you do some research on what it means to be CPU bottlenecked. For example, you can be CPU bottlenecked in a game like the recently released Dragon's Dogma 2 even with a 3060-class card, as people are finding out. Or in games like Cyberpunk, or simulation games; the list goes on. Implying you need a 4090 to notice improved CPU performance is disingenuous, and you know it.
As for E-cores, they're not for gaming. That's the point. You get all eight P-cores for gaming and foreground tasks; the E-cores handle everything else. Every test of Cyberpunk by TPU and other reviewers has shown worse FPS with E-cores disabled (and that's with nothing running in the background; try running Discord, browsers, etc.).
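For what it's worth, the "E-cores off" A/B runs reviewers do can be roughly approximated without a BIOS toggle by restricting the game's CPU affinity to the P-cores. A minimal Linux-only sketch; the core numbering below is an assumption for a hypothetical 8P+12E part, so check `lscpu --extended` on real hardware first:

```python
import os
import subprocess

# ASSUMED logical-CPU layout for a hypothetical 8P+12E chip: with
# Hyper-Threading, logical CPUs 0-15 are the P-core threads and 16-27
# are the E-cores. Verify with `lscpu --extended` before relying on it.
P_CORE_CPUS = set(range(16))

def run_pinned(cmd, cpus):
    """Launch cmd with its CPU affinity restricted to `cpus` (Linux only).

    Pinning a game to the P-core CPUs roughly emulates "E-cores disabled"
    for an A/B benchmark run, without rebooting into the BIOS.
    """
    return subprocess.run(cmd, preexec_fn=lambda: os.sched_setaffinity(0, cpus))

# Example (hypothetical binary name): run_pinned(["./game"], P_CORE_CPUS)
```

The affinity approach isn't identical to disabling E-cores in firmware (the scheduler and background processes still see them), but it's close enough to show whether a given game gains or loses FPS from them.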
End of the day, if OP takes the advice and starts from scratch rather than going with an almost-four-year-old platform, he's gonna be better off for the next ten years with 14 or 20 cores, not six.
(I don't count A620 motherboards as being worth the silicon they're made of, so yes, AM5 motherboards are still more expensive than Intel ones, especially DDR4 Intel boards.)
It's also worth mentioning that TPU's testing comparing Zen 4 and Raptor Lake CPUs uses 6000 MT/s memory on both platforms, as that's all AM5 can realistically support without going out of sync and losing gaming performance. Given the 6400-6800 MT/s jump I showed earlier, I think it's safe to conclude that Intel at 7000 MT/s is a fair bit faster in gaming than Intel at 6000 MT/s.