It would take something radical from Intel for me to go with them for my next build. My current Ryzen build is almost 4 years old and due for an upgrade. It runs cool and quiet; I had a couple of issues in the first 6 months after the Windows 11 release, but who didn't at that time? Once that was sorted, everything was back to running flawlessly. Considering a Ryzen 9700X for my next build.
Intel needs a 3-5x performance-per-watt increase for me to go back (for real, just look at the 7800X3D benchmarks right here at TPU; Intel's efficiency is appalling). A new architecture and moving away from their ancient lithography to Intel 20A might do it.
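For anyone curious how that efficiency gap is usually worked out from gaming benchmarks, here's a quick Python sketch. The FPS and power figures below are made-up placeholders, not actual TPU numbers, just to show the arithmetic:

```python
# Rough perf-per-watt comparison with made-up placeholder numbers
# (NOT actual TPU results), showing how the efficiency gap is
# typically derived from gaming benchmark averages.

def perf_per_watt(fps: float, package_power_w: float) -> float:
    """Average frames per second delivered per watt of CPU package power."""
    return fps / package_power_w

# Hypothetical averages for a CPU-bound gaming suite:
ryzen_7800x3d = perf_per_watt(fps=150.0, package_power_w=50.0)   # ~3.0 FPS/W
intel_14900k  = perf_per_watt(fps=155.0, package_power_w=150.0)  # ~1.0 FPS/W

print(f"7800X3D: {ryzen_7800x3d:.2f} FPS/W")
print(f"14900K:  {intel_14900k:.2f} FPS/W")
print(f"Efficiency gap: {ryzen_7800x3d / intel_14900k:.1f}x")
```

With placeholder numbers like these the gap works out to roughly 3x, which is why anything short of a 3-5x improvement wouldn't close it.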
What that chart doesn't include, though, is results with E-cores disabled. The 7800X3D's standing is indicative of the stall in core and thread scaling we're stuck in because consoles are still limited to 8 cores and 16 threads. Disable the E-cores and the 14700K suddenly becomes a better-binned 14600K with two extra P-cores. The 7800X3D and other 8c/16t CPUs sit in the sweet spot of what developers are targeting right now with the current consoles on the market, but expecting that to remain the case indefinitely is fool's gold. The 7800X3D also has less need for high-quality DDR5 thanks to its slab of stacked cache; it still benefits from faster memory, just not as much as Intel chips do with their smaller caches and stronger IMCs.
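If you want to approximate "E-cores off" for a single game without rebooting into the BIOS, one option is per-process CPU affinity. A minimal Python sketch using psutil, assuming a hypothetical Raptor Lake layout where the 8 P-cores with Hyper-Threading enumerate as logical CPUs 0-15 (check your own core layout in Task Manager or HWiNFO before relying on this):

```python
# Approximate "E-cores disabled" for one process by pinning it to
# the P-cores only. ASSUMPTION: the 8 P-cores with HT enumerate as
# logical CPUs 0-15 and E-cores start at 16; this varies, so verify
# your layout first.

import psutil

P_CORE_LOGICAL_CPUS = list(range(16))  # 8 P-cores x 2 threads, assumed layout

def pin_to_p_cores(pid: int) -> None:
    """Restrict a process (e.g. a running game) to the assumed P-core set."""
    proc = psutil.Process(pid)
    proc.cpu_affinity(P_CORE_LOGICAL_CPUS)
    print(f"{proc.name()} now limited to logical CPUs {P_CORE_LOGICAL_CPUS}")

# Example: pin the current process.
pin_to_p_cores(psutil.Process().pid)
```

It's not identical to the BIOS toggle (the ring clock behavior differs), but it's close enough to see whether a given game cares about the E-cores at all.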
I don't really get why W1zzard tested that at 1080p, though, when in other cases 720p is used to better represent a CPU bottleneck (a rough sketch of why below). I don't think it would change the rankings dramatically, but it probably would push CPU core and thread usage higher in some scenarios. Anyway, we need to transition away from 8c/16t consoles before we see progress beyond that become standard. You can find examples where developers have targeted better hardware, but it won't become common until we see a shift in the largest audience developers target, which is the console market.
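To illustrate why lower resolutions expose CPU differences, here's a toy model where frame time is roughly max(CPU time, GPU time) and only the GPU side scales with pixel count. All the millisecond figures are illustrative placeholders, not measurements:

```python
# Toy model of why reviewers drop resolution to expose the CPU:
# per-frame time is roughly max(CPU time, GPU time), and GPU time
# scales with pixel count while CPU time does not.

def fps(cpu_ms: float, gpu_ms_at_1080p: float, pixel_ratio: float) -> float:
    gpu_ms = gpu_ms_at_1080p * pixel_ratio  # GPU load scales with resolution
    frame_ms = max(cpu_ms, gpu_ms)          # the slower side sets the pace
    return 1000.0 / frame_ms

# Pixel counts relative to 1080p: 4K is 4x, 720p is ~0.44x.
for label, ratio in [("4K", 4.0), ("1080p", 1.0), ("720p", 0.44)]:
    fast_cpu = fps(cpu_ms=4.0, gpu_ms_at_1080p=5.0, pixel_ratio=ratio)
    slow_cpu = fps(cpu_ms=6.0, gpu_ms_at_1080p=5.0, pixel_ratio=ratio)
    print(f"{label}: fast CPU {fast_cpu:.0f} FPS vs slow CPU {slow_cpu:.0f} FPS")
```

In this model both CPUs land at identical FPS at 4K (GPU-bound), a gap opens at 1080p, and the full CPU difference only shows at 720p, which is the whole point of dropping resolution in CPU reviews.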
This really isn't about which chip is better for which use case under which testing scenario, though. This is about whether Intel made a bad decision or blunder, and "yes and/or maybe" is about all we've gathered on the matter to this point.