Wednesday, January 4th 2023
AMD Ryzen 7000X3D Announced, Claims Total Dominance over Intel "Raptor Lake," Upcoming i9-13900KS Deterred
AMD today announced its Ryzen 7000X3D "Zen 4" desktop processors with 3D Vertical Cache technology, and with them a claim to the world's fastest processors for gaming. The company says it has beaten the Intel Core i9-13900K "Raptor Lake" in gaming by a margin it believes is comfortable enough to stay competitive even against the upcoming Core i9-13900KS. At the heart of these processors is the new "Zen 4" 3D Vertical Cache (3DV cache) CCD, which features 64 MB of L3 cache stacked on top of the region of the "Zen 4" CCD that holds the on-die 32 MB L3 cache. The 3DV cache runs at the same speed as the on-die L3 cache and is contiguous with it, so the CPU cores see 96 MB of transparently addressable L3 cache.
3DV cache proved to have a profound impact on gaming performance with the Ryzen 7 5800X3D "Zen 3" processor, helping it beat "Alder Lake" in gaming workloads despite "Zen 3" being a generationally older microarchitecture; AMD claims to have repeated this magic with the 7000X3D "Zen 4" series, enabling it to beat Intel "Raptor Lake." Unlike with the 5800X3D, AMD doesn't intend to make gaming performance a trade-off for multi-threaded creator performance, so it is introducing 12-core and 16-core SKUs as well, giving you gaming performance alongside plenty of muscle for creator workloads.

The series consists of three SKUs: the 8-core/16-thread Ryzen 7 7800X3D, the 12-core/24-thread Ryzen 9 7900X3D, and the flagship 16-core/32-thread Ryzen 9 7950X3D. The 7800X3D comes with an as-yet-unannounced base frequency above the 4.00 GHz mark, along with up to 5.00 GHz boost. The 7900X3D has a 4.40 GHz base frequency and boosts up to 5.60 GHz. The flagship 7950X3D ticks at 4.20 GHz base and boosts up to 5.70 GHz.
There's something interesting about the cache setup of the three SKUs. The 7800X3D has 104 MB of total cache (L2+L3), whereas the 7900X3D has 140 MB and the 7950X3D has 144 MB. The 8-core CCD in the 7800X3D has 64 MB of 3DV cache stacked on top of the 32 MB on-die L3 cache, resulting in 96 MB of L3 cache; with each of the 8 cores having 1 MB of L2 cache, we arrive at 104 MB of total cache. Logically, the 7900X3D and 7950X3D should then have 204 MB and 208 MB of total cache, respectively, but they don't.
While we await more details from AMD on what's happening here, there are two theories. One holds that the 3DV cache on the 7900X3D and 7950X3D is just 32 MB per chiplet, for 64 MB of L3 cache per CCD. The 140 MB total for the 7900X3D would hence come from (2 × 64 MB L3) + (12 × 1 MB L2), and the 144 MB for the 7950X3D from (2 × 64 MB L3) + (16 × 1 MB L2).
The second, more radical theory holds that only one of the two CCDs has 64 MB of 3DV cache stacked on top of its on-die 32 MB L3 cache, while the other is a conventional "Zen 4" CCD with just 32 MB of on-die L3 cache. The math checks out: (96 MB + 32 MB) of L3 plus 12 × 1 MB of L2 gives 140 MB for the 7900X3D, and plus 16 × 1 MB of L2 gives 144 MB for the 7950X3D. Dating all the way back to the Ryzen 3000 "Zen 2" Matisse dual-CCD processors, AMD has worked with Microsoft to optimize the Windows 10 and Windows 11 schedulers to localize gaming workloads to one of the two CCDs (using methods such as CPPC2 preferred-core flagging), so if these processors indeed have an asymmetric L3 cache setup between the two CCDs, the one with the 3DV cache would be preferred by the OS for gaming workloads.
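To make the cache accounting concrete, here's a quick sanity check of both theories in Python. Note that both splits reproduce AMD's announced 104/140/144 MB totals exactly, which is precisely why the totals alone can't distinguish them; the per-CCD L3 splits below are the theories' assumptions, not confirmed specifications.

```python
# Sanity-check both cache theories against AMD's announced totals.
# The totals are from the announcement; the per-CCD L3 splits are assumptions.

L2_PER_CORE_MB = 1  # "Zen 4" carries 1 MB of L2 per core

def total_cache_mb(l3_per_ccd_mb, cores):
    """Total L2+L3 in MB, given the L3 size of each CCD and the core count."""
    return sum(l3_per_ccd_mb) + cores * L2_PER_CORE_MB

# Theory 1: both CCDs get 32 MB of 3DV cache, i.e. 64 MB L3 per CCD.
assert total_cache_mb([64, 64], 12) == 140  # 7900X3D
assert total_cache_mb([64, 64], 16) == 144  # 7950X3D

# Theory 2: one CCD gets the full 64 MB 3DV stack (96 MB L3),
# the other is a plain CCD with 32 MB of on-die L3.
assert total_cache_mb([96, 32], 12) == 140  # 7900X3D
assert total_cache_mb([96, 32], 16) == 144  # 7950X3D

# The single-CCD 7800X3D is unambiguous: 96 MB L3 + 8 MB L2.
assert total_cache_mb([96], 8) == 104

print("Both theories reproduce the announced 104/140/144 MB totals.")
```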
In its presentation, AMD attaches the phrase "the world's best gaming processor" to the 7800X3D, not the 7950X3D. This should mean that despite its lower maximum boost frequency, the 7800X3D offers the best gaming performance of the three SKUs, and very likely features 96 MB of L3 cache on its CCD; the 7900X3D and 7950X3D would then feature either less 3DV cache per CCD, or the asymmetric L3 cache setup we theorized above.

In terms of performance, AMD is claiming anywhere between 21% and 30% gaming performance gains for the 7800X3D over the previous-generation 5800X3D. This can be attributed to the IPC increase of the "Zen 4" core and faster DDR5 memory. AMD says the 7800X3D should particularly shine in CPU-limited gaming scenarios, such as lower-resolution, high refresh-rate setups.
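If the asymmetric-cache theory is correct, it should be directly observable once the chips ship: Linux exposes per-core cache sizes through sysfs. Below is a minimal, hypothetical sketch that assumes cache index3 maps to L3, as it typically does on x86; on an asymmetric part, half the cores would report a larger L3 than the other half.

```python
# Hypothetical probe for an asymmetric L3 setup on Linux.
# Assumes /sys/.../cache/index3 is the L3 level, as is typical on x86.
from pathlib import Path

def l3_size_per_cpu():
    """Map each logical CPU to the reported size of its L3 cache."""
    sizes = {}
    for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        size_file = cpu / "cache" / "index3" / "size"
        if size_file.exists():
            sizes[cpu.name] = size_file.read_text().strip()
    return sizes

if __name__ == "__main__":
    for cpu, size in l3_size_per_cpu().items():
        print(cpu, size)  # a symmetric part reports the same size everywhere
```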
The 7950X3D is claimed to beat the Core i9-13900K in gaming performance by anywhere between 13% and 24% in the four tests AMD showed, while also offering big gains in multi-threaded productivity benchmarks. Especially in workloads involving large streaming data, such as file compression and DaVinci Resolve, the 7950X3D is shown holding 24% to 52% performance leads over the i9-13900K (deficits we doubt the i9-13900KS can make up).
The Ryzen 7000X3D processors will be available from February 2023, and should be drop-in compatible with existing Socket AM5 motherboards, with some boards requiring a BIOS update. The USB BIOS Flashback feature is standardized by AMD across motherboard brands, so this shouldn't be a problem.
177 Comments on AMD Ryzen 7000X3D Announced, Claims Total Dominance over Intel "Raptor Lake," Upcoming i9-13900KS Deterred
If it's getting the job done, I think it's fantastic you've had the 2700K sweet sailing for this long. Looks like you've had a blast at 4K, and it makes sense with most of the weight probably thrown over at the GPU end.
With my 2700K, I kept hitting a brick wall with Battlefield. Although 100% playable, it fell short on visual smoothness. It's a difficult one to explain, with FPS and frame times being decent, but I could still sense some lumpy roughness, some jiggery-jaggery boo, in fast-paced scenes or dense environments. The first assumption was the GPU, which was upgraded, and I could still feel some irregularity. Eventually I reinstalled Windows for one last attempt and then gave up... grabbed a 4790K. With each Battlefield release the lack-of-smooth offender returned, and each time a jump up a couple of gens resolved the problem, which eventually landed me on a 9700K. Not gonna lie, it wasn't just observable performance punching the upgrade ignition button; I sadly suffer from the upgrade-itch too.

Now the current BF is starved for threads, and the 8-thread 9700K (which still does a decent job) will sadly be put to rest. I'm a buff for screen-time silkiness, and something like a 7800X3D/5800X3D sounds like a sound plan for a 3-year excursion (or 2, you know, the upgrade-itch, hehe). The goal posts stayed put for me when considering CPU upgrades... but moved a couple of miles far and beyond when considering GPU upgrades. The 40-series (or RDNA3) was the last stop, the unyielding affirmative buy... and then NV dropped those ridiculous MSRPs, crushed the hope and glory, and left me traumatised, lol (OK, a bit dramatic; simple as, no thanks, ain't gonna withdraw from me wallet to fill the corps' already fattened-up pockets).
Below are some average FPS differences between the 13900K and 5800X3D showing that at 4K the 13900K averages 1.3% faster, but 6.2% faster at 1080p. These results don't resolve whose claim is right regarding lows, nor do they offer an actual comparison to the 12900KS. They do, however, show how posting 720p results is not useful for arguing which CPU is going to be faster at 4K, as the lower-clocked, higher-cache CPU is at a major disadvantage as the resolution is reduced below 4K (a sketch of how such averages are typically derived follows the charts below).
Maybe you or @Crylune would like to actually provide 1% and 0.1% low results at 4K between the 5800X3D and 12900KS (and I suppose the 12900K, since you claimed that was also faster) so the thread is more informative?
[2160p and 1080p average-FPS charts]
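For what it's worth, multi-game averages like the 1.3%/6.2% figures above are typically computed as a geometric mean of per-game FPS ratios rather than a plain arithmetic average, so one outlier title can't dominate the result. A minimal sketch with made-up numbers:

```python
# Sketch: deriving a "CPU A is X% faster on average" figure from per-game
# results using a geometric mean of FPS ratios (illustrative numbers only).
from math import prod

# fps_cpu_a / fps_cpu_b for each game in the suite (made-up values).
ratios = [142 / 139, 171 / 176, 208 / 195, 117 / 118]

geomean = prod(ratios) ** (1 / len(ratios))
print(f"CPU A averages {100 * (geomean - 1):+.1f}% vs CPU B")
```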
It's very common to see a combination of lower maximums and better control over outliers. It relates to bursty frequency behaviour as well: if the CPU can boost high, there's a larger gap between boost and base clock, so your peak FPS might be higher, but your worst numbers are also worse. Why do you think Intel is progressively lowering base clocks gen over gen to attain higher boost? It's not to help minimums; it's to shine in maximums. In GPUs, pre-rendered frames and frame smoothing create some of the same effects: maximum FPS is sacrificed to use the available time to start on the next frame earlier.
X3D isn't about peak frequency, it's about peak consistency, and it shows everywhere. These CPUs are most useful for gaming because they elevate performance in precisely those situations where you dip the hardest, because you're missing the required information at the right time. That's where the cache shows its value best, and that's where it differs from every other CPU.
Intel can keep up in a large number of games because they're well managed in CPU load; this applies to most triple-A content and most console content, but it absolutely does NOT apply to simulations that expand as you head into the end-game, where almost every generated frame requires lots of info to be collected to present the correct next step in the simulation, the amount increasing the further your army/village/galactic empire expands.
Who cares if you can run a shooter at 250 or 300 FPS? That's basically the gist of it. What matters is whether you can keep your minimums in check. Only the X3Ds offer a technology that does that regardless of the frequency the CPU runs at.
And this, in a nutshell, is why most CPU reviews don't manage to properly cover the impact of CPU performance on gaming. Measuring lows is the way, and in fact it should be the defining factor in your CPU choice, NOT max/avg FPS. The things that damage the experience most are the dips, not the peaks. The KS is a minor difference; the X3D is real innovation.
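Since 1% and 0.1% lows keep coming up in this thread, it's worth spelling out how those figures are typically derived from a frame-time capture (e.g. from PresentMon or CapFrameX). A minimal sketch; exact definitions vary slightly between tools and reviewers:

```python
# Sketch: computing 1% and 0.1% lows from a frame-time log in milliseconds.
# Here an "N% low" is the average FPS over the slowest N% of frames,
# one common convention (some tools use the Nth-percentile frame time instead).

def percent_low_fps(frametimes_ms, fraction):
    worst = sorted(frametimes_ms, reverse=True)  # slowest frames first
    n = max(1, int(len(worst) * fraction))       # always keep at least one frame
    avg_ms = sum(worst[:n]) / n
    return 1000.0 / avg_ms

frametimes = [6.9, 7.1, 7.3, 8.0, 6.8, 25.4, 7.0, 7.2]  # made-up sample
print(f"1% low:   {percent_low_fps(frametimes, 0.01):.1f} FPS")
print(f"0.1% low: {percent_low_fps(frametimes, 0.001):.1f} FPS")
```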
And as pointed out above, there are tons of in-game situations where you're playing not a canned benchmark run but a real game, where the real CPU load is many times higher than you see in reviews. Stuff like Stellaris or Cities: Skylines wants every percent of CPU performance it can get.
Also, Intel CPUs never drop from their boost clocks during gaming. The 12900K runs at 4.9 GHz 100% of the time, the 13900K runs at 5.5 GHz 100% of the time, etc.
To the point now: I have a 12900K and a 13900K with 7600 C34 RAM. If anyone wants to test their X3D and see how much better it is compared to Intel's offerings, just come forward.
I didn't have any opinion on this. I tried to fact-check both claims that were made; your claim was the easiest to prove or disprove, while his claim is harder, given that 1% and 0.1% low data at 4K is generally compiled for GPU reviews, not CPU reviews, and the KS is not reviewed as much.
Given the difficulty of finding 0.1/1% lows at 4K for a 12900KS vs. a 5800X3D, I won't be spending more time on this. It was interesting at first to see if one would be head and shoulders better than the other, but it appears the X3D only has a slight lead, and based on the K results I fully expect the X3D to be similarly close to the KS, so it's not worth digging further for an answer.
Here's the 12900K/S & 5800X3D at 4K in a TPU benchmark; they are VERY close in averages. It doesn't surprise me at all that the X3D, with its extra cache, could beat the KS in 0.1/1% lows, as it only trails the KS by 0.9% in average FPS:
You guys got me interested in looking into the 1% lows between the two discussed models... a 20-game average:
[source: eTeknix]
In short, these 1% low averages put both the 5800X3D and 12900K on an equal war path... practically the same. It's a given that both trade blows depending on the titles played and resolutions applied, leaving each inquirer to come to their own conclusion based on their setup and targeted games.
Again, if I were going 12th Gen Intel (for gaming) I wouldn't touch the 12900K... just silly beans, unless non-gaming core-hungry workloads suggest otherwise. The 12700/12700K is what makes sense, or the 5800X/5800X3D. Oddly enough, I've seen the Zen 3 X3D even trading blows with 13th Gen in a small number of titles (probably compared to a 13600K at a given resolution; might need to revisit the stats), but overall 13th Gen easily came out ahead.
Anyone on either 12th Gen or an AM4 5000-series chip should be over the moon with this sort of cutting-edge processing power, and yet the WWW is full of people beating the third leg against the non-conformist militant wall of futility.
I specifically talked about the relative impact of frequency. Frequency is what the CPU core runs at; it's not indicative of how fast the CPU can fetch data. You can believe whatever you want, but there are countless examples where the X3D shines and no Intel CPU can reach it, and they're specifically the highest-CPU-load cases in gaming. In other words: where it matters most.
And even under your own weird take on how these Intel CPUs work, you can't deny there are already games (like Cyberpunk... as if that's not writing on the wall rather than an outlier) that pull these CPUs down to base clock because they're exceeding their turbo limits. In fact, your ideas don't match reality in any way, shape, or form, except perhaps from your own N=1 perspective, but then all I can say is you ain't gaming a lot, or you're playing the games where the impact just isn't there. I've already pointed out as well that specific types of games excel on X3Ds. Comparing the bog-standard bench suite, even if it's big, isn't really doing that justice. Not a single reviewer plays a Stellaris end-game or a TW: Warhammer 3 campaign at turn 200.
I agree with you that there are games where the X3D is king. But the same applies to the 12900K (and I'm not even mentioning the 13900K, which is much faster). Spider-Man, Spider-Man: Miles Morales, Cyberpunk, etc.: the 12900K just poops on the X3D by a big juicy margin. Especially if you run around in-game rather than the built-in bench, the differences are staggering. I'm talking about close to 50% differences.
I'm absolutely ready to back up my statements with videos. I have the 12900K and the 13900K running with 7600 C34; if anyone has the X3D and wants to test the above games, let's do it.
This is just parroting the cherry-picked marketing examples that were given exactly this treatment to create a buy incentive for the high end. I consider them about as relevant as Minesweeper performance, honestly.
From what I've gathered, in those games a big part of the additional CPU load is in fact caused by graphics options: DLSS 3, frame generation, etc. I suppose that's where Intel can put its core count to work. It's an interesting development nonetheless, both the big/little approach and the cache-heavy CPU, in how they accelerate gaming. There is definitely untapped potential in CPUs to put to use. I would definitely be interested in this!
This is a 12900K with just 6000 RAM, at max settings + RT: