Wednesday, June 12th 2024
AMD Says Ryzen 9000 Series Won't Beat 7000X3D Series at Gaming
AMD's upcoming Ryzen 9000 "Granite Ridge" desktop processors based on the "Zen 5" microarchitecture won't beat the Ryzen 7000X3D series at gaming workloads, said Donny Woligroski, the company's senior technical marketing manager, in an interview with Tom's Hardware. The new "Zen 5" chips, such as the Ryzen 7 9700X and Ryzen 9 9950X, will come close to the gaming performance of the 7800X3D and 7950X3D, but won't quite beat it. The new processors will, however, offer significant generational performance uplifts in productivity workloads, particularly multithreaded workloads that use vector extensions such as VNNI and AVX-512. The Ryzen 7 7800X3D remains the fastest gaming desktop processor you can buy; in our testing, it edges out even Intel's Core i9-14900KS.
Given this, we expect the gaming performance of processors like the Ryzen 7 9700X and Ryzen 9 9950X to end up closer to that of the Intel Core i9-13900K or i9-14900K. Gamers with a 7000X3D series chip, or even a 14th Gen Core i7 or Core i9 chip, don't have much to look forward to. AMD confirmed that it's already working on a Ryzen 9000X3D series—that's "Zen 5" with 3D V-cache technology, and it sounds confident of holding on to the title of having the fastest gaming processors. This doesn't seem implausible. Intel, in its recent "Lunar Lake" architecture reveal, went deep into the nuts and bolts of its "Lion Cove" P-core, where it claimed that the core posts a 14% IPC increase over the "Redwood Cove" P-core powering "Meteor Lake," which in turn has similar IPC to the "Raptor Cove" P-core powering the current 14th Gen Core processors. Intel intends to use "Lion Cove" P-cores even in its Core Ultra "Arrow Lake-S" desktop processors. Given that 3D V-cache gave "Zen 4" a 20-25% boost in gaming performance, a similar uplift to "Zen 5" could make the 9000X3D series competitive with "Arrow Lake-S," if Intel's claim of a 14% IPC gain for the "Lion Cove" P-core holds up. That said, AMD stated in the interview that 3D V-cache may not add the kind of gaming performance gains to "Zen 5" that it did to "Zen 4."
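As a rough sanity check on that arithmetic, here is a back-of-envelope sketch in Python using only the figures quoted above. The assumption that a vanilla Ryzen 9000 part roughly ties current 14th Gen gaming performance is ours, not AMD's or Intel's, and IPC gains don't translate one-to-one into gaming frame rates.

```python
# Back-of-envelope estimate, not a benchmark. Inputs are the percentages
# quoted in this article plus one placeholder assumption (zen5_baseline).

raptor_cove_gaming = 1.00            # normalize current 14th Gen Core gaming performance
lion_cove_ipc_gain = 0.14            # Intel's claimed IPC gain for "Lion Cove"
arrow_lake_estimate = raptor_cove_gaming * (1 + lion_cove_ipc_gain)

zen5_baseline = 1.00                 # assumption: Ryzen 9000 roughly ties 14th Gen in gaming
vcache_gain_low, vcache_gain_high = 0.20, 0.25   # uplift 3D V-cache gave "Zen 4"

x3d_low = zen5_baseline * (1 + vcache_gain_low)
x3d_high = zen5_baseline * (1 + vcache_gain_high)

print(f"Arrow Lake-S estimate : {arrow_lake_estimate:.2f}x")          # 1.14x
print(f"Ryzen 9000X3D estimate: {x3d_low:.2f}x to {x3d_high:.2f}x")   # 1.20x to 1.25x
```

On those assumptions the 9000X3D range sits above the Arrow Lake-S estimate, which is why AMD's caveat that V-cache may not scale as well on "Zen 5" is the number to watch.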
AMD is building the "Zen 5" 8-core CCD on the 4 nm foundry process, which is expected to have the TSV foundation for stacked 3D V-cache memory, but there's an ace up AMD's sleeve: the company hasn't ruled out the possibility of "Zen 5" having an expandable dedicated L2 cache. To a question by Tom's Hardware on whether the L2 cache is expandable on "Zen 5," AMD replied: "Absolutely, if you get to finer-grain 3D interconnect. So we're at 9-micron through silicon via (TSV) pitches today. As you go down to, you know, 6-, 3-, 2-micron and even lower, the level of partitioning can become much finer-grained." It's important to note that this is not a confirmation on AMD's part; AMD didn't define the specific TSV pitch required for an expandable L2 cache.
If true, this means that in the 9000X3D series, the company could give the CCD a larger 3D V-cache chiplet that not only expands the on-die L3 cache from 32 MB to 96 MB, but also increases the size of the dedicated L2 cache of each core. The "Zen 5" microarchitecture gives each core 1 MB of dedicated L2 cache, which the new 3D V-cache chiplet could expand.
The L2 cache operates at a higher data rate than the shared L3 cache, and uses a faster SRAM physical medium. The next-gen 3D V-cache chiplet could hence feature two distinct kinds of SRAM—the 64 MB L3 SRAM that expands the on-die 32 MB L3 SRAM, and eight L2 SRAM units that expand each of the eight on-die L2 caches.
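For context, here is a quick sketch of the cache math that scenario implies. The 64 MB stacked L3 figure matches the existing V-cache design, while the extra 1 MB of stacked L2 per core is purely an illustrative assumption, not an announced spec.

```python
# Hypothetical cache totals for an expanded 3D V-cache chiplet.
# The stacked L2 figure is an illustration only, not an announced spec.

cores_per_ccd = 8
on_die_l2_per_core_mb = 1       # "Zen 5" dedicated L2 per core
on_die_l3_mb = 32               # shared L3 on the CCD
stacked_l3_mb = 64              # V-cache L3 slice (as on current X3D parts)
stacked_l2_per_core_mb = 1      # assumed extra L2 per core (hypothetical)

total_l3_mb = on_die_l3_mb + stacked_l3_mb
total_l2_mb = cores_per_ccd * (on_die_l2_per_core_mb + stacked_l2_per_core_mb)

print(f"L3: {on_die_l3_mb} + {stacked_l3_mb} = {total_l3_mb} MB shared")                  # 96 MB
print(f"L2: {cores_per_ccd} x {on_die_l2_per_core_mb + stacked_l2_per_core_mb} MB = {total_l2_mb} MB dedicated")  # 16 MB
```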
The L2 cache is expected to play a major role in the gaming performance of next-gen processors, and Intel has significantly expanded it for "Lion Cove" P-cores with both "Lunar Lake" and the upcoming "Arrow Lake." On "Lunar Lake," the four P-cores each have 2.5 MB of dedicated L2 cache; on "Arrow Lake," the same P-core is expected to get 3 MB. So AMD probably understands the importance of fattening not just the L3 cache, but also the L2.
The rumor mill is abuzz with reports of AMD launching the Ryzen 9000X3D series within 2024, with some sources pointing to a Q4 2024 debut, which would time it alongside Intel's launch of the Core Ultra "Arrow Lake-S" desktop processors.
Source:
Tom's Hardware
141 Comments on AMD Says Ryzen 9000 Series Won't Beat 7000X3D Series at Gaming
Personally, I find a stable 60 FPS much more enjoyable than an unstable 120 with dips. Correct, but to me, this isn't a matter of being right or wrong, but a matter of peace of mind. Whether you find the chirping of the bird peaceful or annoying is all in your head. I know Star Wars quotes are a cliché, but "your focus determines your reality". If you let something bother you, then it will. This "focus" of other people is what I'm trying to understand.
People come here to discuss and dissect all things tech. It's their passion, it's what they care about. I can completely understand how someone would think it's ridiculous (my wife does - she just thinks I'm 'playing on my computer' again). But why try to change minds? Would we go into a bread making forum and tell people they're wasting their time and it's silly - just run down to the market and pick up a loaf of Wonder* bread?
* I don't know if Wonder bread is a brand in the UK - It's plain white supermarket factory baked bread here in the States.
videocardz.com/newz/amd-reportedly-considering-higher-tdp-for-ryzen-7-9700x
Also, MSI has new BIOS releases for AM5 with Ryzen 9000 CPU support:
videocardz.com/newz/msi-releases-agesa-1-2-0-0-bios-for-amd-x670-b650-a620-improvements-for-ryzen-9000-cpus-and-geforce-rtx-40-gpus

How I use the 0.1% data: for example, in the TechPowerUp review of the 7800X3D, the minimum frame rate at 4K is slightly above 120 FPS even though frame rates can peak at 170 FPS, so the average game run on the 7800X3D is a stable 4K 120 FPS experience. At 1080p the minimum frame rate was 190 FPS, meaning that even with a 5090 the 7800X3D will probably not be a solid 4K 240 FPS experience in most titles (outside of some outliers). It's definitely on the subjective side. My recommendation for gamers is to find your own personal threshold of tolerance. Mine is 120 FPS, specifically on an OLED display. Capping performance at 4K 120 FPS allows me to prevent throttling and to run at maximum efficiency. With the rise of 4K 240 Hz OLEDs, efficiency and stability will probably go out the window until the hardware catches up again. In my previous build with a 3090 Hybrid and a 9900KS/7700X CPU, the frame rate was an unstable 110 FPS in my go-to game; the performance would throttle and I would use more than 500 watts of total system power to achieve that. Now I'm using 250 to 300 watts of power in the same game at a solid 4K 120 FPS in long game sessions.
So, in conclusion, the minimum (0.1% and 1% lows) can be the target threshold for maximum-efficiency gaming, with the benefit of smoothness and no throttling.
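To make that concrete, here's a minimal sketch of picking a frame cap from 0.1%-low data. The function name, the game list, and the numbers are made up for illustration; the lows would come from reviews or your own frametime logs.

```python
# Pick a frame cap from measured 0.1% lows: cap at or below the worst low so
# the game never dips under the cap, then snap to a divisor of the panel's
# refresh rate so frame pacing stays clean. All numbers are illustrative.

def pick_frame_cap(lows_fps, refresh_hz=240):
    worst_low = min(lows_fps.values())
    divisors = {refresh_hz // n for n in range(1, refresh_hz + 1)}
    return max(cap for cap in divisors if cap <= worst_low)

measured_lows = {               # hypothetical 0.1% lows at 4K
    "go-to game": 132,
    "open-world RPG": 124,
    "racing sim": 151,
}

print(pick_frame_cap(measured_lows, refresh_hz=240))    # -> 120
```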
I don't make my own bread, but I can clearly see why it's fun, and why other people enjoy it.
My missus is also a non-gamer. She likes watching horror films and drama on TV, which is fine. I understand that it's a different kind of hobby, and if she's happy, I'm happy.
Similarly, I'm not interested in having 120+ FPS all the time, with no 0.1% lows dipping below it, but in this case, I also don't understand why others want it, as I don't see anything different at those high frame rates. It seems like complete placebo to me, or something you only see on your frametime graph (which is useless, unless you're diagnosing a system fault). Perhaps my focus isn't attuned to see what they see.
Basically, I'm being told that other people want a million FPS with no 0.1% dips, which is fine, each to their own, but no one tells me why. This is what I'd like to know.
Also, briefly comparing a 120 Hz and a 240 Hz monitor side by side doesn't tell you the whole story. Use a 240 Hz monitor at 240 FPS for a month and then go back to a 120 Hz monitor; your mind will be blown by how slow the 120 feels.
Common opinion might differ, but a high-range variable refresh rate is a much bigger improvement to my gaming experience than a high FPS.
Maybe I just have to accept that I don't have the eyes and reflexes for high-FPS gaming, and it's something I'll probably never understand. :ohwell:
Oh, and with Nier: Automata (what I'm playing now), it's fine. But that's really it.
I'll have to find something else... but really after my holiday :( (I'm flying away today). Currently, I've got nothing installed that would require that high of an FPS.
Have a good time.
Cheers! :toast:
But I have a lot of computers and only one input that goes to 144 Hz, so I play at 100 Hz and 60 Hz a bunch. My observations:
• I can play and have fun at 60 Hz, though it's not as responsive as I'd like. But you get used to it, as the monkey can be retrained.
• The difference from 60 to 100 Hz is very big, but if the card can only barely squeak out 100 Hz, then the occasional frame dips to the 90s or 80s are noticeable (Radeon 780M iGPU @ 1440p Low, for example). But only on some maps; it's 100 Hz with unnoticeable dips to ~95 on easier maps.
• The difference from 100 to 144 Hz is subtle, BUT as I'm usually playing with a GPU that's overkill for this, the smoothness of close to zero dips below 144 (maybe the occasional dropped frame or two) with ~zero GPU latency is appreciable. If there are frame drops, I don't notice them, and it doesn't break gameplay/concentration at all.
When playing competitive online FPS shooters where precise headshots are a real advantage, I could see where 200+ FPS could be a noticeable if subtle advantage, but the real problem would be getting a few dropped frames at a key point, where you'll end up dead instead of the other guy because of the precision-based gameplay. Avoiding those makes a big difference to some people. I play those types of games on occasion and enjoy them for a short time, but lol, my 2 best games of Fortnite were played on a Dell Latitude 7490 with Intel UHD 630: 1080p 60 Hz Very Low with frame drops. I dunno, I just have fun playing games with the equipment I have.
Edit: btw, I play almost all, if not all, of my other games at a 70-120 FPS lock without any problem; it's just that the FPS drop makes what you see "slower"/less responsive for a short period of time. Same for Euro Truck Simulator 2 in VR; it's a lot more visible and distracting going from 70-80 FPS to 50-60 FPS.