Sunday, February 4th 2024
AMD Readies X870E Chipset to Launch Alongside First Ryzen 9000 "Granite Ridge" CPUs
AMD is readying the new 800-series motherboard chipset to launch alongside its next-generation Ryzen 9000 series "Granite Ridge" desktop processors that implement the "Zen 5" microarchitecture. The chipset family will be led by the AMD X870E, a successor to the current X670E. Since AMD isn't changing the CPU socket, and this is very much the same Socket AM5, the 800-series chipset will support not just "Granite Ridge" at launch, but also the Ryzen 7000 series "Raphael," and Ryzen 8000 series "Hawk Point." Moore's Law is Dead goes into the details of what sets the X870E apart from the current X670E, and it all has to do with USB4.
Apparently, motherboard manufacturers will be mandated to include 40 Gbps USB4 connectivity with AMD X870E, which essentially makes the chipset a 3-chip solution: two Promontory 21 bridge chips and a discrete ASMedia ASM4242 USB4 host controller; although it's possible that AMD's QVL will allow other brands of USB4 controllers as they become available.

The Ryzen 9000 series "Granite Ridge" are chiplet-based processors just like the Ryzen 7000 "Raphael," and while the 4 nm "Zen 5" CCDs are new, the 6 nm client I/O die (cIOD) is largely carried over from "Raphael," with a few updates to its memory controller. DDR5-6400 will be the new AMD-recommended "sweet spot" speed, although AMD might get its motherboard vendors to support DDR5-8000 EXPO profiles with an FCLK of 2400 MHz and a divider.

The Ryzen 9000 series "Granite Ridge" will launch alongside a new wave of AMD X870E motherboards, although these processors will very much be supported on AMD 600-series chipset motherboards with BIOS updates. The vast majority of Socket AM5 motherboards feature USB BIOS Flashback, so you could even pick up a 600-series chipset motherboard with a Ryzen 9000 series processor in combos. The company might expand the 800-series with other chipset models, such as the X870, B850, and the new B840 at the entry level.
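For a rough sense of what "an FCLK of 2400 MHz and a divider" means in practice, here is a minimal sketch of how a DDR5 data rate maps to the clocks involved on Socket AM5. The 1:2 UCLK behaviour and the FCLK value used for the DDR5-6400 case are assumptions based on how current AM5 platforms behave, not figures from the leak.

# Minimal sketch (Python). Assumption: DDR5 is double data rate, so MCLK = data_rate / 2,
# and "using a divider" means the memory controller runs at UCLK = MCLK / 2 instead of 1:1.
def am5_clocks(data_rate_mts, fclk_mhz, uclk_divider=False):
    mclk = data_rate_mts / 2                      # memory clock in MHz
    uclk = mclk / 2 if uclk_divider else mclk     # memory-controller clock
    return {"MCLK": mclk, "UCLK": uclk, "FCLK": fclk_mhz}

print(am5_clocks(6400, fclk_mhz=2133))                     # sweet spot, 1:1 -> MCLK 3200, UCLK 3200 (FCLK value assumed)
print(am5_clocks(8000, fclk_mhz=2400, uclk_divider=True))  # DDR5-8000 EXPO case -> MCLK 4000, UCLK 2000, FCLK 2400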
Sources:
Moore's Law is Dead (YouTube), Tweaktown
220 Comments on AMD Readies X870E Chipset to Launch Alongside First Ryzen 9000 "Granite Ridge" CPUs
RAM speed and latency are one of the pillars of high-end gaming, no matter whether you use Intel or AMD (X3D).
Yes, in slow-paced games you won't notice a thing, but the numbers tell the truth.
The X3Ds can also mask the memory latency issues, which makes them harder to notice while gaming.
Basically, because the X3Ds improve the min FPS more than the avg FPS, the RAM issues hide behind that.
But still, even with an X3D CPU there is a difference in performance.
Here is the underlying technical data:
My own take on it is that whether I have 218/166 avg/min FPS or 204/149 is an insignificant difference, as the amount of info reaching my eyes and brain is exactly the same.
Also, this test was done with a 4090 at 1080p. With my 7800 XT at 1440 UW, I am infinitely more GPU-limited, therefore, my RAM speed matters even less.
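Just to put numbers on the 218/166 vs. 204/149 comparison above, here is a quick back-of-the-envelope calculation; it only restates the figures already quoted, nothing new is measured.

# Rough percentage deltas for the avg/min FPS pairs quoted above
avg_a, min_a = 218, 166   # faster-RAM result
avg_b, min_b = 204, 149   # slower-RAM result
print(f"avg: +{100 * (avg_a - avg_b) / avg_b:.1f}%")   # ~6.9%
print(f"min: +{100 * (min_a - min_b) / min_b:.1f}%")   # ~11.4%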
In your own testing for a 7700X, the difference between 4800 and 6000 is 30 FPS.
Some games such as Tarkov, Factorio or Minecraft are intensely CPU bottlenecked almost at all times, and will have more significant differences regardless of GPU used.
Other games will be GPU bottlenecked at all times, mostly single player games.
Regardless of how significant you consider 10-30 FPS, or if you consider it to be the "same amount of info reaching your eyes and brain", it's a difference. Your earlier statement of "RAM speed and latency don't matter at all" is therefore subjective, not objective.
If you have low expectations of your hardware, that's fine. If you don't notice the difference between 120 and 150 FPS, that's fine, but let's not pretend that then translates into "RAM speed doesn't matter".
Because people read technical threads like these, and misinformation is then propagated.
Even once you go past 6000 MT, the "sweet spot" (more like the highest speed you can realistically reach with AMD, since 6400 MT+ is pretty much unattainable without going out of sync and losing performance), you still see CPU and therefore FPS improvements from faster RAM. E.g. ~10 FPS just from 400 MT in the chart I linked. Intel CPUs running 8000+ MT operate on a whole other level compared to tests done at 6000 MT.
Yep you can definitely tell the difference between 130 and 100 FPS on a Freesync panel....not.
If the difference above is night and day to you, fair enough. All I'm saying is, it means nothing to me. My play style is slow enough not to notice any difference above a certain FPS. Of course everybody is different.
I'm sure you do indeed have "the best AMD computer", but how could you tell? What if another PC was 29% faster? Which is also why I don't need to waste time logging my own testing to disprove anything you say; benchmarks off TPU are more than sufficient.
Or the general web, as AusWolf showed with his data.
I'm not talking JEDEC 4800. I'm talking 5600 or 6000 CL32/CL30 with EXPO. And I'm talking 1440p@144Hz or even UHD. Even with a 4080, you will run into a GPU limit most of the time, and when you don't, there won't be any tangible difference between 5600, 6000 or 6400. I definitely won't consider anything above that for AM5, because I don't have the time I would need to invest into optimizing for so little gain.
If you want to achieve the highest possible framerate @1080p because you think that makes for a better experience than higher resolutions, then that might be something different, but even then I think you really have to have a 4080 or 4090 before any investment in fast RAM makes sense.
The benchmarks use a 4090 at 1080p because that's an easy way to force a CPU bottleneck, instead of playing for an hour to get 10 minutes of usable data demonstrating the differences you want to focus on. Unless you think CPU/RAM benchmarking should use GPU-limited scenarios? This is the basis of scientific testing: you control the other variables and test the one you're interested in.
1440p is considered a CPU-limited resolution these days, BTW. Especially with the (now cheap) popular 240 Hz monitors, or the emerging 360 Hz/480 Hz 1440p monitors. 4K is pretty much the only resolution where you're GPU-bound all the time, but even then you can see improved minimum FPS with better RAM. 144 Hz monitors have been around for more than a decade now; they're pretty entry-level.
If you can't afford at least a 4080, worrying about high-end RAM is nonsense. Even then, you would be better off investing in a 4090. If you can't even afford a 7800X3D and a 4080, even more so.
What I do think helps performance a lot, but which I won't do myself for time reasons, is buying cheap RAM and OCing it to 6000/6400, or tightening loose timings.
So, for people using a 7800X3D/7950X3D or 13900K/14900K(S), a 4090 and a monitor supporting 240 Hz+ who want to get the framerate as high as possible, especially the lows, RAM OC might be important, but those people are few. I have yet to see such a benchmark in 1440p. Then again, I just went from 1200p@60Hz to 3440x1440@144Hz, but haven't had a chance to really play on it since, for time reasons. There isn't only gaming. I, for example, have to read a lot on my monitor, so I didn't want OLED. There isn't much with IPS, 3440x1440, 10-bit, above 144 Hz.
But my point was: consider how many can't afford a 4080 or even a 4070 Ti SUPER. How many have to make do with a 4060, 7600, 6600 XT or something like that, and a 7600, 5700X, 12400 etc. RAM speed is something they should worry about last.
Does the 13900K have the same 1% low performance as the X3D chips?
For instance, ~20-25 FPS is the difference between a 4070 Ti and a 4080 (which used to be a $400 difference), yet I don't see anyone saying that a faster GPU won't make a difference. Yet when that 20-25 FPS comes from a RAM tune (free, or maybe $50 more expensive if you want to buy a faster stock kit), suddenly it's imperceptible or "not tangible". A 7900 XTX vs a 7900 XT is even less than 20 FPS; does that render AMD's flagship pointless? No.
First, I have yet to see benchmarks in 1440p where RAM tuning above off-the-shelf DDR5-6000 CL30 on a 7800X3D even makes a double-digit percentage difference, in low FPS if you like. The benchmark you posted is, again, unrealistic 1080p with cards designed for 1440p and UHD. Then, even if there were games where that was the case, it would have to be in an FPS range where I would see the difference. I don't play fast shooters, and I haven't got to play on my 34" 144 Hz monitor yet, but I really doubt I would see a difference between ~120 FPS and 130 FPS.
On the other hand, if you really achieve a 10+ FPS performance gain by tuning cheap RAM to 6000+ with low latency, then of course that's good, but since up to that point the price difference is minimal in my country, my time is too precious for that. Gaining more than 10 FPS by tuning above DDR5-6000 CL30 with a 7800X3D in 1440p, that I'd really want to see.
Edit: The only difference is that you don't get 30% extra with high graphical settings / resolutions, and with mid-range or lower GPUs.
Your own sensitivity matters a lot as well. I know someone who demands a constant 360 FPS on his 360 Hz monitor. As for me, anything above ~40 is smooth enough, especially with Freesync.
The point I've been trying to make for the past 30 minutes, which seems to have significant resistance (for some reason), is that you don't ever want to be CPU bottlenecked, because that is what people notice as stuttering, or dips. It's irrelevant whether you're playing at 100 FPS or 200 FPS; that number suddenly halving or going down by an appreciable amount because your CPU is struggling to keep up is noticeable and immersion breaking, and coincides with massively increased input lag. If you want to talk esports, then "muscle memory" is tied to frame rates: you want consistency, not high averages. Like I said, subjective analysis is fine, but let's call it what it is.
Not much point buying a high refresh monitor if you barely ever hit or sustain that high FPS.
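One way to see why dips stand out more than averages is to look at frame times rather than frame rates; this little conversion is my own illustration, not something taken from the posts above.

# Frame time in milliseconds for a given frame rate
for fps in (200, 100, 60, 40):
    print(f"{fps} FPS -> {1000 / fps:.1f} ms per frame")
# A dip from 200 to 100 FPS doubles the frame time (5 ms -> 10 ms),
# which is why a halved frame rate reads as a stutter even when the average stays high.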
And I still don't believe in differences this huge between DDR5-6000 CL30 out of the box and tuning on a 7800X3D. If you do, please show me. But, because I don't have enough time for gaming, I wouldn't invest the time I don't have in fine-tuning my RAM. Buying 6000 CL30 or 6400 CL32 with EXPO instead of 5x00 for 20-30€ more, fine by me. But only because I can afford a 7800X3D. If I couldn't, I wouldn't waste money on RAM.
Artificially inducing a CPU-limited scenario purely for the sake of science is fine, but why should I give the results more credit than they're worth? I actually said the opposite: at low FPS, any small extra can help, but at high FPS, I couldn't care less if there's any difference. I agree with this sentiment, but in which situations your experience is limited by your CPU/RAM is highly dependent on your hardware and your sensitivity. I doubt that I could ever notice when a game running at 200 FPS dips into 100 for a microsecond, and I also doubt that any mid-range GPU is CPU limited at 1440p unless you pair it with a Celeron. Exactly - it's all subjective. That's my point all along. :)
The point is not necessarily the high refresh rate, but VRR, which eliminates any screen tearing at the appropriate performance levels.
I'm busy doing other things now, but good chat I guess.
Again with the exaggerations though.
Dips do not last "microseconds".
Screen tearing is an entirely different problem than stuttering or framerate dips. They may have similar causes, but they're different issues entirely.
TRFC and other timings make more difference if you can't go past 6400 without going out of sync like with Zen. It's another reason the platform isn't the magic bullet people think it is. Intel scales past 8000 MT, Zen will do 6200, 6400 if you're very lucky.
Perf diff between 6000 EXPO and 6200 tuned is about 8% in my testing averages. But in the best-case scenario it moved from 190 min FPS to my min FPS never deviating from my frame lock, so 237 FPS; that's one game though.
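To put the timing discussion in perspective, here is a minimal sketch converting a timing given in memory clock cycles into real time; the tRFC values below are made-up examples for illustration, not measurements from this thread.

# Convert a DDR5 timing from clock cycles to nanoseconds.
# MCLK = data_rate / 2 (MHz), so one memory clock cycle lasts 2000 / data_rate nanoseconds.
def timing_ns(cycles, data_rate_mts):
    return cycles * 2000 / data_rate_mts

print(f"{timing_ns(884, 6000):.1f} ns")   # ~294.7 ns, a loose (hypothetical) EXPO-style tRFC at 6000 MT/s
print(f"{timing_ns(500, 6200):.1f} ns")   # ~161.3 ns, a tightened (hypothetical) tRFC at 6200 MT/s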