Thursday, December 9th 2021
12-channel DDR5 Memory Support Confirmed for Zen 4 EPYC CPUs by AMD
Thanks to a Linux driver update, we now know that AMD's upcoming Zen 4 based EPYC CPUs will support up to 12 channels of DDR5 memory, an upgrade over the current eight. The EDAC (Error Detection and Correction) driver update from AMD contains details of the memory types supported by AMD's upcoming server and workstation CPUs, and although this doesn't tell us much about what we'll see from the desktop platform, some of it might spill over to a future Ryzen Threadripper CPU.
The driver also reveals that there will be support for both RDDR5 and LRDDR5, which translate to Registered DDR5 and Load-Reduced DDR5 respectively. LRDDR5 is the replacement for LRDIMMs, which are used in current servers with very high memory densities. Although we don't know when AMD is planning to announce Zen 4, even less so the new EPYC processors, it's expected to be some time in the second half of next year.
Source: Phoronix
63 Comments on 12-channel DDR5 Memory Support Confirmed for Zen 4 EPYC CPUs by AMD
Edit: Intel even uses this terminology in their official spec database. Look at the "Max # of memory channels" line.
I'd be quite happy to have a separate channel for each of the 2-4 slots on a board.
AMD may have plans to include much more powerful iGPUs in their future chips, so why wouldn't they at least reserve space for that on the socket?
For the iGPU, they could use a stacked cache die to alleviate the bandwidth problem. Cache has two benefits: much higher bandwidth and much lower latency. The downside is that you only benefit when there is a cache hit, and it can slow memory access down a bit when there isn't (since you had to do the lookup in the cache first).
At no point have I (nor, from what I can tell, anyone else) said that anything can "change" how DDR5 channels "work". I provided a source indicating that while DDR5 is technically 2x32/40-bit channels per DIMM, outside of technical documentation these two are spoken of as one "channel" even by platform owners and designers. That is much simpler to wrap your head around: because of how previous standards were counted, because DDR5 channels aren't directly comparable to channels in previous DDR standards, and because these channels are always paired up in any practical implementation, which makes the distinction meaningless outside of highly technical contexts.
This is a question of terminology and communication, not a technical question such as the pairing of channels. They are paired on DIMMs, and are thus highly analogous to single channels on previous DDR standards, even if they are now technically two channels. Hence the question: does this mean 6*n DIMMs (technically correct), or 12*n DIMMs (if "channel" is used analogously to previous standards)? I lean heavily towards the latter, as reducing the effective bus width (from 8*64=512 to 6*2*32=384 bits) seems extremely unlikely on bandwidth-starved server chips. Thus, "12 channels" here most likely means 12x2x32=768 bits of (pre-ECC) total bus width (a quick sanity check of these figures follows after the list below).

I'm sorry, but I don't see how what you said relates to my post that you quoted, nor to the post I was responding to. It's obviously possible for AMD (and any other SoC vendor) to equip their chips with however many channels of RAM they want, within the constraints of physics and manufacturing technology. That's not what I was addressing. I was responding to the possibility of AMD moving to a "4-channel" layout (which I interpreted as 4x2x32-bit channels, i.e. equivalent to what would have been a 4-channel layout on DDR4, not 4x32-bit channels). And for consumer use cases outside of iGPU gaming, there is essentially no reason to go above two "channels" (i.e. 2*n DIMMs, or 2x2x32-bit DDR5 channels) on consumer motherboards. There are no workloads that truly benefit from it, while it would significantly increase motherboard complexity and thus price. None of this relates to the theoretical possibility of non-power-of-2 channel layouts, core layouts, or anything in that direction.
As for why they wouldn't reserve space for that on the socket:
- Segmenting a single platform into multiple RAM channel widths is a really bad idea. Do all chips support all channels, or only some? How many motherboards will support each tier? You're suddenly requiring a massive increase in the SKUs produced by OEMs, as users will have different desires and needs.
- This segmentation also doesn't make much sense economically: iGPU gaming is generally a low-end endeavor. High RAM channel counts are expensive, and thus a high end thing. So this would lead to a situation where high end motherboards are best suited for low end applications, which ... yeah, that doesn't work. Nobody would buy those boards outside of some very, very niche use cases.
- High speed I/O is already ballooning pin counts on chips. Intel is up to 1700 pins, AMD likely something similar on AM5, and that's with "dual channel" (2x2x32). Another two channels on top of that means another >500 pins on the CPU (I can't find the number for DDR5, but DDR4 DIMMs had 288 pins, and while some of those are power or otherwise not CPU-bound, most are connected to the CPU). That would necessitate a much larger socket, making board designs even more complicated, dense and expensive.
- All of this for something that doesn't benefit the vast majority of consumer applications whatsoever.
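For anyone who wants to check the bus-width arithmetic above, here's a minimal sketch. The constants are assumptions pulled from this thread's discussion (64-bit DDR4 channels, 2x32-bit DDR5 sub-channels per DIMM, 8-channel current EPYC), not anything confirmed in the article:

```python
# Sanity check of the two readings of "12 channels" for Zen 4 EPYC.
# All widths below are assumptions from the thread, not confirmed specs.
DDR4_CHANNEL_BITS = 64       # one DDR4 channel / DIMM (data bits, excluding ECC)
DDR5_SUBCHANNEL_BITS = 32    # one DDR5 sub-channel (data bits, excluding ECC)
SUBCHANNELS_PER_DIMM = 2     # DDR5 DIMMs carry two sub-channels each

current_bus = 8 * DDR4_CHANNEL_BITS                               # 512 bits today
narrow_reading = 12 * DDR5_SUBCHANNEL_BITS                        # 384 bits
wide_reading = 12 * SUBCHANNELS_PER_DIMM * DDR5_SUBCHANNEL_BITS   # 768 bits

print(f"Current 8-channel DDR4 EPYC bus:      {current_bus} bits")
print(f"'12 channels' as 32-bit sub-channels: {narrow_reading} bits (a downgrade)")
print(f"'12 channels' as full DIMM channels:  {wide_reading} bits (a 50% wider bus)")
```

The narrow reading would actually shrink the bus versus today's EPYC, which is why the full-DIMM-channel reading (768 bits) is the plausible one.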
Intel lists DDR5 as dual channel even though it's 2x32 x2.
The current practice is to call all the DIMM slots linked to the same channel one "channel", even if it contains 2x32-bit channels. Because when you think about it, if you load a 4-DIMM Z690 motherboard with 4 DDR5 DIMMs, that still gives you 2*32 *2 and not 2*32 *4. You get the exact same number of channels whether you populate 2 or 4 DIMMs (assuming you populate them correctly, of course...).
This is why people continue to count that as a "channel" and don't count the 2x32-bit DIMM sub-channels as channels; doing so would simply cause too much confusion.
EPYC currently has 8 channels, or 1 channel per CCD. Zen 4 EPYC will have up to 12 dies, so 12 channels (or 2*32 *12...) is what they will use. They won't go backwards on memory bandwidth just because it's DDR5...
"All your memory channels are belong to US" (regardless of the technical foofoo's involved).... hahahaha :)
Segmenting would be a minor marketing (and stock keeping) nightmare, that's right. But it does not preclude all CPUs and APUs from being compatible with all motherboards, and at worst you'd have 2 channels instead of 3 if you pair the wrong parts.
I suspect Intel has already reserved space for more RAM width on LGA1700/LGA1800. 500 or 600 new pins is a huge increase. We'll find the pinout diagram one day. As for DDR5, I can only find this: media-www.micron.com/-/media/client/global/documents/products/technical-marketing-brief/ddr5_key_module_features_tech_brief.pdf?la=en&rev=f3ca96bed7d9427ba72b4c192dfacb56
and it seems that the number of command/address pins has actually gone down, not up, between DDR4 and DDR5.
You're in a quantum state of being right and wrong at the same time. DDR5 is still new and the confusion about "how much is one channel" is ongoing, and manufacturers aren't of much help here. I think it would be wise to state the channel width when the number of channels is discussed.
Going from DDR4-3200 to DDR5-5200 while going to 12 half-channels... would increase bandwidth by roughly 22%, but core counts are increasing by 50-100% for Genoa/Bergamo.
I agree that this does not seem logical, and that they are counting channels as full-width units of measure and not half-channels.
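A minimal sketch of that scaling argument, assuming the DDR4-3200 and DDR5-5200 speeds used in the comment above and the 64-bit/2x32-bit channel widths discussed earlier in the thread (none of these are confirmed EPYC specs):

```python
# Peak theoretical bandwidth under the two readings of "12 channels".
# Speeds and widths are the commenters' assumptions, not confirmed specs.
def bandwidth_gbs(mt_per_s, bus_bits):
    """Transfers per second times bytes per transfer, in GB/s (decimal)."""
    return mt_per_s * (bus_bits / 8) / 1000

ddr4_now  = bandwidth_gbs(3200, 8 * 64)        # current 8-channel DDR4-3200
ddr5_half = bandwidth_gbs(5200, 12 * 32)       # "12 channels" as 32-bit sub-channels
ddr5_full = bandwidth_gbs(5200, 12 * 2 * 32)   # "12 channels" as full DIMM channels

print(f"DDR4-3200, 8 channels:       {ddr4_now:.1f} GB/s")
print(f"DDR5-5200, 12 sub-channels:  {ddr5_half:.1f} GB/s (+{(ddr5_half / ddr4_now - 1) * 100:.0f}%)")
print(f"DDR5-5200, 12 full channels: {ddr5_full:.1f} GB/s (+{(ddr5_full / ddr4_now - 1) * 100:.0f}%)")
```

The half-channel reading only gains about 22% over today's 204.8 GB/s, while the full-channel reading gains roughly 144%, which is the only interpretation that keeps pace with 50-100% more cores.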