
12-channel DDR5 Memory Support Confirmed for Zen 4 EPYC CPUs by AMD

TheLostSwede

News Editor
Joined
Nov 11, 2004
Messages
17,595 (2.41/day)
Location
Sweden
Thanks to a Linux driver update, we now know that AMD's upcoming Zen 4 based EPYC CPUs will support up to 12 channels of DDR5 memory, an upgrade over the current eight. The EDAC (Error Detection and Correction) driver update from AMD contains details of the memory types supported by AMD's upcoming server and workstation CPUs, and although this doesn't tell us much about what we'll see from the desktop platform, some of it might spill over to a future Ryzen Threadripper CPU.

The driver also reveals that there will be support for both RDDR5 and LRDDR5, which translate to Registered DDR5 and Load-Reduced DDR5 respectively. LRDDR5 is the replacement for LRDIMMs, which are used in current servers with very high memory densities. Although we don't know when AMD is planning to announce Zen 4, let alone the new EPYC processors, it's expected to be some time in the second half of next year.



 
Joined
Feb 11, 2020
Messages
247 (0.14/day)
64-bit channels, as has been the norm up till now, or 32-bit channels? Remember, DDR5 DIMMs are spec'd with dual 32-bit channels per DIMM.
 
Joined
Nov 6, 2016
Messages
1,751 (0.60/day)
Location
NH, USA
96 cores, 12-channel DDR5, 128 PCIe 5.0 lanes, 3D V-Cache around 1 GB... those will be some serious CPUs. Then they have 128 cores coming very soon after that, albeit with reduced cache, but still.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
64-bit channels, as has been the norm up till now, or 32-bit channels? Remember, DDR5 DIMMs are spec'd with dual 32-bit channels per DIMM.
All industry speak about "channels" in DDR5 (outside of technical documentation) has been in the form of "analogous to DDR4 channels" i.e. one channel per DIMM/2 DIMMs, despite the DIMMs technically being dual channel themselves. Look at any Z690 spec sheet - they all say "dual channel" despite this being 2x2x32 rather than 2x64. It doesn't make much sense to speak of single 32-bit channels as channels once implemented given that they can never be separated out individually, but will always be paired. And, of course, there is no way they are reducing the number of memory channels on next-gen EPYC - servers love their bandwidth.
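As a rough illustration of that equivalence, here is a minimal arithmetic sketch (purely illustrative, in Python):

```python
# "Dual channel" DDR4 vs. "dual channel" DDR5: same total data width,
# DDR5 just splits each DIMM into two 32-bit sub-channels.
ddr4_dual_channel_bits = 2 * 64       # two 64-bit channels
ddr5_dual_channel_bits = 2 * 2 * 32   # two DIMM slots' worth of 2x 32-bit sub-channels

print(ddr4_dual_channel_bits, ddr5_dual_channel_bits)  # 128 128
```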
 
Joined
Feb 11, 2020
Messages
247 (0.14/day)
All industry speak about "channels" in DDR5 (outside of technical documentation) has been in the form of "analogous to DDR4 channels" i.e. one channel per DIMM/2 DIMMs, despite the DIMMs technically being dual channel themselves. Look at any Z690 spec sheet - they all say "dual channel" despite this being 2x2x32 rather than 2x64.
Chipsets haven't contained the memory controller since Core2 days for Intel. Even older for AMD. The marketing blurbs for motherboards probably aren't great reference sources for actual channel count.
 
Joined
Oct 22, 2014
Messages
14,083 (3.82/day)
Location
Sunshine Coast
Updates to the SMCA (Scalable Machine Check Architecture) code also indicate AMD may be attempting a big.LITTLE-style approach in the near future.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
Chipsets haven't contained the memory controller since Core2 days for Intel. Even older for AMD. The marketing blurbs for motherboards probably aren't great reference sources for actual channel count.
Okay, I thought this was strongly enough implied in what I wrote, but apparently not, so let me fix it:
Look at any Z690 motherboard spec sheet
Motherboard spec sheets denote the supported number of memory channels for the relevant platform, as they need to communicate the capabilities of the platform in question - you can't make use of whatever the CPU provides without a compatible motherboard, after all. Z690 motherboard spec sheets universally use "dual channel" as their terminology, despite this being 2x2x32 and not 2x64. And sure, this is marketing, but it's also an example of how these things are spoken of in the industry more broadly - as I said, outside of technical documentation.

Edit: Intel even uses this terminology in their official spec database. Look at the "Max # of memory channels" line.
 
Joined
Oct 21, 2019
Messages
41 (0.02/day)
With a bit of luck we might get 4-channel options on the desktop Ryzen 7000 series then.
I'd be quite happy to have a separate channel for each of the 2-4 slots on a board.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
With a bit of luck we might get 4-channel options on the desktop Ryzen 7000 series then.
I'd be quite happy to have a separate channel for each of the 2-4 slots on a board.
Doubtful - datacenter workloads are bandwidth limited, consumer workloads are almost never that. Doubling the channel count would also significantly increase the number of traces run from the CPU to RAM, (which is already ~500 traces for a 2-channel setup), making motherboard design a lot more complex and expensive (on top of the increases we've already seen for DDR5). Pretty much the only consumer workload that would benefit from 4-channel RAM would be iGPU gaming, and they're definitely not designing AM5 around that as a central use case.
 
Joined
Feb 11, 2020
Messages
247 (0.14/day)
The DDR5 DIMM pinouts are 32/40-bit data width per channel. There's nothing that can change that. So that leaves it as a question of if channels can be paired electrically in board layout, or if it has to be done by pairing the physical 32-bit buses inside the memory controller.
Joined
Jan 3, 2021
Messages
3,482 (2.46/day)
Location
Slovenia
Doubtful - datacenter workloads are bandwidth limited, consumer workloads are almost never that. Doubling the channel count would also significantly increase the number of traces run from the CPU to RAM, (which is already ~500 traces for a 2-channel setup), making motherboard design a lot more complex and expensive (on top of the increases we've already seen for DDR5). Pretty much the only consumer workload that would benefit from 4-channel RAM would be iGPU gaming, and they're definitely not designing AM5 around that as a central use case.
Three-channel IMC? Why not, 3- and 6-channel systems exist, and this 12-channel Epyc is another one that doesn't respect the power of 2. Uhm, AMD was also the first with a non-power-of-2 number of cores.

AMD may have plans to include much more powerful iGPUs in their future chips, so why wouldn't they at least reserve space for that on the socket?
 
Joined
Oct 12, 2005
Messages
707 (0.10/day)
The best solution will not be more channels on the motherboard but more cache on the CPU package. I expect to see a large HBM-based L4 or something similar before we see 4 channels on desktop (or at the very least, we will already have a very large L3 by then). We could even see, at some point, CPU + RAM in a single socket with no DIMMs on the motherboard. It would suck for upgradability, but it could lead to much higher memory performance with lower latency. The CPU could also be designed with a specific memory in mind, a little bit like the M1 is.

For the iGPU, they could use stacked cache dies to alleviate the bandwidth problem. Cache has two benefits: much higher bandwidth and much lower latency. The downside is that you only benefit when there is a cache hit, and it can slow down memory access a bit when there isn't (since you had to do the lookup in the cache first).
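To put rough numbers on that hit/miss trade-off, here is a small sketch using the standard average-memory-access-time formula; the latencies are made-up illustrative values, not measurements of any real part:

```python
# Average memory access time: every access pays the cache lookup,
# and misses additionally pay the DRAM penalty.
def amat_ns(lookup_ns: float, miss_rate: float, dram_penalty_ns: float) -> float:
    return lookup_ns + miss_rate * dram_penalty_ns

# Hypothetical big on-package cache (e.g. an HBM-style L4):
print(amat_ns(lookup_ns=15, miss_rate=0.1, dram_penalty_ns=80))  # 23.0 ns with a 90% hit rate
print(amat_ns(lookup_ns=15, miss_rate=0.6, dram_penalty_ns=80))  # 63.0 ns if most accesses miss anyway
```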
 

Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
The DDR5 DIMM pinouts are 32/40-bit data width per channel. There's nothing that can change that. So that leaves it as a question of if channels can be paired electrically in board layout, or if it has to be done by pairing the physical 32-bit buses inside the memory controller.
But ... are you following this discussion? We are discussing whether or not the 12 channels in question here are most likely 12 channels in the technically accurate term for DDR5 of 12x32-bit, i.e. 6*n DIMM slots per socket, or if "channel" is used in the semi-colloquial way of "analogous to how DDR4 and previous channels have worked", i.e. "12*n DIMM slots per socket", as this is established nomenclature that, while no longer technically correct, would be easily misleading and confusing if we were to move to the technically correct terminology.

Neither I nor anyone else, from what I can tell, has at any point said that anything can "change" how DDR5 channels "work". I provided a source indicating that while DDR5 is technically 2x32/40-bit channels per DIMM, outside of technical documentation these two are spoken of as one "channel" even by platform owners and designers, as this is much simpler to get your mind around - due to previous standards, due to the incommensurability of DDR5 channels vs. channels in previous DDR standards, and due to these channels always being paired up in any practical implementation, making the distinction meaningless outside of these highly technical situations.

This is a question of terminology and communication, not technical questions such as pairing of channels. They are paired on DIMMs, and are thus highly analogous to single channels on previous DDR standards, even if they are now two channels. Hence the question: does this mean 6*n DIMMs (technically correct), or 12*n DIMMs (if "channel" is used analogously to previous standards)? I'm inclined heavily towards the latter, as reducing the effective channel width (from 8*64=512 to 6*2*32=384) seems extremely unlikely to happen on bandwidth-starved server chips. Thus, "12 channels" here most likely means 12x2x32=768 bits of (pre-ECC) total bus width.
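Spelling out the three figures from the paragraph above (a minimal sketch of the arithmetic, nothing more):

```python
# Total (pre-ECC) data bus width under each reading of "12 channels".
zen3_epyc_bits      = 8 * 64        # current EPYC: eight 64-bit DDR4 channels -> 512
narrow_reading_bits = 12 * 32       # twelve 32-bit DDR5 sub-channels (6 DIMMs' worth) -> 384
wide_reading_bits   = 12 * 2 * 32   # twelve DIMM-equivalent channel pairs -> 768

print(zen3_epyc_bits, narrow_reading_bits, wide_reading_bits)  # 512 384 768
```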
Three-channel IMC? Why not, 3- and 6-channel systems exist, and this 12-channel Epyc is another one that doesn't respect the power of 2. Uhm, AMD was also the first with a non-power-of-2 number of cores.

AMD may have plans to include much more powerful iGPUs in their future chips, so why wouldn't they at least reserve space for that on the socket?
I'm sorry, but I don't see whatsoever how what you said here relates to my post that you quoted nor the post I was responding to. It's obviously entirely possible for AMD (and any other SoC vendor) to equip their chips with however many channels of RAM they want to, within the constraints of physics and manufacturing technology. That's not what I was addressing at all. I was responding to the possibility of AMD moving to a "4-channel" (which I interpreted as 4x2x32-bit channels, i.e. equivalent to what would have been a 4-channel layout on DDR4, not 4x32-bit channels). And for consumer use cases, outside of iGPU gaming, there is essentially no reason to go above two "channels" (i.e. 2*n DIMMs, or 2x2x32-bit DDR5 channels) on consumer motherboards. There are no workloads that truly benefit from this, while it would significantly affect motherboard complexity and thus price. This doesn't relate whatsoever to the theoretical possibility of non-power-of-2 channel layouts, core layouts, or anything in that direction.

As for why they wouldn't reserve space for that on the socket:
- Segmenting a single platform into multiple RAM channel widths is a really bad idea. Do all chips support all channels, or only some? How many motherboards will support each tier? You're suddenly requiring a massive increase in the SKUs produced by OEMs, as users will have different desires and needs.
- This segmentation also doesn't make much sense economically: iGPU gaming is generally a low-end endeavor. High RAM channel counts are expensive, and thus are a high-end thing. So, this would lead us to a situation where high-end motherboards are best suited for low-end applications, which ... yeah, that doesn't work. Nobody would buy those boards outside of some very, very niche use cases.
- High speed I/O is already ballooning pin counts on chips. Intel is up to 1700, AMD likely something similar on AM5, and that's with "dual channel" (2x2x32). Another two channels on top of that is another >500 pins on the CPU (can't find the number for DDR5, but DDR4 had 288 pins, and while some of those are power and other non-CPU-bound, most are connected to the CPU). That would necessitate a much larger socket, making board designs even more complicated, dense and expensive.
- All of this for something that doesn't benefit the vast majority of consumer applications whatsoever.
 
Joined
Oct 15, 2019
Messages
585 (0.31/day)
All industry speak about "channels" in DDR5 (outside of technical documentation) has been in the form of "analogous to DDR4 channels" i.e. one channel per DIMM/2 DIMMs, despite the DIMMs technically being dual channel themselves. Look at any Z690 spec sheet - they all say "dual channel" despite this being 2x2x32 rather than 2x64. It doesn't make much sense to speak of single 32-bit channels as channels once implemented given that they can never be separated out individually, but will always be paired. And, of course, there is no way they are reducing the number of memory channels on next-gen EPYC - servers love their bandwidth.
Yeah, the correct way to count would be to say it has 2 memory controllers, each supporting up to 2 DIMMs. The fact that motherboard manufacturers and Intel can't get it right does not make the wrong way to indicate channel counts the right one.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
Yeah, the correct way to count would be to say it has 2 memory controllers, each supporting up to 2 DIMMs. The fact that motherboard manufacturers and Intel can't get it right does not make the wrong way to indicate channel counts the right one.
Well, what is a correct technical term and what is a useful term for communication are not necessarily the same. Technically correct terminology is often unnecessarily detailed, even in ways that undermine the information conveyed, since it relies on further detail that isn't provided in shorthand communication. This is a perfect example of that - if we stuck to the technically correct terminology, any layperson hearing that motherboards now have "four DDR5 channels" vs. "two DDR4 channels" would be likely to think "that's twice as many, must be twice as good", despite the total channel width being the same. And given that "number of memory channels" has been how this has been spoken of for more than a decade, changing that now is confusing in itself, even if the alternative terminology would then be analogous to previous channel counts.

Also, of course, there's no necessary relation between the number of controllers and the number of channels - AFAIK there's nothing stopping anyone from designing a single 32-bit-channel DDR5 controller, or for that matter a controller for any integer number of channels. If Intel suddenly moved to a single controller block with 4x32-bit channels while AMD stuck with two 2x32-bit blocks, we'd again have a situation of terminology misleading people. Thus, I think sticking to the not technically accurate but correctly indicative and comparable colloquial use of "channels" is the best approach going forward - no matter how many engineers it annoys.
 
Joined
Feb 11, 2020
Messages
247 (0.14/day)
We are discussing whether or not the 12 channels in question here are most likely 12 channels in the technically accurate term for DDR5 of 12x32-bit ...
Yep, all my comments have been exactly that. You pointed out Intel saying otherwise so I then came up with options for Intel to be merging channel pairs so as to make four channels into two with their 12 series CPUs.
 
Joined
Dec 28, 2012
Messages
3,875 (0.89/day)
This is a question of terminology and communication, not technical questions such as pairing of channels. They are paired on DIMMs, and are thus highly analogous to single channels on previous DDR standards, even if they are now two channels. Hence the question: does this mean 6*n DIMMs (technically correct), or 12*n DIMMs (if "channel" is used analogously to previous standards)? I'm inclined heavily towards the latter, as reducing the effective channel width (from 8*64=512 to 6*2*32=384) seems extremely unlikely to happen on bandwidth-starved server chips. Thus, "12 channels" here most likely means 12x2x32=768 bits of (pre-ECC) total bus width.
DDR5 channels are always 32 bits each, so you're looking at a total of 384 bits of memory bus, the equivalent of 6-channel RAM on older platforms.
 
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway
Yep, all my comments have been exactly that. You pointed out Intel saying otherwise so I then came up with options for Intel to be merging channel pairs so as to make four channels into two with their 12 series CPUs.
But... There's no such thing as "merging channel pairs". There's the technical definition, and then there's how they are described and spoken of.
DDR5 channels are always 32 bits each, so you're looking at a total of 384 bits of memory bus, the equivalent of 6-channel RAM on older platforms.
I did that calculation in the post you quoted, yes. But I've also provided examples of industry actors, both motherboard makers and Intel themselves, saying Alder Lake has "two channels" of DDR5, despite it technically supporting four - so what you're saying, that they are "always" 32-bit, is only true in technical definitions, not in how this is spoken of and communicated. Also, with servers being massive bandwidth hogs and the main driver for DDR5 prioritizing bandwidth over latency, it seems extremely unlikely for AMD to reduce the channel count for their servers, even when accounting for the higher per-channel bit rate. Server vendors pay good money for bandwidth alone - just think of AMD's 8-core Epyc chips, which exist essentially only as a platform for 8-channel memory and PCIe lanes. So, while a reduction in bus width is the technically correct interpretation of this, it is extremely unlikely that this is accurate, and far more likely that the "channels" in question are aggregate/per-DIMM/paired channels.
 
Joined
Oct 12, 2005
Messages
707 (0.10/day)
DDR5 channels are always 32 bits each, so you're looking at a total of 384 bits of memory bus, the equivalent of 6-channel RAM on older platforms.

Intel states DDR5 as dual channel even though it's 2x32 x2.

The current convention is to call all the DIMM slots linked to the same channel a "channel", even if it contains 2x32-bit sub-channels. Because when you think about it, if you load a 4-DIMM Z690 motherboard with 4 DDR5 DIMMs, that still gives you 2x32 x2 and not 2x32 x4. You get the exact same number of channels whether you populate 2 or 4 DIMMs (if you populate them correctly, indeed...).

This is why people continue to use that as the "channel" and why they don't count each 2x32 DIMM sub-channel as a channel - simply because it would cause too much confusion.

EPYC currently has 8 channels, or 1 channel per CCD. Zen 4 EPYC will have up to 12 dies, so 12 channels (or 2x32 x12...) is what they will use. They won't go back on memory bandwidth just because it's DDR5...
 
Last edited:
Joined
May 2, 2017
Messages
7,762 (2.81/day)
Location
Back in Norway

Intel states DDR5 as dual channel even though it's 2x32 x2.

The current convention is to call all the DIMM slots linked to the same channel a "channel", even if it contains 2x32-bit sub-channels. Because when you think about it, if you load a 4-DIMM Z690 motherboard with 4 DDR5 DIMMs, that still gives you 2x32 x2 and not 2x32 x4. You get the exact same number of channels whether you populate 2 or 4 DIMMs (if you populate them correctly, indeed...).

This is why people continue to use that as the "channel" and why they don't count each 2x32 DIMM sub-channel as a channel - simply because it would cause too much confusion.

EPYC currently has 8 channels, or 1 channel per CCD. Zen 4 EPYC will have up to 12 dies, so 12 channels (or 2x32 x12...) is what they will use. They won't go back on memory bandwidth just because it's DDR5...
They also run the risk of being accused of misleading marketing if they start pushing DDR5 as having twice as many channels as DDR4 without simultaneously mentioning that they are half the width, at which point you're looking at some really confusing PR. Being technically correct is not always the same as being able to communicate accurately.
 
Joined
Oct 18, 2013
Messages
6,184 (1.53/day)
Location
Over here, right where you least expect me to be !
Intel & AMD to mobo mfgr's:

"All your memory channels are belong to US" (regardless of the technical foofoo's involved).... hahahaha :)
 
Joined
Jan 3, 2021
Messages
3,482 (2.46/day)
Location
Slovenia
I'm sorry, but I don't see whatsoever how what you said here relates to my post that you quoted nor the post I was responding to. It's obviously entirely possible for AMD (and any other SoC vendor) to equip their chips with however many channels of RAM they want to, within the constraints of physics and manufacturing technology. That's not what I was addressing at all. I was responding to the possibility of AMD moving to a "4-channel" (which I interpreted as 4x2x32-bit channels, i.e. equivalent to what would have been a 4-channel layout on DDR4, not 4x32-bit channels). And for consumer use cases, outside of iGPU gaming, there is essentially no reason to go above two "channels" (i.e. 2*n DIMMs, or 2x2x32-bit DDR5 channels) on consumer motherboards. There are no workloads that truly benefit from this, while it would significantly affect motherboard complexity and thus price. This doesn't relate whatsoever to the theoretical possibility of non-power-of-2 channel layouts, core layouts, or anything in that direction.

As for why they wouldn't reserve space for that on the socket:
- Segmenting a single platform into multiple RAM channel widths is a really bad idea. Do all chips support all channels, or only some? How many motherboards will support each tier? You're suddenly requiring a massive increase in the SKUs produced by OEMs, as users will have different desires and needs.
- This segmentation also doesn't make much sense economically: iGPU gaming is generally a low-end endeavor. High RAM channel counts are expensive, and thus are a high-end thing. So, this would lead us to a situation where high-end motherboards are best suited for low-end applications, which ... yeah, that doesn't work. Nobody would buy those boards outside of some very, very niche use cases.
- High speed I/O is already ballooning pin counts on chips. Intel is up to 1700, AMD likely something similar on AM5, and that's with "dual channel" (2x2x32). Another two channels on top of that is another >500 pins on the CPU (can't find the number for DDR5, but DDR4 had 288 pins, and while some of those are power and other non-CPU-bound, most are connected to the CPU). That would necessitate a much larger socket, making board designs even more complicated, dense and expensive.
- All of this for something that doesn't benefit the vast majority of consumer applications whatsoever.
You mentioned doubling the number of channels, which wouldn't make sense, and I agree. However, I said there's a middle ground too, which is somewhat more probable than doubling. One reason for going to three 64-bit channels could be the iGPU; but core counts are going up too, in EPYC, Threadripper and probably Ryzen as well, so all of them would benefit from a 50% increase in RAM data width.

Segmenting would be a minor marketing (and stock-keeping) nightmare, that's right. But it does not preclude all CPUs and APUs from being compatible with all motherboards, and at worst you'd have 2 channels instead of 3 if you pair the wrong parts.

I suspect Intel has already reserved space for more RAM width on LGA1700/LGA1800. 500 or 600 new pins is a huge increase. We'll find the pinout diagram one day. As for DDR5, I can only find this: https://media-www.micron.com/-/medi...df?la=en&rev=f3ca96bed7d9427ba72b4c192dfacb56
and it seems that the number of command/address pins has actually gone down, not up, between DDR4 and DDR5:

[attachment: DDR4 vs. DDR5 DIMM pin count comparison]


64-bit channels, as has been the norm up till now, or 32-bit channels? Remember, DDR5 DIMMs are spec'd with dual 32-bit channels per DIMM.
You're in a quantum state of being right and wrong at the same time. DDR5 is still new and the confusion about "how much is one channel" is ongoing, and manufacturers aren't of much help here. I think it would be wise to state the channel width when the number of channels is discussed.
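To make that suggestion concrete, a tiny sketch of how a configuration could be stated unambiguously (the helper below is hypothetical, not an established convention):

```python
# Hypothetical helper: state both the channel count and the per-channel width,
# so "channels" is never ambiguous on its own.
def describe(channels: int, width_bits: int) -> str:
    return f"{channels} x {width_bits}-bit channels ({channels * width_bits}-bit total bus)"

print(describe(2, 64))    # DDR4 desktop: 2 x 64-bit channels (128-bit total bus)
print(describe(4, 32))    # the same board with DDR5, counted strictly: 4 x 32-bit channels (128-bit total bus)
print(describe(24, 32))   # one reading of "12-channel" Zen 4 EPYC: 24 x 32-bit channels (768-bit total bus)
```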
 
Joined
Oct 27, 2009
Messages
1,182 (0.21/day)
Location
Republic of Texas
I refer to them as half channels for a reason. DDR5 is less about doubling the channels and more about splitting them. While the count is doubled, the bit width/bandwidth is not.
Going from DDR4-3200 to DDR5-5200 and going to 12 half channels would increase bandwidth by about 22.5%, but core counts are increasing by 50-100% for Genoa/Bergamo.
I agree that this does not seem logical, and that they are considering channels to be fully functional units of measure and not half channels.
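For reference, a rough sketch of that bandwidth comparison, assuming the DDR4-3200 and DDR5-5200 speeds quoted above and peak theoretical numbers only:

```python
# Peak theoretical bandwidth = transfer rate (MT/s) x bytes per channel x channel count.
def bandwidth_gbps(mt_per_s: int, channel_bits: int, channels: int) -> float:
    return mt_per_s * (channel_bits // 8) * channels / 1000  # GB/s

milan_like = bandwidth_gbps(3200, 64, 8)    # 8 x 64-bit DDR4-3200  -> 204.8 GB/s
genoa_like = bandwidth_gbps(5200, 32, 12)   # 12 x 32-bit DDR5-5200 -> 249.6 GB/s ("half channel" reading)

print(genoa_like / milan_like - 1)  # ~0.22, i.e. only ~22% more bandwidth for far more cores
```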
 