The DDR5 DIMM pinout fixes the data width at 32/40 bits per channel. Nothing can change that. So the question becomes whether channels can be paired electrically in the board layout, or whether it has to be done by pairing the physical 32-bit buses inside the memory controller.
But ... are you following this discussion? We are discussing whether the 12 channels in question here are most likely 12 channels in the technically accurate DDR5 sense, i.e. 12x32-bit, meaning 6*n DIMM slots per socket, or whether "channel" is being used in the semi-colloquial sense of "analogous to how channels worked in DDR4 and earlier", i.e. 12*n DIMM slots per socket. The latter is established nomenclature that, while no longer technically correct, would make a sudden move to the technically correct terminology misleading and confusing.
Neither I (nor, from what I can tell, anyone else) has at any point said that anything can "change" how DDR5 channels "work". I provided a source indicating that while DDR5 is technically 2x32/40-bit channels per DIMM, outside of technical documentation those two are spoken of as one "channel", even by platform owners and designers. That framing is
much simpler to get your head around: it matches previous standards, it sidesteps the incommensurability of DDR5 channels vs. channels in previous DDR standards, and in any practical implementation the two channels are always paired anyway, making the distinction meaningless outside of these highly technical situations.
This is a question of terminology and communication, not a technical question such as how channels are paired. The two channels
are paired on each DIMM, and are thus highly analogous to a single channel on previous DDR standards, even though they are now two channels. Hence the question: does this mean 6*n DIMMs (technically correct), or 12*n DIMMs (if "channel" is used analogously to previous standards)? I lean heavily towards the latter, as reducing the total bus width (from 8*64=512 bits to 6*2*32=384 bits) seems extremely unlikely to happen on bandwidth-starved server chips. Thus, "12 channels" here most likely means 12x2x32=768 bits of (pre-ECC) total bus width.
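The bus-width arithmetic behind the two readings of "12 channels" can be sanity-checked in a few lines. This is just the post's own numbers spelled out; the variable names are mine, chosen for illustration:

```python
# Bus-width arithmetic for the two readings of "12 channels".
# DDR4: one 64-bit channel per DIMM. DDR5: two 32-bit sub-channels per DIMM.
DDR4_CHANNEL_BITS = 64
DDR5_SUBCHANNEL_BITS = 32
DDR5_SUBCHANNELS_PER_DIMM = 2

# Baseline: an 8-channel DDR4 server platform.
ddr4_total = 8 * DDR4_CHANNEL_BITS  # 512 bits

# Reading 1: "12 channels" = 12 x 32-bit DDR5 sub-channels (6 DIMM slots).
narrow_reading = 6 * DDR5_SUBCHANNELS_PER_DIMM * DDR5_SUBCHANNEL_BITS  # 384 bits

# Reading 2: "12 channels" used analogously to DDR4 (12 DIMM channels).
wide_reading = 12 * DDR5_SUBCHANNELS_PER_DIMM * DDR5_SUBCHANNEL_BITS  # 768 bits

print(ddr4_total, narrow_reading, wide_reading)  # 512 384 768
```

The narrow reading would be a step *down* in total bus width from the existing 8-channel DDR4 platforms, which is why the wide reading is the far more plausible one.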
Three-channel IMC? Why not? 3- and 6-channel systems exist, and this 12-channel Epyc is another design that doesn't stick to powers of two. And AMD was also the first with a non-power-of-two core count.
AMD may have plans to include much more powerful iGPUs in their future chips, so why wouldn't they at least reserve space for that on the socket?
I'm sorry, but I don't see how what you said here relates to my post that you quoted, or to the post I was responding to. It's obviously possible for AMD (or any other SoC vendor) to equip their chips with as many RAM channels as they want, within the constraints of physics and manufacturing technology. That's not what I was addressing
at all. I was responding to the possibility of AMD moving to a "4-channel" layout (which I interpreted as 4x2x32-bit channels, i.e. equivalent to what would have been a 4-channel layout on DDR4, not 4x32-bit channels). And for consumer use cases, outside of iGPU gaming, there is essentially no reason to go above two "channels" (i.e. 2*n DIMMs, or 2x2x32-bit DDR5 channels) on consumer motherboards. No consumer workloads truly benefit from it, while it would significantly increase motherboard complexity and thus price. None of this relates to the theoretical possibility of non-power-of-two channel layouts, core counts, or anything in that direction.
As for why they wouldn't reserve space for that on the socket:
- Segmenting a single platform into multiple RAM channel widths is a really bad idea. Do all chips support all channel counts, or only some? How many motherboards will support each tier? You're suddenly requiring a massive increase in the SKUs produced by OEMs, as users will have different desires and needs.
- This segmentation also doesn't make much sense economically: iGPU gaming is generally a low-end endeavor, while high RAM channel counts are expensive and thus a high-end feature. This would leave the high-end motherboards best suited for low-end applications, which ... yeah, that doesn't work. Nobody would buy those boards outside of some very, very niche use cases.
- High-speed I/O is already ballooning pin counts on chips. Intel is up to 1700 pins, AMD is likely similar on AM5, and that's with "dual channel" (2x2x32). Another two channels on top of that means another >500 pins on the CPU (I can't find the number for DDR5, but DDR4 DIMMs had 288 pins, and while some of those are power and other non-CPU-bound pins, most connect to the CPU). That would necessitate a
much larger socket, making board designs even more complicated, dense, and expensive.
- All of this for something that doesn't benefit the vast majority of consumer applications
whatsoever.
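The pin-count point above can be sketched with rough numbers. These are the ballpark figures from the post, not exact electrical specs, and the CPU-bound fraction is my own assumption:

```python
# Rough estimate of how two extra DIMM channels inflate CPU pin count.
CURRENT_SOCKET_PINS = 1700   # e.g. Intel's current socket, per the post
PINS_PER_DIMM_CHANNEL = 288  # DDR4 DIMM pin count cited above
CPU_BOUND_FRACTION = 0.9     # assumption: most DIMM pins route to the CPU

extra_channels = 2  # going from "dual channel" to "quad channel"
extra_pins = int(extra_channels * PINS_PER_DIMM_CHANNEL * CPU_BOUND_FRACTION)

print(extra_pins)  # comfortably over 500 extra pins
print(CURRENT_SOCKET_PINS + extra_pins)  # well past 2200 total
```

Even with generous assumptions about how many DIMM pins are power or otherwise non-CPU-bound, the total lands far above today's sockets, which is the core of the cost argument.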