Thursday, December 9th 2021

12-channel DDR5 Memory Support Confirmed for Zen 4 EPYC CPUs by AMD

Thanks to a Linux driver update, we now know that AMD's upcoming Zen 4 based EPYC CPUs will support up to 12 channels of DDR5 memory, an upgrade over the current eight. The update to AMD's EDAC (Error Detection and Correction) driver contains details of the memory types supported by the company's upcoming server and workstation CPUs. Although this doesn't tell us much about what we'll see from the desktop platform, some of it might spill over to a future Ryzen Threadripper CPU.

The driver also reveals support for both RDDR5 and LRDDR5, i.e. Registered DDR5 and Load-Reduced DDR5. LRDDR5 is the replacement for LRDIMMs, which are used in current servers with very high memory densities. Although we don't know when AMD is planning to announce Zen 4, let alone the new EPYC processors, it's expected some time in the second half of next year.
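For rough context, peak theoretical bandwidth scales linearly with channel count and transfer rate. A quick back-of-the-envelope sketch, assuming DDR5-5200 modules (final EPYC speeds are not yet confirmed) and counting a "channel" as a full 64-bit-equivalent per-DIMM channel:

```python
# Back-of-the-envelope peak memory bandwidth (GB/s).
# Assumes a "channel" is 64 bits (8 bytes) wide, i.e. a DDR4-style
# channel or a paired 2x32-bit DDR5 DIMM; the DDR5-5200 speed is an
# assumption, not a confirmed spec.
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bytes_wide: int = 8) -> float:
    # transfers/s x bytes per transfer x channels -> GB/s
    return channels * mt_per_s * bytes_wide / 1000

current_epyc = peak_bandwidth_gbs(8, 3200)   # 8-channel DDR4-3200
zen4_epyc = peak_bandwidth_gbs(12, 5200)     # 12-channel DDR5-5200 (assumed)
print(f"Zen 3 EPYC: {current_epyc:.1f} GB/s")  # 204.8 GB/s
print(f"Zen 4 EPYC: {zen4_epyc:.1f} GB/s")     # 499.2 GB/s
```

Even before any clock-speed uplift beyond 5200 MT/s, the move from eight to twelve channels alone is a 50% increase in bus width.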
Source: Phoronix

63 Comments on 12-channel DDR5 Memory Support Confirmed for Zen 4 EPYC CPUs by AMD

#1
user556
64-bit channels, as has been the norm up till now, or 32-bit channels? Remember, DDR5 DIMMs are spec'd with dual 32-bit channels per DIMM.
#2
AnarchoPrimitiv
96 cores, 12-channel DDR5, 128 PCIe 5.0 lanes, 3D V-Cache around 1 GB... those will be some serious CPUs. Then they have 128 cores very soon after that, albeit with reduced cache, but still.
#3
Valantar
user556: 64-bit channels, as has been the norm up till now, or 32-bit channels? Remember, DDR5 DIMMs are spec'd with dual 32-bit channels per DIMM.
All industry speak about "channels" in DDR5 (outside of technical documentation) has been in the form of "analogous to DDR4 channels" i.e. one channel per DIMM/2 DIMMs, despite the DIMMs technically being dual channel themselves. Look at any Z690 spec sheet - they all say "dual channel" despite this being 2x2x32 rather than 2x64. It doesn't make much sense to speak of single 32-bit channels as channels once implemented given that they can never be separated out individually, but will always be paired. And, of course, there is no way they are reducing the number of memory channels on next-gen EPYC - servers love their bandwidth.
#4
user556
Valantar: All industry speak about "channels" in DDR5 (outside of technical documentation) has been in the form of "analogous to DDR4 channels" i.e. one channel per DIMM/2 DIMMs, despite the DIMMs technically being dual channel themselves. Look at any Z690 spec sheet - they all say "dual channel" despite this being 2x2x32 rather than 2x64.
Chipsets haven't contained the memory controller since the Core 2 days for Intel, and even longer for AMD. The marketing blurbs for motherboards probably aren't great reference sources for the actual channel count.
#5
Caring1
Updates to the SMCA also indicate AMD may be attempting a big little approach in the near future.
#6
Valantar
user556: Chipsets haven't contained the memory controller since Core2 days for Intel. Even older for AMD. The marketing blurbs for motherboards probably aren't great reference sources for actual channel count.
Okay, I thought this was strongly enough implied in what I wrote, but apparently not, so let me fix it:
Valantar: Look at any Z690 motherboard spec sheet
Motherboard spec sheets denote the supported number of memory channels for the relevant platform, as they need to communicate the capabilities of the platform in question - you can't make use of whatever the CPU provides without a compatible motherboard, after all. Z690 motherboard spec sheets universally use "dual channel" as their terminology, despite this being 2x2x32 and not 2x64. And sure, this is marketing, but it's also an example of how these things are spoken of in the industry more broadly - as I said, outside of technical documentation.

Edit: Intel even uses this terminology in their official spec database. Look at the "Max # of memory channels" line.
#7
Quitessa
With a bit of luck we might get 4-channel options on desktop Ryzen 7000 series then.
I'd be Quite happy to have a separate channel for each of the 2-4 slots on a board.
#8
Valantar
Quitessa: With a bit of luck we might get 4-channel options on desktop Ryzen 7000 series then. I'd be Quite happy to have a separate channel for each of the 2-4 slots on a board.
Doubtful - datacenter workloads are bandwidth limited, consumer workloads are almost never that. Doubling the channel count would also significantly increase the number of traces run from the CPU to RAM, (which is already ~500 traces for a 2-channel setup), making motherboard design a lot more complex and expensive (on top of the increases we've already seen for DDR5). Pretty much the only consumer workload that would benefit from 4-channel RAM would be iGPU gaming, and they're definitely not designing AM5 around that as a central use case.
#9
user556
Valantar: Edit: Intel even uses this terminology in their official spec database. Look at the "Max # of memory channels" line.
The DDR5 DIMM pinouts are 32/40-bit data width per channel. There's nothing that can change that. So that leaves it as a question of if channels can be paired electrically in board layout, or if it has to be done by pairing the physical 32-bit buses inside the memory controller.
#10
Wirko
Valantar: Doubtful - datacenter workloads are bandwidth limited, consumer workloads are almost never that. Doubling the channel count would also significantly increase the number of traces run from the CPU to RAM, (which is already ~500 traces for a 2-channel setup), making motherboard design a lot more complex and expensive (on top of the increases we've already seen for DDR5). Pretty much the only consumer workload that would benefit from 4-channel RAM would be iGPU gaming, and they're definitely not designing AM5 around that as a central use case.
Three-channel IMC? Why not, 3- and 6-channel systems exist, and this 12-channel Epyc is another one that doesn't respect the power of 2. Uhm, AMD was also the first with a non-power-of-2 number of cores.

AMD may have plans to include much more powerful iGPUs in their future chips, so why wouldn't they at least reserve space for that on the socket?
#11
Punkenjoy
The best solution will not be more channels on the motherboard but more cache on the CPU package. I expect to see a large HBM-based L4 or something similar before we see 4 channels on desktop (or at least we will already have a very large L3 by then). We could even see, at some point, CPU + RAM in a single package with no DIMMs on the motherboard. It would suck for upgradability, but it could lead to much higher memory performance with lower latency. The CPU could also be designed with a specific memory in mind, a little bit like the M1 is.

For iGPUs, they could use stacked cache dies to alleviate the bandwidth problem. Cache has 2 benefits: much higher bandwidth and much lower latency. The downside is that you only benefit when there is a cache hit, and it can slow down memory access a bit when there isn't (since you had to do the cache lookup first).
#12
Asni
12-channel DDR5-5200 MT/s would generate something like 500 GB/s of bandwidth. Simulation workloads with 96 cores and AVX-512 will be taken to another level.
#13
TheLostSwede
News Editor
Punkenjoy: The best solution will not be more channels on the motherboard but more cache on the CPU package. I expect to see a large HBM-based L4 or something similar before we see 4 channels on desktop (or at least we will already have a very large L3 by then). We could even see, at some point, CPU + RAM in a single package with no DIMMs on the motherboard. It would suck for upgradability, but it could lead to much higher memory performance with lower latency. The CPU could also be designed with a specific memory in mind, a little bit like the M1 is.

For iGPUs, they could use stacked cache dies to alleviate the bandwidth problem. Cache has 2 benefits: much higher bandwidth and much lower latency. The downside is that you only benefit when there is a cache hit, and it can slow down memory access a bit when there isn't (since you had to do the cache lookup first).
www.theregister.com/2021/12/08/alibaba_teases_a_breakthrough_chip/
#14
Valantar
user556: The DDR5 DIMM pinouts are 32/40-bit data width per channel. There's nothing that can change that. So that leaves it as a question of if channels can be paired electrically in board layout, or if it has to be done by pairing the physical 32-bit buses inside the memory controller.
But ... are you following this discussion? We are discussing whether or not the 12 channels in question here are most likely 12 channels in the technically accurate term for DDR5 of 12x32-bit, i.e. 6*n DIMM slots per socket, or if "channel" is used in the semi-colloquial way of "analogous to how DDR4 and previous channels have worked", i.e. "12*n DIMM slots per socket", as this is established nomenclature that, while no longer technically correct, would be easily misleading and confusing if we were to move to the technically correct terminology.

Neither I nor, from what I can tell, anyone else has at any point said that anything can "change" how DDR5 channels "work". I provided a source indicating that while DDR5 is technically 2x32/40-bit channels per DIMM, outside of technical documentation these two are spoken of as one "channel" even by platform owners and designers, as this is much simpler to get your mind around: due to previous standards, due to the incommensurability of DDR5 channels vs. channels in previous DDR standards, and due to these channels being always already paired up in any practical implementation, making the distinction meaningless outside of these highly technical situations.

This is a question of terminology and communication, not technical questions such as pairing of channels. They are paired on DIMMs, and are thus highly analogous to single channels on previous DDR standards, even if they are now two channels. Hence the question: does this mean 6*n DIMMs (technically correct), or 12*n DIMMs (if "channel" is used analogously to previous standards)? I'm inclined heavily towards the latter, as reducing the effective channel width (from 8*64=512 to 6*2*32=384) seems extremely unlikely to happen on bandwidth-starved server chips. Thus, "12 channels" here most likely means 12x2x32=768 bits of (pre-ECC) total bus width.
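The two readings of "12 channels" above work out to very different total bus widths; a quick sketch of that arithmetic (widths in bits, excluding ECC):

```python
# Total data-bus width (bits, pre-ECC) under the two readings of
# "12 channels" discussed above.
DDR4_EPYC_8CH = 8 * 64        # current 8-channel DDR4 EPYC
STRICT_12CH   = 12 * 32       # 12 literal 32-bit DDR5 sub-channels
PAIRED_12CH   = 12 * 2 * 32   # 12 per-DIMM (paired 2x32-bit) channels

print(DDR4_EPYC_8CH, STRICT_12CH, PAIRED_12CH)  # 512 384 768
# A strict reading would actually *shrink* the bus vs. today's EPYC,
# which is why the paired/per-DIMM reading is far more plausible.
```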
Wirko: Three-channel IMC? Why not, 3- and 6-channel systems exist, and this 12-channel Epyc is another one that doesn't respect the power of 2. Uhm, AMD was also the first with a non-power-of-2 number of cores.

AMD may have plans to include much more powerful iGPUs in their future chips, so why wouldn't they at least reserve space for that on the socket?
I'm sorry, but I don't see whatsoever how what you said here relates to my post that you quoted nor the post I was responding to. It's obviously entirely possible for AMD (and any other SoC vendor) to equip their chips with however many channels of RAM they want to, within the constraints of physics and manufacturing technology. That's not what I was addressing at all. I was responding to the possibility of AMD moving to a "4-channel" (which I interpreted as 4x2x32-bit channels, i.e. equivalent to what would have been a 4-channel layout on DDR4, not 4x32-bit channels). And for consumer use cases, outside of iGPU gaming, there is essentially no reason to go above two "channels" (i.e. 2*n DIMMs, or 2x2x32-bit DDR5 channels) on consumer motherboards. There are no workloads that truly benefit from this, while it would significantly affect motherboard complexity and thus price. This doesn't relate whatsoever to the theoretical possibility of non-power-of-2 channel layouts, core layouts, or anything in that direction.

As for why they wouldn't reserve space for that on the socket:
- Segmenting a single platform into multiple RAM channel widths is a really bad idea. Do all chips support all channels, or only some? How many motherboards will support each tier? You're suddenly requiring a massive increase in the SKUs produced by OEMs, as users will have different desires and needs.
- This segmentation also doesn't make much sense economically: iGPU gaming is generally a low-end endeavor. High RAM channel counts are expensive, and thus are a high end thing. So, this would lead us to a situation where high end motherboards are the best suited for low end applications, which ... yeah, that doesn't work. Nobody would buy those boards outside of some very, very niche use cases.
- High speed I/O is already ballooning pin counts on chips. Intel is up to 1700, AMD likely something similar on AM5, and that's with "dual channel" (2x2x32). Another two channels on top of that is another >500 pins on the CPU (can't find the number for DDR5, but DDR4 had 288 pins, and while some of those are power and other non-CPU-bound, most are connected to the CPU). That would necessitate a much larger socket, making board designs even more complicated, dense and expensive.
- All of this for something that doesn't benefit the vast majority of consumer applications whatsoever.
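The pin arithmetic in that last point can be sketched roughly; the per-channel figure below reuses the post's own DDR4 DIMM pin count as a loose ceiling, and is an illustrative assumption, not an official AM5/LGA1700 number:

```python
# Rough pin-cost estimate for adding two more 64-bit-equivalent memory
# channels to a consumer socket. Figures follow the post above and are
# assumptions, not official socket specs.
lga1700_pins = 1700       # current Intel consumer socket, per the post
pins_per_channel = 288    # DDR4 DIMM pin count, used as a loose ceiling
extra_channels = 2

extra_pins = extra_channels * pins_per_channel
print(extra_pins)                    # 576, i.e. the ">500 pins" above
print(lga1700_pins + extra_pins)     # a socket well past 2200 pins
```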
#15
Dredi
Valantar: All industry speak about "channels" in DDR5 (outside of technical documentation) has been in the form of "analogous to DDR4 channels" i.e. one channel per DIMM/2 DIMMs, despite the DIMMs technically being dual channel themselves. Look at any Z690 spec sheet - they all say "dual channel" despite this being 2x2x32 rather than 2x64. It doesn't make much sense to speak of single 32-bit channels as channels once implemented given that they can never be separated out individually, but will always be paired. And, of course, there is no way they are reducing the number of memory channels on next-gen EPYC - servers love their bandwidth.
Yeah, the correct way to count would be to say it has 2 memory controllers, each supporting up to 2 DIMMs. The fact that motherboard manufacturers and Intel can't get it right does not make the wrong way to indicate channel counts the right one.
#16
Valantar
Dredi: Yeah, the correct way to count would be to say it has 2 memory controllers, with each supporting up to 2 dimms. The fact that motherboard manufacturers and intel can’t get it right does not make the wrong way to indicate channel counts the right one.
Well, what is a correct technical term and what is a useful term for communication are not necessarily the same. Technically correct terminology is often unnecessarily detailed, even in ways that undermine the information conveyed as it is again reliant on further information and detail that isn't provided in shorthand communication. This is a perfect example of that - if we stuck to the correct terminology, any layperson hearing that motherboards now have "four DDR5 channels" vs. "two DDR4 channels" would be likely to think "that's twice as many, must be twice as good", despite the total channel width being the same. And given that "number of memory channels" has been how this has been spoken of for more than a decade, changing that now is confusing by itself, even if the alternative terminology then would be analogous to previous channel counts. Also, of course, there's no necessary relation between number of controllers and channels - AFAIK there's nothing stopping anyone from designing single 32-bit-channel DDR5 controllers, or for that matter controllers for any integer number of channels. If Intel suddenly moved to a single controller block with 4x32 channels, while AMD stuck with two 2x32-bit blocks, then we'd again have a situation of terminology misleading people. Thus, I think sticking to the not technically accurate but correctly indicative and comparable colloquial use of "channels" is the best approach going forward - no matter how many engineers this will annoy.
#17
user556
Valantar: We are discussing whether or not the 12 channels in question here are most likely 12 channels in the technically accurate term for DDR5 of 12x32-bit ...
Yep, all my comments have been exactly that. You pointed out Intel saying otherwise so I then came up with options for Intel to be merging channel pairs so as to make four channels into two with their 12 series CPUs.
#18
TheinsanegamerN
Valantar: But ... are you following this discussion? We are discussing whether or not the 12 channels in question here are most likely 12 channels in the technically accurate term for DDR5 of 12x32-bit, i.e. 6*n DIMM slots per socket, or if "channel" is used in the semi-colloquial way of "analogous to how DDR4 and previous channels have worked", i.e. "12*n DIMM slots per socket", as this is established nomenclature that, while no longer technically correct, would be easily misleading and confusing if we were to move to the technically correct terminology.

I (nor anyone else from what I can tell) have at no point said that anything can "change" how DDR5 channels "work". I provided a source indicating that while DDR5 is technically 2x32/40-bit channels per DIMM, outside of technical documentation, these two are spoken of as one "channel" even by platform owners and designers as this is much simpler to get your mind around both due to previous standards, due to the incommensurability of DDR5 channels vs. channels in previous DDR standards, and due to these channels being always already paired up in any practical implementation, making the distinction meaningless outside of these highly technical situations.

This is a question of terminology and communication, not technical questions such as pairing of channels. They are paired on DIMMs, and are thus highly analogous to single channels on previous DDR standards, even if they are now two channels. Hence the question: does this mean 6*n DIMMs (technically correct), or 12*n DIMMs (if "channel" is used analogously to previous standards)? I'm inclined heavily towards the latter, as reducing the effective channel width (from 8*64=512 to 6*2*32=384) seems extremely unlikely to happen on bandwidth-starved server chips. Thus, "12 channels" here most likely means 12x2x32=768 bits of (pre-ECC) total bus width.

I'm sorry, but I don't see whatsoever how what you said here relates to my post that you quoted nor the post I was responding to. It's obviously entirely possible for AMD (and any other SoC vendor) to equip their chips with however many channels of RAM they want to, within the constraints of physics and manufacturing technology. That's not what I was addressing at all. I was responding to the possibility of AMD moving to a "4-channel" (which I interpreted as 4x2x32-bit channels, i.e. equivalent to what would have been a 4-channel layout on DDR4, not 4x32-bit channels). And for consumer use cases, outside of iGPU gaming, there is essentially no reason to go above two "channels" (i.e. 2*n DIMMs, or 2x2x32-bit DDR5 channels) on consumer motherboards. There are no workloads that truly benefit from this, while it would significantly affect motherboard complexity and thus price. This doesn't relate whatsoever to the theoretical possibility of non-power-of-2 channel layouts, core layouts, or anything in that direction.

As for why they wouldn't reserve space for that on the socket:
- Segmenting a single platform into multiple RAM channel widths is a really bad idea. Do all chips support all channels, or only some? How many motherboards will support each tier? You're suddenly requiring a massive increase in the SKUs produced by OEMs, as users will have different desires and needs.
- This segmentation also doesn't make much sense economically: iGPU gaming is generally a low-end endeavor. High RAM channel counts are expensive, and thus are a high end thing. So, this would lead us to a situation where we high end motherboards are the best suited for low end applications, which ... yeah, that doesn't work. Nobody would buy those boards outside of some very, very niche use cases.
- High speed I/O is already ballooning pin counts on chips. Intel is up to 1700, AMD likely something similar on AM5, and that's with "dual channel" (2x2x32). Another two channels on top of that is another >500 pins on the CPU (can't find the number for DDR5, but DDR4 had 288 pins, and while some of those are power and other non-CPU-bound, most are connected to the CPU). That would necessitate a much larger socket, making board designs even more complicated, dense and expensive.
- All of this for something that doesn't benefit the vast majority of consumer applications whatsoever.
DDR5 channels are always 32 bits each, so you're looking at a total of 384 bits of memory bus, the equivalent of 6-channel RAM on older platforms.
#19
Valantar
user556: Yep, all my comments have been exactly that. You pointed out Intel saying otherwise, so I then came up with options for Intel to be merging channel pairs so as to make four channels into two with their 12th-gen CPUs.
But... There's no such thing as "merging channel pairs". There's the technical definition, and then there's how they are described and spoken of.
TheinsanegamerN: DDR5 channels are always 32 bits each, so you're looking at a total of 384 bits of memory bus, the equivalent of 6-channel RAM on older platforms.
I did that calculation in the post you quoted, yes. But I've also provided examples of industry actors, both motherboard makers and Intel themselves, saying Alder Lake has "two channels" of DDR5, despite it technically supporting four - so what you're saying, that they are "always" 32-bit, is only true in technical definitions, not in how this is spoken of and communicated. Also, with servers being massive bandwidth hogs and the main driver for DDR5 prioritizing bandwidth over latency, it seems extremely unlikely for AMD to reduce the channel count for their servers, even when accounting for the higher per-channel bit rate. Server vendors pay good money for bandwidth alone - just think of AMD's 8-core Epyc chips, which exist essentially only as a platform for 8-channel memory and PCIe lanes. So, while a reduction in bus width is the technically correct interpretation of this, it is extremely unlikely that this is accurate, and far more likely that the "channels" in question are aggregate/per-DIMM/paired channels.
#20
Punkenjoy
TheinsanegamerN: DDR5 channels are always 32 bits each, so you're looking at a total of 384 bits of memory bus, the equivalent of 6-channel RAM on older platforms.
ark.intel.com/content/www/us/en/ark/products/134599/intel-core-i912900k-processor-30m-cache-up-to-5-20-ghz.html

Intel states DDR5 as dual channel even though it's 2x32 x2.

The current convention is to name all DIMM slots linked to the same channel as a "channel", even if that contains 2x32-bit channels. Because when you think about it: if you load a 4-DIMM Z690 motherboard with 4 DDR5 DIMMs, that still gives you 2x32 x2 and not 2x32 x4. You get the exact same number of channels whether you populate 2 or 4 DIMMs (if you populate them correctly, that is...).

This is why people continue to use that as "channel", and why they do not count each 2x32 sub-channel as a channel. Simply because it would cause too much confusion.

EPYC currently has 8 channels, or 1 channel per CCD. Zen 4 EPYC will have up to 12 dies, so 12 channels (or 2x32 x12...) is what they will use. They won't go backwards on memory bandwidth because it's DDR5...
#21
Valantar
Punkenjoy: ark.intel.com/content/www/us/en/ark/products/134599/intel-core-i912900k-processor-30m-cache-up-to-5-20-ghz.html

Intel state DDR5 as dual channel even if it's 2x32 x2.

The current convention is to name all DIMM slots linked to the same channel as a "channel", even if that contains 2x32-bit channels. Because when you think about it: if you load a 4-DIMM Z690 motherboard with 4 DDR5 DIMMs, that still gives you 2x32 x2 and not 2x32 x4. You get the exact same number of channels whether you populate 2 or 4 DIMMs (if you populate them correctly, indeed...).

This is why this is what people continue to use as Channel and why they do not use the 2x32 DIMM channel as channel. Simply because it would cause too much confusion.

EPYC currently have 8 channel or 1 channel per CCD. Zen 4 EPYC will have up to 12 die so the 12 channel (or 2*32 *12....) is what they will use. They won't go back on memory bandwidth because it's DDR5...
They also run the risk of being accused of misleading marketing if they start pushing DDR5 as having twice as many channels as DDR4 without simultaneously mentioning that they are half the width, at which point you're looking at some really confusing PR. Being technically correct is not always the same as being able to communicate accurately.
#22
bonehead123
Intel & AMD to mobo mfgr's:

"All your memory channels are belong to US" (regardless of the technical foofoo's involved).... hahahaha :)
#23
Wirko
Valantar: I'm sorry, but I don't see whatsoever how what you said here relates to my post that you quoted nor the post I was responding to. It's obviously entirely possible for AMD (and any other SoC vendor) to equip their chips with however many channels of RAM they want to, within the constraints of physics and manufacturing technology. That's not what I was addressing at all. I was responding to the possibility of AMD moving to a "4-channel" (which I interpreted as 4x2x32-bit channels, i.e. equivalent to what would have been a 4-channel layout on DDR4, not 4x32-bit channels). And for consumer use cases, outside of iGPU gaming, there is essentially no reason to go above two "channels" (i.e. 2*n DIMMs, or 2x2x32-bit DDR5 channels) on consumer motherboards. There are no workloads that truly benefit from this, while it would significantly affect motherboard complexity and thus price. This doesn't relate whatsoever to the theoretical possibility of non-power-of-2 channel layouts, core layouts, or anything in that direction.

As for why they wouldn't reserve space for that on the socket:
- Segmenting a single platform into multiple RAM channel widths is a really bad idea. Do all chips support all channels, or only some? How many motherboards will support each tier? You're suddenly requiring a massive increase in the SKUs produced by OEMs, as users will have different desires and needs.
- This segmentation also doesn't make much sense economically: iGPU gaming is generally a low-end endeavor. High RAM channel counts are expensive, and thus are a high end thing. So, this would lead us to a situation where we high end motherboards are the best suited for low end applications, which ... yeah, that doesn't work. Nobody would buy those boards outside of some very, very niche use cases.
- High speed I/O is already ballooning pin counts on chips. Intel is up to 1700, AMD likely something similar on AM5, and that's with "dual channel" (2x2x32). Another two channels on top of that is another >500 pins on the CPU (can't find the number for DDR5, but DDR4 had 288 pins, and while some of those are power and other non-CPU-bound, most are connected to the CPU). That would necessitate a much larger socket, making board designs even more complicated, dense and expensive.
- All of this for something that doesn't benefit the vast majority of consumer applications whatsoever.
You mentioned doubling the number of channels, which wouldn't make sense, and I agree. I however said that there's a middle ground too, which is somewhat more probable than doubling. One reason for going to 3 64-bit channels could be the IGP; but core counts are going up too, in Epyc, TR and probably Ryzen as well, so all of them will benefit from a 50% increase in RAM data width.

Segmenting would be a minor marketing (and stock keeping) nightmare, that's right. But it does not preclude all CPUs and APUs from being compatible with all motherboards, and at worst, you'd have 2 channels instead of 3, if you pair wrong parts.

I suspect Intel has already reserved space for more RAM width on LGA1700/LGA1800. 500 or 600 new pins is a huge increase. We'll find the pinout diagram one day. As for DDR5, I can only find this: media-www.micron.com/-/media/client/global/documents/products/technical-marketing-brief/ddr5_key_module_features_tech_brief.pdf?la=en&rev=f3ca96bed7d9427ba72b4c192dfacb56
and it seems that the number of command/address pins has actually gone down, not up, between DDR4 and DDR5.

user556: 64-bit channels, as has been the norm up till now, or 32-bit channels? Remember, DDR5 DIMMs are spec'd with dual 32-bit channels per DIMM.
You're in a quantum state of being right and wrong at the same time. DDR5 is still new and the confusion about "how much is one channel" is ongoing, and manufacturers aren't of much help here. I think it would be wise to state the channel width when the number of channels is discussed.
#24
Patriot
I refer to them as half channels for a reason. DDR5 is less about doubling the channels and more about splitting them. While the count is doubled, the bit width/bandwidth is not.
Going from DDR4-3200 to DDR5-5200 while going to 12 half channels would increase bandwidth by ~22%, but core counts are increasing by 50-100% for Genoa/Bergamo.
So I agree that this does not seem logical, and that they are counting channels as full-width units of measure, not half channels.
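Patriot's figure checks out under those assumptions; a quick sketch comparing 8 full 64-bit DDR4-3200 channels against 12 32-bit DDR5-5200 half channels:

```python
# Bandwidth change from 8 full 64-bit DDR4-3200 channels to 12 half
# (32-bit) DDR5-5200 channels, per the comparison above (assumed speeds).
ddr4_gbs = 8 * 3200 * 8 / 1000    # 8 channels x 8 bytes  -> 204.8 GB/s
ddr5_gbs = 12 * 5200 * 4 / 1000   # 12 halves  x 4 bytes  -> 249.6 GB/s
increase = ddr5_gbs / ddr4_gbs - 1
print(f"{increase:.1%}")          # ~22%, nowhere near the core-count growth
```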
#25
Valantar
Wirko: You mentioned doubling the number of channels, which wouldn't make sense, and I agree. I however said that there's a middle ground too, which is somewhat more probable than doubling. One reason for going to 3 64-bit channels could be the IGP; but core counts are going up too, in Epyc, TR and probably Ryzen as well, so all of them will benefit from a 50% increase in RAM data width.
Ah, I see. Sorry for my antagonizing tone in that previous post btw, I should have worded that differently. Anyhow, I still don't think that would make sense - DDR5 already delivers that 50% bandwidth increase from clock speed alone after all, and promises to extend that to >2.6x (going by the fastest JEDEC DDR4, 3200, and the fastest proposed JEDEC DDR5 at 8400). Even if JEDEC DDR5-8400 is a few years out, there's plenty of bandwidth for consumer applications in a dual (dual) channel layout still - especially seeing how consumer applications don't really scale with memory bandwidth (TPU's own DDR4 scaling testing shows the same).
Wirko: Segmenting would be a minor marketing (and stock keeping) nightmare, that's right. But it does not preclude all CPUs and APUs from being compatible with all motherboards, and at worst, you'd have 2 channels instead of 3, if you pair wrong parts.
That's true, but that's a tech support nightmare as well (why isn't my third DIMM working?), plus a rolling wave of stockkeeping hell from 2-3-4-6-DIMM kits of every variation of RAM out there, etc. Again: not worth the hassle when the clock speed alone delivers such an increase and the applications just don't scale.
Wirko: I suspect Intel has already reserved space for more RAM width on LGA1700/LGA1800. 500 or 600 new pins is a huge increase. We'll find the pinout diagram one day. As for DDR5, I can only find this: media-www.micron.com/-/media/client/global/documents/products/technical-marketing-brief/ddr5_key_module_features_tech_brief.pdf?la=en&rev=f3ca96bed7d9427ba72b4c192dfacb56
and it seems that the number of command/address pins has actually gone down, not up, between DDR4 and DDR5.

I sincerely doubt that. PCIe 5.0 eats quite a few pins, as does all the other native high speed I/O - not necessarily for more data pins, but likely to be able to better isolate signal pins to lower interference and ensure signal integrity (which might be the reason for LGA1700 CPUs having peculiar patterns (rings, lines) of different-sized pads on the bottom) - not to mention the power delivery for these chips that are now actually rated at 241W (rather than just consuming that much anyhow). According to Anandtech DDR5 is still 288 pins, so at least that hasn't gone up. But if I were to make a guess, the signal integrity needs of DDR5 and PCIe 5.0 alone represent the majority of the new pins on that socket.