Thursday, January 18th 2024

SK Hynix Throws a Jab: CAMM is Coming to Desktop PCs

In a surprising turn of events, SK Hynix has hinted that the Compression Attached Memory Module (CAMM) standard, initially designed for laptops, could be introduced to desktop PCs. The revelation came in a comment an SK Hynix representative made to the Korean tech outlet ITSubIssub at CES 2024 in Las Vegas. According to the representative, the first implementation is underway, but no specific details were given. CAMM, an innovative memory standard developed by Dell in 2022, was certified to replace SO-DIMM as the official standard for laptop memory. Its transition to desktop PCs, however, could significantly disrupt the desktop memory market. Unlike the vertical DRAM sticks currently in use, CAMM modules lie flat and are screwed into a socket, a design change that would necessitate a complete overhaul of the desktop motherboard layout.

The thin, flat design of the CAMM modules could also limit the number that can be installed on an ATX board. The desktop version of the standard, CAMM2, was announced by JEDEC just a month ago. It is designed for DDR5 memory, but it is expected to become mainstream with the introduction of DDR6 around 2025. While CAMM allows for higher speeds and densities in mobile memory, its advantages for desktops over traditional memory sticks are yet to be fully understood. Low-power CAMM modules could offer energy savings, but this is typically more relevant for mobile devices than desktops. As we move towards DDR6 and DDR7, more information about CAMM for desktops will be needed to understand its potential benefits. JEDEC's official statement on the new standard indicates that "DDR5 CAMM2s are intended for performance notebooks and mainstream desktops, while LPDDR5/5X CAMM2s target a broader range of notebooks and certain server market segments." So we can expect to see CAMM2 in both desktops and some server applications.
Source: ExtremeTech

41 Comments on SK Hynix Throws a Jab: CAMM is Coming to Desktop PCs

#1
R0H1T
Yeah, nope, not until DDR6 I bet. As it is, Intel & AMD are having (some) issues moving the latest DDR5-based parts & this will only exacerbate that situation!
#2
usiname
Yes, if this doesn't make the DIMMs or the mobos more expensive and we get better latency compared to standard full-size DIMMs.
#3
Dirt Chip
Yes, will use, as long as you can find some without A@&!ing RGB on them.
#4
AusWolf
So instead of a slot, there's a socket and screws. It sounds great for laptops, but what difference does it make for desktops?
#5
Chaitanya
It certainly can be useful on the Mini-ITX form factor, with modules being moved to the rear side of the motherboard.
#6
Dirt Chip
Most importantly, what is the cost/perf of CAMM vs DDR4/5?
#7
Geofrancis
Chaitanya: It certainly can be useful on the Mini-ITX form factor, with modules being moved to the rear side of the motherboard.
Moving the modules to the back side of the board is the only way I could see this working on desktop.
#8
TechLurker
Given that some mATX and ITX boards already move M.2 drives to the back, adding CAMMs to the back of all mobos and potentially shortening the traces to the CPU could be an option, potentially bringing lower latency too. It would also clear up some topside real estate to move M.2 slots closer to the CPU, also reducing trace lengths while permitting more redrivers, or adding M.2-style WiFi/BT options in the former spaces between PCIe slots. Or maybe capitalize on ever-faster NVMe as cache/RAMDisk using the M.2 closest to the CPU, while still using the current first NVMe M.2 slot as the main drive. Or they can pull a page from ASUS's Strix ITX board and just create an M.2 sandwich stack next to the CPU, or M.2 "RAM-cards" like ASUS already does with their regular flagship boards.

This would also allow for larger CPUs on existing mATX and ATX standards, such as newer Threadripper mATX or ATX boards, or slightly larger next-gen CPUs with more lanes as add-in cards come back into vogue: streaming cards, storage cards, maybe an audio card, a future dedicated AI card (or a second GPU used for AI purposes), etc.

Sure, the only loss would be in RGB details, since no more RGB or thematic RAM heatsinks, but with in-computer LCD/LED/OLED screens apparently becoming the newest, hottest trend and cheap enough to implement on various fans and cooler tops, the extra real estate topside would allow for larger waterblocks with screens that could cool just the CPU, or the CPU and VRMs, or the CPU, VRMs, and the NVMe drive next to the CPU.
#9
R0H1T
Geofrancis: Moving the modules to the back side of the board is the only way I could see this working on desktop.
And that generally wouldn't work with desktops because the physical distance between CPU & memory needs to be relatively short!
#10
WonkoTheSaneUK
R0H1T: And that generally wouldn't work with desktops because the physical distance between CPU & memory needs to be relatively short!
Which means they could be arranged around the back of the CPU socket. Can't get shorter than a thru-board via connection!
#11
Chomiq
Unless it comes with super duper RGB it will fail.
#12
R0H1T
You'll probably also have to get new cases then & thicker(?) boards. Can't imagine installing new memory as with the current board layouts!
#13
rv8000
No thanks, cooling requirements for faster kits and/or overclocking will be very, VERY difficult to meet by slapping them on the rear of the motherboard.
#14
TheLostSwede
News Editor
Chaitanya: It certainly can be useful on the Mini-ITX form factor, with modules being moved to the rear side of the motherboard.
Why not do this on all motherboards? You could place one on each side of the CPU socket and move the CPU socket slightly further away from the power regulation circuitry. No need for the CPU coolers to have clearance for the RAM any more.

You can see it in this video, although he doesn't remove the "shim".

AusWolf: So instead of a slot, there's a socket and screws. It sounds great for laptops, but what difference does it make for desktops?
There's no socket, instead there's a "shim" with connectors that connects the pads on the CAMM to the pads on the motherboard.
Dirt Chip: Most importantly, what is the cost/perf of CAMM vs DDR4/5?
Apparently the latency is improved. Cost shouldn't be any higher and it's using DDR5 chips in this instance, so no difference there either.
WonkoTheSaneUK: Which means they could be arranged around the back of the CPU socket. Can't get shorter than a thru-board via connection!
That won't work mechanically.
rv8000: No thanks, cooling requirements for faster kits and/or overclocking will be very, VERY difficult to meet by slapping them on the rear of the motherboard.
Why? RAM doesn't get very hot and it would be super easy to put a heatsink on the CAMM modules, just like on normal DIMMs.
#15
rv8000
TheLostSwede: Why? RAM doesn't get very hot and it would be super easy to put a heatsink on the CAMM modules, just like on normal DIMMs.
Unless it's a dual-chamber case, there's little to no airflow on the backside of the motherboard tray; heat radiating off the back of the CPU socket is going to cook the ICs. The more they go over 50°C, the less stability, less frequency, etc.

I'd be willing to bet no modern or revised DDR5 coming in the next year would be able to provide high-frequency kits without crippling latency in this kind of format, even if this is meant to be implemented much further down the road. Without much more efficient DRAM, heat will definitely be a problem on top of being a frequency-limiting factor.
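As a rough illustration of the cooling dispute, here is a crude steady-state estimate in Python. The module power draw and thermal-resistance figures are assumed round numbers for illustration, not measurements of any CAMM product:

```python
# Crude steady-state estimate of a memory module's temperature rise above
# ambient: dT = P * theta. The power draw and thermal resistances below
# are assumed round numbers for illustration only.

def temp_rise_c(power_w, theta_c_per_w):
    """Steady-state temperature rise above ambient for a given power."""
    return power_w * theta_c_per_w

module_power_w = 5.0   # assumed draw of a fully loaded DDR5 module
theta_airflow = 6.0    # C/W with a heatspreader and some airflow
theta_stagnant = 15.0  # C/W in still air behind a motherboard tray

print(temp_rise_c(module_power_w, theta_airflow))   # 30.0 C above ambient
print(temp_rise_c(module_power_w, theta_stagnant))  # 75.0 C above ambient
```

Under these assumed numbers the same module runs some 45 °C hotter in stagnant air, which is the crux of the disagreement: whether rear-mounted modules would see any airflow at all.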
#16
WonkoTheSaneUK
At CES, some motherboard manufacturers showed boards with all the power connectors on the back of the board.
These require new cases, so I'd expect such cases to change to accommodate cooling for rear-mounted RAM if that's the way things go.
#17
Chomiq
TheLostSwede: You can see it in this video, although he doesn't remove the "shim".
Actually he does, accidentally:
#18
TheLostSwede
News Editor
Chomiq: Actually he does, accidentally:
I didn't watch that far :p
At least that makes it very clear that the part that has the biggest chance of getting accidentally damaged can be swapped out.
rv8000: Unless it's a dual-chamber case, there's little to no airflow on the backside of the motherboard tray; heat radiating off the back of the CPU socket is going to cook the ICs. The more they go over 50°C, the less stability, less frequency, etc.

I'd be willing to bet no modern or revised DDR5 coming in the next year would be able to provide high-frequency kits without crippling latency in this kind of format, even if this is meant to be implemented much further down the road. Without much more efficient DRAM, heat will definitely be a problem on top of being a frequency-limiting factor.
Yeah no, if that was the case, then laptops would be dying every five minutes.

The latency isn't about the chips themselves, but rather between the memory module and the CPU socket. Two different things. Sorry if that wasn't clear.
#19
rv8000
TheLostSwede: I didn't watch that far :p
At least that makes it very clear that the part that has the biggest chance of getting accidentally damaged can be swapped out.


Yeah no, if that was the case, then laptops would be dying every five minutes.

The latency isn't about the chips themselves, but rather between the memory module and the CPU socket. Two different things. Sorry if that wasn't clear.
Show me a laptop running DDR5-8000 at C36 1.45 V; you're clearly misunderstanding the premise.

Don't forget the CAMM modules would be sitting next to a 100-250 W CPU instead of one running at 6-50 W.
#20
R0H1T
There's also the fact there'd be fewer modules? So more heat/density & less frequency or margin for OCing, if any!
#21
AnotherReader
TechLurker: Given that some mATX and ITX boards already move M.2 drives to the back, adding CAMMs to the back of all mobos and potentially shortening the traces to the CPU could be an option, potentially bringing lower latency too. It would also clear up some topside real estate to move M.2 slots closer to the CPU, also reducing trace lengths while permitting more redrivers, or adding M.2-style WiFi/BT options in the former spaces between PCIe slots. Or maybe capitalize on ever-faster NVMe as cache/RAMDisk using the M.2 closest to the CPU, while still using the current first NVMe M.2 slot as the main drive. Or they can pull a page from ASUS's Strix ITX board and just create an M.2 sandwich stack next to the CPU, or M.2 "RAM-cards" like ASUS already does with their regular flagship boards.

This would also allow for larger CPUs on existing mATX and ATX standards, such as newer Threadripper mATX or ATX boards, or slightly larger next-gen CPUs with more lanes as add-in cards come back into vogue: streaming cards, storage cards, maybe an audio card, a future dedicated AI card (or a second GPU used for AI purposes), etc.

Sure, the only loss would be in RGB details, since no more RGB or thematic RAM heatsinks, but with in-computer LCD/LED/OLED screens apparently becoming the newest, hottest trend and cheap enough to implement on various fans and cooler tops, the extra real estate topside would allow for larger waterblocks with screens that could cool just the CPU, or the CPU and VRMs, or the CPU, VRMs, and the NVMe drive next to the CPU.
NAND's only strength is high bandwidth, and its latency, compared to DRAM, is abysmal. In fact, its bandwidth is actually lower than DRAM's too, as NAND's bandwidth relies on accessing many devices in parallel, whereas DRAM's bandwidth can be delivered by a single device. NAND can never be a RAM disk. As far as latency is concerned, decreasing the distance will help, but not as much as you might think. Propagation delay is a much smaller contributor to DRAM latency than other factors inherent to DRAM. Methods to improve the average latency of DRAM (link to PDF) have been proposed, but as far as I know, they haven't been implemented.
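The propagation-delay point can be checked with quick arithmetic. The Python sketch below compares the flight time over a few centimetres of trace against the CAS latency of a DDR5-6000 CL36 kit; the trace length, velocity factor, and speed grade are illustrative assumptions:

```python
# Back-of-envelope comparison: signal propagation delay across a memory
# trace versus DDR5 CAS latency. All figures (trace length, velocity
# factor, speed grade, CL) are illustrative assumptions.

C = 3.0e8  # speed of light in a vacuum, m/s

def propagation_delay_ns(trace_len_m, velocity_factor=0.5):
    """One-way flight time along a PCB trace; signals in FR4 travel ~0.5c."""
    return trace_len_m / (C * velocity_factor) * 1e9

def cas_latency_ns(data_rate_mt_s, cl):
    """CAS latency in ns; the DDR clock runs at half the data rate."""
    clock_hz = data_rate_mt_s * 1e6 / 2
    return cl / clock_hz * 1e9

trace_ns = propagation_delay_ns(0.05)  # ~5 cm trace -> roughly 0.33 ns
cas_ns = cas_latency_ns(6000, 36)      # DDR5-6000 CL36 -> 12 ns
print(f"propagation ~{trace_ns:.2f} ns vs CAS {cas_ns:.1f} ns")
```

Even halving the trace length saves only a fraction of a nanosecond against a ~12 ns CAS delay, which is why shorter traces mostly help signal integrity (and thus attainable frequency) rather than latency.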
#22
TheLostSwede
News Editor
rv8000: Show me a laptop running DDR5-8000 at C36 1.45 V; you're clearly misunderstanding the premise.

Don't forget the CAMM modules would be sitting next to a 100-250 W CPU instead of one running at 6-50 W.
No, I did not. With a CKD there's no need for excess voltages at those kinds of speeds.
www.techpowerup.com/317612/patriot-memory-at-2024-ces-14gb-s-gen-5-ssds-usb4-prototypes-ddr5-memory-with-ckd

Your CPU doesn't have a cooler? Also, I guess you've missed out on gaming laptops with 100W+ GPUs in them that sit next to the RAM?
AnotherReader: NAND's only strength is high bandwidth, and its latency, compared to DRAM, is abysmal. In fact, its bandwidth is actually lower than DRAM's too, as NAND's bandwidth relies on accessing many devices in parallel, whereas DRAM's bandwidth can be delivered by a single device. NAND can never be a RAM disk. As far as latency is concerned, decreasing the distance will help, but not as much as you might think. Propagation delay is a much smaller contributor to DRAM latency than other factors inherent to DRAM. Methods to improve the average latency of DRAM (link to PDF) have been proposed, but as far as I know, they haven't been implemented.
CXL and similar things can be used for RAM though, but it's unlikely to show up in consumer devices any time soon.
#23
AnotherReader
TheLostSwede: No, I did not. With a CKD there's no need for excess voltages at those kinds of speeds.
www.techpowerup.com/317612/patriot-memory-at-2024-ces-14gb-s-gen-5-ssds-usb4-prototypes-ddr5-memory-with-ckd

Your CPU doesn't have a cooler? Also, I guess you've missed out on gaming laptops with 100W+ GPUs in them that sit next to the RAM?


CXL and similar things can be used for RAM though, but it's unlikely to show up in consumer devices any time soon.
I concur; CXL is unlikely to show up for consumers anytime soon because consumers don't require that much DRAM. If I recall correctly, CXL uses PCIe; that would increase latency substantially compared to regular DIMMs, but for applications with large memory footprints, more memory, even if slower, would increase performance.
#24
rv8000
TheLostSwede: No, I did not. With a CKD there's no need for excess voltages at those kinds of speeds.
www.techpowerup.com/317612/patriot-memory-at-2024-ces-14gb-s-gen-5-ssds-usb4-prototypes-ddr5-memory-with-ckd

Your CPU doesn't have a cooler? Also, I guess you've missed out on gaming laptops with 100W+ GPUs in them that sit next to the RAM?


CXL and similar things can be used for RAM though, but it's unlikely to show up in consumer devices any time soon.
None of that is in the same realm as desktop; DDR5 up to 7000? No mention of timings (which are likely abysmal)?

The cooling scenario is entirely different, with parts consuming a fraction of what desktop parts use, all while having everything in a laptop strapped to a unified heatpipe/vapor-chamber cooler with blower fans making your ears bleed as soon as you put on a load that's going to max the available TDP.

The format is a terrible idea for desktops; heat will undoubtedly be an issue. Comparing DDR5-7000 with loose timings (C48+) at a low 1.1-1.2 V isn't the same thing as a desktop setup. Go put a Gen 4 NVMe drive on the back of an ITX board and see what happens to temps.
#25
TheLostSwede
News Editor
rv8000: None of that is in the same realm as desktop; DDR5 up to 7000? No mention of timings (which are likely abysmal)?
Actually, it's up to 8000 for now and will most likely go faster in the future. That was just the one thing that was posted here on TPU about it.
rv8000: The cooling scenario is entirely different, with parts consuming a fraction of what desktop parts use, all while having everything in a laptop strapped to a unified heatpipe/vapor-chamber cooler with blower fans making your ears bleed as soon as you put on a load that's going to max the available TDP.

The format is a terrible idea for desktops; heat will undoubtedly be an issue. Comparing DDR5-7000 with loose timings (C48+) at a low 1.1-1.2 V isn't the same thing as a desktop setup. Go put a Gen 4 NVMe drive on the back of an ITX board and see what happens to temps.
Whatever dude, you clearly don't understand the benefits and believe the PC desktop is a fixed thing that has never changed over the past 40+ years... :banghead:
Also, PCIe 4.0 NVMe drive thermals ≠ DDR5-8000 thermals, but again, whatever, you have clearly made up your mind, so no point continuing this discussion.