Thursday, May 23rd 2024
LPDDR6 LPCAMM2 Pictured and Detailed Courtesy of JEDEC
Yesterday we reported on DDR6 memory hitting new heights of performance, and it looks like LPDDR6 will follow suit, at least based on details in a JEDEC presentation. Like LPDDR5, LPDDR6 will be available as solder-down memory, but it will also come in a new LPCAMM2 module. The bus speed of LPDDR5 on LPCAMM2 modules is expected to peak at 9.2 GT/s based on JEDEC specifications, whereas LPDDR6 will extend this to 14.4 GT/s, an increase of over 50 percent. However, the only LPCAMM2 modules on the retail market today, which use LPDDR5X, come in at 7.5 GT/s, suggesting that launch speeds of LPDDR6 will end up well short of the peak figure.
There will be some other interesting changes with LPDDR6 CAMM2 modules: the module bus moves from 128 bits to 192 bits, and each module channel goes from 32 bits to 48 bits. Part of the reason for this is that LPDDR6 itself is moving to a 24-bit channel width, consisting of two 12-bit sub-channels, as mentioned in yesterday's news post. This might seem odd at first, but it's fairly simple in practice: LPDDR6 will have native ECC (Error Correction Code) or EDC (Error Detection Code) support, although it's not yet entirely clear how this will be implemented at the system level. JEDEC is also looking at developing a screwless mounting solution for CAMM2 and LPCAMM2 memory modules, but for now there's no clear solution in sight. We might also get to see LPDDR6 on LPCAMM2 modules on the desktop, although the presentation only mentions CAMM2 for the desktop, something we've already seen MSI working on.
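To put the new widths and speeds in perspective, here is a quick back-of-the-envelope calculation. The bus widths and transfer rates are the figures from the presentation as reported above; the resulting per-module bandwidth numbers are our own arithmetic, not values quoted by JEDEC.

```python
# Theoretical peak bandwidth: bus width (bits) x transfer rate (GT/s) / 8 bits per byte
def peak_bandwidth_gbs(bus_width_bits: int, rate_gts: float) -> float:
    return bus_width_bits * rate_gts / 8

# LPDDR5/5X LPCAMM2: 128-bit module
print(peak_bandwidth_gbs(128, 7.5))   # 120.0 GB/s -- today's fastest retail modules
print(peak_bandwidth_gbs(128, 9.2))   # 147.2 GB/s -- JEDEC peak for LPDDR5 on LPCAMM2

# LPDDR6 LPCAMM2: 192-bit module, i.e. four 48-bit module channels, each made up
# of two 24-bit LPDDR6 channels (which in turn are two 12-bit sub-channels)
print(peak_bandwidth_gbs(192, 14.4))  # 345.6 GB/s -- almost 3x today's modules
```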
Sources:
JEDEC (PDF), via @DarkmontTech
Comments:
Shorter, simplified traces mean better signal integrity and lower physical latency.
The 'jumble' of mutually incompatible standards is a bit concerning, however.
There are apparently some issues with stacking the SC CAMM2 modules.
It might be possible to put one CAMM2/LPCAMM2 on each side of the motherboard, though I haven't seen this proposed anywhere.
JEDEC's gonna have to get the mobo makers on board (no pun intended) to make the concept into a widely accepted and affordable standard though (just like the rear connector thing).
Otherwise, meh... If the capacity is large enough (128 GB or more), then yes, I would...
But people called me a moron xd
www.jedec.org/news/pressreleases/jedec-publishes-new-camm2-memory-module-standard
@TheLostSwede The CAMM modules have 8, 16 or 32 chips. What about ECC, which requires 25% more chips? It doesn't appear there is any space reserved for them.
When JEDEC announced CAMM2, they talked about 128-bit and 64-bit modules.
It seems that LPDDR6 will be moving to a 192-bit bus; when is it scheduled to hit the market?
Since JESD318 CAMM2 is stackable, I don't see a problem with having one module that is already stacked.
(And besides that, each subchannel has its own set of ranks, so "single" means four in total anyway.)
As for when this will hit the market, who knows. The DDR6 specs aren't expected to be done until mid-2025, with no mention of LPDDR6, but LP-type memory appears to be half a cycle ahead of DDR, so maybe the spec will be done this year, with availability in 2025-2026? Look at the picture I posted: Type A, B and C are all 128-bit; only Type D is 64-bit per module.
Also, DDR6/LPDDR6 will be 48-bit, not 32-bit.
2. Shorter traces can be accomplished on our current desktop PCBs by rotating the CPU so that the memory controller is in line with the DIMM slots. In the end, if you tried both and did your absolute best to shorten the traces by moving things around, we are talking about a difference that is damn near 5-7.5% at best (which is goddamn useless).
3. They make this confusing so that when normal people read it, they have little to no clue what's going on. It's literally just input and output through a wire, mainly concerning a PCB with a bunch of silicon DRAM chips on it and another piece of silicon with the entire CPU, including the IMC.
Also, if you have this on desktops, you get more room for a beefier CPU cooler too.
In this scenario, you've already hosed yourself out of having a good system by buying thin and light (it's inevitably throttle-happy, has a tiny battery that doesn't last, and it's so thermally limited that it never performs to its fullest extent, leaving the factory almost obsolete since it's designed to be replaced within a generational cycle anyway), so you might as well enjoy the security benefits of a secured-core PC-compliant system with soldered RAM.
It's unrelated nonsense, but this single-module back and forth somehow reminded me of AMD's old ganged memory mode, in which the memory controller operated the two 64-bit channels as a single, fat 128-bit channel. The idea at the time was that by doing this, you'd obtain better bandwidth utilization in single-threaded workloads.
I guess it shouldn't be surprising that in the end JEDEC adopted the extreme opposite with the pseudo-quad channel offered by DDR5 (which breaks each channel into two independent 32-bit subchannels).
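To make the subchannel arithmetic above concrete, here is a small sketch of how many bytes a single burst delivers. The DDR5 figures (32-bit subchannel, BL16) are standard; the LPDDR6 burst length is an assumption for illustration, since the final spec has not been published.

```python
# Bytes delivered by one burst: width (bits) x burst length / 8 bits per byte
def bytes_per_burst(width_bits: int, burst_length: int) -> int:
    return width_bits * burst_length // 8

# DDR5: one 32-bit subchannel at BL16 fills exactly one 64-byte cache line,
# so the two subchannels can service two independent requests in parallel.
print(bytes_per_burst(32, 16))  # 64

# Hypothetical LPDDR6: a 24-bit channel with an assumed BL of 24 would carry
# 72 bytes -- a 64-byte cache line plus 8 bytes that could hold the ECC/EDC
# metadata mentioned in the article. This pairing is speculation on our part.
print(bytes_per_burst(24, 24))  # 72
```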
Even on the server space, there's going to be a point where people won't want to deal with 64 channels of memory if the sticks can't get faster :D
For the desktop, from what I've gathered, people who want fast memory are sticking to a kit of two sticks anyway, since four sticks might not even be stable at XMP ever since we entered the DDR5 era. Even quad-channel 6400 MT/s CL32 seems hard to pull off... I wonder how many people on TPU are running four sticks in their personal machine, especially when the capacity of a single stick seems to quadruple every generation.
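As a side note on the 6400 MT/s CL32 figure: converting CAS latency from cycles to nanoseconds is simple arithmetic, shown below. The 7.5 GT/s line uses an assumed CL purely for illustration, since retail LPCAMM2 timings weren't part of this story.

```python
# CAS latency in nanoseconds: CL cycles divided by the command clock frequency;
# the clock runs at half the transfer rate (DDR), hence the factor of 2000.
def cas_latency_ns(cl: int, rate_mts: int) -> float:
    return cl * 2000 / rate_mts

print(cas_latency_ns(32, 6400))  # 10.0 ns for DDR5-6400 CL32
print(cas_latency_ns(36, 7500))  # 9.6 ns if a 7.5 GT/s module ran CL36 (assumed)
```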
What bothers me is that the form factor requires more horizontal space... by a big margin