Tuesday, April 8th 2025
"DRAM+" Non-Volatile Memory Combines DRAM Speed With Flash Persistence
Ferroelectric Memory Co. (FMC) and Neumonda have formed a partnership to commercialize "DRAM+," a ferroelectric (FeRAM) memory architecture combining DRAM's speed with non-volatile data retention. The technology replaces conventional capacitors with ferroelectric hafnium oxide (HfO₂) elements, allowing persistent storage without power while maintaining nanosecond access times. This hybrid technology addresses the performance gap between high-speed DRAM and storage-class memory like NAND flash. Unlike previous European DRAM ventures from Infineon and Qimonda that failed against commodity memory economics, FMC targets specialized applications valuing persistence and power efficiency. The HfO₂-based approach resolves limitations of previous FeRAM implementations using lead zirconate titanate (PZT), which couldn't scale beyond megabyte capacities.
Prototypes now demonstrate gigabit-range densities compatible with sub-10 nm fabrication of traditional DRAM made by Micron, Samsung, SK Hynix, and others. By eliminating refresh cycles, DRAM+ reduces static power consumption substantially compared to traditional one-transistor/one-capacitor DRAM cells. Primary applications include AI accelerators requiring persistent model weights, automotive ECUs with immediate startup requirements, and power-constrained medical implants. Neumonda will contribute its test platform suite Rhinoe, Octopus, and Raptor for electrical characterization and analytics at lower capital costs than standard semiconductor test equipment. No production timeline has been announced for commercial DRAM+ products.
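The refresh savings can be put in rough numbers. A back-of-envelope sketch below uses assumed, illustrative parameters for a conventional DDR-class device (row count, banks, and per-row refresh energy are not vendor-published figures for DRAM+); a non-volatile cell array that skips refresh avoids this standby power entirely:

```python
# Illustrative estimate of DRAM refresh power (all parameters assumed,
# not taken from the article or any vendor datasheet).
ROWS_PER_BANK = 65_536            # assumed row count per bank
BANKS = 16                        # assumed banks per device
ENERGY_PER_ROW_REFRESH_NJ = 1.0   # assumed nJ per row refresh
RETENTION_WINDOW_S = 0.064        # 64 ms retention window, typical for DDR

# Every row must be refreshed once per retention window.
refreshes_per_second = ROWS_PER_BANK * BANKS / RETENTION_WINDOW_S

# Energy per second spent on refresh alone, converted nJ -> mW.
refresh_power_mw = refreshes_per_second * ENERGY_PER_ROW_REFRESH_NJ * 1e-9 * 1e3

print(f"{refreshes_per_second:,.0f} row refreshes per second")
print(f"~{refresh_power_mw:.1f} mW spent on refresh alone, per device")
```

Under these assumptions, refresh alone costs on the order of tens of milliwatts per idle device, which is the static-power term a ferroelectric cell eliminates.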
Source:
via Tom's Hardware
17 Comments on "DRAM+" Non-Volatile Memory Combines DRAM Speed With Flash Persistence
If they manage to get commercial NVMe drives in the 240 GB+ range on the market for a reasonable price, I would definitely consider using one as a boot drive.
Even if it lacks sequential access speed, DRAM-like random access times on the boot drive would be pretty awesome!
I wonder what endurance is like.
Basically unlimited like DRAM, a few thousand R/W cycles like NAND, or somewhere in between...
Note that the article says Gigabit, not Gigabyte, so best case we're talking 1 GB chips, but I think it's more likely they've reached something like 512 MB chips, so possibly 16 GB memory modules for now.
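The bit-versus-byte arithmetic here is easy to trip over: chips are specified in bits, modules in bytes. A quick sketch (the 16-chip module layout is an assumption for illustration, not from the article):

```python
# Convert per-die density in gigabits to gigabytes, then scale to an
# assumed 16-chip memory module. Layout is hypothetical.
def chip_gbytes(density_gbit: float) -> float:
    """1 byte = 8 bits, so divide the bit density by 8."""
    return density_gbit / 8

CHIPS_PER_MODULE = 16  # assumed single-rank module layout

for gbit in (1, 4, 8):
    per_chip = chip_gbytes(gbit)
    print(f"{gbit} Gbit die -> {per_chip:.3f} GB/chip, "
          f"{per_chip * CHIPS_PER_MODULE:.0f} GB per 16-chip module")
```

So "gigabit-range" dies put individual chips at 128 MB to 1 GB, and plausible modules in the low tens of gigabytes.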
It seems to be a similar premise to 3D XPoint, but built on a different underlying technology. Intel's approach ended up being a middle ground: not really good enough to replace DRAM while being too expensive to replace NAND.
I don't think 3D XPoint was robust enough for industrial/automotive applications, either. So this could get a foothold in non-traditional computing domains and grow from there.
See:
en.wikipedia.org/wiki/Ferroelectric_RAM
I suspect embedded systems, myself. Low storage capacity needs, and they generally aren't cutting-edge on speed. As long as the chips can take the conditions.
Instead, you'd have to log in to the device and manually reboot it from the OS side.
There are obvious pros and cons to both methods, but consumers aren't used to that behaviour today, and it will cause a lot of confusion.
We also don't know anything about the expected performance of this memory, but looking at the article over at Tom's, it's being sold as yet another solution for something to do with "AI" compute.
But yeah, you're not wrong: if the performance isn't there, it will most likely end up in industrial embedded products in niche markets, but that's where current FeRAM is already being used. This should be faster and larger in capacity, and again, looking at the Tom's article, it seems like they're expecting to be able to ramp it up to gigabyte sizes, but I very much doubt we'll see this in something like an SSD any time soon.
I can't wait to start storing data on stationary rust :p
On newer devices, with the not-shockingly-unsensible Modern Standby idea, there's no ACPI transition involved (I believe) and the Linux kernel (again, take it as unverified) just powers hardware devices down on its own, so they essentially remain running in a low power state. Quality of implementation varies, and if implemented right, outcomes could end up the same (all ACPI does is ask the firmware to also power down stuff, after all), yet in practice, what I'm seeing on our newish IdeaPad (Lenovo) is that draw has often been in the 20%/day realm under Linux. (Haven't tried Windows.) And then, wakeup isn't even faster and S3 cannot be selected anymore. Ugh. :nutkick:
I haven't seen people refer to hibernation as sleep, and this is where RAM contents would decay (and not be reused). But if manufacturers were to properly implement their S0ix features for once, hibernation taking some seconds would not be an issue! (Presumably most people use their device once per day or every few days, during which the device could simply remain in standby. The battery on here is tiny, 33 Wh or so stock, 20 Wh right now, so draw would be well under 2% per day on typical devices if any advanced, "modern" features were disabled.)
So, I could read your post as either wakeup from standby taking seconds, which I've heard people in forums complain about and which is dumb (it also won't be helped by this technology), or as wakeup from hibernation taking a couple of seconds, which is a very first-world problem to complain about to begin with and wouldn't matter as much if standby worked as well as it already could.

My not-so-snarky take on this (as a layman): right now it's expensive any which way, and for how cheap it will become in five to ten years, you'll only get guesses, which might be off by too much.

As funny as that sounds (I'd indeed chuckle), please don't confuse rust with oxides in general; copper, aluminium, or even calcium oxides are all hardly rust. (Aluminium, mayyyybe; the others, no.)

Yes. Exactly my (superficial) take. Wonder storage that does it all would upset the industry quite a bit, but for now, I would've been quite happy with a smallish Optane area (32 GiB is plenty!) on existing storage or even mainboards. First off, sensible operating systems may fit quite well into that space, even including applications (granted, when not using Btrfs), and then there'd still be a couple of gigabytes of power-loss-safe write cache that could absorb oh-so-much journalling (file-system technology) and logging writes that currently all go to NAND. It could cut down immensely on write amplification. Even just a couple of gigabytes of that would be neat. Or, you know, some capacitor-based (or whatever) data-dump assistant that would write RAM (or the last 8–16 GiB or so of it, with today's vast capacities) to some set place on disk whenever unexpected power loss or even just a power-down (crash) occurs.
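The write-amplification point can be sketched in rough numbers. The model below is a simplification with assumed sizes (page size, record size, write count are illustrative, not measured): many tiny journal records each forcing a whole NAND page program, versus the same records coalesced in a persistent cache and flushed page-aligned:

```python
# Illustrative model of how a power-loss-safe write cache reduces NAND
# write amplification. All sizes are assumptions for illustration.
NAND_PAGE_BYTES = 16 * 1024   # assumed NAND program granularity
LOG_WRITE_BYTES = 256         # assumed size of a tiny journal record
N_WRITES = 10_000             # number of journal records

payload = N_WRITES * LOG_WRITE_BYTES

# Worst case without a cache: every tiny record programs a full page.
nand_bytes_uncached = N_WRITES * NAND_PAGE_BYTES

# With a persistent cache: records are coalesced, then flushed
# page-aligned, so only the final flush touches NAND.
pages_needed = -(-payload // NAND_PAGE_BYTES)   # ceiling division
nand_bytes_cached = pages_needed * NAND_PAGE_BYTES

print(f"write amplification without cache: {nand_bytes_uncached / payload:.0f}x")
print(f"write amplification with cache:    {nand_bytes_cached / payload:.2f}x")
```

Under these assumptions the uncached path amplifies writes 64x, while the coalesced path stays near 1x, which is the "absorb journalling and logging writes" effect described above.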
Now, I don't know if this new DRAM+ can be 3D packaged in the same way, but if so, then I would expect the packages to be in the Gigabyte range.
Is that practical for replacing NAND? Not really. But looking back at Optane, it was originally packaged on DIMMs for non-volatile RAM backup. If, and only if, the performance of DRAM+ is reasonably close to standard RAM, then it could replace rather than supplement system RAM. I can also see it replacing flash memory on smaller drives, especially for single-use embedded systems, where the RAM and storage could be combined. However, I doubt that will happen unless it can reach significant production volume.
There is a tipping point that Optane never reached, where your cycle of 'Increased Production Volume -> Lower Unit Cost -> Increased Adoption Rate -> Increased Production Volume' becomes self-reinforcing. Hopefully, if this replaces system RAM it could reach that point.