Wednesday, September 26th 2018
ASUS DDR4 "Double Capacity DIMM" Form-factor a Workaround to Low DRAM Chip Densities
32-gigabyte DDR4 UDIMMs are a reality. Samsung recently announced the development of a 32 GB DDR4 dual-rank UDIMM using higher-density DRAM chips. Those chips, however, are unlikely to be widely available anytime soon, a situation compounded by reports of scumbaggery on Samsung's part. In the midst of all this, motherboard major ASUS designed its own non-JEDEC UDIMM standard, called "Double Capacity DIMM" or DC DIMM, with the likes of G.Skill and Zadak designing the first modules. The point of these modules is to max out the CPU memory controller's capacity limit despite the motherboard having fewer memory slots. Possible use-cases include LGA1151 mini-ITX motherboards with just one slot per memory channel (two slots in all), or certain LGA2066 boards with just four slots (one slot per channel).
There is no word on the memory chip configuration of these modules, but it's highly likely they are dual-rank. The first DDR4 DC modules could be 32 GB, letting you max out the memory controller limit of 8th gen and 9th gen Core processors with just two modules. ASUS is heavily marketing this standard with its upcoming motherboards based on Intel's Z390 Express chipset, so it remains to be seen if other ASUS motherboards (or other motherboards in general) support the standard. Ironically, the Zadak-made module shown in ASUS marketing materials uses DRAM chips made by Samsung.
Source:
VideoCardz
38 Comments on ASUS DDR4 "Double Capacity DIMM" Form-factor a Workaround to Low DRAM Chip Densities
Too bad you still can't put 128 GB on a mainstream consumer motherboard.
@xkm1948 imagine the possibilities
The statements that X amount of memory is not supported are based on the highest capacity available at the time, just like how in the P55 era 16 GB was the stated maximum, yet when 8 GB sticks came out, most systems did actually support 32 GB without issues.
I would vote for removing the two DIMM slots closest to the CPU so that CPU coolers could have more space there.
But there may be bad news. They will most likely remove the furthest slots, not the closest ones, because if you kept the furthest slots and put a double-capacity module there, you would get a much longer distance between the CPU and the RAM chips. Imagine a motherboard with 6 slots on one side (the traditional positioning on consumer boards): if you plug this RAM into slots 3 and 4, it is technically similar to occupying slots 3 through 6.
Shrug.
If not for that - sure, we could move the RAM wherever we wanted.
Putting memory as close to the processor as possible is one of the largest engineering problems in computers today ("slightly" more serious than just colliding coolers :p).
Also, the cooling on that side will suck in an ATX case.
Also, the back side of mobo usually has hardly any airflow, which would be a disaster for RAM. It's bad enough we started putting NVMe drives there.
But sure, in most devices (especially passively cooled ones) both sides of the PCB are used. Current SSDs are not even close to RAM performance, though (Optane is getting there).
In a typical database server, you have an array of SCSI or SSD drives, then a fast SSD cache, and then sometimes a RAM cache as well. And it's still visibly slower than an in-memory alternative.
Think about a humble JOIN of 2 tables on a single equality condition.
How this works in a disk-based database: the engine pulls those 2 columns with row ids into RAM, performs the JOIN, and then pulls the actual data by row id. This data is kept in RAM until you discard it. If it doesn't fit... it's put back onto the drives...
Think about the amount of I/O operations, memory allocation and so on, in a single JOIN.
Write a large query, with multiple joins, aggregations, analytic functions and so on, and it ends up doing tens of disk<->RAM cycles.
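To make that flow concrete, here is a minimal, runnable Python sketch of the disk-style JOIN described above. The tables are plain dicts standing in for on-disk storage, and read_column()/fetch_rows() are made-up helpers (not any real engine's API); in a real database each of those calls would be a disk<->RAM round trip.

```python
# Toy model of a disk-based JOIN: dicts stand in for on-disk tables,
# and every read_column()/fetch_rows() call would be real I/O in practice.

def read_column(table, column):
    """First round trip: pull only (row_id, join_key) pairs into RAM."""
    return [(row_id, row[column]) for row_id, row in table.items()]

def fetch_rows(table, row_ids):
    """Second round trip: pull the full rows back by row id."""
    return [table[row_id] for row_id in row_ids]

def disk_style_join(table_a, table_b, key_a, key_b):
    # Step 1: disk -> RAM, join columns only.
    left = read_column(table_a, key_a)
    right = read_column(table_b, key_b)

    # Step 2: the JOIN itself happens entirely in RAM (a simple hash join).
    index = {}
    for rid, key in right:
        index.setdefault(key, []).append(rid)
    matched = [(lid, rid) for lid, key in left for rid in index.get(key, [])]

    # Step 3: disk -> RAM again, this time for the actual row data.
    # If the result doesn't fit in RAM, a real engine spills it back to disk.
    return [
        {**fetch_rows(table_a, [lid])[0], **fetch_rows(table_b, [rid])[0]}
        for lid, rid in matched
    ]

orders = {10: {"cust_id": 1, "item": "DC DIMM"}, 11: {"cust_id": 2, "item": "SSD"}}
customers = {1: {"cust_id": 1, "name": "Ann"}, 2: {"cust_id": 2, "name": "Bob"}}
print(disk_style_join(orders, customers, "cust_id", "cust_id"))
```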
Moreover, using in-memory databases you have many interesting optimization possibilities.
Example:
If you want to speed up joins in a normal database, you create indexes on the foreign key columns. This makes fast ( O(log n) ) searches possible. All very nice.
An in-memory database can instead store pointers to rows in other tables, so joining tables is almost free (O(1), constant time).
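A rough Python sketch of that difference (purely illustrative, not how any particular engine implements it): the first lookup walks a sorted index in O(log n), the second just follows a stored reference, which is the in-memory "pointer" idea.

```python
import bisect

# Index-style join: a sorted index on the foreign key gives O(log n) lookups.
customers = [{"cust_id": i, "name": f"cust{i}"} for i in range(1000)]
index_keys = [c["cust_id"] for c in customers]          # sorted index on cust_id

def lookup_via_index(cust_id):
    pos = bisect.bisect_left(index_keys, cust_id)       # O(log n) binary search
    return customers[pos]

# Pointer-style join: each order row holds a direct reference to its customer
# row, so the "join" is a constant-time dereference.
class Order:
    def __init__(self, item, customer):
        self.item = item
        self.customer = customer                        # the "pointer"

order = Order("DC DIMM", lookup_via_index(42))
print(order.customer["name"])                           # O(1): follow the pointer
```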
The speed of systems like SAP HANA is just mindblowing.
You don't want such a load on a database used by more people.
You don't use in-memory databases for storing data, especially on a production system. It's RAM, after all.