Wednesday, September 26th 2018

ASUS DDR4 "Double Capacity DIMM" Form-factor a Workaround to Low DRAM Chip Densities

32-gigabyte DDR4 UDIMMs are a reality. Samsung recently announced the development of a 32 GB DDR4 dual-rank UDIMM using higher-density DRAM chips. Those chips, however, are unlikely to be available anytime soon, a situation compounded by Samsung's reported scumbaggery along the way. In the midst of all this, motherboard major ASUS designed its own non-JEDEC UDIMM standard, called "Double Capacity DIMM" or DC DIMM, with the likes of G.Skill and Zadak designing the first modules. The point of these modules is to max out the CPU memory controller's capacity limit even on motherboards with fewer memory slots. Possible use-cases include LGA1151 mini-ITX motherboards with just one slot per memory channel (2 slots in all), or certain LGA2066 boards with just four slots (one slot per channel).

There is no word on the memory chip configuration of these modules, but it's highly likely they are dual-rank. The first DDR4 DC modules could be 32 GB, letting you max out the memory controller limit of 8th gen and 9th gen Core processors with just two modules. ASUS is heavily marketing this standard with its upcoming motherboards based on Intel's Z390 Express chipset, so it remains to be seen whether other ASUS motherboards (or motherboards from other brands) will support the standard. Ironically, the Zadak-made module shown in ASUS marketing materials uses DRAM chips made by Samsung.
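For a rough sense of the arithmetic (a minimal sketch; the slot counts and module sizes below are illustrative, not a spec), the appeal is simply that fewer slots can reach the same total capacity:

```python
# Back-of-the-envelope capacity math with illustrative numbers:
# a two-slot board filled with 32 GB double-capacity modules reaches
# the same total as a four-slot board with conventional 16 GB UDIMMs.
configs = {
    "mini-ITX, 2 slots x 32 GB DC DIMM": 2 * 32,
    "ATX, 4 slots x 16 GB UDIMM": 4 * 16,
}
for layout, total_gb in configs.items():
    print(f"{layout} -> {total_gb} GB total")
```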
Source: VideoCardz

38 Comments on ASUS DDR4 "Double Capacity DIMM" Form-factor a Workaround to Low DRAM Chip Densities

#1
gmn17
Nice idea, Asus
Too bad you still can't put 128GB on a mainstream consumer motherboard
#2
hat
Enthusiast
Seems... like an obvious move. Just make taller memory! Lord knows we've already had heatsinks that tall. I wonder why nobody thought of this before... though heatsink compatibility is once again going to become a factor for anyone using these modules and an aftermarket air cooler.

@xkm1948 imagine the possibilities
#4
TheLostSwede
News Editor
hat: Seems... like an obvious move. Just make taller memory! Lord knows we've already had heatsinks that tall. I wonder why nobody thought of this before... though heatsink compatibility is once again going to become a factor for anyone using these modules and an aftermarket air cooler.

@xkm1948 imagine the possibilities
Because it's hard to do this and retain signal integrity. They most likely need to use some kind of buffer chip, similar to RDIMMs.
#5
dj-electric
gmn17: Nice idea, Asus
Too bad you still can't put 128GB on a mainstream consumer motherboard
Well... yet. Technically, there shouldn't be any issue actually using 128GB with a Z370-based system. The CPU doesn't really care.
Statements that only X amount of memory is supported are based on the highest capacity available at the time - just like how in the P55 era 16GB was listed as the maximum, yet when 8GB sticks came out most systems did actually support 32GB without issues.
#6
randomUser
This would be very nice. RAM is usually much shorter than the CPU cooler in just about any case.
I would vote for removing the two DIMM slots closest to the CPU, so that CPU coolers could have more space there.

But there may be bad news: they will most likely remove the furthest slots, not the closest, because if you kept only the furthest slots and put double-capacity RAM in them, you would get a much longer distance between the CPU and the RAM chips. Imagine a motherboard with 6 slots on one side (traditional positioning on a consumer board): if you plug this RAM into slots 3 and 4, it will effectively be like occupying slots 3 through 6.
#7
lexluthermiester
dj-electric: Well... yet. Technically, there shouldn't be any issue actually using 128GB with a Z370-based system. The CPU doesn't really care.
Statements that only X amount of memory is supported are based on the highest capacity available at the time - just like how in the P55 era 16GB was listed as the maximum, yet when 8GB sticks came out most systems did actually support 32GB without issues.
This. Then again, why? Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment? Other than UHD raw video editing, I can't think of even one.
#8
Aquinus
Resident Wat-man
lexluthermiester: This. Then again, why? Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment? Other than UHD raw video editing, I can't think of even one.
Genomics like our buddy @xkm1948 does?
#9
cyneater
They will only cost an arm and a leg
#10
lexluthermiester
Aquinus: Genomics like our buddy @xkm1948 does?
I could be wrong here, but such a task would not benefit from that much RAM without also having additional CPUs to scale the usage. Am I right? If I understand things correctly, that kind of workload only benefits up to around 32GB to 48GB of RAM. Beyond that the returns diminish quickly, and beyond 64GB there's almost no benefit at all.
#11
Camm
You are already at 64GB on most mainstream boards (4x16GB), and 128GB on HEDT. It's edge-case scenarios that need more than this - edge cases that are likely already served by the workstation and server markets.

Shrug.
#12
Caring1
Is it possible to have dual rank on a single-sided module?
#13
notb
randomUser: I would vote for removing the two DIMM slots closest to the CPU, so that CPU coolers could have more space there.
But are you aware that the distance between RAM and CPU has a significant impact on memory latency and signal quality? :p

If not for that - sure, we could move the RAM wherever we wanted.
Putting memory as close to the processor as possible is one of the largest engineering problems in computers today ("slightly" more serious than just colliding coolers :p).
#14
Caring1
notb: But are you aware that the distance between RAM and CPU has a significant impact on memory latency and signal quality? :p

If not for that - sure, we could move the RAM wherever we wanted.
Putting memory as close to the processor as possible is one of the largest engineering problems in computers today ("slightly" more serious than just colliding coolers :p).
Then why don't they mount RAM slots on the back of the board, where they can be closer?
#15
notb
lexluthermiester: This. Then again, why? Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment? Other than UHD raw video editing, I can't think of even one.
google: "in-memory database"
#16
TheLostSwede
News Editor
cyneater: They will only cost an arm and a leg
That's not enough; you most likely have to throw in a kidney and part of your liver at current memory pricing...
Caring1: Then why don't they mount RAM slots on the back of the board, where they can be closer?
Because, space? Also, have you ever looked at the back of a PCB? You actually have quite a lot of components that go through all of the layers of the PCB, so you can't just place things wherever you'd like. It's not impossible to do what you're suggesting though - look at some of the NAS boards: they have memory slots on the rear of the PCB, but only SO-DIMMs, not full-size DIMMs.
Also, the cooling on that side will suck in an ATX case.
#17
lexluthermiester
notb: google: "in-memory database"
With the advent of hyper-fast SSDs, why would this be needed?
#18
diatribe
Caring1: Then why don't they mount RAM slots on the back of the board, where they can be closer?
Given the design of most modern computer cases, this may be a good idea. Most motherboard trays have a large cutout behind the CPU, which would allow easy access.

#19
TheinsanegamerN
randomUser: This would be very nice. RAM is usually much shorter than the CPU cooler in just about any case.
I would vote for removing the two DIMM slots closest to the CPU, so that CPU coolers could have more space there.

But there may be bad news: they will most likely remove the furthest slots, not the closest, because if you kept only the furthest slots and put double-capacity RAM in them, you would get a much longer distance between the CPU and the RAM chips. Imagine a motherboard with 6 slots on one side (traditional positioning on a consumer board): if you plug this RAM into slots 3 and 4, it will effectively be like occupying slots 3 through 6.
Why not just angle the slots downward, like laptops do? You could get them closer to the CPU socket and have better cooler compatibility.
#20
londiste
Could these be slots to leverage Optane DIMMs or something of the sort?
#21
notb
Caring1: Then why don't they mount RAM slots on the back of the board, where they can be closer?
Because of the standard. You have limited space behind the mobo.
Also, the back side of the mobo usually has hardly any airflow, which would be a disaster for RAM. It's bad enough that we started putting NVMe drives there.

But sure, in most devices (especially passively cooled ones) both sides of the PCB are used.
lexluthermiester: With the advent of hyper-fast SSDs, why would this be needed?
Current SSDs are not even close (Optane is getting there).
In a typical database server you have an array of SCSI or SSD drives, then a fast SSD cache, and then sometimes a RAM cache as well. And it's still visibly slower than an in-memory alternative.

Think about a humble JOIN of 2 tables on a single equality condition.
How this works in a disk database: the engine pulls those 2 columns with row IDs into RAM, performs the JOIN, and then pulls the data by row ID. This data is kept in RAM until you discard it. If it doesn't fit... it's put back on the drives...
Think about the number of I/O operations, memory allocations and so on - in a single JOIN.
Write a large query - multiple joins, aggregations, analytic functions and so on - and it involves tens of disk<->RAM cycles.

Moreover, in-memory databases give you many interesting optimization possibilities.
Example:
If you want to speed up joins in a normal database, you create indexes to use with foreign keys. This makes fast (O(log n)) searches possible. All very nice.
An in-memory database can store pointers to other tables instead, so joining tables is almost free (O(1), constant time).

The speed of systems like SAP HANA is just mind-blowing.
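A minimal Python sketch of the lookup-cost difference described above: a sorted, index-style probe (roughly O(log n) per row, like a B-tree index) versus following a pre-stored in-memory reference (O(1) per row). The customers/orders tables and column names are made up for illustration; this is not how HANA or any particular engine actually implements joins.

```python
# Toy comparison: index-style O(log n) lookups vs. O(1) "pointer" joins,
# using made-up customers/orders tables held entirely in memory.
import bisect
import random
import time

N = 500_000

# customers keyed by customer_id; every order references one customer
customers = {cid: {"customer_id": cid, "name": f"cust{cid}"} for cid in range(N)}
orders = [{"order_id": i, "customer_id": random.randrange(N)} for i in range(N)]

# 1) Index-style join: binary search over a sorted key list,
#    roughly the cost of a B-tree index probe - O(log n) per order row.
index_keys = sorted(customers)
t0 = time.perf_counter()
matches = 0
for o in orders:
    pos = bisect.bisect_left(index_keys, o["customer_id"])
    if pos < len(index_keys) and index_keys[pos] == o["customer_id"]:
        matches += 1
print(f"index-style join: {time.perf_counter() - t0:.3f}s, {matches} matches")

# 2) "Pointer" join: each order row keeps a direct reference to its
#    customer row, so resolving the relationship is a constant-time step.
for o in orders:
    o["customer"] = customers[o["customer_id"]]
t1 = time.perf_counter()
names = [o["customer"]["name"] for o in orders]
print(f"pointer join: {time.perf_counter() - t1:.3f}s, {len(names)} rows")
```

The absolute timings don't matter; the point is that an in-memory layout lets the engine pre-resolve relationships instead of searching an index (or hitting disk) on every query.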
#22
lexluthermiester
notb: Current SSDs are not even close (Optane is getting there).
In a typical database server you have an array of SCSI or SSD drives, then a fast SSD cache, and then sometimes a RAM cache as well. And it's still visibly slower than an in-memory alternative.

Think about a humble JOIN of 2 tables on a single equality condition.
How this works in a disk database: the engine pulls those 2 columns with row IDs into RAM, performs the JOIN, and then pulls the data by row ID. This data is kept in RAM until you discard it. If it doesn't fit... it's put back on the drives...
Think about the number of I/O operations, memory allocations and so on - in a single JOIN.
Write a large query - multiple joins, aggregations, analytic functions and so on - and it involves tens of disk<->RAM cycles.

Moreover, in-memory databases give you many interesting optimization possibilities.
Example:
If you want to speed up joins in a normal database, you create indexes to use with foreign keys. This makes fast (O(log n)) searches possible. All very nice.
An in-memory database can store pointers to other tables instead, so joining tables is almost free (O(1), constant time).

The speed of systems like SAP HANA is just mind-blowing.
Yes, but that's server work. I'm talking about desktop workflow scenarios.
#23
xkm1948
lexluthermiester: I could be wrong here, but such a task would not benefit from that much RAM without also having additional CPUs to scale the usage. Am I right? If I understand things correctly, that kind of workload only benefits up to around 32GB to 48GB of RAM. Beyond that the returns diminish quickly, and beyond 64GB there's almost no benefit at all.
Nah, you are wrong. It would be nice if I could have 1TB~2TB of DRAM per CPU for local access. In bioinformatics, especially with huge data sets, the more RAM the better. I was constantly running out of memory when performing a 17-sample (in triplicates) microbiome analysis and kept maxing out my 128GB of RAM.
#24
notb
lexluthermiester: Yes, but that's server work. I'm talking about desktop workflow scenarios.
No. In-memory DBs are often (if not most of the time) deployed on workstations. They're perfect for advanced analytics, machine learning and so on.
You don't want that kind of load on a database other people are using.

You don't use in-memory databases for storing data, especially on a production system. It's RAM, after all.
#25
trparky
lexluthermiester: Even though it's very likely possible, what computing scenario would require 128GB of system RAM in a desktop environment?
Um... bragging rights?
notb: Because of the standard.
Then perhaps it's time for a new standard for motherboard layouts. ATX has been around for what... 10 plus years? Maybe it's time for a new design that fits more in line with the hardware requirements of today.