
Intel to Introduce 3D XPoint DIMM Tech to the Market in 2018

But as I said, I'm worried about other things. It's one more component added to the system and it adds complexity.
I don't understand this argument, to be honest. What's so terrifying about a DIMM module? :D

Will it eat PCIe lanes?
No. It's treated just like any other DIMM module (usually RAM). DIMM sockets are directly connected to the CPU and don't affect PCIe lanes at all.

Will it need a modification of the current RAM channels? Because it will definitely need additional sockets.
Again, why? This is just a module that you'd stick where RAM usually goes. Nothing has to be added.
 
Again, why? This is just a module that you'd stick where RAM usually goes. Nothing has to be added.
So how does the system know how to use it if it sees it as just another DIMM stick? There have to be some changes under the hood.
 
Something that does not replace RAM, but instead inserts itself between RAM and SSD/HDD.
The way I see it, in this form it's geared more towards alleviating RAM usage, hence my calling it a "cache for RAM". I suppose you can look at it the other way and say it's a cache for permanent storage too, and you wouldn't be wrong. But it's still one more component, and whenever you add complexity, something is lost.
That makes sense, thanks for the explanation.
 
So how does the system know how to use it if it sees it as just another DIMM stick? There have to be some changes under the hood.

Probably the same way NVDIMMs do it now, through the NVM library. But yeah, I don't think you can use it as a normal DIMM stick; you'd have to put it in a supported server platform (Purley takes NVDIMMs, so maybe it can take Optane DIMMs too). And I doubt it will come to the consumer space anytime soon.
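For anyone curious, here's roughly what that looks like with the NVM library (PMDK's libpmem). A minimal sketch, assuming a DAX-capable filesystem mounted at /mnt/pmem (the path is just an example):

```c
/* Minimal libpmem sketch (PMDK). Assumes a DAX-capable mount at /mnt/pmem.
 * Build with: cc example.c -lpmem */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map (and create, if needed) a 4 KiB file backed by persistent memory. */
    char *addr = pmem_map_file("/mnt/pmem/example", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Regular store instructions -- no read()/write() syscalls involved. */
    strcpy(addr, "hello, persistent world");

    /* Flush CPU caches so the data actually reaches the persistent medium. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);

    pmem_unmap(addr, mapped_len);
    return 0;
}
```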
 
This is just a module that you'd stick where RAM usually goes. Nothing has to be added.
I highly doubt XPoint is DDR4-compatible, because of the massive performance drop when the CPU tries to access it. Most likely, select Xeon processors will have separate memory controllers for DDR4 DIMMs and XPoint DIMMs, and will only be compatible with platforms that have DIMM slots for both.
 
I highly doubt XPoint is DDR4-compatible, because of the massive performance drop when the CPU tries to access it. Most likely, select Xeon processors will have separate memory controllers for DDR4 DIMMs and XPoint DIMMs, and will only be compatible with platforms that have DIMM slots for both.

That's another thing. You're already talking about some optimization. And you're right: the memory controller has to know what kind of DIMM is plugged in.
Furthermore, RAM needs to be refreshed periodically (there are commands for that); XPoint doesn't. But once the MC knows what tech is on the DIMM, it'll know which commands to send.
But other than that, it's accessed the same way RAM is, so it could use the same instruction set (or a subset of it).
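Just to illustrate the refresh point -- this is a toy model only, not real memory-controller firmware, and every name in it is made up:

```c
/* Toy illustration only -- not real memory-controller firmware.
 * The idea: the MC learns each module's type (e.g. from SPD) and
 * schedules refresh commands only for the DIMMs that need them. */
#include <stdbool.h>
#include <stddef.h>

enum dimm_type { DIMM_DDR4, DIMM_XPOINT };

struct dimm {
    enum dimm_type type;   /* read at boot, e.g. from SPD (hypothetical) */
};

/* DRAM cells leak charge and need periodic refresh; XPoint cells
 * are non-volatile, so the refresh command is simply never sent. */
static bool needs_refresh(const struct dimm *d)
{
    return d->type == DIMM_DDR4;
}

static void issue_refresh_command(struct dimm *d)
{
    (void)d;   /* stand-in for a real REF command on the memory bus */
}

void refresh_tick(struct dimm *dimms, size_t count)
{
    for (size_t i = 0; i < count; i++)
        if (needs_refresh(&dimms[i]))
            issue_refresh_command(&dimms[i]);
}
```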

So how does the system know how to use it if it sees it as just another DIMM stick? There have to be some changes under the hood.

The system "knows" because it's been designed to know. It also knows the clocks and latencies of RAM. :) That's why Optane is only supported by some CPU+chipset combinations.
But other than that Optane is a lot like RAM and, no offense, all these discussions about it being a cache of some sort are pointless.

It's not like the whole PC idea has to be rebuilt because we're forcing an SSD into a RAM socket. XPoint is a lot more similar to RAM than to a NAND SSD. We should treat it that way.
So now we simply have 2 types of DIMM: fast DDR and slow (but larger and persistent) XPoint. It's only important for the PC to know how to use them.
If we'd had such a choice from the start, XPoint would seem totally natural.
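And software can already express that choice today. On Linux the memkind library lets an allocation target a specific tier; a rough sketch, assuming a DAX-mounted filesystem at /mnt/pmem backing the slow tier (the mount point is just an example):

```c
/* Sketch of tiered allocation with the memkind library.
 * Assumes a DAX-mounted filesystem at /mnt/pmem; build with -lmemkind. */
#include <memkind.h>
#include <stdio.h>

int main(void)
{
    memkind_t pmem_kind;

    /* Create an allocator backed by the persistent-memory mount
     * (max_size 0 means "limited only by the filesystem"). */
    if (memkind_create_pmem("/mnt/pmem", 0, &pmem_kind) != 0) {
        fprintf(stderr, "memkind_create_pmem failed\n");
        return 1;
    }

    /* Hot, latency-sensitive data goes to ordinary DDR... */
    double *hot = memkind_malloc(MEMKIND_DEFAULT, 1024 * sizeof *hot);

    /* ...while the big, rarely touched buffer lands on the XPoint tier. */
    double *cold = memkind_malloc(pmem_kind, 1024 * 1024 * sizeof *cold);

    if (!hot || !cold) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    hot[0] = 1.0;
    cold[0] = 2.0;

    memkind_free(MEMKIND_DEFAULT, hot);
    memkind_free(pmem_kind, cold);
    memkind_destroy_kind(pmem_kind);
    return 0;
}
```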
 
But other than that, it's accessed the same way RAM is, so it could use the same instruction set (or a subset of it).
Operating systems would likely treat XPoint as a RAM drive. That is, it exists in the processor's memory address space for really quick, direct access.
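Under the hood that direct access is just a memory mapping. On Linux, a file on a DAX filesystem can be mmap()ed straight into the process's address space, and loads and stores then hit the media without the page cache in between -- which is exactly what pmem_map_file from the earlier sketch does for you. Roughly (the path is an assumption, error handling trimmed):

```c
/* Sketch: mapping persistent memory straight into the address space.
 * Assumes /mnt/pmem is a DAX-mounted filesystem with an existing file. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/data", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* The mapping lives in the processor's memory address space... */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* ...so access is an ordinary load/store, not block-device I/O. */
    p[0] = 'x';

    munmap(p, 4096);
    close(fd);
    return 0;
}
```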
 
Operating systems would likely treat XPoint as a RAM drive. That is, it exists in the processor's memory address space for really quick, direct access.
Exactly, but at the same time it's also way slower than DDR4.
So the OS has to know what should be put in XPoint. Intel demonstrates it with the best-case scenario: in-memory analytical databases. XPoint has the advantage of size, but it's also fast enough for that (i.e. it won't ruin performance compared to DDR).
I guess it'll also be great for rendering and large numerical projects, where RAM size and cost can be limiting.

However, in the consumer world XPoint could work not as "slower RAM" but as faster and smaller persistent storage. We'll see how this tech matures (and how the price evolves), but it seems we might soon get PCs with all of their memory in DIMMs. :)
 
Theoretically possible because of the 64-bit address space...

The problem is that it requires more pins on the processor, and more pins mean higher costs.
 