FWIW, my Optane-era mobo was designed to use it in tandem with SATA (two M.2 slots, with the real-world capability to use one of them: Gen 3 x4 NVMe, or Optane/Gen 3 x2 NVMe).
In 2024, the obvious route to max performance was going with Gen 4 NVMe. Eight years ago, Optane + a SATA SSD was the better solution.
Those are a nightmare. Do not buy. I spent a hefty chunk of change (on PCIe switches, cables, adapters, enclosures, etc.) in the name of science to try to make those work… Consider them proprietary and abandoned. They’re designed in a way that would never work in a normal system—even a modern Intel one after the thirteenth generation.
From what I have seen, Windows has a 2.9 GB/s limit for moving real files. The newer the data, the more NAND likes it, but older files can bring any NAND-based drive to its knees (for a combined 25 seconds). I have not used Optane, so I cannot say if it can overcome the 2.9 GB/s limit.
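If anyone wants to sanity-check that ceiling on their own hardware, here's a minimal sketch (Python, with placeholder paths you'd swap for your own drives) that times a large real-file copy and reports GB/s. Use a file bigger than your RAM so write caching doesn't flatter the number; the 2.9 GB/s figure above is just what you'd compare against, not something the script assumes.

```python
# Minimal sketch: time a large real-file copy and report throughput.
# SRC/DST are placeholders - point them at the drives you want to test,
# ideally with a file larger than system RAM so the OS cache can't hide the real speed.
import shutil
import time
from pathlib import Path

SRC = Path(r"D:\test\big_file.bin")        # hypothetical source file
DST = Path(r"E:\test\big_file_copy.bin")   # hypothetical destination on the drive under test

size_gb = SRC.stat().st_size / 1e9

start = time.perf_counter()
shutil.copyfile(SRC, DST)                  # buffered copy through the normal file APIs
elapsed = time.perf_counter() - start

print(f"Copied {size_gb:.1f} GB in {elapsed:.1f} s -> {size_gb / elapsed:.2f} GB/s")
```

Results may differ a bit from Explorer's copy engine, but it gives a repeatable number.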
That does not make sense. Think of Wi-Fi: is there anyone that does not use Intel Wi-Fi cards? You can even buy them on Amazon for like $15. That is hotcakes for Intel that we don't think about, but all vendors use them. And they did not buy Killer just the other day, either. If they could have, Intel would have made Optane the same. It was their hubris as the known quantity in the space; those Solidigm P44 drives have some of that Optane sauce, which confirms Intel knew what they were doing, but I want that with a Gen 5 controller.

In testing, the WD AN1500 loaded Forspoken the fastest using the benchmark. That beat a RAID 0 array of Gen 4.0 5 GB/s Team drives, an MP700 1 TB, a Seagate 530 2 TB, a Kingston B500 RAID 0 SSD array, and an 8 TB mass-storage SSD that is about $400 more now than when I bought it in 2021 or 2020; it is all a wash. Of course, the difference was at most 1/3 of a second. You can do with that what you will, but it is fun messing around with PCs today, with very compelling parts for all aspects.

An off-the-wall example (especially if you have Logitech speakers): the Creative Stage V2. It is like $120 Canadian, but if you like music, this thing will have you searching YouTube for the oldest raw reggae sets. Excelsior.
I was just trying to be ironic. Intel kept Intel (that is, themselves) out of the Optane market (that is, their own market). This is a second hand market now, thanks also to the wall around their garden.
Given the speed of PCIe 5.0, I can see NAND being the popular choice with the latest stacked devices.
NAND is poised to reach 500 layers, which is about the limit based on through-holes, etc.
Given how cheap 4 TB NAND over NVMe is, I am expecting to see lower prices for 8 TB, and I can see 16 TB surfacing down the road.
Optane is dead not just because of capacity issues; it was a real gas guzzler and inefficient compared to the best SSDs of its time. Those reminiscing about how great "Optane" was at QD1 probably forget that, and the latest-gen SSDs are only getting more efficient.
Both Intel and Micron have dropped their pursuit of 3D XPoint / Optane. There is nothing wrong with the well-received technology itself. Read more here.
seekingalpha.com
optane isn't dead. just lawsuits doing lawsuit things
Interesting you mention that. I have in my backlog of upgrades to hook up a bunch of PCIe 3.0 Optanes with a PCIe 5.0 switch, getting the advantage of both low latency and aggregate bandwidth. It’d absolutely suck in terms of efficiency though, with the PCIe switch itself already consuming the power equivalent to a single U.2 SSD.
Let’s say it were alive though. I’ve heard rumors that Intel had difficulty scaling it up like 3D NAND. If so, then there’d be no path by which it could stay alive.
But given this alleged setback, I’m surprised the P5800Xs made it to 3.2 TB when all the contemporary (p)SLC offerings from the likes of Kioxia, Micron, and Solidigm released within the past year can’t break 2 TB.
Yes. There are probably two remaining (but untried) tricks to getting it working:
A PCIe switch configured with dual-lane ports but connected to the SSD over a quad-lane cable
A cable which connects to a dual-port, octo-lane connection on the host end and a dual-port, dual-lane connection at the SSD end (4 lanes from the host unused)
Both of these would be possible without a PCIe switch, but I wouldn’t be considering a PCIe switch if motherboards could bifurcate down to two lanes. Y’all are welcome to experiment. I don’t want to lay down another $1,500 USD for equipment to make a $65 SSD work.
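If anyone does try either of those tricks on Linux, a quick way to confirm what actually got negotiated is to read the standard PCIe link attributes out of sysfs. A rough sketch (assumes the usual /sys/class/nvme layout; adjust as needed):

```python
# Rough sketch (Linux): report negotiated vs. maximum PCIe link speed/width for each NVMe controller.
# Reads the standard sysfs attributes exposed for PCI devices.
from pathlib import Path

def read(attr: Path) -> str:
    try:
        return attr.read_text().strip()
    except OSError:
        return "n/a"

for dev in sorted(Path("/sys/class/nvme").glob("nvme*")):
    pci = (dev / "device").resolve()       # PCI device backing this NVMe controller
    print(f"{dev.name} ({pci.name})")
    print("  current:", read(pci / "current_link_speed"), "x" + read(pci / "current_link_width"))
    print("  maximum:", read(pci / "max_link_speed"), "x" + read(pci / "max_link_width"))
```

With a PCIe switch in the path you'd also want to check the switch's own upstream link, but this at least tells you whether the SSD came up at x2 or x4.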
Optane is dead not just because of capacity issues; it was a real gas guzzler and inefficient compared to the best SSDs of its time. Those reminiscing about how great "Optane" was at QD1 probably forget that, and the latest-gen SSDs are only getting more efficient.
The P5800X is an enterprise drive; it isn't designed to be optimized for power consumption. 3.2 W at idle is excellent for an enterprise drive; my 9300 Pros idle at 10 W. Under load, enterprise drives consume anywhere from 10 W to 25 W, so Optane is within typical operating parameters. If you look at the spec sheet for the P1600X, power consumption of consumer-focused Optane is within the normal range there as well: 5.2 W under load and 1.7 W at idle. Not as power-hungry as PCIe 5.0 drives, but not as good as your typical M.2 SSD. Given the level of performance these drives provide, though, I'd say they are pretty good efficiency-wise, given that no other SSD can get close to them performance-wise for non-sequential workloads. I'm willing to bet that if Intel had had a greater consumer focus with these products, power consumption could have been further optimized as well.
If you look at the spec sheet for the P1600X, power consumption of consumer-focused Optane is within the normal range there as well: 5.2 W under load and 1.7 W at idle. Not as power-hungry as PCIe 5.0 drives, but not as good as your typical M.2 SSD. Given the level of performance these drives provide, though, I'd say they are pretty good efficiency-wise, given that no other SSD can get close to them performance-wise for non-sequential workloads. I'm willing to bet that if Intel had had a greater consumer focus with these products, power consumption could have been further optimized as well.
It’s a shame they killed consumer lines so quickly. To get better efficiency for a laptop, I had to go all the way back to the 800P series from 2018. They are identical in capacity to the P1600X but have the ability to reduce power consumption to 8 mW in deep sleep (L1.2). And active power consumption is only 3.75 W.
It sure seems like it. But there are still thousands of them floating in unopened packaging out there. The P5800X series will be warranted up until 2030. There is no downside I can see if the price is right; I’d welcome some more over additional NAND for latency-sensitive/endurance-oriented applications.
For the most demanding applications, it’s mighty hard to run one of them down into the ground.
These CDM benchmarks of the P1600X were made on a clean Win 11 Pro 23H2 install for the first run. Then, as a second step, to roughly parallel the conditions in which W1zzard tests these, I further filled the drive to about 80% capacity and reran all tests – twice. It's not like I went into...
www.techpowerup.com
On the 'budget side', a 118GB P1600X ($45-75) is sufficient for a single-OS Boot Drive.
Newegg listing: Intel Optane SSD P1600X SSDPEK1A118GA01, M.2 2280, 118 GB, PCIe 3.0 x4, NVMe, 3D XPoint.
The worst part is very high consumption even at very low speeds. But I'm confident that new controllers made on 7nm or 5nm nodes will improve that by a lot.
I actually passed up getting one a year or so ago (when NAND was cheap). I got 4x 58 GB P1600Xs instead (later buying another 4x 118 GB ones), thinking a 4-drive RAID 0 of P1600Xs would be so much faster.
Nope! I was (and am) an idiot.
(At least on AMD) Optane does *not* play nice in CPU RAID 0. Even the '16GB' M10 3.0 x2-lane drives (which I've hoarded dozens of) do not RAID well on AMD.
There's some freaky* overhead that occurs: a semi-linear/semi-logarithmic performance loss at 2x/3x/4x drive arrays.
On the plus side, I have 4x awesome 118GB 'Boot Drives', and have figured out how to get some (very) Legacy systems NVMe booting, using 'Bootloader Stacking'.
*I say 'freaky' as I've experimented w/ Other NVMe drives in RAID0, and saw nearly linear performance increases across-the-board, not losses.
Honestly, I'm not sure if it's something implemented on-purpose from Intel, or just the nature of Optane and how AMD 'stacks' NVMe RAID with the rest of the CPU's load.
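If anyone wants to put a number on that overhead, the way I'd express it is scaling efficiency = measured array throughput / (n × single-drive throughput). Trivial sketch below; the throughput values are placeholders, not my actual results:

```python
# Trivial sketch: express RAID 0 scaling loss as an efficiency figure.
# All throughput numbers are placeholders - substitute your own measurements.
single_drive_mbps = 1450.0                                # hypothetical single-drive result
array_results_mbps = {2: 2500.0, 3: 3300.0, 4: 3900.0}    # hypothetical 2x/3x/4x array results

for n, measured in array_results_mbps.items():
    ideal = n * single_drive_mbps                         # perfect linear scaling
    efficiency = measured / ideal
    print(f"{n}x array: {measured:.0f} MB/s vs {ideal:.0f} MB/s ideal "
          f"({efficiency:.0%} scaling efficiency)")
```

Near-100% at every array size is what I saw with the other NVMe drives; with the Optanes the percentage just keeps sliding as drives are added.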
(At least on AMD) Optane does *not* play nice in CPU RAID 0. Even the '16GB' M10 3.0 x2-lane drives (which I've hoarded dozens of) do not RAID well on AMD.
There's some freaky* overhead that occurs: a semi-linear/semi-logarithmic performance loss at 2x/3x/4x drive arrays.
You’re not alone. AMD’s motherboard RAID solution is like a pity date. You get it as a courtesy and you are expected not to pursue it further. You also feel like poo after you realize you’re practically back to square one.
EDIT: wow, no Unicode emojis allowed on the forum?
You’re not alone. AMD’s motherboard RAID solution is like a pity date. You get it as a courtesy and you are expected not to pursue it further. You also feel like poo after you realize you’re practically back to square one.
EDIT: wow, no Unicode emojis allowed on the forum?
That is a hilarious (and accurate) take on NVMe AMD-RAID support.
[Still on the lookout for an off-use Server NVMe RAID controller. Sucks, as it looks like the more-affordable Dell ones all require mobo firmware support.]
FWIW, my Optane-era mobo was designed to use it in tandem with SATA (two M.2 slots, with the real-world capability to use one of them: Gen 3 x4 NVMe, or Optane/Gen 3 x2 NVMe).
FYI, those boards had very rare x2/x2 (x4, Bifurcated) support on their M-key M.2s.
TBQH, those 'hybrid drives' are so common on the secondary market, I'm kind of surprised I haven't seen any 'Chinese Innovation' adapters.
Either:
x8 (x4/x4 bifurcated) -> M.2 M-key (x2/x2) trace routing (cheap),
or a Gen 3 ASM packet-switched adapter card w/ x2/x4 -> M.2 M-key (x2/x2 config), which is what most dual-slot USB->NVMe adapters use.
Maybe offtopic but you two might have a clue about that: What good is the PCIe 5.0 x2 interface of the Samsung 990 EVO? The only logical answer I see is that upcoming processors will have a matching interface (or a PCIe 5.0 x4 that can split lanes in x2+x2).
Maybe offtopic but you two might have a clue about that: What good is the PCIe 5.0 x2 interface of the Samsung 990 EVO? The only logical answer I see is that upcoming processors will have a matching interface (or a PCIe 5.0 x4 that can split lanes in x2+x2).
TBQH, the only 'useful' applications I can imagine, are in the future. Right now, I'm not sure of their usefulness whatsoever.
A.
I've run Gen 4 NVMes in very-short-trace x1-lane passive adapters.
The performance loss is less heavy the newer the supported (and handshaken) PCIe generation. So, in the eventuality that we see x1 5.0 slots, they might be handy.
B.
Most-all USB-NVMe adapters are x2-lane only.
Eventually, we'll see USB 3.2 Gen 2x2/USB4/TB3-5 -> Gen 5 NVMe adapters; and when we do, those 'old' 990 EVOs might make some of the fastest external SSDs.
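Quick back-of-the-envelope on the raw numbers (per direction, ignoring protocol/tunneling overhead), just to show why an x2 Gen 5 drive isn't a dead end behind a bridge:

```python
# Back-of-the-envelope: approximate raw per-direction link bandwidth, ignoring protocol overhead.
# PCIe per-lane payload rates after 128b/130b encoding (GB/s).
pcie_per_lane = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

for gen, per_lane in pcie_per_lane.items():
    print(f"PCIe {gen}: x2 ~ {2 * per_lane:.1f} GB/s, x4 ~ {4 * per_lane:.1f} GB/s")

# Common external links, raw signalling rate converted to GB/s.
for name, gbit in [("USB 3.2 Gen 2x2", 20), ("USB4 / TB3", 40), ("USB4 v2 / TB5", 80)]:
    print(f"{name}: ~{gbit / 8:.1f} GB/s")
```

So a 5.0 x2 drive has roughly the raw ceiling of a 4.0 x4 one, which lines up with the idea of pairing it with future 80 Gbps external bridges.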
What good is the PCIe 5.0 x2 interface of the Samsung 990 EVO? The only logical answer I see is that upcoming processors will have a matching interface (or a PCIe 5.0 x4 that can split lanes in x2+x2).
So this is partly speculation and partly based on what I could dig up. Obviously, ole Sammy knows a lot more than I do, either about the future availability of dual-lane PCIe 5.0 links on consumer platforms or about the energy efficiency of such a design.
Facts/observations:
“Reducing power consumption in PCI Express primarily involves addressing the PHY, the system's major power consumer, responsible for about 80% of total usage.”
―Synopsys: Navigating PCIe 6.0 Power and Latency Challenges in HPC SoCs
SSD power consumption goes up to hit PCIe 5.0 speeds, but not linearly; so each bit transmitted must require less energy than on the prior PCIe generation.
“The L0p power state in PCI Express 6.0 greatly improves power efficiency by aligning power usage with actual bandwidth needs. In systems with a 16-lane PCI Express 6.0 link, bandwidth requirements can vary, and not all the 64G transfer rate is always needed. L0p enables dynamic scaling of active lanes, reducing power consumption without the need for renegotiating link width, a necessity in previous standards.”
―Synopsys: Navigating PCIe 6.0 Power and Latency Challenges in HPC SoCs
Speculation:
Links using fewer lanes, each running at a higher frequency, use less energy while transmitting data at the same speed. (The lanes are the PHY mentioned above.)
These 990 EVO SSDs are designed to take advantage of future portable devices (i.e., laptops) which could cut as much power consumption as possible while retaining the same performance.
Why is this speculative? Because it's not always true. PCIe 2.0 was less efficient than its predecessor, PCIe 1.0: PCIe 2.0's 5.0 Gbps lanes consumed roughly three times the power of PCIe 1.0's 2.5 Gbps lanes, meaning efficiency-minded implementations were incentivized to stay with PCIe 1.0. The same could be the case between PCIe 4.0 and 5.0, or it might not; I have neither hard data nor authoritative literature to demonstrate either. The math might work out such that PCIe 5.0 requires less energy to transmit each bit, given the facts/observations above.
An older source: Synopsys IP Technical Bulletin: PCI Express 2.0: Comparing 2.5-Gbps Solutions Versus 5.0-Gbps.
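To make that 1.0-versus-2.0 comparison concrete (using the rough 3x power figure above as the assumption, nothing more):

```python
# Worked example of the energy-per-bit argument; the 3x power figure is the rough number
# cited above for PCIe 2.0 lanes versus PCIe 1.0 lanes, taken here as an assumption.
gen1_bw_gbps, gen1_power = 2.5, 1.0    # PCIe 1.0 lane: 2.5 Gbps, power normalized to 1
gen2_bw_gbps, gen2_power = 5.0, 3.0    # PCIe 2.0 lane: 5.0 Gbps at ~3x the power

energy_ratio = (gen2_power / gen2_bw_gbps) / (gen1_power / gen1_bw_gbps)
print(f"PCIe 2.0 energy per bit vs. 1.0: {energy_ratio:.1f}x")   # -> 1.5x worse
```

Fewer, faster lanes only win if per-lane power grows more slowly than per-lane bandwidth; that's exactly the unknown for 4.0 x4 versus 5.0 x2.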
With PCIe 6.0, whether the above is speculation or not, such devices could become reality. The question is whether the energy overhead of PAM-4, FEC, and CRC in PCIe 6.0 is still an overall win at all L0p levels.
Because these guys actually design the silicon that implements the PCI Express specifications, and they have been doing so for a very long time. They know what they are talking about. On the other hand, I have encountered two very similar articles while keyword-searching that led me to believe some of the technical blogs are just written by AI. They ought to be reputable, given that the websites distribute products based on said technologies, but some assertions were very obviously wrong, like "PAM-4 being deployed starting with PCIe 5.0." (lol, no!)
PCI-SIG also has an interesting set of slides (HOTI_PCIe6.0.pdf) detailing the design changes made to improve the efficiency of PCIe 6.0. As consumers, we are obviously a long way from there, but it does give us more time to dream.
Please Google/Bing/Duck my sources. I have provided the websites and titles of the pages as I cannot link anything with my new account.
ALL of my Gen 2x2/USB4 adapters are x4 lanes, here and now. And the TB3 ones too. (Internal 3.0 x4 for Gen 2x2/TB3 and 4.0 x4 for USB4/ASM2464PD; you can see it in CrystalDiskInfo if your adapter supports PCIe tunneling (TB/USB4).)
Really, just day-to-day desktop-typical workloads. It’s where low-queue-depth performance shines. It’s just unfortunate that there is such a big mismatch between the pocket depth of the consumers who will benefit the most versus the cost of the tech.
Very subjective. Personally, if it allows me to do everything I need to do flawlessly, smoothly, and accurately, in a timely manner and without hiccups, that is already fast.
I work in said industry, and yes, it's a must there, but I have a different beast for that job. At home I mainly face my gaming PC, and somehow I wanted more, if it can really deliver.
Can't really say if it's too late for me to dive into the 900P/905P; the P5800X is pretty much out of reach for a peasant like me...