
Intel SSD DC P4500 Series: "ruler" form factor? Huh?

Joined Dec 12, 2020 · Messages 1,755 (1.20/day)
I realize this is a data center product (called Cliffdale), but its 'interface' is 'PCIe 3.1 x4, NVMe'?

I was looking up SSDs that have 4 TiB or more of storage and came across this bizarre, discontinued data center product from Intel.

https://ark.intel.com/content/www/u...0-series-4-0tb-ruler-pcie-3-1-x4-3d1-tlc.html

Here's a picture of the bizarre ruler format:



Even more bizarre is the door on the end, which makes it look like a really long lighter.
 
Joined Jan 3, 2021 · Messages 3,571 (2.48/day) · Location Slovenia
If it's discontinued, that's because current products are 15 TB and 30 TB. Intel makes the DC P4510 and D5-P5316, and Samsung has a shorter lighter they call the PM9A3, which even runs PCIe 4.0 x4. The "ruler" form factor is officially called E1.L (long) and E1.S (short), and it doesn't always come enclosed in such a nice box.

For reference, the approximate EDSFF board lengths (my numbers from memory of the SNIA specs, so treat them as ballpark):

Code:
# Approximate EDSFF "ruler" lengths in mm (ballpark figures,
# not copied from the spec documents)
form_factor_length_mm = {
    "E1.S": 111.49,    # short ruler
    "E1.L": 318.75,    # long ruler, like the drive in the photo
    "M.2 2280": 80.0,  # common consumer stick, for comparison
}
for name, length in form_factor_length_mm.items():
    print(f"{name:9s} ~{length:.0f} mm")
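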
 

TheLostSwede · News Editor · Joined Nov 11, 2004 · Messages 17,741 (2.42/day) · Location Sweden
I realize this is a data center product (called Cliffdale), but its 'interface' is 'PCIe 3.1 x4, NVMe'?

I was looking up SSDs that have 4 TiB or more of storage and came across this bizarre, discontinued data center product from Intel.

https://ark.intel.com/content/www/u...0-series-4-0tb-ruler-pcie-3-1-x4-3d1-tlc.html

Here's a picture of the bizarre ruler format:



Even more bizarre is the door on the end, which makes it look like a really long lighter.
Nothing new or bizarre; it was introduced in 2017.
 
Joined Nov 25, 2019 · Messages 825 (0.45/day) · Location Taiwan
Interesting write-up on the E1.S form factor that predicts it will completely replace M.2:
https://www.storagereview.com/news/ruler-is-finally-going-mainstream-why-e1-s-ssds-are-taking-over
Not very persuasive. The thermal advantage exists only because M.2 drives aren't standardized to include heatsinks, and the ease of use is just a slot design difference that could easily be overcome.
People could easily design new vertical slots just like the E1.S, along with standardized heatsinks; at that point, all the differences listed are gone.
 
Joined Dec 12, 2020 · Messages 1,755 (1.20/day)
Not very persuasive. The thermal advantage exists only because M.2 drives aren't standardized to include heatsinks, and the ease of use is just a slot design difference that could easily be overcome.
People could easily design new vertical slots just like the E1.S, along with standardized heatsinks; at that point, all the differences listed are gone.
Good points, but does the M.2 standard include the ability to hot-swap?
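
For what it's worth, hot-plug is more a platform feature than part of the M.2 spec: U.2 and EDSFF backplanes are designed for it, while typical M.2 slots are not. On Linux, an orderly remove/rescan looks roughly like this sketch (the PCIe address in the usage comment is a made-up example, and it needs root):

Code:
# Rough sketch of orderly NVMe hot-removal on Linux via sysfs.
# Whether the drive can actually be pulled and reinserted safely
# depends on the platform (backplane, BIOS), not just these calls.
from pathlib import Path

def remove_pcie_device(bdf: str) -> None:
    # Ask the kernel to detach the device at this PCIe address.
    Path(f"/sys/bus/pci/devices/{bdf}/remove").write_text("1")

def rescan_pcie_bus() -> None:
    # Re-enumerate the bus so a newly inserted drive shows up.
    Path("/sys/bus/pci/rescan").write_text("1")

# Example (hypothetical address):
# remove_pcie_device("0000:3b:00.0")
# rescan_pcie_bus()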
 
Joined Nov 25, 2019 · Messages 825 (0.45/day) · Location Taiwan
Since these are designed for servers, I'm wondering how many servers actually use SSDs; it seems like an expensive and low-durability option compared to HDD RAID.
 
Joined Jan 3, 2021 · Messages 3,571 (2.48/day) · Location Slovenia
Since these are designed for servers, I'm wondering how many servers actually use SSDs; it seems like an expensive and low-durability option compared to HDD RAID.
Well, it's not true that all servers just write data like crazy, all the time, at full speed. You have transactional databases, for example those that collect all detailed customer data but never, or very rarely, delete it; they are close to write-once. Then there are analytical databases/data warehouses, where data is written and rewritten more often (daily, say) but in much smaller amounts than in transactional systems. I picked these two examples because I have some experience with SQL databases, but servers of course do other things too, and even databases need various log files that do see a huge amount of writes.

And if we are to believe Intel, the 30 TB drive has an endurance of 23 PB for random writes and 105 PB for sequential writes. And that's QLC! StorageReview reviewed it.
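
As a rough sanity check (my own back-of-the-envelope math, assuming a typical 5-year warranty window, which is not from Intel's spec sheet), those figures work out to:

Code:
# Endurance figures quoted above for the 30 TB (30.72 TB) drive;
# the 5-year window is an assumption.
capacity_tb = 30.72
warranty_days = 5 * 365

def dwpd(endurance_pb: float) -> float:
    # Drive writes per day: total endurance spread over the window.
    return endurance_pb * 1000 / (capacity_tb * warranty_days)

print(f"random writes:     ~{dwpd(23.0):.2f} DWPD")   # ~0.41
print(f"sequential writes: ~{dwpd(105.0):.2f} DWPD")  # ~1.87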
 
Joined May 2, 2017 · Messages 7,762 (2.79/day) · Location Back in Norway
Interesting write-up on the E1.S form factor that predicts it will completely replace M.2:
https://www.storagereview.com/news/ruler-is-finally-going-mainstream-why-e1-s-ssds-are-taking-over
Good points, but does the M.2 standard include the ability to hot-swap?
Given that these ruler SSDs necessitate standardization of both cases and motherboards (or backplanes with expensive riser cables), they are never going to gain even the smallest foothold in the consumer space. Nor are they meant to - they're much larger, much more expensive, and designed for higher capacities than consumers could hope to afford. These form factors are explicitly designed for enterprise use, and while some enthusiasts would no doubt like to adopt the standard, it is fundamentally unsuited for consumer use. It's designed around the wrong parameters, quite simply.

In enterprise, on the other hand, M.2 has never been anything but a band-aid fix on top of a broken femur - using what you have until you find something actually suited to the task.
Since these are designed for servers, I'm actually wondering how many servers actually use SSDs, it seems like an expensive and low durability option compared to HDD RAID
Many, many servers these days do. NAND durability is really not that low, even with TLC, and the engineers and technicians who spec, build and service these servers are well versed in system monitoring, preventative maintenance, and redundancy. And the performance benefits, especially for anything reliant on random performance, are on such a scale that staying with HDDs would be business suicide. Even the craziest RAID setup won't reach even 10% of the random performance of a single SATA SSD, let alone NVMe RAID. And expensive? Compared to an HDD, sure. But compared to the total price of a rack of servers, and the electricity needed to run and cool them for a few years? Negligible.
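
To put rough numbers on that (generic ballpark figures, not measurements of any particular drive):

Code:
# Back-of-the-envelope random-IOPS comparison.
hdd_iops = 180          # one 7200 rpm HDD, random 4K (typical)
sata_ssd_iops = 75_000  # one mainstream SATA SSD, random 4K (typical)
array_drives = 24       # a sizeable HDD RAID

array_iops = hdd_iops * array_drives
print(f"{array_drives}-drive HDD array: ~{array_iops:,} IOPS")
print(f"single SATA SSD:   ~{sata_ssd_iops:,} IOPS "
      f"(~{sata_ssd_iops / array_iops:.0f}x the array)")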
 
Joined Jul 15, 2022 · Messages 951 (1.08/day)
And the performance benefits, especially for anything reliant on random performance, are on such a scale that staying with HDDs would be business suicide. Even the craziest RAID setup won't reach even 10% of the random performance of a single SATA SSD, let alone NVMe RAID.
With Violin Memory products you could already achieve a million IOPS in 2012, and the gap over HDDs was much bigger then than it is now, because HDDs have become faster. So why didn't every company jump on Violin Memory? They thought the productivity gains would not outweigh the additional costs. Violin Memory always claimed the opposite, and they were probably right. But the point I'm making is that a Seagate MACH.2 is going to have enough throughput and IOPS for many business situations, and it offers much more affordable storage than any SSD. In situations where a Seagate MACH.2 wouldn't give enough IOPS on Linux and Windows servers, they can still switch to FreeBSD for additional performance, redundancy, and better network latency:
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=49228e7&p=2
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=225b6b2&p=2
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=12872ac&p=2
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=5ca0c1f&p=2
https://openbenchmarking.org/embed.php?i=1812249-SP-WINSERVER76&sha=0ac3ab0&p=2
https://openbenchmarking.org/embed.php?i=1812249-SP-WINSERVER76&sha=4347141&p=2
https://openbenchmarking.org/embed.php?i=1812090-SK-ZFSBTRFS470&sha=c253c2f&p=2
https://openbenchmarking.org/embed.php?i=1812090-SK-ZFSBTRFS470&sha=49228e7&p=2
https://openbenchmarking.org/embed.php?i=1812090-SK-ZFSBTRFS470&sha=6e71607&p=2

In many situations, companies don't necessarily need an SSD, and it would be much more expensive for them.
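
For a sense of where any spinning drive tops out on random I/O, here is a first-order model (generic seek and rotation figures, not Seagate's published specs):

Code:
# Each random access costs roughly one average seek plus half a
# platter rotation; a dual-actuator drive like the MACH.2 can run
# two such streams in parallel.
avg_seek_ms = 4.2                      # assumed average seek time
rpm = 7200
half_rotation_ms = 0.5 * 60_000 / rpm  # ~4.17 ms at 7200 rpm

iops_per_actuator = 1000 / (avg_seek_ms + half_rotation_ms)
print(f"single actuator: ~{iops_per_actuator:.0f} IOPS")      # ~120
print(f"dual actuator:   ~{2 * iops_per_actuator:.0f} IOPS")  # ~240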
 
Joined May 2, 2017 · Messages 7,762 (2.79/day) · Location Back in Norway
With Violin Memory products you could already achieve a million IOPS in 2012, and the gap over HDDs was much bigger then than it is now, because HDDs have become faster. So why didn't every company jump on Violin Memory? They thought the productivity gains would not outweigh the additional costs. Violin Memory always claimed the opposite, and they were probably right. But the point I'm making is that a Seagate MACH.2 is going to have enough throughput and IOPS for many business situations, and it offers much more affordable storage than any SSD. In situations where a Seagate MACH.2 wouldn't give enough IOPS on Linux and Windows servers, they can still switch to FreeBSD for additional performance, redundancy, and better network latency:
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=49228e7&p=2
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=225b6b2&p=2
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=12872ac&p=2
https://openbenchmarking.org/embed.php?i=1901268-SP-ZFSBSDLIN95&sha=5ca0c1f&p=2
https://openbenchmarking.org/embed.php?i=1812249-SP-WINSERVER76&sha=0ac3ab0&p=2
https://openbenchmarking.org/embed.php?i=1812249-SP-WINSERVER76&sha=4347141&p=2
https://openbenchmarking.org/embed.php?i=1812090-SK-ZFSBTRFS470&sha=c253c2f&p=2
https://openbenchmarking.org/embed.php?i=1812090-SK-ZFSBTRFS470&sha=49228e7&p=2
https://openbenchmarking.org/embed.php?i=1812090-SK-ZFSBTRFS470&sha=6e71607&p=2

In many situations, companies don't necessarily need an SSD, and it would be much more expensive for them.
I frankly don't understand who you're arguing against here: nobody here has said that every single company should replace every single HDD-based server with flash storage. That's just a straw man. Literally nobody has said that, or anything close to that. What was said, and what you quoted, was that in any workload reliant on random performance, an SSD-based array will be orders of magnitude faster than an HDD-based one. If you earn money based on completing work, and flash lets you complete parts of that work 10x faster, that's a major potential cost saving/revenue increase, which will easily offset the cost of a flash array vs. HDDs in that case. Heck, your own benchmarks - which didn't come with a source link or any info about the configuration, so they're rather meaningless as an example - confirm this difference. 60,000 IOPS? A single low-end SATA SSD does better than that. Those sequential numbers speak to that being a pretty large array though - you don't get 6 GB/s out of HDDs without a significant array of them. With SSDs, you can cut the number of drives significantly. That will of course cost a ton for the same capacity, but again, if the flash lets you work 10x faster, that's a small price to pay.
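
A quick estimate of the array size those sequential numbers imply (assuming ~250 MB/s per modern HDD, which is a generic figure):

Code:
# How many HDDs does 6 GB/s of sequential throughput imply?
target_gb_s = 6.0
hdd_seq_mb_s = 250  # assumed per-drive sequential rate

drives = target_gb_s * 1000 / hdd_seq_mb_s
print(f"~{drives:.0f} drives for {target_gb_s} GB/s sequential")  # ~24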

Also, I don't really see how your arguments apply. Like, "In the cases where a Mach.2 wouldn't give enough IOPS, you could switch your OS and improve other performance metrics"? So what? You're still not coming close to the IOPS of an SSD array. Also, presenting "just change your OS" for a business setting as something trivial or even moderately easy is just nonsensical. Sure, let's just rewrite our entire software stack and spend a couple of years ironing out all the bugs. That sounds like a good business strategy.

As for that company you're mentioning: I have no idea, but if they failed, maybe their tech wasn't very good, maybe they weren't good at marketing themselves, maybe they just launched at the wrong time? Flash was expensive in 2012. It isn't really today. And there are tons of companies providing all-flash storage solutions for servers and enterprise if that's what you're getting at. From the looks of it, that company might have been early, but they're by no means unique in 2022.
 
Joined Feb 18, 2005 · Messages 5,847 (0.81/day) · Location Ikenai borderline!
Since these are designed for servers, I'm wondering how many servers actually use SSDs; it seems like an expensive and low-durability option compared to HDD RAID.
WTAF.
 

FreedomEclipse · ~Technological Technocrat~ · Joined Apr 20, 2007 · Messages 24,134 (3.74/day) · Location London, UK
Dang, someone outbid me just after I posted. I bet someone saw my post lol

That's a shame. I wanted to see how it would measure up to other consumer SSDs.
 

ir_cow · Staff member · Joined Sep 4, 2008 · Messages 4,545 (0.76/day) · Location USA
That's a shame. I wanted to see how it would measure up to other consumer SSDs.
I don't think there would be much difference vs. an Intel DC U.2 drive, besides this being Gen4 instead. You still get PBs of write endurance and probably the same good Intel queue-depth performance.
 

FreedomEclipse · ~Technological Technocrat~ · Joined Apr 20, 2007 · Messages 24,134 (3.74/day) · Location London, UK
I don't think there would be much difference vs. an Intel DC U.2 drive, besides this being Gen4 instead. You still get PBs of write endurance and probably the same good Intel queue-depth performance.

I was dropping a pun.
 