Friday, September 13th 2024

Silicon Motion's SM2508 PCIe 5.0 NVMe SSD Controller is as Power Efficient as Promised

The first reviews of Silicon Motion's new PCIe 5.0 NVMe SSD controller, the SM2508, are starting to appear online, and the good news is that the controller is as power efficient as the company promised. Tom's Hardware has put up a review of a reference-design M.2 SSD from Silicon Motion, equipped with 1 TB of Kioxia's 162-layer BiCS6 TLC NAND, and in their testing it easily bests the competition when it comes to power efficiency. In their file copy test, it draws nearly two watts less than its nearest competitor and as much as three watts less than the most power-hungry drive. It still uses about one watt more than the best PCIe 4.0 drives, but it goes to show that the production node matters, as the SM2508 is produced on a 6 nm node, compared to 12 nm for Phison's E26.

We should point out that its peak power consumption did go over nine watts, but only one of the Phison E26 drives managed to stay below 10 watts here. For comparison, the most power-hungry PCIe 5.0 SSD controller in the test, the InnoGrit IG5666, peaks at nearly 14 watts. Idle power consumption of the SM2508 is also very good; it still draws more than the PCIe 4.0 drives it was tested against, but far less than any of the other PCIe 5.0 drives. What about performance, you ask? The reference drive places itself ahead of all the Phison E26 drives when it comes to sequential file transfers, regardless of whether it's to or from the drive. Random read IOPS also place it right at the top, but it's somewhat behind when it comes to random writes, without being a slow drive by any means. Overall, we're looking at a very promising new SSD controller from Silicon Motion in the SM2508, and TPU has also received a sample that is currently undergoing testing, so expect a review here soon.
Source: Tom's Hardware
Add your own comment

40 Comments on Silicon Motion's SM2508 PCIe 5.0 NVMe SSD Controller is as Power Efficient as Promised

#26
Chrispy_
Finally! \o/

All Gen 5.0 SSDs so far have been stupid because they cannot be cooled without exceeding the M.2 dimensions specification.

Brute forcing the issue with a large heatsink and fan is a really terrible kludge that blocks CPU coolers and PCIe slots, and of course necessitates one of those whiny, tiny, sub-40mm fans that have pitiful throughput and even more pitiful lifespans. Boards that include a PCIe 5.0 M.2 slot so often also include integrated passive cooling that has to be stripped off first just to install one of the hot-n-hungry PCIe 5.0 SSDs, and if the piece you're stripping off to clear room for your roasty-toasty PCIe 5.0 SSD's active cooling solution was also the same piece that cooled the PCIe 4.0 drive you also use - then tough luck. One of your drives must suffer, and your fancy motherboard now looks like an ugly mess.
Posted on Reply
#27
TheLostSwede
News Editor
Frank_100I use PCIe 4.0 as main drives and backup drive.
My biggest time sink is cloning system for back-up.
Because of throttling, the data transfer is only slightly better than SATA.
It usually takes about 2 hours for clonezilla to create and check a back-up.
It is worth doing, but I wish the hardware was faster.
I think you're confusing random file transfers with throttling, as random reads and writes aren't that much better on NVMe drives compared to the best SATA drives.
As long as you have heatsinks on your drives, they shouldn't throttle.
Chrispy_Finally! \o/

All Gen 5.0 SSDs so far have been stupid because they cannot be cooled without exceeding the M.2 dimensions specification.

Brute forcing the issue with a large heatsink and fan is a really terrible kludge that blocks CPU coolers and PCIe slots, and of course necessitates one of those whiny, tiny, sub-40mm fans that have pitiful throughput and even more pitiful lifespans. Boards that include a PCIe 5.0 M.2 slot so often also include integrated passive cooling that has to be stripped off first just to install one of the hot-n-hungry PCIe 5.0 SSDs, and if the piece you're stripping off to clear room for your roasty-toasty PCIe 5.0 SSD's active cooling solution was also the same piece that cooled the PCIe 4.0 drive you also use - then tough luck. One of your drives must suffer, and your fancy motherboard now looks like an ugly mess.
That's why you always buy SSDs without heatsinks. In fact, the PCIe 5.0 slot heatsinks on the motherboards are often better than the ones that the SSD makers supply. Not the ones under the GPU though, but you don't really have a choice there.
Posted on Reply
#28
Maxx
SSD Guru
CosmicWandererWhy are they still making them on what is essentially a 7nm node when they could have gone for at least 5nm? I bet that would have matched the efficiency of PCIe 4.0 controllers.

Cost I'm guessing, but I would have thought that given the push for 3nm, 5nm would be more cost effective by now.
Technically, Samsung has 5nm controllers out there (in their own process, not TSMC, so effectively "larger" I guess), but the 990 EVO and its OEM counterpart aren't as efficient as expected. At least partly because the 990 EVO uses older flash but also because Samsung's controller design is cumbersome. I think that's also the case with the SM2508. I've followed its development and have a reasonably good idea of the architecture, essentially dual quad-R8s up to 1.25GHz with a management M on the side. R8s are not more efficient than R5s (Phison, albeit the E26 uses RISC-V coprocessors) and you're still dealing with a DRAM controller and eight channels (up to 3600 MT/s) and BiCS6. BiCS6 is significantly better than previous gens in power draw, but you could hit the same speeds with fewer dies (and six-plane interleaves differently than four-plane) and better efficiency with BiCS8 or alternative/upcoming flash.
EatingDirtI wouldn't be so sure this won't need active cooling to not thermally throttle at all. It still uses 30% more power than the best PCIe 4.0 drive(SN850X) on the test, and has 40% higher max power consumption.

It's definitely more efficient than the rest of the Gen5 drives, which is nice I guess. I don't think efficiency is a huge issue for most people's use cases, which is why I imagine companies haven't been pushing for more efficient nodes, as that would up the cost of the drives themselves.
You can run the 1TB without a heatsink in a well-ventilated desktop, but a heatsink is probably smart. As for the 2TB, I'm under the impression they are sticking with 512Gb dies (rather than go to 1Tb, but the 4Tb will use 1Tb) which means twice the interleaving. So, I'd expect maximum performance on the 2TB would throttle if the drive is bare. This is likely why they only sent out 1TB samples (or at least, one reason).
WirkoPhison has a less hungry controller too, the E31T. Four-channel but potentially still good performance with newer, faster flash chips. It doesn't seem popular, however. I know of a single SSD from MSI that uses it, not sure if it has come to market yet.


Also, if it's the analog parts (the PHY) that draw significant power, there would be little benefit for a much increased cost.

As a point of reference, has anyone thoroughly tested the 990 EVO's power draw?
Yep, and the SM2504XT as you mention after this. Also the MAP1802. I have the spec sheets for the E31T but there are reasons you're not seeing it. I can't dive too deeply into it but you'll start seeing it in January 2025 via internal roadmaps. As for the 990 EVO, I mention it in this reply in an above quote section. Drives will be more efficient in x2 5.0 than x4 4.0. Phison throttles the E26 by link speed (i.e. generation) but throttling by link width (i.e. lanes) is slightly more efficient according to the patents.
Posted on Reply
#29
GabrielLP14
SSD DB Maintainer
Hagal77I hope Marvell finally brings out its Gen5 controller "Bravera SC5"
That controller has been out for ages now; I have a few of them lying around myself. They are indeed efficient, but they are for datacenters.
Posted on Reply
#30
Minus Infinity
TheLostSwedeI guess you wouldn't and it's unlikely to be the NAND any of Silicon Motion's partners will use when they build drives.
The drive tested is a reference design and most likely a controller validation platform, not a product that's likely to appear in retail.
As such, we should also be able to expect better performance from the controller when paired with faster/better NAND.
Makes sense.
Posted on Reply
#31
_roman_
Chrispy_Boards that include a PCIe 5.0 M.2 slot so often also include integrated passive cooling that has to be stripped off first just to install one of the hot-n-hungry PCIe 5.0 SSDs, and if the piece you're stripping off to clear room for your roasty-toasty PCIe 5.0 SSD's active cooling solution was also the same piece that cooled the PCIe 4.0 drive you also use - then tough luck.
Just like RGB lighting, it's about showing off the awesome ASUS ProArt / ASUS ROG Gamer logo / MSI Godlike / Gigabyte whatever... and so on. Bullshit marketing labels.
It's not about functionality.

The mainboard guys are for sure smart and intelligent, putting M.2 drives below the expansion slots. I always have to remove the graphics card to access the M.2 NVMe drive.
Well, they've now improved the design with even bigger M.2 passive covers, which block even more of the mainboard area.

The passive cooler on a proper M.2 NVMe SSD is for sure better than the usual bad passive cooler from the mainboard, in my point of view. These placeholder advertisement shields are just for show and not really decent passive cooling for an M.2 drive. The side profile of my Corsair MP600 Pro, for example, has for sure better cooling characteristics than the M.2 covers from my previous MSI B550 Gaming Edge WiFi. There were some tables during my education where we learnt about the characteristics of passive coolers.

I prefer a cheaper mainboard without those "useless" M.2 cooling covers with bad cooling characteristics.
Frank_100I use PCIe 4.0 as main drives and backup drive.
My biggest time sink is cloning system for back-up.
Because of throttling, the data transfer is only slightly better than SATA.
It usually takes about 2 hours for clonezilla to create and check a back-up.
It is worth doing, but I wish the hardware was faster.
I disagree. Backup to a USB NVMe SSD is much faster than backup to a USB SATA SSD.

My gnu gentoo linux installation was installed in 2006.

The backup took over 25 minutes for around 60GB, from the mainboard NVMe SSD to an external USB-A to SATA SSD. I used different SATA SSDs in sizes from 120 to 128GB - around 5 drives. Times were similar over a long period. Before that, I had similar times from my gaming notebook's internal SATA SSD to a USB SATA SSD. (Same setup)

Improvement:

I had for a while
Internal drive: P5Plus 1TB + LVM2 Container -> Luks encryption container -> ext4 file system

backup to usb-a -> NVME -> around 6-8 minutes

Now:
Internal drive: KC3000 2TB + LVM2 Container -> Luks encryption container -> BTRFS file system
backup to usb-a -> NVME -> around 6 minutes for 90GB of mixed data. The whole gentoo installation and data.
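Assuming roughly 60GB in 25 minutes over the SATA bridge and 90GB in 6 minutes over the NVMe bridge, the implied average throughput works out like this:

```shell
# Rough average backup throughput in MB/s, from the times quoted above
awk 'BEGIN {
    printf "SATA bridge: %.0f MB/s\n", 60 * 1000 / (25 * 60)  # 60GB in 25 min
    printf "NVMe bridge: %.0f MB/s\n", 90 * 1000 / (6 * 60)   # 90GB in 6 min
}'
# prints:
# SATA bridge: 40 MB/s
# NVMe bridge: 250 MB/s
```

Neither number is anywhere near the raw drive speed, since compression and encryption are in the path, but the gap between the two bridges is still clear.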

Additional information:

I write the start timestamp to a file. I force all writing operations and then write the end timestamp to a file. My backups are done without an internet connection, from a live gnu linux ISO.

Please note I backup from a compressed and encrypted volume (gentoo btrfs zstd / luks / lvm) to external encrypted volume (lvm2 / luks / ext4)
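Roughly, a backup run like that looks like this - a sketch only, with placeholder device names and paths, not the exact setup:

```shell
#!/bin/sh
# Timestamped whole-system backup sketch.
# /dev/sdX1, /mnt/backup and the timestamp paths are placeholders.
set -e

date > /root/backup_start.txt            # start timestamp

cryptsetup open /dev/sdX1 backup_crypt   # unlock the external LUKS volume
mount /dev/mapper/backup_crypt /mnt/backup

cp -ax /. /mnt/backup/                   # copy the whole installation (stay on one file system)

sync                                     # force all writing operations to disk
umount /mnt/backup
cryptsetup close backup_crypt

date > /root/backup_end.txt              # end timestamp
```

The difference between the two timestamp files is the backup time.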

I bought a low end, cheap, "garbage" PCIe 3.0 drive for that purpose. Not sure if a better high end drive like a Crucial P5 Plus would reduce the backup time even more. It was a money decision to go for that cheap WD drive.

I use a low spec external NVMe drive (translated from the German Amazon listing):
WD Blue SN570 1TB High-Performance M.2 PCIe NVMe SSD, with up to 3500MB/s read speed
ICY BOX SSD M.2 NVMe enclosure, USB 3.1 (Gen2, 10 Gbit/s), cooling system, USB-C, USB-A, PCIe M-Key, aluminium, grey

--

These times are reproducible and similar.

I'm well aware that I'm comparing apples with bananas. It's the same gnu gentoo linux installation with a lot of similar data. The user data changed a bit; the system files change a lot, as the system gets updated a lot. I wanted to show that there is a very big time difference between a USB-A to SATA SSD bridge and a USB-A to NVMe SSD bridge. It also matters how you make backups, on which file system and which operating system. Working with the bash shell in gnu linux with cp / cryptsetup commands is kinda fast in my point of view. I would never use a graphical user interface for backups.

I prefer whole system backups. I moved my installation several times and also tested the backups. The backup strategy works.
Posted on Reply
#32
Frank_100
TheLostSwedeI think you're confusing random file transfers with throttling, as random reads and writes aren't that much better on NVMe drives compared to the best SATA drives.
As long as you have heatsinks on your drives, they shouldn't throttle.


That's why you always buy SSDs without heatsinks. In fact, the PCIe 5.0 slot heatsinks on the motherboards are often better than the ones that the SSD makers supply. Not the ones under the GPU though, but you don't really have a choice there.
No heatsink beyond the one provided with the motherboard.
The main drive sits behind the graphics card.
It starts off very fast. 14 Gb/min but slows to 8 or 9 after a few minutes.
SATA on other machine usually goes at a constant 7 Gb/min.

Some of that may be the cpu throttling. (Ryzen 5950x)
It does have to compress the data.

@_roman_
I use internal drives for backup.
The machine is used for audio recording, mixing, etc.
There is about 600 GB of data.

The SATA machine is used for math and real work. Only about 200 GB.

(I am impressed that you gentoo.)
Posted on Reply
#33
Chrispy_
TheLostSwedeThat's why you always buy SSDs without heatsinks. In fact, the PCIe 5.0 slot heatsinks on the motherboards are often better than the ones that the SSD makers supply. Not the ones under the GPU though, but you don't really have a choice there.
_roman_The passive cooler from a proper M2 NVME SSD is for sure better as the usual bad passive cooler from the mainboard in my point of view.
I was more concerned that PCIe 5.0 drives that don't thermal throttle have needed ridiculous contraptions like this nonsense with significant clearance and compatibility issues:

Posted on Reply
#34
TheLostSwede
News Editor
Frank_100No heatsink beyond the one provided with the motherboard.
The main drive sits behind the graphics card.
It starts off very fast. 14 Gb/min but slows to 8 or 9 after a few minutes.
SATA on other machine usually goes at a constant 7 Gb/min.

Some of that may be the cpu throttling. (Ryzen 5950x)
It does have to compress the data.
Sorry, but this makes no sense.
The Ryzen 9 5950X doesn't support PCIe 5.0 to start with.

Also, the SATA interface is limited to 6 Gbps, so you're clearly making up some numbers here. No SATA drive gets even close to that kind of speed, either.

Also, interface speeds are per second, not per minute. Maybe you're mixing up Gigabyte and Gigabit?
Chrispy_I was more concerned that PCIe 5.0 drives that don't thermal throttle have needed ridiculous contraptions like this nonsense with significant clearance and compatibility issues:

Again, as I said, that's why you buy SSDs without heatsinks and use the motherboard ones, no clearance issues.
Most boards with PCIe 5.0 NVMe support have large enough heatsinks built in, unless you get a really cheap board.
Posted on Reply
#35
Frank_100
TheLostSwedeSorry, but this makes no sense.
The Ryzen 9 5950X doesn't support PCIe 5.0 to start with.

Also, the SATA interface is limited to 6 Gbps, so you're clearly making up some numbers here. No SATA drive gets even close to that kind of speed, either.

Also, interface speeds are per second, not per minute. Maybe you're mixing up Gigabyte and Gigabit?


Again, as I said, that's why you buy SSDs without heatsinks and use the motherboard ones, no clearance issues.
Most boards with PCIe 5.0 NVMe support have large enough heatsinks built in, unless you get a really cheap board.
Nope.
Look again. (I know it's late or early)
I stated I use PCIe 4.0.
Also note, GB/min is not the same as Gbps. (I did typo the above GB/min as Gb/min; it is gigabytes per minute.)
The number I'm quoting is provided by clonezilla.
It includes a read, a compress and a write.

Here is a link to a screenshot. (not mine, the poor user has a transfer rate of 1.58 GB/min)
linuxquestions/comments/1cfdn8l
Posted on Reply
#36
TumbleGeorge
TheLostSwedeAlso, the SATA interface is limited to 6 Gbps
4.8 Gb/s (600 MB/s) effective, after 8b/10b encoding overhead.
But that is the theoretical maximum; in the real world it's around 550-560 MB/s.
Posted on Reply
#37
TheLostSwede
News Editor
TumbleGeorgeBut this is theoretical maximum. In real world around up to 550/560MB/s.
As I said:
No SATA drive gets even close to that kind of speed, either.
Posted on Reply
#38
phints
These performance numbers look... decent? Nothing blowing me away here, but the efficiency is finally good.

I think it's better to wait for PCIe 5.0 offerings from the next Samsung Pro or WD SN850X sequel.
Posted on Reply
#39
Yashyyyk
TheDeeGeeI wonder, are there actually people who refuse to buy a storage drive because of its power usage?
Kind of?

It is pretty depressing (but I suppose also reassuring) to buy an SK Hynix P31/P41 and then watch it take literally years for anything to come close to or beat its efficiency - but it doesn't have a 4TB option.
Posted on Reply
#40
Frank_100
TheLostSwedeAs I said:
SATA to SATA transfer.

It peaked at 8.06 GB/min

Sorry my camera is a potato.

So let's do some math to clear up the confusion.
1 GigaByte = 8 Gigabits.

7 * 8 = 56
56 Gigabits/min

56/60 = .93 Gigabits/sec

Well below the 6 Gbps ceiling on SATA speed.
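That arithmetic can be double-checked with a one-liner:

```shell
# GB/min -> Gbit/s: multiply by 8 bits per byte, divide by 60 seconds
awk 'BEGIN { printf "%.2f Gbit/s\n", 7 * 8 / 60 }'
# prints: 0.93 Gbit/s - well below SATA's 6 Gbps line rate
```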
Posted on Reply