HighPoint Rocket 1608A 8-Slot M.2 Gen 5 Review - 56 GB/s Transfer Rates

CrystalDiskMark


Really impressive numbers! Over 57 GB/s read is the highest number I've ever seen in CrystalDiskMark.

In order to achieve these results, we had to use non-standard settings that increase the transfer size and parallelism: SEQ2MQ16T5 (sequential, 2 MiB blocks, queue depth 16, five threads) instead of the default SEQ1MQ8T1 (1 MiB blocks, queue depth 8, one thread).
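To make the notation concrete, here's a rough Python sketch of what such a preset means in practice: 2 MiB transfers with 16 outstanding requests on each of five threads. Python has no convenient async block I/O, so this approximation simply uses 5 × 16 = 80 synchronous reader threads to keep a comparable number of requests in flight. The file path is a placeholder, and a real benchmark would use unbuffered (O_DIRECT) I/O to bypass the page cache.

```python
import os
import threading
import time

PATH = "/path/to/testfile"   # placeholder; these settings need a ~10 GiB file
BLOCK = 2 * 1024 * 1024      # "2M": 2 MiB transfer size
THREADS = 5                  # "T5"
QUEUE_DEPTH = 16             # "Q16"
READS_PER_WORKER = 64        # kept small so the sketch finishes quickly

nworkers = THREADS * QUEUE_DEPTH   # 80 requests in flight overall
results = [0] * nworkers

def reader(idx: int) -> int:
    """Read BLOCK-sized chunks in an interleaved sequential pattern."""
    fd = os.open(PATH, os.O_RDONLY)
    total = 0
    try:
        for i in range(READS_PER_WORKER):
            total += len(os.pread(fd, BLOCK, (idx + i * nworkers) * BLOCK))
    finally:
        os.close(fd)
    return total

def run_worker(i: int) -> None:
    results[i] = reader(i)

threads = [threading.Thread(target=run_worker, args=(i,)) for i in range(nworkers)]
t0 = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - t0
print(f"{sum(results) / elapsed / 1e9:.2f} GB/s")
```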


Even with "just" four drives in RAID-0, we still get over 40 GB/s!

CrystalDiskMark at Standard Settings


Once we switch to CDM's default settings, the numbers are much less impressive: at the lower queue depths and thread counts of the defaults, even CDM can't generate enough parallelism to make full use of the array's I/O capacity.

For non-enterprise workloads, the most important storage performance characteristic is 4K random at QD1, and here even the 8-drive RAID-0 is basically no faster than a single drive: with only one request in flight at a time, performance is bound by per-request latency, which striping can't reduce.
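To make this concrete, here's a minimal Linux-only sketch of a 4K random read test at queue depth 1. The file path and workspace size are placeholders matching the review's methodology, and O_DIRECT is used so the page cache doesn't distort the numbers.

```python
import mmap
import os
import random
import time

PATH = "/path/to/testfile"   # placeholder test file, at least SPAN bytes long
SPAN = 8 * 1024**3           # 8 GB workspace, as in the review
BLOCK = 4096
OPS = 10_000

fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)   # bypass the page cache
buf = mmap.mmap(-1, BLOCK)                      # page-aligned buffer, required by O_DIRECT
t0 = time.perf_counter()
for _ in range(OPS):
    # one request in flight at a time = queue depth 1
    offset = random.randrange(0, SPAN // BLOCK) * BLOCK  # 4K-aligned random offset
    os.preadv(fd, [buf], offset)
elapsed = time.perf_counter() - t0
os.close(fd)
print(f"{OPS / elapsed:,.0f} IOPS, {elapsed / OPS * 1e6:.1f} us per read")
```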


Here's a single drive in the R1608A.

Sequential Throughput

Our first round of synthetic tests looks at sequential throughput with a block size of 512 KB. We test the drive at various queue depths, ranging from 1 to 128. Besides sequential read and sequential write, we also test a mixed workload that randomly issues a read or write request with equal probability.

The charts below report a combined, weighted score designed to represent real-life consumer workloads. Accordingly, the weighting factors favor the mostly low queue depths seen in such workloads rather than the drive's absolute maximum throughput, which makes no difference in real-life usage.
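The exact weighting factors aren't published, so the values in this sketch are purely hypothetical; it only illustrates how per-queue-depth results can be folded into a single low-QD-biased score.

```python
# Hypothetical weights emphasizing the low queue depths that dominate
# consumer workloads; they sum to 1.0.
QD_WEIGHTS = {1: 0.40, 2: 0.20, 4: 0.15, 8: 0.10,
              16: 0.07, 32: 0.04, 64: 0.02, 128: 0.02}

def weighted_score(throughput_by_qd: dict[int, float]) -> float:
    """Combine per-queue-depth MB/s results into a single chart score."""
    return sum(QD_WEIGHTS[qd] * mbps for qd, mbps in throughput_by_qd.items())

# Made-up example figures for sequential read MB/s at each tested queue depth.
measured = {1: 7100, 2: 12800, 4: 24000, 8: 39000,
            16: 52000, 32: 56000, 64: 56500, 128: 56500}
print(f"weighted score: {weighted_score(measured):,.0f} MB/s")
```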

512K Sequential Read Performance
512K Sequential Write Performance
512K Sequential Mixed Performance


Random Access Performance

Next, we test 4K random IO, which transfers small chunks of data and spreads the accesses across the whole 8 GB workspace. As before, a third data point is provided for a mixed workload that randomly issues a read or write request with equal probability. Please note that the vertical axis uses non-linear scaling to improve the visibility of the low queue depth values, which are more relevant for daily workloads.
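Continuing the sketch style from above, the request-generation logic for this mixed workload is simple: pick a read or a write with equal probability and a random 4K-aligned offset inside the 8 GB workspace. Names and counts are placeholders.

```python
import random

SPAN = 8 * 1024**3   # 8 GB workspace
BLOCK = 4096

def next_request() -> tuple[str, int]:
    """Return the next (operation, offset) pair for the mixed workload."""
    op = random.choice(("read", "write"))                # 50/50 read/write mix
    offset = random.randrange(0, SPAN // BLOCK) * BLOCK  # 4K-aligned offset
    return op, offset

for _ in range(5):
    print(next_request())
```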

Using the same weighted scheme as for sequential IO, the charts below provide comparison data against other drives in the test group.

4K Random Read Performance
4K Random Write Performance
4K Random Mixed Performance


fSync Write Performance

When an application wants to make absolutely sure that changes have been written to disk, it uses the fsync() syscall, which flushes all pending writes to the drive and waits until the data is safely stored. This provides the integrity guarantees relied on by databases like MySQL and SQL Server, by high-availability filesystems, and by etcd, the key-value store that serves as the backbone of every Kubernetes cluster.
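In code, the durability pattern looks like this minimal sketch (filename and record contents are placeholders): nothing is considered committed until os.fsync() returns.

```python
import os

fd = os.open("journal.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"BEGIN;UPDATE accounts ...;COMMIT;\n")  # record is only buffered so far
os.fsync(fd)   # blocks until the drive reports the data durably written
os.close(fd)
# If power fails after os.fsync() returns, the record survives; before it,
# the write may still be sitting in volatile caches and can be lost.
```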

Modern consumer drives offer hundreds of thousands of IOPS at high queue depths, but those numbers come without any durability guarantees. Their fSync performance is very low by comparison and varies greatly from drive to drive, which is why we've added this test.
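A rough sketch of how such a test can be timed: each iteration issues a small write followed by an fsync, so the loop measures durable commits per second. Filename and counts are placeholders, and results on a real drive depend heavily on its cache and power-loss-protection design.

```python
import os
import time

OPS = 500
fd = os.open("fsync_test.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
payload = b"x" * 4096

t0 = time.perf_counter()
for _ in range(OPS):
    os.write(fd, payload)
    os.fsync(fd)                     # one durable commit per iteration
elapsed = time.perf_counter() - t0
os.close(fd)
os.remove("fsync_test.bin")
print(f"{OPS / elapsed:,.0f} fsyncs/s, {elapsed / OPS * 1e3:.2f} ms each")
```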



The more drives you add to the RAID-0 array, the longer fSyncs take to complete, because the flush isn't finished until every member drive has acknowledged it.

If your plan was to build an ultra-high-performance database server out of consumer SSDs, it's not going to work: with this setup, fSyncs are actually slower than on a single drive. You either need to buy enterprise drives or relax the ACID requirements.