As you can see, I use a powerful test system in which all tests are executed from an ultra-fast PCIe SSD to ensure there is no bottleneck on my side: the RevoDrive 350 480 GB achieves up to 1800 MB/s sequential read and 1700 MB/s sequential write speeds. I have also equipped the system with an Intel X540-T2 network adapter to allow transfer speeds of up to 10 Gb/s with NAS servers that feature a 10GbE Ethernet controller. The Zyxel XS1920 smart-managed switch is also essential in achieving such high speeds over copper wiring.
Test Setup for Multi-Client Tests
The test setup I use for multi-client tests is described in the table below.
Switch: TP-Link TL-SG3216, 16-port Gigabit managed switch (LACP and Jumbo frames support)
Ethernet Cabling: CAT 6e, 2 m
UPS: CyberPower Systems PR2200ELCDSL
I chose to use ten real clients instead of virtual machines for the multi-client tests to ensure these tests are conducted under conditions as close to real life as possible. Ten real clients along with my custom-made software are more than enough to determine the capabilities of a NAS even in extreme usage scenarios.
Thanks Section
I want to thank ADATA for providing me with ten Ultimate SU800 SSDs.
Methodology
I use two programs to evaluate the NAS server's performance. The first is a custom-made application that performs ten basic file-transfer tests and measures the average speed of each in MB/s. To extract results that are as accurate as possible, I run every selected test ten times and use the average as the final result.
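My test tool itself isn't published, but a minimal Python sketch of this timing-and-averaging approach could look as follows; the file names and the mapped Z: drive are placeholders for illustration, not part of my actual setup.

```python
# Minimal sketch of a timed file-transfer measurement averaged over several
# runs; the paths below are hypothetical placeholders, not the actual tool.
import shutil
import time
from pathlib import Path

RUNS = 10  # each test is repeated ten times and averaged

def copy_speed_mbs(src: Path, dst: Path) -> float:
    """Copy src to dst and return the achieved speed in MB/s."""
    size_mb = src.stat().st_size / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

def averaged_speed(src: Path, dst: Path, runs: int = RUNS) -> float:
    """Run the same transfer several times and report the mean MB/s."""
    results = [copy_speed_mbs(src, dst) for _ in range(runs)]
    return sum(results) / len(results)

if __name__ == "__main__":
    # Example: copy a large test file from the local SSD to a NAS share.
    print(f"{averaged_speed(Path('testfile.bin'), Path('Z:/testfile.bin')):.1f} MB/s")
```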
The same program also handles my multi-client tests, in which a single server instance supports up to ten clients. The server program runs on the main workstation, while each client machine runs the client version of the program. All clients are synchronized and operate in parallel; once all tests have finished, the clients report their results to the server, which sums them up and transfers them to an Excel sheet for the generation of the corresponding graphs.
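In simplified form, the server side of such an arrangement could synchronize the clients and aggregate their results like the sketch below; the socket-based wire format shown here is purely illustrative and not the actual program's protocol.

```python
# Rough sketch: one server releases all test clients at once, then collects
# and sums their reported speeds. The protocol is made up for illustration.
import socket

NUM_CLIENTS = 10
PORT = 50007  # arbitrary port chosen for this example

def run_server() -> float:
    with socket.create_server(("", PORT)) as srv:
        conns = [srv.accept()[0] for _ in range(NUM_CLIENTS)]
        for c in conns:            # release all clients simultaneously so
            c.sendall(b"start\n")  # their transfer tests run in parallel
        total = 0.0
        for c in conns:            # each client reports its average MB/s
            total += float(c.recv(64).decode())
            c.close()
        return total               # aggregate throughput across all clients

if __name__ == "__main__":
    print(f"Aggregate: {run_server():.1f} MB/s")
```

A matching client would simply block until it receives "start", run its transfer tests, and send back its average speed as a plain number.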
The second program I use in my test sessions is Microsoft's DiskSpd, a highly flexible storage test tool capable of accurately simulating different workloads. I wrote two advanced scripts: one simulates an Online Transaction Processing (OLTP) system, and the other an Online Analytical Processing (OLAP) system. The OLTP scenario consists of a large number of short transactions, in which IOPS (input/output operations per second) play the key role. In my OLAP scenario, the number of transactions is low, but the queries can be very complex. Response times are crucial for an OLAP system, and a NAS server's maximum throughput is reached in this scenario because the block size is quite large.
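The scripts themselves aren't reproduced here, but the invocations below illustrate what OLTP-like and OLAP-like DiskSpd parameter sets could look like; the block sizes, durations, queue depths, and read/write mixes are illustrative assumptions, not my actual scripts.

```python
# Illustrative DiskSpd invocations only; all numeric parameters here are
# assumptions chosen to contrast the two workload types.
import subprocess

TARGET = r"Z:\diskspd-test.dat"  # hypothetical test file on the NAS share

# OLTP-like: small random I/O, mixed reads/writes, deep queue -> stresses IOPS.
oltp = ["diskspd", "-b8K", "-d120", "-t8", "-o32", "-r", "-w30",
        "-L", "-c20G", TARGET]

# OLAP-like: large sequential reads -> stresses maximum throughput.
olap = ["diskspd", "-b512K", "-d120", "-t4", "-o8", "-w0",
        "-L", "-c20G", TARGET]

for args in (oltp, olap):
    subprocess.run(args, check=True)  # requires diskspd.exe on PATH (Windows)
```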
OLTP systems generally serve the purpose of gathering input and storing it in a database. This happens on an enormous scale, and the most common operations are INSERT, UPDATE, and DELETE. An OLTP database holds detailed, current data, and an entity model (usually 3NF) is used as the schema for storing the transactional data. An OLTP database also usually has a high read-to-write ratio (typically 90/10 to 70/30).
OLAP systems are used to analyze the data stored in a database. As such, OLAP systems mostly run SELECT operations against very large data warehouses to collect information (data mining). An OLAP database consists of aggregated, historical data stored in multi-dimensional schemas (usually star schemas). The read-to-write ratio is very high, and in some cases, there may only be read operations.