As you can see, we use a powerful test system with a very fast PCIe SSD from which all tests are executed, ensuring there is no bottleneck on our side: the RevoDrive 350 480 GB can achieve up to 1800 MB/s sequential read and 1700 MB/s sequential write speeds, comfortably above the roughly 1250 MB/s of raw bandwidth a 10GbE link offers. We also equipped the system with an Intel X540-T2 network adapter for up to 10GbE transfer speeds with NAS servers that feature a 10GbE Ethernet controller. The Zyxel XS1920 smart managed switch is also essential in achieving such high speeds over copper cabling.
Test Setup for Multi-Client Tests
The test setup we use for our multi-client tests is described in the table below:
Switch: TP-Link TL-SG3216 16-port Gigabit managed switch (LACP and jumbo frame support)
Ethernet cabling: Cat 6e, 2 m
UPS: CyberPower Systems PR2200ELCDSL
We used ten real clients instead of virtual machines for our multi-client tests to ensure the test conditions are as close to real-world usage as possible. We believe ten clients running our custom-made software are more than enough to figure out the capabilities of a NAS, even under extreme usage scenarios.
We use a strong, high-quality CyberPower UPS with pure sine wave output to adequately protect our client PCs and the NAS. The PR2200ELCDSL belongs to the Professional Tower Series and has a capacity of 2200 VA, which is more than enough to handle all ten client PCs and a business-centric NAS with multiple HDDs installed.
Thanks Section
Building a suitable test bed for NAS reviews is hard and expensive; however, we were lucky enough to have the support of several companies, which we would like to mention and thank one by one.
Shuttle, for helping us acquire twelve DS81 barebone slim PCs.
OCZ, for the RevoDrive 350 and a dozen ARC 100 SSDs with 240 GB capacity each.
Test Methodology
We use three different programs to evaluate each NAS server's performance. The first is Intel's NAS Performance Toolkit (NASPT). Intel was kind enough not only to release this toolkit to the public for free, but also to provide its source code. Its only problem is that a client PC with more than 2 GB of RAM heavily skews the results of two tests ("HD Video Record" and "File Copy to NAS"), since those end up measuring the client's RAM buffer speed rather than the network's speed; we therefore cap our test PC's memory at 2 GB via msconfig's advanced boot options. We also use the toolkit's batch-run function, which repeats the selected tests five times and takes the average as the final result.
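Since those two NASPT tests are only trustworthy when the cap is actually in effect, it is worth verifying before each session. Below is a minimal sanity-check sketch of our own (not part of NASPT) that queries the Windows GlobalMemoryStatusEx API; the 2 GB threshold and tolerance are assumptions:

```python
import ctypes

# Hypothetical pre-flight check: confirm the 2 GB memory cap is active
# before running NASPT. Windows-only (uses kernel32).
class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [("dwLength", ctypes.c_ulong),
                ("dwMemoryLoad", ctypes.c_ulong),
                ("ullTotalPhys", ctypes.c_ulonglong),
                ("ullAvailPhys", ctypes.c_ulonglong),
                ("ullTotalPageFile", ctypes.c_ulonglong),
                ("ullAvailPageFile", ctypes.c_ulonglong),
                ("ullTotalVirtual", ctypes.c_ulonglong),
                ("ullAvailVirtual", ctypes.c_ulonglong),
                ("ullAvailExtendedVirtual", ctypes.c_ulonglong)]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

total_gib = status.ullTotalPhys / 2**30
print(f"Visible RAM: {total_gib:.2f} GiB")
if total_gib > 2.05:  # small tolerance for reserved regions (assumption)
    print("Warning: memory cap not active; NASPT results will be skewed.")
```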
The second program is custom-made. It performs ten basic file-transfer tests and measures the average speed of each in MB/s. To get results that are as accurate as possible, we run every selected test ten times and use the average as the final result.
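To illustrate the measure-and-average methodology, here is a minimal Python sketch of a single transfer test; the file paths are placeholders and the actual program differs, but the timing loop mirrors what is described above:

```python
import shutil
import time
from pathlib import Path

SOURCE = Path(r"C:\testfiles\movie.mkv")  # local test file (assumption)
TARGET = Path(r"\\NAS\share\movie.mkv")   # NAS share target (assumption)
RUNS = 10                                  # each test repeats ten times

speeds = []
size_mb = SOURCE.stat().st_size / 1_000_000
for _ in range(RUNS):
    if TARGET.exists():
        TARGET.unlink()                    # start each run from a clean state
    start = time.perf_counter()
    shutil.copyfile(SOURCE, TARGET)        # the timed transfer
    elapsed = time.perf_counter() - start
    speeds.append(size_mb / elapsed)

print(f"average: {sum(speeds) / len(speeds):.1f} MB/s")
```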
We also perform our multi-client tests (a single server instance of the program supports up to ten clients) with the same software. The server component runs on the main workstation, while the clients run the client version of the program. All instances are synchronized and operate in parallel; once all tests finish, the clients report their results to the server, which sums them up and exports them to an Excel sheet for the generation of the corresponding graph(s).
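Conceptually, the synchronization works like a barrier: no client starts until every client has checked in, so all ten hit the NAS simultaneously. A bare-bones sketch of the server side (our own illustration, not the actual tool; port, client count, and message format are hypothetical) could look like this:

```python
import json
import socket

HOST, PORT, CLIENTS = "0.0.0.0", 5000, 10  # hypothetical values

with socket.create_server((HOST, PORT)) as server:
    # Barrier: wait until all clients have connected. Each client blocks
    # after connecting, waiting for the START message.
    conns = [server.accept()[0] for _ in range(CLIENTS)]
    for c in conns:
        c.sendall(b"START\n")              # release every client at once
    total = 0.0
    for c in conns:
        # Each client reports one JSON line when done, e.g. {"mbps": 95.2}
        result = json.loads(c.makefile().readline())
        total += result["mbps"]
        c.close()

print(f"aggregate throughput: {total:.1f} MB/s")
```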
The third program we use in our test sessions is Microsoft's DiskSpd, a highly flexible storage testing tool capable of simulating different workloads very accurately. We wrote two advanced scripts: one that simulates an online transaction processing (OLTP) system and one that simulates an online analytical processing (OLAP) system. The OLTP scenario consists of a large number of short transactions, so the number of IOPS (input/output operations per second) plays a key role. In our OLAP scenario the number of transactions is low, but the queries can be very complex. Response times are very important for an OLAP system, and a NAS server's maximum throughput is reached in this scenario since the block size is quite large.
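Our exact scripts are not reproduced here, but the two workloads map naturally onto DiskSpd parameters: small random blocks with a mixed read/write ratio for OLTP, and large sequential reads for OLAP. The sketch below shows plausible parameter sets; the target path, durations, thread counts, queue depths, and file size are assumptions, not our actual values:

```python
import subprocess

# Illustrative DiskSpd invocations (assumptions, not our actual scripts).
# diskspd.exe must be on the PATH and the target share reachable.
TARGET = r"\\NAS\share\testfile.dat"  # hypothetical test target

# OLTP-like: 8 KiB random I/O, 70/30 read/write mix, 4 threads, deep queue.
oltp = ["diskspd.exe", "-b8K", "-d60", "-r", "-w30", "-t4", "-o32",
        "-Sh", "-L", "-c4G", TARGET]

# OLAP-like: 512 KiB sequential reads, single thread, moderate queue depth.
olap = ["diskspd.exe", "-b512K", "-d60", "-w0", "-t1", "-o8",
        "-Sh", "-L", "-c4G", TARGET]

for name, cmd in (("OLTP", oltp), ("OLAP", olap)):
    print(f"--- {name} scenario ---")
    subprocess.run(cmd, check=True)
```

The -Sh switch disables software and hardware write caching so the NAS, not a cache, is what gets measured, while -w sets the write percentage (30 percent writes yields the 70/30 read-to-write mix mentioned below).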
OLTP systems generally serve the purpose of gathering input information and storing it in a database. This happens on a very large scale, and the most common operations are INSERT, UPDATE, and DELETE. An OLTP database holds detailed, current data, and an entity model, usually in third normal form (3NF), is used as the schema for storing the transactional data. An OLTP database also usually sees high read-to-write ratios (typically 90/10 to 70/30).
OLAP systems are used to analyze the data stored in a database. As such, OLAP systems mostly apply SELECT operations in very large data warehouses in order to collect information (data mining). An OLAP database consists of aggregated, historical data stored in multi-dimensional schemas (usually star schemas). The read-to-write ratio is very high, and there might only be read operations in some cases.