Friday, November 18th 2011
LSI Implements SAS 12 Gb/s Interface
LSI Corporation is the first to implement the new Serial Attached SCSI (SAS) 12 Gb/s interface, geared for future generations of storage devices that can make use of that much bandwidth. For now, LSI proposes SAS expander chips that distribute that bandwidth among current-generation storage devices. The company displayed the world's first SAS 12 Gb/s add-on card, which uses a PCI-Express 3.0 x8 interface to ensure there's enough system bus bandwidth. The card can connect up to 44 SAS or SATA devices and supports up to 2,048 SAS addresses. It is backwards compatible with today's 6 Gb/s and 3 Gb/s devices.
Using the 12 Gb/s SAS expander solution paired with 32 current-generation Seagate Savvio 15K.3 hard drives, LSI claims a 58% increase in IOPS over a 6 Gb/s host controller, thanks to better bandwidth aggregation per drive, along with a 68% increase in bandwidth yield. The array of 32 hard drives delivered 3,106.84 MB/s in IOMeter and, more strikingly, over 1.01 million IOPS. As impressive as that number looks, it could be an IOMeter artifact, because the math doesn't add up for mechanical drives; perhaps the benchmark is measuring IOPS served from the drives' caches.
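As a rough sanity check (our own back-of-the-envelope arithmetic in Python, not figures from LSI), dividing the aggregate results across the 32 drives shows why the throughput is believable while the IOPS figure is not:

drives = 32
total_mb_s = 3106.84        # aggregate throughput reported in IOMeter
total_iops = 1_010_000      # aggregate IOPS reported in IOMeter

per_drive_mb_s = total_mb_s / drives   # ~97 MB/s: plausible for a 15K SAS drive
per_drive_iops = total_iops / drives   # ~31,500: far beyond any mechanical drive

print(f"{per_drive_mb_s:.1f} MB/s and {per_drive_iops:,.0f} IOPS per drive")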
Source: TheSSDReview
36 Comments on LSI Implements SAS 12 Gb/s Interface
IOPS for HDDs are calculated as 1 / (average latency + average seek time). So: 1 / (2 ms + 2.7 ms) ≈ 213 IOPS per drive, and 213 × 32 = 6,816 IOPS for the array.
I'd say user error using IOMeter.
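For reference, here is that calculation as a small Python sketch (the latency and seek figures are the commenter's, typical for a 15,000 RPM drive but assumed here):

avg_latency_ms = 2.0   # rotational latency of a 15K RPM drive
avg_seek_ms = 2.7      # typical average seek time for an enterprise 15K drive

iops_per_drive = 1 / ((avg_latency_ms + avg_seek_ms) / 1000)  # ≈ 212.8
array_iops = 32 * round(iops_per_drive)                       # 32 × 213 = 6,816

print(f"Per drive: {iops_per_drive:.0f} IOPS; 32-drive array: {array_iops:,} IOPS")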
They should have done it with SLC SSDs to show the full capabilities of this controller (given its intended application area).
Still great stuff, though.
And if we're talking research: www.snia.org/sites/default/files/AnilVasudeva_Are_SSDs_Ready_Enterprise_Storage_Systemsv4.pdf
Look at how that confirms my views on the spread of enterprise data.
That research shows HDDs in use because only wealthy, large businesses can afford such setups. Companies are still on HDDs because of their price and availability; hardware like this isn't sold with every server. ZeusIOPS is in no way a traditional SSD, and I/O drives will take over, but it's a slow transition at this rate.
Seriously, inform yourself from the information out there.
Seriously, go to bed. At this moment you are simply unable to debate.
Companies that are building from the ground up are moving towards enterprise SLC SSDs. No one in their right mind would choose a load of HDDs over SSDs at this point.
But yeah, they used to pay me... to talk crap about eVGA on their forums... to get myself banned.
www.evga.com/forums/fb.ashx?m=1127027
"HDD's aren't much use in datacenters anymore".
I didn't say they aren't used anymore. I tried to say it's more logical to invest in SSDs than to buy hundreds of drives for this kind of application (high I/O). It's much easier to do it that way. See the original post? The topic is the amount of IOPS LSI achieved on their controller.
That example aside, the logic used in flash like ZeusIOPS is sound. It's great when you can replicate the I/O level of that many drives with just one 120 GB SSD. It really is the way to go. If you read what some storage experts say, they repeat the points I've made here.
People don't bother and remain stubborn about SAS/SCSI drives. They don't realize non-volatile flash has come a long way and will take over; the only question is when.
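To put rough numbers on the "one SSD replaces many HDDs" argument (the SSD figure below is an assumed ballpark for an enterprise SLC drive of that era, not a measured ZeusIOPS specification):

hdd_random_iops = 213      # 15K SAS drive, from the formula earlier in the thread
ssd_random_iops = 50_000   # assumed enterprise SLC SSD ballpark

print(f"One such SSD matches roughly {ssd_random_iops / hdd_random_iops:.0f} 15K HDDs on random IOPS")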
"HDD's aren't much use in datacenters anymore".
That statement is incorrect, and I've provided statistics to refute it. Mechanical drives are far from useless in datacenters.
HDDs aren't much use for continuously accessed data, that is, high-I/O workloads, which is what the OP is on about.
For bulk storage, though, yes, they are: HDDs, and especially tape, for mass storage.
The biggest issue with SSDs is their limited write endurance; they would get chewed up in a heavy-write environment. I say this from my own experience.
SCSI/SAS HDDs have no issue working 24/7 for five years while getting hammered; now show me an SSD that can do that.
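Whether an SSD survives that kind of duty comes down to endurance math; here is a rough sketch, with every input an assumption for illustration rather than a vendor specification:

capacity_gb = 120            # drive size discussed above
pe_cycles = 100_000          # SLC-class program/erase rating (assumed)
write_amplification = 2.0    # assumed workload-dependent factor
daily_writes_tb = 2.0        # assumed sustained 24/7 write volume

endurance_tb = capacity_gb * pe_cycles / write_amplification / 1000
lifetime_years = endurance_tb / daily_writes_tb / 365
print(f"~{endurance_tb:,.0f} TB endurance, ~{lifetime_years:.1f} years at {daily_writes_tb} TB/day")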