
LSI Implements SAS 12 Gb/s Interface

btarunr

Editor & Senior Moderator
Staff member
Joined
Oct 9, 2007
Messages
47,194 (7.56/day)
Location
Hyderabad, India
System Name RBMK-1000
Processor AMD Ryzen 7 5700G
Motherboard ASUS ROG Strix B450-E Gaming
Cooling DeepCool Gammax L240 V2
Memory 2x 8GB G.Skill Sniper X
Video Card(s) Palit GeForce RTX 2080 SUPER GameRock
Storage Western Digital Black NVMe 512GB
Display(s) BenQ 1440p 60 Hz 27-inch
Case Corsair Carbide 100R
Audio Device(s) ASUS SupremeFX S1220A
Power Supply Cooler Master MWE Gold 650W
Mouse ASUS ROG Strix Impact
Keyboard Gamdias Hermes E2
Software Windows 11 Pro
LSI Corporation is the first to implement the new serial-attached SCSI (SAS) 12 Gb/s interface, geared for future generations of storage devices that can make use of that much bandwidth. For now, LSI proposes SAS expander chips that distribute that bandwidth among current-generation storage devices. The company displayed the world's first SAS 12 Gb/s add-on card, which uses a PCI-Express 3.0 x8 host interface to ensure there's enough system bus bandwidth. The card can connect up to 44 SAS or SATA devices and supports up to 2,048 SAS addresses. It is backward compatible with today's 6 Gb/s and 3 Gb/s devices.
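
For a rough sense of the bus math, here's a back-of-the-envelope sketch in Python. The line rates and encoding overheads are the published PCIe 3.0 and SAS figures; the 4-lane wide port is an assumption for illustration, not an LSI spec:

Code:
# Usable bandwidth per link, from published line rates and encodings.
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
# SAS: 12 Gb/s per lane with 8b/10b encoding.

pcie3_lane_GBps = 8e9 * (128 / 130) / 8 / 1e9  # ~0.985 GB/s per lane
pcie3_x8_GBps = pcie3_lane_GBps * 8            # ~7.9 GB/s for the x8 slot

sas12_lane_GBps = 12e9 * (8 / 10) / 8 / 1e9    # ~1.2 GB/s per SAS lane
sas12_x4_GBps = sas12_lane_GBps * 4            # ~4.8 GB/s per 4-lane wide port

print(f"PCIe 3.0 x8: {pcie3_x8_GBps:.2f} GB/s")
print(f"SAS 12G x4:  {sas12_x4_GBps:.2f} GB/s")

With up to 44 devices sharing the expander, the host link rather than any single port is the ceiling, hence the x8 PCIe 3.0 slot.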

By pairing the 12 Gb/s SAS expander solution with 32 current-generation Seagate Savvio 15K.3 (15,000 RPM) hard drives, LSI claims a 58% increase in IOPS over a 6 Gb/s host controller, thanks to better bandwidth aggregation per drive, along with a 68% increase in bandwidth yield. The 32-drive array could dole out 3,106.84 MB/s in IOMeter and, more strikingly, over 1.01 million IOPS. As big as that number seems, it could be an IOMeter artifact, because the numbers don't add up for mechanical drives; perhaps it's measuring IOPS served from the disk caches.



View at TechPowerUp Main Site
 
Joined
Jan 11, 2009
Messages
9,249 (1.60/day)
Location
Montreal, Canada
System Name Homelabs
Processor Ryzen 5900x | Ryzen 1920X
Motherboard Asus ProArt x570 Creator | AsRock X399 fatal1ty gaming
Cooling Silent Loop 2 280mm | Dark Rock Pro TR4
Memory 128GB (4x32gb) DDR4 3600Mhz | 128GB (8x16GB) DDR4 2933Mhz
Video Card(s) EVGA RTX 3080 | ASUS Strix GTX 970
Storage Optane 900p + NVMe | Optane 900p + 8TB SATA SSDs + 48TB HDDs
Display(s) Alienware AW3423dw QD-OLED | HP Omen 32 1440p
Case be quiet! Dark Base Pro 900 rev 2 | be quiet! Silent Base 800
Power Supply Corsair RM750x + sleeved cables| EVGA P2 750W
Mouse Razer Viper Ultimate (still has buttons on the right side, crucial as I'm a southpaw)
Keyboard Razer Huntsman Elite, Pro Type | Logitech G915 TKL
over a million IOPS :eek:
 
Joined
Dec 16, 2010
Messages
1,668 (0.33/day)
Location
State College, PA, US
System Name My Surround PC
Processor AMD Ryzen 9 7950X3D
Motherboard ASUS STRIX X670E-F
Cooling Swiftech MCP35X / EK Quantum CPU / Alphacool GPU / XSPC 480mm w/ Corsair Fans
Memory 96GB (2 x 48 GB) G.Skill DDR5-6000 CL30
Video Card(s) MSI NVIDIA GeForce RTX 4090 Suprim X 24GB
Storage WD SN850 2TB, Samsung PM981a 1TB, 4 x 4TB + 1 x 10TB HGST NAS HDD for Windows Storage Spaces
Display(s) 2 x Viotek GFI27QXA 27" 4K 120Hz + LG UH850 4K 60Hz + HMD
Case NZXT Source 530
Audio Device(s) Sony MDR-7506 / Logitech Z-5500 5.1
Power Supply Corsair RM1000x 1 kW
Mouse Patriot Viper V560
Keyboard Corsair K100
VR HMD HP Reverb G2
Software Windows 11 Pro x64
Benchmark Scores Mellanox ConnectX-3 10 Gb/s Fiber Network Card

W1zzard

Administrator
Staff member
Joined
May 14, 2004
Messages
27,742 (3.71/day)
Processor Ryzen 7 5700X
Memory 48 GB
Video Card(s) RTX 4080
Storage 2x HDD RAID 1, 3x M.2 NVMe
Display(s) 30" 2560x1600 + 19" 1280x1024
Software Windows 10 64-bit
i think those IOPS are coming from cache.

IOPS for HDDs are estimated as 1 / (avg. rotational latency + avg. seek time). So: 1 / (2 ms + 2.7 ms) ≈ 213 IOPS

213 * 32 = 6,816 IOPS

i'd say user error using IOMeter
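
The same estimate as a Python snippet, for anyone who wants to plug in their own drive figures (the 2 ms and 2.7 ms values are the assumptions above, not measured numbers):

Code:
# Theoretical random IOPS for a spinning drive: each I/O costs one
# average rotational latency (half a revolution) plus one average seek.

avg_rotational_latency_ms = 2.0  # half a revolution at 15,000 RPM
avg_seek_ms = 2.7                # assumed seek time for a 15K SAS drive

iops_per_drive = 1000 / (avg_rotational_latency_ms + avg_seek_ms)
array_iops = iops_per_drive * 32

print(f"{iops_per_drive:.0f} IOPS per drive")  # ~213
print(f"{array_iops:.0f} IOPS for 32 drives")  # ~6,800 -- nowhere near 1M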
 

John Doe

Guest
HDDs aren't much use in datacenters anymore. An I/O drive would deliver superior I/O in a much smaller footprint, using a few PCI-E slots rather than a whole rack that needs power and cooling as well.

They should have demoed it on SLC SSDs to show the full capabilities of this controller (given its application area).

Still great stuff, though.
 

btarunr

Editor & Senior Moderator
Staff member
HDDs aren't much use in datacenters anymore.

They very much are. A single 250 GB SATA HDD is the most common storage option in leased servers. SSDs commonly start at an additional $50-odd per month for a 50 GB Intel SLC drive.
 

John Doe

Guest
They very much are. A single 250 GB SATA HDD is the most common storage option in leased servers. SSDs commonly start at an additional $50-odd per month for a 50 GB Intel SLC drive.

lol, you bombed on this one. I was referring to enterprise datacenters. HDDs do nothing but suck power and underperform at enterprise scale. Enterprise-sized servers don't base their storage on 250 GB HDDs. A single PCI-E or I/O SSD can replace tens, if not hundreds, of them. Look up "ZeusIOPS". ;)
 

btarunr

Editor & Senior Moderator
Staff member
lol, you bombed on this one. I was referring to enterprise datacenters. HDDs do nothing but suck power and underperform at enterprise scale. Enterprise-sized servers don't base their storage on 250 GB HDDs. A single PCI-E or I/O SSD can replace tens, if not hundreds, of them. Look up "ZeusIOPS". ;)

No, you bombed on this one. SSDs are far too unreliable, and their price per GB is far too high, to "replace" HDDs in enterprise datacenters; and yes, by these I mean "enterprise-grade" SSDs. No enterprise with half a brain replaces most of its HDDs with SSDs as you're suggesting. Instead, they use SSDs only to temporarily hold the "hot" parts of their databases, which they constantly keep in sync with hard drive arrays that are vastly greater in number.
 

John Doe

Guest
No, you bombed on this one. SSDs are far too unreliable to "replace" HDDs in enterprise datacenters. They use SSDs only to hold the "hot" parts of their databases, and quickly archive to hard drives, which are vastly greater in number.

Uhm, yeah. Time to do some research on what I just mentioned. Look it up, then tell me what the IBM guys say about it. SSD reliability (especially real SLC like ZeusIOPS, single-level cell only) has improved a lot; it's more reliable than a conventional HDD. The SLC you see on the market (like the Intel X25-E Extreme) still crams more than one bit per cell and doesn't compare to the reliability of ZeusIOPS. The ZeusIOPS is an OEM-only drive that enterprise businesses use instead of hundreds of HDDs. The memory it uses is like ECC RAM (in terms of reliability). It costs thousands of euros per drive and can only be ordered from STEC themselves.
 

btarunr

Editor & Senior Moderator
Staff member
Uhm, yeah. Time to do some research on what I just mentioned. Look it up, then tell me what the IBM guys say about it. SSD reliability (especially real SLC like ZeusIOPS, single-level cell only) has improved a lot; it's more reliable than a conventional HDD. The SLC you see on the market (like the Intel X25-E Extreme) still crams more than one bit per cell and doesn't compare to the reliability of ZeusIOPS. The ZeusIOPS is an OEM-only drive that enterprise businesses use instead of hundreds of HDDs. The memory it uses is like ECC RAM (in terms of reliability). It costs thousands of euros per drive and can only be ordered from STEC themselves.

Again, no. What IBM thinks doesn't reflect how today's enterprises are storing data. And no, not even SLC SSDs are more reliable than enterprise hard drives. IBM is merely endorsing SSDs in its ZeusIOPS paper; it in no way shows the spread of today's enterprise data.

And if we're talking research: http://www.snia.org/sites/default/files/AnilVasudeva_Are_SSDs_Ready_Enterprise_Storage_Systemsv4.pdf





^ Look at how that confirms my view of the spread of enterprise data.
 

John Doe

Guest
Talking about research: you don't know what you're talking about, so I suggest you do some more on ZeusIOPS. Especially read what storage professionals say about it. It's the future of storage. It can reproduce the I/O of hundreds of drives all by itself. It's extremely reliable and can easily, constantly be backed up. You can use a few of them working together for the potential of a thousand drives. It's ridiculous to use 1,000 drives instead of five ZeusIOPS with backup. Do you have any idea how much it takes to cool and power 1,000 drives?

That research shows HDDs in use because only rich, big businesses can afford such a setup. Companies are still on HDDs because of their price and availability; such hardware isn't sold with every server. ZeusIOPS is in no way a traditional SSD, and I/O drives will take over, but it's a slow transition at this rate.

Seriously, inform yourself with the info that's out there.
 

btarunr

Editor & Senior Moderator
Staff member
Talking about research: you don't know what you're talking about.

Go to bed.

[attached screenshots: 1a, 1b, 2a, 2b]
Seriously, go to bed. At this moment you are simply unable to debate.
 

John Doe

Guest
Go to bed.

Seriously, go to bed. At this moment you are simply unable to debate.

Yes, I am. Re-read my post. They aren't in use because they aren't available options for all servers; only the richer and more modern operations have moved up to them. And the thing is, they WILL be used in the future. We won't have HDDs by the time all flash becomes as reliable and as fast as ZeusIOPS.

Companies that are building from the ground up are moving toward enterprise SLC SSDs. No one in their right mind would get a load of HDDs over SSDs in this day and age.
 
Joined
Nov 18, 2011
Messages
245 (0.05/day)
In that topic you had entirely valid points. As for this one and many others, I have to agree with BTA: while Zeus is a great technology, even the enterprises that want it are, I'm sure, still only drawing up plans for the overhaul needed to move to it. I also firmly believe most of the enterprise is still running on mechanical drives. And I don't even remotely see how my statement was absurd; I was actually convinced you were trolling this entire topic.
 

John Doe

Guest
In that topic you had entirely valid points. As for this one and many others, I have to agree with BTA: while Zeus is a great technology, even the enterprises that want it are, I'm sure, still only drawing up plans for the overhaul needed to move to it. I also firmly believe most of the enterprise is still running on mechanical drives. And I don't even remotely see how my statement was absurd; I was actually convinced you were trolling this entire topic.

I'm not; why would I? You didn't need to nitpick my post right from the start. Enterprise is running mechanical, but that's going to change. My original post stated:

"HDDs aren't much use in datacenters anymore."

I didn't say they aren't used anymore. I was trying to say it's more logical to invest in SSDs than to buy hundreds of drives for this kind of application (I/O). It's much easier to do it that way. See the original post? The topic is about the number of IOPS LSI came up with on their controller.
 
Joined
Nov 18, 2011
Messages
245 (0.05/day)
Right, I understood that, although I also stated that I agree with BTA; half of his argument was that the industry is not running SSDs. Personally, even with Zeus, I wouldn't be surprised if industries weren't attracted to them. The reliability of SSDs is still a toss-up just because of history; until the bigwigs in the corporations realize it's worthwhile, they won't jump on the bandwagon. I actually highly doubt they will invest now, and with companies continuing to put out new technology that supports mechanical drives more and more, they are also pushed into not seeing a point in changing how they go about it.
 

John Doe

Guest
And that's the problem. People still think HDDs are the only solid option. They aren't. In fact, current MLC has enough write endurance that you don't have to worry about degradation the way people did a few years ago (for home usage).

That example aside, the logic used in flash like Zeus is perfect. It's great when you can replicate the I/O levels of that many drives with just one 120 GB SSD. It really is the way to go. If you read what some storage experts say, they repeat the things I've said in here.

People don't bother and stay stubbornly on SAS/SCSI drives. They don't realize non-volatile flash has come a long way and will take over; now the question is when.
 

btarunr

Editor & Senior Moderator
Staff member
"HDD's aren't much use in datacenters anymore".

You highlighted the wrong part of your statement. You said:

"HDD's aren't much use in datacenters anymore".

That statement is incorrect, I've provided statistics to refute that. Mechanical drives are far from being "aren't much use in datacenters anymore."
 

John Doe

Guest
You highlighted the wrong part of your statement. You said:

"HDDs aren't much use in datacenters anymore." The most logical interpretation of that statement is that you claim HDDs are of little use in datacenters, period.

That statement is incorrect, and I've provided statistics to refute it. Mechanical drives are far from "aren't much use in datacenters anymore."

Okay sir, let me clear it up further then. ;)

HDDs aren't much use for continuously accessed data, i.e. high I/O, which is what the OP is about.

For storage, though, yeah, they are: HDDs, and especially tape, for mass storage.
 
Joined
Apr 21, 2010
Messages
5,731 (1.08/day)
Location
West Midlands. UK.
System Name Ryzen Reynolds
Processor Ryzen 1600 - 4.0Ghz 1.415v - SMT disabled
Motherboard mATX Asrock AB350m AM4
Cooling Raijintek Leto Pro
Memory Vulcan T-Force 16GB DDR4 3000 16.18.18 @3200Mhz 14.17.17
Video Card(s) Sapphire Nitro+ 4GB RX 580 - 1450/2000 BIOS mod 8-)
Storage Seagate B'cuda 1TB/Sandisk 128GB SSD
Display(s) Acer ED242QR 75hz Freesync
Case Corsair Carbide Series SPEC-01
Audio Device(s) Onboard
Power Supply Corsair VS 550w
Mouse Zalman ZM-M401R
Keyboard Razor Lycosa
Software Windows 10 x64
Benchmark Scores https://www.3dmark.com/spy/6220813
Okay sir, let me clear it up further then. ;)

HDDs aren't much use for continuously accessed data, i.e. high I/O, which is what the OP is about.

For storage, though, yeah, they are: HDDs, and especially tape, for mass storage.

They might not be of "much use," but even in enterprise datacentres they are still the most common form of storage by a country mile compared to SSD/I/O drives.
 

Easy Rhino

Linux Advocate
Staff member
Joined
Nov 13, 2006
Messages
15,577 (2.37/day)
Location
Mid-Atlantic
System Name Desktop
Processor i5 13600KF
Motherboard AsRock B760M Steel Legend Wifi
Cooling Noctua NH-U9S
Memory 4x 16 Gb Gskill S5 DDR5 @6000
Video Card(s) Gigabyte Gaming OC 6750 XT 12GB
Storage WD_BLACK 4TB SN850x
Display(s) Gigabye M32U
Case Corsair Carbide 400C
Audio Device(s) On Board
Power Supply EVGA Supernova 650 P2
Mouse MX Master 3s
Keyboard Logitech G915 Wireless Clicky
Software The Matrix
that was a good read. anyone who thinks HDDs aren't much use in enterprise datacenters today is clearly on drugs. i'm glad i didn't get sucked into that nonsense.
 
Joined
Apr 7, 2011
Messages
1,380 (0.28/day)
System Name Desktop
Processor Intel Xeon E5-1680v2
Motherboard ASUS Sabertooth X79
Cooling Intel AIO
Memory 8x4GB DDR3 1866MHz
Video Card(s) EVGA GTX 970 SC
Storage Crucial MX500 1TB + 2x WD RE 4TB HDD
Display(s) HP ZR24w
Case Fractal Define XL Black
Audio Device(s) Schiit Modi Uber/Sony CDP-XA20ES/Pioneer CT-656>Sony TA-F630ESD>Sennheiser HD600
Power Supply Corsair HX850
Mouse Logitech G603
Keyboard Logitech G613
Software Windows 10 Pro x64
People don't bother and stay stubbornly on SAS/SCSI drives. They don't realize non-volatile flash has come a long way and will take over; now the question is when.

Yes, SSDs will take over, but right now? Not even close. SSDs in enterprise solutions are still quite rare; they're getting in there, but SAS HDDs still hold their ground.
The biggest issue with SSDs is their limited write endurance; they would get chewed up in a heavy-write environment. And this I say from my own experience.
SCSI/SAS HDDs have no issue working for 5 years, 24/7, while getting hammered. Now let me see an SSD do that.
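
For scale, here's a rough write-endurance estimate in Python (every number below is an assumption for illustration, not a spec for any particular drive):

Code:
# How long would an SLC SSD last under a sustained write load?

capacity_gb = 100          # assumed drive capacity
pe_cycles = 100_000        # assumed SLC program/erase rating
write_amplification = 1.5  # assumed controller overhead
write_rate_mb_s = 200      # assumed sustained write rate

total_write_tb = capacity_gb * pe_cycles / write_amplification / 1024
seconds = total_write_tb * 1024 * 1024 / write_rate_mb_s
years = seconds / (3600 * 24 * 365)

print(f"~{total_write_tb:,.0f} TB of writes before wear-out")
print(f"~{years:.1f} years at {write_rate_mb_s} MB/s nonstop")

Under a constant worst-case load, even generously rated SLC gives out well short of a five-year, 24/7 duty cycle; real workloads are burstier, so actual lifetimes land somewhere in between.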
 
Joined
Aug 10, 2007
Messages
4,267 (0.68/day)
Location
Sanford, FL, USA
Processor Intel i5-6600
Motherboard ASRock H170M-ITX
Cooling Cooler Master Geminii S524
Memory G.Skill DDR4-2133 16GB (8GB x 2)
Video Card(s) Gigabyte R9-380X 4GB
Storage Samsung 950 EVO 250GB (mSATA)
Display(s) LG 29UM69G-B 2560x1080 IPS
Case Lian Li PC-Q25
Audio Device(s) Realtek ALC892
Power Supply Seasonic SS-460FL2
Mouse Logitech G700s
Keyboard Logitech G110
Software Windows 10 Pro
Well, if you believe post #21 (not saying anyone should) and he only meant high-I/O scenarios this whole time, then he has at least a little standing. Million-IOPS systems were composed of ~750 short-stroked 15K hard drives only five years ago. The performance of two full racks can now be had in 4U, thanks to SSDs.
 