Monday, February 27th 2012

LSI-SandForce Releases Code to SSD Manufacturers That Adjusts Over-provisioning

To anyone familiar with SSDs, the name "SandForce" is equally familiar: it makes the brains of some of the fastest client SSDs in the business. Buyers have also come to know SandForce-driven SSDs by their unusual capacities, the result of the controller reserving a portion of the physical NAND flash for special low-level tasks. Drives with 64, 128, and 256 GB of physical NAND flash ship with user capacities of 60 GB, 120 GB, and 240 GB, respectively. This reservation is called "over-provisioning". The impression took hold that this ~7% loss in capacity was a trade-off for higher performance. It appears that's not quite the case.
SandForce has released code to SSD manufacturers that lets drives operate without that ~7% over-provisioning, providing nearly 100% of the physically-available NAND flash to the end-user as unformatted capacity, with no loss in performance. All modern SSDs need some of their physical NAND flash set aside by the controller to map out bad blocks and to track data marked for deletion when the OS issues a TRIM command, which the controller later cleans up at its leisure (ensuring users don't experience the performance drops that NAND flash erase cycles would otherwise cause).
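
For illustration, the arithmetic behind that familiar ~7% reserve is simple enough to show directly. The short Python sketch below is purely illustrative (nothing from SandForce's actual firmware), and it uses one common convention of expressing reserved NAND as a percentage of user capacity:

# Illustrative sketch only -- not SandForce code. Computes the traditional
# over-provisioning reserve on SF-2000 class capacity points.

def overprovisioning_pct(physical_gb: float, user_gb: float) -> float:
    """Reserved NAND as a percentage of user capacity (one common convention)."""
    return (physical_gb - user_gb) / user_gb * 100

for physical, user in [(64, 60), (128, 120), (256, 240)]:
    pct = overprovisioning_pct(physical, user)
    print(f"{physical} GB NAND exposed as {user} GB: {pct:.2f}% reserved")

# Each pairing works out to 6.67% -- the "~7%" figure quoted for these drives.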

What SandForce achieved with its newest code is to let SSD manufacturers use what's called "0% over-provisioning". True 0% is impossible in the real world, but it can be approximated by handing the difference in capacity between "billions of bytes" and binary "gigabytes" over to the user area. That delta works out to 7.37% of the decimal capacity. The real difference in user capacity between a 120 GB drive and its 128 GB of physical NAND flash is therefore roughly 7% + 7.37%, or about 14.37%. The translation between billions of bytes and gigabytes was established by HDD manufacturers long ago, so most users don't notice the difference. What the new firmware for the SF-2000 processor family permits is for manufacturers to build SSDs at full binary capacity points, with what is commonly known as "0% over-provisioning".
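
To make the unit mismatch concrete, here is another short illustrative Python sketch (an assumed example, not vendor code) showing where the 7.37% delta comes from, and how a full-binary-capacity "0% over-provisioned" drive still retains that margin for the controller:

# Illustrative sketch only. NAND is manufactured in binary gigabytes
# (2^30 bytes), while drive labels use decimal gigabytes (10^9 bytes),
# the convention HDD makers established.

GIB = 2**30   # 1,073,741,824 bytes (binary gigabyte the NAND is built in)
GB = 10**9    # 1,000,000,000 bytes (decimal gigabyte on the label)

# Hidden headroom from the unit mismatch alone, relative to user space:
print(f"GiB-to-GB delta: {(GIB - GB) / GB * 100:.2f}%")   # 7.37%

# A "0% over-provisioned" 128 GB drive: 128 GiB of physical NAND exposing
# 128 decimal GB to the user still leaves that 7.37% for the controller.
physical, user = 128 * GIB, 128 * GB
print(f"Controller reserve on a 128 GB drive: {(physical - user) / user * 100:.2f}%")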

In other words, buyers will soon see SandForce-driven SSDs with capacities such as 64 GB, 128 GB, 256 GB, and so on, offering ~7% more user space with no loss in performance. These are not to be confused with certain SandForce-driven SSDs launched in the past whose labels carried canonical capacities (64, 128, 256 GB) that denoted physical NAND flash capacity rather than user space.
Source: The SSD Review

7 Comments on LSI-SandForce Releases Code to SSD Manufacturers That Adjusts Over-provisioning

#1
Shinshin
Good news!
We can hope for firmware upgrades...
#2
btarunr
Editor & Senior Moderator
I think such a firmware update won't be easy for users, since it changes the user space on the drive. I guess firmware-updated drives will need fresh low-level and high-level formats.
#3
laszlo
this should have been done a few years ago; paid storage space that can't be accessed, wtf

A pity we won't have a similar "code" for HDDs too
#4
Completely Bonkers
7% extra storage space (about the size of an internet temp directory) OR an extra X% of longevity from over-provisioning. I wonder what that X% is? I'd give up 7% of space for a "certain guarantee" of my data. That is, if over-provisioning helps longevity at all!

Funny how Sandforce is worried about this 7%. Obviously consumers are, in general, dumb, and were picking up 64GB drives instead of 60GB drives because they felt they were getting "more".

As many of us know from experience with our first SSDs, a measly 7% or 4GB isn't going to make an iota of difference. If 60GB isn't enough, neither is 64GB. You cannot live with 64GB as your main drive and will need to upgrade to 128GB or 256GB. And if that was 120GB or 240GB, again, it didn't make a difference.
#5
1c3d0g
I'm more concerned about how this will affect reliability, seeing as stability is not one of SandForce's strong points. Since their technology doesn't use any form of cache (DRAM or otherwise), wiping out the only "reserve space" left leaves them with almost no "scratch pad" to write data to (the difference between gigabytes and gibibytes isn't all that much). It'll be interesting to see whether the next-generation controller includes some form of cache or not.
#6
Steevo
Given the prices of SSDs on the market and the reliability/performance at stake, I see no issue with leaving the 7% as a scratch-pad/bad-block area. That would allow one whole chip at 16GB densities to fail and the drive to keep running with data intact.
#7
iLLz
I wonder if Intel helped them with this. When Intel spent the last year testing SF controllers and modding the firmware, they said they would let SF release those updates to all drive manufacturers, but at a later time, giving Intel a while to have them exclusively.

I wonder if this was one of those things. I do know they have made the SF much more stable, and in a few months new firmware will come out for everyone else.