# Hard Disk Drives with HAMR Technology Set to Arrive in 2018



## P4-630 (Jan 5, 2016)

"_While many client devices use solid-state storage technologies nowadays, hard disk drives (HDDs) are still used by hundreds of millions of people and across virtually all datacenters worldwide. Heat-assisted magnetic recording (HAMR) technology promises to increase capacities of HDDs significantly in the coming years. Unfortunately, mass production of actual hard drives featuring HAMR has been delayed for a number of times already and now it turns out that the first HAMR-based HDDs are due in 2018.
_
*Storage Demands Are Increasing*
_Analysts from International Data Corp. and Western Digital Corp. estimate that data storage capacity shipped by the industry in 2020 will total approximately 2900 exabytes (1EB = 1 million TB), up from around 1000EB in 2015. Demand for storage will be driven by various factors, including Big Data, Internet-of-Things, user-generated content, enterprise storage, personal storage and so on. Samsung Electronics believes that the NAND flash industry will produce 253EB of flash memory in 2020, up from 84EB in 2015. Various types of solid-state storage will account for less than 10% of the storage market in terms of bytes shipped, whereas hard drives, tape and some other technologies will account for over 90%, if the estimates by IDC, Samsung and Western Digital are correct._"
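For what it's worth, the quoted estimates hang together arithmetically. A quick back-of-the-envelope sketch (all inputs are the analyst figures above; the growth rates are merely what those figures imply, not independently sourced):

```python
# Back-of-the-envelope check of the quoted industry estimates.
total_2015, total_2020 = 1000, 2900   # EB shipped, all storage (IDC/WD figures)
nand_2015, nand_2020 = 84, 253        # EB produced, NAND flash (Samsung figures)

# Implied compound annual growth rates over the five-year span
total_cagr = (total_2020 / total_2015) ** (1 / 5) - 1
nand_cagr = (nand_2020 / nand_2015) ** (1 / 5) - 1

# NAND's share of total bytes shipped in 2020
nand_share_2020 = nand_2020 / total_2020

print(f"total storage CAGR 2015-2020: {total_cagr:.1%}")       # ~23.7%
print(f"NAND flash CAGR 2015-2020:    {nand_cagr:.1%}")        # ~24.7%
print(f"NAND share of 2020 bytes:     {nand_share_2020:.1%}")  # ~8.7%, i.e. under 10%
```

So the "less than 10% of bytes shipped" claim follows directly from the 253EB and 2900EB figures.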





"_In a bid to meet demand for increased storage needs in the coming years, the industry will need to expand production of NAND flash memory as well as to increase capacities of hard disk drives. Modern HDDs based on perpendicular magnetic recording (PMR) and shingled magnetic recording (SMR) platters have areal density of around ~0.95 Terabit per square inch (Tb/in²) and can store up to 10TB of data (on seven 1.43TB platters). Technologies like two-dimensional magnetic recording (TDMR) can potentially increase areal density of HDD disks by 5 to 10 per cent, which is significant. Moreover, Showa Denko K.K. (SDK), the world’s largest independent maker of hard drive platters, has outlined plans to mass produce ninth-generation PMR HDD media with areal density of up to 1.3Tb/in² next year.
_
*HAMR: The Key to Next-Gen HDDs*
_Companies like Seagate Technology and Western Digital believe that to hit areal densities beyond 1.5Tb/in², HAMR technology along with higher-anisotropy media will be required because of the superparamagnetic limit (the physical "pitches" on the magnetic media become so tiny that it will not be possible to produce a powerful enough magnetic field in the HDD's head space to write data into them).

Certain principles of heat-assisted magnetic recording were patented back in 1954, even before IBM demonstrated the very first commercial hard disk drive. Heat-assisted magnetic recording briefly heats the magnetic recording medium with a special laser to near its Curie point (the temperature at which ferromagnetic materials lose their permanent magnetic properties) to reduce its coercivity while data is written. HAMR HDDs will feature a new architecture and require new media, completely redesigned read/write heads with a laser as well as a special near-field optical transducer (NFT), and a number of other components not used or mass produced today."
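As an aside, the capacity and density figures quoted above are roughly self-consistent. A quick sketch (the 1.8"/0.75" platter radii below are my assumptions for a 3.5" drive, not numbers from the article):

```python
import math

# Does "1.43TB per platter at ~0.95 Tb/in2" hang together? Compute the
# recording area each surface would need (decimal units; two surfaces per platter).
platter_tb = 1.43
density_tbit_in2 = 0.95

per_surface_tbit = platter_tb * 8 / 2           # TB -> Tbit, split over 2 surfaces
needed_in2 = per_surface_tbit / density_tbit_in2
print(f"implied recording area: {needed_in2:.1f} in2 per surface")  # ~6.0

# For comparison, a full annulus from r=0.75" to r=1.8" (assumed usable band
# on a 3.5" platter) offers more than enough room:
full_annulus_in2 = math.pi * (1.8**2 - 0.75**2)
print(f"available area:         {full_annulus_in2:.1f} in2")        # ~8.4
```

The implied ~6 in² per surface fits comfortably inside the assumed usable band, so the formatted capacity is plausible at the quoted density.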




Read more:
http://www.anandtech.com/show/9866/hard-disk-drives-with-hamr-technology-set-to-arrive-in-2018


----------



## natr0n (Jan 5, 2016)

I don't have a good feeling about this tech at all.


----------



## eidairaman1 (Jan 5, 2016)

natr0n said:


> I don't have a good feeling about this tech at all.



IBM Deathstar?


----------



## R-T-B (Jan 5, 2016)

eidairaman1 said:


> IBM Deathstar?



I think this is more like the second deathstar, the one they didn't even finish building before it got blown up by the rebel (SSD) alliance.


----------



## natr0n (Jan 5, 2016)

eidairaman1 said:


> IBM Deathstar?


indeed


----------



## eidairaman1 (Jan 5, 2016)

R-T-B said:


> I think this is more like the second deathstar, the one they didn't even finish building before it got blown up by the rebel (SSD) alliance.



I'm running an 840 Pro as my OS drive with the Raptor X as my game drive (no games go on the 840, lol).

I just recall the IBM Deathstar ran very hot when it got the click of death.


----------



## R-T-B (Jan 5, 2016)

eidairaman1 said:


> I'm running an 840 Pro as my OS drive with the Raptor X as my game drive (no games go on the 840, lol).
> 
> I just recall the IBM Deathstar ran very hot when it got the click of death.



The issue with the Deathstars wasn't actually the heat, it was the glass platters.  When areas were frequently accessed, the magnetic material would actually flake off the platter.  The heat generated was most likely related to its bearings and such being full of stuff they were not supposed to be by the time it began to REALLY die.


----------



## RCoon (Jan 5, 2016)

Making HDDs above 3TB caused them to reduce rotation speeds to 5900 RPM. Making HDDs use shingled recording to reach 6 and 8TB also further reduced read and write speeds and increased costs due to the expensive use of helium in production. I can't help but think HAMR is going to nearly halve the speed of reading and writing data on a 16TB HDD in comparison to an old 7200 RPM 2TB HDD.

Everything in HDD technology since 4TB hard drives hasn't really been a technological innovation; they've just been band-aids to tide us over an extra few years while we solve a bigger problem. Either somebody needs to make a new magnetic substrate to produce platters from, so density can be doubled without breaking magnetism outright, or we need to just take HDDs out back, shoot them, and wave in the new age of solid-state storage by mass producing NAND at a higher rate than magnetic platters.

I already took the first step and repurposed my old work WD Black and used the platters for some stylish coffee mats.


----------



## natr0n (Jan 5, 2016)

R-T-B said:


> The issue with the Deathstars wasn't actually the heat, it was the glass platters.  When areas were frequently accessed, the magnetic material would actually flake off the platter.  The heat generated was most likely related to its bearings and such being full of stuff they were not supposed to be by the time it began to REALLY die.



https://en.wikipedia.org/wiki/HGST_Deskstar#/media/File:IBM75GXP_Failed_Disks.png

kinda surreal


----------



## eidairaman1 (Jan 5, 2016)

Yeah, I remember reading about that; too bad I didn't jump on the bandwagon. After that fiasco I moved to WDs, then to Seagates. I have a Seagate SSHD that is OK; however, there are no tools from them to disable the power-saving properties, otherwise I wouldn't have gotten the Raptor.


----------



## R-T-B (Jan 5, 2016)

I actually owned a Deathstar.  It started reporting bad sectors during a defrag, so off the data went.  It lived long enough to save my data and get replaced with a Maxtor of the era.


----------



## eidairaman1 (Jan 5, 2016)

R-T-B said:


> I actually owned a Deathstar.  It started reporting bad sectors during a defrag, so off the data went.  It lived long enough to save my data and get replaced with a Maxtor of the era.



I had an 80GB get the click of death. DFT fixed the issue once, along with FDISK, but FDISK had detected bad sectors at the time (the start of the failure), so I decommissioned the drive.


----------



## R-T-B (Jan 5, 2016)

eidairaman1 said:


> I had an 80GB get the click of death. DFT fixed the issue once, along with FDISK, but FDISK had detected bad sectors at the time (the start of the failure), so I decommissioned the drive.



Yeah, I owned the 80GB as well.

It's one of those great failures in engineering where someone probably DID get fired for buying IBM, lol.  Wonder how they missed that in testing?


----------



## eidairaman1 (Jan 5, 2016)

R-T-B said:


> Yeah, I owned the 80GB as well.
> 
> It's one of those great failures in engineering where someone probably DID get fired for buying IBM, lol.  Wonder how they missed that in testing?



The units tested probably didn't have the failure, or weren't tested long enough.


----------



## Bill_Bright (Jan 5, 2016)

I don't think reliability of these new HAMR HDD devices will be a problem. The technologies are, or will be, sound before they are released into mass production and head out the factory door.

The problem is the technology is obsolete before it even gets a chance to head out the door. It is DOA - or at least dying on arrival.

The article's author is wrong when he says HDDs are used "_across virtually all datacenters worldwide_". That is just not true, and it is becoming less true day by day. The prices of SSDs are dropping rapidly while densities rapidly increase. Combine that with the much lower energy cost to run SSDs, the much lower cooling costs to keep them cool, and the much smaller physical footprint per disk, and more and more datacenters are migrating to SSDs - and there is no reason to suggest those trends will not continue.

Flash Drives Replace Disks at Amazon, Facebook, Dropbox
Solid-State Storage: On The Road To Datacenter Domination

I think HAMR is a great idea, but too little too late.


----------



## newtekie1 (Jan 5, 2016)

Bill_Bright said:


> The article's author is wrong when he says HDDs are used "_across virtually all datacenters worldwide_".



I don't think you could go into any data center and not find HDDs in use. So the author isn't wrong.  Primary storage may be migrating to SSDs, but there is still a lot of primary storage running on HDDs, and backup and archival storage is almost always HDD-based.  Enterprise SSDs are still too expensive per GB to entirely migrate a datacenter to SSDs.



RCoon said:


> Making HDDs above 3TB caused them to reduce rotation speeds to 5900 RPM. Making HDDs use shingled recording to reach 6 and 8TB also further reduced read and write speeds and increased costs due to the expensive use of helium in production.



The early 4TB drives were 5900RPM, but now there are 5 and 6TB drives that run at 7200RPM.  Also, SMR doesn't require helium.  The 8TB Seagate SMR drive doesn't use helium; helium is required for 8TB drives that don't use SMR.  That is also why the helium 8TB drives can run at 7200RPM, and why the 8TB Seagate SMR drive costs less than half as much as the 8TB helium drives.

As for the read/write speeds: the 8TB SMR drive has very good read speeds; it is only the write performance that is poor. And Seagate handles the write issue with a ~16GB dedicated write cache on the drive, so writing only gets really slow when you manage to fill up that write cache.
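That cache behavior is easy to picture with a toy model. The ~16GB figure is from the post above; the fast and slow rates below are illustrative guesses, not measurements of any real drive:

```python
CACHE_GB = 16      # dedicated write cache size, per the post above
FAST_MBPS = 180    # write speed while the cache absorbs writes (assumed)
SLOW_MBPS = 30     # direct shingled-rewrite speed once the cache is full (assumed)

def write_time_s(total_gb):
    """Seconds to absorb a burst of total_gb: cache-first, then spillover."""
    cached = min(total_gb, CACHE_GB)
    spilled = total_gb - cached
    return cached * 1024 / FAST_MBPS + spilled * 1024 / SLOW_MBPS

for gb in (8, 16, 64):
    eff = gb * 1024 / write_time_s(gb)
    print(f"{gb:3d} GB burst -> {eff:6.1f} MB/s effective")
# Bursts at or below the cache size sustain the fast rate; a 64 GB burst
# collapses toward the slow shingled-rewrite rate once the cache fills.
```

The exact numbers are made up, but the shape matches what SMR benchmarks tend to show: full speed until the cache fills, then a sharp drop.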


----------



## Bill_Bright (Jan 5, 2016)

newtekie1 said:


> I don't think you could go into any data center and not find HDDs in use. So the author isn't wrong.


That is not the point. I showed where some are already using SSDs (and note my first link was from 2012) and more and more are migrating to SSDs because prices are dropping, densities are increasing, and operating costs are lower. 





newtekie1 said:


> Enterprise SSDs are still too expensive per GB to entirely migrate a datacenter to SSDs.


Now hold on! You are trying to rationalize your position about the future by using today's prices for SSDs! That is not a valid argument. You are also trying to rationalize your position by talking about migrating an existing datacenter.

I admit I also said "migrating" datacenters, but the fact remains that more and more "new" data centers are being built all the time, and many are using flash technologies. It makes more financial sense for them to build new with flash than to go through the huge expense of retooling in the future.

Also, existing (aging) datacenters are using older, slower, smaller hard drives that are wearing out and need replacing. These are opportunities to migrate through natural progression to SSDs - just as is happening in the industry as a whole (see, SSD onslaught: Hard drives poised for double-digit revenue drop).

While HDs will continue to maintain their dominance over the next "few" years, SSD prices have already dropped and densities increased much faster than analysts predicted. There is no reason to suggest that trend will not continue, enticing more data centers to migrate to SSDs and, more importantly, new data centers to start out with SSDs - especially when you factor in the lower energy and cooling costs for SSDs, and the smaller floor space to house them.

If hard drives were not electro-mechanical devices - that is, if they did not have motors (two in each drive) and other moving parts - then I could see light in their future. But because they are big, clunky, slow and power-hungry devices, I say they are doomed, along with CRT monitors, cassette players, VCRs, and steam engines.

SSDs will not supplant HDs by 2018, but they will continue on their unstoppable and ever increasing march towards that end.

As a side observation, I note in that article that NAND output today (well, 2015) is already at 84EB - that's 84,000,000TB!


----------



## RejZoR (Jan 6, 2016)

Like @natr0n said, I too have a bad feeling about this. Having a laser in there to heat stuff up means another failure point. Instead of decreasing the number of failure points, we're adding to them. It doesn't make sense.

I thought HAMR was already used. Hm. What was the tech that Seagate used (I think) that increased data density but caused idiotically slow writes compared to other normal drives? Can't remember what it was called.

They said high capacity was only possible with helium-filled drives, and that was also soon proven false by drives from Seagate/WD that have the same capacity as regular "breather" drives. If possible, I'll avoid HAMR drives. If 2018 is the target, I'll be on M.2 NVMe-only drives by then...


----------



## Cybrnook2002 (Jan 6, 2016)

I wanna pull that laser out and zap stuff. Pew Pew (HDD arm plays music in the background)


----------



## Bill_Bright (Jan 6, 2016)

RejZoR said:


> Like @natr0n said, I too have a bad feeling about this. Having a laser in there to heat stuff up means another failure point. Instead of decreasing the number of failure points, we're adding to them. It doesn't make sense.


How do you figure that is adding failure points? This removes the R/W head and replaces it with a laser. That's one for one. And this laser is focused to an extremely small point of a specific distance away so I don't see extra heat as a problem - the motors are already generating enough of that.

And lasers are extremely robust and reliable - they've been used in mice for years.

And to suggest problems encountered in 2001 - 2003 with old technology drives would still be a problem today is just not realistic either.

I think it is premature to be criticizing or doubting the reliability of the technology when it is still years away from production and use.


----------



## alucasa (Jan 6, 2016)

SSD-powered hosting packages are becoming the norm: SSD shared hosting, SSD VPS, cheap dedis with SSD. They are all common now. Some companies use consumer-grade SSDs, from what I've seen, with acceptable results.

I think it's only recently that the SSD wave started firing on all cylinders. With prices fairly good at the moment, even I am considering a 500GB SSD to replace my app drive.

By 2018..., hm, I probably won't have HDDs left in my main rig. I am pretty sure my media rig will still have HDDs though. 2TB SSDs ain't cheap enough yet.


----------



## taz420nj (Jan 6, 2016)

Bill_Bright said:


> How do you figure that is adding failure points? This removes the R/W head and replaces it with a laser. That's one for one. And this laser is focused to an extremely small point of a specific distance away so I don't see extra heat as a problem - the motors are already generating enough of that.



No, it's not replacing the read/write head with a laser, it's adding a laser to preheat the substrate before the bit is written, hence the "heat ASSISTED" moniker.  It's not an optical drive. Reread the article.

I'm really curious how powerful this laser has to be in order to heat up a sector to critical temperature as it whips by at 70mph - and how fast it can shed said heat after the write is complete.  I do see these things running incredibly hot during heavy write operations.
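The dwell time turns out to be short enough that bulk heating may be less scary than it sounds. A rough order-of-magnitude sketch (the ~50 nm spot size is my assumption about the near-field transducer, not a published spec):

```python
# How long does a given spot on the platter spend under the laser spot?
MPH_TO_MPS = 0.44704
linear_speed_mps = 70 * MPH_TO_MPS      # ~31.3 m/s at the track, per the post above
spot_nm = 50                            # assumed near-field optical spot size

dwell_s = spot_nm * 1e-9 / linear_speed_mps
print(f"dwell time under the spot: {dwell_s * 1e9:.2f} ns")  # ~1.60 ns
```

So each spot is heated for only a nanosecond or two, which bounds how much energy ends up in the media per pass; how fast the spot then sheds that heat into the substrate is the harder materials question.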


----------



## RejZoR (Jan 6, 2016)

Bill_Bright said:


> How do you figure that is adding failure points? This removes the R/W head and replaces it with a laser. That's one for one. And this laser is focused to an extremely small point of a specific distance away so I don't see extra heat as a problem - the motors are already generating enough of that.
> 
> And lasers are extremely robust and reliable - they've been used in mice for years.
> 
> ...



I don't think you understand how HAMR works. Lasers DO NOT replace the traditional R/W head. Lasers just ASSIST it. It's still magnetic, not optical, data storage.
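The "assist" can be sketched numerically: heating toward the Curie point lowers the media's coercivity until the head's ordinary magnetic field can flip the bit. A toy model with made-up numbers (the linear falloff and every parameter below are illustrative, not real media physics):

```python
def coercivity_koe(temp_k, tc_k=700.0, hc_room_koe=30.0, room_k=300.0):
    """Toy linear falloff of coercivity between room temperature and Curie point."""
    if temp_k >= tc_k:
        return 0.0  # above the Curie point the material is no longer ferromagnetic
    return hc_room_koe * (tc_k - temp_k) / (tc_k - room_k)

HEAD_FIELD_KOE = 10.0  # ballpark for what a write head can deliver (assumed)

for t in (300, 450, 600, 680):
    hc = coercivity_koe(t)
    verdict = "writable" if hc < HEAD_FIELD_KOE else "too stiff"
    print(f"{t} K: Hc = {hc:5.2f} kOe -> {verdict}")
```

Cold, the high-anisotropy media is "too stiff" for the head; briefly heated, it becomes writable, and it freezes the written bit back in as it cools.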


----------



## rtwjunkie (Jan 6, 2016)

alucasa said:


> I am pretty sure my media rig will still have HDDs though. 2TB SSDs ain't cheap enough yet.



In the interest of space, I'm going to be migrating all my server media drives from 2TB HDDs to 4TB HDDs, because I simply cannot do that cheaply on SSDs yet.


----------



## Bill_Bright (Jan 6, 2016)

Okay, my mistake. I still think it's premature to bash the technology before it is even out - except for the fact that it is still a hard drive, and thus has many more parts, including moving parts, than an SSD.


----------



## alucasa (Jan 6, 2016)

rtwjunkie said:


> In the interest of space, I'm going to be migrating all my server media drives from 2TB HDDs to 4TB HDDs, because I simply cannot do that cheaply on SSDs yet.



I prefer to keep it at 2TB for now. By 2018, I think 500GB SSDs might hit the $100 mark? 1TB at maybe $180? So I am guessing 2020 for a 2TB SSD at a reasonable price... but then by 2020, 2TB drives might not cut it anymore.


----------



## newtekie1 (Jan 6, 2016)

Bill_Bright said:


> That is not the point. I showed where some are already using SSDs (and note my first link was from 2012) and more and more are migrating to SSDs because prices are dropping, densities are increasing, and operating costs are lower.



That is entirely the point. You claimed the author was wrong in saying hard drives are used in all data centers across the world.  The author is not wrong.  At this very moment, every datacenter in the world uses HDDs.  They are migrating to SSDs, but they all still use HDDs.  As time goes on more and more SSDs will be used, and less and less HDDs will be used, but right now they all use HDDs.  That is what the author stated, and the author is not wrong.



Bill_Bright said:


> Now hold on! You are trying to rationalize your position about the future by using today's prices for SSDs! That is not a valid argument. You are also trying to rationalize your position by talking about migrating an existing datacenter.



We aren't talking about the future.  My statement was that SSDs are still too expensive to migrate an existing datacenter entirely to SSDs right now, today.  So no datacenter has done it.  That is why the author's statement that all datacenters still use HDDs is not wrong.

If we are talking about the future, then HDDs still have a place in datacenters for several years.  As I pointed out, the backup and cold-storage portions of most datacenters will continue to use HDDs for years to come, simply because of the low cost/GB and the fact that performance isn't key.  The power-saving aspect of SSDs doesn't play a factor here either, as 90% of the time these HDDs will be powered down (the heads parked and the disks not spinning), so the long-term lower cost of operation for SSDs doesn't factor into backup and cold-storage applications.  So while primary storage will be switching over to SSDs, and that migration will accelerate over the next several years, you won't find a datacenter on this planet that has completely migrated to SSDs in the next few years.


----------



## taz420nj (Jan 6, 2016)

alucasa said:


> I prefer to keep it at 2TB for now. By 2018, I think 500GB SSDs might hit the $100 mark? 1TB at maybe $180? So I am guessing 2020 for a 2TB SSD at a reasonable price... but then by 2020, 2TB drives might not cut it anymore.



Same here.  Larger drives simply mean you lose more data at once when they fail.  Eggs in one basket and all that jazz.  Using large drives in a RAID array just means you're spending more than you should on parity - especially with a large array, where you need to use RAID6 and ideally have a hot spare.
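The parity overhead is easy to quantify. A minimal sketch of usable capacity for RAID6 plus a hot spare (pure capacity math; no real controller behavior is modeled):

```python
def usable_fraction(n_drives, spares=1):
    """Fraction of raw capacity left after RAID6's two parity drives and any spares."""
    data_drives = n_drives - spares - 2  # RAID6 dedicates two drives' worth to parity
    return data_drives / n_drives

for n in (6, 8, 12):
    print(f"{n:2d} drives -> {usable_fraction(n):.1%} usable")
# A small 6-drive RAID6 array with a hot spare yields only half its raw capacity;
# the overhead shrinks as the array grows, at the cost of longer rebuilds.
```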


----------



## Frick (Jan 6, 2016)

taz420nj said:


> Same here.  Larger drives simply mean you lose more data at once when they fail.  Eggs in one basket and all that jazz.  Using large drives in a RAID array just means you're spending more than you should on parity - especially with a large array, where you need to use RAID6 and ideally have a hot spare.



Which is why I'm distributing all my important stuff on USB sticks which I then misplace.

Anyway, I'm with RCoon here. It's at best a band-aid, but Big Data always needs more space, so it's necessary. Or we change form factors and reintroduce 5.25-inch drives.


----------



## alucasa (Jan 6, 2016)

taz420nj said:


> Same here.  Larger drives simply mean you lose more data at once when they fail.  Eggs in one basket and all that jazz.  Using large drives in a RAID array just means you're spending more than you should on parity - especially with a large array, where you need to use RAID6 and ideally have a hot spare.



If I were paid to build a nice HTPC, I'd go with RAID and all that jazz - with the client's money.

But as a personal system used by me and my family, I simply use labelled HDDs: four 2TB HDDs plus one OS SSD. Three HDDs serve content in organized folders, with the fourth used as a backup; the more important stuff from the three HDDs is stored on the fourth. The fourth drive is backed up once a month to a 2TB USB3 external HDD. So, data loss is mostly taken care of.

And personally, 2TB of data loss is about the max I can take. 4TB would be too much (to restore or re-download).

I dream of the day my media rig will be tiny, silent, and light. It will come in a decade, methinks.


----------



## DarthBaggins (Jan 7, 2016)

Well, we'll see how this drives prices down as the launch date comes closer; pretty interesting to see companies still driving for improvements in HDD tech.  Also, by 2018 I hope to see inexpensive 1TB+ SSDs as well.


----------



## Bill_Bright (Jan 7, 2016)

DarthBaggins said:


> Well, we'll see how this drives prices down as the launch date comes closer; pretty interesting to see companies still driving for improvements in HDD tech.  Also, by 2018 I hope to see inexpensive 1TB+ SSDs as well.


It is all relative, of course, but I think 1TB SSDs are already becoming quite reasonable - considering I paid more for my first 20MB (yes, mega) HD years ago than a 1TB SSD costs now.

And while the high-end Samsung 850 Pro 2TB is still quite pricey, it does have an amazing 10-year warranty - double the best HD warranties.

And certainly, these prices will continue to drop rapidly, and densities will continue to increase, over the next couple of years too.


----------



## ste2425 (Jan 7, 2016)

Bloody hell, I've only just upgraded from a 500 GB HDD.


----------



## DarthBaggins (Jan 7, 2016)

I still need to snag a few 256GB Pros for my RAID arrays (I still rock my three 128GB Pros).


----------

