# SOLVED - A bit of an issue with 2nd HDD becoming read only in Ubuntu



## HTC (May 26, 2019)

Tried "ducking" and got lots of suggestions, but nothing seems to fix this, which means I'm sitting on a 10 TB HDD with about 4.8 TB unused that I can't re-format: I don't have the space on my other HDDs for what's on it, but I also can't write anything else to it. Tried changing the permissions, but nothing sticks and I keep getting "read only" errors.

To note: the partition is Ext4.

I've noticed this behavior with USB sticks as well, since they too become read-only, but, unlike with the HDD, I have the space on my HDDs to store what I need from the USB stick and re-format it, so I use that as a work-around.

EDIT

Some more info: somehow, when the HDD gives the read-only error, it becomes unmounted.

A bit later, the HDD unmounted itself while I was attempting to change permissions: why the hell does that happen?
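In case it helps, this is a quick way to confirm whether the kernel really has the filesystem mounted read-only (findmnt ships with util-linux; the path is my mount point, so substitute your own):

```
# Ask the kernel how the filesystem is currently mounted; the first
# option in the output is "rw" or "ro". The path is my mount point,
# with a fallback to / just to show the output shape.
findmnt -no OPTIONS "/media/htc/High storage" 2>/dev/null \
  || findmnt -no OPTIONS /
```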


----------



## Russ64 (May 26, 2019)

Have you checked in AskUbuntu?  Found this:
https://askubuntu.com/questions/195730/read-only-filesystem


----------



## HTC (May 26, 2019)

Russ64 said:


> Have you checked in AskUbuntu?  Found this:
> https://askubuntu.com/questions/195730/read-only-filesystem



I had not: thanks for the info.

Since it's not the main HDD, it's not mounted, so I tried the following:




How should I proceed?

EDIT

Added info to the OP, regarding the fact the HDD gets unmounted by itself.


----------



## Aquinus (May 26, 2019)

Depending on how your fstab is set up, it's not uncommon for a partition to be marked to remount as read-only on error. It's also possible that the drive is failing. Just for reference:


If it's saying /dev/sdb can't be found, that would mean the block device is just flat out missing.

What does running "ls /dev/sd*" look like? For me it's something like this:

```
jdoane@Kratos:~$ ls -l /dev/sd*
brw-rw---- 1 root disk 8,   0 May 26 05:37 /dev/sda
brw-rw---- 1 root disk 8,  16 May 26 05:37 /dev/sdb
brw-rw---- 1 root disk 8,  32 May 26 05:37 /dev/sdc
brw-rw---- 1 root disk 8,  48 May 26 05:37 /dev/sdd
brw-rw---- 1 root disk 8,  64 May 26 05:37 /dev/sde
brw-rw---- 1 root disk 8,  80 May 26 05:37 /dev/sdf
brw-rw---- 1 root disk 8,  96 May 26 05:37 /dev/sdg
brw-rw---- 1 root disk 8,  97 May 26 05:37 /dev/sdg1
brw-rw---- 1 root disk 8, 112 May 26 05:37 /dev/sdh
brw-rw---- 1 root disk 8, 113 May 26 05:37 /dev/sdh1
brw-rw---- 1 root disk 8, 128 May 26 05:37 /dev/sdi
brw-rw---- 1 root disk 8, 129 May 26 05:37 /dev/sdi1
```


----------



## HTC (May 26, 2019)

Aquinus said:


> Depending on how your fstab is setup, it's not necessarily uncommon for a partition to be marked to remount as read-only on error. I guess it's possible that the drive is failing. Just for reference:
> 
> If it's saying /dev/sdb can't be found, that would mean the block device is just flat out missing.
> ...


I didn't use fstab via terminal (don't know how). The drive is about 10 days old: a bit too soon to be failing, no?

As for your question, it looks like this:



How do I fix this?


----------



## Aquinus (May 26, 2019)

HTC said:


> I didn't use fstab via terminal (don't know how). The drive is about 10 days old: a bit too soon to be failing, no?


Drive failures tend to cluster at two particular times: soon after the drive is brand new, and after several years of use. Two of the four WD Blacks I have in my tower died within a couple of days of getting them (damn near lost all my data on my RAID 5), so it wouldn't surprise me if it's failing early, to be honest. SMART should be able to tell you if that's the case.


HTC said:


> I didn't use fstab via terminal (don't know how).


So, fstab is a file at /etc/fstab; it stands for "file system table," and it describes your mount points at boot. I just opened it in vim, and that's how my vim setup looks (I use vim for development.)

I'll give you a quick crash course on disks in Linux, but the short version is that ext4 doesn't know what to do with sdb because that's not a partition: that's the entire disk.

So, you have sda, sda1, and sdb.
sda is the first disk in your system and sda1 is the first partition of the disk sda. This is likely your boot disk, and sda1 is likely your root partition.

sdb is the second disk. Since there are no numbered partitions, there could be a couple of things going on:

- The ext4 file system was created directly on the block device and not on a partition (a little weird, but completely valid; you would have had to go out of your way to do this.)
- The ext4 file system was put onto an LVM volume using sdb. If this is the case, you should find something in "ls -l /dev/mapper" other than just "control". This is also weird, because typically LVM gets put onto a partition, much like ext4.
- The partition simply doesn't exist.
- The drive is dying.

I would see what is in /dev/mapper first. I have things in here for dmraid for my RAID-0 and 5:

```
jdoane@Kratos:~$ ls -l /dev/mapper
total 0
crw------- 1 root root  10, 236 May 26 05:37 control
brw-rw---- 1 root disk 253,   0 May 26 05:37 isw_cfaabaebjb_HDD
lrwxrwxrwx 1 root root        7 May 26 05:37 isw_cfaabaebjb_HDD1 -> ../dm-4
brw-rw---- 1 root disk 253,   1 May 26 05:37 isw_feffbdib_SSD
lrwxrwxrwx 1 root root        7 May 26 05:37 isw_feffbdib_SSD1 -> ../dm-2
lrwxrwxrwx 1 root root        7 May 26 05:37 isw_feffbdib_SSD2 -> ../dm-3
```

If there is nothing there, I would see what the partition table of /dev/sdb looks like with fdisk:

```
jdoane@Kratos:~$ sudo fdisk -l /dev/dm-0
Disk /dev/dm-0: 2.6 TiB, 2850567094272 bytes, 5567513856 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 196608 bytes
Disklabel type: gpt
Disk identifier: A5F066E9-57BB-4C63-BA6C-76DD37D432FF

Device                          Start        End    Sectors  Size Type
/dev/mapper/isw_cfaabaebjb_HDD1   384 5567513471 5567513088  2.6T Linux filesystem
jdoane@Kratos:~$ sudo fdisk -l /dev/dm-1
Disk /dev/dm-1: 212.4 GiB, 228061347840 bytes, 445432320 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 32768 bytes
Disklabel type: gpt
Disk identifier: 88041F70-AB31-4AD9-88D2-E59B104D8D9A

Device                         Start       End   Sectors   Size Type
/dev/mapper/isw_feffbdib_SSD1   2048    194559    192512    94M EFI System
/dev/mapper/isw_feffbdib_SSD2 194560 445431807 445237248 212.3G Linux filesystem
jdoane@Kratos:~$ sudo fdisk -l /dev/nvme0
nvme0      nvme0n1    nvme0n1p1 
jdoane@Kratos:~$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 477 GiB, 512110190592 bytes, 1000215216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A80068C3-BCCF-4880-B661-E01D693D2630

Device         Start        End    Sectors  Size Type
/dev/nvme0n1p1  2048 1000214527 1000212480  477G Linux filesystem
```
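One more quick probe for the first possibility (ext4 written straight onto the raw device): blkid reports a filesystem type on the device node itself when there is no partition table. Sketched here against a small image file instead of a real disk, so it's safe to try:

```
# Stand-in for the disk: a 16 MB image file.
truncate -s 16M /tmp/fakedisk.img
# Put ext4 straight onto the "device" -- no partition table, like sdb.
mkfs.ext4 -q -F /tmp/fakedisk.img
# blkid then reports the type on the device itself, not on a partition:
blkid -o value -s TYPE /tmp/fakedisk.img
```

On a real system the equivalent would be "sudo blkid /dev/sdb"; seeing TYPE="ext4" reported for /dev/sdb itself, with no sdb1, matches that first case.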

Edit: You might want to check SMART on that drive as well:

```
sudo smartctl --all /dev/sdb
```


----------



## HTC (May 26, 2019)

One of the three commands worked but the other two didn't. Got this:


----------



## blobster21 (May 26, 2019)

Is it possible to have the output of the following commands please:


```
sudo lsblk

sudo nano /etc/fstab
```


----------



## HTC (May 26, 2019)

blobster21 said:


> Is it possible to have the output of the following commands please:
> 
> 
> ```
> ...



Sure:

 

Do any of the screenshots help sort this issue?


----------



## blobster21 (May 26, 2019)

And what does gparted say about this /dev/sdb device?

Is there a partition table detected at all?

I'm not familiar with automatic disk mounting, as I have always mounted partitions manually or through fstab instead, so I'm quite surprised to see your 10 TB drive mapped on /media/htc/High storage.

I didn't even know it was possible. The way I see things, as long as a drive doesn't have 1) a partition table (GPT, msdos), 2) at least one partition, and 3) a coherent file system (ext4), you will get nothing out of it.

The fact that lsblk shows your /dev/sdb device as a disk without partitions leads me to say you have not created a partition inside it, nor formatted it.

Am I right?


----------



## HTC (May 26, 2019)

blobster21 said:


> and what does gparted say about this /dev/sdb device ?
> 
> Is there a file allocation table detected at all ?
> 
> ...



Gparted info:



I thought I had created a partition with Ext4 on this drive, but now I'm not so sure, considering the issues I'm having with it.

As can be seen, the drive currently has a bit over 50% of its capacity used.

EDIT

When I press "Create Partition Table", I get this (drive mounted):



But then there's this:



No partition table, apparently. What happens to the data on the drive if I create one now? It erases everything, so ... how do I fix this?


----------



## blobster21 (May 26, 2019)

Thanks for confirming this.

Can you show us the ext4 reserved file system space on this device using the following command:


```
sudo tune2fs -l /dev/sdb | grep 'Reserved block count'
```

just to make sure something didn't go wrong.


----------



## HTC (May 26, 2019)

blobster21 said:


> Thanks for confirming this.
> 
> Can you show us the EXT4 reserved file system space on this device using the following command :
> 
> ...


----------



## blobster21 (May 26, 2019)

Typically around 5% of the drive space is reserved for the root user and system services. Since it's a non-critical drive (read: it's not the boot drive), you can always reclaim this space with the following command:


```
sudo tune2fs -m 0 /dev/sdb
```

You will reclaim 500 GB immediately.
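Back-of-the-envelope, assuming the default 5% reserve on the full 10 TB (decimal, as drives are marketed):

```
# Default ext4 reserve is 5% of the filesystem; on 10 TB that's ~500 GB.
disk_bytes=10000000000000
reserved_bytes=$(( disk_bytes * 5 / 100 ))
echo "$reserved_bytes"   # 500000000000 bytes, i.e. 500 GB
```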

Still, it doesn't explain why you can't use the rest of the allocated space; it's like there's a physical limitation past 5 TB.

I know it's more of a workaround, but if you are willing to try recovering the remaining 50% of disk space, you could resize the partition using gparted, then create a second partition in the unallocated space, and mount it side by side with the existing High storage partition.

Sorry, i can't come up with a better idea right now.


----------



## HTC (May 26, 2019)

blobster21 said:


> *I know it's more like a workaround, but if you are willing to try recovering the remaining 50% disk space, you could resize the drive using gparted, then create a second partition into the unallocated space, and mount it side by side the existing High storage partition.*
> 
> Sorry, i can't come up with a better idea right now.



Figured I'd try that, but I got an error when doing so and the operation was aborted.

I think there may be a cable problem because, after doing that with GParted, I tried copying one of the smaller folders from that drive (just over 200 GB) and got an error when the drive unmounted itself after around 12 GB copied.


----------



## Aquinus (May 26, 2019)

HTC said:


> But then there's this:
> 
> 
> 
> ...


This means that you formatted the entire disk as an ext4 filesystem without creating a partition table. This is a valid and usable setup, but it's definitely an atypical way of setting up a disk in Linux. If you want to fix this, you need to back up the drive (if necessary), nuke the entire drive, create a new partition table (which is currently missing, since you formatted the entire disk as if it were a partition), and format a new first partition as ext4.

With that said, there is no reason why you can't operate it like this. Even gparted says that the partition is mounted (auto-mounted by Ubuntu, probably.)

Your fstab looks a little weird. The last line probably shouldn't be getting mounted before root (it should have pass equal to something like 2 instead), but I don't even know if that disk is your 10 TB (it doesn't look like it, so I don't really know what it is.)

The bottom line is that the disk appears to be okay; you just set it up in an unusual way, and there is no reason why you can't use it like that. If it's claiming that the disk is read-only, it's possible that it's getting mounted with "errors=remount-ro", which means the disk will remount as read-only *if there is a problem*.
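For reference, a typical fstab entry for a non-root data disk looks something like this (the UUID and mount point here are placeholders, not your actual values):

```
# <device>                                  <mount point>       <fs>  <options>                    <dump> <pass>
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/htc/storage  ext4  defaults,errors=remount-ro   0      2
```

The "errors=remount-ro" option is the usual culprit for a data disk suddenly going read-only, and pass 2 makes it get checked after root rather than before.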

I highly suggest posting the output of running "sudo smartctl --all /dev/sdb". That will tell you the SMART stats of the drive. More often than not, a spinning disk remounting as read-only means the disk is on the fritz.


----------



## HTC (May 26, 2019)

Aquinus said:


> *This means that you formatted the entire disk as an ext4 filesystem without creating a partition table.* This is a valid and usable setup, but it's definitely an atypical way of setting a disk up in Linux. If you want to fix this, you need to backup the drive (if necessary,) and nuke the entire drive, create a new partition table (which is currently missing since you formatted the entire disk as if it were a partition,) and format a new first partition as ext4.
> 
> With that said, there is no reason why you can't operate this like. Even gparted says that the partition is mounted (auto mounted by Ubuntu, probably.)
> 
> ...



That means I still have a loooooooooooong way to go as far as learning Linux is concerned: I had no idea I was doing it wrong.

I have 3 disks in my PC right now: a 250 GB NVMe drive, which is the OS drive, a 6 TB HDD which currently holds just over 3.5 TB, and that 10 TB HDD, which is around 10 days old. Dunno how to correct any errors that may be present in that fstab thing.

That command doesn't work:


----------



## blobster21 (May 26, 2019)

Then you will have to install it:


```
sudo apt install smartmontools
```

and let us have a look at the drive health.


----------



## Aquinus (May 26, 2019)

HTC said:


> That command doesn't work:


Do what @blobster21 suggested: "sudo apt install smartmontools"
You need to install the tool in order to use it. I'm actually surprised that Ubuntu didn't tell you which package you needed to install; it does for a lot of other commands that aren't installed by default.



HTC said:


> had no idea i was doing it wrong.


You weren't doing it wrong; it's just atypical to do it that way. There have been rare situations where I've done this before: in particular, when I used to run a gateway/NAS/VM box, my software RAID-5 was set up that way. There was no real reason to, I just did it because why not.


----------



## HTC (May 26, 2019)

blobster21 said:


> then you will have to install it :
> 
> 
> ```
> ...



 

Something is seriously wrong here: when I first mounted the drive in order to run that smartctl command, the HDD unmounted itself while I was typing my password in the terminal. It did manage to perform the command on the 2nd attempt, after which it unmounted itself again. Drive failing?

How does the output of the test look?


----------



## Aquinus (May 26, 2019)

HTC said:


> Something is seriously wrong here: when i 1st mounted the drive in order to run that smartctl command, and while i was typing my password in the terminal, the HDD unmounted itself, but it did manage to perform the command on the 2nd attempt, after which it unmounted itself again. Drive failing?
> 
> How does the output of the test look?


That's actually just the logs. It even says it hasn't run a self-test. Try running this: "sudo smartctl -t short /dev/sdb" and give it some time. It will eventually finish and will be logged below the statistics table when you run "sudo smartctl --all /dev/sdb" at some point in the future (maybe give it about 5-10 minutes to run, I think it normally only takes 2 minutes-ish though.)

This looks relatively normal for a Seagate drive, however the UDMA errors *might* suggest that there is an issue with the SATA cable. Perhaps the cable is old, defective, or not fully plugged in on either end. The high read error rates are not uncommon for Seagate drives, so I wouldn't worry about that too much and since it's not freaking out about bad sectors, the drive is likely fine.

For the future as a safety measure, I've learned that it's wise to do something like writing all zeros to the disk before really using it for anything important. A stress test like that usually turns up errors if there are any.

Edit: FWIW, a "short" test might not pick up errors. A longer test might be required but I think the extended one takes almost two hours to run.


----------



## HTC (May 26, 2019)

Aquinus said:


> That's actually just the logs. It even says it hasn't run a self-test. Try running this: "sudo smartctl -t short /dev/sdb" and give it some time. It will eventually finish and will be logged below the statistics table when you run "sudo smartctl --all /dev/sdb" at some point in the future (maybe give it about 5-10 minutes to run, I think it normally only takes 2 minutes-ish though.)
> 
> This looks relatively normal for a Seagate drive, however the UDMA errors *might* suggest that there is an issue with the SATA cable. Perhaps the cable is old, defective, or not fully plugged in on either end. The high read error rates are not uncommon for Seagate drives, so I wouldn't worry about that too much and since it's not freaking out about bad sectors, the drive is likely fine.
> 
> ...



Couldn't get the test to run because the HDD kept unmounting itself, so I swapped the data cable for this HDD and now it's running the short test: I'll edit this post with the results after it finishes.

It's been a few minutes already and it hasn't unmounted itself yet, which I take as a good sign.

How to write all zeros to a drive? Never done such a thing: not even in Windows, back when I used it.

EDIT

Over half an hour later and it still isn't done, according to this:



Even though this says to wait 1 minute:



The good news is that it still hasn't unmounted itself.


----------



## Aquinus (May 26, 2019)

HTC said:


> How to write all zeros to a drive? Never done such a thing: not even in Windows, back when i used it.


You could do something like "sudo dd if=/dev/zero of=/dev/sdb" but...
*DO NOT RUN THIS UNLESS YOU WANT TO LOSE ALL OF YOUR DATA ON THAT DISK*.
(It's like doing a full format.)
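If you ever want to see what that does without risking a disk, the same command pointed at a throwaway file is harmless:

```
# Same idea on a scratch file instead of a block device:
dd if=/dev/zero of=/tmp/zerotest.bin bs=1M count=4 status=none
# The file is now exactly 4 MiB of zero bytes:
stat -c %s /tmp/zerotest.bin
```

On a real disk you'd typically add "bs=1M status=progress" (GNU dd syntax) so it doesn't grind away silently for hours.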


----------



## HTC (May 27, 2019)

I ended up cancelling the test, since according to GParted it STILL hadn't finished, so I used GParted instead of the terminal and got this:



Also, I'm happy to report the drive is no longer read-only, as I've successfully copied files to it.

I'll run the extended version of the test and leave it running before going to work later today: I'll post those results after returning from work, assuming it has completed by then.

I've edited this topic's title to reflect the current situation.

I'd like to offer a big thanks to all those who helped me sort out this issue!

EDIT

This post was merged with the previous one i had posted this morning. I've added the "EDIT" as well as this paragraph to separate them.

Left the extended test running before going to work.

Just arrived, almost *9 hours later*, and it still has 90% left to do?



This normal?


----------



## HTC (May 28, 2019)

I ended up cancelling that test and started another one earlier this morning: this time, it was actually performing the test instead of getting stuck at "90% remaining".

It still took a lot of time, and there was still around 20% left when I went to work.

Got home a few minutes ago and the test had finished. Here's a pic with the results:



Hopefully, everything is in order.


----------

