
I added a new hard drive (/dev/sdb) to Ubuntu Server 16, ran parted /dev/sdb mklabel gpt and sudo parted /dev/sdb mkpart primary ext4 0G 1074GB. All went fine. Then I tried to mount the drive

mkdir /mnt/storage2
mount /dev/sdb1 /mnt/storage2

It resulted in

mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

I tried mount -t ext4 /dev/sdb1 /mnt/storage2 with an identical outcome. I've done this stuff many times before and have never run into anything like this. I've already read mount: wrong fs type, bad option, bad superblock on /dev/sdb on CentOS 6.0, to no avail.

fdisk output regarding the drive

Disk /dev/sdb: 1000 GiB, 1073741824000 bytes, 2097152000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 0E136427-03AF-48E2-B56B-A467E991629F

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 2097149951 2097147904 1000G Linux filesystem 
  • Hint for anyone else coming across this: run dmesg; it may give you more information on what your problem actually is. Commented Oct 1, 2019 at 15:53
  • To add to the suggestion by @WinstonEwert, run dmesg without piping it to grep. The messages I needed to see had no mention of my device, label, or mount point (other than dm-0 which I was not looking for)! Commented Sep 18, 2020 at 20:00
  • dmesg told me I had "default" in /etc/fstab, "unrecognized option", when it should have been "defaults". Commented Jul 20, 2021 at 6:03
  • If you came to this question through a search, but your problem involves NFS, see NFS does not work. mount: wrong fs type, bad option, bad superblock. Commented Jun 13, 2022 at 17:46
  • I added a new drive to the server and I accidentally swapped the SATA cables from some drives. That was the problem for me. I switched them back to the way they were and it was all fixed. Commented Aug 1, 2022 at 18:55

21 Answers


WARNING: This will wipe out the existing contents of the partition you format!


You still need to create a (new) file system (aka "format the partition"). 

Double-check that you really want to overwrite the current content of the specified partition! Replace XY accordingly, but double check that you are specifying the correct partition, e.g., sda2, sdb1:

mkfs.ext4 /dev/sdXY

parted mkpart does not create a file system. The Parted User's Manual says:

2.4.5 mkpart

Command: mkpart [part-type fs-type name] start end

    Creates a new partition, without creating a new file system on that partition.

    [Emphasis added.]
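
Putting it together for the drive in the question, the remaining steps look roughly like this. This is a sketch only: the filesystem label and the fstab entry for mounting at boot are assumptions, not part of the original question.

sudo mkfs.ext4 -L storage2 /dev/sdb1   # create the filesystem (destroys anything on sdb1)
sudo mount /dev/sdb1 /mnt/storage2     # mount it once to confirm it works
sudo blkid /dev/sdb1                   # note the UUID for a stable fstab entry
# then, optionally, add a line like this to /etc/fstab (the UUID is a placeholder):
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/storage2  ext4  defaults  0  2
sudo mount -a                          # re-check that the fstab entry mounts cleanly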

  • This will wipe out your current drive!! Commented Jan 26, 2019 at 2:03
  • @Kosta you must run the command as a superuser (i.e. with sudo). Commented May 13, 2019 at 12:54
  • @SudarP it wipes out the /dev/sdb1 device, not your current drive. Only execute it if you are sure of what you are doing (linux.die.net/man/8/mkfs.ext4). Commented Aug 2, 2019 at 7:56
  • @tremendows, rudimeier: I agree that the command is appropriate for the OP, but also agree with SudarP that a command like this, which can mess up an existing filesystem, should come with a prominent warning. Other people reading the question may have intended /dev/sdc1 or some such and could seriously regret executing this command. Commented May 23, 2020 at 12:28
  • It's important to mention that the "device" that gets wiped is only the partition that is selected, not the entire drive. So mkfs.ext4 /dev/sda4 will keep sda1-sda3 intact. So, as clarified, it will not wipe your current drive, unless you attempt to wipe it by omitting the partition number in the command, in which case a prompt will ask for confirmation. Commented May 11, 2022 at 18:13

I had this problem with /dev/sda on Ubuntu 16.04. I solved it by booting into a live USB and doing the following:

To see your disks use lsblk

If you can see your drive, that's good; run sudo fdisk -l to see if the system can use it.

Run this command to attempt to repair bad superblocks on the drive.

sudo fsck /dev/sda1 (replace /dev/sda1 with the partition you want to fix).

When it asks to repair blocks, select yes by pressing 'y'.

Allow fsck to repair all bad blocks.

Then I was able to mount the device using

sudo mount /dev/sda /media/ubuntu

This solved it for me.
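
If there are many errors to confirm, fsck can answer them for you; a sketch, with /dev/sda1 standing in for whatever partition you are repairing:

sudo fsck -fy /dev/sda1   # -f forces a full check, -y auto-answers "yes" to repair prompts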

  • Trying this, I get a permission denied error for both fdisk -l and the fsck command. Is there a workaround? Commented Apr 22, 2019 at 5:37
  • @NoVa This may be out of date, but in general "permission denied" means you must precede the command with sudo so that you have root permissions. Commented May 17, 2020 at 19:06
  • You are assuming the file system is ext. Commented Sep 3, 2020 at 9:34
  • No: clean, 1801983/9043968 files, 33913312/36160000 blocks. This only applies under certain conditions. Commented May 9, 2021 at 12:36
  • If you have a corrupted ubuntu installation and you are trying to recover it by booting from an external device, this is the answer. I was about to dump my ubuntu HD and this saved me. Thank you! Commented Oct 14, 2021 at 17:01

If dmesg logs show something like

[ 4990.686372] ntfs3: sda7: It is recommened to use chkdsk.

[ 4990.734846] ntfs3: sda7: volume is dirty and "force" flag is not set!

It means that the NTFS partition has its dirty flag set; the Linux NTFS driver mounts it read-write only if the flag is clear, while a read-only mount works irrespective of dirtiness.
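
If read-only access is enough, the dirty volume can be mounted without fixing anything. A sketch using the device from the dmesg output above; the mount point is an arbitrary example:

sudo mkdir -p /mnt/windows
sudo mount -t ntfs3 -o ro /dev/sda7 /mnt/windows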

However, if you want a read-write mount, this can be fixed by ntfsfix -d, if possible:

-d, --clear-dirty
    Clear the volume dirty flag if the volume can be fixed and mounted.  If the option is not present or the volume cannot be fixed, the  dirty volume flag is set to request a volume checking at next mount.

Running

$ ntfsfix -d /dev/sda7

Mounting volume... OK
Processing of $MFT and $MFTMirr completed successfully.
Checking the alternate boot sector... OK
NTFS volume version is 3.1.
NTFS partition /dev/sda7 was processed successfully.

fixed the problem for me.

If this fails, the next resort would be to run chkdsk on a Windows machine.

Refer to Why is NTFS has a dirty mark and why can't NTFS3 mount dirty NTFS partitions? for more details.

  • Thanks, I checked sudo dmesg and it was exactly the same as you mentioned, so ntfsfix worked for me, but using sudo ntfsfix -d /dev/sda1. Commented Oct 1, 2024 at 22:12
  • This is the information I needed. I mounted the partition as "RO" and was able to recover my files. Commented Oct 18, 2024 at 23:05
  • amazing, this was the only thing that worked! Commented Apr 3 at 10:10
  • This solution worked for me like a charm. Commented May 28 at 8:41
  • This was exactly my problem after having to force stop my pc. sudo ntfsfix fixed it. Thank you! Commented Oct 17 at 19:12

In my case, the solution was to install the NFS client utilities on the client side.

CentOS/Red Hat:

yum install nfs-utils

Ubuntu/Debian (the client-side package is nfs-common):

apt update
apt install nfs-common

Arch:

pacman -S nfs-utils
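
With the client utilities in place, a plain NFS mount should then succeed; the server name, export path, and mount point below are placeholders:

sudo mkdir -p /mnt/nfs
sudo mount -t nfs fileserver:/export/data /mnt/nfs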
  • For Arch: sudo pacman -S nfs-utils Commented Jan 5, 2021 at 13:53
  • This seems to have solved my issue. Thank you ... Commented Apr 28, 2024 at 15:53

I have a different process for this that replaces the bad superblock with one of the alternatives. fsck can be a "lossy" process, because it may want to remove too much data or remove data from a sensitive location (e.g. the data directory of a database), so there are times when I don't want to use it or it doesn't work.

You can sudo yourself silly or just become root for the process. Just remember that when you are root, Linux assumes that you know what you're doing when you issue commands. If so directed, it will speedily deliver Mr. Bullet to Mr. Foot. Like many other things, with great power comes great responsibility. That concludes my warning on running your system as root.

sudo -s

fdisk -l

Figure out which device you need; this explanation assumes /dev/sdc1 and ext4, as it's the most common.

fsck -N /dev/sdc1

Your device and your file system (ZFS, UFS, XFS, etc.) may vary so know what you have first. Do not assume it's EXT4. Ignoring this step can cause you problems later if it's NOT an EXT4 file system.

fsck.ext4 -v /dev/sdc1

Get your error message which says the superblock is bad. You don't want to do this if your superblock is OK.

mke2fs -n /dev/sdc1

This will output the alternate superblocks stored on your partition.

Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Pick an alternate superblock; keep in mind that the first one is the default and it's bad, so let's not use that one. You will also want to pick one from the list you get from your own partition; do not use the example above, as your superblocks may be stored elsewhere.

e2fsck -b 98304 /dev/sdc1

Reboot and see if this worked. If not, try the next superblock on the list. I've had to go to the third or fourth one a couple of times.

e2fsck -b 163840 /dev/sdc1

Now try the command to validate the disk again. See if you get the same message about bad superblocks.

fsck.ext4 -v /dev/sdc1

Keep trying until you either run out of superblocks or it works. If you run out, you likely have bigger issues and I hope you have good backups. You can try running FSCK at that point.
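
As an aside, dumpe2fs gives a read-only way to list the backup superblock locations without re-running mke2fs; a sketch, assuming an ext2/3/4 filesystem on /dev/sdc1:

sudo dumpe2fs /dev/sdc1 | grep -i superblock   # prints the primary and backup superblock locations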

  • For me this did not work, since my partition table got corrupted. I dealt with it using testdisk; see my answer to a related question: unix.stackexchange.com/a/747574/459266. Commented May 30, 2023 at 15:28

For me the problem was that the mount command contained options (umask, uid) which were not accepted, because the device has an ext4 file system (those options apply to filesystems such as FAT or NTFS that have no native Unix ownership):

sudo mount /dev/sda1 /media/ssd -o uid=pi,gid=pi
mount: /media/ssd: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error.
sudo mount /dev/sda1 /media/ssd -o umask=000
mount: /media/ssd: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error.

After removing the options, it worked

sudo mount /dev/sda1 /media/ssd
df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda1      ext4      220G   61M  208G   1% /media/ssd

After mounting, the ownership or permissions can be set so that non-root users have write access.

sudo chown -R pi:pi /media/ssd
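
For contrast, uid/gid/umask options are accepted on filesystems without native Unix ownership, e.g. a FAT-formatted USB stick; a sketch, where the device, user, and mount point are examples:

sudo mount -o uid=pi,gid=pi,umask=022 /dev/sdb1 /media/usb   # fine for vfat/exfat, rejected for ext4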
  • I had the same issue after upgrading from Ubuntu 16 to Ubuntu 20. Removing the option nobarrier from /etc/fstab on an XFS partition solved it. Commented Jul 6, 2021 at 9:46
  • Thanks! I would've never figured it out. I just copy-pasted the line from an exFAT, but for an ext4, so the user setting was not compatible. Commented Mar 9, 2023 at 19:24

My HD showed up in GParted as unallocated, but it wasn't showing up in Files (the same thing happened in Windows Disk Management and File Explorer).

I used testdisk to recreate the partition table and I was able to recover my HD and all the data without formatting it.

  1. In the terminal enter testdisk
  2. Then choose create a log
  3. Then select the HD and choose the partition type (most probably Intel)
  4. Then select Analyse
  5. If you only have 1 partition in your HD, then just mark the one that shows up as Bootable partition (with the left-right arrow). Then press "Enter" to continue
  6. If you have more, you should now do a "deeper search"; otherwise choose Write

Unplug your HD and plug it in again (this is what they mean, in this case, by "reboot")
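
testdisk is an interactive tool; installing it and pointing it at a specific disk looks like this (Debian/Ubuntu package name shown; adjust for your distribution, and replace /dev/sdb with the affected disk):

sudo apt install testdisk
sudo testdisk /dev/sdb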

  • For me it also worked with testdisk to solve wrong fs type, bad option, bad superblock on /dev/nvme0n1, as in my answer here: unix.stackexchange.com/a/747574/459266. Commented May 30, 2023 at 15:22
  • This worked for me! I followed step 1 to 4. For step 5, I just kept the default, and for step 6 I didn't bother with a deeper search despite having 3 partitions. Probably not the wisest, but I forgot what partitions/filesystem the drive had, so I opted to gamble it. Upon exiting testdisk, GNOME Disks showed individual partitions, I was able to mount the disk, and successfully recover all data. Commented Jul 10, 2023 at 21:38

I had to install ntfs-3g, as it was an NTFS-formatted partition and Linux kernel versions before 5.15 didn't support NTFS out of the box. NTFS-3G is a FUSE filesystem (Filesystem in Userspace).

For me, the message appeared in Dolphin. After installing ntfs-3g, Dolphin could mount the partition properly.
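
On older kernels, the installation and a manual mount look roughly like this; the package name is for Debian/Ubuntu, and the device and mount point are examples:

sudo apt install ntfs-3g
sudo mkdir -p /mnt/windows
sudo mount -t ntfs-3g /dev/sdb1 /mnt/windows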

  • Me too. It solved my issue, and GParted guided me. Commented Aug 11, 2023 at 11:12

Unbelievable. I see this come up all the time. I could mount manually with the mount command, no problem, but when I tried mount -a I got the same bad fs type error. I looked at /etc/fstab carefully and saw I had spelled it default, not defaults. It's the spelling that gets me.
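
For reference, a working fstab entry uses defaults (plural), something along these lines; the UUID and mount point are placeholders:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/data  ext4  defaults  0  2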


To get a usable volume out of a new disk, you have to carry out the following steps:

  1. Create a partition table on the disk, aka a disklabel. There are two possible formats: msdos or gpt.
  2. Create as many partitions as desired from the disk space. Each partition is essentially defined by begin/end addresses and a partition type. The filesystem type is definitely not a partition attribute, although some commands like parted may use it to set the partition type. Each partition is treated as a block device on the system and is accessible through a device file like /dev/sdb1.
  3. Create a filesystem on the desired partition, which actually means organizing the partition's raw space in a way suitable for storing files/folders/links, etc., i.e. the file hierarchy. The filesystem attributes are stored in the superblock, which is generally one disk block at the very beginning of the partition space.

Given this short introduction, you'll notice that you still have to go through the third step. You may use mkfs.ext4 /dev/sdb1 for this.
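
Putting the three steps together for a brand-new disk, a sketch re-using the device and mount point from the question (double-check the device name before running any of it):

sudo parted /dev/sdb mklabel gpt                   # step 1: disklabel
sudo parted /dev/sdb mkpart primary ext4 0% 100%   # step 2: one partition spanning the disk
sudo mkfs.ext4 /dev/sdb1                           # step 3: the filesystem
sudo mount /dev/sdb1 /mnt/storage2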

  • From the question, it looks like the OP already did your steps 1 and 2. If you believe that there’s an error in the commands he used, say so. And rudimeier’s answer (which is accepted and has nearly 200 votes) already covers your step 3.  Please don’t post duplicate answers. Do you have any new information or advice to contribute? Commented Jun 13, 2022 at 17:24
  • The question clearly states that the disk is new, so there is no need to warn twice in bold text that the disk will be wiped or its content overwritten! If editing others' solutions were possible, I would have suggested modifications. I also wanted to add a summary of the full procedure for how to make usable space out of a new disk. Nothing new. BTW, would you please point me to some resources on why the SE bot put this question on top (no filtering)? Thanks. Commented Jun 14, 2022 at 13:49
  • If this question floated to the top of the list on the "front page", it's probably because JingChen posted a new answer. (You know; the one you commented on.) Commented Jun 14, 2022 at 21:17

I know this is an old question, but in case this helps anyone who stumbles across it:

I was attempting an NFS mount, but it was failing and giving the same error as the OP.

After installing the dependencies I was good to go:

sudo apt install nfs-common


In my case I wasn't paying attention, and was trying to mount /dev/sda /mnt instead of mount /dev/sda1 /mnt. Be careful out there!


If you are trying to mount an image file onto a folder under /home in Ubuntu, then try these commands:

Generalized:

mkdir <name of your directory>
sudo mount -o ro <name of image file with extension> <name of your directory>/

Specific example: name of your directory = system, name of image file with extension = system.img

mkdir system
sudo mount -o ro system.img system/

In case you want to mount a disk in Linux Mint, here is how my secondary disk looks in the disk-management tool: [screenshot: Linux Mint disk management]

You just need to create a partition from the menu; most of it is done automatically, the mount options come automatically, and the disk (here, an extended partition) is mounted under /media/.


I got the same error on boot, and it was due to a wrong filesystem entry in /etc/fstab.

The entry was

/dev/mapper/vgso-lvso /so xfs defaults 0 0 

and it should have been ext4:

/dev/mapper/vgso-lvso /so ext4 defaults 0 0 
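
Before editing /etc/fstab, it is worth checking what filesystem is actually on the device; lsblk and blkid both report it (the device name below is from the example above):

sudo blkid /dev/mapper/vgso-lvso
lsblk -f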
  • This is the gotcha, op also states the wrong file system type. Commented May 9, 2021 at 13:13

In my case, a simple 'Repair Filesystem' option from 'Disks' utility solved the problem.

For totally new Linux users, the GUI method (on GNOME) is as follows:

Open the 'Disks' utility -> select your disk on the left-hand side -> click the wheel (gear) icon on the right-hand side -> select 'Repair Filesystem'

[Screenshot: Disks utility, 'Repair Filesystem' option]

  • This solved it for me too. It was an NTFS filesystem originally created in windows. Commented Jan 3 at 14:00
  • Mine was also NTFS but created on Linux. This worked for me. Commented Jan 13 at 0:34

For me, there was some mysterious file causing this issue.

I had to recreate the filesystem on the device using the following command:

sudo mkfs -t ext3 /dev/sdf

Warning: this will delete any files saved on the device, so mount it and run ls first to make sure you don't lose important saved files, and back them up before execution.


In my case, changing the SATA cable powering both drives and restarting seems to have fixed the problem. However, I was unable to mount them on the same old mount points, so changing those fixed it for me. Then I manually removed the old mount points from the /mnt directory. :)

  • Welcome to the site, and thank you for your contribution. However, if you say you were "unable to mount them on the same old point" - what exactly do you mean by that? That you could no longer use the mount point you used previously to mount the drives, although you only exchanged the SATA cable? Commented Oct 4, 2021 at 8:36
  • Yes. Before, it was sda1, and somehow I was unable to mount on it, even after removing that mount from the OS itself. Then I used a different mount name and it worked. I mean to say I changed the 15-pin SATA power cable and the SATA 3 data cables. Commented Oct 7, 2021 at 5:43

In case this helps anyone: I was getting this issue for a different reason. I was trying to mount my entire nvme0n1 drive, which held my dual-boot system! Thus mount was complaining, because I was trying to mount some Windows partitions (NTFS) onto my live USB (ext4), causing errors visible in dmesg.

The solution was to check which partition held my Linux install specifically via sudo fdisk -l /dev/nvme0n1, then mount that one with sudo mount /dev/nvme0n1p7 /mnt.


Make sure that the filesystem on the partition matches the filesystem in fstab. Everything has an "x" in it, so my dyslexic brain confused ext4 with xfs, because I was following a recipe that used ext4 but the drive I was replacing was formatted as xfs. (The examples below work for Rocky 9.)

If you are replacing a drive and /etc/fstab contains xfs that looks like this:

/dev/disk/by-id/nvme-Micron_9300_MTFDHAL15T3TDP_221837870467-part1      /data/rt/0      xfs     defaults        0 0

then reformat the new partition using

sudo mkfs.xfs /dev/disk/by-id/nvme-Micron_9300_MTFDHAL15T3TDP_221837870467-part1

then

sudo systemctl daemon-reload
sudo mount -a

Note: you don't want to use names like /dev/nvme1n1p1, as these numbers can and do change with every reboot, because it is a hardware race to number the drives. Use the udevadm info ... command to learn the /dev/disk/by-id/* alternative names, which will not change with every reboot.
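
One way to find the stable names (a sketch; the device name is an example):

ls -l /dev/disk/by-id/ | grep nvme                    # shows which by-id symlinks point at which nvme devices
udevadm info --query=symlink --name=/dev/nvme1n1p1    # lists all persistent names for one partition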


If need be, identify the failing partition with lsblk -f, then

sudo ntfsfix NAME_OF_FAILING_PARTITION

If it still fails, add the -d option, i.e.

sudo ntfsfix -d NAME_OF_FAILING_PARTITION

This worked for me on Manjaro with ntfsfix v2022.10.3 (libntfs-3g).

  • This question is not about NTFS. Commented Jun 8, 2024 at 20:58
