So, I have a BTRFS partition that fails to mount after I resized it. Resizing it back didn’t help; it still fails to mount. I should also mention that the computer in question has three hard drives: two BTRFS drives that were converted from ext4 (one of them is the drive in question), and an SSD that contains the OS. I was wondering whether there is a way to mount the broken volume so I can copy its data to my working BTRFS partition, then format the broken partition and add it to a RAID array. I did try e-mailing the BTRFS mailing list address given in the wiki, but got an automatic response saying the message couldn’t be delivered. (I sent it through Gmail in my web browser.) I also tried copying the data to the other drive using btrfs rescue, but it couldn’t recover anything. Oddly, if I view the drive in GParted (while it’s unmounted, since it doesn’t mount anyway), it does show how much space is used and how much is free.
So, with this out of the way, here’s some information that I believe may be useful if recovery is possible:
Line from /etc/fstab of the volume that doesn’t mount:
UUID=44140ddb-1d65-40c2-bc87-6f0c4eea6f8e /nasa16 btrfs ro,rescue=ignoredatacsums,compress=zstd:8,noatime 0 0
I normally keep the line for the failing volume commented out with a # in /etc/fstab; otherwise my Linux install boots into emergency mode when the mount fails. I removed the # while running the commands below.
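(Side note: instead of commenting the line out, I believe adding nofail to the mount options tells systemd not to treat the failed mount as fatal, so the system finishes booting anyway. A sketch of the modified line, if that’s easier to work with:)

```
UUID=44140ddb-1d65-40c2-bc87-6f0c4eea6f8e /nasa16 btrfs ro,rescue=ignoredatacsums,compress=zstd:8,noatime,nofail 0 0
```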
Line from /etc/fstab of the working BTRFS volume that does mount:
UUID=a278e706-b44a-43b8-bfb0-631f45bfa3f6 /nasa btrfs autodefrag,compress=zstd:8,noatime 0 0
For reference, /dev/sdc1 is the working BTRFS volume and /dev/sda1 is the one that fails to mount. Both drives were converted from ext4 to BTRFS using the following command (filesystem integrity was checked before running the conversion; [] stands in for the drive letter):
btrfs-convert -d /dev/sd[]1
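(One thing I’ve read about btrfs-convert: it keeps an image of the original ext4 filesystem in an ext2_saved subvolume, and as long as that subvolume was never deleted, the conversion can in principle be rolled back. I doubt that still works here, since the rollback has to read the damaged filesystem, and resizing after conversion may have invalidated the saved image, but for the record the commands would look roughly like this:)

```shell
# Check whether the rollback image still exists
# (requires the filesystem to mount, which currently fails)
btrfs subvolume list /nasa16

# Roll back to the original ext4 filesystem; this DISCARDS all
# changes made since the conversion and only works while the
# ext2_saved subvolume is intact
btrfs-convert -r /dev/sda1
```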
Now, here’s the output of the commands the BTRFS mailing list asks you to provide when requesting support. I wrote the following script to run them, but added a mount command before dmesg so that any useful info shows up in dmesg.
#!/bin/bash
uname -a &>'uname.txt'
btrfs --version &>'btrfs-version.txt'
btrfs fi show &>'btrfs-fi-show.txt'
btrfs fi df /dev/sda1 &>'btrfs-fi-df.txt'
mount '/dev/sda1' &> 'mount-log.log'
dmesg > 'dmesg.log'
Here’s the output of each file generated by the script. (dmesg.log was cut short because the intel_powerclamp lines at the beginning of the output kept repeating over and over.)
uname.txt (I know the kernel version is a release candidate, but I couldn’t find a package on Ubuntu’s mirrors to install the final release of that kernel on 22.04.)
Linux p7-1268c 6.1.0-060100rc5-generic #202211132230 SMP PREEMPT_DYNAMIC Sun Nov 13 22:36:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
btrfs-version.txt
btrfs-progs v5.19.1
btrfs-fi-show.txt (/dev/sdd1 and /dev/sdi1 are temporary drives; if I can recover /dev/sda1, they’ll hold the data copied from it, and then I can run btrfs replace.)
Label: 'NASA14TB' uuid: a278e706-b44a-43b8-bfb0-631f45bfa3f6
Total devices 3 FS bytes used 19.11TiB
devid 1 size 12.73TiB used 12.72TiB path /dev/sdc1
devid 3 size 7.28TiB used 6.38TiB path /dev/sdd1
devid 4 size 59.62GiB used 53.00GiB path /dev/sdi1
Label: none uuid: 44140ddb-1d65-40c2-bc87-6f0c4eea6f8e
Total devices 1 FS bytes used 13.76TiB
devid 1 size 14.27TiB used 14.14TiB path /dev/sda1
btrfs-fi-df.txt (I know this command expects the mount point of a mounted volume, but the volume doesn’t mount anyway.)
ERROR: not a directory: /dev/sda1
mount-log.log
mount: /nasa16: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error.
dmesg.log
[101735.605137] intel_powerclamp: Stop forced idle injection
[101736.606472] intel_powerclamp: Start idle injection to reduce power
[101785.778086] intel_powerclamp: Stop forced idle injection
[101786.779394] intel_powerclamp: Start idle injection to reduce power
[101838.269052] BTRFS info (device sda1): using crc32c (crc32c-intel) checksum algorithm
[101838.269067] BTRFS info (device sda1): ignoring data csums
[101838.269070] BTRFS info (device sda1): use zstd compression, level 8
[101838.269072] BTRFS info (device sda1): disk space caching is enabled
[101838.269674] BTRFS warning (device sda1): checksum verify failed on logical 21558607872 mirror 1 wanted 0x00000000 found 0xa5f996b5 level 1
[101838.269687] BTRFS error (device sda1): failed to read chunk root
[101838.270385] BTRFS error (device sda1): open_ctree failed
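Based on the “failed to read chunk root” error above, the next thing I was planning to try is a read-only mount with the kernel’s rescue options; if I understand the docs right, usebackuproot makes the kernel fall back to the backup tree roots stored in the superblock (no idea whether those are intact on this drive):

```shell
# Read-only mount attempt using the backup tree roots kept in the
# superblock; further rescue options can be stacked, e.g.
# -o ro,rescue=usebackuproot,rescue=ignorebadroots
mount -o ro,rescue=usebackuproot /dev/sda1 /nasa16
```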
Output of btrfs check /dev/sda1
Opening filesystem to check...
checksum verify failed on 21558607872 wanted 0x00000000 found 0xa5f996b5
checksum verify failed on 21558607872 wanted 0x00000000 found 0xa5f996b5
Csum didn't match
ERROR: cannot read chunk root
ERROR: cannot open file system
Output of btrfs rescue super-recover /dev/sda1
All supers are valid, no need to recover
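From what I’ve read, when the superblocks are fine but the chunk root can’t be read, the usual suggestions are btrfs rescue chunk-recover (rebuilds the chunk tree by scanning the whole device) and btrfs restore (copies files out without mounting). These are the commands I intend to run, in case I have any of them wrong:

```shell
# Rebuild the chunk tree by scanning the whole device; on a 14 TiB
# drive this can take a very long time
btrfs rescue chunk-recover /dev/sda1

# Copy data out without mounting: -D is a dry run, -i ignores
# errors, -v lists files as they are processed
# (the destination path is just an example)
btrfs restore -D -i -v /dev/sda1 /nasa/recovered-from-sda1
```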
So, how can I recover the drive so that it mounts again, or at the very least recover certain folders from it?