Recently I tried migrating my HDDs (RAID5, 4x 4 TB) from an existing OMV6 setup to a fresh OMV8 installation on a new PC build, since its motherboard offered two more SATA ports.
Unfortunately the system had problems with the SATA drives (the motherboard is actually very old and primarily meant for IDE drives), and it crashed while unlocking LUKS and mounting the storage pool.
This corrupted the metadata on 2 of the drives and the RAID was no longer recognized.
I moved the HDDs to another, stable system and tried to fix the setup. Unfortunately, after several days of going around in circles with ChatGPT (yeah, I know, probably not the best idea), all I achieved was rewriting the metadata once with mdadm --create --assume-clean, thereby overwriting the superblocks on the 2 drives that were previously still recognized as members of the RAID. I had saved the metadata of those 2 drives beforehand. I don't know if this has potentially damaged the LUKS header.
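For what it's worth, my understanding is that with 1.2 metadata the superblock sits 8 sectors into each member and the data area only starts at the data offset, so the --create --assume-clean run should only have rewritten the metadata region at the front of each disk, not the LUKS header or the data behind it (assuming the offsets match). My plan for double-checking is to compare the rewritten superblocks against the saved dumps, e.g.:

# compare the fields that matter against the saved dumps
mdadm --examine /dev/sdc | grep -E 'Data Offset|Super Offset|Chunk Size|Layout|Device Role'
mdadm --examine /dev/sde | grep -E 'Data Offset|Super Offset|Chunk Size|Layout|Device Role'

Please correct me if that assumption about the data area is wrong.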
So my current state:
- No superblocks present.
- No RAID metadata on the disks.
- Data should be intact (because I used --assume-clean); to be safe, I want to run any further experiments on copy-on-write overlays (see the sketch right after this list).
- Drive order not known.
- Drives are now in a stable system with OMV and Windows 10 installed.
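Before I touch the drives again I want to make sure that no further experiment can write to them, so the plan is to put a copy-on-write overlay over each member and only ever run mdadm against the overlays. This is adapted from the overlay recipe on the Linux RAID wiki; sdb..sde are placeholders for whatever the four members are called on the current system:

# mark the real disks read-only and give each one a snapshot overlay
for d in sdb sdc sdd sde; do
    blockdev --setro /dev/$d
    truncate -s "$(blockdev --getsize64 /dev/$d)" overlay-$d.img
    loop=$(losetup -f --show overlay-$d.img)
    size=$(blockdev --getsz /dev/$d)
    # non-persistent snapshot: all writes land in the sparse overlay
    # file, the underlying disk is never touched
    echo "0 $size snapshot /dev/$d $loop N 8" | dmsetup create $d
done

All experiments would then run against /dev/mapper/sdb ... /dev/mapper/sde, and a failed attempt can be thrown away with dmsetup remove and fresh overlay files.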
What I know for sure:
- Previous structure: RAID5 with ext4 and LUKS.
- The RAID was created using OMV6 (including the LUKS plugin); the old OMV installation is still intact and accessible.
- The drive UUIDs don't match the original ones in the old OMV config (new hardware).
- Drive letters are also scrambled.
- The LUKS password is known.
- The data offset is known (from the old RAID metadata).
- No physical disk failure.
- Previous metadata of the two drives:
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : c7902597:37b03a7f:65e30e58:c5ff5cd6
           Name : openmediavault:0 (local to host openmediavault)
  Creation Time : Thu Feb 1 14:59:35 2024
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 7813772976 sectors (3.64 TiB 4.00 TB)
     Array Size : 11720658432 KiB (10.92 TiB 12.00 TB)
  Used Dev Size : 7813772288 sectors (3.64 TiB 4.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=688 sectors
          State : clean
    Device UUID : 3b702221:b6bff32a:80c86da6:0f225da2
Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Feb 22 22:00:25 2026
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : 731f3399 - correct
         Events : 23465
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : c7902597:37b03a7f:65e30e58:c5ff5cd6
           Name : openmediavault:127 (local to host openmediavault)
  Creation Time : Thu Feb 1 14:59:35 2024
     Raid Level : raid5
   Raid Devices : 4
 Avail Dev Size : 7813772976 sectors (3.64 TiB 4.00 TB)
     Array Size : 11720658432 KiB (10.92 TiB 12.00 TB)
  Used Dev Size : 7813772288 sectors (3.64 TiB 4.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=688 sectors
          State : clean
    Device UUID : 86f6a821:281889b3:ba2613bf:8b59b0a0
Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Feb 22 22:00:25 2026
  Bad Block Log : 512 entries available at offset 24 sectors
       Checksum : 1333a80b - correct
         Events : 23465
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 0
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
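One thing the saved metadata does pin down: the drive that was /dev/sde when I took the dump was "Active device 0" and /dev/sdc was "Active device 2". Assuming those two names still point at the same physical drives on the stable system, only the two drives without saved metadata need to be tried in slots 1 and 3, so there are just two candidate orders. This is what I was planning to run against the overlays, using only parameters from the dumps above (sdb and sdd stand for the two unknown drives; please sanity-check before I run anything):

# slots 0 and 2 are pinned by the saved dumps (sde -> 0, sdc -> 2);
# sdb/sdd are the two drives whose roles are unknown
for order in "sdb sdd" "sdd sdb"; do
    set -- $order
    mdadm --create /dev/md127 --assume-clean --run \
          --level=5 --raid-devices=4 --metadata=1.2 \
          --chunk=512 --layout=left-symmetric \
          --data-offset=264192s \
          /dev/mapper/sde /dev/mapper/$1 /dev/mapper/sdc /dev/mapper/$2
    # the LUKS header lives at the start of the array, i.e. on the
    # pinned slot-0 drive, so it will probably look valid either way;
    # the filesystem check is what actually distinguishes the orders
    cryptsetup open /dev/md127 restore_crypt    # asks for the passphrase
    if fsck.ext4 -n /dev/mapper/restore_crypt; then
        echo "order $order looks right"
        break
    fi
    cryptsetup close restore_crypt
    mdadm --stop /dev/md127
done

# once the right order is found, mount strictly read-only first
mount -o ro /dev/mapper/restore_crypt /mnt

I'm not sure about the --data-offset syntax: 264192s is meant to be the old offset of 264192 sectors (132096 KiB); if the "s" suffix isn't accepted by my mdadm version, I'd pass the KiB value instead.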
I really hope someone can advise me on how to proceed here and how to potentially regain access to the filesystem, or at least to the raw files.