I have a Linux machine with a regular RAID-1 across two HDDs.
Everything seems fine, but about once per day the superblock appears to get wiped.
mdadm --assemble says that it can't find a valid superblock (on both drives). If I then run mdadm --create --assume-clean --level=1 --raid-devices=2 /dev/md0 /dev/sda /dev/sdb, everything seems fine again, though.
I don't know if it matters, but I made a hexdump from before and after recreating the RAID: (head /dev/sda | hexdump -C)
It doesn't happen only at reboot; it also happens while the PC is running.
Do you have any ideas what it could be?
dumpe2fs shows the following for both sda and sdb:
dumpe2fs 1.45.5 (07-Jan-2020)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda
Couldn't find valid filesystem superblock.
Maybe the error is because I made the RAID on the whole disk and not just on a partition?
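One way to test the "leftover formatting" suspicion: a GPT keeps its primary header (magic string "EFI PART") at LBA 1, i.e. byte offset 512 on 512-byte-sector disks, plus a backup header in the last LBA. The sketch below checks for that magic on a scratch image file rather than a real device, so it is safe to run; the image path and size are arbitrary, and you would point the same read at /dev/sda only once you trust the check.

```shell
# Sketch: detect a leftover GPT signature, demonstrated on a scratch
# image file instead of a real disk. Paths/sizes here are arbitrary.
img=$(mktemp /tmp/gptcheck.XXXXXX)
truncate -s 10M "$img"

# Simulate a stale GPT: plant the header magic "EFI PART" at LBA 1
# (byte offset 512 for 512-byte sectors).
printf 'EFI PART' | dd of="$img" bs=1 seek=512 conv=notrunc status=none

# Check: read 8 bytes at offset 512 and compare against the magic.
sig=$(dd if="$img" bs=1 skip=512 count=8 status=none)
if [ "$sig" = "EFI PART" ]; then
    echo "stale GPT header found"
else
    echo "no GPT header"
fi
rm -f "$img"
```

Running this prints "stale GPT header found", since the script plants the signature itself; against one of the affected drives, any hit would confirm GPT remnants survived the RAID creation.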
1 Answer
It turned out to be the mainboard wiping the data, because part of an old GPT was still left on the disks.
Details are explained here:
Running sgdisk --zap /dev/sda and sgdisk --zap /dev/sdb solved the issue.
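For anyone curious what the fix actually does: sgdisk --zap destroys the GPT data structures (the header sectors and partition entry arrays) while leaving the MBR alone. The sketch below simulates just the header-zeroing part on a scratch image (it does not run sgdisk itself, and skips the entry arrays for brevity), planting fake primary and backup headers and then overwriting both sectors with zeros.

```shell
# Sketch: simulate the effect of sgdisk --zap on a scratch image.
# This only zeroes the two GPT header sectors; the real tool also
# clears the partition entry arrays.
img=$(mktemp /tmp/zapdemo.XXXXXX)
size=$((10 * 1024 * 1024))
truncate -s "$size" "$img"

# Plant fake GPT headers: primary at LBA 1 (byte 512), backup in the
# final 512-byte sector.
printf 'EFI PART' | dd of="$img" bs=1 seek=512 conv=notrunc status=none
printf 'EFI PART' | dd of="$img" bs=1 seek=$((size - 512)) conv=notrunc status=none

# "Zap": overwrite both header sectors with zeros.
dd if=/dev/zero of="$img" bs=512 seek=1 count=1 conv=notrunc status=none
dd if=/dev/zero of="$img" bs=512 seek=$((size / 512 - 1)) count=1 conv=notrunc status=none

# Verify neither signature remains (tr strips the zero bytes read back).
primary=$(dd if="$img" bs=1 skip=512 count=8 status=none | tr -d '\0')
backup=$(dd if="$img" bs=1 skip=$((size - 512)) count=8 status=none | tr -d '\0')
echo "after zap: primary='$primary' backup='$backup'"
rm -f "$img"
```

The backup header in the last LBA is the part that tends to survive a naive wipe of the start of the disk, which is presumably why firmware kept "restoring" GPT state over the mdadm superblock.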