
I have a Linux machine with a regular RAID-1 across two HDDs. Everything seems fine, but about once a day it seems to wipe the superblock. If I run `mdadm --create --assume-clean --level=1 --raid-devices=2 /dev/md0 /dev/sda /dev/sdb`, everything seems fine again, but `mdadm --assemble` says that it can't find a valid superblock (on both drives). I don't know if it matters, but I made a hex dump from before and after recreating the RAID (`head /dev/sda | hexdump -C`):
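As an aside, this kind of before/after comparison is easier to repeat if the dump is scripted. A minimal sketch (the `snapshot_hex` helper name is my own, not from the question) that captures the first 64 KiB of a device as a hex dump so two snapshots can be diffed:

```shell
# Capture the first 64 KiB of a device (or an image file) as a hex dump,
# so a "before" and an "after" snapshot can be compared with diff.
# snapshot_hex is a hypothetical helper name, not a standard tool.
snapshot_hex() {
    dev="$1"   # e.g. /dev/sda
    out="$2"   # e.g. sda.before.hex
    head -c 65536 "$dev" | hexdump -C > "$out"
}

# Usage sketch:
#   snapshot_hex /dev/sda sda.before.hex
#   ... wait for the superblock to disappear ...
#   snapshot_hex /dev/sda sda.after.hex
#   diff sda.before.hex sda.after.hex
```

Diffing the two dumps shows exactly which byte ranges changed, which helps tell a zeroed superblock apart from one overwritten with a partition table.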

(The hex dumps were attached as a screenshot and are not reproduced here.) It's not only at reboot; it also happens while the PC is running. Do you have any ideas what it could be?

dumpe2fs shows the following for both sda and sdb:

dumpe2fs 1.45.5 (07-Jan-2020)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda
Couldn't find valid filesystem superblock.

Maybe the error is because I made the RAID on the whole disk and not just on a partition?
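That theory can be tested without writing anything: a leftover GPT announces itself with the 8-byte ASCII signature `EFI PART` at the start of the disk's second sector (and again in the last sector, where the backup header lives). A hedged, read-only sketch — `has_gpt_header` is my own helper name:

```shell
# Check whether a disk (or image file) still carries a primary GPT header:
# GPT stores the ASCII signature "EFI PART" at the start of LBA 1.
# has_gpt_header is a hypothetical helper, not a standard tool.
has_gpt_header() {
    dev="$1"
    # tr strips NUL bytes so an all-zero sector compares as an empty string
    sig=$(dd if="$dev" bs=512 skip=1 count=1 2>/dev/null | head -c 8 | tr -d '\0')
    [ "$sig" = "EFI PART" ]
}

# Usage sketch (read-only):
#   has_gpt_header /dev/sda && echo "stale GPT header on /dev/sda"
```

On most distributions `wipefs -n /dev/sda` answers the same question more generally, listing every signature libblkid can detect without erasing anything.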


1 Answer

It turned out to be the mainboard wiping the data, because part of the GPT formatting was still left on the disks. Running `sgdisk --zap /dev/sda` and `sgdisk --zap /dev/sdb` solved the issue. (Note that `--zap` destroys only the GPT data structures; `--zap-all` would also wipe the MBR.)
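For systems without gdisk/sgdisk installed, roughly the same cleanup can be approximated with plain `dd`, since GPT keeps its primary copy in the first 34 sectors and its backup copy in the last 33 — the backup is what firmware can still find after a naive wipe of the disk's start. A rough, destructive sketch; `zap_gpt` is my own helper name, and the size lookup shown works for image files (on a real disk, use `blockdev --getsize64` and triple-check the device name first):

```shell
# Rough equivalent of `sgdisk --zap`: zero the primary GPT area (LBA 0-33)
# and the backup GPT area (the last 33 sectors). DESTRUCTIVE - meant for
# image files or disks that are about to be re-added to the array anyway.
# zap_gpt is a hypothetical helper, not part of any standard tool.
zap_gpt() {
    dev="$1"
    size=$(wc -c < "$dev")      # image files only; on a real block device
    sectors=$((size / 512))     # use: size=$(blockdev --getsize64 "$dev")
    dd if=/dev/zero of="$dev" bs=512 count=34 conv=notrunc 2>/dev/null
    dd if=/dev/zero of="$dev" bs=512 seek=$((sectors - 33)) count=33 conv=notrunc 2>/dev/null
}
```

`sgdisk --zap` (or `wipefs --all`) remains the safer choice when available, since those tools locate the structures themselves instead of assuming a 512-byte sector size.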
