Creating an encrypted RAID5 with LVM in Debian 8
The situation was already critical: one of the RAID1 disks had failed, and the SMART seek error counter of the second one kept climbing. No reallocated sectors though, luckily. I guess the seek mechanism was broken in this one. Or in both. The first disk had a controller problem that no longer let me connect it at all.
As a side note, the two broken disks are Seagate Barracuda 7200s. And those two are already replacement disks! The two before them also went down pretty quickly. The three new disks are Western Digital WD30EFRX, which are marketed as 'suitable for continuous operation'. Maybe the Seagates are really not made for long running hours, dunno.
WARNING: Playing with disks and data always comes with risks! Back up your data onto external devices and store them somewhere safe. It is good to have a disconnected backup device that cannot be reached by out-of-control, happily-disk-formatting software!
Creating a new RAID5 with mdadm
After connecting the new disks, they are available for use in the system. The first step is to bundle those three into one RAID array with mdadm. In my configuration, the new disks are sdc, sdd and sdf (sde being the old Seagate). Now, the initial creation:
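What I ran boils down to something like this (a sketch: the device names match my setup above, everything else is left at mdadm's defaults):

# create a RAID5 over the three new disks; mdadm starts the array
# in degraded mode and rebuilds onto the last device
sudo mdadm --create /dev/md1 --verbose --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sdf

# make the array known to Debian so it is assembled at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u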
The creation of /dev/md1 is now in progress. What does mdadm say?
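You can ask via /proc/mdstat or mdadm itself:

# overall status of all md arrays, including rebuild progress
cat /proc/mdstat
# detailed view of the new array
sudo mdadm --detail /dev/md1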
Fine. /dev/sdf is marked as a spare at creation time. This is expected and described in one of the linked resources: mdadm builds a fresh RAID5 in degraded mode and syncs onto the last device. We now have an md array available for further configuration. You do not need to wait until the array is fully built.
Side note: if you happen to reboot while the creation is still in progress, you might find your array marked as 'degraded' and/or 'recovering' while nothing actually happens. The creation process can be revived by either writing to the array or simply setting it read-write (it is most probably marked as read-only, too): sudo mdadm --readwrite /dev/md1. The recovery process should then continue.
Encrypting the new array device
Now comes the time to encrypt the newly created device /dev/md1. I am using LUKS here, encrypting the whole array and putting LVM on top. You can do it the other way around, too, for example if you do not want to encrypt the whole disk.
So, I am now going to format the array with LUKS, secure the volume with a password, generate a keyfile and add this keyfile to the volume. Further info below.
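Condensed into commands, it looks roughly like this (a sketch; the keyfile path matches the later steps, its size is my choice):

# format the md array as a LUKS container, secured with a passphrase
sudo cryptsetup -v luksFormat /dev/md1
# generate a 4 KiB random keyfile, readable by root only
sudo dd if=/dev/urandom of=/root/storage2_keyfile bs=1024 count=4
sudo chmod 0400 /root/storage2_keyfile
# add the keyfile to a second key slot of the container
sudo cryptsetup luksAddKey /dev/md1 /root/storage2_keyfile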
Checking the LUKS header:
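After the steps above, the dump should show two key slots in use, one for the passphrase and one for the keyfile:

sudo cryptsetup luksDump /dev/md1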
You can now test opening the container with the keyfile via sudo cryptsetup -v -d /root/storage2_keyfile luksOpen /dev/md1 storage2_crypt. This should create a mapper device at /dev/mapper/storage2_crypt without asking for a password. If you want the container to be opened automatically on system boot, you also need to add the md array to /etc/crypttab:
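The entry follows the usual four-column layout of target name, source device, keyfile and options; a sketch for my setup:

# <target name>   <source device>   <key file>               <options>
storage2_crypt    /dev/md1          /root/storage2_keyfile   luks

(Referencing the source device by UUID instead of /dev/md1 is more robust if device names ever change.)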
Why did I add a keyfile that is saved on the same machine, you may wonder? The whole point of the encryption step is just to make sure that broken disks I have to send back in the future do not contain any usable data. The disk with the faulty controller I talked about earlier cannot be sent back, because there is no way left to shred the data it contains.
Is it a performance loss? I don't know yet, but I guess not. The WD disks are 5400 rpm only, so they are not made for heavy disk load anyway, and decryption is pretty easy for modern CPUs.
Setting up LVM physical, group and logical volumes
Step 3. We will now define the encrypted volume /dev/mapper/storage2_crypt as an LVM physical volume, add this PV to a newly created volume group and add logical volumes (comparable to partitions) to this group. The first volume I will format as ext4 with a size of 2.5 terabytes; it will take over for the dying disk as soon as we are finished. Finally, I add the ext4 volume to /etc/fstab.
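In commands, roughly (the volume group name, volume name and mount point are my picks; adapt them to your setup):

# register the opened container as an LVM physical volume
sudo pvcreate /dev/mapper/storage2_crypt
# create a volume group containing that single PV
sudo vgcreate storage2 /dev/mapper/storage2_crypt
# carve out a 2.5 TB logical volume and format it as ext4
sudo lvcreate -L 2.5T -n data storage2
sudo mkfs.ext4 /dev/storage2/data

And the matching /etc/fstab line:

/dev/mapper/storage2-data  /media/storage2  ext4  defaults  0  2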
Removing the old RAID1 array
In my setup, we also want to remove the old broken RAID1 array. It is LVM directly on top of an md array.
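The teardown is the setup in reverse; a sketch, assuming the old array is /dev/md0 with a volume group named storage1 (adapt the names):

# unmount, deactivate and delete the old LVM structures
sudo umount /media/storage1
sudo vgchange -an storage1
sudo vgremove storage1
sudo pvremove /dev/md0
# stop the md array and wipe the RAID superblock off the surviving disk
sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sde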
The physical disk can then be removed.
Bonus: Using SMART to watch your disks
While setting up the new array, I regularly checked the SMART values of all involved disks:
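For example (smartctl comes with the smartmontools package; the health verdict and the raw attribute table are the interesting parts):

# print health summary and vendor attributes for each disk
for disk in sdc sdd sde sdf; do
    sudo smartctl -H -A /dev/$disk
done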
Luckily, the new WDs do not show any errors so far. The Seagate, on the other hand... well:
(smartctl output for the failing Seagate)
And later, after copying all the data:
(the same smartctl check, with the error counters grown further)
Further resources
- Performance - Linux Raid Wiki
- HOWTO: Automatically Unlock LUKS Encrypted Drives With A Keyfile
- How-To: encrypted partitions over LVM with LUKS — page 2 — encrypting the partitions | Debuntu
- SOLVED Repair Degraded Raid 5 w/ mdadm
- RAID Recovery - Linux Raid Wiki
- raid5 with mdadm does not run or rebuild
- MDADM - how to reassemble RAID-5 (reporting device or resource busy) - Unix & Linux Stack Exchange
- Mdadm checkarray - Thomas-Krenn-Wiki
- raid - New md array is auto-read-only and has resync=PENDING - Unix & Linux Stack Exchange