HotplugRaid
- Block devices connected over the network (drbd or nbd devices) are not covered here.
- Alternative approaches include syncing regular (possibly network-mounted) filesystems (unison, ChironFS) or using replicating file systems (GlusterFS, OpenAFS, Coda, InterMezzo, ...).
A replicating raid (redundant array of independent disks) that holds the user data is a generally useful setup. A hot-pluggable raid with one or more external drives is especially useful for home or office laptop users. This page provides a How-To and a basic test case.
Warning: Hotplugging different hardware with device mapper on top of the raid can cause data loss! https://bugs.launchpad.net/ubuntu/+source/linux/+bug/320638
Home directories on HotplugRaid
Layman's Description:
Consider you are doing critical work on a laptop computer. At home, or when on AC power, you plug in one (or possibly several) of your USB hard disks containing mirror partitions. Upon insertion, the lights of those drives will stay lit during idle times for a little while: this is the mirror partition being synced in the background automatically. As long as those drives stay attached, any work you save will be written simultaneously to the laptop's internal disk as well as to the attached mirror partitions.
If the internal disk fails, crashes down the stairway together with your laptop while you are on the road, or drives away in that fake taxi driver's trunk, hopefully you have at least one up-to-date mirror partition on the drive in your backpack, or on the one you kept at home.
Technical Description:
It is an installation with a filesystem residing on a multi-disk (md) raid array that contains hot-pluggable devices like removable USB and FireWire drives or (possibly external or bay-mounted) (S)ATA hard drives.
It is possible to have the entire root file system (/) on a hot-pluggable raid or, as in the example installation below, just the home directories (/home).
Beginning with Ubuntu 9.10 the arrays are properly assembled using udev rules and the --incremental option. Recovery of temporarily disconnected (external) drives should now work out of the box.
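The shipped assembly rule is similar in spirit to the following sketch (this is an assumption for illustration; the actual rule file and mdadm path vary between releases, check /lib/udev/rules.d/ on your system):

```
# Sketch of an incremental-assembly udev rule: whenever a block device
# carrying a raid superblock appears, hand it to mdadm for assembly.
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/sbin/mdadm --incremental $env{DEVNAME}"
```

With such a rule in place, each member partition is added to its array as soon as the kernel detects the drive, which is what makes the hotplug behaviour described above work.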
Installation Instructions
See: http://testcases.qa.ubuntu.com/Install/AlternateRaidMirror and adapt the partitioning to your needs.
- Having the root file system and swap on one encrypted LVM volume group and /home on a second encrypted raid device means you have to enter passphrases twice on each boot and resume. With fast devices like external SATA drives, you can include root, swap, and home in one volume group on one luks-on-raid mirror, and use a separate raid mirror only for /boot.
- multi-sync-bitmaps: If you want to have multiple external members and use them connected one at a time (rolling backup style), you may like to have a separate bitmap for each member! For this you need to stack md arrays of two members each. The first array (i.e. md0) then consists of the internal member and the first external member. The second array (i.e. md1) consists of md0 and the second external member.
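The stacked layout described above might be created along these lines (a sketch only; the device names /dev/sda2, /dev/sdb2 and /dev/sdc2 are placeholders for your internal and external member partitions):

```shell
# Inner mirror: internal disk + first external drive, with its own
# write-intent bitmap tracking that external member.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
    /dev/sda2 /dev/sdb2

# Outer mirror: the inner array + second external drive, with a second
# bitmap tracking the second external member independently.
mdadm --create /dev/md1 --level=1 --raid-devices=2 --bitmap=internal \
    /dev/md0 /dev/sdc2
```

The filesystem (or luks container) then goes on the outermost array, /dev/md1, so that writes propagate down through both mirrors and each external member can resync from its own bitmap.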
Further Improvements
- Maybe enabling the mdadm options --write-mostly --write-behind for external drives.
- Maybe including a read only medium as a preferred mirror, showing discrepancies if the internal drive has been tempered with. Maybe even allowing you to boot from that external drive, possibly re-syncing the internal /boot partition and master boot records (MBR).
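The --write-mostly idea from the first bullet might look like this at array creation time (a hedged sketch; device names are placeholders, and note that --write-behind requires a write-intent bitmap):

```shell
# Internal disk listed first; the external member after --write-mostly is
# only read from as a last resort, and up to 256 writes to it may be
# buffered so a slow external drive does not stall the system.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --bitmap=internal --write-behind=256 \
    /dev/sda2 --write-mostly /dev/sdb2
```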
Troubleshooting
- Preexisting Superblocks: During subsequent install attempts, or when the disks already contain some lvm, raid or luks partitions, the installer may be unable to delete or reuse preexisting devices (with data), or may inadvertently scrape previous setups. If you encounter this, please file/report the appropriate bugs. (A workaround may be to delete existing superblocks: mdadm --zero-superblock)
Booting with a degraded array (see ReliableRaid).
- When you detach a removable device with a raid member partition while the machine is powered down, the degraded array may not come up automatically during boot.
Slight chance with 9.10: Do a "dpkg-reconfigure mdadm" after the install and set boot degraded to yes again. (462258) To start the array manually if you are dropped to a console:
mdadm --incremental --run /dev/sde5               # Assemble and run the device (in this case sde5) as a degraded raid.
cryptsetup luksOpen /dev/mdX my-degraded-mirror   # Open the encrypted md device using its luks header.
mount /dev/mapper/my-degraded-mirror /mountpoint  # Finally mount your data.
Resyncing reconnected drives (re-adding raid member partitions to their array) when they are re-attached (see ReliableRaid).
- If a reconnected partition that was marked faulty still needs to be re-added manually: "mdadm --add /dev/md0 $dev" will re-add the partition to the raid and start the syncing. A custom udev rule may be similar to this one:
ACTION=="add", BUS=="usb", SUBSYSTEM=="block", DEVPATH=="*[0-9]", SYSFS{serial}=="5B6B1B916C31", RUN+="mdadm --add /dev/md0 $DEVNAME"
- Note that the above rule identifies the USB device by its serial number, not the raid partition by its superblock UUID (the partition may have been moved to another device).
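A rule keyed to the raid member itself, rather than to the USB hardware, might look like the following sketch (the UUID value is a hypothetical placeholder; udev derives ID_FS_TYPE and ID_FS_UUID from the superblock via blkid):

```
# Hypothetical rule: match any block device carrying a raid superblock
# with the given array UUID, regardless of which physical drive or port
# it appears on, and re-add it to the array.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
    ENV{ID_FS_UUID}=="00000000-0000-0000-0000-000000000000", \
    RUN+="/sbin/mdadm --add /dev/md0 $env{DEVNAME}"
```

This way the rule keeps working even if the mirror partition is later moved to a different external enclosure.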
References
https://help.ubuntu.com/community/Installation/SoftwareRAID
https://features.launchpad.net/distros/ubuntu/+spec/udev-mdadm
https://blueprints.launchpad.net/ubuntu/+spec/udev-mdadm
https://blueprints.edge.launchpad.net/ubuntu/+spec/boot-degraded-raid
https://blueprints.launchpad.net/ubuntu/+spec/udev-lvm-mdadm-evms-gutsy
HotplugRaid (last edited 2013-01-05 21:44:43 by 77-22-90-94-dynip)