This page is a work in progress.
Greyhole is an open source application that merges drives of arbitrary size into a single storage pool, which is shared through Samba as logical, redundant volumes of any size and level of redundancy. This is similar to Logical Volume Management in its storage pooling and to RAID in its ability to provide redundant storage. It is maintained at http://www.greyhole.net.
Comparison to Software RAID
Greyhole's advantage over RAID is that drives of dissimilar sizes can be added to the same pool. Its disadvantages are speed (Greyhole is slower) and drive usage: a duplex (two-copy) Greyhole share requires 200% of the aggregate file size, whereas the storage penalty of RAID 5 or 6 shrinks as the number of physical drives in the array grows.
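The storage trade-off is easy to quantify. A quick sketch (the drive count of 6 is just an illustration, not from this page):

```shell
# Usable fraction of raw capacity with n equal-sized drives:
#   RAID 5 keeps (n-1)/n, RAID 6 keeps (n-2)/n,
#   while a duplex (two-copy) Greyhole share keeps 1/2 regardless of n.
n=6   # illustrative drive count
raid5=$(( (n - 1) * 100 / n ))
raid6=$(( (n - 2) * 100 / n ))
echo "with $n drives: RAID5 ${raid5}% usable, RAID6 ${raid6}% usable, duplex Greyhole 50%"
```

With 6 drives this gives RAID 5 about 83% usable and RAID 6 about 66%, which is why the RAID penalty shrinks as drives are added while the duplex penalty stays fixed.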
The average greyhole implementation is 6TB in total pool size and has 5 drives. The largest is 43TB and has 26 drives. [November 2011]
The supported, and probably optimal, way to use raid with Ubuntu is to employ Linux's Multiple Device (md) raid system, optionally with the Logical Volume Manager (LVM).
In Breezy Badger (5.10), installation of md and LVM can be completed entirely with the installation CD without using expert mode.
This is the simplest method of setting up software raid. It uses one raid partition per operating system partition, unlike LVM, which places logical volumes inside a single raid partition.
- On each disk in the array, create a "Use as: physical volume for RAID" partition of the appropriate size for each operating system partition (/boot, /, swap, /home, etc.).
- Mark the /boot or / partition on each disk as bootable.
- Use the configure raid option to make a raid device from each set of partitions.
- On each raid device, configure a single operating system partition (/boot, /, swap, etc.) spanning the entire device.
- Continue installing Ubuntu.
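Once the installer finishes and the system boots, it may be worth confirming that the arrays assembled correctly. A minimal check (the device name /dev/md0 is an example; the commands are guarded so they do nothing on a machine without md arrays):

```shell
# /proc/mdstat summarizes every active md array and its resync state.
if [ -r /proc/mdstat ]; then
    mdstat=$(cat /proc/mdstat)
else
    mdstat=""
fi
printf '%s\n' "$mdstat"

# For a detailed view of one array, run (as root):
#   mdadm --detail /dev/md0
```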
http://www.howtoforge.com/linux_software_raid More detailed instructions for Debian and Ubuntu
Note for RAID1 first-timers: each RAID array is usually made up of at least two 'active' devices (one of which is the actual primary and the other the mirror); spare devices are there for when an active device fails, so they can jump in and continue mirroring. The same principle applies if you have one true 'active' device and multiple mirrors.
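The active/spare split is visible directly in /proc/mdstat, where spares are tagged (S). A sketch that parses an illustrative status line (the line itself is made up, not taken from a real system):

```shell
# Illustrative /proc/mdstat entry for a RAID1 with two active members
# and one spare; the (S) suffix marks the spare.
sample='md0 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]'

# Count the spares on the line.
spares=$(printf '%s\n' "$sample" | grep -o '(S)' | wc -l)
echo "spares: $spares"
```

When building an array, mdadm's --spare-devices (-x) option is what declares how many of the listed devices are spares rather than active members.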
md made super simple
If you want to make a RAID array of data devices and not of boot devices, adapt the following recipe to your needs.
Goal: Create a RAID 1 array for user data, that is, for home directories.
Setup: Ubuntu 6.06 LTS (Dapper server) with one (1) 40GB root partition (/dev/hda), currently holding all data including home and two (2) unused 250GB hard drives (/dev/hdc, /dev/hdd).
The "super simple md" recipe
$ cd /
$ sudo -s -H
# mdadm /dev/md0 --create --auto yes -l 1 -n 2 /dev/hdc /dev/hdd
# mke2fs /dev/md0
# mv /home /home2
# mkdir /home
# cat >> /etc/fstab
/dev/md0 /home ext2 defaults 0 0
^D
# mount -a
# mv /home2/* /home
# rmdir /home2
# exit
$ cd
All commands, e.g., mke2fs, have sensible defaults and "do the right thing".
Note: if you have just one of the "data HDs" at hand but intend to buy a second later, you can build the array using:
# mdadm /dev/md0 --create -l 1 -n 2 /dev/hdc missing
where "missing" holds the place of the future disk.
- This was not tested with the --auto option (used in the original super simple recipe above).
# mdadm /dev/md0 --add /dev/hdd
will add the new disk later (be aware of drive sizes when buying).
- It's worth noting that some people (including me) hit the "mdadm: /dev/hdd1 not large enough to join array" error when using the standard procedure, that is, partitioning the disk and adding the partition to the array.
- You may need to edit /etc/fstab to remove redundant entries, e.g., if either /dev/hdc or /dev/hdd was used for something else before the pair of hard drives was installed.
- This recipe creates ext2 partitions on the drives being raided; if you want ext3, you will need at least the "-j" option to mke2fs, and you will need to modify the /etc/fstab entry accordingly. Some people have reported errors with journaled file systems (at least ext3 and ReiserFS) under RAID-intensive use.
- Alternatively, use mkfs.ext3 instead of mke2fs to create an ext3 file system, and, as above, change the fstab entry from ext2 to ext3.
- Various man pages reference man pages that either no longer exist or are not installed by default. E.g., the md man page refers to mkraid(8).
- You can use gparted (sudo gparted) to find the hard drive device names (mine were sdb and sdc) and to format the disks in a graphical interface if you wish. (You may need to sudo apt-get install gparted if it is not installed.)
- I used a desktop install and had to sudo apt-get install mdadm before this would work.
- ^D is control-d (ctrl-D).
- If you have SATA disks (most are, as of September 2010) you'll see "sdX" instead of "hdX" devices (e.g., /dev/sda instead of /dev/hda).
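Following the ext3 notes above, the journaled variant of the recipe changes only the mkfs step and the fstab line. A sketch, using the same example device /dev/md0 as the recipe:

```shell
# Create the file system with a journal: -j turns ext2 into ext3
# (mkfs.ext3 /dev/md0 is equivalent). Shown as a comment because it
# needs a real md device:
#   mke2fs -j /dev/md0

# The matching fstab entry uses type ext3 instead of ext2.
fstab_line='/dev/md0 /home ext3 defaults 0 0'
printf '%s\n' "$fstab_line"
```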
Installation/LVMOnRaid Setup using both LVM and md. The LVM setup didn't work for me.
Installation/RAID1 an older description for Warty Warthog.
Most, if not all, of the so-called "raid controllers" built into motherboards are actually just hard drive controllers with a few extra features to make it easy to implement software raid drivers. These are highly non-standard: each chipset uses different on-disk formats and different drivers. They are not particularly desirable for use with Ubuntu; the completely software raid described above is better. They are primarily of interest when compatibility with another existing system that employs them is required.
Device mapper raid can be used to access many of these volumes. It is provided by the dmraid package. dmraid is in the Universe repository.
After installing dmraid you can run the command <code>dmraid -r</code> to list the devices and raid volumes on your system. dmraid makes a device file for each volume and partition; these can be found in the /dev/mapper/ directory, and can be mounted and otherwise manipulated like normal block devices. Other options of the dmraid program are used to administer the array.
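Put together, inspecting a fake raid volume might look like this (the volume name under /dev/mapper/ varies by chipset; the one in the comment is invented):

```shell
# List the raid sets dmraid recognizes (needs root, so shown as comments):
#   dmraid -r
# Mount a partition of a detected volume read-only, e.g.:
#   mount -o ro /dev/mapper/sil_aiagbgabifbh1 /mnt

# The device nodes themselves live under /dev/mapper/:
mapper_dir=/dev/mapper
if [ -d "$mapper_dir" ]; then
    ls "$mapper_dir"
fi
```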
It is not advisable to install Ubuntu onto disks managed by a fake raid system; it is extremely difficult, and the results will be disappointing compared to Linux's LVM and md software raid systems. If you really must do it, to install Ubuntu on the same raid array as an existing installation of another operating system, see the following:
Mixing Software Raid
Ubuntu can be installed on its own raid array on a computer that is using FakeRaid for another operating system on another array. There are a few steps that need to be followed for this to work:
- Identify which drives hold the existing operating system. You can do this by booting the Ubuntu Live CD, enabling the universe repository, installing dmraid, mounting the partitions, and poking around. You can see which physical devices correspond to which mapped block device by running <code>dmraid -r</code>.
- Disable the fake raid support in the bios for the drives that Ubuntu will be installed onto. Many controllers, such as the Silicon Image 3114 controller, can either provide regular SATA drives or fakeraid, but not both at the same time; if this is your situation you'll need to move the drives for Ubuntu to a different SATA controller.
- Install Ubuntu onto the drives with raid as described in "Just md" above.
- Ubuntu will probably install Grub onto the first disk in your system. Once Ubuntu boots you'll want to install Grub onto Ubuntu's own drives, and possibly restore the bootloader of another operating system if Ubuntu stepped on it. If your boot partitions are, say, sdc1 and sdd1, installing Grub can be done with the following commands at the grub prompt (grub's numbers are one less than the drive letter):
- root (hd2,0)
- setup (hd2)
- root (hd3,0)
- setup (hd3)
You can restore the boot record of a stepped-on drive of another raid array in your system if you have a backup of the drive's master boot record. For some bootloaders and configurations, such as NT loader on RAID1, the master boot record of the other drive in the array can be used as the backup.
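Such a backup can be taken with dd. The sketch below runs against a scratch file instead of a real disk so it is safe to execute; on real hardware you would substitute the disk device (e.g. /dev/sda) for $disk:

```shell
# Simulate a drive with a 1 MiB scratch file; on real hardware this
# would be the disk device, e.g. /dev/sda.
disk=./fake-disk.img
dd if=/dev/zero of="$disk" bs=1024 count=1024 2>/dev/null

# Back up the full MBR: 446 bytes of boot code + 64-byte partition
# table + 2-byte signature = 512 bytes.
dd if="$disk" of=./mbr.bak bs=512 count=1 2>/dev/null

# Restore ONLY the boot code (first 446 bytes), leaving the current
# partition table alone -- usually what you want after Grub overwrote
# another operating system's loader. conv=notrunc stops dd from
# truncating the target file.
dd if=./mbr.bak of="$disk" bs=446 count=1 conv=notrunc 2>/dev/null

rm -f "$disk"
```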
http://www.linuxsa.org.au/mailing-list/2003-07/1270.html Grub and Raid, part 6 describes installing grub
Real hardware raid systems are rare and are almost always provided by an add-in card, such as a PCI card. Your hardware will need kernel-level support in order to work with Ubuntu. You can find out whether it is supported without much work by booting a Live CD: your array should be visible as a SCSI block device and, if it has existing partitions and file systems, mountable.
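From a Live CD session the check might look like this (the device-specific commands are shown as comments because they need real hardware; lspci comes from the pciutils package):

```shell
# See whether the kernel recognized a RAID controller at all:
#   lspci | grep -i raid
# A supported array appears as an ordinary SCSI block device:
#   ls /dev/sd*
# If it already carries partitions and file systems, try a read-only mount:
#   mount -o ro /dev/sda1 /mnt

# Harmless sanity check that works anywhere: list whatever block
# devices this machine exposes.
blocks=$(ls /sys/block 2>/dev/null || true)
printf '%s\n' "$blocks"
```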
http://www.linuxmafia.com/faq/Hardware/sata.html Information about SATA controllers, including fakeraid.