This page is a work in progress.

Greyhole

Greyhole is an open source application that merges drives of arbitrary size into a storage pool, which is shared via Samba as logical, redundant volumes of any size or level of redundancy. This is similar to Logical Volume Management in its storage pooling capability and to RAID in its ability to provide redundant storage. It is maintained at http://www.greyhole.net.

Comparison to Software RAID

Greyhole's advantage over RAID is that drives of dissimilar sizes can be added to the same pool. The disadvantages compared to RAID are speed (Greyhole is slower) and drive usage: a duplex Greyhole share requires 200% of the aggregate file size, whereas RAID 5 or 6 imposes a storage penalty that shrinks as the number of physical drives in the array grows.
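As a rough worked example (the drive counts and sizes here are illustrative, not from this page: assume five 2 TB drives, 10 TB raw, with every file duplicated in the Greyhole pool):

   RAID 5:           usable = (5 - 1) x 2 TB = 8 TB   (20% of raw capacity spent on redundancy)
   RAID 6:           usable = (5 - 2) x 2 TB = 6 TB   (40%)
   Greyhole duplex:  usable = 10 TB / 2      = 5 TB   (50%, i.e. 200% of the aggregate file size)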

Usage Statistics

The average greyhole implementation is 6TB in total pool size and has 5 drives. The largest is 43TB and has 26 drives. [November 2011]

Logical Construction

Greyhole merges an arbitrary number of drives, of any size, into a single storage pool that is then shared using Samba.

Software Raid

The supported, and probably optimal, way to use raid with Ubuntu is to employ Linux's Multiple Device (md) raid system, optionally with the Logical Volume Manager (LVM).

Installation

In Breezy Badger (5.10), installation of md and LVM can be completed entirely with the installation CD without using expert mode.

Just md

This is the simplest method of setting up software raid. It uses one raid partition per operating system partition, unlike LVM, which places logical volumes inside a single raid partition. For each disk in the array, create a "Use as: Raid physical" partition of appropriate size for each operating system partition (/boot, /, swap, /home, etc.). The /boot or / partitions on each disk should be marked bootable. Use "configure raid" to make a raid device for each set of partitions. On each of these raid devices configure a single operating system partition (/boot, /, swap, etc.) spanning the entire device. Continue installing Ubuntu.
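For example, on a two-disk mirror the mapping from physical partitions to raid devices to operating system partitions might look like this (device names are hypothetical; the installer may assign different ones):

   /dev/sda1 + /dev/sdb1  ->  /dev/md0  ->  /boot
   /dev/sda2 + /dev/sdb2  ->  /dev/md1  ->  swap
   /dev/sda3 + /dev/sdb3  ->  /dev/md2  ->  /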

Note for RAID1 first-timers: each RAID array is usually made up of at least two 'active' devices (one of which is actually active and one of which is the mirror). Spare devices are there for when one of the active devices fails, so they can jump in and continue mirroring. The same principle applies if you have one true 'active' device and multiple mirrors.
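A minimal sketch of how active and spare devices are specified when creating a mirror with mdadm (the device names are placeholders, not part of the recipe below): --raid-devices=2 names the two active members, and --spare-devices=1 adds a hot spare that takes over automatically if an active device fails.

   # mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1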

md made super simple

If you want to make a RAID array of data devices and not of boot devices, adapt the following recipe to your needs.

Goal: Create a RAID 1 array for user data, that is, for home directories.

Setup: Ubuntu 6.06 LTS (Dapper server) with one (1) 40GB root partition (/dev/hda) currently holding all data, including home, and two (2) unused 250GB hard drives (/dev/hdc, /dev/hdd).

The "super simple md" recipe

   $ cd /
   $ sudo -s -H
   # mdadm --create /dev/md0 --auto=yes -l 1 -n 2 /dev/hdc /dev/hdd
   # mke2fs /dev/md0
   # mv /home /home2
   # mkdir /home
   # cat >> /etc/fstab
     /dev/md0        /home           ext2    defaults        0       0
     ^D
   # mount -a
   # mv /home2/* /home
   # rmdir /home2
   # exit
   $ cd

All commands, e.g., mke2fs, have sensible defaults and "do the right thing".
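Before moving data onto the new array you may want to confirm that the mirror was assembled and is syncing. These are standard commands, not part of the original recipe:

   $ cat /proc/mdstat
   $ sudo mdadm --detail /dev/md0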

Note: if you have just one of the data drives at hand but intend to buy another later, you can build the array with the keyword "missing" in place of the absent disk:

   # mdadm --create /dev/md0 -l 1 -n 2 /dev/hdc missing

Later, the following will add the new disk to the array (when buying, be aware that it must be at least as large as the existing one):

   # mdadm /dev/md0 --add /dev/hdd

Some notes:

Other Setups

FakeRaid

Most, if not all, of the so-called "raid controllers" built into motherboards are actually just hard drive controllers with a few extra features that make it easier to implement software raid drivers. These are highly non-standard: each chipset uses different on-disk formats and different drivers. These systems are not particularly desirable for use with Ubuntu; the pure software raid described above is better. They are primarily of interest when compatibility with an existing system that employs them is required.

Access

Device mapper raid can be used to access many of these volumes. It is provided by the dmraid package, which is in the Universe repository.

After installing dmraid you can run the command dmraid -r to list the devices and raid volumes on your system. dmraid makes a device file for each volume and partition; these can be found in the /dev/mapper/ directory and can be mounted and otherwise manipulated like normal block devices. Other options of the dmraid program are used to administer the array.
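A short example session (a sketch: the package and dmraid options are real, but the volume name under /dev/mapper/ is hypothetical and depends on your chipset):

   $ sudo apt-get install dmraid
   $ sudo dmraid -r                                # list the raid member disks and their metadata format
   $ sudo dmraid -ay                               # activate all discovered raid sets
   $ ls /dev/mapper/
   $ sudo mount /dev/mapper/isw_example_Volume0 /mnt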

Installation

It is not advisable to install Ubuntu onto disks managed by a fake raid system; it is extremely difficult, and the results will be disappointing compared to Linux's LVM and md software raid systems. If you really must do it in order to install Ubuntu on the same raid array as an existing installation of another operating system, see the following:

More Information

Mixing Software Raid

Ubuntu can be installed on its own raid array on a computer that is using FakeRaid for another operating system on another array. There are a few steps that need to be followed for this to work:

  1. Identify which drives hold the existing operating system. You can do this by booting the Ubuntu Live CD, enabling the universe repository, installing dmraid, mounting the partitions, and poking around. You can see which physical devices belong to which mapped block device by running dmraid -r.

  2. Disable the fake raid support in the bios for the drives that Ubuntu will be installed onto. Many controllers, such as the Silicon Image 3114 controller, can either provide regular SATA drives or fakeraid, but not both at the same time; if this is your situation you'll need to move the drives for Ubuntu to a different SATA controller.
  3. Install Ubuntu onto the drives with raid as described in "Just md" above.
  4. Ubuntu will probably install Grub onto the first disk in your system. Once Ubuntu boots you'll want to install grub onto Ubuntu's drives, and possibly restore the bootloader of another operating system if Ubuntu stepped on it. If your boot partitions are, say, sdc1 and sdd1, installing Grub can be done with the following commands in the grub shell (grub counts disks from zero, so sdc is hd2 and sdd is hd3):

   grub> root (hd2,0)
   grub> setup (hd2)
   grub> root (hd3,0)
   grub> setup (hd3)

You can restore the boot record of an overwritten drive in another raid array on your system if you have a backup of that drive's master boot record. For some bootloaders and configurations, such as NT loader on RAID1, the master boot record of the other drive in the array can be used as the backup.
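If you need to take such a backup yourself, a minimal sketch with dd follows (run as root; /dev/sdX is a placeholder, so double-check the device before writing to it). The restore deliberately copies only the first 446 bytes of boot code, leaving the partition table untouched:

   # dd if=/dev/sdX of=/root/sdX-mbr.bin bs=512 count=1
   # dd if=/root/sdX-mbr.bin of=/dev/sdX bs=446 count=1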

More Information

Hardware Raid

Real hardware raid systems are very rare and are almost always provided by an add-in card, such as a PCI card. Your hardware will need kernel-level support in order to work with Ubuntu. You can find out whether it is supported without much work by booting a Live CD: your array should be visible as a SCSI block device and, if it has existing partitions and file systems, mountable.
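From a Live CD session, a quick check might look like this (a sketch; the device names are placeholders and will differ on your system):

   $ lspci | grep -i raid          # is the controller recognised by the kernel?
   $ sudo fdisk -l                 # the array should appear as a single block device, e.g. /dev/sda
   $ sudo mount /dev/sda1 /mnt     # mount an existing file system on the array, if present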


