FakeRaidHowto

== How to configure Ubuntu to access a hardware fakeRAID ==

Back when Ubuntu Breezy preview came out, I spent a week getting it installed on my Via SATA fakeRAID and finally got the system dual-booting WinXP and Ubuntu Linux on a RAID-0 (stripe) between two 36 gig 10,000rpm WD Raptor hard drives. So I thought I would create a howto to describe how I did it so that others could benefit from my work and add related lessons.

This page describes how to get Linux to see the RAID as one disk and boot from it. In my case, I use a RAID-0 configuration, but this should also apply to RAID-1 and RAID-5. For the benefit of those who haven't done some of these steps before, these instructions are fairly detailed (so don't be intimidated by the length of this document -- it's pretty straight-forward).

== What is fakeRAID? ==

In the last year or two a number of hardware products have come on the market claiming to be IDE or SATA RAID controllers. These have shown up in a number of desktop/workstation motherboards. Virtually none of them are true hardware RAID controllers. Instead, each is simply a multi-channel disk controller that has special BIOS and drivers to assist the OS in performing software RAID functions. This has the effect of giving the appearance of a hardware RAID, because the RAID configuration is set up using a BIOS setup screen and the system can be booted from the RAID.

Under Windows, you must supply a driver floppy to the setup process so Windows can access the RAID. Under Linux, which has had built-in softRAID functionality for some time, the hardware is seen for what it is -- multiple hard drives and a multi-channel IDE/SATA controller.

== Background ==

In recent years there has been a trend toward moving code out of the kernel and into EarlyUserSpace. This includes things like nfsroot configuration, md software RAID, lvm, conventional partition/disklabel support, and so on. Early user space is set up in the form of an initramfs which the boot loader loads with the kernel; it contains user mode utilities to detect and configure the hardware, mount the correct root device, and boot the rest of the system.

Hardware fakeRAID falls into this category of operation. A device driver in the kernel called device mapper is configured by user mode utilities to access software RAIDs and partitions. If you want to be able to use a fakeRAID for your root filesystem, your initramfs must be configured to detect the fakeRAID and configure the kernel mapper to access it.
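To make the division of labor concrete: the user mode tools compute a mapping table and hand it to the kernel's device mapper, which then presents the array as a single block device. A two-disk RAID-0 table (as dmsetup would display it) looks roughly like the following sketch; the device numbers, length, and chunk size here are hypothetical:

```
# <start> <length-in-sectors> striped <#disks> <chunk-sectors> <dev> <offset> ...
0 144607232 striped 2 128 8:0 0 8:16 0
```

Here 8:0 and 8:16 are the major:minor numbers of the two underlying disks, and 128 sectors (64 KB) is the stripe chunk size.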

== Installing Ubuntu into the RAID Array ==

=== Installing dmraid ===

The standard setup and LiveCDs do not yet contain support for fakeRAID. I used the LiveCD to boot up, and used the package manager to download the dmraid package from the universe repository. You will need to enable the universe component in Synaptic's repository settings to see the package. If you are using the DVD, you may also need to get the gparted package, which you will need for partitioning your RAID.
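If you prefer the command line to Synaptic, the same steps look roughly like this (a sketch; it assumes the universe component is already enabled in /etc/apt/sources.list):

```
sudo apt-get update
sudo apt-get install dmraid      # from universe
sudo dmraid -ay                  # activate the RAID sets the BIOS defined
```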

=== Partitioning the RAID Array ===

You can use gparted to create and delete partitions as you see fit, but at this time it cannot refresh the partition table after modifying it. You will need to change the partitions, manually run dmraid -ay from the command prompt to detect the new partitions, and then refresh gparted before you can format a partition.

I needed to resize my existing NTFS partition to make space for Ubuntu. (If you don't need to do this, skip to the next paragraph.) Gparted currently cannot do this on the mapper device, so I had to use the ntfsresize program from the command line. Note that ntfsresize only resizes the filesystem, not the partition, so you have to adjust the partition manually. Use ntfsresize to shrink the filesystem and note the new size of the filesystem in sectors, then fire up fdisk. Switch fdisk to sector mode with the 'u' command, and use the 'p' command to print the current partition table. Delete the partition you just resized and recreate it with the same starting sector, using the new size of the filesystem in sectors to compute the ending sector of the partition. Don't forget to set the partition type back to the value it had before. Now you should be able to create a new partition in the freed space.
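The end-sector arithmetic is easy to get wrong, so here is a worked example. All numbers are hypothetical; take yours from ntfsresize's output and fdisk's 'p' listing in sector mode (512-byte sectors assumed):

```shell
FS_BYTES=21474836480              # new filesystem size ntfsresize reported (example)
START=63                          # first sector of the existing partition (example)
FS_SECTORS=$((FS_BYTES / 512))    # convert bytes to 512-byte sectors
END=$((START + FS_SECTORS - 1))   # last sector to give fdisk when recreating
echo "recreate partition: start=$START end=$END"
```

The recreated partition may end later than END (leaving slack is harmless), but never earlier, or the filesystem will be truncated.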

Start gparted and create the partitions you want for your setup. To begin, use the selector on the upper right to choose the device dmraid has created for your fakeRAID. In my case, this was /dev/mapper/via_hfciifae, with an additional device /dev/mapper/via_hfciifae1 assigned to my already-created NTFS partition. dmraid will attempt to assign a meaningful name reflecting the controller you are using (e.g., an nvRAID user may see /dev/mapper/nvidia_bggfdgec or the like).

After selecting the unused space, I created an extended partition with 3 logical partitions inside. I made a 50 meg partition for /boot, a 1 gig partition for swap, and the rest for the root. Once you have set up the partitions you want, apply the changes and exit gparted. If you apply changes more than once (e.g., you do this in more than one step, or change your mind while working), you should exit gparted, refresh the partition table using the command dmraid -ay, and open gparted again to continue your work.

=== Formatting the Partitions ===

Now create a filesystem on each partition. In my case I worked from the command line, running mke2fs on /dev/mapper/via_hfciifae5 and mkreiserfs on /dev/mapper/via_hfciifae7.

Alternatively, you can do this using the GUI in gparted. Run dmraid -ay again to refresh the partition table for gparted and then open gparted again. You will see that the new partitions are designated as "unknown type", because they are not formatted. You can use gparted to format them by right-clicking each partition and selecting "convert" and the appropriate format. Before you exit, make a note of the device mapping for each new partition (you will need this later). Apply the changes and exit. You can also list these mappings with ls /dev/mapper.

In my case I had the following mappings:

via_hfciifae  -- the raw raid volume
via_hfciifae1 -- the NTFS partition
via_hfciifae5 -- /boot
via_hfciifae6 -- swap
via_hfciifae7 -- /
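As a command-line sketch matching the example mappings above (substitute your own dmraid device names and preferred filesystems):

```
mke2fs /dev/mapper/via_hfciifae5        # /boot, ext2
mkswap /dev/mapper/via_hfciifae6        # swap
mkreiserfs /dev/mapper/via_hfciifae7    # /, reiserfs
```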

=== Mounting the Temporary File Structure ===

Next, I created a temporary file structure to hold my new installation while I constructed it, and I mounted two sets of directories to it: a) the new partitions I had created for / and /boot (so I could install packages to them); b) the currently running /dev, /proc, and /sys filesystems, so I could use these to simulate a running system within my temporary file structure.

mkdir /target
mount -t reiserfs /dev/mapper/via_hfciifae7 /target
mkdir /target/boot
mount -t ext2 /dev/mapper/via_hfciifae5 /target/boot
mkdir /target/dev
mount --bind /dev /target/dev
mkdir /target/proc
mount -t proc proc /target/proc
mkdir /target/sys
mount -t sysfs sysfs /target/sys

=== Installing the Base System ===

Now we install the base system. debootstrap installs all the base packages and does their setup. Afterwards you need to install some additional packages:

cd /target

apt-get install debootstrap
# install debootstrap so we can bootstrap the base system in the next step

# install base system
debootstrap breezy /target  # any other release name can be substituted for breezy

# copy sources list
cp /etc/apt/sources.list /target/etc/apt

# copy resolv.conf
cp /etc/resolv.conf /target/etc

# run in the now installed system
chroot /target

# install ubuntu-base (and other packages)
apt-get update
apt-get install ubuntu-base linux-k7 ubuntu-desktop dmraid grub
# change grub to lilo if you use lilo
# change k7 to match your processor; if you don't know, use linux-386.

# when prompted whether you want to stop now, say no (we will fix the issue the system is complaining about later)
# when prompted whether to create a symbolic link, say yes.

# the system is installed now.

== Setting Up the Bootloader for RAID ==

We will demonstrate the installation of GRUB (Grand Unified Bootloader), but there are several alternatives (e.g., LILO). The key information here is how the normal bootloader setup process must be modified to accommodate the RAID mappings, so this general process should be useful regardless of your choice of bootloader.

=== Installing the Bootloader Package ===

If you followed the instructions so far, you have already installed the grub package. If not, or if you are using a different bootloader, install it now. For grub this is simply apt-get install grub if you didn't do it earlier.

Now you need to run the grub shell. In a non-RAID scenario one might use grub-install, but we cannot, because it cannot see the RAID device mappings and therefore cannot set up correct paths to our boot and root partitions. So we will install and configure grub manually as follows:

mkdir /boot/grub
cp /lib/grub/<your-cpu-arch>-pc/stage1 /boot/grub/
cp /lib/grub/<your-cpu-arch>-pc/stage2 /boot/grub/
# grub needs these files to set itself up correctly

grub
# you are now in the grub shell

device (hd0) /dev/mapper/via_hfciifae
# this tells grub which device the BIOS presents as the first hard disk

# setting the geometry may not be
# mandatory for you. Only specify it if things
# do not work.  The three numbers describe
# the drive in terms of cylinders, heads,
# and sectors per track.

geometry (hd0) 9001 255 63

# the root command complained about bad cylinder numbers without this
# so I gave it this command to tell it the right geometry
# according to fdisk (fdisk -l /dev/mapper/via_hfciifae)

root (hd0,4)
# select grub's root partition -- my /boot (grub counts from 0, so partition 5 becomes "4")

setup (hd0)
# install grub

quit

=== Configuring the Bootloader ===

Now run update-grub. This adds your newly installed linux kernel, and the associated initial ram disk image, to the boot options menu that grub presents during start-up. You will find this menu in the file /boot/grub/menu.lst. We need to edit menu.lst as follows:

a) Correct the path that points to the linux root. update-grub configures hda1 as root because it can't find your current root device. Put the correct device name for your linux root (e.g. root=/dev/mapper/via_hfciifae7) in the places where update-grub defaulted to root=/dev/hda1. Make sure you change this in each of the alternative boot sections as well as in the Automagic defaults section. (Note that the Automagic defaults section is nested and therefore uses ## to indicate comments and # to indicate the defaults, so don't "un-comment" the default lines when you edit them.)

b) If necessary, correct the grub root. In places you will see other lines that also refer to "root", but use syntax such as root (hd0,1) instead of a path. These refer to the "root" for grub's purposes, (stay with me now) which is actually your /boot. Also, grub numbers partitions beginning with zero. So, if you have a separate /boot partition, these lines should show root (hd0,4) -- the same information we used while working with grub interactively earlier. Change this both for the Automagic defaults as well as for each alternative, including the memtest option.

c) An additional edit is required if you are using a separate /boot partition. The paths pointing to the linux kernel and initrd must be relative to the grub "root" (your /boot). So if you are using a separate boot partition, the paths in grub's menu.lst file that locate the kernel and initrd must not begin with "/boot/", and you should delete that portion of the path. If you are not using a separate boot partition, you can leave these paths alone.

d) Finally, to add a Windows boot option to your menu.lst, you can add and adapt the following lines:

title Windows XP 
  rootnoverify (hd0,0)
  chainloader +1
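Putting edits (a) through (c) together, a corrected Linux stanza might look like the following. This is a sketch assuming the example device mappings and the 2.6.12-9-k7 kernel used elsewhere in this howto; your kernel version and device names will differ:

```
title   Ubuntu, kernel 2.6.12-9-k7
root    (hd0,4)
kernel  /vmlinuz-2.6.12-9-k7 root=/dev/mapper/via_hfciifae7 ro
initrd  /initrd.img-2.6.12-9-k7
boot
```

Note that the kernel and initrd paths have no /boot/ prefix, because (hd0,4) -- the separate /boot partition -- is already grub's root here.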

== Reconfiguring the Initramfs for RAID ==

For the kernel to recognize the RAID array, it must be able to load the dmraid module. So we need to add dmraid to the initramfs (the file system initially loaded into RAM with the kernel when linux boots). Debian and Ubuntu support this by way of a set of shell scripts and configuration files placed in /etc/mkinitramfs/. We must tailor these to include dmraid by plugging in two simple scripts and adding a one-line entry to a configuration file. The only real challenge here is to make sure you don't inadvertently break the syntax with a typo.

=== Configuring mkinitramfs ===

First, create a new file as /etc/mkinitramfs/scripts/local-top/dmraid.

(If you are lazy or don't like to keyboard, you can open this how-to in the browser and copy the text.)

#!/bin/sh

PREREQ=""

prereqs()
{
        echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
        prereqs
        exit 0
        ;;
esac

modprobe -q dm-mod

/sbin/dmraid -ay

Second, create another new file as /etc/mkinitramfs/hooks/dmraid.

(Again for the lazy, you can copy it from your browser. Also, it's only slightly different, so if you are manually typing it for some reason, you may want to start with a copy of the first script.)

#!/bin/sh

PREREQ=""

prereqs()
{
        echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
        prereqs
        exit 0
        ;;
esac

. /usr/share/initramfs-tools/hook-functions

copy_exec /sbin/dmraid /sbin

exit 0

Third, mark both of these new initramfs scripts as executable:

chmod +x /etc/mkinitramfs/hooks/dmraid
chmod +x /etc/mkinitramfs/scripts/local-top/dmraid

Last, add the line dm-mod to the file /etc/mkinitramfs/modules. Make sure the file ends with a newline.
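The edit can be scripted so it is safe to re-run. The sketch below demonstrates the logic on a temporary file; on the real system, point MODFILE at /etc/mkinitramfs/modules instead:

```shell
MODFILE=$(mktemp)                 # stand-in for /etc/mkinitramfs/modules
printf 'ext3\n' > "$MODFILE"      # pretend existing contents

# append dm-mod only if it is not already listed (echo adds the
# trailing newline the file needs)
grep -qx dm-mod "$MODFILE" || echo dm-mod >> "$MODFILE"
grep -qx dm-mod "$MODFILE" || echo dm-mod >> "$MODFILE"   # second run: no-op
```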

=== Updating the initrd ===

Now the big moment -- use update-initramfs to rebuild the initrd file. Below I show the kernel I had installed at the time, but everything following "initrd.img-" and following "-c -k " must reflect the version YOU are using (e.g., "2.6.12-10-amd64-k8-smp" or whatever). Two commands:

rm /boot/initrd.img-2.6.12-9-k7
update-initramfs -c -k 2.6.12-9-k7

Now you are ready to set up the new system.

== Preconfiguring the New System as Usual ==

Ensure that you are still operating as root within the new (temporary) system (i.e., your prompt will be root@ubuntu#). If not, chroot /target again. (The process from here forward is the same as any bootstrap / network installation, and there are many sources to refer to for more detail.)

Enter the command base-config new to configure system defaults.

While it is not absolutely necessary, it may be useful to also copy the live interfaces file into your temporary system before rebooting (from outside the chroot: cp /etc/network/interfaces /target/etc/network/interfaces). It may also be helpful to configure your fstab file at this point.
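For the fstab, here is a sketch using the example device mappings from earlier in this howto; adjust the devices, filesystem types, and mount points to your own layout:

```
# /target/etc/fstab -- example only
proc                       /proc  proc      defaults  0  0
/dev/mapper/via_hfciifae7  /      reiserfs  defaults  0  1
/dev/mapper/via_hfciifae5  /boot  ext2      defaults  0  2
/dev/mapper/via_hfciifae6  none   swap      sw        0  0
```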

When the process is finished, you can reboot and use your system.

== External Links ==
[http://samokk.is-a-geek.com/wordpress/2006/01/15/running-ubuntu-gnulinux-on-a-fakeraid1-mirroring-array/ Running Ubuntu On a Fakeraid/1 array] describes how to adapt this HOWTO to a FakeRAID/1 (mirroring) array.


CategoryDocumentation CategoryHardware

FakeRaidHowto (last edited 2008-08-06 17:00:01 by localhost)