FakeRaidHowto

How to configure Ubuntu to access a hardware fakeRAID

This page is a work in progress. I have spent the last week getting Ubuntu Breezy preview installed on my Via SATA fakeRAID and finally have the system dual booting WinXP and Ubuntu Linux on a RAID-0 ( stripe ) between two 36 gig 10,000rpm WD Raptor hard drives. I thought I would create a howto to describe how I did it so that others may benefit from my work.

What is it?

In the last year or two a number of hardware products have come on the market claiming to be IDE or SATA RAID controllers. Virtually none of them are real hardware RAID controllers. Instead, they are simply multi-channel disk controllers with a special BIOS and OS drivers that perform the RAID functions in software. This gives the appearance of hardware RAID, because the RAID configuration is set up through a BIOS setup screen and the system can boot from the RAID.

Under Windows, you must supply a driver floppy to the setup process so Windows can access the RAID. Under Linux, the hardware is seen for what it is, which is simply a multi-channel IDE/SATA controller. What this means is that if you have multiple disks configured as a RAID, Linux sees individual disks. This page describes how to get Linux to see the RAID as one disk, and boot from it. In my case, I use a RAID-0 configuration, but this should also apply to RAID-1 and RAID-5.

Background

In recent years there has been a trend to move code out of the kernel and into EarlyUserSpace. This includes things like nfsroot configuration, md software RAID, lvm, conventional partition/disklabel support, and so on. Early user space takes the form of an initramfs, which the boot loader loads along with the kernel; it contains user-mode utilities to detect and configure the hardware, mount the correct root device, and boot the rest of the system.

Hardware fakeRAID falls into this category of operation. A kernel driver called device-mapper is configured by user-mode utilities to access software RAIDs and partitions. If you want to use a fakeRAID for your root filesystem, your initramfs must be configured to detect the fakeRAID and set up the device mapper to access it.
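To illustrate what dmraid ultimately does, a striped pair of disks is described to the device mapper as a table like the following (a hypothetical sketch using dmsetup syntax; the length, chunk size, and /dev/sda, /dev/sdb member disks are made up, and in practice dmraid generates the table from the metadata the RAID BIOS wrote to the disks):

```shell
# 0 = start sector, 144604800 = length in sectors (hypothetical),
# "striped" target across 2 disks, 128-sector chunks, each disk from offset 0
echo "0 144604800 striped 2 128 /dev/sda 0 /dev/sdb 0" | \
    dmsetup create via_hfciifae
```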

The How To

The key areas of work that need to be done are:

  1. Installing Ubuntu
  2. Installing lilo or grub
  3. Configuring the initramfs to boot the system

Installing Ubuntu

Install dmraid

The standard setup and LiveCDs do not yet contain support for fakeRAID. I used the LiveCD to boot up, and used the package manager to download the dmraid package from the universe repository. You will need to enable packages from Universe in the settings of Synaptic to see the package.
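With universe enabled, the same can be done from a terminal (a sketch of the steps above; run from the LiveCD session, assuming working network access):

```shell
# Enable "universe" in Synaptic (Settings -> Repositories), or by
# uncommenting the universe lines in /etc/apt/sources.list, then:
sudo apt-get update
sudo apt-get install dmraid
```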

Once the package is installed, it will detect the RAID device and any existing partitions on it, and create devices for them. In my case, initially dmraid created /dev/mapper/via_hfciifae and /dev/mapper/via_hfciifae1 (dmraid -r will show you what's mapped). These devices corresponded to my RAID itself, and the single partition on it that existed at the time, which was my WinXP NTFS partition. You should be able to fire up gparted and select the raw device to partition.
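The detection step can be sketched as follows (the via_hfciifae names are from the author's Via controller; yours will differ):

```shell
dmraid -r        # list the raw disks that belong to a RAID set
dmraid -s        # summarize the RAID sets that were found
dmraid -ay       # activate all sets, creating /dev/mapper nodes
ls /dev/mapper   # e.g. via_hfciifae  via_hfciifae1
```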

Partition the raid system

You can use gparted to create and delete partitions as you see fit, but at this time it cannot refresh the partition table after modifying it. You will need to change the partitions, manually run dmraid -ay from the command prompt to detect the new partitions, and then refresh gparted before you can format them.

I needed to resize my existing NTFS partition to make space for Ubuntu. Gparted currently can not do this on the mapper device so I had to use the ntfsresize program from the command line. Note that ntfsresize only resizes the filesystem, not the partition, so you have to do that manually. Use ntfsresize to shrink the filesystem, note the new size of the filesystem in sectors, then fire up fdisk. Switch fdisk to sector mode with the 'u' command. Use the 'p' command to print the current partition table. Delete the partition that you just resized and recreate it with the same starting sector. Use the new size of the filesystem in sectors to compute the ending sector of the partition. Don't forget to set the partition type to the value it was before. Now you should be able to create a new partition with the free space. In my case, I created an extended partition with 3 logical partitions inside. I made a 50 meg partition for /boot, a 1 gig partition for swap, and the rest for the root.
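As a worked example of the end-sector arithmetic (the numbers are hypothetical, chosen to fit the layout described below): if the NTFS partition starts at sector 63 and ntfsresize reports that the shrunken filesystem is 97659104 sectors long, the partition's new end sector is start + size - 1:

```shell
START=63             # start sector, from fdisk's 'p' output
FS_SECTORS=97659104  # hypothetical new size reported by ntfsresize
echo $((START + FS_SECTORS - 1))   # prints 97659166
```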

Attention: when you use grub as the bootloader, be careful to use a boot partition that lies below the 8 GB limit; otherwise grub's setup reports an error that the BIOS does not support the cylinder count.

After saving the new partition table and exiting fdisk, run dmraid -ay again to detect the new partitions. In my case I had the following in /dev/mapper:

via_hfciifae  -- the raw raid volume
via_hfciifae1 -- the NTFS partition
via_hfciifae5 -- /boot
via_hfciifae6 -- swap
via_hfciifae7 -- /

Format your new partitions

Now format your filesystems. In my case I ran an mke2fs on /dev/mapper/via_hfciifae5 and mkreiserfs on /dev/mapper/via_hfciifae7. Once that is done you can mount the new target filesystems. In my case I did:

mkdir /target
mount -t reiserfs /dev/mapper/via_hfciifae7 /target
mkdir /target/boot
mount -t ext2 /dev/mapper/via_hfciifae5 /target/boot

Install the base system

Now we install the base system. debootstrap installs all base packages and does its setup. Afterwards you need to install some additional packages:

cd /target

# install base system
debootstrap breezy /target  ## any distribution can be selected instead of breezy

# copy sources list
cp /etc/apt/sources.list etc/apt

# run in the now installed system
chroot .

# install ubuntu-base (and other packages)
apt-get update
apt-get install ubuntu-base linux-k7 ubuntu-desktop grub
# change grub to lilo if you use lilo
# change k7 to your processor architecture; if you don't know, use linux-386.

# the system is installed now.

Installing Lilo

I had to make a one-line patch to the lilo sources to get this to work. I am working with the lilo maintainer on the issue, and hopefully the next lilo release will have it resolved. Other than that, setting up lilo.conf is the tricky part of getting lilo working with the fakeRAID.

My lilo.conf file looks like this:

static-bios-codes
boot=/dev/mapper/via_hfciifae
disk=/dev/mapper/via_hfciifae
        sectors=63
        heads=255
        cylinders=9001
        max-partitions=15
        bios=0x80
        partition=/dev/mapper/via_hfciifae1
                start=63
        partition=/dev/mapper/via_hfciifae5
                start=97659198
        partition=/dev/mapper/via_hfciifae7
                start=100004688
prompt
timeout=150
lba32
compact
vga=normal
read-only
image=/boot/vmlinuz-2.6.12-9-amd64-generic
        label=linux
        initrd=/boot/initrd.img-2.6.12-9-amd64-generic-dmraid
        literal="root=/dev/mapper/via_hfciifae7"
other=/dev/mapper/via_hfciifae1
        label=windows
        unsafe

The static-bios-codes directive tells lilo not to probe the disks to figure out the partition layout. This is needed because lilo does not understand mapper devices and the probe would fail. When you use this option the disk= directives become required: you have to tell lilo exactly where the partitions are and on which BIOS disk device.

Disk 0x80 is the first hard disk the BIOS detects. The fakeRAID BIOS makes the RAID look like a single big disk at device 0x80, so we must tell lilo to use device 0x80. Lilo also needs to know the starting sectors of the partitions, which you can get from fdisk.
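The geometry numbers in lilo.conf can be derived from fdisk: run fdisk -lu /dev/mapper/via_hfciifae to see the total sector count and the start sector of each partition, then divide the total by 255*63 (the conventional heads * sectors-per-track geometry used above) to get the cylinder count. A hypothetical total, chosen here to match the 9001 cylinders above:

```shell
TOTAL_SECTORS=144604800  # hypothetical, as reported by fdisk -lu
echo $((TOTAL_SECTORS / (255 * 63)))   # prints 9001
```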

Installing Grub

To install grub you need to install the grub package.

Now you need to run the grub shell; the grub-install command does not work because the device mapping is not known to grub.

grub --device-map=/dev/null
# you are now in the grub shell

device (hd0,0) /dev/mapper/via_hfciifae1
device (hd0) /dev/mapper/via_hfciifae
# this tells grub which mapping the BIOS uses

root (hd0,0)
# select the root partition

# install grub
setup (hd0)

quit

If you installed grub after installing the linux-... package, now run update-grub to add your Linux kernel to the boot options.
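For reference, update-grub should generate menu.lst entries roughly like this hypothetical sketch (the kernel version and device name must match your system, and the paths carry no /boot prefix because /boot is its own partition):

```
title  Ubuntu, kernel 2.6.12-9-k7
root   (hd0,4)
kernel /vmlinuz-2.6.12-9-k7 root=/dev/mapper/via_hfciifae7 ro
initrd /initrd.img-2.6.12-9-k7
```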

Note: These instructions do not work. I have never used grub before so I'm not really sure what I am doing, but I think the first device line should be:

device (hd0,4) /dev/mapper/via_hfciifae5

Because the first partition is the Windows partition, it is partition 5 that holds /boot.

When I make that change though, it still does not work. The root command says the partition table is invalid. The debug messages say it is opening /dev/mapper/via_hfciifae5, which is not the right device if it is trying to read the MBR. It should be reading /dev/mapper/via_hfciifae for that. If I change the command to rootnoverify, it is ok, but then the setup command fails with error 17: cannot mount selected partition.

Configuring the initramfs

For the kernel to recognize the RAID, you have to run the dmraid utility to configure the mapper device, so we need to add dmraid to the initramfs. Debian and Ubuntu support this through mkinitramfs; to add dmraid we need to add a script and a hook:

Create a file /etc/mkinitramfs/scripts/local-top/dmraid containing:

#!/bin/sh
modprobe dm-mod
dmraid -ay

Create a file /etc/mkinitramfs/hooks/dmraid containing:

#!/bin/sh
# copied from /usr/share/doc/initramfs-tools/examples/example_hook

# no pre-requirements
PREREQ=""

prereqs()
{
        echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
        prereqs
        exit 0
        ;;
esac

. /usr/share/initramfs-tools/hook-functions

if [ -x /sbin/dmraid ]; then
        copy_exec /sbin/dmraid sbin
fi

manual_add_modules dm-mod
exit 0

Mark both files as executable and rebuild the initramfs. For this you need the version of the Linux kernel in use: look in /boot at the vmlinuz files; the string after the first dash is the version. I will use 2.6.12-9-k7:

chmod +x /etc/mkinitramfs/hooks/dmraid
chmod +x /etc/mkinitramfs/scripts/local-top/dmraid

rm /boot/initrd.img-2.6.12-9-k7
update-initramfs -c -k 2.6.12-9-k7

Now you can reboot your computer and use your new system.

Set up your system

Now you need to configure some settings:

Use base-config to create a new default user.


CategoryDocumentation CategoryHardware

FakeRaidHowto (last edited 2008-08-06 17:00:01 by localhost)