
How to configure Ubuntu to access a hardware fakeRAID

Back when Ubuntu Breezy preview came out, I spent a week getting it installed on my Via SATA fakeRAID and finally got the system dual-booting WinXP and Ubuntu Linux on a RAID-0 (stripe) between two 36 gig 10,000rpm WD Raptor hard drives. So I thought I would create a howto to describe how I did it so that others could benefit from my work and add related lessons.

This page describes how to get Linux to see the RAID as one disk and boot from it. In my case, I use a RAID-0 configuration, but this should also apply to RAID-1 and RAID-5 (see note at end). For the benefit of those who haven't done some of these steps before, these instructions are fairly detailed (so don't be intimidated by the length of this document -- it's pretty straight-forward).

What is fakeRAID?

In the last year or two a number of hardware products have come on the market claiming to be IDE or SATA RAID controllers. These have shown up in a number of desktop/workstation motherboards. Virtually none of these are true hardware RAID controllers. Instead, each is simply a multi-channel disk controller that has special BIOS and drivers to assist the OS in performing software RAID functions. This gives the appearance of hardware RAID, because the RAID configuration is set up using a BIOS setup screen and the system can be booted from the RAID.

Under Windows, you must supply a driver floppy to the setup process so Windows can access the RAID. Under Linux, which has built-in softRAID functionality that pre-dates these devices, the hardware is seen for what it is -- multiple hard drives and a multi-channel IDE/SATA controller. Hence, "fakeRAID".

If you have arrived here after researching this topic on the Internet, you know that a common response to this question is, "I don't know if you can actually do that, but why bother -- Linux has built-in softRAID capability." Also, it's not clear that there is any performance gain from using hardware fakeRAID under Linux instead of the built-in softRAID capability; the CPU still ends up doing the work. Well, that's beside the point. The point is that a Windows user with a fakeRAID system may very well want to put Linux on that same set of disks. Multiboot configurations are common for cross-over users trying Linux out, for people forced to use Windows for work, and for other reasons. These people shouldn't have to add an additional drive just so they can boot Linux. Also, some people say, "RAID-0 is risky". That's a matter of individual needs (speed vs. security, subject to resource constraints). These debates are not the subject of this HowTo; we assume you want to do it and tell you how.

Installing Ubuntu into the RAID Array

Installing dmraid

The standard setup and LiveCDs do not yet contain support for fakeRAID. I used the LiveCD to boot up, and used the package manager to download the dmraid package from the universe repository. You will need to enable packages from Universe in the settings of Synaptic to see the package. If you are using the DVD you may also need to get the gparted package, which we will use for partitioning your RAID.

NOTE: Support for dmraid has been improved in Ubuntu 6.06, and several of the steps below are no longer necessary. If you install from the LiveCD, install the dmraid package from universe before you start the installer program (Ubiquity). Just make sure you choose your RAID devices under /dev/mapper and do not use the raw devices /dev/sd* for anything. So far, this works for some, while for others Ubiquity crashes. If Ubiquity does not complete the install, you can manually complete the process by following this procedure. In that case, note that the steps no longer required for Ubuntu 6.06 or later have been marked "Ubuntu 5.10".

Partitioning the RAID Array

You can use gparted to create and delete partitions as you see fit, but at this time it cannot refresh the partition table after modifying it, so you will need to change the partitions, then manually run dmraid -ay from the command prompt to detect the new partitions, and then refresh gparted before you can format them. (Of course, you can use parted, fdisk, or other tools if you are experienced with them.)

I needed to resize my existing NTFS partition to make space for Ubuntu. (If you don't need to do this, skip to the next paragraph.) Gparted currently cannot do this on the mapper device, so I had to use the ntfsresize program from the command line. Note that ntfsresize only resizes the filesystem, not the partition, so you have to do the latter manually. Use ntfsresize to shrink the filesystem and note the new size of the filesystem in sectors, then fire up fdisk. Switch fdisk to sector mode with the 'u' command. Use the 'p' command to print the current partition table. Delete the partition that you just resized and recreate it with the same starting sector, using the new size of the filesystem in sectors to compute the ending sector of the partition. Don't forget to set the partition type to the value it was before. Now you should be able to create a new partition in the freed space.
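The end-sector arithmetic is a common source of off-by-one mistakes, so here it is spelled out as a small shell sketch. The numbers are made up for illustration, not taken from my system.

```shell
# Worked example with made-up numbers: the NTFS partition starts at
# sector 63 and the shrunken filesystem is 39062500 sectors long.
START_SECTOR=63
FS_SECTORS=39062500

# fdisk takes an inclusive end sector, hence the minus one.
END_SECTOR=$(( START_SECTOR + FS_SECTORS - 1 ))
echo "recreate the partition as sectors $START_SECTOR-$END_SECTOR"

# If your ntfsresize reports the new size in bytes instead, convert by
# rounding up to whole 512-byte sectors first:
#   FS_SECTORS=$(( (NEW_FS_BYTES + 511) / 512 ))
```

The partition may end at or after this sector; a little slack beyond the end of the filesystem is harmless.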

Start gparted and create the partitions you want for your setup. To begin, use the selector on the upper right to choose the device dmraid has created for your fakeRAID. In my case, this was /dev/mapper/via_hfciifae, with an additional device /dev/mapper/via_hfciifae1 assigned to my already-created NTFS partition. DMRAID will attempt to assign a meaningful name reflecting the controller you are using (e.g., an nvRAID user may see /dev/mapper/nvidia_bggfdgec or the like).

After selecting the unused space, I created an extended partition with 3 logical partitions inside. I made a 50 meg partition for /boot, a 1 gig partition for swap, and the rest for the root. Once you have set up the partitions you want, apply the changes and exit gparted. If you apply changes more than once (e.g., do this in more than one step, or change your mind while working), you should exit gparted, refresh the partition table using the command dmraid -ay, and open gparted again to continue your work.

Formatting the Partitions

Now create a filesystem on each partition. In my case I worked from the command line, running mke2fs on /dev/mapper/via_hfciifae5 and mkreiserfs on /dev/mapper/via_hfciifae7.

Alternatively, you can do this using the GUI in gparted. Run dmraid -ay again to refresh the partition table for gparted and then open gparted again. You will see that the new partitions are designated as "unknown type", because they are not formatted. You can use gparted to format them by right-clicking each partition and selecting "convert" and the appropriate format. Before you exit, make a note of the device mapping for each new partition (you will need this later). Apply the changes and exit. You can also see these mappings with the command dmraid -r.

In my case I had the following mappings:

via_hfciifae  -- the raw raid volume
via_hfciifae1 -- the NTFS partition
via_hfciifae5 -- /boot
via_hfciifae6 -- swap
via_hfciifae7 -- /

Mounting the Temporary File Structure

Next, I created a temporary file structure to hold my new installation while I construct it, and mounted two sets of directories to it: a) the new partitions I had created for / and /boot (so I could install packages to them); b) the currently running /dev, /proc, and /sys filesystems, so I could use these to simulate a running system within my temporary file structure.

mkdir /target
mount -t reiserfs /dev/mapper/via_hfciifae7 /target
mkdir /target/boot
mount -t ext2 /dev/mapper/via_hfciifae5 /target/boot
mkdir /target/dev
mount --bind /dev /target/dev
mkdir /target/proc
mount -t proc proc /target/proc
mkdir /target/sys
mount -t sysfs sysfs /target/sys

Installing the Base System

Now we install the base system. debootstrap installs all base packages and does their setup. Afterwards you need to install some additional packages:

cd /target

apt-get install debootstrap
# install debootstrap, which we will use to install the base system in the next step

# install base system
debootstrap breezy /target  # substitute breezy with whichever release you want

# copy sources list
cp /etc/apt/sources.list /target/etc/apt

# copy resolv.conf
cp /etc/resolv.conf /target/etc

# copy hosts
cp /etc/hosts /target/etc

# run in the now installed system
chroot /target

# install ubuntu-base (and other packages)
apt-get update
apt-get install ubuntu-base linux-k7 ubuntu-desktop dmraid grub
# change grub to lilo if you use lilo
# change k7 to your processor architecture; if you don't know, use linux-386

# when prompted whether you want to stop now, say no (we will later be fixing the issue the system is talking about)
# when prompted whether to create a symbolic link, say yes. (With symlink names that don't change with each kernel update, the corresponding file references used by the bootloader don't have to be updated each time the kernel is updated.)

# the system is installed now.

**Temporary Note to other editors: when I tested this howto with 6.06 LTS on 1 June 2006, the install of dmraid failed (--configure), indicating it was unable to start the dmraid initscript. This may have been some kind of error on my part. I was able to fix this with dpkg-reconfigure dmraid, so I add it here as a possibly useful tip should this turn out to be a systemic problem that others encounter. Also, install dmraid first, then the kernel, in order to use the initramfs scripts that are now part of the 6.06 distribution. This is based on one 6.06 test -- please correct/edit this as appropriate.**

Setting Up the Bootloader for RAID

Now that you have the debian core, ubuntu-base, linux kernel, dmraid, grub, and ubuntu-desktop installed, you can proceed with the bootloader. If you haven't completed these steps successfully, don't attempt to proceed; you will just exacerbate any problem you have at this point.

We will demonstrate the installation of GRUB (Grand Unified Bootloader), but there are several alternatives (e.g., LILO). The key information here is how the normal process for use of the bootloader had to be modified to accommodate the RAID mappings, so this general process should be useful regardless of your choice of bootloader.

Installing the Bootloader Package

Now you need to run the grub shell. In a non-RAID scenario, one might use grub-install, but we cannot because it cannot see the RAID device mappings and therefore cannot set up correct paths to our boot and root partitions. So we will install and configure grub manually as follows:

First, make a home for GRUB and put the files there that it needs to get set up:

mkdir /boot/grub
cp /lib/grub/<your-cpu-arch>-pc/stage1 /boot/grub/
cp /lib/grub/<your-cpu-arch>-pc/stage2 /boot/grub/
cp /lib/grub/<your-cpu-arch>-pc/<the staging file for your boot partition's filesystem> /boot/grub/

The "staging files" have names like "e2fs_stage1_5" (for ext2 or ext3), "reiserfs_stage1_5" (for reiserfs), "xfs_stage1_5" (for xfs), and so on. It is safe to copy them all to your /boot/grub.
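If you would rather copy all the stage files in one step, a small helper like the following works. This is a convenience sketch, not part of the original procedure; the function name is mine, and the real source directory is /lib/grub/<your-cpu-arch>-pc on your system.

```shell
# Sketch: copy GRUB's stage1, stage2, and every *_stage1_5 file from a
# source directory into the boot directory, creating the latter if needed.
copy_grub_stages() {
    src="$1"; dst="$2"
    mkdir -p "$dst"
    cp "$src"/stage1 "$src"/stage2 "$src"/*_stage1_5 "$dst"/
}

# On a real system this would be something like:
#   copy_grub_stages /lib/grub/i386-pc /boot/grub
```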

Next, go into the grub shell:

grub

You should now see the grub prompt.

Next, tell GRUB which device is the boot device:

device (hd0) /dev/mapper/via_hfciifae

In my case, it was the RAID array mapped as /dev/mapper/via_hfciifae.

Next, tell GRUB where all the stuff is that is needed for the boot process:

root (hd0,4)

CAUTION: This is one of the most common sources of error, so we will explain this in excruciating detail. From GRUB's perspective, "root" is whatever partition holds the contents of /boot. For most people, this is simply your linux root (/) partition. E.g., if / is your 2nd partition on the RAID you indicated above as hd0, you would say "root (hd0,1)". Remember that GRUB starts counting partitions at 0. The first partition is 0, the second is 1, and so on. In my case, however, I have a separate boot partition that GRUB mounts read-only for me at boot time (which helps keep it secure). It's my 5th partition, so I say "root (hd0,4)".
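The off-by-one rule can be stated mechanically: GRUB's partition index is the Linux partition number minus one. A trivial sketch, using my separate /boot on partition 5 as the example:

```shell
# /boot is the 5th partition on the array, so GRUB calls it (hd0,4).
BOOT_PARTITION=5
GRUB_ROOT="root (hd0,$((BOOT_PARTITION - 1)))"
echo "$GRUB_ROOT"
```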

Optional: if GRUB complains about bad cylinder numbers, you may need to tell it about the device's geometry (cylinders, heads, and sectors per track); if it did not complain, skip this step. You can find this information by quitting GRUB and running: fdisk -l /dev/mapper/via_hfciifae ...then re-enter the GRUB shell and use a command of the form: geometry (hd0) 9001 255 63

Next, now that you've successfully established the "device" and "root", you can go ahead and instantiate GRUB on the boot device. This sets up the stage 1 bootloader in the device's master boot record and the stage 2 boot loader and grub menu in your boot partition:

setup (hd0)

Configuring the Bootloader

Now run update-grub:

update-grub
This adds your newly installed linux kernel, and the associated initial ram disk image, to the boot options menu that grub presents during start-up. This menu lives in the file menu.lst in the /boot/grub directory. We need to edit menu.lst as follows. (CAUTION: Get this right -- this is a common source of error, and mistakes result in a kernel panic upon reboot, so no typos.)

a) "root=": Correct the path that points to the linux root (in several places). update-grub configures hda1 as root because, not being dmraid-aware, it can't find your current root device. Put the correct device mapping for your linux root. So put your equivalent of:

root=/dev/mapper/via_hfciifae7

every place you see "root=" (only where you see root followed by an equals sign). This goes in all the places where update-grub defaulted to root=/dev/hda1 or just left it blank as root= .

Make sure you change this in the Automagic defaults section as well as in each of the multiple alternatives sections that follow. (Important: the Automagic defaults section is nested and therefore uses ## to indicate comments and # to indicate the actual defaults that it uses. So don't "un-comment" the default lines when you edit them; in other words, leave the #.) When you update your kernel later on, update-grub will use these defaults, so it won't ignorantly "assume hda1" and send your system into a kernel panic when you boot. This ought to end up looking something like:

# kopt=root=/dev/mapper/via_hfciifae7 ro

b) "groot": If necessary, correct the grub root. In places, you will see other lines that also refer to "root" (or "groot") but use syntax such as root (hd0,1) instead of a path. As described earlier, these refer to the "root" for grub's purposes, which is actually your /boot. Also, remember grub's syntax uses partition numbering beginning with zero. So, if you have a separate /boot partition, these lines should instead show something like:

root (hd0,4)

(The same information we used while working with grub interactively earlier.) Change this both for the Automagic defaults as well as for each alternative, including the memtest option.

c) An additional edit is required IF you are using a separate /boot partition. The paths pointing to the kernel and initrd must be RELATIVE to the grub "root" (your /boot). So if you are using a separate boot partition, the paths in grub's menu.lst file that help grub locate the linux kernel and initrd will not begin with "/boot", and you should delete that portion of the path. For example, update-grub initially spat out this:

title           Ubuntu, kernel 2.6.15-23-amd64-k8
root            (hd0,0)
kernel          /boot/vmlinuz-2.6.15-23-amd64-k8 root= ro quiet splash
initrd          /boot/initrd.img-2.6.15-23-amd64-k8

... and because I have a separate boot partition and opted not to use a grub splash image (which you can learn about elsewhere), my editing looked like this...

title           Ubuntu, kernel 2.6.15-23-amd64-k8
root            (hd0,4)
kernel          /vmlinuz-2.6.15-23-amd64-k8 root=/dev/mapper/via_hfciifae7 ro quiet
initrd          /initrd.img-2.6.15-23-amd64-k8

NOTE that I removed "savedefault". If you leave this in, you will get a "file not found" error when you try to boot (you also can't use default=saved up top as it shows in the example). Again, if you are not using a separate boot partition, you can leave /boot in the paths.
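Putting edits (a) and (b) together, the Automagic defaults section of menu.lst should end up containing lines shaped like the following. This is a sketch using my via device names and separate /boot; your device name and partition numbers will differ.

```
## default kernel options
# kopt=root=/dev/mapper/via_hfciifae7 ro

## default grub root device
# groot=(hd0,4)
```

Remember that these lines keep their single leading # on purpose; update-grub reads them as its defaults.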

d) To add a static boot stanza for Windows, you can use and change the example in the menu.lst file or the following:

title Windows XP
  rootnoverify (hd0,0)
  chainloader +1

Put it at the bottom, below where it says ### END DEBIAN AUTOMAGIC KERNELS LIST. Or, if for some unforgivable reason you want your computer to boot Windows by default, you can put it up front, above where it says ### BEGIN DEBIAN AUTOMAGIC KERNELS LIST.

e) Close the gaping security hole! First, set a password where the example shows it. This will be required for any locked menu entries, for the ability to edit the boot lines, or to drop to a command prompt. To do this, in the console type:

grub-md5-crypt
When it prompts you "Password:", it's asking what you want to be the GRUB password (not your user password, the root password, or anything else). You will be prompted to enter it twice, then it will spit out the MD5 hash that you need to paste into menu.lst. This line should end up looking something like:

password --md5 $1$gLhU0/$aW78kHK1QfV3P2b2znUoe/

Then, to keep your "recovery mode" boot alternative(s) locked each time update-grub runs, set

# lockalternative=true

in the Automagic defaults section (again, leave the single # in place). Unless you do this, anybody will be able to seize root simply by rebooting your computer (e.g., cutting power to it) and selecting your "recovery mode" menu entry when it reboots, or editing the normal boot line to include 'single' mode.

f) Test the automagic kernels settings (this also completes the locking of alternatives). It is better to find errors now than a month from now, when you've forgotten all this stuff and the kernel gets updated. First, make a backup of menu.lst. Then run update-grub again. Watch for errors, re-examine menu.lst for discrepancies, and correct as needed.

Reconfiguring the Initramfs for RAID (Ubuntu 5.10)

Reminder: Sections Ubuntu 5.10 should be skipped if you are installing Ubuntu 6.06.

In recent years there has been a trend to try and pull a bunch of code out of the kernel and into EarlyUserspace. This includes stuff like nfsroot configuration, md software RAID, lvm, conventional partition/disklabel support, and so on. Early user space is set up in the form of an initramfs which the boot loader loads with the kernel, and this contains user mode utilities to detect and configure the hardware, mount the correct root device, and boot the rest of the system.

Hardware fakeRAID falls into this category of operation. A device driver in the kernel called device mapper is configured by user mode utilities to access software RAIDs and partitions. If you want to be able to use a fakeRAID for your root filesystem, your initramfs must be configured to detect the fakeRAID and configure the kernel mapper to access it.

So we need to add dmraid to the initramfs. Debian and Ubuntu support this by way of a set of shell scripts and configuration files placed in /etc/mkinitramfs/. We must tailor these to include dmraid by plugging in two simple scripts and adding a one-line entry to a configuration file. The only real challenge here is to make sure you don't inadvertently screw up the syntax with a typo.

Note that in Ubuntu 6.06, this is taken care of by the dmraid package itself.

Configuring mkinitramfs in Ubuntu 5.10 (Breezy Badger)

First, create a new file as /etc/mkinitramfs/scripts/local-top/dmraid .

(If you are lazy or don't like to keyboard, you can open this how-to in the browser and copy the text.)


#!/bin/sh

PREREQ=""

prereqs()
{
        echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
        prereqs
        exit 0
        ;;
esac

# load your SATA controller driver here (sata_nv is nvidia; use your own)
modprobe -q sata_nv
modprobe -q dm-mod

# Uncomment next line if you are using RAID-1 (mirror)
# modprobe -q dm-mirror

/sbin/dmraid -ay

Second, create another new file as /etc/mkinitramfs/hooks/dmraid.

(Again for the lazy, you can copy it from your browser. Also, it's only slightly different, so if you are manually typing it for some reason, you may want to start with a copy of the first script.)


#!/bin/sh

PREREQ=""

prereqs()
{
        echo "$PREREQ"
}

case $1 in
# get pre-requisites
prereqs)
        prereqs
        exit 0
        ;;
esac

. /usr/share/initramfs-tools/hook-functions

copy_exec /sbin/dmraid /sbin

exit 0

Third, mark both of these new initramfs scripts as executable:

chmod +x /etc/mkinitramfs/hooks/dmraid
chmod +x /etc/mkinitramfs/scripts/local-top/dmraid

Last, add the line dm-mod to the file /etc/mkinitramfs/modules. Make sure the file ends with a newline. If you use a RAID-1 (mirror), include dm-mirror as well.
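The two requirements here -- no duplicate entry and a terminating newline -- are easy to get wrong by hand, so here is the same edit as a defensive shell helper. This is a convenience sketch, not part of the original procedure; the function name is mine.

```shell
# Sketch: add a module name to an initramfs modules file, keeping exactly
# one copy of the entry and making sure the file ends with a newline.
add_initramfs_module() {
    modfile="$1"; module="$2"
    # If the file exists and its last byte is not a newline, add one, so
    # our entry does not get glued onto the end of the previous line.
    [ -s "$modfile" ] && [ -n "$(tail -c1 "$modfile")" ] && echo >> "$modfile"
    grep -qx "$module" "$modfile" 2>/dev/null || echo "$module" >> "$modfile"
}

# On a Breezy system this would be used as:
#   add_initramfs_module /etc/mkinitramfs/modules dm-mod
#   add_initramfs_module /etc/mkinitramfs/modules dm-mirror   # RAID-1 only
```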

Updating the initrd (Ubuntu 5.10)

Now the big moment -- use initramfs to update the initrd file. Below, I show the kernel I installed at that time, but the version string following "img-" and following "-c -k " must reflect the version YOU are using (e.g., "2.6.12-10-amd64-k8-smp" or whatever). Two commands:

rm /boot/initrd.img-2.6.12-9-k7
update-initramfs -c -k 2.6.12-9-k7

Now you are ready to set up the new system.

Preconfiguring the New System as Usual

Ensure that you are still operating as root within the new (temporary) system (i.e., your prompt will be root@ubuntu#). If not, enter the chroot again: sudo chroot /target

(The process from here forward is the same as any bootstrap / network installation, and there are other sources to refer to for more detail.)

UBUNTU 5.10: Enter the command base-config new to configure system defaults.

**UBUNTU 6.06: base-config is deprecated in Dapper Drake. The correct procedure needs to be inserted here. Theoretically, one could do what base-config does manually.**

While it is not absolutely necessary, it may be useful to also copy the live hosts and interfaces files into your temporary system before rebooting (after exiting your chroot):

cp /etc/hosts /target/etc/hosts
cp /etc/network/interfaces /target/etc/network/interfaces


It will also be helpful to configure your fstab file at this point. One easy way to do this is:

cat /etc/mtab

(select and copy everything)

nano /target/etc/fstab

(paste everything)

Then delete everything except the proc line, and the lines that refer to your RAID partitions. It might end up something like this (yours will vary - people asked for examples):

#FileSystem                     MountPoint      Type       Options      Dump/Pass

proc                            /proc           proc       rw              0 0
/dev/mapper/via_hfciifae5       /boot           ext3       defaults        0 2
/dev/mapper/via_hfciifae7       /               reiserfs   notail,noatime  0 1
/dev/mapper/via_hfciifae6       none            swap       sw              0 0

Another example, from an nvidia fakeRAID system:

#[fs                       ]  [fs_mount][fs_type][ fs_opts ][dmp][pass]
/dev/mapper/nvidia_bggfdgec2    /boot     ext3    defaults    0    1
/dev/mapper/nvidia_bggfdgec3    none      swap    sw          0    0
proc                            /proc     proc    rw          0    0
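The copy-from-mtab-and-prune step above can also be expressed as a single filter. A sketch (the function name is mine, not a standard tool): keep only the proc line and the /dev/mapper lines from an mtab-style file.

```shell
# Sketch: print only the lines of an mtab/fstab-style file that start
# with "proc" or refer to a /dev/mapper device.
seed_fstab() {
    grep -E '^(proc[[:space:]]|/dev/mapper/)' "$1"
}

# On the LiveCD this would be used roughly as:
#   seed_fstab /etc/mtab > /target/etc/fstab
# then adjust mount points, options, and dump/pass numbers by hand.
```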

Finally you are ready to reboot. This first time, select the "recovery mode" option. When it asks, you want to "perform maintenance". Set the root password:

passwd
Suggested early set-up tasks: adduser yourself (create a regular user); nano /etc/group (create an admin group); visudo (duplicate the root line, but with %admin where root was). Then reboot, and you should be able to log in as a normal user through gdm and continue normally with sudo privileges.

(more is needed here, or a reference to whatever replaces the howto that describes a general debootstrap install)

Upgrading to Ubuntu 6.06 (Dapper Drake)

The dmraid package in Ubuntu 6.06 has the necessary scripts included (under /usr). After upgrading the dmraid package, you can therefore delete the old scripts that you've made (under /etc). To be sure the package scripts are baked into the initrd, update the initrd again by reconfiguring dmraid:

sudo rm /etc/mkinitramfs/hooks/dmraid
sudo rm /etc/mkinitramfs/scripts/local-top/dmraid
sudo dpkg-reconfigure dmraid

[http://samokk.is-a-geek.com/wordpress/2006/01/15/running-ubuntu-gnulinux-on-a-fakeraid1-mirroring-array/ Running Ubuntu On a Fakeraid/1 array] describes how to adapt the original HOWTO to a FakeRAID/1 (mirroring) array.

Special Note for Raid 5

While trying to install dmraid for a RAID-5 nvidia setup, I received an error 139 forced exit; upon further investigation in the TODO doc in /usr/share/doc/dmraid, I found that dmraid doesn't support RAID modes above 1 yet. Here's the exact wording from the TODO: "higher RAID levels above 1; main restriction to support these is the need for device-mapper targets which map RAID3,4,5."

EDIT: Further research has led me to dmraid 1.0.0.rc10, whose changelog notes RAID-5 support for nvidia. The current Ubuntu version is 1.0.0.rc9, which explains the lack of RAID-5 support. Will update with more info on how well it works.

NOTE: The kernel device mapper (which dmraid depends on) does not yet support RAID 5. There are some early development patches available, so they might get merged into Linus's kernel in time for Dapper+1, but I'd say it's not all that likely.

CategoryDocumentation CategoryHardware

FakeRaidHowto (last edited 2008-08-06 17:00:01 by localhost)