InstallOnBcache

How to install Ubuntu Server on a bcached rootfs

Experimental
You can read a little more about bcache here: Bcache

Summary

With this how-to we will install Ubuntu Server on a bcached rootfs. The server in this setup has two hardware RAIDs and one SSD:

  • One SSD 120GB
  • One HW RAID1 465GiB
  • One HW RAID10 3,8TiB

For this guide we will set up the disks like this:

  SSD (/dev/sdc1 here):
   * All LVM2

  RAIDs:
   * RAID1  (/dev/sda1 here): 1GiB for /boot
   * RAID1  (/dev/sda2 here): the rest for LVM2
   * RAID10 (/dev/sdb1 here): full LVM2

Then on LVM we will create these logical volumes:

   SSD:
    * 12GiB for swap (8GiB RAM)
    * 64GiB for cache
    * Rest is reserved for other purposes.

   RAID1:
    * One big partition for root (it will host containers and VMs, so it needs some space)
   RAID10:
    * One big partition for /srv (it will store data for users and services)

Installation Guide

Live DVD setup

For setting up the bcache root, it is easier for now to start from a live desktop DVD and debootstrap our server installation.

Download a live Ubuntu 15.04 Desktop amd64 ISO (Ubuntu Download Page), then boot your server with it.

Select 'Try Ubuntu', then once you reach the desktop, open a terminal (search for gnome-terminal).

   Optional:
   I use an apt-cacher-ng proxy locally, so:
    # sudo su
    # echo 'Acquire::http::Proxy "http://IP.OF.THE.PROXY:3142";' > /etc/apt/apt.conf.d/01Proxy


Now we will install the necessary tools for our setup:

    # apt-get update && apt-get install -y mdadm lvm2 bcache-tools

   Note: lvm2 should already be on the ISO; we install mdadm in case you want to use a Linux software RAID stack.
         mdadm setup is not covered in this guide.

Now you need to set up your partition layout:

Disk Layout Setup

Creation of the partitions for the LVM Physical Volumes, with sgdisk:

   Create one big partition on the SSD:
   Remove any remaining filesystem, RAID or partition-table signatures:
   # wipefs -af /dev/sdc
   Zap any existing partition table structures on /dev/sdc (only needed in some rare cases):
   # sgdisk -Z /dev/sdc
   Create the partition:
   # sgdisk -n 1 /dev/sdc
   Reload the partition table:
   # partprobe
   Verify the disk setup:
   # sgdisk -p /dev/sdc

   Create two partitions on RAID1: the first of 1GiB for /boot, the second with the remaining space:
   # wipefs -af /dev/sda
   For my particular (old) server, I need MBR boot, so I add -m to convert GPT to MBR here:
   # sgdisk -Z -m /dev/sda
   Now use fdisk to create two partitions, the first bootable and 1G in size, the second taking the rest:
   # echo -e 'o\nn\np\n1\n\n+1G\na\nn\np\n\n\n\nw\n' | fdisk /dev/sda
   # partprobe
   # fdisk -l /dev/sda
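
   If your server boots fine from GPT, a sketch of the pure-sgdisk equivalent (untested on this hardware,
   which needed MBR): -n takes partnum:start:end, where 0 means the default value and +1G a relative size.
   # sgdisk -Z /dev/sda
   # sgdisk -n 1:0:+1G -n 2:0:0 /dev/sda
   # partprobe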

   Create one big partition on RAID10:
   # wipefs -af /dev/sdb
   # sgdisk -Z /dev/sdb
   # sgdisk -n 1 /dev/sdb
   # partprobe
   # sgdisk -p /dev/sdb
  • If your disks were not empty, you may see errors saying that the kernel still uses the old partition table. In that case, reboot, redo the steps to install lvm2, bcache-tools, and mdadm if needed, and continue from here; otherwise pvcreate and the following commands may fail. In particular, if you do not reboot before installing bcache, you will have to purge all bcache signatures from the disks and re-create everything, with reboots in between.

Formatting the /boot partition (here /dev/sda1):

   # mkfs.ext4 /dev/sda1

Creating Physical Volumes for LVM:

   # pvcreate /dev/sda2
   # pvcreate /dev/sdb1
   # pvcreate /dev/sdc1

This is what it looks like after those commands (lsblk):

   # lsblk

   NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
   sda       8:0    0 465,3G  0 disk
   ├─sda1    8:1    0  1023M  0 part
   └─sda2    8:2    0 464,3G  0 part
   sdb       8:16   0   3,7T  0 disk
   └─sdb1    8:17   0   3,7T  0 part
   sdc       8:32   0 111,8G  0 disk
   └─sdc1    8:33   0 111,8G  0 part
   sdd       8:48   1  58,9G  0 disk
   ├─sdd1    8:49   1   1,1G  0 part
   └─sdd2    8:50   1   2,2M  0 part
   loop0     7:0    0     1G  1 loop /rofs


LVM Setup

Now we will create the Volume Groups:

   # vgcreate ssd /dev/sdc1
   # vgcreate RAID1 /dev/sda2
   # vgcreate RAID10 /dev/sdb1

Now we will create the Logical Volumes:
Create swap space:

   # lvcreate -n swap -L 12G ssd
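
Note: the guide itself never initializes this LV as swap; at some point before using it (now, or later from the chroot), you will also want to run:

   # mkswap /dev/mapper/ssd-swap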

Create a cache space:

   # lvcreate -n cache -L 64G ssd

Free space on the SSD for other uses:

   # lvcreate -n free -l 100%FREE ssd

Create root Logical Volume:

   # lvcreate -n root -l 100%FREE RAID1

Create /srv Logical Volume:

   # lvcreate -n srv -l 100%FREE RAID10

Bcache Setup


Here we will have one cache serving two backing devices. We could instead have created two caches and used a 1:1 cache:backing setup, to guarantee a fixed amount of cache per backing device (e.g. create two LVM cache partitions of 12G and 52G). In this setup the cache is shared between both devices.
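
For reference, a minimal sketch of that 1:1 alternative, assuming hypothetical LV names cache1 and cache2 (the rest of this guide keeps the shared cache):

   # lvcreate -n cache1 -L 12G ssd
   # lvcreate -n cache2 -L 52G ssd
   # make-bcache --writeback -B /dev/mapper/RAID1-root -C /dev/mapper/ssd-cache1
   # make-bcache --writeback -B /dev/mapper/RAID10-srv -C /dev/mapper/ssd-cache2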

Bcache Creation

Then create the bcache devices for both RAIDs:

   To avoid left-overs:
   # wipefs -af /dev/mapper/ssd-cache
   # wipefs -af /dev/mapper/RAID1-root
   # wipefs -af /dev/mapper/RAID10-srv
   For the same reason, we add --wipe-bcache:
   # make-bcache --writeback --wipe-bcache -B /dev/mapper/RAID1-root -B /dev/mapper/RAID10-srv \
                                           -C /dev/mapper/ssd-cache

Notes: make-bcache is the command that creates the bcache devices; it takes several options.
       --writeback: better performance, but in production you may not want this mode for safety reasons.
       -B refers to the backing devices; here we use one cache to serve two disks.
       -C refers to the caching device; multiple caching devices are not yet supported, use mdadm as a workaround if needed.
       --wipe-bcache overwrites any previous bcache superblocks, destroying previous bcache data.
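
If you want to check the result, or move away from writeback later in production, the superblock and the cache mode can be inspected through bcache-tools and sysfs (a sketch; the bcacheX number depends on your setup):

   # bcache-super-show /dev/mapper/RAID1-root
   # cat /sys/block/bcache0/bcache/cache_mode
   # echo writethrough > /sys/block/bcache0/bcache/cache_mode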

This is what lsblk should show you at this step:

   # lsblk

   NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
   sda                 8:0    0 465,3G  0 disk
   ├─sda1              8:1    0  1023M  0 part
   └─sda2              8:2    0 464,3G  0 part
     └─RAID1-root    252:3    0 464,3G  0 lvm
       └─bcache0     251:0    0 464,3G  0 disk
   sdb                 8:16   0   3,7T  0 disk
   └─sdb1              8:17   0   3,7T  0 part
     └─RAID10-srv    252:4    0   3,7T  0 lvm
       └─bcache1     251:1    0   3,7T  0 disk
   sdc                 8:32   0 111,8G  0 disk
   └─sdc1              8:33   0 111,8G  0 part
     ├─ssd-swap      252:0    0    12G  0 lvm
     ├─ssd-cache     252:1    0    64G  0 lvm
     │ ├─bcache0     251:0    0 464,3G  0 disk
     │ └─bcache1     251:1    0   3,7T  0 disk
     └─ssd-free      252:2    0  35,8G  0 lvm
   sdd                 8:48   1  58,9G  0 disk
   └─sdd1              8:49   1     4K  0 part
   loop0               7:0    0     1G  1 loop /rofs


Format bcache devices

I format both as ext4 (I had a recent issue with bcache + btrfs):

   # mkfs.ext4 /dev/bcache0
   # mkfs.ext4 /dev/bcache1

Partitioning and disk setup are now done; we move on to the server install:

Debootstrap Server

Prepare Debootstrap

We create a mount point and mount the future root filesystem:

   # mkdir -p /media/target
   # mount /dev/bcache0 /media/target

We will install our Ubuntu server files with debootstrap, so first install the tool:

   # apt-get install -y debootstrap

I prepare the environment for debootstrap; I want it to use my proxy:

   # export https_proxy=http://IP.OF.THE.PROXY:3142
   # export http_proxy=http://IP.OF.THE.PROXY:3142
   # export ftp_proxy=http://IP.OF.THE.PROXY:3142

Debootstrap

   # debootstrap --arch amd64 vivid /media/target http://archive.ubuntu.com/ubuntu/
   Note: you may use a different mirror (I use http://mirrors.ircam.fr/pub/ubuntu/archive)

Chroot Setup

Then, prepare chroot:

   # cd /media/target

Copy the proxy and DNS setup for this new install:

   # cp /etc/apt/apt.conf.d/01Proxy /media/target/etc/apt/apt.conf.d/
   # cp /etc/resolv.conf /media/target/etc/resolv.conf

Mount the system mounts:

   # mount -t proc proc proc
   # mount -o bind /dev/ dev
   # mount -o bind /sys sys

Setup Installation

Chroot to the new installation

Chroot into target install root:

   # chroot .

Prepare Mounts

Use lsblk to identify the bcache number of your srv device (it can be either 0 or 1 on your particular setup):

   # mount /dev/bcacheX /srv
   # mount /dev/sda1 /boot
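
   If in doubt, the sizes make the two bcache devices easy to tell apart (an optional quick check):
   # lsblk -o NAME,SIZE /dev/bcache0 /dev/bcache1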

Install Kernel

Now proceed with final setup of the installation (kernel + grub setup + packages + users):

   # apt-get update 

We will need to install a kernel, of course; here we use the generic one, but you can choose another. To list the available ones, use
'apt-cache search linux-image':

   # apt-get install linux-image-generic

   When grub asks you where it should be installed, in this setup:
    - Select your booting device, not the /boot partition (i.e. select /dev/sda)

For a standard Ubuntu server experience:

   # apt-get install -y server^ standard^

Update Grub2

On Vivid, there is a warning message for now, so we will comment out the GRUB_HIDDEN_TIMEOUT options:

   # vi /etc/default/grub (comment out the GRUB_HIDDEN_TIMEOUT lines)
   # update-grub2

Install mandatory packages

We need to install lvm2 and bcache-tools (which add the needed udev hooks), and mdadm in case we used a Linux software RAID option. These are really important: without them the system won't boot.

   # apt-get install -y lvm2 bcache-tools mdadm 
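
Installing these packages should pull the needed hooks into the initramfs automatically; as an optional safety step, you can regenerate it explicitly:

   # update-initramfs -u -k all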

Setup fstab

Now we need the UUIDs of the disks, to be sure to mount the right device on the right mount point: bcache numbering is not guaranteed (bcache0 can become bcache1 at the next boot). For this, we use the blkid tool:

   # blkid

and set up /etc/fstab, e.g.:

   # blkid | grep "/bcache" | cut -d'"' -f2 > /etc/fstab
   # cat /proc/mounts >> /etc/fstab
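
Alternatively, a small loop can print near-complete fstab lines for the bcache devices (a sketch; /MOUNT/POINT is a placeholder to fill in by hand):

   # for d in /dev/bcache?; do echo "UUID=$(blkid -s UUID -o value $d) /MOUNT/POINT ext4 rw,relatime 0 0"; done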

Now edit properly:

   # vi /etc/fstab

File /etc/fstab example:

   UUID=ebf07bc7-e45d-4bc6-95d5-ca120cc6c135 / ext4 rw,relatime,data=ordered 0 0
   UUID=e118a9ab-539b-406d-af91-78888b945fb7 /srv ext4 rw,relatime,data=ordered 0 0 
   #/dev/sda1 is the boot partition
   UUID=ed353670-703f-4cb1-8a2a-fee649ab97bf /boot ext4 rw,relatime,data=ordered 0 0
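
If you use the swap LV created earlier, it also needs an fstab line; a minimal example (the LVM device path is stable, so no UUID is needed):

   /dev/mapper/ssd-swap none swap sw 0 0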

Setup Networking

Now edit /etc/hostname (e.g. bcache_server.example.com):

   # vi /etc/hostname

Now edit /etc/hosts, to add 'bcache_server bcache_server.example.com' to the 127.0.0.1 line:

   # vi /etc/hosts

Configure the network interface (according to your specific needs):

   # vi /etc/network/interfaces

File /etc/network/interfaces example:

   auto eth0
   iface eth0 inet dhcp

Install openssh-server:

   # apt-get install -y openssh-server

Create a user

Setting up users: create a user, add it to the sudo group (the old admin group is deprecated), and set the root password:

   # adduser kick
   # adduser kick sudo
   # passwd

Reboot to your new system

Exit the chroot and reboot:

   # exit
   # sudo reboot

On your new system, df -h should show something like this:

   # df -h

   Filesystem      Size  Used Avail Use% Mounted on
   udev            3.8G     0  3.8G   0% /dev
   tmpfs           774M  8.8M  765M   2% /run
   /dev/bcache1    458G  791M  434G   1% /
   tmpfs           3.8G     0  3.8G   0% /dev/shm
   tmpfs           5.0M     0  5.0M   0% /run/lock
   tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
   /dev/sdc1      1008M   40M  918M   5% /boot
   /dev/bcache0    3.6T   68M  3.4T   1% /srv
   tmpfs           774M     0  774M   0% /run/user/0

You should now have a fresh working server on a bcached rootfs; time to test lxd?
