Created: 2005-04-25 by MattZimmerman
- Malone Bug:
UduSessions: 1, 4, 8, etc
Establish a strategy to support cluster-oriented filesystems (CFS) in Ubuntu.
We have a great opportunity to capitalize on growing interest in the clustering space, thanks to Ubuntu's suitability for infrastructure deployments and its predictable release and support cycles. We have already seen interest from cluster administrators, and CFS support has been cited as a major feature that would tip the scales in favor of Ubuntu. It would also strengthen Ubuntu's reputation as a server operating system.
GFS is already in Ubuntu! It doesn't scale as well as the other CFSs, but it is well known and quite popular.
OCFS2 is in beta, and may not be ready or appropriate for BreezyBadger. Jeff will contact Oracle about collaborating on this.
Lustre is now on a delayed Free release cycle, and is administratively expensive, so without further discussion and/or support from clusterfs.com, it is unlikely that we will support Lustre in Ubuntu under our own steam.
- kernel side of GFS is already in our development tree. It will hit Breezy at the first upload.
- done - 09/05/2005 with the 2.6.12 kernel upload. The gfs module is available everywhere. NOTE: the kernel side of the RH cluster suite is not complete yet. Apparently there are a few issues with 64-bit arches and PREEMPT.
- the userland part of GFS and the entire RH cluster suite are already being packaged. They have been beta-tested on a 14-node cluster in Germany. (The first release in Breezy is expected during the week of May 13th, 2005.)
- done - 10/05/2005. gfs-tools from universe can handle the GFS module in our kernel with no problems.
- the remaining part of the cluster suite can be demoted to LowPriority. (done - 10/06/2005)
- there will be no d-i support for installing onto a CFS in Breezy. We will evaluate the option for Breezy+1 based on user input.
- as above, there will be no root filesystem over CFS.
Data Preservation and Migration
1. linux-image-* (done)
User Interface Requirements
1. Configuring a cluster is not something that can be done easily: these environments are highly specific to the application that will run on them. We will provide an example configuration to start with, and grow it based on user/administrator input.
* 2005-07-15 uploaded a GUI tool to configure the cluster suite.
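As a starting point, the example configuration mentioned above could look roughly like this - a minimal, hypothetical /etc/cluster/cluster.conf for a two-node cluster using manual fencing (node names and the fencing choice are placeholders, not a tested Ubuntu default):

```xml
<?xml version="1.0"?>
<cluster name="example" config_version="1">
  <clusternodes>
    <clusternode name="nodeb" votes="1">
      <fence>
        <method name="single">
          <device name="manual" nodename="nodeb"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="nodec" votes="1">
      <fence>
        <method name="single">
          <device name="manual" nodename="nodec"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- fence_manual requires operator intervention; fine for a demo,
         not for production -->
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>
```

Real deployments would replace fence_manual with a proper fence device (power switch, iLO, etc.).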
- Plan for ongoing maintenance of user-space components and synchronization with the kernel code (2005-07-14 solved with some Depends: kernel-modules magic)
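The "Depends magic" mentioned above could look roughly like this in debian/control - a hypothetical sketch in which the userland tools depend on a module package matching the kernel ABI (the package names here are illustrative, not the actual archive names):

```
Package: gfs-tools
Architecture: any
Depends: ${shlibs:Depends}, gfs-modules-2.6.12
Description: Global File System userland tools
 Tools to create, mount and manage GFS filesystems.
 The dependency keeps the userland in sync with the
 kernel module ABI it was built against.
```

This way an upload of a new kernel ABI forces a matching rebuild/upgrade of the userland tools rather than silently leaving them mismatched.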
- d-i and kickstart integration for automated installs with GFS
Addition after Approval
At Mark's request:
- OCFS2 is now in the kernel
- OCFS2 tools have been uploaded.
The solution has been tested on i386 and ppc (by one of the Red Hat upstream developers).
Approved -- ColinWatson
OCFS2 test case
To test OCFS2, a minimal setup of 3 machines is required (or 2 machines and a SAN/blade), but only 2 of them will participate in the cluster.
- Install kernel linux-source-2.6.12 (2.6.12-1.2) or higher on all the machines.
- The 3 machines need to be on the same LAN (Layer 2) network.
- Make sure to have a spare partition of at least 256MB available that will be trashed (or use a loop device - see the vblade docs). It can be any kind of block device (rw).
- Install vblade (universe) if you don't have a SAN, and export the partition as an AoE device.
- Import the block device on the other machines with modprobe aoe.
- Check via dmesg that the block device has been imported correctly and that the device in /dev/etherd/ has been created.
- Install ocfs2-tools and ocfs2console (both in universe).
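The vblade/AoE steps above can be sketched as follows - a minimal example assuming machine A exports /dev/sdb1 over eth0 (the device name, interface, and shelf/slot numbers are illustrative):

```sh
# On machine A (playing the SAN): export the spare partition
# as AoE shelf 0, slot 0 on eth0
sudo apt-get install vblade
sudo vbladed 0 0 eth0 /dev/sdb1

# On machines B and C: load the AoE initiator and check for the device
sudo modprobe aoe
dmesg | tail          # should show the aoe device being discovered
ls /dev/etherd/       # the exported device appears as e0.0
```

These commands need root and real (or virtual) network hardware, so they are a setup sketch rather than something to run blindly.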
Machines B and C will play the cluster dance:
- Use ocfs2console to configure the cluster and to format the device as OCFS2. Note that it is enough to format the device from one node only; the other will detect that it has been formatted (use the refresh button).
- Mount the device on both machines (still using the console).
- Verify that the cluster (DLM) is communicating properly by creating a file on the device; the file should appear on the other machine.
- Perform any kind of disk I/O operation on it and verify the contents of the data written.
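For reference, ocfs2console records the cluster layout in /etc/ocfs2/cluster.conf; a minimal two-node file looks roughly like this (node names and IP addresses are examples - the file must be identical on all nodes):

```
cluster:
	node_count = 2
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.1.2
	number = 0
	name = nodeb
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 192.168.1.3
	number = 1
	name = nodec
	cluster = ocfs2
```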
NOTE: at this point in time the cluster is NOT started at boot time. The scripts are there, but they are configured to be quiet at boot.