Summary

Release Note

Rationale

EC2 (and thus UEC) instances are not able to control which kernel is booted. This differs from installations on physical hardware, where the operating system can install a new kernel, update the bootloader, and reboot into the new kernel. It is a common point of confusion for users of EC2.

Depending on what we can achieve in this spec, we would like to either:

User stories

Assumptions

Design

Implementation

The overall goal here is to have a functional grub configuration that is updated on kernel installation inside both EC2 and UEC instances. That configuration will be read by a grub bootloader that lives separately from the image itself.

When running in EC2, the bootloader used will have to be grub 0.97. Under UEC, grub-pc is expected to be used.

In both UEC and EC2, a "kernel" is loaded by the hypervisor. This does not have to be a traditional Linux kernel; it can instead be a bootloader. On EC2 we are provided with grub-0.97 bootloaders that are hard-coded to look at (hd0,0)/boot/grub/menu.lst. To keep a similar path between UEC and EC2, we expect to provide a grub2 bootloader, loaded via 'kvm -kernel', that will be hard-coded to read (hd0,0)/boot/grub/grub.conf.
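For illustration, a minimal grub-legacy menu.lst of the sort the EC2-provided bootloader would read from (hd0,0)/boot/grub/menu.lst might look like the following. The kernel version, root device, and console arguments here are assumptions for the sake of example, not a fixed format:

```
default 0
timeout 0

title Ubuntu, kernel 2.6.32-305-ec2
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.32-305-ec2 root=/dev/sda1 ro console=hvc0
    initrd /boot/initrd.img-2.6.32-305-ec2
```

The point of the design is that kernel package installation keeps this file current, while the bootloader that reads it stays outside the image.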

Expected Path

General Notes

EC2 Notes

UEC Notes

Hurdles / Questions

Random Thoughts

Migration

The one migration path that we would hope to support is an upgrade from a 10.04 LTS EBS image to 10.10. It is expected that this should be achievable via the normal 'do-release-upgrade'.
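As a sketch, the expected flow on a running 10.04 LTS EBS-backed instance would be the usual release-upgrade sequence (whether the new kernel actually takes effect on reboot is exactly what the design above has to provide):

```
# on the 10.04 LTS EBS-backed instance, as root
apt-get update
apt-get install update-manager-core     # provides do-release-upgrade

# allow upgrading from an LTS to a non-LTS release
sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades

do-release-upgrade                      # upgrade to 10.10
reboot                                  # the new kernel only boots if the
                                        # external bootloader reads the
                                        # updated grub config in the image
```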

Test/Demo Plan

The following test cases should apply:

Other info

BoF agenda and discussion

If you accept the limitation that the guest OS cannot directly modify its kernel, then we can ease the pain and confusion for the user on this topic:
 * When a desktop user installs a new kernel via upgrade, they are notified that they need to reboot.  I'm not sure if there is similar functionality in the server install.  We could use similar functionality to tell the user what their options are, even providing cut-and-paste command lines for EBS volumes.
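As an example of the cut-and-paste text such a notification could carry for an EBS-backed instance (the instance and kernel image IDs are placeholders, and the exact flow is an assumption about how Amazon would expose this):

```
# run from a machine with the EC2 API tools configured
ec2-stop-instances i-12345678
ec2-modify-instance-attribute i-12345678 --kernel aki-9abcdef0
ec2-start-instances i-12345678
```

This only switches the registered kernel image; it does not help with kernels installed by packages inside the guest, which is the harder problem this spec is about.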

If you choose not to accept that limitation, the following are some things that could be done to address it directly:
 * ksplice: we probably should make sure that ksplice works in kvm and linux-ec2 kernels.  I would expect it to work out of the box for kvm, but the xen kernel might throw some hiccups in.
 * kboot / kexec: Ubuntu could register kboot kernels and ramdisks that functioned on ec2 or provide images that function in kvm.  Then, the kernel and ramdisk that were registered with the image would provide nothing more than a bootloader.  Per jjohansen, the xen patches conflict with the kexec function of the kernel.  Thus, in order to make this an option, we may need to have pv_ops kernels, rather than xen kernels.
 * I just read an article about gPXE, and wonder if it might be possible to utilize this (or another) bootloader as a 'kernel' in ec2.
 * Actually modify UEC to support "full virt", where instead of loading a kernel, it would let the bootloader installed in the image take over.  This may cause some confusion on ec2 (i.e., if grub were in the image).
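To make the kboot option above concrete, the in-guest step it relies on is a plain kexec load-and-exec, roughly as follows. The kernel version string is illustrative, and this requires kexec support in the running kernel, which is exactly what the current xen patchset breaks:

```
KVER=2.6.35-1-virtual            # whatever kernel the package upgrade installed
kexec -l /boot/vmlinuz-${KVER} \
      --initrd=/boot/initrd.img-${KVER} \
      --append="$(cat /proc/cmdline)"
kexec -e                         # jump into the new kernel without going
                                 # back through the hypervisor's boot path
```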


Other topics:
 - kboot (smoser's pipe dream of guest managed kernels).  kboot depends on kexec, and kexec is incompatible with xen kernels.  That said, John hopes to try again with pv_ops kernels on ec2.



== Kernel Goal for Maverick ==
 * Single, merged kernel for both EC2 and Server
   - with working kexec
     - current xen patchset is incompatible with kexec
   - working pv_ops
   - could get to kboot eventually
   - kernel team wants to drop the xen patchset if possible
   - needs something that boots and works in every zone
 * Flavours are easier to maintain for the kernel team than a different top level tree

== Motivation ==
 * Why is upgrading your instance's kernel important at all?
 * So what if you have to kill your instance and start a new one?  That is the cloud model.
  * EBS root volumes give you persistent storage, and users of those could benefit from this
 * For S3-backed instances, applying kernel security updates is the main driver

== Amazon ==
 * Will Amazon even allow guests servicing their own kernel?  Possibly against the ToS?
  * Amazon's concerns:
    * Security
    * Stability
     * Bad guest kernels can take down the hosts
 * Will discuss this in advance with Amazon before dedicating development effort on our part
  * Might require modifications to their Xen kernels (?)

== UEC ==
 * Even if Amazon doesn't allow this, we could enable it for UEC/Eucalyptus
 * Some admins simply want to update their guest kernels
 * kexec (and kboot) should be doable inside of KVM
  * see also pygrub

== ksplice ==
 * Would be really nice to apply security patches without rebooting (and solve that piece of this problem)
 * However, ksplice support in ec2 kernels (if feasible) would move the ec2 kernel further from the distro kernel


CategorySpec

ServerMaverickCloudKernelUpgrades (last edited 2010-07-20 16:15:33 by 193)