Ubuntu Server for IBM zSystems and LinuxONE

  • Ubuntu Server for IBM zSystems and LinuxONE is the Ubuntu Server version compiled for the s390x architecture (64-bit mainframe).
  • Ubuntu Server for IBM zSystems and LinuxONE (s390x) has been available since the 16.04 LTS (Xenial Xerus) release; note, however, that Ubuntu 16.04 has meanwhile reached its end of standard support.
  • The target architecture is zEC12 (zBC12) and up (alternate -march/-mtune switch: "arch10"), starting with 16.04.
  • Ubuntu Server s390x will not even boot/IPL (including its installer) on z196/z114 or older machines!
  • Starting with 20.04, the minimal architecture level set was raised to z13/z13s (using -march=z13 and -mtune=z15); hence 20.04 no longer runs on zEC12/zBC12.
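Whether a given machine meets that minimum can be judged from the machine type reported in /proc/cpuinfo. The following sketch is only an illustration - the helper name is hypothetical, and the machine-type numbers are assumptions taken from public IBM hardware documentation:

```shell
# Hypothetical helper: decide whether a given IBM Z machine type
# (the "machine" field in /proc/cpuinfo on s390x) meets the z13/z13s
# minimum required by Ubuntu 20.04. The machine-type numbers below
# are assumptions based on public IBM hardware documentation.
meets_2004_minimum() {
    case "$1" in
        # z13, z13s, z14, z14 ZR1, z15, z15 T02, z16 (assumed type numbers)
        2964|2965|3906|3907|8561|8562|3931|3932) return 0 ;;
        # zEC12/zBC12 (2827/2828) and anything older or unknown
        *) return 1 ;;
    esac
}

# On a real s390x system the machine type could be extracted roughly like:
#   machine=$(sed -n 's/.*machine = \([0-9]*\).*/\1/p' /proc/cpuinfo | head -n1)
```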


ISO images

  • Access to the ISO and image downloads is provided via the following pages: http://www.ubuntu.com/download/server/linuxone or https://www.ubuntu.com/download/server/s390x.

    • The default download link that is provided always points to the latest Ubuntu Server release.
  • Via the alternative options link that is provided as well, one can navigate to any other Ubuntu Server LTS release and image that is still in service.
  • If an old or outdated Ubuntu Server release is needed (for whatever reason), it can be found here: http://old-releases.ubuntu.com/releases/. Needless to say, it is strongly recommended to always use Ubuntu Server releases that are still in service and have the latest updates applied!

  • The ISO images contain multiple files in the /boot/ sub-directory for booting/IPLing on LPAR, z/VM, and KVM.

Cloud images

In addition to the ISO images, pre-installed Cloud images are available:

These are intended for direct use in KVM, OpenStack and other Cloud-like environments.

Container images

Finally, there are also many container images (not only for Ubuntu releases) available for direct use in container infrastructures like lxc and LXD.

The Linux container image server provides a full list. There are, of course, also ways for Creating custom LXD images.
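As a quick illustration, launching an Ubuntu container on an s390x host with LXD looks the same as on any other architecture - the release and container names below are just examples:

```shell
# Launch an Ubuntu 22.04 container named "test" (example names);
# LXD pulls the matching s390x image from the image server automatically.
$ lxc launch ubuntu:22.04 test

# Verify the container architecture from inside (should print s390x):
$ lxc exec test -- uname -m
```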

Updated installer

In some rare cases it might be required to use or test an updated installer. Since the installer is usually only shipped as part of an installation (ISO) image (including the 'point' releases), the time until the next installer becomes available needs to be bridged. An updated installer can be taken from here:

Starting with 20.04, subiquity became the new server (live) installer for s390x. Since then, no updated installer versions are available for direct download; instead, one is automatically notified during installation in case an updated version exists, and can opt in to update on the fly.

Note: If one of the above (old) links is not available (like shortly after a new Ubuntu release), then there is simply no updated installer, yet. Make sure the 'updated' installer is really taken from xenial-updates (respectively bionic-updates), rather than from xenial (bionic) without '-updates', since the URL without '-updates' simply points to the initial installer used by the GA ISO image. To tell the installer to fetch its own components from -proposed as well, add the boot parameter: apt-setup/proposed=true

HWE Kernel

  • On the LTS "point" releases (.2 to .5), a second and optional boot folder named /boot-HWE/ exists.
    • It provides the option to install using the alternate HWE (hardware enablement) kernel.

      For more information on the HWE kernels, see https://wiki.ubuntu.com/Kernel/LTSEnablementStack. It's important to note that the HWE kernel is just an option or alternative to the LTS or GA kernel - either used by the installer or by the system on disk.

  • The HWE kernel will never be used or installed by default, and no upgrade to it will be done automatically.
    • The user/administrator has to "opt in" to the HWE kernel - meaning explicitly use it from the installation ISO image, or explicitly change to (install) it on an already running system. Once an "opt-in" to the HWE kernel has been done, all subsequent HWE kernel upgrades need to be applied (rolling releases). But one can always move from the HWE kernel back to the LTS/GA kernel.
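On an already installed system, the opt-in boils down to installing the HWE kernel meta-package - shown here for 20.04; adjust the release suffix accordingly:

```shell
# Opt in to the HWE kernel on a running 20.04 system (a reboot is
# required afterwards to actually run the new kernel):
$ sudo apt install linux-generic-hwe-20.04

# To move back, make sure the LTS/GA kernel meta-package is installed
# (the HWE kernel packages can then be removed):
$ sudo apt install linux-generic
```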





Official documentation landing page

Ubuntu Server Guide

Ubuntu (Server) Installation Guide (outdated)

Release Notes

Bug reports

  • Use ubuntu-bug tool

  • Include/add s390x tag to manual bug reports

  • If a problem seems to be s390x-specific, also provide the output of 'dbginfo.sh'.
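In practice, filing a report and collecting the s390x debug data could look like this - the affected package name below is just an example:

```shell
# File a bug against a specific package; ubuntu-bug collects the
# standard apport data automatically:
$ ubuntu-bug linux

# For s390x-specific problems, additionally run dbginfo.sh (shipped
# with the s390-tools package) as root and attach the resulting
# archive to the bug report:
$ sudo dbginfo.sh
```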


Ubuntu Server s390x blog

Stop by at the Ubuntu on Big Iron blog for further (hopefully) useful information.


  • Almost all packages and major (server-) products are available for s390x:
  • Juju local and manual providers are available
  • LXD is available, incl. KVM support
  • multipass is available
  • Cloud images are available (KVM)
  • Container images are available
  • Docker is available
  • OpenStack is available - for more details, visit the Cloud Archive page

  • Kubernetes is available (CK and Microk8s)
  • MAAS (with DPM systems and FCP disks only) is available

FAQ

Q: How can one accept the default values that are offered on most d-i screens in the status line at the bottom?

A: Sometimes it is necessary to distinguish between a genuinely empty value and a given default value. Hence, just pressing Enter does not accept the default. To accept a given default value, one needs to type a single dot "." and press Enter afterwards. (1667296)

Q: What virtualization modes are supported?

A: Ubuntu is supported as:

  • a native install in an LPAR
  • a KVM instance on an Ubuntu host running on an LPAR
  • a z/VM instance (guest)
  • a container (LXD, lxc and Docker)
  • a guest of emulators such as Hercules or zPDT

Q: Is there an emulator I can use to run Ubuntu s390x on non-mainframe hardware?

A: In the meantime, the IBM zPDT tool was updated with support for the EC12 hardware level; hence Ubuntu Server (for amd64) can now be used as the base operating system (to install the zPDT software), and Ubuntu Server (for s390x) can again run on top.

There is also an open-source and freely available emulator called Hercules. Hercules v4.2 has been confirmed to run Ubuntu without issues.

Q: How to tweak boot arguments?

A: Simply edit /etc/zipl.conf and run sudo zipl to update the configuration.
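For illustration, a shortened, hypothetical /etc/zipl.conf entry and the update step could look like this - the device names and parameter values are placeholders:

```shell
# Excerpt of a hypothetical /etc/zipl.conf entry; the "parameters"
# line is where boot arguments are tweaked:
#
#   [ubuntu]
#   target = /boot
#   image = /boot/vmlinuz
#   ramdisk = /boot/initrd.img
#   parameters = "root=/dev/mapper/vg0-root crashkernel=196M"
#
# After editing, rewrite the boot record so the change takes effect:
$ sudo zipl
```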

Q: How to bump crashkernel limits?

A: Depending on the number of available devices, the crashkernel setting in /etc/zipl.conf may not be appropriate. One can either increase it further, or limit the number of devices visible to the kernel and thus lower the requirements for the crashkernel setting.

To ignore devices, run the cio_ignore tool to generate an appropriate stanza that ignores all devices except the currently active/in-use ones. Simply add the generated stanza to the boot parameters in /etc/zipl.conf:

$ sudo cio_ignore -u -k
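The generated stanza is then appended to the existing parameters line; a hypothetical result (the root device and device bus-IDs are placeholders) could look like:

```shell
# Hypothetical /etc/zipl.conf excerpt after adding the cio_ignore
# stanza produced by 'cio_ignore -u -k' (bus-IDs are placeholders):
#
#   parameters = "root=/dev/mapper/vg0-root crashkernel=196M cio_ignore=all,!condev,!0.0.1600,!0.0.1601"
#
# Rewrite the boot record and reboot for the change to take effect:
$ sudo zipl
```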

Q: How much memory and disk space does an installation require?

A: Appropriately sized machines need to be used as the installation target. If one tries to install all available packages, all translations, all debug symbols, and all development packages, an appropriate amount of disk space and installation memory is required - otherwise the system may run out of disk space and/or memory. In the latter case the oom-killer may kill random processes, including the installer itself.

A good rule of thumb is to select only the minimally required components and packages in d-i for an initial setup, and to install everything else on top after the initial setup is completed. This also provides a better user experience, due to the more convenient package management tools available on an installed system. It is not really possible to estimate the required RAM when installing all of the archive options, because the services are started upon installation, and they may have different requirements that change from version to version. The disk space estimation can also only be done roughly, based on the unpacked package sizes. But keep in mind that postinst package scripts may create more files that, again, consume even more disk space. For example, PostgreSQL creates and starts new clusters, or backups and snapshots are taken, which in the end can eat up a lot of space in /var.

Q: How reasonably small can an Ubuntu system be?

A: As usual, it depends on what the system is used for and which packages are needed. For a small base installation, 2 GB of RAM is sufficient; even 1 GB systems are possible, but those are seen mainly as z/VM guests and KVM virtual machines rather than LPARs of that size. For testing and for just trying out certain aspects and functions, such small systems with RAM down to 1 GB are possible. But in such an edge case, limit the installed components and packages to the bare minimum, so as not to reach the system limits right away.

Q: How can LPAR installation be done in VLAN environments?

A: There are two ways of installing Ubuntu on IBM zSystems or LinuxONE on LPAR in a VLAN (IEEE 802.1Q) network environment:

  • automated d-i installation using preseed
  • interactive d-i installation with d-i priority medium

More details can be found here: https://ubuntu-on-big-iron.blogspot.de/2017/01/lpar-install-ubuntu-on-z-with-vlan.html

Q: Installing the zipl boot loader fails if "/boot" is located on a multi-target device-mapper device

A: The partitioning step allows configuring LVM across multiple devices without requiring a separate /boot partition. This may lead to a failure to install the bootloader at the end of the installation, and to failures to boot the resulting installation. (1680101) - This is fixed with 18.04.

Q: LVM configuration cannot be removed when volume groups with the same name are found during installation

A: The partitioner does not support installation when multiple conflicting/identical volume groups have been detected - for example, when reinstalling Ubuntu with LVM across multiple disk drives that had individual LVM installations of Ubuntu. As a workaround, please format the disk drives prior to installation, or from the built-in shell provided in the installer. (1679184)

Q: System cannot boot when root filesystem is on an LVM on two disks

A: A system cannot be booted up when its root filesystem is on an LVM across two disks (either ECKD or FCP). After all needed disk devices are enabled with 'chzdev -e', one must run 'update-initramfs -u' so that the udev rules generated by chzdev are copied into the initramfs and become available at boot time. (1641078)
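The workaround sequence for such a multi-disk LVM root setup is roughly as follows - the device bus-IDs are placeholders:

```shell
# Persistently enable the DASDs that carry the root LVM
# (placeholder bus-IDs; chzdev is part of the s390-tools package):
$ sudo chzdev -e dasd 0.0.1600 0.0.1601

# Copy the udev rules generated by chzdev into the initramfs, so that
# all physical volumes of the root LVM are available early at boot:
$ sudo update-initramfs -u
```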

Q: Which disk storage is supported?

A: SCSI/FCP, DASD/FBA, DASD/ECKD, and NVMe disk storage is supported (within the requirements of the used IBM zSystem hardware generation).
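The s390-specific device configuration can be inspected with the lszdev tool from the s390-tools package, which lists the devices by type (dasd-eckd, dasd-fba, zfcp-host, zfcp-lun, qeth, ...) along with their online and persistent state:

```shell
# List all s390 devices known to the system, grouped by device type:
$ lszdev

# Show the configuration of a single device (placeholder bus-ID):
$ lszdev dasd-eckd 0.0.1600
```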

S390X (last edited 2022-11-04 13:20:01 by fheimes)