single-install

Debugging a Single Install

Log file location

The default log file is located in ~/.cloud-install/commands.log
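When something goes wrong, the quickest triage is usually to scan that log for the most recent errors. A minimal sketch (the path is the default from above; adjust it if you relocated the config directory):

```shell
# Default installer log path (from the docs above).
LOG="$HOME/.cloud-install/commands.log"

# Show the tail of the log, where a failure usually surfaces,
# then the five most recent ERROR entries with their line numbers.
tail -n 50 "$LOG"
grep -n 'ERROR' "$LOG" | tail -n 5
```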

Juju failing to create lxc template clones

If juju status reports LXC errors about being unable to download the template or clone it for use, then do the following:

$ sudo rm -rf /var/cache/lxc/cloud-trusty
$ sudo lxc-create -t ubuntu-cloud -n test

From there a Juju bootstrap will then re-use the already downloaded LXC image and proceed to deploy the services.

Services not deploying correctly - inspect Juju status and logs

   $ sudo lxc-attach -n openstack-single-$USER
   $ su ubuntu
   $ export JUJU_HOME=~/.cloud-install/juju
   $ juju status --format=tabular

If an individual service, e.g. 'glance', shows errors, you can inspect its log as follows:

  • $ juju debug-log --replay -i glance/0
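To spot failing units without reading the whole table, the tabular status output can be filtered first. A small sketch; it simply greps the table, so the exact columns depend on your Juju version:

```shell
# Print only the status lines that mention an error state;
# in tabular output the first field is the service/unit name.
juju status --format=tabular | awk 'tolower($0) ~ /error/ {print $0}'
```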

"Top Level Container OS did not initialize properly" -- huh?

If your single install attempt fails with that message, check commands.log for the *last* error. It may say something like this:

[ERROR: 09-03 16:30:52, single.py:330] Container cloud-init finished with errors: ['(\'seed_random\', ProcessExecutionError("Unexpected error while running command.\\nCommand: [\'env\', \'pollinate\', \'-q\', \'--curl-opts\', \'-k\\']\\nExit code: 1\\nReason: -\\nStdout: \'\'\\nStderr: \'\'",))']
[DEBUG: 09-03 16:30:52, utils.py:56] Traceback (most recent call last):
  File "/usr/share/openstack/cloudinstall/utils.py", line 71, in run
    super().run()
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/share/openstack/cloudinstall/controllers/install/single.py", line 427, in do_install_async
    self.do_install()
  File "/usr/share/openstack/cloudinstall/controllers/install/single.py", line 453, in do_install
    self.create_container_and_wait()
  File "/usr/share/openstack/cloudinstall/controllers/install/single.py", line 224, in create_container_and_wait
    while not self.cloud_init_finished(tries):
  File "/usr/share/openstack/cloudinstall/controllers/install/single.py", line 331, in cloud_init_finished
    raise Exception("Top-level container OS did not initialize "
Exception: Top-level container OS did not initialize correctly.

This most often means that the LXC container the installer created to house the installation does not have working network access to the outside world. There are a few reasons this might happen:

Proxy Issues

If you are behind a firewall that requires use of an http/s proxy, be sure to pass the proxy settings on your install command line.
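If the installer's own proxy options are not available to you, the standard proxy environment variables are a common fallback. A sketch with a hypothetical proxy endpoint (squid.example.com is a placeholder, not a real host):

```shell
# Hypothetical proxy endpoint; replace with your site's proxy.
export http_proxy="http://squid.example.com:3128"
export https_proxy="$http_proxy"
# Keep local and container traffic off the proxy.
export no_proxy="localhost,127.0.0.1,10.0.0.0/8"
```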

Container failing to reach the internet (Vivid and later)

This pertains to systemd-networkd, found on Vivid and later.

If you are running on a system using systemd and have configured it to use systemd-networkd, then you may need to set IPForward=yes in the [Network] section of the .network file corresponding to the interface that reaches the outside world. See man systemd.network for information about this option.

As an example, on a Wily system we edit /lib/systemd/network/80-container-host0.network:

#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Match]
Virtualization=container
Name=host0

[Network]
DHCP=yes
LinkLocalAddressing=yes

# Fix unresolvable network in LXC containers.
IPForward=yes
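After editing the file, the change only takes effect once networkd re-reads it. A sketch of applying and verifying it (these are standard systemd and procfs commands, not installer-specific ones):

```shell
# Re-read the .network files so IPForward=yes is applied.
sudo systemctl restart systemd-networkd

# Verify: the kernel forwarding switch should now read 1.
cat /proc/sys/net/ipv4/ip_forward
```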

Unable to lxc-attach to container on a Vivid (Ubuntu 15.04) install with LXC 1.1.2

  • It is a good idea to upgrade LXC on Vivid (15.04) to a newer release regardless of whether or not you run into this issue.

If you run into an error similar to:

lxc_container: cgmanager.c: lxc_cgmanager_enter: 694 call to cgmanager_move_pid_abs_sync failed: invalid request
lxc_container: cgmanager.c: cgm_attach: 1324 Failed to enter group /lxc/openstack-single-openstack/system.slice/cgproxy.service
lxc_container: attach.c: lxc_attach: 909 error communicating with child process

This can be resolved by upgrading to a newer version of LXC:

   $ sudo apt-add-repository ppa:ubuntu-lxc/stable
   $ sudo apt-get update
   $ sudo apt-get upgrade

So far this problem exists only on Vivid with the default LXC version, 1.1.2.
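To confirm whether your host is affected before adding the PPA, compare the installed LXC version against a fixed release. A sketch using sort -V for the comparison; 1.1.3 is used here only as "anything newer than the buggy 1.1.2", and the fallback value is a placeholder for hosts without LXC installed:

```shell
# Read the installed LXC version, if LXC is present at all.
if command -v lxc-ls >/dev/null 2>&1; then
    ver="$(lxc-ls --version)"
else
    ver="1.1.2"   # placeholder: assume the stock Vivid version
fi

# sort -V orders version strings numerically; if $ver sorts first
# and is not 1.1.3 itself, it predates 1.1.3.
lowest="$(printf '%s\n%s\n' "$ver" 1.1.3 | sort -V | head -n 1)"
if [ "$lowest" = "$ver" ] && [ "$ver" != "1.1.3" ]; then
    echo "LXC $ver predates 1.1.3: upgrade recommended"
fi
```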

Accessing the Juju environment

The container houses the entire OpenStack deployment, so attaching to it is a good first step in troubleshooting an install.

In order to log into the host container we'll need to use lxc-attach:

  • $ sudo lxc-attach -n openstack-single-$USER

openstack-single-$USER is the default container used for installation. Within that container we can now inspect our Juju environment.

  • If for some reason the container name does not match the default, running sudo lxc-ls -f will list the container that was created; its name always starts with openstack-single.
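That lookup can be scripted. A sketch, assuming only the openstack-single name prefix from above (lxc-ls -1 prints one container name per line):

```shell
# Pick the first container whose name starts with "openstack-single"
# and attach to it, whatever the suffix is.
name="$(sudo lxc-ls -1 | grep '^openstack-single' | head -n 1)"
sudo lxc-attach -n "$name"
```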

   % su - ubuntu
   % JUJU_HOME=~/.cloud-install/juju juju status

All juju commands need to read from a customized JUJU_HOME; the location the installer uses is ~/.cloud-install/juju.
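Rather than prefixing every command, the variable can be exported once for the shell session. A short sketch; glance/0 is just the example unit from above:

```shell
# Export once so every subsequent juju command in this shell
# reads the installer's environment files.
export JUJU_HOME="$HOME/.cloud-install/juju"
juju status --format=tabular
juju ssh glance/0    # example: open a shell on a deployed unit
```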

From this point you can log into any of the services Juju knows about and start inspecting logs, charm configurations, etc.

OpenStack/Installer/debugging/single-install (last edited 2015-11-14 21:12:12 by cpe-76-182-21-82)