Debugging a Single install
Log file location
The default log file is located in ~/.cloud-install/commands.log
Services not deploying correctly - inspect Juju status and logs
If an individual service, e.g. 'glance', shows errors, you can inspect its log like so:
$ juju debug-log --replay -i glance/0
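Before drilling into a single unit's log, it can help to check the overall deployment state. A sketch using standard Juju 1.x commands (service names depend on your deployment):

```shell
# Summarize the whole deployment; units in trouble show an error agent-state.
juju status --format=tabular

# Narrow the output to just the service in question.
juju status glance
```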
"Top Level Container OS did not initialize properly" -- huh?
If your single install attempt fails with that message, check commands.log for the *last* error. It may say something like this:
[ERROR: 09-03 16:30:52, single.py:330] Container cloud-init finished with errors: ['(\'seed_random\', ProcessExecutionError("Unexpected error while running command.\\nCommand: [\'env\', \'pollinate\', \'-q\', \'--curl-opts\', \'-k\\ ']\\nExit code: 1\\nReason: -\\nStdout: \'\'\\nStderr: \'\'",))']
[DEBUG: 09-03 16:30:52, utils.py:56] Traceback (most recent call last):
  File "/usr/share/openstack/cloudinstall/utils.py", line 71, in run
    super().run()
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/share/openstack/cloudinstall/controllers/install/single.py", line 427, in do_install_async
    self.do_install()
  File "/usr/share/openstack/cloudinstall/controllers/install/single.py", line 453, in do_install
    self.create_container_and_wait()
  File "/usr/share/openstack/cloudinstall/controllers/install/single.py", line 224, in create_container_and_wait
    while not self.cloud_init_finished(tries):
  File "/usr/share/openstack/cloudinstall/controllers/install/single.py", line 331, in cloud_init_finished
    raise Exception("Top-level container OS did not initialize "
Exception: Top-level container OS did not initialize correctly.
This most often means that the LXC container the installer created to house the installation does not have working network access to the outside world. There are a few reasons this might happen:
If you are behind a firewall that requires the use of an HTTP/S proxy, be sure to set that up on your install command line.
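For example, the proxy can be passed when launching the installer. This is a sketch: the proxy URL is a placeholder, and the flag names should be verified against `openstack-install --help` on your version:

```shell
# http://squid.internal:3128 is a hypothetical proxy; substitute your own.
sudo openstack-install --http-proxy http://squid.internal:3128 \
                       --https-proxy http://squid.internal:3128
```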
Systemd networkd setup (Vivid or Later)
If you are running on a system using systemd (Ubuntu Vivid or later) and have configured it to use systemd-networkd (not the default as of Wily), you may need to set IPForward=yes in the [Network] section of the .network file corresponding to the interface that reaches the outside world. See man systemd.network for details on this option.
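For example, such a .network file might look like the following (the file path and interface name eth0 are placeholders; adjust for your system):

```ini
# /etc/systemd/network/20-wired.network (example path)
[Match]
Name=eth0

[Network]
DHCP=yes
IPForward=yes
```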
Unable to lxc-attach to container on a Vivid (Ubuntu 15.04) install with LXC 1.1.2
- It is a good idea to upgrade LXC on Vivid (15.04) to a newer release regardless of whether you run into this issue.
If you run into an error similar to:
lxc_container: cgmanager.c: lxc_cgmanager_enter: 694 call to cgmanager_move_pid_abs_sync failed: invalid request
lxc_container: cgmanager.c: cgm_attach: 1324 Failed to enter group /lxc/openstack-single-openstack/system.slice/cgproxy.service
lxc_container: attach.c: lxc_attach: 909 error communicating with child process
This can be resolved by upgrading to a newer version of LXC. So far this problem only occurs on Vivid with the default LXC version, 1.1.2.
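One way to upgrade is from the ubuntu-lxc team's stable PPA. The PPA name below is an assumption on our part; any source providing an LXC newer than 1.1.2 will do:

```shell
# PPA name is an assumption -- verify before adding it.
sudo add-apt-repository ppa:ubuntu-lxc/stable
sudo apt-get update
sudo apt-get install lxc
```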
Accessing the Juju environment
The LXC container houses the entire OpenStack deployment, so logging into it is a good first step in troubleshooting an install.
To log into the host container, we'll need to use lxc-attach:
$ sudo lxc-attach -n openstack-single-$USER
openstack-single-$USER is the default container used for installation. Within that container we can now inspect our Juju environment.
If for some reason the container name does not match the default, running sudo lxc-ls -f will list the container that was created; its name always starts with openstack-single.
All juju commands need to be able to read from a customized JUJU_HOME; the location the installer uses is ~/.cloud-install/juju.
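Inside the container, that typically means pointing JUJU_HOME at that directory before running any juju command, along these lines:

```shell
# Tell juju where the installer keeps its environment files.
export JUJU_HOME=~/.cloud-install/juju
juju status
```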
From this point you can log into any of the services Juju knows about and start inspecting logs, charm configurations, etc.
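For example, to get a shell on a unit and follow its log (the glance/0 unit name and log path are illustrative; yours will vary):

```shell
# Open a shell on the glance unit via Juju.
juju ssh glance/0

# Then, on the unit, follow the charm's log.
tail -f /var/log/juju/unit-glance-0.log
```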