This page provides an overview of Juju and, in particular, of the abstractions (the "concepts" and "terms") it uses.
A Juju environment is a collection of services provided by nodes.
A service:
- Is deployed by Juju using charms.
- Runs as one or more units.
- Can be related to other services, in which case some action is performed when the other service starts or stops.
- Can be given an alias name.
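As a sketch, the service operations above map onto juju subcommands like the following (charm and service names are illustrative):

```shell
# Deploy the mysql charm as a service named "db" (an alias name).
juju deploy mysql db
# Scale the service out to additional units.
juju add-unit db -n 2
# Relate it to another service; relation hooks run on both sides.
juju add-relation wordpress db
```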
A unit is an instance of a service:
- Runs on a node, where it can be either:
  - the single unit running on that node, or
  - one of several units, each running in some kind of container (LXC, KVM) on that node.
- Is set up and configured by a charm.
- Runs on a machine that Juju can request from a provider.
A machine is expected to have the OS already installed and running, with two base attributes:
- OS series (e.g. trusty or win2012r2).
- System architecture (e.g. amd64 or ppc64el).
A provider is the layer that supplies machines; it can be of several kinds (a public cloud, MAAS, the local machine, etc.).
A machine:
- Can have several containers that run units, or it can run units directly.
- Always has a machine agent.
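A hedged sketch of how units end up on machines or in containers (machine numbers and charm names are illustrative):

```shell
# Provision a new, empty machine from the provider.
juju add-machine
# Run a unit directly on an existing machine (machine 1 here).
juju deploy mysql --to 1
# Or run a unit inside an LXC container on that machine instead.
juju deploy postgresql --to lxc:1
```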
A charm is made of:
- A set of scripts and configuration files that set up an application.
- Optionally, an agent running on each unit on which the service is installed, which maintains the configuration of the application if it is dynamically modifiable. The daemon's front end is, as a rule, an instance of the jujud daemon.
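For orientation, a charm directory typically looks roughly like this (the charm name and the relation hook name are examples):

```
mycharm/
├── metadata.yaml        # charm name, summary, relations it provides/requires
├── config.yaml          # configurable options exposed to "juju set"
└── hooks/               # executable scripts run by the unit agent
    ├── install
    ├── start
    ├── stop
    └── db-relation-changed
```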
Here, metadata refers specifically to tools: it is the list of tool variants in a stream.
A tool variant is a version plus the series and architecture of the machines it can be installed on.
A stream is a collection of packages on a Canonical server, roughly equivalent to an archive for Debian packages (for example, stable).
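As an example of what such metadata describes, tool tarballs are named by version, series, and architecture; a plausible entry looks like:

```
juju-1.25.6-trusty-amd64.tgz   # version 1.25.6, series trusty, arch amd64
```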
The juju command line tool can have plugins, which are executables named juju-<command> that must be in a directory on the $PATH to be used.
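A minimal sketch of such a plugin (the name juju-hello and its output are made up): an executable named juju-<command> on the $PATH becomes callable as juju <command>.

```shell
# Create a hypothetical plugin script named juju-hello on the PATH.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/juju-hello" <<'EOF'
#!/bin/sh
echo "hello from a juju plugin"
EOF
chmod +x "$HOME/bin/juju-hello"
export PATH="$HOME/bin:$PATH"

# Run it directly; with juju installed it would also run as: juju hello
juju-hello
```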
Juju node types
- The control node:
  - Is usually not part of any environment and does not run any agents.
  - Has the juju-core package, which contains the commands to control the agents in the various environments.
  - Optionally has a juju-$TYPE package for each non-builtin type of hardware provisioning layer used by the environments.
  - Keeps the environment definitions in $JUJU_HOME/environments.yaml and cached per-environment state in $JUJU_HOME/environments/$ENV.jenv.
It can have a local copy of the Juju tools repository https://streams.canonical.com/juju/tools/ created by `juju sync-tools`.
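A sketch of what $JUJU_HOME/environments.yaml can look like (environment names, server address, and key are placeholders):

```yaml
default: maas-lab
environments:
  maas-lab:
    type: maas
    maas-server: http://maas.example.com/MAAS/
    maas-oauth: "<MAAS-API-key>"
  amazon:
    type: ec2
    region: us-east-1
```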
The first state node configured is called the bootstrap node; as a rule it runs on machine 0, and its machine agent configuration holds the admin user password of the MongoDB database (in the oldpassword field).
State nodes:
- Have the juju-mongodb package and run the mongod daemon.
- Hold the Juju MongoDB database in /var/lib/juju/agent/machine-$HOST, in a replica set (usually 3-wide) that stores the state of the Juju environment.
- Are contacted by all the agents in Juju nodes, which connect to the primary state node to update their state in the database.
Each node has several agents configured under /var/lib/juju/agents/; they are jujud instances that each manage a specific aspect of their host:
- One is the machine agent for that node.
- The others are for the local units of the charms that have a dynamic aspect.
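A typical layout under /var/lib/juju/agents/ on a node running one unit (the machine number and unit name are illustrative):

```
/var/lib/juju/agents/
├── machine-1/            # machine agent for this node
│   └── agent.conf        # state-server addresses and credentials
└── unit-mysql-0/         # unit agent for the mysql/0 unit
    └── agent.conf
```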
Nodes have the Juju tools (which usually means jujud) under /var/lib/juju/tools/; these are installed by the machine agent.
For public cloud providers this happens by default by fetching the relevant .tgz file from a simplestreams source on the host streams.canonical.com.
The .tgz files can also be fetched from a local host indicated by agent-metadata-url.
The fetching is done by the agent daemon for each unit. The .tgz files are stored in the blobstore database on the state nodes.
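To point agents at a local tools mirror instead of streams.canonical.com, the environment's configuration can set agent-metadata-url; a sketch (the environment stanza and URL are placeholders):

```yaml
my-env:
  type: maas
  maas-server: http://maas.example.com/MAAS/
  agent-metadata-url: http://tools-mirror.internal/juju/tools
```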
- Juju on MAAS:
  - Install MAAS and then Juju on top, from the MAAS documentation.
  - Install MAAS and then Juju on top using quickstart, from the MAAS documentation.
  - Install MAAS and then Juju on top, from the Juju documentation.
  - Private cloud HOWTO, which is relevant because MAAS counts as a private cloud, in particular regarding tools installation.
Lists of control commands with brief descriptions that are usually much better than the built-in documentation.
Recovering Juju configuration and state when one of the nodes has to be rebuilt, using the configuration and state still present on the other nodes: http://www.metaklass.org/how-to-recover-juju-from-a-lost-juju-openstack-provider/