This page provides an overview of Juju and, in particular, of the abstractions ("concepts", "terms") it uses.


Juju concepts

  • A Juju environment is a collection of services provided by nodes.

  • A service:

    • Is deployed by Juju using charms.

    • Runs as one or more units.

    • Can be related to other services, in which case hooks run when the relation is established, changes, or is removed.

    • Can be given an alias name.

  • A unit is an instance of a service. It:

    • Is set up and configured by a charm.

    • Runs on a node, where it can be either:

      • The single unit running on that node.

      • One of several units, each running in some kind of container (LXC, KVM) on that node.
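
The service and unit concepts above map directly onto the juju command line; a hedged sketch of a session (the charm and service names are only illustrative):

```
juju deploy mysql                  # create the mysql service with one unit
juju deploy wordpress              # create the wordpress service
juju add-unit wordpress -n 2       # add two more wordpress units (three in total)
juju add-relation wordpress mysql  # relate the two services
```

Once the relation is added, the relevant hooks run on the units of both services.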

  • Each node:

    • Runs on a machine that Juju can request from a provider:

      • A machine is expected to have the OS already installed and running, with two base attributes:

        • OS series (e.g. trusty or win2012r2).

        • System architecture (e.g. amd64 or ppc64el).

      • A provider falls into one of several categories:

        • local like LXC or KVM.

        • MAAS

        • private cloud like OpenStack.

        • public cloud like AWS or Azure.

        • Juju can be used to deploy an OpenStack instance on top of the MAAS provider, or to use an OpenStack instance set up by other means as its provider.

    • Can run units directly, or host several containers that each run units.

    • Always has a machine agent.
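
The unit-placement possibilities above can be expressed with Juju's placement directives; a sketch assuming machines 1 and 2 already exist in the environment (charm names are illustrative):

```
juju deploy mysql --to 1         # run the unit directly on machine 1
juju deploy nginx --to lxc:2     # run the unit in a new LXC container on machine 2
juju deploy owncloud --to kvm:2  # run the unit in a new KVM guest on machine 2
```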

  • A charm is made of:

    • A set of scripts and configuration files that set up an application.

    • Optionally, an agent that runs on each unit where the service is installed and maintains the application's configuration when it can be modified dynamically. Its front end is, as a rule, an instance of the jujud daemon.
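
Concretely, a charm is a directory of files; a minimal, hypothetical layout following the usual Juju 1.x conventions might look like:

```
mycharm/
├── metadata.yaml     # name, summary, and the relations the charm provides/requires
├── config.yaml       # options that can be changed with `juju set`
└── hooks/
    ├── install       # script run when the unit is first set up
    ├── start
    ├── stop
    └── config-changed  # script run when a configuration option changes
```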

  • In Juju, metadata refers specifically to tools: it is the list of tool variants in a stream.

    • The tool variants are a version plus the series and architecture of the machines they can be installed on.

    • A stream is a collection of packages on a Canonical server, roughly the same thing as an archive for Debian packages (for example, stable).

  • The juju command line tool can have plugins: executables named juju-<name> placed in a directory on the $PATH.
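
As a sketch, a plugin is nothing more than an executable whose name starts with juju-; the plugin name and greeting below are purely illustrative:

```shell
#!/bin/sh
# Install a trivial "juju hello" plugin into a directory on $PATH.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/juju-hello" <<'EOF'
#!/bin/sh
# A minimal plugin: echoes a greeting with any arguments it received.
echo "hello from a juju plugin: $*"
EOF
chmod +x "$HOME/bin/juju-hello"
export PATH="$HOME/bin:$PATH"

# With the juju client installed this would be invoked as: juju hello world
# Calling the executable directly shows the same behaviour:
juju-hello world
```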

Juju node types

Control node:

  • Usually the control node is not part of any environment and does not run any agents.

  • Has the juju-core package that contains the commands to control the agents in the various environments.

  • Optionally has a juju-$TYPE package for each non-builtin hardware-provisioning layer used by the environments.

  • The environments are defined in $JUJU_HOME/environments.yaml, and each environment's state (including the list of its nodes) is cached in $JUJU_HOME/environments/$ENV.jenv.

  • It can have a local copy of the Juju tools repository https://streams.canonical.com/juju/tools/ created by `juju sync-tools`.
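
A sketch of what $JUJU_HOME/environments.yaml might contain; the environment names and credential values are illustrative, and the exact set of keys depends on the provider type:

```yaml
default: amazon
environments:
  amazon:                 # public cloud provider
    type: ec2
    region: us-east-1
  lab:                    # MAAS provider
    type: maas
    maas-server: "http://maas.example.com/MAAS/"
    maas-oauth: "<MAAS-API-key>"
  dev:                    # local provider
    type: local
    container: lxc
```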

State nodes:

  • The first state node configured is called the bootstrap node. As a rule it runs on machine 0, and its machine agent configuration holds the admin user password of the MongoDB database (in the oldpassword field).

  • Have the juju-mongodb package installed and the mongod daemon running.

  • Have the Juju MongoDB database in /var/lib/juju/agent/machine-$HOST, in a replication set (usually three members) that holds the state of the Juju environment.

  • All the agents in Juju nodes connect to the primary state node to update their state in the database.

Ordinary nodes:

  • Each has several agents configured under /var/lib/juju/agents/ which are jujud instances that manage a specific aspect of their host.

    • One is the machine agent for that node.

    • The other agents are for the local units of the charms that have a dynamic aspect.

  • Have the Juju tools (which usually means jujud) under /var/lib/juju/tools/; these are installed by the agent for the machine unit.

  • For public cloud providers this happens by default by fetching the relevant .tgz file via simplestreams from the host streams.canonical.com.

  • The .tgz files can also be fetched from a local host indicated by the agent-metadata-url setting.

  • The fetching is done by the agent daemon for each unit. The .tgz files are stored in the blobstore database on the state nodes.
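
As a sketch, pointing an environment at a local tools mirror (populated, for example, with `juju sync-tools`) is done with the agent-metadata-url key in environments.yaml; the host name here is illustrative:

```yaml
environments:
  lab:
    type: maas
    agent-metadata-url: "http://tools-mirror.example.com/tools"
```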



ServerTeam/JujuConcepts (last edited 2015-09-13 10:46:45 by pg-8)