Copyright © Lenovo US

    In this article, we have demonstrated how Juju deploys a charm, including what makes a node eligible as a target and what files the deployer puts on it. If you recall, there were four steps in the deploy process:

    1. Add a new machine to the cloud environment. In the demo, we added this machine manually using the juju add-machine command.
    2. Machine-0 recognizes the new node.
    3. CLI issues a deploy command.
    4. Charm gets deployed.

    This use case, however, is somewhat backwards: we provisioned a machine before issuing the deploy command. Wouldn't it be better if the command could signal the cloud provider to create a new machine on the fly? The MAAS provider can do just that.

    MAAS way of a deploy

    We have briefly touched on the MAAS way. By default, juju deploy requests a new machine unless the --to [machine number] flag is given. With MAAS, you can watch it move a READY machine to ALLOCATED and then DEPLOYING. Once the Juju agent is installed (the machine is provisioned), the agent carries out the application deployment.
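The node-state progression above can be sketched as a tiny state machine. This is an illustration only: the state names mirror what the MAAS UI shows during a deploy, but the transition table and functions here are invented for this sketch, not taken from MAAS or Juju source.

```go
package main

import "fmt"

// NodeState names mirror the MAAS UI; the transition table below is
// an illustrative sketch, not the MAAS implementation.
type NodeState string

const (
	Ready     NodeState = "READY"
	Allocated NodeState = "ALLOCATED"
	Deploying NodeState = "DEPLOYING"
	Deployed  NodeState = "DEPLOYED"
)

// next records the single legal transition from each state during a deploy.
var next = map[NodeState]NodeState{
	Ready:     Allocated,
	Allocated: Deploying,
	Deploying: Deployed,
}

// advance moves a node one step along the deploy lifecycle.
func advance(s NodeState) (NodeState, error) {
	n, ok := next[s]
	if !ok {
		return s, fmt.Errorf("no transition from %s", s)
	}
	return n, nil
}

func main() {
	s := Ready
	for {
		n, err := advance(s)
		if err != nil {
			break
		}
		fmt.Printf("%s -> %s\n", s, n)
		s = n
	}
}
```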

    MAAS target node state diagram during Juju deploy process

    From CLI to provider

    We know machine-0 calls MAAS's REST API to kick off the machine provisioning process. But we do not know which API endpoint it uses, or who inside machine-0 makes this call. Previously we analyzed this process from the outside: files generated, states changed, applications installed. Here we will look into the code to understand how these steps take place.

    Overview of the agent

    While looking at this process, one can't help noticing the key role the Juju agent plays. Once installed, it knows how to speak to the state controller (machine-0), how to find and download a charm, and how to use that charm to deploy an application. So just how are agents wired together?

    High level view of Juju agents in an environment

    The key to this diagram is that the agents are connected in a client-server configuration: machine-0's agent is the API server, and all other agents are its clients. The API server provides a facade (the design pattern) that exposes functions for clients to call. It is not yet clear how a function is mapped to a string, e.g. how a Deploy RPC translates into a call to the facade's Deploy() function. Note that the Juju CLI is a client, just like a provisioned node.
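To make the facade idea concrete, here is a minimal sketch of how an RPC method name, carried as a string, might be routed to a method on a facade object via reflection. The FacadeV1 type, its Deploy method, and the dispatch helper are all invented for illustration; Juju's real facade registry and versioning machinery are considerably more elaborate.

```go
package main

import (
	"fmt"
	"reflect"
)

// FacadeV1 is a hypothetical facade exposing one RPC-callable method.
type FacadeV1 struct{}

func (f *FacadeV1) Deploy(app string) string {
	return "deploying " + app
}

// dispatch maps an RPC method name (a string on the wire) to a method
// on the facade and calls it with a single string argument, mimicking
// how an API server routes an incoming request.
func dispatch(facade interface{}, methodName, arg string) (string, error) {
	m := reflect.ValueOf(facade).MethodByName(methodName)
	if !m.IsValid() {
		return "", fmt.Errorf("unknown RPC method %q", methodName)
	}
	out := m.Call([]reflect.Value{reflect.ValueOf(arg)})
	return out[0].String(), nil
}

func main() {
	res, err := dispatch(&FacadeV1{}, "Deploy", "mysql")
	fmt.Println(res, err) // deploying mysql <nil>
}
```

The design point is that the wire protocol only needs method names and arguments; the server side decides which facade version and function they resolve to.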

    Machine-0, state, and provisioner

    Now when an agent, say the CLI, issues a command to the API server, what happens next? Juju's design maintains an internal state that is persisted in MongoDB. A command causes a state change, e.g. if machine XYZ's current state has no MySQL (state A), a deploy-MySQL command generates a state B (state A plus MySQL installed). This change is then saved to the DB. A background loop called the provisioner monitors this state change. The changeset essentially carries all the information the provisioner needs to take action and bring the machine from state A → state B.
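The write-then-watch pattern can be sketched with a channel. Everything below is illustrative: the Store type, its changes channel, and this AddApplication are stand-ins for Juju's MongoDB-backed state and its watcher machinery, reduced to the bare idea that a state write also signals any watchers.

```go
package main

import "fmt"

// Store is a toy stand-in for Juju's persisted state: a map of
// machine id -> deployed applications, plus a channel that plays
// the role of a watcher notification stream.
type Store struct {
	apps    map[string][]string
	changes chan string // machine ids whose state changed
}

func NewStore() *Store {
	return &Store{apps: map[string][]string{}, changes: make(chan string, 8)}
}

// AddApplication records the state change (state A -> state B) and
// signals the watcher, loosely analogous to state.AddApplication.
func (s *Store) AddApplication(machine, app string) {
	s.apps[machine] = append(s.apps[machine], app)
	s.changes <- machine
}

func main() {
	st := NewStore()
	st.AddApplication("XYZ", "mysql")
	// a provisioner-like loop would range over st.changes
	id := <-st.changes
	fmt.Printf("machine %s changed, now runs %v\n", id, st.apps[id])
}
```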

    Machine-0 state & provisioner

    Provisioner and provider

    OK, so the provisioner loop is the action taker. How is it aware of the cloud provider? This is the final piece of the puzzle: within the provisionerTask struct there is an environs.InstanceBroker. Now if you recall, environs is another name for provider, and InstanceBroker is a provider interface! Bingo.

    type provisionerTask struct {
        controllerUUID             string
        machineTag                 names.MachineTag
        machineGetter              MachineGetter
        toolsFinder                ToolsFinder
        machineChanges             watcher.StringsChannel
        retryChanges               watcher.NotifyChannel
        broker                     environs.InstanceBroker // <-- the provider interface
        catacomb                   catacomb.Catacomb
        auth                       authentication.AuthenticationProvider
        imageStream                string
        harvestMode                config.HarvestMode
        harvestModeChan            chan config.HarvestMode
        retryStartInstanceStrategy RetryStrategy
        // instance id -> instance
        instances map[instance.Id]instance.Instance
        // machine id -> machine
        machines map[string]*apiprovisioner.Machine
    }

    Call chain, and call to provider

    Back to our initial question then: how can a CLI deploy command kick off the provider to create a new machine? The command chain is roughly like this:

    1. CLI sends an RPC request to the API server (machine-0).
    2. The RPC handler figures out what the next state needs to be.
    3. machine-0's agent writes the change to the DB.
    4. The provisioner detects a state change.
    5. The provisioner task calls a function to kick off a new machine.
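The tail end of this chain (steps 4 and 5) can be sketched as a loop that receives changed machine ids and asks the broker to start an instance. The InstanceBroker interface here is a drastic simplification of Juju's environs.InstanceBroker, and maasBroker is invented for illustration; the real broker's StartInstance takes a rich parameters struct and talks to the MAAS REST API.

```go
package main

import "fmt"

// InstanceBroker is a simplified stand-in for the provider interface
// held by the provisioner task (the broker field in the struct above).
type InstanceBroker interface {
	StartInstance(machineID string) (instanceID string, err error)
}

// maasBroker is a hypothetical provider implementation; a real one
// would call the MAAS REST API here.
type maasBroker struct{}

func (maasBroker) StartInstance(machineID string) (string, error) {
	return "maas-node-for-" + machineID, nil
}

// provisionerTask keeps only the essentials: a change stream and a broker.
type provisionerTask struct {
	broker         InstanceBroker
	machineChanges chan string
}

// loop drains the change stream and starts an instance per changed machine.
func (t *provisionerTask) loop() {
	for id := range t.machineChanges {
		inst, err := t.broker.StartInstance(id)
		if err != nil {
			fmt.Println("start failed:", err)
			continue
		}
		fmt.Printf("machine %s -> instance %s\n", id, inst)
	}
}

func main() {
	t := &provisionerTask{broker: maasBroker{}, machineChanges: make(chan string, 1)}
	t.machineChanges <- "42"
	close(t.machineChanges)
	t.loop()
}
```

Because the task only holds the interface, swapping MAAS for another cloud is a matter of handing the loop a different InstanceBroker implementation.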

    Mapping everything here into what's in the code, here is an illustration showing the call chain from one module to the next. Some of the functions are just thin wrappers over the next one. The mapping from a command to a state takes place in state.AddApplication(), which I'm highlighting in green.

    Illustration of call chain by "juju deploy" command

    — by Feng Xia

