1   ADEPT, the Deterministic Executor for Parallel Testing

ADEPT includes a small Python program, a collection of Ansible playbooks, and related configuration. Together, they orchestrate the parallel creation, configuration, use, and cleanup of ephemeral virtual machines.

1.1   Introduction

ADEPT provides the ground-work for managing and executing tests against systems through multiple Ansible playbooks. It supports the industry-standard practice of using separate hosts for triggering/scheduling versus execution. Jobs may be defined in-repo or externally, and use the standard Ansible directory structure. Jobs may define their own playbooks, roles, and scripts, or re-use any of the content provided.

Systems management, be it local or cloud, is extremely flexible. Though an OpenStack setup is the default, any custom host-management tooling may be used. Changing and maintaining management tooling is straightforward, since the interface is simple and well defined. No persistent systems or data-stores are required, though both may be utilized.

Finally, since initiator-host capabilities are often unknown and fixed, ADEPT has very low dependency and resource requirements. The included adept.py program, driven by a simple YAML input file of directives, bootstraps Ansible operations for every job. Ansible and its dependencies are gathered at runtime and confined within a Python virtual environment.

1.2   Latest Documentation

For the latest documentation, please visit http://A-D-E-P-T.readthedocs.io/en/latest

The latest Docker Autotest documentation is located at: http://docker-autotest.readthedocs.io

1.3   Prerequisites

  • Red Hat based host (RHEL, CentOS, Fedora, etc), subscribed and fully updated.
  • Python 2.7
  • PyYAML 3.10 or later
  • libselinux-python 2.0 or later
  • rsync 2.5 or later
  • Ansible 2.1 or later (EPEL)
  • Root access not required

1.3.1   Testing/Development

  • Ansible 2.3 or later
  • python-unittest2 1.0 or later
  • python2-mock 1.8 or later
  • pylint 1.4 or later
  • python-virtualenv or python2-virtualenv (EPEL)
  • Optional (for building documentation): make, python-sphinx, and docutils, or equivalent for your platform.

1.3.2   OpenStack support

1.4   Quickstart

Ansible 2.3 or later is required, along with the items listed under prerequisites.

This demonstration doesn’t do anything extraordinarily useful. However, it does demonstrate ADEPT’s essential functions. The tasks to be performed (the job) exist as a sparse standard Ansible directory layout, under jobs/quickstart. All files in the job directory overwrite identically named files under kommandir/ (after a working copy is made).

# Optional: set $ANSIBLE_PRIVATE_KEY_FILE
# if unset, a new temporary key will be generated in workspace
$ export ANSIBLE_PRIVATE_KEY_FILE="$HOME/.ssh/id_rsa"

# Create a place for runtime details and results to be stored
$ export WORKSPACE="$(mktemp -d --suffix=.workspace)"

# Run the ADEPT-three-step (keyboard finger-dance)
$ ./adept.py setup $WORKSPACE exekutir.xn
$ ./adept.py run $WORKSPACE exekutir.xn
$ ./adept.py cleanup $WORKSPACE exekutir.xn

# Cleanup the workspace, when you're done looking at it.
$ rm -rf $WORKSPACE

Notes:

  1. To see select debugging output (selected variable values and other info), append -e adept_debug=true onto any of the adept.py lines above.
  2. Setting -e adept_debug=true will prevent roles in the cleanup context from removing any leftover files in the workspaces.
  3. To see massive amounts of ugly details, append one or more --verbose options onto any of the adept.py lines above.
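The overlay behavior described above (job-directory files overwriting identically named files in the kommandir/ working copy) can be sketched with plain shell. This is a hedged illustration, not ADEPT’s actual code; all paths and file contents are temporary stand-ins:

```shell
#!/bin/bash
# Sketch: a working copy of kommandir/ is made, then files from the job
# directory overwrite any identically named files in that copy.
set -e
src=$(mktemp -d)
mkdir -p "$src/kommandir" "$src/job"
echo "default play" > "$src/kommandir/setup.yml"
echo "job-specific play" > "$src/job/setup.yml"

work=$(mktemp -d)                   # the working copy
cp -r "$src/kommandir/." "$work/"   # 1) copy the defaults
cp -r "$src/job/." "$work/"         # 2) job files win on name clashes

result=$(cat "$work/setup.yml")
echo "$result"                      # -> job-specific play
rm -rf "$src" "$work"
```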

1.5   Basic Definitions

context
The label given to the end-state / completion of a transition. Analogous to a “phase”, or collection of related steps. The context label (string) is assumed to pass unmodified, through all facilities (adept.py files, playbooks, scripts, etc.) and through all layers (original calling host, through slave, and into testing hosts). No facility or layer will alter the context label from the one originally passed into the first, lowest-level call of adept.py.
transition
The collection of steps necessary to realize a context. Analogous to the act of performing all described tasks within a “phase” to reach some end-state.
setup, run, and cleanup
The three context labels currently used in ADEPT. Operationally, the run context is dependent on a successful setup transition. However, the cleanup context transition does not depend on success or failure of either setup or run.
job
A single, top-level invocation through all context transitions, concluding in a resource-clean, final end-state. Independent of the existence of any results or useful data. Logically represented by a set of files within the path pointed to by job_path.
exekutir
The initial, starting host that executes one or more jobs.
kommandir
The name of the “slave” VM as referenced from within playbooks, adept files, and configurations. When a member of the nocloud group (see kommandir_groups), this will be the same host as the exekutir.
peon
The lowest-level VM used for the grunt-work of testing or performing some temporary but useful task. Assumed to be controlled by a one-way connection from the kommandir. Cannot and must not be able to access the kommandir or exekutir hosts.

1.6   Topology

1.6.1   Systems

exekutir --> kommandir --> peon
                       \
                        -> peon
                       \
                        -> peon

-or-

exekutir/kommandir --> peon
                   \
                    -> peon
                   \
                    -> peon

1.6.2   Directory Layout

docs/
Source for all documentation input and output.
exekutir/
Standard Ansible directory layout, dedicated specifically for use by the exekutir host. Some roles are shared by the kommandir, but all contents are limited by a reduced set of prerequisites. This directory is transferred verbatim to the exekutir’s workspace during the setup context transition.
kommandir/

Standard Ansible directory layout, dedicated specifically for use by the kommandir and peons. This directory is transferred to kommandir_workspace on the exekutir and becomes workspace on the kommandir.

Any/all files in the copy may be overridden by files from job_path. The most important of these is job.xn, the primary entry point on the kommandir for the job.

job_path
Sparse standard Ansible directory layout, dedicated to one or more jobs. Its contents will overwrite any identically named files already copied to the kommandir_workspace on the exekutir. However, copies of all the default playbooks are made with a default_ prefix. This allows customized setup.yml, run.yml, and cleanup.yml to re-use the defaults if/where needed.
jobs/
Default directory containing public job definition subdirectories, e.g., subdirectories that job_path could reference.
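The default_ prefix mechanism described under job_path can be sketched as follows. This is a hedged illustration, not the actual role code; the playbook contents are placeholders:

```shell
#!/bin/bash
# Sketch: top-level playbooks in the working copy are saved under default_
# names before job files are overlaid, so a job's custom setup.yml can
# still include default_setup.yml.
set -e
work=$(mktemp -d)
printf -- "- include: nothing\n" > "$work/setup.yml"
cp "$work/setup.yml" "$work/run.yml"
cp "$work/setup.yml" "$work/cleanup.yml"

for playbook in "$work"/*.yml; do   # glob expands before the copies below
    cp "$playbook" "$work/default_$(basename "$playbook")"
done

backups=$(cd "$work" && ls default_*.yml)
echo "$backups"
rm -rf "$work"
```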

1.7   Operational Overview

This is a general, high-level outline of key steps performed during the three standard transitions (setup, run and cleanup). It omits many small details, but the overall sequence should more or less match reality.

The names in parentheses following each bullet’s text denote the adept.py transition file (*.xn), or the source Ansible role or script.

1.7.1   The setup context transition

  1. Fundamental setup of exekutir’s ssh keys, and workspace directory. (exekutir.xn)

    • Copy repository’s exekutir/* to the workspace.
    • Copy $ANSIBLE_PRIVATE_KEY_FILE (and .pub) or generate new {{workspace}}/ssh/exekutir_key.
  2. Intermediate exekutir setup: prepare the initial kommandir_workspace directory for future remote synchronization. (exekutir/roles/exekutir_workspace_setup role)

    • Copy kommandir/* (from repo.) to local {{kommandir_workspace}}
    • Backup default playbooks to allow selective re-use. i.e. {{kommandir_workspace}}/*.yml --> {{kommandir_workspace}}/default_*.yml
    • Copy contents of job_path to kommandir_workspace, overwriting any existing files to allow customization.
    • Generate unique ssh key for kommandir to use for this job, in {{workspace}}/kommandir_workspace/ssh/kommandir_key
  3. Create or discover the remote kommandir VM if configured. (exekutir/roles/kommandir_discovered role)

    • The exekutir has set the kommandir’s Ansible group membership from contents of the kommandir_groups list. (exekutir/roles/common role)
    • Membership in the nocloud Ansible group will always cause the kommandir to be the same host as the exekutir (i.e., ansible_host: localhost).
    • Otherwise, the cloud_provisioning_command is executed, with its stdout parsed as a YAML dictionary, updating {{workspace}}/inventory/host_vars/kommandir.yml inventory variables.
  4. Complete kommandir VM setup (if needed), install packages, set up storage, etc. (exekutir/roles/installed and kommandir_setup roles)

  5. Finalize the workspace for a remote kommandir VM (if used). (exekutir/roles/kommandir_workspace_update role)

    • A remote kommandir has a user named {{uuid}} created.
    • The remote kommandir’s /home/{{uuid}} (its workspace) is synchronized from the local {{kommandir_workspace}} copy.
  6. Run job.xn on kommandir (local or remote). This file may be overridden by a copy from job_path to customize testing operations. The default simply executes the setup.yml playbook to create and configure peons for testing purposes. (exekutir.xn)
  7. For a remote kommandir, /home/{{uuid}} is synchronized back down to the exekutir’s local {{kommandir_workspace}} directory. This prevents state from being bound to a remote system. (exekutir/roles/kommandir_to_exekutir_sync role)
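The cloud_provisioning_command contract mentioned in the discovery step above can be sketched as follows. This is a hedged illustration: the command below is invented, and the host variables are placeholder values, but the shape of the contract (stdout is a YAML dictionary used to update the kommandir’s host_vars) follows the description above:

```shell
#!/bin/bash
# Sketch: a minimal stand-in for a cloud_provisioning_command.  Its stdout
# must be a YAML dictionary; ADEPT merges those keys into
# {{workspace}}/inventory/host_vars/kommandir.yml.
provision() {
    # real tooling would create or discover the VM here
    cat <<YAML
---
ansible_host: 10.0.0.42
ansible_user: root
YAML
}
yaml=$(provision)
echo "$yaml"
```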

1.7.2   The run context transition

  1. Create or discover the kommandir VM. If needed, set it up, install packages, etc. exactly as in setup. This prevents the need to maintain a persistent slave-host.
  2. Synchronize the local {{kommandir_workspace}} to a remote kommandir (if used). (exekutir/roles/kommandir_workspace_update)
  3. Run job.xn on kommandir (local or remote). Same as in setup, this may have been overridden by a copy from job_path. The default simply executes the run.yml playbook. (exekutir.xn)
  4. For a remote kommandir, /home/{{uuid}} is synchronized back down to the exekutir’s local {{kommandir_workspace}} directory. This prevents state from being bound to a remote system. (exekutir/roles/kommandir_to_exekutir_sync role)

1.7.3   The cleanup context transition

N.B.: This should always run, whether or not any other contexts were successful. It may not exit successfully, but it must never orphan a remote kommandir or any peons.

  1. Create or discover the kommandir VM. If needed, set it up, install packages, etc. This may fail again if it also failed during setup or run - this is normal.
  2. If possible, synchronize the local {{kommandir_workspace}} to a remote kommandir (if used). (exekutir/roles/kommandir_workspace_update)
  3. If accessible, run job.xn on kommandir (local or remote). The default copy simply executes the cleanup.yml playbook to handle deallocation of peons and other resources. (exekutir.xn)
  4. For a remote kommandir, /home/{{uuid}} is synchronized back down to the exekutir’s local {{kommandir_workspace}} directory. This prevents state from being bound to a remote system. (exekutir/roles/kommandir_to_exekutir_sync role)
  5. Examine the state of the remote kommandir. If configured for automatic destruction (after some number of days), it will be destroyed. It will also be removed if the stonith flag is True, to support testing of the kommandir setup itself. (exekutir/roles/kommandir_destroyed/ role)
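The destruction decision in step 5 can be sketched with plain shell. This is a hedged illustration, not the kommandir_destroyed role itself; the function name and values are invented:

```shell
#!/bin/bash
# Sketch: destroy a remote kommandir when the stonith flag is set, or when
# it is older than a configured number of days.
should_destroy() {
    local created_epoch=$1 max_days=$2 stonith=$3
    local age_days=$(( ( $(date +%s) - created_epoch ) / 86400 ))
    if [ "$stonith" = "True" ] || [ "$age_days" -ge "$max_days" ]; then
        echo destroy
    else
        echo keep
    fi
}
# a ten-day-old kommandir with a seven-day limit:
verdict=$(should_destroy $(( $(date +%s) - 10 * 86400 )) 7 False)
echo "$verdict"    # -> destroy
```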

1.8   Examples

This section provides examples covering various usages and scenarios. They should all be taken as general guidelines, since they easily become out of date. If you notice a discrepancy, raise an issue regarding it.

1.8.1   Adding a public job

This example shows how to add a new public job into the ADEPT repository. It’s possible to store job details elsewhere, as documented here.

  1. The first step for any new job is to decide on its name, and what its speciality is. Begin by creating a subdirectory, and a README (jinja2 template) to describe its purpose. The README will be rendered into the results directory of the workspace for reference. In this example, a job called example will be created.

    $ mkdir -p jobs/example
    
    $ cat << EOF > jobs/example/README
    This job ({{ job_name }}) is intended to demonstrate
    adding a simple job.  If it's found in production,
    somebody has made a terrible, terrible mistake.
    
    (src: {{ job_path }}/{{ job_docs_template_src }})
    EOF
    
  2. Now you need to determine which aspects of the kommandir directory you want overwritten. Start with identifying the peons you need created and setup. For this example, we’ll limit the selection to only a single peon. Note, it’s also possible to use custom peon definitions by also adding/overriding files in the job’s inventory/host_vars subdirectory.

    $ ls kommandir/inventory/host_vars
    ...many...
    fedora-25-docker-latest.yml    # The one we'll use here
    ...others...
    
    $ mkdir -p jobs/example/inventory
    
    $ cat << EOF > jobs/example/inventory/peons
    [peons]
    fedora-25-docker-latest
    EOF
    
  3. In this example, there’s an additional Ansible Role we’d like to apply after all the default plays. Since the default playbooks are all renamed with a default_ prefix (exekutir/roles/exekutir_workspace_setup role), we can simply overwrite setup.yml to include default_setup.yml and then apply our special role.

    $ mkdir -p jobs/example/roles/frobnicated/tasks
    
    $ cat << EOF > jobs/example/roles/frobnicated/tasks/main.yml
    ---
    
    - name: Make docker daemon run in debug mode
      lineinfile:
        path: /etc/sysconfig/docker
        regexp: "^OPTIONS=[\\'\\"]?(.*)[\\'\\"]?"
        line: "OPTIONS='\\1 --debug=true --log-level=debug'"
    
    - name: Docker daemon is restarted to reload changed options
      service:
        name: docker
        state: restarted
    
    EOF
    
    $ cat << EOF > jobs/example/setup.yml
    ---
    
    # First do all the original, default plays.
    - include: default_setup.yml
    
    # Now apply the new frobnicated role
    - hosts: peons
      vars_files:
        - kommandir_vars.yml
      roles:
        - frobnicated
    EOF
    
  4. There’s no reason to go crazy with debug-mode tests, so we’ll just re-use whatever the basic job has going for it.

    $ ln -s ../basic/kommandir_vars.yml ../basic/templates jobs/example/
    
  5. Finally, the last step is to make sure the new job works. This should be performed on a system which meets the Testing/Development prerequisites.

    # (from the adept repository root)
    
    $ export WORKSPACE=/tmp/workspace
    $ rm -rf $WORKSPACE
    $ mkdir -p $WORKSPACE
    
    $ cat << EOF > $WORKSPACE/clouds.yml
    ---
    clouds:
        default:
            auth_type: thepassword
            auth:
                auth_url: http://example.com/v2.0
                password: foobar
                tenant_name: baz
                username: snafu
            regions:
                - Oz
            verify: False
    EOF
    

    A simple $WORKSPACE/exekutir_vars.yml causes the exekutir to run the example job. See the ADEPT Variables Reference for more details.

    $ cat << EOF > $WORKSPACE/exekutir_vars.yml
    kommandir_groups: ["nocloud"]
    public_peons: True
    job_path: $PWD/jobs/example
    kommandir_name_prefix: "$USER"
    extra_kommandir_setup:
        command: >
            cp "{{ hostvars.exekutir.workspace }}/clouds.yml"
               "{{ hostvars.exekutir.kommandir_workspace }}/"
    EOF
    

    Then we kick it off.

    $ ./adept.py setup $WORKSPACE exekutir.xn
    
    localhost ######################################
    Parameters:
        optional = ''
        xn = 'exekutir.xn'
        workspace = '/tmp/workspace'
        context = 'setup'
    
    ...cut...many...lines...
    
    $ ./adept.py run $WORKSPACE exekutir.xn
    
    localhost ######################################
    Parameters:
        optional = ''
        xn = 'exekutir.xn'
        workspace = '/tmp/workspace'
        context = 'run'
    
    ...cut...many...lines...
    
    $ ./adept.py cleanup $WORKSPACE exekutir.xn
    
    localhost ######################################
    Parameters:
        optional = ''
        xn = 'exekutir.xn'
        workspace = '/tmp/workspace'
        context = 'cleanup'
    
    ...cut...many...lines...
    
    $ ls $WORKSPACE
    
    ansible.cfg             exekutir_ansible.log           roles
    cache                   exekutir_setup_after_job.exit  run_after_job.yml
    callback_plugins        exekutir_vars.yml              run_before_job.yml
    cleanup_after_job.yml   inventory                      setup_after_job.yml
    cleanup_before_job.yml  kommandir_setup.exit           setup_before_job.yml
    clouds.yml              kommandir_workspace            ssh
    dockertest              results
    

1.8.2   Adding a private job

In certain cases, it’s desirable for the details of a particular job to live outside of the ADEPT repository. In this case, the steps are exactly the same as Adding a public job, except for one (possibly two) variables in exekutir_vars.yml:

job_path: /path/to/job/something/else
job_name: something

Here, it was necessary to set both job_name and job_path. If only the latter were set, job_name would have defaulted to else instead of something. See the ADEPT Variables Reference for more information.
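The fallback described above can be verified with plain shell, since job_name defaults to the basename of job_path:

```shell
#!/bin/bash
# Sketch: when job_name is unset, it falls back to basename(job_path).
unset job_name
job_path="/path/to/job/something/else"
job_name="${job_name:-$(basename "$job_path")}"
echo "$job_name"    # -> else, hence the explicit job_name: something above
```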

1.9   Hacking

1.9.1   Run the unittests

This requires the additional Testing/Development prerequisites. These tests run relatively quickly, and only do a self-sanity check to verify major operational areas.

$ unit2
...............................s......................
----------------------------------------------------------------------
Ran 54 tests in 9.998s

OK (skipped=1)

1.9.2   Run the CI test job

This is a special ADEPT job which runs entirely on the local machine, and verifies the operation of most major exekutir and kommandir plays. It does not have perfect coverage; for example, no cloud-based resources are used. It can be run with adept_debug and/or --verbose modes to retain the temporary workspace for examination.

$ ./test_exekutir_xn.sh
localhost ######################################
Parameters:
    optional = '-e some_magic_variable_for_testing='value_for_magic_variable''
    xn = 'exekutir.xn'
    workspace = '/tmp/tmp.wfyfHGypgq.adept.workspace'
    context = 'setup'

...cut...

Examining exit files
Checking kommandir discovery (before job.xn) cleanup exit file contains 0
Checking setup exit files
Verifying exekutir_setup_after_job.exit file contains 0
Verifying kommandir_setup.exit file contains 0
Checking contents of test_file_from_setup.txt
Checking run exit files
Verifying exekutir_run_after_job.exit file contains 0
Verifying kommandir_run.exit file contains 0
Checking contents of test_file_from_run.txt
Checking cleanup exit files
Verifying exekutir_cleanup_after_job.exit file contains 0
Verifying kommandir_cleanup.exit file contains 0
Checking contents of test_file_from_cleanup.txt
All checks pass

1.9.3   Local Kommandir

Having a kommandir (“slave”) node is useful in production because it offloads much of the grunt-work onto a dedicated system, with dedicated resources. It also decouples the job environment/setup from the execution environment. Using one only makes sense in production environments where the 5-minute setup cost can be spread over tens or hundreds of jobs.

However, for local testing/development purposes, the extra kommandir setup time can be excessive. If the local system (the exekutir) meets all the chosen cloud, kommandir, and job prerequisites, it’s possible to use the exekutir also as the kommandir. Note, however, the kommandir’s job.xn transition file and playbooks will still run from a dedicated workspace (created by the Exekutir).

Set the exekutir’s kommandir_groups variable to include nocloud. If required, also enable the flag to create network-accessible peons.

kommandir_groups: ["nocloud"]
public_peons: True

For regular/automated use, avoid repeating any context transition against the same workspace, and avoid running Ansible manually. However, for development/debugging purposes, depending on the job specifics, most contexts may be re-applied (within reason). Doing this may require manual manipulation of the uuid unless existing VMs are to be re-used. Otherwise, it’s safest to apply the cleanup context, then start over again with setup against a fresh workspace, with a fresh uuid.

1.9.4   OpenStack Cloud

This is the default for all bundled peons as per the peon_cloud_group variable value. The openstack group variables demand that you either set the $OS_* environment variables correctly, or drop a clouds.yml file into the relevant workspace.

  1. Important: verify that all the default peon images are accessible to your tenant by examining the group variable file kommandir/inventory/group_vars/openstack/peon_images.yml. If any are incorrect, fix them before proceeding; otherwise, those peons will almost certainly fail to be created.

  2. Set up your OpenStack credentials via the standard os-client-config file clouds.yml in the workspace, as shown below. The options are specific to your particular OpenStack setup; see the format and options documented here.

    # (from the adept repository root)
    
    $ export WORKSPACE=/tmp/workspace
    $ rm -rf $WORKSPACE
    $ mkdir -p $WORKSPACE
    
    $ cat << EOF > $WORKSPACE/clouds.yml
    ---
    clouds:
        default:
            auth_type: thepassword
            auth:
                auth_url: http://example.com/v2.0
                password: foobar
                tenant_name: baz
                username: snafu
            regions:
                - Oz
            verify: False
    EOF
    
  3. Populate the *exekutir’s* variables. In this example, the default (bundled) peon definitions are used (from kommandir/inventory/host_vars/). The other values select the job, name the kommandir VM, enable debugging, and set up subscriptions. The final value makes sure the kommandir VM has access to the same cloud for creating the peons.

    $ cat << EOF > $WORKSPACE/exekutir_vars.yml
    ---
    job_path: $PWD/jobs/basic
    kommandir_name_prefix: "$USER"
    adept_debug: True
    rhsm:
        username: nobody@example.com
        password: thepassword
    extra_kommandir_setup:
        command: >
            cp "{{ hostvars.exekutir.workspace }}/clouds.yml"
               "{{ hostvars.exekutir.kommandir_workspace }}/"
    EOF
    

    Note: If you want/need access to the peons as well, be sure to enable the public_peons flag.

  4. Apply the ADEPT setup context. Once this completes, a copy of all runtime source material will have been transferred to the workspace. This includes updating the initial exekutir_vars.yml and inventory files. As noted, manual changes made to the source will not be reflected at runtime unless the workspace is manually updated.

    $ ./adept.py setup $WORKSPACE exekutir.xn
    
    localhost ######################################
    Parameters:
        optional = ''
        xn = 'exekutir.xn'
        workspace = '/tmp/workspace'
        context = 'setup'
    
    ...cut...many...lines...
    
  5. Apply the ADEPT run context and/or inspect the workspace state.

    $ ./adept.py run $WORKSPACE exekutir.xn
    
    localhost ######################################
    Parameters:
        optional = ''
        xn = 'exekutir.xn'
        workspace = '/tmp/workspace'
        context = 'run'
    
    ...cut...many...lines...
    
  6. Whether or not setup or run were successful, always apply cleanup to release cloud resources.

    $ ./adept.py cleanup $WORKSPACE exekutir.xn
    
    localhost ######################################
    Parameters:
        optional = ''
        xn = 'exekutir.xn'
        workspace = '/tmp/workspace'
        context = 'cleanup'
    
    ...cut...many...lines...
    
    $ ls $WORKSPACE
    
    ansible.cfg             exekutir_ansible.log           roles
    cache                   exekutir_setup_after_job.exit  run_after_job.yml
    callback_plugins        exekutir_vars.yml              run_before_job.yml
    cleanup_after_job.yml   inventory                      setup_after_job.yml
    cleanup_before_job.yml  kommandir_setup.exit           setup_before_job.yml
    clouds.yml              kommandir_workspace            ssh
    dockertest              results
    

1.9.5   Other Clouds

A multitude of topologies are possible by changing the values of a few host and group variables. From the exekutir’s perspective, the kommandir will be created according to whichever group is set via kommandir_groups. For example, “openstack” will cause the group variables from exekutir/inventory/group_vars/openstack.yml to be brought in.

From the kommandir’s perspective, all default peons are created according to membership dictated by the peon_cloud_group. This value is used to help populate the peon_groups variable. The default value of “openstack” will cause all default peons to be created according to variables defined in the group variables files kommandir/inventory/group_vars/openstack/*.yml.

1.10   ADEPT Variables Reference

From the perspective of a task in a playbook, the lookup order is defined by Ansible’s documentation. This reference is just an overview of some of the common variables used throughout ADEPT’s plays and tasks.

1.10.1   High-level Variables

Use of variable-overrides on the Ansible command-line is highly discouraged. It has a non-obvious side effect which can make debugging very difficult: it forces those variables to be read-only, so later attempts to change their contents fail silently. Instead, there are two high-level YAML-dictionary variable files guaranteed to exist in workspace.

exekutir_vars.yml
This file’s variables are included in every play that runs on the exekutir. Whatever its original source, the exekutir’s exekutir_workspace_setup role will manipulate its contents during setup, to ensure consistency of critical variables like uuid.
kommandir_vars.yml
This only resides within the kommandir_workspace, but may have originated from inside job_path, copied by extra_exekutir_setup, or extra_kommandir_setup. It is included by every default play that runs on the kommandir. In all cases, the exekutir’s exekutir_workspace_setup role will always create and manipulate its contents during *setup* along with hard-coding critical values, like uuid.
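Following the warning above about command-line overrides, a value can instead be written into the workspace copy of exekutir_vars.yml before invoking adept.py. A minimal sketch (the variable shown is only an example):

```shell
#!/bin/bash
# Sketch: append an override to the workspace's exekutir_vars.yml rather
# than passing it with -e on the Ansible command line.
set -e
WORKSPACE=$(mktemp -d)
cat <<EOF >> "$WORKSPACE/exekutir_vars.yml"
adept_debug: True
EOF
contents=$(cat "$WORKSPACE/exekutir_vars.yml")
echo "$contents"
rm -rf "$WORKSPACE"
```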

1.10.2   Low-level Variables

Descriptions of specific, widely used or very important variables are defined below. This includes variables defined by tasks, playbooks, host_vars and group_vars files. General guidance is to define a variable in one place, which is as close to its usage-context as possible.

For example, if a variable is specific to a…

  • …playbook, it should wind up in kommandir_vars.yml or exekutir_vars.yml.
  • …role, place it in that role’s defaults/main.yml or vars/main.yml.
  • …group of hosts, define it under a named group (subdirectory) of inventory/group_vars.
  • …specific host, define it under that host’s inventory/host_vars.
  • …one or more tasks, use set_facts in the specific role, or the “common” role.
adept_debug
Boolean, defaults to False. Exekutir’s value copied to and overrides kommandir’s. Enables many Ansible debug statements across many roles that display variable values. Also disables removing files during the *cleanup* transition.
cleanup
Boolean, defaults to True. Exekutir’s value copied to kommandir’s. Enables/Disables removal of all peons by the kommandir, during the *cleanup* transition. See also, stonith.
cloud_environment
Dictionary, defaults to {}. Not shared between exekutir and kommandir. Defines the environment variable names and values that should be set when executing cloud_provisioning_command and cloud_destruction_command.
cloud_asserts
List, defaults to []. Not shared between exekutir and kommandir. List of Ansible conditionals (with jinja2 template resolution) which must all evaluate true, prior to executing cloud_provisioning_command and cloud_destruction_command.
cloud_provisioning_command
Dictionary, defaults to Null. Not shared between exekutir and kommandir. Same keys/values used for the Ansible shell module, excluding environment which is brought in by cloud_environment. Upon success, stdout is expected to be a valid YAML dictionary document. All values in that document will replace the corresponding keys in the file referenced by the hostvarsfile variable.
cloud_destruction_command
Dictionary, defaults to Null. Not shared between exekutir and kommandir. Same as cloud_provisioning_command, but defines the command for removing the host.
docker_autotest_timeout
Integer, defaults to a huge number. Only used by the autotested role (if enabled) on the kommandir. Specifies the number of minutes to set as the overall timeout for Autotest on each peon.
empty
Defines the set of values which are to be considered “not set” or “blank”. This is used as a convenience value for quickly testing whether a non-boolean variable contains something useful.
extra_exekutir_setup
Dictionary, defaults to Null. Not shared between exekutir and kommandir. Same keys/values used for the Ansible shell module. Represents a command to execute on the exekutir during the exekutir_workspace_setup role. Its purpose is to allow additional files to be copied into the workspace, such as cloud or access credentials.
extra_kommandir_setup
Dictionary, defaults to Null. Not shared between exekutir and kommandir. Same as extra_exekutir_setup. It is executed on the kommandir, after every time the kommandir_workspace_update role is applied.
hostvarsfile
Not shared between hosts. String, set to the current host’s YAML variables file. Nominally {{inventory_dir}}/host_vars/{{inventory_hostname}}.yml.
job_name
String, defaults to the basename of job_path. Exekutir’s value overrides kommandir’s. The value of this variable is primarily used to identify job-specific resources. For example, it is appended to the end of all peon names when they are created in an OpenStack cloud.
job_path
String, defaults to jobs/quickstart. Exekutir’s value copied to kommandir. This is the absolute path containing all files which should overwrite or support contents from the directory referenced by kommandir_workspace. This is the primary method to specialize a job’s activities. This is where you will find the job’s kommandir_vars.yml file.
job_subthings
List of strings, defaults to empty. Only used by autotested role (if enabled) on kommandir. List of Docker Autotest sub/sub-subtest names to include in the run. When empty, all sub/sub-subtests are considered for running.
kommandir_groups
List, defaults to ["nocloud"]. Not shared between exekutir and kommandir. Any Ansible groups the kommandir should be made a member of on the exekutir. The listed groups indicate which inventory/group_vars files should be used on that host.
kommandir_workspace
String, defaults to {{ workspace }}/kommandir_workspace on the exekutir. Only used in the exekutir/ playbooks. From the exekutir’s perspective, it represents the local path which contains the authoritative copy of the kommandir’s workspace. When the kommandir is a member of the nocloud group no synchronization is done, so this will also be the kommandir’s actual {{workspace}}.
kommandir_name_prefix
String, defaults to null. Not shared between exekutir and kommandir. When non-null, this is used as a prefix when discovering or creating a kommandir. It’s mainly used to control which kommandir is used for the job. For example, CI jobs testing ADEPT changes, should never use a production kommandir.
needs_reboot
Boolean, defaults to False. Only used by peons. If any role sets this to True, subsequent application of the rebooted role will result in that host being rebooted, and then confirmed accessible. Afterwards, the value is always reset back to False.
no_log_synchronize
Boolean, defaults to True. Exekutir’s value overrides kommandir’s. When False and adept_debug (above) is True or --verbose was used, the Ansible synchronize module will output the full contents of its operation. This can be a *HUGE* number (many hundreds) of output lines. Even when debugging, it’s recommended to keep this True unless the details are really needed.
peon_groups
List, no-default, mandatory. Only defined for peons but only used by kommandir. Defines the names of all groups a peon should be made a member of. This determines many other variable values for peons, all of which are defined by YAML files under kommandir/inventory/group_vars/.
peon_cloud_group
String, defaults to {{default_peon_cloud_group}} - “openstack”. Only defined for peons but only used by kommandir. All of the bundled default peon definitions (under kommandir/inventory/host_vars) list a single group indirectly by this value. Therefore, changing the value of this one variable will affect (by group membership) the values brought in for cloud_environment, cloud_provisioning_command, cloud_destruction_command, etc.
public_peons
Boolean, defaults to False. Only used by peons. When True, the cloud_provisioning_command should make every effort to allow unrestricted network access to created peons. Otherwise, when False, unrestricted access is optional, except by the kommandir.
pull_request_description
String, defaults to undefined. Exekutir’s value overrides kommandir’s. When set to a string, this is assumed to be the description text contained in the originating pull-request. Jobs may make use of this however they like. Specifically, the autotested role will attempt to convert this into a parameter to autotest’s --args option.
stonith
Boolean, defaults to False. Only used by kommandir during the exekutir/roles/kommandir_destroyed role. When True during the cleanup context, it forces removal of the kommandir. This is used primarily during CI jobs for ADEPT itself, to ensure that a temporary kommandir is destroyed.
uuid
DNS & Username compatible string, defaults to a random number. Exekutir’s value overrides kommandir’s. This is a critical value. It must never change throughout the duration of all context transitions, and for the lifetime of any kommandir. Its primary purpose is to prevent resource contention (hostnames, usernames, and directory names). However, for cloud-based kommandirs, it is also utilized to prevent workspace location clashes.
workspace
String, the path set by the $WORKSPACE environment variable by adept.py. This is the place where all runtime state and results are stored. See also kommandir_workspace.