OS automation with InfraKit on VMware

After developing the initial proof-of-concept for automating VMware VM instances with InfraKit, it became clear that a new mechanism would be needed to "initialise" the configuration of a brand-new instance.

But … Immutability?

On a personal level there is no doubt that immutability is the best practice for building modern systems and applications. Projects such as the Docker Engine, which provides immutable images containing only application components, or LinuxKit, which is designed to build an effectively read-only, minimal Linux-based OS, demonstrate these benefits. These relatively new projects have encouraged, and effectively pushed, people to embrace new concepts and new working methodologies in order to capture the benefits of immutability.

I want something in-between …

The majority of operating systems are designed to be flexible for a multitude of use-cases, which can make them hard to use when trying to adhere to an immutable working pattern. This has led to a number of different methods for producing repeatable operating system builds, including the following:

  • Bootstrapping, using tools such as kickstart to network-boot an operating system and install the packages specified in the kickstart file
  • Infrastructure automation, where tools automate the provisioning of “fleets” of infrastructure, such as bare-metal or virtual machines, from resources such as VM templates or machine images
  • Configuration management, where tools configure an existing operating system; typically SSH keys or agents installed within the operating system are needed to accomplish this task

This blog post covers in more detail the difference between infrastructure automation and configuration management.

Another way?

Ideally I wanted to keep all of the provisioning "inside", or as native as possible, removing dependencies on multiple APIs/CLIs or other interactions that can change, be deprecated, or even be abandoned in the future. I also didn't want to develop a matching solution that relied on SSH (no point copying an already-existing solution), or go down the rabbit-hole of developing an operating system agent and having to support such a thing (a security nightmare). After looking at some of the code that already existed as part of the vSphere plugin for InfraKit, I hit upon an interesting experiment.

Enter [experimental] vmwscript

Not the greatest name I agree, but it is only experimental…


Ruling out SSH access to a VM and configuration management agents led to working out what would be required for vmtoolsd to provide a reliable method for executing a process inside the operating system. Anyone who has played with vmtoolsd for this purpose may have hit some of the issues I experienced:

  • Odd TTY behaviour
  • Hanging processes
  • No STDOUT from the process
  • Processes not starting with any environment (no $PATH)
  • Others that I’ve since forgotten

After some Linux fiddling, some wrapping around the execution calls, and some head-scratching, it appeared that scriptable automation through vmtoolsd would be feasible!  🙂
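To give a rough flavour of the kind of wrapping involved, the helper below is a hypothetical sketch (not part of vmwscript itself): since vmtoolsd starts a process with no environment (no $PATH) and discards its stdout, each command can be rewritten to export a sane PATH and redirect its output to a file that can be collected afterwards.

```shell
# Hypothetical helper: rewrite a command so it survives being launched by
# vmtoolsd, which provides no environment (no $PATH) and no stdout capture.
wrap_cmd() {
  cmd="$1"     # the command to run inside the guest
  logfile="$2" # where to capture stdout/stderr, since vmtoolsd returns neither
  printf '%s\n' "export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; ${cmd} > ${logfile} 2>&1"
}

# Example: wrap a yum upgrade so its output lands in a retrievable log file.
wrap_cmd "yum upgrade -y" "/tmp/upgrade.log"
```

The wrapped string, not the raw command, is what gets handed to the guest for execution; the log file can then be downloaded out-of-band once the process completes.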

The positive of using the vCenter API and vmtoolsd is that all of the exec steps we want to run on the virtual machine can be multiplexed through vCenter. No direct access to the virtual machine is needed; in fact, the virtual machine doesn't need to be on the same network, or even have a network adapter, for vmwscript to automate the configuration of the operating system.

[Figure: standard automation via direct access to each VM]

[Figure: automation multiplexed through the VMware vCenter API]

The vCenter API provides access that traverses any network on which a VM may be hosted.

Another positive is that vmtoolsd is supported on multiple platforms, giving vmwscript Windows support for free! Looking at the release notes, it may cover FreeBSD and Solaris as well.

Basic usage

Set up the environment by using a file like the following:

export INFRAKIT_VSPHERE_VCURL=https://user:pass@vcenter_url/sdk
export INFRAKIT_VSPHERE_VCHOST=esxi01.vsphere.host

Apply the configuration to the current environment with $ . <path to config file>:

$ . /home/dan/infrakit_environment
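As a quick sanity check (purely illustrative, not part of InfraKit), you can confirm the variables were actually applied to the current shell; the values below are placeholders.

```shell
# Illustrative values only; in practice these come from the sourced file.
export INFRAKIT_VSPHERE_VCURL="https://user:pass@vcenter_url/sdk"
export INFRAKIT_VSPHERE_VCHOST="esxi01.vsphere.host"

# Confirm both InfraKit vSphere variables are present in the environment.
for v in INFRAKIT_VSPHERE_VCURL INFRAKIT_VSPHERE_VCHOST; do
  eval "val=\$$v"
  if [ -n "$val" ]; then
    echo "$v is set"
  else
    echo "$v is MISSING"
  fi
done
```

If either variable reports MISSING, the environment file was not sourced into the current shell (running it as a script instead of sourcing it is the usual culprit).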

Once the environment variables have been configured InfraKit will have everything needed to take a VM template and start applying changes to it.

$ ./infrakit x vmwscript ./path/to/json.json


VMware templating is certainly not a new technology, but vmwscript was designed with a few concepts inherited (or "stolen") from the Docker build workflow: take a base image, apply the relevant changes to it (naming and so on), and produce a new template, powered off and ready to consume. This is managed by setting outputType to Template; once all of the configuration steps have been applied, InfraKit will power down the virtual machine and convert it to a VMware template that can be consumed for future use.

    "label":"Updated template",
    "vmconfig" : {
        "guestCredentials" : {
            "guestUser" : "root",
            "guestPass" :"password"
    "deployment": [
            "name": "Template to be updated",
            "note": "Build new template for CentOS",
                "inputTemplate": "Old-Centos7-Template",
                "outputName": "New-Centos7-Template",
                "outputType": "Template",
                "commands": [
                    "note":"Upgrade all packages (except VMware Tools)",            
                    "cmd":"/bin/yum upgrade --exclude=open-vm-tools -y > /tmp/ce-yum-upgrade.log",

The above example takes the template Old-Centos7-Template, applies one set of commands to it, and outputs an updated template under the name New-Centos7-Template. As described above, the model follows the path laid out by the Dockerfile: we start with an image (template) and apply a set of changes that ultimately become the new image (template) we can use for all future deployments.

A much more complete example can be found in the InfraKit repository (https://github.com/docker/infrakit/blob/master/pkg/x/vmwscript/examples/Docker-EE.json); this template deployment will take any CentOS 7.x “minimal” installation and update the template so that we have an image configured to deploy the Docker Engine.

Deploying a Platform

Once we have built our updated image we can use it to deploy the multiple instances required to build a larger platform. We can automate this in the same way: specifying the outputType as VM will create running virtual machines, all built from the updated template. We can then apply "bootstrap" configuration that provides the final step of configuration, for example:


  • Build an updated web-server image: bootstrap with static content
  • Build a Docker Swarm cluster from an updated Docker image: bootstrap with the swarm join token
  • Build a load-balanced application engine image: bootstrap by adding itself to a load balancer upon boot

Example code to apply static networking and add to a swarm cluster:

            "name": "Swarm Worker",
            "note": "Add worker",
               "inputTemplate": "DockerEE-2Template",
               "outputName": "worker001",
               "outputType": "VM",
                "commands": [
                       "note":"Join Swarm",
                       "cmd":"/usr/bin/docker swarm join --token SWMTKN--X", 

The above example makes use of a static token that has been written into the configuration applied by InfraKit; whilst this will complete the join process successfully, it is a static way of working. The vmwscript utility also has the capability of working dynamically, by taking the output of a command and storing it in a key/value store to be used in other commands. A good example of this is the swarm example (https://github.com/docker/infrakit/blob/master/pkg/x/vmwscript/examples/swarm.json) hosted in the GitHub repository.

When we build the Docker swarm master we save the join token in a temporary location; we then download the token (and remove the temporary file). The text contents of this file are stored under the key jointoken.

                    "note":"Backing up swarm key for other nodes",            
                    "cmd":"/usr/bin/docker swarm join-token worker | grep SWMTKN > /tmp/swm.tkn",
                    "delAfterDownload": false

In the second deployment in the deployment array we deploy a worker node (as above), which requires a swarm join token. To make use of the dynamically generated and stored join token, we can access it through the key/value store as shown below:

                       "note":"Join Swarm",

Here we can see that the resultKey (the stored result from the previous command) can be accessed during execution via an execKey, allowing dynamic command generation when building out platforms.
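Conceptually this substitution is a tiny key/value store: the stdout of one command is saved under a key, and a later command line is completed with that stored value before being executed in the guest. A hedged sketch in shell (the token value and addresses are illustrative only):

```shell
# Sketch of the resultKey/execKey flow (illustrative values only):
# 1. the output of an earlier command is stored under a key ("jointoken");
# 2. a later command line is completed with that value before execution.
jointoken="SWMTKN-1-EXAMPLE"                    # value stored under resultKey
base_cmd="/usr/bin/docker swarm join --token"   # command referencing the execKey
final_cmd="${base_cmd} ${jointoken}"
echo "${final_cmd}"
```

The worker never needs the token written into its configuration; it is generated on the manager, captured once, and spliced into every subsequent join command.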

[Video: Docker InfraKit deploying Docker-EE from conference WiFi]


This is a purely experimental way of building images and then using those images to automate the build-out of a platform. The repository has examples for a few basic use-cases, such as an updated Docker virtual machine, and even a WordPress example that can be simply automated. The command syntax may change to make things more efficient, along with the addition of functionality such as:

  • Windows testing
  • More networking configuration support
  • The ability to have vSphere itself execute vmwscript (auto-scaling/healing of Docker infrastructure)
  • Better user / sudo support
