Automating VMware vCenter with InfraKit

After a brief discussion on Slack it became apparent that a post that details the steps to automate VMware vCenter with InfraKit would be useful.

Building InfraKit

I won’t duplicate the content that already exists on the InfraKit GitHub (located here); the only change is around the build of the binaries. Instead of building everything, we will just build the components that we need to automate vCenter.

The steps below are all that are required:

make build/infrakit && \
make build/infrakit-manager && \
make build/infrakit-group-default && \
make build/infrakit-flavor-swarm && \
make build/infrakit-instance-vsphere

Starting InfraKit

We can then start the group and flavour plugins as normal; however, the vSphere plugin requires some additional details in order to start successfully, namely the details of the vCenter we’re automating. These can be supplied in either of the following ways:

Environment var

export VCURL=https://user:pass@vcIPAddress/sdk

Plugin flag

./build/infrakit-instance-vsphere --url=https://user:pass@vcIPAddress/sdk

Note: For testing purposes the plugin can be instructed to not delete instances when they’re “Destroyed”, instead these instances will effectively stop being managed by InfraKit and will need deleting manually. This can be accomplished by starting the plugin with the flag --ignoreOnDestroy=true.
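
For example, starting the plugin for a test run with both the connection URL and the destroy-protection flag might look like the following (the credentials and address are placeholders):

./build/infrakit-instance-vsphere --url=https://user:pass@vcIPAddress/sdk --ignoreOnDestroy=true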

Using InfraKit to deploy instances from a VMware template/existing VM

In one of the more recent PRs, support was added for using VMware VM Templates (.vmtx) or existing Virtual Machines as the basis for new VM instances. This means that it becomes very straightforward to deploy and maintain an infrastructure using Virtual Machines that have been created through other tools.

Below is a snippet of the JSON used to describe the deployment of instances from a template:
"Instance": {
"Plugin": "instance-vsphere",
"Properties": {
"Datacenter" : "Home Lab",
"Datastore" : "vSphereNFS",
"Hostname" : "esxi01.fnnrn.me",
"Template" : "CentOS7-WEB",
"PowerOn" : true
}
}

The "Datacenter" is optional, and is only required in the even that the VMware vCenter cluster contains more than one Datacenter (linked-mode etc.). The new VM instances will be built using the

Using InfraKit to deploy instances from a LinuxKit .iso

In order to build instances using an .iso that has been pushed to VMware vCenter, additional details are required; this is because the .iso doesn’t contain any information describing a Virtual Machine. The plugin has a number of defaults for CPUs/Memory, but these can be overridden as shown below; also, the ISOPath must be correct for the Datastore that is used.

"Instance": {
"Plugin": "instance-vsphere",
"Properties": {
"Datacenter" : "Home Lab",
"Datastore" : "vSphereNFS",
"Hostname" : "esxi01.fnnrn.me",
"Network" : "HomeNetwork" ,
"Memory" : 512,
"CPUs" : 1,
"ISOPath" : "linuxkit/vSphere.iso",
"PowerOn" : false
}
}

This will create an entirely new Virtual Machine for every instance to the specs detailed in the properties and attach it to the VMware vSwitch/dvSwitch network.

Creating Instances

All instances will be created in a folder that matches the "ID" field (here "InfraKitVMs"); they will also be tagged internally so that InfraKit knows which instances it owns. Creating your allocated instances is then as simple as committing your InfraKit JSON:

./infrakit group commit InfraKitVMs.json

At this point the plugin will inspect your configuration. If the configuration has been committed before, the plugin will compare it with the current configuration and determine the changes that are required. If the configuration itself has changed (e.g. the Memory allocation), then the plugin will recreate the virtual machines with the updated configuration; if only the allocation of VMs has changed, the plugin will determine whether VMs need adding or removing and apply the relevant changes.
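
For example, to change the memory allocation for the group you would simply edit the JSON and commit it again; the plugin works out the rest:

# edit InfraKitVMs.json (e.g. bump the Memory value or the allocation size), then re-commit
./infrakit group commit InfraKitVMs.json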

VMware vCenter

If you move a Virtual Machine out of the InfraKit-created folder then it will no longer be monitored by InfraKit, which also means that InfraKit will create a new Virtual Machine to replace the one it previously managed. If you drag the Virtual Machine back into its folder, then the previous Virtual Machine will be deleted, as the allocation count will be +1.

VMware vSphere

vSphere doesn’t support the “construct” of folders, therefore all created Virtual Machines will reside in the root with all other Virtual Machines created on that vSphere host.

 

In pursuit of a tinier binary-(er)

… yes that was an attempt to make the title rhyme 🙁

tl;dr make an executable smaller by hiding your code inside the header of the executable… read on for the gory detail.

There was a great post recently from Dieter Reuter around building the smallest possible Docker image, which I thought posed an interesting idea, mainly due to some of the crazy sizes of Docker images I keep having to deal with. I decided that I would join in with the challenge and see how far I could shrink both a binary and, by association, the resulting Docker container down to its smallest possible size.

I created a number of binaries during my playing around; below is a list of five of them that all print the following text to STDOUT: "Hello Docker World!\n". If you’re not familiar with escaped characters, the ‘\n’ is simply the newline character.

*I realise I capitalised some of the strings by accident, but ‘h’ still occupies the same space as ‘H’ 😉

Initial failure

Before I delve into the steps I went through to make the small container, it’s worth pointing out that there is a fatal flaw in one of the above binaries when placed in a SCRATCH container. *hint* there is a duplicate binary with the suffix _STATIC 🙂

The reason that hello_in_C will fail to run in the SCRATCH container is that it has dynamic dependencies on a number of system libraries. Most notable is libc, the base C library that contains a lot of the basic day-to-day code providing standard functionality to C programs. If we were to place this into a Docker container, the following would be the result:

$ docker run -it --rm hello:C
standard_init_linux.go:178: exec user process caused "no such file or directory"
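
Running ldd against the dynamically linked binary on the host makes the problem obvious; the library versions and load addresses below are purely illustrative and will differ between systems:

$ ldd ./hello_in_C
        linux-vdso.so.1 (0x00007ffc...)
        libc.so.6 => /lib64/libc.so.6 (0x00007f...)
        /lib64/ld-linux-x86-64.so.2 (0x00007f...)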

We can examine binaries for external dependencies like this using the ldd tool, which lists the libraries needed to run the binary file. Alternatively, we can use volume mapping to pass the host Operating System libraries into the SCRATCH container with -v /lib64:/lib64:ro; this will provide the libraries required for this particular executable to execute successfully:

docker run -v /lib64:/lib64:ro -it --rm hello:C
Hello Docker World!

Permanently fixing this issue is quite simple and requires building the C binary with the -static compile-time flag (the glibc-static package will be required); this simply bundles all of the code into a single file instead of relying on external libraries. This has the knock-on effect of making the binary easier to run on other systems (as all of the code is in one place); however, the binary has now increased in size by 100 times… which is the opposite of what we’re trying to accomplish.
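
A minimal sketch of the two builds, assuming the source lives in a file called hello.c (the file name is illustrative):

# dynamically linked: small, but needs libc available at runtime
$ gcc -o hello_in_C hello.c

# statically linked: self-contained but far larger (requires the glibc-static package)
$ gcc -static -o hello_in_C_STATIC hello.c

# a static binary reports no dynamic dependencies
$ ldd ./hello_in_C_STATIC
        not a dynamic executable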

What makes an Executable

Ignoring MS-DOS .com files, which no-one has touched and which haven’t been supported in years, most executables, regardless of Operating System, typically consist of a header that identifies the executable type (e.g. elf64, winPE) and a number of sections:

  • .text, code that can be executed
  • .data, static variables
  • .rodata, static constants
  • .strtab /.shstrtab, string tables
  • .symtab, symbol tables.

The Executable header will contain an entry that points to the beginning of the .text section, which the Operating System will then use when the executable is started to find the actual code to run. This code then will access the various bits of data that it needs from the .data or .rodata sections.

Basic overview of a “Hello Docker World!” execution process

  1. The exec() family of functions will take the path of the file and attempt to have the OS execute it.
  2. The Operating System will examine the header to verify the file; if it is OK, it will examine the header structure and find the entry point.
  3. Once the entry point is found, the operating system will start executing the code from that point. It is at this point where the program itself is now running.
  4. The program will set up for the function to write the string to stdout
    1. Set string length
    2. Set the pointer to the string in the .data section
    3. Call the kernel
  5. Call the exit function (otherwise the kernel will assume the execution failed)

Strip out Sections

In the quick diagram above, we can see that through the course of execution there are a number of sections within the executable that aren’t needed. In most executables there may be debug symbols or various sections that apply to compilers and linkers, which are no longer required once the executable has been put together.

To produce a stripped executable, it can either be compiled with the -s flag (also make sure -g isn’t used, as this adds debug sections), or alternatively we can use the strip tool, which can remove all non-essential sections.

$ strip --strip-all ./hello_in_C_STRIPPED
$ ls -la hello_in_C_ST*
-rwxrwxr-x. 1 dan dan 848930 Feb 28 15:35 hello_in_C_STATIC
-rwxrwxr-x. 1 dan dan 770312 Feb 28 18:07 hello_in_C_STRIPPED

With languages such as Go, there can be significant savings from stripping any sections that aren’t essential (although if you’re doing this for production binaries it should be part of your compile/make process).
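
One way to bake this into the build itself is to pass linker flags to go build; a rough sketch (exact savings vary between Go versions):

# -s omits the symbol table, -w omits the DWARF debug information
$ go build -ldflags="-s -w" hello_docker_world.go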

Extreme Shrinking

The final option that will keep your hands clean when shrinking an executable is to make use of tools like UPX, which adds a layer of compression to your executable, shrinking what’s left of your stripped binary. Taking my original Go binary I went from:

  • go build hello_docker_world.go = 1633717 bytes
  • strip --strip-all = 1020296 bytes
  • upx = 377136 bytes

Clearly a significant saving in terms of space.
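
For reference, the rough sequence of commands used above, assuming upx is installed (it compresses the binary in place):

$ go build hello_docker_world.go
$ strip --strip-all ./hello_docker_world
$ upx ./hello_docker_world
$ ls -la ./hello_docker_world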

Getting your hands dirty

Everything that has been discussed so far has been compiled through standard build tools and modified with the compiler or OS toolchain that manages executables. Unfortunately we’ve gone as far as we can with these tools, as they will always build to the ELF/OS standards and always create the sections that they deem required.

In order to build a smaller binary, we’re going to have to move away from the tools that make building executables easier and hand craft a tailored executable. Instead of sticking with the format of [header][code][data], we’re going to look at how we can hide our code inside the header.

Whilst there are some parts of the header that are a requirement, there are some that just have to be a non-zero value and others that are left blank for future use. This is going to allow us to simply change entries in the ELF header from legal values to the code we want to execute, and the following will happen:

  1. The Operating System will be asked to execute the file
  2. The OS will read the ELF header, and verify it (even though some values don’t make sense)
  3. It will then find the code entry point in the header that points to the middle of the actual header 🙂
  4. The OS will then start executing from that point in the header, and run our code.

 

Explained Code below

This code pretty much fits just in the ELF header itself, so I have broken the header up and labelled the header fields and where we’ve hidden the code we want to execute.

First part of header (has to be correct)

org     0x05000000          ; Set origin address
db      0x7F, "ELF"         ; Identify as an ELF binary
dd      1                   ; 32-bit
dd      0                   ; Little endian
dd      $$                  ; Pointer to the beginning of the header
dw      2                   ; Code is executable
dw      3                   ; Instruction set (x86)
dd      0x0500001B
dd      0x0500001B          ; Entry point for our code (section below)
dd      4

Broken Header / Our code

mov     dl, 20              ; (sits in the section header address field) Take 20 characters
mov     ecx, msg            ; From the string at this address
int     0x80                ; (sits in the ELF flags field) Print them

Remaining header (has to be correct)

db      0x25                ; Size of the ELF header
dw      0x20                ; Size of the Program Header
dw      0x01                ; Entries in the Program Header

Remaining Code (now beyond the header)

inc      eax                ; Set the exit function
int      0x80               ; Call it

String section

msg     db      'Hello Docker world!', 10

It’s also worth pointing out that this code won’t be “compiled” in the usual sense; what is written above is effectively the binary format itself, and nasm will simply write out the bytes directly as described.

Build and run the executable with:

$ nasm -f bin ./tiny_hello_docker.asm -o hello_docker_world
$ chmod +x ./hello_docker_world
$ ./hello_docker_world
Hello Docker world!

Further Reading

This Wikipedia article covers the ELF standard in the most readable way I’ve come across: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format

A much more in-depth overview of hiding things in the ELF headers is available here: http://www.muppetlabs.com/~breadbox/software/tiny/teensy.html

InfraKit from Docker – an Overview

This marks a first, people actually complaining that my somewhat rambling blog post wasn’t ready…

It’s not going to be possible to cover all of the various topics around InfraKit that I aim to cover in a single post, so this is the first of a few posts that will focus on what InfraKit is, and how it connects together.

What is InfraKit

InfraKit was released by Docker quite recently and has already had ample coverage in the tech media and on Wikipedia. However, my take is that it is a framework (or toolkit; I suppose InfraWork doesn’t work as a name) that has the capability to interact with Infrastructure and provide provisioning, monitoring and healing facilities. The fact that it is a framework means that it is completely extendable, and anyone can take InfraKit and extend it to provide those Infrastructure capabilities for their particular needs. Basically, an Infrastructure vendor can take InfraKit and add in the capability to have their particular products managed by InfraKit. Although that’s a rather commercial point of view, and my hope is that overstretched Infrastructure Admins can take InfraKit and write their own plugins (and contribute them back to the community) that will make their lives easier.

Haven’t we been here before?

There has been a lot of politics happening in the world recently, so I’ll give a politician’s answer of yes and no. My entire IT career has been based around deploying, managing and monitoring Infrastructure, and over the last ~10 years I’ve seen numerous attempts to make our lives easier through various types of automation.

  • Shell scripts (I guess powershell, sigh 🙁 )
  • Automation engines (with their own languages and nuances)
  • Workflow engines (Task 1 (completed successfully) -> Task 2 (success) -> Task 3 (failed, roll back))
  • OS Specific tools
  • Server/Storage/Network Infrastructure APIs, OS/Hypervisor Management APIs, Cloud APIs … and associated toolkits

All of these have their place and power businesses and companies around the world, but where does InfraKit fit? The answer is that it has the flexibility to replace or enhance the entire list. Alternatively it can be used in a much more specific manner, where it will simply “keep the lights on” by replacing/rebuilding failed infrastructure or growing and scaling it to meet business and application needs.

What powers InfraKit

There are four components that make up InfraKit that are already well documented on GitHub, however the possibilities for extending them are also discussed in the overview.

Instance

This is the component that I’ve focused on so far, and it provides some of the core functionality of InfraKit. The Instance plugin provides the foundation of Infrastructure management for InfraKit and, as the name suggests, will provide an instance of an Infrastructure resource. Instance plugins take configuration data in the form of JSON, which provides the properties used to configure an instance.

So, a few possible scenarios:

Hypervisor Plugin:
  • VM Template: Linux_VM
  • Network: Dev
  • vCPUs: 4
  • Mem: 8GB
  • Name: dans_vm

Cloud Plugin:
  • Instance Type: Medium
  • Region: Europe
  • SSH_KEY: gGJtSsV8
  • Machine_image: linux

Physical Infrastructure Plugin:
  • Hardware Type: 2 cpu server
  • Server Template: BigData
  • OS build plan: RHEL_7
  • Power_State: On
  • Name: server01

I can then use my instance plugin to define 1, 10, 100, or as many instances as needed to provide my Infrastructure resource. But say I want 30 servers: 20 for web traffic, 7 for middleware and 3 at the back end for persistent storage. How do I define my infrastructure to be those resources…

Flavor

The flavor plugin is what provides the additional layer of configuration that takes a relatively simple instance and defines it as providing a specific set of services.

Again some possibilities that could exist:

WebServer Plugin:
  • Packages Installed: nginx
  • Storage: nfs://http/
  • Firewall: 80, 443
  • Config: nginx.conf
  • Name: http_0{x}

Middleware Plugin:
  • Packages Installed: rabbitmq
  • Firewall: 5672
  • Cert: ——– cert —
  • RoutingKey: middleware

Persistent Storage Plugin:
  • Packages Installed: mysql
  • Msql_Username: test_env
  • Msql_Password: abc123
  • Bind: 127.0.0.1
  • DB_Mount: /var/db

So given my requirements, I’d define 20 virtual machine instances and attach the web server flavor to them, and so on; that would give me the capacity and the configuration for my particular application or business needs. The flavor plugin not only provides the configuration control, it is also used to ensure that the instance is configured correctly and deemed healthy.

That defines the required infrastructure for one particular application or use case, however to separate numerous sets of instances I need to group them…

Group

The default group plugin is a relatively simple implementation that currently will just hold together all of your instances of varying flavors allowing you to create, scale, heal and destroy everything therein. However groups could be extended to provide the following:

  • Specific and tailored alerting
  • Monitoring
  • Chargeback or utilisation overview
  • Security or role based controls

InfraKit cli

The InfraKit cli is currently the direct way to interact with all of the plugins and direct them to perform their tasks based upon their plugin type. To see all of the various actions that can be performed on a plugin of a specific type, use the cli:

$ build/infrakit <group|flavor|instance> 

$ build/infrakit instance
Available Commands:
  describe    describe the instances
  destroy     destroy the resource
  provision   provisions an instance
  validate    validates an instance configuration

So if we were to use a hypervisor plugin to provision a virtual machine instance we would do something like the following:

Define the instance in some json:

{
    "Properties": {
        "template": "vm_http_template",
        "network": "dev",
        "vCPUs": 2,
        "mem": "4GB",
        ...
    }
}

Then provision the instance using the correct plugin:

$ build/infrakit instance instance.json --name instance-hypervisor

… virtual instance is provisioned …

 

The next post will cover the architecture of the plugins, the communication methods between them, and how more advanced systems can be architected through the use of InfraKit.

Fixing problems deploying Docker DC (Offline)

Inside my current employer there is quite the buzz around Docker, and especially Docker Datacenter being pushed as a commercial offering with Server Infrastructure. This in turn has led to a number of people deploying various Docker components in their labs around the world; however, they tend to hit the same issues.

I’m presuming that most of the lab environments must be an internal/secure environment as there is always a request to install without an internet connection (offline).

I did come across one person building Docker DC in one place and then trying to manually copy all of the containers to the lab environment (which inevitably broke all of the certificates); essentially, DON’T DO THAT.

Luckily for people wanting to deploy Docker DataCentre in an environment without an internet connection, there exists a full bundle containing all the bits that you need (UCP/DTR etc.). Simply download it on anything with an internet connection, transfer the bundle over to your offline machines and run the following:

$ docker load < docker_dc_offline.tar.gz

Unfortunately, the majority of people who follow the next steps tend to be confused with what happens next:

Unable to find image 'docker/ucp:latest' locally

Pulling repository docker.io/docker/ucp

docker: Error while pulling image: Get https://index.docker.io/v1/repositories/docker/ucp/images: dial tcp: lookup index.docker.io on [::1]:53: read udp [::1]:50867->[::1]:53: read: connection refused.

dockerdc_offline_fail

The culprit behind this is the default behaviour of Docker: based upon a commit in mid-2014, if no tag is specified for a docker image then it will default to the tag :latest. Simply put, the installation steps for Docker UCP ask the user to run the image docker/ucp with no tag and a number of passed commands. Docker immediately ignores the offline bundle images and attempts to connect to Docker Hub for the :latest version, where the installation promptly fails.

To fix this, you simply need to look at the images that have been installed by the offline bundle and use the tag as part of the installation steps.
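
A rough sketch of what that looks like; the tag shown here (1.1.2) is purely illustrative and should be whatever version your offline bundle actually loaded:

# list the UCP images loaded from the offline bundle and note the tag
$ docker images docker/ucp

# then run the installer with that explicit tag rather than relying on :latest
$ docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:1.1.2 install -i --host-address <node-ip>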

dockerdc_offline_sucess

ALSO: If you see this error, then please don’t ignore it 😀

WARNING: IPv4 forwarding is disabled. Networking will not work.

$ vim /etc/sysctl.conf:
net.ipv4.ip_forward = 1
...
sysctl -p /etc/sysctl.conf

Deploying Docker DataCenter

This is a simple guide to deploying some SLES hosts and configuring them to run Docker Engine, along with the configuration needed to deploy Docker Datacenter on top of the platform. It’s also possible to deploy the components using Docker for Mac/Windows as detailed here.

Disclaimer: This is in “no way” an official or supported procedure for deploying Docker CaaS.

Overview

I’ve had Docker DataCenter in about 75% of my emails over the last few months, and getting a private (lab) deployment done has certainly been on my to-do list. Given the HPE and SuSE announcement in September, I decided to see how easy it would be to run my deployment on a few SLES hosts; it turns out it’s surprisingly simple (although I was expecting something like the OpenStack deployments of a few years ago 😕 )

Also, if you’re looking to *just* deploy Docker DataCenter then ignore the Host configuration steps.

hpe_docker_suse

Requirements

  1. 1-2 large cups of tea (depends on any typing mistakes)
  2. 2 or more SLES hosts (virtual or physical makes no difference; 1 vCPU, 2GB RAM, 16GB disk); mine were all built from SLE-12-SP1-Server-DVD-x86_64-GM-DVD1
  3. A SuSE product registration, 60 day free is fine (can take 24 hours for the email to arrive) *OPTIONAL*
  4. A Docker Datacenter license 60 day trial is minimum *REQUIRED*
  5. An internet connection? (it’s 2016 … )

Configuring your hosts

SuSE Linux Enterprise Server (SLES) is a pretty locked down beast and will require a few things modified before it can run as a Docker host.

SLES 12 SP1 Installation

The installation was done through the CD images, although if you want to automate the procedure it’s a case of configuring your AutoYast to deploy the correct SuSE patterns. As you step through the installation screens there are a couple of screens to be aware of:

  • Product Registration: If you have the codes then add them in here, it simplifies installing Docker later. ALSO, this is where the Network Settings are hidden  😈 So either set your static networking here or alternatively it can be done through yast on the cli (details here). Ensure on the routing page that IPv4 Forwarding is enabled for Docker networking.
  • Installation Settings: The defaults can leave you with a system you can’t connect to.

sles12_install

Under the Software headline, deselect the patterns GNOME Desktop Environment and X Windows System, as we won’t be needing a GUI or full desktop environment. Also, under the Firewall and SSH headline the SSH port is blocked by default, which means you won’t be able to SSH into your server once the Operating System has been installed, so click (open).

So after my installation I ended up with two hosts (that can happily ping and resolve one another etc.):

ddc01  192.168.0.140 /  255.255.255.0

ddc02  192.168.0.141 /  255.255.255.0

The next step is to allow the myriad of ports required for UCP and DTR. This is quite simple and consists of opening up the file /etc/sysconfig/SuSEfirewall2 and modifying it to look like the following:

FW_SERVICES_EXT_TCP="443 2376 2377 12376 12379:12386"

Once this change has been completed, the firewall rules will be re-read by running the SuSEfirewall2 command.
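
On the hosts that is simply (run as root):

# re-read /etc/sysconfig/SuSEfirewall2 and apply the new rules
$ SuSEfirewall2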

Installing Docker with a Product registration

Follow the instructions here; there’s no point copying them twice.

Installing Docker without a Product registration

I’m still waiting for my 60-day registration to arrive from SuSE, so in the meantime I decided to start adding other repositories to deploy applications. NOTE: As this isn’t coming from an Enterprise repository it certainly won’t be supported.

So the quickest way of getting the latest Docker on a SLES system is to add the latest OpenSuSE repository; the following two lines will add the repository and install Docker:

zypper ar -f http://download.opensuse.org/tumbleweed/repo/oss/ oss
zypper in docker
...
docker -v
...
Docker version 1.12.1, build 8eab29e

 

To recap, we have a number of hosts configured with network connectivity and the required firewall ports open, and finally we have Docker installed and ready to deploy containers.

Deploying Docker Datacenter

Deploying the Universal Control Plane (UCP)

On our first node ddc01, we deploy the UCP installer, which automates the pulling of additional containers that make up the UCP.

docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i --host-address 192.168.0.140

Errors to watch for:

FATA[0033] The following required ports are blocked on your host: 12376.  Check your firewall settings. 

Make sure that you’ve edited the firewall configuration and reloaded the rules.

WARNING: IPv4 forwarding is disabled. Networking will not work.

Enable IPv4 forwarding in the yast routing configuration.

Once the installation starts it will ask you for a password for the admin user; for this example the password I set was ‘password’, however I highly recommend that you choose something a little more secure. The installer will also give you the option to set additional SANs on the TLS certificates for additional domain names.

The installation will complete, and in my environment I’ll be able to connect to my UCP by putting the address of ddc01 into a web browser.

ucp

Adding nodes to the UCP

After logging into the UCP for the first time, the dashboard will display everything that the Docker cluster is currently managing. There will be a number of containers displayed, as they make up the UCP (UCP web server, etcd, swarm manager, swarm client etc.). Adding additional nodes is as simple as adding Docker workers to a swarm cluster; possibly simpler, as the UCP provides you with a command that can be copied and pasted on all further nodes to add them to the cluster.
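
I won’t reproduce the exact command here as the UCP generates it for you, but the shape of it is roughly the following; the join subcommand and flag shown are from memory and purely illustrative, so copy the real command from the UCP UI:

docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join -i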

Note: The UCP needs a license adding, otherwise additional nodes will fail during the add process.

 add_nodes

Deploying the Docker Trusted Registry (DTR)

On ddc02, install the Docker Trusted Registry, as it’s not supported or recommended to have the UCP and the DTR on the same nodes.

From ddc02 we download the UCP certificate:

curl -k https://192.168.0.140/ca > ucp-ca.pem

To then install the DTR, run the docker command below; it will pull down the containers and add the registry to the control plane.

docker run -it --rm docker/dtr install --ucp-url http://192.168.0.140 \
  --ucp-node ddc02 \
  --dtr-external-url 192.168.0.141 \
  --ucp-username admin \
  --ucp-password password \
  --ucp-ca "$(cat ucp-ca.pem)"

docker_trusted_reg

Conclusion

With all this completed we have the following:

  • A number of configured hosts with correct firewall rules.
  • Docker Engine, that starts and stops the containers
  • Docker Swarm, which clusters together the Docker Engines (it’s worth noting that this is not the built-in swarm mode from 1.12; it still uses the swarm container to link together engines)
  • Docker DTR, the platform for hosting Docker images to be deployed on the engines
  • Docker UCP, as the front end to the entire platform.

docker_platform

I was pleasantly surprised by the simplicity of deploying the components that make up Docker Datacenter, although it does look a little like the various components are running behind the new functionality that has been added to the Docker engine. This is evident in that UCP doesn’t use the swarm mode that is part of 1.12 and wastes a little resource deploying additional containers to provide the swarm clustering.

It would be nice in the future to have a more feature-rich UI that provides workflow capabilities for composing applications, as currently it’s based upon hand-crafting compose files in YAML that you copy and paste into the UCP, or uploading your existing compose files. However, the UCP provides an excellent overview of your deployed applications and the current status of containers (logs and statistics).

A peek inside Docker for Mac (Hyperkit, wait xhyve, no bhyve …)

It’s no secret that the code for Docker for Mac ultimately comes from the FreeBSD hypervisor, and the people that have taken the time to modify it to bring it to the Darwin (Mac) platform have done a great job in tweaking code to handle the design decisions that ultimately underpin the Apple Operating System.

Recently I noticed that the bhyve project had released code for the E1000 network card, so I decided to take the hyperkit code and see what was required in order to add in the PCI code. What follows is a (rambling and somewhat incoherent) overview of what was changed in the move from bhyve to hyperkit, and some observations to be aware of when porting further PCI devices to hyperkit. Again, please be aware I’m not an OS developer or a hardware designer, so some of this is based upon a possibly flawed understanding… feel free to correct or teach me 🙂

Update: I’ve already heard from @justincormack about Docker for Mac: it uses vpnkit, not vmnet.

VMM Differences

One of the key factors that led to the portability of bhyve to OSX is that the Darwin kernel is loosely based upon the original kernel that powers FreeBSD (family tree from Wikipedia here), which means that a lot of the kernel structures and API calls aren’t too different. However, OSX is typically aimed at the consumer market and not the server market, meaning that as OSX has matured Apple have stripped away some of the kernel functionality that ships by default; the obvious one being the removal of TUN/TAP devices from the kernel (they can still be exposed by loading a kext (kernel extension)), which, although problematic, hyperkit has a solution for.

VM structure with bhyve

When bhyve starts a virtual machine it will create the structure of the VM as requested (allocate vCPUs, allocate the memory, construct PCI devices etc.); these are then attached to device nodes under /dev/vmm, and the bhyve kernel module handles the VM execution. Being able to examine /dev/vmm/ also provides a place for administrators to see what virtual machines are currently running, and allows them to continue running unattended.

Internally the bhyve userland tools make use of virtual machine contexts that link the VM name to the internal kernel structures that are running the VM instance. This allows a single tool to run multiple virtual machines, as you typically see with VMMs such as Xen, KVM or ESXi.

Finally, the networking configuration that takes place inside of bhyve… Unlike the OSX kernel, FreeBSD typically comes prebuilt with support for TAP devices (if not, the command kldload if_tap is needed). Simply put, the use of a TAP device greatly simplifies the handling of guest network interfaces. When an interface is created with bhyve, a PCI network device is created inside the VM and a TAP device is created on the physical host. When network frames are written to the PCI device inside the VM, bhyve actually writes these frames onto the TAP device on the physical host (using standard write() and read() functions on file descriptors), and those packets are then broadcast out on the physical interface on the host. If you are familiar with VMware ESXi then the concept is almost identical to the way a vSwitch functions.
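
For reference, a rough sketch of the usual TAP/bridge plumbing on a FreeBSD bhyve host; the interface names (em0, tap0, bridge0) are just examples and will differ per system:

# load the TAP driver if it isn't already present
kldload if_tap
# create a TAP device for the guest and bridge it to the physical NIC
ifconfig tap0 create
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm tap0 up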

bhyve Network

VM Structure with Docker for Mac (hyperkit)

So the first observation with the architecture of hyperkit is that all of the device node code under /dev/vmm/ has been removed, which has had the effect of making virtual machines process-based. This means that when hyperkit starts a VM it will malloc() all of the requested memory etc. and become the sole owner of the virtual machine; essentially, killing the hyperkit process will kill the VM. Internally, all of the virtual machine context code has been removed because the hyperkit process to VM relationship is now a 1:1 association.

The initial design to remove all of the context code (instead of possibly always tagging it to a single vm context) requires noticeable changes to every PCI module that is added/ported from bhyve as it’s all based on creating and applying these emulated devices to a particular VM context.

To manage VM execution hyperkit makes use of the hypervisor.framework which is a simplified framework for creating vCPUs, passing in mapped memory and creating an execution loop.

Finally there are the changes around network interfaces. From inside the virtual machine the same virtio devices are created as would be created on bhyve; the difference is in linking these virtual interfaces to a physical interface, as on OSX there is no TAP device that can be created to link virtual and physical. So there currently exist two methods to pass traffic between virtual and physical hosts: one is the virtio to vmnet (virtio-vmnet) PCI device and the other is the virtio to vpnkit (virtio-vpnkit) PCI device. These both use the virtio drivers (specifically the network driver) that are part of any modern Linux kernel and then hand over to the backend of your choice on the physical system.

It’s worth pointing out here that the vmnet backend was the default networking method for xhyve, and it makes use of the vmnet.framework, which as mentioned by other people is rather poorly documented. It also slightly complicates things by its design, as it doesn’t create a file descriptor that would allow the existing simple code to read() and write() from it, and it requires elevated privileges to make use of.

With the work that has been done by the developers at Docker, a new alternative method for communicating from virtual network interfaces to the outside world has been created. The solution from Docker has two parts:

  • The virtio-vpnkit device inside hyperkit that handles the reading and writing of network data from the virtual machine
  • The vpnkit component that has a full TCP/IP stack for communication with the outside world.

(I will add more details around vpnkit, when I’ve learnt more … or learnt OCaml, which ever comes first)

Networking overviews

bhyve overview (TAP devices)

bhyve_traffic

xhyve/hyperkit overview (VMNet devices)

hyperkit_traffic

 

 Docker for Mac / hyperkit overview (vpnkit)

docker_traffic

 

Porting (PCI devices) from bhyve to hyperkit

All of the emulated PCI devices adhere to a defined set of function calls, along with a structure that defines pointers to functions and a string that identifies the name of the PCI device (memory dump below).

pci_functions

The pci_emul_finddev(emul) function will look for a PCI device (e.g. E1000, virtio-blk, virtio-nat) and then manage the calling of its pe_init function, which will initialise the PCI device and then add it to the virtual machine PCI bus as a device that the operating system can use.

Things to be aware of when porting PCI devices are:

  • Removing VM context aware code, as mentioned it is a 1:1 between hyperkit and VM.
    • This also includes tying up paddr_guest2host() which maps physical addresses to guests etc.
  • Moving networking code from using TAP devices with read(), write() to making use of the vmnet framework

With regards to the E1000 PCI code, I’ve now managed to tie up the code so that the PCI device is created correctly and added to the PCI bus; I’m just struggling to fix the vmnet code (so feel free to take my poor attempt and fix it successfully 🙂 https://github.com/thebsdbox/hyperkit)


 

Further reading

http://bhyve.org/bhyve-fosdem2013.pdf

https://wiki.freebsd.org/bhyve

https://github.com/docker/hyperkit

Automate HPE OneView/Synergy with Chef and Docker

As per a previous post, I’ve been working quite a lot with HPE OneView (which powers HPE Synergy through its Composer) and thought I’d put a post together that summarises automating deployments through Chef. There is already tons of information (some of it somewhat sporadic) around the internet for using the HPE OneView Chef driver:

  • Build a Chef Environment and install HPE OneView Provisioning driver -> Here
  • Overview into the code architecture -> Here
  • Technical white paper -> Here

To simplify the process of using Chef with HPE OneView, I’ve put together a couple of Dockerfiles that build Docker images which simplify things so much that meeting the requirements to use the Chef/OneView provisioning driver only takes a few commands. A side effect of containing Chef and the provisioning driver is that it becomes incredibly simple to have a group of configurations and recipes that will interact with both Synergy Composers and HPE OneView instances that manage DL and BL servers.

 

The below image depicts how having multiple configurations would work:

oneview-chef-docker

Essentially, using -v /local/path:/container/path allows us to have three folders, each containing its own knife.rb (the configuration for each OneView/Composer) and a recipe.rb that is applicable to that particular set of infrastructure. The mapping described above will always map a path on the host machine to /root/chef inside the container, allowing simple provisioning through Chef. It also makes it incredibly simple to manage a number of sets of infrastructure from a single location.
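
As a rough sketch, a converge for one set of infrastructure might look like the following; the image name (oneview-chef) and host path are purely illustrative placeholders:

# map the folder holding knife.rb and recipe.rb for this OneView/Composer into the container
docker run -it --rm \
  -v /home/dan/oneview-lab1:/root/chef \
  oneview-chef chef-client -z /root/chef/recipe.rb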

Also be aware that a Chef Server isn’t required, but in order to be able to clean up (:destroy) machines created through Chef in a Docker container WITHOUT a Chef Server, ensure that Chef is recording what is deployed locally.

Example for your recipe.rb :

with_chef_local_server :chef_repo_path => '/root/chef/',
  :client_name => Chef::Config[:node_name]

Dockerfile is located here


 

For the more adventurous, it is also possible to have all of this code run from a Docker container on a Raspberry Pi (the same usage applies). To create a Docker image that will run on a Raspberry Pi, the Dockerfile is located here.

Raspberry Pi with Docker

I’ve put off purchasing Raspberry Pis for a few years, as I was pretty convinced that the novelty would wear off very quickly and they would be consigned to the drawer of random cables and bizarre IT equipment I’ve collected over the years (parallel cables and Zip drives o_O).

The latest iteration of the Pi is the v3, which comes with 1GB of RAM and 4 ARM cores, and it turns out that whilst it’s not exactly a computing powerhouse, it can still handle enough load to do a few useful things.

Raspberry Pis

I’ve currently got a pair of them in a Docker swarm cluster (Docker 1.12-rc3 for armv7l is available here), which has given me another opportunity to really play with Docker and try to replace some of my Linux virtual machines with “lightweight” Docker containers.

First up: to ease working between the two Pis I created an NFS share for all my Docker files etc. I then decided that having my own Docker registry to share images between hosts would be useful, so on my first node I did a docker pull for the Docker registry container and attempted to start it. This is where I realised that the container will just continuously restart; a quick peer inside the container itself revealed that it has binaries compiled for x86_64, not armv7l, so that clearly wasn’t going to work here. That chalks up failure number one for a pure Raspberry Pi Docker cluster, as my registry had to be run from a CoreOS virtual machine.

Once that was up and running, my first attempt to push an image from the Pis resulted in the error message:

https://X.X.X.X:5000/v1/_ping: http: server gave HTTP response to HTTPS client

After some googling it appears that this issue is related to having an insecure registry; it can be resolved by going down the route of generating certificates etc. However, to work around the insecurity issue, the docker daemon needs to be started with some parameters that allow the use of an insecure registry.

Note: The documentation online tells you to update the Docker configuration files and restart the Docker daemon; however, there appears to be a bug in the raspbian/debian implementation. For me, editing /etc/default/docker and adding DOCKER_OPTS='--insecure-registry X.X.X.X:5000' had no effect. This can be checked by looking at the output from $ docker info, as the insecure registries are listed there.

To fix this I had to edit the systemd startup files so that it would start dockerd with the correct parameters.

$ sudo vim /lib/systemd/system/docker.service

...

ExecStart=/usr/bin/dockerd --insecure-registry X.X.X.X:5000 -H fd://
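
After editing the unit file, reload systemd and restart the daemon before trying the push again; the image name below is just a placeholder:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker

# tag and push an image to the private registry
$ docker tag myimage X.X.X.X:5000/myimage
$ docker push X.X.X.X:5000/myimage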

After restarting the daemon, I can successfully push/pull from the registry.

Finally: the first service I attempted to move to a lightweight container resulted in a day and a half of fiddling (clearly a lot of learning needed), although to clarify, this was due to me wanting some capabilities that weren’t compiled into the existing packages.

Moving bind into a container is, “in theory”, relatively simple (a rough sketch of the resulting run command follows the list):

  • Pull a base container
  • pull the bind package and install (apt-get install -y bind9)
  • map a volume containing the configuration and zone files
  • start the bind process and point it at the mapped configuration files
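
A rough sketch of what that run command could look like; the image name (mybind) and host path are placeholders:

# map the configuration and zone files from the host and run bind in the foreground
docker run -d --name bind \
  -v /srv/bind:/etc/bind \
  -p 53:53/udp -p 53:53/tcp \
  mybind /usr/sbin/named -g -c /etc/bind/named.conf -u bind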

All of this can be automated through the use of a Dockerfile like this one. However, after further investigation and a desire to monitor everything as much as possible, it became apparent that the statistics-channel in bind 9.9 wouldn’t be sufficient and I’d need 9.10 (mainly for JSON output). After creating a new Dockerfile that adds in the unstable repos for debian and pulling the bind 9.10 packages, it turns out that debian compiles bind without libjson support 🙁 meaning that JSON output was disabled. This was the point where Docker and I started to fall out, as a combination of Docker’s layered file system and build’s lack of support for --privileged or the -v (volume) parameter got in the way. This resulted in me automating a docker container build that did the following:

  • Pull a base container
  • Pull a myriad of dev libraries, compilers, make toolchains etc.
  • download the bind9.10 tarball and untar it
  • change the WORKDIR and run ./configure with all of the correct flags
  • make install
  • delete the source directory and tarball
  • remove all of the development packages and compilers

This resulted in a MASSIVE 800MB docker image just to run bind 🙁 In order to shrink the image I attempted a number of alternative methods, such as using an NFS mount inside the container where all of the source code would reside for compiling, which wouldn’t be needed once make install had run. However, as mentioned, NFS mounts (which require --privileged to mount) aren’t allowed with docker build, and neither is passing one through with a -v flag, as that option doesn’t exist for builds. This left me with the only option of manually creating a docker container for bind: start a base container with a volume passed through containing the already “made” binaries along with the required shared libraries, and do a make install. This container could then be exited and committed as an image for future use, and was significantly smaller than the previous image.

dockerimages

There are a number of issues on the Docker GitHub page around docker build not supporting volume mapping or privileged features. However, the general response to the feature requests that would have assisted my deployment is that they are edge cases and won’t be part of Docker any time soon.

Still, I got there in the end, and with a third Pi in the post I’m looking forward to moving some more systems onto my Pi cluster and coding some cool monitoring solutions 😀