In pursuit of a tinier binary-(er)

… yes, that was an attempt to make the title rhyme 🙁

tl;dr: make an executable smaller by hiding your code inside the header of the executable… read on for the gory detail.

There was a great post recently from Dieter Reuter around building the smallest possible Docker image, which I thought posed an interesting idea, mainly due to some of the crazy sizes of the Docker images I keep having to deal with. I decided that I would join in with the challenge and see how far I could shrink both a binary and, by association, the resulting Docker container down to its smallest possible size.

I created a number of binaries while playing around; below is a list of five of them, all of which print the following text to STDOUT: "Hello Docker World!\n". If you’re not familiar with escaped characters, ‘\n’ is simply the newline character.

*I realise I capitalised some of the strings by accident, but ‘h’ still occupies the same space as ‘H’ 😉

Initial failure

Before I delve into the steps I went through to make the small container, it’s worth pointing out that there is a fatal flaw in one of the above binaries when placed in a SCRATCH container. *hint* there is a duplicate binary with the suffix _STATIC 🙂

The reason that hello_in_C will fail to run in the SCRATCH container is that it has dynamic dependencies on a number of system libraries. Most notable is libc, the base C library that contains a lot of the basic day-to-day code providing standard functionality to C programs. If we were to place this binary into a Docker container, the following would be the result:

$ docker run -it --rm hello:C
standard_init_linux.go:178: exec user process caused "no such file or directory"

We can examine a binary for external dependencies using the ldd tool, which lists the external libraries needed to run it. Alternatively, we can use volume mapping to pass the host Operating System libraries into the SCRATCH container with -v /lib64:/lib64:ro, which provides the libraries required for this particular executable to run successfully.
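
As a rough illustration (the exact libraries and load addresses will differ on your system), checking the dynamic binary with ldd looks something like this:

$ ldd ./hello_in_C
	linux-vdso.so.1 (0x00007ffd...)
	libc.so.6 => /lib64/libc.so.6 (0x00007f...)
	/lib64/ld-linux-x86-64.so.2 (0x00007f...)

Every entry on the right-hand side is a library that must exist inside the container for the binary to start.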

$ docker run -v /lib64:/lib64:ro -it --rm hello:C
Hello Docker World!

Fixing this issue permanently is quite simple and requires building the C binary with the -static compile-time flag (the glibc-static package will be required); this simply bundles all of the code into a single file instead of relying on external libraries. This has the knock-on effect of making the binary easier to run on other systems (as all of the code is in one place), however the binary has now increased in size by roughly 100 times… which is the opposite of what we’re trying to accomplish.
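
As a sketch (the source file name here is an assumption, and the package manager will vary by distribution; on RPM-based systems the package is glibc-static), the static build looks something like:

$ sudo yum install glibc-static
$ gcc -static hello_in_C.c -o hello_in_C_STATIC
$ ldd ./hello_in_C_STATIC
	not a dynamic executable

That "not a dynamic executable" message from ldd is exactly why it will now happily run inside a SCRATCH container.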

What makes an Executable

Ignoring MS-DOS .com files, which no-one has touched and which haven’t been supported in years, most executables, regardless of Operating System, typically consist of a header that identifies the executable type (e.g. elf64, winPE) and a number of sections:

  • .text, code that can be executed
  • .data, static variables
  • .rodata, static constants
  • .strtab /.shstrtab, string tables
  • .symtab, symbol tables.

The Executable header will contain an entry that points to the beginning of the .text section, which the Operating System will then use when the executable is started to find the actual code to run. This code then will access the various bits of data that it needs from the .data or .rodata sections.
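
If you want to poke at this yourself, the binutils readelf tool will dump both parts; a quick example against the earlier C binary (the binary name is just carried over from above):

$ readelf -h ./hello_in_C    # dumps the ELF header, including the entry point address
$ readelf -S ./hello_in_C    # lists the sections: .text, .data, .rodata, .symtab and friends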

Basic overview of a “Hello Docker World!” execution process

  1. The exec() family of functions will take the path of the file and attempt to have the OS execute it.
  2. The Operating System will examine the header to verify the file; if OK it will examine the header structure and find the entry point.
  3. Once the entry point is found, the operating system will start executing the code from that point. It is at this point that the program itself is now running.
  4. The program will set up the call that writes the string to stdout
    1. Set the string length
    2. Set the pointer to the string in the .data section
    3. Call the kernel
  5. Call the exit function (otherwise the kernel will assume the execution failed); a sketch of steps 4 and 5 in assembly follows below
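
For reference, steps 4 and 5 in a conventional 32-bit Linux assembly version of the program (built the normal way, before any header trickery) look roughly like this:

section .data
msg:    db      'Hello Docker World!', 10       ; the string plus a trailing newline
len:    equ     $ - msg                         ; length worked out at assemble time

section .text
global _start
_start:
        mov     eax, 4          ; syscall number for sys_write
        mov     ebx, 1          ; file descriptor 1 (stdout)
        mov     ecx, msg        ; pointer to the string in .data
        mov     edx, len        ; number of bytes to write
        int     0x80            ; call the kernel
        mov     eax, 1          ; syscall number for sys_exit
        xor     ebx, ebx        ; exit code 0
        int     0x80            ; call the kernel

Assembled and linked with something like $ nasm -f elf32 hello.asm -o hello.o && ld -m elf_i386 hello.o -o hello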

Strip out Sections

In the quick diagram above, we can see that over the course of execution there are a number of sections within the executable that aren’t needed. In most executables there may be debug symbols or various sections that apply to compilers and linkers, which are no longer required once the executable has been put together.

To get a stripped executable, it can be compiled with the -s flag (also make sure -g isn’t used, as this adds debug sections). Alternatively, we can use the strip tool, which has the capability to remove all non-essential sections.

$ strip --strip-all ./hello_in_C_STRIPPED
$ ls -la hello_in_C_ST*
-rwxrwxr-x. 1 dan dan 848930 Feb 28 15:35 hello_in_C_STATIC
-rwxrwxr-x. 1 dan dan 770312 Feb 28 18:07 hello_in_C_STRIPPED

With languages such as Go, there can be significant savings from stripping any sections that aren’t essential (although if you’re doing this for production binaries, it should be part of your compile/make process).
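
For Go specifically, the linker can do much of this at build time rather than running strip afterwards; a hedged example (the exact savings will vary between Go versions):

$ go build -ldflags="-s -w" hello_docker_world.go

Here -s omits the symbol table and -w omits the DWARF debug information.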

Extreme Shrinking

The final option that will keep your hands clean when shrinking an executable is to make use of tools like UPX, which adds a layer of compression to your executable, shrinking what’s left of your stripped binary. Taking my original Go binary I went from:

  • go build hello_docker_world.go = 1633717 bytes
  • strip --strip-all = 1020296 bytes
  • upx = 377136 bytes

Clearly a significant saving in terms of space.

Getting your hands dirty

Everything discussed so far has been compiled with standard build tools and modified with the compiler or OS toolchain that manages executables. Unfortunately we’ve gone as far as we can with these tools, as they will always build to the ELF/OS standards and always create the sections that they deem required.

In order to build a smaller binary, we’re going to have to move away from the tools that make building executables easier and hand-craft a tailored executable. Instead of sticking with the format of [header][code][data], we’re going to look at how we can hide our code inside the header.

Whilst there are some parts of the header that are a requirement, there are some that just have to be a non-zero value and others that are left blank for future use. This allows us to simply change entries in the ELF header from legal values to the code we want to execute, and the following will happen:

  1. The Operating System will be asked to execute the file
  2. The OS will read the ELF header, and verify it (even though some values don’t make sense)
  3. It will then find the code entry point in the header that points to the middle of the actual header 🙂
  4. The OS will then start executing from that point in the header, and run our code.

 

Explained Code below

This code pretty much fits just in the ELF header itself, so I have broken the header up and labelled the header fields and where we’ve hidden the code we want to execute.

First part of header (has to be correct)

org     0x05000000              ; Set origin address
db      0x7F, "ELF"             ; Identify as ELF binary
dd      1                       ; 32-bit
dd      0                       ; Little endian
dd      $$                      ; Pointer to the beginning of the header
dw      2                       ; Code is executable
dw      3                       ; Instruction set (x86)
dd      0x0500001B
dd      0x0500001B              ; Entry point for our code (section below)
dd      4

Broken Header / Our code

mov     dl, 20                  ; (ELF: address of sections header) Take 20 characters
mov     ecx, msg                ; From the string at this address
int     0x80                    ; (ELF: flag table) Print them

Remaining header (has to be correct)

db      0x25                    ; Size of the ELF header
dw      0x20                    ; Size of the Program Header
dw      0x01                    ; Entries in the Program Header

Remaining Code (now beyond the header)

inc     eax                     ; Set exit function
int     0x80                    ; Call it

String section

msg     db      'Hello Docker world!', 10

It’s also worth pointing out that this code won’t really be “compiled”: what is written above is essentially the binary layout itself, so nasm will take the text and write out the binary code directly as written.

Build and run the executable with:

$ nasm -f bin ./tiny_hello_docker.asm -o hello_docker_world
$ chmod +x ./hello_docker_world
$ ./hello_docker_world
Hello Docker world!
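
And to bring it back to the challenge that started all of this, the hand-crafted binary has no external dependencies at all, so in theory it can sit in a SCRATCH image on its own. A minimal sketch (the image tag is just an assumption):

FROM scratch
COPY hello_docker_world /hello_docker_world
ENTRYPOINT ["/hello_docker_world"]

$ docker build -t hello:asm .
$ docker run --rm hello:asm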

Further Reading

This Wikipedia article covers the ELF standard in the most readable way I’ve come across: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format

A much more in-depth overview of hiding things in the ELF headers is available here: http://www.muppetlabs.com/~breadbox/software/tiny/teensy.html

InfraKit – Writing an Instance Plugin

For a proof of concept I’ve had the opportunity to re-implement a Docker InfraKit instance plugin from scratch (in another language). To help anyone else who decides to do something similar, I thought it best to capture some of the key points that will need implementing when writing your own plugin.

Starting from Scratch

If you’re going to write a plugin for InfraKit in another language then you’ll need to ensure that you implement all of the correct interfaces, APIs and methods needed to provide the expected behaviour. Your plugin may also need to implement the capability to maintain and store expected state somewhere (perhaps locally or in an external store).

Requirements

  • UNIX Sockets, that can be accessed by the InfraKit CLI tool
  • HTTPD Server that is bound to the UNIX Socket
  • JSON-RPC 2.0 that will be used as the transport mechanism for methods and method results
  • Methods to:
    • Validate the instance configuration
    • Determine the state of an instance
    • Provision new instances
    • Destroy instances
    • Describe instance parameters (configuration that can be made through the JSON methods)
  • Ideally, garbage collection of sockets upon exit, as leaving abandoned sockets in the plugin directory can be somewhat confusing to the CLI tool.

UNIX Sockets

The UNIX sockets should always reside in the same location so that the CLI can find them, which is a .infrakit/plugins/ directory inside the home directory of the user that runs the plugins; my plugins therefore reside in /home/dan/.infrakit/plugins/. It may be apparent that by using UNIX sockets we’re effectively binding InfraKit to the file system of a single system, however it’s not a technical challenge to bind your plugin socket to a TCP port through tools such as nc or socat, as shown below.
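
As a rough sketch of that idea (the port number and socket name here are arbitrary), socat can expose a plugin socket over TCP like so:

$ socat TCP-LISTEN:24864,reuseaddr,fork UNIX-CONNECT:/home/dan/.infrakit/plugins/instance-myplugin

Anything that connects to port 24864 on the host is then relayed straight through to the plugin’s UNIX socket.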

HTTPD Server

Typically an HTTP server would (under the covers) create a socket and bind it to a TCP port, usually 80 or 443. However, in this implementation we create a UNIX socket bound to a file on the local file system, so instead of connecting to an IP address/port we speak to a file on the file system in the same manner.

JSON-RPC 2.0

After some head scratching around some inconsistencies with the API, it was decided by the InfraKit developers to move to JSON-RPC 2.0 as the method for calling procedures. This simply wraps methods and parameters into a standardised JSON format, and expects the method results to be reported in the same standardised way. It’s a slightly bizarre standard as it only makes use of the HTTP POST method and expects an HTTP 200 OK to be returned even in the event that a function fails. The reason for this is that errors should be encapsulated in the returned JSON, essentially moving the error reporting down (or up, I’m not sure) the stack.
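
To get a feel for the shape of these calls, here’s a hedged example of poking a plugin socket by hand with curl (the method name, parameters and URL path are illustrative rather than the exact InfraKit schema):

$ curl --unix-socket ~/.infrakit/plugins/instance-test \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"Instance.Validate","params":{"Properties":{}},"id":1}' \
    http://localhost/

Even a failing call should come back as HTTP 200 OK, with the failure wrapped in the "error" member of the JSON-RPC response body.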

Methods

The typical workflow for provisioning through the Instance plugin is as follows:

  1. After a user commits some configuration the InfraKit CLI will parse it and pass the correct parameters to the instance plugin that is defined in the configuration.
  2. The Instance plugin will then read through the parameters and use them when interacting (API call, CLI scraping, vendor specific protocol API etc.) with the Infrastructure provider.
  3. The Provider will then provision a new instance.
  4. Once complete the provider will return the new instance details to the Instance Plugin.
  5. The Instance plugin will report back to InfraKit that the instance has been provisioned.

The workflow above is pretty simplistic, but I presume these will be the common steps for interacting with most infrastructure providers; however, depending on your provider there can be some caveats. The main caveat relates to infrastructure that can’t be provisioned immediately, which can result in InfraKit provisioning multiple instances because the instance won’t have been provisioned within the group plugin polling window. If your instance is going to take a number of minutes to provision and you poll for it every 30 seconds, InfraKit will try to provision a new instance on every timeout, and once the instance count matches what is required it will have to start destroying the excess instances as they come online.

The possible solutions are to either increase the poll timeout so that the instance will be provisioned within that window and be reported back as created, modify the group plugin, or build some intelligence into your instance plugin. The plugin that I developed as part of a PoC had instances that would take ~4 minutes to be provisioned, which meant that the instance plugin needed a method to track what it had provisioned so that it could then check with the provider what state those instances were currently in.

Instance State

There are a number of different ways for a plugin to handle state; it could well be that just querying the provider will return the instance state. However, some providers may make it difficult to distinguish instances created by InfraKit from instances created directly, etc. So in my example I needed the plugin to keep track of instances and maintain state across plugin restarts.

  1. InfraKit requires the list of instances so that it can make sure that resource is allocated correctly, so it asks the instance plugin to describe all of its instances.
  2. The Instance plugin will iterate through all of the last known state information.
  3. Each of the instances in its last known state will be checked with the provider to determine if they’re created/in-progress/failed.
  4. State information is updated with the latest information.
  5. The instances that are created or in-progress are listed back to InfraKit, where it will either be satisfied with instance count or require some more instances provisioned.

 

The Instance plugin code, along with its state handling, is located here: https://github.com/thebsdbox/InfraKit-Instance-C

 

InfraKit – The Architecture

Turns out that my previous InfraKit post was almost a complete duplication of something Docker posted pretty much at the same time .. -> https://blog.docker.com/2016/11/docker-online-meetup-recap-infrakit/

One thing that hasn’t so far been covered in a deep-dive fashion is around the plugins themselves, what they are, how you communicate with them and how they communicate with your infrastructure. So in this post, I aim to cover those pieces of information.

Plugin Architecture

Each plugin for InfraKit is technically its own application, and in most cases can be interacted with directly. As an application it needs to conform to a number of standards defined by InfraKit, and have the following capabilities and behaviours:

  • Create a UNIX socket in /var/run/infrakit
  • Host a httpd server on that socket
  • Respond correctly to a number of URLs dependent on the "type" of plugin
  • Respond to the following HTTP Methods:
    • GET, typically will be looking at system or instance state
    • PUT, will be looking to perform instance provisioning or system configuration
  • Handle JSON passed in and parse the data for configuration details

The design choices offer the following benefits:

  • Adheres to the same API design as docker.socket -> https://docs.docker.com/engine/reference/api/docker_remote_api/
  • The use of a UNIX socket (that lives on the filesystem) allows more complex plugins to be placed in containers and their interfaces exposed through volume mapping.
  • As it uses standard HTTP for its communication, it doesn’t tie the plugins to any language. In theory a Python plugin could simply be written using SimpleHTTPServer and bound to a UNIX socket.

Reverse-Engineering Ahead!

To see what is happening under the covers, we can create a fake plugin and use the infrakit cli to interact with it. Below I’m creating a socket using netcat-openbsd and placing it in the plugins directory. This would be visible with $ build/infrakit plugins ls, however that command doesn’t interact with the plugins and just lists the sockets in that directory.

We will now use the infrakit cli to provision an instance by passing it some configuration JSON and asking it to use the "fake" plugin. Immediately afterwards netcat will output the data that was sent to the socket it created, and we can immediately see the URL format along with the JSON data that would be sent to a plugin.

$ nc -lU /root/.infrakit/plugins/instance-test &
$ build/infrakit instance provision test.json --name instance-test
POST /Instance.Provision HTTP/1.1
Host: socket
User-Agent: Go-http-client/1.1
Transfer-Encoding: chunked
Accept-Encoding: gzip

{"Properties":{"Instance":{"Plugin":"instance-file","Properties":{}},"Flavor":{"Plugin":"flavor-vanilla","Properties":{"Size":1}}},"Tags":null,"Init":"","LogicalID":null,"Attachments":null}

From here we can see that the URLs are in the form of /<plugin type>/<action> e.g. POST /Instance.Provision

How to communicate with Plugins

InfraKit gives you the flexibility to go from the simple provisioning of an instance, to having your Infrastructure deployed, monitored, and in the event of mis-configuration or failure, healed. In the simplest example of just needing an instance, through the InfraKit CLI we can explicitly request a new instance and pass the JSON that will contain the configuration needed to define that new instance.

$ build/infrakit instance provision physical_server.json --name instance-physical

…

However, for an end-to-end infrastructure definition, a full infrastructure specification would need to be passed to a group plugin for it to ensure that the infrastructure is created.

[Image: infrakit-configuration-json]

How plugins communicate with your infrastructure

Strictly speaking, the InfraKit architecture covers the design and interaction between plugins (sockets, URL adherence, parsing JSON for configuration information). The plugin itself can be written to use any method that makes sense to provision and destroy the infrastructure, such as speaking to APIs or even SSH’ing into network switches (it’s 2016, but sadly that’s still the way to interact with some devices 🙁 )

If we look at the vagrant plugin (line 55) we can see that it reads all of the properties that were passed through the JSON (CPUs, memory, networking) and builds a Vagrantfile; the plugin will then call vagrant up, which will start the guest machine based upon the configuration in the Vagrantfile.
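
As a purely illustrative example (the property names here are an approximation, not the plugin’s exact schema), the Properties handed to the vagrant plugin might look something like:

{
    "Properties": {
        "Box": "ubuntu/trusty64",
        "CPUs": 2,
        "Memory": 1024
    }
}

The plugin’s only real job is to translate that JSON into the equivalent Vagrantfile settings before calling vagrant up.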

Some examples that could exist in the future

A networking plugin that takes the properties, ensures that those changes are written to the switch and also makes sure that the switch is compliant with the configuration. Given that a lot of switch configuration and feature sets are identical across vendors, it could be possible to have networking properties that are vendor-agnostic, where the instance-networking plugin identifies the vendor and provides the configuration that matches their interfaces.

[Image: infrakit-instance-network]

DISCLAIMER: I'm only using HPE OneView as an example as I know the API and its capabilities. There is no guarantee that HPE would write such a plugin.

 

A physical server plugin that would make use of the HPE OneView API to deploy Physical Server Instances.

[Image: infrakit-instance-oneview]

From the plugins that already exist to the ideas for the future, it quickly becomes clear that InfraKit can really provide Infrastructure as Code. As the plugin ecosystem continues to grow and evolve, InfraKit has the opportunity to provide a great way to help orchestrate your infrastructure.

Find out more about InfraKit and the plugins on the InfraKit GitHub …

InfraKit from Docker – an Overview

This marks a first: people actually complaining that my somewhat rambling blog post wasn’t ready…

It’s not going to be possible to cover all of the various topics around InfraKit that I aim to cover in a single post, so this is the first of a few posts that will focus on what InfraKit is, and how it connects together.

What is InfraKit

InfraKit was released by Docker quite recently and has already had ample coverage in the tech media and on Wikipedia. My take is that it is a framework (or toolkit, I suppose InfraWork doesn’t work as a name) that has the capability to interact with Infrastructure and provide provisioning, monitoring and healing facilities. The fact that it is a framework means that it is completely extendable, and anyone can take InfraKit and extend it to provide those Infrastructure capabilities for their particular needs. Basically, an Infrastructure vendor can take InfraKit and add in the capability to have their particular products managed by InfraKit. Although that’s a rather commercial point of view, my hope is that overstretched Infrastructure Admins can take InfraKit and write their own plugins (and contribute them back to the community) that will make their lives easier.

Haven’t we been here before?

There has been a lot of politics happening in the world recently, so I choose to give a politician’s answer of yes and no. My entire IT career has been based around deploying, managing and monitoring Infrastructure, and over the last ~10 years I’ve seen numerous attempts to make our lives easier through various types of automation:

  • Shell scripts (I guess powershell, sigh 🙁 )
  • Automation engines (with their own languages and nuances)
  • Workflow engines (Task 1 (completed successfully) -> Task 2 (success) -> Task 3 (failed, roll back))
  • OS Specific tools
  • Server/Storage/Network Infrastructure APIs, OS/Hypervisor Management APIs, Cloud APIs … and associated toolkits

All of these have their place and power businesses and companies around the world, but where does InfraKit fit? The answer is that it has the flexibility to replace or enhance the entire list. Alternatively, it can be used in a much more specific manner, where it will simply “keep the lights on” by replacing/rebuilding failed infra or growing and scaling it to meet business and application needs.

What powers InfraKit

There are four components that make up InfraKit that are already well documented on GitHub, however the possibilities for extending them are also discussed in the overview.

Instance

This is the component that I’ve focussed on so far, and it provides some of the core functionality of InfraKit. The Instance plugin provides the foundation of Infrastructure management for InfraKit and, as the name suggests, the plugin will provide an instance of an Infrastructure resource. Instance plugins take configuration data in the form of JSON, which provides the properties that are used to configure an instance.

So, a few possible scenarios:

  • Hypervisor Plugin: VM Template: Linux_VM, Network: Dev, vCPUs: 4, Mem: 8GB, Name: dans_vm_server01
  • Cloud Plugin: Instance Type: Medium, Region: Europe, SSH_KEY: gGJtSsV8, Machine_image: linux
  • Physical Infrastructure Plugin: Hardware Type: 2 cpu server, Server Template: BigData, OS build plan: RHEL_7, Power_State: On

I can then use my instance plugin to define 1, 10, 100, as many instances as needed to provide my Infrastructure resource. But say I want 30 servers: 20 for web traffic, 7 for middleware and 3 at the back end for persistent storage; how do I define my infrastructure to be those resources…

Flavor

The flavor plugin is what provides the additional layer of configuration that takes a relatively simple instance and defines it as providing a specific set of services.

Again some possibilities that could exist:

  • WebServer Plugin: Packages Installed: nginx, Storage: nfs://http/, Firewall: 80, 443, Config: nginx.conf, Name: http_0{x}
  • Middleware Plugin: Packages Installed: rabbitmq, Firewall: 5672, Cert: ---- cert ----, RoutingKey: middleware
  • Persistent Storage Plugin: Packages Installed: mysql, Msql_Username: test_env, Msql_Password: abc123, Bind: 127.0.0.1, DB_Mount: /var/db

So given my requirements, I’d define 20 virtual machine instances and attach the web server flavor to them, and so on; that would give me the capacity and the configuration for my particular application or business needs. The flavor plugin not only provides the configuration control, it is also used to ensure that the instance is configured correctly and deemed healthy.

That defines the required infrastructure for one particular application or use case, however to separate numerous sets of instances I need to group them…

Group

The default group plugin is a relatively simple implementation that currently will just hold together all of your instances of varying flavors allowing you to create, scale, heal and destroy everything therein. However groups could be extended to provide the following:

  • Specific and tailored alerting
  • Monitoring
  • Chargeback or utilisation overview
  • Security or role based controls

InfraKit cli

The InfraKit cli is currently the direct way to interact with all of the plugins and direct them to perform their tasks based upon their plugin type. To see all of the various actions that can be performed on a plugin of a specific type, use the cli:

$ build/infrakit <group|flavor|instance> 

$ build/infrakit instance
Available Commands:
  describe    describe the instances
  destroy     destroy the resource
  provision   provisions an instance
  validate    validates an instance configuration

So if we were to use a hypervisor plugin to provision a virtual machine instance we would do something like the following:

Define the instance in some json:

{
    "Properties": {
        "template": "vm_http_template",
        "network": "dev",
        "vCPUs": 2,
        "mem": "4GB",
        ...
    }
}

Then provision the instance using the correct plugin

$ build/infrakit instance provision instance.json --name instance-hypervisor

… virtual instance is provisioned …

 

The next post will cover the architecture of the plugins, the communication methods between them and how more advanced systems can be architected through the use of InfraKit.

Fixing problems deploying Docker DC (Offline)

Inside my current employer there is quite the buzz around Docker, especially Docker Datacenter being pushed as a commercial offering alongside Server Infrastructure. This in turn has led to a number of people deploying various Docker components in their labs around the world; however, they tend to hit the same issues.

I’m presuming that most of the lab environments must be internal/secure environments, as there is always a request to install without an internet connection (offline).

I did come across one person building Docker DC in one place and then trying to manually copy all of the containers to the lab environment (which inevitably broke all of the certificates); essentially, DON’T DO THAT.

Luckily for people wanting to deploy Docker DataCentre in an environment without an internet connection, there exists a full bundle containing all the bits that you need (UCP/DTR etc.). Simply download it on anything with an internet connection, transfer the bundle over to your offline machines and run the following:

$ docker load < docker_dc_offline.tar.gz

Unfortunately, the majority of people who follow the next steps tend to be confused with what happens next:

Unable to find image 'docker/ucp:latest' locally

Pulling repository docker.io/docker/ucp

docker: Error while pulling image: Get https://index.docker.io/v1/repositories/docker/ucp/images: dial tcp: lookup index.docker.io on [::1]:53: read udp [::1]:50867->[::1]:53: read: connection refused.

[Image: dockerdc_offline_fail]

The culprit behind this is the default behaviour of Docker: based upon a commit in mid-2014, if no tag is specified for a docker image then it will default to the tag :latest. Simply put, the installation steps for Docker UCP ask the user to run the image docker/ucp with no tag and a number of passed commands. Docker immediately ignores the offline bundle images and attempts to connect to Docker Hub for the :latest version, where the installation promptly fails.

To fix this, you simply need to look at the images that have been installed by the offline bundle and use their tag as part of the installation steps, as sketched below.
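
A hedged example of what that looks like (the tag below is purely illustrative; check docker images to see what your bundle actually contains, and substitute your own host address):

$ docker images docker/ucp
$ docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:1.1.4 install -i --host-address <host IP>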

[Image: dockerdc_offline_sucess]

ALSO: If you see this error, then please don’t ignore it 😀

WARNING: IPv4 forwarding is disabled. Networking will not work.

$ vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
...
$ sysctl -p /etc/sysctl.conf

Deploying Docker DataCenter

This is a simple guide to deploying some SLES hosts, configuring them to allow deployment of Docker Engines, and then deploying Docker Datacenter on top of the platform. It’s also possible to deploy the components using Docker for Mac/Windows as detailed here.

Disclaimer: This is in "no way" an official or supported procedure for deploying Docker CaaS.

Overview

I’ve had Docker DataCenter in about 75% of my emails over the last few months, and it’s certainly been on my to-do list to get a private (lab) deployment done. Given the HPE and SuSE announcement in September, I decided that I would see how easy it would be to do my deployment on a few SLES hosts; it turns out it’s surprisingly simple (although I was expecting something like the OpenStack deployments of a few years ago 😕 )

Also, if you’re looking to *just* deploy Docker DataCenter then ignore the Host configuration steps.

[Image: hpe_docker_suse]

Requirements

  1. 1-2 large cups of tea (depends on any typing mistakes)
  2. 2 or more SLES hosts (virtual or physical makes no difference; 1 vCPU, 2GB RAM, 16GB disk), mine were all built from SLE-12-SP1-Server-DVD-x86_64-GM-DVD1
  3. A SuSE product registration, the 60 day free one is fine (it can take 24 hours for the email to arrive) *OPTIONAL*
  4. A Docker Datacenter license, the 60 day trial is the minimum *REQUIRED*
  5. An internet connection? (it’s 2016 … )

Configuring your hosts

SuSE Linux Enterprise Server (SLES) is a pretty locked down beast and will require a few things modified before it can run as a Docker host.

SLES 12 SP1 Installation

The installation was done through the CD images, although if you want to automate the procedure it’s a case of configuring your AutoYast to deploy the correct SuSE patterns. As you step through the installation there are a couple of screens to be aware of:

  • Product Registration: If you have the codes then add them in here, it simplifies installing Docker later. ALSO, this is where the Network Settings are hidden 😈 So either set your static networking here, or alternatively it can be done through yast on the cli (details here). Ensure on the routing page that IPv4 Forwarding is enabled for Docker networking.
  • Installation Settings: The defaults can leave you with a system you can’t connect to.

[Image: sles12_install]

Under the Software headline, deselect the patterns GNOME Desktop Environment and X Window System, as we won’t be needing a GUI or full desktop environment. Also, under the Firewall and SSH headline, the SSH port is blocked by default, which means you won’t be able to SSH into your server once the Operating System has been installed, so click (open).

So after my installation I ended up with two hosts (that can happily ping and resolve one another etc.):

ddc01  192.168.0.140 /  255.255.255.0

ddc02  192.168.0.141 /  255.255.255.0

The next step is to allow the myriad of ports required for UCP and DTR, this is quite simple and consists of opening up the file /etc/sysconfig/SuSEfirewall2 and modifying it to look like the following:

FW_SERVICES_EXT_TCP="443 2376 2377 12376 12379:12386"

Once this change has been completed, the firewall rules will be re-read by using the command SuSEfirewall2

Installing Docker with a Product registration

Follow the instructions here, no point copying it twice.

Installing Docker without a Product registration

I’m still waiting for my 60-day registration to arrive from SuSE, so in the meantime I decided to start adding other repositories to deploy applications. NOTE: As this isn’t coming from an Enterprise repository it certainly won’t be supported.

So the quickest way of getting the latest Docker on a SLES system is to add the latest OpenSuSE repository; the following two lines will add the repository and install Docker:

zypper ar -f http://download.opensuse.org/tumbleweed/repo/oss/ oss
zypper in docker
...
docker -v
...
Docker version 1.12.1, build 8eab29e

 

To recap, we have a number of hosts configured with network connectivity and the firewall ports open, and finally we have Docker installed and ready to deploy containers.

Deploying Docker Datacenter

Deploying the Universal Control Plane (UCP)

On our first node ddc01, we deploy the UCP installer, which automates the pulling of additional containers that make up the UCP.

docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i --host-address 192.168.0.140

Errors to watch for:

FATA[0033] The following required ports are blocked on your host: 12376.  Check your firewall settings. 

.. Make sure that you’ve edited the firewall configuration and reloaded the rules

WARNING: IPv4 forwarding is disabled. Networking will not work.

Enable IPv4 forwarding in the yast routing configuration.

Once the installation starts it will ask you for a password for the admin user; for this example the password I set was ‘password’, however I highly recommend that you choose something a little more secure. The installer will also give you the option to set additional SANs on the TLS certificates for additional domain names.

The installation will complete, and in my environment I’ll be able to connect to my UCP by putting the address of ddc01 into a web browser.

[Image: ucp]

Adding nodes to the UCP

After logging into the UCP for the first time, the dashboard will display everything that the docker cluster is currently managing. There will be a number of containers displayed, as they make up the UCP (UCP web server, etcd, swarm manager, swarm client etc…). Adding additional nodes is as simple as adding Docker workers to a swarm cluster, possibly simpler, as the UCP provides you with a command that can be copied and pasted on all further nodes to add them to the cluster.

Note: The UCP needs a license added, otherwise additional nodes will fail during the add process.

[Image: add_nodes]

Deploying the Docker Trusted Registry (DTR)

On ddc02 install the Docker Trusted Registry as it’s not supported or recommended to have the UCP and the DTR on the same nodes.

From ddc02 we download the UCP certificate.

curl -k https://192.168.0.140/ca > ucp-ca.pem

To then install the DTR, run this docker command and it will pull down the containers and add the registry to the control plane.

docker run -it --rm docker/dtr install --ucp-url http://192.168.0.140 \
  --ucp-node ddc02 \
  --dtr-external-url 192.168.0.141 \
  --ucp-username admin \
  --ucp-password password \
  --ucp-ca "$(cat ucp-ca.pem)"

[Image: docker_trusted_reg]

Conclusion

With all this completed we have the following:

  • A number of configured hosts with correct firewall rules.
  • Docker Engine, that starts and stops the containers
  • Docker Swarm, clusters together the Docker Engines (it’s worth noting that this is not the built-in swarm mode from 1.12; it still uses the swarm container to link together engines)
  • Docker DTR, the platform for hosting Docker images to be deployed on the engines
  • Docker UCP, as the front end to the entire platform.

[Image: docker_platform]

I was pleasantly surprised by the simplicity of deploying the components that make up Docker Datacenter, although it does look a little like the various components are lagging behind the new functionality that has been added to the Docker engine. This is evident in that UCP doesn’t use the swarm mode that is part of 1.12 and wastes a little bit of resource by deploying additional containers to provide the swarm clustering.

It would be nice in the future to have a more feature-rich UI that provides workflow capabilities to compose applications, as currently it’s based upon hand-crafting compose files in YAML that you can copy and paste into the UCP, or uploading your existing compose files. However, the UCP provides an excellent overview of your deployed applications and the current status of containers (logs and statistics).

A peek inside Docker for Mac (Hyperkit, wait xhyve, no bhyve …)

It’s no secret that the code for Docker for Mac ultimately comes from the FreeBSD hypervisor, and the people that have taken the time to modify it to bring it to the Darwin (Mac) platform have done a great job in tweaking code to handle the design decisions that ultimately underpin the Apple Operating System.

Recently I noticed that the bhyve project had released code for the E1000 network card, so I decided to take the hyperkit code and see what was required in order to add in the PCI code. What follows is a (rambling and somewhat incoherent) overview of what was changed to move from bhyve to hyperkit and some observations to be aware of when porting further PCI devices to hyperkit. Again, please be aware I’m not an OS developer or a hardware designer, so some of this is based upon a possibly flawed understanding… feel free to correct or teach me 🙂

Update: I’ve already heard from @justincormack about Docker for Mac, in that it uses vpnkit not vmnet.

VMM Differences

One of the key factors that led to the portability of bhyve to OSX is that the Darwin kernel is loosely based upon the original kernel that powers FreeBSD (family tree from Wikipedia here), which typically means that a lot of the kernel structures and API calls aren’t too different. However, OSX is aimed at the consumer market and not the server market, meaning that as OSX has matured Apple have stripped away some of the kernel functionality that ships by default; the obvious one being the removal of TUN/TAP devices from the kernel (they can still be exposed through loading a kext (kernel extension)), which, although problematic, hyperkit has a solution for.

VM structure with bhyve

When bhyve starts a virtual machine it will create the structure of the VM as requested (allocate vCPUs, allocate the memory, construct PCI devices etc.); these are then attached to device nodes under /dev/vmm, and the bhyve kernel module handles the VM execution. Being able to examine /dev/vmm/ also provides a place for administrators to see what virtual machines are currently running and allows the VMs to continue running unattended.

Internally, the bhyve userland tools make use of virtual machine contexts that link together the VM name and the internal kernel structures that are running the VM instance. This allows a single tool to run multiple virtual machines, as you typically see from VMMs such as Xen, KVM or ESXi.

Finally, the networking configuration that takes place inside of bhyve… Unlike the OSX kernel, FreeBSD typically comes prebuilt with support for TAP devices (if not, the command kldload if_tap is needed). Simply put, the use of a TAP device greatly simplifies the usage of guest network interfaces. When an interface is created with bhyve, a PCI network device is created inside the VM and a TAP device is created on the physical host. When network frames are written to the PCI device inside the VM, bhyve actually writes these frames onto the TAP device on the physical host (using standard write() and read() functions on file descriptors), and those packets are then broadcast out on the physical interface on the host. If you are familiar with VMware ESXi then the concept is almost identical to the way a vSwitch functions.
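
For anyone who hasn’t seen the FreeBSD side of this, the host wiring is roughly the following (a sketch from memory; the interface names are assumptions and most of the bhyve arguments are elided):

# kldload if_tap if_bridge
# ifconfig tap0 create
# ifconfig bridge0 create
# ifconfig bridge0 addm em0 addm tap0 up
# bhyve ... -s 2:0,virtio-net,tap0 ... guestvm

Frames the guest writes to its virtio NIC land on tap0, get bridged onto em0 and leave via the physical network.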

[Image: bhyve Network]

VM Structure with Docker for Mac (hyperkit)

So the first observation with the architecture of hyperkit is that all of the device node code for /dev/vmm/ has been removed, which has the effect of making virtual machines process-based. This means that when hyperkit starts a VM it will malloc() all of the requested memory etc. and it becomes the sole owner of the virtual machine; essentially, killing the process ID of hyperkit will kill the VM. Internally, all of the virtual machine context code has been removed, because the hyperkit process to VM relationship is now a 1:1 association.

The initial design decision to remove all of the context code (instead of perhaps always tagging it to a single VM context) requires noticeable changes to every PCI module that is added/ported from bhyve, as it’s all based on creating and applying these emulated devices to a particular VM context.

To manage VM execution hyperkit makes use of the hypervisor.framework which is a simplified framework for creating vCPUs, passing in mapped memory and creating an execution loop.

Finally, there are the changes around network interfaces: from inside the virtual machine the same virtio devices are created as would be created on bhyve. The difference is in linking these virtual interfaces to a physical interface, as with OSX there is no TAP device that can be created to link virtual and physical. So there currently exist two methods to pass traffic between virtual and physical hosts, one of which is the virtio to vmnet (virtio-vmnet) PCI device and the other is the virtio to vpnkit (virtio-vpnkit) PCI device. These both use the virtio drivers (specifically the network driver) that are part of any modern Linux kernel and then hand over to the backend of your choice on the physical system.

It’s worth pointing out here that the vmnet backend was the default networking method for xhyve and it makes use of the vmnet.framework, which, as mentioned by other people, is rather poorly documented. It also slightly complicates things by its design, as it doesn’t create a file descriptor that would allow the existing simple code to read() and write() from it, and it also requires elevated privileges to use.

With the work that has been done by the developers at Docker, a new alternative method for communicating from virtual network interfaces to the outside world has been created. The solution from Docker comes in two parts:

  • The virtio-vpnkit device inside hyperkit that handles the reading and writing of network data from the virtual machine
  • The vpnkit component that has a full TCP/IP stack for communication with the outside world.

(I will add more details around vpnkit when I’ve learnt more … or learnt OCaml, whichever comes first)

Networking overviews

bhyve overview (TAP devices)

[Image: bhyve_traffic]

xhyve/hyperkit overview (VMNet devices)

[Image: hyperkit_traffic]

 

Docker for Mac / hyperkit overview (vpnkit)

[Image: docker_traffic]

 

Porting (PCI devices) from bhyve to hyperkit

All of the emulated PCI devices adhere to a defined set of function calls, along with a structure that defines pointers to functions and a string that identifies the name of the PCI device (memory dump below)

[Image: pci_functions]

The pci_emul_finddev(emul) function will look for a PCI device, e.g. (E1000, virtio-blk, virtio-nat), and then manage the calling of its pe_init function, which will initialise the PCI device and add it to the virtual machine PCI bus as a device that the operating system can use.

Things to be aware of when porting PCI devices are:

  • Removing VM-context-aware code; as mentioned, it is a 1:1 mapping between a hyperkit process and a VM.
    • This also includes tying up paddr_guest2host(), which maps physical addresses to guests etc.
  • Moving networking code from using TAP devices with read(), write() to making use of the vmnet framework

With regards to the E1000 PCI code, I’ve now managed to tie up the code so that the PCI device is created correctly and added to the PCI bus; I’m just struggling to fix the vmnet code (so feel free to take my poor attempt and fix it successfully 🙂 https://github.com/thebsdbox/hyperkit)


 

Further reading

http://bhyve.org/bhyve-fosdem2013.pdf

https://wiki.freebsd.org/bhyve

https://github.com/docker/hyperkit

Automate HPE OneView/Synergy with Chef and Docker

As per a previous post, I’ve been working quite a lot with HPE OneView (which powers HPE Synergy through its Composer) and thought I’d put a post together that summarises automating deployments through Chef. There is already tons of information (some of it somewhat sporadic) around the internet for using the HPE OneView Chef driver:

  • Build a Chef Environment and install HPE OneView Provisioning driver -> Here
  • Overview into the code architecture -> Here
  • Technical white paper -> Here

To simplify the process of using Chef with HPE OneView, I’ve put together a couple of Dockerfiles that will build some Docker images which simplify things so much that getting everything required to use the Chef/OneView provisioning driver only takes a few commands. A side effect of containerising Chef and the provisioning driver is that it becomes incredibly simple to have a group of configurations and recipes that will interact with both Synergy Composers and HPE OneView instances that manage DL and BL servers.

 

The image below depicts how having multiple configurations would work:

[Image: oneview-chef-docker]

Essentially, using -v /local/path:/container/path allows us to have three folders, each containing their own knife.rb (the configuration for each OneView/container) and a recipe.rb that is applicable to that particular set of infrastructure. The mapping described above will always map a path on the host machine to /root/chef inside the container, allowing simple provisioning through Chef. It also makes it incredibly simple to manage a number of sets of infrastructure from a single location.

Also be aware that a Chef Server isn’t required, but in order to be able to clean up (:destroy) machines created through Chef in a docker container WITHOUT a Chef server, ensure that Chef is recording what is deployed locally.

Example for your recipe.rb :

with_chef_local_server :chef_repo_path => '/root/chef/',
  :client_name => Chef::Config[:node_name]

Dockerfile is located here

[Image: raspberry-pi-logo]

 

For the more adventurous, it is also possible to have all of this code run from a Docker container on a Raspberry Pi (the same usage applies). To create a Docker container that will run on a Raspberry Pi, the Dockerfile is located here

Compiling packages in Docker

After my previous post yesterday, I was given a few tips (thanks to http://twitter.com/yankcrime) about some much better solutions for compiling packages and then building them into a package that can be used with docker build in the correct fashion (I’m sure there are still some steps that could be better).

First attempt:

The first Dockerfile I put together consisted of installing half the compile toolchain, a handful of development libraries and then downloading the source file for the application I was trying to build. It would then configure the build with the required settings, spend an hour (Raspberry Pis aren’t the fastest at compiling 🙁 ) building everything and then install all of the files in their expected locations. Finally it would remove all of the source code, uninstall all of the development packages and do the final prep configuration work specified in the Dockerfile.

Docker Image Result:

bind9_10      latest      d1ceef183d9f     3 hours ago         930.6 MB

Second attempt:

To try and shrink the size of the Docker image, I had to abandon my desire to automate the building of the docker container and switch to a two-container model. The first container would partially automate the application configuration and the actual compilation, and then copy the built application to shared storage. The second container would start with a simple base image, install the minimum requirements (shared/required libraries) and also make, which it needs. This image would be volume-mapped to the application source directory, where it would simply run $ make install; however, that need for mapped volumes means this has to be a hand-crafted image (with a commit once complete).

Docker Image Result:

bind910json      latest      6eeae5a74642     20 hours ago          320 MB

Third attempt (thanks to @yankcrime and FPM):

This morning (02/08/2016) I updated my Dockerfiles and tried last night’s suggestions. The build is almost automated (I believe I can completely automate it with nested containers, but I’ve yet to try). The current environment consists of three things:

  • A Docker container that fully automates the building of bind9.10, with a make install into /tmp where the binaries are stripped and fpm creates a package (Dockerfile).
  • A docker run of the new image, which copies the new package to the location of my second Dockerfile (commands below).

$ export DEB_LOCATION=/mnt/docker/Dockerfiles/bind9_10_deb/
$ docker run -i --rm -v $DEB_LOCATION:/files bind9_10 /bin/cp bind9_10_0.9.10-dan_armhf.deb /files

  • Docker build my other Dockerfile and away we go (Dockerfile).

Docker Image Result:

bind9_10_deb      latest      64d7df855866     21 seconds ago      220.2 MB

 

Notes:

I did attempt to make changes to the debian source package for bind9.10 and it was just a minefield of random dependencies and over-the-top scripting… even adding in the correct configure options resulted in something breaking the config.h script for the build (HAVE_JSON was always missing)

There are a lot more steps that can be taken to produce more efficient Docker images, including running multiple commands per RUN instruction to reduce the amount of space written per layer etc., as sketched below.
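
For example, collapsing the fetch/build/clean-up into a single RUN keeps the toolchain and source out of the committed layers entirely. This is a rough sketch (BIND_SRC_URL, the package names and paths are placeholders, not the exact Dockerfile I used):

ARG BIND_SRC_URL
# everything in one layer: fetch, build, install, then clean up before the layer is committed
RUN apt-get update \
 && apt-get install -y build-essential curl \
 && curl -o /tmp/bind.tar.gz "$BIND_SRC_URL" \
 && tar -xzf /tmp/bind.tar.gz -C /tmp \
 && cd /tmp/bind-9.10.* \
 && ./configure && make && make install \
 && apt-get purge -y build-essential curl \
 && apt-get autoremove -y \
 && rm -rf /tmp/bind* /var/lib/apt/lists/*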

 

Docker Image sizing information:

http://developers.redhat.com/blog/2016/03/09/more-about-docker-images-size/

Docker Image Reduction Techniques and Tools

 

 

Raspberry Pi with Docker

I’ve put off purchasing Raspberry Pis for a few years as I was pretty convinced that the novelty would wear off very quickly and they would be consigned to the drawer of random cables and bizarre IT equipment I’ve collected over the years (parallel cables and Zip drives o_O).

The latest iteration of the Pi is the v3, which comes with 1GB of RAM and 4 ARM cores, and it turns out that whilst it’s not exactly a computing powerhouse, it can still handle enough load to do a few useful things.

[Image: Raspberry Pis]

I’ve currently got a pair of them in a docker swarm cluster (docker 1.12-rc3 for armv7l, available here), which has given me another opportunity to really play with docker and try to replace some of my linux virtual machines with “lightweight” docker containers.

First up: to ease working between the two Pis I created an NFS share for all my docker files etc. I then decided that having my own Docker registry to share images between hosts would be useful. So on my first node I did a docker pull etc. for the Docker registry container and attempted to start it. This is where I realised that the container would just continuously restart; a quick peer into the container itself and I realised that it has binaries compiled for x86_64, not armv7l, so that clearly wasn’t going to work here. That chalks up failure number one for a pure Raspberry Pi Docker cluster, as my registry had to be run from a CoreOS virtual machine.

Once that was up and running, my first attempt to push an image from the Pis resulted in the error message :

https://X.X.X.X:5000/v1/_ping: http: server gave HTTP response to HTTPS client

After some googling it appears that this issue is related to having an insecure repository; this can be resolved by going down the route of generating certificates etc. However, to work around the insecurity issue the docker daemon needs to be started with some parameters that allow the use of an insecure registry.

Note: The documentation online tells you to update the docker configuration files and restart the docker daemon, however there appears to be a bug in the raspbian/debian implementation. For me, editing /etc/default/docker and adding DOCKER_OPTS='--insecure-registry X.X.X.X:5000' had no effect. This can be inspected by looking at the output from $ docker info, as the insecure registries are listed there.

To fix this I had to edit the systemd start up files so that it would start dockerd with the correct parameters.

$ sudo vim /lib/systemd/system/docker.service

...

ExecStart=/usr/bin/dockerd --insecure-registry X.X.X.X:5000 -H fd://

After restarting the daemon, I can successfully push/pull from the registry.

Finally: The first service I attempted to move to a lightweight container resulted in a day and a half of fiddling (clearly a lot of learning needed). Although to clarify, this was due to me wanting some capabilities that weren’t compiled into the existing packages.

Moving bind into a container “in theory” is relatively simple:

  • Pull a base container
  • Pull the bind package and install it (apt-get install -y bind9)
  • Map a volume containing the configuration and zone files
  • Start the bind process and point it at the mapped configuration files (see the sketch below)

All of this can be automated through the use of a Dockerfile like this one. However, after further investigation and a desire to monitor everything as much as possible, it became apparent that the statistics-channel in bind 9.9 wouldn’t be sufficient and I’d need 9.10 (mainly for JSON output). After creating a new Dockerfile that adds in the unstable repos for debian and pulls the bind 9.10 packages, it turns out that debian compiles bind without libjson support 🙁 meaning that JSON output was disabled. This was the point where Docker and I started to fall out, as a combination of docker’s layered file system and build’s lack of ability to use --privileged or the -v (volume) parameter got in the way. This resulted in me automating a docker container that did the following:

  • Pull a base container
  • Pull a myriad of dev libraries, compilers, make toolchains etc.
  • download the bind9.10 tarball and untar it
  • change the WORKDIR and run ./configure with all of the correct flags
  • make install
  • delete the source directory and tarball
  • remove all of the development packages and compilers

This resulted in a MASSIVE 800MB docker image just to run bind 🙁 In order to shrink the docker container I attempted a number of alternative methods, such as using an NFS mount inside the container where all of the source code would reside for compiling, which wouldn’t be needed once make install was run. However, as mentioned, NFS mounts (which require --privileged to mount) aren’t allowed with docker build, and neither is it an option to pass the source through with a -v flag, as that doesn’t exist for builds. This left me with the only option of manually creating a docker container for bind: start a base container with a volume passed through containing the already “made” binaries along with the required shared libraries, and do a make install. This container could then be exited and committed as an image for future use, and was significantly smaller than the previous image.

[Image: dockerimages]

There are a number of issues on the Docker GitHub page around docker build not supporting volume mapping or privileged features. However, the general response is that the feature requests which would have assisted my deployment are edge cases and won’t be part of Docker any time soon.

Still, I got there in the end, and with a third Pi in the post I’m looking forward to moving some more systems onto my Pi cluster and coding some cool monitoring solutions 😀