I’ve put off purchasing Raspberry Pis for a few years, as I was fairly convinced that the novelty would wear off very quickly and they would be consigned to the drawer of random cables and bizarre IT equipment I’ve collected over the years (parallel cables and Zip drives o_O).
The latest iteration of the Pi is the v3, which comes with 1GB of RAM and four ARM cores. It turns out that while it’s not exactly a computing powerhouse, it can still handle enough load to do a few useful things.
I’ve currently got a pair of them in a Docker swarm cluster (Docker 1.12-rc3 for armv7l available here), which has given me an opportunity to really play with Docker and try to replace some of my Linux virtual machines with “lightweight” Docker containers.
First up: to ease working between the two Pis, I created an NFS share for all my Docker files. I then decided that having my own Docker registry to share images between hosts would be useful, so on my first node I did a docker pull for the Docker registry container and attempted to start it. This is where I realised that the container just restarts continuously; a quick peer into the container itself revealed binaries compiled for x86_64, not armv7l, so that clearly wasn’t going to work here. That chalks up failure number one for a pure Raspberry Pi Docker cluster, as my registry had to be run from a CoreOS virtual machine.
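For reference, starting the registry on the x86_64 CoreOS host is essentially a one-liner (a minimal sketch; the port and host data path are assumptions):

```shell
# Run the official registry image, persisting image data on the host.
# The /srv/registry path is an assumption - use whatever suits your layout.
docker run -d \
  --name registry \
  --restart=always \
  -p 5000:5000 \
  -v /srv/registry:/var/lib/registry \
  registry:2
```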
Once that was up and running, my first attempt to push an image from the Pis resulted in the error message:
https://X.X.X.X:5000/v1/_ping: http: server gave HTTP response to HTTPS client
After some googling it appears that this issue is related to having an insecure registry. The proper fix is to go down the certificate-generation route; however, as a workaround the Docker daemon can be started with a parameter that allows the use of an insecure registry.
Note: the documentation online tells you to update the Docker configuration files and restart the Docker daemon; however, there appears to be a bug in the Raspbian/Debian implementation. For me, editing
/etc/default/docker and adding
DOCKER_OPTS='--insecure-registry X.X.X.X:5000' had no effect. This can be verified by looking at the output of
$ docker info as the insecure registries are listed there.
To fix this I had to edit the systemd unit file so that it would start dockerd with the correct parameters:
$ sudo vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry X.X.X.X:5000 -H fd://
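After saving the unit file, systemd needs to be told about the change before the daemon is restarted (standard systemd commands; the registry address is whatever you configured above):

```shell
# Reload systemd's view of the unit files, then restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker

# Confirm the registry now appears under "Insecure Registries"
docker info
```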
After restarting the daemon, I can successfully push/pull from the registry.
Finally: the first service I attempted to move into a lightweight container resulted in a day and a half of fiddling (clearly a lot of learning needed), although to clarify, this was due to me wanting some capabilities that weren’t compiled into the existing packages.
Moving bind into a container “in theory” is relatively simple:
- Pull a base container
- Pull and install the bind package (apt-get install -y bind9)
- Map a volume containing the configuration and zone files
- Start the bind process and point it at the mapped configuration files
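The steps above can be sketched as a short Dockerfile (a minimal sketch; the resin/rpi-raspbian base image and the /etc/bind paths are assumptions):

```dockerfile
# Assumed ARM-friendly base image
FROM resin/rpi-raspbian:jessie

# Install bind9 from the distribution packages
RUN apt-get update && \
    apt-get install -y bind9 && \
    rm -rf /var/lib/apt/lists/*

# Configuration and zone files are supplied at run time via a mapped volume
VOLUME /etc/bind

EXPOSE 53/udp 53/tcp

# Run named in the foreground so the container stays up
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf"]
```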
All of this can be automated through the use of a Dockerfile like this one. However, after further investigation and a desire to monitor everything as much as possible, it became apparent that the statistics-channel in bind 9.9 wouldn’t be sufficient and I’d need 9.10 (mainly for JSON output). After creating a new Dockerfile that adds the unstable Debian repos and pulls the bind 9.10 packages, it turned out that Debian compiles bind without libjson support 🙁 meaning that JSON output was disabled. This was the point where Docker and I started to fall out, thanks to a combination of Docker’s layered filesystem and docker build’s inability to use
--privileged or the
-v (volume) parameter. This resulted in me automating a Docker container build that did the following:
- Pull a base container
- Pull a myriad of dev libraries, compilers, make toolchains etc.
- Download the bind 9.10 tarball and untar it
- Change the WORKDIR and run ./configure with all of the correct flags
- make install
- Delete the source directory and tarball
- Remove all of the development packages and compilers
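As a rough sketch, that build looked something like this (the exact version, download URL and configure flags here are assumptions, not the original Dockerfile):

```dockerfile
FROM resin/rpi-raspbian:jessie

# Build toolchain plus the JSON library bind needs for JSON statistics
RUN apt-get update && \
    apt-get install -y build-essential libssl-dev libjson-c-dev wget

# Fetch and unpack the bind 9.10 source (version/URL are illustrative)
WORKDIR /usr/src
RUN wget https://ftp.isc.org/isc/bind9/9.10.4/bind-9.10.4.tar.gz && \
    tar xzf bind-9.10.4.tar.gz

# Configure with JSON support, build and install
WORKDIR /usr/src/bind-9.10.4
RUN ./configure --with-libjson && \
    make && \
    make install

# Clean up - but each earlier RUN is already a committed layer,
# so the image stays huge regardless of what gets deleted here
WORKDIR /
RUN rm -rf /usr/src/bind-9.10.4* && \
    apt-get remove -y build-essential wget && \
    apt-get autoremove -y
```

The cleanup comment is the crux: because every RUN instruction commits a new layer, deleting files in a later layer doesn’t reclaim the space, which is why the final image ends up so large.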
This resulted in a MASSIVE 800MB Docker image just to run bind 🙁 In order to shrink the image I attempted a number of alternative methods, such as using an NFS mount inside the container to hold all of the source code for compiling, which wouldn’t be needed once
make install had run. However, as mentioned, NFS mounts (which require
--privileged to mount) aren’t allowed with docker build, and neither is passing one through with a -v flag, as that doesn’t exist for building. This left me with the only option of manually creating a Docker container for bind: start a base container with a volume passed through containing the already “made” binaries along with the required shared libraries, and do a
make install. This container could then be exited and committed as an image for future use, and was significantly smaller than the previous image.
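That manual workflow looks roughly like this (a sketch; the /mnt/nfs path and image names are assumptions):

```shell
# Start a base container with the pre-built bind source tree mounted from NFS
docker run -it --name bind-install \
  -v /mnt/nfs/bind-9.10.4:/usr/src/bind-9.10.4 \
  resin/rpi-raspbian:jessie /bin/bash

# Inside the container: install the already-compiled binaries, then exit
#   cd /usr/src/bind-9.10.4 && make install && exit

# Back on the host: commit the stopped container as a reusable image
docker commit bind-install my-registry:5000/bind:9.10
docker push my-registry:5000/bind:9.10
```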
There are a number of issues on the Docker GitHub page asking for docker build to support volume mapping or privileged features. However, the general response is that the feature requests which would have assisted my deployment are considered edge cases and won’t be part of Docker any time soon.
Still, I got there in the end, and with a third Pi in the post I’m looking forward to moving some more systems onto my Pi cluster and coding some cool monitoring solutions 😀