
I’ve been running an iPad Pro as a travel device for about six months now, and I thought I would write a quick post detailing some of the ways I’ve added it into my workflow and a few of the apps that I use on a daily basis.

Hardware

I’ve had a few iPads over the years, ranging from the first iPad in 2010 to an iPad mini 2 that served as my travel entertainment device for a long time. After toying with the idea for some time I decided to opt for the 11 inch iPad Pro (https://www.apple.com/uk/ipad-pro/) as a replacement for my old and tired iPad mini, but also as something that could serve as an in-between device when I was in confined spaces (planes/trains) or didn’t have my laptop.

The iPad as a piece of hardware is pretty incredible, and with the removal of the home button it has oodles of screen real estate. The Face ID functionality has been flawless for unlocking and payments, however the camera is in a bit of an annoying place when using the iPad in landscape mode. I can’t count the number of times I’ve held the iPad by the side and accidentally obscured the camera.

The battery life has been pretty stable: with moderate usage and 4G the iPad will quite happily last the entire day with plenty of battery to spare. Additionally, with the inclusion of USB-C I can charge the iPad with the same charger as my MacBook Pro in a very short period of time. One additional feature that has been very useful is the ability to use a USB-C to Lightning cable and have the iPad charge my iPhone whilst on the move.

Finally, the Smart Keyboard Folio works as a fantastic keyboard for a good few hours of typing and makes using the iPad a lot more like using a laptop (along with the expected keyboard shortcuts, e.g. command-Tab). The one complaint with the Smart Keyboard is that it only protects the front and the back, leaving three sides of the iPad exposed, so it’s quite easy to acquire scratches and marks.

The small form factor means that the iPad can be used quite comfortably even on the most budget of “budget flights”; the image above is from a Ryanair flight where I could happily edit various files.

Productivity

My day-to-day technology stack is pretty average for most enterprise companies, and luckily there exist pretty good iOS counterparts for the iPad:

  • Office 365 apps (Outlook)
  • Slack
  • Google drive app / Google docs
  • Trello

All of these are “good enough” for medium-to-light usage when a laptop and the “full fat” applications aren’t available.

Remote Access

My typical day-to-day tasks usually require the management and usage of a variety of remote systems, and there are a few native utilities on the iPad that make this really easy. Two of the best applications for this are by the same developer, Panic (https://panic.com/), and it really shows that they know what they’re doing when it comes to different types of remote workload.

Prompt

For handling SSH into various remote servers, I’ve found that Prompt is pretty much unbeatable at maintaining stable connections to multiple servers. It also provides a fantastic UI for managing the connection details of multiple hosts and for re-connecting to saved sessions when Prompt is restarted.

Coda

Coda could perhaps replace Prompt as it does support SSHing into remote hosts, however its main focus is accessing a remote host over SSH/SFTP and providing a great way of manipulating files on the remote filesystem through the Coda UI. I typically use it for editing both markdown files and various source code files; Coda provides syntax highlighting based upon the file extension and, more importantly (for markdown), a preview function that will render certain files. Once a file has been opened for the first time within Coda it can be edited and saved locally (on the iPad, within the Coda app) and then saved back to the remote filesystem once connectivity is restored.

Visual Studio Code

I would love this to eventually become a native application on the iPad, however until then the only option is to access Visual Studio Code through a web browser using code-server (https://github.com/cdr/code-server). Once code-server is up and running, the iPad can access it through the Safari web browser. There are a few things, covered below, that can make it a little easier to use.
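
Before that, for reference, getting code-server up and running on the remote host can be as simple as the following. Treat this as a rough sketch: the flag names and default port vary between code-server releases, and the bind address and password handling shown here are assumptions, so check the project’s README for your version.

# on the remote Linux host (flags vary between code-server releases)
export PASSWORD="something-secret"
code-server --bind-addr 0.0.0.0:8080

Safari on the iPad can then browse to http://<remote-host>:8080 (the host name here is a placeholder).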

Full screen

To access Visual Studio Code in full screen on the iPad it will need adding to the home screen. Open code-server in Safari, press the share button next to the address bar, then choose “Add to Home Screen”. This creates an icon on the iPad home screen that will open code-server directly with things like the address bar hidden, providing Visual Studio Code in full screen.

Disable the shortcut bar

When typing in Visual Studio Code, iOS will keep popping up a bar with commonly used words and shortcuts that eats into the screen real estate. To disable this, open Settings -> General -> Keyboard -> Shortcuts (disable).

VNC Viewer

For accessing either my MacBook or Hackintosh, VNC Viewer with its “Bonjour” capability makes it fantastically easy to find macOS desktops that are being shared through Screen Sharing. The VNC Viewer tool is perfectly designed for use with the touch screen, with “pinch” and scroll, and makes using a full desktop pretty straightforward.

Additional Usage

I presume that “Sidecar” will remove the need for an additional app (https://www.macrumors.com/2019/06/06/macos-catalina-sidecar-limited-to-newer-macs/), but currently the Duet application has been pretty great for turning the iPad into an external display for my MacBook. The performance is fantastic considering the software has to handle all of the external monitor functionality, although it sometimes takes a few app restarts for the iPad to appear as a second display.

In this second post we will cover the use of nftables in the Linux kernel to provide very simple and low-level load balancing for the Kubernetes API server.

Nftables

Nftables was added to the Linux kernel in the 3.x releases and was expected to become the de facto replacement for the much-complained-about iptables. However, with Linux kernels now in the 5.x releases, it still looks like iptables has kept its dominance.

Even in some of the larger open source projects the move towards the more modern nftables appears to stall quite easily and become a stuck issue.

Nftables is driven either through the nft CLI tooling or through the netlink interface, where rules can be created programmatically. There are a few libraries in various stages of development that aim to make this programmable approach easier (see below in the additional resources).

There are a few distributions that have started the migration to nftables; however, to maintain backwards compatibility a number of nftables wrappers have been created that allow the distro or an end user to issue iptables commands and have them translated into nftables rules.
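
As an example of that compatibility tooling, the iptables-translate utility (shipped alongside the iptables-nft packages on several distributions) takes a familiar iptables command and prints the equivalent nftables rule. The output below is approximate and will differ slightly between versions:

iptables-translate -A FORWARD -p tcp --dport 6443 -j ACCEPT
nft add rule ip filter FORWARD tcp dport 6443 counter accept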

Kubernetes API server load balancing with nftables

This is where the architecture gets a little bit different to the load balancers that I covered in my previous post. With the more common load balancers an administrator will create a load balancer endpoint (typically an address and port) for a user to connect to; with nftables, however, we will be routing through it like a gateway or firewall.

Routing (simplistic overview)

In networking, routing is the capability of having traffic forwarded (“routed”) between logical networks that aren’t directly addressable from one another, so that a source host can reach a destination host on a different network.

Routing Example

  • Source Host is given the address 192.168.0.2/24
  • Destination Host is given the address 192.168.1.2/24

If we decompose the source address we can see that it’s 192.168.0.2 on the subnet /24 (typically shown as a netmask of 255.255.255.0). The subnet determines how large the addressable network is; with this subnet the source host can access anything that is 192.168.0.{x} (1 - 254).

Based upon the designated subnets, neither network will be able to directly access the other, so we will need to create a router and then update the hosts so that they have routing table entries specifying how to reach the other network.

A router will need to have an address on each network in order for the hosts on that network to route their packets through it, so in this example our router will look like the following:

 ____________________________________________
|                 My Router                  |
|                                            |
|      eth0                      eth1        |
|  192.168.0.100 <---------> 192.168.1.100   |
|____________________________________________|

The router could be a physical routing device or alternatively could just be a simple server or VM; in this example we’ll presume a simple Linux VM (as we’re focussing on nftables). In order for our Linux VM to forward packets to other networks we need to ensure that IPv4 forwarding is enabled.
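
On a Linux VM this is a one-liner using the standard net.ipv4.ip_forward sysctl:

# enable forwarding immediately
sudo sysctl -w net.ipv4.ip_forward=1
# verify (should print net.ipv4.ip_forward = 1)
sysctl net.ipv4.ip_forward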

The final step is to ensure that the source host has its routing tables updated so that it knows where packets need to go when they need to access the 192.168.1.0/24 network. Typically it will look like the following pseudo code (each OS has a slightly different way of adding routes):

route add 192.168.1.0/24 via 192.168.0.100

This route effectively tells the kernel/networking stack that any packets addressed to the network 192.168.1.0/24 should be forwarded to the address 192.168.0.100, which is our router with an interface in that network range. This one additional route means that our source host can now access the destination address by routing packets via our routing VM.
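
As a concrete example, the pseudo code above maps to something like the following on Linux (iproute2) and macOS respectively:

# Linux (iproute2)
sudo ip route add 192.168.1.0/24 via 192.168.0.100
# macOS
sudo route -n add 192.168.1.0/24 192.168.0.100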

I’ll leave the reader to determine a route for destination hosts wanting to connect to hosts in the source network :)

Nftables architectural overview

This section details the networks and addresses I’ll be using, along with a crude diagram showing the layout.

Networks

  • 192.168.0.0/24 Home network
  • 192.168.1.0/24 VIP (Virtual IP) network
  • 192.168.2.0/24 Kubernetes node network

Important Addresses / Hosts

  • 192.168.0.1 (Gateway / NAT to internet)
  • 192.168.0.11 (Router address in Home network)
  • 192.168.2.1 (Router address in Kubernetes network)
  • 192.168.1.1 (Virtual IP of the Kubernetes API load balancer)

Kubernetes Nodes

  • 192.168.2.110 Master01
  • 192.168.2.111 Master02
  • 192.168.2.120 Worker01

(worker nodes will follow sequential addressing from Worker01)

Network diagram (including expected routing entries)

 _________________
|  192.168.0.0/24 |
 _________________
         |
         |  192.168.1.0/24 routes via 192.168.0.11
         |
 _________________  --> 192.168.0.11
|  192.168.1.0/24 |
 _________________  --> 192.168.2.1
         |
         |  IPv4 forwarding and NAT masquerading
         |  192.168.1.0/24 routes via 192.168.2.1
         |
 _________________
|  192.168.2.0/24 |
 _________________

Using nft to create the load balancer

The load balancing VM will be an Ubuntu 18.04 VM with the nft package and all of its dependencies installed; we will also ensure that IPv4 forwarding is enabled and persists through reboots.
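
On Ubuntu 18.04 that preparation looks roughly like the following. The package name and sysctl approach are standard, but the name of the sysctl drop-in file is my own choice, so treat this as a sketch:

# install the nftables tooling
sudo apt-get install -y nftables
# enable IPv4 forwarding now and persist it across reboots
sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf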

Step one: Creation of the nat table

nft create table nat

Step two: Creation of the postrouting and prerouting chains

nft add chain nat postrouting { type nat hook postrouting priority 0 \; }

nft add chain nat prerouting { type nat hook prerouting priority 0 \; }

Note: we will need to add masquerading in the postrouting chain (so that return traffic finds its way back to the source)

nft add rule nat postrouting masquerade

Step three: Creation of the load balancer

nft add rule nat prerouting ip daddr 192.168.1.1 tcp dport 6443 dnat to numgen inc mod 2 map { 0 : 192.168.2.110, 1 : 192.168.2.111 }

This is quite a long command so to understand it we will deconstruct some key parts:

Listening address

  • ip daddr 192.168.1.1 Layer 3 (IP) destination address 192.168.1.1
  • tcp dport 6443 Layer 4 (TCP) destination port 6443

Note
The load balancer listening VIP that we are creating isn’t a “tangible” address that would typically exist with a standard load balancer; there will be no TCP stack (and no associated metrics). The address 192.168.1.1 exists only inside the nftables ruleset, and only when traffic is routed through the nftables VM will it be load balanced.

This is important to be aware of: if we placed a host on the same 192.168.1.0/24 network and tried to access the load balancer address 192.168.1.1, we wouldn’t be able to reach it, as traffic on the same subnet is never passed through a router or gateway. So the only way traffic can hit an nftables load balancer address (VIP) is if that traffic is being routed through the host and examined by our ruleset. Not realising this can lead to a lot of head scratching whilst contemplating what appears to be a simple network configuration.

Destination of traffic

  • to numgen inc mod 2 Generate a number that increments and wraps at 2 (0, 1, 0, 1 …), giving a round-robin style selection (numgen random would pick randomly instead)
  • map { 0 : 192.168.2.110, 1 : 192.168.2.111 } Our pool of backend servers that the generated number selects from

Step four: Validate nftables rules

nft list tables

This command will list all of the nftables tables that have been created; at this point we should see our nat table.

nft list table ip nat

This command will list all of the rules/chains that have been created as part of the nat table. At this point we should see our routing chains and our rule watching for incoming traffic on our VIP, along with the destination hosts.
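
For reference, the output should look roughly like the following; the exact formatting differs a little between nftables versions, so treat this as an approximation:

table ip nat {
        chain postrouting {
                type nat hook postrouting priority 0; policy accept;
                masquerade
        }
        chain prerouting {
                type nat hook prerouting priority 0; policy accept;
                ip daddr 192.168.1.1 tcp dport 6443 dnat to numgen inc mod 2 map { 0 : 192.168.2.110, 1 : 192.168.2.111 }
        }
}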

Step five: Add our routes

Client route

The client here will be my home workstation, which will need a route added so that it can access the load balancer VIP 192.168.1.1 on the 192.168.1.0/24 network:

Pseudo code

route add 192.168.1.0/24 via 192.168.0.11

Kubernetes API server routes

These are needed for a number of reasons; the main one here is that kubeadm will need to access the load balancer VIP 192.168.1.1 to ensure that the control plane is accessible.

Pseudo code

route add 192.168.1.0/24 via 192.168.2.1

Using Kubeadm to create our HA (load balanced) Kubernetes control plane

At this point we should have the following:

  • Load balancer VIP/port created in nftables
  • Pool of servers defined under this VIP
  • Client route set through the nftables VM
  • Kubernetes API server set to route to the VIP network through the nftables VM

In order to create an HA control plane with kubeadm we will need to create a small YAML file detailing the load balancer configuration. Below is an example configuration for Kubernetes 1.15 with our above load balancer:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: 1.15.0
controlPlaneEndpoint: "192.168.1.1:6443"

Once this YAML file has been created on the filesystem of the first control plane node we can bootstrap the Kubernetes cluster:

kubeadm init --config=/path/to/config.yaml --experimental-upload-certs

The kubeadm utility will run through a large battery of pre-bootstrap tests and checks and will start to deploy the Kubernetes control plane components. Once they’re deployed and starting, the kubeadm utility will attempt to verify the health of the control plane through the load balancer address. If everything has been deployed correctly, then once the control plane components are healthy traffic should flow through the load balancer to the bootstrapping node.
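
The init output will also print join commands for the remaining nodes. Joining the second control plane node through the same VIP looks roughly like the following; the token, CA certificate hash and certificate key are placeholders printed by kubeadm init, and the exact flag names (for example --control-plane vs --experimental-control-plane) depend on the kubeadm release, so copy the command kubeadm actually prints:

kubeadm join 192.168.1.1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>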

Load Balancing applications

All of the above notes cover using nftables to load balance the Kubernetes API server, however the same approach can easily be used to load balance an application that is being run within the Kubernetes cluster. This quick example will use the Kubernetes “Hello World” example (https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/) (which I’ve just discovered is 644MB ¯\_(ツ)_/¯) and will be load balanced in a subnet that I’ll define as my application network (192.168.4.0/24).

Step one: Create “Hello World” deployment

kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080

This will pull the needed images and start two replicas on the cluster.

Step two: Expose the application through a service and NodePort

kubectl expose deployment hello-world --type=NodePort --name=example-service

Step three: Find the exposed NodePort

kubectl describe services example-service | grep NodePort

This command will show the port that Kubernetes has selected to expose our service on. We can now test that the application is working by checking that the Pods are running with kubectl get deployments hello-world and testing connectivity with curl http://node-IP:NodePort.

Step four: Load Balance the service

Our two workers (192.168.2.120/121) are exposing the service through their NodePort 31243. We will use our load balancer to create a VIP of 192.168.4.1 exposing the same port 31243, which will load balance over both workers.

nft add rule nat prerouting ip daddr 192.168.4.1 \
tcp dport 31243 dnat to numgen inc mod 2 \
map { 0 : 192.168.2.120, 1 : 192.168.2.121 }

Note: Connection state tracking is managed by conntrack and is applied on the first packet in flow. (thanks @naadirjeewa)
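
If the conntrack userspace tool is installed on the load balancer VM, those tracked flows can be inspected, which is handy when debugging why traffic isn’t returning (a quick sketch, assuming the conntrack package is present):

sudo conntrack -L | grep 192.168.4.1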

Step five: Add route on client machine to access the load balanced service

My workstation is on the Home network as defined above and will need a route added so that traffic will go through the load balancer to access the VIP. On my Mac the command is:

sudo route -n add 192.168.4.0/24 192.168.0.11

Step six: Confirm that the service is now available

From the client machine we can now validate that our new load balancer address is both accessible and is load balancing over our application on the Kubernetes cluster.

$ curl http://192.168.4.1:31243/
Hello Kubernetes!

Monitoring and Metrics with nftables

The nft CLI tool provides the capability for debugging and monitoring the various rules that have been created within nftables. However, in order to produce any level of metrics, counters (https://wiki.nftables.org/wiki-nftables/index.php/Counters) need to be defined on a particular rule. We will re-create our Kubernetes API server load balancer with a counter enabled:

nft add rule nat prerouting ip daddr 192.168.1.1 tcp dport 6443 \
counter dnat to \
numgen inc mod 2 \
map { 0 : 192.168.2.110, 1 : 192.168.2.111 }

If we look at the above rule we can see that on the second line we’ve added a counter statement that is placed after the VIP definition and before the destination dnat to the endpoints.

If we now examine the nat ruleset we can see that counter values are being incremented as the load balancer VIP is being accessed.

nft list table ip nat | grep counter
ip daddr 192.168.1.1 tcp dport 6443 counter packets 54 bytes 3240 dnat to numgen inc mod 2 map { 0 : 192.168.2.110, 1 : 192.168.2.111 }

This output is parsable, although not all that useful; the nftables documentation mentions SNMP but I’ve yet to find any real concrete documentation.
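
One option for scraping the counters programmatically is the JSON output that newer nft releases provide (nft -j / --json); the exact JSON layout varies between nftables versions, so treat this as an assumption and inspect the output before building anything on top of it:

# JSON output of the nat table; pipe through jq to pretty print or filter out the counter objects
nft -j list table ip nat | jq .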

Additional resources

This is a lazy introduction to the options that can be employed to load balance the Kubernetes API server; in this first post we will be focusing on the Kubernetes API server and load balancers in a general sense, just to get a slightly deeper understanding of what is happening.

Kubernetes API server

As its name suggests, the Kubernetes API server is the “entry-point” into a Kubernetes cluster and allows for all CRUD (create/read/update/delete) operations. Interaction with the Kubernetes API server is typically through REST with JSON/YAML payloads that define objects within the Kubernetes cluster. A Kubernetes API server will happily run as a singular instance, however this (by its very design) is a single point of failure and offers no high availability. In order to provide a resilient and highly available Kubernetes API server, a load balancer should be placed above the Kubernetes API server replicas, with the load balancer ensuring that a client is always passed to a healthy instance.

Example Kubernetes API architecture

Here a load balancer (192.168.1.1) will select one of the two instances beneath it and present it to the client.

          ------------------
         | 192.168.1.1:6443 |
          ------------------
            |            |
 --------------------   --------------------
| 192.168.1.110:6443 | | 192.168.1.111:6443 |
 --------------------   --------------------

Load balancers

Load balancers have been a heavily deployed technology for a number of decades, their use-case has typically been to provide a solution to two technical challenges:

  • Scaling workloads over application replicas
  • High Availability over “working” application replicas

With scaling workloads, a load balancer will take requests and, depending on the distribution algorithm, send each request to one of the pre-configured backends.

For high availability, the load balancer will typically employ some form of health check to determine whether an endpoint is working correctly and use that to decide whether traffic can be sent to that endpoint. As far as the end user is concerned, traffic will always be hitting a working endpoint as long as one is available.

Load balancer architecture

The majority of load balancers operate using a few common structures:

Front end

The front end is the client-side part of the load balancer, and it is this front end that is exposed to the outside world. Either an end user or an application will connect to the front end in exactly the same manner that they would connect to the application directly. The load balancing should always be transparent to the end user/application that is connecting to the load balancer.

Back end(s)

When the front end of a load balancer is accessed by a client, it is the load balancer’s main purpose to redirect that traffic transparently to a back end server. A back end server is where the application itself is actually running, and the load balancer will typically perform a check to ensure that the application is available before sending any traffic to it. As the main point of a load balancer is to provide both high availability and scaling, multiple back ends are usually configured under a single front end; these are typically called a pool, from which the load balancer will select one of the healthy back ends.

Selection Algorithm

Once a front end is defined and a pool of backends has been added, a predetermined selection algorithm is typically used to decide which backend should be chosen from the pool. As this is meant to be a simple overview, we will only cover the two most commonly used algorithms:

Round-Robin

This algorithm is arguably the simplest: it loops through the pool of back end servers until they’re exhausted and then starts again.

Weighted

This method provides the capability of having traffic predetermined to go more heavily to some backends in the pool than others. Weighted algorithms differ between load balancers, but if we pretend that the weighting is based upon a simple percentage we could imagine the following two examples:

Equal weighting

backend1.com 50%
backend2.com 50%

This would be almost the same as round-robin load balancing, as requests would be shared equally between the two backends in the pool.

Weighted load balancing

backend1.com 10%
backend2.com 90%

This would mean that one out of every ten requests goes to backend1.com and the remaining nine go to backend2.com. The main use-case for this is typically something like a phased rollout of a new release of an application (a canary deployment).

Types of Load balancer

Load balancers provide a number of different mechanisms and ways of exposing their services to be consumed, and each of these mechanisms has various advantages and gotchas to be aware of. The majority of load balancers provide their service at a particular layer of the OSI model (https://en.wikipedia.org/wiki/OSI_model).

Layer 3 Load balancer

A Layer 3 load balancer can typically be thought of as an “endpoint” load balancer. This load balancer will expose itself using an IP address, sometimes referred to as a VIP (Virtual IP address), and will then load balance incoming connections over one or more pre-defined endpoints. Health checks to these internal endpoints are typically performed by attempting to either create a connection to the endpoint or perform a simple “ping” check.

Layer 4 Load balancer

A Layer 4 load balancer will usually provide its functionality as a service that is exposed on a port on an IP address. The most common load balancer of this type is a web traffic load balancer that exposes traffic on the IP address of the load balancer through TCP ports 80 (un-encrypted) and 443 (SSL-encrypted web traffic). However, this type of load balancer isn’t restricted to web traffic and can load balance connections to other services that listen on other TCP ports.

To ensure that the pool of backends is “healthy”, a Layer 4 load balancer has the capability to verify not only that a backend accepts connections, but also that the application is behaving as expected.

Example

Unhealthy application

  1. Ping endpoint (successful)
  2. Attempt to read data (a lot of web applications will expose a /health URL that can determine whether the application has completed its initialisation); if the application isn’t ready it will return an HTTP error code such as 500 (anything above 300 is typically treated as not healthy)

Layer 7 Load balancer

A Layer 7 load balancer has to have knowledge of a particular protocol (or application) in order to provide load balancing to a particular pool of endpoints. This kind of load balancer is often found in front of large websites that host various services under the same domain name.

Example

Consider two URLs served under the same domain (for example, https://example.com and https://example.com/finance).

With the previous two types of load balancer, all traffic will go to the same pool of servers regardless of the actual URL requested by the end user.

In a Layer 3 load balancer:

example.com –> resolves to –> load balancer IP –> which then selects a server from –> endpoint pool

In a Layer 4 load balancer:

https://example.com –> resolves to –> load balancer IP –> and the https:// resolves to port 443 –> which then selects a server from –> endpoint pool

The Layer 7 load balancer, however, uses its knowledge of the protocol/application and can load balance based upon application-level behaviour. In this example the load balancer still makes all of the networking decisions needed to direct traffic to the correct pool of servers, but it can also make additional decisions based upon things like application behaviour or the client request.

In a Layer 7 load balancer:

https://example.com/finance –> resolves to –> load balancer IP –> and the https:// resolves to port 443 –> load balancer identifies traffic as http/s and can parse behaviour –> /finance read as the URI –> Selects from the “finance” endpoint pool.

Existing load balancers and examples

There has been a large market for load balancers for a number of decades now, as the explosion of web and other highly available and scalable applications has driven demand. Originally the lion’s share of load balancers were hardware devices and would sit with your switching, routing, firewall and other network appliances. However, as the demand for smaller, quickly provisioned load balancers has taken hold, the software-based load balancer has become prevalent. Originally the software load balancer was limited by consumer hardware limitations and missing functionality, however with hardware offloads on NICs and built-in crypto in CPUs a lot of these issues have been removed. Finally, as cloud services have exploded, so has LBaaS (load balancer as a service), providing an externally addressable IP that can easily be pointed at a number of internal instances through wizards/APIs or the click of a few buttons.

A Quick Go example of a Load Balancer

Below is a quick example of writing your own HTTP(S) load balancer: it creates a handler function that passes traffic to one of the backend servers and returns the result back to the client of the load balancer. The two things to consider here are the backendURL() function and the req. (HTTP request) modifications.

  • backendURL() - This function will choose a server from the available pool and use it as a backend server
  • req.XYZ - Here we’re re-writing some of the HTTP request so that the backend returns the traffic back to the correct client.
// (uses net/http, net/http/httputil and net/url; backendURL() and frontEnd are defined elsewhere)
handler := func(w http.ResponseWriter, req *http.Request) {

    // backendURL() picks a server from the pool of available backends
    url, _ := url.Parse(backendURL())

    // create a reverse proxy pointing at the chosen backend
    proxy := httputil.NewSingleHostReverseProxy(url)

    // rewrite the request so it is addressed to the backend,
    // preserving the original host in the X-Forwarded-Host header
    req.URL.Host = url.Host
    req.URL.Scheme = url.Scheme
    req.Header.Set("X-Forwarded-Host", req.Host)
    req.Host = url.Host

    proxy.ServeHTTP(w, req)
}

// expose the handler on the load balancer's front end address
mux := http.NewServeMux()
mux.HandleFunc("/", handler)
http.ListenAndServe(frontEnd, mux)

HAProxy load balancing example for Kubernetes API server

You will find below a simple example haproxy configuration (/etc/haproxy/haproxy.cfg) that will load balance from the haproxy VM/server to the two Kubernetes control plane nodes.

frontend kubernetes-api
    bind <haproxy address>:6443
    mode tcp
    option tcplog
    default_backend kubernetes-api

backend kubernetes-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100

    server k8s-master1 192.168.2.1:6443 check
    server k8s-master2 192.168.2.2:6443 check
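
After editing the configuration it is worth checking it and reloading the service; on a systemd-based distribution that is typically:

# check the configuration file for errors
haproxy -c -f /etc/haproxy/haproxy.cfg
# reload the running service
sudo systemctl reload haproxy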

Nginx load balancing example for Kubernetes API server

You will find below a simple example nginx configuration (/etc/nginx/nginx.conf) that will load balance from the nginx VM/server to the two Kubernetes control plane nodes.

stream {
    upstream kubernetes-api {
        server 192.168.2.1:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.2.2:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 6443;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass kubernetes-api;
    }
}
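
As with haproxy, the configuration can be validated and reloaded in place (note that the stream block needs to sit at the top level of nginx.conf, outside the http block):

# test the configuration
sudo nginx -t
# reload without dropping existing connections
sudo systemctl reload nginx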

Next steps

This entire post was actually leading up to load balancing the Kubernetes API server with nftables, so click here to read that next. :-)

I’ve had numerous homelabs over the years, all of which have been incredibly valuable learning tools. This even includes a 1U Pentium III rackmount server I purchased from eBay in 2002; at that point I’d never been near a “real” server and was super excited up to the moment we turned it on for the first time. My student housemates and I spent an age finding a location for the thing where it wouldn’t make the house shake and we’d still be able to sleep.

In 2013 I decided I wanted a homelab that would provide a usable vSphere environment (I was working towards the VCP/VCAP at the time) and also enough capacity to run a good few virtual machines for other technologies I wanted to learn.

I was in a relatively small flat where Kim was also working so I had a number of restrictions:

  • Small footprint
  • Minimal (if any) noise
  • Relatively “average” power consumption
  • Enough capacity and capability to run a good few VMs (~10) in a usable fashion

After doing some various calculations and looking at the options in the market (Nucs, Brix, free standing towers) I decided that a pair of the new ultra compact PCs should provide the functionality I needed.

Lab v1

In the end I opted for a pair of Gigabyte Brix GB-XM11-3337 (i5 dual core) units (identical hardware to a comparable Intel NUC, but slightly cheaper at the time).

I also maxed out the RAM and added M.2 modules, giving me the following:

  • Dual core Intel(R) Core(TM) i5-3337U CPU @ 1.80GHz (+hyperthreading)
  • 16GB DDR3
  • 250GB M.2 2240 (half-length)
  • vSphere 5.x

Both Brix were also backed by an NFS datastore allowing vMotion between the two physical hosts.

Over the last five years the two small systems have provided the capability to:

  • Deploy enough of the VMware stack to acquire the VCP
  • Host various Cisco software, such as the Unified Compute System (UCS) platform, to develop against
  • Do the same with the HPE OneView datacentre simulator and Virtual Connect emulator
  • Run the Docker EE stack
  • Run a highly available Kubernetes stack
  • Endless other incomplete projects :-)

Lab v2

After five years the Brix have had only one issue: a fan died in one of them (the repair was the princely sum of £3.99). However, as of vSphere 6.7 the CPUs are no longer supported, and as I try to automate a lot of quick VM creation/destruction events it has become obvious that they’re starting to struggle with the workload. I was very excited to see that the latest ultra compact PCs support a theoretical 64GB of RAM; I was less excited to find out that the 32GB SO-DIMMs aren’t for sale.

I briefly looked at some of the SuperMicro E200 / E300 machines that are seemingly popular in the homelab community, but was again concerned about fan noise in the home office. After giving up the wait for 32GB SO-DIMMs I priced up some of the newest Brix and decided to pull the trigger on a pair of Gigabyte Brix GB-BRi7-8550 (i7 quad core).

With some additions I had the following:

  • Quad core Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz (+hyperthreading)
  • 32GB DDR4
  • 1TB M.2 2280 (full-length)
  • vSphere 6.7

Deployment notes

vSphere installation

All nodes in the cluster boot from a USB stick in the rear port, leaving the internal disk available for VM storage and VM flash caching. To ease the installation I typically mount the vSphere installation ISO and the USB stick in VMware Fusion and just step through the installation steps (including setting the networking configuration for vSphere). This means I don’t need to connect a screen/keyboard to the Brix; I can install and configure on my MacBook, then move the USB stick over and boot away.

Network interface cards

The original vSphere 5.1 supported the NIC in the first two Brix, however support was subsequently removed as vSphere was updated. Luckily I’ve always gone through the vSphere upgrade path, which doesn’t remove the previous Realtek r8168 NIC driver. The newer Brix come with an Intel I219-V interface, which is supported out of the box (apparently).

Unfortunately vSphere will attempt to load the wrong driver for this interface, resulting in a warning in the vSphere UI (DCUI) that specifies that no adapters could be found.

To remedy this:

  1. The DCUI console needs to be enabled
  2. Switch to the correct TTY
  3. Disable the incorrect driver esxcli system module set --enabled=false --module=ne1000
  4. Reboot

After this change, the Brix will make use of the ne1000e driver and will have connectivity.
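
The change can be verified from the ESXi shell with the standard esxcli calls, which should show the module disabled and the NIC present:

esxcli system module list | grep ne1000
esxcli network nic list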

All added together, I now have a four-node cluster (2 x 6.5, 2 x 6.7) that should provide me with enough capability to last a good couple of years for my use-cases.