Cluster-API-Skeleton Project

Pre-requisites

Before you can begin to create a Cluster-API provider, there are a few tools and utilities that need to be in place.

Go

At this time kubebuilder and the resulting examples all produce code in Go, so it makes sense to follow that path.

kubebuilder

The kubebuilder tool is used to handle the creation and management of CRDs (Custom Resource Definitions) and the controllers that manipulate the objects created from a resource definition. Installation instructions can be found at https://book.kubebuilder.io/quick-start.html. We will use kubebuilder to build our Machine, Cluster and Bootstrap CRDs and controllers.

Cluster-API

Cluster-API is an upstream Kubernetes project that extends Kubernetes so that it can manage infrastructure resources and Kubernetes clusters in much the same way that it manages the components of an application hosted on Kubernetes.

It can be installed by applying the manifest kubectl create -f https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.6/cluster-api-components.yaml (correct as of 28th October 2019). This manifest will install the following:

  • CRDs for Machines, Clusters, MachineSets, MachineDeployments and Bootstraps
  • Controllers to handle all of the above resources
  • ClusterRole bindings that will allow the above controllers to manipulate resources belonging to providers that extend Cluster-API (important)

If we examine this ClusterRole we can see what it is allowed to work on:

kubectl describe clusterrole capi-manager-role | grep cluster

*.bootstrap.cluster.x-k8s.io [] [] [create delete get list patch update watch]
clusters.cluster.x-k8s.io/status [] [] [create delete get list patch update watch]
clusters.cluster.x-k8s.io [] [] [create delete get list patch update watch]
machinedeployments.cluster.x-k8s.io/status [] [] [create delete get list patch update watch]
machinedeployments.cluster.x-k8s.io [] [] [create delete get list patch update watch]
machines.cluster.x-k8s.io/status [] [] [create delete get list patch update watch]
machines.cluster.x-k8s.io [] [] [create delete get list patch update watch]
machinesets.cluster.x-k8s.io/status [] [] [create delete get list patch update watch]
machinesets.cluster.x-k8s.io [] [] [create delete get list patch update watch]
*.infrastructure.cluster.x-k8s.io [] [] [create delete get list patch update watch]

The final line shows that we will have access to the resource *.infrastructure.cluster.x-k8s.io (note the asterisk). This wildcard is a blanket statement covering the resources of the various Cluster-API infrastructure providers, and it will be covered in a little more detail when we create a provider with kubebuilder.
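Once a provider has been installed, we can check which resources actually fall under this wildcard with a command such as kubectl api-resources --api-group=infrastructure.cluster.x-k8s.io, which lists every resource registered in the infrastructure.cluster.x-k8s.io API group.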

Building

To build a Cluster-API provider we can make use of the model that exists within kubebuilder and extend it so that the provider is aware of Cluster-API and Cluster-API can drive the provider.

Initialise the repository

We will need a workspace in order to create our Cluster-API provider, so we will create a directory with mkdir cluster-api-{x}; cd cluster-api-{x}. If this directory is outside of the $GOPATH (which we can check with go env), we will also need to create a go.mod, which we can do with go mod init {x}.

Create the CAP{x} project

Once our directory is created and we’ve initialised our go environment, we will use kubebuilder to define the initial project.

Important: the --domain we pass to kubebuilder needs to be the same domain that is specified above as part of the ClusterRole bindings for Cluster-API, i.e. cluster.x-k8s.io.

kubebuilder init --domain cluster.x-k8s.io --license apache2 --owner "The Kubernetes Authors"

Creating CRDs/Controllers with kubebuilder

With everything ready we can now define our Custom Resource Definitions and the Controllers that will manipulate them.

We do this with kubebuilder create api. If we break down the command flags:

  • --kind The kind of resource we are defining (the name must start with a capital letter)
  • --group The group these resources will live under
  • --resource Create the CRD
  • --controller Create the Controller code
  • --version The version of our CRD/Controller we’re defining

If we were creating a Cluster-API provider called example then the command would look something like:

kubebuilder create api --kind ExampleCluster --group infrastructure --resource=true --controller=true --version v1alpha1

Important note

This creates the resource exampleclusters.infrastructure.cluster.x-k8s.io which, as we can see by looking back at the ClusterRole bindings, Cluster-API is allowed to manipulate.

Create the Cluster Controller/Resource

kubebuilder create api --kind {x}Cluster --group infrastructure --resource=true --controller=true --version v1alpha1

Create the Machine Controller/Resource

kubebuilder create api --kind {x}Machine --group infrastructure --resource=true --controller=true --version v1alpha1

main.go

Add clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2" to the imports

Add _ = clusterv1.AddToScheme(scheme) to init()
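After both changes the relevant parts of main.go should look roughly like the sketch below. This is only an outline; the elided imports and AddToScheme calls are whatever kubebuilder scaffolded for our own API group, and the exact scaffolding varies slightly between kubebuilder versions.

import (
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2" // Cluster-API core types (Cluster, Machine, ...)
	// ... remaining scaffolded imports, including our own api/v1alpha1 package ...
)

var (
	scheme = runtime.NewScheme()
)

func init() {
	_ = clientgoscheme.AddToScheme(scheme)
	_ = clusterv1.AddToScheme(scheme) // register Cluster-API types so our controllers can fetch Cluster objects
	// ... remaining scaffolded AddToScheme calls for our own API group ...
}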

API Definitions in /api/v1alpha1

{x}cluster_types.go

Add Finalizer to {x}cluster_types.go

const (
	// ClusterFinalizer allows {x}ClusterReconciler to clean up resources associated with {x}Cluster before
	// removing it from the apiserver.
	ClusterFinalizer = "{x}cluster.infrastructure.cluster.x-k8s.io"
)

Add additional fields to Status

// Ready denotes that the {x} cluster (infrastructure) is ready.
Ready bool `json:"ready"`

// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file

// APIEndpoints represents the endpoints to communicate with the control plane.
// +optional
APIEndpoints []APIEndpoint `json:"apiEndpoints,omitempty"`

Cluster-specific endpoints

// APIEndpoint represents a reachable Kubernetes API endpoint.
type APIEndpoint struct {
	// Host is the hostname on which the API server is serving.
	Host string `json:"host"`

	// Port is the port on which the API server is serving.
	Port int `json:"port"`
}

{x}machine_types.go

TODO
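The machine types are left as a TODO in this walkthrough. As a rough guide, and as an assumption based on how the existing v1alpha2 infrastructure providers are laid out rather than anything prescribed here, {x}machine_types.go usually mirrors the cluster types with a finalizer constant, a ProviderID field in the spec and a Ready field in the status:

const (
	// MachineFinalizer allows {x}MachineReconciler to clean up resources associated with {x}Machine
	// before removing it from the apiserver.
	MachineFinalizer = "{x}machine.infrastructure.cluster.x-k8s.io"
)

// In {x}MachineSpec:

// ProviderID is the unique identifier for the machine as known by the infrastructure.
// +optional
ProviderID *string `json:"providerID,omitempty"`

// In {x}MachineStatus:

// Ready denotes that the {x} machine (infrastructure) is ready.
Ready bool `json:"ready"`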

Controllers

Cluster Controller /controllers/{x}cluster_controller.go

Modify Imports

Change the infrastructurev1alpha1 <import path> alias to infrav1 <import path>; this will make the code easier to re-use in the future and to share with other infrastructure providers.
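As an example, assuming the module was initialised as {x} with go mod init {x} (so the import path below is whatever go mod init produced), the aliased import would look something like:

import (
	// ... other scaffolded imports ...

	// infrav1 aliases our API group so the controller code reads the same across providers.
	infrav1 "{x}/api/v1alpha1"
)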

Define Cluster Controller Name

const (
	clusterControllerName = "{x}cluster-controller"
)

Modify the Reconcile function (part 1: context, logging and getting our object)

func (r *{x}ClusterReconciler) Reconcile(req ctrl.Request) (_ ctrl.Result, rerr error) {
	ctx := context.Background()
	log := log.Log.WithName(clusterControllerName).WithValues("{x}-cluster", req.NamespacedName)

	// Create an empty instance of a {x}Cluster object
	{x}ClusterObj := &infrav1.{x}Cluster{}

	// Fetch our {x}Cluster object
	if err := r.Client.Get(ctx, req.NamespacedName, {x}ClusterObj); err != nil {
		if apierrors.IsNotFound(err) {
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}

Modify the Reconcile function (part 2: Find the Cluster-API cluster)

	// Fetch the Cluster-API parent Cluster.
	cluster, err := util.GetOwnerCluster(ctx, r.Client, {x}ClusterObj.ObjectMeta)
	if err != nil {
		return ctrl.Result{}, err
	}

	if cluster == nil {
		log.Info("Waiting for Cluster Controller to set OwnerRef on {x}Cluster")
		return ctrl.Result{}, nil
	}

	// Enable logging to reference the Cluster-API cluster
	log = log.WithValues("cluster", cluster.Name)

Modify the Reconcile function (part 3: Create a defer to patch the object)

	// Initialize the patch helper
	patchHelper, err := patch.NewHelper({x}ClusterObj, r)
	if err != nil {
		return ctrl.Result{}, err
	}

	// Always attempt to patch the {x}Cluster object and status after each reconciliation.
	defer func() {
		if err := patchHelper.Patch(ctx, {x}ClusterObj); err != nil {
			log.Error(err, "failed to patch {x}Cluster object")
			if rerr == nil {
				rerr = err
			}
		}
	}()

Modify the Reconcile function (part 4: Act on the {x} cluster object)

	// Handle deleted clusters
	if !{x}ClusterObj.DeletionTimestamp.IsZero() {
		return r.reconcileClusterDelete(log, {x}ClusterObj)
	}

	return r.reconcileCluster(log, cluster, {x}ClusterObj)
} // End of Reconcile function

Additional functions

func (r *{x}ClusterReconciler) reconcileCluster(logger logr.Logger, cluster *clusterv1.Cluster, {x}Cluster *infrav1.{x}Cluster) (_ ctrl.Result, reterr error) {
	logger.Info("Reconciling Cluster")

	if !util.Contains({x}Cluster.Finalizers, infrav1.ClusterFinalizer) {
		{x}Cluster.Finalizers = append({x}Cluster.Finalizers, infrav1.ClusterFinalizer)
	}

	// RECONCILE LOGIC

	// IMPORTANT - Setting this status to true means that it is recognized as provisioned / ready
	{x}Cluster.Status.Ready = true

	return ctrl.Result{}, reterr
}

func (r *{x}ClusterReconciler) reconcileClusterDelete(logger logr.Logger, {x}Cluster *infrav1.{x}Cluster) (_ ctrl.Result, reterr error) {
	logger.Info("Deleting Cluster")

	// DELETE LOGIC

	// Remove our finalizer from the list so that Kubernetes can remove the object once deletion is complete
	{x}Cluster.Finalizers = util.Filter({x}Cluster.Finalizers, infrav1.ClusterFinalizer)

	return ctrl.Result{}, reterr
}