Cluster-API-Skeleton Project
Pre-requisites
Before you can begin to create a Cluster-API provider, there are a few tools and utilities that need to be in place.
GOLANG
At this time kubebuilder and the resulting examples all produce code in Go, so it makes sense to follow that path.
kubebuilder
The kubebuilder tool is used to handle the management and creation of CRDs (Custom Resource Definitions) and the controllers that manipulate the objects created from a resource definition. The installation instructions can be found at https://book.kubebuilder.io/quick-start.html. We will use kubebuilder in order to build our Machine, Cluster and Bootstrap CRDs and controllers.
Cluster-API
Cluster-API is an upstream Kubernetes project that extends Kubernetes so that it can manage infrastructure resources and Kubernetes clusters in much the same way that it manages the components of an application hosted on Kubernetes.
It can be installed by applying the manifest kubectl create -f https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.2.6/cluster-api-components.yaml
(correct as of 28th Oct’ 2019). This manifest will install the following:
- CRDs for Machines, Clusters, MachineSets, MachineDeployments and Bootstraps
- Controllers to handle all of the above resources
- ClusterRole Bindings that will allow the above controllers to manipulate the resources of other providers that extend Cluster-API (important)
If we examine this ClusterRole we can see what it is allowed to work on:
kubectl describe clusterrole capi-manager-role | grep cluster
The final line shows that we will have access to the resource *.infrastructure.cluster.x-k8s.io (note the asterisk). This resource is a blanket statement covering the other Cluster-API providers, and it will be covered in a little more detail when we create a provider with kubebuilder.
Building
To build a Cluster-API provider we can make use of the model that exists within kubebuilder and extend it so that it is both aware of Cluster-API and so that Cluster-API can drive the provider.
Initialise the repository
We will need a workspace in which to create our Cluster-API provider, so we create our directory with mkdir cluster-api-{x}; cd cluster-api-{x}. If this directory is outside of the $GOPATH (which we can check by looking at go env), then we will need to create a go.mod, which we can do with go mod init {x}.
Create the CAP{x} project
Once our directory is created and we've initialised our Go environment, we will use kubebuilder to define the initial project.
Important: the domain we pass to kubebuilder needs to be the same domain that is specified above as part of the ClusterRole bindings for CAPI.
kubebuilder init --domain cluster.x-k8s.io --license apache2 --owner "The Kubernetes Authors"
Creating CRDs/Controllers with kubebuilder
With everything ready we can now define our Custom Resource Definitions and the Controllers that will manipulate them.
If we break down the command flags:
- --kind The type of resource we are defining (needs to start with a capital letter)
- --group The group these resources will live under
- --resource Create the CRD
- --controller Create the Controller code
- --version The version of our CRD/Controller we're defining
If we were creating a Cluster-API provider called example then the command would look something like:
kubebuilder create api --kind ExampleCluster --group infrastructure --resource=true --controller=true --version v1alpha1
Important note
This would create our resource exampleclusters.infrastructure.cluster.x-k8s.io which, as we can see looking back at the ClusterRole bindings, CAPI is allowed to manipulate.
Create the Cluster Controller/Resource
kubebuilder create api --kind {x}Cluster --group infrastructure --resource=true --controller=true --version v1alpha1
Create the Machine Controller/Resource
kubebuilder create api --kind {x}Machine --group infrastructure --resource=true --controller=true --version v1alpha1
main.go
- Add clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2" to the import(s)
- Add _ = clusterv1.AddToScheme(scheme) to init()
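A minimal sketch of how those two changes sit in the kubebuilder-generated main.go, assuming a hypothetical provider module called example (your generated import path will differ); the generated main() function itself is unchanged and omitted here:

```go
package main

import (
	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"

	// Upstream Cluster-API types (v1alpha2 at the time of writing).
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2"

	// The API group generated by kubebuilder for our provider; the module
	// name "example" is an assumption for this sketch.
	infrav1 "example/api/v1alpha1"
)

var (
	scheme = runtime.NewScheme()
	_      = ctrl.Log // the scaffold also sets up logging here
)

func init() {
	_ = clientgoscheme.AddToScheme(scheme)
	_ = infrav1.AddToScheme(scheme)

	// Register the Cluster-API types so our controllers can read the parent
	// Cluster objects that CAPI creates.
	_ = clusterv1.AddToScheme(scheme)
	// +kubebuilder:scaffold:scheme
}
```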
API Definitions in /api/v1alpha1
{x}cluster_types.go
Add Finalizer to {x}cluster_types.go
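A hedged sketch of such a finalizer constant, modelled on the pattern used by other infrastructure providers; the name and value shown here (using example as the provider name) are assumptions:

```go
const (
	// ClusterFinalizer allows the ExampleCluster reconciler to clean up
	// provider resources before the object is removed from the API server.
	ClusterFinalizer = "examplecluster.infrastructure.cluster.x-k8s.io"
)
```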
Add additional fields to Status
// Ready denotes that the {x} cluster (infrastructure) is ready.
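As a sketch (the field names follow the pattern used by the upstream infrastructure providers and are assumptions here), the status can carry a Ready flag and a list of API endpoints:

```go
// ExampleClusterStatus defines the observed state of ExampleCluster.
type ExampleClusterStatus struct {
	// Ready denotes that the cluster (infrastructure) is ready.
	Ready bool `json:"ready"`

	// APIEndpoints represents the endpoints used to communicate with the
	// control plane (the APIEndpoint type is defined below).
	// +optional
	APIEndpoints []APIEndpoint `json:"apiEndpoints,omitempty"`
}
```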
Cluster specific endpoints
// APIEndpoint represents a reachable Kubernetes API endpoint.
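A sketch of that endpoint type, again following the convention used by the upstream providers:

```go
// APIEndpoint represents a reachable Kubernetes API endpoint.
type APIEndpoint struct {
	// Host is the hostname on which the API server is serving.
	Host string `json:"host"`

	// Port is the port on which the API server is serving.
	Port int `json:"port"`
}
```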
{x}machine_types.go
TODO
Controllers
Cluster Controller /controller/{x}cluster_controller.go
Modify Imports
Change infrastructurev1alpha1 <import path> to infrav1 <import path>; this will make the code easier to re-use in the future and to share code with other infrastructure providers. A short example follows.
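For example (the example module path is an assumption for this sketch), the top of the controller file would import:

```go
import (
	// Aliased to infrav1 so the controller code reads the same across
	// different infrastructure providers.
	infrav1 "example/api/v1alpha1"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha2"
)
```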
Define Cluster Controller Name
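The exact value is provider-specific; as a hedged sketch, it can be as simple as a name used in logs and events (the value shown is only a placeholder):

```go
const (
	// clusterControllerName identifies this controller in logs and events.
	clusterControllerName = "examplecluster-controller"
)
```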
Modify the Reconcile function (part 1: context, logging and getting our object)
func (r *{x}ClusterReconciler) Reconcile(req ctrl.Request) (_ ctrl.Result, rerr error) {
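Filled in for a hypothetical example provider, the opening of the function might look like the following sketch (the Log field comes from the kubebuilder scaffold; apierrors is k8s.io/apimachinery/pkg/api/errors). It is continued in the parts below:

```go
func (r *ExampleClusterReconciler) Reconcile(req ctrl.Request) (_ ctrl.Result, rerr error) {
	ctx := context.Background()
	logger := r.Log.WithValues("examplecluster", req.NamespacedName)

	// Fetch the ExampleCluster instance we have been asked to reconcile.
	exampleCluster := &infrav1.ExampleCluster{}
	if err := r.Get(ctx, req.NamespacedName, exampleCluster); err != nil {
		if apierrors.IsNotFound(err) {
			// The object has been deleted; nothing more to do.
			return ctrl.Result{}, nil
		}
		return ctrl.Result{}, err
	}
```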
Modify the Reconcile function (part 2: Find the Cluster-API cluster)
// Fetch the Cluster API Parent.
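Continuing inside Reconcile, we look up the owning Cluster-API Cluster. This sketch assumes the util.GetOwnerCluster helper from sigs.k8s.io/cluster-api/util, as used by the upstream providers:

```go
	// Fetch the Cluster API parent that owns this ExampleCluster.
	cluster, err := util.GetOwnerCluster(ctx, r.Client, exampleCluster.ObjectMeta)
	if err != nil {
		return ctrl.Result{}, err
	}
	if cluster == nil {
		// The Cluster controller has not set the OwnerRef yet; try again later.
		logger.Info("Waiting for Cluster Controller to set OwnerRef on ExampleCluster")
		return ctrl.Result{}, nil
	}
	logger = logger.WithValues("cluster", cluster.Name)
```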
Modify the Reconcile function (part 3: Create a defer to patch the object)
// Initialize the patch helper
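Again a sketch, assuming the patch helper from sigs.k8s.io/cluster-api/util/patch; the defer makes sure any changes we make to the object (spec or status) are written back when Reconcile returns:

```go
	// Initialize the patch helper.
	patchHelper, err := patch.NewHelper(exampleCluster, r.Client)
	if err != nil {
		return ctrl.Result{}, err
	}
	// Always attempt to patch the ExampleCluster and its status on the way
	// out of Reconcile, surfacing any patch error that occurs.
	defer func() {
		if err := patchHelper.Patch(ctx, exampleCluster); err != nil {
			if rerr == nil {
				rerr = err
			}
		}
	}()
```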
Modify the Reconcile function (part 4: Act on the {x} cluster object)
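The final part of Reconcile hands the object over to provider-specific logic; in this sketch it simply delegates to the reconcileCluster function shown in the next section, which also closes the function:

```go
	// Act on the ExampleCluster object: delegate to provider-specific logic.
	return r.reconcileCluster(logger, cluster, exampleCluster)
}
```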
Additional functions
func (r *{x}ClusterReconciler) reconcileCluster(logger logr.Logger, cluster *clusterv1.Cluster, {x}Cluster *infrav1.{x}Cluster) (_ ctrl.Result, reterr error) {
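A skeleton implementation of that function might do nothing more than publish a placeholder API endpoint and mark the infrastructure as ready, which is enough for Cluster-API to carry on; the endpoint values here are purely illustrative:

```go
func (r *ExampleClusterReconciler) reconcileCluster(logger logr.Logger, cluster *clusterv1.Cluster, exampleCluster *infrav1.ExampleCluster) (_ ctrl.Result, reterr error) {
	logger.Info("Reconciling Cluster", "cluster-name", cluster.Name)

	// Provider-specific work goes here: create or look up the real
	// infrastructure, then publish how to reach its API server.
	exampleCluster.Status.APIEndpoints = []infrav1.APIEndpoint{
		{Host: "192.168.0.1", Port: 6443}, // placeholder endpoint
	}

	// Marking the infrastructure Ready lets Cluster-API move on to
	// bootstrapping machines for this cluster.
	exampleCluster.Status.Ready = true
	return ctrl.Result{}, nil
}
```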