Testing deployments with KinD

A short post about how I use KinD to test my Kubernetes deployments with Helm and Helmfile.

kind is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.

Why KinD?

KinD has two major advantages over tools like Minikube: it runs on Docker, i.e. no VM, and it supports multiple nodes.

KinD Config

In the config below, called config.yaml, we state a couple of things to simulate the real deal. We use 2 control-plane nodes and 6 workers.

kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
- role: worker
- role: worker
- role: worker

Creating && Kubectl

Creating the cluster is easily done with the following command:

kind create cluster --wait 300s --config config.yaml
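Once the command returns, it can be worth checking that all eight nodes from the config actually reached the Ready state. A small sketch of that check; ready_count is my own helper, not part of kind or kubectl:

```shell
#!/bin/sh
# ready_count is a hypothetical helper: it reads the tabular output of
# `kubectl get nodes` on stdin and prints how many nodes report Ready.
ready_count() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Against the live cluster you would pipe kubectl into it:
#   kubectl get nodes | ready_count   # expect 8 for the config above
# Stubbed sample input so the sketch runs without a cluster (prints 2):
printf 'NAME STATUS ROLES AGE VERSION\nkind-control-plane Ready master 2m v1.16.3\nkind-worker Ready <none> 2m v1.16.3\n' | ready_count
```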

Setting the KUBECONFIG: with KinD versions before 0.6.0 you should set the KUBECONFIG via export KUBECONFIG="$(kind get kubeconfig-path)". From 0.6.0 onwards this happens automagically; if it does not, check your ~/.kube folder. The config should look something like this:


apiVersion: v1
clusters: []
contexts:
- context:
    cluster: ""
    namespace: kind-config-kind
    user: ""
  name: current
current-context: ""
kind: Config
preferences: {}
users: []

~/.kube/kind-config-kind should be filled with a bunch of keys.
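For the pre-0.6.0 layout, that per-cluster kubeconfig path follows a simple naming scheme, ~/.kube/kind-config-&lt;cluster-name&gt;. A sketch reproducing it; kind_kubeconfig_path is my own helper, not a kind command:

```shell
#!/bin/sh
# kind_kubeconfig_path is a hypothetical helper mirroring where kind < 0.6.0
# wrote its per-cluster kubeconfig: ~/.kube/kind-config-<cluster-name>.
kind_kubeconfig_path() {
  printf '%s/.kube/kind-config-%s\n' "$HOME" "${1:-kind}"
}

kind_kubeconfig_path        # default cluster name -> ~/.kube/kind-config-kind
kind_kubeconfig_path test   # named cluster       -> ~/.kube/kind-config-test
```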

Kubernetes Dashboard

Setting up the dashboard requires a few more steps: installing it is not enough, since KinD enables RBAC by default, meaning we have to set up permissions.


kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml
kubectl apply -f auth.yaml


apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard


To get the token, run the following command:

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') | grep 'token:'


Accessing/running the dashboard is the same as with any other cluster.

kubectl proxy

After which it should be available @ http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
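That long URL is just the API server's service-proxy scheme: /api/v1/namespaces/&lt;namespace&gt;/services/&lt;scheme&gt;:&lt;service&gt;:&lt;port&gt;/proxy/, with the port left empty for the service's default. A sketch that assembles it; proxy_url is my own helper:

```shell
#!/bin/sh
# proxy_url is a hypothetical helper building the kubectl-proxy URL for a
# service: /api/v1/namespaces/<ns>/services/<scheme>:<name>:<port>/proxy/
# (an empty port selects the service's default port).
proxy_url() {
  ns=$1 scheme=$2 name=$3 port=${4:-}
  printf 'http://localhost:8001/api/v1/namespaces/%s/services/%s:%s:%s/proxy/\n' \
    "$ns" "$scheme" "$name" "$port"
}

# Reproduces the dashboard URL above:
proxy_url kubernetes-dashboard https kubernetes-dashboard
```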

Dashboard Helm permissions

To be able to view Helm's Tiller and its resources from the dashboard, we need to add a service account and grant it permissions.

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller



Let's use a small helmfile.yaml as an example:

environments:
  default:
    values:
      - environment/default/values.yaml
    secrets:
      - environment/default/secrets.yaml
    missingFileHandler: warn

repositories:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com

helmDefaults:
  # no verify due to unable to sign https://github.com/helm/helm/issues/2843#issuecomment-449047847
  verify: false
  wait: true
  timeout: 720
  recreatePods: false
  force: true

releases:
  - name: nginx-ingress
    chart: stable/nginx-ingress
    version: ^1.26.2
    namespace: ingress-nginx

After that you should be able to deploy simply by running helmfile sync.

See also