How to route traffic via a reverse proxy, e.g. Nginx, in KinD.
The setup we’re going for is a cluster behind a single public/external IP address: multiple servers in a private network, where incoming traffic can only be (statically) routed to one of them.
This is comparable to a typical home setup: you have a router running NAT, and you can only forward incoming traffic to a single internal IP (technically, one per port).
ELI5: You’re the mailman and you have a letter that needs to be delivered to a certain address. You get there and it’s a big apartment building: you have the location/address of the building, but you don’t know which apartment the letter is meant for. Since you can only deliver the letter once, you can’t deliver it at all, hence the need for forwarding rules: one inbox for the building, and someone or something that delivers it internally following predetermined rules.
With KinD we run into a similar problem: think of your computer/host as the router and the Docker containers as servers behind it. The same issue arises (in this case it’s called binding): you can only bind a host port to one app/service/container. To summarize: we need to bind the ports to a single container and let it manage the traffic to the rest of the cluster.
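To see the binding limitation for yourself, here’s a minimal sketch using plain Docker (the container names web1/web2 and the nginx image are just placeholders for this illustration):

# First container claims host port 80
docker run -d --name web1 -p 80:80 nginx

# Second container tries to claim the same host port and is rejected with
# something like "Bind for 0.0.0.0:80 failed: port is already allocated"
docker run -d --name web2 -p 80:80 nginx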
The container that is going to do this is usually called a reverse proxy, for which we’ll use Nginx. Kubernetes normally assigns pods to a node automagically, and that brings us to our next issue: to make this work we need to make sure that Nginx runs on the node (worker container) that has the port bindings.
KinD config
First things first, let’s configure KinD to actually have port bindings; they will be bound on the first worker node.
config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  serviceSubnet: "172.30.0.0/16"
  podSubnet: "10.254.0.0/16"
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: 0.0.0.0
  - containerPort: 443
    hostPort: 443
    listenAddress: 0.0.0.0
- role: worker
- role: worker
- role: worker
Save the above config to config.yaml and run the following command: kind create cluster --config config.yaml
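Once the cluster is up, a quick sanity check (assuming the default cluster name kind, so the node containers are named kind-control-plane, kind-worker, and so on):

# The worker carrying the port mappings should show 0.0.0.0:80->80/tcp and 0.0.0.0:443->443/tcp
docker ps --format 'table {{.Names}}\t{{.Ports}}'

# All five nodes should eventually report Ready
kubectl get nodes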
Next we ’tag’ the first worker node as ingress host: kubectl label node kind-worker nginx=ingresshost
In case kubectl can’t connect, run kind export kubeconfig to set the config.
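You can verify the label landed on the right node (kind-worker being the default name of the first worker):

# Should list exactly one node: kind-worker
kubectl get nodes -l nginx=ingresshost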
Next we actually deploy Nginx to the correct node by adding a nodeSelector to the deployment.
helm install nginx-ingress stable/nginx-ingress --set controller.nodeSelector.nginx=ingresshost
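After the release is installed, it’s worth checking that the ingress controller pod actually landed on the labelled node; the NODE column should read kind-worker (pod names will vary with your release name):

# Find the nginx-ingress controller pod and check its NODE column
kubectl get pods -o wide

Using --set as above should be equivalent to setting controller.nodeSelector in a values file, if you prefer keeping the configuration in version control.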
All other services can now “register” themselves via Ingress resources and labels.
To sum this up: incoming traffic arrives at the newly created Nginx deployment, and Nginx will “go get” the content from your service internally.
To configure this, you can set the domain in the values.yaml of your Helm chart, or create the Ingress manually.
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
  hosts:
    - host: www.example.com
      paths:
        - /
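If you’d rather create the Ingress manually instead of through a chart’s values.yaml, a minimal manifest could look like the sketch below. The service name my-service and port 80 are placeholders, and the networking.k8s.io/v1beta1 API version matches the Kubernetes 1.17 listed under Versions:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service   # placeholder: the Service you want to expose
          servicePort: 80           # placeholder: the Service port

Apply it with kubectl apply -f ingress.yaml and Nginx will start routing requests for www.example.com to that service.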
Versions
- Docker v18.09.9
- Helm v3.0.2
- KinD v0.7.0
- kubectl v1.17.2