After installing a Kubernetes cluster composed of masters and workers, a few configuration steps still need to be completed. The join command is not the last operation to perform in order to have a fully operational cluster.
See how to deploy a k8s cluster using kubeadm here: https://www.dbi-services.com/blog/kubernetes-how-to-install-a-single-master-cluster-with-kubeadm/ .
One of the most important configuration steps is name resolution (DNS) within the k8s cluster. In this blog post, we will see how to properly configure CoreDNS for the entire cluster.
Before beginning, it’s important to know that Kubernetes has had 2 DNS implementations: Kube-DNS and CoreDNS. The first versions of Kubernetes shipped with Kube-DNS, then moved to CoreDNS starting with version 1.10. For those who want to know more about the comparison between the two: https://coredns.io/2018/11/27/cluster-dns-coredns-vs-kube-dns/
Pre-requisites:
> You need a Kubernetes cluster with the kubectl command-line tool configured
> Kubernetes version 1.6 or above
> A cluster with at least 3 nodes (1 master and 2 workers)
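To quickly check these prerequisites, you can verify the kubectl and cluster versions. This is a minimal check; the output below is assumed to match the node versions shown in the next step:

# Quick prerequisite check (versions assumed from our cluster)
[docker@docker-manager000 ~]$ kubectl version --short
Client Version: v1.15.3
Server Version: v1.15.3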
Once the cluster is initialized and the worker nodes have joined it, you can check the status of the nodes and list all pods of the kube-system namespace as follows:
[docker@docker-manager000 ~]$ kubectl get nodes -o wide
NAME                STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
docker-manager000   Ready    master   55d   v1.15.3   10.36.0.10    <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://18.9.6
docker-worker000    Ready    <none>   46d   v1.15.3   10.36.0.11    <none>        CentOS Linux 7 (Core)   3.10.0-957.10.1.el7.x86_64   docker://18.9.5
docker-worker001    Ready    <none>   46d   v1.15.3   10.36.0.12    <none>        CentOS Linux 7 (Core)   3.10.0-957.10.1.el7.x86_64   docker://18.9.5
According to the previous command, our cluster is composed of 3 nodes:
> docker-manager000
> docker-worker000
> docker-worker001
which means that pods will be scheduled across all of the above hosts. Each host should therefore be able to resolve service names to IP addresses. The CoreDNS pods provide this name resolution and need to be deployed on all hosts.
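Note that even when CoreDNS is used, the cluster DNS Service keeps the historical name kube-dns. You can display it as follows; its ClusterIP is the DNS server IP that every pod receives (the columns below are indicative for our cluster):

# The DNS Service is still named kube-dns, even with CoreDNS behind it
[docker@docker-manager000 ~]$ kubectl get svc kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   172.21.0.10   <none>        53/UDP,53/TCP,9153/TCP   55d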
Let’s check the pods deployed in the kube-system namespace:
[docker@docker-manager000 ~]$ kubectl get pods -o wide -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE   IP              NODE                NOMINATED NODE   READINESS GATES
calico-kube-controllers-65b8787765-894gs     1/1     Running   16         55d   172.20.123.30   docker-manager000   <none>           <none>
calico-node-5zhsp                            1/1     Running   6          46d   10.36.0.12      docker-worker001    <none>           <none>
calico-node-gq5s9                            1/1     Running   8          46d   10.36.0.11      docker-worker000    <none>           <none>
calico-node-pjrfm                            1/1     Running   16         55d   10.36.0.10      docker-manager000   <none>           <none>
coredns-686f555694-mdsvd                     1/1     Running   6          35d   172.20.123.26   docker-manager000   <none>           <none>
coredns-686f555694-w25wn                     1/1     Running   6          35d   172.20.123.28   docker-manager000   <none>           <none>
etcd-docker-manager000                       1/1     Running   16         55d   10.36.0.10      docker-manager000   <none>           <none>
kube-apiserver-docker-manager000             1/1     Running   0          13d   10.36.0.10      docker-manager000   <none>           <none>
kube-controller-manager-docker-manager000    1/1     Running   46         55d   10.36.0.10      docker-manager000   <none>           <none>
kube-proxy-gwkdh                             1/1     Running   7          46d   10.36.0.11      docker-worker000    <none>           <none>
kube-proxy-lr5cf                             1/1     Running   6          46d   10.36.0.12      docker-worker001    <none>           <none>
kube-proxy-mn7mt                             1/1     Running   16         55d   10.36.0.10      docker-manager000   <none>           <none>
kube-scheduler-docker-manager000             1/1     Running   45         55d   10.36.0.10      docker-manager000   <none>           <none>
In more detail, let’s verify the deployment of the CoreDNS pods:
[docker@docker-manager000 ~]$ kubectl get pods -o wide -n kube-system | grep coredns
coredns-686f555694-mdsvd   1/1   Running   6   35d   172.20.123.26   docker-manager000   <none>   <none>
coredns-686f555694-w25wn   1/1   Running   6   35d   172.20.123.28   docker-manager000   <none>   <none>
Only 2 CoreDNS pods have been deployed, and both run on the same host: docker-manager000, our master node. Service name resolution will therefore not work for all pods in the cluster. Let’s verify this supposition…
DNS Resolution test
Create a simple Pod to use for DNS testing:
[docker@docker-manager000 ~]$ cat > test-DNS.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF
[docker@docker-manager000 ~]$ kubectl apply -f test-DNS.yaml
Verify the status of the Pod previously deployed:
[docker@docker-manager000 ~]$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE               NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          13s   172.20.145.19   docker-worker000   <none>           <none>
The pod is deployed on one of the worker nodes; in our case, docker-worker000.
Once the pod is running, we can execute an nslookup command to verify whether DNS resolution is working:
[docker@docker-manager000 ~]$ kubectl exec -it busybox -- nslookup kubernetes.default
Server:    172.21.0.10
Address 1: 172.21.0.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'kubernetes.default'
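When a lookup fails like this, a useful first check is the pod’s /etc/resolv.conf, which should point to the cluster DNS Service IP (the search domains shown are the Kubernetes defaults for the default namespace):

# The pod is correctly configured: it queries the kube-dns Service IP
[docker@docker-manager000 ~]$ kubectl exec -it busybox -- cat /etc/resolv.conf
nameserver 172.21.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

The client side is fine, so the problem lies on the server side, which is consistent with all CoreDNS pods running on the master only.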
As suspected, DNS is not working properly. The next step is to deploy CoreDNS pods on all cluster nodes: docker-worker000 and docker-worker001 in our example.
CoreDNS deployment update
The first step is to update the CoreDNS deployment in order to increase the number of replicas, as follows:
[docker@docker-manager000 ~]$ kubectl edit deployment coredns -n kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "13"
  creationTimestamp: "2019-08-28T07:36:28Z"
  generation: 14
  labels:
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
  resourceVersion: "6455829"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/coredns
  uid: 3ebfd10f-c58b-43f4-84f1-a9f56dbdffdc
spec:
  progressDeadlineSeconds: 600
  replicas: 3
...
We updated the number of replicas from 2 to 3. Save the changes and wait a few seconds for the new CoreDNS pod to be deployed within the kube-system namespace.
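As a side note, the same change can be applied without opening an editor by using kubectl scale (the exact resource prefix in the output may vary with the API group):

# Equivalent one-liner to bump the replica count
[docker@docker-manager000 ~]$ kubectl scale deployment coredns -n kube-system --replicas=3
deployment.extensions/coredns scaled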
[docker@docker-manager000 ~]$ kubectl get pods -o wide -n kube-system | grep coredns
coredns-686f555694-k4678   1/1   Running   10   36d   172.20.27.186   docker-worker001    <none>   <none>
coredns-686f555694-mdsvd   1/1   Running   6    36d   172.20.123.26   docker-manager000   <none>   <none>
coredns-686f555694-w25wn   1/1   Running   6    36d   172.20.123.28   docker-manager000   <none>   <none>
At this step, something interesting happens: because 2 CoreDNS pods already existed, the Kubernetes scheduler considers that only 1 additional pod is needed. This single new pod happened to land on docker-worker001, but its placement is random: nothing guarantees that the CoreDNS pods are spread across all nodes.
A workaround is to force update the CoreDNS deployment as follows:
[docker@docker-manager000 ~]$ wget https://raw.githubusercontent.com/zlabjp/kubernetes-scripts/master/force-update-deployment
[docker@docker-manager000 ~]$ chmod +x force-update-deployment
[docker@docker-manager000 ~]$ ./force-update-deployment coredns -n kube-system
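If you prefer not to depend on an external script, kubectl 1.15 (the version used in this cluster) introduced a built-in command that triggers the same kind of rolling redeployment; the same caveat about pod placement applies:

# Built-in alternative to the force-update script (kubectl >= 1.15)
[docker@docker-manager000 ~]$ kubectl rollout restart deployment coredns -n kube-system
deployment.extensions/coredns restarted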
Now check the status of the CoreDNS pods:
[docker@docker-manager000 ~]$ kubectl get pods -o wide -n kube-system | grep coredns
coredns-7dc96b7db7-7ndwr   1/1   Running   0   35s   172.20.145.36   docker-worker000    <none>   <none>
coredns-7dc96b7db7-v7wjg   1/1   Running   0   28s   172.20.123.27   docker-manager000   <none>   <none>
coredns-7dc96b7db7-v9qcq   1/1   Running   0   35s   172.20.27.181   docker-worker001    <none>   <none>
The script should redeploy one CoreDNS pod on each host. It may happen, however, that the pods are still not spread across all hosts (sometimes 2 pods land on the same host). In that case, execute the script again until 1 CoreDNS pod is running on each cluster node.
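A more permanent alternative to re-running the script is to tell the scheduler never to co-locate two CoreDNS pods, using pod anti-affinity. This is not part of the workaround above, only a sketch of what could be added to the CoreDNS deployment (k8s-app: kube-dns is the label carried by the CoreDNS pods):

# Sketch: add under spec.template.spec of the coredns deployment
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          k8s-app: kube-dns
      topologyKey: kubernetes.io/hostname

With this rule in place, the scheduler refuses to put two CoreDNS pods on the same node, so the 3 replicas necessarily land on 3 different nodes.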
The DNS resolution should now work properly within the entire cluster. Let’s verify it by replaying the DNS resolution test.
Remove and redeploy the busybox test pod as follows:
[docker@docker-manager000 ~]$ kubectl delete -f test-DNS.yaml
pod "busybox" deleted
[docker@docker-manager000 ~]$ kubectl apply -f test-DNS.yaml
pod/busybox created

# Check pod status
[docker@docker-manager000 ~]$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE               NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          13s   172.20.145.19   docker-worker000   <none>           <none>
Once the pod is running, we can execute an nslookup command to confirm that DNS is working properly:
[docker@docker-manager000 ~]$ kubectl exec -it busybox -- nslookup kubernetes.default
Server:    172.21.0.10
Address 1: 172.21.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 172.21.0.1 kubernetes.default.svc.cluster.local
Now our internal cluster DNS is working well 🙂 !!