While studying for the CKA, I installed kubeadm using vagrant/virtualbox. Now I want to do the same, but using libvirt instead.
1- Install 3 VMs (1 master and 2 worker nodes). I have vagrant and libvirtd installed already. Take this Vagrantfile as the source.
2- I had to make two changes to that file
2.1- I want to use libvirtd, so I need to change the Ubuntu vm.box to one that supports that provider.
#config.vm.box = "ubuntu/bionic64"
config.vm.box = "generic/ubuntu1804"
2.2- Then I need to change the network interface name:
enp0s8 -> eth1
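I didn't note exactly where the interface name appears in that Vagrantfile, so this is just a sketch: assuming enp0s8 only shows up in files you actually want to change under the project directory, a grep plus sed does the swap in one go.

$ grep -rl enp0s8 .
$ grep -rl enp0s8 . | xargs sed -i 's/enp0s8/eth1/g'

The first command lists the files that mention the old name; the second rewrites them in place.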
3- Create the VMs with vagrant.
$ ls -ltr
-rw-r--r-- 1 tomas tomas 3612 Nov 15 16:36 Vagrantfile
$ vagrant status
Current machine states:

kubemaster                not created (libvirt)
kubenode01                not created (libvirt)
kubenode02                not created (libvirt)
$ vagrant up
...
An unexpected error occurred when executing the action on the 'kubenode01'
machine. Please report this as a bug:

cannot load such file -- erubis
...
3.1- OK, we have to troubleshoot vagrant on my laptop. I googled a bit and couldn't find anything related. I remembered that vagrant can install plugins, as I once had to update the vagrant-libvirt plugin. So this is roughly what I did.
$ vagrant version
Installed Version: 2.2.13
Latest Version: 2.2.13
$ vagrant plugin list
vagrant-libvirt (0.1.2, global)
  Version Constraint: > 0
$ vagrant plugin update
Updating installed plugins...
Fetching fog-core-2.2.3.gem
Fetching nokogiri-1.10.10.gem
Building native extensions. This could take a while...
Building native extensions. This could take a while...
Fetching vagrant-libvirt-0.2.1.gem
Successfully uninstalled excon-0.75.0
Successfully uninstalled fog-core-2.2.0
Removing nokogiri
Successfully uninstalled nokogiri-1.10.9
Successfully uninstalled vagrant-libvirt-0.1.2
Updated 'vagrant-libvirt' to version '0.2.1'!
$ vagrant plugin install erubis
$ vagrant plugin update
Updating installed plugins...
Building native extensions. This could take a while...
Building native extensions. This could take a while...
Updated 'vagrant-libvirt' to version '0.2.1'!
$ vagrant plugin list
erubis (2.7.0, global)
  Version Constraint: > 0
vagrant-libvirt (0.2.1, global)
  Version Constraint: > 0
3.2- Now vagrant starts fine:
$ vagrant up
....
$ vagrant status
Current machine states:

kubemaster                running (libvirt)
kubenode01                running (libvirt)
kubenode02                running (libvirt)
4- Install kubeadm. I follow the official doc. It seems we meet the prerequisites: my laptop has 8GB RAM and 4 CPUs, and our VMs run Ubuntu 16.04+.
4.1- Load br_netfilter and let iptables see bridged traffic in each VM:
$ vagrant ssh kubemaster
vagrant@kubemaster:~$ lsmod | grep br_net
vagrant@kubemaster:~$
vagrant@kubemaster:~$ sudo modprobe br_netfilter
vagrant@kubemaster:~$ lsmod | grep br_net
br_netfilter           24576  0
bridge                155648  1 br_netfilter
vagrant@kubemaster:~$
vagrant@kubemaster:~$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vagrant@kubemaster:~$ sudo sysctl --system
...
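One thing the transcript above does not cover: modprobe does not survive a reboot. To make the module load persistent (as later revisions of the official doc also do), something like this should work on each VM:

vagrant@kubemaster:~$ echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
vagrant@kubemaster:~$ sysctl net.bridge.bridge-nf-call-iptables

The second command just re-reads the key and should print 1 if the sysctl settings took effect.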
5- Install the container runtime (docker). Following the official doc, we click on the link at the end of "Installing runtime". We do this on each node:
vagrant@kubemaster:~$ sudo -i
root@kubemaster:~# sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
...
root@kubemaster:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
OK
root@kubemaster:~# sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
...
root@kubemaster:~# sudo apt-get update && sudo apt-get install -y \
  containerd.io=1.2.13-2 \
  docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
  docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
...
root@kubemaster:~# cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
root@kubemaster:~# sudo mkdir -p /etc/systemd/system/docker.service.d
root@kubemaster:~# sudo systemctl daemon-reload
root@kubemaster:~# sudo systemctl restart docker
root@kubemaster:~# sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
root@kubemaster:~#
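A quick sanity check of my own (not in the doc): docker info should now report the systemd cgroup driver and the overlay2 storage driver we configured in daemon.json.

root@kubemaster:~# docker info | grep -Ei 'cgroup driver|storage driver'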
6- Now we follow "Installing kubeadm, kubelet and kubectl" from the main doc in each VM.
root@kubemaster:~# sudo apt-get update && sudo apt-get install -y apt-transport-https curl
...
root@kubemaster:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
root@kubemaster:~# cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
root@kubemaster:~# sudo apt-get update
...
root@kubemaster:~# sudo apt-get install -y kubelet kubeadm kubectl
...
root@kubemaster:~# ip -4 a
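The official doc also pins these packages so a routine apt upgrade cannot bump them out from under the cluster. I skipped it in the transcript above, but it is just:

root@kubemaster:~# sudo apt-mark hold kubelet kubeadm kubectl
root@kubemaster:~# kubeadm version -o short

The second command confirms which version we got (v1.19.x at the time of writing).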
We don't have to do anything in the next section, "Configure cgroup driver...", as we are using docker. So from the bottom of the main page, we click through to the next section on using kubeadm to create a cluster.
7- So we have our three VMs with kubeadm installed. Now we are going to create a cluster. The kubemaster VM will be the control-plane node. Following "Initializing your control-plane node": we don't need consideration 1 (we have only one control-plane node); for 2, we will install weave-net as the CNI in the next step, so we need to reserve a pod network for it: 10.244.0.0/16; we don't need 3; and for 4 we will specify the master IP explicitly. So, only on kubemaster:
root@kubemaster:~# kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address=192.168.56.2
W1115 17:13:31.213357 9958 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
Oh, a problem. It seems we need to disable swap on the VMs. Actually, we will do it on all VMs.
root@kubemaster:~# swapoff -a
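swapoff -a only lasts until the next reboot. To keep swap off permanently we should also comment out the swap entry in /etc/fstab on every VM; a blunt one-liner like this should do it (it assumes any fstab line mentioning swap is one we want disabled, and keeps a backup of the file):

root@kubemaster:~# sed -i.bak '/swap/ s/^/#/' /etc/fstab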
Try kubeadm init again on the master:
root@kubemaster:~# kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address=192.168.56.2
W1115 17:15:00.378279   10376 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster localhost] and IPs [192.168.56.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster localhost] and IPs [192.168.56.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 25.543262 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubemaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubemaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: aeseji.kovc0rjt6giakn1v
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.2:6443 --token aeseji.kovc0rjt6giakn1v \
    --discovery-token-ca-cert-hash sha256:c1b91ec9cebe065665c314bfe9a7ce9c0ef970d56ae762dae5ce308caacbd8cd
root@kubemaster:~#
8- We need to follow the output of kubeadm init on kubemaster. Also pay attention to the info for joining our worker nodes to the cluster, which is in there too ("kubeadm join ...").
root@kubemaster:~# exit
logout
vagrant@kubemaster:~$ mkdir -p $HOME/.kube
vagrant@kubemaster:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
vagrant@kubemaster:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
We can check the status of the control-plane node. It is NotReady because it still needs the network configuration.
vagrant@kubemaster:~$ kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
kubemaster   NotReady   master   2m9s   v1.19.4
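If you are curious about the reason, the kubelet reports it on the node itself: before a CNI is installed, the Ready condition typically says the network plugin is not initialized, and the coredns pods stay in Pending. Two quick ways to see it:

vagrant@kubemaster:~$ kubectl describe node kubemaster | grep -i ready
vagrant@kubemaster:~$ kubectl get pods -n kube-system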
9- From the same page, we now need to follow "Installing a Pod network add-on". I don't know why, but the documentation is not great here: you need to dig through the older versions of the docs to find the steps to install weave-net. This is the link. So we install weave-net only on the kubemaster:
vagrant@kubemaster:~$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
vagrant@kubemaster:~$
vagrant@kubemaster:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
kubemaster   Ready    master   4m32s   v1.19.4
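If the node does not flip to Ready straight away, it is usually just the weave-net daemonset pods still pulling their images. As far as I can tell the upstream manifest labels them with name=weave-net, so you can watch them with:

vagrant@kubemaster:~$ kubectl get pods -n kube-system -l name=weave-net -o wide -w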
10- We can move on to the section "Joining your nodes". We need to run the "kubeadm join ..." command from the output of "kubeadm init" on the master node, but only on the worker nodes.
root@kubenode02:~# kubeadm join 192.168.56.2:6443 --token aeseji.kovc0rjt6giakn1v --discovery-token-ca-cert-hash sha256:c1b91ec9cebe065665c314bfe9a7ce9c0ef970d56ae762dae5ce308caacbd8cd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
Certificate signing request was sent to apiserver and a response was received.
The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@kubenode02:~#
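A side note: the bootstrap token from kubeadm init expires after 24 hours by default. If you lose it, or add a node later, you can generate a fresh join command on the master instead of re-running kubeadm init:

root@kubemaster:~# kubeadm token create --print-join-command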
11- We need to wait a bit, but eventually the worker nodes come up as Ready if we check from the master/control-plane node:
vagrant@kubemaster:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
kubemaster   Ready    master   6m35s   v1.19.4
kubenode01   Ready    <none>   2m13s   v1.19.4
kubenode02   Ready    <none>   2m10s   v1.19.4
vagrant@kubemaster:~$
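The <none> under ROLES for the workers is normal; kubeadm only labels the control-plane node. It is purely cosmetic, but you can label them yourself if you want the column filled in:

vagrant@kubemaster:~$ kubectl label node kubenode01 node-role.kubernetes.io/worker=
vagrant@kubemaster:~$ kubectl label node kubenode02 node-role.kubernetes.io/worker=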
12- Let's verify we have a working cluster by just creating a pod.
vagrant@kubemaster:~$ kubectl run ngix --image=nginx
pod/ngix created
vagrant@kubemaster:~$ kubectl get pod
NAME   READY   STATUS              RESTARTS   AGE
ngix   0/1     ContainerCreating   0          5s
vagrant@kubemaster:~$
vagrant@kubemaster:~$ kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
ngix   1/1     Running   0          83s
vagrant@kubemaster:~$
vagrant@kubemaster:~$ kubectl delete pod ngix
pod "ngix" deleted
vagrant@kubemaster:~$ kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-b9b92              1/1     Running   0          10m
coredns-f9fd979d6-t822r              1/1     Running   0          10m
etcd-kubemaster                      1/1     Running   0          10m
kube-apiserver-kubemaster            1/1     Running   0          10m
kube-controller-manager-kubemaster   1/1     Running   2          10m
kube-proxy-jpb9p                     1/1     Running   0          10m
kube-proxy-lkpv9                     1/1     Running   0          6m13s
kube-proxy-sqd9v                     1/1     Running   0          6m10s
kube-scheduler-kubemaster            1/1     Running   2          10m
weave-net-8rl49                      2/2     Running   0          6m13s
weave-net-fkqdv                      2/2     Running   0          6m10s
weave-net-q79pb                      2/2     Running   0          7m48s
vagrant@kubemaster:~$
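To push the check a bit further than a single pod, we can also verify that services and cluster DNS work across the nodes. A rough sketch (the deployment and service names here are just mine):

vagrant@kubemaster:~$ kubectl create deployment web --image=nginx --replicas=2
vagrant@kubemaster:~$ kubectl expose deployment web --port=80
vagrant@kubemaster:~$ kubectl run tmp --image=busybox --rm -it --restart=Never -- wget -qO- http://web
vagrant@kubemaster:~$ kubectl delete service/web deployment/web

The wget should return the nginx welcome page, which means scheduling, the weave overlay and CoreDNS are all doing their job.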
So, we have a working Kubernetes cluster built with kubeadm using vagrant/libvirt!
As a note, while building the VMs and installing software on them, my laptop hung a couple of times, as the 3 VMs running at the same time take nearly all the RAM. But this is a good exercise to understand the requirements of kubeadm to build a cluster, and it also gives you a lab environment you can use while studying if the cloud environments are down or you don't have internet. Let's see if I manage to pass the CKA one day!!!
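Since the three VMs eat most of that RAM, when I am not using the lab I simply stop (or suspend) them and bring them back later; the disks are kept, so the cluster survives a halt. The difference shows clearly in the top output below.

$ vagrant halt
$ vagrant up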
3 VMs running
----
# top
top - 17:24:10 up 9 days, 18:18,  1 user,  load average: 5.22, 5.09, 4.79
Tasks: 390 total,   1 running, 388 sleeping,   0 stopped,   1 zombie
%Cpu(s): 21.7 us, 19.5 sy,  0.0 ni, 56.5 id,  2.0 wa,  0.0 hi,  0.2 si,  0.0 st
MiB Mem :   7867.7 total,    263.0 free,   6798.7 used,    806.0 buff/cache
MiB Swap:   6964.0 total,    991.4 free,   5972.6 used.    409.6 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 329875 tomas     20   0 9268464 251068  83584 S  55.8   3.1  14:27.84 chrome
 187962 tomas     20   0 1302500 105228  46528 S  36.9   1.3 170:58.40 chrome
 331127 libvirt+  20   0 4753296   1.3g   5972 S  35.5  17.5   7:13.00 qemu-system-x86
 330979 libvirt+  20   0 4551524 954212   5560 S   7.3  11.8   4:08.33 qemu-system-x86
   5518 root      20   0 1884932 135616   8528 S   5.3   1.7  76:50.45 Xorg
 330803 libvirt+  20   0 4550504 905428   5584 S   5.3  11.2   4:12.68 qemu-system-x86
   6070 tomas      9 -11 1180660   6844   4964 S   3.7   0.1  44:04.39 pulseaudio
 333253 tomas     20   0 4708156  51400  15084 S   3.3   0.6   1:23.72 chrome
 288344 tomas     20   0 2644572  56560  14968 S   1.7   0.7   9:03.78 Web Content
   6227 tomas     20   0  139916   8316   4932 S   1.3   0.1  19:59.68 gkrellm

3 VMs stopped
----
root@athens:/home/tomas# top
top - 18:40:09 up 9 days, 19:34,  1 user,  load average: 0.56, 1.09, 1.30
Tasks: 379 total,   2 running, 376 sleeping,   0 stopped,   1 zombie
%Cpu(s):  4.5 us,  1.5 sy,  0.0 ni, 94.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   7867.7 total,   3860.9 free,   3072.9 used,    933.9 buff/cache
MiB Swap:   6964.0 total,   4877.1 free,   2086.9 used.   4122.1 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 288344 tomas     20   0 2644572  97532  17100 S   6.2   1.2  11:05.35 Web Content
 404910 root      20   0   12352   5016   4040 R   6.2   0.1   0:00.01 top
      1 root      20   0  253060   7868   5512 S   0.0   0.1   0:47.82 systemd
      2 root      20   0       0      0      0 S   0.0   0.0   0:02.99 kthreadd
      3 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_gp
      4 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 rcu_par_gp
      6 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 kworker/0:0H
      9 root       0 -20       0      0      0 I   0.0   0.0   0:00.00 mm_percpu_wq
     10 root      20   0       0      0      0 S   0.0   0.0   0:11.39 ksoftirqd/0
     11 root      20   0       0      0      0 I   0.0   0.0   2:13.55 rcu_sched
root@athens:/home/tomas#