{"id":489,"date":"2020-11-15T18:33:56","date_gmt":"2020-11-15T18:33:56","guid":{"rendered":"https:\/\/blog.thomarite.uk\/?p=489"},"modified":"2020-11-15T18:42:04","modified_gmt":"2020-11-15T18:42:04","slug":"install-kubeadm-vagrant-libvirt","status":"publish","type":"post","link":"https:\/\/blog.thomarite.uk\/index.php\/2020\/11\/15\/install-kubeadm-vagrant-libvirt\/","title":{"rendered":"install-kubeadm-vagrant-libvirt"},"content":{"rendered":"\n<p>While studying for CKA, I installed kubeadm using vagrant\/virtualbox. Now I want to try the same, but using libvirt instead.<\/p>\n\n\n\n<p>1- Install 3 VMs (1 master and 2 worker nodes). I have vagrant and libvirtd installed already. Take this Vagrantfile as <a href=\"https:\/\/github.com\/kodekloudhub\/certified-kubernetes-administrator-course\/blob\/master\/Vagrantfile\">source<\/a>.<\/p>\n\n\n\n<p>2- I had to make two changes to that file:<\/p>\n\n\n\n<p>2.1- I want to use libvirtd, so I need to change the Ubuntu vm.box to one that <a href=\"https:\/\/app.vagrantup.com\/generic\/boxes\/ubuntu1804\">supports it<\/a>.<\/p>\n\n\n\n<p>#config.vm.box = \"ubuntu\/bionic64\"<br>config.vm.box = \"generic\/ubuntu1804\"<\/p>\n\n\n\n<p>2.2- Then I need to change the network interface name:<\/p>\n\n\n\n<p>enp0s8 -&gt; eth1<\/p>\n\n\n\n<p>3- Create the VMs with vagrant.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ ls -ltr\n-rw-r--r-- 1 tomas tomas 3612 Nov 15 16:36 Vagrantfile\n\n$ vagrant status\nCurrent machine states:\nkubemaster not created (libvirt)\nkubenode01 not created (libvirt)\nkubenode02 not created (libvirt)\n\n$ vagrant up\n...\nAn unexpected error occurred when executing the action on the\n'kubenode01' machine. Please report this as a bug:\ncannot load such file -- erubis\n...<\/pre>\n\n\n\n<p>3.1- OK, we have to troubleshoot vagrant on my laptop. I googled a bit and couldn&#8217;t find anything related. Then I remembered that vagrant supports plugins, as I once had to update the vagrant-libvirt plugin. 
So this is roughly what I did:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ vagrant version\nInstalled Version: 2.2.13\nLatest Version: 2.2.13\n\n$ vagrant plugin list\nvagrant-libvirt (0.1.2, global)\nVersion Constraint: &gt; 0\n\n$ vagrant plugin update\nUpdating installed plugins\u2026\nFetching fog-core-2.2.3.gem\nFetching nokogiri-1.10.10.gem\nBuilding native extensions. This could take a while\u2026\nBuilding native extensions. This could take a while\u2026\nFetching vagrant-libvirt-0.2.1.gem\nSuccessfully uninstalled excon-0.75.0\nSuccessfully uninstalled fog-core-2.2.0\nRemoving nokogiri\nSuccessfully uninstalled nokogiri-1.10.9\nSuccessfully uninstalled vagrant-libvirt-0.1.2\nUpdated 'vagrant-libvirt' to version '0.2.1'!\n\n$ vagrant plugin install erubis\n\n$ vagrant plugin update\nUpdating installed plugins\u2026\nBuilding native extensions. This could take a while\u2026\nBuilding native extensions. This could take a while\u2026\nUpdated 'vagrant-libvirt' to version '0.2.1'!\n\n$ vagrant plugin list\nerubis (2.7.0, global)\nVersion Constraint: &gt; 0\nvagrant-libvirt (0.2.1, global)\nVersion Constraint: &gt; 0<\/pre>\n\n\n\n<p>3.2- Now vagrant starts fine:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ vagrant up\n....\n\n$ vagrant status\nCurrent machine states:\nkubemaster running (libvirt)\nkubenode01 running (libvirt)\nkubenode02 running (libvirt)<\/pre>\n\n\n\n<p>4- Install kubeadm. I follow the official <a href=\"https:\/\/kubernetes.io\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/\">doc<\/a>. It seems we meet the prerequisites. My laptop has 8 GB of RAM and 4 CPUs. 
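<\/p>\n\n\n\n<p>As a quick sanity check (just a sketch; the minimums of 2 CPUs and 2 GB of RAM per node come from the kubeadm docs), you can verify the resources inside each VM:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"># kubeadm wants at least 2 CPUs and 2 GB of RAM per node\nnproc\ngrep MemTotal \/proc\/meminfo<\/pre>\n\n\n\n<p>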
Our VMs are Ubuntu 16.04+.<\/p>\n\n\n\n<p>4.1- Enable iptables to see bridged traffic in <strong>each<\/strong> VM:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ vagrant ssh kubemaster\n\nvagrant@kubemaster:~$ lsmod | grep br_net\nvagrant@kubemaster:~$\nvagrant@kubemaster:~$ sudo modprobe br_netfilter\nvagrant@kubemaster:~$ lsmod | grep br_net\nbr_netfilter 24576 0\nbridge 155648 1 br_netfilter\nvagrant@kubemaster:~$\nvagrant@kubemaster:~$ cat &lt;&lt;EOF | sudo tee \/etc\/sysctl.d\/k8s.conf\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nEOF\nnet.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nvagrant@kubemaster:~$ sudo sysctl --system\n...<\/pre>\n\n\n\n<p>5- Install the runtime (docker). Following the official <a href=\"https:\/\/kubernetes.io\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/\">doc<\/a>, we click on the <a href=\"https:\/\/kubernetes.io\/docs\/setup\/production-environment\/container-runtimes\/#docker\">link<\/a> at the end of &#8220;Installing runtime&#8221;. 
We do this on <strong>each<\/strong> node:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">vagrant@kubemaster:~$ sudo -i\nroot@kubemaster:~# sudo apt-get update &amp;&amp; sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common\n...\nroot@kubemaster:~# curl -fsSL https:\/\/download.docker.com\/linux\/ubuntu\/gpg | sudo apt-key --keyring \/etc\/apt\/trusted.gpg.d\/docker.gpg add -\nOK\nroot@kubemaster:~# sudo add-apt-repository \\\n<code>\"deb [arch=amd64] https:\/\/download.docker.com\/linux\/ubuntu \\ <\/code>\n<code>$(lsb_release -cs) \\<\/code>\n<code>stable\"<\/code>\n...\nroot@kubemaster:~# sudo apt-get update &amp;&amp; sudo apt-get install -y \\\ncontainerd.io=1.2.13-2 \\\ndocker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \\\ndocker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)\n....\nroot@kubemaster:~# cat &lt;&lt;EOF | sudo tee \/etc\/docker\/daemon.json\n{\n\"exec-opts\": [\"native.cgroupdriver=systemd\"],\n\"log-driver\": \"json-file\",\n\"log-opts\": {\n\"max-size\": \"100m\"\n},\n\"storage-driver\": \"overlay2\"\n}\nEOF\n{\n\"exec-opts\": [\"native.cgroupdriver=systemd\"],\n\"log-driver\": \"json-file\",\n\"log-opts\": {\n\"max-size\": \"100m\"\n},\n\"storage-driver\": \"overlay2\"\n}\nroot@kubemaster:~# sudo mkdir -p \/etc\/systemd\/system\/docker.service.d\nroot@kubemaster:~# sudo systemctl daemon-reload\nroot@kubemaster:~# sudo systemctl restart docker\nroot@kubemaster:~# sudo systemctl enable docker\nSynchronizing state of docker.service with SysV service script with \/lib\/systemd\/systemd-sysv-install.\nExecuting: \/lib\/systemd\/systemd-sysv-install enable docker\nroot@kubemaster:~#\nroot@kubemaster:~#<\/pre>\n\n\n\n<p>5.1- Now we follow &#8220;Installing kubeadm, kubelet and kubectl&#8221; from the main <a href=\"https:\/\/kubernetes.io\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/\">doc<\/a> on <strong>each<\/strong> VM.<\/p>\n\n\n\n<pre 
class=\"wp-block-preformatted\">root@kubemaster:~#\nroot@kubemaster:~# sudo apt-get update &amp;&amp; sudo apt-get install -y apt-transport-https curl\n...\nroot@kubemaster:~# curl -s https:\/\/packages.cloud.google.com\/apt\/doc\/apt-key.gpg | sudo apt-key add -\nOK\nroot@kubemaster:~# cat &lt;&lt;EOF | sudo tee \/etc\/apt\/sources.list.d\/kubernetes.list\ndeb https:\/\/apt.kubernetes.io\/ kubernetes-xenial main\nEOF\ndeb https:\/\/apt.kubernetes.io\/ kubernetes-xenial main\nroot@kubemaster:~# sudo apt-get update\n...\nroot@kubemaster:~# sudo apt-get install -y kubelet kubeadm kubectl\n...\nroot@kubemaster:~# ip -4 a<\/pre>\n\n\n\n<p>We don&#8217;t have to do anything in the next section, &#8220;Configure cgroup driver&#8230;&#8221;, as we are using docker. So from the bottom of the main <a href=\"https:\/\/kubernetes.io\/docs\/setup\/production-environment\/tools\/kubeadm\/install-kubeadm\/\">page<\/a>, we click on the next section, using kubeadm to create a <a href=\"https:\/\/kubernetes.io\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/\">cluster<\/a>.<\/p>\n\n\n\n<p>6- So we have our three VMs with kubeadm. Now we are going to create a cluster. The kubemaster VM will be the control-plane node. Following &#8220;Initializing your control-plane node&#8221;: we don&#8217;t need step 1 (as we have only one control-plane node); for step 2, we will install weave-net as the CNI in a later step, and it needs its own pod network, 10.244.0.0\/16; step 3 we don&#8217;t need; and for step 4 we will specify the master IP. 
So, <strong>only<\/strong> on kubemaster:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">root@kubemaster:~# kubeadm init --pod-network-cidr 10.244.0.0\/16 --apiserver-advertise-address=192.168.56.2\nW1115 17:13:31.213357 9958 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]\n[init] Using Kubernetes version: v1.19.4\n[preflight] Running pre-flight checks\nerror execution phase preflight: [preflight] Some fatal errors occurred:\n[ERROR Swap]: running with swap on is not supported. Please disable swap\n[preflight] If you know what you are doing, you can make a check non-fatal with <code>--ignore-preflight-errors=...<\/code>\nTo see the stack trace of this error execute with --v=5 or higher<\/pre>\n\n\n\n<p>Oh, a problem. It seems we need to disable swap on the VMs. Actually, we will do it on<strong> all VMs.<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">root@kubemaster:~# swapoff -a<\/pre>\n\n\n\n<p>Try kubeadm init again on the master:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">root@kubemaster:~# kubeadm init --pod-network-cidr 10.244.0.0\/16 --apiserver-advertise-address=192.168.56.2\nW1115 17:15:00.378279 10376 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]\n[init] Using Kubernetes version: v1.19.4\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"\/etc\/kubernetes\/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [kubemaster kubernetes kubernetes.default kubernetes.default.svc 
kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.2]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd\/ca\" certificate and key\n[certs] Generating \"etcd\/server\" certificate and key\n[certs] etcd\/server serving cert is signed for DNS names [kubemaster localhost] and IPs [192.168.56.2 127.0.0.1 ::1]\n[certs] Generating \"etcd\/peer\" certificate and key\n[certs] etcd\/peer serving cert is signed for DNS names [kubemaster localhost] and IPs [192.168.56.2 127.0.0.1 ::1]\n[certs] Generating \"etcd\/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"\/etc\/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[kubelet-start] Writing kubelet environment file with flags to file \"\/var\/lib\/kubelet\/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"\/var\/lib\/kubelet\/config.yaml\"\n[kubelet-start] Starting the kubelet\n[control-plane] Using manifest folder \"\/etc\/kubernetes\/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"\/etc\/kubernetes\/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"\/etc\/kubernetes\/manifests\". 
This can take up to 4m0s\n[apiclient] All control plane components are healthy after 25.543262 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.19\" in namespace kube-system with the configuration for the kubelets in the cluster\n[upload-certs] Skipping phase. Please see --upload-certs\n[mark-control-plane] Marking the node kubemaster as control-plane by adding the label \"node-role.kubernetes.io\/master=''\"\n[mark-control-plane] Marking the node kubemaster as control-plane by adding the taints [node-role.kubernetes.io\/master:NoSchedule]\n[bootstrap-token] Using token: aeseji.kovc0rjt6giakn1v\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\n[kubelet-finalize] Updating \"\/etc\/kubernetes\/kubelet.conf\" to point to a rotatable kubelet client certificate and key\n[addons] Applied essential addon: CoreDNS\n[addons] Applied essential addon: kube-proxy\n<strong>Your Kubernetes control-plane has initialized successfully!\nTo start using your cluster, you need to run the following as a regular user:<\/strong>\nmkdir -p $HOME\/.kube\nsudo cp -i \/etc\/kubernetes\/admin.conf $HOME\/.kube\/config\nsudo chown $(id -u):$(id -g) $HOME\/.kube\/config\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of 
the options listed at:\nhttps:\/\/kubernetes.io\/docs\/concepts\/cluster-administration\/addons\/\nThen you can join any number of worker nodes by running the following on each as root:\n<strong>kubeadm join 192.168.56.2:6443 --token aeseji.kovc0rjt6giakn1v \\\n<\/strong>--discovery-token-ca-cert-hash sha256:c1b91ec9cebe065665c314bfe9a7ce9c0ef970d56ae762dae5ce308caacbd8cd\nroot@kubemaster:~#<\/pre>\n\n\n\n<p>7- We need to follow the output of kubeadm init on kubemaster. Also pay <strong>attention<\/strong>, as the info for joining our worker nodes to the cluster (&#8220;kubeadm join &#8230;&#8221;) is in there too.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">root@kubemaster:~# exit\nlogout\nvagrant@kubemaster:~$ mkdir -p $HOME\/.kube\nvagrant@kubemaster:~$ sudo cp -i \/etc\/kubernetes\/admin.conf $HOME\/.kube\/config\nvagrant@kubemaster:~$ sudo chown $(id -u):$(id -g) $HOME\/.kube\/config<\/pre>\n\n\n\n<p>We can check the status of the control-plane node. It is NotReady because it still needs the network configuration.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">vagrant@kubemaster:~$ kubectl get nodes\nNAME STATUS ROLES AGE VERSION\nkubemaster <strong>NotReady<\/strong> master 2m9s v1.19.4<\/pre>\n\n\n\n<p>8- From the same <a href=\"https:\/\/kubernetes.io\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/\">page<\/a>, we now need to follow &#8220;Installing a Pod network add-on&#8221;. I don&#8217;t know why, but the documentation is not great about this. You need to dig through the older versions to find the steps to install weave-net. This is the <a href=\"https:\/\/v1-16.docs.kubernetes.io\/docs\/setup\/production-environment\/tools\/kubeadm\/create-cluster-kubeadm\/\">link<\/a>. 
So we install weave-net only on the kubemaster:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">vagrant@kubemaster:~$ kubectl apply -f \"https:\/\/cloud.weave.works\/k8s\/net?k8s-version=$(kubectl version | base64 | tr -d '\\n')\"\nserviceaccount\/weave-net created\nclusterrole.rbac.authorization.k8s.io\/weave-net created\nclusterrolebinding.rbac.authorization.k8s.io\/weave-net created\nrole.rbac.authorization.k8s.io\/weave-net created\nrolebinding.rbac.authorization.k8s.io\/weave-net created\ndaemonset.apps\/weave-net created\nvagrant@kubemaster:~$\nvagrant@kubemaster:~$ kubectl get nodes\nNAME STATUS ROLES AGE VERSION\nkubemaster <strong>Ready<\/strong> master 4m32s v1.19.4<\/pre>\n\n\n\n<p>9- We can continue to the section &#8220;Joining your nodes&#8221;. We need to run the &#8220;kubeadm join&#8230;&#8221; command from the output of &#8220;kubeadm init&#8221; on the master node, on <strong>only<\/strong> the worker nodes.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">root@kubenode02:~# kubeadm join 192.168.56.2:6443 --token aeseji.kovc0rjt6giakn1v --discovery-token-ca-cert-hash sha256:c1b91ec9cebe065665c314bfe9a7ce9c0ef970d56ae762dae5ce308caacbd8cd\n[preflight] Running pre-flight checks\n[preflight] Reading configuration from the cluster\u2026\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'\n[kubelet-start] Writing kubelet configuration to file \"\/var\/lib\/kubelet\/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"\/var\/lib\/kubelet\/kubeadm-flags.env\"\n[kubelet-start] Starting the kubelet\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap\u2026\nThis node has joined the cluster:\nCertificate signing request was sent to apiserver and a response was received.\nThe Kubelet was informed of the new secure connection details.\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.\nroot@kubenode02:~#<\/pre>\n\n\n\n<p>10- We 
need to wait a bit, but finally the worker nodes come up as Ready if we check on the master\/control-plane node:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">vagrant@kubemaster:~$ kubectl get nodes\nNAME STATUS ROLES AGE VERSION\nkubemaster Ready master 6m35s v1.19.4\nkubenode01 Ready 2m13s v1.19.4\nkubenode02 Ready 2m10s v1.19.4\nvagrant@kubemaster:~$<\/pre>\n\n\n\n<p>11- Let&#8217;s verify we have a working cluster by just creating a pod.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">vagrant@kubemaster:~$ kubectl run ngix --image=nginx\npod\/ngix created\n\nvagrant@kubemaster:~$ kubectl get pod\nNAME READY STATUS RESTARTS AGE\nngix 0\/1 ContainerCreating 0 5s\nvagrant@kubemaster:~$\nvagrant@kubemaster:~$ kubectl get pod\nNAME READY STATUS RESTARTS AGE\nngix 1\/1 Running 0 83s\nvagrant@kubemaster:~$\n\nvagrant@kubemaster:~$ kubectl delete pod ngix\npod \"ngix\" deleted\n\nvagrant@kubemaster:~$ kubectl get pod -n kube-system\nNAME READY STATUS RESTARTS AGE\ncoredns-f9fd979d6-b9b92 1\/1 Running 0 10m\ncoredns-f9fd979d6-t822r 1\/1 Running 0 10m\netcd-kubemaster 1\/1 Running 0 10m\nkube-apiserver-kubemaster 1\/1 Running 0 10m\nkube-controller-manager-kubemaster 1\/1 Running 2 10m\nkube-proxy-jpb9p 1\/1 Running 0 10m\nkube-proxy-lkpv9 1\/1 Running 0 6m13s\nkube-proxy-sqd9v 1\/1 Running 0 6m10s\nkube-scheduler-kubemaster 1\/1 Running 2 10m\nweave-net-8rl49 2\/2 Running 0 6m13s\nweave-net-fkqdv 2\/2 Running 0 6m10s\nweave-net-q79pb 2\/2 Running 0 7m48s\nvagrant@kubemaster:~$<\/pre>\n\n\n\n<p>So, we have a working kubernetes cluster built with kubeadm using vagrant\/libvirtd!<\/p>\n\n\n\n<p>As a note, while building the VMs and installing software on them, my laptop hung a couple of times, as the 3 VMs running at the same time take nearly all the RAM. But this is a good exercise to understand the requirements of kubeadm to build a cluster, and it is also a lab environment you can use while studying if the cloud environments are down or you don&#8217;t have internet. 
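<\/p>\n\n\n\n<p>One extra note on the swap issue from step 6: &#8220;swapoff -a&#8221; does not survive a reboot, so after restarting the VMs you would hit the same preflight error again. To make it permanent, you can also comment out the swap entry in \/etc\/fstab on each VM (this sed one-liner is just a sketch; review your fstab first):<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">sudo swapoff -a\n# keep swap disabled across reboots by commenting out the swap line\nsudo sed -i '\/ swap \/ s\/^\/#\/' \/etc\/fstab<\/pre>\n\n\n\n<p>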
Let&#8217;s see if I manage to pass the CKA one day!!!<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">3VMs running\n----\n# top\ntop - 17:24:10 up 9 days, 18:18, 1 user, load average: 5.22, 5.09, 4.79\nTasks: 390 total, 1 running, 388 sleeping, 0 stopped, 1 zombie\n%Cpu(s): 21.7 us, 19.5 sy, 0.0 ni, 56.5 id, 2.0 wa, 0.0 hi, 0.2 si, 0.0 st\nMiB Mem : 7867.7 total, 263.0 free, 6798.7 used, 806.0 buff\/cache\n<strong>MiB Swap<\/strong>: 6964.0 total, <strong>991.4 free, 5972.6 used<\/strong>. 409.6 avail Mem\n<code>PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND<\/code>\n329875 tomas 20 0 9268464 251068 83584 S 55.8 3.1 14:27.84 chrome\n187962 tomas 20 0 1302500 105228 46528 S 36.9 1.3 170:58.40 chrome\n331127 libvirt+ 20 0 4753296 1.3g 5972 S 35.5 17.5 7:13.00 qemu-system-x86\n330979 libvirt+ 20 0 4551524 954212 5560 S 7.3 11.8 4:08.33 qemu-system-x86\n5518 root 20 0 1884932 135616 8528 S 5.3 1.7 76:50.45 Xorg\n330803 libvirt+ 20 0 4550504 905428 5584 S 5.3 11.2 4:12.68 qemu-system-x86\n6070 tomas 9 -11 1180660 6844 4964 S 3.7 0.1 44:04.39 pulseaudio\n333253 tomas 20 0 4708156 51400 15084 S 3.3 0.6 1:23.72 chrome\n288344 tomas 20 0 2644572 56560 14968 S 1.7 0.7 9:03.78 Web Content\n6227 tomas 20 0 139916 8316 4932 S 1.3 0.1 19:59.68 gkrellm\n\n3VMs stopped\n----\nroot@athens:\/home\/tomas# top\ntop - 18:40:09 up 9 days, 19:34, 1 user, load average: 0.56, 1.09, 1.30\nTasks: 379 total, 2 running, 376 sleeping, 0 stopped, 1 zombie\n%Cpu(s): 4.5 us, 1.5 sy, 0.0 ni, 94.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\nMiB Mem : 7867.7 total, 3860.9 free, 3072.9 used, 933.9 buff\/cache\n<strong>MiB Swap<\/strong>: 6964.0 total, <strong>4877.1 free, 2086.9 used<\/strong>. 
4122.1 avail Mem\n<code>PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND<\/code>\n288344 tomas 20 0 2644572 97532 17100 S 6.2 1.2 11:05.35 Web Content\n404910 root 20 0 12352 5016 4040 R 6.2 0.1 0:00.01 top\n1 root 20 0 253060 7868 5512 S 0.0 0.1 0:47.82 systemd\n2 root 20 0 0 0 0 S 0.0 0.0 0:02.99 kthreadd\n3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp\n4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp\n6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker\/0:0H\n9 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq\n10 root 20 0 0 0 0 S 0.0 0.0 0:11.39 ksoftirqd\/0\n11 root 20 0 0 0 0 I 0.0 0.0 2:13.55 rcu_sched\nroot@athens:\/home\/tomas#<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>While studying for CKA, I installed kubeadm using vagrant\/virtualbox. Now I want to try the same, but using libvirt instead. 1- Install 3VM (1 master and 2 worker-nodes) I have installed vagrant and libvirtd already. Take this vagrant file as source. 2- I had to make two changes to that file 2.1- I want to &hellip; <a href=\"https:\/\/blog.thomarite.uk\/index.php\/2020\/11\/15\/install-kubeadm-vagrant-libvirt\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> 
&#8220;install-kubeadm-vagrant-libvirt&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[27],"tags":[],"class_list":["post-489","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/posts\/489","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/comments?post=489"}],"version-history":[{"count":3,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/posts\/489\/revisions"}],"predecessor-version":[{"id":493,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/posts\/489\/revisions\/493"}],"wp:attachment":[{"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/media?parent=489"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/categories?post=489"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/tags?post=489"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}