install-kubeadm-vagrant-libvirt

While studying for the CKA, I installed kubeadm using vagrant/virtualbox. Now I want to try the same, but using libvirt instead.

1- Install 3 VMs (1 master and 2 worker nodes). I have vagrant and libvirtd installed already. Take this Vagrantfile as the source.

2- I had to make two changes to that file

2.1- I want to use libvirtd, so I need to change the Ubuntu vm.box to one that supports it.

#config.vm.box = "ubuntu/bionic64"
config.vm.box = "generic/ubuntu1804"

2.2- Then I need to change the network interface:

enp0s8 -> eth1
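If you prefer to script both changes, something like this should do it (a sketch, assuming the interface name only appears in the Vagrantfile and its helper scripts, as in the CKA course repo):

$ sed -i 's#ubuntu/bionic64#generic/ubuntu1804#' Vagrantfile
$ grep -rl enp0s8 . | xargs sed -i 's/enp0s8/eth1/g'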

3- Create the VMs with vagrant.

$ ls -ltr
-rw-r--r-- 1 tomas tomas 3612 Nov 15 16:36 Vagrantfile

$ vagrant status
Current machine states:
kubemaster not created (libvirt)
kubenode01 not created (libvirt)
kubenode02 not created (libvirt)

$ vagrant up
...
An unexpected error occurred when executing the action on the
'kubenode01' machine. Please report this as a bug:
cannot load such file -- erubis
...

3.1- OK, we have to troubleshoot vagrant on my laptop. I googled a bit and couldn't find anything related. Then I remembered that you can install plugins with vagrant, as once I had to update the vagrant-libvirt plugin. So this is kind of what I did.

$ vagrant version
Installed Version: 2.2.13
Latest Version: 2.2.13

$ vagrant plugin list
vagrant-libvirt (0.1.2, global)
Version Constraint: > 0

$ vagrant plugin update
Updating installed plugins…
Fetching fog-core-2.2.3.gem
Fetching nokogiri-1.10.10.gem
Building native extensions. This could take a while…
Building native extensions. This could take a while…
Fetching vagrant-libvirt-0.2.1.gem
Successfully uninstalled excon-0.75.0
Successfully uninstalled fog-core-2.2.0
Removing nokogiri
Successfully uninstalled nokogiri-1.10.9
Successfully uninstalled vagrant-libvirt-0.1.2
Updated 'vagrant-libvirt' to version '0.2.1'!

$ vagrant plugin install erubis

$ vagrant plugin update
Updating installed plugins…
Building native extensions. This could take a while…
Building native extensions. This could take a while…
Updated 'vagrant-libvirt' to version '0.2.1'!

$ vagrant plugin list
erubis (2.7.0, global)
Version Constraint: > 0
vagrant-libvirt (0.2.1, global)
Version Constraint: > 0

3.2- Now I can start vagrant fine:

$ vagrant up
....

$ vagrant status
Current machine states:
kubemaster running (libvirt)
kubenode01 running (libvirt)
kubenode02 running (libvirt)

4- Install kubeadm. I follow the official doc. It seems we meet the prerequisites: my laptop has 8GB RAM and 4 CPUs, and our VMs are Ubuntu 16.04+.
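The doc also asks to verify that the MAC address and product_uuid are unique on every node; a quick check on each VM:

vagrant@kubemaster:~$ ip link
vagrant@kubemaster:~$ sudo cat /sys/class/dmi/id/product_uuid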

4.1- Let iptables see bridged traffic in each VM:

$ vagrant ssh kubemaster

vagrant@kubemaster:~$ lsmod | grep br_net
vagrant@kubemaster:~$
vagrant@kubemaster:~$ sudo modprobe br_netfilter
vagrant@kubemaster:~$ lsmod | grep br_net
br_netfilter 24576 0
bridge 155648 1 br_netfilter
vagrant@kubemaster:~$
vagrant@kubemaster:~$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vagrant@kubemaster:~$ sudo sysctl --system
...
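Note that the modprobe is not persistent across reboots. One way to persist the module (my addition, not in the transcript above):

vagrant@kubemaster:~$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF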

5- Install the runtime (docker). Following the official doc, we click on the link at the end of "Installing runtime". We do this on each node:

vagrant@kubemaster:~$ sudo -i
root@kubemaster:~# sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
...
root@kubemaster:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
OK
root@kubemaster:~# sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \ 
$(lsb_release -cs) \
stable"
...
root@kubemaster:~# sudo apt-get update && sudo apt-get install -y \
containerd.io=1.2.13-2 \
docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
....
root@kubemaster:~# cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
root@kubemaster:~# sudo mkdir -p /etc/systemd/system/docker.service.d
root@kubemaster:~# sudo systemctl daemon-reload
root@kubemaster:~# sudo systemctl restart docker
root@kubemaster:~# sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
root@kubemaster:~#
root@kubemaster:~#

6- Now we follow "Installing kubeadm, kubelet and kubectl" from the main doc on each VM.

root@kubemaster:~#
root@kubemaster:~# sudo apt-get update && sudo apt-get install -y apt-transport-https curl
...
root@kubemaster:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
root@kubemaster:~# cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
deb https://apt.kubernetes.io/ kubernetes-xenial main
root@kubemaster:~# sudo apt-get update
...
root@kubemaster:~# sudo apt-get install -y kubelet kubeadm kubectl
...
root@kubemaster:~# ip -4 a
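The official doc also pins the three packages so an unattended upgrade does not break the cluster later:

root@kubemaster:~# sudo apt-mark hold kubelet kubeadm kubectl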

We don't have to do anything in the next section, "Configure cgroup driver...", as we are using docker. So from the bottom of the main page, we click through to the next section about using kubeadm to create a cluster.

7- So we have our three VMs with kubeadm. Now we are going to create a cluster. The kubemaster VM will be the control-plane node. So following "Initializing your control-plane node": we don't need 1) as we have only one control-plane node; for 2) we will install weave-net as the CNI in the next step, and it needs a dedicated pod network: 10.244.0.0/16; we don't need 3); and for 4) we will specify the master IP. So, only on kubemaster:

root@kubemaster:~# kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address=192.168.56.2
W1115 17:13:31.213357 9958 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

Oh, a problem. It seems we need to disable swap on the VMs. Actually, we will do it on all VMs.

root@kubemaster:~# swapoff -a
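swapoff -a only lasts until the next reboot; to make it permanent, comment out the swap entry in /etc/fstab as well (a quick sketch; check the file first):

root@kubemaster:~# sed -i '/ swap / s/^/#/' /etc/fstab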

Try kubeadm init again on the master:

root@kubemaster:~# kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address=192.168.56.2
W1115 17:15:00.378279 10376 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubemaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubemaster localhost] and IPs [192.168.56.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubemaster localhost] and IPs [192.168.56.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 25.543262 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubemaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubemaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: aeseji.kovc0rjt6giakn1v
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.56.2:6443 --token aeseji.kovc0rjt6giakn1v \
--discovery-token-ca-cert-hash sha256:c1b91ec9cebe065665c314bfe9a7ce9c0ef970d56ae762dae5ce308caacbd8cd
root@kubemaster:~#

8- We need to follow the output of kubeadm init on kubemaster. As well, pay attention to the info in there for joining our worker nodes to the cluster ("kubeadm join …").

root@kubemaster:~# exit
logout
vagrant@kubemaster:~$ mkdir -p $HOME/.kube
vagrant@kubemaster:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
vagrant@kubemaster:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

We can check the status of the control-plane node. It is NotReady because it needs the network configuration.

vagrant@kubemaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster NotReady master 2m9s v1.19.4

9- From the same page, we now need to follow "Installing a Pod network add-on". I don't know why, but the documentation is not great about it. You need to dig through all the versions to find the steps to install weave-net. This is the link. So we install weave-net only on the kubemaster:

vagrant@kubemaster:~$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
vagrant@kubemaster:~$
vagrant@kubemaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 4m32s v1.19.4

10- We can continue to the section "Joining your nodes". We need to run the "kubeadm join …" command, from the output of "kubeadm init" on the master node, on the worker nodes only.

root@kubenode02:~# kubeadm join 192.168.56.2:6443 --token aeseji.kovc0rjt6giakn1v --discovery-token-ca-cert-hash sha256:c1b91ec9cebe065665c314bfe9a7ce9c0ef970d56ae762dae5ce308caacbd8cd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…
This node has joined the cluster:
Certificate signing request was sent to apiserver and a response was received.
The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@kubenode02:~#

11- We need to wait a bit, but finally the worker nodes come up as Ready if we check on the master/control-plane node:

vagrant@kubemaster:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubemaster Ready master 6m35s v1.19.4
kubenode01 Ready <none> 2m13s v1.19.4
kubenode02 Ready <none> 2m10s v1.19.4
vagrant@kubemaster:~$

12- Let's verify we have a working cluster by just creating a pod.

vagrant@kubemaster:~$ kubectl run ngix --image=nginx
pod/ngix created

vagrant@kubemaster:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
ngix 0/1 ContainerCreating 0 5s
vagrant@kubemaster:~$
vagrant@kubemaster:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
ngix 1/1 Running 0 83s
vagrant@kubemaster:~$

vagrant@kubemaster:~$ kubectl delete pod ngix
pod "ngix" deleted

vagrant@kubemaster:~$ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-b9b92 1/1 Running 0 10m
coredns-f9fd979d6-t822r 1/1 Running 0 10m
etcd-kubemaster 1/1 Running 0 10m
kube-apiserver-kubemaster 1/1 Running 0 10m
kube-controller-manager-kubemaster 1/1 Running 2 10m
kube-proxy-jpb9p 1/1 Running 0 10m
kube-proxy-lkpv9 1/1 Running 0 6m13s
kube-proxy-sqd9v 1/1 Running 0 6m10s
kube-scheduler-kubemaster 1/1 Running 2 10m
weave-net-8rl49 2/2 Running 0 6m13s
weave-net-fkqdv 2/2 Running 0 6m10s
weave-net-q79pb 2/2 Running 0 7m48s
vagrant@kubemaster:~$

So, we have a working kubernetes cluster built with kubeadm using vagrant/libvirtd!

As a note, while building the VMs and installing software on them, my laptop hung a couple of times, as the 3 VMs running at the same time take nearly all the RAM. But this is a good exercise to understand the requirements of kubeadm to build a cluster, and it is also a lab environment you can use while studying if the cloud environments are down or you don't have internet. Let's see if I manage to pass the CKA one day!!!

3 VMs running
----
# top
top - 17:24:10 up 9 days, 18:18, 1 user, load average: 5.22, 5.09, 4.79
Tasks: 390 total, 1 running, 388 sleeping, 0 stopped, 1 zombie
%Cpu(s): 21.7 us, 19.5 sy, 0.0 ni, 56.5 id, 2.0 wa, 0.0 hi, 0.2 si, 0.0 st
MiB Mem : 7867.7 total, 263.0 free, 6798.7 used, 806.0 buff/cache
MiB Swap: 6964.0 total, 991.4 free, 5972.6 used. 409.6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
329875 tomas 20 0 9268464 251068 83584 S 55.8 3.1 14:27.84 chrome
187962 tomas 20 0 1302500 105228 46528 S 36.9 1.3 170:58.40 chrome
331127 libvirt+ 20 0 4753296 1.3g 5972 S 35.5 17.5 7:13.00 qemu-system-x86
330979 libvirt+ 20 0 4551524 954212 5560 S 7.3 11.8 4:08.33 qemu-system-x86
5518 root 20 0 1884932 135616 8528 S 5.3 1.7 76:50.45 Xorg
330803 libvirt+ 20 0 4550504 905428 5584 S 5.3 11.2 4:12.68 qemu-system-x86
6070 tomas 9 -11 1180660 6844 4964 S 3.7 0.1 44:04.39 pulseaudio
333253 tomas 20 0 4708156 51400 15084 S 3.3 0.6 1:23.72 chrome
288344 tomas 20 0 2644572 56560 14968 S 1.7 0.7 9:03.78 Web Content
6227 tomas 20 0 139916 8316 4932 S 1.3 0.1 19:59.68 gkrellm

3 VMs stopped
----
root@athens:/home/tomas# top
top - 18:40:09 up 9 days, 19:34, 1 user, load average: 0.56, 1.09, 1.30
Tasks: 379 total, 2 running, 376 sleeping, 0 stopped, 1 zombie
%Cpu(s): 4.5 us, 1.5 sy, 0.0 ni, 94.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 7867.7 total, 3860.9 free, 3072.9 used, 933.9 buff/cache
MiB Swap: 6964.0 total, 4877.1 free, 2086.9 used. 4122.1 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
288344 tomas 20 0 2644572 97532 17100 S 6.2 1.2 11:05.35 Web Content
404910 root 20 0 12352 5016 4040 R 6.2 0.1 0:00.01 top
1 root 20 0 253060 7868 5512 S 0.0 0.1 0:47.82 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:02.99 kthreadd
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp
6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H
9 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq
10 root 20 0 0 0 0 S 0.0 0.0 0:11.39 ksoftirqd/0
11 root 20 0 0 0 0 I 0.0 0.0 2:13.55 rcu_sched
root@athens:/home/tomas#

Marmalade-p1

This weekend I tried something I have had in mind for some time: home-made marmalade!

I had made membrillo (quince paste) before, and it was great! Such good memories came back. And you realize how the one you used to buy in shops is made…

I decided to try the same method with berries (strawberries, blueberries, raspberries, etc). The recipe is quite simple.

Ingredients

– 1 kg of frozen berries (if you can get them fresh, even better)

– 100g of sugar (maybe you can add more)

– 1 glass jar of 600ml, disinfected with boiling water

Process:

Heat up a big saucepan (medium heat), put in the frozen berries and the sugar. Stir frequently.

The fruit will start to thaw. Once the fruit is soft, reduce the heat a bit and keep stirring.

The fruits will release water, so don't add any.

Once it looks like a purée, taste it with a spoon (don't burn your tongue!!)

If you want to get rid of the big bits of fruit, use a hand blender.

Once you have the texture you want, it is done.

Let it cool off properly before transferring it to the jar and then to the fridge.

Notes

– Don't add water!!!! If you do, the marmalade will be quite liquid.

– Sugar levels. This is quite personal. Most marmalades I have checked have at least 40% sugar. I decided on 10%: 1kg fruit, 100g sugar. To be honest, depending on the fruit, it can still be a bit acidic, so you can add more. In my next attempt I will try 150g of sugar.

2nd Attempt:

I followed the same process, again without adding water. Still, it was very liquid, so I used a cooking sieve to drain the mix and the result was quite good! Now it is more solid, and you can drink the liquid (nothing goes to waste), it is super tasty!!!

Doctorow-Tor

I finished this book yesterday. This was my first book by Cory Doctorow; I have heard about him for some time, for his support of digital freedom and his blogging (never read it though). Somehow I decided to read something from him, and I chose this book as it seemed the latest one. And to be honest, I am glad I did, because I liked it. I didn't know what to expect, but the four novellas really hit the nail on the head on the main issues of our society:

1- Immigration – Digital freedom – Social connection – Social classes – Youth against injustice

2- Racism – even superpowers can't "fix" it – America's blind eye (and the whole world's, to be honest)

3- Healthcare (cost, politics, etc), Brutal-capitalism, Radicalization, Guilt, Mental Health.

4- Clean water, Global instability, Violence, Social disconnection

I have the feeling that you can see the current world in each story. At some points you think we are doomed, but there is always a spot of hope. And it is not just "having hope", it is taking action.

And I learned that the DMCA was signed by a Democrat… good job, Clinton…

And I want to use Tor more often. Just for browsing, it is really easy.

Work-Hard

I get mad whenever I hear "work hard" lately. What the f* does that mean? Do I need to stay at my desk for 16 hours every day? Is that what working hard means? I am subscribed to the SDN mailing list from IPSpace, and this week the email was about this very topic in relation to network automation. My former CTO told me one day: "work smarter, not harder". I am not very smart, but I try. And one key thing is focus.

Apple-Pie

To celebrate a new lockdown season, I wanted to cook an apple pie like the one my mum used to make, and I think I found a recipe that looks like it.

Ingredients:

1- 150g of crushed biscuits (my tin was a bit bigger than the one in the video)

2- 70g melted butter

3- 8 apples, peeled and cored – 6 for the sauce and 2 for decoration

4- 1 glass of milk (around 250ml)

5- 1 glass of plain flour (same glass as above)

6- 1/2 glass of sugar (same glass as above)

7- a pinch of cinnamon

8- Some tbsp of marmalade (or maple syrup, as I ran out)

Process

1- Use a bit of butter to cover the bottom of your tin. Mix your crushed biscuits with the butter and fill the bottom of the tin. Make sure it looks like a firm surface. Put the tin in the fridge to cool down.

2- Pre-heat the oven to 180C

3- In a blender, put the 6 chopped apples, milk, flour, sugar and cinnamon. After a couple of minutes, check everything is fully combined and liquid-like.

4- Take the tin from the fridge and pour the apple mix into it. Make sure it is level. Then cut the other two apples into slices and cover the apple mix with them.

5- Put the cake in the oven for around 60 minutes. The top should be brown, and if you push a knife into the cake it should come out clean.

6- Once the cake is out of the oven, use the marmalade to brighten up its surface.

7- Let the cake cool in the fridge (1h) and it is ready to eat!

I actually liked it and I think it is similar to the one I used to have as a kid. Good memories!

gnmi-ssl-p2

I was already playing with gNMI and protobuf a couple of months ago. But this week I received a summary of the last NANOG 80 meeting, and there was a presentation about it. Great job from Colin!

So I decided to give it a go, as the demo was based on docker and I already have my Arista lab with cEOS and vEOS as targets.

I started my 3-node-ring cEOS lab with docker-topo:

ceos-testing/topology master$ docker-topo --create 3-node-simple.yml
INFO:main:Version 2 requires sudo. Restarting script with sudo
[sudo] password for xxx:
INFO:main:
alias r01='docker exec -it 3node_r01 Cli'
alias r02='docker exec -it 3node_r02 Cli'
alias r03='docker exec -it 3node_r03 Cli'
INFO:main:All devices started successfully

Checked they were up:

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4160cc354ba2 ceos-lab:4.23.3M "/sbin/init systemd.…" 7 minutes ago Up 7 minutes 0.0.0.0:2002->22/tcp, 0.0.0.0:9002->443/tcp 3node_r03
122f72fb25bd ceos-lab:4.23.3M "/sbin/init systemd.…" 7 minutes ago Up 7 minutes 0.0.0.0:2001->22/tcp, 0.0.0.0:9001->443/tcp 3node_r02
68cf8ca39130 ceos-lab:4.23.3M "/sbin/init systemd.…" 7 minutes ago Up 7 minutes 0.0.0.0:2000->22/tcp, 0.0.0.0:9000->443/tcp 3node_r01

And then checked I had the gnmi config in r01:

!
management api gnmi
transport grpc GRPC
port 3333
!

I need to find the IP of r01 in "3node_net-0", as that is the one used for management. I have hit this issue so many times…

$ docker inspect 3node_r01
...
"Networks": {
 "3node_net-0": {
 "IPAMConfig": null, 
 "Links": null,
 "Aliases": [ "68cf8ca39130" ],
 "NetworkID": "d3f72e7473228488f668aa3ed65b6ea94e1c5c9553f93cf0f641c3d4af644e2e", "EndpointID": "bca584040e71a826ef25b8360d92881dad407ff976eff65a38722fd36e9fc873", "Gateway": "172.20.0.1", 
"IPAddress": "172.20.0.2",
....
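As a note to self for next time: docker inspect supports Go templates, so the IP can be pulled out directly instead of scrolling through the JSON:

$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 3node_r01
172.20.0.2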

Now, I cloned the repo and followed the instructions/video. I copied targets.json and updated it with my r01 device details:

~/storage/technology/gnmi-gateway release$ cat examples/gnmi-prometheus/targets.json 
{
  "request": {
    "default": {
      "subscribe": {
        "prefix": {
        },
        "subscription": [
          {
            "path": {
              "elem": [
                {
                  "name": "interfaces"
                }
              ]
            }
          }
        ]
      }
    }
  },
  "target": {
    "r01": {
      "addresses": [
        "172.20.0.2:3333"
      ],
      "credentials": {
        "username": "xxx",
        "password": "xxx"
      },
      "request": "default",
      "meta": {
        "NoTLS": "yes"
      }
    }
  }
}

Carrying on with the instructions: build the gnmi-gateway docker image, create the docker bridge, and run the gnmi-gateway image built earlier.

go:1.14.6|py:3.7.3|tomas@athens:~/storage/technology/gnmi-gateway release$ docker run \
-it --rm \
-p 59100:59100 \
-v $(pwd)/examples/gnmi-prometheus/targets.json:/opt/gnmi-gateway/targets.json \
--name gnmi-gateway-01 \
--network gnmi-net \
gnmi-gateway:latest
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Starting GNMI Gateway."}
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Clustering is NOT enabled. No locking or cluster coordination will happen."}
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Starting connection manager."}
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Starting gNMI server on 0.0.0.0:9339."}
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Starting Prometheus exporter."}
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Connection manager received a target control message: 1 inserts 0 removes"}
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Initializing target r01 ([172.27.0.2:3333]) map[NoTLS:yes]."}
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Target r01: Connecting"}
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Target r01: Subscribing"}
{"level":"info","time":"2020-11-07T16:54:28Z","message":"Starting Prometheus HTTP server."}
{"level":"info","time":"2020-11-07T16:54:38Z","message":"Target r01: Disconnected"}
E1107 16:54:38.382032 1 reconnect.go:114] client.Subscribe (target "r01") failed: client "gnmi" : client "gnmi" : Dialer(172.27.0.2:3333, 10s): context deadline exceeded; reconnecting in 552.330144ms
{"level":"info","time":"2020-11-07T16:54:48Z","message":"Target r01: Disconnected"}
E1107 16:54:48.935965 1 reconnect.go:114] client.Subscribe (target "r01") failed: client "gnmi" : client "gnmi" : Dialer(172.27.0.2:3333, 10s): context deadline exceeded; reconnecting in 1.080381816s
I also ran a tcpdump on r01 to see what was hitting port 3333 (this will come into play below):

bash-4.2# tcpdump -i any tcp port 3333 -nnn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
17:07:57.621011 In 02:42:7c:61:10:40 ethertype IPv4 (0x0800), length 76: 172.27.0.1.43644 > 172.27.0.2.3333: Flags [S], seq 557316949, win 64240, options [mss 1460,sackOK,TS val 3219811744 ecr 0,nop,wscale 7], length 0
17:07:57.621069 Out 02:42:ac:1b:00:02 ethertype IPv4 (0x0800), length 76: 172.27.0.2.3333 > 172.27.0.1.43644: Flags [S.], seq 243944609, ack 557316950, win 65160, options [mss 1460,sackOK,TS val 1828853442 ecr 3219811744,nop,wscale 7], length 0
17:07:57.621124 In 02:42:7c:61:10:40 ethertype IPv4 (0x0800), length 68: 172.27.0.1.43644 > 172.27.0.2.3333: Flags [.], ack 1, win 502, options [nop,nop,TS val 3219811744 ecr 1828853442], length 0
17:07:57.621348 Out 02:42:ac:1b:00:02 ethertype IPv4 (0x0800), length 89: 172.27.0.2.3333 > 172.27.0.1.43644: Flags [P.], seq 1:22, ack 1, win 510, options [nop,nop,TS val 1828853442 ecr 3219811744], length 21
17:07:57.621409 In 02:42:7c:61:10:40 ethertype IPv4 (0x0800), length 68: 172.27.0.1.43644 > 172.27.0.2.3333: Flags [.], ack 22, win 502, options [nop,nop,TS val 3219811744 ecr 1828853442], length 0
17:07:57.621492 In 02:42:7c:61:10:40 ethertype IPv4 (0x0800), length 320: 172.27.0.1.43644 > 172.27.0.2.3333: Flags [P.], seq 1:253, ack 22, win 502, options [nop,nop,TS val 3219811744 ecr 1828853442], length 252
17:07:57.621509 Out 02:42:ac:1b:00:02 ethertype IPv4 (0x0800), length 68: 172.27.0.2.3333 > 172.27.0.1.43644: Flags [.], ack 253, win 509, options [nop,nop,TS val 1828853442 ecr 3219811744], length 0
17:07:57.621586 In 02:42:7c:61:10:40 ethertype IPv4 (0x0800), length 68: 172.27.0.1.43644 > 172.27.0.2.3333: Flags [F.], seq 253, ack 22, win 502, options [nop,nop,TS val 3219811744 ecr 1828853442], length 0
17:07:57.621904 Out 02:42:ac:1b:00:02 ethertype IPv4 (0x0800), length 68: 172.27.0.2.3333 > 172.27.0.1.43644: Flags [R.], seq 22, ack 254, win 509, options [nop,nop,TS val 1828853443 ecr 3219811744], length 0

OK, the container is created and seems to be running, but gnmi-gateway can't connect to my cEOS r01…

First thing, I had to check iptables. It is not the first time that, when playing with docker and building different environments (vEOS vs gnmi-gateway) with different docker commands, iptables ends up misconfigured.

And it was the case again:

# iptables -t filter -S DOCKER-ISOLATION-STAGE-1
Warning: iptables-legacy tables present, use iptables-legacy to see them
-N DOCKER-ISOLATION-STAGE-1
-A DOCKER-ISOLATION-STAGE-1 -i br-43481af25965 ! -o br-43481af25965 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-94c1e813ad6f ! -o br-94c1e813ad6f -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-4bd17cfa19a8 ! -o br-4bd17cfa19a8 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-13ab2b6a0d1d ! -o br-13ab2b6a0d1d -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-121978ca0282 ! -o br-121978ca0282 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-00db5844bbb0 ! -o br-00db5844bbb0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN

The chain is evaluated top-down, so I moved the ACCEPT rule above the isolation rule for the new gnmi-gateway docker bridge, and that part was solved:

# iptables -t filter -D DOCKER-ISOLATION-STAGE-1 -j ACCEPT
# iptables -t filter -I DOCKER-ISOLATION-STAGE-1 -j ACCEPT
#
# iptables -t filter -S DOCKER-ISOLATION-STAGE-1
Warning: iptables-legacy tables present, use iptables-legacy to see them
-N DOCKER-ISOLATION-STAGE-1
-A DOCKER-ISOLATION-STAGE-1 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-43481af25965 ! -o br-43481af25965 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-94c1e813ad6f ! -o br-94c1e813ad6f -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-4bd17cfa19a8 ! -o br-4bd17cfa19a8 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-13ab2b6a0d1d ! -o br-13ab2b6a0d1d -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-121978ca0282 ! -o br-121978ca0282 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-00db5844bbb0 ! -o br-00db5844bbb0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
#

So I restarted gnmi-gateway: still the same issue. OK, I decided to check if the packets were actually hitting r01 (the tcpdump shown earlier).

At first sight, the TCP handshake is established, but then there is a TCP RST…

So I double-checked that gnmi was running on my side:

r1#show management api gnmi 
Enabled:            Yes
Server:             running on port 3333, in MGMT VRF
SSL Profile:        none
QoS DSCP:           none
r1#

At that moment, I thought it was an issue in cEOS… checking the logs I couldn't see any confirmation, but I decided to give it a go with vEOS, which is more feature-rich. So I turned up my GCP lab and followed the same steps with gnmi-gateway. I updated targets.json with the details of one of my vEOS devices. And ran it again:

~/gnmi/gnmi-gateway release$ sudo docker run -it --rm -p 59100:59100 -v $(pwd)/examples/gnmi-prometheus/targets.json:/opt/gnmi-gateway/targets.json --name gnmi-gateway-01 --network gnmi-net gnmi-gateway:latest
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Starting GNMI Gateway."}
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Clustering is NOT enabled. No locking or cluster coordination will happen."}
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Starting connection manager."}
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Starting gNMI server on 0.0.0.0:9339."}
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Starting Prometheus exporter."}
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Connection manager received a target control message: 1 inserts 0 removes"}
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Initializing target gcp-r1 ([192.168.249.4:3333]) map[NoTLS:yes]."}
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Target gcp-r1: Connecting"}
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Target gcp-r1: Subscribing"}
{"level":"info","time":"2020-11-07T19:22:20Z","message":"Starting Prometheus HTTP server."}
{"level":"info","time":"2020-11-07T19:22:30Z","message":"Target gcp-r1: Disconnected"}
E1107 19:22:30.048410 1 reconnect.go:114] client.Subscribe (target "gcp-r1") failed: client "gnmi" : client "gnmi" : Dialer(192.168.249.4:3333, 10s): context deadline exceeded; reconnecting in 552.330144ms
{"level":"info","time":"2020-11-07T19:22:40Z","message":"Target gcp-r1: Disconnected"}
E1107 19:22:40.603141 1 reconnect.go:114] client.Subscribe (target "gcp-r1") failed: client "gnmi" : client "gnmi" : Dialer(192.168.249.4:3333, 10s): context deadline exceeded; reconnecting in 1.080381816s

Again, the same issue. Let's see it from the vEOS perspective:

bash-4.2# tcpdump -i any tcp port 3333 -nnn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
18:52:31.874137 In 1e:3d:5b:13:d8:fe ethertype IPv4 (0x0800), length 76: 10.128.0.4.56546 > 192.168.249.4.3333: Flags [S], seq 4076065498, win 64240, options [mss 1460,sackOK,TS val 1752943121 ecr 0,nop,wscale 7], length 0
18:52:31.874579 Out 50:00:00:04:00:00 ethertype IPv4 (0x0800), length 76: 192.168.249.4.3333 > 10.128.0.4.56546: Flags [S.], seq 3922060793, ack 4076065499, win 28960, options [mss 1460,sackOK,TS val 433503 ecr 1752943121,nop,wscale 7], length 0
18:52:31.875882 In 1e:3d:5b:13:d8:fe ethertype IPv4 (0x0800), length 68: 10.128.0.4.56546 > 192.168.249.4.3333: Flags [.], ack 1, win 502, options [nop,nop,TS val 1752943123 ecr 433503], length 0
18:52:31.876284 In 1e:3d:5b:13:d8:fe ethertype IPv4 (0x0800), length 320: 10.128.0.4.56546 > 192.168.249.4.3333: Flags [P.], seq 1:253, ack 1, win 502, options [nop,nop,TS val 1752943124 ecr 433503], length 252
18:52:31.876379 Out 50:00:00:04:00:00 ethertype IPv4 (0x0800), length 68: 192.168.249.4.3333 > 10.128.0.4.56546: Flags [.], ack 253, win 235, options [nop,nop,TS val 433504 ecr 1752943124], length 0
18:52:31.929448 Out 50:00:00:04:00:00 ethertype IPv4 (0x0800), length 89: 192.168.249.4.3333 > 10.128.0.4.56546: Flags [P.], seq 1:22, ack 253, win 235, options [nop,nop,TS val 433517 ecr 1752943124], length 21
18:52:31.930028 In 1e:3d:5b:13:d8:fe ethertype IPv4 (0x0800), length 68: 10.128.0.4.56546 > 192.168.249.4.3333: Flags [.], ack 22, win 502, options [nop,nop,TS val 1752943178 ecr 433517], length 0
18:52:31.930090 In 1e:3d:5b:13:d8:fe ethertype IPv4 (0x0800), length 68: 10.128.0.4.56546 > 192.168.249.4.3333: Flags [F.], seq 253, ack 22, win 502, options [nop,nop,TS val 1752943178 ecr 433517], length 0
18:52:31.931603 Out 50:00:00:04:00:00 ethertype IPv4 (0x0800), length 68: 192.168.249.4.3333 > 10.128.0.4.56546: Flags [R.], seq 22, ack 254, win 235, options [nop,nop,TS val 433517 ecr 1752943178], length 0

So again in GCP: TCP is established, but then TCP RST. As vEOS was my last resort, I tried to dig into that TCP connection. I downloaded a pcap to analyze with wireshark and get a better visual clue…

So, somehow, gnmi-gateway is trying to negotiate TLS!!! As per my understanding, my targets.json was configured with "NoTLS": "yes", so that should be avoided, shouldn't it?

At that moment, I wanted to know how to identify TLS/SSL packets using tcpdump, as it is not always that easy to quickly get a pcap into wireshark. I found the answer here:

bash-4.2# tcpdump -i any "tcp port 3333 and (tcp[((tcp[12] & 0xf0) >> 2)] = 0x16)"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
19:47:01.367197 In 1e:3d:5b:13:d8:fe (oui Unknown) ethertype IPv4 (0x0800), length 320: 10.128.0.4.50486 > 192.168.249.4.dec-notes: Flags [P.], seq 2715923852:2715924104, ack 2576249027, win 511, options [nop,nop,TS val 1194424180 ecr 1250876], length 252
19:47:02.405870 In 1e:3d:5b:13:d8:fe (oui Unknown) ethertype IPv4 (0x0800), length 320: 10.128.0.4.50488 > 192.168.249.4.dec-notes: Flags [P.], seq 680803294:680803546, ack 3839769659, win 511, options [nop,nop,TS val 1194425218 ecr 1251136], length 252
19:47:04.139458 In 1e:3d:5b:13:d8:fe (oui Unknown) ethertype IPv4 (0x0800), length 320: 10.128.0.4.50490 > 192.168.249.4.dec-notes: Flags [P.], seq 3963338234:3963338486, ack 1760248652, win 511, options [nop,nop,TS val 1194426952 ecr 1251569], length 252

Not something easy to remember 🙁
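Breaking the filter down so I can remember it (this is standard TCP/TLS layout, nothing gNMI-specific):

# tcp[12] holds the TCP data offset in its high nibble, in 32-bit words,
# so ((tcp[12] & 0xf0) >> 2) is the header length in bytes, i.e. the index
# of the first payload byte. 0x16 (22) is the TLS "handshake" content type,
# so this matches any segment whose payload starts a TLS handshake record.
tcpdump -i any "tcp port 3333 and (tcp[((tcp[12] & 0xf0) >> 2)] = 0x16)"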

OK, I wanted to be sure that gnmi was functional in vEOS, and with a quick internet lookup I found the gnmic project! Great job by the author!

So I configured the tool and tested it against my vEOS. And it worked (without needing TLS):

~/gnmi/gnmi-gateway release$ gnmic -a 192.168.249.4:3333 -u xxx -p xxx --insecure get \
--path "/interfaces/interface[name=*]/subinterfaces/subinterface[index=*]/ipv4/addresses/address/config/ip"
Get Response:
[
{
"time": "1970-01-01T00:00:00Z",
"updates": [
{
"Path": "interfaces/interface[name=Management1]/subinterfaces/subinterface[index=0]/ipv4/addresses/address[ip=192.168.249.4]/config/ip",
"values": {
"interfaces/interface/subinterfaces/subinterface/ipv4/addresses/address/config/ip": "192.168.249.4"
}
},
{
"Path": "interfaces/interface[name=Ethernet2]/subinterfaces/subinterface[index=0]/ipv4/addresses/address[ip=10.0.13.1]/config/ip",
"values": {
"interfaces/interface/subinterfaces/subinterface/ipv4/addresses/address/config/ip": "10.0.13.1"
}
},
{
"Path": "interfaces/interface[name=Ethernet3]/subinterfaces/subinterface[index=0]/ipv4/addresses/address[ip=192.168.1.1]/config/ip",
"values": {
"interfaces/interface/subinterfaces/subinterface/ipv4/addresses/address/config/ip": "192.168.1.1"
}
},
{
"Path": "interfaces/interface[name=Ethernet1]/subinterfaces/subinterface[index=0]/ipv4/addresses/address[ip=10.0.12.1]/config/ip",
"values": {
"interfaces/interface/subinterfaces/subinterface/ipv4/addresses/address/config/ip": "10.0.12.1"
}
},
{
"Path": "interfaces/interface[name=Loopback1]/subinterfaces/subinterface[index=0]/ipv4/addresses/address[ip=10.0.0.1]/config/ip",
"values": {
"interfaces/interface/subinterfaces/subinterface/ipv4/addresses/address/config/ip": "10.0.0.1"
}
},
{
"Path": "interfaces/interface[name=Loopback2]/subinterfaces/subinterface[index=0]/ipv4/addresses/address[ip=192.168.0.1]/config/ip",
"values": {
"interfaces/interface/subinterfaces/subinterface/ipv4/addresses/address/config/ip": "192.168.0.1"
}
}
]
}
]
~/gnmi/gnmi-gateway release$
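As a side note, gnmic can also subscribe, which is closer to what gnmi-gateway does under the hood. A sketch along the same lines (the counters path is an assumption on my part, not something from this session):

$ gnmic -a 192.168.249.4:3333 -u xxx -p xxx --insecure \
subscribe --path "/interfaces/interface/state/counters"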

So I was kind of sure that my issue was the gnmi-gateway configuration. I tried to troubleshoot it: removed the NoTLS, used the debugging mode, built the code, read the Go code for Target (too complex for my Go knowledge 🙁 )

So in the end, I gave up and opened an issue with the gnmi-gateway author. And he answered super quickly with the solution!!! I had misunderstood the meaning of "NoTLS" 🙁

So I followed his instructions to configure TLS in my cEOS gnmi config:

security pki certificate generate self-signed r01.crt key r01.key generate rsa 2048 validity 30000 parameters common-name r01
!
management api gnmi
transport grpc GRPC
ssl profile SELFSIGNED
port 3333
!
...
!
management security
ssl profile SELFSIGNED
certificate r01.crt key r01.key
!
end

and all worked straight away!

~/storage/technology/gnmi-gateway release$ docker run -it --rm -p 59100:59100 -v $(pwd)/examples/gnmi-prometheus/targets.json:/opt/gnmi-gateway/targets.json --name gnmi-gateway-01 --network gnmi-net gnmi-gateway:latest
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Starting GNMI Gateway."}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Clustering is NOT enabled. No locking or cluster coordination will happen."}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Starting connection manager."}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Starting gNMI server on 0.0.0.0:9339."}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Starting Prometheus exporter."}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Connection manager received a target control message: 1 inserts 0 removes"}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Initializing target r01 ([172.20.0.2:3333]) map[NoTLS:yes]."}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Target r01: Connecting"}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Target r01: Subscribing"}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Target r01: Connected"}
{"level":"info","time":"2020-11-08T09:39:15Z","message":"Target r01: Synced"}
{"level":"info","time":"2020-11-08T09:39:16Z","message":"Starting Prometheus HTTP server."}
{"level":"info","time":"2020-11-08T09:39:45Z","message":"Connection manager received a target control message: 1 inserts 0 removes"}
{"level":"info","time":"2020-11-08T09:40:15Z","message":"Connection manager received a target control message: 1 inserts 0 removes"}

So I can start prometheus:

~/storage/technology/gnmi-gateway release$ docker run \
-it --rm \
-p 9090:9090 \
-v $(pwd)/examples/gnmi-prometheus/prometheus.yml:/etc/prometheus/prometheus.yml \
--name prometheus-01 \
--network gnmi-net \
prom/prometheus
Unable to find image 'prom/prometheus:latest' locally
latest: Pulling from prom/prometheus
76df9210b28c: Pull complete
559be8e06c14: Pull complete
66945137dd82: Pull complete
8cbce0960be4: Pull complete
f7bd1c252a58: Pull complete
6ad12224c517: Pull complete
ee9cd36fa25a: Pull complete
d73034c1b9c3: Pull complete
b7103b774752: Pull complete
2ba5d8ece07a: Pull complete
ab11729a0297: Pull complete
1549b85a3587: Pull complete
Digest: sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210
Status: Downloaded newer image for prom/prometheus:latest
level=info ts=2020-11-08T09:40:26.622Z caller=main.go:315 msg="No time or size retention was set so using the default time retention" duration=15d
level=info ts=2020-11-08T09:40:26.623Z caller=main.go:353 msg="Starting Prometheus" version="(version=2.22.1, branch=HEAD, revision=00f16d1ac3a4c94561e5133b821d8e4d9ef78ec2)"
level=info ts=2020-11-08T09:40:26.623Z caller=main.go:358 build_context="(go=go1.15.3, user=root@516b109b1732, date=20201105-14:02:25)"
level=info ts=2020-11-08T09:40:26.623Z caller=main.go:359 host_details="(Linux 5.9.0-1-amd64 #1 SMP Debian 5.9.1-1 (2020-10-17) x86_64 b0fadf4a4c80 (none))"
level=info ts=2020-11-08T09:40:26.623Z caller=main.go:360 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2020-11-08T09:40:26.623Z caller=main.go:361 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2020-11-08T09:40:26.641Z caller=main.go:712 msg="Starting TSDB …"
level=info ts=2020-11-08T09:40:26.641Z caller=web.go:516 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2020-11-08T09:40:26.668Z caller=head.go:642 component=tsdb msg="Replaying on-disk memory mappable chunks if any"
level=info ts=2020-11-08T09:40:26.669Z caller=head.go:656 component=tsdb msg="On-disk memory mappable chunks replay completed" duration=103.51µs
level=info ts=2020-11-08T09:40:26.669Z caller=head.go:662 component=tsdb msg="Replaying WAL, this may take a while"
level=info ts=2020-11-08T09:40:26.672Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
level=info ts=2020-11-08T09:40:26.672Z caller=head.go:719 component=tsdb msg="WAL replay completed" checkpoint_replay_duration=123.684µs wal_replay_duration=2.164743ms total_replay_duration=3.357021ms
level=info ts=2020-11-08T09:40:26.675Z caller=main.go:732 fs_type=2fc12fc1
level=info ts=2020-11-08T09:40:26.676Z caller=main.go:735 msg="TSDB started"
level=info ts=2020-11-08T09:40:26.676Z caller=main.go:861 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2020-11-08T09:40:26.684Z caller=main.go:892 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=7.601103ms remote_storage=22.929µs web_handler=623ns query_engine=1.64µs scrape=5.517391ms scrape_sd=359.447µs notify=18.349µs notify_sd=3.921µs rules=15.744µs
level=info ts=2020-11-08T09:40:26.685Z caller=main.go:684 msg="Server is ready to receive web requests."
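A quick sanity check before the UI is to curl the exporter port we published earlier (59100); if the subscription is flowing, the gNMI paths should show up as metric names:

$ curl -s http://localhost:59100/metrics | head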

Now we can open the prometheus UI and verify we are consuming data from cEOS r01.

Yeah! It is there.

So all is working in the end. It was a nice experience. At the end of the day, I want to know more about gNMI/protobuf, etc. The cool thing here is that you can get both telemetry and configuration management of your devices. gnmi-gateway (which is more for a high-availability environment like Netflix's) and gnmic are great tools to get your head around it.

Another lab I want to try is this eos-gnmi-telemetry-grafana.

The to-do list always keeps growing.

Pisto

Another dish I have wanted to try for some time is pisto. I must have been a teenager the last time I had it. I found several good videos like this one, but I tried to do it my way.

Ingredients:

  • 1 courgette
  • 1 big potato
  • 1 big onion
  • 2-3 garlic cloves
  • 3 peppers (red, green, whatever)
  • Salt, olive oil, any spice you fancy
  • 1 can tomato sauce

Process

1- Preheat oven at 200C

2- Chop your vegetables and put everything in a tray. Add salt and spices, and coat everything in oil.

3- Put the tray in the oven for about 30 minutes. Be sure they don't burn!

4- Heat up a frying pan. Add some oil, then add the can of tomato sauce. Stir, add a bit of salt and sugar (optional). It should thicken up a bit.

5- Once your vegs are grilled, add them to the tomato sauce (don't throw away the liquid from the tray). Stir everything for a few minutes and it is ready to serve.

6- Optional: you can fry an egg and add it on top!

Super easy, quick and healthy!

Chicken-Katsu-Curry

I am not good at cooking curry dishes, but there is one I wanted to try. I found a video that I liked and gave it a try.

Curry Sauce:

  • a bit of olive oil to fry (2tsp)
  • 2 onions
  • 2 carrots
  • 5 garlic cloves
  • 1 apple
  • 1 tsp honey
  • 2 tsp flour
  • 4 tsp curry mild powder
  • 2 tsp garam masala
  • 4 tsp soy sauce
  • 500ml water + 2 chicken stock cubes
  • 1/2 can coconut milk

Chicken:

  • 2 big chicken breasts
  • bread crumbs
  • 1 egg
  • flour
  • olive oil for frying

Rice

  • 1 cup of rice + 1 1/2 cup of water
  • 1/2 can coconut milk

Process for curry

1- Fry the chopped onions, carrots, apple and garlic in a pan. Stir for 4-5 minutes. The onions should soften a bit.

2- Add the flour, curry powder, garam masala, coconut milk, honey and soy sauce. Stir everything properly. It should be like a paste. Start adding the chicken stock. Leave it to simmer for about 30 minutes while you prepare the chicken and rice.

3- Heat another pan with the olive oil (you are not deep frying…)

4- Prepare the chicken breasts. I pound the chicken breast to make it a bit flat. Then coat the chicken first in the flour, then in the egg, and finally in the bread crumbs, making sure it is properly coated at each step.

5- Fry the chicken until golden on both sides.

6- Boil your 1 1/2 cups of water, put it in another pan, add the rice and the 1/2 can of coconut milk. Let it cook on medium heat.

7- Once the chicken is fried and the rice is ready, it is time to finalize the sauce. I blend the sauce with a hand blender: I don't want to use only the liquid and throw away all the ingredients. I don't like to waste food.

The result was really good! I have food for a week!

cumulus-basics

Today I finally managed to get a very basic cumulus setup. It is annoying because I tried several months ago, found some issues with libvirt (I opened a ticket but didn't follow up) and gave up.

Now it works. I just want to use KVM/QEMU and Vagrant, which I already have installed on my system. So based on the link, I just created a folder and copied the Vagrantfile. Then "vagrant up" and wait.

/cumulus/1s2l$ vagrant up
Bringing machine 'spine01' up with 'libvirt' provider…
Bringing machine 'leaf01' up with 'libvirt' provider…
Bringing machine 'leaf02' up with 'libvirt' provider…
==> leaf01: Box 'CumulusCommunity/cumulus-vx' could not be found. Attempting to find and install…
leaf01: Box Provider: libvirt
leaf01: Box Version: 4.2.0
==> leaf01: Loading metadata for box 'CumulusCommunity/cumulus-vx'
leaf01: URL: https://vagrantcloud.com/CumulusCommunity/cumulus-vx
==> leaf01: Adding box 'CumulusCommunity/cumulus-vx' (v4.2.0) for provider: libvirt
leaf01: Downloading: https://vagrantcloud.com/CumulusCommunity/boxes/cumulus-vx/versions/4.2.0/providers/libvirt.box
Download redirected to host: d2cd9e7ca6hntp.cloudfront.net
==> leaf01: Successfully added box 'CumulusCommunity/cumulus-vx' (v4.2.0) for 'libvirt'!
==> spine01: Box 'CumulusCommunity/cumulus-vx' could not be found. Attempting to find and install…
spine01: Box Provider: libvirt
spine01: Box Version: 4.2.0
==> leaf01: Uploading base box image as volume into Libvirt storage…
==> spine01: Loading metadata for box 'CumulusCommunity/cumulus-vx'
spine01: URL: https://vagrantcloud.com/CumulusCommunity/cumulus-vx
Progress: 0%==> spine01: Adding box 'CumulusCommunity/cumulus-vx' (v4.2.0) for provider: libvirt
Progress: 0%==> leaf02: Box 'CumulusCommunity/cumulus-vx' could not be found. Attempting to find and install…
leaf02: Box Provider: libvirt
leaf02: Box Version: 4.2.0
Progress: 1%==> leaf02: Loading metadata for box 'CumulusCommunity/cumulus-vx'
leaf02: URL: https://vagrantcloud.com/CumulusCommunity/cumulus-vx
==> leaf02: Adding box 'CumulusCommunity/cumulus-vx' (v4.2.0) for provider: libvirt
==> leaf01: Creating image (snapshot of base box volume).
==> spine01: Creating image (snapshot of base box volume).
==> leaf02: Creating image (snapshot of base box volume).
==> leaf01: Creating domain with the following settings…
==> leaf01: -- Name: 1s2l_leaf01
==> leaf02: Creating domain with the following settings…
==> spine01: Creating domain with the following settings…
==> leaf02: -- Name: 1s2l_leaf02
==> spine01: -- Name: 1s2l_spine01
==> leaf01: -- Domain type: kvm
==> leaf02: -- Domain type: kvm
==> spine01: -- Domain type: kvm
==> leaf01: -- Cpus: 1
==> leaf02: -- Cpus: 1
==> spine01: -- Cpus: 1
==> leaf01: -- Feature: acpi
==> leaf02: -- Feature: acpi
==> spine01: -- Feature: acpi
==> leaf01: -- Feature: apic
==> leaf01: -- Feature: pae
==> leaf01: -- Memory: 768M
==> leaf02: -- Feature: apic
==> spine01: -- Feature: apic
==> spine01: -- Feature: pae
....
....

You can check the VMs are up:

/cumulus/1s2l$ vagrant status
Current machine states:
spine01 running (libvirt)
leaf01 running (libvirt)
leaf02 running (libvirt)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run vagrant status NAME.
/cumulus/1s2l$

And we can log in and create some network interfaces as per the documentation:

/cumulus/1s2l$ vagrant ssh leaf01
Linux leaf01 4.19.0-cl-1-amd64 #1 SMP Cumulus 4.19.94-1+cl4u5 (2020-07-10) x86_64
Welcome to Cumulus VX (TM)
Cumulus VX (TM) is a community supported virtual appliance designed for
experiencing, testing and prototyping Cumulus Networks' latest technology.
For any questions or technical support, visit our community site at:
http://community.cumulusnetworks.com
The registered trademark Linux (R) is used pursuant to a sublicense from LMI,
the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide
basis.
vagrant@leaf01:mgmt:~$ net add interface swp1,swp2,swp3
vagrant@leaf01:mgmt:~$ net commit
--- /etc/network/interfaces 2020-07-15 01:15:58.000000000 +0000
+++ /run/nclu/ifupdown2/interfaces.tmp 2020-10-31 14:12:30.826000000 +0000
@@ -5,15 +5,24 @@
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet dhcp
vrf mgmt
+auto swp1
+iface swp1
+
+auto swp2
+iface swp2
+
+auto swp3
+iface swp3
+
auto mgmt
iface mgmt
address 127.0.0.1/8
address ::1/128
vrf-table auto
net add/del commands since the last "net commit"
User Timestamp Command
------- -------------------------- --------------------------------
vagrant 2020-10-31 14:12:27.070219 net add interface swp1,swp2,swp3
vagrant@leaf01:mgmt:~$
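The same two commands go on the other two VMs (leaf02 shown; on spine01 add whichever swp ports the Vagrantfile wired up):

vagrant@leaf02:mgmt:~$ net add interface swp1,swp2,swp3
vagrant@leaf02:mgmt:~$ net commit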

And after configuring the interfaces on the three VMs, we have LLDP working:

/cumulus/1s2l$ vagrant ssh leaf01
Linux leaf01 4.19.0-cl-1-amd64 #1 SMP Cumulus 4.19.94-1+cl4u5 (2020-07-10) x86_64
Welcome to Cumulus VX (TM)
Cumulus VX (TM) is a community supported virtual appliance designed for
experiencing, testing and prototyping Cumulus Networks' latest technology.
For any questions or technical support, visit our community site at:
http://community.cumulusnetworks.com
The registered trademark Linux (R) is used pursuant to a sublicense from LMI,
the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide
basis.
Last login: Sat Oct 31 14:12:04 2020 from 10.255.1.1
vagrant@leaf01:mgmt:~$
vagrant@leaf01:mgmt:~$
vagrant@leaf01:mgmt:~$
vagrant@leaf01:mgmt:~$ net show lldp
LocalPort Speed Mode RemoteHost RemotePort
--------- ----- ------- ---------- ----------
swp1 1G Default spine01 swp1
swp2 1G Default leaf02 swp2
swp3 1G Default leaf02 swp3
vagrant@leaf01:mgmt:~$
vagrant@leaf01:mgmt:~$ net show system
Hostname......... leaf01
Build............ Cumulus Linux 4.2.0
Uptime........... 0:06:09.180000
Model............ Cumulus VX
Memory........... 669MB
Vendor Name...... Cumulus Networks
Part Number...... 4.2.0
Base MAC Address. 52:54:00:17:87:07
Serial Number.... 52:54:00:17:87:07
Product Name..... VX
vagrant@leaf01:mgmt:~$ exit

So I am happy, because now I have something to play with to try to build an MPLS lab with cumulus. At some point I would also like to try some quagga/FRR lab.

I am pretty sure that in the past I didn't have to type my password every single time I ran a vagrant command…
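My guess is that it is a libvirt permissions/polkit thing; on Debian, putting your user in the libvirt group usually stops the prompts (an assumption on my part, I have not verified it with vagrant-libvirt):

# assumption: membership in the libvirt group grants access to qemu:///system without polkit prompts
$ sudo usermod -aG libvirt $USER
$ newgrp libvirt    # or log out and back in for the group change to apply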

OK, we can shut down the VMs and pick up the work next time:

/cumulus/1s2l$ vagrant halt spine01 leaf01 leaf02
==> leaf02: Halting domain…
==> leaf01: Halting domain…
==> spine01: Halting domain…
/cumulus/1s2l$
/cumulus/1s2l$
/cumulus/1s2l$ vagrant status
Current machine states:
spine01 shutoff (libvirt)
leaf01 shutoff (libvirt)
leaf02 shutoff (libvirt)
This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run vagrant status NAME.
/cumulus/1s2l$

WisdomOfInsecurity

I finished reading this book last night. To be honest, it has been hard to read and digest: very hardcore philosophy for my level. To put things a bit in perspective, the book was written in 1951… and halfway through the book you realize that the things he talks about are still pretty valid nowadays. Without noticing, he is taking an approach close to Eastern philosophy (Buddhism), in contrast to the Western one. We are very focused on the "I", on the material world, etc. We try to get things defined as something static, and we need that for security. Our brain is the one calling the shots, but by taking a different approach, accepting the insecurity (you can't control everything, you can't know everything), you can live a less stressful and more meaningful life.

Again, this is the typical book I should read 30 times to really get a full understanding.