I was reading through my backlog and noticed two close-by incidents: a BGP hijack by Telstra on 30th September and the Tokyo Stock Exchange outage on 2nd October. At the end of the day, small mistakes/errors (on purpose or not) can cause massive impact (depending on your point of view). For BGP, RPKI is the security framework to make sure the advertised routes belong to the real owners. Yeah, quick summary. But at the end of the day, not all Internet providers are using RPKI, and even if you use it, you can make mistakes. Still, it is better than nothing. For the exchange, thinking that a single piece of hardware can bring a $6 trillion market to a stop is crazy. And it seems it is just a 350-server system. That tells me that you don't need the biggest system to hold the biggest value, and that you will always hit a problem no matter how safe/resilient your design/implementation/etc is. Likely I am making this up and I need to review the book, but one of the conclusions I took from it, via Gödel, is that no matter how many statements you use to declare your (software) system, you can always find a weakness (false statement).
Author: flipaoXIX
Evolved-Indiana
This week I realised that Juniper JunOS is moving to Linux… called JunOS Evolved. I guess they will keep supporting the FreeBSD version, but long term it will be Linux. I am quite surprised, as this was actually announced in early 2020; as always, I am late joining the party. So all the big boys are running Linux at some level: Cisco did it some time ago with NX-OS, Brocade/Extreme did it too with SLX (based on Ubuntu) and obviously Arista with EOS (based on Fedora). So the trend of more "open" network OSes will be on the rise.
And as well, I finished the "Indiana Jones and the Temple of Doom" book. The Indiana Jones films are among my favourites… and although this one was always considered the "worst" (I erased the "fourth" from my mind), I have really enjoyed the book. It was like watching the movie at a slow pace, and I didn't care that I knew the plot. I will likely get the other books.
NTS
From a new Cloudflare post, I learned that NTS is now a standard. To be honest, I can't remember hearing about work to make NTP secure. In recent years I have seen development in PTP for time sync in financial systems, but nothing else. So it is nice to see this happening. We only need to encrypt BGP and we are done on the internet… oh wait. Dreaming is free.
So I am trying to install and configure NTS in my system following these links: link1 link2
I have just installed ntpsec via the Debian package system and that's it, ntpsec is running…
# apt install ntpsec
...
# service ntpsec status
● ntpsec.service - Network Time Service
   Loaded: loaded (/lib/systemd/system/ntpsec.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-10-04 20:35:58 BST; 6min ago
     Docs: man:ntpd(8)
 Main PID: 292116 (ntpd)
    Tasks: 1 (limit: 9354)
   Memory: 10.2M
   CGroup: /system.slice/ntpsec.service
           └─292116 /usr/sbin/ntpd -p /run/ntpd.pid -c /etc/ntpsec/ntp.conf -g -N -u ntpsec:ntpsec

Oct 04 20:36:02 athens ntpd[292116]: DNS: dns_check: processing 3.debian.pool.ntp.org, 8, 101
Oct 04 20:36:02 athens ntpd[292116]: DNS: Pool taking: 81.128.218.110
Oct 04 20:36:02 athens ntpd[292116]: DNS: Pool poking hole in restrictions for: 81.128.218.110
Oct 04 20:36:02 athens ntpd[292116]: DNS: Pool taking: 139.162.219.252
Oct 04 20:36:02 athens ntpd[292116]: DNS: Pool poking hole in restrictions for: 139.162.219.252
Oct 04 20:36:02 athens ntpd[292116]: DNS: Pool taking: 62.3.77.2
Oct 04 20:36:02 athens ntpd[292116]: DNS: Pool poking hole in restrictions for: 62.3.77.2
Oct 04 20:36:02 athens ntpd[292116]: DNS: Pool taking: 213.130.44.252
Oct 04 20:36:02 athens ntpd[292116]: DNS: Pool poking hole in restrictions for: 213.130.44.252
Oct 04 20:36:02 athens ntpd[292116]: DNS: dns_take_status: 3.debian.pool.ntp.org=>good, 8
#
Checking the default config, there is nothing configured to use NTS so I made some changes based on the links above:
# vim /etc/ntpsec/ntp.conf
...
# Public NTP servers supporting Network Time Security:
server time.cloudflare.com:1234 nts

# Example 2: NTS-secured NTP (default NTS-KE port (123); using certificate pool of the operating system)
server ntp1.glypnod.com iburst minpoll 3 maxpoll 6 nts

# Via https://www.netnod.se/time-and-frequency/how-to-use-nts
server nts.ntp.se:3443 nts iburst
server nts.sth1.ntp.se:3443 nts iburst
server nts.sth2.ntp.se:3443 nts iburst
After restart, still not seeing NTS in sync 🙁
# service ntpsec restart
...
# ntpq -puw
remote                                refid            st t when poll reach delay     offset    jitter
time.cloudflare.com                   .NTS.            16 0  -    64   0    0ns       0ns       119ns
ntp1.glypnod.com                      .NTS.            16 5  -    32   0    0ns       0ns       119ns
2a01:3f7:2:202::202                   .NTS.            16 1  -    64   0    0ns       0ns       119ns
2a01:3f7:2:52::11                     .NTS.            16 1  -    64   0    0ns       0ns       119ns
2a01:3f7:2:62::11                     .NTS.            16 1  -    64   0    0ns       0ns       119ns
0.debian.pool.ntp.org                 .POOL.           16 p  -    256  0    0ns       0ns       119ns
1.debian.pool.ntp.org                 .POOL.           16 p  -    256  0    0ns       0ns       119ns
2.debian.pool.ntp.org                 .POOL.           16 p  -    256  0    0ns       0ns       119ns
3.debian.pool.ntp.org                 .POOL.           16 p  -    64   0    0ns       0ns       119ns
-229.191.57.185.no-ptr.as201971.net   .GPS.            1  u  25   64   177  65.754ms  26.539ms  7.7279ms
+ns3.turbodns.co.uk                   85.199.214.99    2  u  23   64   177  12.200ms  2.5267ms  1.5544ms
+time.cloudflare.com                  10.21.8.19       3  u  25   64   177  5.0848ms  2.6248ms  2.6293ms
-ntp1.wirehive.net                    202.70.69.81     2  u  21   64   177  9.6036ms  2.3986ms  1.9814ms
+ns4.turbodns.co.uk                   195.195.221.100  2  u  21   64   177  10.896ms  2.9528ms  1.5288ms
-lond-web-1.speedwelshpool.com        194.58.204.148   2  u  23   64   177  5.6202ms  5.8218ms  3.2582ms
-time.shf.uk.as44574.net              85.199.214.98    2  u  29   64   77   9.0190ms  4.9419ms  2.5810ms
lux.22pf.org                          .INIT.           16 u  -    64   0    0ns       0ns       119ns
ns1.thorcom.net                       .INIT.           16 u  -    64   0    0ns       0ns       119ns
time.cloudflare.com                   .INIT.           16 u  -    64   0    0ns       0ns       119ns
time.rdg.uk.as44574.net               .INIT.           16 u  -    64   0    0ns       0ns       119ns
-herm4.doylem.co.uk                   185.203.69.150   2  u  19   64   177  15.024ms  9.5098ms  3.2011ms
-213.251.53.217                       193.62.22.74     2  u  17   64   177  5.7211ms  1.4122ms  2.1895ms
*babbage.betadome.net                 85.199.214.99    2  u  20   64   177  4.8614ms  4.1187ms  2.5533ms
#
#
# ntpq -c nts
NTS client sends:            56
NTS client recvs good:       0
NTS client recvs w error:    0
NTS server recvs good:       0
NTS server recvs w error:    0
NTS server sends:            0
NTS make cookies:            0
NTS decode cookies:          0
NTS decode cookies old:      0
NTS decode cookies too old:  0
NTS decode cookies error:    0
NTS KE probes good:          8
NTS KE probes_bad:           0
NTS KE serves good:          0
NTS KE serves_bad:           0
#
I ran tcpdump filtering on TCP ports 1234 (Cloudflare) and 3443 (Netnod), and I can see my system trying to negotiate NTS with Cloudflare and Netnod, but both sessions get a TCP RST 🙁
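For reference, this is roughly the capture I mean (a minimal sketch; the ports come from the config above, adjust the interface to your setup):

# tcpdump -ni any 'tcp port 1234 or tcp port 3443'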

Need to carry on researching…
BPF – Linux
Last time I tried BPF it was via an Ubuntu VM prepared for BPF. But this week, checking another article, I realised that I can run BPF natively on my laptop!!!
So aptitude did the job installing the package; I didn't have to install a new kernel or any patch, so it was super easy, and I can see it is working based on the article:
# apt depends bpftrace
bpftrace
  Depends: libbpfcc (>= 0.12.0)
  Depends: libc6 (>= 2.27)
  Depends: libclang1-9 (>= 1:9~svn359771-1~)
  Depends: libgcc-s1 (>= 3.0)
  Depends: libllvm9 (>= 1:9~svn298832-1~)
  Depends: libstdc++6 (>= 5.2)
#
# dpkg -l | grep bpftrace
ii  bpftrace   0.11.0-1   amd64   high-level tracing language for Linux eBPF
#
# uname -a
Linux athens 5.8.0-1-amd64 #1 SMP Debian 5.8.7-1 (2020-09-05) x86_64 GNU/Linux
#
# bpftrace -e 'software:faults:1 { @[comm] = count(); }'
Attaching 1 probe...
^C

@[BatteryStatusNo]: 1
@[slack]: 52
@[Xorg]: 139
@[VizCompositorTh]: 455
@[Chrome_IOThread]: 463
@[ThreadPoolForeg]: 1305
@[CompositorTileW]: 2272
@[Compositor]: 3789
@[Chrome_ChildIOT]: 4610
@[chrome]: 8020
#
And I ran the same script:
# bpftrace bpftrace-example.bt
Attaching 2 probes...
Sampling CPU at 99hz... Hit Ctrl-C to end.
^C

@cpu:
[0, 1)    33 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[1, 2)    23 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                |
[2, 3)    31 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@    |
[3, 4)    23 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                |
#
Now I really need to play with it in my own system, no excuse...
Screen-Brightness
Another thing I realised lately was that my laptop screen was very dark, not bright at all compared to my external screen, so it was hard to use both. I use Debian Testing with LXDE, as it is quite light and I don't need anything as heavy as Gnome/KDE. So I struggled to adjust the brightness, but finally got it.
I had to try different programs but finally a blog showed all possibilities and found the one that works for me.
$ brightnessctl set 800 -d intel_backlight
The next thing was to be sure it stays effective after reboots… I am not sure this is a very clean solution, but I just added that command to my .bashrc. It works. Moving on.
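For reference, this is roughly the line I mean at the end of ~/.bashrc (device name taken from the command above; assumes your user is allowed to change the backlight):

# ~/.bashrc
brightnessctl set 800 -d intel_backlight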
VirtualBox-Python2-Debian-Dependencies
This week I realised that Debian was removing python2 support and surprisingly…. it was trying to remove VirtualBox from my system…
So it seems that VirtualBox is still depending on python2. A bit disappointing.
I am not really keen on VirtualBox, but I have had to use it lately for my Kubernetes training and for testing OpenBSD. I prefer using kvm/qemu. So I know I will have to work out how to do the kubernetes/bsd stuff outside VirtualBox…
Something I learned by the way was to check the dependencies of a package in Debian…. I guess it is about time.
apt-cache depends package-name
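For example, a quick sketch of what I mean (the package name here is just an illustration):

$ apt-cache depends virtualbox | grep -i python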
Drive
No, it is not about cars. I just finished reading Drive by Daniel Pink. I quite liked it, as it is mainly focused on daily working life. And at the end of the book you can find a summary of each chapter, plus specific advice for different circumstances.
The book is about what motivation is, what motivates us, etc. Funny enough, again, there is a reference to "Thinking, Fast and Slow" as proof that we are not as rational as we think when making decisions. There are also a lot of references to "flow" from Mihaly Csikszentmihalyi. Quite interesting and central to the book too.
Initially our motivations were survival and reproduction, like any other animal. That changed heavily with the Industrial Revolution and the move to a workforce based in offices, where the motivation was based on carrot/stick policies. That works for repetitive tasks but not for creative ones.
And I identify with that. I am looking for that motivation, that drive, in myself. I want to enjoy my job, I want to learn, I want to see things happening due to my actions. And I don't want a massive salary, nor bonuses, as they would be more of a burden than a help. Just a decent salary (I am not going to become rich working) so you can take the money off the table and focus on what is really fulfilling. But most work environments are not like this, although the book shows some specific places where they have applied a different approach and it has produced results. This one is quite radical and motivating.
Another thing I discovered in the book is the term "B corporations". Several links about it: definition1 definition2 example1 example2
So they are for-profit companies but with some soul. I really like it. And to be honest, as a consumer, I want to support that. Maybe even one day work in one of them, or even set one up (related to IT, but I have no idea).
The author says the new motivation/drive for this century is based on your personality. If you are not influenced much by external things, then your drive is based on: autonomy, mastery and purpose.
If your goal is external things (money, promotions, power, sex, etc.), maybe you will never have enough.
You want to take responsibility; to give your best you need to have a voice to choose how, when and with whom to achieve it. You want to master your task; that is never a quick path, but a slow and sometimes hard one, and that makes it worth it. And finally, you want to see a meaning in all of that.
If you have those 3 ingredients in your life (and they are not going to come to you by themselves), you are on a fulfilling trip.
Kubernetes Troubleshooting I
Restore ETCD
This is a process that is not well documented in the official docs, and one I messed up in my CKA exam:
1- check config of etcd process. Maybe you will need some details for the restore process
$ kubectl describe pod -n kube-system etcd-master
...
--name=master
--initial-cluster=master=https://127.0.0.1:2380
--initial-advertise-peer-urls=https://127.0.0.1:2380
...
2- Stop api-server if not running kubeadm
$ service kube-apiserver stop
3- Check help for all restore options. Keep in mind you will need (very likely) to provide certs for auth.
$ ETCDCTL_API=3 etcdctl snapshot restore -h
4- Restore ETCD using a previous backup:
$ ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 snapshot restore FILE \
    --cacert xxx --cert xxx --key xxx \
    --data-dir /NEW/DIR \
    --initial-cluster-token TOKEN \        (token is any word)
    --name master \
    --initial-cluster=master=https://127.0.0.1:2380 \
    --initial-advertise-peer-urls=https://127.0.0.1:2380
USE HTTPS!!!!
5- Add new lines and update volume paths in ETCD config. If it is a static pod, check in /etc/kubernetes/manifests in master node.
--data-dir=/NEW/DIR
--initial-cluster-token TOKEN
++ volumeMounts/volumes to the new path /NEW/DIR !!!!
6- Restart services if not running kubeadm
$ systemctl daemon-reload
$ service etcd restart
$ service kube-apiserver start
7- Checks
/// if using kubeadm, the docker instance for etcd should restart
$ docker ps -a | grep -i etcd

/// check etcd is running, showing its members:
$ ETCDCTL_API=3 etcdctl member list --cacert xxx --cert xxx --key xxx
Sidecar – logging
Based on this doc. You want to expose some log files as container logs, so you create a new container (a sidecar) that tails them.
Container with a sidecar:
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: sidecar-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
Now you can see the logs of “/var/log/1.log” going via “sidecar-1”
$ kubectl logs counter sidecar-1
CPU/Memory of a POD
Based on these links: link1 , link2, link3
If you want to use “kubectl top” you need to install “metrics-server”
$ kubectl top pod --all-namespaces
Keep in mind that “kubectl top” shows metrics for a given pod. That information is based on reports from cAdvisor, which collects real pods resource usage.
And as per link3, “kubectl top” is not the same as running “top” inside the container.
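A quick way to see the difference side by side (pod name is hypothetical; the second command only works if the image ships a "top" binary, e.g. busybox):

$ kubectl top pod mypod               # metrics-server / cAdvisor view
$ kubectl exec -it mypod -- top       # what the container itself sees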
Node NotReady
Based on this link:
$ kubectl get nodes
$ kubectl describe nodes XXX
$ ssh node   -> check the kubelet logs:
    cat /var/log/kubelet.log
    journalctl -u kubelet      // systemctl status kubelet --> if running as a service
Cocoa Peanut Butter
I like nuts a lot, and peanut butter can be a good snack before/after a workout, but a good one from the supermarket is not cheap. 500g of roasted/salted nuts is around £3, so it is easier to make it yourself and you know what is in it! I took inspiration from this blog post.
- 500g of peanuts (if possible unsalted)
- pinch of sea salt
- 2 tsp of coconut oil
- 50g 100% cocoa
- 2 tsp of caster sugar
In my case, I could only find salted peanuts, so I rinse them in water to remove the excess salt.
1- Roast the peanuts in a pre-heated (200C) oven for 5 minutes. Toss them and give them another couple of minutes. Be sure they don't burn! Let them cool for a bit until you can handle them.
2- Put the peanuts, salt, coconut oil, cocoa and sugar in the food processor. Run at full speed for several minutes. Depending on your taste, you can make it super smooth; in my case, I like it a bit crunchy. In the meantime, taste it in case you want to add anything else (salt, sugar, coconut oil, etc.).

CKA
I am studying for the Kubernetes certification CKA. These are some notes:
1- CORE CONCEPTS
1.1- Cluster Architecture
Master node: manage, plan, schedule and monitor. These are the main components:
- etcd: db as k-v
- scheduler
- controller-manager: node-controller, replication-controller
- apiserver: makes communications between all parts
- docker
Worker node: host apps as containers. Main components:
- kubelet (captain of the ship)
- kube-proxy: allow communication between nodes
1.2- ETCD
It is a distributed key-value store (database). TCP 2379. Stores info about nodes, pods, configs, secrets, accounts, roles, bindings etc. Everything related to the cluster.
Basic commands:
client:
  ./etcdctl set key1 value1
  ./etcdctl get key1
Install Manual:
1- wget "github binary path to etcd"
2- setup the config file: important "--advertise-client-urls: IP:2379"
   a lot of certs needed!!!
Install via kubeadm already includes etcd:
$ kubectl get pods -n kube-system | grep etcd

// get all keys from etcd
$ kubectl exec etcd-master -n kube-system etcdctl get / --prefix --keys-only
etcd can be set up as a cluster, but this is for another section.
1.3- Kube API Server
You can install a binary (like etcd) or use it via kubeadm.
It has many options and it defines certs for all connections!!!
1.4- Kube Controller-Manager
You can install a binary (like etcd) or use kubeadm. It gets all the info via the API server. Watch status of pods, remediate situations. Parts:
- node-controller
- replications-controller
1.5- Kube Scheduler
Decides which pod goes to which node. You can install a binary or via kubeadm.
1.6- Kubelet
It is like the “captain” of the “ship” (node). Communicates with the kube-cluster via the api-server.
Important: kubeadm doesn't install kubelet
1.7- Kube-Proxy
In a cluster, each pod can reach any other pod -> you need a pod network!
It runs on each node and creates rules on each node (iptables) so that "services" can be used.
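If you want to see those rules, a rough sketch (run as root on a node; the KUBE-SERVICES chain is what kube-proxy creates in iptables mode):

$ iptables -t nat -L KUBE-SERVICES | head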
1.8- POD
It is the smallest kube object.
1 pod =~ 1 container + help container
It can be created via a “kubectl run” or via yaml file.
apiVersion: v1
kind: Pod
metadata:
name: postgres-pod
labels:
name: postgres-pod
app: demo-voting-app
spec:
containers:
- name: postgres
image: postgres
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
value: "postgres"
- name: POSTGRES_PASSWORD
value: "postgres"
Commands:
$ kubectl create -f my-pod.yaml
$ kubectl get pods
$ kubectl describe pod postgres-pod
It always contains “apiVersion”, “kind”, “metadata” and “spec”.
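A shortcut I find handy (not from the notes above, just standard kubectl): generate the skeleton YAML instead of typing it by hand:

$ kubectl run postgres --image=postgres --dry-run=client -o yaml > my-pod.yaml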
1.9 ReplicaSet
Object in charge of monitoring pods, HA, load balancing, scaling. It is a replacement for the "replication-controller". Inside spec.template you "copy/paste" the pod definition.
The important part is “selector.matchLabels” where you decide what pods are going to be managed by this replicaset
Example:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: my-rs
labels:
app: myapp
spec:
replicas: 3
selector:   // matches pods created before the RS - main difference between RS and RC
matchLabels:
app: myapp   --> find labels from pods matching this
template:
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: nginx-controller
image: nginx
Commands:
$ kubectl create -f my-rs.yaml
$ kubectl get replicaset
$ kubectl scale --replicas=4 replicaset my-rs
$ kubectl replace -f my-rs.yaml
1.10- Deployments
It is an object that creates a pod + replicaset. It provides the upgrade (rolling updates) feature to the pods.
The file is identical to a RS; only the "kind" changes.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
labels:
app: myapp
spec:
replicas: 3
selector:   // matches pods created before the RS - main difference between RS and RC
matchLabels:
app: myapp   --> find labels from pods matching this
template:
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: nginx-controller
image: nginx
Commands:
$ kubectl create -f my-rs.yaml
$ kubectl get deployments
$ kubectl get replicaset
$ kubectl get pods
1.11- Namespace
It is a way to create different environments in the cluster. ie: production, testing, features, etc. You can control the resource allocations for the “ns”
By default you have 3 namespaces:
- kube-system: where all control-plane pods are installed
- default:
- kube-public:
The “ns” is used in DNS.
db-service.dev.svc.cluster.local
----------  ---  ---  ---------------
svc name    ns   type domain (default)

10-10-1-3.default.pod.cluster.local
---------  -------  ---  ---------------
pod IP     ns       type domain (default)
Keep in mind that POD DNS names are just the “IP” in “-” format.
You can add “namespace: dev” into the “metadata” section of yaml files. By default, namespace=default.
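For example, a minimal sketch of a pod pinned to the "dev" namespace:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx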
$ kubectl get pods --namespace=xx    (by default the "default" namespace is used)
Create “ns”:
namespace-dev.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: dev

$ kubectl create -f namespace-dev.yaml
or
$ kubectl create namespace dev
Change “ns” in your context if you dont want to type it in each kubectl command:
$ kubectl config set-context $(kubectl config current-context) -n dev
See all objects in all ns:
$ kubectl get pods --all-namespaces
$ kubectl get ns --no-headers | wc -l
1.12- Resource Quotas
You can limit the resources (cpu, memory, number of pods, etc.) that can be used in a namespace.
Example:
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-quota
namespace: dev
spec:
hard:
pods: "10"
requests.cpu: "4"
requests.memory: 5Gi
limits.cpu: "10"
limits.memory: 10Gi
Commands:
$ kubectl create -f compute-quota.yaml
1.13 Services
It is an object. It connects pods to external users or other pods.
Types:
- NodePort: like docker port-mapping
- ClusterIP: like a virtual IP that is reachable to all pods in the cluster.
- LoadBalancer: only available in Cloud providers
1.13.1 NodePort
Like a virtual server. SessionAffinity: yes. Random Algorithm for scheduling.
Important parts:
- targetport: This is the pod port.
- port: This is the service port (most of the times, it is the same as targetport).
- nodeport: This is in the node (the port other pods in different nodes are going to hit)
Example:
apiVersion: v1
kind: Service
metadata:
name: mypapp-service
spec:
type: NodePort
ports:
- targetPort: 80
port: 80
nodePort: 30080 (range: 30000-32767)
selector:
app: myapp ---|
type: front-end ---|-> matches pods !!!!
The important bits are the “spec.ports” and “spec.selector” definitions. The “selector” is used to match on labels from pods where we want to apply this service.
Commands:
// declarative
$ kubectl create -f service-definition.yml
$ kubectl get services

// imperative
$ kubectl expose deployment simple-webapp-deployment --name=webapp-service \
    --target-port=8080 --type=NodePort \
    --dry-run=client -o yaml > svc.yaml    --> create YAML !!!
Example of creating pod and service imperative way:
$ kubectl run redis --image=redis:alpine --labels=tier=db
$ kubectl expose pod redis --name redis-service --port 7379 --target-port 6379
1.13.2 ClusterIP
It is used for access to several pods (VIP). This is the default service type.
Example:
apiVersion: v1
kind: Service
metadata:
name: back-end
spec:
type: ClusterIP // (default)
ports:
- targetPort: 80
port: 80
selector:
app: myapp
type: back-end
Commands:
$ kubectl create -f service-definition.yml
$ kubectl get services
1.13.3 Service Bound
Whatever service type you use, you want to be sure it is actually in use; you can check that by seeing whether the service is bound to any pods. That is configured by the "selector", but to confirm it is correct, use the command below. You must have endpoints to prove your service is attached to some pods.
$ kubectl describe service XXX | grep -i endpoint
1.13.4 Microservice Architecture Example
Based on this “diagram”:
voting-app result-app
(python) (nodejs)
|(1) ^ (4)
v |
in-memoryDB db
(redis) (postgresql)
^ (2) ^ (3)
| |
------- -------
| |
worker
(.net)
These are the steps we need to define:
1- deploy containers -> deploy PODs (deployments)
2- enable connectivity -> create a ClusterIP service for redis
                          create a ClusterIP service for postgres
3- external access -> create a NodePort service for voting
                      create a NodePort service for result
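As a sketch, the imperative version of those steps could look roughly like this (image names are placeholders, not the real course images):

# 1- deployments
$ kubectl create deployment voting-app --image=VOTING_IMAGE
$ kubectl create deployment result-app --image=RESULT_IMAGE
$ kubectl create deployment redis --image=redis
$ kubectl create deployment db --image=postgres
$ kubectl create deployment worker --image=WORKER_IMAGE

# 2- internal connectivity (ClusterIP)
$ kubectl expose deployment redis --name=redis --port=6379
$ kubectl expose deployment db --name=db --port=5432

# 3- external access (NodePort)
$ kubectl expose deployment voting-app --name=voting-service --port=80 --type=NodePort
$ kubectl expose deployment result-app --name=result-service --port=80 --type=NodePort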
1.14- Imperative vs Declarative
imperative: how to do things (step by step)
$ kubectl run/create/expose/edit/scale/set ...
$ kubectl replace -f x.yaml      !!! x.yaml has been updated
declarative: just what to do (no how to do) –> infra as code / ansible, puppet, terraform, etc
$ kubectl apply -f x.yaml   <--- it creates/updates
1.15 – kubectl and options
--dry-run: By default, as soon as the command is run, the resource will be created. If you simply want to test your command, use the --dry-run=client option. This will not create the resource; instead, it tells you whether the resource can be created and if your command is right.

-o yaml: This will output the resource definition in YAML format on screen.

$ kubectl explain pod --recursive              ==> all options available
$ kubectl logs [-f] POD_NAME [CONTAINER_NAME]
$ kubectl -n prod exec -it PODNAME cat /log/app.log
$ kubectl -n prod logs PODNAME
1.16- Kubectl Apply
There are three types of files:
- local file: This is our yaml file
- live object config: This is the file generated via our local file and it is what you see when using “get”
- last applied config: This is used to find out when fields are REMOVED from the local file
"kubectl apply" compares the three files above to find out what to add/delete.
2- SCHEDULING
2.1- Manual Scheduling
- what to schedule? find pod without “nodeName” in the spec section, then finds a node for it.
- only add “nodeName” at creation time
- After creation, only via API call you can change that
Check you have a scheduler running:
$ kubectl -n kube-system get pods | grep -i scheduler
2.2 Labels and Selectors
- group and select things together.
- section “label” in yaml files
how to filter via cli:
$ kubectl get pods --selector key=value --selector k1=v1
$ kubectl get pods --selector key=value,k1=v1
$ kubectl get pods -l key=value -l k1=v1
In Replicasets/Services, the labels need to match!
--
spec:
replicas: 3
selector:
matchLabels:
app:App1 <----
template: |
metadata: |-- need to match !!!
labels: |
app:App1 <---
2.3 Taints and Tolerations
Set restrictions on which pods can go to which nodes. It doesn't tell the POD where to go!!!
- you set “taint” in nodes
- you set “tolerance” in pods
Commands:
$ kubectl taint nodes NODE_NAME key=value:taint-effect
$ kubectl taint nodes node1 app=blue:NoSchedule            <== apply
$ kubectl taint nodes node1 app=blue:NoSchedule-           <== remove (-) !!!
$ kubectl describe node node1 | grep -i taint              <== display taints
*taint-effect = what happens to PODs that do NOT tolerate this taint? Three types:
- NoSchedule
- PreferNoSchedule: will try to avoid placing the pod on the node, but no guarantee
- NoExecute: new pods will not be scheduled here, and current pods will be evicted if they don't tolerate the new taint. The node could already have pods from before the taint was applied…
Apply toleration in pod, in yaml, it is defined under “spec”:
spec:
tolerations:
- key: "app"
operator: "Equal"
value: "blue"
effect: "NoSchedule"
In general, the master node never gets pods (only the static pods for control-plane)
$ kubectl describe node X | grep -i taint
2.4 Node Selector
tell pods where to go (different for taint/toleration)
First, apply on a node a label:
$ kubectl label nodes NODE key=value
$ kubectl label nodes NODE size=Large
Second, apply on pod under “spec” the entry “nodeSelector”:
...
spec:
  nodeSelector:
    size: Large
2.5 Node Affinity
extension of "node selector" with "and"/"or" logic ==> more complex !!!!
Apply on the pod:
....
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution: or
preferredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: size
operator: In || NotIn || Exists
values:
- Large Small
- Medium
DuringScheduling: pod is being created
2.6 Resource Limits
A pod needs by default: cpu (0.5), mem (256Mi) and disk.
By default: max cpu = 1 // max mem = 512Mi
Important regarding going over the limit:
if a pod uses more cpu than the limit -> it gets throttled; more mem -> it gets terminated (OOM)
Example:
pod
---
spec:
containers:
resources:
requests:
memory: "1Gi"
cpu: 1
limits:
memory: "2Gi"
cpu: 2
2.7 DaemonSets
It is like a replicaset (only the kind changes). It runs 1 pod on each node: e.g. monitoring agents, log collectors, networking (weave-net), kube-proxy!!!
It uses NodeAffinity and default scheduler to schedule pods in nodes.
$ kubectl get daemonset
If you add a node, the daemonset creates the pod on it; if you delete the node, the pod is deleted too.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: monitoring-daemon
spec:
selector:
matchLabels:
app: monitoring-agent
template:
metadata:
labels:
app: monitoring-agent
spec:
containers:
- name: monitoring-agent
image: monitoring-agent
2.8 Static PODs
The kubelet on a node can create pods automatically using files in /etc/kubernetes/manifests. But it can't do replicasets, deployments, etc.
The path for the static pods folder is defined in kubelet config file
kubelet.service   <- config file
...
--config=kubeconfig.yaml \
or
--pod-manifest-path=/etc/kubernetes/manifests

kubeconfig.yaml
---
staticPodPath: /etc/kubernetes/manifests
You can check with "docker ps -a" on the master for docker containers running the static pods.
Static pods are mainly used by master nodes for installing the pods of the kube cluster itself (control-plane: controller, apiserver, etcd, ...).
Important:
- you can't delete static pods via kubectl, only by deleting the yaml file from the folder "/etc/kubernetes/manifests"
- the pods created via yaml in that folder will show up in "kubectl get pods" with "-master" appended to the name if created on the master node, or "-nodename" if on another node.
Comparison Static-Pod vs Daemon-Set
static pod                           vs   daemon-set
----------                                ----------
- created by kubelet                      - created by kube-api
- deploys control-plane components        - deploys monitoring, logging
  as static pods                            agents on nodes
- ignored by kube-scheduler               - ignored by kube-scheduler
2.9 Multiple Schedulers
You can write your own scheduler.
How to create it:
kube-scheduler.service
--scheduler-name= custom-scheduler
/etc/kubernetes/manifests/kube-scheduler.yaml --> copy and modify
--- (a scheduler is a pod!!!)
apiVersion: v1
kind: Pod
metadata:
name: my-custom-scheduler
namespace: kube-system
spec:
containers:
- command:
- kube-scheduler
- --address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=false
- --scheduler-name=my-custom-scheduler
- --lock-object-name=my-custom-scheduler
image: xxx
name: kube-scheduler
ports:
- containerPort: XXX
Assign new scheduler to pod:
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- image: nginx
name: nginx
schedulerName: my-custom-scheduler
How to see logs:
$ kubectl get events                                 ---> view scheduler logs
$ kubectl logs my-custom-scheduler -n kube-system
3- LOGGING AND MONITORING
Monitoring cluster components. There is nothing built-in (Oct 2018).
- pay: datadog, dynatrace
- Opensource Options: metrics server, prometheus, elastic stack, etc
3.1- metrics server
one per cluster. data kept in memory. kubelet (via cAdvisor) sends data to metric-server.
install:
> minikube addons enable metrics-server
// or in other envs:
git clone "github path to binary"
kubectl create -f deploy/1.8+/

view:
> kubectl top node
> kubectl top pod
4- APPLICATION LIFECYCLE MANAGEMENT
4.1- Rolling updates / Rollout
rollout -> a new revision. This is the reason you create “deployment” objects.
There are two strategies:
- recreate: destroy all, then create all -> outage! (scale to 0, then scale to X)
- rolling update (default): updates a few pods at a time -> no outage (it creates a new replicaset and then starts introducing the new pods)
How to apply a new version?
1) Declarative: make the change in the deployment yaml file
   $ kubectl apply -f x.yaml            (recommended)
or
2) Imperative:
   $ kubectl create deployment nginx-deploy --image=nginx:1.16
   $ kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record
How to check status of the rollout
status:   $ kubectl rollout status deployment/NAME
history:  $ kubectl rollout history deployment/NAME
rollback: $ kubectl rollout undo deployment/NAME
4.2- Application commands in Docker and Kube
From a “Dockerfile”:
---
FROM Ubuntu
ENTRYPOINT ["sleep"]   --> cli arguments are appended to the entrypoint
CMD ["5"]              --> if you don't pass any value in "docker run ..." it uses 5 by default
---
With the docker image created above, you can create a container like this:
$ docker run --name ubuntu-sleeper ubuntu-sleeper 10
So now, kubernetes yaml file:
apiVersion: v1
kind: Pod
metadata:
name: ubuntu-sleeper-pod
spec:
containers:
- name: ubuntu-sleeper
image: ubuntu-sleeper
command: ["sleep","10"] --> This overrides ENTRYPOINT in docker
args: ["10"] --> This overrides CMD [x] in docker
["--color=blue"]
4.3- Environment variables
You define them inside the spec.containers.container section:
spec:
containers:
- name: x
image: x
ports:
- containerPort: x
env:
- name: APP_COLOR
value: pink
4.4- ConfigMap
Defining env vars can be tedious, so config maps are the way to manage them a bit better. You don't have to define all env vars in each pod… just one entry now.
First, create configmap object:
imperative
$ kubectl create configmap NAME \
    --from-literal=KEY=VALUE \
    --from-literal=KEY2=VALUE2
or
$ kubectl create configmap NAME --from-file=FILE_NAME

FILE_NAME
key1: val1
key2: val2

declarative
$ kubectl create -f cm.yaml
$ kubectl get configmaps

cat app-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  KEY1: VAL1
  KEY2: VAL2
Apply configmap to a container in three ways:
1) Via "envFrom": all vars
spec:
containers:
- name: xxx
envFrom: // all values
- configMapRef:
name: app-config
2) Via "env", to import only specific vars
spec:
containers:
- name: x
image: x
ports:
- containerPort: x
env:
- name: APP_COLOR   -- get one var from a configmap, don't import everything
valueFrom:
configMapKeyRef:
name: app-config
key: APP_COLOR
3) Volume:
volumes:
- name: app-config-volume
configMap:
name: app-config
Check “explain” for more info:
$ kubectl explain pods --recursive | grep envFrom -A3
4.5- Secrets
This is encoded in base64, so not really secure. It just avoids having sensitive info in clear text…
A secret is only sent to a node if a pod on that node requires it.
Kubelet stores the secret into a tmpfs so that the secret is not written to disk storage. Once the Pod that depends on the secret is deleted, kubelet will delete its local copy of the secret data as well:
https://kubernetes.io/docs/concepts/configuration/secret/#risks
How to create secrets:
imperative
$ kubectl create secret generic NAME \
    --from-literal=KEY=VAL \
    --from-literal=KEY2=VAL2
or
$ kubectl create secret generic NAME --from-file=FILE

cat FILE
DB_Pass: password

declarative
$ kubectl create -f secret.yaml

cat secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  DB_Pass: HASH    <---- $ echo -n 'password' | base64       // ENCODE !!!!
                         $ echo -n 'HASH' | base64 --decode  // DECODE !!!!
You can apply secrets in three ways:
1) as "envFrom" to import all params from secret object spec: containers: - name: xxx envFrom: - secretRef: name: app-secret 2) Via "env" to declare only one secret param spec: containers: - name: x image: x env: name: APP_COLOR valueFrom: secretKeyRef: name: app-secret key: DB_password 3) Volumes: spec: containers: - command: ["sleep", "4800"] image: busybox name: secret-admin volumeMounts: - name: secret-volume mountPath: "/etc/secret-volume" readOnly: true volumes: - name: secret-volume secret: secretName: app-secret --> each key from the secret file is created as a file in the volume. The content of the file is the secret. $ ls -ltr /etc/secret-volume DB_Host DB_User DB_Password
4.6- Multi-container Pods
Scenarios where your app needs an agent, ie: web server + log agent
apiVersion: v1
kind: Pod
metadata:
name: simple-webapp
labels:
name: simple-webapp
spec:
containers:
- name: simple-webapp
image: simple-webapp
ports:
- containerPort: 8080
- name: log-agent
image: log-agent
4.7- Init Container
You use an init container when you want to setup something before the other containers are created. Once the initcontainers complete their job, the other containers are created.
An initContainer is configured in a pod like all other containers, except that it is specified inside an initContainers section.
You can configure multiple such initContainers as well, like we did for multi-container pods. In that case each init container is run one at a time, in sequential order.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
initContainers:
- name: init-myservice
image: busybox
command: ['sh', '-c', 'git clone <some-repository-that-will-be-used-by-application> ;']
containers:
- name: myapp-container
image: busybox:1.28
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
5- CLUSTER MAINTENANCE
5.1- Drain Node
If you need to upgrade/reboot a node, you need to move the pods to somewhere else to avoid an outage.
Commands:
$ kubectl drain NODE     -> pods are moved to other nodes and it doesn't receive anything new
$ kubectl uncordon NODE  -> node can receive pods again
$ kubectl cordon NODE    -> it doesn't drain the node, it just makes the node not receive new pods
"kube-controller-manager" checks the status of the nodes. By default, it takes 5 minutes to mark a node down:
$ kube-controller-manager --pod-eviction-timeout=5m0s   (by default)
   time the master waits for a node to come back up
5.2- Kubernetes upgrade
You need to check the version you are running:
$ kubectl get nodes   --> version: v_major.minor.patch
Important: kube only supports the last two versions from the current one, i.e.:
current v1.12 -> supports v1.11 and v1.10 ==> v1.9 is not supported!!!
Important: nothing can be higher version than kube-apiserver, ie:
kube-apiserver = x (v1.10)
- controller-manager, kube-scheduler can be x or x-1       (v1.10, v1.9)
- kubelet, kube-proxy can be x, x-1 or x-2                 (v1.10, v1.9, v1.8)
- kubectl can be x+1, x or x-1 !!!
Upgrade path: one minor upgrade at each time: v1.9 -> v1.10 -> v1.11 etc
Summary Upgrade:
1- upgrade master node
2- upgrade worker nodes (modes):
   - all nodes at the same time, or
   - one node at a time, or
   - add new nodes with the new sw version, move pods to them, delete the old nodes
5.2.1- Upgrade Master
From v1.11 to v1.12
$ kubeadm upgrade plan                    --> it gives you the info for the upgrade
$ apt-get update
$ apt-get install -y kubeadm=1.12.0-00
$ kubeadm upgrade apply v1.12.0
$ kubectl get nodes                       (it gives you the version of kubelet!!!!)
$ apt-get upgrade -y kubelet=1.12.0-00    // you need to do this if you have "master" in "kubectl get nodes"
$ systemctl restart kubelet
$ kubectl get nodes                       --> you should see "master" with the new version 1.12
5.2.2- Upgrade Worker
From v1.11 to v1.12
master:                             node-1:
---------------------               -----------------------
$ kubectl drain node-1
                                    apt-get update
                                    apt-get install -y kubeadm=1.12.0-00
                                    apt-get install -y kubelet=1.12.0-00
                                    kubeadm upgrade node \
                                      [config --kubelet-version v1.12.0]
                                    systemctl restart kubelet
$ kubectl uncordon node-1
$ apt-mark hold package
5.3- Backup Resources
$ kubectl get all --all-namespaces -o yaml > all-deploy-service.yaml
There are other tools like “velero” from Heptio that can do it. Out of scope for CKA.
5.4- Backup/Restore ETCD – Difficult
"etcd" is important because it stores all the cluster info.
The difficult part is to get the certificates parameters to get the etcd command working.
– You can get some clues from the static pod definition of etcd:
/etc/kubernetes/manifests/etcd.yaml: Find under exec.command
– or do a ps -ef | grep -i etcd and see the parameters used by other commands
verify command:
ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    --endpoints=127.0.0.1:2379 member list

create backup:
ETCDCTL_API=3 etcdctl snapshot save SNAPSHOT-BACKUP.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.crt \
    --cert=/etc/etcd/etcd-server.crt \
    --key=/etc/etcd/etcd-server.key

verify backup:
ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    --endpoints=127.0.0.1:2379 \
    snapshot status PATH/FILE -w table
Summary:
etcd backup:
1- documentation: find the basic command for the API version
2- ps -ef | grep etcd   --> get the path for the certificates
3- run the command
4- verify the backup
5.4.1- Restore ETCD
// 1- Stop api server
$ service kube-apiserver stop

// 2- apply etcd backup
$ ETCDCTL_API=3 etcdctl snapshot restore SNAPSHOT-BACKUP.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.crt \
    --cert=/etc/etcd/etcd-server.crt \
    --key=/etc/etcd/etcd-server.key \
    --data-dir /var/lib/etcd-from-backup \
    --initial-cluster master-1=https://127.0.0.1:2380,master-2=https://x.x.x.y:2380 \
    --initial-cluster-token NEW_TOKEN \
    --name=master \
    --initial-advertise-peer-urls https://127.0.0.1:2380

// 3- Check backup folder
$ ls -ltr /var/lib/etcd-from-backup    -> you should see a folder "member"

// 4- Update the etcd static pod manifest. The changes will apply immediately as it is a static pod
$ vim /etc/kubernetes/manifests/etcd.yaml
...
--data-dir=/var/lib/etcd-from-backup        (update this line with the new path)
--initial-cluster-token=NEW_TOKEN           (add this line)
...
volumeMounts:
- mountPath: /var/lib/etcd-from-backup      (update this line with the new path)
  name: etcd-data
...
volumes:
- hostPath:
    path: /var/lib/etcd-from-backup         (update this line with the new path)
    type: DirectoryOrCreate
  name: etcd-data

// 5- Reload services
$ systemctl daemon-reload
$ service etcd restart
$ service kube-apiserver start
Important: In cloud envs like AWS or GCP you don't have access to etcd…
6- SECURITY
6.1- Security Primitives
kube-apiserver:
  who can access:   files, certs, ldap, service accounts
  what can they do: RBAC authorization, ABAC authorization
6.2- Authentication
Kubectl :
users: admin, devs                    --> kubectl can't create these accounts
service accounts: 3rd parties (bots)  --> kubectl can create these accounts
You can use a static file for authentication – NOT RECOMMENDED
password file x.csv: password, user, uid, gid  --> --basic-auth-file=x.csv
token file token.csv: token, user, uid, gid    --> --token-auth-file=token.csv
Use of auth files in kube-api config:
kube-apiserver.yaml
---
spec:
  containers:
  - command:
    ...
    - --basic-auth-file=x.csv
    // or
    - --token-auth-file=x.csv
Use of auth in API calls:
$ curl -v -k https://master-node-ip:6443/api/v1/pods -u "user1:password1"
$ curl -v -k https://master-node-ip:6443/api/v1/pods \
    --header "Authorization: Bearer TOKEN"
6.3- TLS / Generate Certs
openssl commands to create required files:
gen key:     openssl genrsa -out admin.key 2048
gen pub key: openssl rsa -in admin.key -pubout > mybank.pem
gen csr:     openssl req -new -key admin.key -out admin.csr \
               -subj "/CN=kube-admin/O=system:masters"
             (admin, scheduler, controller-manager, kube-proxy, etc)
Generate cert with SAN:
0) Gen key:
   openssl genrsa -out apiserver.key 2048

1) Create openssl.cnf with the SAN info:
   [req]
   req_extensions = v3_req
   [v3_req]
   basicConstraints = CA:FALSE
   keyUsage = nonRepudiation
   subjectAltName = @alt_names
   [alt_names]
   DNS.1 = kubernetes
   DNS.2 = kubernetes.default
   IP.1 = 10.96.1.1
   IP.2 = 172.16.0.1

2) Gen CSR:
   openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -out apiserver.csr -config openssl.cnf

3) Sign the CSR with the CA:
   openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -out apiserver.crt
Self-Signed Cert: Sign the CSR with own key to generate the cert:
$ openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt
Use certs to query the API:
$ curl https://kube-apiserver:6443/api/v1/pods --key admin.key --cert admin.crt --cacert ca.crt
Kube-api server config related to certs…:
--etcd-cafile=
--etcd-certfile=
--etcd-keyfile=
...
--kubelet-certificate-authority=
--kubelet-client-certificate=
--kubelet-client-key=
...
--client-ca-file=
--tls-cert-file=
--tls-private-key-file=
...
Kubelet-nodes:
server cert name => kubelet-nodeX.crt, kubelet-nodeX.key
client cert name => Group: SYSTEM:NODES, name: system:node:node0X
kubeadm can generate all certs for you:
cat /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - --client-ca-file=
    - --etcd-cafile
    - --etcd-certfile
    - --etcd-keyfile
    - --kubelet-client-certificate
    - --kubelet-client-key
    - --tls-cert-file
    - --tls-private-key-file
How to check CN, SAN and date in cert?
$ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
Where you check if there are issues with certs in a core service:
if installed manually:    > journalctl -u etcd.service -l
if installed via kubeadm: > kubectl logs etcd-master
6.4- Certificates API
Generating certificates is quite cumbersome, so kubernetes has a Certificates API to generate the certs for users, etc.
How to create a certificate for a user:
1) gen key for the user
   openssl genrsa -out new-admin.key 2048

2) gen csr for the user
   openssl req -new -key new-admin.key -subj "/CN=jane" -out new-admin.csr

3) create a "CertificateSigningRequest" kubernetes object:
   cat new-admin-csr.yaml
   ---
   apiVersion: certificates.k8s.io/v1beta1
   kind: CertificateSigningRequest
   metadata:
     name: jane
   spec:
     groups:
     - system:authenticated
     usages:
     - digital signature
     - key encipherment
     - server auth
     request: (cat new-admin.csr | base64)

   kubectl create -f new-admin-csr.yaml

4) approve the new certificate; it can't be done automatically:
   kubectl get csr
   kubectl certificate approve jane

5) show the certificate to send to the user
   kubectl get csr jane -o yaml   --> take the "certificate:" field and (echo ".." | base64 --decode)
The certs used by CA API are in controller-manager config file:
kube-controller-manager.yaml
--cluster-signing-cert-file=
--cluster-signing-key-file=
6.5- Kubeconfig
kubectl queries the API whenever you run a command and it uses certs. You don't have to type the certs every time because they are configured in the kubectl config at $HOME/.kube/config.
The kubeconfig file has three sections: clusters, users and contexts (which join users with clusters). And you can have several of each one.
kubeconfig example:
apiVersion: v1
kind: Config
current-context: dev-user@gcp            // example: user@cluster

clusters:                                ///
- name: my-kube-playground
  cluster:
    certificate-authority: PATH/ca.crt
    // or
    certificate-authority-data: $(cat ca.crt | base64)
    server: https://my-kube-playground:6443

contexts:                                /// user@cluster
- name: my-kube-admin@my-kube-playground
  context:
    cluster: my-kube-playground
    user: my-kube-admin
    namespace: production

users:                                   //
- name: my-kube-admin
  user:
    client-certificate: PATH/admin.crt
    client-key: PATH/admin.key
    // or
    client-certificate-data: $(cat admin.crt | base64)
    client-key-data: $(cat admin.key | base64)
You can test other user certs:
$ curl https://kube-apiserver:6443/api/v1/pods --key admin.key \
    --cert admin.crt --cacert ca.crt

$ kubectl get pods --server my-kube-playground:6443 \
    --client-key admin.key \
    --client-certificate admin.crt \
    --certificate-authority ca.crt
Use and view kubeconfig file:
$ kubectl get pods [--kubeconfig PATH/FILE]
$ kubectl config view [--kubeconfig PATH/FILE]    <-- show the kubectl config file
$ kubectl config use-context prod-user@prod       <-- changes current-context in the file too!
6.6- API groups
This is a basic diagram of the API. The main thing is the difference between "/api" (the core group) and "/apis" (the named API groups):
/metrics  /healthz  /version  /logs

/api  (core)
  /v1
    namespaces, pods, rc, pv, pvc, binding, ...

/apis (named)
  /apps  /extensions  ...            (api groups)
    /v1
      /deployments  /replicasets     (resources)
        - list, get, create, delete, update   (verbs)
You can reach the API via curl but using the certs…
$ curl https://localhost:6443 -k --key admin.key --cert admin.crt --cacert ca.crt
$ curl https://localhost:6443/apis -k | grep "name"
You can make your life easier using "kubectl proxy", which uses the kubectl credentials to access the kube-api:
$ kubectl proxy   -> launches a proxy on port 8001 so you avoid auth on each request, as it uses the creds from the kubeconfig file
$ curl http://localhost:8001 -k
Important:
kube-proxy                                            != kubectl proxy
(service running on each node for pod connectivity)      (local proxy to reach the kube-api)
6.7- Authorization
What you can do. There are several methods to arrange authorization:
- Node authorizer: (defined in the certificate: Group: SYSTEM:NODES, CN: system:node:node01)
- ABAC (Attribute Based Access Control): difficult to manage; each user has a policy…
  {"kind": "Policy", "spec": {"user": "dev-user", "namespace": "", "resource": "pods", "apiGroup": ""}}
- RBAC (Role Based Access Control): the most standard usage. Create roles, assign users to roles.
- Webhook: use an external 3rd party, e.g. "Open Policy Agent"
- AlwaysAllow, AlwaysDeny
You define the method in the kubeapi config file:
--authorization-mode=AlwaysAllow             (default)
or
--authorization-mode=Node,RBAC,Webhook       (each mode is tried in order for every request until one allows it)
6.8- RBAC
You need to define a Role and a RoleBinding (who uses which role) object. These are "namespaced".
dev-role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev
  namespace: xxx
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get", "create", "update", "delete"]
  resourceNames: ["blue", "orange"]   <--- if you want to filter at pod level too: only access to blue, orange
- apiGroups: [""]
  resources: ["configMap"]
  verbs: ["create"]

$ kubectl create -f dev-role.yaml

dev-binding.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-binding
  namespace: xxx
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev
  apiGroup: rbac.authorization.k8s.io

$ kubectl create -f dev-binding.yaml
Info about roles/rolebind:
$ kubectl get roles
$ kubectl get rolebindings
$ kubectl describe role dev
$ kubectl describe rolebinding dev-binding
Important: How to test the access of a user?
$ kubectl auth can-i create deployments [--as dev-user] [-n prod]
$ kubectl auth can-i update pods
$ kubectl auth can-i delete nodes
6.9- Cluster Roles
This is for cluster-scoped (non-namespaced) resources: nodes, pv, csr, namespaces, cluster-roles, cluster-role-bindings.
You can see the full list for each with:
$ kubectl api-resources --namespaced=true/false
The process is the same, we need to define a cluster role and a cluster role binding:
cluster-admin-role.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-administrator
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list", "get", "create", "delete"]

cluster-admin-role-bind.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-role-bind
subjects:
- kind: User
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-administrator
  apiGroup: rbac.authorization.k8s.io
Important: you can also create a "cluster role" for a namespaced resource like pods; in that case it gives the user access to that resource in all namespaces. A sketch follows below.
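A minimal sketch of what that could look like (names made up, read-only verbs):

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-all-ns
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-reader-all-ns-bind
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader-all-ns
  apiGroup: rbac.authorization.k8s.io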
6.10- Images Security
Secure access to images used by pods. An image can be in docker, google repo, etc
image: docker.io/nginx/nginx
         |        |     |
      registry   user  image
               account

from google: gcr.io/kubernetes-e2e-test-images/dnsutils
You can use a private repository:
$ docker login private.io
user:
pass:
$ docker run private.io/apps/internal-app
How to define a private registry in kubectl:
kubectl create secret docker-registry regcred \
    --docker-server= \
    --docker-username= \
    --docker-password= \
    --docker-email=
How to use a specific registry in a pod?
spec:
  containers:
  - name: nginx
    image: private.io/apps/internal-app
  imagePullSecrets:
  - name: regcred
6.11- Security Contexts
Like in docker, you can assign security params (like user, group id, etc) in kube containers. You can set the security params at pod or container level:
at pod level:
---
spec:
  securityContext:
    runAsUser: 1000

at container level:
---
spec:
  containers:
  - name: ubuntu
    securityContext:
      runAsUser: 100          (user id)
      capabilities:           <=== ONLY AT CONTAINER LEVEL!
        add: ["MAC_ADMIN"]
6.12- Network Policies
This is like a firewall, an iptables-style implementation for access control at the network level. Regardless of the network plugin, all pods in a namespace can reach any other pod (without adding any route into the pod).
Network policies are supported in kube-router, calico, romana and weave-net. It is not supported in flannel (yet)
You have ingress (traffic received by a pod) and egress (traffic generated by a pod) rules. You match the rule to a pod using labels with podSelector:
networkpolicy: apply a network rule on pods with label role:db to allow only traffic
from pods with label name:api-pod into port 3306
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: api-pod
    ports:
    - protocol: TCP
      port: 3306

$ kubectl apply -f xxx
6.13- Commands: kubectx / kubens
I haven't seen any lab requiring them. They are not needed for the exam, but maybe useful in real environments.
Kubectx reference: https://github.com/ahmetb/kubectx

With this tool, you don't have to make use of lengthy "kubectl config" commands to switch between contexts. This tool is particularly useful to switch context between clusters in a multi-cluster environment.

Installation:
sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx

Kubens: this tool allows users to switch between namespaces quickly with a simple command.

sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
7- STORAGE
7.1- Storage in Docker
In docker, containers and images live under /var/lib/docker.
Docker follows a layered architecture (each line in Dockerfile is a layer):
$ docker build   --> read-only (image layer)
$ docker run     --> new layer: it is rw (container layer) - lost once the container finishes
So docker follows a "copy-on-write" strategy by default. If you want to be able to access that storage after the container is destroyed, you can use volumes:
> docker volume create data_volume        --> /var/lib/docker/volumes/data_volume
> docker run -v data_volume:/var/lib/mysql mysql
     --> volume mounting -> dir created in the docker folders
> docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql
     --> path (bind) mounting, dir not created in the docker folders

volume drivers:  local, azure, gce, aws ebs, glusterfs, vmware, etc
storage drivers: enable the layered architecture: aufs, zfs, btrfs, device mapper, overlay, overlay2
7.2- Volumes, PersistentVolumes and PV claims.
Volume: Data persistence after container is destroyed
spec:
  containers:
  - image: alpine
    volumeMounts:
    - mountPath: /opt
      name: data-volume       ==> /data on the host -> alpine:/opt
  volumes:
  - name: data-volume
    hostPath:
      path: /data
      type: Directory
Persistent volumes: cluster pool of volumes that users can request part of it
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
spec:
  accessModes:
  - ReadWriteOnce              (ReadOnlyMany, ReadWriteMany)
  capacity:
    storage: 1Gi
  hostPath:
    path: /tmp/data
  persistentVolumeReclaimPolicy: Retain   (default) [Delete, Recycle]

$ kubectl create -f xxx
$ kubectl get persistentvolume [pv]
PV claims: a request for (part of) a PV. Each PVC is bound to one PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi

$ kubectl create -f xxx
$ kubectl get persistentvolumeclaim [pvc]   ==> if the status is "Bound" you have matched a PV
Use a PVC in a pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
Important: a PVC will bind to one PV that fits its requirements. Use "get pvc" to check the status.
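Roughly what you would expect to see when it has matched (values here are purely illustrative):

$ kubectl get pvc myclaim
NAME      STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myclaim   Bound    pv-vol1   1Gi        RWO                           10s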
7.3- Storage Class
dynamic provisioning of storage in clouds:
sc-definition -> pvc-definition -> pod-definition ==> we don't need a pv-definition! It is created automatically.
Example:
sc-definition
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcp-storage                 <===========1
provisioner: kubernetes.io/gce-pd
parameters:                         (depend on the provider!!!!)
  type:
  replication-type:

pvc-def
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim                     <=========2
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gcp-storage     <======1
  resources:
    requests:
      storage: 500Mi

use pvc in pod
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd                    <=======3
  volumes:
  - name: mypd                      <========3
    persistentVolumeClaim:
      claimName: myclaim            <===========2
8- NETWORKING
8.1 Linux Networking Basics
$ ip link                                        (show interfaces)
$ ip addr add 192.168.1.10/24 dev eth0
$ route
$ ip route add 192.168.2.0/24 via 192.168.1.1
$ ip route add default via 192.168.1.1           (default = 0.0.0.0/0)

// enabling forwarding
$ echo 1 > /proc/sys/net/ipv4/ip_forward
$ vim /etc/sysctl.conf
  net.ipv4.ip_forward = 1
8.2 Linux DNS basics
$ cat /etc/resolv.conf
nameserver 192.168.1.1
search mycompany.com prod.mycompany.com

$ nslookup x.x.x.x
$ dig
8.3 Linux Namespace
// create ns
ip netns add red
ip netns add blue
ip netns                                    (list ns)
ip netns exec red ip link                   // ip -n red link
ip netns exec red arp

// create a virtual ethernet pair between the ns and assign each end to them
ip link add veth-red type veth peer name veth-blue     (ip -n red link del veth-red)
ip link set veth-red netns red
ip link set veth-blue netns blue

// assign IPs to each end of the veth
ip -n red addr add 192.168.1.11 dev veth-red
ip -n blue addr add 192.168.1.12 dev veth-blue

// enable links
ip -n red link set veth-red up
ip -n blue link set veth-blue up

// test connectivity
ip netns exec red ping 192.168.1.2

======

// create bridge
ip link add v-net-0 type bridge
// enable bridge
ip link set dev v-net-0 up
// (ip -n red link del veth-red)

// create and attach links to the bridge from each ns
ip link add veth-red type veth peer name veth-red-br
ip link add veth-blue type veth peer name veth-blue-br
ip link set veth-red netns red
ip link set veth-red-br master v-net-0
ip link set veth-blue netns blue
ip link set veth-blue-br master v-net-0
ip -n red addr add 192.168.1.11 dev veth-red
ip -n blue addr add 192.168.1.12 dev veth-blue
ip -n red link set veth-red up
ip -n blue link set veth-blue up
ip addr add 192.168.1.1/24 dev v-net-0
ip netns exec blue ip route add 192.168.2.0/24 via 192.168.1.1
ip netns exec blue ip route add default via 192.168.1.1
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -j MASQUERADE
iptables -t nat -A PREROUTING --dport 80 --to-destination 192.168.1.11:80 -j DNAT
8.4 Docker Networking
Three types:
- none: no connectivity
- host: shares the host network
- bridge: an internal network is created and the host is attached to it
    docker network ls --> "bridge"  -|
                                      |-> these are the same thing
    ip link            --> docker0  -|
    iptables -t nat -A DOCKER -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.11:80
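To see the three modes in action (a sketch; container names, image and ports are just examples):

$ docker run -d --name web1 --network none nginx      # no connectivity
$ docker run -d --name web2 --network host nginx      # shares the host's network stack
$ docker run -d --name web3 -p 8080:80 nginx          # default bridge, port published on the host
$ docker network ls
$ ip link show docker0
$ iptables -t nat -L DOCKER -n                         # the DNAT rule created for -p 8080:80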
8.5 Container Network Interface
The container runtime must create the network namespace:
- identify the network the container must attach to
- the container runtime invokes the network plugin (bridge) when a container is added/deleted
- the network config is in JSON format

CNI: the plugin
- must support the command line arguments add/del/check
- must support the parameters container id, network ns
- must manage IP assignment and return the results in a specific format

**docker is not a CNI** kubernetes uses docker: the container is created on the "none" network and then the CNI plugin ("bridge") is invoked.
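To picture that "JSON format of network config", here is a sketch of a bridge-plugin config with host-local IPAM (file name, network name and subnet are made up):

$ cat /etc/cni/net.d/10-bridge.conf
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "v-net-0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24",
    "routes": [ { "dst": "0.0.0.0/0" } ]
  }
}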
8.6 Cluster Networking
Most common ports:
etcd: 2379 (client), 2380 (peer-to-peer)
kube-api: 6443
kubelet: 10250
kube-scheduler: 10251
kube-controller: 10252
services (NodePort): 30000-32767
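Quick way to double-check what is actually listening on a node (nothing kubernetes-specific, just ss):

$ ss -tlnp | grep 6443      # kube-apiserver
$ ss -tlnp | grep 2379      # etcd client port (on a master)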
Configure weave-network:
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
$ kubectl get pod -n kube-system | grep -i weave (one per node)
cluster-networking doc: it doesn't give you steps to configure any CNI….
8.7 Pod Networking
- every pod should have an IP.
- every pod should be able to communicate with every other pod in the same node and in other nodes (without NAT).
Networking config in kubelet:
--cni-conf-dir=/etc/cni/net.d
--cni-bin-dir=/etc/cni/bin
./net-script.sh add <container> <namespace>
8.8 CNI Weave-net
Installs an agent on each node; deployed as pods (a DaemonSet) on the nodes.
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
$ kubectl get pods -n kube-system | grep weave-net
ipam weave:
Where do pods and bridges get their IPs from? From the IPAM plugin; e.g. host-local -> hands out free IPs from the node's range.
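To check the range weave is actually using in a cluster (hedged: IPALLOC_RANGE is the env var the weave-net manifest normally uses, and 10.32.0.0/12 is its default if unset):

$ ip addr show weave                                                       # the bridge weave creates on the node
$ kubectl -n kube-system describe daemonset weave-net | grep -i ipalloc    # IPALLOC_RANGE if set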
8.9 Service Networking
A "service" is a cluster-wide object. The service has an IP. kube-proxy on each node creates the iptables rules.
ClusterIP: IP reachable by all pods in the cluster
$ ps -ef | grep kube-apiserver        # check --service-cluster-ip-range=x.x.x.x/y
   !! the pod network shouldn't overlap with the service cluster range
$ iptables -L -t nat | grep xxx
$ cat /var/log/kube-proxy.log
NodePort: same port in all nodes, sent to the pod
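For reference, a minimal NodePort service (names and ports made up):

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080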
IPs for pods: check the logs of the weave pod:
$ kubectl -n kube-system logs weave-POD weave   --> the pod has two containers so you need to specify one of them
IPs for services -> check the kube-api-server config
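On a kubeadm cluster (assuming the usual static pod manifest path) that boils down to:

$ grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml
$ kubectl get svc -A        # the ClusterIPs come out of that range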
8.10 CoreDNS
For pods and services in the cluster (nodes are managed externally)
kube dns records:

    hostname      namespace   type   root            ip address
    web-service   apps        svc    cluster.local   x.x.x.x (service)
    10-244-2-5    default     pod    cluster.local   x.x.x.y (pod)

    fqdn: web-service.apps.svc.cluster.local
          10-244-2-5.default.pod.cluster.local
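Quick way to test those records from inside the cluster (a sketch; the busybox image and the service name are just examples):

$ kubectl run -it test --image=busybox --rm --restart=Never -- nslookup web-service.apps.svc.cluster.local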
The DNS implementation in kubernetes uses CoreDNS (two pods for HA).
$ cat /etc/coredns/Corefile
.:53 {
    errors                      # plugins
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure           # create the record for a pod as 10-2-3-1 instead of 10.2.3.1
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf    # for external queries (google.com) from a pod
    cache 30
    reload
}

$ kubectl get configmap -n kube-system
pods dns config:
$ cat /etc/resolv.conf
    nameserver <IP>    <- it is the IP from: $ kubectl get service -n kube-system | grep dns

This comes from the kubelet config, /var/lib/kubelet/config.yaml:
    clusterDNS:
      - 10.96.0.10

$ host ONLY_FQDN
8.11 Ingress
Using a service of type "LoadBalancer" is only possible in cloud environments like GCP, AWS, etc.
When you create a LoadBalancer service, the cloud provider creates a proxy/load balancer to access that service, so you can end up with a hierarchy of load balancers in the cloud provider… -> too complex ==> solution: Ingress
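For completeness, this is all it takes to ask the cloud provider for one (a sketch; names made up):

apiVersion: v1
kind: Service
metadata:
  name: wear-service
spec:
  type: LoadBalancer
  selector:
    app: wear
  ports:
    - port: 80
      targetPort: 80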
ingress = controller + resources. Not deployed by default
supported controllers: GCP HTTPS Load Balancer (GCE) and NGINX (the ones maintained by the kubernetes project)
8.11.1 Controller
1) nginx --> deployment file:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443

2) nginx configmap used in the deployment
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration

3) service
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
  selector:
    name: nginx-ingress

4) service account (auth): roles, clusterroles, rolebindings, etc
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
8.11.2 Options to deploy ingress rules
option 1) 1 rule / 1 backend: in this case the selector from the service gives us the pod

ingress-wear.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear
spec:
  backend:
    serviceName: wear-service
    servicePort: 80

option 2) split traffic via URL: 1 rule / 2 paths
    www.my-online-store.com/wear  -> nginx -> wear-service  -> wear-pod
    www.my-online-store.com/watch -> nginx -> watch-service -> watch-pod

ingress-wear-watch.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear-watch
spec:
  rules:
    - http:
        paths:
          - path: /wear
            backend:
              serviceName: wear-service
              servicePort: 80
          - path: /watch
            backend:
              serviceName: watch-service
              servicePort: 80

$ kubectl describe ingress NAME  ==> watch out for the default backend!!!! If nothing matches, traffic goes there, so you need to define a default backend.

option 3) split by hostname: 2 rules / 1 path each
    wear.my-online-store.com  -> nginx -> wear-service  -> wear-pod
    watch.my-online-store.com -> nginx -> watch-service -> watch-pod

ingress-wear-watch.yaml
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-wear-watch
spec:
  rules:
    - host: wear.my-online-store.com
      http:
        paths:
          - backend:
              serviceName: wear-service
              servicePort: 80
    - host: watch.my-online-store.com
      http:
        paths:
          - backend:
              serviceName: watch-service
              servicePort: 80
ingress examples: https://kubernetes.github.io/ingress-nginx/examples/
8.12 Rewrite
I haven't seen any question about this in the mock labs, but just in case: rewriting URLs with nginx.
For example: replace(path, rewrite-target)
    http://<ingress-service>:<ingress-port>/wear --> http://<wear-service>:<port>/
In our case: replace("/wear", "/")

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: critical-space
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /wear
            backend:
              serviceName: wear-service
              servicePort: 8282

With a regex: replace("/something(/|$)(.*)", "/$2")

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
    - host: rewrite.bar.com
      http:
        paths:
          - backend:
              serviceName: http-svc
              servicePort: 80
            path: /something(/|$)(.*)
9- Troubleshooting
9.1 App failure
- make an application diagram
- test the services: curl, kubectl describe service (compare with the yaml)
- pod status (restarts), describe pod, pod logs (-f)
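A concrete run of that checklist (names are made up):

$ kubectl describe svc web-service        # does the selector match the pod labels? right targetPort?
$ kubectl get pods                        # check the RESTARTS column
$ kubectl describe pod web-pod
$ kubectl logs web-pod -f                 # follow current logs
$ kubectl logs web-pod --previous         # logs of the last crashed container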
9.2 Control plane failure
- get nodes, get pods -n kube-system
- master:
    service kube-apiserver/kube-controller-manager/kube-scheduler status
    kubeadm install: kubectl logs kube-apiserver-master -n kube-system
    service install: sudo journalctl -u kube-apiserver
- worker: service kubelet/kube-proxy status
- Are there static pods configured in the kubelet config?
    1. check /etc/systemd/system/kubelet.service.d/10-kubeadm.conf for the config file
    2. check the static pod path in the kubelet config
9.3 Worker node failure
- get nodes, describe node X (check the status/conditions)
- top, df -h, service kubelet status, kubelet certificates, is the kubelet service running?
- kubectl cluster-info
10- JSONPATH
10.1 Basics
$ = root dictionary
Results are always in []   // a list

$.car.price -> [1000]
---
{
  "car": { "color": "blue", "price": "1000" },
  "bus": { "color": "red",  "price": "1200" }
}

$[0] -> ["car"]
---
[ "car", "bus", "bike" ]

$[?(@>40)] == get all numbers greater than 40 in the array -> [45, 60]
---
[ 12, 45, 60 ]

$.car.wheels[?(@.location == "xxx")].model

// find the prize winner named Malala
$.prizes[?(@)].laureates[?(@.firstname == "Malala")]

wildcard
---
$[*].model
$.*.wheels[*].model

// find the first names of all winners of year 2014
$.prizes[?(@.year == 2014)].laureates[*].firstname

lists
---
$[0:3]   (start:end)       -> 0,1,2 (first 3 elements)
$[0:8:2] (start:end:step)  -> 0, 0+2=2, 2+2=4, 4+2=6 -> elements in positions 0,2,4,6
$[-1:0] = last element
$[-1:]  = last element
$[-3:]  = last 3 elements
10.2 Jsonpath in Kubernetes
$ kubectl get pods -o json

$ kubectl get nodes -o=jsonpath='{.items[*].metadata.name}{"\n"}{.items[*].status.capacity.cpu}'
master node01
4 4

$ kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}{end}'
master 4
node01 4

$ kubectl get nodes -o=custom-columns=NODE:.metadata.name,CPU:.status.capacity.cpu
NODE     CPU
master   4
node01   4

$ kubectl get nodes --sort-by=.metadata.name

$ kubectl config view --kubeconfig=/root/my-kube-config -o=jsonpath='{.users[*].name}' > /opt/outputs/users.txt

$ kubectl config view --kubeconfig=my-kube-config -o jsonpath="{.contexts[?(@.context.user=='aws-user')].name}" > /opt/outputs/aws-context-name
11- Install, Config and Validate Kube Cluster
All based on this.
11.1- Basics
education: minikube, kubeadm, gcp, aws
on-prem: kubeadm
laptop:
    minikube: deploys VMs (that are ready) - single node cluster
    kubeadm: requires the VMs to be ready - single/multi node cluster
turnkey solutions: you provision, configure and maintain the VMs, and use scripts to deploy the cluster (KOPS in AWS)
    ie: openshift (redhat), Vagrant, VMware PKS, Cloud Foundry
hosted solutions (kubernetes as a service): the provider provisions and maintains the VMs and installs kubernetes
    ie: GKE in GCP
11.2 HA for Master
api-server --> needs an LB (active-active)

controller-manager / scheduler --> active-passive via leader election:
$ kube-controller-manager --leader-elect true [options] \
    --leader-elect-lease-duration 15s \
    --leader-elect-renew-deadline 10s \
    --leader-elect-retry-period 2s

etcd: inside the masters (2 nodes total) or on separate nodes (4 nodes total)
11.3 HA for ETCD
The leader etcd node handles the writes and sends the info to the others.
Leader election - RAFT: quorum = n/2 + 1 -> minimum number of nodes that must accept a transaction for it to be successful.
Recommended: 3 etcd nodes minimum => ODD NUMBER

$ export ETCDCTL_API=3
$ etcdctl put key value
$ etcdctl get key
$ etcdctl get / --prefix --keys-only
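Working the quorum formula out (integer division) shows why an even number of nodes buys you no extra fault tolerance:

nodes   quorum (n/2 + 1)   fault tolerance
1       1                  0
2       2                  0
3       2                  1
4       3                  1
5       3                  2
6       4                  2
7       4                  3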
11.4 Lab Deployment
LAB setup (5 nodes):
    1 LB
    2 master nodes (with etcd)
    2 worker nodes
    weave-net

> download the latest kubernetes release from github
> uncompress
> cd kubernetes
> cluster/get-kube-binaries.sh        --> downloads the latest binaries for your system
> cd server; tar -zxvf server-linux-xxx
> ls kubernetes/server/bin

Plan:
1- deploy the etcd cluster
2- deploy the control plane components (api-server, controller-manager, scheduler)
3- configure haproxy (in front of the api-servers)

              haproxy
                 |
      -------------------------
      |                       |
    M1:                     M2:
      api                     api
      etcd                    etcd
      controller-manager      controller-manager
      scheduler               scheduler

    W1 (manual certs):        W2 (TLS Bootstrap):
      gen certs                 - w2 creates and configures the certs itself
      config kubelet            - config kubelet
      renew certs manually      - w2 renews the certs by itself
      config kube-proxy         - config kube-proxy

TLS bootstrap:
1- in the master:
   - create a bootstrap token and associate it to the group "system:bootstrappers"
   - assign the role "system:node-bootstrapper" to the group "system:bootstrappers"
   - assign the role "system:certificates.k8s.io:certificatesigningrequests:nodeclient" to the group "system:bootstrappers"
   - assign the role "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" to the group "system:nodes"
2- kubelet.service:
   --bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig"   // this is for getting the certs to join the cluster!!
   --rotate-certificates=true          // this is for the client certs used to join the cluster (automatic CSR approval)
   --rotate-server-certificates=true   // these are the certs we created in the master and copied to the worker manually;
                                       // the server cert requires manual CSR approval!!!
   > kubectl get csr
   > kubectl certificate approve csr-XXX

bootstrap-kubeconfig
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/ca.crt
    server: https://192.168.5.30:6443      # (api-server LB IP)
  name: bootstrap
contexts:
- context:
    cluster: bootstrap
    user: kubelet-bootstrap
  name: bootstrap
current-context: bootstrap
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: XXXXXXXXXX
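A sketch of step 1 on the master with plain kubectl (the binding names and the token id/secret are made up; the role and group names are the ones listed above):

# bootstrap token (a Secret of type bootstrap.kubernetes.io/token in kube-system)
$ kubectl -n kube-system create secret generic bootstrap-token-07401b \
    --type=bootstrap.kubernetes.io/token \
    --from-literal=token-id=07401b \
    --from-literal=token-secret=f395accd246ae52d \
    --from-literal=usage-bootstrap-authentication=true \
    --from-literal=auth-extra-groups=system:bootstrappers:worker

# let the bootstrappers group create CSRs and get them auto-approved, and let nodes renew
$ kubectl create clusterrolebinding create-csrs-for-bootstrapping \
    --clusterrole=system:node-bootstrapper --group=system:bootstrappers
$ kubectl create clusterrolebinding auto-approve-csrs-for-group \
    --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
    --group=system:bootstrappers
$ kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
    --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
    --group=system:nodes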
11.5 Testing
11.5.1 manual test
$ kubectl get nodes
$ kubectl get pods -n kube-system      (coredns, etcd, kube-apiserver, controller-manager, proxy, scheduler, weave)
$ service kube-apiserver status        (same for kube-controller-manager, kube-scheduler, kubelet, kube-proxy)
$ kubectl run nginx
$ kubectl get pods
$ kubectl scale --replicas=3 deploy/nginx
$ kubectl get pods
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get service
$ curl http://worker-1:31850
11.5.2 kubetest
end to end test: 1000 tests (12h) // conformance: 160 tests (1.5h)
1- prepare: creates a namespace for this test
2- creates test pods in this namespace and waits for the pods to come up
3- test: executes curl on one pod to reach the IP of another pod over http
4- records the result

$ go get -u k8s.io/test-infra/kubetest
$ kubetest --extract=v1.11.3      (your kubernetes version)
$ cd kubernetes
$ export KUBE_MASTER_IP="192.168.26.10:6443"
$ export KUBE_MASTER=kube-master
$ kubetest --test --provider=skeleton > test-out.txt                                                // takes 12 hours
$ kubetest --test --provider=skeleton --test_args="--ginkgo.focus=[Conformance]" > testout.txt      // takes 1.5 hours

$ kubeadm join 172.17.0.93:6443 --token vab2bs.twzblu86r60qommq \
    --discovery-token-ca-cert-hash sha256:3c9b88fa034a6f894a21e49ea2e2d52435dd71fa5713f23a7c2aaa83284b6700