{"id":402,"date":"2020-09-01T13:17:33","date_gmt":"2020-09-01T12:17:33","guid":{"rendered":"https:\/\/blog.thomarite.uk\/?p=402"},"modified":"2020-09-03T16:47:16","modified_gmt":"2020-09-03T15:47:16","slug":"cka-p1","status":"publish","type":"post","link":"https:\/\/blog.thomarite.uk\/index.php\/2020\/09\/01\/cka-p1\/","title":{"rendered":"CKA"},"content":{"rendered":"\n<p>I am studying for the Kubernetes certification CKA. These are some notes:<\/p>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">1- CORE CONCEPTS<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">1.1- Cluster Architecture<\/h2>\n\n\n\n<p><strong>Master node<\/strong>: manage, plan, schedule and monitor. These are the main components:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>etcd<\/strong>: db as k-v<\/li><li><strong>scheduler<\/strong><\/li><li><strong>controller-manager<\/strong>: node-controller, replication-controller<\/li><li><strong>apiserver<\/strong>: makes communications between all parts<\/li><li>docker<\/li><\/ul>\n\n\n\n<p><strong>Worker node<\/strong>: host apps as containers. Main components:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>kubelet<\/strong> (captain of the ship)<\/li><li><strong>kube-proxy<\/strong>: allow communication between nodes<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">1.2- ETCD<\/h2>\n\n\n\n<p>It is a distributed key-value store (database). TCP <strong>2379<\/strong>. Stores info about nodes, pods, configs, secrets, accounts, roles, bindings etc. 
Everything related to the cluster.<\/p>\n\n\n\n<p>Basic commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">client: .\/etcdctl set key1 value1\n        .\/etcdctl get key1 <\/pre>\n\n\n\n<p>Manual install:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">1- wget \"github binary path to etcd\"\n2- setup config file: important \"--advertise-client-urls: IP:2379\"\n                      a lot of certs needed!!!<\/pre>\n\n\n\n<p>Install via kubeadm already includes etcd:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get pods -n kube-system | grep etcd\n\n\/\/ get all keys from etcd\n$ kubectl exec etcd-master -n kube-system etcdctl get \/ --prefix --keys-only<\/pre>\n\n\n\n<p>etcd can be set up as a cluster, but this is for another section.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1.3- Kube API Server<\/h2>\n\n\n\n<p>You can install a binary (like etcd) or use it via kubeadm.<\/p>\n\n\n\n<p>It has many options and <em><strong>it defines certs for all connections<\/strong><\/em>!!!<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1.4- Kube Controller-Manager<\/h2>\n\n\n\n<p>You can install a binary (like etcd) or use kubeadm. It gets all its info via the API server. It watches the status of pods and remediates situations. Parts:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>node-controller<\/li><li>replication-controller<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">1.5- Kube Scheduler<\/h2>\n\n\n\n<p>Decides which pod goes to which node. You can install a binary or via kubeadm.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1.6- Kubelet<\/h2>\n\n\n\n<p>It is like the &#8220;captain&#8221; of the &#8220;ship&#8221; (node). Communicates with the kube-cluster via the api-server.<\/p>\n\n\n\n<p><strong>Important<\/strong>:<em> kubeadm doesn&#8217;t install kubelet<\/em><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1.7- Kube-Proxy<\/h2>\n\n\n\n<p>In a cluster, each pod can reach any other pod -&gt; you need a pod network!<\/p>\n\n\n\n<p>It runs in each node. 
Creates rules in each node (iptables) to use &#8220;services&#8221;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1.8- POD<\/h2>\n\n\n\n<p>It is the smallest kube object.<\/p>\n\n\n\n<p>1 pod =~ 1 container + helper container<\/p>\n\n\n\n<p>It can be created via a &#8220;kubectl run&#8221; or via <strong>yaml<\/strong> file.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: v1\nkind: Pod\nmetadata:\n  name: postgres-pod\n  labels:\n    name: postgres-pod\n    app: demo-voting-app\nspec:\n  containers:\n    - name: postgres\n      image: postgres\n      ports:\n        - containerPort: 5432\n      env:\n        - name: POSTGRES_USER\n          value: \"postgres\"\n        - name: POSTGRES_PASSWORD\n          value: \"postgres\"<\/code><\/pre>\n\n\n\n<p>Commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl create -f my-pod.yaml\n$ kubectl get pods\n$ kubectl describe pod postgres<\/pre>\n\n\n\n<p>It <strong>always<\/strong> contains &#8220;apiVersion&#8221;, &#8220;kind&#8221;, &#8220;metadata&#8221; and &#8220;spec&#8221;.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1.9 ReplicaSet<\/h2>\n\n\n\n<p>Object in charge of monitoring pods, HA, loadbalancing, scaling. It is a replacement for the &#8220;replication-controller&#8221;. 
<em>Inside the spec.template you &#8220;copy\/paste&#8221; the pod definition.<\/em><\/p>\n\n\n\n<p>The important part is &#8220;<strong>selector.matchLabels<\/strong>&#8221;, where you decide what pods are going to be managed by this replicaset.<\/p>\n\n\n\n<p>Example:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: apps\/v1\nkind: ReplicaSet\nmetadata:\n  name: my-rs\n  labels:\n    app: myapp\nspec:\n  replicas: 3\n  selector: \/\/ can match pods created before the RS - main difference between RS\n                                                                      and RC\n    matchLabels:\n      app: myapp   --> find labels from pods matching this\n  template:\n    metadata:\n      name: myapp-pod\n      labels:\n        app: myapp\n    spec:\n      containers:\n      - name: nginx-controller\n        image: nginx<\/code><\/pre>\n\n\n\n<p>Commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl create -f my-rs.yaml\n$ kubectl get replicaset\n$ kubectl scale --replicas=4 replicaset my-rs\n$ kubectl replace -f my-rs.yaml<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">1.10- Deployments<\/h2>\n\n\n\n<p>It is an object that creates a pod + replicaset. 
It provides the upgrade (rolling updates) feature to the pods.<\/p>\n\n\n\n<p>The file is identical to a RS; only the &#8220;kind&#8221; changes.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: my-deployment\n  labels:\n    app: myapp\nspec:\n  replicas: 3\n  selector: \/\/ can match pods created before the RS - main difference between RS\n                                                                   and RC\n    matchLabels:\n      app: myapp   --> find labels from pods matching this\n  template:\n    metadata:\n      name: myapp-pod\n      labels:\n        app: myapp\n    spec:\n      containers:\n      - name: nginx-controller\n        image: nginx<\/code><\/pre>\n\n\n\n<p>Commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl create -f my-deployment.yaml\n$ kubectl get deployments\n$ kubectl get replicaset\n$ kubectl get pods<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">1.11- Namespace<\/h2>\n\n\n\n<p>It is a way to create different environments in the cluster, ie: production, testing, features, etc. You can control the resource allocations for the &#8220;ns&#8221;.<\/p>\n\n\n\n<p>By default you have 3 namespaces:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>kube-system: where all control-plane pods are installed<\/li><li>default:<\/li><li>kube-public:<\/li><\/ul>\n\n\n\n<p>The &#8220;ns&#8221; is used in <strong>DNS<\/strong>.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">db-service.dev.svc.cluster.local\n---------  --- ---  -----------\nsvc name   ns  type domain(default)\n\n10-10-1-3.default.pod.cluster.local\n--------- ---     ---  -----------\npod IP    ns      type  domain(default)<\/pre>\n\n\n\n<p>Keep in mind that <strong>POD DNS names <\/strong>are just the &#8220;IP&#8221; in &#8220;-&#8221; format.<\/p>\n\n\n\n<p>You can add &#8220;namespace: dev&#8221; into the &#8220;metadata&#8221; section of yaml files. 
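The DNS naming scheme for services and pods can be sketched in plain shell; a hedged sketch where `db-service`, the `dev` namespace and the pod IP are illustrative values, not taken from a real cluster:

```shell
# Build the service FQDN: <svc-name>.<ns>.svc.<domain>
SVC=db-service
NS=dev
DOMAIN=cluster.local
SVC_FQDN="${SVC}.${NS}.svc.${DOMAIN}"
echo "$SVC_FQDN"

# Pod DNS names are just the pod IP with the dots replaced by dashes
POD_IP=10.10.1.3
POD_FQDN="$(echo "$POD_IP" | tr '.' '-').default.pod.${DOMAIN}"
echo "$POD_FQDN"
```

Running this prints `db-service.dev.svc.cluster.local` and `10-10-1-3.default.pod.cluster.local`, matching the two name shapes above.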
By default, namespace=default.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get pods --namespace=xx (by default the \"default\" namespace is used)<\/pre>\n\n\n\n<p>Create &#8220;ns&#8221;:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">namespace-dev.yaml\n---\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: dev\n\n$ kubectl create -f namespace-dev.yaml\nor\n\n$ kubectl create namespace dev<\/pre>\n\n\n\n<p>Change the &#8220;ns&#8221; in your context if you don&#8217;t want to type it in each kubectl command:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl config set-context $(kubectl config current-context) -n dev\n<\/pre>\n\n\n\n<p>See all objects in all ns:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get pods --all-namespaces\n\n$ kubectl get ns --no-headers | wc -l<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">1.12- Resource Quotas<\/h2>\n\n\n\n<p>You can cap the aggregate resources (number of pods, cpu, memory, etc.) used in a namespace.<\/p>\n\n\n\n<p>Example:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: v1\nkind: ResourceQuota\nmetadata:\n name: compute-quota\n namespace: dev\nspec:\n hard:\n   pods: \"10\"\n   requests.cpu: \"4\"\n   requests.memory: 5Gi\n   limits.cpu: \"10\"\n   limits.memory: 10Gi<\/code><\/pre>\n\n\n\n<p>Commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl create -f compute-quota.yaml<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">1.13 Services<\/h2>\n\n\n\n<p>It is an object. It connects pods to external users or other pods.<\/p>\n\n\n\n<p>Types:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>NodePort: like docker port-mapping<\/li><li>ClusterIP: like a virtual IP that is reachable by all pods in the cluster.<\/li><li>LoadBalancer: only available in Cloud providers<\/li><\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">1.13.1 NodePort<\/h3>\n\n\n\n<p>Like a virtual server. SessionAffinity: yes. Random Algorithm for scheduling. 
<\/p>\n\n\n\n<p>Important parts:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>targetport: This is the pod port.<\/li><li>port: This is the service port (most of the time it is the same as targetport).<\/li><li>nodeport: This is the port opened on the node (the port that external clients or pods on other nodes are going to hit)<\/li><\/ul>\n\n\n\n<p>Example:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: v1\nkind: Service\nmetadata:\n  name: myapp-service\nspec:\n  type: NodePort\n  ports:\n  - targetPort: 80\n    port: 80\n    nodePort: 30080  (range: 30000-32767)\n  selector:\n    app: myapp        ---|\n    type: front-end   ---|-> matches pods !!!!\n<\/code><\/pre>\n\n\n\n<p>The <strong>important<\/strong> bits are the &#8220;spec.<strong>ports<\/strong>&#8221; and &#8220;spec.<strong>selector<\/strong>&#8221; definitions. The &#8220;selector&#8221; is used to match on labels from the pods where we want to apply this service.<\/p>\n\n\n\n<p>Commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">\/\/ declarative\n$ kubectl create -f service-definition.yml\n$ kubectl get services\n\n\/\/ imperative\n$ kubectl expose deployment simple-webapp-deployment --name=webapp-service --target-port=8080 --type=NodePort \\\n--dry-run=client -o yaml &gt; svc.yaml --&gt; create YAML !!!<\/pre>\n\n\n\n<p>Example of creating a pod and service the imperative way:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl run redis --image=redis:alpine --labels=tier=db\n$ kubectl expose pod redis --name redis-service --port 7379 --target-port 6379<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">1.13.2 ClusterIP<\/h3>\n\n\n\n<p>It is used for access to several pods (VIP). 
This is the default service type.<\/p>\n\n\n\n<p>Example:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: v1\nkind: Service\nmetadata:\n  name: back-end\nspec:\n  type: ClusterIP \/\/ (default)\n  ports:\n  - targetPort: 80\n    port: 80\n  selector:\n    app: myapp\n    type: back-end<\/code><\/pre>\n\n\n\n<p>Commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl create -f service-definition.yml\n$ kubectl get services<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">1.13.3 Service Bound<\/h3>\n\n\n\n<p>Whatever service type you use, you want to be sure it is actually in use: check that the service is <strong>bound<\/strong> to pods. The binding is configured by the &#8220;<strong>selector<\/strong>&#8221;, but to confirm it is correct, use the command below. The service must have endpoints to prove it is attached to some pods.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl describe service XXX | grep -i endpoint<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">1.13.4 Microservice Architecture Example<\/h3>\n\n\n\n<p>Based on this &#8220;diagram&#8221;:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>voting-app     result-app\n (python)       (nodejs)\n   |(1)           ^ (4)\n   v              |\nin-memoryDB       db\n (redis)       (postgresql)\n    ^ (2)         ^ (3)\n    |             |\n    ------- -------\n          | |\n         worker\n          (.net)<\/code><\/pre>\n\n\n\n<p>These are the steps we need to define:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">1- deploy containers   -&gt; deploy PODs (deployment)\n2- enable connectivity -&gt; create service clusterIP for redis\n                          create service clusterIP for postgres\n3- external access     -&gt; create service NodePort for voting\n                          create service NodePort for result<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">1.14- Imperative vs Declarative<\/h2>\n\n\n\n<p><strong>imperative<\/strong>: how to do things (step by 
step)<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl run\/create\/expose\/edit\/scale\/set \u2026\n$ kubectl replace -f x.yaml !!! x.yaml has been updated<\/pre>\n\n\n\n<p><strong>declarative<\/strong>: just what to do (not how to do it) &#8211;&gt; infra as code \/ ansible, puppet, terraform, etc<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl apply -f x.yaml &lt;--- it creates\/updates<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">1.15 &#8211; kubectl and options<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>--dry-run<\/strong>: By default, as soon as the command is run, the resource will be created. If you simply want to test your command, use the --dry-run=client option. This will not create the resource; instead, it tells you whether the resource can be created and if your command is right.\n\n<strong>-o yaml<\/strong>: This will output the resource definition in YAML format on screen.\n\n$ kubectl <strong>explain<\/strong> pod <strong>--recursive<\/strong> ==&gt; all options available\n\n$ kubectl <strong>logs<\/strong> [-f] POD_NAME [CONTAINER_NAME]\n\n$ kubectl -n prod exec -it PODNAME cat \/log\/app.log\n$ kubectl -n prod logs PODNAME<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">1.16- Kubectl Apply<\/h2>\n\n\n\n<p>There are three types of files:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>local file<\/strong>: This is our yaml file<\/li><li><strong>live object config<\/strong>: This is the object generated from our local file; it is what you see when using &#8220;get&#8221;<\/li><li><strong>last applied config:<\/strong> This is used to find out when fields are <strong>REMOVED<\/strong> from the local file<\/li><\/ul>\n\n\n\n<p>&#8220;kubectl apply&#8221; compares the three files above to find out what to add\/delete.<\/p>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">2- SCHEDULING<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">2.1- Manual Scheduling<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li>what to schedule? The scheduler finds pods without &#8220;nodeName&#8221; in the spec section, then finds a node for them.<\/li><li>you can only add &#8220;nodeName&#8221; at creation time<\/li><li>After creation, you can only change that via an API call<\/li><\/ul>\n\n\n\n<p>Check you have a scheduler running:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl -n kube-system get pods | grep -i scheduler<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">2.2 Labels and Selectors<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li>group and select things together.<\/li><li>section &#8220;label&#8221; in yaml files<\/li><\/ul>\n\n\n\n<p>how to filter via cli:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get pods --selector key=value --selector k1=v1\n$ kubectl get pods --selector key=value,k1=v1\n$ kubectl get pods -l key=value -l k1=v1<\/pre>\n\n\n\n<p>In Replicasets\/Services, the labels need to match!<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>--\nspec:\n replicas: 3\n selector:\n  matchLabels:\n    app:App1 &lt;----\n template:       |\n   metadata:     |-- need to match !!!\n    labels:      |\n     app:App1 &lt;---<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">2.3 Taints and Tolerations<\/h2>\n\n\n\n<p>set restrictions to check what pods can go to nodes. It doesn&#8217;t tell the POD where to go!!!<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>you set a &#8220;taint&#8221; on nodes<\/li><li>you set a &#8220;toleration&#8221; on pods<\/li><\/ul>\n\n\n\n<p>Commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl taint nodes NODE_NAME key=value:taint-effect\n$ kubectl taint nodes node1 app=blue:NoSchedule &lt;== apply\n$ kubectl taint nodes node1 app=blue:<strong>NoSchedule-<\/strong> &lt;== remove(-) !!!\n$ kubectl describe node node1 | grep Taints &lt;== display taints<\/pre>\n\n\n\n<p>*taint-effect = what happens to PODS that DO NOT tolerate this taint? 
Three types:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">- NoSchedule:\n- PreferNoSchedule: will try to avoid placing the pod on the node, but no guarantee\n- NoExecute: new pods will not be scheduled here, and current pods will be evicted if they don't tolerate the new taint. The node could already have pods from before the taint was applied\u2026<\/pre>\n\n\n\n<p>Apply a toleration in a pod; in yaml, it is defined under &#8220;spec&#8221;:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>spec:\n tolerations:\n - key: \"app\"\n   operator: \"Equal\"\n   value: \"blue\"\n   effect: \"NoSchedule\"<\/code><\/pre>\n\n\n\n<p>In general, the master node never gets pods (only the static pods for the control-plane)<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl describe node X | grep -i taint<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">2.4 Node Selector<\/h2>\n\n\n\n<p>tells pods where to go (the opposite direction of taint\/toleration)<\/p>\n\n\n\n<p>First, apply a label on a node:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl label nodes NODE key=value\n$ kubectl label nodes NODE size=Large<\/pre>\n\n\n\n<p>Second, apply on the pod under &#8220;spec&#8221; the entry &#8220;nodeSelector&#8221;:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">...\nspec:\n  nodeSelector:\n    size: Large<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">2.5 Node Affinity<\/h2>\n\n\n\n<p>extension of &#8220;node selector&#8221; with &#8220;and&#8221; \/ &#8220;or&#8221; logic ==&gt; more complex !!!!<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apply on pod:\n....\nspec:\n affinity:\n   nodeAffinity:\n    requiredDuringSchedulingIgnoredDuringExecution:  or \n    preferredDuringSchedulingIgnoredDuringExecution:\n      nodeSelectorTerms:\n      - matchExpressions:\n        - key: size\n          operator: In    ||   NotIn   ||    Exists\n          values:\n          - Large              Small\n          - Medium<\/code><\/pre>\n\n\n\n<p>DuringScheduling: while the pod is being created. IgnoredDuringExecution: pods that are already running are not affected if node labels change later.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">2.6 Resource Limits<\/h2>\n\n\n\n<p>Pod requests by default: cpu(0.5), mem(256Mi) and disk<\/p>\n\n\n\n<p>By default: max cpu = 1 \/\/ max mem = 512Mi<\/p>\n\n\n\n<p><strong>Important<\/strong> regarding going over the limit:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">if pod uses more <strong>cpu<\/strong> than limit -&gt; <strong>throttle<\/strong>\n                 <strong>mem<\/strong>            -&gt; <strong>terminate<\/strong> (OOM)<\/pre>\n\n\n\n<p>Example:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>pod\n---\nspec:\n  containers:\n    resources:\n      requests:\n        memory: \"1Gi\"\n        cpu: 1\n      limits:\n        memory: \"2Gi\"\n        cpu: 2<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">2.7 DaemonSets<\/h2>\n\n\n\n<p>It is like a replicaset (only the kind changes). Runs 1 pod on each node: ie monitoring, logs viewer, networking (weave-net), kube-proxy!!!<\/p>\n\n\n\n<p>It uses NodeAffinity and the default scheduler to schedule pods on nodes.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get daemonset<\/pre>\n\n\n\n<pre class=\"wp-block-preformatted\">if you <strong>add<\/strong>    a node, the daemonset <strong>creates<\/strong> that pod\n       <strong>delete<\/strong>                       <strong>deletes<\/strong><\/pre>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: apps\/v1\nkind: DaemonSet\nmetadata:\n  name: monitoring-daemon\nspec:\n  selector:\n    matchLabels:\n      app: monitoring-agent\n  template:\n    metadata:\n      labels:\n        app: monitoring-agent\n    spec:\n      containers:\n      - name: monitoring-agent\n        image: monitoring-agent<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">2.8 Static PODs<\/h2>\n\n\n\n<p>kubelet in a node can create pods using files in \/etc\/kubernetes\/manifests automatically. 
<strong>But<\/strong>, it can&#8217;t do replicasets, deployments, etc.<\/p>\n\n\n\n<p>The path for the static pods folder is defined in the kubelet config file:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">kubelet.service &lt;- config file\n...\n--config=kubeconfig.yaml \\ or\n--pod-manifest-path=\/etc\/kubernetes\/manifests\n\n\nkubeconfig.yaml\n---\nstaticPodPath: \/etc\/kubernetes\/manifests<\/pre>\n\n\n\n<p>You can check with &#8220;docker ps -a&#8221; in the master for docker images running the static pods.<\/p>\n\n\n\n<p>Static pods are mainly used by master nodes for installing the pods related to the kube cluster (control-plane: controller, apiserver, etcd, ..)<\/p>\n\n\n\n<p><strong>Important<\/strong>: <\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>you can&#8217;t delete static pods via kubectl. Only by deleting the yaml file from the folder &#8220;\/etc\/kubernetes\/manifests&#8221;<\/li><li>the pods created via yaml in that folder will have &#8220;-master&#8221; appended to their name in &#8220;kubectl get pods&#8221; if they run on the master node, or &#8220;-NODENAME&#8221; on any other node.<\/li><\/ul>\n\n\n\n<p>Comparison Static-Pod vs Daemon-Set:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>static pod           vs          daemon-set\n----------                       -----------\n- created by kubelet              - created by kube-api\n- deploy control-plane components - deploy monitoring, logging\n    as static pods                     agents on nodes\n- ignored by kube-scheduler       - ignored by kube-scheduler<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">2.9 Multiple Schedulers<\/h2>\n\n\n\n<p>You can write your own scheduler.<\/p>\n\n\n\n<p>How to create it:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>kube-scheduler.service\n--scheduler-name=custom-scheduler\n\n\/etc\/kubernetes\/manifests\/kube-scheduler.yaml --> copy and modify\n--- (a scheduler is a pod!!!)\napiVersion: v1\nkind: Pod\nmetadata:\n  name: my-custom-scheduler\n  namespace: kube-system\nspec:\n  containers:\n  - command:\n    - kube-scheduler\n    - --address=127.0.0.1\n    - --kubeconfig=\/etc\/kubernetes\/scheduler.conf\n    - --leader-elect=false\n    - --scheduler-name=my-custom-scheduler\n    - --lock-object-name=my-custom-scheduler\n    image: xxx\n    name: kube-scheduler\n    ports:\n    -  containerPort: XXX<\/code><\/pre>\n\n\n\n<p>Assign the new scheduler to a pod:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: nginx\nspec:\n  containers:\n  - image: nginx\n    name: nginx\n  schedulerName: my-custom-scheduler<\/code><\/pre>\n\n\n\n<p>How to see logs:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get events ---&gt; view scheduler logs\n$ kubectl logs my-custom-scheduler -n kube-system<\/pre>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">3- LOGGING AND MONITORING<\/h1>\n\n\n\n<p>Monitoring cluster components. There is nothing built-in (Oct 2018).<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Paid: datadog, dynatrace<\/li><li>Opensource options: metrics server, prometheus, elastic stack, etc<\/li><\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">3.1- metrics server<\/h2>\n\n\n\n<p>one per cluster. data kept in memory. kubelet (via cAdvisor) sends data to the metrics-server.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">install: &gt; minikube addons enable metrics-server \/\/or\n           other envs: git clone \"github path to binary\"\n                       kubectl create -f deploy\/1.8+\/\n\nview: &gt; kubectl top node\/pod<\/pre>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">4- APPLICATION LIFECYCLE MANAGEMENT<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">4.1- Rolling updates \/ Rollout<\/h2>\n\n\n\n<p>rollout -&gt; a new revision. 
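A rollout's behaviour is controlled by the Deployment's spec.strategy field. A minimal hedged sketch; the deployment name, replica count and surge/unavailable values are illustrative, not from the course:

```shell
# Write a Deployment showing the (default) RollingUpdate strategy;
# maxSurge/maxUnavailable control how many pods are replaced at a time.
cat > deploy-strategy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate     # or: Recreate (destroy all, then create all -> outage)
    rollingUpdate:
      maxSurge: 1           # at most 1 extra pod during the update
      maxUnavailable: 1     # at most 1 pod down during the update
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
EOF
```

Applying this with `kubectl apply -f deploy-strategy.yaml` would drive the rollout behaviour described below.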
This is the reason you create &#8220;deployment&#8221; objects.<\/p>\n\n\n\n<p>There are two strategies:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li><strong>recreate<\/strong>: destroy all, then create all -&gt; outage! (scale to 0, then scale to X)<\/li><li><strong>rolling update<\/strong> (default): update one pod at a time -&gt; no outage (It creates a new replicaset and then starts introducing new pods)<\/li><\/ul>\n\n\n\n<p>How to apply a new version?<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">1) Declarative: make the change in the deployment yaml file\nkubectl apply -f x.yaml (recommended)\n\nor\n\n2) Imperative: \nkubectl create deployment nginx-deploy --image=nginx:1.16\nkubectl set image deployment\/nginx-deploy nginx=nginx:1.17 --record<\/pre>\n\n\n\n<p>How to check the status of the rollout:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">status:   $ kubectl rollout status deployment\/NAME\nhistory:  $ kubectl rollout history deployment\/NAME\nrollback: $ kubectl rollout undo deployment\/NAME\n<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">4.2- Application commands in Docker and Kube<\/h2>\n\n\n\n<p>From a &#8220;Dockerfile&#8221;:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">---\nFROM Ubuntu\nENTRYPOINT [\"sleep\"] --&gt; cli arguments are appended to the entrypoint\nCMD [\"5\"] --&gt; if you dont pass any value in \"docker run ..\" it uses by \n              default 5.\n---<\/pre>\n\n\n\n<p>With the docker image created above, you can create a container like this:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker run --name ubuntu-sleeper ubuntu-sleeper 10<\/pre>\n\n\n\n<p>So now, the kubernetes yaml file:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: v1\nkind: Pod\nmetadata:\n  name: ubuntu-sleeper-pod\nspec:\n  containers:\n  -  name: ubuntu-sleeper\n     image: ubuntu-sleeper\n     command: &#91;\"sleep\",\"10\"] --> This overrides ENTRYPOINT in docker\n     args: &#91;\"10\"]   --> This overrides CMD &#91;x] in docker\n           
&#91;\"--color=blue\"]\n<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">4.3- Environment variables<\/h2>\n\n\n\n<p>You define them inside the spec.containers.container section:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>spec:\n containers:\n - name: x\n   image: x\n   ports:\n   - containerPort: x\n   env:\n   - name: APP_COLOR\n     value: pink<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">4.4- ConfigMap<\/h2>\n\n\n\n<p>Defining env var can be tedious, so config maps is the way to manage them a bit better. You dont have to define in each pod all env vars&#8230; just one entry now.<\/p>\n\n\n\n<p>First, create configmap object:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>imperative<\/strong> $ kubectl create configmap NAME \\\n                       --from-literal=KEY=VALUE \\\n                       --from-literal=KEY2=VALUE2 \\\n                       or\n                       --from-file=FILE_NAME\nFILE_NAME\nkey1: val1\nkey2: val2\n\n<strong>declarative<\/strong> $ kubectl create -f cm.yaml\n            $ kubectl get configmaps\n\ncat app-config\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\nname: app-config\ndata:\nKEY1: VAL1\nKEY2: VAL2<\/pre>\n\n\n\n<p>Apply configmap to a container in three ways:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>1) Via \"envFrom\": all vars\n\nspec:\n  containers:\n  - name: xxx\n    envFrom:   \/\/ all values\n    -  configMapRef:\n         name: app-config\n\n2) Via \"env\", to import only specific vars\n\nspec:\n containers:\n - name: x\n   image: x\n   ports:\n   - containerPort: x\n   env:\n   - name: APP_COLOR  -- get one var from a configmap, dont import everything\n     valueFrom:\n       configMapKeyRef:\n         name: app-config\n         key: APP_COLOR\n\n3) Volume:\n\nvolumes:\n- name: app-config-volume\n  configMap:\n    name: app-config\n<\/code><\/pre>\n\n\n\n<p>Check &#8220;explain&#8221; for more info:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl explain pods 
--recursive | grep envFrom -A3<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">4.5- Secrets<\/h2>\n\n\n\n<p>This is encode in base64 so not really secure. It just avoid to have sensitive info in clear text&#8230;<\/p>\n\n\n\n<p>A secret is only sent to a node if a pod on that node requires it.<br>Kubelet stores the secret into a tmpfs so that the secret is not written to disk storage. Once the Pod that depends on the secret is deleted, kubelet will delete its local copy of the secret data as well:<br>https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/#risks<\/p>\n\n\n\n<p>How to create secrets:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>imperative<\/strong> $ kubectl create secret generic NAME \\\n                       --from-literal=KEY=VAL \\\n                       --from-literal=KEY2=VAL2 \n                       or\n                       --from-file=FILE\ncat FILE\nDB_Pass: password\n\n<strong>declarative<\/strong> $ kubectl create -f secret.yaml\n\ncat secret.yaml\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: app-secret\ndata:\n  DB_Pass: HASH &lt;---- $ echo -n 'password' | base64 \/\/ ENCODE !!!!\n                      $ echo -n 'HASH' | base64 --decode \/\/ DECODE !!!!<\/pre>\n\n\n\n<p>You can apply secrets in three ways:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">1) as \"envFrom\" to import all params from secret object\n\nspec:\n  containers:\n  - name: xxx \n    envFrom: \n    - secretRef:\n        name: app-secret\n\n2) Via \"env\" to declare only one secret param\n\nspec:\n  containers:\n  - name: x\n    image: x\n    env:\n      name: APP_COLOR\n      valueFrom:\n        secretKeyRef:\n          name: app-secret\n          key: DB_password\n\n3) Volumes:\n\nspec:\n  containers:\n  - command: [\"sleep\", \"4800\"]\n    image: busybox\n    name: secret-admin\n    volumeMounts:\n    - name: secret-volume\n      mountPath: \"\/etc\/secret-volume\"\n      readOnly: true\n  volumes:\n  - name: secret-volume\n    
secret:\n      secretName: app-secret --&gt; each key from the secret file is created\n                                 as a file in the volume.\n                                 The content of the file is the secret.\n\n\n$ ls -ltr \/etc\/secret-volume\nDB_Host\nDB_User\nDB_Password<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">4.6- Multi-container Pods<\/h2>\n\n\n\n<p>Scenarios where your app needs an agent, ie: web server + log agent<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: v1\nkind: Pod\nmetadata:\n  name: simple-webapp\n  labels:\n    name: simple-webapp\nspec:\n containers:\n - name: simple-webapp\n   image: simple-webapp\n   ports:\n   - containerPort: 8080\n - name: log-agent\n   image: log-agent<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">4.7- Init Container<\/h2>\n\n\n\n<p>You use an init container when you want to setup something before the other containers are created. Once the initcontainers complete their job, the other containers are created.<\/p>\n\n\n\n<p>An initContainer is configured in a pod like all other containers, except that it is specified inside a initContainers section<\/p>\n\n\n\n<p>You can configure multiple such initContainers as well, like how we did for multi-pod containers. In that case each init container is run one at a time in sequential order.<\/p>\n\n\n\n<p><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/init-containers\/\">https:\/\/kubernetes.io\/docs\/concepts\/workloads\/pods\/init-containers\/<\/a><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>apiVersion: v1\nkind: Pod\nmetadata:\n  name: myapp-pod\n  labels:\n    app: myapp\nspec:\n  initContainers:\n  - name: init-myservice\n    image: busybox\n    command: &#91;'sh', '-c', 'git clone &lt;some-repository-that-will-be-used-by-application> ;']\n  containers:\n  - name: myapp-container\n    image: busybox:1.28\n    command: &#91;'sh', '-c', 'echo The app is running! 
&amp;&amp; sleep 3600']\n<\/code><\/pre>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">5- CLUSTER MAINTENANCE<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">5.1- Drain Node<\/h2>\n\n\n\n<p>If you need to upgrade\/reboot a node, you need to move the pods to somewhere else to avoid an outage.<\/p>\n\n\n\n<p>Commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl drain NODE -&gt; pods are moved to another node and it doesn't\n                           receive anything new\n$ kubectl uncordon NODE -&gt; node can receive pods now\n\n$ kubectl cordon NODE -&gt; it doesn't drain the node, it just makes the node stop receiving new pods<\/pre>\n\n\n\n<p>&#8220;kube-controller-manager&#8221; checks the status of the nodes. By default, kcm takes 5 minutes to mark a node down:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kube-controller-manager --pod-eviction-timeout=5m0s (by default) time the master waits for a node to come back up<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">5.2- Kubernetes upgrade<\/h2>\n\n\n\n<p>You need to check the version you are running:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get nodes --&gt; version: vMAJOR.MINOR.PATCH<\/pre>\n\n\n\n<p><strong>Important<\/strong>: kube only supports the last two versions behind current, ie:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">new current v1.12 -&gt; support v1.11 and v1.10 ==&gt; v1.9 is not supported!!!<\/pre>\n\n\n\n<p><strong>Important<\/strong>: nothing can be a higher version than kube-apiserver, ie:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">kube-apiserver=x (v1.10)\n- controller-manager, kube-scheduler can be x or x-1 (v1.10 , v1.9)\n- kubelet, kube-proxy can be x, x-1 or x-2 (v1.10, v1.9, v1.8)\n- kubectl can be x+1,x,x-1 !!!<\/pre>\n\n\n\n<p><strong>Upgrade path<\/strong>: one minor version at a time: v1.9 -&gt; v1.10 -&gt; v1.11 etc<\/p>\n\n\n\n<p><strong>Summary Upgrade<\/strong>:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">1- 
upgrade master node\n2- upgrade worker nodes (several ways)\n- all nodes at the same time\nor\n- one node at a time\n- add new nodes with the new sw version, move pods to them, delete old nodes<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">5.2.1- Upgrade Master<\/h3>\n\n\n\n<p>From v1.11 to v1.12<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ <strong>kubeadm upgrade plan<\/strong> --&gt; it gives you the info for the upgrade\n\n$ apt-get update\n\n$ apt-get install -y <strong>kubeadm<\/strong>=1.12.0-00\n\n$ <strong>kubeadm upgrade apply<\/strong> v1.12.0\n\n$ kubectl get nodes (it gives you the version of kubelet!!!!)\n\n$ apt-get upgrade -y <strong>kubelet<\/strong>=1.12.0-00 \/\/ you need to do this <strong>if<\/strong> you have \"master\" in \"kubectl get nodes\"\n\n$ systemctl restart kubelet\n\n$ kubectl get nodes --&gt; you should see \"master\" with the new version 1.12<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">5.2.2- Upgrade Worker<\/h3>\n\n\n\n<p>From v1.11 to v1.12<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">master:                     node-1\n---------------------       -----------------------\n$ kubectl drain node-1\n                            apt-get update\n                            apt-get install -y <strong>kubeadm<\/strong>=1.12.0-00\n                            apt-get install -y <strong>kubelet<\/strong>=1.12.0-00\n                            <strong>kubeadm upgrade node<\/strong> \\\n                                 [config --kubelet-version v1.12.0]\n                            systemctl restart kubelet\n$ kubectl uncordon node-1\n$ apt-mark hold package<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">5.3- Backup Resources<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get all --all-namespaces -o yaml &gt; all-deploy-service.yaml<\/pre>\n\n\n\n<p>There are other tools like &#8220;velero&#8221; from Heptio that can do it. 
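<\/p>\n\n\n\n<p>The one-liner above can be wrapped into a small dated backup script. A minimal sketch (the BACKUP_DIR path and filename scheme are assumptions, not from the course):<\/p>\n\n\n\n

```shell
# Minimal sketch: dump all namespaced resources into a dated file.
# BACKUP_DIR is a hypothetical location, override as needed.
BACKUP_DIR=${BACKUP_DIR:-/tmp/k8s-backups}
mkdir -p "$BACKUP_DIR"
OUT="$BACKUP_DIR/all-resources-$(date +%F).yaml"
# kubectl may not be installed where this runs, hence the guard
kubectl get all --all-namespaces -o yaml > "$OUT" 2>/dev/null || true
echo "$OUT"
```

\n\n\n\n<p>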
Out of scope for CKA.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">5.4- Backup\/Restore ETCD &#8211; Difficult<\/h2>\n\n\n\n<p>&#8220;etcd&#8221; is important because it stores all the cluster info.<\/p>\n\n\n\n<p>The difficult part is getting the certificate parameters right so the etcdctl command works.<\/p>\n\n\n\n<p>&#8211; You can get some clues from the static pod definition of etcd:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">\/etc\/kubernetes\/manifests\/etcd.yaml: Find under exec.command<\/pre>\n\n\n\n<p>&#8211; or do a ps -ef | grep -i etcd and see the parameters used by the running process<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>verify command:<\/strong>\nETCDCTL_API=3 etcdctl --<strong>cacert<\/strong>=\/etc\/kubernetes\/pki\/etcd\/ca.crt \\\n                      --<strong>cert<\/strong>=\/etc\/kubernetes\/pki\/etcd\/server.crt \\\n                      --<strong>key<\/strong>=\/etc\/kubernetes\/pki\/etcd\/server.key \\\n                      --<strong>endpoints<\/strong>=127.0.0.1:2379 <em><strong>member<\/strong> list<\/em>\n\n<strong>create backup:<\/strong>\nETCDCTL_API=3 etcdctl <em><strong>snapshot save<\/strong> SNAPSHOT-BACKUP.db<\/em> \\\n                    --endpoints=https:\/\/127.0.0.1:2379 \\\n                    --cacert=\/etc\/etcd\/ca.crt \\\n                    --cert=\/etc\/etcd\/etcd-server.crt \\\n                    --key=\/etc\/etcd\/etcd-server.key\n\n<strong>verify backup:<\/strong>\nETCDCTL_API=3 etcdctl --cacert=\/etc\/kubernetes\/pki\/etcd\/ca.crt \\\n                      --cert=\/etc\/kubernetes\/pki\/etcd\/server.crt \\\n                      --key=\/etc\/kubernetes\/pki\/etcd\/server.key \\\n                      --endpoints=127.0.0.1:2379 \\\n                      <strong><em>snapshot status<\/em><\/strong> PATH\/FILE -w table<\/pre>\n\n\n\n<p><strong>Summary<\/strong>:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">etcd backup:\n1- documentation: find the basic command for the API version\n2- ps -ef | grep etcd 
--> get path for certificates\n3- run command\n4- verify backup<\/pre>\n\n\n\n<p>5.4.1- <strong>Restore ETCD<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">\/\/ 1- Stop api server\n$ service kube-apiserver stop\n\n\/\/ 2- apply etcd backup\n$ ETCDCTL_API=3 etcdctl snapshot restore SNAPSHOT-BACKUP.db \\\n                  --endpoints=https:\/\/127.0.0.1:2379 \\\n                  --cacert=\/etc\/etcd\/ca.crt \\\n                  --cert=\/etc\/etcd\/etcd-server.crt \\\n                  --key=\/etc\/etcd\/etcd-server.key \\\n                  <strong>--data-dir<\/strong> \/var\/lib\/etcd-from-backup \\\n                  <strong>--initial-cluster<\/strong> master-1=https:\/\/127.0.0.1:<strong>2380<\/strong>,\n                                      master-2=https:\/\/x.x.x.y:2380 \\\n                  -<strong>-initial-cluster-token<\/strong> <strong>NEW_TOKEN<\/strong> \\\n                  <strong>--name<\/strong>=master \\\n                  <strong>--initial-advertise-peer-urls<\/strong> https:\/\/127.0.0.1:<strong>2380<\/strong>\n\n\/\/ 3- Check backup folder\n$ ls -ltr \/var\/lib\/etcd-from-backup -&gt; you should see a folder \"member\"\n\n\/\/ 4- Update etcd.service file. 
The changes will apply immediately as it is a static pod\n\n$ vim \/etc\/kubernetes\/manifests\/etcd.yaml\n...\n<strong>--data-dir<\/strong>=\/var\/lib\/etcd-from-backup (update this line with new path)\n<strong>--initial-cluster-token=NEW_TOKEN<\/strong> <em>(add this line)<\/em>\n\u2026\n<strong>volumeMounts<\/strong>:\n- <strong>mountPath<\/strong>: \/var\/lib\/etcd-from-backup (update this line with new path)\n  name: <strong>etcd-data<\/strong>\n\u2026\n<strong>volumes<\/strong>:\n- hostPath:\n    <strong>path<\/strong>: \/var\/lib\/etcd-from-backup (update this line with new path)\n    type: DirectoryOrCreate\n  name: <strong>etcd-data<\/strong>\n\n\/\/ 5- Reload services\n$ systemctl daemon-reload\n$ service etcd restart\n$ service kube-apiserver start<\/pre>\n\n\n\n<p><strong>Important<\/strong>: In cloud envs like aws,gcp you don't have access to etcd\u2026<\/p>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">6- SECURITY<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">6.1- Security Primitives<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">kube-apiserver: who can access: files, certs, ldap, service accounts\n                what can they do: RBAC authorization, ABAC authorization<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.2- Authentication<\/h2>\n\n\n\n<p>Kubectl :<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">users: admin, devs                    --&gt; kube can't create user accounts\nservice accounts: 3rd parties (bots)  --&gt; kube can create service accounts<\/pre>\n\n\n\n<p>You can use a static file for authentication &#8211; NOT RECOMMENDED<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">file x.csv:\n   password, user, uid, gid --&gt; --basic-auth-file=x.csv\n\ntoken token.csv:\n   token, user, uid, gid --&gt; --token-auth-file=token.csv<\/pre>\n\n\n\n<p>Use of auth files in kube-api config:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">kube-apiserver.yaml\n---\nspec:\n  containers:\n  - command: \n    \u2026 \n    - 
--basic-auth-file=x.csv \n    \/\/ or\n    - --token-auth-file=x.csv<\/pre>\n\n\n\n<p>Use of auth in API calls:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ curl -v -k https:\/\/master-node-ip:6443\/api\/v1\/pods -u \"user1:password1\"\n$ curl -v -k https:\/\/master-node-ip:6443\/api\/v1\/pods \\\n    --header \"Authorization: Bearer TOKEN\"<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.3- TLS \/ Generate Certs<\/h2>\n\n\n\n<p>openssl commands to create required files:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>gen key<\/strong>:  openssl genrsa -out admin.key 2048\n<strong>gen pub key<\/strong>: openssl rsa -in admin.key -pubout &gt; admin.pem\n<strong>gen csr<\/strong>:  openssl req -new -key admin.key -out admin.csr \\\n                   -subj \"\/CN=kube-admin\/O=system:masters\"\n             (admin, scheduler, controller-manager, kube-proxy,etc)<\/pre>\n\n\n\n<p>Generate cert with SAN:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>0) Gen key<\/strong>: \nopenssl genrsa -out apiserver.key 2048\n\n<strong>1) Create openssl.cnf<\/strong> with SAN info\n[req]\nreq_extensions = v3_req\n[v3_req]\nbasicConstraints = CA:FALSE\nkeyUsage = nonRepudiation\nsubjectAltName = @alt_names\n<em>[alt_names]\nDNS.1 = kubernetes\nDNS.2 = kubernetes.default\nIP.1 = 10.96.1.1\nIP.2 = 172.16.0.1<\/em>\n\n<strong>2) Gen CSR:<\/strong>\nopenssl req -new -key apiserver.key -subj \"\/CN=kube-apiserver\" -out apiserver.csr -config openssl.cnf\n\n<strong>3) Sign CSR with CA:<\/strong>\nopenssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -out apiserver.crt<\/pre>\n\n\n\n<p>Self-Signed Cert: Sign the CSR with your own key to generate the cert:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt<\/pre>\n\n\n\n<p>Use certs to query the API:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ curl https:\/\/kube-apiserver:6443\/api\/v1\/pods --key admin.key --cert admin.crt --cacert 
ca.crt<\/pre>\n\n\n\n<p>Kube-api server config related to certs&#8230;:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">--etcd-cafile=\n--etcd-certfile=\n--etcd-keyfile=\n\u2026\n--kubelet-certificate-authority=\n--kubelet-client-certificate=\n--kubelet-client-key=\n\u2026\n--client-ca-file=\n--tls-cert-file=\n--tls-private-key-file=\n\u2026<\/pre>\n\n\n\n<p>Kubelet-nodes:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">server cert name =&gt; kubelet-nodeX.crt\n                    kubelet-nodeX.key\n\nclient cert name =&gt; Group: System:Nodes name: system:node:node0x<\/pre>\n\n\n\n<p>kubeadm can generate all certs for you:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">cat \/etc\/kubernetes\/manifests\/kube-apiserver.yaml\nspec:\n  containers:\n  - command:\n    - --client-ca-file=\n    - --etcd-cafile\n    - --etcd-certfile\n    - --etcd-keyfile\n    - --kubelet-client-certificate\n    - --kubelet-client-key\n    - --tls-cert-file\n    - --tls-private-key-file<\/pre>\n\n\n\n<p>How to check the CN, SAN and dates in a cert?<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ openssl x509 -in \/etc\/kubernetes\/pki\/apiserver.crt -text -noout<\/pre>\n\n\n\n<p>Where to check if there are issues with certs in a core service:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">if installed manually: &gt; journalctl -u etcd.service -l\nif installed via kubeadm: &gt; kubectl logs etcd-master<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.4- Certificates API<\/h2>\n\n\n\n<p>Generating certificates is quite cumbersome. 
So kubernetes has a Certificates API to generate the certs for users, etc<\/p>\n\n\n\n<p>How to create a certificate for a user:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">1) gen key for user\nopenssl genrsa -out new-admin.key 2048\n\n2) gen csr for user\nopenssl req -new -key new-admin.key -subj \"\/CN=jane\" -out new-admin.csr\n\n3) create \"CertificateSigningRequest\" kubernetes object:\n\ncat new-admin-csr.yaml\n---\napiVersion: certificates.k8s.io\/v1beta1\nkind: CertificateSigningRequest\nmetadata:\n  name: new-admin\nspec:\n  groups:\n  - system:authenticated\n  usages:\n  - digital signature\n  - key encipherment\n  - server auth\n  request: <strong>(cat new-admin.csr | base64)<\/strong>\n\nkubectl create -f new-admin-csr.yaml\n\n4) approve new certificate, it can't be done automatically:\nkubectl get csr\nkubectl certificate approve new-admin\n\n5) show certificate to send to user\nkubectl get csr new-admin -o yaml --&gt; put \"certificate:\" in (echo \"..\" | base64 --decode)<\/pre>\n\n\n\n<p>The certs used by the Certificates API are in the controller-manager config file:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">kube-controller-manager.yaml\n--cluster-signing-cert-file=\n--cluster-signing-key-file=<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.5- Kubeconfig<\/h2>\n\n\n\n<p>kubectl is always querying the API whenever you run a command and uses certs. You don't have to type the certs every time because it is configured in the kubectl config at $HOME\/.kube\/config.<\/p>\n\n\n\n<p>The kubeconfig file has three sections: clusters, users and contexts (that join users with clusters). 
And you can have several of each one.<\/p>\n\n\n\n<p>kubeconfig example:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">apiVersion: v1\nkind: Config\ncurrent-context: dev-user@gcp \/\/ example: user@cluster\n\n<strong>clusters<\/strong>: \/\/\/\n  - name:\n    cluster:\n      <em>certificate-authority<\/em>: PATH\/ca.crt \n       \/\/or\n      certificate-authority-data: $(cat ca.crt | base64)\n      server: https:\/\/my-kube-playground:6443\n\n<strong>contexts<\/strong>: \/\/\/ user@cluster\n  - name: my-kube-admin@my-kube-playground\n    context:\n      user: my-kube-admin\n      cluster: my-kube-playground\n      namespace: production\n\n<strong>users<\/strong>: \/\/\n  - name: my-kube-admin\n    user:\n    <em>client-certificate<\/em>: PATH\/admin.crt\n    <em>client-key<\/em>: PATH\/admin.key\n    \/\/or\n    client-certificate-data: $(cat admin.crt | base64)\n    client-key-data: $(cat admin.key | base64)<\/pre>\n\n\n\n<p>You can test other user certs:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ curl https:\/\/kube-apiserver:6443\/api\/v1\/pods --key admin.key \\\n                                     --cert admin.crt --cacert ca.crt\n\n$ kubectl get pods --server my-kube-playground:6443 \\\n                   --client-key admin.key \\\n                   --client-certificate admin.crt \\\n                   --certificate-authority ca.crt<\/pre>\n\n\n\n<p>Use and view kubeconfig file:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get pods [--kubeconfig PATH\/FILE]\n\n$ kubectl config view [--kubeconfig PATH\/FILE] &lt;-- show kubectl config file\n\n$ kubectl config use-context prod-user@prod &lt;-- changes current-context in the file too!<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.6- API groups<\/h2>\n\n\n\n<p>This is a basic diagram of the API. 
Main thing is the difference between &#8220;api&#8221; (core stuff) and &#8220;apis&#8221; (named groups):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>\/metrics  \/healthz  \/version  \/api                \/apis          \/logs\n                             (core)               (named)\n                              \/v1                   |\n                      namespace pods rc      \/apps \/extensions ... (api groups)\n                      pv pvc binding...      \/v1                  \/v1\n                                              |\n                                     \/deployments \/replicaset  (resources)\n                                          |\n                                     -list,get,create,delete,update (verbs)<\/code><\/pre>\n\n\n\n<p>You can reach the API via curl but using the certs&#8230;<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ curl https:\/\/localhost:6443 -k --key admin.key --cert admin.crt \\\n                                 --cacert ca.crt\n$ curl https:\/\/localhost:6443\/apis -k | grep \"name\"<\/pre>\n\n\n\n<p>You can make your life easier using a kubectl proxy that uses the kubectl credentials to access the kube-api<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl proxy -> launches a proxy on port 8001 to avoid using auth each time\n                   as it uses the ones from the kube config file\n\n$ curl http:\/\/localhost:8001 -k<\/pre>\n\n\n\n<p>Important:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"> <strong>                   kube proxy<\/strong>  <strong>!=<\/strong> <strong>kubeCTL proxy<\/strong> (reach kubeapi)\n    (service running on node for \n     pods connectivity)<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.7- Authorization<\/h2>\n\n\n\n<p>What you can do. 
There are several methods to arrange authorization:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Node authorizer<\/strong>: (defined in certificate: Group: SYSTEM:NODES CN: system:node:node01)\n\n<strong>ABAC<\/strong> (Attribute-Based Access Control): difficult to manage. each user has a policy\u2026\n{\"kind\": \"Policy\", \"spec\": {\"user\": \"dev-user\", \"namespace\": \"*\", \"resource\": \"pods\", \"apiGroup\": \"*\"}}\n\n<strong>RBAC<\/strong>: Role-Based Access Control: most standard usage. create role, assign users to roles\n\n<strong>Webhook<\/strong>: use external 3rd party: ie \"open policy agent\"\n\n<strong>AlwaysAllow, AlwaysDeny<\/strong><\/pre>\n\n\n\n<p>You define the method in the kubeapi config file:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>--authorization-mode<\/strong>=AlwaysAllow (default)\nor\n--authorization-mode=Node,RBAC,Webhook (each request tries these modes in order until one allows it)<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.8- RBAC<\/h2>\n\n\n\n<p>You need to define a role and a role binding (who uses which role) object. 
This is &#8220;<strong>namespaced<\/strong>&#8221;.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>dev-role.yaml<\/strong>\n---\napiVersion: rbac.authorization.k8s.io\/v1\n<em>kind: Role<\/em>\nmetadata:\n  name: dev\n  namespace: xxx\nrules:\n- apiGroups: [\"\"]\n  resources: [\"pods\"]\n  verbs: [\"list\", \"get\", \"create\", \"update\", \"delete\"]\n  resourceNames: [\"blue\", \"orange\"] &lt;--- if you want to filter at pod level\n                                        too: only access to blue,orange\n- apiGroups: [\"\"]\n  resources: [\"configmaps\"]\n  verbs: [\"create\"]\n\n$ kubectl create -f dev-role.yaml\n\n<strong>dev-binding.yaml<\/strong>\n---\napiVersion: rbac.authorization.k8s.io\/v1\n<em>kind: RoleBinding<\/em>\nmetadata:\n  name: dev-binding\n  namespace: xxx\nsubjects:\n- kind: User\n  name: dev-user\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  kind: Role\n  name: dev\n  apiGroup: rbac.authorization.k8s.io\n\n$ kubectl create -f dev-binding.yaml<\/pre>\n\n\n\n<p>Info about roles\/rolebind:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get roles\n               rolebindings\n          describe role dev\n                   rolebinding dev-binding<\/pre>\n\n\n\n<p><strong>Important<\/strong>: How to test the access of a user?<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ <strong>kubectl auth can-i<\/strong> create deployments [<strong>--as<\/strong> dev-user] [-n prod]\n                     update pods\n                     delete nodes<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.9- Cluster Roles<\/h2>\n\n\n\n<p>This is for cluster resources (non-namespaced): nodes, pv, csr, namespace, cluster-roles, cluster-roles-binding<\/p>\n\n\n\n<p>You can see the full list for each with:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl api-resources --namespaced=true\/false<\/pre>\n\n\n\n<p>The process is the same, we need to define a cluster role and a cluster role binding:<\/p>\n\n\n\n<pre 
class=\"wp-block-preformatted\"><strong>cluster-admin-role.yaml<\/strong>\n---\napiVersion: rbac.authorization.k8s.io\/v1\n<em>kind: ClusterRole<\/em>\nmetadata:\n  name: cluster-administrator\nrules:\n- apiGroups: [\"\"]\n  resources: [\"nodes\"]\n  verbs: [\"list\", \"get\", \"create\", \"delete\"]\n\n<strong>cluster-admin-role-bind.yaml<\/strong>\n---\napiVersion: rbac.authorization.k8s.io\/v1\n<em>kind: ClusterRoleBinding<\/em>\nmetadata:\n  name: cluster-admin-role-bind\nsubjects:\n- kind: User\n  name: cluster-admin\n  apiGroup: rbac.authorization.k8s.io\nroleRef:\n  kind: ClusterRole\n  name: cluster-administrator\n  apiGroup: rbac.authorization.k8s.io<\/pre>\n\n\n\n<p><strong>Important<\/strong>: You can create a &#8220;cluster role&#8221; for a user to access pods (ie), using cluster role, that give it access to all pod in all namespaces.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">6.10- Images Security<\/h2>\n\n\n\n<p>Secure access to images used by pods. An image can be in docker, google repo, etc<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">image: docker.io\/nginx\/nginx\n           |       |     |\n       registry  user  image\n                account\n\nfrom google: gcr.io\/kubernetes-e2e-test-images\/dnsutils<\/pre>\n\n\n\n<p>You can use a private repository:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker login private.io\n  user:\n  pass:\n\n$ docker run private.io\/apps\/internal-app<\/pre>\n\n\n\n<p>How to define a private registry in kubectl:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">kubectl create secret docker-registry <strong>regcred<\/strong> \\\n--docker-server= \\\n--docker-username= \\\n--docker-password= \\\n--docker-email=<\/pre>\n\n\n\n<p>How to use a specific registry in a pod?<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">spec:\n  containers:\n  - name: nginx\n    image: private.io\/apps\/internal-app\n    <strong>imagePullSecrets<\/strong>:\n      name: <strong>regcred<\/strong><\/pre>\n\n\n\n<h2 
class=\"wp-block-heading\">6.11- Security Contexts<\/h2>\n\n\n\n<p>Like in docker, you can assign security params (like user, group id, etc) in kube containers. You can set the security params at pod or container level:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">at pod level:\n----\nspec:\n  securityContext:\n  runAsUser: 1000\n\nat container level:\n---\nspec:\n  containers:\n  - name: ubuntu\n    securityContext:\n      runAsUser: 100 (user id)\n      capabilities: &lt;=== ONY AT CONTAINER LEVEL!\n        add: [\"MAC_ADMIN\"]<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.12- Network Polices<\/h2>\n\n\n\n<p>This is like a firewall, iptables implementation for access control at network level. Regardless the network plugin, all pods in a namespace can reach any other pod (without adding any route into the pod).<\/p>\n\n\n\n<p>Network policies are supported in kube-router, calico, romana and weave-net. It is not supported in flannel (yet)<\/p>\n\n\n\n<p>You have ingress (traffic received in a pod) and egress (traffic generated by a pod) rule. You match the rule to a pod using labels with podSelector:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">networkpolicy: apply network rule on pods with label role:db to allow only traffic from pods with label name: api-pod into port 3306\n\n---\napiVersion: networking.k8s.io\/v1\nkind: NetworkPolicy\nmetadata:\n  name: db-policy\nspec:\n  podSelector:\n    matchLabels:\n      role: db\n  policyTypes:\n  - Ingress\n  ingress:\n  - from: \n    - podSelector:\n        matchLabels:\n          name: api-pod\n    ports:\n    - protocol: TCP\n      port: 3306\n\n$ kubectl apply -f xxx<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">6.13- Commands: kubectx \/ kubens<\/h2>\n\n\n\n<p>I haven&#8217;t seen any lab requesting the usage. 
It's not required for the exam, but maybe useful for real envs.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Kubectx<\/strong>\nreference: https:\/\/github.com\/ahmetb\/kubectx\n\nWith this tool, you don't have to make use of lengthy \u201ckubectl config\u201d commands to switch between contexts. This tool is particularly useful to switch context between clusters in a multi-cluster environment.\n\nInstallation:\nsudo git clone https:\/\/github.com\/ahmetb\/kubectx \/opt\/kubectx\nsudo ln -s \/opt\/kubectx\/kubectx \/usr\/local\/bin\/kubectx\n\n<strong>Kubens<\/strong>\nThis tool allows users to switch between namespaces quickly with a simple command.\nsudo git clone https:\/\/github.com\/ahmetb\/kubectx \/opt\/kubectx\nsudo ln -s \/opt\/kubectx\/kubens \/usr\/local\/bin\/kubens<\/pre>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">7- STORAGE<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">7.1- Storage in Docker<\/h2>\n\n\n\n<p>In docker, \/container and \/images are under \/var\/lib\/docker.<\/p>\n\n\n\n<p>Docker follows a layered architecture (each line in Dockerfile is a layer):<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ docker build --> Read Only (image layer)\n$ docker run -> new layer: it is rw (container layer) - lost once the container finishes<\/pre>\n\n\n\n<p>So docker follows a &#8220;copy-on-write&#8221; strategy by default. 
If you want to be able to access that storage after the docker container is destroyed, you can use volumes:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">> docker volume create data_volume \n    --> \/var\/lib\/docker\/volumes\/data_volume\n> docker run -v data_volume:\/var\/lib\/mysql mysql\n    --> volume mounting -> dir created in docker folders\n> docker run --mount type=bind,source=\/data\/mysql,target=\/var\/lib\/mysql mysql --> bind mounting, dir not created in docker folders\n\nvolume driver: local, azure, gce, aws ebs, glusterfs, vmware, etc\n\nstorage drivers: enable the layer driver: aufs, zfs, btrfs, device mapper, overlay, overlay2<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">7.2- Volumes, PersistentVolumes and PV claims.<\/h2>\n\n\n\n<p>Volume: Data persistence after container is destroyed<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">spec:\n  containers:\n  - image: alpine\n    volumeMounts:\n    - mountPath: \/opt\n      name: data-volume ==> \/data -> alpine:\/opt\n\n  volumes:\n  - name: data-volume\n    hostPath:\n      path: \/data\n      type: Directory<\/pre>\n\n\n\n<p>Persistent volumes: a cluster-wide pool of volumes that users can request from<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">apiVersion: v1\nkind: PersistentVolume\nmetadata:\n  name: pv-vol1\nspec:\n  accessModes:\n    - ReadWriteOnce (ReadOnlyMany, ReadWriteMany)\n  capacity:\n    storage: 1Gi\n  hostPath:\n    path: \/tmp\/data\n  persistentVolumeReclaimPolicy: Retain (default) [Delete, Recycle]\n\n$ kubectl create -f xxx\n$ kubectl get persistentvolume [pv]<\/pre>\n\n\n\n<p>PV claims: use of a pv. 
Each pvc is bound to one pv.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: <strong>myclaim<\/strong>\nspec:\n  accessModes:\n    - ReadWriteOnce\n  resources:\n    requests:\n      storage: 500Mi\n\n$ kubectl create -f xxx\n$ kubectl get persistentvolumeclaim [pvc]  \n      ==> If status is \"Bound\" you have matched a PV<\/pre>\n\n\n\n<p>Use a PVC in a pod:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">apiVersion: v1\nkind: Pod\nmetadata:\n  name: mypod\nspec:\n  containers:\n  - name: myfrontend\n    image: nginx\n    volumeMounts:\n    - mountPath: \"\/var\/www\/html\"\n      name: mypd\n  volumes:\n  - name: mypd\n    persistentVolumeClaim:\n      claimName: <strong>myclaim<\/strong><\/pre>\n\n\n\n<p><strong>Important<\/strong>: a PVC will bind to one PV that fits its requirements. Use &#8220;get pvc&#8221; to check status.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">7.3- Storage Class<\/h2>\n\n\n\n<p>dynamic provisioning of storage in clouds:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>sc<\/strong>-definition -> <strong>pvc<\/strong>-definition -> <strong>pod<\/strong>-definition \n     ==> we don't need <strong>pv<\/strong>-definition! 
it is created automatically<\/pre>\n\n\n\n<p>Example:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">sc-definition\n---\napiVersion: storage.k8s.io\/v1\nkind: StorageClass\nmetadata:\n  <strong>name: gcp-storage &lt;===========1<\/strong>\nprovisioner: kubernetes.io\/gce-pd\nparameters: (depends on provider!!!!)\n  type:\n  replication-type:\n\npvc-def\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  <strong>name: myclaim &lt;=========2<\/strong>\nspec:\n  accessModes:\n  - ReadWriteOnce\n  <strong>storageClassName: gcp-storage &lt;======1<\/strong>\n  resources:\n    requests:\n      storage: 500Mi\n\nuse pvc in pod\n---\napiVersion: v1\nkind: Pod\nmetadata:\n  name: mypod\nspec:\n  containers:\n  - name: myfrontend\n    image: nginx\n    volumeMounts:\n    - mountPath: \"\/var\/www\/html\"\n      <strong>name: mypd &lt;=======3<\/strong>\n  volumes:\n  - <strong>name: mypd &lt;========3<\/strong>\n    persistentVolumeClaim:\n      <strong>claimName: myclaim &lt;===========2<\/strong><\/pre>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">8- NETWORKING<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">8.1 Linux Networking Basics<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">$ ip link (show interfaces)\n\n$ ip addr add 192.168.1.10\/24 dev eth0\n$ route\n\n$ ip route add 192.168.2.0\/24 via 192.168.1.1\n$ ip route add default via 192.168.1.1\n            (default == 0.0.0.0\/0)\n\n\/\/ enabling forwarding\n$ echo 1 > \/proc\/sys\/net\/ipv4\/ip_forward\n$ vim \/etc\/sysctl.conf\n  net.ipv4.ip_forward = 1<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8.2 Linux DNS basics<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">$ cat \/etc\/resolv.conf \nnameserver 192.168.1.1\nsearch mycompany.com prod.mycompany.com\n\n$ nslookup x.x.x.x\n$ dig<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8.3 Linux Namespace<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">\/\/ create ns\nip netns add red\nip netns add blue\nip netns (list ns)\nip netns exec red ip 
link \/\/ ip -n red link\nip netns exec red arp\n\n\/\/ create virtual ethernet between ns and assign port to them\nip link add veth-red type veth peer name veth-blue \n  (ip -n red link del veth-red)\nip link set veth-red netns red\nip link set veth-blue netns blue\n\n\/\/ assign IPs to each end of the veth\nip -n red addr add 192.168.1.11 dev veth-red\nip -n blue addr add 192.168.1.12 dev veth-blue\n\n\/\/ enable links\nip -n red link set veth-red up\nip -n blue link set veth-blue up\n\n\/\/ test connectivity\nip netns exec red ping 192.168.1.12\n\n======\n\n\/\/ create bridge\nip link add v-net-0 type bridge\n\n\/\/ enable bridge\nip link set dev v-net-0 up \/\/ ( ip -n red link del veth-red)\n\n\/\/ create and attach links to bridge from each ns\nip link add veth-red type veth peer name veth-red-br\nip link add veth-blue type veth peer name veth-blue-br\n\nip link set veth-red netns red\nip link set veth-red-br master v-net-0\n\nip link set veth-blue netns blue\nip link set veth-blue-br master v-net-0\n\nip -n red addr add 192.168.1.11 dev veth-red\nip -n blue addr add 192.168.1.12 dev veth-blue\n\nip -n red link set veth-red up\nip -n blue link set veth-blue up\n\nip addr add 192.168.1.1\/24 dev v-net-0\n\nip netns exec blue ip route add 192.168.2.0\/24 via 192.168.1.1\nip netns exec blue ip route add default via 192.168.1.1\n\niptables -t nat -A POSTROUTING -s 192.168.1.0\/24 -j MASQUERADE\niptables -t nat -A PREROUTING -p tcp --dport 80 --to-destination 192.168.1.11:80 -j DNAT<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8.4 Docker Networking<\/h2>\n\n\n\n<p>Three types:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">- none: no connectivity\n- host: share host network\n- bridge: internal network is created and host is attached\n   (docker network ls --> bridge -| are the same thing\n    ip link --> docker0          -|\n\niptables -t nat -A DOCKER -p tcp -j DNAT --dport 8080 --to-destination 192.168.1.11:80<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8.5 Container 
Network Interface<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">Container runtime must create the network namespace:\n- identify the network the container must attach to\n- container runtime invokes the network plugin (bridge) when a container is added\/deleted\n- json format of network config\n\nCNI: \n must support command line arguments add\/del\/check\n must support parameters container id, network ns\n manage IP assignment\n results in specific format\n\n**docker is not a CNI**\n\nkubernetes uses docker, but since docker is not CNI, containers are created on the \"none\" network and then kubernetes invokes the CNI plugin (e.g. bridge) itself<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8.6 Cluster Networking<\/h2>\n\n\n\n<p>Most common ports:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">etcd: 2379 as client (2380 for peer-to-peer)\nkube-api: 6443\nkubelet: 10250\nkube-scheduler: 10251\nkube-controller: 10252\nservices: 30000-32767<\/pre>\n\n\n\n<p>Configure <a href=\"https:\/\/kubernetes.io\/docs\/setup\/production-environment\/tools\/kubeadm\/high-availability\/#steps-for-the-first-control-plane-node\">weave-network<\/a>:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ <code>kubectl apply -f \"https:\/\/cloud.weave.works\/k8s\/net?k8s-version=<strong>$(<\/strong>kubectl version | base64 | tr -d '\\n'<strong>)<\/strong>\"<\/code>\n\n$ kubectl get pod -n kube-system | grep -i weave (one per node)<\/pre>\n\n\n\n<p><a href=\"https:\/\/kubernetes.io\/docs\/concepts\/cluster-administration\/networking\/#how-to-implement-the-kubernetes-networking-model\">cluster-networking doc<\/a>: Doesn't give you steps to configure any CNI&#8230;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">8.7 Pod Networking<\/h2>\n\n\n\n<ul class=\"wp-block-list\"><li>every pod should have an IP.<\/li><li>every pod should be able to communicate with every other pod in the same node and in other nodes (without NAT)<\/li><\/ul>\n\n\n\n<p>Networking config in kubelet:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">--cni-conf-dir=\/etc\/cni\/net.d\n--cni-bin-dir=\/opt\/cni\/bin\n.\/net-script.sh add 
&lt;container> &lt;namespace><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8.8 CNI Weave-net<\/h2>\n\n\n\n<p>Installs an agent on each node, deployed as pods.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ <code>kubectl apply -f \"https:\/\/cloud.weave.works\/k8s\/net?k8s-version=$(kubectl version | base64 | tr -d '\\n')\" <\/code>\n\n$ <code>kubectl get pods -n kube-system | grep weave-net<\/code><\/pre>\n\n\n\n<p>ipam weave:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">where do pods and bridges get their IPs?\nplugin: host-local -> provides free IPs from the node<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8.9 Service Networking<\/h2>\n\n\n\n<p>A &#8220;<strong>service<\/strong>&#8221; is a cluster-wide object. The service has an IP. kube-proxy, on each node, creates iptables rules.<\/p>\n\n\n\n<p><strong>ClusterIP<\/strong>: IP reachable by all pods in the cluster<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ ps -ef | grep kube-api-server \/\/ check --service-cluster-ip-range=x.x.x.x\/y\n!! pod network shouldn't overlap with the service cluster range\n$ iptables -L -t nat | grep xxx\n$ cat \/var\/log\/kube-proxy.log<\/pre>\n\n\n\n<p><strong>NodePort<\/strong>: same port in all nodes, sent to the pod<\/p>\n\n\n\n<p>IPs for pods: check the logs of the weave pod:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl -n kube-system logs weave-POD weave \n    --> the pod has two containers so you need to specify one of them<\/pre>\n\n\n\n<p>IPs for services &#8211;> check kube-api-server config<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">8.10 CoreDNS<\/h2>\n\n\n\n<p>For pods and services in the cluster (nodes are managed externally)<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>kube dns<\/strong>: <strong>hostname    namespace  type  root           ip address<\/strong>\n          web-service apps       svc   cluster.local  x.x.x.x (service)\n          10-244-2-5  default    pod   cluster.local  x.x.x.y (pod)\n\n<strong>fqdn<\/strong>: web-service.apps.svc.cluster.local\n      
10-244-2-5.default.pod.cluster.local<\/pre>\n\n\n\n<p>The dns implementation in kubernetes uses <strong>coredns<\/strong> (two pods for ha)<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">cat \/etc\/coredns\/Corefile\n.:53 {\n  errors \/\/ plugins\n  health\n  kubernetes cluster.local in-addr.arpa ip6.arpa {\n     pods insecure \/\/ create record for pod as 10-2-3-1 instead of 10.2.3.1\n     upstream\n     fallthrough in-addr.arpa ip6.arpa\n  }\n  prometheus :9153\n  proxy . \/etc\/resolv.conf \/\/ for external queries (google.com) from a pod\n  cache 30\n  reload\n}\n\n$ kubectl get configmap -n kube-system<\/pre>\n\n\n\n<p>pods dns config:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">cat \/etc\/resolv.conf => nameserver IP \n    &lt;- it is the IP from $ kubectl get service -n kube-system | grep dns\n                         this comes from the kubelet config:\n                         \/var\/lib\/kubelet\/config.yaml:\n                           clusterDNS:\n                           - 10.96.0.10\n\n$ host ONLY_FQDN<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">8.11 Ingress<\/h2>\n\n\n\n<p>Using a service of type &#8220;LoadBalancer&#8221; is only possible in a cloud env like GCP, AWS, etc.<\/p>\n\n\n\n<p>When you create a LoadBalancer service, the cloud provider creates a proxy\/loadbalancer to access that service. So you can end up with a hierarchy of loadbalancers in the cloud provider\u2026 &#8211;> too complex ==> sol: Ingress<\/p>\n\n\n\n<p>ingress = controller + resources. 
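To see whether a controller is already running in a cluster (the namespace \"ingress-space\" is just an example, not a standard name):<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get pods -A | grep -i ingress\n$ kubectl get all -n ingress-space<\/pre>\n\n\n\n<p>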
Not deployed by default<\/p>\n\n\n\n<p>supported controllers: GCP HTTPS Load Balancer (GCE) and NGINX (used in kubernetes)<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">8.11.1 Controller<\/h3>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>1) nginx --> deployment file<\/strong>:\n---\napiVersion: apps\/v1\nkind: Deployment\nmetadata:\n  name: nginx-ingress-controller\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      name: nginx-ingress\n  template:\n    metadata:\n      labels:\n        name: nginx-ingress\n    spec:\n      containers:\n      - name: nginx-ingress-controller\n        image: quay.io\/kubernetes-ingress-controller\/nginx-ingress-controller:0.21.0\n        args:\n        - \/nginx-ingress-controller\n        - --configmap=$(POD_NAMESPACE)\/nginx-configuration\n        env:\n        - name: POD_NAME\n          valueFrom:\n            fieldRef:\n              fieldPath: metadata.name\n        - name: POD_NAMESPACE\n          valueFrom:\n            fieldRef:\n              fieldPath: metadata.namespace\n        ports:\n        - name: http\n          containerPort: 80\n        - name: https\n          containerPort: 443\n\n<strong>2) nginx configmap used in deployment<\/strong>\n---\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: nginx-configuration\n\n<strong>3) service<\/strong>\n---\napiVersion: v1\nkind: Service\nmetadata:\n  name: nginx-ingress\nspec:\n  type: NodePort\n  ports:\n  - port: 80\n    targetPort: 80\n    protocol: TCP\n    name: http\n  - port: 443\n    targetPort: 443\n    protocol: TCP\n    name: https\n  selector:\n    name: nginx-ingress\n\n<strong>4) service account (auth)<\/strong>: roles, clusterroles, rolebinding, etc\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: nginx-ingress-serviceaccount<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">8.11.2 Options to deploy ingress rules<\/h3>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>option1) 1rule\/1backend<\/strong>: In this case the selector from the service gives us the 
pod\n\ningress-wear.yaml\n---\napiVersion: extensions\/v1beta1\nkind: Ingress\nmetadata:\n  name: ingress-wear\nspec:\n  backend:\n    serviceName: wear-service\n    servicePort: 80\n\n\n<strong>option 2) split traffic via URL<\/strong>: 1 Rule \/ 2 paths\n\n           www.my-online-store.com\n          \/wear              \/watch\n                    |\n                    V\n                  nginx\n                    |\n           ----------------------\n           |                     |\n          svc                   svc\n          wear                  vid\n          ====                  ====\n           |                      |\n        wear-pod               vid-pod\n\n\ningress-wear-watch.yaml\n---\napiVersion: extensions\/v1beta1\nkind: Ingress\nmetadata:\n  name: ingress-wear-watch\nspec:\n  rules:\n  <strong>- http:<\/strong> \n      paths: \n      <strong>- path: \/wear<\/strong>\n        backend:\n          serviceName: wear-service\n          servicePort: 80\n      <strong>- path: \/watch<\/strong>\n        backend:\n          serviceName: watch-service\n          servicePort: 80\n\n$ kubectl describe ingress NAME\n    ==> watch out the default backend !!!! 
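\n        sketch (not from the lecture): with the nginx controller the default\n        backend can be set via a flag; the service name below is made up:\n        \/nginx-ingress-controller --default-backend-service=app-space\/default-http-backend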
\n        if nothing matches, it goes there!!!\n        <strong>you need to define a default backend<\/strong>\n\n\n\n<strong>option 3) split by hostname<\/strong>: 2 Rules \/ 1 path each\n\nwear.my-online-store.com           watch.my-online-store.com\n        |------------------------------------|\n                           |\n                           V\n                         nginx\n                           |\n                ----------------------\n                |                    |\n               svc                  svc\n               wear                 vid\n               ====                 ====\n                |                    |\n            wear-pod               vid-pod\n\n\ningress-wear-watch.yaml\n---\napiVersion: extensions\/v1beta1\nkind: Ingress\nmetadata:\n  name: ingress-wear-watch\nspec:\n  rules:\n  - host: wear.my-online-store.com \n    http: \n      paths: \n      - backend:\n          serviceName: wear-service\n          servicePort: 80\n  - host: watch.my-online-store.com\n    http: \n      paths: \n      - backend:\n          serviceName: watch-service\n          servicePort: 80\n<\/pre>\n\n\n\n<p>ingress examples: https:\/\/kubernetes.github.io\/ingress-nginx\/examples\/<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">8.12 Rewrite<\/h2>\n\n\n\n<p>I haven't seen any question about this in the mock labs but just in case: <a href=\"https:\/\/kubernetes.github.io\/ingress-nginx\/examples\/rewrite\/\">Rewrite url nginx<\/a>: <\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">For example: replace(path, rewrite-target)\nusing: http:\/\/&lt;ingress-service>:&lt;ingress-port>\/wear \n   --> http:\/\/&lt;wear-service>:&lt;port>\/\n\nIn our case: replace(\"\/wear\",\"\/\")\n\napiVersion: extensions\/v1beta1\nkind: Ingress\nmetadata:\n  name: test-ingress\n  namespace: critical-space\n  annotations:\n    nginx.ingress.kubernetes.io\/rewrite-target: \/\nspec:\n  rules:\n  - http: \n      paths: \n      - path: \/wear\n        backend:\n    
      serviceName: wear-service\n          servicePort: 8282\n\n<strong>with regex<\/strong>\nreplace(\"\/something(\/|$)(.*)\", \"\/$2\")\n\napiVersion: extensions\/v1beta1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io\/rewrite-target: \/$2\n  name: rewrite\n  namespace: default\nspec:\n  rules:\n  - host: rewrite.bar.com \n    http: \n      paths: \n      - backend:\n          serviceName: http-svc\n          servicePort: 80\n        path: \/something(\/|$)(.*)<\/pre>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">9- Troubleshooting<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">9.1 App failure<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">- make an application diagram\n- test the services: curl, kubectl describe service (compare with yaml)\n- pod status (restarts), describe pod, pod logs (-f)<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">9.2 Control plane failure<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">- get nodes, get pods -n kube-system\n\n- master: service kube-apiserver\/kube-controller-manager\/\n                                            kube-scheduler  status\n          kubeadm: kubectl logs kube-apiserver-master -n kube-system\n          service: sudo journalctl -u kube-apiserver\n\n- worker: service kubelet\/kube-proxy status\n\n\n- Are there static pods configured in the kubelet config?\n   1 check \/etc\/systemd\/system\/kubelet.service.d\/10-kubeadm.conf for config file\n   2 check static pod path in kubelet config<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">9.3 Worker node failure<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">- get nodes, describe node x (check status column)\n- top, df, service kubelet status, kubelet certificates, kubelet service running?\n- kubectl cluster-info<\/pre>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">10- JSONPATH<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">10.1 Basics<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">$ 
= root dictionary\nresults are always in [] \/\/ list\n\n$.car.price -> [1000]\n---\n{\n  \"car\": {\n    \"color\": \"blue\",\n    \"price\": \"1000\"\n   },\n  \"bus\": {\n    \"color\": \"red\",\n    \"price\": \"1200\"\n   }\n}\n\n$[0] -> [\"car\"]\n---\n[\n \"car\",\n \"bus\",\n \"bike\"\n]\n\n$[<strong>?(@>40)<\/strong>] == get all numbers greater than 40 in the array -> [45, 60]\n---\n[\n 12,\n 45,\n 60\n]\n\n$.car.wheels[?(@.location == \"xxx\")].model\n\n\/\/ find prize winner named Malala\n$.prizes[?(@)].laureates[?(@.firstname == \"Malala\")]\n\nwildcard\n---\n$[*].model\n$.*.wheels[*].model\n\nfind the first names of all winners of year 2014\n$.prizes[?(@.year == 2014)].laureates[*].firstname\n\nlists\n---\n$[0:3] (start:end) -> 0,1,2 (first 3 elements)\n$[0:8:2] (start:end:step) -> 0,0+2=2,2+2=4,4+2=6 -> \n                                elements in position 0,2,4,6\n$[-1:] = last element\n$[-3:] = last 3 elements<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">10.2 Jsonpath in Kubernetes<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get pods -o json\n\n$ kubectl get nodes -o=jsonpath='{.items[*].metadata.name}{\"\\n\"}\n                                 {.items[*].status.capacity.cpu}'\nmaster node01\n4      4\n\n$ kubectl get nodes -o=jsonpath='{range .items[*]}\\\n                          {.metadata.name}{\"\\t\"}{.status.capacity.cpu}{\"\\n\"}\\\n                          {end}'\nmaster 4\nnode01 4\n\n$ kubectl get nodes -o=custom-columns=NODE:.metadata.name,\n                                      CPU:.status.capacity.cpu \u2026\nNODE CPU\nmaster 4\nnode01 4\n\n$ kubectl get nodes --sort-by=.metadata.name\n\n$ kubectl config view --kubeconfig=\/root\/my-kube-config \n            -o=jsonpath='{.users[*].name}' > \/opt\/outputs\/users.txt\n\n$ kubectl config view --kubeconfig=my-kube-config \n       -o 
jsonpath=\"{.contexts[?(@.context.user=='aws-user')].name}\" >\n                     \/opt\/outputs\/aws-context-name<\/pre>\n\n\n\n<p><a href=\"https:\/\/kubernetes.io\/docs\/reference\/kubectl\/jsonpath\/\">Documentation<\/a>.<\/p>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">11- Install, Config and Validate Kube Cluster<\/h1>\n\n\n\n<p>All based on <a href=\"https:\/\/github.com\/mmumshad\/kubernetes-the-hard-way\">this<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">11.1- Basics<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">education: minikube\n           kubeadm\/gcp\/aws\n\non-prem: kubeadm\n\nlaptop: minikube: deploys VMs (that are ready) - single node cluster\n        kubeadm: require VMS to be ready - single\/multi node cluster\n\nturnkey solution: you provision, configure and maintein VMs. \n                  Use scripts to deploy cluster (KOPS in AWS)\n                 ie: openshift (redhat), Vagrant, VMware PKS, Cloud Foundry\n\nhosted solutions: (kubernetes as a service) provider provision and maintains VMs, install kubernetes: ie GKE in GCP<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">11.2 HA for Master<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">api-server --> need LB (active-active)\n\nactive\/passive\n$ kube-controller-manager --leader-elect true [options]\n  --leader-elect-lease-duration 15s\n  --leader-elect-renew-deadline 10s\n  --leader-elect-retry-period 2s\n\netcd: inside the masters (2 nodes total) or in separated nodes (4 nodes total)<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">11.3 HA for ETCD<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\">leader etcd, writes and send the info to the others\nleader election - RAFT:\n   quorum = n\/2 + 1 -> minimun num of nodes to accept a transactio\n                       successful.\n   recommend: 3 etcd nodes minimun => ODD NUMBER\n\n$ export ETCDCTL_API=3\n$ etcdctl put key value\n$ etcdctl get key\n$ etcdctl get \/ --prefix --keys-only<\/pre>\n\n\n\n<h2 
class=\"wp-block-heading\">11.4 Lab Deployment<\/h2>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>LAB setup (5nodes)<\/strong>\n  1 LB\n  2 master nodes (with etcd)\n  2 nodes\n  weave-net\n\n> download kubernetes latest release from github\n> uncompress\n> cd kubernetes\n> cluster\/get-kube-binaries.sh --> downloads the latest binaries for your system.\n> cd server; tar -zxvf server-linux-xxx\n> ls kubernetes\/server\/bin\n\n<strong>Plan:<\/strong>\n1- deploy etcd cluster\n2- deploy control plane components (api-server, controller-manager, scheduler)\n3- configure haproxy (for apiserver)\n\n        haproxy\n           |\n -------------------------\n |                       |\n M1:                     M2:\n api                     api\n etcd                    etcd\n control-manager         control-manager\n scheduler               scheduler\n\n W1:                      W2:\n gen certs                <strong>TLS Bootstrap<\/strong>:\n config kubelet             - w2 creates and configure certs itself\n renew certs                - config kubelet\n config kube-proxy          - w2 to renew certs by itself\n                            - config kube-proxy\n\n\n<strong>TLS bootstrap<\/strong>:\n1- in Master\n - create bootstrap token and associate it to group \"system:bootstrappers\"\n - assign role \"system:node-bootstrapper\" to group \"system:bootstrappers\"\n - assing role \"system:certificates.k8s.io:certificatesigningrequests:nodeclient\" to group \"system:bootstrappers\"\n - assing role \"system:certificates.k8s.io:certificatesigningrequests:selfnodeclient\" to group \"system:node\"\n\n2- kubelet.service\n   --bootstrap-kubeconfig=\"\/var\/lib\/kubelet\/bootstrap-kubeconfig\" \n       \/\/ This is for getting the certs to join the cluster!!\n   --rotate-certificates=true \/\/ this if for the client certs used to join the cluster (CSR automatic approval)\n   --rotate-server-certificates=true \/\/ these are the certs we created in the master and copied to 
the worker manually\nthe server cert requires manual CSR approval !!!\n\n> kubectl get csr\n> kubectl certificate approve csr-XXX\n\n\nbootstrap-kubeconfig\n---\napiVersion: v1\nclusters:\n- cluster:\n    certificate-authority: \/var\/lib\/kubernetes\/ca.crt\n    server: https:\/\/192.168.5.30:6443 \/\/(api-server lb IP)\n  name: bootstrap\ncontexts:\n- context:\n    cluster: bootstrap\n    user: kubelet-bootstrap\n  name: bootstrap\ncurrent-context: bootstrap\nkind: Config\npreferences: {}\nusers:\n- name: kubelet-bootstrap\n  user:\n    token: XXXXXXXXXX<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">11.5 Testing<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">11.5.1 manual test<\/h3>\n\n\n\n<pre class=\"wp-block-preformatted\">$ kubectl get nodes\n              pods -n kube-system (coredns, etcd, kube-apiserver, controller-manager, proxy, scheduler, weave)\n\n$ service kube-apiserver status\n          kube-controller-manager\n          kube-scheduler\n          kubelet\n          kube-proxy\n\n$ kubectl run nginx\n          get pods\n          scale --replicas=3 deploy\/nginx\n          get pods\n\n$ kubectl expose deployment nginx --port=80 --type=NodePort\n          get service\n$ curl http:\/\/worker-1:31850<\/pre>\n\n\n\n<h3 class=\"wp-block-heading\">11.5.2 kubetest<\/h3>\n\n\n\n<p>end to end test: 1000 tests (12h) \/\/ conformance: 160 tests (1.5h)<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">1- prepare: creates a namespace for this test\n2- creates test pods in this namespace, waits for the pods to come up\n3- test: executes curl on one pod to reach the ip of another pod over http\n4- record result\n\n$ go get -u k8s.io\/test-infra\/kubetest\n$ kubetest --extract=v1.11.3 (your kubernetes version)\n$ cd kubernetes\n$ export KUBE_MASTER_IP=\"192.168.26.10:6443\"\n$ export KUBE_MASTER=kube-master\n$ kubetest --test --provider=skeleton > test-out.txt \/\/ takes 12 hours\n$ kubetest --test --provider=skeleton --test_args=\"--ginkgo.focus=[Conformance]\" > 
testout.txt \/\/ takes 1.5 hours\n\n\n$ kubeadm join 172.17.0.93:6443 --token vab2bs.twzblu86r60qommq \\\n--discovery-token-ca-cert-hash sha256:3c9b88fa034a6f894a21e49ea2e2d52435dd71fa5713f23a7c2aaa83284b6700<\/pre>\n\n\n\n<h1 class=\"has-bright-blue-color has-text-color wp-block-heading\">12- Official cheatsheet<\/h1>\n\n\n\n<p><a href=\"https:\/\/kubernetes.io\/docs\/reference\/kubectl\/cheatsheet\/?source=post_page---------------------------\">here<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>I am studying for the Kubernetes certification CKA. These are some notes: 1- CORE CONCEPTS 1.1- Cluster Architecture Master node: manage, plan, schedule and monitor. These are the main components: etcd: db as k-v scheduler controller-manager: node-controller, replication-controller apiserver: makes communications between all parts docker Worker node: host apps as containers. Main components: kubelet (captain &hellip; <a href=\"https:\/\/blog.thomarite.uk\/index.php\/2020\/09\/01\/cka-p1\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> 
&#8220;CKA&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[27],"tags":[],"class_list":["post-402","post","type-post","status-publish","format-standard","hentry","category-kubernetes"],"_links":{"self":[{"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/posts\/402","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/comments?post=402"}],"version-history":[{"count":14,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/posts\/402\/revisions"}],"predecessor-version":[{"id":421,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/posts\/402\/revisions\/421"}],"wp:attachment":[{"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/media?parent=402"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/categories?post=402"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.thomarite.uk\/index.php\/wp-json\/wp\/v2\/tags?post=402"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}