For some time I had wanted to take a look at Kubernetes. There is a lot of talk about microservices in the cloud, and after attending some meetups I still wasn't sure what it was all about, so I signed up for KodeKloud to learn.
So far I have completed the beginner courses for Docker and Kubernetes. To be honest, I think the product is very good value for money.
I have been using Docker a bit over the last couple of months, but I still wanted to dig a bit deeper to improve my knowledge.
I was surprised to read that Kubernetes pods rely on Docker images.
Docker Notes
Docker commands
docker run -it xxx (interactive+pseudoterminal)
docker run -d xxx (detach)
docker attach ID (attach)
docker run --name TEST xxx (provide name to container)
docker run -p 80:5000 xxx (maps host port 80 to container port 5000)
docker run -v /opt/datadir:/var/lib/mysql mysql (map a host folder to container folder for data persistence)
docker run -e APP_COLOR=blue xxx (pass env var to the container)
docker inspect "container" -> check IP, env vars, etc
docker logs "container"
docker build . -t account_name/app_name
docker login
docker push account_name/app_name
docker -H=remote-docker-engine:2375 xxx (run the command against a remote Docker engine)
cgroups: restrict resources in container
docker run --cpus=.5 xxx (no more than 50% CPU)
docker run --memory=100m xxx (no more than 100M memory)
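A quick way to double-check that the limits were actually applied (the container name and image here are just examples):
$ docker run -d --name limited --cpus=.5 --memory=100m nginx
$ docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' limited
--> should print 500000000 (nano-CPUs = 0.5 CPU) and 104857600 (bytes = 100m)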
Dockerfile
----
FROM ubuntu
ENTRYPOINT ["sleep"]
CMD ["5"] --> if you don't pass any value to "docker run ...", it uses 5 by default.
----
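For example, assuming the image above is built and tagged as mysleeper (name made up):
$ docker build -t mysleeper .
$ docker run mysleeper        --> runs "sleep 5" (the CMD default)
$ docker run mysleeper 10     --> runs "sleep 10" (the argument overrides CMD)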
Docker Compose
$ cat docker-compose.yml
version: "3"
services:
db:
image: postgres
environment:
- POSTGRES_PASSWORD=mysecretpassword
wordpress:
image: wordpress
links:
- db
ports:
- 8085:80
verify file: $ docker-compose config
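To start and stop the whole stack (run from the folder containing docker-compose.yml):
$ docker-compose up -d    (start all services in the background)
$ docker-compose ps       (check their status)
$ docker-compose down     (stop and remove the containers and network)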
Docker Volumes
docker volume create NAME --> creates /var/lib/docker/volumes/NAME
docker run -v NAME:/var/lib/mysql mysql (docker volume)
or
docker run -v PATH:/var/lib/mysql mysql (local folder)
or
docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql
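To check where a named volume actually lives on the host:
$ docker volume ls
$ docker volume inspect NAME   --> "Mountpoint" shows the host path, e.g. /var/lib/docker/volumes/NAME/_data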
Docker Networking
networks: --network=xxx
bridge (default)
none (no networking at all, full isolation)
host (the container shares the host's network stack, so no port mapping is needed)
docker network create --driver bridge --subnet x.x.x.x/x NAME
docker network ls
docker network inspect NAME
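A small example of two containers on the same user-defined bridge reaching each other by name (names and subnet are made up):
$ docker network create --driver bridge --subnet 182.18.0.0/24 mynet
$ docker run -d --name web --network mynet nginx
$ docker run -it --network mynet busybox ping web   (user-defined bridges resolve container names via DNS)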
Docker Swarm
I didn't use this, only covered the theory. It is for clustering Docker hosts: a manager plus workers.
manager: docker swarm init
workers: docker swarm join --token xxx
manager: docker service create --replicas=3 my-web-server
Kubernetes Notes
container + orchestration: Docker + Kubernetes
node: virtual or physical machine where Kubernetes is installed
cluster: set of nodes
master: node that manages the cluster
kube components: api server, etcd (key-value store), scheduler (distributes the load), kubelet (agent on every node), controller (the brain: checks status), container runtime (software that runs the containers: docker)
master runs: api server, etcd, controller, scheduler
node runs: kubelet, container runtime
$ kubectl cluster-info
$ kubectl get nodes -o wide (extra info)
Setup Kubernetes with minikube
Setting up Kubernetes doesn't look like an easy task, so there are tools that do it for you, such as microk8s, kubeadm (my laptop needs more RAM, it can't handle 1 master + 2 nodes) and minikube.
minikube needs: VirtualBox (I couldn't make it work with kvm2…) and kubectl.
Install kubectl
I assume virtualbox is already installed
$ curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
$ kubectl version --client
Install minikube
$ grep -E --color 'vmx|svm' /proc/cpuinfo // verify your CPU supports virtualization
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
> && chmod +x minikube
$ sudo install minikube /usr/local/bin/
Start/Status minikube
$ minikube start --driver=virtualbox --> it takes time!!!! 2cpu + 2GB ram !!!!
😄 minikube v1.12.3 on Debian bullseye/sid
✨ Using the virtualbox driver based on user configuration
💿 Downloading VM boot image ...
> minikube-v1.12.2.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
> minikube-v1.12.2.iso: 173.73 MiB / 173.73 MiB [] 100.00% 6.97 MiB p/s 25s
👍 Starting control plane node minikube in cluster minikube
💾 Downloading Kubernetes v1.18.3 preload ...
> preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4: 510.91 MiB
🔥 Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.12 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready master 5m52s v1.18.3
$ minikube stop // stop the virtualbox VM to free up resources once you finish
Basic Test
$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
deployment.apps/hello-minikube created
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
hello-minikube 1/1 1 1 22s
$ kubectl expose deployment hello-minikube --type=NodePort --port=8080
service/hello-minikube exposed
$ minikube service hello-minikube --url
http://192.168.99.100:30020
$ kubectl delete services hello-minikube
$ kubectl delete deployment hello-minikube
$ kubectl get pods
Pods
Based on documentation:
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.
$ kubectl run nginx --image=nginx
$ kubectl describe pod nginx
$ kubectl get pods -o wide
$ kubectl delete pod nginx
Pods – Yaml
Pod yaml structure:
pod-definition.yml:
---
apiVersion: v1
kind: (type of object: Pod, Service, ReplicaSet, Deployment)
metadata: (only valid k-v pairs)
  name: myapp-pod
  labels: (any kind of k-v pairs)
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx
Example:
$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
    type: frontend
spec:
  containers:
  - name: nginx
    image: nginx
$ kubectl apply -f pod.yaml
$ kubectl get pods
Replica-Set
Based on documentation:
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
> cat replicaset-definition.yml
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:            # everything under "template" is the POD definition
    metadata:
      name: nginx
      labels:
        app: nginx
        type: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
  replicas: 3
  selector:            # <-- main difference from the replication-controller
    matchLabels:
      type: front-end
> kubectl create -f replicaset-definition.yml
> kubectl get replicaset
> kubectl get pods
> kubectl delete replicaset myapp-replicaset
How to scale via replica-set
> kubectl replace -f replicaset-definition.yml (first update file to replicas: 6)
> kubectl scale --replicas=6 -f replicaset-definition.yml // no need to modify file
> kubectl scale --replicas=6 replicaset myapp-replicaset // no need to modify file
> kubectl edit replicaset myapp-replicaset (NAME of the replicaset!!!)
> kubectl describe replicaset myapp-replicaset
> kubectl get rs new-replica-set -o yaml > new-replica-set.yaml ==> returns the rs definition in yaml!
Deployments
Based on documentation:
A Deployment provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
Example:
cat deployment-definition.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-controller
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end
> kubectl create -f deployment-definition.yml
> kubectl get deployments
> kubectl get replicaset
> kubectl get pods
> kubectl get all
Update/Rollback
From documentation.
By default, a deployment follows a "rolling update" strategy: destroy one pod, create a new one, and so on, so the update doesn't cause an outage.
$ kubectl create -f deployment.yml --record
$ kubectl rollout status deployment/myapp-deployment
$ kubectl rollout history deployment/myapp-deployment
$ kubectl rollout undo deployment/myapp-deployment ==> rollback!!!
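A typical way to trigger a rolling update is to change the deployment's image (the container name matches the deployment above; the version tag is just an example):
$ kubectl set image deployment/myapp-deployment nginx-controller=nginx:1.19 --record
$ kubectl rollout status deployment/myapp-deployment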
Networking
Pod networking is not handled natively by Kubernetes; you need another tool such as Calico, Weave, etc. More info here. This has not been covered in detail yet. It looks complex (and that's a network engineer talking…)
Services
Based on documentation:
An abstract way to expose an application running on a set of Pods as a network service. With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
Types:
NodePort: exposes the service on a port of each node (like docker port-mapping)
ClusterIP: internal virtual IP, reachable only from inside the cluster
LoadBalancer: provisions a load balancer from the cloud provider
Examples:
nodeport
--------
A service is like a virtual server inside the node:
targetPort: port on the pod (80)
port: port on the service itself (80)
nodePort: port exposed on the node (30080)
service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: mypapp-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30080   # valid range: 30000-32767
  selector:
    app: myapp
    type: front-end
> kubectl create -f service-definition.yml
> kubectl get services
> minikube service mypapp-service
clusterip:
---------
service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: back-end
loadbalancer: supported on cloud providers only (GCP, AWS, Azure)!
-----------
service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  type: LoadBalancer
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30080
  selector:
    app: myapp
> kubectl create -f service-definition.yml
> kubectl get services
Microservices architecture example
Diagram
=======
voting-app          result-app
 (python)            (nodejs)
    | (1)               ^ (4)
    v                   |
in-memoryDB             db
  (redis)          (postgresql)
    ^ (2)               ^ (3)
    |                   |
    +---------+  +------+
              |  |
             worker
             (.net)
1- deploy containers -> deploy PODs (deployment)
2- enable connectivity -> create service clusterIP for redis
create service clusterIP for postgres
3- external access -> create service NodePort for voting
create service NodePort for result
Code here. Steps:
$ kubectl create -f voting-app-deployment.yml
$ kubectl create -f voting-app-service.yml
$ kubectl create -f redis-deployment.yml
$ kubectl create -f redis-service.yml
$ kubectl create -f postgres-deployment.yml
$ kubectl create -f postgres-service.yml
$ kubectl create -f worker-deployment.yml
$ kubectl create -f result-app-deployment.yml
$ kubectl create -f result-app-service.yml
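As a reference, here is a minimal sketch of what voting-app-deployment.yml and voting-app-service.yml could look like; the image name, labels and nodePort are assumptions, the real files are in the repo linked above:
$ cat voting-app-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deployment
  labels:
    app: voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: voting-app
  template:
    metadata:
      labels:
        app: voting-app
    spec:
      containers:
      - name: voting-app
        image: kodekloud/examplevotingapp_vote:v1   # assumed image name
        ports:
        - containerPort: 80
$ cat voting-app-service.yml
apiVersion: v1
kind: Service
metadata:
  name: voting-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30004   # assumed, matches the URL shown below
  selector:
    app: voting-app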
Verify:
$ minikube service voting-service --url
http://192.168.99.100:30004
$ minikube service result-service --url
http://192.168.99.100:30005