Kubernetes
Kubernetes is an open source container orchestration tool designed to automate the deployment, scaling, and management of containerized applications. It offers high availability (no downtime), scalability (high performance) and disaster recovery (backup and restore).
Hierarchy of Kubernetes system
A Kubernetes system is composed of a cluster, which contains one or more nodes. A node is a physical server or virtual machine that can allocate one or more pods. Each pod can manage one or more containers, although normally it is only one. Note that Kubernetes does not manage containers directly: the smallest element Kubernetes manages is a pod.
Cluster -> Node -> Pod -> Container
Main Kubernetes components
- Pod: The smallest unit Kubernetes manages. It is an abstraction over one or more containers.
- Service: It sits in front of a pod and routes traffic to it. It has two main functions: holding a permanent IP address and acting as a load balancer. Pods communicate with each other via services.
- Ingress: Routes external HTTP/HTTPS traffic from outside the cluster to services inside it.
- Volumes: Attach storage to a pod so that data survives pod restarts.
- ConfigMap: Stores non-confidential configuration data, such as database URLs, as key-value pairs.
- Secrets: Like ConfigMap, but for confidential data such as passwords and certificates, stored base64 encoded.
- Deployment: A blueprint for stateless pods; it defines how many replicas to run and how to update them.
- StatefulSet: Like a Deployment, but for stateful applications such as databases, where pod identity and storage matter.
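As an illustration of a Service sitting in front of pods, a minimal manifest might look like the following sketch (the name nginx-service and the label app: nginx are hypothetical example values, not taken from this tutorial):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx          # routes traffic to pods carrying this label
  ports:
  - protocol: TCP
    port: 80            # the Service's own, permanent port
    targetPort: 80      # the container port on the matched pods
```

The Service keeps its IP and port even when the pods behind it are recreated, which is what makes its address permanent.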
Kubernetes architecture
A simple Kubernetes cluster is composed of one master node and one or several worker nodes.
A worker node is composed of three processes:
- Container runtime: Such as Docker.
- Kubelet: Executes the orders received from the master's scheduler; it starts pods and their containers on the node.
- Kubeproxy: Forwards requests from services to pods.
A master node is responsible for scheduling, re-scheduling/restarting and monitoring pods, and for joining new nodes. A master node is composed of four processes:
- API Server: The cluster gateway; every request from kubectl, the UI or the API goes through it. It also acts as gatekeeper for authentication.
- Scheduler: Decides on which worker node a new pod should be placed, based on the resources available.
- Controller Manager: Detects state changes, such as crashed pods, and asks the scheduler to re-schedule them.
- etcd: The cluster brain. It is a key-value store that holds the status of the whole cluster; every cluster change is stored there.
Deployment and layers of abstraction
Deployment manages a ReplicaSet. A ReplicaSet manages a Pod. A Pod is an abstraction of a Container.
Deployment is the way to launch Pods. Everything below a Deployment is handled automatically by Kubernetes. This means that once you launch a Deployment, there is no further configuration you can do to its ReplicaSets and Pods directly.
- CRUD commands:
kubectl create deployment [NAME]
kubectl edit deployment [NAME]
kubectl delete deployment [NAME]
- Status of different k8s components:
kubectl get nodes
kubectl get pods
kubectl get services
kubectl get replicaset
kubectl get deployment
- Debugging pods:
kubectl logs [pod name]
kubectl exec -it [pod name] -- /bin/bash
kubectl describe pod [pod name]
- Configuration file for CRUD:
kubectl apply -f [filename]
kubectl delete -f [filename]
Configuration file
Each configuration file has 3 parts:
- Metadata: the name and labels of the component
- Specification (spec): the desired state you declare
- Status: generated and updated automatically by Kubernetes, which compares the desired state with the actual state
Each configuration file can be written for one of the k8s components:
- Deployment
- Service
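As a sketch, a minimal Deployment configuration file showing the metadata and specification parts might look like this (the name nginx-deployment, the label app: nginx and the image nginx:1.16 are arbitrary example values, not taken from this tutorial):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:                   # part 1: metadata of the component
  name: nginx-deployment
  labels:
    app: nginx
spec:                       # part 2: specification (the desired state)
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:                 # blueprint for the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80
```

The third part, status, is not written by hand: Kubernetes adds and keeps it up to date after the file is applied with kubectl apply -f.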
Tutorial
Launch three VMs with Vagrant
For this tutorial you need to install Vagrant and VirtualBox on your local machine.
We'll generate three virtual machines with CentOS 7.6 installed using a Vagrantfile.
Also, in the three VMs, we will install Docker and disable SELinux and firewalld using this file: playbook_centos_install_docker.yaml
- Get the files:
mkdir MyCluster && cd MyCluster
wget https://raw.githubusercontent.com/cheomanigua/webdevdocs/master/aux/Vagrantfile
wget https://raw.githubusercontent.com/cheomanigua/webdevdocs/master/aux/playbook_centos_install_docker.yaml
- Create the VMs:
vagrant up
- How to access the VMs:
vagrant status
vagrant ssh master
or
vagrant ssh worker1
or
vagrant ssh worker2
- Login as root user once inside the VM:
su -
The password is vagrant for all three VMs.
Common configuration
The next steps have to be performed logged in as root in all three VMs.
- Configure Kubernetes repository
Copy the whole text and paste it in the command line:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
- Install kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
systemctl start kubelet
- Disable SWAP
sed -i '/swap/d' /etc/fstab
swapoff -a
- Set the net.bridge.bridge-nf-call-iptables to ‘1’ in your sysctl config file. This ensures that packets are properly processed by IP tables during filtering and port forwarding.
echo 'net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.conf
sysctl -p
How to Deploy a Kubernetes Cluster
The next steps have to be performed logged in as root in the master VM.
- Initiate kubeadm
kubeadm init --ignore-preflight-errors=SystemVerification --apiserver-advertise-address=192.168.2.10 --pod-network-cidr=10.244.0.0/16
Don't forget to copy and keep the 'kubeadm join' token and hash
- Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Configure flannel
cd /root
curl -Lo kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i.bak -e "s/ip-masq/ip-masq\\n - --iface=eth1/g" kube-flannel.yml
IMPORTANT: Edit kube-flannel.yml and find all the lines with '- --iface=eth1'. These lines have tab indentation, which will yield an error when running kubectl create. Remove the tab indentation and add space indentation instead.
Finally:
kubectl create -f kube-flannel.yml
It will take a minute or so for the pods and nodes to be ready.
- Check status of cluster
kubectl get pods --all-namespaces
kubectl get nodes
Adding a Worker Node to Cluster
- Login as root in the worker1 node
vagrant ssh worker1
su -
The password is vagrant.
- Join worker1 node to cluster
kubeadm join 192.168.2.10:6443 --token blablabla --discovery-token-ca-cert-hash sha256:blablabla
The token and hash are the ones generated when running kubeadm init
You can repeat the steps for the worker2 node.
Once finished, you can check the status of the cluster from the master node:
su -
kubectl get pods --all-namespaces
kubectl get nodes
Removing a Worker Node from the Cluster
kubectl drain worker2 --delete-local-data --ignore-daemonsets
Delete Worker Node permanently from Cluster
kubectl delete node worker2
Generate new token
If the token has expired, you can generate a new one:
kubeadm token create --print-join-command
You can list all tokens:
kubeadm token list
Deploying an application to the Cluster
From the master node as root, issue:
kubectl run my-nginx --image=nbrown/nginxhello:1.12.1 --port=80
kubectl get pod
kubectl expose pod my-nginx --type=NodePort --name my-nginx-service
kubectl get service
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP        21m
my-nginx-service   NodePort    10.109.2.145   <none>        80:31606/TCP   32s
We can visit the application at http://192.168.2.11:31606 (a worker node's IP plus the NodePort assigned by the service).