Use a Single VM as a Kubernetes Cluster
Author: Sam Griffith

Kubernetes uses the term cluster to describe the grouping of physical or virtual machines that together make up a viable working environment. Thankfully, unlike other products out there, you do not have to sign up for anything, you do not have to pay for anything, and you are not limited to using it in only one way. All you really need to set up a cluster is a bit of command-line skill and a small VM or physical machine. So let's get going!
Basic Kubernetes Architecture
This cluster will consist of controller node(s) and worker node(s).
The controller node(s) house the “brains of the operation,” which is known officially as the Control Plane. This Control Plane includes an API Server that drives communication between internal and external processes, an etcd Database that stores the desired state of the cluster, and a Scheduler that decides where Pods should go.
The worker node(s) are where the computations take place. To put it another way, a worker node is where the Pods end up going.
A Pod is perhaps the most critical concept to understand when you are starting out learning Kubernetes. A Pod can be thought of as a wrapper around one or more Docker/Containerd containers; the wrapper gives those containers a common hostname and IP address. This simple wrapper opens up some really cool possibilities, but those are outside the scope of this post. For now, just remember: when I say Pod, you say group of one or more containers.
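To make that concrete, here is a minimal Pod manifest. The name, images, and command are placeholder choices for illustration, not anything this post's cluster requires. Both containers in this Pod share the same hostname and IP address:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # placeholder name
spec:
  containers:
  - name: web               # both containers share the Pod's IP and hostname
    image: nginx:1.17
  - name: sidecar
    image: busybox:1.31
    command: ["sleep", "3600"]
```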
If you are looking for even more information on Kubernetes concepts such as Pods, Worker Nodes, and Master Nodes, check out this Introduction to Kubernetes!
Installing the Cluster
Before even thinking about installing the Cluster, get yourself a virtual machine (VM). It doesn't have to be super beefy; 2 GB of RAM and 2 CPUs are more than enough. I like to use Ubuntu 18.04, as it is the current LTS (long-term support) release.
> Also, make sure that you have sudo privileges on the VM.
The process that I will walk you through here will let you stand up a kubeadm cluster on a single VM.
-
Kubernetes requires the installation of several packages and their dependencies, which will require root privileges. Since nobody likes typing sudo repeatedly, just become the root user.
$ sudo su
-
Now, open up your favorite text editor (I use vim) and copy the following into a bash script.
# vim kubeadm-k8s-install.sh
#!/bin/sh
echo 'Configuring K8s Pre-Reqs'
apt-get update
apt-get install libseccomp2
wget https://storage.googleapis.com/cri-containerd-release/cri-containerd-1.2.4.linux-amd64.tar.gz
sha256sum cri-containerd-1.2.4.linux-amd64.tar.gz
curl https://storage.googleapis.com/cri-containerd-release/cri-containerd-1.2.4.linux-amd64.tar.gz.sha256
tar --no-overwrite-dir -C / -xzf cri-containerd-1.2.4.linux-amd64.tar.gz
systemctl start containerd
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
bash -c 'echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list'
apt-get update
apt-get install -y docker.io kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
echo '[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"' > /etc/systemd/system/kubelet.service.d/0-containerd.conf
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/10-ip-forwarding.conf
sysctl net.ipv4.ip_forward=1
echo 'br_netfilter' > /etc/modules-load.d/br_nf.conf
modprobe br_netfilter
systemctl daemon-reload
-
Save and quit (Esc :wq Enter), then run your bash script as root.
# bash kubeadm-k8s-install.sh
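One caveat: the script downloads the release's .sha256 file and prints a checksum, but it never actually compares the two. If you want to verify the download by hand, sha256sum -c does the comparison for you. Here is a self-contained sketch using a stand-in file (with the real tarball, you would check cri-containerd-1.2.4.linux-amd64.tar.gz against its downloaded .sha256 file):

```shell
# Stand-in for the real tarball, just to demonstrate the check.
echo 'demo contents' > demo.tar.gz

# Record the checksum in the "checksum  filename" format sha256sum expects.
sha256sum demo.tar.gz > demo.tar.gz.sha256

# -c re-computes the checksum and compares; exits non-zero on a mismatch.
sha256sum -c demo.tar.gz.sha256
# prints "demo.tar.gz: OK"
```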
-
Now let’s use kubeadm to create our cluster. In my experience, this takes 2 to 3 minutes. Note that we are passing the flags --pod-network-cidr (which specifies the CIDR that our pod networking plugin requires) and --cri-socket (which ensures that Kubernetes knows we want to run our containers with Containerd, not Docker).
# kubeadm init --pod-network-cidr=192.168.0.0/16 --cri-socket "unix:///run/containerd/containerd.sock"
If you would like to add more nodes in the future, you will need to read through the output and find the line that starts with kubeadm join. This is not necessary for a single-VM cluster, though, so we will skip that step for now.
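As an aside, the /16 suffix in the --pod-network-cidr value says that the first 16 bits of 192.168.0.0 identify the network, leaving the remaining bits as addresses the plugin can hand out to Pods. A quick shell sketch of the arithmetic:

```shell
# A /16 network leaves 32 - 16 = 16 host bits,
# so the plugin has 2^16 addresses to assign to Pods.
prefix=16
host_bits=$((32 - prefix))
echo "$((1 << host_bits)) Pod addresses in a /$prefix network"
# prints "65536 Pod addresses in a /16 network"
```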
-
If you take a look at the output from the previous command, you can see some other directions there as well. Let’s follow those as a non-root user.
# exit
-
Create a new .kube directory to store our kubeconfig file in. This file is what lets kubectl connect to our cluster.
$ mkdir -p $HOME/.kube
-
Copy the admin configuration file into the directory you just made.
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
-
Change the ownership of the file you just copied to your own user and group IDs. This is what allows you, as a non-root user, to interact with your cluster.
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
-
Install the Calico networking add-on.
$ kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
-
Verify that you have networking running.
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-55754f75c-2w77d     1/1     Running   0          44s
kube-system   calico-node-9ktg6                           1/1     Running   0          44s
kube-system   coredns-5644d7b6d9-j695g                    1/1     Running   0          13m
kube-system   coredns-5644d7b6d9-tvds9                    1/1     Running   0          13m
kube-system   etcd-k8s-670-master-01                      1/1     Running   0          12m
kube-system   kube-apiserver-k8s-670-master-01            1/1     Running   0          12m
kube-system   kube-controller-manager-k8s-670-master-01   1/1     Running   0          12m
kube-system   kube-proxy-lw7tw                            1/1     Running   0          13m
kube-system   kube-scheduler-k8s-670-master-01            1/1     Running   0          13m
-
The next step is absolutely crucial: allowing Pods to be scheduled on your controller node. By default, kubeadm assumes you will be connecting other nodes to your master node to make a 'proper' cluster, so to stop you from placing Pods on the master it applies a taint to that node with the key node-role.kubernetes.io/master and the effect NoSchedule. Since we want to make the most of our single VM's resources, we will remove the NoSchedule taint from the master node.
Be sure to replace <master_name> with the name you have given to your host machine. If you get stuck, run kubectl get nodes to find it.
$ kubectl taint node <master_name> node-role.kubernetes.io/master-
-
Now you have a fully viable, kubeadm-initiated, single-VM Kubernetes cluster! Woohoo! Test it out by creating a deployment:
$ kubectl apply -f https://static.alta3.com/projects/k8s/zombie.yaml
-
Wait a few seconds, and then verify that the pods are running.
$ kubectl get pods
NAMESPACE   NAME                      READY   STATUS    RESTARTS   AGE
default     zombie-534447c62f-ag7ug   1/1     Running   0          43s
default     zombie-534447c62f-43geq   1/1     Running   0          43s
default     zombie-534447c62f-9gj56   1/1     Running   0          43s
There you have it, a fully functioning Kubernetes Cluster that is perfect for a learning environment!
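As a next experiment, try writing your own Deployment manifest and applying it with kubectl apply -f. Here is a minimal sketch; the name, label, image, and replica count are all arbitrary choices for illustration, not anything the cluster above requires:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy          # arbitrary name
spec:
  replicas: 3                 # three Pods, like the zombie example above
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello            # must match the selector above
    spec:
      containers:
      - name: hello
        image: nginx:1.17
```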