Setting up Kubernetes Cluster on Arch Linux

This article will demonstrate the steps to create a Kubernetes cluster on Arch Linux. It is simple, technical, and aimed at starters. It is not theoretical or about concepts; for concepts you should be looking at the Kubernetes site.

We will be using kubeadm rather than something more baked and already set up like minikube or Canonical's MicroK8s. With kubeadm I could see what is happening, though it takes a little more effort. We will be using 2 machines, one for the Master and the other for the Worker.

Ideally you would not want to be root when installing or running the cluster, as would be the case in production environments. So create a user with sudo privileges. On Arch Linux, as root, create a user and add it to the groups wheel, storage and users.
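
A minimal sketch of that, assuming the username k8sadmin (pick whatever name you like):

useradd -m -G wheel,storage,users k8sadmin   # -m creates the home directory, -G adds the supplementary groups
passwd k8sadmin                              # set a password for the new user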

Install sudo: pacman -S sudo
Run visudo and uncomment the line that allows members of group wheel to execute any command:
%wheel ALL=(ALL) ALL
Do this on both the Master and Worker machines.


The first steps are to install the container runtime (in our case this will be docker) and Kubernetes. On both the Master and Worker, install them using:

sudo pacman -S docker
sudo pacman -S kubeadm kube-proxy cni-plugins kubectl
Perform the following on the Master first.
Turn off swap: sudo swapoff -a
Start the docker and kubelet services:
sudo systemctl start docker
sudo systemctl start kubelet

P.S.: You may want to enable docker and kubelet on startup. For that use systemctl enable, and also disable swap permanently. For me, manual was fine as I do not want to hold up systemd on something that isn't critical.
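
If you do want it to persist across reboots, a rough sketch (the fstab edit assumes your swap device is listed there):

sudo systemctl enable docker kubelet    # start both services on boot
# then comment out the swap line in /etc/fstab so swap stays off after a reboot, e.g.
# UUID=xxxx-xxxx  none  swap  defaults  0 0   becomes   #UUID=xxxx-xxxx  none  swap  defaults  0 0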

Install a network addon. We will use flannel. (Note: kubectl can only apply this once the control plane is up, so run this after the kubeadm init step below.)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
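
One thing to be aware of: the flannel manifest ships with its own default pod network (10.244.0.0/16 at the time of writing), and that value should generally match the --pod-network-cidr you pass to kubeadm init below. A rough sketch of checking and adjusting it, assuming you keep a local copy of the manifest:

curl -sO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
grep -A 6 net-conf.json kube-flannel.yml   # shows the "Network" value flannel will use
# edit the Network value to match your --pod-network-cidr if it differs, then:
kubectl apply -f kube-flannel.yml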

My home network CIDR is 192.168.1.0/16. A /16 leaves 16 host bits, so 2 raised to the power 16 is the number of IP addresses I will get, with the last two octets of the address free to change.

Master node is on IP 192.168.1.26
Worker will be on IP 192.168.1.24

It is important to note that CIDRs should not overlap, as the Master, Workers and every element present on the cluster will have the same virtual network layered over the actual physical subnets. Consider the fact that you could be doing this in an AWS VPC, in which each VPC will have a non-overlapping CIDR; it kind of imposes a restriction. That's where I feel K8s and Docker are so much about networking that it's difficult to get your head around them if you don't come from that hemisphere.

On the Master and Worker nodes turn off swap.  [sudo swapoff -a]


Start docker and kubelet on the Master (if not already running):

sudo systemctl start docker
sudo systemctl start kubelet

Initialize the control plane

sudo kubeadm init --apiserver-advertise-address=192.168.1.26 --kubernetes-version stable-1.19 --pod-network-cidr=192.168.1.1/16

Note: 192.168.1.26 is the Master node IP address that is being passed to the control plane. This will be the server on which the API service will be available, and the CIDR used by the cluster will span my entire home network. I could have used a smaller CIDR like /27 (5 host bits, i.e. 32 addresses) since I have only 2 machines.

On success you will see the message “Your Kubernetes control-plane has initialized successfully!” along with the token and the command to join the worker. Note down the kubeadm join command and token that are printed.

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join <your master node ip>:6443 --token <your-token> \
--discovery-token-ca-cert-hash <your-hash>
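
If you lose this output, a fresh join command can be generated on the Master later; a small sketch:

sudo kubeadm token create --print-join-command   # prints a new kubeadm join line with token and hash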

Copy over the config file to your home directory so kubectl works as your normal user.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

At this point check the pods created on the Master: kubectl get pods -A (pods in all namespaces). The coredns pods may be in Pending; the rest should be running as below. Re-run the command till the dns pods are up.

[lali@lali ~]$ kubectl get pods -A
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-cn8vq        1/1     Running   5          5d13h
kube-system   coredns-f9fd979d6-h9565        1/1     Running   5          5d13h
kube-system   etcd-lali                      1/1     Running   3          3d8h
kube-system   kube-apiserver-lali            1/1     Running   3          3d8h
kube-system   kube-controller-manager-lali   1/1     Running   4          3d8h
kube-system   kube-flannel-ds-lgdql          1/1     Running   3          3d8h
kube-system   kube-flannel-ds-wcb7h          1/1     Running   9          3d8h
kube-system   kube-proxy-9jc8k               1/1     Running   8          3d8h
kube-system   kube-proxy-9rzc9               1/1     Running   3          3d8h
kube-system   kube-scheduler-lali            1/1     Running   4          3d8h

Go to the Worker node (I ssh into it), turn off swap, and start docker and kubelet as we did on the Master. Then run the kubeadm join command. It should tell you that the node was registered.
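
For example, a sketch with placeholder values (use the token and hash from your own kubeadm init output):

sudo swapoff -a
sudo systemctl start docker kubelet
sudo kubeadm join 192.168.1.26:6443 --token <your-token> --discovery-token-ca-cert-hash sha256:<your-hash>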

Label the Worker node (chicu is my Worker's hostname): kubectl label node chicu kubernetes.io/role=worker

On the Master node run the below command

[lali@lali ~]$ kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
chicu   Ready    worker   5d13h   v1.19.4
lali    Ready    master   5d13h   v1.19.4

Note : It shows the Worker node as registered and running.

Execute the following command on Worker : kubectl get pods -A

It will not list anything; instead it will display something like the below:

The connection to the server 192.168.1.26:6443 was refused - did you specify the right host or port?

To fix this we need to copy over the kubernetes configuration files to our home directory as we did for Master.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
sudo chown -R $(id -u):$(id -g) $HOME/.kube/
sudo cp /var/lib/kubelet/pki/kubelet-client-current.pem $HOME/.kube/
sudo chown $(id -u):$(id -g) $HOME/.kube/kubelet-client-current.pem

Modify the lines in $HOME/.kube/config that reference /var/lib/kubelet/pki/kubelet-client-current.pem. Point them at the copy in your home directory (kubeconfig files do not expand $HOME, so use the full path, e.g. /home/<your-user>/.kube/kubelet-client-current.pem):

      client-certificate: /home/<your-user>/.kube/kubelet-client-current.pem
      client-key: /home/<your-user>/.kube/kubelet-client-current.pem

Basically, the config should point to the certificate copied over to the .kube directory in $HOME.
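
As an aside, another common way to get kubectl working on a worker is to copy the admin kubeconfig over from the Master instead of reusing the kubelet credentials. A sketch, assuming root ssh to the Master is permitted (note this gives full admin rights, unlike the kubelet credentials):

scp root@192.168.1.26:/etc/kubernetes/admin.conf $HOME/.kube/config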

Re-run kubectl get pods -A. It will show the pods running on the Master. The output of this command on the Master and Worker will now be identical:

NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
default       presto-58766c69b8-qvfx7        1/1     Running   3          2d7h
kube-system   coredns-f9fd979d6-h9565        1/1     Running   5          5d14h
kube-system   etcd-lali                      1/1     Running   5          5d14h
kube-system   kube-apiserver-lali            1/1     Running   5          5d14h
kube-system   kube-controller-manager-lali   1/1     Running   6          5d14h
kube-system   kube-flannel-ds-lgdql          1/1     Running   5          5d13h
kube-system   kube-flannel-ds-wcb7h          1/1     Running   10         5d13h
kube-system   kube-proxy-9jc8k               1/1     Running   9          5d13h
kube-system   kube-proxy-9rzc9               1/1     Running   5          5d14h
kube-system   kube-scheduler-lali            1/1     Running   6          5d14h

Let's deploy our first containerized application. I will be using a simple Apache web server container from Docker Hub. By default K8s is configured to pull images from Docker Hub.

Run the following command on the Master to create the deployment and name it presto:

   kubectl create deployment presto --image=httpd --port=80

Check the deployment status:

[lali@lali ~]$ kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
presto   1/1     1            1           5d10h
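
If you would rather block until the rollout has finished than poll, a small sketch:

kubectl rollout status deployment/presto   # waits until the deployment's pods are available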

It's ready, so we need to get the pod's endpoint on the cluster to access it. For that:

Get the pod name running the container

 [lali@lali ~]$ kubectl get pods | grep presto
presto-58766c69b8-qvfx7   1/1   Running   3   2d7h

Get the IP

[lali@lali ~]$ kubectl describe pod presto-58766c69b8-qvfx7 | grep IP
IP: 192.168.2.11
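
A slightly quicker way to see the same thing (plus which node the pod landed on) is the wide output:

kubectl get pods -o wide   # shows each pod's IP and the node it is scheduled on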

Go to the Worker node and run curl 192.168.2.11:80

It will show the Apache test page:

<html><body><h1>It works!</h1></body></html>

Note that at this point this command will time out on the Master. Also, if we restart the pod the IP will change. To address that we need to expose the deployment and create a permanent endpoint for it in the cluster. To do so run:

  kubectl expose deployment presto --port=8080 --target-port=80 --name=presto-service --type=LoadBalancer

To get the cluster IP on which to access this container, run the following:

[lali@lali ~]$ kubectl get services

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP          5d14h
presto       LoadBalancer   10.101.10.30   <none>        8080:31337/TCP   3d13h

When we expose a deployment we create a service out of it in K8S terminology.

curl 10.101.10.30:8080 on the Worker would show the "It works" message. But 10.101.10.30 is a cluster IP, so on that address the service is not reachable from outside the cluster.
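
As an aside, the 8080:31337/TCP in the output above means the service was also given a NodePort (31337 here; the number will differ on your cluster), which is opened on every node. So something like this should already work from the home network:

curl 192.168.1.24:31337   # <worker node IP>:<NodePort>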

To bind it to a publicly available IP we need to use a LoadBalancer (typically provided by cloud providers like AWS), use Ingress, or just use some basic networking principles. In this case our container is running on the Worker node, which has a public IP of 192.168.1.24. We will patch the service to assign that IP as a public IP for the service.

 kubectl patch svc presto -n default -p '{"spec": {"type": "LoadBalancer", "externalIPs":["192.168.1.24"]}}'

Now curl 192.168.1.24:8080 from anywhere within your home network, or open the URL in a browser, and the "It works!" message will be displayed.

To add more worker nodes to the cluster, essentially what is required is replicating the steps we did for the worker here, with some minor adjustments. You can also scale the deployment to spin up more containers if you want to; that's a simple copy-paste of a command from the K8s website, for example:
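
A sketch of scaling the presto deployment to 3 replicas (and checking where they ended up):

kubectl scale deployment presto --replicas=3
kubectl get pods -o wide | grep presto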

If you want to do away with the cluster, follow the Arch Wiki. It basically consists of 2 commands (replace lali with your node name):

 kubectl drain lali --delete-local-data --force --ignore-daemonsets
 kubeadm reset

Stop the docker and kubelet services with systemctl stop. If you have enabled those services, don't forget to disable them to prevent them from being started on machine reboots.
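
For example:

sudo systemctl stop kubelet docker
sudo systemctl disable kubelet docker   # only needed if you enabled them earlier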

Hope you liked it. The options for K8s are pretty expansive. The aim here was to be a sort of Hitchhiker's Guide: a quick start with the concepts to see how it works before a deep dive. Something I found kind of hard to come by on the web.

Why Arch?

Those who are familiar with Arch Linux know that what we get with Arch is 80% kernel and some utilities to manage things like disks, network etc. With Debian-based variants or other Arch variants (including servers) we get a full OS. With minimal Arch you only put in the things you need, and you know what you put in. This is essential in the "trying out" paradigm as it shows what's going on under the hood. Plus, when things are working it's fine; but then comes an upgrade and it goes bust. One will not be entirely clueless, as we would know what part of the kernel/OS it is interacting with at the first layer.

You have a very basic setup which is inherently production-like in nature, and you can enhance it: use a Load Balancer on AWS if you have a free tier, implement security, HA control planes etc. The possibilities are infinite.

