Bare Metal Kubernetes Quick Installation Arm64 & Arch

- kubernetes linux arch

I’m still playing with my three-node arm64 cluster. After running into stability issues with k3s, I turned to kubeadm to deploy a bare metal, non-HA (one master, two workers) Kubernetes cluster.

My host OS is Arch Linux, which is theoretically not supported but still works.

Required tasks

sudo pacman -S ethtool ebtables socat cni-plugins

Install aur/kubelet-bin and aur/kubeadm-bin

I needed a private registry to host my images. On the master node:

docker run -d -p 5000:5000 --restart=always -v /opt/local-path-provisioner/registry:/var/lib/registry  --name registry registry:2
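With the registry up, images get tagged with the registry host and pushed, then pulled from any node. A sketch of the workflow (the image name myapp:v1 is hypothetical):

```shell
# Hypothetical image: tag a locally built image with the registry host,
# push it, then reference it in pod specs as mymasternode:5000/myapp:v1.
docker tag myapp:v1 mymasternode:5000/myapp:v1
docker push mymasternode:5000/myapp:v1
# Ask the registry what it holds:
curl http://mymasternode:5000/v2/_catalog
```

The push only works once the insecure-registries entry below is in place on the pushing host, since the registry serves plain HTTP.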

On all hosts:

cat <<EOF | sudo tee /etc/sysctl.d/bridge.conf
net.bridge.bridge-nf-call-iptables=1
EOF
sudo modprobe br_netfilter
sudo sysctl --system

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "insecure-registries" : ["mymasternode:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
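A malformed daemon.json prevents dockerd from starting at all, so it's worth checking the syntax before the restart. A minimal sketch, using /tmp/daemon.json as a stand-in path and assuming python3 is available:

```shell
# Write the same config to a scratch path (stand-in for /etc/docker/daemon.json).
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries" : ["mymasternode:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# json.tool exits non-zero on a parse error, so OK only prints for valid JSON.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo OK
```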

sudo systemctl restart docker

Kubeadm

Start kubeadm on the master

sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address 192.168.40.10 --apiserver-cert-extra-sans extrahostname.node  --node-name mymasternode

To make kubectl talk to the new cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Point the kubelet at /usr/lib/cni, which is the CNI path used by the Arch package. Edit /var/lib/kubelet/kubeadm-flags.env:

KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --hostname-override=mymasternode --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf --cni-bin-dir=/usr/lib/cni"
sudo systemctl restart kubelet

On worker nodes:

sudo kubeadm join 192.168.40.10:6443 --token q3l12s.r811b5pbibi9mjy \
    --discovery-token-ca-cert-hash sha256:b67aaaaaaaaaaaaaaaaabbbbbbbccccccc --node-name myworker1
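If you've lost the join command, `sudo kubeadm token create --print-join-command` on the master prints a fresh one. The discovery hash itself is nothing magic: it's the SHA-256 digest of the cluster CA's DER-encoded public key. A sketch of the computation on a throwaway self-signed certificate (on the real master you'd point openssl at /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway CA certificate, for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
# Extract the public key, convert to DER, hash it: the hex digest is
# what kubeadm expects after "sha256:".
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
```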

Modify /var/lib/kubelet/kubeadm-flags.env to add --cni-bin-dir=/usr/lib/cni on the workers then restart kubelet.

Install Flannel

On this very small cluster with a dedicated layer 2 connection there is no need for vxlan (which I had issues troubleshooting with k3s), so I’ve applied the multi-arch Flannel deployment with a twist:

curl https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml | sed "s/vxlan/host-gw/" > kube-flannel.yaml
kubectl apply -f kube-flannel.yaml
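A quick local sanity check that the sed swap does what's intended; the one-line sample below is fabricated, standing in for the backend stanza of the real manifest:

```shell
# Fabricated one-line stand-in for flannel's net-conf.json backend stanza.
printf '{ "Backend": { "Type": "vxlan" } }\n' > /tmp/flannel-snippet
sed -i 's/vxlan/host-gw/' /tmp/flannel-snippet
grep -c host-gw /tmp/flannel-snippet   # prints: 1
```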

Label your nodes as desired:

kubectl label node myworker2 node-role.kubernetes.io/worker=worker

Delete everything

If something goes wrong you can restart from scratch with:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
sudo kubeadm reset
docker system prune -a
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X