The Home SDDC Lab is now fully online and operational, so I’ve begun deploying different Kubernetes clusters, leveraging different CNI projects. The plan is to get better exposure to what the community projects offer before I dive deeper into the VMware Tanzu offerings.
After deploying a rather vanilla Kubernetes 1.17.4 cluster with Flannel, I embarked on the journey of using Cilium as the CNI and its eBPF-powered cluster-wide network policies.
All of my Kubernetes VMs are running Ubuntu 18.04 with 2 vCPUs, 4 GB of RAM, and 160 GB of storage. The networking for these VMs is simply a VLAN-backed network on my Cisco Catalyst 3960g switch (not leveraging NSX-T for these clusters yet). Finally, the installation of Docker and Kubernetes is done through the official apt repos and automated with Ansible.
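For reference, the Ansible side boils down to a handful of apt tasks. The playbook below is only a minimal sketch of that idea, not my actual playbook; the host group name and pinned package versions are illustrative assumptions.

# Minimal sketch of the node-prep tasks; group name and versions are illustrative.
- hosts: k8s_nodes
  become: true
  tasks:
    - name: Add the Kubernetes apt signing key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present

    - name: Add the Kubernetes apt repository
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present

    - name: Install Docker and the Kubernetes packages from the official repos
      apt:
        name:
          - docker.io
          - kubelet=1.17.4-00
          - kubeadm=1.17.4-00
          - kubectl=1.17.4-00
        state: present
        update_cache: true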

Cilium Installation
Note: This guide assumes the Kubernetes software has already been deployed onto the bare-metal or virtual machines you intend to use with Cilium.
The official documentation is available here. As I am finding with most Kubernetes projects, the official documentation is the right first step; however, there are usually a few additional pieces required before everything runs smoothly.
Cilium relies on BPF and needs the BPF filesystem mounted locally on the Kubernetes master node(s) and all of the minion nodes. A systemd mount unit takes care of this:
/etc/systemd/system/sys-fs-bpf.mount

[Unit]
Description=Cilium BPF mounts
Documentation=http://docs.cilium.io/
DefaultDependencies=no
Before=local-fs.target umount.target
After=swap.target

[Mount]
What=bpffs
Where=/sys/fs/bpf
Type=bpf

[Install]
WantedBy=multi-user.target
After creating the systemd unit file, the service must be enabled and started on each node.
$ sudo systemctl enable sys-fs-bpf.mount
$ sudo systemctl start sys-fs-bpf.mount
From there, we can validate the service is running and the filesystem is mounted properly.
$ sudo systemctl status sys-fs-bpf.mount
● sys-fs-bpf.mount - Cilium BPF mounts
   Loaded: loaded (/etc/systemd/system/sys-fs-bpf.mount; enabled; vendor preset: enabled)
   Active: active (mounted) since Tue 2020-03-24 13:20:21 UTC; 1h 59min ago
    Where: /sys/fs/bpf
     What: bpffs
     Docs: http://docs.cilium.io/
  Process: 5744 ExecMount=/bin/mount bpffs /sys/fs/bpf -t bpf (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4660)
   CGroup: /system.slice/sys-fs-bpf.mount

Mar 24 13:20:21 cilium-master01 systemd[1]: Mounting Cilium BPF mounts...
Mar 24 13:20:21 cilium-master01 systemd[1]: Mounted Cilium BPF mounts.

$ mount | grep bpffs
bpffs on /sys/fs/bpf type bpf (rw,relatime)
If you haven’t already, initialize the Kubernetes cluster and then deploy the Cilium layer.
$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.7.1/install/kubernetes/quick-install.yaml
serviceaccount/cilium created
serviceaccount/cilium-operator created
configmap/cilium-config created
clusterrole.rbac.authorization.k8s.io/cilium created
clusterrole.rbac.authorization.k8s.io/cilium-operator created
clusterrolebinding.rbac.authorization.k8s.io/cilium created
clusterrolebinding.rbac.authorization.k8s.io/cilium-operator created
daemonset.apps/cilium created
deployment.apps/cilium-operator created
Check the status of the Cilium deployment.
$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   cilium-operator-6547f48966-wd7hl           0/1     Pending   0          2m39s
kube-system   cilium-pjdvr                               1/1     Running   0          2m39s
kube-system   coredns-6955765f44-qn5vq                   1/1     Running   0          10m
kube-system   coredns-6955765f44-r9872                   1/1     Running   0          10m
kube-system   etcd-cilium-master01                       1/1     Running   0          10m
kube-system   kube-apiserver-cilium-master01             1/1     Running   0          10m
kube-system   kube-controller-manager-cilium-master01    1/1     Running   0          10m
kube-system   kube-proxy-wwbzg                           1/1     Running   0          10m
kube-system   kube-scheduler-cilium-master01             1/1     Running   0          10m
Proceed to join the Kubernetes minions to the cluster and watch their respective Cilium pods deploy and come online.
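Joining each minion is just the kubeadm join command printed at the end of the kubeadm init output on the master; the address, token, and hash below are placeholders, not real values.

$ sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

Back on the master, the new node's Cilium pod appears and works through its init containers before going Running.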
$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
kube-system   cilium-j5q2h                               0/1     Init:0/1            0          28s
kube-system   cilium-operator-6547f48966-wd7hl           0/1     Pending             0          5m44s
kube-system   cilium-pjdvr                               1/1     Running             0          5m44s
kube-system   coredns-6955765f44-qn5vq                   1/1     Running             0          13m
kube-system   coredns-6955765f44-r9872                   1/1     Running             0          13m
kube-system   etcd-cilium-master01                       1/1     Running             0          13m
kube-system   kube-apiserver-cilium-master01             1/1     Running             0          13m
kube-system   kube-controller-manager-cilium-master01    1/1     Running             0          13m
kube-system   kube-proxy-wwbzg                           1/1     Running             0          13m
kube-system   kube-proxy-zrwsn                           0/1     ContainerCreating   0          28s
kube-system   kube-scheduler-cilium-master01             1/1     Running             0          13m

$ kubectl get nodes
NAME              STATUS   ROLES    AGE    VERSION
cilium-master01   Ready    master   15m    v1.17.4
cilium-node01     Ready    <none>   109s   v1.17.4
After adding all of the nodes, you are all set to start deploying applications to the Cilium Kubernetes cluster.
$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   cilium-5hrfc                               1/1     Running   0          33m
kube-system   cilium-d8ztv                               1/1     Running   0          29m
kube-system   cilium-hl9dx                               1/1     Running   0          8m28s
kube-system   cilium-j5q2h                               1/1     Running   0          65m
kube-system   cilium-jqj86                               1/1     Running   0          62m
kube-system   cilium-operator-6547f48966-wd7hl           1/1     Running   0          70m
kube-system   cilium-pjdvr                               1/1     Running   0          70m
kube-system   coredns-6955765f44-qn5vq                   1/1     Running   0          79m
kube-system   coredns-6955765f44-r9872                   1/1     Running   0          79m
kube-system   etcd-cilium-master01                       1/1     Running   0          79m
kube-system   kube-apiserver-cilium-master01             1/1     Running   0          79m
kube-system   kube-controller-manager-cilium-master01    1/1     Running   0          79m
kube-system   kube-proxy-5c8wz                           1/1     Running   0          29m
kube-system   kube-proxy-m6wzc                           1/1     Running   0          62m
kube-system   kube-proxy-mwndd                           1/1     Running   0          8m28s
kube-system   kube-proxy-rhk9x                           1/1     Running   0          33m
kube-system   kube-proxy-wwbzg                           1/1     Running   0          79m
kube-system   kube-proxy-zrwsn                           1/1     Running   0          65m
kube-system   kube-scheduler-cilium-master01             1/1     Running   0          79m

$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
cilium-master01   Ready    master   95m   v1.17.4
cilium-node01     Ready    <none>   81m   v1.17.4
cilium-node02     Ready    <none>   78m   v1.17.4
cilium-node03     Ready    <none>   49m   v1.17.4
cilium-node04     Ready    <none>   45m   v1.17.4
cilium-node05     Ready    <none>   24m   v1.17.4
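Coming back to the cluster-wide network policies mentioned at the start, here is a minimal sketch of what a CiliumClusterwideNetworkPolicy looks like; the policy name and app labels are hypothetical and purely for illustration.

apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: allow-web-from-frontend    # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: web                     # hypothetical label
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend          # hypothetical label
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP

Applied with kubectl apply -f, this restricts ingress to any pod labeled app=web, in any namespace, to TCP/80 traffic coming from pods labeled app=frontend.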
The next post will show how I deployed Hubble and validated that it was functioning properly in the cluster.
Enjoy!