Kubernetes with Cilium & Hubble

The first post in the series showed how to deploy Cilium as the CNI plugin for a Kubernetes cluster. This post will focus on bringing Hubble online within the cluster to assist in the visualization of the cluster and the deployments running on it. Cilium has provided decent documentation on Hubble that can be accessed on the GitHub site.

Hubble Installation

First off, we need Helm to get Hubble running within the Kubernetes cluster. Download the Helm tarball (or use your install media of choice) and follow the directions contained here.

$ tar zxvf helm-v3.1.2-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/local/bin
$ helm version

Next, I needed to add the kubernetes-charts.storage.googleapis.com repo to Helm.

$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
$ helm repo update

Now, here is where I had to make a few deviations from the standard documentation, which seems focused on running Hubble within a minikube installation. Since I am running my Kubernetes clusters within a more production-like VMware SDDC environment, I found some of the documentation's assumptions didn't hold for me. This is a criticism I have of a lot of the Kubernetes documentation currently available: it focuses on running Kubernetes on laptops or single-server machines, and doesn't discuss how to run Kubernetes in a data center.

So, first I cloned the GitHub repo for Hubble to the master node of my Cilium Kubernetes cluster.

$ git clone https://github.com/cilium/hubble.git 

Once the repo was cloned, I was able to lean on the documentation again to create the hubble.yaml file, although I needed to pull the necessary syntax from several different documentation pages.

$ cd hubble/install/kubernetes
$ helm template hubble \
--namespace kube-system \
--set ui.enabled=true \
--set metrics.enabled="{dns:query;ignoreAAAA;destinationContext=pod-short,drop:sourceContext=pod;destinationContext=pod,tcp,flow,port-distribution,icmp,http}" \
> hubble.yaml

This created the hubble.yaml file used to create the server on the Kubernetes cluster. However, before launching Hubble into the cluster there are two things I recommend doing, and that I found necessary within my own cluster.

First, grep the hubble.yaml file to see what address and port the Hubble-UI service will be accessible on.

$ grep listen hubble.yaml
     - --listen-client-urls=0.0.0.0:50051
     - --listen-client-urls=unix:///var/run/hubble.sock

If you would like the Hubble-UI service to run on a different port than 50051, you can add the following syntax to the helm template hubble command above.

--set listenClientUrls='{0.0.0.0:PORT}'

Second, the /var/run/hubble.sock file will need to already exist on the master and minion nodes prior to launching the Hubble service — otherwise, the pods will launch but stay in a 0/1 READY status.

$ sudo touch /var/run/hubble.sock && sudo chmod 664 /var/run/hubble.sock
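The touch command above has to be repeated on every master and minion node. Below is a sketch of that step; the HUBBLE_SOCK override is my own addition so the permission result can be dry-run without root (on the actual nodes, keep the default /var/run/hubble.sock path and run it with sudo on each node):

```shell
# Pre-create the socket file Hubble expects, with 664 permissions.
# HUBBLE_SOCK is an overridable path (an assumption for dry runs);
# on real nodes use /var/run/hubble.sock via sudo, on every node.
HUBBLE_SOCK="${HUBBLE_SOCK:-/tmp/hubble.sock}"
touch "$HUBBLE_SOCK"
chmod 664 "$HUBBLE_SOCK"
stat -c '%a' "$HUBBLE_SOCK"   # prints 664
```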

At this point, and through much troubleshooting and testing on the cluster, I was able to launch Hubble into the Kubernetes cluster and have it properly come online.

$ kubectl apply -f hubble.yaml

The Kubernetes cluster should now be running a hubble-ui container and a hubble-xxxxx container for each Kubernetes node in the cluster.
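A quick pod listing confirms that; this is a sketch, and the label selectors are assumptions based on the chart's default k8s-app labels:

```shell
# List the Hubble agent pods (one per node) and the UI pod;
# all should show 1/1 READY once the socket file exists.
kubectl -n kube-system get pods -l k8s-app=hubble -o wide
kubectl -n kube-system get pods -l k8s-app=hubble-ui
```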

The final step in deploying all of the supporting components for Hubble is the deployment of Prometheus and Grafana stacks. The documentation says if you already have those deployed, you can leverage those existing stacks for Hubble. My Cilium-backed Kubernetes cluster is brand new, so I needed to deploy those workloads.

$ kubectl create namespace cilium-monitoring
$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.6/examples/kubernetes/addons/prometheus/monitoring-example.yaml

After a few moments the containers should be created and reach a Ready state.

Accessing the Hubble UI

So this is really where the rubber meets the road for me — being able to access the Hubble UI and visualize the Cilium-backed Kubernetes cluster. Again, this is an area where the typical documentation, Stack Overflow posts, and blog posts really let me down. All of them assumed the Kubernetes cluster was running locally on my laptop or in a single VM using minikube. I couldn't find any easy-to-follow documentation on accessing the Hubble UI as it would run in a production Kubernetes environment inside a remote data center.

But this is why I’m doing all of this in the lab — to learn, to discover, and ultimately, to share with the community.

The simplest way is to open a port-forward to the Hubble-UI service by running the following in a shell.

$ export POD_NAME=$(kubectl get pods --namespace kube-system -l "k8s-app=hubble-ui" -o jsonpath="{.items[0].metadata.name}")
$ kubectl --namespace=kube-system port-forward --address 0.0.0.0 $POD_NAME 12000

Open a browser window to the master Kubernetes node on port 12000 and you should be able to see the following (after selecting the kube-system namespace).

Access Hubble Metrics in Grafana

The metrics we set up in the hubble.yaml file are viewable through a dashboard in Grafana. The instructions on the Cilium GitHub site refer to opening a port-forward to the Grafana service running in the cilium-monitoring namespace created during deployment. Again, the documented command is only useful if you are running Kubernetes on your local machine.

$ kubectl -n cilium-monitoring port-forward --address 0.0.0.0 service/grafana 3000:3000
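If you would rather not tie up the terminal, the forward can also be pushed into the background; a sketch, where the /tmp log and PID file paths are my own choices rather than anything from the Cilium docs:

```shell
# Run the Grafana port-forward in the background so the shell stays free.
nohup kubectl -n cilium-monitoring port-forward --address 0.0.0.0 \
  service/grafana 3000:3000 > /tmp/grafana-pf.log 2>&1 &
echo $! > /tmp/grafana-pf.pid   # keep the PID so the forward can be stopped
# Stop it later with: kill "$(cat /tmp/grafana-pf.pid)"
```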

The command runs in the foreground of the shell terminal and makes the Grafana UI accessible through the master Kubernetes node. The next step is to import a prebuilt Grafana dashboard the Cilium team makes available.

Download the grafana.json file from the GitHub repository. In Grafana, select the + icon -> Import -> Upload .json File. When the import completes, you should see the dashboard pre-populated and already gathering metrics.

The next posts will show how to deploy a demo application, explore the relationships between its containers in the Hubble-UI, and highlight some of the challenges I've faced through this process.

Enjoy!