Virtual Elephant
TKG Learning Series

Deploying a Tanzu BYOH Workload Cluster

Overview

The typical use-case for the Tanzu Kubernetes Grid BYOH product is a bare-metal Kubernetes environment; the product was designed with Kubernetes edge use-cases in mind. The primary use-case I have been engaged with is a hybrid deployment, where the Kubernetes controllers are VMs hosted inside a VMware SDDC environment and the Kubernetes workers are small form-factor single-board computers (SBCs). This topology provides greater flexibility for application deployments and lifecycle activities, since worker nodes with larger resource footprints can still be deployed within the VMware SDDC environment.

VM Template or BYOH Host Customizations

Specific to leveraging the BYOH bootstrap process to build a workload cluster, the current version (v1.6) is not fully integrated with Tanzu, so several of the Tanzu environment variables set for internet-restricted environments are not always honored. I found it simpler to pre-seed most of the Kubernetes container images directly inside the VM Template or on the BYOH host.
 
When troubleshooting this issue originally, I relied heavily on Scott Lowe’s blog on exporting and importing container images into containerd.
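The export side is straightforward on any machine with registry access (a sketch; connected-host is a hypothetical prompt, and the image name and tag shown are illustrative, so substitute the images and versions your cluster needs):

# Pull and export one image; repeat for each tarball imported below
connected-host$ sudo ctr -n=k8s.io image pull registry.k8s.io/kube-apiserver:v1.23.5
connected-host$ sudo ctr -n=k8s.io image export kube-apiserver.tar registry.k8s.io/kube-apiserver:v1.23.5

After copying the tarballs over, import them into the k8s.io containerd namespace on the BYOH node: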
byoh-node$ sudo ctr -n=k8s.io image import antrea-advanced-debian.tar
byoh-node$ sudo ctr -n=k8s.io image import coredns.tar
byoh-node$ sudo ctr -n=k8s.io image import etcd.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-apiserver.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-controller-manager.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-proxy.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-scheduler.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-vip.tar
byoh-node$ sudo ctr -n=k8s.io image import pause.tar

The BYOH nodes also require the imgpkg binary to be installed locally, as the BYOH bootstrap process uses it to extract and install the other packages hosted in the Harbor repository. The simplest method is to copy it from the Tanzu bootstrap VM within the environment and install it on the local VM or BYOH host.

bootstrap$ scp /usr/local/bin/imgpkg byoh-host-ip:~/
bootstrap$ ssh byoh-host-ip
byoh-host$ sudo install imgpkg /usr/local/bin
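A quick sanity check confirms the binary landed on the PATH and is executable:

byoh-host$ imgpkg version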

Additionally, whether running Ubuntu 20.04 or Ubuntu 22.04, containerd should be preinstalled on the BYOH VM or host to support the bootstrap process. On Ubuntu 22.04, the file /etc/containerd/config.toml also needs to be in place to avoid a known issue between the OS and containerd.
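One common way to satisfy this is to regenerate the default containerd configuration and enable the systemd cgroup driver (a sketch of the typical fix; validate the settings against your own environment):

byoh-host$ sudo mkdir -p /etc/containerd
byoh-host$ containerd config default | sudo tee /etc/containerd/config.toml
# Kubernetes generally expects the systemd cgroup driver on Ubuntu 22.04
byoh-host$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
byoh-host$ sudo systemctl restart containerd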

More text to come.

Generate BYOH Workload Cluster YAML

Using clusterctl, we can generate a YAML template file for the BYOH workload cluster.
 
Note: This leverages kube-vip, not the NSX-T Advanced Load Balancer, for the frontend IP address of the Kubernetes cluster. Substitute an IP address from your local network.

bootstrap$ CONTROL_PLANE_ENDPOINT_IP=10.231.0.51 clusterctl generate cluster byoh-cluster --infrastructure byoh --kubernetes-version v1.23.5 --control-plane-machine-count 1 --worker-machine-count 1 > byoh-workload-cluster.yaml

After creating the YAML file, we need to modify the contents to point specifically to the Harbor repository for the internet-restricted environment.

bootstrap$ vim byoh-workload-cluster.yaml
165 bundleLookupBaseRegistry: harbor.home.virtualelephant.com/tanzu/cluster_api_provider_bringyourownhost
212       bundleRepo: harbor.home.virtualelephant.com/tanzu/cluster_api_provider_bringyourownhost
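The line numbers above are from my copy of the file and will differ in yours; a quick grep locates the two bundle keys:

bootstrap$ grep -n 'bundle' byoh-workload-cluster.yaml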

With the YAML file modified, validate that the BYOH agents are all online and then create the cluster.

bootstrap$ kubectl get byoh -A --show-labels
bootstrap$ kubectl apply -f byoh-workload-cluster.yaml
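
Provisioning can be watched from the management cluster, and clusterctl will render the object tree as it reconciles (byoh-cluster is the cluster name generated earlier):

bootstrap$ kubectl get cluster,machines -A
bootstrap$ clusterctl describe cluster byoh-cluster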

Optional BYOH Workload Cluster YAML Labels

If you want to control whether specific VMs or hosts become Kubernetes controllers or workers, you can modify both the YAML file for creating the cluster and start the BYOH host agent with a label. When starting the BYOH host agent, use the following syntax to add a label that will be referenced in the YAML file.

byoh-controller$ sudo byoh-hostagent-linux-amd64 --bootstrap-kubeconfig config --label "type=controller" 2>&1 | tee hostagent.log
byoh-worker$ sudo byoh-hostagent-linux-amd64 --bootstrap-kubeconfig config --label "type=worker" 2>&1 | tee hostagent.log


The labels can be validated on the Tanzu Kubernetes management cluster with the following command.

bootstrap$ kubectl get byoh -A --show-labels

Once the labels are confirmed to be in place, edit the byoh-workload-cluster.yaml file created in the previous step and modify the ByoMachineTemplate sections.
bootstrap$ vim byoh-workload-cluster.yaml
169 ---
170 apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
171 kind: ByoMachineTemplate
172 metadata:
173   name: byoh-cluster-control-plane
174   namespace: default
175 spec:
176   template:
177     spec:
178       installerRef:
179         apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
180         kind: K8sInstallerConfigTemplate
181         name: byoh-cluster-control-plane
182         namespace: default
183       selector:
184         matchLabels:
185           "type": "controller"
186 ---
187 apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
188 kind: ByoMachineTemplate
189 metadata:
190   name: byoh-cluster-md-0
191   namespace: default
192 spec:
193   template:
194     spec:
195       installerRef:
196         apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
197         kind: K8sInstallerConfigTemplate
198         name: byoh-cluster-md-0
199         namespace: default
200       selector:
201         matchLabels:
202           "type": "worker"


Now, when you create the workload cluster, it will match on those labels and assign the correct Kubernetes role to each node.
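
Once the nodes have joined, you can pull the workload cluster kubeconfig from the management cluster and confirm each node received the expected role (assuming the byoh-cluster name used throughout):

bootstrap$ clusterctl get kubeconfig byoh-cluster > byoh-cluster.kubeconfig
bootstrap$ kubectl --kubeconfig byoh-cluster.kubeconfig get nodes -o wide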
