Virtual Elephant

Tanzu Kubernetes Grid
Workload Cluster

VMware’s flagship Kubernetes offering, Tanzu Kubernetes Grid, allows an organization to quickly build and deploy an enterprise-grade Kubernetes service offering. Workload Clusters are deployed after the Management Cluster is up and running inside the environment. This article covers the steps necessary to deploy the first Workload Cluster within a VMware SDDC environment.

Official Documentation

The Virtual Elephant YouTube channel has videos walking through the process of deploying the Tanzu Kubernetes Grid Workload Cluster inside a VMware SDDC environment.

Tanzu Kubernetes Grid Workload Cluster

The majority of the configuration was accomplished when the YAML file for the TKG Management Cluster was created. I recommend copying that YAML file and then editing the following lines for the Workload Cluster deployment. If you have a Git repository, commit your YAML files there to track the different versions and configurations in your environment.

The following lines need to be modified within the YAML file to allow the deployment of the Workload Cluster:

CLUSTER_NAME: <workload-cluster-name>

VSPHERE_NETWORK: /<DC>/network/<network-pg-name>
VSPHERE_FOLDER: /<DC>/vm/<folder-name>
VSPHERE_DATASTORE: /<DC>/datastore/<datastore-name>
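As a sketch, the edited lines might look like the following. Every value below is a hypothetical example for illustration; substitute the names from your own vSphere inventory:

```yaml
# Hypothetical example values -- replace with your environment's details
CLUSTER_NAME: tkg-workload-01

VSPHERE_NETWORK: /sddc-dc/network/tkg-workload-pg
VSPHERE_FOLDER: /sddc-dc/vm/tkg-workload
VSPHERE_DATASTORE: /sddc-dc/datastore/vsan-datastore
```

Note that the network, folder, and datastore values are full vSphere inventory paths, beginning with the datacenter object.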

After editing the YAML file with the details specific to your environment, the cluster can be created with the following command:

bootstrap $ tanzu cluster create -f <YAML> -v 5 --timeout 90m

During the cluster creation workflow, the status of the VM deployments within the VMware SDDC environment can be monitored through Kubernetes or through the vCenter Server UI. To monitor the machine deployments through Kubernetes, run the following command:

bootstrap $ watch kubectl get machines

The first thing to do after the workload cluster is fully deployed is to export the kubeconfig file onto the Bootstrap VM. Run the following command to save the kubeconfig for the new cluster:

bootstrap $ tanzu cluster kubeconfig get <cluster-name> --admin

To switch the Kubernetes context, execute the following command:

bootstrap $ kubectl config use-context <cluster-name>-admin@<cluster-name>

I like to create aliases for quickly switching contexts, which you will see reflected in the following screenshot.
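For reference, aliases along these lines can be added to the Bootstrap VM's shell profile; the alias names here are my own illustration, not from the screenshot:

```shell
# Hypothetical aliases for faster kubectl context switching -- add to ~/.bashrc
alias k='kubectl'
alias kctx='kubectl config use-context'
alias kctxs='kubectl config get-contexts'
```

With these in place, `kctx <cluster-name>-admin@<cluster-name>` switches to the new Workload Cluster, and `kctxs` lists every context saved in the kubeconfig.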

You can validate the nodes are properly running and on the expected version of Kubernetes by running the command:

bootstrap $ kubectl get nodes

You can also check the running pods within the Kubernetes cluster by running the command:

bootstrap $ kubectl get pods -A

At this point, the Kubernetes cluster is running a bare-minimum set of pods. Additional services will need to be configured before ingress and HTTPProxy objects can be created and tied in with the NSX Advanced Load Balancer.
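Once an ingress controller such as Contour is installed and integrated with the NSX Advanced Load Balancer, an HTTPProxy object takes this general shape. This is a minimal sketch; the FQDN, service name, and namespace are hypothetical placeholders, not values from this environment:

```yaml
# Minimal Contour HTTPProxy sketch -- all names are illustrative only
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: example-app
  namespace: default
spec:
  virtualhost:
    fqdn: app.example.com
  routes:
    - conditions:
        - prefix: /
      services:
        - name: example-app
          port: 80
```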

The Tanzu Kubernetes Grid Workload Cluster is now ready for consumption. Additional TKG Workload Clusters can be deployed in the same fashion, simply by copying the YAML file and changing the same lines documented in this article.