
Tanzu Kubernetes Grid Management Cluster

VMware’s flagship Kubernetes offering, Tanzu Kubernetes Grid (TKG), allows an organization to quickly build and deploy an enterprise-grade Kubernetes service. Deploying the Management Cluster within a VMware SDDC environment enables the quick, automated deployment of Kubernetes workload clusters in that same environment. Because the Management Cluster is standalone, the TKG service is decoupled from the version of ESXi running on the physical infrastructure, and its virtual machines can be lifecycled independently.

Official Documentation

https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.6/vmware-tanzu-kubernetes-grid-16/GUID-mgmt-clusters-vsphere.html

The Virtual Elephant YouTube channel has videos walking through the process of deploying the Tanzu Kubernetes Grid Management Cluster inside a VMware SDDC environment.


TKG Management Cluster UI

The Tanzu CLI can run a UI locally on the Bootstrap VM to generate the initial YAML file needed to deploy the standalone Kubernetes Management Cluster inside the VMware SDDC environment. To launch the UI, execute the following command:

bootstrap $ tanzu management-cluster create --ui

This allows you to open a browser window to http://127.0.0.1:8080. The following screenshots walk through the process of populating all of the necessary information needed to create the YAML deployment file.
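
The UI binds to 127.0.0.1 on the Bootstrap VM, so if the Bootstrap VM is headless, one option is to forward the port to your workstation over SSH and open the browser locally. The username and hostname below are placeholders for your environment:

workstation $ ssh -L 8080:127.0.0.1:8080 <user>@<bootstrap-vm>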

Log in to the AVI Controller UI, navigate to Templates->Security->SSL/TLS Certificates, and click the export icon to the right of the controller certificate.

Paste the Certificate data into the Controller Certificate Authority window inside the TKG UI and click Connect.
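
If you would rather pull the certificate from the command line instead of exporting it through the AVI UI, and the controller is using a self-signed certificate, an openssl one-liner along these lines will print the PEM block the controller presents (the controller FQDN is a placeholder):

bootstrap $ openssl s_client -connect <avi-controller-fqdn>:443 -showcerts </dev/null 2>/dev/null | openssl x509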

Current versions of Tanzu Kubernetes Grid allow the Kubernetes control plane and data plane to be front-ended by different load balancers and network segments, providing separate network paths for Kubernetes API traffic and for applications running inside the cluster.

It is important to remember that the topology chosen for the Management Cluster will then be used for all Workload Clusters it deploys and manages.
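
For reference, this separation shows up in the exported configuration file as the NSX ALB (AVI) network variables. The keys below are illustrative of the pattern only; the exact names vary between TKG releases, so confirm them against the configuration reference for your version:

AVI_DATA_NETWORK: <workload-vip-portgroup>
AVI_DATA_NETWORK_CIDR: <workload-vip-cidr>
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: <management-vip-portgroup>
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: <management-vip-cidr>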

Deploying the Management Cluster

I recommend exporting the configuration YAML file through the UI and then running the command to create the Management Cluster itself from an SSH session.

After downloading the configuration file, I recommend making a few modifications based on issues I’ve encountered over the past 12 months of leveraging TKG and running TKG workshops with customers.

Remove the following lines:

VSPHERE_CONTROL_PLANE_ENDPOINT: ""
VSPHERE_CONTROL_PLANE_PCI_DEVICES: ""
VSPHERE_CONTROL_PLANE_CUSTOM_VMX_KEYS: ""
VSPHERE_IGNORE_PCI_DEVICES_ALLOW_LIST: ""
VSPHERE_WORKER_CUSTOM_VMX_KEYS: ""
VSPHERE_WORKER_PCI_DEVICES: ""
WORKER_ROLLOUT_STRATEGY: ""
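
One way to strip those keys in a single pass, assuming GNU sed and an exported configuration file named mgmt-cluster.yaml (adjust the filename to match your export):

bootstrap $ sed -i -E '/^(VSPHERE_CONTROL_PLANE_ENDPOINT|VSPHERE_CONTROL_PLANE_PCI_DEVICES|VSPHERE_CONTROL_PLANE_CUSTOM_VMX_KEYS|VSPHERE_IGNORE_PCI_DEVICES_ALLOW_LIST|VSPHERE_WORKER_CUSTOM_VMX_KEYS|VSPHERE_WORKER_PCI_DEVICES|WORKER_ROLLOUT_STRATEGY): ""$/d' mgmt-cluster.yaml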

Creating the Management Cluster from an SSH session allows the command to be modified to provide more verbose logging and to extend the default timeout value (30 minutes). I recommend running the following command to create the Management Cluster:

bootstrap $ tanzu management-cluster create -f <YAML File> -v 5 --timeout 90m
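
If you also want the verbose output captured to a file for later troubleshooting, a simple variation is to pipe it through tee (the log file name here is arbitrary):

bootstrap $ tanzu management-cluster create -f <YAML File> -v 5 --timeout 90m 2>&1 | tee tkg-mgmt-create.log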

Once the local bootstrap Kubernetes cluster is created on the Bootstrap VM, you can use the temporary kube_config file to watch its pods come up by running the following command:

bootstrap $ kubectl get pods -A --kubeconfig=<temp_kube_config_file>
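
kubectl's --watch flag streams changes as they happen, so a variation like the following saves re-running the command while the pods come up:

bootstrap $ kubectl get pods -A -w --kubeconfig=<temp_kube_config_file>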

You can monitor the creation of the TKG Management Cluster VMs inside the VMware SDDC environment both through Kubernetes and through the vCenter UI. To view the machines from a Kubernetes perspective, run the following command:

bootstrap $ kubectl get machines -A --kubeconfig=<temp_kube_config_file>
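
The bootstrap cluster also exposes the other Cluster API objects, so as an additional, illustrative check you can watch the cluster and control plane objects converge:

bootstrap $ kubectl get clusters -A --kubeconfig=<temp_kube_config_file>
bootstrap $ kubectl get kubeadmcontrolplane -A --kubeconfig=<temp_kube_config_file>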

The TKG Management Cluster deployment will take ~20 minutes (depending on your environment). After the cluster is created, the first thing you want to do is export the kubeconfig file onto the Bootstrap VM by running the following command:

bootstrap $ tanzu management-cluster kubeconfig get --admin
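
The command above merges an admin context into the kubeconfig on the Bootstrap VM. TKG typically names the context <cluster-name>-admin@<cluster-name>, so switching to it looks roughly like this (substitute your Management Cluster name):

bootstrap $ kubectl config use-context <mgmt-cluster-name>-admin@<mgmt-cluster-name>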

To verify that all of the nodes and pods are running correctly inside the TKG Management Cluster, you can run the following commands:

bootstrap $ kubectl get nodes
bootstrap $ kubectl get pods -A
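
The Tanzu CLI also provides a summary view of the Management Cluster and its nodes, which makes a quick sanity check alongside the kubectl output:

bootstrap $ tanzu management-cluster get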

The Tanzu Kubernetes Grid Management Cluster is now running inside your VMware SDDC environment. From here, you can install additional services, such as Contour and Envoy, and deploy the TKG Workload Clusters.
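
As a pointer to that next step, a Workload Cluster is created from its own configuration file using the same CLI pattern; the file name below is a placeholder:

bootstrap $ tanzu cluster create -f <workload-cluster-YAML>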