Deploying a Tanzu BYOH Workload Cluster
byoh-node$ sudo ctr -n=k8s.io image import antrea-advanced-debian.tar
byoh-node$ sudo ctr -n=k8s.io image import coredns.tar
byoh-node$ sudo ctr -n=k8s.io image import etcd.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-apiserver.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-controller-manager.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-proxy.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-scheduler.tar
byoh-node$ sudo ctr -n=k8s.io image import kube-vip.tar
byoh-node$ sudo ctr -n=k8s.io image import pause.tar
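After the imports complete, it is worth confirming the images are present in the k8s.io containerd namespace, for example:
byoh-node$ sudo ctr -n=k8s.io images ls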
The BYOH nodes also require the imgpkg binary to be installed locally, as the BYOH bootstrap process uses it to extract and install the other packages contained in the Harbor repository. The simplest method is to copy it from the Tanzu bootstrap VM within the environment and install it on the local VM or BYOH host.
bootstrap$ scp /usr/local/bin/imgpkg byoh-host-ip:~/
bootstrap$ ssh byoh-host-ip
byoh-host$ sudo install imgpkg /usr/local/bin
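A quick check that the binary is installed and executable:
byoh-host$ imgpkg version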
Additionally, when running Ubuntu 20.04 or Ubuntu 22.04, containerd should be preinstalled on the BYOH VM or host to support the bootstrap process. On Ubuntu 22.04, the file /etc/containerd/config.toml must also exist to avoid a known issue between the OS and containerd.
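If the config file is missing, one common approach (assuming containerd is already installed) is to generate the default configuration and restart the service:
byoh-host$ sudo mkdir -p /etc/containerd
byoh-host$ containerd config default | sudo tee /etc/containerd/config.toml
byoh-host$ sudo systemctl restart containerd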
bootstrap$ CONTROL_PLANE_ENDPOINT_IP=10.231.0.51 clusterctl generate byoh-cluster --infrastructure byoh --kubernetes-version 1.23.5 --control-plane-machine-count 1 --worker-machine-count 1 > byoh-workload-cluster.yaml
After creating the YAML file, we need to modify the contents to point specifically to the Harbor repository for the internet-restricted environment.
bootstrap$ vim byoh-workload-cluster.yaml
165 bundleLookupBaseRegistry: harbor.home.virtualelephant.com/tanzu/cluster_api_provider_bringyourownhost
212 bundleRepo: harbor.home.virtualelephant.com/tanzu/cluster_api_provider_bringyourownhost
With the YAML file modified, validate that the agents are all online and then create the cluster.
bootstrap$ kubectl get byoh -A --show-labels
bootstrap$ kubectl apply -f byoh-workload-cluster.yaml
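Cluster creation can then be monitored from the bootstrap VM; for example, using the cluster name generated earlier:
bootstrap$ kubectl get cluster byoh-cluster
bootstrap$ clusterctl describe cluster byoh-cluster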
On each BYOH host, the host agent is started with a label identifying the role the node will fill in the cluster.
byoh-controller$ sudo byoh-hostagent-linux-amd64 --bootstrap-kubeconfig config --label "type=controller" 2>&1 | tee hostagent.log
byoh-worker$ sudo byoh-hostagent-linux-amd64 --bootstrap-kubeconfig config --label "type=worker" 2>&1 | tee hostagent.log
The labels can be validated on the Tanzu Kubernetes management cluster with the following command.
bootstrap$ kubectl get byoh -A --show-labels
bootstrap$ vim byoh-workload-cluster.yaml
169 ---
170 apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
172 metadata:
174 namespace: default
176 template:
178 installerRef:
180 kind: K8sInstallerConfigTemplate
182 namespace: default
184 matchLabels:
186 ---
188 kind: ByoMachineTemplate
190 name: byoh-cluster-md-0
192 spec:
194 spec:
196 apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
198 name: byoh-cluster-md-0
200 selector:
202 "type": "worker"
Now, when you create the workload cluster, it will look for those labels and apply the correct Kubernetes role to those nodes.
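Once the cluster is up, you can retrieve its kubeconfig and verify that each node received the expected role, for example:
bootstrap$ clusterctl get kubeconfig byoh-cluster > byoh-workload.kubeconfig
bootstrap$ kubectl --kubeconfig byoh-workload.kubeconfig get nodes -o wide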