Welcome to the world of managed Kubernetes!

You’ve probably clicked on or found this blog while searching for more information on Tanzu. Probably also because all or part of your infrastructure is managed with vSphere and you are interested.

Perfect, you’re in the right place!

Managed Kubernetes and its opposite

A Kubernetes cluster can be provisioned “the hard way” using kubeadm scripts. This method requires you to understand precisely each component of the cluster and the interactions between them; that is the downside of most open-source products. Fortunately, there is plenty of documentation, both official and community-written.

On the other hand, you have managed Kubernetes, where you can focus on using your Kubernetes cluster without having to handle all the maintenance yourself.

Tanzu Kubernetes Grid (TKG)

Before diving directly into Kubernetes, let me tell you about my misadventures with the official Tanzu documentation. I came across multiple names like TKG, TKGs, TKGm and TKGi, and I can say that confusion comes easily… I hope that everything will be clearer after reading this introduction blog.

First of all, selecting the right version depends on where your workload will be deployed.

  • On premise: vSphere 6.7/7/8
  • Cloud: AWS
  • Cloud: Microsoft Azure
  • Cloud: Google Cloud Platform

If you plan to deploy your workload in the cloud, then you will have to choose Tanzu Kubernetes Grid Multi-cloud (TKGm).
The common name for this version is TKG in VMware documentation.

It allows you to deploy a Supervisor or a standalone management cluster, depending on how you want to manage your workload clusters. I will explain the Supervisor functionality later.

Depending on your infrastructure requirements, you may choose to deploy your workload on premise.

If your infrastructure is configured with VMware NSX networking, then Tanzu Kubernetes Grid Integrated (TKGi) will be your choice. With TKGi, you can deploy vSphere Pods.

Otherwise, if your network is not using NSX, the version you will use is Tanzu Kubernetes Grid Service (TKGs).
For this introduction blog, I will focus on this specific version and describe TKGs in more detail.

Now that we have clearly identified which Tanzu we will use, we need to go deeper into the details of the vCenter version where we will deploy our Supervisor, which will in turn deploy our managed clusters.

In case you are already using vSphere 8, you don’t have to choose between TKGm and TKGs, because VMware solved it with TKG 2.0.

                        On-premise (vSphere)   Multi-cloud (+ on prem)
vSphere 7 with NSX      TKGi                   TKGm
vSphere 7 without NSX   TKGs                   TKGm
vSphere 8               TKG 2.0                TKG 2.0

Tanzu Kubernetes Grid version recapitulation

What is a Supervisor?

A Supervisor is a Kubernetes cluster running inside the hypervisor layer that enables you to provision ESXi resources (CPU/memory), vSphere networking, and vSAN or other storage solutions with nothing but YAML description files!

Which edition do you need?

This is another point about Tanzu that needs to be clarified. Each edition comes with a set of managed/supported components:

  • Tanzu Basic
  • Tanzu Standard
  • Tanzu For Kubernetes Operations
  • Tanzu Advanced
  • Tanzu Community Edition

You’ll find a clear description below.


License calculation is similar to ESXi host license calculation. It is based on the number of cores per CPU and the number of CPUs on the host (ESXi).
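As a rough sketch of what this means in practice (the exact per-core licensing rules are vendor-defined; the host counts and core counts below are made-up numbers for illustration), the quantity that drives the calculation is the total number of physical cores across your hosts:

```shell
# Hypothetical sizing sketch: licensing scales with total physical cores,
# i.e. hosts x CPUs per host x cores per CPU.
hosts=3
cpus_per_host=2
cores_per_cpu=16
total_cores=$((hosts * cpus_per_host * cores_per_cpu))
echo "Total cores to license: $total_cores"
```

Check your actual entitlement terms with VMware before sizing a purchase from a sketch like this.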

Tanzu Mission Control (TMC)

Depending on the size of your workload, you may need this tool. It allows you to manage multiple clusters from a single point across multiple cloud providers (or on premise).

Of course, there are different editions depending on your usage. For example, the Essentials edition comes with a minimum set of capabilities to manage your Kubernetes clusters and set access control policies, whereas the Standard edition allows you to configure them at the enterprise level. An Advanced edition is also available, with additional policy types for networking and quotas, as well as more granular custom policies.
One important thing to mention: it is a SaaS application, and all information related to the cluster is descriptive only. It does not contain sensitive content such as database credentials, database content, application logs, etc.

Roles and Responsibilities

Now let’s talk about roles with Tanzu. We can identify two roles:

  • vSphere admin
  • DevOps engineer

As a vSphere admin, you’ll be responsible for creating the Supervisor cluster from the vCenter interface. It will require you to configure:

  • The network that will be used to deploy the different workloads
  • Instance types (similar to AWS instance types for CPU/memory) to activate for the control plane / workers
  • Storage class types and sizes
  • vSphere Namespaces, with rights and permissions assigned to the DevOps engineer
  • Limits on Kubernetes objects (jobs, deployments, secrets, services, etc.; follow link for details)


A vSphere namespace can be thought of as an isolated pool of resources, created by the vSphere admin, in which the DevOps engineer can create Kubernetes clusters and use the allocated resources.

As a DevOps engineer, you’ll be able to create a Tanzu Kubernetes cluster inside the vSphere namespace with the resources provided by the vSphere admin.

One of them is the Kubernetes version for your workload. It is directly related to the vSphere version you have, which determines the set of “supported” versions you can deploy.

Prerequisites to create your first workload

To create our first workload, we will need some information for our YAML description file.

In this example, we use vSphere 7.0 U3. If we run the following command from the Supervisor, we can see:

root@mysupervisor:~# kubectl get tanzukubernetesreleases
NAME                                VERSION                          READY   COMPATIBLE   CREATED   UPDATES AVAILABLE
v1.16.12---vmware.1-tkg.1.da7afe7   1.16.12+vmware.1-tkg.1.da7afe7   True    True         12d       [1.17.17+vmware.1-tkg.1.d44d45a 1.16.14+vmware.1-tkg.1.ada4837]
v1.16.14---vmware.1-tkg.1.ada4837   1.16.14+vmware.1-tkg.1.ada4837   True    True         12d       [1.17.17+vmware.1-tkg.1.d44d45a]
v1.16.8---vmware.1-tkg.3.60d2ffd    1.16.8+vmware.1-tkg.3.60d2ffd    False   False        12d       [1.17.17+vmware.1-tkg.1.d44d45a 1.16.14+vmware.1-tkg.1.ada4837]
v1.17.11---vmware.1-tkg.1.15f1e18   1.17.11+vmware.1-tkg.1.15f1e18   True    True         12d       [1.18.19+vmware.1-tkg.1.17af790 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.11---vmware.1-tkg.2.ad3d374   1.17.11+vmware.1-tkg.2.ad3d374   True    True         12d       [1.18.19+vmware.1-tkg.1.17af790 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.13---vmware.1-tkg.2.2c133ed   1.17.13+vmware.1-tkg.2.2c133ed   True    True         12d       [1.18.19+vmware.1-tkg.1.17af790 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.17---vmware.1-tkg.1.d44d45a   1.17.17+vmware.1-tkg.1.d44d45a   True    True         12d       [1.18.19+vmware.1-tkg.1.17af790]
v1.17.7---vmware.1-tkg.1.154236c    1.17.7+vmware.1-tkg.1.154236c    True    True         12d       [1.18.19+vmware.1-tkg.1.17af790 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.8---vmware.1-tkg.1.5417466    1.17.8+vmware.1-tkg.1.5417466    True    True         12d       [1.18.19+vmware.1-tkg.1.17af790 1.17.17+vmware.1-tkg.1.d44d45a]
v1.18.10---vmware.1-tkg.1.3a6cd48   1.18.10+vmware.1-tkg.1.3a6cd48   True    True         12d       [1.19.16+vmware.1-tkg.1.df910e2 1.18.19+vmware.1-tkg.1.17af790]
v1.18.15---vmware.1-tkg.1.600e412   1.18.15+vmware.1-tkg.1.600e412   True    True         12d       [1.19.16+vmware.1-tkg.1.df910e2 1.18.19+vmware.1-tkg.1.17af790]
v1.18.15---vmware.1-tkg.2.ebf6117   1.18.15+vmware.1-tkg.2.ebf6117   True    True         12d       [1.19.16+vmware.1-tkg.1.df910e2 1.18.19+vmware.1-tkg.1.17af790]
v1.18.19---vmware.1-tkg.1.17af790   1.18.19+vmware.1-tkg.1.17af790   True    True         12d       [1.19.16+vmware.1-tkg.1.df910e2]
v1.18.5---vmware.1-tkg.1.c40d30d    1.18.5+vmware.1-tkg.1.c40d30d    True    True         12d       [1.19.16+vmware.1-tkg.1.df910e2 1.18.19+vmware.1-tkg.1.17af790]
v1.19.11---vmware.1-tkg.1.9d9b236   1.19.11+vmware.1-tkg.1.9d9b236   True    True         12d       [1.20.12+vmware.1-tkg.1.b9a42f3 1.19.16+vmware.1-tkg.1.df910e2]
v1.19.14---vmware.1-tkg.1.8753786   1.19.14+vmware.1-tkg.1.8753786   True    True         12d       [1.20.12+vmware.1-tkg.1.b9a42f3 1.19.16+vmware.1-tkg.1.df910e2]
v1.19.16---vmware.1-tkg.1.df910e2   1.19.16+vmware.1-tkg.1.df910e2   True    True         12d       [1.20.12+vmware.1-tkg.1.b9a42f3]
v1.19.7---vmware.1-tkg.1.fc82c41    1.19.7+vmware.1-tkg.1.fc82c41    True    True         12d       [1.20.12+vmware.1-tkg.1.b9a42f3 1.19.16+vmware.1-tkg.1.df910e2]
v1.19.7---vmware.1-tkg.2.f52f85a    1.19.7+vmware.1-tkg.2.f52f85a    True    True         12d       [1.20.12+vmware.1-tkg.1.b9a42f3 1.19.16+vmware.1-tkg.1.df910e2]
v1.20.12---vmware.1-tkg.1.b9a42f3   1.20.12+vmware.1-tkg.1.b9a42f3   True    True         12d       [1.21.6+vmware.1-tkg.1.b3d708a]
v1.20.2---vmware.1-tkg.1.1d4f79a    1.20.2+vmware.1-tkg.1.1d4f79a    True    True         12d       [1.21.6+vmware.1-tkg.1.b3d708a 1.20.12+vmware.1-tkg.1.b9a42f3]
v1.20.2---vmware.1-tkg.2.3e10706    1.20.2+vmware.1-tkg.2.3e10706    True    True         12d       [1.21.6+vmware.1-tkg.1.b3d708a 1.20.12+vmware.1-tkg.1.b9a42f3]
v1.20.7---vmware.1-tkg.1.7fb9067    1.20.7+vmware.1-tkg.1.7fb9067    True    True         12d       [1.21.6+vmware.1-tkg.1.b3d708a 1.20.12+vmware.1-tkg.1.b9a42f3]
v1.20.8---vmware.1-tkg.2            1.20.8+vmware.1-tkg.2            True    True         12d       [1.21.6+vmware.1-tkg.1]
v1.20.9---vmware.1-tkg.1.a4cee5b    1.20.9+vmware.1-tkg.1.a4cee5b    True    True         12d       [1.21.6+vmware.1-tkg.1.b3d708a 1.20.12+vmware.1-tkg.1.b9a42f3]
v1.21.2---vmware.1-tkg.1.ee25d55    1.21.2+vmware.1-tkg.1.ee25d55    True    True         12d       [1.21.6+vmware.1-tkg.1.b3d708a]
v1.21.6---vmware.1-tkg.1            1.21.6+vmware.1-tkg.1            True    True         12d
v1.21.6---vmware.1-tkg.1.b3d708a    1.21.6+vmware.1-tkg.1.b3d708a    True    True         12d
v1.23.8---vmware.1-tkg.1            1.23.8+vmware.1-tkg.1            False   False        12d

We can notice that the highest Kubernetes version listed is 1.23.8. Unfortunately, 1.21.6 is the highest version compatible with the vSphere version we have.

According to the vendor, its availability is expected around February 2023.

Now let’s query some information regarding the instance types available in our vSphere namespace.

root@mysupervisor:~# kubectl get virtualmachineclasses
NAME                 CPU   MEMORY   AGE
best-effort-large    4     16Gi     11d
best-effort-medium   2     8Gi      11d
best-effort-small    2     4Gi      11d

We can also check the storage classes available for our cluster:

root@mysupervisor:~# kubectl get storageclass
NAME                     PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
tkg-gold-storagepolicy   csi.vsphere.vmware.com   Delete          Immediate           true                   5h38m

These three kinds of resources will be useful for our description file.
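To give you an idea of how they fit together, here is a sketch of a cluster description file using the TKGs v1alpha1 API on vSphere 7. The cluster name, namespace, and node counts are assumptions for illustration; the release, instance types, and storage class come from the queries above.

```yaml
# Hypothetical TanzuKubernetesCluster description file (v1alpha1 API).
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: my-first-cluster            # hypothetical cluster name
  namespace: my-vsphere-namespace   # the vSphere Namespace you were granted
spec:
  distribution:
    version: v1.21.6                # a READY/COMPATIBLE tanzukubernetesrelease
  topology:
    controlPlane:
      count: 1
      class: best-effort-small      # from `kubectl get virtualmachineclasses`
      storageClass: tkg-gold-storagepolicy   # from `kubectl get storageclass`
    workers:
      count: 2
      class: best-effort-medium
      storageClass: tkg-gold-storagepolicy
```

You would typically log in to the Supervisor first (for example with the kubectl vsphere plugin), then submit the file with `kubectl apply -f`.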

A short comment for those who are familiar with vanilla Kubernetes: you have surely noticed CRDs specific to Tanzu, like tanzukubernetesreleases, virtualmachineclasses, etc. They are new resources at the Supervisor layer.


I hope this information will help you start your journey into Tanzu.

Stay tuned, another blog will follow. It will help you create your first managed cluster in Tanzu for vSphere and configure the tools that are available for your edition.

Chay Te