Kubernetes as a Service

Welcome to the Kubernetes as a Service lab.

Through this lab, you will learn how to provision Kubernetes as a Service to your tenants through VCD, CSE and Tanzu.

Topics covered are:

  • Set up Container as a Service with Tanzu (Provider view)
    • Create supervisor cluster on vSphere
    • Configure k8s policy through CSE and publish to tenant
  • Provision and use a Tanzu Kubernetes Cluster (Tenant view)
    • Provision a k8s cluster through CSE
    • Use kubectl to deploy an application 

Explore vSphere with Tanzu

The Provider will create a supervisor cluster on vSphere, create a Kubernetes-enabled pVDC from the supervisor cluster, create an oVDC for a tenant and publish k8s policies to the tenant.

Create supervisor cluster on vSphere

The first step is to set up a supervisor cluster at the vSphere level.

In reality, a supervisor cluster would take around 45 minutes to configure. In this lab, we have already configured the supervisor cluster for you. However, it would be helpful to go through the configurations to know how the supervisor cluster is set up.

Launch Google Chrome

In the lab home screen, click on Google Chrome.

Log in to vSphere

In the vSphere page, enter the following credentials:

Username: administrator@corp.local

Password: VMware1!

Then, click LOGIN.

Explore Supervisor Cluster

We have already configured the cluster 'NextGenWorkloads' to become a supervisor cluster. Let's take a look at it.

1. Click on vcsa-01a.corp.local > RegionA01 > NextGenWorkloads.

2. Click on Monitor > Namespaces > Overview.

Take a moment to view the details of the Supervisor Cluster.

Explore Namespaces resource pool

Let's explore the Supervisor Cluster further. In the sidebar, go to vcsa-01a.corp.local > RegionA01 > NextGenWorkloads > Namespaces.

Explore supervisor cluster control plane nodes

Expand the Namespaces section.

You will see three VMs named SupervisorControlPlane-XXX, which serve as the control plane nodes of the Supervisor Cluster. The ESXi hosts act as the worker nodes.

Explore Workload Management

Now, let's create a new namespace for our cluster.

Click on Menu > Workload Management.

Create a new Namespace

Click on CREATE NAMESPACE.

Enter Namespace information

Select Next-Gen Workloads as the cluster and enter provider-infra as the name. Click CREATE.

Explore the Namespace

You will be taken back to the main page. Read the information in the dialog box that appears and click GOT IT to exit.

Click on Open in the status window

Connect to Kubernetes cluster

You will be taken to a page prompting you to download the kubectl CLI tool for vSphere. In this lab environment, the plugin has already been installed for you.

  1. Launch a PowerShell prompt.
  2. Enter the following command to connect to the Kubernetes cluster. Note that the server IP is the one you find in the URL.
kubectl vsphere login --server=172.16.12.33 --insecure-skip-tls-verify
  3. Enter the vSphere username and password when prompted:
    username: administrator@vsphere.local
    password: VMware1!
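
After a successful login, the vSphere plugin adds contexts to your kubeconfig: one for the supervisor cluster itself and one per namespace you have access to. As a quick check (standard kubectl commands; the context name below assumes it matches the provider-infra namespace), you can list and switch contexts:

# List the contexts created by the vSphere plugin
kubectl config get-contexts

# Switch to the context for the provider-infra namespace
kubectl config use-context provider-infra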

Get Kubernetes nodes

1. Note that you have access to the 'provider-infra' namespace through the kubectl client.

2. Enter the following command to get the list of nodes in the cluster:

kubectl get nodes

3. View the list of nodes.
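
The output should look similar to the following (node names, ages, and exact versions are illustrative and will differ in your lab). The master nodes are the three SupervisorControlPlane VMs, while the agent nodes are the ESXi hosts acting as workers:

NAME                 STATUS   ROLES    AGE   VERSION
4201a5xxxxxx         Ready    master   20d   v1.18.x
4201b7xxxxxx         Ready    master   20d   v1.18.x
4201c9xxxxxx         Ready    master   20d   v1.18.x
esx-01a.corp.local   Ready    agent    20d   v1.18.x
esx-02a.corp.local   Ready    agent    20d   v1.18.x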

Set up Cloud Director with Container Service Extension

The Container Service Extension UI, which allows tenants to provision Kubernetes clusters on vSphere with Tanzu, is included with Cloud Director 10.2 and can be manually installed in earlier versions.

Let's see how a provider can configure Cloud Director and Container Service Extension to provide self-service creation of Kubernetes clusters.

Login to Cloud Director provider portal

On the bookmarks bar, click on vCD Provider.

  1. Enter the following credentials:
    Username: admin
    Password: VMware1!
  2. Click the SIGN IN button

Refresh vCenter connection

In this lab environment, it is necessary to reconnect vCenter after the lab has started to make sure the inventory is up to date in VMware Cloud Director.
This step is not necessary in a production environment.

  1. Go to Resources > Infrastructure Resources
  2. Make sure vCenter Server Instances is selected
  3. Select the vCenter vcsa-01a.corp.local
  4. Click the RECONNECT button

Publish Container Service Extension to tenants

Despite being packaged with Cloud Director 10.2, the CSE UI plugin needs to be published to tenants so they can access the CSE interface.

  1. In the top bar, go to More > Customize Portal

Publish Container UI Plugin

  1. Select Container UI Plugin.
  2. Click the PUBLISH button.

Choose tenants to publish to

It is possible to configure which tenants will have access to the Container Service Extension UI.

  1. Select Tenants.
  2. In the menu that appears, select pied-piper.
  3. Click the SAVE button.

It is also possible to keep the Container Service Extension available only to the provider. This allows the provider to offer managed Kubernetes clusters to their tenants.

Configure Kubernetes cluster access rights

Kubernetes cluster management in Cloud Director is tied to specific rights, so it is easy to give the right to interact with Kubernetes clusters to only certain tenants and certain roles. By default, this right is not enabled and needs to be enabled through a Rights Bundle.

  1. Go to Administration > Rights Bundles
  2. Select vmware:tkgcluster Entitlement.
  3. Click the PUBLISH button.

Select tenant

  1. Toggle the switch Publish to tenants
  2. Select the tenant pied-piper.
  3. Click SAVE.

Authorize tenant roles

  1. Go to Administration > Global Roles.
  2. Select Organization Administrator
  3. Click EDIT.

Modify Organization Administrator role

  1. Scroll all the way down
  2. Under the Other tab, check the View and Manage boxes to give the tenant org admin full access to Tanzu Kubernetes.
  3. Click SAVE.

It is also possible to give the tenant only view and edit access, so they can only view and scale the cluster. The provider will then have to provision the cluster, for example as part of the onboarding process. This way, the provider can control how many clusters a tenant provisions.

Configure resources for Kubernetes

Once Container Service Extension has been properly configured and the appropriate rights given to the tenant, it is necessary to allocate resources to the tenant so they can provision Kubernetes clusters.

It is required to have:

  • A vDC with the Flex allocation model
  • A Kubernetes policy published to the tenant

Let's see how to configure this in Cloud Director.

Do note that in a real environment, you would have to create a pVDC backed by the vSphere supervisor cluster. For this lab, the supervisor-cluster-enabled pVDC has already been created for you: it is the 'nextgen-resources' pVDC.

Create new oVDC from Kubernetes-enabled pVDC

Let's create an oVDC to provide tenants with Kubernetes functionality.

  1. Go to Cloud Resources > Organization VDCs
  2. Click NEW.

Enter name of oVDC

  1. Enter the name piperVDC.
  2. Click NEXT.

Select Organization

  1. Select the organization pied-piper
  2. Click NEXT.

Select Provider VDC

  1. Select the pVDC nextgen-resources
  2. Click NEXT.

Select Allocation model

  1. Select the Flex allocation model
  2. Click NEXT.

Note: For an oVDC to be able to contain k8s policies, it must use the Flex allocation model.

Set CPU and memory allocation

Input the following settings:

  1. CPU allocation: 10 GHz
  2. vCPU speed: 2 GHz
  3. Memory allocation: 10 GB  !!! Don't forget to change the unit to GB !!!
  4. Click the NEXT button

Select storage policies

  1. Select lab-shared-storage
  2. Click NEXT

Select network pool

  1. Select 'regionA'
  2. Click NEXT.

Review details

  1. Review the summary details and click FINISH.

Wait until the task is complete before continuing.

Configure Kubernetes policy

Now that resources have been allocated to the tenant, it is required to configure a Kubernetes policy that defines the amount and type of resources to be used by Kubernetes clusters.

  1. Go to Resources > Provider VDCs.
  2. Click on the nextgen-resources pVDC

!!! WARNING !!! If you don't see the Kubernetes Logo near the PVDC, you have to refresh the vCenter Connection
(Infrastructure Resources > vCenter | Select vCenter instance | Click the RECONNECT button)

View pVDC k8s policy

A pVDC already has a Kubernetes policy, which defines the type of machine to use as well as the storage. For this lab, a k8s policy has already been created in the pVDC nextgen-resources. Now let's locate that policy and publish it to our tenant pied-piper.

In the window that opens:

  1. Click on Kubernetes
  2. Select nextgen-resources-Next-Gen Workloads-KubernetesPolicy
  3. Click PUBLISH

Publish Kubernetes policy

  1. Name: piperPolicy
  2. Description: k8s policy for pied-piper.
  3. Click NEXT.

Select Organization VDC

  1. Select oVDC piperVDC and click NEXT.

Select CPU and memory limits

CPU and memory limits can be configured for the tenant. These limits must be smaller than or equal to what was configured in the organization VDC.

You are unable to reserve CPU and memory in this policy, because the base policy in the pVDC is not configured as such.

  1. CPU Limit: 10 GHz
  2. Memory Limit: 10 GB
  3. Accept the other default settings and click NEXT.

Select machine classes

Next, you can see the Machine Classes that will be available to use for the nodes of the Tanzu Kubernetes Cluster.

  1. Select best-effort-small & best-effort-xsmall (scroll down if necessary)
  2. Click NEXT.

Since the Kubernetes policy in the pVDC was configured not to reserve memory or CPU, you can only see the best-effort options.

Select storage policy

  1. Select lab-shared-storage
  2. Click NEXT.

Review summary

  1. Review the details and click PUBLISH.

Voila!

From a provider standpoint, everything has been configured to allow Kubernetes-as-a-Service from the Cloud Director portal.

You can logout from Cloud Director.

Use the three-dot icon in the top right corner to log out from the provider account.

Provision and use a Tanzu Kubernetes cluster (Tenant view)

Let's switch to the tenant side to see how a tenant can provision a new Kubernetes cluster and use it.

Login to tenant portal

  1. In the bookmarks bar, click vCD - Pied Piper.
  2. Enter the following credentials:
    Username: richard
    Password: VMware1!
  3. Click SIGN IN.

View Organization VDC

Click on the piperVDC panel.

View allocated k8s policy

  1. Select the Kubernetes Policies tab (scroll down if needed)
  2. View the Kubernetes policy that has been configured for your VDC. This policy gives access to a certain amount of resources as well as certain types of machines (used for Kubernetes nodes).

Go to tenant CSE interface

Create new Tanzu Kubernetes Cluster

Now let's create a k8s cluster running on vSphere with Tanzu.

  1. Go to More > Kubernetes Container Clusters.
  2. Click on NEW.

Configure Kubernetes Runtime

  1. Select the Kubernetes runtime vSphere with Tanzu
  2. Click NEXT.

Other Kubernetes runtimes could be available, such as Native Kubernetes or TKGi (aka PKS). However, they require the installation and configuration of the CSE server, which is not covered in this lab.

Enter name of cluster

Type in the name as pipertkc (all lowercase) and click NEXT.

Select Virtual Data Center

Select the oVDC piperVDC and click NEXT.

Select k8s policy

  1. Select the k8s policy piperPolicy
  2. Make sure the version v1.18.5 is selected
  3. Click NEXT.

Select no. of nodes and machine class size

Enter the following parameters:

  1. Number of Control Plane Nodes: 1
  2. Number of Worker Nodes: 3
  3. Control Plane Machine Class: best-effort-small
  4. Worker Machine Class: best-effort-xsmall
  5. Click NEXT.

If the provider had configured the k8s policy to include more machine class sizes, they would show up in the tenant view here.

Select storage policy

  1. Select lab-shared-storage for both the Control Plane Storage Class and the Worker Storage Class.

Review summary details

Review the details and click FINISH.

View Tanzu Kubernetes Cluster

The cluster will take 5-10 minutes to deploy.

While the cluster is creating, let's put our provider hat back on and look at what is happening behind the scenes.

View Tanzu Kubernetes Cluster in vSphere

You can actually see how the Tanzu Kubernetes Cluster looks in vSphere.

Switch back to the vSphere tab in Chrome, or log in to vCenter again.

View Tanzu Kubernetes Cluster in vSphere

Expand the menu on the left to vcsa-01a.corp.local > RegionA01 > Next-Gen Workloads > Namespaces > pipertkc

Can you see where the TKC cluster is located?

The 'piperpolicy' Kubernetes policy from VCD is actually a namespace in vSphere.

The 'pipertkc' Tanzu Kubernetes Cluster in VCD is a group of Control plane and Worker nodes in vSphere.
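
If you are still logged in to the supervisor cluster with kubectl from the provider section, you can also see the cluster as a Kubernetes custom resource. A minimal sketch, assuming the vSphere namespace is named piperpolicy as shown above:

# List the Tanzu Kubernetes Clusters managed in the piperpolicy namespace
kubectl get tanzukubernetesclusters -n piperpolicy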

View Tanzu Kubernetes cluster in NSX

  1. Open a new tab in Chrome
  2. Click the bookmark NSX-T
  3. Enter the credentials:
    username: admin
    password: VMware1!VMware1!
  4. Click the LOG IN button

Explore the networking

  1. Let's have a look at the Tier-1 gateways

Tier-1 Gateway details

  1. Expand the Tier-1 Gateway Name column to see the full names of the gateways.
  2. Expand the gateway whose name starts with vnet and contains piperpolicy (the VCD policy) and pipertkc (the cluster name configured in VCD)

A T1 gateway has been created for the Tanzu Kubernetes Cluster. If it is not yet there, you might want to wait a little and click the refresh button.

View NAT

  1. A SNAT rule has been created to allow the Kubernetes pod network (where the TKC nodes stand) to access the outside world.
  2. Click the CLOSE button to close the NAT window

View Load Balancers

  1. Click the Load Balancing tab
  2. Observe the load balancer that has been created for the Tanzu Kubernetes Cluster. It is used to front the Kubernetes API for this cluster.
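
You can cross-check this later from the tenant side: once the cluster's kubeconfig is loaded (as described in the next steps), the control plane address reported by kubectl should match the virtual server IP of this load balancer. A quick, optional check:

# The Kubernetes control plane address should be the NSX load balancer VIP
kubectl cluster-info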

Use Tanzu Kubernetes Cluster

Let's put our tenant developer hat back on and check whether the cluster is now running.

  1. Click the Cloud Director tab in Chrome

Download Kubeconfig file

  1. Select cluster pipertkc you have just created
  2. Make sure the status shows running (otherwise, wait until it does)
  3. Click on DOWNLOAD KUBE CONFIG.

At the bottom of your Chrome browser, the download will appear. Once it has finished, click on the arrow and click 'Show in folder'.

Use kubeconfig

  1. Open a PowerShell prompt.
  2. Set the KUBECONFIG environment variable to point to the kubeconfig file just downloaded from VCD:
$env:KUBECONFIG="C:\Users\Administrator\Downloads\kubeconfig-pipertkc.txt"
  3. List the nodes to confirm it is the right cluster:
kubectl get nodes
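
Given the parameters chosen earlier, you should see four nodes: one control plane node and three workers. The output will be similar to the following (node names, ages, and the exact version suffix are illustrative):

NAME                           STATUS   ROLES    AGE   VERSION
pipertkc-control-plane-xxxxx   Ready    master   10m   v1.18.5+vmware.1
pipertkc-workers-xxxxx-1       Ready    <none>   8m    v1.18.5+vmware.1
pipertkc-workers-xxxxx-2       Ready    <none>   8m    v1.18.5+vmware.1
pipertkc-workers-xxxxx-3       Ready    <none>   8m    v1.18.5+vmware.1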

Deploy an application onto your Tanzu Kubernetes Cluster

In the PowerShell prompt, type in the following commands to deploy a test application:

kubectl run --restart=Always --image=gcr.io/kuar-demo/kuard-amd64:blue kuard
kubectl expose pod kuard --type=LoadBalancer --port=80 --target-port=8080

The first command deploys a container from an external image registry.

The second command creates a Kubernetes service of type LoadBalancer to expose the container to the outside world.
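
Before looking up the service IP, you can confirm the pod has pulled its image and is running (the STATUS column should eventually show Running):

# Check that the kuard pod is up
kubectl get pod kuard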

Get IP of deployed application

Type in the following command to get the IP address of your new application:

kubectl get svc/kuard

Note down the IP address that is shown under EXTERNAL-IP.
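
If EXTERNAL-IP shows <pending>, NSX is still provisioning the virtual server for the service; re-run the command after a few seconds, or watch until the address appears:

# Watch the service until an external IP is assigned (Ctrl+C to stop)
kubectl get svc kuard --watch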

Test deployed application

Open Google Chrome, and type in the address http://[EXTERNAL-IP].
You should be able to see the demo application running as shown below.
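
Alternatively, you can test the application directly from the PowerShell prompt (replace [EXTERNAL-IP] with the address you noted; a StatusCode of 200 means the application is answering):

# Quick HTTP check of the deployed application
Invoke-WebRequest http://[EXTERNAL-IP] -UseBasicParsing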

Congratulations! You have just deployed a working application in a Tanzu Kubernetes Cluster.

Configure RoleBinding

Tanzu Kubernetes Grid Service provisions Tanzu Kubernetes clusters with the PodSecurityPolicy Admission Controller enabled. This means that pod security policy is required to deploy workloads. Cluster administrators can deploy pods from their user account to any namespace, and from service accounts to the kube-system namespace. For all other use cases, you must explicitly bind to pod security policy. Clusters include default pod security policies that you can bind to, or create your own.

Let's bind to the most permissive PSP, which is equivalent to running the cluster without the PSP Admission Controller enabled:

kubectl create clusterrolebinding default-tkg-admin-privileged-binding --clusterrole=psp:vmware-system-privileged --group=system:authenticated
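
For clusters that will host real workloads, a common and less permissive pattern is to bind the privileged PSP per namespace rather than cluster-wide. A sketch using a hypothetical my-app namespace (not part of this lab):

# Allow only service accounts in the my-app namespace to use the privileged PSP
kubectl create rolebinding psp-privileged-my-app -n my-app --clusterrole=psp:vmware-system-privileged --group=system:serviceaccounts:my-app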

The cluster is now ready to welcome more workloads. Let's see how we can deploy them from Cloud Director with App Launchpad.