Provide Container-as-a-Service on vCD

Introduction to Container Service Extension

Container Service Extension (CSE) is a VMware vCloud Director (vCD) extension that helps tenants create and work with Kubernetes clusters.

CSE brings Kubernetes as a Service to vCD, by creating customized VM templates (Kubernetes Node templates) and enabling tenant users to deploy fully functional Kubernetes clusters as self-contained vApps.

Now let's get started.

Install & Configure CSE

The installation will be done on the vCD cell because this is a lab environment.
In a production environment, it is recommended to have a dedicated host to run CSE.

SSH to vCD server (vcd-01a.corp.local)

From the desktop, find the 'PuTTY' icon in the task bar and click on it.

  1. Scroll down to find vcd-01a.corp.local
  2. Select vcd-01a.corp.local in the list
  3. Click the Open button

Download CSE

Container Service Extension is shipped as a Python package that can be installed directly from the public Python Package Index (PyPI).

Enter the following command to install the package for CSE:

pip3 install container-service-extension

This command installs the Container Service Extension (CSE) package onto the vCD server.
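To verify the installation, display the version of the freshly installed package:

cse version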

Create config file

Generate a sample configuration file, then restrict its permissions since it will contain credentials:

cse sample -o /root/cse/config.yaml
chmod 600 /root/cse/config.yaml

Edit config file with Notepad++

For simplicity, Notepad++ will be used for editing and NppFTP will be used to access the file.

Start Notepad++ from the desktop.

Open config file on the server

The NppFTP window should already be visible on the right side.
If not, click Plugins in the menu bar > NppFTP > Show NppFTP Window.

  1. Click on the 'cog' icon > vcd-01a.corp.local - CSE.

This should show the sample config file that was created previously.

  1. Double-click the config.yaml file to edit it

Update the configuration sample

Fill in the information as below (copy and paste from inside the console) and save the file.

amqp:
  exchange: cse
  host: vcd-01a.corp.local
  password: guest
  port: 5672
  prefix: vcd
  routing_key: cse
  ssl: false
  ssl_accept_all: false
  username: guest
  vhost: /

vcd:
  api_version: '33.0'
  host: vcd-01a.corp.local
  log: true
  password: VMware1!
  port: 443
  username: administrator
  verify: false

vcs:
- name: vcsa-01a.corp.local
  password: VMware1!
  username: administrator@corp.local
  verify: false

service:
  enforce_authorization: false
  listeners: 5
  log_wire: false

broker:
  catalog: public-cse
  default_template_name: photon-v2_k8-1.14_weave-2.5.2
  default_template_revision: 1
  ip_allocation_mode: pool
  network: public-internet
  org: public
  remote_template_cookbook_url: https://raw.githubusercontent.com/vmware/container-service-extension-templates/master/template.yaml
  storage_profile: '*'
  vdc: public-OVDC

pks_config: null

Indentation must be accurate; otherwise the YAML file will have syntax errors.

Don't forget to save the file (File > Save)!
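Optionally, the configuration file can be validated before going further. A sketch, assuming a CSE version where the check command takes the config file as an argument (the exact syntax may differ between CSE releases):

cse check /root/cse/config.yaml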

Create template

Container Service Extension comes with ready-to-build templates but it is also possible to create custom templates.

Switch back to the PuTTY window (SSH session to vcd-01a.corp.local) and enter the following command to list the available templates:

cd /root/cse
cse template list

Since it is a lab, only one template will be created.

Enter the following command to create one template in vCD:

cse template install photon-v2_k8-1.14_weave-2.5.2 1

Clean CSE cache

rm -rf cse_cache

Install CSE

With one template now available in vCD, Container Service Extension can be installed without re-creating all the templates.

CSE is installed by running the following command:

cse install -c config.yaml --skip-template-creation --ssh-key /root/.ssh/authorized_keys

-c config.yaml = use the config file that was created before
--skip-template-creation = leverage any existing templates in the catalog without re-creating new ones
--ssh-key /root/.ssh/authorized_keys = use the same SSH key as the one on this vCD cell

This command configures the vCloud Director API extension and deploys new rights for CSE. Additionally, it configures the RabbitMQ message bus if necessary.

Start CSE

CSE has been installed and needs to be started. The trailing '&' runs the process in the background of the current SSH session:

cse run --config config.yaml &

Congratulations!
Container Service Extension is successfully running.

In a production environment, a Linux service can be used for easier lifecycle management of the CSE service.
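For illustration, a minimal systemd unit could look like the following. This is only a sketch: the unit name (cse.service), the cse binary path (/usr/local/bin/cse, where pip3 typically places it) and the config location are assumptions to adapt to the actual environment.

Save as /etc/systemd/system/cse.service:

[Unit]
Description=Container Service Extension for vCloud Director
After=network.target

[Service]
# Run CSE in the foreground; systemd handles restarts and logging
ExecStart=/usr/local/bin/cse run --config /root/cse/config.yaml
Restart=always

[Install]
WantedBy=multi-user.target

Then enable and start the service:

systemctl enable --now cse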

Deploy Kubernetes cluster and applications

Once CSE is up and running, Kubernetes clusters can be created.

Open a Command Line in Windows

A Kubernetes cluster can be created from any computer that has access to vCloud Director. The only requirement is to have vcd-cli with the CSE client extension installed.
The Control Center already has vcd-cli installed.
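For reference, on any other workstation the setup would consist of installing the CSE package with pip3 (it also ships the vcd-cli client extension) and enabling the extension in the vcd-cli profile file, per the CSE documentation for the version in use (a sketch; not needed on the Control Center):

pip3 install container-service-extension

Then add the following to ~/.vcd-cli/profiles.yaml:

extensions:
- container_service_extension.client.cse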

Click the Command Prompt shortcut in the Task Bar

Login as tenant admin

Before using CSE, it is required to log in to vCloud Director.
Enter the following command to log in as a tenant admin:

vcd login vcd-01a.corp.local kanjana kadmin -iw

The password is: VMware1!

Create a kubernetes cluster

Type the following command to create a Kubernetes cluster with:
- 2 worker nodes
- the k-routed network
- the Control Center SSH key, so SSH connections to the nodes are easy

vcd cse cluster create -N 2 -k C:\hol\SSH\keys\cc_authkey.txt -n k-routed k-cluster

It takes around 5-10 minutes to create the cluster.

Progress can be followed in the command prompt as well as in vCloud Director in the Recent Tasks pane.
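The cluster status can also be queried at any time with the CSE client:

vcd cse cluster list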

Retrieve kubectl configuration

In order to interact with a Kubernetes cluster and deploy applications, the kubectl command is used. This command leverages a configuration file to know how to contact and authenticate to the Kubernetes cluster.

Unless specified otherwise, kubectl uses the config file in the .kube folder located at the root of the user profile
(valid for both Windows and Linux).

Run the following command to retrieve the configuration from CSE and store it into the config file.

vcd cse cluster config k-cluster > .kube\config
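Note: if the .kube folder does not exist yet in the user profile, create it first, otherwise the redirection will fail:

mkdir .kube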

Check cluster configuration

kubectl get nodes

The command should list the master node and the two worker nodes in the 'Ready' state.

Deploy Sock Shop Demo application

In the deployment phase, we are going to achieve the following objectives:

  1. Create a namespace to deploy the application into
  2. Deploy the Sock Shop application
  3. Monitor the deployment and retrieve access information

Create a namespace

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces can not be nested inside one another and each Kubernetes resource can only be in one namespace.

Namespaces are a way to divide cluster resources between multiple users.

Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.

Since multiple applications could have overlapping service names, it is better to create a namespace in this case. Let's start with our first application, "Sock Shop", and create a namespace for it:

kubectl create namespace sock-shop
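The new namespace can be confirmed with:

kubectl get namespaces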

Deploy the application

One way to create a Deployment from a .yaml file is to use the "kubectl apply" command in the kubectl command-line interface, passing the .yaml file as an argument. The YAML file can even be referenced directly from GitHub by its URL.

Let's create a deployment of the sock-shop application by running the following command:

kubectl apply -n sock-shop -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml

Monitor the container deployment

It can take a couple of minutes for the deployment to be ready. The progress can be monitored with a couple of kubectl commands.

kubectl get pods --namespace sock-shop
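Adding the -w (watch) flag streams status changes as the pods start; the deployments can be checked the same way:

kubectl get pods --namespace sock-shop -w
kubectl get deployments --namespace sock-shop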

Retrieve app information

In order to be able to access the application, this particular deployment exposes a service (of type NodePort) that is made available on the Kubernetes nodes' IP addresses through a TCP port that is dynamically allocated.

Let's retrieve the IP address of one of the nodes of the Kubernetes cluster with the following command:

vcd cse cluster info k-cluster

Take note of the master node IP address.

Let's retrieve the port that is mapped to the front-end service by running the following command:

kubectl get service/front-end -n sock-shop

Take note of the service port
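If preferred, the NodePort can also be extracted directly with a jsonpath query, which returns just the port number:

kubectl get service/front-end -n sock-shop -o jsonpath="{.spec.ports[0].nodePort}"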

Connect to the application

Use the IP address and port retrieved in the previous steps to connect to the application.
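For example, with a hypothetical node IP address of 192.168.110.50 and service port 30001, the Sock Shop front end would be reachable in a browser at http://192.168.110.50:30001/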

Deploy Google Micro Service Demo

Repeat the same operations to deploy another demo application in the same Kubernetes cluster.

The YAML file is the following:

https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/kubernetes-manifests.yaml

Create a namespace to deploy the application:

kubectl create namespace google-demo

Deploy the application:

kubectl apply -n google-demo -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/kubernetes-manifests.yaml

Retrieve application information:

vcd cse cluster info k-cluster

Take note of the master node IP address.

kubectl get service/frontend-external -n google-demo

Take note of the service port

Remove the deployed application

It is now time to clean up the Kubernetes cluster by running the following commands:

kubectl delete -n sock-shop -f https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml
kubectl delete -n google-demo -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/kubernetes-manifests.yaml
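The namespaces created for the two applications can be removed as well:

kubectl delete namespace sock-shop google-demo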