
Kubernetes basics: A step-by-step implementation tutorial

This Kubernetes implementation example demonstrates how to create a single-node cluster on Windows 10 to get a containerized application up and running.

Containerization enables developers and system administrators to quickly deploy applications and scale them to their requirements, without downtime. With container orchestration and management, IT organizations can manage cloud-native and distributed applications, and package legacy applications for increased portability and efficient deployment.

Kubernetes, the de facto technology for container management and orchestration, is known for being complex to set up and implement. Once it's up and running, however, it is highly configurable and serves as the engine for an organization's cloud-native and application modernization efforts.

Let's get started with the key steps and Kubernetes commands for a basic Kubernetes implementation.

Basic Kubernetes architecture

First, it's useful to understand the foundation of a Kubernetes system, which revolves around four important concepts: nodes, pods, deployments and services. The servers that run the containers -- and the applications inside them -- are called nodes. A master server controls, manages and configures these node servers. The master also manages pods: groups of one or more containers that reside on nodes.

The master server acts as the control plane for the Kubernetes clusters. The main purpose of the various components in the control plane is to maintain a record of all Kubernetes objects and to continuously manage the desired state of the cluster by automatically responding to changes.

There are four major components of a control plane:

  • Etcd. The most fundamental component of a Kubernetes cluster, etcd is a lightweight, highly available key-value store. It can be set up across more than one Kubernetes node, and its main purpose is to store the cluster's configuration data so that it is accessible to each node in the Kubernetes cluster.
  • Kube-apiserver. This is a RESTful interface for command-line tools such as kubectl, providing them a way to interact with the Kubernetes cluster. In simpler terms, it serves as a management point for various tools to control and configure Kubernetes clusters through REST APIs.
  • Kube-controller-manager. The primary role of this service is to maintain the desired state of the cluster, such as the number of replicas. If any deviation from that desired state is observed, the kube-controller-manager runs the required procedures to bring the cluster back in line.
  • Kube-scheduler. This component's primary responsibility is to schedule cluster workloads based on configurations, resource requirements and workload-specific requirements.

There are several components of node servers that make it possible for the master to work with them; a quick way to view both sets of components with kubectl appears after this list. The node components include:

  • Container runtime. To run applications encapsulated in a pod that exists on nodes, we need a lightweight, isolated operating environment: the container. A container runtime is a tool, such as Docker, to create, deploy and run applications.
  • Kubelet. A kubelet is an agent or service installed on each node to communicate with and receive commands from the master node. It carries out the actions required on the node, such as creating, scaling and deleting containers.
  • Kube-proxy. This proxy service relays requests to the correct container and performs load balancing. Kube-proxy also performs host-level subnetting to ensure services are accessible to other components.
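Once the single-node cluster built in the steps below is running, you can view both sets of components from the command line. A minimal sketch -- the node name, versions and pod name suffixes will differ on your machine:

kubectl.exe get nodes -o wide

kubectl.exe get pods -n kube-system

The first command lists each node along with its kubelet version and container runtime; the second shows the control plane components -- etcd, kube-apiserver, kube-controller-manager and kube-scheduler -- running as pods in the kube-system namespace.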

Kubernetes implementation steps

In this tutorial, we'll create a single-node cluster that runs on the Windows 10 operating system.

To install and set up Kubernetes on Windows, install kubectl and minikube; the Chocolatey package manager handles both. A command-line tool, kubectl runs commands against Kubernetes clusters, while minikube is a tool that enables us to run a single-node cluster in a VM on the local machine. Enter the command:

choco install kubernetes-cli minikube

This will return an output that looks as follows:

[Screenshot: Chocolatey installs the kubernetes-cli and minikube packages]

Now, run the following command and verify that kubectl is installed on your machine:

kubectl.exe version --client

You should see output similar to the following screenshot:

[Screenshot: kubectl version output confirming the client is installed]

Next, spin up a worker machine -- or node -- in Kubernetes. It can be a physical or virtual machine. To do this, use the following command to start minikube:

minikube start

This will return an output as shown below:

[Screenshot: minikube start output]

Because we are installing Kubernetes on Windows, minikube uses Hyper-V to launch a VM and sets up the Kubernetes cluster on it.
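If minikube does not pick the Hyper-V driver on its own, you can specify it explicitly. This is a minimal sketch that assumes Hyper-V is already enabled on the machine; recent minikube releases use the --driver flag, while older ones use --vm-driver:

minikube start --driver=hyperv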

This completes the setup.

Running Kubernetes containers

To start containers, use the Kubernetes command below to create a deployment. Provide a name for the deployment and the container image to deploy; Kubernetes will automatically use Docker, minikube's default container runtime. Here, we name the deployment helloworld-nginx and use an image that runs the Nginx web server:

kubectl.exe create deployment helloworld-nginx --image nginx

When a deployment is created, Kubernetes builds pods to host application instances.

Enter get pods just after running the previous command to catch the ContainerCreating status as pods are deployed:

kubectl.exe get pods

This will complete in a few seconds and the container status should change to Running:

[Screenshot: kubectl get pods output with the pod status changed to Running]

First list the deployments, then use the describe keyword with the deployment name, as shown below, to see more detailed information about the deployment:

kubectl.exe get deployment

kubectl.exe describe deployment helloworld-nginx

This returns results, such as number of replicas, desired state and timestamps:

[Screenshot: kubectl describe deployment output]

In this Kubernetes implementation tutorial, we run only one container, an Nginx web server, but sometimes it's necessary to accommodate increased workload and traffic. In that case, scale up the number of application instances. This can be achieved with kubectl scale deployment and the --replicas parameter:

kubectl.exe scale deployment helloworld-nginx --replicas 4

Check the deployment; you will observe that four replicas of the application instance have been deployed.
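The get deployment command from earlier, with the deployment name added, shows the new replica count:

kubectl.exe get deployment helloworld-nginx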

[Screenshot: kubectl get deployment showing four replicas]

Now, check the Kubernetes pods; there should be four containers running on the node.
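Running get pods again lists the additional pods:

kubectl.exe get pods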

[Screenshot: kubectl get pods showing four running pods]

Deploy your app

Now there is an application running in multiple pods, each with its own IP address. Next, expose the deployment outside the cluster so that the application is accessible:

kubectl.exe expose deployment helloworld-nginx --port=80 --type=NodePort

Verify this via the kubectl get services command. It will return a service of type NodePort, which exposes port 80 on each node of the Kubernetes cluster. A service is an abstraction layer that groups one or more pods in the cluster behind a single, shared IP address and load balances requests across them.
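The full command is:

kubectl.exe get services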

[Screenshot: kubectl get services showing the NodePort service]

To open this application in a web browser, create a port forward that connects a local port to the service port we exposed via the NodePort service in the previous step:

kubectl.exe port-forward svc/helloworld-nginx 80:80

This will look as follows:

[Screenshot: kubectl port-forward output]

Go to the web browser and open http://127.0.0.1/ or http://localhost/ to see the homepage of the Nginx web server deployed in Kubernetes containers.

[Screenshot: Nginx welcome page served by the Kubernetes deployment]
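If you prefer the command line, recent Windows 10 builds also include curl.exe (the .exe suffix avoids PowerShell's curl alias), so you can fetch the same page while the port forward is running:

curl.exe http://127.0.0.1/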

Test the Kubernetes control plane

Lastly, test that the Kubernetes control plane, or master server, is able to maintain the desired state of the pods that run on the node server. To check this, use the following command to forcibly delete one of the pods that runs the application instance:

kubectl.exe delete pod helloworld-nginx-67bb76cf46-mks6j

This undesired state will be identified, and Kubernetes will immediately run a new instance of the application in a container.
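Run get pods once more to confirm; a replacement pod with a different name suffix appears within a few seconds:

kubectl.exe get pods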

[Screenshot: kubectl get pods showing a replacement pod created by the control plane]

What's next for your Kubernetes implementation?

Once you have the basics of Kubernetes deployment under your belt, think about how you can extend this knowledge. The related tutorials below cover areas you're likely to encounter next.

Next Steps

Kubernetes performance testing tutorial: Load test a cluster

Learn how to bootstrap Kubernetes clusters with kubeadm

Manage Kubernetes clusters with PowerShell and kubectl

How to set and manage environment variables in Kubernetes
