Introduction to Docker Containers and Kubernetes

Damian Igbe
Feb. 21, 2022, 8:14 p.m.



Let’s start by defining Kubernetes as the open-source platform for automating the deployment, scaling, and management of application containers. The emphasis here is on the application containers: without containers, Kubernetes has nothing to manage.

So what are application containers, and why are they relevant? Docker, Inc. popularized containers with the introduction of Docker images. Basically, an image is a self-contained template holding an application and all the dependencies required to run it: binaries or source files, configuration files, shared libraries, environment variables, and anything else the application might need. When a Docker runtime engine instantiates an image, it creates a running instance of that image called a Docker container. A running container behaves as if it were the only application on the entire computer system, which is good because its processes cannot conflict with any other running process.
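As a concrete sketch, a Docker image is typically described by a Dockerfile. The example below is hypothetical (the base image, file names, and start command are assumptions for illustration, not taken from this article):

```dockerfile
# Hypothetical Dockerfile for a small Python web app.
# Base image: supplies the OS layer and the Python runtime.
FROM python:3.11-slim

WORKDIR /app

# Install the application's dependencies first so this layer is cached.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself.
COPY . .

# An environment variable baked into the image.
ENV APP_ENV=production

# Command executed when the image is instantiated as a container.
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` would turn this template into an image, and `docker run myapp` would instantiate the image as a container.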

Containers are so relevant that they changed the way IT looks at both software development and operations management. They are particularly valuable to software developers, who can now do things that were difficult before containers, such as:

    • Maintain a uniform, consistent environment across the development, testing, and production stages of software development.
    • Run a full development environment on a local laptop, because containers are lightweight.
    • Practice agile software development and move more easily to a full-blown production application.
    • Share container images easily for collaboration.


Comparing Containers to Hypervisors

Containers and hypervisors are conceptually similar but differ in important ways. In the diagrams below, there are two major differences.

  1. In Fig 2, the container system (such as Docker) has replaced the hypervisor (e.g., ESXi, KVM, or Xen) in Fig 1.
  2. Fig 2 does not have the operating system sublayer present on each virtual machine in Fig 1. This is because each container uses the operating system of the host computer (operating system virtualization) while in Fig 1, each VM has its own operating system (hardware virtualization).

Fig 1: Hypervisor manager running 3 virtual machines with applications

Fig 2: Container manager running 3 containerized applications


The overall goal in both cases is the isolation of processes, but the container manager achieves it more efficiently. Not only do you get better ROI with the container manager, since you can achieve greater container density on equivalent hardware than with a hypervisor, but containers also start far faster than virtual machines, typically in a fraction of a second rather than the tens of seconds a VM may need to boot.

Both virtual machines and containers are created from images, but container images are lightweight and more portable between different computing environments than virtual machine images. The new way of developing an application is therefore to package the code along with its dependencies into a Docker image, push the image to a registry, and then use it to create containers. Putting the image in a central Docker registry also makes it easy to share between groups.
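That build, push, and run workflow can be sketched with the standard Docker CLI. The image name, registry host, and tag below are hypothetical:

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Tag it for a central registry (registry host and repository are hypothetical)
docker tag myapp:1.0 registry.example.com/team/myapp:1.0

# Push it to the registry so other groups can share it
docker push registry.example.com/team/myapp:1.0

# Anyone with access can now pull the image and create a container from it
docker pull registry.example.com/team/myapp:1.0
docker run -d -p 8080:8080 registry.example.com/team/myapp:1.0
```

Note that these commands assume a running Docker daemon and access to a registry; the overall flow, not the exact names, is the point.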

Containerized Microservices Apps

So containers are revolutionary and very helpful to developers, but the next challenge is managing large clusters of containers. On a single node, containers are great and don’t require much management. With the introduction of containers, however, developers have also found a new and better way of building cloud applications. Cloud applications, or cloud-native applications, are designed in the form of microservices, as compared to traditional monolithic applications. To develop a cloud-native microservice application, developers split the work into various functions and then develop each function as a standalone application that communicates with the other standalone applications through API calls. Each standalone application, called a microservice, is then containerized and deployed on the cloud platform.

   i.e. one microservice = one containerized application

A typical cloud-native application consists of hundreds, if not thousands, of running containerized microservices. In such a dynamic environment, we need a system in charge of coordinating them and managing the overall health of the application. At a minimum, we need to perform the following:

    • Scheduling of containers to the hosting environment (bare-metal or VMs)
    • Naming and service discovery so that containers/microservices can talk to one another
    • Load balancing of traffic to several containers that perform the same operation
    • Scaling the microservices application either due to increased or decreased demand
    • Logging and monitoring to check the health of the application
    • Debugging
    • Volume attachment and management so that persistent data can be stored and saved
    • Security within the microservices application
    • Rolling updates to achieve zero-downtime deployments
    • Networking infrastructure to enable the communication between microservices
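Many of the items on this list map directly onto Kubernetes objects. The manifest below is a hypothetical sketch (the service name, image, and ports are assumptions): a Deployment covers scheduling, scaling, health monitoring, and rolling updates, while a Service provides naming, discovery, and load balancing across the replicas.

```yaml
# Hypothetical Deployment for one containerized microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                  # scaling: run three copies of the container
  strategy:
    type: RollingUpdate        # rolling updates for zero downtime
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: registry.example.com/team/orders:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:         # health monitoring: restart unhealthy containers
          httpGet:
            path: /healthz
            port: 8080
---
# Service: naming, service discovery, and load balancing across the replicas
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders
  ports:
  - port: 80
    targetPort: 8080
```

Other microservices would reach this one simply by calling `http://orders`, with Kubernetes spreading the traffic over the three replicas.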


Managing Containers at scale with Kubernetes

The helmsman in charge of all these activities is Kubernetes (the name is Greek for “helmsman” or “pilot”). This is the captain of the ship. Needless to say, without this container helmsman, containers would not have achieved the prominent status they occupy today, since they would be very difficult to manage at scale. We can illustrate this with the ship on the sea below.


Here, if each of the containers represents a microservice application, Kubernetes is the captain of the ship, and the captain’s competence can hardly be overemphasized. At the same time, this comparison is an oversimplification, since in reality the functions of Kubernetes, as stated earlier, also involve coordinating many activities between the microservices/containers.

Features of Kubernetes

Now that we understand the critical role of Kubernetes in containerized applications, let’s mention two other features that make Kubernetes the darling of the cloud today: portability and extensibility.


A major advantage of containers and Kubernetes, and it is a major one, is that they are abstracted from the underlying infrastructure, so any cloud-native containerized app can be deployed and managed concurrently across on-premises environments and any public cloud provider. This is commonly referred to as multi-cloud.

    • Portability means no vendor lock-in
    • Portability means freedom
    • Portability means peace of mind even if your cloud provider gets hit by a meteorite



Kubernetes is also designed to be extensible. This means you can add features to extend its capabilities to suit your cloud applications, either by developing those features yourself or by obtaining a plugin from the community.
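One common extension mechanism is the CustomResourceDefinition (CRD), which teaches the cluster a brand-new resource type that you or a community plugin can then act on. The sketch below is hypothetical (the group, resource names, and fields are assumptions for illustration):

```yaml
# Hypothetical CRD adding a new "Backup" resource type to the cluster
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:          # e.g., a cron-style backup schedule
                type: string
```

Once applied, users could create `Backup` objects with `kubectl` just like built-in resources, and a custom controller would watch for them and do the actual work.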


In conclusion, Kubernetes is one of the most relevant open-source projects today, and it seems to be the only glue that ties all the cloud providers together. With the size and momentum of the community behind it, and with the new focus on cloud-native applications, it looks like the journey has just begun. It will be interesting to see how Docker and Kubernetes continue to change the cloud computing landscape.
