Deploying a Microservice with Docker and Kubernetes

Damian Igbe, PhD
March 24, 2022, 6:33 p.m.


This is a series of blogs and video tutorials on managing microservices with Kubernetes. I aim to cover the following topics, using a microservice application for illustration:

  1. Deploying a microservice with Docker and Kubernetes (this blog)
  2. Scaling a microservice application in Kubernetes
  3. Blue/Green deployment of microservices in Kubernetes
  4. Communication between Microservices in Kubernetes
  5. Accessing microservices with a Load Balancer and Ingress in Kubernetes
  6. Service mesh in Kubernetes with Istio
  7. Securing a microservices application in Kubernetes
  8. Volume management of microservices in Kubernetes
  9. GitOps of microservices in Kubernetes
  10. Managing a microservice with Helm and Operator
  11. Monitoring and Logging of microservices in Kubernetes

In this first blog of the series, let me make sure we are all clear on Docker and Kubernetes. I am often asked the questions:

  • Why Containers and why Kubernetes?
  • And even sometimes, what is the difference between Docker and Kubernetes?

In this blog, I will answer by showing you what Docker and Kubernetes can do.


The Voting App Microservice

Here is an application called the Voting App that I will use to illustrate the difference between Docker and Kubernetes. This code is publicly available from Docker Inc. at this GitHub location https://github.com/dockersamples/example-voting-app

In this tutorial, I am going to run the voting application using Docker and then Kubernetes, and then explain the main differences between them. Here is the architecture of the application: it is a microservices application consisting of 5 services, described below:


  1. Voting-app: a Python front-end web app which lets you vote between two options
  2. Redis: a Redis queue which collects new votes
  3. Worker: a .NET Core worker which consumes votes and stores them in the database
  4. DB: a Postgres database backed by a Docker volume
  5. Result: a Node.js web app which shows the results of the voting in real time

In a nutshell, users vote at the voting-app endpoint and you can view the results at a result-app endpoint. Both voting-app and result-app run on different ports.

Now that we understand how this application works, let us deploy the application using Docker and Kubernetes.  I already have a Kubernetes cluster consisting of 1 master node and 4 worker nodes. I also have a single node dedicated to Docker.

Lab 1: Create the Application using Docker Compose

First, I will ssh to the Docker node and bring up the microservice application with a Docker utility called Docker Compose. In a nutshell, Docker Compose lets you run a multi-tiered application with a single command, docker-compose up. Before running the command, you need a docker-compose file that describes the microservice application. You can get more information about Docker Compose in the Docker documentation.
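To give a feel for what docker-compose consumes, here is a trimmed-down sketch of a compose file covering just two of the five services. This is illustrative, not the repository's full file, which defines all five services plus networks and a named volume for Postgres:

```yaml
# Sketch of a docker-compose file for two of the five services.
version: "3"
services:
  vote:
    build: ./vote        # Python front-end; listens on port 80 inside the container
    ports:
      - "5000:80"        # exposed on port 5000 of the Docker host
  redis:
    image: redis:alpine  # in-memory queue that collects new votes
```

With a file like this in place, docker-compose up builds (or pulls) the images and starts every service it declares.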

So, let’s create the application from the provided docker-compose file:

cloudexperts@dockerlab:~$ git clone https://github.com/dockersamples/example-voting-app
cloudexperts@dockerlab:~$ cd example-voting-app
cloudexperts@dockerlab:~$ docker-compose up

cloudexperts@dockerlab:~/example-voting-app$ docker-compose ps
Name                          Command                          State   Ports
db                            docker-entrypoint.sh postgres    Up      5432/tcp
example-voting-app_result_1   docker-entrypoint.sh nodem ...   Up      0.0.0.0:5858->5858/tcp, 0.0.0.0:5001->80/tcp
example-voting-app_vote_1     python app.py                    Up      0.0.0.0:5000->80/tcp
example-voting-app_worker_1   /bin/sh -c dotnet src/Work ...   Up
redis                         docker-entrypoint.sh redis ...   Up      6379/tcp

As you can see, when I run docker-compose ps, a number of containers are running: the 5 microservices from the architecture described above.

To view this application in the browser, I need the voting-app microservice and the port it is running on. You can see from the docker-compose output above that example-voting-app_vote is running on port 5000, while example-voting-app_result is running on port 5001. In Docker terminology, these microservices are exposed on ports 5000 and 5001 respectively on the Docker host, though each application runs on port 80 inside its container.

To access the voting application and the voting results from the browser, use the IP address of the Docker host where the containers are running, plus the port. (Remember to change the IP address to that of your own Docker node.)


In Lab 2 below, we will see how Kubernetes manages the same application we just deployed with Docker Compose.

Lab 2: Create the Application using Kubernetes

Let’s create the microservice from the provided manifest files. Here I will log in to the master node of the Kubernetes cluster and create the voting app by running the commands below. YAML manifest files are used to describe the deployment of an application on Kubernetes, and these manifests are provided in the GitHub repository:

cloudexperts@master1:~$ git clone https://github.com/dockersamples/example-voting-app
cloudexperts@master1:~$ cd example-voting-app/k8s-specifications
cloudexperts@master1:~/example-voting-app/k8s-specifications$ ls -al
total 44
drwxrwxr-x 2 cloudexperts cloudexperts 4096 Jun  4 19:39 .
drwxrwxr-x 8 cloudexperts cloudexperts 4096 Jun  8 14:52 ..
-rw-rw-r-- 1 cloudexperts cloudexperts  646 Jun  4 17:57 db-deployment.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  209 Jun  4 17:57 db-service.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  510 Jun  4 17:57 redis-deployment.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  221 Jun  4 17:57 redis-service.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  408 Jun  4 17:57 result-deployment.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  239 Jun  4 17:57 result-service.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  394 Jun  4 17:57 vote-deployment.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  234 Jun  4 18:06 vote-service.yaml
-rw-rw-r-- 1 cloudexperts cloudexperts  335 Jun  4 17:57 worker-deployment.yaml
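The listing above shows a deployment and a service manifest for each microservice. As an illustration of what one of these looks like, here is a sketch of roughly what vote-deployment.yaml contains (treat the replica count and labels as illustrative rather than the exact file contents):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote
spec:
  replicas: 2                    # run two copies of the vote pod
  selector:
    matchLabels:
      app: vote                  # manage pods carrying this label
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
      - name: vote
        image: dockersamples/examplevotingapp_vote  # pre-built image on Docker Hub
        ports:
        - containerPort: 80      # port the app listens on inside the pod
```

A Deployment like this creates a ReplicaSet, which in turn keeps the requested number of pods running.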

cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   18d   v1.18.3
node01    Ready    <none>   18d   v1.18.3
node02    Ready    <none>   18d   v1.18.3
node03    Ready    <none>   18d   v1.18.3
node04    Ready    <none>   18d   v1.18.3

cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl create ns vote
cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl create -f . -n vote
deployment.apps/db created
service/db created
deployment.apps/redis created
service/redis created
deployment.apps/result created
service/result created
deployment.apps/vote created
service/vote created
deployment.apps/worker created

This creates all the Kubernetes objects needed to run the microservice. Let me mention here that Kubernetes has its own way of doing things, and it can often look more complicated than Docker. Here, four kinds of Kubernetes objects are involved in creating the microservice: Pods, ReplicaSets, Deployments, and Services. I will explain the functions of these objects in the follow-up tutorials. To access the microservice application running on Kubernetes, you need the voting-app microservice and its port; to find that, I need to check the Service object. Before doing that, let me ensure that all the pods/containers are up and running.

cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl get pods -n vote
NAME                      READY   STATUS    RESTARTS   AGE
db-6789fcc76c-cm5c9       1/1     Running   0          5h35m
redis-554668f9bf-9wbfs    1/1     Running   0          5h35m
result-79bf6bc748-qtjlw   1/1     Running   19         5h35m
vote-7478984bfb-pq46s     1/1     Running   0          5h35m
vote-7478984bfb-vrbkr     1/1     Running   0          5h26m
worker-dd46d7584-7btn7    1/1     Running   0          5h35m

cloudexperts@master1:~/example-voting-app/k8s-specifications$ kubectl get svc -n vote
NAME     TYPE        EXTERNAL-IP   PORT(S)          AGE
db       ClusterIP   <none>        5432/TCP         5h36m
redis    ClusterIP   <none>        6379/TCP         5h36m
result   NodePort    <none>        5001:31001/TCP   5h36m
vote     NodePort    <none>        5000:31000/TCP   5h36m

Here I run the command to view the vote service in the vote namespace. There are different types of Service objects, but this one is a NodePort. To access an application exposed on a NodePort, I need the IP address of any of the cluster nodes and the second port number shown (NodePorts always fall between 30000 and 32767). For the vote service, the first port, 5000, is the ClusterIP port, while the second port, 31000, is the NodePort.
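The three port numbers involved can be seen side by side in a sketch of a NodePort service manifest like the one used for the vote service (the values mirror the kubectl output above; the label selector is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vote
spec:
  type: NodePort          # expose the service on a port of every cluster node
  selector:
    app: vote             # route traffic to pods carrying this label
  ports:
  - port: 5000            # ClusterIP port, reachable from inside the cluster
    targetPort: 80        # port the container actually listens on
    nodePort: 31000       # node port; must fall in the 30000-32767 range
```

So traffic arriving at any node on 31000 is forwarded to port 80 of a vote pod.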

Here I can access the voting app on port 31000 and the results on port 31001. You can check this blog to understand Kubernetes Service objects and their different types. To access the voting microservice and the result microservice from the browser, use any node IP of the Kubernetes cluster and the corresponding port (remember to change the IP address to one of the IP addresses of your Kubernetes cluster):



Now that we have seen the same application deployed on both Docker and Kubernetes, let me answer the questions that we started with.

Why Containers and why Kubernetes?

Both the Docker runtime and the Kubernetes orchestrator help us deploy and run a microservice application. You can also run your application in a VM or even on a physical bare-metal machine, but compared to those options, Docker trumps them all because Docker is:

  • Portable: portable across different platforms. You can have the same application running on-premises and on different cloud platforms such as AWS and GCP at the same time
  • Fast: lightweight and hence very fast
  • Cheap: you can pack several containers onto a single node, making it cheaper than VMs and blade servers

Rarely do you get something faster for cheaper, but that is what Docker provides. Docker enables you to build, ship, and run your application. With Docker, you package your application into a container along with all the required libraries (build), upload the image to a registry (ship), and then run the application as I just did.
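The build step is driven by a Dockerfile. As an illustration (this is a sketch, not the exact file from the voting app repository, and the file names requirements.txt and app.py are assumptions), a small Python web app like the vote service could be packaged roughly like this:

```dockerfile
# Illustrative Dockerfile for a small Python web app (the "build" step).
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 80
CMD ["python", "app.py"]
```

Build, ship, and run then map to three commands: docker build -t <registry>/<name> . (build), docker push <registry>/<name> (ship), and docker run -p 5000:80 <registry>/<name> (run), where <registry>/<name> is a placeholder for your own image name.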

Great! Now that we know why Docker and Kubernetes, let us look at some of the differences between them:

Differences Between Docker and Kubernetes

The voting app may run fine on a single node, but imagine that you want to expose it to an entire country of several million people. To handle that many users, you need to scale the application. Better still, you want to scale it dynamically as the number of people accessing the website grows. As good as Docker is, it operates on a single node; once you want to run your application on more than one node and scale it out, Docker alone can no longer help, because serving that many users requires more resources than one node can provide. For more than one node, you generally need an orchestrator, and this is where Kubernetes comes in. Kubernetes is an orchestrator (cluster manager) that helps with several things, such as:

  • Scheduling of the containers (that form the microservice) onto different nodes
  • Communication between the microservices on different nodes
  • Scaling of the microservices independently
  • Security of the microservices application as a whole
  • Monitoring of the microservices
  • Logging of the microservices
  • Networking setup for the microservice application
  • Volume management for the microservice application

So, to clarify: what is the difference between Docker and Kubernetes? Docker is a container runtime, while Kubernetes is a cluster manager that handles the above-mentioned requirements of a microservice. Docker handles mostly the building and shipping of images and the running of a microservice on an individual node, while Kubernetes runs the entire set of microservices across all the nodes of a production environment. When running Kubernetes, you still need a container engine like Docker; in fact, Kubernetes is useless without one.

The Workflow to create and run a microservice typically looks like this:

  1. Write the code of your microservice
  2. Use Docker to package the code along with all the dependencies in an image
  3. Repeat the above 2 steps for all your microservices
  4. Upload the images to the Docker hub or your Docker registry of choice
  5. Select your Kubernetes cluster of choice—self-managed or use managed Kubernetes cluster
  6. Deploy and manage your microservices on a Kubernetes cluster (you can use kubectl or the Kubernetes Dashboard)


In teaching microservices, Docker, and Kubernetes, this is how I try to help my students understand the differences between Docker and Kubernetes. Here, I have not only explained the differences but also shown you how to run a microservice application on both platforms. Used properly, Docker and Kubernetes are as powerful a combination as IT infrastructure management has ever seen. Stay tuned for the rest of the series, and thank you for reading/watching.
