
Deploy Prometheus on Kubernetes

Damian Igbe, PhD
March 24, 2022, 5:15 p.m.


Prometheus, according to prometheus.io, is a full monitoring and trending system that includes built-in and active scraping, storing, querying, graphing, and alerting based on time-series data. It has knowledge of what the environment should look like (which endpoints should exist, what time-series patterns mean trouble, etc.) and actively tries to find faults. Prometheus works well for recording any purely numeric time series. It fits machine-centric monitoring as well as monitoring of highly dynamic service-oriented architectures. In a world of microservices, its support for multi-dimensional data collection and querying is a particular strength.

Prometheus is designed for reliability: it is the system you go to during an outage to quickly diagnose problems. Each Prometheus server is standalone and does not depend on network storage or other remote services, so you can rely on it when other parts of your infrastructure are broken, and you do not need to set up extensive infrastructure to use it. This tutorial will deploy Prometheus on a Kubernetes cluster.

Prometheus covers every stage of the monitoring workflow. The common features of a monitoring system are:

  • Visualization and Dashboarding: Prometheus has basic dashboarding features, though it is common practice to use Grafana for graphs and dashboards and to move to Prometheus console templates only once more expertise is gained.
  • Data Storage: Time-series data needs to be stored somewhere before it can be visualized. Prometheus stores time-series data using a dimensional model, with key-value labels attached to each time series to better organize the data and offer strong query capabilities (see the example query after this list).
  • Data Collection: Whether through traditional methods such as SNMP or newer agent-based approaches, a way to obtain the metrics that will eventually be stored as time series is needed.
  • Plug-in Extensible Architecture: Prometheus uses a plug-in architecture based on Exporters. Exporters allow third-party tools to expose their data to Prometheus, so you can extend the core functionality as well as add completely new functions to your solution. Many open-source applications already have Prometheus exporters available; a list is maintained in the Prometheus documentation.
  • Alarming and Event Tracking: Prometheus can generate alerts when a metric crosses a threshold or stops exhibiting its expected behavior, which helps you diagnose problems in your infrastructure more quickly.
  • Cloud Monitoring: Prometheus can also be used to monitor cloud environments. The AWS monitoring service, CloudWatch, includes both storage for its time-series metrics and basic graph and dashboard editing; Prometheus has an official exporter for CloudWatch, so you can monitor all your AWS components with Prometheus. Google Cloud's monitoring service, Stackdriver, also has a Prometheus exporter.
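
To get a feel for the query capability mentioned above, you can hit the Prometheus HTTP API directly once a server is reachable. As a small example (assuming the port-forward set up at the end of this post, which exposes Prometheus on localhost:9090), the built-in up metric reports which scrape targets are currently reachable:

$ curl -s 'http://localhost:9090/api/v1/query?query=up'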

Prometheus Architecture:

  • Prometheus Server: The heart of Prometheus, which scrapes and stores time-series data. You can run multiple Prometheus servers for HA, but each one is self-contained. PromQL is the query language for Prometheus time-series data.
  • Pushgateway: Supports short-lived jobs. The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway, which then exposes them to Prometheus (see the example after this list).
  • Jobs/Exporters: Special-purpose exporters exist for services like HAProxy, StatsD, Graphite, and more. A number of libraries and servers help export existing metrics from third-party systems as Prometheus metrics.
  • Alertmanager: The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integrations such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.
  • Various support tools: API clients, Grafana, and other tools can all be used with Prometheus.
  • Client libraries: Before you can monitor your services, you need to add instrumentation to your code via one of the Prometheus client libraries. These implement the Prometheus metric types in Go, Java, Ruby, and Python, among other languages.
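
As a minimal sketch of how the Pushgateway is used, a batch job can push a metric with a single HTTP request before it exits. The address pushgateway.example.org:9091 below is a placeholder; substitute the address of your own Pushgateway:

$ echo "some_metric 3.14" | curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/some_job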

Deploy Prometheus on Kubernetes

To deploy Prometheus on Kubernetes, a Kubernetes cluster must be up and running. Here we use a cluster built with kubeadm on AWS and deploy Prometheus on top of it.

If building the cluster on AWS, make sure to use at least a t2.medium instance; smaller instance types such as t2.micro and t2.small do not have enough resources and will not work. Once the Kubernetes cluster is up and running, it is time to deploy Prometheus on Kubernetes. You can follow the instructions below. We will use Helm and a Kubernetes Operator to install Prometheus and Grafana.
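
Before installing anything, it is worth a quick sanity check that all nodes have joined the cluster and report a Ready status:

$ kubectl get nodes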

Step 1: Install Helm

Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it as apt/yum/homebrew for Kubernetes; in other words, it is the package manager for Kubernetes. The alternative to Helm is to create manifest files in JSON/YAML and apply them manually with the kubectl utility, but this can be slow and inefficient. Helm makes it quick and easy. Helm comprises a client and a server application: the server side is called Tiller and the client side is the helm CLI. Tiller, the server portion of Helm, typically runs inside your Kubernetes cluster, but for development it can also be run locally and configured to talk to a remote Kubernetes cluster.
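
As a rough sketch of that package-manager workflow, here are a few everyday Helm v2 commands (the release name kube-prometheus refers to the release installed later in this guide):

$ helm search prometheus              # search chart repositories for Prometheus-related charts
$ helm list                           # list releases installed in the cluster
$ helm status kube-prometheus         # show the status of a release
$ helm delete --purge kube-prometheus # remove a release and its history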

To install Helm on Ubuntu 16.04 LTS, follow the steps below:

$ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
$ tar -xvzf helm-v2.8.2-linux-amd64.tar.gz
$ sudo mv linux-amd64/helm /usr/bin
$ helm init
$ helm repo add coreos https://s3-eu-west-1.amazonaws.com/coreos-charts/stable/

Helm RBAC Setup for K8s v1.6+

$ kubectl -n kube-system create sa tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller --upgrade
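
To confirm that Tiller came up with the new service account, check its deployment and ask Helm for both the client and server versions:

$ kubectl -n kube-system get deploy tiller-deploy
$ helm version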

Step 2: Install the Prometheus Operator

A Kubernetes Operator is a tool pioneered by CoreOS for self-managing applications on top of Kubernetes. An Operator represents human operational knowledge in software to reliably manage an application. Imagine depositing your specialist knowledge of a particular domain (e.g., Prometheus) into software so that it can act like a robot on your behalf. Operators are a step towards fully automated infrastructure. An Operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts but includes domain- or application-specific knowledge to automate common tasks.

Since CoreOS created the etcd Operator, the idea has been extended to various other services, and the list of Operators continues to grow. Here we will install the Prometheus Operator to manage our Prometheus installation.

$ helm install coreos/prometheus-operator --name prometheus-operator --namespace monitoring
$ helm install coreos/kube-prometheus --name kube-prometheus --set global.rbacEnable=true --namespace monitoring
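
The Operator registers custom resources (such as Prometheus, ServiceMonitor, and Alertmanager) that the kube-prometheus chart then uses. The exact names depend on the Operator version, but you can confirm they were created with something like:

$ kubectl get customresourcedefinitions | grep monitoring.coreos.com
$ kubectl get prometheus,servicemonitor,alertmanager -n monitoring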

$ kubectl get pods --all-namespaces
NAMESPACE    NAME                                                  READY  STATUS   RESTARTS  AGE
kube-system  etcd-masternode                                       1/1    Running  4         5d
kube-system  kube-apiserver-masternode                             1/1    Running  4         5d
kube-system  kube-controller-manager-masternode                    1/1    Running  4         5d
kube-system  kube-dns-6f4fd4bdf-rmhpg                              3/3    Running  12        5d
kube-system  kube-proxy-2fgf2                                      1/1    Running  3         5d
kube-system  kube-proxy-66t4t                                      1/1    Running  4         5d
kube-system  kube-proxy-8fgz5                                      1/1    Running  3         5d
kube-system  kube-scheduler-masternode                             1/1    Running  4         5d
kube-system  tiller-deploy-7bf964fff8-zxkqn                        1/1    Running  4         5d
kube-system  weave-net-4cmkg                                       2/2    Running  10        5d
kube-system  weave-net-9vs8l                                       2/2    Running  15        5d
kube-system  weave-net-c87c7                                       2/2    Running  9         5d
monitoring   alertmanager-kube-prometheus-0                        2/2    Running  4         5d
monitoring   kube-prometheus-exporter-kube-state-7485549855-8qcsw  2/2    Running  5         5d
monitoring   kube-prometheus-exporter-node-gcqxn                   1/1    Running  2         5d
monitoring   kube-prometheus-exporter-node-hmhz6                   1/1    Running  2         5d
monitoring   kube-prometheus-exporter-node-pdpbh                   1/1    Running  3         5d
monitoring   kube-prometheus-grafana-6f8697c6d5-b92n5              2/2    Running  4         5d
monitoring   prometheus-kube-prometheus-0                          2/2    Running  2         5d
monitoring   prometheus-operator-77fd5d856f-qrx52                  1/1    Running  2         5d

At this point, Prometheus, the Prometheus Alertmanager, and Grafana have been installed, as you can see from the pods running in the monitoring namespace.
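
The chart also creates a Service for each of these components in the monitoring namespace; listing them is a useful reference when deciding what to expose or forward:

$ kubectl get svc -n monitoring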

Step 3: Grafana for Viewing Metrics

Grafana is an open platform for analytics and monitoring. Prometheus has a basic expression browser, which is useful for debugging, but for good-looking dashboards use Grafana. Grafana has a built-in data source type for querying Prometheus.

Locally Accessing the Services

These services are running inside containers. To access the services locally, you need to forward their ports from the containers/pods to your local computer. If you deployed these on AWS and want to expose them externally, you will need to configure an Ingress controller or use a Service of type LoadBalancer, a topic for a future blog.

Prometheus:

Forward the Prometheus server to your machine so you can take a better look at the dashboard at http://localhost:9090:

$ kubectl port-forward -n monitoring prometheus-kube-prometheus-0 9090 &
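
With the port-forward in place, a quick way to verify that Prometheus is up and scraping the cluster is to query its targets endpoint (the same information is available under Status > Targets in the web UI):

$ curl -s http://localhost:9090/api/v1/targets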

Alertmanager:

Forward the Prometheus Alertmanager to your machine so you can take a better look at the dashboard at http://localhost:9093:

$ kubectl port-forward -n monitoring alertmanager-kube-prometheus-0 9093 &
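
You can do a similar check against Alertmanager; on the Alertmanager versions shipped with this chart, the endpoints below report health and any currently firing alerts:

$ curl -s http://localhost:9093/-/healthy
$ curl -s http://localhost:9093/api/v1/alerts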

Grafana:

Forward the Grafana server to your machine so you can take a better look at the dashboard at http://localhost:3000:

$ kubectl port-forward $(kubectl get pods --selector=app=kube-prometheus-grafana -n monitoring --output=jsonpath="{.items..metadata.name}") -n monitoring 3000 &
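
Before opening the browser, you can confirm Grafana is answering on the forwarded port; this should print an HTTP 200 once the pod is ready:

$ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000/login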

Conclusion

Here we have installed Kubernetes using kubeadm and deployed Prometheus on it using Helm and the Prometheus Operator to monitor the Kubernetes cluster. If you followed the guide, you should now have a running Kubernetes cluster monitored with Prometheus.
