This guide is the second part of the "Build a Kubernetes Lab with Vagrant & Ansible" series. In this article, we’ll add resource usage monitoring to your cluster by installing the Kubernetes Metrics Server.
Table of Contents
- Prerequisites
- Step 1: Install Metrics Server
- Step 2: Patch the Deployment
- Step 3: Verify Installation
- Step 4: Use the Metrics API
- What's Next?
- Useful links
📌 Prerequisite: Your Local Cluster Must Be Running
This article assumes you have already completed the first part of the series, where we created a local Kubernetes cluster using Vagrant and Ansible. If not, go back and follow Part 1: Build Your Local Kubernetes Cluster first.
Reminder: You should have exported your Kubernetes configuration locally with the following command:
vagrant ssh kubmaster -c "sudo cat /etc/kubernetes/admin.conf" > ~/kubeconfig-vagrant.yaml
Set this file as your config path:
export KUBECONFIG=~/kubeconfig-vagrant.yaml
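Optional sanity check: before continuing, you can confirm that kubectl can reach the cluster with the exported config. The node names you see should match the VMs created in Part 1 (for example, kubmaster):
KUBECONFIG=$KUBECONFIG kubectl get nodes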
Step 1: Install Metrics Server
We’ll use the official manifest from the kubernetes-sigs/metrics-server repository to install the Metrics Server:
KUBECONFIG=$KUBECONFIG kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml &> /dev/null
This deploys the metrics-server Deployment in the kube-system namespace. However, some environments (like local or self-hosted clusters) require additional flags to work properly.
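Since the apply command above discards its output, it can help to confirm that the pod was actually created. A quick check, assuming the k8s-app=metrics-server label from the upstream manifest:
KUBECONFIG=$KUBECONFIG kubectl -n kube-system get pods -l k8s-app=metrics-server
At this stage the pod may report 0/1 Ready; the patch in Step 2 addresses that.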
Step 2: Patch the Deployment
By default, the Metrics Server may fail to collect metrics in local environments because the kubelets serve self-signed certificates that it does not trust out of the box.
To fix that, apply the following patch to the deployment:
KUBECONFIG=$KUBECONFIG kubectl patch deployment metrics-server \
  -n kube-system \
  --type='json' \
  -p='[
    {
      "op": "add",
      "path": "/spec/template/spec/containers/0/command",
      "value": [
        "/metrics-server",
        "--kubelet-insecure-tls",
        "--kubelet-preferred-address-types=InternalIP"
      ]
    }
  ]' &> /dev/null
This tells the Metrics Server to:
- Skip TLS verification of the kubelet's certificate (acceptable for a local lab)
- Prefer each node's InternalIP when communicating with kubelets
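If you want to double-check that the patch landed, one way is to print the container's command from the live Deployment (the jsonpath expression below is just one option):
KUBECONFIG=$KUBECONFIG kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].command}'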
Step 3: Verify Installation
Once installed and patched, verify that everything is working:
KUBECONFIG=$KUBECONFIG kubectl get deployment metrics-server -n kube-system
If the deployment shows READY and AVAILABLE, you're good to go!
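Two more checks that can be useful: wait for the rollout to finish, and confirm that the Metrics API is registered with the aggregation layer. The v1beta1.metrics.k8s.io APIService name below is the one created by the official manifest:
KUBECONFIG=$KUBECONFIG kubectl -n kube-system rollout status deployment metrics-server
KUBECONFIG=$KUBECONFIG kubectl get apiservice v1beta1.metrics.k8s.io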
Step 4: Use the Metrics API
You can now query live resource usage data:
View Node Metrics
KUBECONFIG=$KUBECONFIG kubectl top nodes
View Pod Metrics (All Namespaces)
KUBECONFIG=$KUBECONFIG kubectl top pods -A
You now have visibility into CPU and memory usage across your cluster.
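kubectl top is a convenience wrapper around the Metrics API; if you're curious, you can also query it directly and inspect the raw JSON the server returns (piping through a JSON formatter such as jq is optional):
KUBECONFIG=$KUBECONFIG kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"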
What's Next?
With your metrics server up and running, you're ready to:
- Explore Kubernetes Horizontal Pod Autoscaling (HPA), as sketched after this list
- Set up resource dashboards like Lens or K9s
- Monitor cluster health trends locally
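As a teaser for the HPA item above, here is a minimal sketch of how autoscaling builds on the Metrics Server. The nginx deployment is hypothetical (it is not part of this series); adjust names, requests, and thresholds to taste:
# Hypothetical example: create and autoscale an nginx deployment.
# The HPA relies on the Metrics Server we just installed for CPU readings.
KUBECONFIG=$KUBECONFIG kubectl create deployment nginx --image=nginx
KUBECONFIG=$KUBECONFIG kubectl set resources deployment nginx --requests=cpu=100m
KUBECONFIG=$KUBECONFIG kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=3
KUBECONFIG=$KUBECONFIG kubectl get hpa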
In the next part of the series, we’ll configure the NGINX Ingress Controller to manage external traffic to your services.