Houssam Bourkane

Exposing Kubernetes Metrics: Adding Metrics Server to Your Local Cluster

This guide is the second part of the "Build a Kubernetes Lab with Vagrant & Ansible" series. In this article, we’ll add resource usage monitoring to your cluster by installing the Kubernetes Metrics Server.



📌 Prerequisite: Your Local Cluster Must Be Running

This article assumes you have already completed the first part of the series, where we created a local Kubernetes cluster using Vagrant and Ansible. If not, go back and follow Part 1: Build Your Local Kubernetes Cluster first.

Reminder: You should have exported your Kubernetes configuration locally with the following command:

vagrant ssh kubmaster -c "sudo cat /etc/kubernetes/admin.conf" > ~/kubeconfig-vagrant.yaml

Set this file as your config path:

export KUBECONFIG=~/kubeconfig-vagrant.yaml
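
To confirm the config is being picked up, you can list the cluster nodes; the names should match the VMs you created in Part 1 (for example, kubmaster):

KUBECONFIG=$KUBECONFIG kubectl get nodes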

Step 1: Install Metrics Server

We’ll use the official manifest from the Kubernetes SIG repository to install the Metrics Server:

KUBECONFIG=$KUBECONFIG kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

This deploys the Metrics Server into the kube-system namespace. However, some environments (such as local or self-hosted clusters) need additional flags before it works properly.
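
You can check that the pod was created; the upstream manifest labels it with k8s-app=metrics-server. On a local cluster it will likely stay not Ready at this point, which is exactly what the next step fixes:

KUBECONFIG=$KUBECONFIG kubectl get pods -n kube-system -l k8s-app=metrics-server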


Step 2: Patch the Deployment

By default, the Metrics Server may fail to collect metrics in local environments because the kubelets serve self-signed certificates that it cannot verify.

To fix that, apply the following patch to the deployment:

KUBECONFIG=$KUBECONFIG kubectl patch deployment metrics-server \
  -n kube-system \
  --type='json' \
  -p='[
    {
      "op": "add",
      "path": "/spec/template/spec/containers/0/command",
      "value": [
        "/metrics-server",
        "--kubelet-insecure-tls",
        "--kubelet-preferred-address-types=InternalIP"
      ]
    }
  ]'

This tells the Metrics Server to:

  • Skip TLS certificate verification when scraping the kubelet (acceptable for a local lab, not for production)
  • Prefer each node's InternalIP when communicating with kubelets
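
To double-check that the patch applied, print the container's command directly from the deployment spec; you should see both flags listed:

KUBECONFIG=$KUBECONFIG kubectl get deployment metrics-server -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].command}'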

Step 3: Verify Installation

Once installed and patched, verify that everything is working:

KUBECONFIG=$KUBECONFIG kubectl get deployment metrics-server -n kube-system

If the deployment reports 1/1 under READY and 1 under AVAILABLE, you're good to go!
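
If it isn't ready yet, it can take a minute or two for the first metrics scrape to succeed. You can wait on the rollout and confirm that the Metrics API has been registered:

KUBECONFIG=$KUBECONFIG kubectl rollout status deployment/metrics-server -n kube-system
KUBECONFIG=$KUBECONFIG kubectl get apiservice v1beta1.metrics.k8s.io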


Step 4: Use the Metrics API

You can now query live resource usage data:

View Node Metrics

KUBECONFIG=$KUBECONFIG kubectl top nodes

View Pod Metrics (All Namespaces)

KUBECONFIG=$KUBECONFIG kubectl top pods -A

You now have visibility into CPU and memory usage across your cluster.
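
A couple of variations are handy when hunting for the heaviest consumers, such as sorting by usage or breaking pods down per container:

KUBECONFIG=$KUBECONFIG kubectl top pods -A --sort-by=memory
KUBECONFIG=$KUBECONFIG kubectl top pods -n kube-system --containers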


What's Next?

With your metrics server up and running, you're ready to:

  • Explore Kubernetes Horizontal Pod Autoscaling (HPA); a minimal example follows this list
  • Set up resource dashboards like Lens or K9s
  • Monitor cluster health trends locally
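
As a quick taste of HPA, which relies on the Metrics API you just enabled, here is a sketch of autoscaling on CPU usage. The deployment name my-app is a placeholder; swap in whatever you actually run, and adjust the thresholds:

# "my-app" is a hypothetical deployment name used only for illustration
KUBECONFIG=$KUBECONFIG kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=3
KUBECONFIG=$KUBECONFIG kubectl get hpa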

In the next part of the series, we’ll configure the NGINX Ingress Controller to manage external traffic to your services.

