Kubernetes is powerful, but writing YAML is a pain
If you’ve worked with Kubernetes for more than a few days, you’ve probably run into this moment:
You know what you want to do: deploy something simple, maybe expose a port, set a few env vars. But somehow, you're 40 lines deep into a YAML file wondering if you got the indentation right. Again.
Kubernetes is great for managing infrastructure, but let's be honest: writing and editing YAML isn't exactly fun. It's repetitive, fragile, and weirdly easy to mess up even when you know what you're doing.
There’s also the constant context switching:
Docs → Terminal → Editor → YAML → Error → Back to docs.
For experienced devs, it’s annoying.
For new folks, it’s borderline hostile.
But what if instead of all that, you could just say:
“create a deployment with 3 replicas of nginx”
…and it generates the YAML for you?
That's what kubectl-ai does. And after using it for a bit, I think it's one of the more useful AI tools I've added to my workflow this year, not because it's magical, but because it does one thing really well: it turns plain language into working Kubernetes manifests.
Let’s break down how it works, why it’s useful, and what to watch out for.
What is kubectl-ai?
kubectl-ai is a command-line plugin that lets you describe what you want in plain English, and it returns the Kubernetes YAML for it.
It's kind of like having a smart assistant built into your terminal that understands Kubernetes: instead of remembering every field and format, you just say what you need.
Example:
kubectl ai "create a service for my nginx deployment on port 80"
And you get a complete Service manifest. No guessing, no digging through docs.
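For reference, the result is a standard Service manifest. A minimal sketch of what you'd expect back (the exact names and labels depend on your deployment; the ones here are assumptions):

```yaml
# Roughly the shape of the generated output (names/labels assumed)
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```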
The plugin was created by Sertaç Özercan, and it’s fully open source. You can find the project on GitHub here:
github.com/sozercan/kubectl-ai
How it works
It uses a large language model behind the scenes to interpret your prompt and generate valid YAML. You don't need to know the exact syntax; just explain your intent, and it fills in the details.
Install options
It’s super easy to install. You’ve got options:
- Homebrew (Mac):
brew install sozercan/kubectl-ai/kubectl-ai
- Krew plugin manager (cross-platform):
kubectl krew install ai
- Manual install: download binaries directly from GitHub releases
It also supports multiple LLM providers (OpenAI, Azure, and even local models), so you're not locked into one service.
Why this tool makes sense now
In the last couple of years, two things have gotten really popular in dev circles:
- Kubernetes
- Large language models (LLMs) like GPT
kubectl-ai brings them together in a way that actually makes sense.
Instead of trying to replace engineers or build an AI-driven platform no one asked for, it solves a real pain point: writing YAML.
Why now?
There’s a shift happening.
We're starting to see more CLI tools that treat LLMs as part of the workflow, not just for code generation, but for context-aware commands. kubectl-ai fits into that category.
It also:
- Works with multiple AI providers, not just ChatGPT
- Uses the Kubernetes OpenAPI schema, which means it knows how resources should actually look (including custom resources)
- Doesn't try to be a GUI or dashboard; it just stays in your terminal where you're already working
It's not hype, it's useful
What makes this tool interesting isn’t that it’s AI-powered.
It's that it's built for developers who already know Kubernetes but don't want to manually craft YAML for everything, especially when you're in the middle of troubleshooting or prototyping.
In other words: it's not solving imaginary problems; it's speeding up work you're already doing.
What you can do with it
At its core, kubectl-ai just wants you to stop hand-coding YAML. Whether you're new to Kubernetes or deep in DevOps, you'll probably find at least one use case where this saves time (and maybe sanity).
Let's look at what it can actually do, starting simple, then leveling up.
Basic stuff: simple prompts that just work
kubectl ai "create a deployment with 3 replicas of nginx"
Boom: you get a clean Deployment manifest with 3 replicas, the nginx image, and the usual scaffolding included.
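To make "the usual scaffolding" concrete, here's a sketch of the kind of manifest that prompt produces (the metadata names and labels are assumptions; the tool's exact output may differ):

```yaml
# Typical shape of a generated Deployment (names/labels assumed)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
```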
Want to edit it?
kubectl ai "change replicas to 5 in this YAML"
It'll take your existing manifest (you can pipe it in) and update only what you asked for.
No scrolling, no indentation battles.
Other examples:
kubectl ai "add a configmap volume to mount /config"
kubectl ai "expose this deployment on port 80"
Intermediate to advanced use
This is where things get more interesting.
Edit CRDs using plain language
Custom resources are usually a mess to write manually.
With kubectl-ai, you can just say:
kubectl ai "edit this cert-manager issuer to use DNS01 with Cloudflare"
And it knows how to shape the nested structure correctly (thanks to OpenAPI awareness).
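For context, the nested structure it has to get right looks something like this hand-written sketch of a cert-manager v1 Issuer using a Cloudflare DNS01 solver (the email and secret names are placeholder assumptions):

```yaml
# Sketch of a cert-manager Issuer with Cloudflare DNS01 (names assumed)
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: example-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: example-issuer-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
```

Getting that solver nesting wrong by hand is exactly the kind of mistake the OpenAPI awareness helps avoid.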
Use it inside your editor
It works nicely with:
- VS Code
- Vim
- Neovim
Just run it in a terminal split or integrate it into your keybindings/macros.
Automate with CI/CD
Imagine running a prompt-driven script in your pipeline:
kubectl ai "generate a staging namespace policy with read-only access"
You get repeatable, valid YAML without hardcoding templates.
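A prompt-driven step in a pipeline might look roughly like this (a hypothetical GitHub Actions fragment; the step name, file path, and secret name are illustrative assumptions, not from the kubectl-ai docs, and it assumes the plugin is already installed on the runner):

```yaml
# Hypothetical CI step; assumes kubectl-ai is installed on the runner
- name: Generate staging policy manifest
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
  run: |
    kubectl ai "generate a staging namespace policy with read-only access" \
      > manifests/staging-policy.yaml
```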
You don’t have to use all of these to get value.
Even if all you do is speed up one-liners in your terminal, it’s already worth having installed.
What’s new or easy to miss in 2025
If you tried kubectl-ai back when it first dropped and thought "neat, but basic," you might want to give it another shot; it's quietly gotten a lot more capable.
Here’s what’s been added or improved recently:
Support for newer models
The latest version works with OpenAI’s GPT-4o and other newer LLMs, which means:
- Faster responses
- Better YAML formatting
- More accurate interpretation of your intent
Plus, if you're using Azure's OpenAI or a local model via something like LocalAI, you're not missing out; it plays nice with those too.
Uses the Kubernetes OpenAPI schema
This one’s underrated.
Because it understands the actual Kubernetes OpenAPI spec, the tool can generate valid manifests even for custom resources like cert-manager, Linkerd, or ArgoCD.
Less guesswork. Fewer missing fields. More reliable outputs.
Early support for prompt workflows and customization
The community’s starting to build simple prompt chains and shared presets.
Want your deployments to always include liveness/readiness probes? You can build that into your prompt structure or fine-tune locally.
Still early but promising.
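As a concrete example, "always include probes" means nudging the tool to emit fields like these in every container spec (the paths and port here are placeholder assumptions):

```yaml
# Probe fields a preset might always include (paths/port assumed)
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
```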
Local mode support
Don’t want to call OpenAI?
You can use a local model (with enough RAM) for totally offline manifest generation.
Ideal for air-gapped environments or folks who don’t want to burn API credits.
YAML diff previews and auto-apply mode
If enabled, kubectl-ai can now:
- Show diffs between your existing manifest and the AI-generated change
- Let you auto-apply changes with a flag (with safety prompts)
This makes iteration faster, especially during prototyping or live debugging sessions.
So yeah, it's not just a basic wrapper anymore.
It’s starting to feel like a real part of the toolchain.

Why Kubernetes users should pay attention
Look, YAML isn't going away. But the way we deal with it? That's starting to change, and kubectl-ai is part of that shift.
Here’s why this tool is actually worth caring about, even if you’ve been hand-crafting manifests for years.
Less syntax, fewer mistakes
Kubernetes is powerful, but YAML is brittle. One wrong indent or missing field and you’re chasing a bug that isn’t really a bug.
kubectl-ai helps cut down on:
- Copy-paste errors
- Misconfigured resources
- Forgotten fields in CRDs or Helm charts
Fast prototyping during on-call or debugging
Need to try a quick fix at 2 AM?
Don’t want to fumble with docs?
kubectl ai "create a resource quota for staging namespace"
Done.
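That prompt would come back as something like this (the quota name and hard limits are assumptions you'd tune to your cluster):

```yaml
# Sketch of a generated ResourceQuota (name and limits assumed)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
```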
This saves time when you’re troubleshooting and don’t want to build things from scratch under pressure.
Works well with tools you already use
It's not trying to replace kubectl. It extends it, and works inside your terminal, editor, or CI setup. Low friction, high value.
Better onboarding for newer devs
For folks just getting into Kubernetes, writing manifests can be a steep curve.
Letting someone say “I want a deployment with autoscaling” and getting something usable helps reduce the intimidation factor without dumbing anything down.
Less hunting through docs and forums
We’ve all opened three tabs for:
- Kubernetes docs
- A blog post from 2019
- A Stack Overflow thread with no accepted answer
If you can skip that by writing a decent prompt, that’s already a win.
Bottom line: whether you’re a platform engineer, SRE, or just someone who’s tired of YAML, this tool gives you a solid shortcut without cutting corners where it counts.
What it changes in day-to-day devops work
This isn’t one of those tools that overhauls your stack.
It just slides in quietly and starts saving you time, especially in the little moments where YAML usually slows you down.
Here’s what the shift actually looks like:

It doesn't try to do your job, it helps you do it faster
You still review the YAML.
You still apply it intentionally.
But instead of spending 10–15 minutes assembling a manifest (and double-checking syntax), you spend 30 seconds describing what you want.
It's not automation. It's assistive tooling, and it feels good to use.
Limitations and considerations
As cool as kubectl-ai is, it's not magic. It's a tool, and like any tool, it works best when you understand its limits.
Here’s what you should keep in mind before using it in production (or blindly hitting enter):
Always review the output
Don’t just generate and apply.
Read what it gives you. Even good LLMs occasionally hallucinate fields or miss subtle details, especially with complex CRDs or Helm values.
Use it to get 90% there, but keep your brain in the loop.
Results depend on model + prompt
The quality of what you get is tied to:
- The LLM provider you’re using (e.g., GPT-4o vs. a smaller local model)
- How clearly you write your prompt
Good prompts get good output. Vague ones? Not so much.
Not meant to replace kubectl, just help it
You still need to know how Kubernetes works.
This doesn't mean you can skip learning what a Deployment or Ingress actually does.
But it helps offload the syntax part so you can focus on logic and structure.
Don’t let it become a crutch
It's tempting to throw every YAML problem at the AI, but remember that understanding the config is still important.
Use it as a starting point. Improve from there.
That’s how you level up, not level out.
How to get started
Getting up and running with kubectl-ai is straightforward. You don't need a huge setup or a new workflow, just a few commands and an API key.
Step 1: install the plugin
Pick whichever method works for your setup:
via Homebrew (macOS/Linux):
brew install sozercan/kubectl-ai/kubectl-ai
via Krew (cross-platform):
kubectl krew install ai
manual binary (for custom installs):
Download from github.com/sozercan/kubectl-ai/releases
Step 2: set your API key
Depending on your provider (OpenAI, Azure, or LocalAI), you’ll need to provide an API key. You can export it in your terminal:
export OPENAI_API_KEY=your-key-here
For local models, check the docs on setting up the endpoint and config.
Step 3: try your first prompt
Start with something simple:
kubectl ai "create a deployment with 3 replicas of nginx"
Or try editing a file:
kubectl ai "add a liveness probe to this container" < your-deployment.yaml
Optional: use it in your editor or CI pipeline
- Add it to your .bashrc, .zshrc, or editor terminal
- Use it in GitHub Actions or GitLab CI to auto-generate manifests on the fly
- Combine with kubectl diff or kubectl apply -f - for smooth workflows
Conclusion: you're not bad at Kubernetes, it's just too verbose
If you’ve ever felt frustrated writing Kubernetes YAML, you’re not alone.
It's not that you don't understand Kubernetes; it's that the way we interact with it hasn't really evolved to match how we actually work now.
kubectl-ai isn't some futuristic DevOps revolution.
It’s just a really practical tool that helps you move faster, make fewer mistakes, and spend less time jumping between docs and your editor.
In a way, it's not even about AI. It's about intent-based infrastructure: describing what you want, and letting software handle the boring parts.
This kind of tooling doesn’t replace knowledge. It amplifies it.
So yeah, YAML isn’t going away.
But maybe, just maybe, the way we write it will finally start to feel less like a chore.
Give it a shot. Write a prompt. See what happens.
And if it saves you even one context switch today?
That’s a win.
Helpful links
- kubectl-ai GitHub repo
- Kubernetes official docs
- Krew plugin manager
- LocalAI (if you’re going self-hosted)
- Sertaç Özercan’s demo