Designing a GitOps-native promotion pipeline from image tag to deployment — traceable, controllable, and rollback-friendly
📘 This is Part 4 of my GitOps Architecture series.
This series was originally written and published in a linear progression (Part 1 to 6).
On Dev.to, I’m republishing it starting from the final system design (Part 6), then tracing backward to how it was built — from system to source.
👉 Part 1: Why Argo CD Wasn’t Enough
👉 Part 2: From Kro RGD to Full GitOps: How I Built a Clean Deployment Flow with Argo CD
👉 Part 6: How I Scaled My GitOps Promotion Flow into a Maintainable Architecture
In the previous articles, I explained why Argo CD wasn’t enough for my needs, how Kro helped structure my deployment logic, and how I designed the Git repository layout.
Now it’s time to focus on the core of any GitOps workflow — promotion.
How do you go from a new image tag to a Deployment update in a way that’s traceable, controllable, and rollbackable?
🧩 What Do I Mean by “Promotion”?
When a new image tag is pushed, I want that version to be validated, written into Git, and deployed to Kubernetes.
Not triggered by CI scripts. Not patched via webhook.
This is a Git-first, GitOps-native promotion workflow — where everything flows through Git.
1. Why Is Promotion the Most Critical Part of GitOps?
This continues the story from the last three posts:
Argo CD limitations → Kro introduction → GitOps structure emerging
The `instance.yaml` file we discussed earlier doesn’t yet handle promotion. But promotion is one of the most frequent — and risky — operations in any GitOps flow.
🔁 It happens all the time. And if it’s not designed well, it becomes the first thing to go wrong.
2. What Happens Without a Proper Promotion Process?
At first, I manually edited the tag in `instance.yaml` and let Argo CD sync it. It worked — but:
- Easy to mistype or forget to commit
- No version record — rollback relies on memory
- No condition control — services might get silently upgraded
I also considered scripting this via CI, but it would spread logic across pipelines. Hard to trace. Harder to revert.
Why Not Use Argo CD Image Updater?
Yes, I looked into Argo CD’s image updater plugin.
It auto-detects new image tags and patches Helm values or manifests.
That’s convenient, but it didn’t meet my GitOps criteria:
✅ Detects new tags
❌ Git write-back is opt-in; the default patches the Application directly → no history in my repo
❌ Doesn’t support conditional promotion (e.g. “only after tests pass”)
❌ Hard to coordinate promotion across services at scale
It’s great for background automation.
But I wanted explicit control, Git history, and safe rollback.
🎯 What I Really Wanted Was:
A clean, observable promotion pipeline:
Image tag → Git commit → Deploy
Everything versioned. Everything traceable. Everything rollbackable.
3. Why I Chose Kargo
I didn’t want promotion logic to live inside CI scripts or webhook handlers — not because it can’t work, but because I preferred a Git-native flow where promotion history lives entirely in Git.
I wanted Git to be the source of truth and the engine for promotion.
Kargo gave me exactly that.
✅ Starts with an image tag → auto-creates a Freight
✅ Defines promotion logic with Stage
✅ Updates Git via `yaml-update` → `git-commit` → `git-push`
✅ Integrates with Argo CD + Kro
✅ Supports SemVer (I use tags like `1.2.3`)
And best of all — Kargo updates Git, which then triggers Argo CD, and Kro renders `instance.yaml` into Kubernetes manifests.
4. Thinking in State Machines
You don’t need to think of promotion as a state machine — but if you do, it’s a surprisingly elegant way to model the logic behind Kargo.
- A new image tag is pushed → this acts as an event
- Kargo creates a Freight → a signal representing that new version
- A Stage receives that Freight → evaluates the promotion logic (current state)
- If conditions are satisfied → a transition is triggered
- That transition runs a PromotionTask → updates YAML → commits and pushes to Git
And just like a finite state machine, this transition is deterministic — based on input (Freight), current state (Stage), and declared logic (Task).
📌 This replaces scattered CI scripts and conditional logic with a clean, declarative state transition — fully visible in Git history.
🧱 What Are Kargo’s Core Components?
If you’re new to Kargo, here’s a quick breakdown of its three building blocks:
- **Warehouse**: watches an image repo and emits a Freight when a new tag appears.
- **Freight**: represents a specific image version, with metadata.
- **Stage**: evaluates Freight and executes the promotion logic.
Together, these power a declarative, Git-driven promotion engine.
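To make these concrete, here is roughly the shape of a Freight produced by a Warehouse. This is illustrative only: Kargo generates Freight itself, and the names, namespace, and repo URL below are placeholders.

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Freight
metadata:
  # Kargo derives the name from the Freight's content
  name: f4a1c9e...              # placeholder
  namespace: my-project         # hypothetical Kargo project namespace
alias: lucky-otter              # human-friendly alias generated by Kargo
origin:
  kind: Warehouse
  name: frontend                # the Warehouse that produced this Freight
images:
  - repoURL: docker.io/myorg/frontend   # assumed image repo
    tag: 1.2.2
    digest: sha256:...          # placeholder
```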
5. Promotion Pipeline (Mermaid Diagram)
Here’s the visual flow I used to design this:
```mermaid
sequenceDiagram
    participant Hub as DockerHub
    participant WH as Kargo Warehouse
    participant FR as Freight
    participant ST as Kargo Stage
    participant Git
    participant CD as Argo CD
    participant Kro
    participant K8s as Cluster

    Hub->>WH: New image tag pushed
    WH->>FR: Create Freight
    FR->>ST: Trigger promotion
    ST->>Git: Update instance.yaml → Commit → Push
    Git->>CD: Git change detected
    CD->>Kro: Pass updated instance.yaml
    Kro->>K8s: Render manifests and apply
```
6. How I Configure the Warehouse
Kargo supports several image selection strategies — `SemVer`, `Lexical`, and `NewestBuild`.
I use `SemVer`, and each service has its own Warehouse:
- Polling Interval: Every 5 minutes
- One Warehouse per service → clear separation, no cross-talk
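A minimal Warehouse along those lines might look like this (a sketch; the namespace and image repo URL are my own placeholders):

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: frontend                          # one Warehouse per service
  namespace: my-project                   # hypothetical Kargo project namespace
spec:
  interval: 5m                            # poll the registry every 5 minutes
  subscriptions:
    - image:
        repoURL: docker.io/myorg/frontend # assumed image repo
        imageSelectionStrategy: SemVer
        semverConstraint: ^1.0.0          # optional: constrain acceptable versions
```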
7. How I Define Stage Conditions
Here’s the promotion pipeline I run in each Stage:
`git-clone` → `yaml-parse` → `yaml-update` → `git-commit` → `git-push`
All changes target a single file: `instance.yaml`.
This makes the logic clear, trackable, and easy to debug.
You can go further with:
- ✅ Promote only if tests pass
- ✅ Require manual approval
- ✅ Ensure health check before promotion
The Stage is your promotion control tower.
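Here's a sketch of what such a Stage can look like. The names, paths, and Git repo URL are placeholders; the step sequence mirrors the pipeline above:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: develop-frontend
  namespace: my-project                 # hypothetical Kargo project namespace
spec:
  requestedFreight:
    - origin:
        kind: Warehouse
        name: frontend                  # consume Freight from this service's Warehouse
      sources:
        direct: true                    # take Freight directly from the Warehouse
  promotionTemplate:
    spec:
      steps:
        - uses: git-clone
          config:
            repoURL: https://github.com/myorg/gitops-config.git   # assumed config repo
            checkout:
              - branch: main
                path: ./repo
        # yaml-parse / yaml-update run here (see the next section)
        - uses: git-commit
          config:
            path: ./repo
            message: Promote frontend image   # simplified; see section 8
        - uses: git-push
          config:
            path: ./repo
```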
8. How `yaml-update` Handles Tag Precision
This step made everything cleaner:
- `yaml-parse`: read the original tag → store it as `oldTag`
- `yaml-update`: set the latest tag
- `git-commit`: generate messages like `Promote image from 1.2.1 to 1.2.2`
📌 Example:

```yaml
- uses: yaml-update
  config:
    path: ./repo/develop/frontend/instance.yaml
    updates:
      - key: spec.values.deployment.tag
        value: ${{ quote(imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag) }}
```
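For context, here's a hedged sketch of how the surrounding steps could produce that `Promote image from X to Y` message, using `yaml-parse` to capture the old tag as a step output. The `parse` alias and the key path are assumptions based on my own layout:

```yaml
- uses: yaml-parse
  as: parse                             # alias so later steps can reference outputs
  config:
    path: ./repo/develop/frontend/instance.yaml
    outputs:
      - name: oldTag
        fromExpression: spec.values.deployment.tag
# ... yaml-update runs here (see above) ...
- uses: git-commit
  config:
    path: ./repo
    message: ${{ 'Promote image from ' + outputs.parse.oldTag + ' to ' + imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag }}
```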
9. Why This Promotion Pipeline Is Reliable
This setup works because:
✅ Every change is committed → observable, reversible
✅ Only one file is updated → minimal blast radius
✅ No YAML desync → rollback = `git revert`
✅ Service logic is isolated → safe parallel promotion
📌 Rollback Plan
While I haven’t fully implemented rollback yet, the plan is already in place and aligns with the rest of the GitOps flow:
- Use `previousTag()` to fetch the last known version
- Write that tag into `instance.yaml`
- Commit → Push → Argo CD syncs

Eventually, I’ll create a **rollback-stage** + **rollback-task** to make rollback a native GitOps operation — not a manual fix.
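For what it's worth, a rollback PromotionTask could be a near-mirror of the promotion steps: the same clone/update/commit/push chain, just writing the previous tag instead. This is purely a sketch of the plan, and `previousTag` here is a placeholder variable for however the old version gets resolved:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: PromotionTask
metadata:
  name: rollback-task
  namespace: my-project                 # hypothetical Kargo project namespace
spec:
  vars:
    - name: previousTag                 # placeholder: the last known-good tag
  steps:
    - uses: git-clone
      config:
        repoURL: https://github.com/myorg/gitops-config.git   # assumed config repo
        checkout:
          - branch: main
            path: ./repo
    - uses: yaml-update
      config:
        path: ./repo/develop/frontend/instance.yaml
        updates:
          - key: spec.values.deployment.tag
            value: ${{ quote(vars.previousTag) }}
    - uses: git-commit
      config:
        path: ./repo
        message: ${{ 'Rollback image to ' + vars.previousTag }}
    - uses: git-push
      config:
        path: ./repo
```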
10. What’s Next: Syncing Only the Right App
After promotion, I don’t want to sync the entire namespace.
I just want to sync the one app that changed.
Kargo supports this with the `argocd-update` step and ApplicationSet annotations.
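As a preview, the step itself is small. Something like this, appended after `git-push` (the Application name is a placeholder):

```yaml
- uses: argocd-update
  config:
    apps:
      - name: frontend-develop          # the one Argo CD Application to refresh
```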
In the next post, I’ll share:
- How ApplicationSet works with Kargo Stage
- How to scope syncs using annotations
- How to stop “sync one = sync all”
- YAML examples for Warehouse, Stage, Freight
💬 Closing Thoughts: Promotion Can Be Clean
This article isn’t about pasting YAML.
It’s about building a promotion pipeline that is:
✅ Condition-driven
✅ Git-observable
✅ Fully rollbackable
Of course, promotion via CI or webhooks is totally valid — but I found Kargo’s declarative and Git-driven model a better fit for my goals.
If you’re designing a GitOps system and wondering where promotion logic belongs, I hope this gave you a clear, maintainable path forward.
Your turn — how are you handling promotions in GitOps?
Drop a comment, share your approach, or let’s compare notes —
we might be solving the same problem 🚀