I’ve been running Nomad, Consul, and Vault (aka the full HashiStack) on an AWS EC2 instance for a while now. It worked.
It still works. But it’s time to move, and GCP is calling.
Instead of manually replicating infra or writing shell scripts, I figured it’s time to automate the whole thing.
One command. One tool. A new, reproducible cloud setup.
That’s when I had two options on the table: Ansible or Terraform.
Why Terraform?
I leaned towards Terraform mainly because:
- It’s from HashiCorp — the same folks behind Nomad, Vault, and Consul. So the ecosystem feels like home.
- I wanted to treat infrastructure as code, just like I treat my application code.
- And honestly? The idea of running terraform apply and seeing my entire infra spin up on GCP just felt right.
But What Even Is Terraform?
Terraform is an open-source tool that lets you define cloud and infrastructure resources using declarative configuration files, written in plain text (.tf files).
It supports all major cloud providers (AWS, GCP, Azure, etc.) and many more (Docker, Kubernetes, GitHub, Datadog, you name it).
With Terraform, you:
- Define what you want, not how to do it (there's a small snippet after this list).
- Use a single command to apply it.
- Get consistent infra across environments and teams.
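To make that concrete, here's a minimal sketch of what "declaring what you want" looks like for GCP. The project ID, region, and bucket name below are placeholders, not a real setup:
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "google" {
  project = "my-gcp-project"   # placeholder project ID
  region  = "us-central1"
}

resource "google_storage_bucket" "artifacts" {
  name     = "my-example-artifacts"   # hypothetical; bucket names must be globally unique
  location = "US"
}
You never say how to create the bucket. You just describe it, and Terraform works out the API calls.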
What Makes Terraform So Useful?
Reusability
Write once, reuse everywhere.
I can spin up the same stack in dev, staging, or prod with minor changes.
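In practice that mostly means variables. Here's a rough sketch; the variable names and machine types are my own choices for illustration, not my actual config:
variable "environment" {
  type    = string
  default = "dev"
}

variable "machine_type" {
  type    = string
  default = "e2-small"
}

resource "google_compute_instance" "nomad_server" {
  name         = "nomad-${var.environment}"
  machine_type = var.machine_type
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }
}
Then something like terraform apply -var environment=prod -var machine_type=e2-standard-4 gives you the beefier prod version of the same file.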
Idempotency
You can run terraform apply multiple times: if nothing has changed, nothing happens.
If something did change, it only updates what's needed.
Version Control
Infra as code means you can track changes in Git, roll back, open PRs, get reviews, just like any other codebase.
State Tracking
Terraform keeps a state file that knows what’s deployed.
This allows it to compare what exists in the cloud vs what’s in your code, and only change the diff.
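By default that state file sits locally, but for the GCP move I'll probably push it to a remote backend so it isn't tied to one laptop. A minimal sketch, assuming a GCS bucket already exists for it (bucket name is a placeholder):
terraform {
  backend "gcs" {
    bucket = "my-terraform-state"   # hypothetical bucket name
    prefix = "hashistack/prod"
  }
}
After adding this, terraform init offers to copy the existing local state into the bucket.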
Getting My Hands Dirty (with Docker First)
Before touching GCP, I decided to try Terraform locally using Docker — just to feel how the workflow goes.
Here’s what I did on my Linux Mint machine:
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install terraform
(On Linux Mint, lsb_release -cs returns the Mint codename; if the repo lookup fails, swap in the underlying Ubuntu codename, e.g. jammy.)
Create a sample project:
mkdir learn-terraform-docker-container
cd learn-terraform-docker-container
Then vim main.tf:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0.1"
    }
  }
}

provider "docker" {}

resource "docker_image" "nginx" {
  name         = "nginx"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.image_id
  name  = "tutorial"

  ports {
    internal = 80
    external = 8000
  }
}
Run it:
terraform init
terraform apply
Visit localhost:8000 — boom, Nginx is live.
And when you’re done:
terraform destroy
Clean, declarative, no bash hacks.
Next Up: GCP Migration
With Docker as my test case, I'm now working on:
- Creating VPCs and subnetworks (rough sketch after this list)
- Spinning up VM instances
- Bootstrapping Nomad/Consul/Vault
- Possibly using Terraform modules to keep things clean
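None of it is final yet, but the networking piece will probably look something like this. The names, region, CIDR range, and port list are placeholders I'm still working out:
resource "google_compute_network" "hashistack" {
  name                    = "hashistack-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "nomad" {
  name          = "nomad-subnet"
  ip_cidr_range = "10.10.0.0/24"
  region        = "us-central1"
  network       = google_compute_network.hashistack.id
}

resource "google_compute_firewall" "allow_internal" {
  name    = "hashistack-allow-internal"
  network = google_compute_network.hashistack.name

  allow {
    protocol = "tcp"
    ports    = ["4646-4648", "8300-8302", "8500", "8200"]   # Nomad, Consul, Vault default ports
  }

  source_ranges = ["10.10.0.0/24"]
}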
And the best part? Once it's ready, a single terraform apply can rebuild the entire setup.
Final Thoughts
Terraform feels like git for infra: declarative, trackable, and predictable.
If you’re managing cloud infra and want to avoid click-ops or bash scripts from hell, give it a spin.
My advice? Start with a small Docker example like I did, then move to cloud resources when you're comfortable.
Terraform's learning curve is mild — and it pays off quickly.
I’ve been actively working on a super-convenient tool called LiveAPI.
LiveAPI helps you get all your backend APIs documented in a few minutes.
With LiveAPI, you can quickly generate interactive API documentation that allows users to execute APIs directly from the browser.
If you’re tired of manually creating docs for your APIs, this tool might just make your life easier.