🚀 Terraform Mastery Series - Part 2: From Local to EC2 - Automate Your First Instance with Terraform
Terraform Mastery — Part 2, Article 2
Hey Guys 👋
Following up on Part 1 of my Terraform Mastery Series, where we explored the fundamentals and deployed our first S3 bucket, I’m back with Part 2 — and this one’s exciting! 🔥
In this part, we’ll move from concept to real-world application — provisioning your very first EC2 instance using Terraform, step-by-step. Whether you’re a DevOps beginner or just new to IaC, this guide is built to make you confident and deployment-ready.
🔨 What You’ll Learn in This Part:
✅ Structuring a Terraform project for EC2
✅ Configuring and securing EC2 with Key Pairs
✅ Managing AMIs, Instance Types & Regions
✅ Terraform Variables & Outputs blocks
✅ Full init → plan → apply → destroy cycle
✅ A clean, modular workflow you can reuse anytime
⚙️ Let’s Get to Work
Step 1: Set Up Your Project Directory
Start by organizing your Terraform project:
terraform-ec2/
├── ec2.tf
├── provider.tf
├── terraform.tf
├── variables.tf
├── outputs.tf
🔧 terraform.tf: Define Required Providers
This file ensures consistent provider versions across environments. Create a terraform.tf file with the following content:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.94.1"
    }
  }
}
This tells Terraform to use the AWS provider from HashiCorp and pins the version for stability and compatibility.
You can copy this directly from the Terraform AWS Provider Documentation
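If you prefer to allow newer patch releases while staying on the 5.x line, Terraform also accepts a pessimistic version constraint instead of an exact pin; a minimal sketch (the constraint value here is just an example):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.94" # any 5.x release from 5.94 onward, but never 6.0
    }
  }
}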
Step 2: Provider Configuration
Before we begin provisioning resources, we need to define the provider. Terraform supports many providers like AWS, Azure, GCP, and local systems. In our case, we are working with AWS.
Create a file named provider.tf and define the AWS provider block. It looks like this:
provider "aws" {
  region = "us-east-1"
}
Ensure the AWS CLI is installed and configured with your AWS access keys before running Terraform; we’ll generate the SSH key pair in the next step.
How to configure AWS access keys: Read Here
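If you haven’t set up credentials yet, the standard aws configure command walks you through it; the values below are placeholders, not real keys:
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-east-1
# Default output format [None]: json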
Step 3: Create and Configure SSH Key Pair
Terraform needs an SSH key pair to allow secure access to the EC2 instance. Run the following command in your terminal after configuring AWS:
ssh-keygen
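By default, ssh-keygen prompts you for a file name; since the Terraform code below expects terra-key and terra-key.pub in the project directory, one way to generate them directly (using the standard -t and -f flags) is:
ssh-keygen -t rsa -b 4096 -f terra-key
# creates terra-key (private key) and terra-key.pub (public key) in the current directory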
You can reference the public key in your Terraform script using the file() function.
Now create ec2.tf and add the key pair resource, referencing the public key we just generated:
resource "aws_key_pair" "my_key" {
key_name = "terra-key"
public_key = file("terra-key.pub") #if key is within the same dir
}
Step 4: Create Networking Components (VPC & Security Group)
An EC2 instance needs basic networking in place. We’ll set up two things: a reference to the default VPC and a security group that controls inbound and outbound traffic.
Inside ec2.tf, reference the default VPC:
resource "aws_default_vpc" "default" {
}
Let’s pause and understand what a resource is: every resource block in Terraform maps to an object you would otherwise create by hand in AWS.
Example of a Security Group block:
We have to provide a name, a description and, most importantly, the VPC ID.
Just like we fill values in the AWS Console UI, we define them here in the security group resource block.
resource "aws_security_group" "my_security_group" {
name = "automated_sg"
description = "This will add TF-generated SG"
vpc_id = aws_default_vpc.default.id # This is called interpolation, a way to inherit/extract a value from another Terraform block
# Inbound rules
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Use an array as it can contain multiple IPs
description = "SSH open"
ipv6_cidr_blocks = []
prefix_list_ids = []
security_groups = []
self = false
}
# For HTTP
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
description = "HTTP open"
ipv6_cidr_blocks = []
prefix_list_ids = []
security_groups = []
self = false
}
# For Flask app ( any app you wanna add )
ingress {
from_port = 8000
to_port = 8000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
description = "Flask open"
ipv6_cidr_blocks = []
prefix_list_ids = []
security_groups = []
self = false
}
# Outbound rules
egress {
from_port = 0
to_port = 0
protocol = "-1" # Semantically equivalent to allowing all outbound traffic
cidr_blocks = ["0.0.0.0/0"]
description = "All access open outbound"
ipv6_cidr_blocks = []
prefix_list_ids = []
security_groups = []
self = false
}
tags = {
Name = "automated tag"
}
}
Note: the full syntax for every resource is documented in the official Terraform Registry under each provider (for example: Security Group).
Step 5: Create EC2 Instance
Now let’s provision an EC2 instance using the key pair and security group, in the same file ec2.tf:
resource "aws_instance" "my_instance" {
key_name = aws_key_pair.my_key.key_name
vpc_security_group_ids = [aws_security_group.my_security_group] # use array because sg can be more than one
ami = "ami-0c02fb55956c7d316" # ubuntu image - get it from aws ec2 section
instance_type = "t2.micro"
root_ebs_block_device {
volume_size = 15 # storage which we normally give while creating ec2 manually
volume_type = "gp3" # strings written in double quotes while numbers without it
}
tags = [
Name = "Terraform-EC2"
]
}
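Hard-coding an AMI ID ties the configuration to one region. As an optional refinement, not required for this tutorial, you can let Terraform look up a current Ubuntu AMI with the aws_ami data source; the owner ID below is Canonical's AWS account and the name filter targets Ubuntu 22.04, so treat both as assumptions to adjust for your needs:
# Look up the most recent Ubuntu 22.04 AMI published by Canonical
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account (verify before using)

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Then, inside the aws_instance block:
# ami = data.aws_ami.ubuntu.id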
📄 The full ec2.tf file will look like this:
resource "aws_key_pair" "my_key" {
key_name = "terra-key"
public_key = file("terra-key.pub") #if key is within the same dir
}
#VPC
resource "aws_default_vpc" "default" {
}
# Security Group
resource "aws_security_group" "my_security_group" {
name = "automated_sg"
description = "This will add TF-generated SG"
vpc_id = aws_default_vpc.default.id # This is called interpolation, a way to inherit/extract a value from another Terraform block
# Inbound rules
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Use an array as it can contain multiple IPs
description = "SSH open"
ipv6_cidr_blocks = []
prefix_list_ids = []
security_groups = []
self = false
}
# For HTTP
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
description = "HTTP open"
ipv6_cidr_blocks = []
prefix_list_ids = []
security_groups = []
self = false
}
# For Flask app ( any app you wanna add )
ingress {
from_port = 8000
to_port = 8000
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
description = "Flask open"
ipv6_cidr_blocks = []
prefix_list_ids = []
security_groups = []
self = false
}
# Outbound rules
egress {
from_port = 0
to_port = 0
protocol = "-1" # Semantically equivalent to allowing all outbound traffic
cidr_blocks = ["0.0.0.0/0"]
description = "All access open outbound"
ipv6_cidr_blocks = []
prefix_list_ids = []
security_groups = []
self = false
}
tags = {
Name = "automated tag"
}
}
# EC2 Instance
resource "aws_instance" "my_instance" {
key_name = aws_key_pair.my_key.key_name
vpc_security_group_ids = [aws_security_group.my_security_group]
ami = var.ami_id
instance_type = var.instance_type
root_ebs_block_device {
volume_size = var.ec2_root_storage_size
volume_type = "gp3"
}
tags = [
Name = "Terraform-EC2"
]
}
Now, finally, create the EC2 instance.
Run the following Terraform commands:
terraform init      # Initialize the working directory and download the provider
terraform validate  # Check code syntax
terraform plan      # Review infrastructure changes
terraform apply     # Apply the configuration

# or skip the interactive confirmation with the -auto-approve flag
terraform apply -auto-approve
✅ Once applied, the EC2 instance will be created and visible on your AWS dashboard.
Step 6: Connect to EC2
After launching the EC2 instance, click “Connect” in the AWS console to get the SSH command, then run it in your terminal with the private key we generated earlier (terra-key) to access the server securely.
# if the private key is in the same directory, the ssh command will look like this:
ssh -i terra-key ubuntu@ec2-18-117-165-49.us-east-2.compute.amazonaws.com
Congratulations! You have now automated EC2 provisioning from your local machine using Terraform.
🚨 Future Issue: What if I have to change the EC2 configuration (instance type, storage size, AMI ID, etc.)?
Let’s say we have written the ec2.tf file and everything is working perfectly. But what happens if we’re asked for changes like the ones above?
We’d be stuck editing every single value manually. Not only is that time-consuming, it’s also error-prone and hard to manage in big projects.
🔧 Solution: Use variables instead of hardcoding values
Instead of hardcoding values like this:
resource "aws_instance" "ec2_example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
key_name = "terra-key"
}
We replace them with variables, like this:
resource "aws_instance" "ec2_example" {
ami = var.ami_id
instance_type = var.instance_type
key_name = var.key_name
}
# ami_id define in varaible.tf file and var. is way to interpolate its value
📁 Create variables.tf file :
In this file, we define all the variables we’ll use. Example:
# We can name a variable anything we like; there is no fixed naming convention
variable "ec2_instance_type" {
  default = "t2.micro"
  type    = string
}

variable "ec2_root_storage_size" {
  default = 15
  type    = number
}

variable "ec2_ami_id" {
  default = "ami-04f167a56786e4b09"
  type    = string
}
Now whenever we want changes, we just edit this one file instead of touching the main code. This keeps the code clean and makes automation easier.
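You can also override these values at apply time without editing any file, either on the command line or through a terraform.tfvars file that Terraform loads automatically; the values below are just examples:
# Override a single variable for one run
terraform apply -var="ec2_instance_type=t3.micro"

# Or keep overrides in terraform.tfvars:
# ec2_instance_type     = "t3.micro"
# ec2_root_storage_size = 20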
📌 Result
✅ Clean code
✅ Easy to scale
✅ Fewer chances of mistakes
✅ Real-world best practice
📡 Realization: After Launching the EC2, We Still Need Info Like the IP Address
When we launched our EC2 instance successfully using Terraform, we celebrated! 🎉 But soon after, we realized…
“Wait… how do I even connect to this EC2? What’s its IP address?”
The only way to check was by going back to the AWS Console, searching for the instance, and copying the public IP manually.
That’s not efficient — especially in a DevOps workflow where we aim for automation and full CLI control.
💡 Solution: Use Terraform output Block to Fetch Key Info
Terraform provides a clean way to extract and display useful data after apply using an output block.
Here’s how we do it 👇
📁 Add to outputs.tf:
output "ec2_public_ip" {
value = aws_instance.my_instance.public_ip
}
✅ Now, when we run terraform apply, Terraform will automatically show the public IP of our EC2 instance in the terminal!
✅ Bonus Tip: You Can Output Other Useful Details Too
Example: extract the public DNS, private IP, security groups, availability zone, and more:
output "ec2_public_ip" {
value = aws_instance.my_instance.public_ip
}
output "ec2_public_dns" {
value = aws_instance.my_instance.public_dns
}
output "ec2_private_ip" {
value = aws_instance.my_instance.private_ip
}
output "ec2_security_group" {
value = aws_instance.my_instance.security_groups
}
output "ec2_availability_zone" {
value = aws_instance.my_instance.availability_zone
}
This turns your Terraform output into a quick dashboard for critical data, without needing to open the AWS Console.
After a successful terraform apply, the outputs are printed at the end of the run. With placeholder values, it will look something like this:
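Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

ec2_availability_zone = "us-east-1a"
ec2_private_ip = "172.31.x.x"
ec2_public_dns = "ec2-x-x-x-x.compute-1.amazonaws.com"
ec2_public_ip = "x.x.x.x"
You can re-print these values at any time with terraform output, or grab a single one with terraform output -raw ec2_public_ip, which pairs nicely with SSH:
ssh -i terra-key ubuntu@$(terraform output -raw ec2_public_ip)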
🧹 Cleaning Up with terraform destroy
Once you’re done testing or showcasing your EC2 setup, it’s important to tear it down to avoid unnecessary AWS charges. Terraform makes cleanup just as easy as deployment:
terraform destroy -auto-approve
This will safely remove all resources defined in your configuration — key pairs, security groups, EC2 instance, and more.
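If you want to preview exactly what will be removed before committing to it, Terraform can produce a destroy-mode plan first:
terraform plan -destroy   # shows every resource that would be removed, without deleting anything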
Always remember: provision responsibly, destroy confidently. ✅
✨ Wrapping It Up
Congratulations again 🎉
You’ve just taken a huge leap in your Terraform journey by launching your very first EC2 instance — end-to-end — all from your local machine. From setting up the provider to securely connecting via SSH, and even making your code cleaner with variables and outputs, you’ve now mastered the core building blocks of infrastructure as code on AWS.
This same setup, with modular code, reusable variables, and an automation-first mindset, mirrors how real-world DevOps teams deploy cloud infrastructure at scale.
But we’re just getting started.
🔮 What’s Coming Next?
“Provisioning is power. Automation is freedom.”
In the next part of this series, we’ll go even deeper.
Till then, keep building, keep automating. 💻⚙️