Introduction
In this blog, I'll walk you through creating a simple yet robust web application deployment pipeline using Nginx, Docker, Amazon ECR, and AWS EC2. By the end of this tutorial, we'll have a fully automated deployment system that can serve as a foundation for more complex applications.
GitHub repository for Building and Deploying a Nginx Web Application with Docker and AWS EC2
What We're Building
This project demonstrates a complete deployment workflow featuring:
- A lightweight Nginx web server serving static content
- Containerized application using Docker
- Cloud-based image storage with Amazon ECR
- Automated deployment on AWS EC2
- CI/CD pipeline for seamless updates
Prerequisites
Before diving in, ensure you have:
- AWS CLI configured with appropriate permissions
- Docker installed on the local machine
- Basic understanding of Docker, AWS services, and command line operations
- An AWS account with ECR and EC2 access
Project Architecture Overview
The application follows a modern containerized deployment pattern:
This project structure is organized for clarity and maintainability:
nginx-app-project/
├── .github/workflows/
│   └── deploy.yml              # CI/CD pipeline
├── nginx/
│   └── nginx.conf              # Web server configuration
├── infra/
│   └── infrastructure_stack.py # AWS infrastructure setup
├── test/unit/
│   └── test_infra_stack.py     # Infrastructure tests
├── src/                        # Static web content
├── Dockerfile                  # Container definition
└── README.md
Step 1: Setting Up The Local Development Environment
Start by creating the project directory and basic file structure:
> mkdir nginx-app-deployment
> cd nginx-app-deployment
> mkdir -p nginx src infra test/unit .github/workflows
Create the main HTML content in the src/ directory. This is where the static website files will live. For a simple example:
<!-- src/index.html -->
<!DOCTYPE html>
<html>
  <head>
    <title>My Nginx App</title>
  </head>
  <body>
    <h1>Welcome to My Containerized Nginx Application</h1>
    <p>Successfully deployed with Docker and AWS!</p>
  </body>
</html>
Step 2: Configuring Nginx
Create a custom Nginx configuration file to optimize the web server:
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    server {
        listen 80;
        server_name localhost;

        location / {
            root /usr/share/nginx/html;
            index index.html;
            try_files $uri $uri/ /index.html;
        }

        # Health check endpoint
        location /health {
            access_log off;
            return 200 "healthy\n";
            add_header Content-Type text/plain;
        }
    }
}
This configuration provides a basic web server setup with a health check endpoint that's useful for monitoring.
Step 3: Creating the Dockerfile
The Dockerfile defines how the application will be containerized:
FROM nginx:alpine
# Copy custom nginx configuration
COPY nginx/nginx.conf /etc/nginx/nginx.conf
# Copy static content
COPY src/ /usr/share/nginx/html/
# Expose port 80
EXPOSE 80
# Nginx runs in foreground by default in the base image
CMD ["nginx", "-g", "daemon off;"]
This lightweight Alpine-based image keeps the container size minimal while providing all necessary functionality.
Step 4: Building and Testing Locally
Before deploying to the cloud, test the application locally from your terminal:
# Build the Docker image
> docker build -t nginx-app-local .
# Run the container
> docker run -d -p 8080:80 --name nginx-test nginx-app-local
# Test the application
> curl http://localhost:8080
> curl http://localhost:8080/health
Visit http://localhost:8080 in the browser to verify everything works correctly.
Step 5: Setting Up Amazon ECR
Amazon Elastic Container Registry will store the Docker images:
# Create ECR repository
> aws ecr create-repository --repository-name nginx-app --region the-region
# Get login token and authenticate Docker
> aws ecr get-login-password --region the-region | docker login --username AWS --password-stdin the-account-id.dkr.ecr.the-region.amazonaws.com
Step 6: Building and Pushing to ECR
Tag and push the image to ECR from your local terminal:
# Tag
> docker tag nginx-app-local:latest the-account-id.dkr.ecr.the-region.amazonaws.com/nginx-app:latest
# Push to ECR
> docker push the-account-id.dkr.ecr.the-region.amazonaws.com/nginx-app:latest
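The tag and push commands above follow a fixed URI pattern. As an illustration (the account ID and region below are placeholders, matching the placeholder commands above), the registry reference can be assembled like this:

```python
def ecr_image_uri(account_id: str, region: str, repository: str, tag: str = "latest") -> str:
    """Build the fully qualified ECR image reference used by docker tag and push."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

# Placeholder account ID and region for illustration
print(ecr_image_uri("123456789012", "us-east-1", "nginx-app"))
# 123456789012.dkr.ecr.us-east-1.amazonaws.com/nginx-app:latest
```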
Step 7: Infrastructure Setup with Python CDK
Create the infrastructure script. It deploys the AWS resources the application needs, including an EC2 instance, a security group, and an IAM role, and it defines the user data script that runs when the EC2 instance first boots.
# infra/infrastructure_stack.py
from aws_cdk import (
    Stack,
    aws_ec2 as ec2,
    aws_iam as iam,
    Tags,
    CfnOutput
)
from constructs import Construct

class NginxCicdStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Use the default VPC
        vpc = ec2.Vpc.from_lookup(self, "DefaultVpc", is_default=True)

        # Define the security group
        security_group = ec2.SecurityGroup(self, "SecurityGroup",
            vpc=vpc,
            description="Allow SSH and HTTP access",
            allow_all_outbound=True
        )
        security_group.add_ingress_rule(ec2.Peer.ipv4("0.0.0.0/0"), ec2.Port.tcp(22), "Allow SSH access")
        security_group.add_ingress_rule(ec2.Peer.ipv4("0.0.0.0/0"), ec2.Port.tcp(80), "Allow HTTP access")

        # Create a role for the EC2 instance with ECR access
        instance_role = iam.Role(self, "InstanceRole",
            assumed_by=iam.ServicePrincipal("ec2.amazonaws.com")
        )

        # Add ECR policy to the role
        instance_role.add_managed_policy(
            iam.ManagedPolicy.from_aws_managed_policy_name("AmazonEC2ContainerRegistryFullAccess")
        )

        # Define the EC2 instance with the custom role
        ec2_instance = ec2.Instance(self, "Instance",
            instance_type=ec2.InstanceType("t2.micro"),
            machine_image=ec2.MachineImage.latest_amazon_linux2(),
            vpc=vpc,
            security_group=security_group,
            key_name="rsakey",
            role=instance_role,  # Assign the role with ECR permissions
            user_data=ec2.UserData.custom(self._get_user_data())
        )

        # Add a Name tag to the EC2 instance
        Tags.of(ec2_instance).add("Name", "NginxInstance")

        # Output the instance ID and public IP
        CfnOutput(self, "InstanceId", value=ec2_instance.instance_id)
        CfnOutput(self, "InstancePublicIp", value=ec2_instance.instance_public_ip)

    def _get_user_data(self):
        return """#!/bin/bash
# Update system packages
yum update -y

# Install Docker
amazon-linux-extras install docker -y
systemctl start docker
systemctl enable docker
usermod -a -G docker ec2-user

# Install AWS CLI v2 if needed
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip -q awscliv2.zip
./aws/install --update
rm -rf aws awscliv2.zip

# Create a status file to signal instance is ready
touch /tmp/instance_ready

# Log completion
echo "Instance setup complete"
"""
Step 8: Deploying Infrastructure
Deploy the infrastructure using CDK:
# Install CDK if not already installed
npm install -g aws-cdk
# Bootstrap CDK (first time only)
cdk bootstrap
# Deploy the stack
cdk deploy
Step 9: Automated Deployment with GitHub Actions
Why Create a CI/CD Pipeline?
Before diving into the implementation, it's important to understand why we're creating a Continuous Integration/Continuous Deployment (CI/CD) pipeline and how it transforms our development workflow.
The Problem Without CI/CD:
Without automation, deploying updates to the application involves numerous manual steps, each prone to human error:
- Manually building Docker images on your local machine
- Remembering to tag images with the correct version
- Manually pushing images to ECR
- SSH-ing into EC2 instances to pull new images
- Stopping and starting containers manually
- Risk of deploying different versions across environments
- No rollback strategy if something goes wrong
- Time-consuming process that discourages frequent deployments
What CI/CD Solves:
A well-designed CI/CD pipeline addresses these challenges by automating the entire deployment process:
- Consistency: Every deployment follows the exact same process, eliminating "it works on my machine" problems
- Speed: Automated deployments happen in minutes rather than hours
- Reliability: Reduces human error through automation
- Traceability: Every deployment is tied to a specific code commit
- Rollback capability: Easy to revert to previous versions if issues arise
- Testing integration: Automatically runs tests before deployment
- Multi-environment support: Can deploy to development, staging, and production with the same process
Our Pipeline Strategy:
The GitHub Actions pipeline implements a deployment workflow that triggers automatically when code changes are pushed to the main branch. Here's what happens behind the scenes:
- Trigger: Developer pushes code to the main branch
- Build: Pipeline automatically builds a new Docker image
- Test: (Can be extended to run automated tests)
- Tag: Image is tagged with the Git commit SHA for traceability
- Push: Image is pushed to Amazon ECR
- Deploy: EC2 instance is updated with the new image
- Verify: Health checks ensure the deployment was successful
This approach ensures that every code change goes through the same rigorous, automated process, making deployments predictable and reliable.
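The "Tag" step above is what makes deployments traceable: each image is keyed to exactly one commit. A small sketch of that rule (a hypothetical helper, not part of the actual pipeline, which simply uses `${{ github.sha }}` directly):

```python
import re

def image_tag_from_commit(sha: str) -> str:
    """Validate that the tag is a full 40-character Git commit SHA,
    so every image in ECR maps back to exactly one commit."""
    if not re.fullmatch(r"[0-9a-f]{40}", sha):
        raise ValueError(f"expected a full lowercase commit SHA, got {sha!r}")
    return sha

print(image_tag_from_commit("0123456789abcdef0123456789abcdef01234567"))
# prints the SHA unchanged; a tag like "latest" would raise ValueError
```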
Create a CI/CD pipeline:
First iteration
In this first iteration of the deploy.yml file, the workflow builds, tags, and pushes the image to Amazon ECR, and leaves a placeholder step for deploying to the EC2 instance.
# .github/workflows/deploy.yml
name: Deploy Nginx App

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: nginx-app

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Deploy to EC2
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Add deployment script here
          echo "Deployment completed with image: $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
Final Iteration:
In the final iteration of the workflow, here is what the entire CI/CD pipeline looks like:
name: Deploy Nginx App

on:
  push:
    branches:
      - main

jobs:
  build-docker-setup-infra:
    runs-on: ubuntu-latest
    outputs:
      instance_ip: ${{ steps.wait-for-instance.outputs.instance_ip }}
    env:
      REPOSITORY_URI: ${{ secrets.ECR_REPOSITORY_URI }}
      CDK_DEFAULT_ACCOUNT: ${{ secrets.AWS_ACCOUNT_ID }}
      CDK_DEFAULT_REGION: ap-southeast-2
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-2

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, Tag, and Push Docker Image
        run: |
          set -x
          IMAGE_TAG=latest
          docker build -t $REPOSITORY_URI:$IMAGE_TAG .
          docker push $REPOSITORY_URI:$IMAGE_TAG

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.8'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r infra/requirements.txt

      - name: Install AWS CDK
        run: npm install -g aws-cdk

      - name: Deploy CDK Stack
        working-directory: ./infra
        run: cdk deploy --require-approval never

      - name: Wait for EC2 Instance to be Ready
        id: wait-for-instance
        run: |
          # Get the instance ID from CDK outputs if possible
          STACK_OUTPUTS=$(aws cloudformation describe-stacks --stack-name nginx-cicd-stack --query "Stacks[0].Outputs" --output json)
          INSTANCE_ID=$(echo $STACK_OUTPUTS | jq -r '.[] | select(.OutputKey=="InstanceId") | .OutputValue')
          INSTANCE_IP=$(echo $STACK_OUTPUTS | jq -r '.[] | select(.OutputKey=="InstancePublicIp") | .OutputValue')

          # Fall back to searching by tag if outputs aren't available
          if [ -z "$INSTANCE_ID" ] || [ "$INSTANCE_ID" == "null" ]; then
            echo "Looking up instance by tag..."
            INSTANCE_ID=$(aws ec2 describe-instances --filters "Name=tag:Name,Values=NginxInstance" "Name=instance-state-name,Values=pending,running" --query "Reservations[0].Instances[0].InstanceId" --output text)
            if [ -z "$INSTANCE_ID" ] || [ "$INSTANCE_ID" == "None" ]; then
              echo "Error: No running EC2 instance with tag 'NginxInstance' found."
              exit 1
            fi
            INSTANCE_IP=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query "Reservations[0].Instances[0].PublicIpAddress" --output text)
          fi
          echo "Found instance $INSTANCE_ID with IP $INSTANCE_IP"

          # Wait for the instance to be running
          echo "Waiting for instance to be in running state..."
          aws ec2 wait instance-running --instance-ids $INSTANCE_ID

          # Wait for status checks to pass
          echo "Waiting for instance status checks to pass..."
          aws ec2 wait instance-status-ok --instance-ids $INSTANCE_ID

          # Extra verification of SSH availability
          echo "Verifying SSH connectivity..."
          counter=0
          max_attempts=10
          while [ $counter -lt $max_attempts ]; do
            if nc -z -w5 $INSTANCE_IP 22; then
              echo "SSH port is open!"
              break
            fi
            echo "Waiting for SSH port to open... (attempt $((counter+1))/$max_attempts)"
            sleep 10
            counter=$((counter+1))
          done
          if [ $counter -eq $max_attempts ]; then
            echo "Warning: Could not verify SSH connectivity after $max_attempts attempts"
          fi

          echo "Instance ID: $INSTANCE_ID"
          echo "Instance IP: $INSTANCE_IP"
          echo "instance_ip=$INSTANCE_IP" >> $GITHUB_OUTPUT

  deploy:
    needs: build-docker-setup-infra
    runs-on: ubuntu-latest
    env:
      REPOSITORY_URI: ${{ secrets.ECR_REPOSITORY_URI }}
    steps:
      - name: Debug Output IP
        run: 'echo "Using IP address: ${{ needs.build-docker-setup-infra.outputs.instance_ip }}"'

      - name: Install SSH Client
        run: sudo apt-get install -y openssh-client

      - name: Create .ssh Directory
        run: mkdir -p ~/.ssh

      - name: Add SSH Key
        run: |
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keygen -lf ~/.ssh/id_rsa || echo "Key verification failed"

      - name: SSH into EC2 and Deploy Docker Image
        env:
          INSTANCE_IP: ${{ needs.build-docker-setup-infra.outputs.instance_ip }}
          REPO_URI: ${{ secrets.ECR_REPOSITORY_URI }}
        run: |
          ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o ConnectTimeout=30 ec2-user@${INSTANCE_IP} "
            # Check what's using port 80
            echo 'Checking what process is using port 80...'
            sudo lsof -i :80 || echo 'No process found by lsof'

            # Stop Nginx if it's running on the host
            sudo systemctl stop nginx || echo 'Nginx not running or not installed'

            # Kill any process using port 80
            sudo fuser -k 80/tcp || echo 'No process killed'

            # Stop any running Docker containers using port 80
            sudo docker ps -q --filter publish=80 | xargs -r sudo docker stop
            sudo docker ps -q --filter publish=80 | xargs -r sudo docker rm

            # Start Docker and proceed with deployment
            sudo systemctl start docker &&
            sudo systemctl enable docker &&
            aws ecr get-login-password --region ap-southeast-2 | sudo docker login --username AWS --password-stdin ${REPO_URI} &&
            sudo docker pull ${REPO_URI}:latest &&
            sudo docker run -d -p 80:80 ${REPO_URI}:latest"
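The jq pipelines in the wait-for-instance step extract InstanceId and InstancePublicIp from the CloudFormation outputs array. The same extraction can be sketched in Python (the payload below is illustrative, shaped like the output of `aws cloudformation describe-stacks --query "Stacks[0].Outputs" --output json`):

```python
import json

def parse_stack_outputs(outputs_json: str) -> dict:
    """Map each CloudFormation OutputKey to its OutputValue."""
    return {o["OutputKey"]: o["OutputValue"] for o in json.loads(outputs_json)}

# Illustrative sample payload with made-up instance details
sample = json.dumps([
    {"OutputKey": "InstanceId", "OutputValue": "i-0abc123def4567890"},
    {"OutputKey": "InstancePublicIp", "OutputValue": "203.0.113.10"},
])
outputs = parse_stack_outputs(sample)
print(outputs["InstanceId"], outputs["InstancePublicIp"])
# i-0abc123def4567890 203.0.113.10
```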
Step 10: SSH Deployment and Container Management
Once the EC2 instance is running, connect and deploy the container:
# SSH into the EC2 instance
ssh -i the-key.pem ec2-user@the-ec2-public-ip
# Login to ECR
aws ecr get-login-password --region the-region | docker login --username AWS --password-stdin the-account-id.dkr.ecr.the-region.amazonaws.com
# Pull and run the image
docker pull the-account-id.dkr.ecr.the-region.amazonaws.com/nginx-app:latest
docker run -d -p 80:80 --name nginx-app the-account-id.dkr.ecr.the-region.amazonaws.com/nginx-app:latest
# Verify deployment
curl http://localhost/health
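The health endpoint may take a few seconds to respond after the container starts, so a one-shot curl can fail spuriously. A generic retry loop, sketched below with a stand-in probe (in practice the probe would hit /health):

```python
import time

def wait_until(probe, attempts: int = 10, delay: float = 1.0) -> bool:
    """Call probe() until it returns True or attempts are exhausted."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Stand-in probe that succeeds on the third call, simulating a
# container that needs a moment to come up.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until(fake_probe, attempts=5, delay=0.01))  # True
```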
Step 11: Testing and Validation
Create unit tests for the infrastructure:
# test/unit/test_infra_stack.py
import aws_cdk as core
import aws_cdk.assertions as assertions

from infra.infrastructure_stack import NginxCicdStack

# A placeholder env so Vpc.from_lookup can resolve; unit tests
# synthesize with dummy lookup values, no AWS calls are made.
TEST_ENV = core.Environment(account="123456789012", region="ap-southeast-2")

def test_ec2_instance_created():
    app = core.App()
    stack = NginxCicdStack(app, "test-stack", env=TEST_ENV)
    template = assertions.Template.from_stack(stack)

    template.has_resource_properties("AWS::EC2::Instance", {
        "InstanceType": "t2.micro"
    })

def test_security_group_rules():
    app = core.App()
    stack = NginxCicdStack(app, "test-stack", env=TEST_ENV)
    template = assertions.Template.from_stack(stack)

    # The group has rules for both 22 and 80, so match a subset
    template.has_resource_properties("AWS::EC2::SecurityGroup", {
        "SecurityGroupIngress": assertions.Match.array_with([
            assertions.Match.object_like({
                "IpProtocol": "tcp",
                "FromPort": 80,
                "ToPort": 80
            })
        ])
    })
Run tests with:
python -m pytest test/
Monitoring and Maintenance
Monitor the deployment with these commands:
# Check container status
docker ps
# View logs
docker logs nginx-app
# Update deployment
docker pull the-account-id.dkr.ecr.the-region.amazonaws.com/nginx-app:latest
docker stop nginx-app
docker rm nginx-app
docker run -d -p 80:80 --name nginx-app the-account-id.dkr.ecr.the-region.amazonaws.com/nginx-app:latest
Next Steps and Enhancements
This foundation can be extended with:
- Load balancing with Application Load Balancer
- Auto Scaling Groups for high availability
- CloudWatch monitoring and alerting
- SSL/TLS certificate management
- Database integration
- Container orchestration with ECS or EKS
Conclusion
You've successfully created a complete deployment pipeline for a containerized Nginx application using modern DevOps practices. This setup provides a solid foundation for more complex applications while demonstrating key concepts in containerization, cloud infrastructure, and automated deployment.
The combination of Docker, AWS ECR, and EC2 creates a scalable, maintainable deployment solution that can grow with the application needs. The automated CI/CD pipeline ensures consistent deployments while the infrastructure-as-code approach makes the setup reproducible and version-controlled.
For the full reference, here is the GitHub repository I created:
GitHub repository for Building and Deploying a Nginx Web Application with Docker and AWS EC2