Modern applications rarely run in isolation. Whether you're building a web application with a database, setting up a microservices architecture, or creating a development environment that mirrors production, you'll likely need multiple containers working together. Docker Compose makes orchestrating these multi-container applications straightforward and reproducible.
In this article, we'll explore practical, real-world examples of Docker Compose configurations that you can adapt for your own projects.
## What is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. Using a YAML file, you can configure your application's services, networks, and volumes, then spin up your entire stack with a single command.
Key benefits include:
- Reproducible environments across development, testing, and production
- Simplified orchestration of complex multi-service applications
- Easy scaling and service management
- Network isolation and service discovery out of the box
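To make the service-discovery point concrete, here is a minimal sketch of a `docker-compose.yml` (service names and images chosen for illustration): each service can reach the other by its service name, because Compose puts them on a shared network with built-in DNS.

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    # "db" becomes a resolvable hostname for the web service
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

Run `docker-compose up -d` in the same directory and both containers start on a default network named after the project directory.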
## Example 1: Full-Stack Web Application (MEAN Stack)
Let's start with a complete MEAN stack: a MongoDB database, a Node.js/Express API backend, an Angular frontend, and an Nginx reverse proxy in front of both.
```yaml
version: '3.8'

services:
  # MongoDB Database
  mongodb:
    image: mongo:6.0
    container_name: mean_mongodb
    restart: unless-stopped
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password123
      MONGO_INITDB_DATABASE: meanapp
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    networks:
      - mean_network

  # Node.js API Backend
  api:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: mean_api
    restart: unless-stopped
    environment:
      NODE_ENV: development
      MONGODB_URI: mongodb://admin:password123@mongodb:27017/meanapp?authSource=admin
      JWT_SECRET: your-jwt-secret-key
      PORT: 3000
    ports:
      - "3000:3000"
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - mongodb
    networks:
      - mean_network

  # Angular Frontend
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    container_name: mean_frontend
    restart: unless-stopped
    environment:
      - NODE_ENV=development
    ports:
      - "4200:4200"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    depends_on:
      - api
    networks:
      - mean_network

  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    container_name: mean_nginx
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - frontend
      - api
    networks:
      - mean_network

volumes:
  mongodb_data:

networks:
  mean_network:
    driver: bridge
```
Key Features:
- MongoDB with persistent data storage
- API backend with environment-specific configuration
- Frontend development server with hot reload
- Nginx reverse proxy for routing
- Custom network for service communication
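The compose file mounts `./nginx/nginx.conf` but doesn't show its contents. One possible sketch of that file (the `/api/` prefix and proxy targets are assumptions — adjust them to your actual routes):

```nginx
events {}

http {
  server {
    listen 80;

    # Route API calls to the Node.js backend
    location /api/ {
      proxy_pass http://api:3000/;
      proxy_set_header Host $host;
    }

    # Everything else goes to the Angular dev server
    location / {
      proxy_pass http://frontend:4200/;
      proxy_set_header Host $host;
    }
  }
}
```

Note that `api` and `frontend` resolve inside the container because Nginx shares the `mean_network` with those services.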
## Example 2: E-commerce Platform with Microservices
Here's a more complex example showing a microservices-based e-commerce platform:
```yaml
version: '3.8'

services:
  # API Gateway
  api-gateway:
    image: nginx:alpine
    container_name: ecommerce_gateway
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./gateway/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./gateway/ssl:/etc/nginx/ssl:ro
    depends_on:
      - user-service
      - product-service
      - order-service
    networks:
      - ecommerce_network

  # User Service
  user-service:
    build: ./services/user-service
    # No container_name here: a fixed name would conflict with replicas > 1
    environment:
      DATABASE_URL: postgresql://user:password@user_db:5432/users
      REDIS_URL: redis://redis:6379
      JWT_SECRET: ${JWT_SECRET}
    depends_on:
      - user_db
      - redis
    networks:
      - ecommerce_network
    deploy:
      replicas: 2

  user_db:
    image: postgres:15
    container_name: user_database
    environment:
      POSTGRES_DB: users
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - user_db_data:/var/lib/postgresql/data
    networks:
      - ecommerce_network

  # Product Service
  product-service:
    build: ./services/product-service
    container_name: product_service
    environment:
      DATABASE_URL: postgresql://product:password@product_db:5432/products
      ELASTICSEARCH_URL: http://elasticsearch:9200
    depends_on:
      - product_db
      - elasticsearch
    networks:
      - ecommerce_network

  product_db:
    image: postgres:15
    container_name: product_database
    environment:
      POSTGRES_DB: products
      POSTGRES_USER: product
      POSTGRES_PASSWORD: password
    volumes:
      - product_db_data:/var/lib/postgresql/data
    networks:
      - ecommerce_network

  # Order Service
  order-service:
    build: ./services/order-service
    container_name: order_service
    environment:
      DATABASE_URL: postgresql://order:password@order_db:5432/orders
      # Credentials must match RABBITMQ_DEFAULT_USER/PASS below
      RABBITMQ_URL: amqp://admin:password@rabbitmq:5672/
      PAYMENT_SERVICE_URL: http://payment-service:3000
    depends_on:
      - order_db
      - rabbitmq
    networks:
      - ecommerce_network

  order_db:
    image: postgres:15
    container_name: order_database
    environment:
      POSTGRES_DB: orders
      POSTGRES_USER: order
      POSTGRES_PASSWORD: password
    volumes:
      - order_db_data:/var/lib/postgresql/data
    networks:
      - ecommerce_network

  # Payment Service
  payment-service:
    build: ./services/payment-service
    container_name: payment_service
    environment:
      STRIPE_SECRET_KEY: ${STRIPE_SECRET_KEY}
      WEBHOOK_SECRET: ${STRIPE_WEBHOOK_SECRET}
    networks:
      - ecommerce_network

  # Shared Services
  redis:
    image: redis:7-alpine
    container_name: ecommerce_redis
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    networks:
      - ecommerce_network

  rabbitmq:
    image: rabbitmq:3-management
    container_name: ecommerce_rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: admin
      RABBITMQ_DEFAULT_PASS: password
    ports:
      - "15672:15672" # Management UI
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    networks:
      - ecommerce_network

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.8.0
    container_name: ecommerce_elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - ecommerce_network

volumes:
  user_db_data:
  product_db_data:
  order_db_data:
  redis_data:
  rabbitmq_data:
  elasticsearch_data:

networks:
  ecommerce_network:
    driver: bridge
```
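One caveat worth knowing: the short `depends_on` form only controls start *order*, not readiness — Postgres may still be initializing when a service connects. A sketch of the long-form syntax (supported in recent Compose versions) that waits for a passing health check:

```yaml
services:
  user-service:
    build: ./services/user-service
    depends_on:
      user_db:
        condition: service_healthy
  user_db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d users"]
      interval: 5s
      timeout: 5s
      retries: 5
```

`pg_isready` ships inside the official `postgres` image, so no extra tooling is needed in the container.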
## Example 3: Development Environment with Monitoring
This example shows a development setup with comprehensive monitoring and logging:
```yaml
version: '3.8'

services:
  # Main Application
  app:
    build: .
    container_name: myapp
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      - db
      - redis
    networks:
      - app_network
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  # Database
  db:
    image: postgres:15
    container_name: myapp_db
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - app_network

  # Redis Cache
  redis:
    image: redis:7-alpine
    container_name: myapp_redis
    ports:
      - "6379:6379"
    networks:
      - app_network

  # Prometheus (Metrics)
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
    networks:
      - app_network

  # Grafana (Visualization)
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana_data:/var/lib/grafana
      - ./monitoring/grafana/dashboards:/etc/grafana/provisioning/dashboards
      - ./monitoring/grafana/datasources:/etc/grafana/provisioning/datasources
    networks:
      - app_network

  # Jaeger (Distributed Tracing)
  jaeger:
    image: jaegertracing/all-in-one:latest
    container_name: jaeger
    ports:
      - "16686:16686"
      - "14268:14268"
    environment:
      - COLLECTOR_OTLP_ENABLED=true
    networks:
      - app_network

  # ELK Stack for Logging
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.8.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - app_network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.8.0
    container_name: logstash
    volumes:
      - ./monitoring/logstash/pipeline:/usr/share/logstash/pipeline
    depends_on:
      - elasticsearch
    networks:
      - app_network

  kibana:
    image: docker.elastic.co/kibana/kibana:8.8.0
    container_name: kibana
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - app_network

volumes:
  postgres_data:
  prometheus_data:
  grafana_data:
  elasticsearch_data:

networks:
  app_network:
    driver: bridge
```
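The Prometheus service above mounts `./monitoring/prometheus.yml` but the file itself isn't shown. A minimal sketch (the job names and the assumption that the app exposes metrics on port 3000 are illustrative):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  # Compose's internal DNS resolves the service name "app"
  - job_name: 'myapp'
    static_configs:
      - targets: ['app:3000']

  # Prometheus scraping itself
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
```

Because Prometheus runs on the same `app_network`, it can scrape targets by service name rather than IP address.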
## Best Practices and Tips
### 1. Use Environment Variables

Keep sensitive data and environment-specific configuration in `.env` files:
```bash
# .env
POSTGRES_PASSWORD=your_secure_password
JWT_SECRET=your_jwt_secret
STRIPE_SECRET_KEY=sk_test_your_stripe_key
```
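Compose substitutes `${VAR}` references from the shell environment or from a `.env` file in the project directory; you can also inject a whole file into a container with `env_file`. A sketch of both patterns:

```yaml
services:
  db:
    image: postgres:15
    environment:
      # Substituted at compose time; the :?err form fails fast if unset
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?err}
  api:
    build: .
    # Passes every key in .env into the container's environment
    env_file:
      - .env
```

The difference matters: `${VAR}` is resolved by Compose before the container starts, while `env_file` hands the variables to the container itself.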
### 2. Health Checks
Add health checks to ensure services are ready:
```yaml
services:
  api:
    # ... other config
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```
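One caveat worth checking against your base image: `curl` is not installed in many slim images, while Alpine-based images ship `wget`. A variant of the same check that may be more portable on Alpine:

```yaml
healthcheck:
  # wget --spider makes a request without downloading the body
  test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/health"]
  interval: 30s
  timeout: 10s
  retries: 3
```

If neither tool is available, a small script inside the image (e.g. a Node one-liner hitting the health endpoint) works as the `test` command too.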
### 3. Resource Limits
Prevent services from consuming too many resources:
```yaml
services:
  app:
    # ... other config
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
```
### 4. Multi-Stage Builds
Optimize your Docker images with multi-stage builds:
```dockerfile
# Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
# --omit=dev replaces the deprecated --only=production flag
RUN npm ci --omit=dev

FROM node:18-alpine AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```
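Multi-stage builds pair well with a `.dockerignore`, which keeps the build context small and stops `COPY . .` from overwriting the image's `node_modules` with the host's copy. The entries below are typical, not exhaustive:

```text
node_modules
npm-debug.log
.git
.env
dist
```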
## Common Commands
Here are essential Docker Compose commands for managing your applications:
```bash
# Start all services in the background
docker-compose up -d

# View logs
docker-compose logs -f [service_name]

# Scale a service
docker-compose up -d --scale api=3

# Stop all services
docker-compose down

# Remove everything including volumes
docker-compose down -v

# Rebuild and start
docker-compose up -d --build

# Execute commands in running containers
docker-compose exec api npm run migrate
```

With the Compose plugin bundled into modern Docker CLI releases, the same commands also work as `docker compose ...` (with a space).
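Before starting a stack, it can be worth validating the file and seeing the fully resolved configuration — `config` merges multiple compose files and substitutes environment variables:

```bash
# Validate and print the final, resolved configuration
docker-compose config

# Only list the service names
docker-compose config --services
```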
## Troubleshooting Common Issues
### 1. Port Conflicts
If you get port binding errors, check for conflicting services:
```bash
# Check what's using a port
lsof -i :3000
# or
netstat -tulpn | grep :3000
```
### 2. Network Issues
Services can't communicate? Verify they're on the same network:
```bash
# Inspect networks
docker network ls
docker network inspect <network_name>
```
### 3. Volume Permissions

If you hit permission errors on bind-mounted volumes, run the container as your host user:
```yaml
services:
  app:
    # Set user ID to match host user
    user: "${UID}:${GID}"
```
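One gotcha with this pattern: `${UID}` only works if Compose can actually see the variable, and bash sets `UID` without exporting it (it is also read-only, so `export UID=...` fails). A simple workaround is to write the values into `.env`, which Compose reads automatically:

```shell
# Write the current user/group IDs into .env so Compose can substitute them
echo "UID=$(id -u)" >> .env
echo "GID=$(id -g)" >> .env
```

After that, `docker-compose up -d` picks the values up from `.env` without any shell tricks.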
## Conclusion
Docker Compose transforms complex multi-container applications into manageable, reproducible deployments. Whether you're building a simple web app or a complex microservices architecture, these examples provide a solid foundation you can build upon.
The key is to start simple and gradually add complexity as your application grows. Remember to use environment variables for configuration, implement proper health checks, and leverage Docker's networking capabilities for service discovery.
What multi-container setup are you planning to build? Share your Docker Compose configurations and experiences in the comments below!
Want to learn more about Docker and containerization? Follow me for more practical tutorials and real-world examples!