How AI-powered development transforms infrastructure provisioning from hours to minutes
Introduction
Traditional infrastructure setup often involves hours of research, documentation reading, and trial-and-error. But what if you could simply chat with an AI assistant to create an entire AWS environment?
In this article, I'll walk you through my real experience using Amazon Q CLI to set up a complete staging environment for our production infrastructure. We'll see how natural language prompts can replace tedious manual work and dramatically speed up the development process.
The Challenge
Our team needed to create a new staging environment for our ABC (AbcHQ) project. This involved:
- Setting up AWS profiles and authentication
- Creating Terraform configuration files
- Configuring networking (VPC, subnets, security groups)
- Setting up infrastructure components (bastion hosts, load balancers, databases)
- Managing state files and deployment scripts
Traditionally, this would take several hours of:
- Reading documentation
- Copying and modifying existing configurations
- Debugging configuration errors
- Testing and validation
Let's see how Amazon Q CLI changed this experience entirely.
Step 1: AWS Profile Configuration
My Prompt:
Add abc-stg environment to aws config with account ID 730335299999, same role as dev and prod
What Amazon Q CLI Did:
Amazon Q immediately understood I needed to add a new AWS profile and provided the exact configuration:
[profile abc-stg]
sso_session = abc
sso_account_id = 730335299999
sso_role_name = global_admin
region = ap-southeast-2
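Note that the `sso_session = abc` line points at an `[sso-session]` section that must already exist in `~/.aws/config` (ours did, shared with the dev and prod profiles). If you're starting from scratch, that section looks roughly like this - the start URL below is a placeholder, not our real portal:

```ini
[sso-session abc]
# placeholder URL - use your organization's IAM Identity Center portal
sso_start_url = https://example.awsapps.com/start
sso_region = ap-southeast-2
sso_registration_scopes = sso:account:access
```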
💡 The Magic: Instead of looking up AWS CLI documentation or copying from existing profiles, Q understood the context and provided the exact configuration format. This saved me 5-10 minutes of documentation browsing.
Step 2: Authentication
My Prompt:
aws login sso abc-stg
What Amazon Q CLI Did:
Q corrected my command syntax and provided the proper AWS SSO login command:
aws sso login --profile abc-stg
💡 The Magic: Q caught my syntax error and provided the correct command format. No need to check AWS CLI documentation or remember the exact parameter syntax.
Step 3: Environment Verification
My Prompt:
Check if I have any s3 buckets
What Amazon Q CLI Did:
Q provided the AWS CLI command and executed it, showing me the existing buckets in the staging account:
aws s3api list-buckets --profile abc-stg
Result: Found 4 existing buckets, confirming the account was accessible and had some existing resources.
💡 The Magic: Q understood I wanted to verify account access and chose the most appropriate verification method. This quick check saved potential debugging time later.
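If you only care about the bucket names, the same call can be trimmed with the AWS CLI's built-in JMESPath filtering - a handy variant when there are many buckets:

```shell
# List only bucket names, one per line
aws s3api list-buckets --profile abc-stg \
  --query 'Buckets[].Name' --output text
```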
Step 4: The Big One - Terraform Configuration
My Prompt:
I need to work on tf-basestack repo, I need to deploy new environment to this `abc-stg` profile, Please prepare necessary files for me.
What Amazon Q CLI Did:
This is where Q really shone. It analyzed our existing repository structure and created multiple files:
4.1 Configuration Directory
mkdir -p ./tf-basestack/config/stg
4.2 Terraform Variables File (config/stg/au.tfvars)
Q created a complete configuration file with:
- Account-specific settings
- Network configuration (VPC CIDR: 10.110.0.0/16)
- Subnet configurations for public, private, and database tiers
- Security group configurations
- Instance configurations
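I won't reproduce the whole file, but a trimmed sketch gives the flavor - the variable names below are illustrative, since the real ones follow tf-basestack's own module inputs:

```hcl
# config/stg/au.tfvars (illustrative excerpt; variable names are hypothetical)
environment = "stg"
aws_profile = "abc-stg"
region      = "ap-southeast-2"

vpc_cidr              = "10.110.0.0/16"
public_subnet_cidrs   = ["10.110.0.0/24", "10.110.1.0/24"]
private_subnet_cidrs  = ["10.110.10.0/24", "10.110.11.0/24"]
database_subnet_cidrs = ["10.110.20.0/24", "10.110.21.0/24"]
```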
4.3 Environment Variables Script (scripts/stg_init_env_vars.sh)
#!/bin/bash
BUCKETNAME="730335299999-abchq-tfstate-bucket"
REGION="ap-southeast-2"
DDBTABLENAME="tfstate-table"
PROFILE="abc-stg"
4.4 Terraform Initialization Script (terraform_init_stg_au.sh)
A complete script for Terraform backend initialization with proper workspace management.
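The core of that script looks roughly like this - a sketch, not the literal file, sourcing the environment variables defined above:

```shell
# Sketch of terraform_init_stg_au.sh: configure the S3 backend
# and select (or create) the staging workspace.
source ./scripts/stg_init_env_vars.sh

terraform init \
  -backend-config="bucket=${BUCKETNAME}" \
  -backend-config="region=${REGION}" \
  -backend-config="dynamodb_table=${DDBTABLENAME}" \
  -backend-config="profile=${PROFILE}"

# Reuse the workspace if it exists, otherwise create it
terraform workspace select stg || terraform workspace new stg
```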
4.5 First Run Script (firstrun_stg.sh)
Script to create S3 bucket and DynamoDB table for Terraform state management.
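Again, roughly what the generated script does (a hedged sketch; the real file carries more error handling and verification):

```shell
# Sketch of firstrun_stg.sh: one-time bootstrap of Terraform state storage.
source ./scripts/stg_init_env_vars.sh

# State bucket, with versioning so old state files stay recoverable
aws s3api create-bucket --bucket "${BUCKETNAME}" --region "${REGION}" \
  --create-bucket-configuration "LocationConstraint=${REGION}" \
  --profile "${PROFILE}"
aws s3api put-bucket-versioning --bucket "${BUCKETNAME}" \
  --versioning-configuration Status=Enabled --profile "${PROFILE}"

# Lock table: Terraform's S3 backend expects a string key named LockID
aws dynamodb create-table --table-name "${DDBTABLENAME}" \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region "${REGION}" --profile "${PROFILE}"
```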
💡 The Magic: This single prompt replaced what would typically be 2-3 hours of work:
- Analyzing existing configurations
- Adapting them for the new environment
- Creating multiple script files
- Ensuring consistency across all files
- Setting up proper naming conventions
Q understood our project structure, naming patterns, and requirements without any additional context.
Step 5: Code Management
My Prompt:
Please commit and push the code for me!
What Amazon Q CLI Did:
Q created a proper Git workflow:
git checkout -b feat/add-stg-environment
git add config/stg/ scripts/stg_init_env_vars.sh terraform_init_stg_au.sh firstrun_stg.sh
git commit -m "Add staging environment configuration for tf-basestack"
git push -u origin feat/add-stg-environment
💡 The Magic: Q understood Git best practices, created a feature branch, and wrote a descriptive commit message. No need to think about proper Git workflow or commit message conventions.
Step 6: Infrastructure Initialization
My Prompt:
run file firstrun_stg.sh
What Amazon Q CLI Did:
Q executed the first-run script, creating:
- S3 bucket for Terraform state storage with versioning enabled
- DynamoDB table for state locking
- Proper error handling and verification
💡 The Magic: Q handled the execution and provided clear feedback about what was being created and why.
Step 7: Terraform Initialization
My Prompt:
run file terraform_init_stg_au.sh
What Amazon Q CLI Did:
Q executed the Terraform initialization:
- Configured the S3 backend
- Created the staging workspace
- Downloaded required providers
- Set up the working directory
💡 The Magic: Q managed the complex Terraform backend configuration automatically, something that often requires multiple attempts to get right.
Step 8: Handling Real-World Issues
My Prompt:
Please commit and push, also run plan again for me on stg
What Happened:
The Terraform plan failed because the Route53 hosted zone didn't exist yet.
What Amazon Q CLI Did:
Q identified the issue and explained that we needed to create the Route53 zone first.
💡 The Magic: Instead of leaving me to debug cryptic Terraform errors, Q immediately identified the root cause and suggested the solution.
Step 9: Problem Resolution
My Prompt:
Create hosted zone "au.stg.abchq.com" on route53 for me!
What Amazon Q CLI Did:
Q provided the exact AWS CLI command:
aws route53 create-hosted-zone \
--name "au.stg.abchq.com" \
--caller-reference "stg-au-zone-$(date +%Y-%m-%d-%H-%M)" \
--profile abc-stg
💡 The Magic: Q understood the DNS requirements, provided proper caller-reference generation, and used the correct profile. This saved me from looking up Route53 CLI documentation.
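One detail worth calling out: Route53 rejects duplicate caller references, which is why Q derived one from a timestamp rather than hard-coding a string. The expansion is plain shell:

```shell
# Route53 requires a caller reference that is unique per create request,
# so derive one from the current time (unique down to the minute).
ref="stg-au-zone-$(date +%Y-%m-%d-%H-%M)"
echo "$ref"
```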
Step 10: Final Success
My Prompt:
update README for me? Remember to use English prompts!
What Amazon Q CLI Did:
After successful Terraform plan execution (43 resources to be created), Q updated our documentation with:
- Complete step-by-step instructions
- All the prompts used during the process
- Expected outputs and results
- Troubleshooting information
Final Result:
Plan: 43 to add, 0 to change, 0 to destroy.
A complete staging environment ready for deployment!
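From here, deployment is the standard Terraform flow - assuming the plan was produced with the staging var file, apply uses the same one:

```shell
# Apply the reviewed plan in the staging workspace
terraform workspace select stg
terraform apply -var-file=config/stg/au.tfvars
```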
The Transformation: Before vs After
Traditional Approach (3-4 hours):
- Research (30 mins): Reading Terraform docs, AWS CLI references
- Configuration (90 mins): Manually creating and adapting config files
- Scripting (45 mins): Writing initialization and deployment scripts
- Debugging (60 mins): Fixing configuration errors, syntax issues
- Testing (30 mins): Validating the setup
- Documentation (45 mins): Writing setup instructions
With Amazon Q CLI (45 minutes):
- Natural conversation (30 mins): Describing what I needed
- Validation (10 mins): Reviewing Q's suggestions
- Execution (5 mins): Running the generated scripts
Key Insights
1. Context Understanding
Amazon Q CLI understood our project structure, naming conventions, and requirements without explicit explanation. It analyzed existing files and maintained consistency across the new environment.
2. Best Practices Built-In
Q automatically applied infrastructure best practices:
- Proper Git workflow with feature branches
- Terraform state management with locking
- Security configurations
- Resource naming conventions
3. Error Prevention
Q caught potential issues before they became problems:
- Syntax corrections in commands
- Missing dependencies (like Route53 zones)
- Configuration inconsistencies
4. Documentation Generation
Q automatically generated comprehensive documentation, including all the prompts used - creating a reproducible process for future environments.
Conclusion
Amazon Q CLI transformed what used to be a complex, time-consuming task into a natural conversation. The 80% time reduction isn't just about speed - it's about:
- Reduced cognitive load: No need to remember syntax or search documentation
- Fewer errors: AI catches mistakes before they cause problems
- Better practices: Built-in best practices and conventions
- Instant documentation: Automatic generation of setup guides
This experience shows how AI-powered development tools are changing the game. We're moving from "knowing how to code" to "knowing what to build" - and that's a powerful shift.
The future of infrastructure development isn't about memorizing Terraform syntax or AWS CLI commands. It's about clearly communicating your intent and letting AI handle the implementation details.
Try it yourself: Next time you need to set up infrastructure, try describing what you want in natural language first. You might be surprised by how much time you save.
What's your experience with AI-powered development tools? Have you tried Amazon Q CLI or similar tools? Share your thoughts in the comments below!