
Cory Morgan


My AWS Cloud Resume Challenge journey

Table of Contents

  1. Intro
  2. Why I Chose This Challenge
  3. AWS Cloud Resume Challenge
  4. Lessons Learned
  5. Get In Touch

Intro

My name is Cory and I’m going to share my journey through developing and deploying my resume for the AWS Cloud Resume Challenge. Join me as I share my process, thoughts, and behind-the-scenes insights along the way.

Some quick background:

  • I’m currently an AWS Certified Solutions Architect – Associate and work as a patient care coordinator/office manager at a physical therapy clinic in Brooklyn, NY. I’m looking to transition into a full-time Solutions Architect/Engineer role.
  • I’ve been interested in tech, AI, and coding ever since watching The Matrix, but I officially began my tech journey in 2022 through self-guided courses and a software engineering bootcamp.
  • After observing the tech landscape and the rapid pace of AI development, I decided to pivot my studies to align my interests, skills, and learning curve with industry demand.
  • Because lower-level software engineering tasks are increasingly automated by AI and AI engineering now demands full-time focus, I chose to pursue Cloud/Solutions Architecture. With the help and guidance of my mentor (thank you, Harry), I discovered a role that truly combines my passion for software engineering and AI.
  • As a Solutions Architect, I’ll be curating Virtual Infrastructures Built for Evolution (VIBEs). Companies across AI, healthcare, retail, and beyond will rely on these VIBEs for scalability, reliability, disaster recovery, automation, and cost efficiency.
  • Solutions Architecture touches on everything I’ve experienced so far: problem solving, client care, AI, and coding. So, fresh off my certification, I dove right into my first AWS project: the AWS Cloud Resume Challenge.

Why I Chose This Challenge

I needed cloud projects for my portfolio but didn’t know where to start. A good friend from high school suggested the AWS Cloud Resume Challenge while I was preparing for my certification exam (thank you, Dani).

After looking into the challenge and its requirements, here’s what I found:

  • My experience building static sites would provide a solid front-end.
  • 50+ job postings on LinkedIn and Indeed listed multi-tier migration experience as a top requirement.
  • Terraform (IaC) and CI/CD skills are in high demand.

So here's the game plan:

  • Learn Terraform
  • Migrate a legacy site to AWS
  • Apply AWS serverless best practices
  • Implement comprehensive CI/CD

Right before diving into the challenge, I spent a week learning Terraform with this super helpful 8-part Terraform course on YouTube.


AWS Cloud Resume Challenge

Architecture Diagram

cloud-resume-architecture

Implementation

With a new AWS Solutions Architect certification, Terraform knowledge, and a game plan, I felt pretty confident going into this challenge. Here's a high-level overview of how I tackled it:

1. Legacy Resume Site

I used old files from my software engineering bootcamp as boilerplate and repurposed them for the front-end. I also added a visit counter to the footer. The resume format is simple and straightforward, but I'll likely update the styling later on.

legacy-resume-site-screenshot

Legacy resume stack:

  • HTML+CSS
  • JavaScript
  • Node.js
  • Express
  • SQLite

2. Terraform IaC

With my resume site built, I knew Terraform was next. Using what I learned from YouTube and the Terraform Registry, I configured Terraform with remote state in an S3 bucket (plus DynamoDB locking); there's a minimal sketch of that setup at the end of this step. The first modules created:

  • A private S3 bucket with a public-access block
  • A CloudFront distribution (OAI attached)
  • A public Route 53 hosted zone
  • An ACM certificate for my custom domain

I registered my custom domain name via GoDaddy since VIBEbyCory.dev wasn't available in Route 53.
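
For reference, here's a minimal sketch of that initial setup. The bucket names are illustrative placeholders, not necessarily my exact values:

```hcl
# backend.tf: remote state in S3 with a DynamoDB lock table
terraform {
  backend "s3" {
    bucket         = "cloud-resume-tf-state"          # pre-created state bucket (placeholder)
    key            = "cloud-resume/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "cloud-resume-tf-locks"          # lock table with hash key "LockID"
    encrypt        = true
  }
}

provider "aws" {
  region = "us-east-1" # ACM certificates used by CloudFront must live in us-east-1
}

# Private site bucket, locked down with a public-access block;
# CloudFront reaches it through the OAI instead of public S3 URLs
resource "aws_s3_bucket" "site" {
  bucket = "vibebycory-resume-site" # placeholder
}

resource "aws_s3_bucket_public_access_block" "site" {
  bucket                  = aws_s3_bucket.site.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```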

3. Deploying my Resume Site + Visitor Counter

With the Terraform infrastructure in place, I uploaded my HTML/CSS/JS files to the S3 bucket and enabled static site hosting. A quick curl to the CloudFront URL confirmed the resume was live over HTTPS.

Then I wrote a Python Lambda backed by DynamoDB to track visits and exposed it via API Gateway v2. My first front-end fetch calls hit a CORS error, but fixing the API Gateway CORS settings got it working in minutes.
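
Here's a rough Terraform sketch of that counter stack. The resource names, handler file, and zip path are hypothetical, and the Lambda execution role is assumed to be defined elsewhere; the Lambda itself is just a few lines of Python that increments an item in the table and returns the new count:

```hcl
resource "aws_dynamodb_table" "visits" {
  name         = "resume-visits" # placeholder
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

resource "aws_lambda_function" "counter" {
  function_name = "visit-counter"
  runtime       = "python3.12"
  handler       = "counter.handler"            # counter.py exposing handler(event, context)
  filename      = "lambda/counter.zip"         # pre-built deployment package
  role          = aws_iam_role.lambda_exec.arn # execution role defined elsewhere
}

resource "aws_apigatewayv2_api" "http" {
  name          = "resume-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.http.id
  name        = "$default"
  auto_deploy = true
}

resource "aws_apigatewayv2_integration" "counter" {
  api_id                 = aws_apigatewayv2_api.http.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.counter.invoke_arn
  payload_format_version = "2.0"
}

resource "aws_apigatewayv2_route" "count" {
  api_id    = aws_apigatewayv2_api.http.id
  route_key = "GET /count"
  target    = "integrations/${aws_apigatewayv2_integration.counter.id}"
}

# Allow API Gateway to invoke the function
resource "aws_lambda_permission" "apigw" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.counter.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.http.execution_arn}/*/*"
}
```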

4. Set Up CI/CD

Next, I added a GitHub Actions workflow that builds and syncs the static site whenever I merge updates to the main branch. There's a separate workflow for front-end merges as well as one for back-end merges. The initial back-end run failed due to an IAM policy gap in the CodeBuild role. Once I scoped the right permissions, the pipeline deployed end-to-end without manual work.

front-end-github-actions

back-end-github-actions

5. Testing & Fine-Tuning

Finally, I tweaked the Lambda’s memory and timeout based on CloudWatch logs, verified cost estimates (all under $1/month), and did a global performance check via CloudFront metrics.
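
In Terraform, that tuning comes down to two arguments on the function. The values below are plausible settings for a tiny counter function, not necessarily my final numbers:

```hcl
resource "aws_lambda_function" "counter" {
  # ... same function as in the counter sketch above ...
  memory_size = 128 # MB; CloudWatch showed a single-table counter needs very little
  timeout     = 5   # seconds; plenty for one DynamoDB read/write
}
```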

Roadblocks and Solutions

Implementing all of this was easier said than done, and it didn't happen without a fair share of errors. These are the larger issues I came across during this project and how I solved them:

DNS & Certificate Validation Stalled

  • What happened: My custom domain (VIBEbyCory.dev) never resolved properly and the ACM certificate stayed in PENDING_VALIDATION. Browser calls returned 404, even though CloudFront and the API worked via curl.

  • Digging in: I’d created the Route 53 hosted zone and ACM cert in Terraform, but forgot to update my NS records at GoDaddy to point to those Route 53 name servers. Without that, DNS never propagated and ACM could never validate the CNAME.

Note: an ACM certificate can legitimately sit in PENDING_VALIDATION for a while (~5-30 minutes), so part of what I saw could also have been normal propagation delay rather than a real problem.

  • Solution: Copied the four NS values from my Terraform-managed hosted zone into GoDaddy’s DNS settings. Once the registrar change propagated (~10–15 min), dig VIBEbyCory.dev showed the right name servers, the ACM certificate flipped to ISSUED, and my site and /count endpoint worked over HTTPS.
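
Terraform can also create the validation records itself, which removes one class of stall entirely (once the registrar's NS records actually point at the hosted zone). Here's a sketch of that standard pattern, assuming the hosted zone is managed elsewhere in the config:

```hcl
# DNS-validated certificate (us-east-1 for CloudFront)
resource "aws_acm_certificate" "site" {
  domain_name       = "vibebycory.dev"
  validation_method = "DNS"
}

# Create the validation CNAMEs in the Route 53 zone automatically
resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.site.domain_validation_options :
    dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id = aws_route53_zone.main.zone_id # hosted zone defined elsewhere
  name    = each.value.name
  type    = each.value.type
  records = [each.value.record]
  ttl     = 60
}

# Blocks until ACM reports ISSUED; this only succeeds once the
# registrar's NS records point at this hosted zone
resource "aws_acm_certificate_validation" "site" {
  certificate_arn         = aws_acm_certificate.site.arn
  validation_record_fqdns = [for r in aws_route53_record.cert_validation : r.fqdn]
}
```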

CORS Blocking the Visitor Counter

  • What happened: My front-end fetch('/count') calls were blocked by CORS.

  • Digging in: Browser console flagged a missing Access-Control-Allow-Origin header—API Gateway v2 doesn’t enable CORS by default.

  • Solution: In my Terraform aws_apigatewayv2_api, I added a cors_configuration block (allow_origins = ["https://VIBEbycory.dev"], etc.) and redeployed.
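
The fix looked roughly like this:

```hcl
resource "aws_apigatewayv2_api" "http" {
  name          = "resume-api" # placeholder
  protocol_type = "HTTP"

  # Without this block, browsers reject cross-origin fetch() responses
  cors_configuration {
    allow_origins = ["https://VIBEbycory.dev"]
    allow_methods = ["GET", "OPTIONS"]
    allow_headers = ["content-type"]
    max_age       = 3600
  }
}
```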

CI/CD IAM Policy Gaps

  • What happened: GitHub Actions hit AccessDenied errors during terraform apply (specifically on S3 and DynamoDB).

  • Digging in: The logs showed missing permissions on my state bucket and lock table.

  • Solution: Updated the CodeBuild role’s IAM policy to include s3:ListBucket, s3:GetObject, s3:PutObject on the bucket and dynamodb:* on the lock table.
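
Here's a sketch of that scoped policy; the role, bucket, table, and account ID are placeholders:

```hcl
resource "aws_iam_role_policy" "pipeline_state_access" {
  name = "tf-state-access"
  role = aws_iam_role.pipeline.id # CI role defined elsewhere

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::cloud-resume-tf-state"
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::cloud-resume-tf-state/*"
      },
      {
        Effect   = "Allow"
        Action   = ["dynamodb:*"] # broad, as I used it; could be narrowed further
        Resource = "arn:aws:dynamodb:us-east-1:123456789012:table/cloud-resume-tf-locks"
      }
    ]
  })
}
```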

DynamoDB Lock Table Deadlocks

  • What happened: Terraform hung trying to acquire/release the remote state lock.

  • Digging in: My lock table had no TTL on lock items, so stale locks never expired—and my role couldn’t delete old entries.

  • Solution: Added a TTL attribute (terraform_locked_at) to the table schema and granted dynamodb:DeleteItem to the pipeline role.
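
In Terraform, that's a ttl block on the lock table resource. One assumption worth flagging: DynamoDB TTL only deletes items whose TTL attribute holds a numeric epoch timestamp, so whatever writes the lock items has to populate terraform_locked_at accordingly:

```hcl
resource "aws_dynamodb_table" "tf_locks" {
  name         = "cloud-resume-tf-locks" # placeholder
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # schema required by the Terraform S3 backend

  attribute {
    name = "LockID"
    type = "S"
  }

  # DynamoDB deletes items once the epoch time in this attribute
  # passes, so stale locks clean themselves up
  ttl {
    attribute_name = "terraform_locked_at"
    enabled        = true
  }
}
```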

Lessons Learned

Looking back at the journey, here are my key takeaways from building (and debugging) my Cloud Resume project:

Registrar ↔ Route 53 First
Sync your domain registrar’s NS records with your Route 53 hosted zone before you even think about SSL or alias records. Skipping this step stalled both DNS resolution and ACM certificate validation and left me waiting over an hour for something to happen.

Automate for DNS Eventual Consistency
DNS propagation doesn't happen right away. Adding a simple polling loop (e.g. a null_resource in Terraform that retries aws acm describe-certificate) saved me from manual waits and timeouts.
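
A rough version of that polling idea, reusing the certificate resource from the validation sketch above:

```hcl
resource "null_resource" "wait_for_cert" {
  triggers = {
    cert_arn = aws_acm_certificate.site.arn
  }

  # Poll ACM every 30 seconds, for up to 15 minutes, until the cert is ISSUED
  provisioner "local-exec" {
    command = <<-EOT
      for i in $(seq 1 30); do
        STATUS=$(aws acm describe-certificate \
          --certificate-arn ${aws_acm_certificate.site.arn} \
          --region us-east-1 \
          --query 'Certificate.Status' --output text)
        [ "$STATUS" = "ISSUED" ] && exit 0
        echo "Certificate status: $STATUS; retrying in 30s"
        sleep 30
      done
      exit 1
    EOT
  }
}
```

Note that aws_acm_certificate_validation (from the earlier sketch) already performs this wait for you; the null_resource trick is more useful for other eventually-consistent checks.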

Don’t Forget CORS
Browser-based fetch() calls to API Gateway need explicit CORS headers. Always add a cors_configuration block to your aws_apigatewayv2_api resource; even if curl works, the browser may still block you.

Least-Privilege CI/CD IAM
Map out every resource your pipeline touches (state buckets, lock tables, S3 objects, CloudWatch, Budgets, etc.) and grant only the permissions it needs. This prevents AccessDenied surprises in GitHub Actions.

TTL & Cleanup for DynamoDB Locks
A lock table without a TTL or delete permissions will result in deadlocks. Enabling item TTL and giving your CI/CD pipeline role dynamodb:DeleteItem ensures stale locks self-clean.

Keep Git Workflow Disciplined
I lost time deleting and recreating repos because of scattered commits. Commit early, use feature branches when necessary, and tag/comment milestones so you’re never left asking “where did that change go?” I even started keeping a list of Git workflow commands handy to keep my progress moving smoothly.

Check Monitoring & Budget Alerts
When adding CloudWatch alarms or AWS Budget alerts via Terraform, double-check that your role can create, view, and test those resources, otherwise your alerts will never fire. Looking at a blank AWS console page when you're confident you should see some activity or alert is always humbling.
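
For example, a minimal Terraform budget alert looks something like this (the limit and email are placeholders):

```hcl
resource "aws_budgets_budget" "monthly" {
  name         = "cloud-resume-monthly" # placeholder
  budget_type  = "COST"
  limit_amount = "5.0"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  # Email me when actual spend crosses 80% of the limit
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["me@example.com"] # placeholder
  }
}
```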

These lessons not only improved this project’s reliability and maintainability, but they’ve become part of my standard toolkit for any future AWS Cloud/DevOps/Solutions work.


Get In Touch

Thanks for reading! If you try this challenge yourself, feel free to message me on LinkedIn, or leave a comment below!
I'd love to connect with new people in the industry, especially if you're in NYC!

LinkedIn | Email | Visit my site! | AWS Cloud Resume Challenge Github Repo
