Hey Devs,
If you're starting out in data engineering or curious how real-world data pipelines work, this post is for you.
As an Associate Data Engineer Intern, I wanted to go beyond watching tutorials and actually build a working pipeline: one that pulls real-world data daily, processes it, stores it, and is fully containerized.
So I picked something simple but meaningful: global COVID-19 stats.
Here's a breakdown of what I built, how it works, and what I learned.
What This Pipeline Does
This mini-project automates the following:
- Pulls daily global COVID-19 stats from a public API
- Uses Airflow to schedule and monitor the task
- Stores the results in a PostgreSQL database
- Runs everything inside Docker containers
It's a beginner-friendly, end-to-end project to get your hands dirty with core data engineering tools.
The Tech Stack
- Python: for the main fetch/store logic
- Airflow: to orchestrate and schedule tasks
- PostgreSQL: for storing daily data
- Docker: to containerize and simplify setup
- disease.sh API: open-source COVID-19 stats API
How It Works (Behind the Scenes)
- Airflow DAG triggers once per day
- A Python script sends a request to the COVID-19 API
- Parses the JSON response
- Inserts the cleaned data into a PostgreSQL table
- Logs everything (success/failure) into Airflow's UI
Everything runs locally via docker-compose: one command and you're up and running.
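To make that concrete, here's a minimal sketch of what the DAG could look like. The dag_id, task_id, and the scripts.fetch_covid import path are illustrative names, not necessarily what's in the repo:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

from scripts.fetch_covid import fetch_and_store  # hypothetical import path for the fetch + insert logic

with DAG(
    dag_id="covid_daily_pipeline",      # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",         # trigger once per day
    catchup=False,                      # don't backfill past days
) as dag:

    fetch_task = PythonOperator(
        task_id="fetch_and_store_covid_stats",
        python_callable=fetch_and_store,  # the actual work lives in scripts/
    )

Airflow picks up anything in the dags/ folder automatically, and whatever the task prints or logs shows up under that task's logs in the UI.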
Project Structure
airflow-docker/
├── dags/                  # Airflow DAG (main logic)
├── scripts/               # Python file to fetch + insert data
├── docker-compose.yaml    # Setup for Airflow + Postgres
├── logs/                  # Logs generated by Airflow
└── plugins/               # (Optional) Airflow plugins
You can check the full repo here:
GitHub: mohhddhassan/covid-data-pipeline
Key Learnings
- How to build and run a simple Airflow DAG
- Using Docker to spin up services like Postgres & Airflow
- How Python connects to a DB and inserts structured data
- Observing how tasks are logged, retried, and managed in Airflow
This small project gave me confidence in how the core parts of a pipeline talk to each other.
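On the retry point specifically: Airflow handles retries through default_args on the DAG. A tiny sketch, where the numbers are just examples rather than what the repo necessarily uses:

from datetime import timedelta

default_args = {
    "owner": "airflow",
    "retries": 2,                          # re-run a failed task up to 2 more times
    "retry_delay": timedelta(minutes=5),   # wait 5 minutes between attempts
}

# Passed into the DAG, e.g. DAG(..., default_args=default_args).
# Each attempt gets its own log, viewable per try in the Airflow UI.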
Sample Output from API
Here's a snippet of the JSON response from the API:
{
  "cases": 708128930,
  "deaths": 7138904,
  "recovered": 0,
  "updated": 1717689600000
}
And here's a sample SQL insert triggered via Python:
INSERT INTO covid_stats (date, total_cases, total_deaths, recovered)
VALUES ('2025-06-06', 708128930, 7138904, 0);
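For reference, here's roughly how that fetch-and-insert step could look in Python. This is a sketch under a few assumptions: the global totals come from disease.sh's /v3/covid-19/all endpoint, the table matches the columns above, and the connection settings are placeholders rather than the repo's actual config:

from datetime import datetime, timezone

import psycopg2
import requests

API_URL = "https://disease.sh/v3/covid-19/all"  # global totals endpoint

def fetch_and_store():
    # 1. Pull today's global stats
    data = requests.get(API_URL, timeout=30).json()

    # 2. The "updated" field is epoch milliseconds; convert it to a date
    snapshot_date = datetime.fromtimestamp(data["updated"] / 1000, tz=timezone.utc).date()

    # 3. Insert into Postgres using a parameterized query
    conn = psycopg2.connect(
        host="postgres", dbname="airflow", user="airflow", password="airflow"  # placeholder creds
    )
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO covid_stats (date, total_cases, total_deaths, recovered)
            VALUES (%s, %s, %s, %s);
            """,
            (snapshot_date, data["cases"], data["deaths"], data["recovered"]),
        )
    conn.close()

Using a parameterized query (the %s placeholders) keeps the insert safe and lets psycopg2 handle the type conversion.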
What's Next?
I'm planning to:
- Add deduplication logic so it doesn't insert the same data daily (sketched below)
- Maybe create a Streamlit dashboard on top of the database
- Play with sensors, templates, and XComs in Airflow
- Extend the pipeline with ClickHouse for OLAP-style analytics
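One straightforward way to handle the deduplication item (a common option, not the only one) is a unique constraint on date plus ON CONFLICT DO NOTHING, so re-running the pipeline for the same day becomes a no-op instead of a duplicate row. A rough sketch, reusing the placeholder connection settings from the earlier snippet:

import psycopg2

# One-time schema change (hypothetical migration step): make date unique
ADD_UNIQUE = "ALTER TABLE covid_stats ADD CONSTRAINT covid_stats_date_key UNIQUE (date);"

# Daily insert that silently skips dates already loaded
UPSERT = """
    INSERT INTO covid_stats (date, total_cases, total_deaths, recovered)
    VALUES (%s, %s, %s, %s)
    ON CONFLICT (date) DO NOTHING;
"""

def insert_once(row):
    # row = (snapshot_date, total_cases, total_deaths, recovered), as in the earlier sketch
    conn = psycopg2.connect(host="postgres", dbname="airflow", user="airflow", password="airflow")  # placeholders
    with conn, conn.cursor() as cur:
        cur.execute(UPSERT, row)
    conn.close()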
Why You Should Try Something Like This
If you're learning data engineering:
- Start small, but make it real
- Use public APIs to practice fetching and storing data
- Wrap it with orchestration + containerization; it's closer to the real thing
This project taught me way more than passively following courses ever could.
About Me
Mohamed Hussain S
Associate Data Engineer Intern
LinkedIn | GitHub
Learning in public, one pipeline at a time.