Manual deployments are fine—until they aren't. As your application evolves, relying on local scripts or manual file uploads becomes a liability: it's slow, inconsistent, and prone to human error. In a production environment where uptime, reliability, and speed matter, you need something better.
That’s where CI/CD comes in.
Continuous Integration and Continuous Deployment (CI/CD) introduces automation and structure to your development workflow. With it, every change pushed to your repository can be automatically tested, built, and deployed—without manual steps. This reduces errors, accelerates delivery, and ensures your team spends less time deploying and more time building.
In this guide, we’ll walk through setting up a CI/CD pipeline for a Python backend application using GitHub Actions and deploying to Fly.io—a developer-friendly platform for running apps globally. You’ll learn how to automate builds, trigger deployments from code changes, and securely manage secrets.
By the end, you’ll have a fully functional deployment pipeline that builds confidence every time you push to main.
Prerequisites
Before you begin, make sure you have:
- A Fly.io account — fly.io
- Docker installed locally
- Python 3.8+
- Familiarity with Git, Docker, and CI/CD pipelines
Optional:
- A virtual environment tool (`venv`, `poetry`, etc.) for managing local Python dependencies
Table of Contents
- Cloning the Starter Repository
- Understanding the Python Application
- Reviewing the Dockerfile
- Configuring GitHub Actions for CI/CD
- Setting Up the Fly.io API Token
- Deploying and Testing the Application
- Conclusion
Cloning the Starter Repository
To begin, clone the project repository and navigate to the backend
directory, which contains the application we’ll be deploying.
```bash
git clone https://github.com/EphraimX/blbjzl-ai-accountability-application-github-actions.git
cd blbjzl-ai-accountability-application-github-actions/backend
```
The `backend/` directory contains:
- `server.py` — the main application entry point
- `requirements.txt` — Python dependencies
- `Dockerfile` — used to containerize the application
- `fly.toml` — Fly.io deployment configuration
- `.gitignore` — specifies files to exclude from version control
We’ll be focusing on this directory throughout the article. The GitHub Actions workflow will be added and configured later in the root of the repository.
Understanding the Python Application
The backend of this accountability AI application is built using FastAPI, a high-performance framework for Python. The app allows users to communicate with the AI, which helps them stay accountable by generating responses based on their input. Below is a breakdown of the key components used in the code:
1. Environment Setup
The application starts by loading environment variables using the `dotenv` library. This keeps sensitive information, like the Google Gemini API key, out of the source code.
To use the Gemini API, you need to set `GEMINI_API_KEY` in a `.env` file:

```
GEMINI_API_KEY=your-google-gemini-api-key
```
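Inside `server.py`, the loading step looks roughly like this (a minimal sketch; the exact variable names may differ from the repository):

```python
import os

from dotenv import load_dotenv

# Read key-value pairs from the local .env file into the process environment.
load_dotenv()

# Fetch the Gemini key; this is None if the variable is missing.
API_KEY = os.getenv("GEMINI_API_KEY")
```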
2. API Client Configuration
The application uses the Google Gemini API to generate responses to user input. The `genai` library from Google is used to interact with the API. You can get your API key from Google AI Studio.
The `genai.Client` is configured with the API key from the environment to authenticate requests:

```python
client = genai.Client(api_key=API_KEY)
```
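If you want to sanity-check the client before wiring up the app, the same SDK supports one-shot generation (illustrative only; the application itself uses chat sessions, shown later):

```python
# Quick connectivity check with the google-genai client configured above.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Reply with a short greeting.",
)
print(response.text)
```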
3. FastAPI and Middleware Setup
FastAPI is used to create and manage the application’s API. It handles HTTP requests and responses efficiently. To support cross-origin requests (such as when the frontend is hosted on a different domain), CORS middleware is added.
```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # You can specify your frontend URL here instead of "*"
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```
This middleware currently allows any domain to interact with the API. Changing `allow_origins` to a list of specific domains restricts access and improves security.
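For example, locking the API down to a single frontend might look like this (the URL below is a placeholder, not part of the repository):

```python
# Allow only one specific frontend origin; the URL is hypothetical.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://accountability-app.example.com"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```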
4. Session Management
The application needs to track each user's conversation with the AI. This is done by storing chat sessions in memory in the `user_chats` dictionary, where each session is identified by a unique `session_id`.
When a user starts a conversation, a new `session_id` is generated if one isn't provided.

```python
# In-memory store mapping session_id -> chat session.
user_chats: Dict[str, Any] = {}
```

Because this dictionary lives in process memory, sessions are lost whenever the app restarts or is redeployed.
5. Request and Response Models
The application uses Pydantic models to define the structure of data sent to and received from the API. This ensures that the incoming and outgoing data is properly validated.
The `ChatRequest` model validates the data sent by the user, while the `ChatResponse` model defines the structure of the AI's reply.
```python
from typing import Optional

class ChatRequest(BaseModel):
    user_input: str
    session_id: Optional[str] = None  # Optional: if not provided, a new session is created

class ChatResponse(BaseModel):
    reply: str
    session_id: str  # Returned to keep the session alive on the frontend
```
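Pydantic enforces these schemas automatically. As a quick illustration (not from the repository), constructing a request without the required field raises a validation error:

```python
from pydantic import ValidationError

# user_input is required, so an empty ChatRequest fails validation.
try:
    ChatRequest()
except ValidationError as err:
    print(err)

# A well-formed request passes, and session_id defaults to None.
req = ChatRequest(user_input="Keep me honest about my goals.")
print(req.session_id)  # None; the endpoint will generate one
```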
6. Chat Endpoint
The `/chat` POST endpoint receives the user's input and either creates a new chat session or continues an existing one. The user's message is sent to the Gemini API, which generates a response. The session ID is returned so the user can continue the conversation in future interactions.
```python
@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    # Create a new session when no valid session_id is supplied.
    if not request.session_id or request.session_id not in user_chats:
        session_id = str(uuid4())
        chat_session = client.chats.create(model="gemini-2.0-flash")
        user_chats[session_id] = chat_session
    else:
        # Continue an existing conversation.
        session_id = request.session_id
        chat_session = user_chats[session_id]

    # Forward the user's message to Gemini and return its reply.
    response = chat_session.send_message(request.user_input)
    return ChatResponse(reply=response.text, session_id=session_id)
```
Here, the AI responds based on the `user_input` sent to the `/chat` endpoint. If a valid `session_id` is provided, the conversation continues from its previous state; otherwise, a new session is created.
Testing the Application
To test the application locally:
- Install the necessary libraries by running `pip install -r requirements.txt`.
- Run the FastAPI application with `python server.py`.
The application will start, and you can access the API at `http://127.0.0.1:8000`. You can test the `/chat` endpoint using a tool like Postman or curl.
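If you prefer to stay in Python, here is a small test script using the `requests` library (it assumes the local run above is listening on port 8000; install `requests` if it isn't already available):

```python
import requests

BASE_URL = "http://127.0.0.1:8000"

# First message: no session_id, so the server creates a new session.
first = requests.post(
    f"{BASE_URL}/chat",
    json={"user_input": "Help me stay accountable to a daily writing habit."},
).json()
print(first["reply"])

# Follow-up: reuse the returned session_id to continue the conversation.
second = requests.post(
    f"{BASE_URL}/chat",
    json={
        "user_input": "What should I do if I miss a day?",
        "session_id": first["session_id"],
    },
).json()
print(second["reply"])
```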
Reviewing the Dockerfile
The Dockerfile defines the steps needed to create a Docker image for your Python backend application. It ensures that the application can run consistently in any environment by packaging it with all its dependencies. Here's a breakdown of the Dockerfile:
1. Base Image
```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.12-slim-bookworm
```
The `FROM` instruction sets the base image for the Docker container. Here, we use the official `python:3.12-slim-bookworm` image, which provides a minimal Python 3.12 environment on Debian's "bookworm" release. The "slim" variant keeps the image small by including only the essential components needed to run Python.
2. Installing Dependencies
```dockerfile
RUN apt-get update && apt-get install -y python3 python3-pip
```
The `RUN` instruction updates the package lists and installs Debian's `python3` and `python3-pip` packages inside the container. Strictly speaking, the official Python base image already ships with Python and `pip`, so this step is redundant here; it is kept to mirror the repository's Dockerfile, and you can remove it to slim the image further.
3. Setting the Working Directory
```dockerfile
WORKDIR /app
```
The `WORKDIR` instruction sets the working directory inside the container to `/app`, so all subsequent commands execute there. If the directory does not exist, Docker creates it.
4. Copying Dependencies
```dockerfile
COPY requirements.txt .
```
The `COPY` instruction copies the `requirements.txt` file from the host machine into the container's `/app` directory. This file lists the Python dependencies the application needs. Copying it before the rest of the code lets Docker cache the dependency-installation layer, so that layer only reruns when `requirements.txt` changes.
5. Installing Python Dependencies
```dockerfile
RUN pip install --no-cache-dir --upgrade -r requirements.txt
```
This `RUN` instruction installs the Python dependencies listed in `requirements.txt` using `pip`. The `--no-cache-dir` flag stops `pip` from keeping downloaded packages on disk, which reduces the image size. The `--upgrade` flag ensures the listed packages are brought up to date.
6. Copying the Application Code
```dockerfile
COPY . .
```
This `COPY` instruction copies the entire application code from the current directory on the host into the container's `/app` directory, including `server.py` and all other project files.
7. Exposing the Port
```dockerfile
EXPOSE 5500
```
The `EXPOSE` instruction documents that the application inside the container listens on port `5500`. It does not actually publish the port; it serves as documentation and makes it easier to map the container port to a host port at run time, for example with `docker run -p 5500:5500 <image-name>`.
8. Starting the Application
CMD ["uvicorn", "server:app", "--host", "0.0.0.0", "--port", "5500"]
The `CMD` instruction defines the command that runs when the container starts. Here, we use `uvicorn` to serve the FastAPI application:
- `server:app` refers to the `app` object inside the `server.py` file, which is the FastAPI instance.
- `--host 0.0.0.0` binds the application to all available network interfaces, making it accessible from outside the container.
- `--port 5500` sets the port on which the application will listen.
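As an aside, this CMD is the container equivalent of starting uvicorn from Python, which is presumably why `python server.py` works for local testing (a sketch; the actual block in `server.py` may use a different host or port):

```python
import uvicorn

if __name__ == "__main__":
    # Serve the FastAPI instance `app` defined in server.py.
    uvicorn.run("server:app", host="0.0.0.0", port=5500)
```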
Configuring GitHub Actions for CI/CD
This section explains how to automate deployment of the backend server to Fly.io using GitHub Actions. The workflow triggers every time changes to files under the `backend/` directory are pushed to the `main` branch.
To set this up, you need to create a workflow file in the following location of your repository:
```
.github/workflows/deploy.yml
```
GitHub Actions requires workflow files to be placed in this directory to function correctly. Here's a detailed breakdown of the workflow file:
1. Workflow Name
```yaml
name: Deploying Backend Server To Fly.io
```
The `name` field defines the workflow's name as it appears in the GitHub Actions tab, so a descriptive name like "Deploying Backend Server To Fly.io" makes its purpose obvious at a glance.
2. Triggering the Workflow
```yaml
on:
  push:
    paths:
      - 'backend/**'
    branches:
      - main
```
The `on` field defines the events that trigger the workflow. In this case:
- `push`: the workflow runs when commits are pushed to the repository.
- `paths`: the workflow runs only if the push changes files inside the `backend/` directory, preventing unnecessary deployments for unrelated changes.
- `branches`: the workflow runs only for pushes to the `main` branch.
3. Defining the Job
```yaml
jobs:
  deploy-to-fly-io:
    runs-on: ubuntu-latest
```
The `jobs` section defines the work to be executed. Here there is a single job, `deploy-to-fly-io`, which runs on the latest Ubuntu runner, as specified by `runs-on: ubuntu-latest`.
4. Checkout Repository
```yaml
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2
```
The first step checks out the repository using the `actions/checkout@v2` action, so subsequent steps can access the code and configuration at the commit that triggered the run. (Newer releases such as `actions/checkout@v4` exist; the repository pins `v2`, which still works but may log deprecation warnings.)
5. Setup Flyctl
```yaml
      - name: Setup Superfly
        uses: superfly/flyctl-actions/setup-flyctl@master
```
The next step installs and configures the Flyctl CLI using the `superfly/flyctl-actions/setup-flyctl` GitHub Action. Flyctl is the command-line interface for interacting with Fly.io, and this action ensures it is installed and ready for the deployment step.
6. Deploying to Fly.io
```yaml
      - name: Run Flyctl Deploy
        run: flyctl deploy backend --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
```
In this step, the actual deployment is performed with `flyctl deploy backend`, pointing Flyctl at the `backend/` directory containing the app's `fly.toml` and Dockerfile. The `--remote-only` flag tells Flyctl to build the Docker image on Fly.io's remote builders rather than on the GitHub runner, so the workflow needs no local Docker resources.
The `FLY_API_TOKEN` environment variable is supplied securely via GitHub Secrets, allowing the workflow to authenticate with Fly.io without exposing the token. This token must be saved in the repository's GitHub Secrets under the name `FLY_API_TOKEN`, as covered in the next section.
Setting Up the Fly.io API Token
To deploy your backend to Fly.io via GitHub Actions, you'll need an API token from Fly.io. Follow the steps below to generate your Fly.io API token and securely add it to your GitHub repository's secrets.
1. Sign in to Fly.io
If you haven't already signed up for Fly.io, go to Fly.io and create an account. Once you have an account, sign in to the Fly.io dashboard.
2. Generate the Fly.io API Token
- On the Fly.io dashboard, click on your Account at the top-right corner of the page.
- From the dropdown menu, select Access Tokens.
- You’ll be directed to the Tokens page. Click on the Create Token button.
- In the "Create Token" dialog:
- Set the Token Type to Org Deploy Token.
- Give the token a name (e.g., "GitHub Actions Deployment").
- Set an expiry for the token (optional, but recommended for security).
- Choose the organization that you want the token to be associated with (this should be your default organization).
Once the token is generated, copy it — you'll need it for the next step.
3. Add the API Token to GitHub Secrets
Now that you have the Fly.io API token, it needs to be added to your GitHub repository's secrets so that it can be accessed securely during the CI/CD process.
Follow these steps to add the token to your repository:
- Go to your GitHub repository.
- Click on the Settings tab in the top menu.
- On the left-hand sidebar, scroll down and click on Secrets and variables, then select Actions.
- Click the New repository secret button.
- In the Name field, enter `FLY_API_TOKEN` (this is the name the GitHub Actions workflow looks for).
- In the Value field, paste the API token you copied from Fly.io.
- Click Add secret to save it.
Once you've added the API token to GitHub Secrets, it will be securely available for use in your GitHub Actions workflow.
Deploying and Testing the Application
Once you've completed all the previous steps and pushed the changes to your GitHub repository, the GitHub Actions workflow will be triggered automatically. Here's what happens next:
- Push Changes to GitHub: Make sure you've committed your changes to the `main` branch and pushed them to your repository. This automatically triggers the CI/CD pipeline defined in the GitHub Actions workflow file.
- Monitor the GitHub Actions Workflow:
- To monitor the progress of the deployment, head over to the Actions tab in your GitHub repository. Here, you will see a list of workflows that have run, including the one you just triggered.
- Click on the most recent workflow run to view detailed logs of each step. This helps you identify any potential issues during the deployment and ensure that the process completes successfully.
- Access the Fly.io Dashboard:
- After successful deployment, head over to the Fly.io dashboard. Here, you can monitor your application, check its status, and view any logs.
- Fly.io provides real-time insights into your application’s health and performance, including CPU usage, request logs, and more.
- Test the Application:
- Once deployed, you can test your backend application by sending requests to the public Fly.io URL associated with your application. You should see the FastAPI server running and responding to API requests.
- The app listens on the port specified in the `Dockerfile` (`5500`); Fly.io's proxy forwards public HTTPS traffic to that internal port (as configured in `fly.toml`), so you can call the endpoints directly at the public URL.
You can find your Fly.io application URL on the Overview page of the Fly.io dashboard; it should look something like this:

```
https://your-app-name.fly.dev
```
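For a quick smoke test of the deployed service from Python, substitute your actual Fly.io hostname below:

```python
import requests

# Replace with your app's public Fly.io URL.
BASE_URL = "https://your-app-name.fly.dev"

resp = requests.post(
    f"{BASE_URL}/chat",
    json={"user_input": "Give me one accountability tip for today."},
)
resp.raise_for_status()  # fail loudly if the deployment isn't healthy
print(resp.json()["reply"])
```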
Conclusion
In this guide, we've walked through the entire process of setting up a CI/CD pipeline to deploy a Python backend application to Fly.io using GitHub Actions.
We covered:
- How to set up the backend application with the necessary libraries and environment.
- How to containerize the application using Docker to ensure consistent deployment.
- The creation of a GitHub Actions workflow that automates the deployment process to Fly.io.
- How to securely store and use your Fly.io API token in GitHub Secrets for authentication during deployments.
With this setup in place, you get smooth, automated deployments every time changes are pushed to the `main` branch. This approach not only saves time but also keeps your deployments repeatable and consistent.
If you found this guide helpful, feel free to connect with me on LinkedIn for updates on future articles, and check out some of my other projects and works on my portfolio.