
Deploying an MCP server on AWS

3 min read · May 15, 2025

The goal of this post is to demonstrate how to build and deploy a Model Context Protocol (MCP) server on AWS using a FastAPI-based API.

Repository on GitHub: https://github.com/alejofig/mcp-berghain

Solution Architecture

  1. Data Extraction: Using Firecrawl MCP to scrape data from berghain.berlin.
  2. Data Persistence: Storing events in AWS DynamoDB.
  3. REST API: Exposing the data through FastAPI.
  4. MCP Publication: Publishing endpoints as MCP tools using FastMCP.
  5. Deployment: Using Docker and AWS App Runner to deploy the solution.
  6. AI Agent: Querying the MCP through PydanticAI using natural language.

Data Extraction with Firecrawl MCP

Firecrawl MCP simplifies web scraping by providing structured JSON output through an API.

Configuration example:

{
  "firecrawl-mcp": {
    "command": "npx",
    "args": ["-y", "firecrawl-mcp"],
    "env": { "FIRECRAWL_API_KEY": "fc-..." }
  }
}

After running the script, JSON files containing the event data are generated, ready to be processed.

Storing Data in DynamoDB

Data is stored in a DynamoDB table using id as the partition key.

Creating the table:

python create_dynamodb_table.py --table berghain --region us-east-1

Table schema:

import boto3

# Connect to DynamoDB in the chosen region
dynamodb = boto3.resource("dynamodb", region_name=region)

# `id` is the partition key, stored as a string
table = dynamodb.create_table(
    TableName=table_name,
    KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
)

Loading JSON data into the table:

python import_events.py --path ./events --table berghain --region us-east-1
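Internally, the import boils down to walking the JSON files and writing records in batches. A minimal sketch of that loop, assuming each file holds a list of events and using boto3's batch_writer (the actual script in the repository may differ):

import json
from pathlib import Path

import boto3

def import_events(path: str, table_name: str, region: str) -> None:
    table = boto3.resource("dynamodb", region_name=region).Table(table_name)
    # batch_writer buffers put_item calls and flushes them in batches of up to 25
    with table.batch_writer() as batch:
        for file in Path(path).glob("*.json"):
            for event in json.loads(file.read_text()):
                batch.put_item(Item=event)

import_events("./events", "berghain", "us-east-1")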

Each record contains the following fields:

  • id (UUID)
  • date
  • title
  • location
  • artists (array)
  • url
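For reference, a record with those fields might look like this (placeholder values, not real event data):

{
  "id": "3f2b8c1e-9d4a-4f6b-8c2e-1a2b3c4d5e6f",
  "date": "2025-03-01",
  "title": "Example Night",
  "location": "Berghain",
  "artists": ["Artist A", "Artist B"],
  "url": "https://www.berghain.berlin/en/program/..."
}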

Building the API with FastAPI

The API is built using FastAPI to expose the event data with filtering and pagination support.

Main Endpoints:

  • /{event_id} → Event by ID.
  • /year/{year}/month/{month} → Events by year and month.
  • /location/{location} → Events by location.
  • /artist/{artist} → Events by artist.
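Assuming the router is mounted under the /api/v1 prefix (as the route maps in the FastMCP section suggest) and the API is running locally on port 8000, a request looks like:

curl "http://localhost:8000/api/v1/year/2025/month/3"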

Example: Get Events by Month

from typing import List
from fastapi import Depends

@router.get("/year/{year}/month/{month}", response_model=List[Event])
async def get_events_by_month(year: int, month: int, repo: EventRepository = Depends()):
    events = await repo.get_by_year_month(year, month)
    return events

The repository layer abstracts DynamoDB access and handles pagination using limit and last_evaluated_key, as sketched below.
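A minimal sketch of that pagination pattern, using the synchronous boto3 client for brevity (the real repository may use a different query strategy, e.g. a GSI instead of a filtered scan):

import boto3
from boto3.dynamodb.conditions import Attr

class EventRepository:
    def __init__(self, table_name: str = "berghain", region: str = "us-east-1"):
        self.table = boto3.resource("dynamodb", region_name=region).Table(table_name)

    async def get_by_year_month(self, year: int, month: int, limit: int = 50):
        # Page through results with Limit and ExclusiveStartKey (last_evaluated_key)
        prefix = f"{year:04d}-{month:02d}"
        items, last_evaluated_key = [], None
        while True:
            kwargs = {"FilterExpression": Attr("date").begins_with(prefix), "Limit": limit}
            if last_evaluated_key:
                kwargs["ExclusiveStartKey"] = last_evaluated_key
            page = self.table.scan(**kwargs)
            items.extend(page["Items"])
            last_evaluated_key = page.get("LastEvaluatedKey")
            if not last_evaluated_key:
                break
        return items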

Publishing the API as MCP Tools with FastMCP

Using FastMCP, the API endpoints are exposed as MCP tools, making them easily accessible by AI agents.

Configuring the endpoints as MCP tools:

custom_maps = [
    RouteMap(methods=["GET"], pattern=r"^/api/v1/year/.*", route_type=RouteType.TOOL),
    RouteMap(methods=["GET"], pattern=r"^/api/v1/location/.*", route_type=RouteType.TOOL),
    RouteMap(methods=["GET"], pattern=r"^/api/v1/artist/.*", route_type=RouteType.TOOL),
]

import asyncio

if __name__ == "__main__":
    async def main():
        mcp = FastMCP.from_fastapi(app=app, route_maps=custom_maps)
        await check_mcp(mcp)
        await mcp.run_async(transport="sse", host="0.0.0.0", port=8000)  # use the async version

    asyncio.run(main())

This enables the API to integrate with any agent that supports the MCP standard. You can also set the operation_id parameter on a route to control the generated tool's name, as in the example below.
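A hypothetical route (the get_by_artist helper is assumed here; operation_id itself is a standard FastAPI parameter, which FastMCP picks up as the tool name):

@router.get("/artist/{artist}", response_model=List[Event], operation_id="get_events_by_artist")
async def get_events_by_artist(artist: str, repo: EventRepository = Depends()):
    return await repo.get_by_artist(artist)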

Deployment on AWS App Runner

The solution is containerized and deployed using AWS App Runner, a fully managed service for running containerized web apps.

Deployment Steps:

  1. Build and push the Docker image to Amazon ECR (example commands below).
  2. Deploy the image using AWS App Runner with the public endpoint configured.
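Assuming us-east-1 and an ECR repository named mcp-berghain (replace <account-id> with your AWS account ID), the push step typically looks like:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
docker build -t mcp-berghain .
docker tag mcp-berghain:latest <account-id>.dkr.ecr.us-east-1.amazonaws.com/mcp-berghain:latest
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/mcp-berghain:latest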

You can also use the main.tf provided in the repository to deploy App Runner along with an IAM role that grants DynamoDB access.

Consuming the MCP Using an AI Agent (PydanticAI)

Once deployed, you can query the MCP directly from an AI agent using natural language.

from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP

agent = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You are a helpful assistant that can answer questions and help with tasks.",
    mcp_servers=[MCPServerHTTP(url="https://mcp-berghain.alejofig.com/sse")]
)

You can test the live solution here:
https://mcp-berghain.alejofig.com/sse

user_query = "I want to know the events of berghain for march 2025"
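Running the query requires the MCP connection to be open. A minimal sketch, assuming PydanticAI's run_mcp_servers context manager:

import asyncio

async def main():
    # Opens the SSE connection to the MCP server for the duration of the run
    async with agent.run_mcp_servers():
        result = await agent.run(user_query)
    print(result.output)  # older PydanticAI releases expose this as result.data

asyncio.run(main())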

Conclusion

This solution demonstrates how to efficiently orchestrate data extraction, API development, and AI integration using a fully serverless and scalable approach on AWS.

Key outcomes:

  • Automated web data extraction with Firecrawl MCP.
  • Efficient data storage and querying with AWS DynamoDB.
  • MCP-compatible API publication with FastMCP.
  • Seamless deployment using Docker and AWS App Runner.
  • Natural language interaction with APIs through AI agents using PydanticAI.

This architecture is reusable for any use case that involves exposing structured data to AI models, enabling intelligent, conversational access to real-time information.

Written by Daniel Alejandro Figueroa Arias

I love data. I'm building products on AWS related to data every day. www.alejofig.com