Dandi Pangestu

How to Build an AI Reporting App with Python, LLM, and Model Context Protocol (MCP)

Introduction

In recent years, AI-powered applications have revolutionized the tech industry by automating complex workflows and introducing intelligence at scale. Among these advancements, AI-based reporting systems have emerged as a transformative tool, enabling organizations to extract meaningful insights from data with unprecedented ease and flexibility. These systems are reshaping how businesses approach data analysis, offering intuitive solutions for generating reports tailored to specific requirements.

Two pivotal technologies drive the success of these systems: Large Language Models (LLMs) and the Model Context Protocol (MCP). LLMs, such as Anthropic's Claude models, excel at processing and generating human-like text, making them ideal for interpreting user queries and producing actionable analytical insights. MCP, on the other hand, provides a standard protocol for interaction between AI models and external systems, ensuring that data context is preserved and effectively leveraged throughout the analysis process. Together, LLMs and MCP form the backbone of modern AI-driven data solutions.

In this article, we will explore a hands-on mini-project that demonstrates how to build an AI-powered reporting application. This project empowers users to generate flexible, criteria-driven reports effortlessly. By simply typing their reporting criteria in natural language, users can tap into the power of an LLM (Anthropic) and MCP servers to produce analytical results and visualizations automatically. The system integrates two MCP servers:

  • A Postgres MCP server for querying data sources.
  • A custom MCP server for generating visualizations.

This architecture enables dynamic, user-driven reporting while minimizing manual setup, showcasing how AI can simplify complex data analysis workflows. Whether you're a data enthusiast or a developer, this project offers insights into leveraging Python, LLM, and MCP to create smarter, more efficient reporting solutions.

You can find the complete code for this project in the GitHub repository.

Architecture

High Level Architecture

The architecture of the AI-based reporting app is designed to seamlessly integrate user queries, AI-powered analysis, and dynamic data visualization. At the core of the system is the App, which acts as the central orchestrator. The workflow begins with users submitting a query, which is processed by the app through the following key steps:

  1. Connection to MCP Servers:

    The app establishes connections with two specialized MCP servers:

    • Postgres MCP Server for querying structured data sources.
    • Visualization MCP Server for creating and managing visual representations of the data.
  2. Tool Integration:

    Tools from the MCP servers are dynamically added to the app, enabling flexibility in data manipulation and visualization.

  3. Query to LLM:

    The app sends the user query to an LLM (Anthropic) for advanced natural language processing and analysis.

  4. LLM Tool Calls:

    If the LLM identifies the need for external tools, it makes a call to the appropriate MCP server (e.g., for data queries or visualizations).

  5. Response Generation:

    The app consolidates the LLM’s output and any results from the MCP servers, returning a comprehensive response to the user.

Here is the flow diagram:

Flow Diagram

Prerequisites

Before starting this project, ensure that you have the following tools and resources set up:

  • Python v3.12 or later: The application is built using Python, so you'll need version 3.12 or later installed to take full advantage of its features and compatibility.
  • Postgres Database: The reporting app relies on a Postgres database as the data source. Make sure to have Postgres installed and running on your system.
  • Postgres MCP Server: This project uses a Postgres MCP Server to query data from the database. MCP (Model Context Protocol) enhances the context management capabilities of the app by allowing dynamic interactions between the app and external data sources.
  • Anthropic API Key: The app integrates with Anthropic's LLM (Large Language Model) for natural language understanding and data analysis. You’ll need an API key from Anthropic to access their services.
  • Node v23.10 or later: Node.js is required to set up and run the Postgres MCP Server. Use a recent version to avoid compatibility issues.

Additionally, an example table schema is provided in the repository to serve as the data source for reporting. You can find the schema here. To get started, execute the SQL script included in the repository to set up the table in your Postgres database. This table will act as the example dataset for the reporting app.

How to Run

Follow these simple steps to set up and run the application:

1. Clone the Repository

First, clone the project repository to your local machine and navigate into the project directory:

$ git clone https://github.com/Dandi-Pangestu/ai-based-reporting-app.git
$ cd ai-based-reporting-app

2. Install Dependencies

Ensure you have Python installed, then use pip to install the required dependencies for the project:

$ pip install .

This will install all the necessary libraries and dependencies listed in the project.

3. Set Up Environment Variables

The application relies on specific environment variables to connect to external services like the database, Anthropic API, and visualization server. To configure these:

  1. Navigate to the api folder and locate the .env.example file.
  2. Copy the .env.example file to a new .env file:
   $ cp api/.env.example api/.env
  3. Open the .env file and fill out the following variables based on your environment:
   ANTHROPIC_API_KEY=your_anthropic_api_key
   DATABASE_HOST=your_database_host
   DATABASE_USERNAME=your_database_username
   DATABASE_PASSWORD=your_database_password
   DATABASE_NAME=your_database_name
   DATABASE_PORT=your_database_port
   VISUALIZATION_SERVER_PATH=your_visualization_server_path

Replace placeholders such as your_anthropic_api_key with the actual API key specific to your setup, and update your_visualization_server_path with the absolute path to the visualization_server.py file, which is located in the mcp_server folder of your project directory.
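For instance, the app can assemble its Postgres connection string from these variables at startup. This is a stdlib-only sketch under the assumption that the .env values are already exported (the real app may load api/.env with python-dotenv first); the defaults are illustrative:

```python
# Sketch: building a Postgres DSN from the variables in api/.env.example.
# Assumes the values are exported in the environment; defaults are illustrative.
import os

def database_url() -> str:
    """Assemble a postgresql:// DSN from the environment variables."""
    return (
        "postgresql://"
        f"{os.getenv('DATABASE_USERNAME', 'postgres')}:"
        f"{os.getenv('DATABASE_PASSWORD', '')}@"
        f"{os.getenv('DATABASE_HOST', 'localhost')}:"
        f"{os.getenv('DATABASE_PORT', '5432')}/"
        f"{os.getenv('DATABASE_NAME', 'reporting')}"
    )
```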

4. Run the Application

Start the application using the following command:

$ uvicorn main:app --reload

This command runs the application using Uvicorn, a fast ASGI server, with auto-reload enabled for development.

5. Access the Application

Once the server is running, you can access the API on your local machine by navigating to:

http://localhost:8000

The application will now be live and ready to process user queries for generating reports.

Testing the Application

Below are the key API endpoints available in the application:

1. Get Available Servers

Retrieve the list of MCP servers connected to the application.

curl --location 'http://localhost:8000/servers'

Response:

Response Available Servers

2. Get Available Tools

Retrieve the list of tools dynamically added to the application from MCP servers.

curl --location 'http://localhost:8000/tools'

Response:

Response Available Tools

3. Query the AI Reporting App

Send a query to the application to generate results and visualizations based on the specified criteria.

curl --location 'http://localhost:8000/query' \
--header 'Content-Type: application/json' \
--data '{
    "query": "List the top 3 organization id with the highest average messages count from metrics_rooms. Please generate chart visualization with type bar and also please give me analysis for that result."
}'
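For context, the LLM typically satisfies a request like this by issuing a SQL query through the Postgres MCP server's query tool. The statement below is a plausible reconstruction; the column names are assumptions, not the repository's exact schema:

```python
# Roughly the SQL the LLM might issue via the Postgres MCP server's query tool.
# Column names are assumptions based on the request, not the repo's schema.
SQL = """
SELECT organization_id, AVG(messages_count) AS avg_messages
FROM metrics_rooms
GROUP BY organization_id
ORDER BY avg_messages DESC
LIMIT 3;
"""
```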

Response:

Response Query

Response base64

In the /query endpoint, one of the outputs is a base64-encoded chart visualization. It is embedded as a base64 string in the response and represents a chart (e.g., a bar chart or line chart) generated from your query. To view it, paste the base64 string into an online base64 image decoder, for example:

Base64 Decoder
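If you prefer to stay in Python, a small stdlib-only helper can write the decoded chart to disk instead of using an online decoder (the function name and default path are just illustrative):

```python
# Stdlib-only alternative to an online decoder: save the chart locally.
import base64

def save_chart(b64: str, path: str = "chart.png") -> None:
    """Decode the base64 string from the /query response and write it as a PNG."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64))
```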

The /query endpoint also provides an AI-generated analysis that summarizes trends, highlights key findings, and offers observations based on the query results, making the report easier to understand.

Analysis Result

Analysis of the Results:

1. Organization with ID "51160f9a-253c-4db2-a564-80af1321874f" has the highest average message count of 25 messages, showing the most active communication within their rooms.
2. The second highest is organization "b223aab1-68cc-49e6-9b52-b459a0c917e4" with an average of 20 messages, showing moderate but still significant communication activity.
3. The third organization "b462ea35-c804-4cc6-b7a5-535023924ee1" averages 18 messages, which is still considerable but shows less communication intensity compared to the top two.

Key Observations:
- There is a clear hierarchy in communication intensity among these organizations
- The difference between the highest and lowest is 7 messages on average
- All three organizations maintain relatively high message counts, suggesting active engagement in their communication rooms
- The distribution shows a gradual decrease in average message counts rather than sharp drops, indicating consistent usage patterns across these top organizations

This data suggests that these organizations have established effective communication practices, with the top organization particularly standing out in terms of message exchange frequency.

Conclusion

AI-powered reporting applications are transforming the way organizations analyze and interpret their data. By combining the capabilities of Large Language Models (LLMs) and the Model Context Protocol (MCP), we can create flexible, user-driven systems that simplify complex workflows and deliver meaningful insights effortlessly.

In this article, we explored the architecture and functionality of an AI-based reporting app built with Python, Anthropic’s LLM, and MCP servers. We discussed how the application allows users to generate custom, criteria-driven reports by integrating a Postgres MCP server for querying data and a custom MCP server for creating visualizations. By walking through the process of running and testing the app, we demonstrated how this system leverages cutting-edge technologies to streamline reporting tasks.

Whether you are a developer looking to build similar applications or a data enthusiast exploring new tools for data analysis, this project serves as a practical example of how AI and modern protocols can be applied to solve real-world problems. The combination of Python, LLM, and MCP offers a powerful framework for building smarter, more intuitive solutions.


We hope this guide has provided valuable insights into the potential of AI in reporting systems. Feel free to explore the repository and experiment with the application to see its capabilities firsthand. Happy coding!
