This repository contains components for a generative AI project, including a chat application, client, and server.
This project demonstrates a generative AI system with a chat interface. It is divided into four main components:
- chat-app: The frontend chat application.
- mcp-client: The client that communicates with the server.
- mcp-server: The backend server handling AI generation.
- ollama: The Docker service that runs Ollama models.
The above diagram illustrates the interaction between the chat-app, mcp-client, mcp-server, and the Ollama service.
A user-friendly web interface for interacting with the generative AI. Built with modern web technologies. Check out README.md
Handles communication between the chat-app and the mcp-server. Responsible for sending user messages and receiving AI responses. Check out README.md
The backend service that processes requests and generates responses using AI models. Check out README.md
This is the Docker service used to run Ollama as a container; running Ollama via Docker is the recommended approach. Check out README.md
Clone the repository:
git clone https://github.com/vcse59/Generative-AI-MCP-Application.git
cd Generative-AI-MCP-Application
- Unix/Linux/macOS (bash/zsh/fish)
cd "$(git rev-parse --show-toplevel)"
- PowerShell (Windows)
cd (git rev-parse --show-toplevel)
- Command Prompt (cmd.exe on Windows)
for /f "delims=" %i in ('git rev-parse --show-toplevel') do cd "%i"
- Windows (Command Prompt):
python -m venv .venv
.venv\Scripts\activate
pip install poetry
- Windows (PowerShell):
python -m venv .venv
.venv\Scripts\Activate.ps1
pip install poetry
- Unix/Linux/macOS (bash/zsh):
python3 -m venv .venv
source .venv/bin/activate
pip install poetry
- Install dependencies for each component, each in a separate terminal (opened at the repository root) with the virtual environment active:
# Terminal 1: chat-app
cd chat-app
npm install
# Terminal 2: mcp-client
cd mcp-client
poetry install
# Terminal 3: mcp-server
cd mcp-server
poetry install
- Navigate to the mcp-server project and start the MCP server in a new terminal with the virtual environment active:
poetry run mcp-server
- Follow the instructions in the ollama component's README.md to start the Ollama service in Docker, in a new terminal (a generic sketch is shown after this list):
- Navigate to the MCP client root directory in a new terminal and configure the following environment variables for mcp-client (one way to set them is sketched after this list):
OLLAMA_LLM_MODEL_NAME=llama3.2:latest   # Can be changed
MCP_SERVER_ENDPOINT=http://127.0.0.1:8080/mcp
OLLAMA_API_URL=http://127.0.0.1:11434
- Navigate to the mcp-client project and start the MCP client in a new terminal with the virtual environment active:
poetry run uvicorn src.mcp_client.app:app --host 0.0.0.0 --port 8000 --reload
- Navigate to the chat-app project and start the chat app in a new terminal:
npm start
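For the Ollama step above, a minimal generic sketch (using the official ollama/ollama image and the model tag referenced in the mcp-client configuration; this is not necessarily the repository's own Docker setup) would be:

```bash
# Run the official Ollama image and expose its API on the default port 11434
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

# Pull the model referenced by the mcp-client configuration
docker exec ollama ollama pull llama3.2
```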
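For the environment-variable step above, how the values are supplied depends on your shell and on how mcp-client reads its configuration; as one sketch, in bash/zsh you could export them in the terminal where the client will be started:

```bash
# Export the mcp-client settings for the current terminal session
export OLLAMA_LLM_MODEL_NAME=llama3.2:latest
export MCP_SERVER_ENDPOINT=http://127.0.0.1:8080/mcp
export OLLAMA_API_URL=http://127.0.0.1:11434
```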
You can run all components using Docker Compose for easier setup and deployment.
- Navigate to the repository root directory.
- Build and start all services:
docker-compose up --build
- Stop the services:
docker-compose down
Make sure you have Docker and Docker Compose installed.
The docker-compose.yml file defines services for chat-app, mcp-client, and mcp-server. Each service is built from its respective directory.
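A few standard Docker Compose commands are also useful while working with this setup; the service name in the log command assumes it matches the component directory name, so check docker-compose.yml if yours differs:

```bash
# Build and start everything in the background
docker-compose up --build -d

# List the running services and their status
docker-compose ps

# Follow the logs of a single service (name assumed to be mcp-client)
docker-compose logs -f mcp-client
```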
- Open the chat-app in your browser (http://localhost:5000).
- Enter your message and interact with the AI, e.g.:
Add 10 and 299 numbers
- The mcp-client and mcp-server handle message routing and AI generation.
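If the chat does not respond, a couple of quick reachability checks can help; the ports come from the steps above, and the exact responses depend on each service (a plain request to the mcp-client root may return 404 even when the service is up, since its routes are not documented here):

```bash
# Ollama's API (default port 11434) normally answers with a short status message
curl http://127.0.0.1:11434

# The mcp-client is served by uvicorn on port 8000; any HTTP response means it is up
curl -I http://127.0.0.1:8000
```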
For information about security policies, reporting vulnerabilities, and best practices, please refer to the SECURITY.md document.
This project is licensed under the MIT License. See the LICENSE file for details.