Telegram AI Companion is a Rust-based application that integrates a Telegram bot with a local AI model (e.g., Mistral via LocalAI). The project demonstrates how to build an asynchronous web application using Actix Web, and it can operate without relying on external APIs like OpenAI.
- Telegram bot that receives messages from users
- Integration with a local language model via LocalAI
- Optional support for the OpenAI API: simply configure parameters in `.env`
- Asynchronous web server using Actix Web
- REST API
- Test coverage
- Ready to run in Docker containers
```
├── src/                     # Main application code
│   ├── handlers/            # HTTP handlers
│   ├── middleware/          # Middleware (e.g., authorization)
│   ├── models/              # Structs for Telegram, chat, etc.
│   ├── routes/              # Route definitions
│   └── services/            # Business logic and API integrations
├── tests/                   # Integration tests
├── images/                  # Dockerfiles
├── models/                  # Models for LocalAI (.gguf + .yaml)
├── volumes/                 # Config files for containers
├── docker-compose.yml       # Docker Compose configuration
├── .env                     # Environment variables
└── Cargo.toml / Cargo.lock  # Rust dependencies
```
By default, LocalAI is used. You can switch to OpenAI by changing `.env`:

```env
OPEN_AI_URL=http://localai:8080   # or https://api.openai.com
OPEN_AI_MODEL=mistral             # or gpt-3.5-turbo / gpt-4
OPEN_AI_API_KEY=your_openai_key   # required if using OpenAI
```
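As a sketch of how these settings could be consumed on the Rust side (the helper name and fallback defaults here are illustrative, not taken from the project's source):

```rust
use std::env;

// Hypothetical startup helper: reads the variables defined in .env
// above, falling back to the LocalAI defaults when they are unset.
fn ai_config() -> (String, String) {
    let url = env::var("OPEN_AI_URL")
        .unwrap_or_else(|_| "http://localai:8080".to_string());
    let model = env::var("OPEN_AI_MODEL")
        .unwrap_or_else(|_| "mistral".to_string());
    (url, model)
}

fn main() {
    let (url, model) = ai_config();
    println!("Using model '{model}' at {url}");
}
```

Switching providers is then purely a matter of editing `.env`; no code change is needed.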
1. Navigate to the `models/` directory.

2. Download the model (e.g., Mistral):

   ```shell
   wget https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf
   ```

3. Create a `mistral.yaml` config file, for example:
```yaml
name: mistral
backend: llama
parameters:
  model: mistral-7b-instruct-v0.2.Q4_K_M.gguf
  temperature: 0.7
  top_p: 0.9
  top_k: 40
n_ctx: 4096
```
- Docker + Docker Compose
- Rust (if running outside Docker)
```shell
git clone git@github.com:di-zed/tg-ai-companion.git
cd tg-ai-companion
```

Copy `.env.sample` to `.env`:

```shell
cp .env.sample .env
```

Edit the `.env` file to match your setup.

```shell
cp volumes/root/.bash_history.sample volumes/root/.bash_history
```

```shell
docker-compose up --build
```
Once running, the app will be available at `localhost:80`, and LocalAI at `localhost:8080`.
Docker Compose runs two containers:

- `rust-tac`: the Rust application
- `localai-tac`: the language model server

They communicate via an internal Docker network. The Rust app reaches LocalAI at `http://localai:8080` (configured via `OPEN_AI_URL` in `.env`).
Docker volumes:

- `./models`: contains `.gguf` and `.yaml` model files for LocalAI
- `./volumes/`: used for bash history, autocomplete, and the cargo registry cache
```shell
docker-compose exec rust bash
cargo run
```

Or build a release binary and run it:

```shell
cargo build --release
./target/release/tg_ai_companion
```
- Accepts updates from Telegram (Webhook)
- Processes incoming messages and replies back via Telegram API
You must configure the Telegram webhook for your bot.
Sample Telegram request body:

```json
{
  "update_id": 123456789,
  "message": {
    "message_id": 1,
    "chat": {
      "id": 987654321
    },
    "text": "Hello bot"
  }
}
```
The bot response will be sent back to the user via the Telegram API.
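The reply is addressed using the `chat.id` from the incoming update and sent via Telegram's `sendMessage` method. A minimal std-only sketch (the function names are hypothetical, and a real implementation would use serde_json for proper escaping):

```rust
// sendMessage is the real Telegram Bot API method; chat_id comes
// from the "chat" object in the sample update above.
fn send_message_endpoint(bot_token: &str) -> String {
    format!("https://api.telegram.org/bot{bot_token}/sendMessage")
}

fn reply_body(chat_id: i64, text: &str) -> String {
    // Naive quote/backslash escaping for illustration only;
    // serde_json would normally handle this.
    let escaped = text.replace('\\', "\\\\").replace('"', "\\\"");
    format!("{{\"chat_id\": {chat_id}, \"text\": \"{escaped}\"}}")
}

fn main() {
    println!("{}", reply_body(987654321, "Hello back!"));
}
```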
- Accepts JSON with a `prompt`
- Returns a model response (LocalAI or OpenAI)
- Can be used directly (outside of Telegram), for example in custom UIs or API clients
- Requires a Bearer token in the `Authorization` header
Example:

```json
{
  "prompt": "Hi, who are you?"
}
```
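A sketch of the request an API client would build: a Bearer token in the `Authorization` header plus the JSON body above (std-only and illustrative; a real client would use an HTTP library such as reqwest and serde_json):

```rust
// Builds the Authorization header value and the JSON body the
// prompt endpoint expects. Naive escaping kept for illustration.
fn chat_request(token: &str, prompt: &str) -> (String, String) {
    let auth = format!("Bearer {token}");
    let escaped = prompt.replace('\\', "\\\\").replace('"', "\\\"");
    let body = format!("{{\"prompt\": \"{escaped}\"}}");
    (auth, body)
}

fn main() {
    let (auth, body) = chat_request("my-secret-token", "Hi, who are you?");
    println!("{auth}\n{body}");
}
```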
You can use either LocalAI or OpenAI, depending on your configuration.
```shell
docker-compose exec rust bash
cargo test
```
Test coverage includes:

- `Telegram` and `Chat` handlers
- `ChatApi` and `TelegramApi` services
- External API integration using `httpmock`
Want to try the bot locally and see it in action? Follow these steps:
Follow the setup instructions given above.

By default, the bot listens on `http://localhost:80`.
Telegram needs a publicly accessible URL for webhook updates. Use ngrok to create a secure tunnel.
1. Download and install ngrok from https://ngrok.com/download

2. Start a tunnel forwarding your local port (default 80):

   ```shell
   ngrok http 80
   ```

3. Copy the generated HTTPS forwarding URL (e.g. `https://123-456-789.ngrok-free.app`)
Use your bot token (get it from BotFather) and set the webhook URL:
1. Go to Telegram and find the bot @BotFather.

2. Send the command `/newbot`.

3. Enter a name and a username (the username must end in `bot`).

4. You will receive a `BOT_TOKEN` that looks like `123456789:AAH6kDkKvkkkT-PWTwMg6cYtHEb3vY_tS1k`. Save it to the `.env` file in the `TELEGRAM_BOT_TOKEN` parameter.

5. Request that the webhook be installed with the following command:

   ```shell
   curl -X POST "https://api.telegram.org/bot<TELEGRAM_BOT_TOKEN>/setWebhook" \
     -H "Content-Type: application/json" \
     -d '{"url": "https://YOUR_NGROK_URL/telegram/webhook"}'
   ```
Replace `YOUR_NGROK_URL` with your ngrok HTTPS URL and `<TELEGRAM_BOT_TOKEN>` with your Telegram bot token.
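The same call can also be assembled programmatically; the Telegram Bot API accepts `setWebhook` parameters as a query string as well as JSON. A small illustrative helper (not part of the project's code; assumes the URL needs no percent-encoding):

```rust
// Builds the setWebhook URL equivalent to the curl command above,
// passing the target URL as a query parameter instead of JSON.
fn set_webhook_url(bot_token: &str, public_url: &str) -> String {
    format!(
        "https://api.telegram.org/bot{bot_token}/setWebhook?url={public_url}/telegram/webhook"
    )
}

fn main() {
    println!(
        "{}",
        set_webhook_url("<TELEGRAM_BOT_TOKEN>", "https://123-456-789.ngrok-free.app")
    );
}
```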
Send messages to your bot in Telegram. Your bot will respond using the AI chat API.
- Add conversation memory support
- Create a web interface
This project is licensed under the MIT License. See LICENSE.
- LocalAI for providing an excellent OpenAI-compatible alternative