Hey dev.to!
Recently I built a small but fun pet project: Telegram AI Companion.
It's a Telegram bot that chats with you using a local LLM via LocalAI.
No OpenAI, no clouds: everything runs on your own machine!
The goal? Not to reinvent AI, but to explore Rust, async programming, the Telegram API, and local LLMs. Think of it as a "developer's companion bot".
What It Can Do
- Replies to any message in Telegram
- Works with LocalAI (or OpenAI if you want)
- Runs via Docker + Docker Compose
- Written in Rust with Actix Web
- Has a REST API (/chat) so you can hook up any UI
- Includes tests and a clean project structure
How It Works
Overview
- User sends a message to the Telegram bot
- Telegram calls our webhook (/telegram/webhook)
- The Rust app sends the prompt to LocalAI
- LocalAI returns a reply, which the app sends back to the user (see the sketch below)
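To make the flow concrete, here's a minimal sketch of the webhook handler in Actix Web. The structs mirror just the Telegram payload fields we need, and ask_llm / send_telegram_reply are hypothetical stubs standing in for the repo's real LocalAI and Bot API clients:

use actix_web::{post, web, App, HttpServer, Responder};
use serde::Deserialize;

// Only the fields we need from Telegram's update payload.
#[derive(Deserialize)]
struct Chat { id: i64 }

#[derive(Deserialize)]
struct Message { chat: Chat, text: Option<String> }

#[derive(Deserialize)]
struct Update { message: Option<Message> }

// Hypothetical stubs; the real clients talk to LocalAI and the Bot API.
async fn ask_llm(prompt: &str) -> String {
    format!("echo: {prompt}")
}

async fn send_telegram_reply(chat_id: i64, text: &str) {
    println!("-> chat {chat_id}: {text}");
}

#[post("/telegram/webhook")]
async fn webhook(update: web::Json<Update>) -> impl Responder {
    if let Some(Message { chat, text: Some(text) }) = &update.message {
        // Forward the user's text to the LLM, then send the answer back.
        let reply = ask_llm(text).await;
        send_telegram_reply(chat.id, &reply).await;
    }
    "ok" // Telegram just needs a 200 OK
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(webhook))
        .bind(("0.0.0.0", 80))?
        .run()
        .await
}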
Tech Stack
- Rust: strict but powerful
- Actix Web: a high-performance async framework
- Docker & Compose: clean and reproducible
- LocalAI: a local alternative to OpenAI that supports GGUF/LLaMA models
- Optional: OpenAI support via .env
Quickstart
Clone the repo:
git clone https://github.com/di-zed/tg-ai-companion
cd tg-ai-companion
Download a model (e.g., Mistral 7B) and configure:
cd models/
wget https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf
Create mistral.yaml:
name: mistral
backend: llama
parameters:
  model: mistral-7b-instruct-v0.2.Q4_K_M.gguf
  temperature: 0.7
  top_p: 0.9
  top_k: 40
  n_ctx: 4096
Point the app at LocalAI in .env (or swap in your OpenAI endpoint and key instead):
OPEN_AI_URL=http://localai:8080
OPEN_AI_MODEL=mistral
OPEN_AI_API_KEY=your_openai_key
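Under the hood, the app just speaks the OpenAI wire format, which is why swapping backends is a one-line .env change. Here's roughly what a request against LocalAI's OpenAI-compatible chat-completions route looks like from Rust; an illustrative reqwest sketch, not the repo's actual client:

use reqwest::Client;
use serde_json::{json, Value};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let base = std::env::var("OPEN_AI_URL")?;    // e.g. http://localai:8080
    let model = std::env::var("OPEN_AI_MODEL")?; // e.g. mistral
    let key = std::env::var("OPEN_AI_API_KEY").unwrap_or_default();

    let body = json!({
        "model": model,
        "messages": [{ "role": "user", "content": "Hi, who are you?" }]
    });

    // LocalAI mirrors OpenAI's /v1/chat/completions endpoint.
    let resp: Value = Client::new()
        .post(format!("{base}/v1/chat/completions"))
        .bearer_auth(key)
        .json(&body)
        .send()
        .await?
        .json()
        .await?;

    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}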
Start the app (don't forget to edit .env):
cp .env.sample .env
cp volumes/root/.bash_history.sample volumes/root/.bash_history
docker-compose up --build
docker-compose exec rust bash
cargo run
Now your bot runs locally, and LocalAI listens on localhost:8080.
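A quick sanity check that LocalAI is up and sees your model (assuming the default port mapping) is to list the loaded models via its OpenAI-compatible endpoint:

curl http://localhost:8080/v1/models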
Create Your Telegram Bot
- Open Telegram and talk to @BotFather
- Run /newbot, then set a name and a unique username (something_bot)
- You'll get a token like:
123456789:AAH6kDkKvkkkT-PWTwMg6cYtHEb3vY_tS1k
Paste it into .env:
TELEGRAM_BOT_TOKEN=your_token_here
Expose the Webhook via ngrok
Make your local server reachable:
ngrok http 80
Then set the webhook:
curl -X POST "https://api.telegram.org/bot<YOUR_TOKEN>/setWebhook" \
-H "Content-Type: application/json" \
-d '{"url": "https://your-subdomain.ngrok-free.app/telegram/webhook"}'
API Mode (No Telegram)
You can also call it like a standard LLM API:
POST /chat
Header: Authorization: Bearer YOUR_TOKEN
{
  "prompt": "Hi, who are you?"
}
LocalAI (or OpenAI) responds.
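For example, from the command line (the localhost:80 address is an assumption based on the Docker setup above; use your ngrok URL when calling from outside):

curl -X POST http://localhost/chat \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hi, who are you?"}'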
Why I Built This
Main goals:
- Learn Rust hands-on
- Explore local LLMs without API keys
- Build something fun and useful
- Play with Telegram bots
This can be a base for future AI bots with memory, content generation, assistants, and more.
What's Next?
- Memory + conversation context
- Web interface
- Multi-model support
Final Thoughts
If you're just starting with Rust or want to try local LLMs, this might be the perfect playground.
The code is clean, the stack is modern, and setup is smooth.
I kept this post light; for deep dives, check the full README:
GitHub: tg-ai-companion
Useful Links
- LocalAI: LLM backend
- Rust Book: start here
- ngrok: webhook tunneling
Thanks for reading!
If the bot responds cheerfully, that's on me.
If it's silent, blame Telegram or ngrok.