Over the past few weeks, I've been building LibreAI – a simple, privacy-focused AI chat app that runs entirely on your own infrastructure using open-source models like Mistral, LLaMA 3, and Phi via Ollama.
LibreAI is built with privacy and simplicity at its core. It streams responses in real time, stores nothing, and collects zero telemetry. No OpenAI, no account wall, no JavaScript bloat.
Try it live
💬 Discuss it on Hacker News
🧠 Why I Built LibreAI
A lot of modern AI tools work well, but they come with trade-offs: vendor lock-in, data collection, or heavy dependencies.
I wanted an AI assistant I could trust and control, powered by models that run locally – without relying on big cloud providers or complex frontend stacks.
So I built LibreAI as a lightweight, clean alternative.
What I focused on:
A UI that loads fast, even on slow networks
Real-time streaming output
Support for multiple open models via Ollama
Full self-hostability – no cloud APIs required
No React, no tracking, no unnecessary complexity
⚙️ The Stack
LibreAI is built using:
Go Fiber – backend web framework
HTMX – frontend interactivity without JS frameworks
TailwindCSS – utility-first styling
Ollama – for serving local LLMs
Plausible – privacy-first analytics (no cookies or tracking)
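To show how these pieces fit together, here's a minimal sketch of a Fiber handler that forwards a prompt to Ollama's /api/generate endpoint and streams the newline-delimited JSON chunks back to the browser as they arrive. The /chat route, the hard-coded model name, and the handler shape are illustrative assumptions, not LibreAI's actual code:

```go
// Sketch only: a Fiber handler proxying a streamed Ollama completion.
// Route, model name, and error handling are illustrative assumptions;
// the endpoint and JSON fields follow Ollama's /api/generate.
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"net/http"

	"github.com/gofiber/fiber/v2"
)

// chunk mirrors the fields we care about in each streamed JSON line.
type chunk struct {
	Response string `json:"response"`
	Done     bool   `json:"done"`
}

func main() {
	app := fiber.New()

	app.Post("/chat", func(c *fiber.Ctx) error {
		prompt := c.FormValue("prompt")

		// Ask Ollama for a streamed completion (error ignored for brevity).
		body, _ := json.Marshal(map[string]any{
			"model":  "mistral", // any model already pulled into Ollama
			"prompt": prompt,
			"stream": true,
		})
		resp, err := http.Post("http://localhost:11434/api/generate",
			"application/json", bytes.NewReader(body))
		if err != nil {
			return fiber.ErrBadGateway
		}

		// Stream each token to the client as soon as it arrives.
		c.Set("Content-Type", "text/html; charset=utf-8")
		c.Context().SetBodyStreamWriter(func(w *bufio.Writer) {
			defer resp.Body.Close()
			scanner := bufio.NewScanner(resp.Body)
			for scanner.Scan() {
				var ch chunk
				if json.Unmarshal(scanner.Bytes(), &ch) != nil {
					continue
				}
				w.WriteString(ch.Response)
				w.Flush()
				if ch.Done {
					break
				}
			}
		})
		return nil
	})

	app.Listen(":3000")
}
```

Because the body is flushed chunk by chunk, an HTMX-powered page can swap the growing reply into the DOM without any JavaScript framework.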
🧪 What Works (and What's Coming)
Right now, LibreAI supports:
Real-time streamed responses
Multiple open models via Ollama
Fast performance with a lightweight UI
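On the multiple-models point: Ollama lists the models that have been pulled locally via its /api/tags endpoint, which is all a model selector needs. Here's a minimal sketch – the listModels helper is a hypothetical name; only the endpoint and response shape come from Ollama's API:

```go
// Sketch only: fetch the models currently available in a local Ollama
// instance, e.g. to populate a model picker in the UI.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagsResponse mirrors the relevant part of Ollama's /api/tags reply.
type tagsResponse struct {
	Models []struct {
		Name string `json:"name"` // e.g. "mistral:latest"
	} `json:"models"`
}

// listModels is a hypothetical helper, not part of LibreAI's codebase.
func listModels(baseURL string) ([]string, error) {
	resp, err := http.Get(baseURL + "/api/tags")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var tags tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		return nil, err
	}

	names := make([]string, 0, len(tags.Models))
	for _, m := range tags.Models {
		names = append(names, m.Name)
	}
	return names, nil
}

func main() {
	models, err := listModels("http://localhost:11434")
	if err != nil {
		fmt.Println("ollama not reachable:", err)
		return
	}
	fmt.Println("available models:", models)
}
```

Feeding that list into the UI keeps model discovery entirely on the user's machine.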
What I'm improving:
Mobile layout and responsiveness
Model selection UX
Easier deployment options
LibreAI is intentionally minimal.
💬 I'd Love Your Feedback
If you're working on AI tools or local model deployments, or you simply care about privacy-first UX, I'd love to hear your thoughts.
What models are you running locally?
What do you look for in a self-hosted AI tool?
Does lightweight and private beat "smart and centralized" for you?
🚀 Try LibreAI
📣 Join the Hacker News discussion
Thanks for reading 🙏