**TL;DR:** Stop wasting time writing commit messages. `llmcommit -a -p` generates meaningful messages and pushes your code in 2.5 seconds using rule-based AI + caching.
## The Problem Every Developer Faces

Let's be honest: how much time do you spend writing commit messages?
```bash
# We've all been there...
git commit -m "update"
git commit -m "fix"
git commit -m "changes"
git commit -m "wip"
```
**The hidden cost:**

- 3-5 minutes per commit spent thinking of a proper message
- Breaking your coding flow to context-switch
- Inconsistent message quality across team members
- Non-native English speakers struggling with proper phrasing
## Meet LLMCommit

LLMCommit is a blazing-fast CLI tool that generates meaningful git commit messages using AI, then handles the entire git workflow for you.
### ⚡ Speed Comparison

| Method | Time | Result |
|---|---|---|
| Manual | 3-5 minutes | "update" / "fix stuff" |
| LLMCommit | 2.5 seconds | "Update user authentication logic" |
### 🎯 One Command, Complete Workflow

```bash
# Before: 3 commands + thinking time
git add .
git commit -m "..."  # 🤔 what should I write?
git push

# After: 1 command
llmcommit -a -p
```
## Technical Architecture

### 1. Rule-Based Engine (Default - 2.5s)

Instead of always hitting an LLM, LLMCommit uses intelligent pattern matching:
```python
PATTERNS = {
    'config': ['.json', 'settings', 'env'],
    'docs': ['readme', '.md', 'changelog'],
    'test': ['test', 'spec', '__test__'],
    'fix': ['fix', 'bug', 'error'],
    'feat': ['add', 'new', 'create'],
}
```
**Analysis process:**

- **File pattern detection:** identifies file types and purposes
- **Diff analysis:** examines added/removed lines
- **Context generation:** creates an appropriate message based on the matched patterns
- **Quality assurance:** ensures messages follow conventional commit style
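For intuition, here's a minimal sketch of how this kind of pattern matching could turn a set of changed files into a commit type. The `classify_changes` helper and its fallback are illustrative, not LLMCommit's actual implementation:

```python
# Illustrative sketch only -- not LLMCommit's actual code.
PATTERNS = {
    'config': ['.json', 'settings', 'env'],
    'docs': ['readme', '.md', 'changelog'],
    'test': ['test', 'spec', '__test__'],
    'fix': ['fix', 'bug', 'error'],
    'feat': ['add', 'new', 'create'],
}

def classify_changes(changed_files: list[str], diff_text: str) -> str:
    """Guess a conventional-commit type from file names and diff content."""
    haystack = ' '.join(changed_files).lower() + ' ' + diff_text.lower()
    for commit_type, keywords in PATTERNS.items():
        if any(keyword in haystack for keyword in keywords):
            return commit_type
    return 'chore'  # nothing matched; fall back to a neutral type

print(classify_changes(['README.md'], '+ Added a usage section'))  # docs
```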
### 2. Smart Caching System

```
~/.cache/llmcommit/
├── outputs/             # Generated messages (24h TTL)
├── models/              # Model metadata
└── cache_metadata.json
```

**Cache strategy:**

- **Key:** `SHA256(model:diff[:500])[:16]`
- **Hit time:** <0.1 seconds
- **Persistence:** 24 hours for outputs, permanent for models
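In Python terms, that key derivation might look like the following (a sketch based directly on the formula above; the function name is illustrative):

```python
import hashlib

def cache_key(model: str, diff: str) -> str:
    """SHA256(model:diff[:500])[:16], per the cache strategy above."""
    payload = f"{model}:{diff[:500]}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]

# Truncating the diff to 500 characters keeps hashing cheap while still
# distinguishing most changes; 16 hex characters keep cache file names short.
print(cache_key("HuggingFaceTB/SmolLM-135M", "diff --git a/app.py ..."))
```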
### 3. Progressive Enhancement

When the rule-based engine isn't enough, fall back to an LLM:

```bash
# Ultra-fast rule-based (default)
llmcommit -a -p                       # 2.5s

# Lightweight LLM (SmolLM-135M)
llmcommit --preset ultra-light -a -p  # 3-5s

# High-performance LLM (TinyLlama-1.1B)
llmcommit --preset light -a -p        # 5-8s
```
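In rough Python terms, the preset dispatch might look like this. This is a hypothetical sketch: the helper functions and the default preset name are illustrative, not LLMCommit's actual API:

```python
# Hypothetical sketch of preset dispatch; names are illustrative only.
def generate_rule_based(diff: str) -> str:
    """Cheap path: pattern matching over the diff (see PATTERNS above)."""
    return "Update project files"  # stand-in for the real rule engine

def generate_with_llm(diff: str, model: str) -> str:
    """Slow path: run a small local model over the diff."""
    return f"message from {model}"  # stand-in for real inference

def generate_message(diff: str, preset: str = "fast") -> str:
    if preset == "fast":          # default rule-based path, ~2.5s
        return generate_rule_based(diff)
    if preset == "ultra-light":   # SmolLM-135M, ~3-5s
        return generate_with_llm(diff, "HuggingFaceTB/SmolLM-135M")
    if preset == "light":         # TinyLlama-1.1B, ~5-8s
        return generate_with_llm(diff, "TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    raise ValueError(f"unknown preset: {preset!r}")
```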
## Installation & Usage

### Quick Start

```bash
# Install
pip install llmcommit

# Use (creates a .llmcommit.json config automatically)
llmcommit -a -p
```
### Advanced Usage

```bash
# Dry run (see the message without committing)
llmcommit --dry-run

# Skip git hooks for maximum speed
llmcommit -a -p --no-verify

# Cache management
llmcommit-cache stats
llmcommit-cache clear --days 7
```
## Real-World Examples

### Before LLMCommit

```bash
$ git log --oneline
a1b2c3d update
e4f5g6h fix
i7j8k9l changes
m0n1o2p wip
```

### After LLMCommit

```bash
$ git log --oneline
a1b2c3d Update user authentication middleware
e4f5g6h Fix memory leak in cache manager
i7j8k9l Add unit tests for payment service
m0n1o2p Update API documentation for v2.1
```
## Performance Benchmarks

### Speed Tests (MacBook Pro M1)

| Mode | Cold Start | Warm Start | Cache Hit |
|---|---|---|---|
| Rule-based | 2.5s | 2.5s | 0.1s |
| SmolLM-135M | 45s | 4s | 0.1s |
| TinyLlama-1.1B | 60s | 7s | 0.1s |
### Memory Usage

| Mode | RAM Usage | Disk Cache |
|---|---|---|
| Rule-based | ~10MB | ~1MB |
| SmolLM-135M | ~400MB | ~270MB |
| TinyLlama-1.1B | ~2.2GB | ~2.2GB |
## Advanced Configuration

### Custom Presets

```json
{
  "model": "HuggingFaceTB/SmolLM-135M",
  "max_tokens": 20,
  "temperature": 0.1,
  "prompt_template": "Generate commit: {diff}",
  "use_fast": true,
  "cache_dir": "~/.cache/llmcommit"
}
```
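If I read the config correctly, the `{diff}` placeholder in `prompt_template` is expanded with the staged diff before the prompt reaches the model. In plain Python terms:

```python
# Illustrative expansion of the prompt_template placeholder.
template = "Generate commit: {diff}"
staged_diff = "diff --git a/auth.py b/auth.py\n+def login(user): ..."
prompt = template.format(diff=staged_diff)
# -> "Generate commit: diff --git a/auth.py b/auth.py ..."
```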
### Team Integration

Commit a shared `.llmcommit.json` to the repo:

```json
{
  "use_fast": true,
  "prompt_template": "[JIRA-{ticket}] {diff}",
  "team_convention": "conventional-commits"
}
```
## Docker Support

```bash
# Build optimized image
make build

# Run with persistent cache
docker run -v $(pwd):/app -v llmcommit_cache:/cache llmcommit -a -p
```

**Persistent volumes:**

- `huggingface_cache`: model downloads (reused across containers)
- `llmcommit_cache`: generated message cache
## Production Tips

### 1. CI/CD Integration

```yaml
# .github/workflows/auto-commit.yml
- name: Auto-commit changes
  run: |
    llmcommit -a -p --no-verify
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
### 2. Git Hooks

```sh
#!/bin/sh
# .git/hooks/prepare-commit-msg
if [ -z "$2" ]; then
  llmcommit --dry-run > "$1"
fi
```
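Note: git only runs hooks that are executable, so remember `chmod +x .git/hooks/prepare-commit-msg`. Here `$1` is the path to the commit message file and `$2` is the message source; checking that `$2` is empty keeps the hook from overwriting messages supplied via `-m` or generated during a merge.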
### 3. Monorepo Support

```bash
# Different configs per service
cd services/auth && llmcommit -a -p
cd services/api && llmcommit -a -p --config .llmcommit-api.json
```
## Comparison with Alternatives

| Tool | Speed | Quality | Offline | Cost |
|---|---|---|---|---|
| LLMCommit | 2.5s | High | ✅ | Free |
| OpenAI API | 3-5s | Very High | ❌ | $0.01/commit |
| GitHub Copilot | 5-10s | High | ❌ | $10/month |
| Manual | 180s+ | Variable | ✅ | Developer time |
## FAQ

**Q: How accurate are the generated messages?**
A: Rule-based mode achieves a ~85% satisfaction rate. LLM modes reach ~95%.

**Q: Does it work offline?**
A: Yes! After the initial model download, everything runs locally.

**Q: What about security?**
A: No code leaves your machine. All processing is local.

**Q: Can I customize the message format?**
A: Absolutely. It supports conventional commits, custom templates, and team conventions.
## Roadmap

- 🔄 **Smart branching:** different message styles per branch type
- 🤖 **Custom models:** fine-tuned models for specific codebases
- 📊 **Analytics:** commit pattern analysis and suggestions
- 🔗 **IDE integration:** VS Code and JetBrains plugins
## Contributing

LLMCommit is open source! We welcome contributions:

```bash
git clone https://github.com/0xkaz/llmcommit
cd llmcommit
make local
make test
```

Areas where we need help:

- Additional language model support
- IDE plugin development
- Documentation improvements
- Performance optimizations
## Conclusion

Stop wasting time on commit messages. LLMCommit gives you back 15-30 minutes per day to focus on what matters: writing great code.

Try it today:

```bash
pip install llmcommit
llmcommit -a -p
```

Your future self (and your git log) will thank you! 🚀
**Links:**

- GitHub: https://github.com/0xkaz/llmcommit