A comprehensive CLI tool for AI model training, evaluation, and deployment with advanced RAG capabilities and MCP (Model Context Protocol) integration. Train, fine-tune, and deploy language models with enterprise-grade features.
- One-Click Setup - Automated Python 3.12 environment with all dependencies
- Advanced RAG - Traditional + Agentic RAG with multiple vector stores, plus RAGAS evaluation
- Flexible Training - LoRA, QLoRA, and full fine-tuning support
- Comprehensive Evaluation - Built-in benchmarks + custom metrics
- Docker & Kubernetes - Containerize and deploy models with FastAPI servers
- MCP Integration - GitHub, Terraform and AWS integrations
- Multi-Agent Workflows - CrewAI pipeline support
- Configuration Validation - YAML validation and schema checking
```
# One-line installation
git clone https://github.com/ideaweaver-ai-code/ideaweaver.git
cd ideaweaver
chmod +x setup_environments.sh
./setup_environments.sh
```
⚠️ Important: IdeaWeaver requires Python 3.12. Make sure you have Python 3.12 installed before proceeding.
- Check Python Version

```
python --version
# Should show Python 3.12.x
```

- Activate the Environment

```
# On Unix/macOS
source ideaweaver-env/bin/activate
```

- Verify Installation

```
ideaweaver --help
```

```
# Train a model using a config file
ideaweaver train --config configs/training_config.yml
```
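The schema of the training config file is not documented here; a hypothetical `configs/training_config.yml` mirroring the command-line options below might look like this (field names are assumptions, not the tool's documented schema):

```yaml
# Hypothetical sketch - key names are assumed from the CLI flags
model: google/bert_uncased_L-2_H-128_A-2
dataset: ./datasets/training_data.csv
task: text_classification
project_name: cli-final-test
epochs: 1
batch_size: 4
learning_rate: 2e-05
```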
```
# Or train with command-line options
ideaweaver train \
--model google/bert_uncased_L-2_H-128_A-2 \
--dataset ./datasets/training_data.csv \
--task text_classification \
--project-name cli-final-test \
--epochs 1 \
--batch-size 4 \
--learning-rate 2e-05 \
--verbose
```

```
# Initialize a new RAG system
ideaweaver rag init --name my_rag_system
```
```
# 1. Create a knowledge base
ideaweaver rag create-kb --name mykb --embedding-model sentence-transformers/all-MiniLM-L6-v2
# 2. Ingest documents into the knowledge base
ideaweaver rag ingest --kb mykb --source ./documents/
# 3. Query the knowledge base
ideaweaver rag query --kb mykb --question "What is machine learning?"
```

```
# See all available MCP integrations
ideaweaver mcp list-servers
```
```
# Set up GitHub integration
# 1. Set up GitHub authentication (will prompt for your token)
ideaweaver mcp setup-auth github
# 2. Enable the GitHub MCP server
ideaweaver mcp enable github
# 3. List available MCP servers (to verify)
ideaweaver mcp list-servers
# 4. Call a tool on the GitHub MCP server (example: list issues)
ideaweaver mcp call-tool github list_issues --args '{"owner": "your-username-or-org", "repo": "your-repo"}'
```

```
ideaweaver finetune full \
--model microsoft/DialoGPT-small \
--dataset datasets/instruction_following_sample.json \
--output-dir ./test_full_basic \
--epochs 5 \
--batch-size 2 \
--gradient-accumulation-steps 2 \
--learning-rate 5e-5 \
--max-seq-length 256 \
--gradient-checkpointing \
--verbose
```

```
# Basic evaluation with local results only
ideaweaver evaluate ./downloaded_model \
--tasks hellaswag,arc_easy,winogrande \
--output-path results.json \
--report-to none
```
```
# Evaluation with TensorBoard logging
ideaweaver evaluate ./downloaded_model \
--tasks hellaswag,arc_easy,winogrande \
--output-path results.json \
--report-to tensorboard
```

```
# Evaluation with Weights & Biases logging
ideaweaver evaluate ./downloaded_model \
--tasks hellaswag,arc_easy,winogrande \
--output-path results.json \
--report-to wandb \
--wandb-project my-evaluation-project
```
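After an evaluation run, `results.json` can be inspected programmatically. A minimal sketch using only the standard library; the top-level `results` key and the metric names are assumptions about the file layout, not a documented guarantee:

```python
def summarize(results: dict) -> dict:
    """Flatten a {'results': {task: metrics}} dict to task -> metrics.

    The 'results' top-level key is an assumed layout, not a documented
    contract of `ideaweaver evaluate`.
    """
    return dict(results.get("results", {}))

# Illustrative structure only - task metrics here are made up
sample = {"results": {"hellaswag": {"acc": 0.25}, "arc_easy": {"acc": 0.50}}}
for task, metrics in summarize(sample).items():
    print(task, metrics)
```

In practice you would `json.load` the real `results.json` and pass it to `summarize` instead of the illustrative `sample` dict.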
⚠️ Troubleshooting:
- If the command appears to hang, check whether you have specified the `--report-to` option
- For wandb logging, ensure you're logged in (`wandb login`) or use `--report-to none`
- For TensorBoard logging, ensure TensorBoard is installed (`pip install tensorboard`)
- Use the `--verbose` flag for detailed progress information
```
ideaweaver agent generate_storybook --theme "brave little mouse" --target-age "3-5"
```

AI-powered system performance analysis with real command execution:

```
# Basic system diagnostics
ideaweaver agent system_diagnostics
# Detailed analysis with verbose output
ideaweaver agent system_diagnostics --verbose --openai-api-key your_key
```

📋 Comprehensive Documentation: See the System Diagnostics README for complete feature documentation, examples, and troubleshooting.
After training a model with IdeaWeaver, you can containerize and deploy it to Kubernetes for production use.
Install the required tools:

```
# Install Docker (macOS with Homebrew)
brew install docker
# Install kind (Kubernetes in Docker)
brew install kind
# Install kubectl
brew install kubectl
```

Deploy a trained model in one command:

```
# Deploy a model end-to-end (Docker + Kubernetes)
ideaweaver deploy-model \
--model-path ./my-model \
--deployment-name my-model-api \
--verbose
```

This command will:
- Build a Docker image with your model and FastAPI server
- Create a kind cluster (if it doesn't exist)
- Deploy the model to Kubernetes
- Expose the API on http://localhost:30080
For more control, you can do each step manually:
```
# Build Docker image for your trained model
ideaweaver docker build \
--model-path ./my-model \
--image-name my-model:latest \
--port 8000 \
--verbose
```

```
# Create a kind cluster
ideaweaver k8s create-cluster \
--cluster-name ideaweaver-cluster \
--verbose
```

```
# Deploy the Docker image to Kubernetes
ideaweaver k8s deploy \
--image-name my-model:latest \
--deployment-name my-model-api \
--replicas 1 \
--verbose
```

```
# Build model image
ideaweaver docker build --model-path ./path/to/model --image-name my-model:latest
# Run container locally
ideaweaver docker run --image-name my-model:latest --port-mapping 8000:8000
# List images
ideaweaver docker list
# Remove image
ideaweaver docker remove --image-name my-model:latest
```

```
# Cluster management
ideaweaver k8s create-cluster --cluster-name ideaweaver-cluster
ideaweaver k8s delete-cluster --cluster-name ideaweaver-cluster
ideaweaver k8s cluster-info
# Model deployment
ideaweaver k8s deploy --image-name my-model:latest --deployment-name my-model-api
ideaweaver k8s undeploy --deployment-name my-model-api
ideaweaver k8s list-deployments
```

Once deployed, your model exposes a FastAPI server:
```
# Health check
curl http://localhost:30080/health
# Model information
curl http://localhost:30080/info
# Text generation
curl -X POST http://localhost:30080/generate \
-H "Content-Type: application/json" \
-d '{
"text": "Hello, how are you?",
"max_length": 50,
"temperature": 0.7
}'
# Interactive API docs
# Visit: http://localhost:30080/docs
```

Please refer to the official documentation.
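The curl calls above can also be made from Python. A minimal sketch using only the standard library, assuming the `/generate` endpoint and JSON body shown in the curl example:

```python
import json
from urllib import request

API_URL = "http://localhost:30080"  # NodePort exposed by `ideaweaver deploy-model`

def build_payload(text: str, max_length: int = 50, temperature: float = 0.7) -> dict:
    # Mirrors the JSON body from the curl example above
    return {"text": text, "max_length": max_length, "temperature": temperature}

def generate(text: str, **kwargs) -> dict:
    """POST to /generate on the deployed FastAPI server and return its JSON response."""
    data = json.dumps(build_payload(text, **kwargs)).encode()
    req = request.Request(
        f"{API_URL}/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `generate("Hello, how are you?")` requires the deployed server to be running; the response schema is whatever the FastAPI server returns.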
- Environment Setup
  - Python 3.12 environment creation
  - Dependency installation
  - Repository cloning
- Model Fine-tuning
  - Full fine-tuning with DialoGPT
  - Custom dataset support
  - Training parameter configuration
- Model Evaluation
  - Multiple benchmark tasks
  - Results logging
  - TensorBoard integration
- Agent Workflows
  - Storybook generation
  - CrewAI integration
We welcome contributions! Please see our contributing guidelines for more details.
This project is licensed under the MIT License - see the LICENSE file for details.
For detailed documentation, tutorials, and API references, please visit our documentation site.
- Some features may require additional setup
