AI Automation: Build LLM Apps & AI-Agents with n8n & APIs
A Docker-based platform for building AI applications and agents using n8n, Ollama, and Qdrant
Overview
The AI Automation Platform is an open-source toolkit that enables developers to build, deploy, and manage AI-powered applications and agents with minimal effort. By combining n8n for workflow automation, Ollama for local LLM inference, and Qdrant for vector search, this platform provides everything you need to create sophisticated AI applications, from simple chatbots to complex RAG (Retrieval-Augmented Generation) agents that can access and reason over your data.
Features
- Local LLM inference via Ollama (no external API keys required)
- Vector storage and semantic search with Qdrant
- Visual, low-code workflow automation with n8n
- Document ingestion pipeline for RAG agents
- One-command, Docker-based deployment
Quick Start
Prerequisites
Docker and Docker Compose installed on your machine
Installation
1. Clone the repository and make the scripts executable:
git clone https://github.com/shanojpillai/ai-automation.git
cd ai-automation
chmod +x startup.sh ingest_documents.sh
2. Start the platform:
./startup.sh
3. Access the services:
- n8n: http://localhost:5678
- Ollama API: http://localhost:11434
- Qdrant: http://localhost:6333
Usage
Basic LLM Query
Test the basic LLM query workflow:
curl -X POST http://localhost:5678/webhook/query \
-H "Content-Type: application/json" \
-d '{"query": "What is artificial intelligence?"}'
Document Ingestion
Place your documents in the data/documents/ directory, then run the ingestion script:
./ingest_documents.sh
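Under the hood, ingestion splits each document into chunks before embedding them into Qdrant (the real logic lives in scripts/ingest_documents.py). The following is a simplified sketch of the chunking step only; the chunk_size and overlap defaults are illustrative, not necessarily the values the script uses.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    """Split text into overlapping chunks so each fits the embedding model's input.

    The overlap keeps sentences that straddle a boundary retrievable
    from at least one chunk.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```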
RAG-enabled AI Agent
After ingesting documents, query your knowledge base:
curl -X POST http://localhost:5678/webhook/rag-query \
-H "Content-Type: application/json" \
-d '{"query": "What does the document say about machine learning?"}'
Architecture
The platform consists of four main components: n8n for workflow orchestration, Ollama for local LLM inference, Qdrant for vector storage, and a helper container (docker/helper) that runs the setup and ingestion scripts.
These components work together to provide a complete AI application development environment: n8n workflows receive requests via webhooks, query Qdrant for relevant context, and call Ollama for text generation, while the helper container provisions the workflows and ingests documents into the vector store.
Repository Structure
shanojpillai-ai-automation/
├── README.md
├── config.json
├── ingest_documents.sh
├── LICENSE
├── requirements.txt
├── startup.sh
├── data/
│   ├── documents/
│   │   └── .gitkeep
│   └── examples/
│       ├── sample_document.txt
│       └── sample_query.json
├── docker/
│   ├── docker-compose.yml
│   └── helper/
│       ├── Dockerfile
│       └── requirements.txt
├── docs/
│   ├── guides/
│   │   ├── building_agents.md
│   │   ├── custom_models.md
│   │   └── getting_started.md
│   └── images/
├── scripts/
│   ├── ingest_documents.py
│   ├── setup_n8n_workflow.py
│   ├── setup_vectordb.py
│   └── test_ollama.py
├── workflows/
│   ├── basic_llm_query.json
│   └── rag_ai_agent.json
└── .github/
    └── workflows/
        └── ci.yml
Configuration
The platform can be configured by editing the config.json file:
{
  "llm": {
    "provider": "ollama",
    "host": "http://ollama:11434",
    "model": "llama3",
    "parameters": {
      "temperature": 0.7,
      "max_tokens": 2048
    }
  },
  "vectordb": {
    "provider": "qdrant",
    "host": "http://qdrant:6333",
    "collection_name": "documents",
    "embedding_model": "sentence-transformers/all-MiniLM-L6-v2",
    "dimension": 384
  },
  "n8n": {
    "host": "http://n8n:5678",
    "api_key": "",
    "workflows": {
      "basic_llm_query": "/app/workflows/basic_llm_query.json",
      "rag_ai_agent": "/app/workflows/rag_ai_agent.json"
    }
  }
}
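Scripts can read these settings at startup. The sketch below embeds the same structure inline so it is self-contained; the load_config helper and its validation are illustrative, not the exact code in scripts/.

```python
import json

# Same structure as config.json above, embedded for a self-contained demo.
CONFIG = """{
  "llm": {"provider": "ollama", "host": "http://ollama:11434", "model": "llama3",
          "parameters": {"temperature": 0.7, "max_tokens": 2048}},
  "vectordb": {"provider": "qdrant", "host": "http://qdrant:6333",
               "collection_name": "documents",
               "embedding_model": "sentence-transformers/all-MiniLM-L6-v2",
               "dimension": 384}
}"""

def load_config(raw: str) -> dict:
    """Parse the platform configuration and sanity-check required sections."""
    cfg = json.loads(raw)
    for section in ("llm", "vectordb"):
        if section not in cfg:
            raise KeyError(f"missing config section: {section}")
    return cfg

cfg = load_config(CONFIG)
```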
Documentation
Detailed documentation is available in the docs/guides directory:
- getting_started.md
- building_agents.md
- custom_models.md
Examples
The platform includes example data and workflows to help you get started: a sample document (data/examples/sample_document.txt), a sample query payload (data/examples/sample_query.json), and two ready-made n8n workflows (workflows/basic_llm_query.json and workflows/rag_ai_agent.json).
Acknowledgements
This platform builds on the open-source work of the n8n, Ollama, and Qdrant projects.