AI Automation: Build LLM Apps & AI-Agents with n8n & APIs

A Docker-based platform for building AI applications and agents using n8n, Ollama, and Qdrant

Overview

The AI Automation Platform is an open-source toolkit that enables developers to build, deploy, and manage AI-powered applications and agents with minimal effort. By combining the power of:

  • n8n: A workflow automation tool for creating custom integrations and workflows
  • Ollama: A framework for running large language models locally
  • Qdrant: A vector database for efficient similarity search

This platform provides everything you need to create sophisticated AI applications, from simple chatbots to complex RAG (Retrieval Augmented Generation) agents that can access and reason over your data.

Features

  • Workflow Automation: Build complex AI workflows using n8n’s visual editor
  • Local LLM Integration: Run powerful language models locally with Ollama
  • Vector Search: Store and retrieve documents semantically with Qdrant
  • Document Ingestion: Process and embed documents for AI retrieval
  • RAG Implementation: Create AI agents that can reference your knowledge base
  • Docker-based: Easy deployment with Docker Compose
  • Extensible: Add custom components and integrations as needed

Quick Start

Prerequisites

  • Docker and Docker Compose (the platform runs as Docker containers)
  • Git (to clone the repository)
  • curl (to test the webhook endpoints)

Installation

  1. Clone the repository:

git clone https://github.com/shanojpillai/ai-automation.git
cd ai-automation

  2. Make the scripts executable:

chmod +x startup.sh ingest_documents.sh

  3. Start the platform:

./startup.sh

  4. Access the services:

  • n8n editor: http://localhost:5678
  • Ollama API: http://localhost:11434
  • Qdrant API: http://localhost:6333

Usage

Basic LLM Query

Test the basic LLM query workflow:

curl -X POST http://localhost:5678/webhook/query \
  -H "Content-Type: application/json" \
  -d '{"query": "What is artificial intelligence?"}'
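For scripting, the same webhook can be called from Python. A minimal sketch using only the standard library (the shape of the workflow's response depends on how the n8n webhook node is configured, so the body is returned here as raw text):

```python
import json
import urllib.request

def build_request(prompt: str,
                  url: str = "http://localhost:5678/webhook/query") -> urllib.request.Request:
    """Build a JSON POST request matching the curl example above."""
    payload = json.dumps({"query": prompt}).encode("utf-8")
    return urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )

def query_llm(prompt: str) -> str:
    """Send the query to the n8n webhook and return the raw response body."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return resp.read().decode("utf-8")
```

Usage: `print(query_llm("What is artificial intelligence?"))` once the platform is running.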

Document Ingestion

  1. Place your text documents in the data/documents directory
  2. Run the ingestion script:

./ingest_documents.sh
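Internally, ingestion pipelines like scripts/ingest_documents.py typically split each document into overlapping chunks before embedding them into Qdrant. A hedged sketch of such a chunker (the character-based splitting, chunk_size, and overlap values are illustrative assumptions, not the script's actual implementation):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for embedding.

    Overlap preserves context across chunk boundaries, which helps
    retrieval return coherent passages.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each chunk would then be embedded (e.g. with sentence-transformers/all-MiniLM-L6-v2, per config.json) and upserted into the documents collection.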

RAG-enabled AI Agent

After ingesting documents, query your knowledge base:

curl -X POST http://localhost:5678/webhook/rag-query \
  -H "Content-Type: application/json" \
  -d '{"query": "What does the document say about machine learning?"}'

Architecture

The platform consists of four main components:

  1. n8n: The workflow automation engine that orchestrates the AI workflows
  2. Ollama: Provides LLM capabilities for text generation and embeddings
  3. Qdrant: Vector database for storing and retrieving document embeddings
  4. Helper Container: Python environment for running utility scripts

These components work together to provide a complete AI application development environment.
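One way to confirm the components are reachable after ./startup.sh is to probe each service's HTTP port. A minimal sketch using the standard library (the localhost ports match the defaults shown in config.json below; is_up is a hypothetical helper, not part of the repository):

```python
import urllib.request
import urllib.error

def is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service answers HTTP at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded (e.g. 404), so it is running.
        return True
    except (urllib.error.URLError, OSError):
        return False

SERVICES = {
    "n8n": "http://localhost:5678",
    "ollama": "http://localhost:11434",
    "qdrant": "http://localhost:6333",
}
```

Usage: `for name, url in SERVICES.items(): print(name, is_up(url))`.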

Repository Structure

shanojpillai-ai-automation/
├── README.md
├── config.json
├── ingest_documents.sh
├── LICENSE
├── requirements.txt
├── startup.sh
├── data/
│   ├── documents/
│   │   └── .gitkeep
│   └── examples/
│       ├── sample_document.txt
│       └── sample_query.json
├── docker/
│   ├── docker-compose.yml
│   └── helper/
│       ├── Dockerfile
│       └── requirements.txt
├── docs/
│   ├── guides/
│   │   ├── building_agents.md
│   │   ├── custom_models.md
│   │   └── getting_started.md
│   └── images/
├── scripts/
│   ├── ingest_documents.py
│   ├── setup_n8n_workflow.py
│   ├── setup_vectordb.py
│   └── test_ollama.py
├── workflows/
│   ├── basic_llm_query.json
│   └── rag_ai_agent.json
└── .github/
    └── workflows/
        └── ci.yml        

Configuration

The platform can be configured by editing the config.json file:

{
  "llm": {
    "provider": "ollama",
    "host": "http://ollama:11434",
    "model": "llama3",
    "parameters": {
      "temperature": 0.7,
      "max_tokens": 2048
    }
  },
  "vectordb": {
    "provider": "qdrant",
    "host": "http://qdrant:6333",
    "collection_name": "documents",
    "embedding_model": "sentence-transformers/all-MiniLM-L6-v2",
    "dimension": 384
  },
  "n8n": {
    "host": "http://n8n:5678",
    "api_key": "",
    "workflows": {
      "basic_llm_query": "/app/workflows/basic_llm_query.json",
      "rag_ai_agent": "/app/workflows/rag_ai_agent.json"
    }
  }
}        
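Utility scripts can read these settings at startup. A minimal sketch of loading config.json with a basic sanity check (load_config is an illustrative helper, not one of the repository's scripts):

```python
import json
from pathlib import Path

def load_config(path: str = "config.json") -> dict:
    """Load platform settings and verify the expected top-level sections."""
    config = json.loads(Path(path).read_text(encoding="utf-8"))
    for section in ("llm", "vectordb", "n8n"):
        if section not in config:
            raise KeyError(f"config.json is missing the '{section}' section")
    return config
```

Usage: `model = load_config()["llm"]["model"]` would yield "llama3" with the defaults above.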

Documentation

Detailed documentation is available in the docs/guides directory:

  • Getting Started: docs/guides/getting_started.md
  • Building Agents: docs/guides/building_agents.md
  • Custom Models: docs/guides/custom_models.md

Examples

The platform includes example data and workflows to help you get started:

  • Sample Document: data/examples/sample_document.txt
  • Sample Query: data/examples/sample_query.json
  • Basic LLM Query Workflow: workflows/basic_llm_query.json
  • RAG AI Agent Workflow: workflows/rag_ai_agent.json

Acknowledgements

  • n8n — Workflow automation tool
  • Ollama — Run large language models locally
  • Qdrant — Vector database for similarity search
  • LangChain — Framework for LLM applications
  • Sentence Transformers — Text embeddings


#AIAutomation #LocalLLM #n8n #RAG #OpenSourceAI

