A powerful Ruby library for working with Large Language Models (LLMs), featuring an intelligent tool system.
LLMChain is a Ruby analog of LangChain, providing a unified interface for interacting with various LLMs, a built-in tool system, and RAG (Retrieval-Augmented Generation) support.
- ✅ Google Search Integration - Accurate, up-to-date search results
- ✅ Fixed Calculator - Improved expression parsing and evaluation
- ✅ Enhanced Code Interpreter - Better code extraction from prompts
- ✅ Production-Ready Output - Clean interface without debug noise
- ✅ Quick Chain Creation - Simple `LLMChain.quick_chain` method
- ✅ Simplified Configuration - Easy setup with sensible defaults
- 🤖 Unified API for multiple LLMs (OpenAI, Ollama, Qwen, LLaMA2, Gemma)
- 🛠️ Intelligent tool system with automatic selection
- 🧮 Built-in tools: Calculator, web search, code interpreter
- 🔍 RAG-ready with vector database integration
- 💾 Flexible memory system (Array, Redis)
- 🌊 Streaming output for real-time responses
- 🏠 Local models via Ollama
- 🔧 Extensible architecture for custom tools
```bash
gem install llm_chain
```

Or add to your Gemfile:

```ruby
gem 'llm_chain'
```
- Install Ollama for local models:

  ```bash
  # macOS/Linux
  curl -fsSL https://ollama.ai/install.sh | sh

  # Download models
  ollama pull qwen3:1.7b
  ollama pull llama2:7b
  ```
- Optional: API keys for enhanced features

  ```bash
  # For OpenAI models
  export OPENAI_API_KEY="your-openai-key"

  # For Google Search (get at console.developers.google.com)
  export GOOGLE_API_KEY="your-google-key"
  export GOOGLE_SEARCH_ENGINE_ID="your-search-engine-id"
  ```
```ruby
require 'llm_chain'

# Quick start with default tools (v0.5.1+)
chain = LLMChain.quick_chain
response = chain.ask("Hello! How are you?")
puts response

# Or traditional setup
chain = LLMChain::Chain.new(model: "qwen3:1.7b")
response = chain.ask("Hello! How are you?")
puts response
```
```ruby
# Quick setup (v0.5.1+)
chain = LLMChain.quick_chain

# Tools are selected automatically
chain.ask("Calculate 15 * 7 + 32")
# 🧮 Result: 137

chain.ask("What is the latest version of Ruby?")
# 🔍 Result: Ruby 3.3.6 (via Google search)

chain.ask("Execute code: puts (1..10).sum")
# 💻 Result: 55

# Traditional setup
tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  tools: tool_manager
)
```
```ruby
calculator = LLMChain::Tools::Calculator.new
result = calculator.call("Find square root of 144")
puts result[:formatted]
# Output: sqrt(144) = 12.0
```
```ruby
# Google search for accurate results (v0.5.1+)
search = LLMChain::Tools::WebSearch.new
results = search.call("Latest Ruby version")
puts results[:formatted]
# Output: Ruby 3.3.6 is the current stable version...

# Fallback data available without API keys
search = LLMChain::Tools::WebSearch.new
results = search.call("What is the latest version of Ruby?")
# Works even without Google API configured
```
````ruby
interpreter = LLMChain::Tools::CodeInterpreter.new
result = interpreter.call(<<~CODE)
  ```ruby
  def factorial(n)
    n <= 1 ? 1 : n * factorial(n - 1)
  end
  puts factorial(5)
  ```
CODE
puts result[:formatted]
````
## ⚙️ Configuration (v0.5.1+)
```ruby
# Global configuration
LLMChain.configure do |config|
  config.default_model = "qwen3:1.7b" # Default LLM model
  config.search_engine = :google      # Google for accurate results
  config.memory_size   = 100          # Memory buffer size
  config.timeout       = 30           # Request timeout (seconds)
end

# Quick chain with default settings
chain = LLMChain.quick_chain

# Override settings per chain
chain = LLMChain.quick_chain(
  model: "gpt-4",
  tools: false,  # Disable tools
  memory: false  # Disable memory
)
```
```ruby
class WeatherTool < LLMChain::Tools::BaseTool
  def initialize(api_key:)
    @api_key = api_key
    super(
      name: "weather",
      description: "Gets weather information",
      parameters: {
        location: {
          type: "string",
          description: "City name"
        }
      }
    )
  end

  def match?(prompt)
    contains_keywords?(prompt, ['weather', 'temperature', 'forecast'])
  end

  def call(prompt, context: {})
    location = extract_location(prompt)
    # Your weather API integration
    {
      location: location,
      temperature: "22°C",
      condition: "Sunny",
      formatted: "Weather in #{location}: 22°C, Sunny"
    }
  end

  private

  def extract_location(prompt)
    prompt.scan(/in\s+(\w+)/i).flatten.first || "Unknown"
  end
end

# Usage
weather = WeatherTool.new(api_key: "your-key")
tool_manager.register_tool(weather)
```
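Once registered, a custom tool participates in automatic selection just like the built-ins: its `match?` decides when it fires. A minimal sketch; the prompt and city are illustrative:

```ruby
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  tools: tool_manager
)
puts chain.ask("What's the weather in Berlin?")
# The weather tool matches on the "weather" keyword and supplies context
```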
| Model Family | Backend | Status | Notes |
|---|---|---|---|
| OpenAI | Web API | ✅ Supported | GPT-3.5, GPT-4, GPT-4 Turbo |
| Qwen/Qwen2 | Ollama | ✅ Supported | 0.5B - 72B parameters |
| LLaMA2/3 | Ollama | ✅ Supported | 7B, 13B, 70B |
| Gemma | Ollama | ✅ Supported | 2B, 7B, 9B, 27B |
| Mistral/Mixtral | Ollama | 🔄 In development | 7B, 8x7B |
| Claude | Anthropic | 🔄 Planned | Haiku, Sonnet, Opus |
| Command R+ | Cohere | 🔄 Planned | Optimized for RAG |
```ruby
# OpenAI
openai_chain = LLMChain::Chain.new(
  model: "gpt-4",
  api_key: ENV['OPENAI_API_KEY']
)

# Qwen via Ollama
qwen_chain = LLMChain::Chain.new(model: "qwen3:1.7b")

# LLaMA via Ollama with settings
llama_chain = LLMChain::Chain.new(
  model: "llama2:7b",
  temperature: 0.8,
  top_p: 0.95
)
```
```ruby
memory = LLMChain::Memory::Array.new(max_size: 10)
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  memory: memory
)

chain.ask("My name is Alex")
chain.ask("What's my name?") # Remembers previous context
```
```ruby
memory = LLMChain::Memory::Redis.new(
  redis_url: 'redis://localhost:6379',
  max_size: 100,
  namespace: 'my_app'
)

chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  memory: memory
)
```
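Because history lives in Redis rather than in process memory, a conversation can survive restarts. A sketch, assuming the same `namespace` is reused to reattach to the stored history:

```ruby
# First process: store a fact in conversational memory
chain.ask("Remember this: the deploy window is Friday at 6pm")

# Later, in a fresh process, reattach to the same memory
memory = LLMChain::Memory::Redis.new(
  redis_url: 'redis://localhost:6379',
  namespace: 'my_app' # same namespace, same history
)
chain = LLMChain::Chain.new(model: "qwen3:1.7b", memory: memory)
chain.ask("When is the deploy window?") # answered from restored context
```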
```ruby
# Initialize components
embedder = LLMChain::Embeddings::Clients::Local::OllamaClient.new(
  model: "nomic-embed-text"
)

vector_store = LLMChain::Embeddings::Clients::Local::WeaviateVectorStore.new(
  embedder: embedder,
  weaviate_url: 'http://localhost:8080'
)

retriever = LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new(
  embedder: embedder
)

# Create chain with RAG
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  retriever: retriever
)
```
```ruby
documents = [
  {
    text: "Ruby supports OOP principles: encapsulation, inheritance, polymorphism",
    metadata: { source: "ruby-guide", page: 15 }
  },
  {
    text: "Modules in Ruby are used for namespaces and mixins",
    metadata: { source: "ruby-book", author: "Matz" }
  }
]

# Add to vector database
documents.each do |doc|
  vector_store.add_document(
    text: doc[:text],
    metadata: doc[:metadata]
  )
end
```
```ruby
# Regular query
response = chain.ask("What is Ruby?")

# Query with RAG
response = chain.ask(
  "What OOP principles does Ruby support?",
  rag_context: true,
  rag_options: { limit: 3 }
)
```
```ruby
chain = LLMChain::Chain.new(model: "qwen3:1.7b")

# Streaming with block
chain.ask("Tell me about Ruby history", stream: true) do |chunk|
  print chunk
  $stdout.flush
end

# Streaming with tools
tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  tools: tool_manager
)

chain.ask("Calculate 15! and explain the process", stream: true) do |chunk|
  print chunk
end
```
```bash
# OpenAI
export OPENAI_API_KEY="sk-..."
export OPENAI_ORGANIZATION_ID="org-..."

# Search
export SEARCH_API_KEY="your-search-api-key"
export GOOGLE_SEARCH_ENGINE_ID="your-cse-id"

# Redis
export REDIS_URL="redis://localhost:6379"

# Weaviate
export WEAVIATE_URL="http://localhost:8080"
```
```ruby
# From configuration
tools_config = [
  { class: 'calculator' },
  {
    class: 'web_search',
    options: {
      search_engine: :duckduckgo,
      api_key: ENV['SEARCH_API_KEY']
    }
  },
  {
    class: 'code_interpreter',
    options: {
      timeout: 30,
      allowed_languages: ['ruby', 'python']
    }
  }
]

tool_manager = LLMChain::Tools::ToolManager.from_config(tools_config)
```
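The resulting manager plugs into a chain exactly like the default toolset built earlier:

```ruby
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  tools: tool_manager
)
```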
```ruby
# Qwen with custom parameters
qwen = LLMChain::Clients::Qwen.new(
  model: "qwen2:7b",
  temperature: 0.7,
  top_p: 0.9,
  base_url: "http://localhost:11434"
)

# OpenAI with settings
openai = LLMChain::Clients::OpenAI.new(
  model: "gpt-4",
  api_key: ENV['OPENAI_API_KEY'],
  temperature: 0.8,
  max_tokens: 2000
)
```
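Clients can also be called directly, without a chain. A sketch, assuming the clients expose a `chat` method that takes the prompt as its first argument:

```ruby
# Method name assumed from the client interface; verify against your version
puts qwen.chat("Summarize the SOLID principles in one sentence each")
```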
```ruby
begin
  chain = LLMChain::Chain.new(model: "qwen3:1.7b")
  response = chain.ask("Complex query")
rescue LLMChain::UnknownModelError => e
  puts "Unknown model: #{e.message}"
rescue LLMChain::ClientError => e
  puts "Client error: #{e.message}"
rescue LLMChain::TimeoutError => e
  puts "Timeout exceeded: #{e.message}"
rescue LLMChain::Error => e
  puts "General LLMChain error: #{e.message}"
end
```
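For transient failures such as timeouts, a small retry wrapper around `ask` is often enough. A sketch; the attempt count and backoff are illustrative:

```ruby
def ask_with_retries(chain, prompt, attempts: 3)
  tries = 0
  begin
    chain.ask(prompt)
  rescue LLMChain::TimeoutError, LLMChain::ClientError => e
    tries += 1
    raise if tries >= attempts
    warn "Retrying after #{e.class} (attempt #{tries})"
    sleep(2**tries) # exponential backoff: 2s, 4s, ...
    retry
  end
end

response = ask_with_retries(chain, "Complex query")
```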
```ruby
require 'llm_chain'

class ChatBot
  def initialize
    @tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
    @memory = LLMChain::Memory::Array.new(max_size: 20)
    @chain = LLMChain::Chain.new(
      model: "qwen3:1.7b",
      memory: @memory,
      tools: @tool_manager
    )
  end

  def chat_loop
    puts "🤖 Hello! I'm an AI assistant with tools. Ask me anything!"

    loop do
      print "\n👤 You: "
      input = gets&.chomp
      # String#in? is ActiveSupport-only; use plain Ruby instead
      break if input.nil? || ['exit', 'quit', 'bye'].include?(input.downcase)

      @chain.ask(input, stream: true) do |chunk|
        print chunk
      end
      puts "\n"
    end
  end
end

# Run
bot = ChatBot.new
bot.chat_loop
```
````ruby
data_chain = LLMChain::Chain.new(
  model: "qwen3:7b",
  tools: LLMChain::Tools::ToolManager.create_default_toolset
)

# Ask the model to analyze and execute Ruby code
# (single-quoted heredoc so the #{...} inside the prompt is not interpolated)
response = data_chain.ask(<<~'PROMPT')
  Analyze this code and execute it:

  ```ruby
  data = [
    { name: "Alice", age: 25, salary: 50000 },
    { name: "Bob", age: 30, salary: 60000 },
    { name: "Charlie", age: 35, salary: 70000 }
  ]

  average_age = data.sum { |person| person[:age] } / data.size.to_f
  total_salary = data.sum { |person| person[:salary] }

  puts "Average age: #{average_age}"
  puts "Total salary: #{total_salary}"
  puts "Average salary: #{total_salary / data.size}"
  ```
PROMPT

puts response
````
## 🧪 Testing
```bash
# Run tests
bundle exec rspec

# Run demo
ruby -I lib examples/tools_example.rb

# Interactive console
bundle exec bin/console
```
- `LLMChain::Chain` - Main class for creating chains
- `LLMChain::Tools::ToolManager` - Tool management
- `LLMChain::Memory::Array` / `LLMChain::Memory::Redis` - Memory systems
- `LLMChain::Clients::*` - Clients for various LLMs
```ruby
chain = LLMChain::Chain.new(options)

# Main method
chain.ask(prompt, stream: false, rag_context: false, rag_options: {})

# Initialization parameters
# - model: model name
# - memory: memory object
# - tools: array of tools or ToolManager
# - retriever: RAG retriever
# - client_options: additional client parameters
```
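Putting the parameters together; every value below is illustrative, and `client_options` is assumed to pass generation settings through to the underlying client:

```ruby
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",                                         # model name
  memory: LLMChain::Memory::Array.new(max_size: 50),           # memory object
  tools: LLMChain::Tools::ToolManager.create_default_toolset,  # tools or ToolManager
  client_options: { temperature: 0.7 }                         # extra client parameters (assumed)
)
response = chain.ask("Hello!", stream: false)
```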
- ReAct agents and multi-step reasoning
- More tools (file system, database queries)
- Claude integration
- Enhanced error handling
- Multi-agent systems
- Task planning and workflows
- Web interface for testing
- Metrics and monitoring
- Stable API with semantic versioning
- Complete documentation coverage
- Production-grade performance
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
```bash
git clone https://github.com/FuryCow/llm_chain.git
cd llm_chain
bundle install
bundle exec rspec
```
This project is distributed under the MIT License.
- The Ollama team for an excellent local LLM platform
- LangChain developers for inspiration
- The Ruby community for support

Made with ❤️ for the Ruby community
Documentation | Examples | Changelog | Issues | Discussions