Creating intelligent AI applications that can handle complex tasks requires more than basic API calls. This tutorial walks you through building a complete agentic AI workflow using the OpenRouter API, which gives you access to multiple AI models (from OpenAI, Anthropic, Google, and more) through a single, consistent interface.
What You'll Learn
- Setting up a proper development environment for AI applications
- Managing API keys securely using environment variables
- Implementing a simple yet powerful agentic workflow
- Accessing models from different providers through OpenRouter
- Testing and comparing responses from various AI models
What is OpenRouter?
OpenRouter is a unified API gateway that gives you access to hundreds of AI models from various providers through a single endpoint. Instead of managing multiple API integrations, you can:
- Access models from OpenAI, Anthropic, Google, and others with one API
- Switch between models without changing your code
- Take advantage of automatic fallbacks and cost optimization
- Build more resilient AI applications with multi-model support
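The fallback point deserves a quick illustration. At the time of writing, OpenRouter accepts an optional `models` list in the request body: if the first model is unavailable, the request is retried against the next one. When using the OpenAI SDK, extra fields like this can be passed through the `extra_body` parameter of `client.chat.completions.create`. Here's a minimal sketch of building such a request payload (the model IDs are examples; check the current OpenRouter docs, as routing parameters may evolve):

```python
def build_fallback_request(prompt, primary, fallbacks):
    """Build an OpenRouter request body with fallback models.

    OpenRouter's "models" list tells the router which models to try,
    in order, if the primary model fails or is unavailable.
    """
    return {
        "model": primary,
        "models": [primary] + list(fallbacks),
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_fallback_request(
    "Hello!",
    primary="openai/gpt-4",
    fallbacks=["anthropic/claude-instant-v1", "mistralai/mistral-7b-instruct-v0.2"],
)
print(request["models"])
```

We'll set up a real client in Step 2; this just shows the shape of a multi-model request.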
Prerequisites
To follow this tutorial, you'll need:
- Python 3.8 or higher
- Basic familiarity with Python and API concepts
- A free OpenRouter account (sign up at openrouter.ai)
Step 1: Environment Setup
Let's start by setting up our project structure:
mkdir agentic-ai-workflow
cd agentic-ai-workflow
Create a requirements.txt file with the necessary dependencies:
openai>=1.0.0
python-dotenv>=1.0.0
requests>=2.28.2
jupyter>=1.0.0
notebook>=6.5.3
It's best practice to create a virtual environment to isolate your project dependencies. Here's how to set it up:
# Create a virtual environment (you can use python3.12 or whatever version you have)
python3 -m venv venv
# Activate the virtual environment
# On macOS/Linux:
source venv/bin/activate
# On Windows:
# venv\Scripts\activate
Now install the dependencies within your virtual environment:
pip install -r requirements.txt
If the above command doesn't work, you might need to specify python3:
python3 -m pip install -r requirements.txt
Create a .env file to store your API key securely (never commit this file to version control):
# OpenRouter API Key
# Get yours at https://openrouter.ai/keys
OPENROUTER_API_KEY=your_openrouter_api_key_here
# Your site URL and name (optional, used for OpenRouter leaderboard)
YOUR_SITE_URL=https://yourdomain.com
YOUR_SITE_NAME=Your App Name
Step 2: Basic Client Setup
Let's create our first script to set up and test the OpenRouter client:
# basic_setup.py
"""
Basic OpenRouter API setup example

This script demonstrates how to properly set up the OpenRouter API client
using the OpenAI SDK with environment variables for API key management.
"""
import os
from dotenv import load_dotenv
from openai import OpenAI

# Load environment variables from .env file
load_dotenv()


def setup_openrouter_client():
    """
    Initialize the OpenRouter client using the OpenAI SDK
    with proper configuration.

    Returns:
        OpenAI: Configured OpenAI client pointing to OpenRouter
    """
    # Get API key from environment variables
    api_key = os.getenv("OPENROUTER_API_KEY")
    if not api_key:
        raise ValueError(
            "OpenRouter API key not found. Please set the OPENROUTER_API_KEY "
            "environment variable in your .env file."
        )

    # Initialize the client with OpenRouter configuration
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=api_key,
        # Optional headers for OpenRouter leaderboard
        default_headers={
            "HTTP-Referer": os.getenv("YOUR_SITE_URL", "http://localhost:5000"),
            "X-Title": os.getenv("YOUR_SITE_NAME", "Agentic AI Demo")
        }
    )
    return client


def test_connection():
    """Test the connection to the OpenRouter API with a simple completion request."""
    try:
        client = setup_openrouter_client()

        # Make a simple test request
        completion = client.chat.completions.create(
            model="openai/gpt-3.5-turbo",  # OpenRouter model format: provider/model
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Hello, world!"}
            ],
        )
        print("✅ Successfully connected to OpenRouter API!")
        print(f"Model response: {completion.choices[0].message.content}")
    except Exception as e:
        print(f"❌ Error connecting to OpenRouter API: {e}")


if __name__ == "__main__":
    test_connection()
Run this script to test your connection:
python basic_setup.py
If everything is set up correctly, you should see a successful connection message and a response from the AI model:
✅ Successfully connected to OpenRouter API!
Model response: Hello! How can I assist you today?
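Before comparing models, it helps to know exactly which model IDs are available. OpenRouter exposes a public model listing at https://openrouter.ai/api/v1/models that returns JSON with a `data` array of model objects. A small sketch of fetching and parsing it (treat `id` as the one stable field here; other response fields may change over time):

```python
def list_model_ids(payload):
    """Extract model identifiers from an OpenRouter /models response payload."""
    return [model["id"] for model in payload.get("data", [])]

def fetch_model_ids():
    """Fetch the live model list; no API key is required for this endpoint."""
    # Imported here so the offline demo below runs without the dependency.
    import requests
    resp = requests.get("https://openrouter.ai/api/v1/models", timeout=10)
    resp.raise_for_status()
    return list_model_ids(resp.json())

# Offline example of the expected response shape:
sample = {"data": [{"id": "openai/gpt-3.5-turbo"}, {"id": "openai/gpt-4"}]}
print(list_model_ids(sample))
```

Running `fetch_model_ids()` gives you the current identifiers to plug into the comparison script below.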
Step 3: Comparing Different Models
One of the key benefits of OpenRouter is the ability to switch between different AI models easily. Let's create a script to compare responses from various models:
# model_comparison.py
"""
OpenRouter Model Comparison Example

This script demonstrates how to access models from different providers
(OpenAI, Anthropic, Google, etc.) using OpenRouter's unified API.
"""
import os
from dotenv import load_dotenv
from openai import OpenAI

# Load environment variables from .env file
load_dotenv()


def get_client():
    """Initialize and return the OpenRouter client"""
    api_key = os.getenv("OPENROUTER_API_KEY")
    if not api_key:
        raise ValueError("OPENROUTER_API_KEY not found in environment variables")
    return OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=api_key,
        default_headers={
            "HTTP-Referer": os.getenv("YOUR_SITE_URL", "http://localhost:5000"),
            "X-Title": os.getenv("YOUR_SITE_NAME", "Model Comparison Demo")
        }
    )


def generate_response(model, prompt):
    """
    Generate a response using the specified model

    Args:
        model (str): OpenRouter model identifier
        prompt (str): Text prompt to send to the model

    Returns:
        str: The model's response
    """
    client = get_client()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content


def compare_models(prompt):
    """
    Compare responses from different models for the same prompt

    Args:
        prompt (str): The prompt to send to all models
    """
    # Define models from different providers to test
    models = {
        "OpenAI GPT-3.5": "openai/gpt-3.5-turbo",
        "OpenAI GPT-4": "openai/gpt-4",
        "Anthropic Claude": "anthropic/claude-instant-v1",
        "Google PaLM 2": "google/palm-2-chat-bison",
        "Mistral": "mistralai/mistral-7b-instruct-v0.2"
    }

    print(f"Prompt: {prompt}\n")
    print("-" * 50)
    for name, model_id in models.items():
        try:
            print(f"\n{name} ({model_id}):")
            response = generate_response(model_id, prompt)
            print(f"Response: {response}\n")
            print("-" * 50)
        except Exception as e:
            print(f"Error with {name}: {str(e)}")
            print("-" * 50)


if __name__ == "__main__":
    # Example prompt for comparison
    test_prompt = "Explain quantum computing in simple terms."
    compare_models(test_prompt)
Run this script to see how different models respond to the same prompt:
python model_comparison.py
This allows you to compare the strengths and weaknesses of different models for your specific use case. Here's an example of the output you might see:
Prompt: Explain quantum computing in simple terms.
--------------------------------------------------
OpenAI GPT-3.5 (openai/gpt-3.5-turbo):
Response: Quantum computing is a type of computing that uses the principles of quantum mechanics to perform calculations. In traditional computing, information is stored in bits, which can either be a 0 or a 1. However, in quantum computing, information is stored in qubits, which can be 0, 1, or both at the same time. This allows quantum computers to perform multiple calculations simultaneously, making them much faster and more powerful than traditional computers for certain tasks. Quantum computing has the potential to revolutionize fields such as cryptography, drug discovery, and artificial intelligence.
--------------------------------------------------
OpenAI GPT-4 (openai/gpt-4):
Response: Quantum computing is a type of computing that's very different from the computers we use every day. It uses principles of quantum mechanics (a branch of physics that deals with phenomena on a very small scale, like molecules, atoms, and subatomic particles) to process information.
In regular computers, the fundamental unit of information is a "bit", which can be either a 0 or a 1. But in a quantum computer, it uses "quantum bits" or "qubits". A qubit can be both 0 and 1 at the same time, thanks to a property in quantum mechanics called superposition.
Additionally, qubits can be entangled, another property in quantum mechanics. When qubits are entangled, the state of one qubit is directly related to the state of the other, no matter how far they are.
These properties allow quantum computers to process a vast number of possibilities all at the same time, solve complex problems more rapidly compared to classical machines, and could revolutionize fields such as cryptography, optimization, drug discovery, and more. However, it is also important to note that quantum computing is still in early stages of development.
--------------------------------------------------
Mistral (mistralai/mistral-7b-instruct-v0.2):
Response: Quantum computing is a type of computing that uses the principles of quantum mechanics to perform operations on data. While classical computers use bits, which can only be in two states (0 or 1), quantum computers use quantum bits, or qubits. Qubits can exist in multiple states simultaneously, thanks to quantum mechanics phenomena such as superposition and entanglement.
Here's a simple analogy to help understand the concept of a qubit: Imagine a classic coin. In a classical computer, the coin can be heads or tails, just like a bit can be either 0 or 1. However, in a quantum computer, the qubit can be both heads and tails simultaneously, thanks to superposition. This means that quantum computers can perform many calculations at once, making them potentially much faster than classical computers for certain tasks.
Quantum computing is still in its infancy and faces significant challenges before it becomes a mainstream technology. But its potential to solve complex problems more efficiently than classical computers has made it an exciting area of research. Some of the potential applications include breaking encryption codes, optimizing complex systems, and simulating chemical reactions for drug discovery.
--------------------------------------------------
Note: You might see errors for some models if they're not available through your OpenRouter account or if the model IDs have changed since this tutorial was written.
Step 4: Building an Agentic AI Workflow
Now, let's create the main agentic workflow. An agentic AI workflow involves breaking down complex tasks into manageable steps and executing them sequentially:
# agent_example.py
"""
Simple Agentic AI Workflow Example with OpenRouter

This script demonstrates a basic agentic workflow where an AI agent:
1. Analyzes a user query
2. Breaks it down into steps
3. Executes each step sequentially
4. Compiles a final response
"""
import os
import json
import time
from dotenv import load_dotenv
from openai import OpenAI

# Load environment variables from .env file
load_dotenv()


class Agent:
    """A simple AI agent that can solve tasks through multi-step reasoning"""

    def __init__(self, model="openai/gpt-4"):
        """
        Initialize the agent with the specified model

        Args:
            model (str): The OpenRouter model identifier to use
        """
        self.model = model
        self.client = self._setup_client()
        self.conversation_history = []  # Reserved for future context persistence

    def _setup_client(self):
        """Set up the OpenRouter client"""
        api_key = os.getenv("OPENROUTER_API_KEY")
        if not api_key:
            raise ValueError(
                "OpenRouter API key not found. Please set the OPENROUTER_API_KEY "
                "environment variable in your .env file."
            )
        return OpenAI(
            base_url="https://openrouter.ai/api/v1",
            api_key=api_key,
            default_headers={
                "HTTP-Referer": os.getenv("YOUR_SITE_URL", "http://localhost:5000"),
                "X-Title": os.getenv("YOUR_SITE_NAME", "Agentic AI Demo")
            }
        )

    def _call_llm(self, messages):
        """
        Make an API call to the language model

        Args:
            messages (list): List of message objects for the conversation

        Returns:
            str: The model's response content, or None if the call failed
        """
        try:
            response = self.client.chat.completions.create(
                model=self.model,
                messages=messages
            )
            return response.choices[0].message.content
        except Exception as e:
            print(f"Error calling LLM: {e}")
            return None

    def analyze_task(self, user_query):
        """
        Break down a user query into discrete steps

        Args:
            user_query (str): The user's request

        Returns:
            list: List of steps to solve the task
        """
        system_prompt = """
        You are an AI task planner. Your job is to break down a user's request
        into a series of clear, discrete steps that can be executed sequentially.

        Respond with a JSON array of steps, where each step has:
        1. A "description" field describing what needs to be done
        2. A "reasoning" field explaining why this step is necessary

        Format your response as a valid JSON array without any additional text.
        """
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Break down this task into steps: {user_query}"}
        ]
        response = self._call_llm(messages)
        if response is None:
            return []
        try:
            # Extract the JSON array from the response
            return json.loads(response)
        except json.JSONDecodeError:
            print("Error: Could not parse response as JSON")
            print(f"Raw response: {response}")
            return []

    def execute_step(self, step, context):
        """
        Execute a single step in the plan

        Args:
            step (dict): The step to execute
            context (str): Context from previous steps

        Returns:
            str: Result of executing the step
        """
        system_prompt = """
        You are an AI assistant focusing on executing a specific task step.
        Use the provided context and step description to complete this specific step only.
        Your response should be detailed and directly address the step's requirements.
        """
        step_msg = (
            f"Context so far: {context}\n\n"
            f"Execute this step: {step['description']}\n\n"
            f"Reasoning: {step['reasoning']}"
        )
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": step_msg}
        ]
        return self._call_llm(messages)

    def compile_results(self, step_results, user_query):
        """
        Compile the results of all steps into a final response

        Args:
            step_results (list): Results from each executed step
            user_query (str): The original user query

        Returns:
            str: Final compiled response
        """
        system_prompt = """
        You are an AI assistant that compiles information from multiple processing steps
        into a coherent, unified response. Your goal is to present the information clearly
        and directly address the user's original query.
        """
        steps_text = "\n\n".join(
            f"Step {i + 1} result: {res}" for i, res in enumerate(step_results)
        )
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Original query: {user_query}\n\nResults from steps:\n{steps_text}\n\nPlease provide a comprehensive, unified response to the original query."}
        ]
        return self._call_llm(messages)

    def solve(self, user_query):
        """
        Solve a task through multi-step reasoning

        Args:
            user_query (str): The user's request

        Returns:
            dict: A dictionary containing the original query, steps taken,
                  results of each step, and the final response
        """
        print(f"🤔 Analyzing task: {user_query}")
        steps = self.analyze_task(user_query)
        if not steps:
            return {"error": "Could not break down the task into steps"}

        print(f"📋 Breaking down into {len(steps)} steps:")
        for i, step in enumerate(steps):
            print(f"  {i + 1}. {step['description']}")

        step_results = []
        context = ""
        for i, step in enumerate(steps):
            print(f"\n⚙️ Executing step {i + 1}: {step['description']}")
            result = self.execute_step(step, context)
            step_results.append(result)
            context += f"\nStep {i + 1} result: {result}"
            print("  ✅ Completed")
            # Small delay between calls to avoid rate limits
            time.sleep(1)

        print("\n🔄 Compiling final response...")
        final_response = self.compile_results(step_results, user_query)

        return {
            "query": user_query,
            "steps": steps,
            "step_results": step_results,
            "final_response": final_response
        }


def main():
    """Main function to demonstrate the agent's capabilities"""
    # Initialize the agent with a capable model
    agent = Agent(model="openai/gpt-4")

    # Example query
    user_query = "Research and suggest three possible vacation destinations for a family with young children, considering budget-friendly options."

    # Solve the task
    result = agent.solve(user_query)

    # Print the final response
    print("\n" + "=" * 50)
    print("FINAL RESPONSE:")
    print("=" * 50)
    print(result["final_response"])


if __name__ == "__main__":
    main()
Run the agent example with:
python agent_example.py
When you run this script, you'll see the agent breaking down the task, executing each step, and compiling a final response. Here's what the output might look like:
🤔 Analyzing task: Research and suggest three possible vacation destinations for a family with young children, considering budget-friendly options.
📋 Breaking down into 6 steps:
1. Identify the important factors to consider when choosing a family vacation destination
2. Consider the interests and needs of young children
3. Determine the family's budget for the vacation
4. Research vacation destinations that match the identified factors and budget
5. Narrow down the list to three possible vacation destinations
6. Provide a brief overview for each suggested destination, including attractions, accommodations, and estimated cost
⚙️ Executing step 1: Identify the important factors to consider when choosing a family vacation destination
✅ Completed
⚙️ Executing step 2: Consider the interests and needs of young children
✅ Completed
⚙️ Executing step 3: Determine the family's budget for the vacation
✅ Completed
⚙️ Executing step 4: Research vacation destinations that match the identified factors and budget
✅ Completed
⚙️ Executing step 5: Narrow down the list to three possible vacation destinations
✅ Completed
⚙️ Executing step 6: Provide a brief overview for each suggested destination, including attractions, accommodations, and estimated cost
✅ Completed
🔄 Compiling final response...
==================================================
FINAL RESPONSE:
==================================================
For a family vacation with young children, three budget-friendly options could be Disney World in Florida, Yellowstone National Park in Wyoming, and San Diego, California.
Disney World is a classic choice, providing a magical experience for children. The park offers numerous entertainment options such as Magic Kingdom Park, Epcot, Disney's Animal Kingdom, and Disney's Hollywood Studios. The estimated cost for a family of four for a week, including park tickets, meals, and accommodation, may range from $3,500 to $6,000. You can choose to stay in one of Disney's resort hotels or opt for a vacation rental outside the park to save costs.
Alternatively, Yellowstone National Park allows your family to explore nature's wonders. Major attractions include wildlife viewing, geothermal features like Old Faithful, hiking trails, and ranger-led programs tailored for kids. Accommodation options range from campsites starting at $30 per night or lodges like the Old Faithful Inn. A week-long trip for a family of four, taking into account park admission, accommodation, and meals, could range from $1,000 - $4,000.
Lastly, San Diego offers a mix of city and marine life. You can visit the famous San Diego Zoo or the New Children's Museum, relax at the beach, or get adventurous at SeaWorld and Legoland. Family-friendly hotels like the Paradise Point Resort & Spa start from $200 per night, and vacation rentals in the city could be a more economical choice. A one-week stay's estimated cost may range between $2,500 -$4,500, inclusive of accommodation, food, and entry to various attractions.
These estimates serve as a rough guide and may vary based on factors like the time of travel, specific activities chosen, and the mode of transportation used. Regardless of the destination you choose, ensure that it aligns with your children's interests, has kid-friendly attractions and accommodation, food options that cater to their palate, and is safe and easily accessible. It's also essential to double-check any travel restrictions due to COVID-19.
As you can see, the agent has successfully:
- Broken down the complex query into logical steps
- Executed each step with careful consideration
- Compiled the information into a comprehensive, well-structured response
Understanding the Agentic Workflow
Let's break down what's happening in our agent_example.py:
- Task Planning: The agent uses a "task planner" persona to break down the user's request into discrete steps.
- Structured Output: The planning phase returns a JSON structure with clear steps and reasoning.
- Sequential Execution: Each step is executed in order, building on the context of previous steps.
- Progressive Context: Each step's result is added to the context for future steps.
- Final Synthesis: All results are compiled into a coherent, comprehensive response.
This approach has several advantages:
- Complex Problem Solving: Breaking down complex problems into manageable steps.
- Improved Reasoning: Each step can focus on a specific aspect of the problem.
- Better Explainability: The process is transparent and the reasoning is visible.
- Flexibility: You can swap out models for different steps based on their strengths.
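That last point is worth making concrete: because OpenRouter uses one API for every model, routing different workflow phases to different models comes down to choosing a string. A minimal sketch of that idea (the `STEP_MODELS` mapping and its model IDs are illustrative assumptions, not part of the agent code above):

```python
# Hypothetical mapping of workflow phases to OpenRouter model IDs;
# adjust to whichever models your account can access.
STEP_MODELS = {
    "planning": "openai/gpt-4",           # strongest reasoning for task breakdown
    "execution": "openai/gpt-3.5-turbo",  # cheaper model for routine steps
    "synthesis": "openai/gpt-4",          # strong model for the final compile
}

def model_for(step_type):
    """Pick a model for a workflow phase, defaulting to the execution model."""
    return STEP_MODELS.get(step_type, STEP_MODELS["execution"])

print(model_for("planning"))
print(model_for("unknown"))  # falls back to the execution model
```

You could wire this into the `Agent` class by passing `model_for(...)` into each `_call_llm` request instead of a single fixed `self.model`.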
Practical Applications
This agentic workflow pattern can be applied to many real-world scenarios:
- Research Assistants: Breaking down research questions into investigation steps
- Content Creation: Planning, researching, drafting, and refining content
- Data Analysis: Processing, analyzing, and interpreting data
- Customer Support: Diagnosing issues, finding solutions, and providing explanations
- Decision Support: Analyzing options, weighing pros and cons, and making recommendations
Next Steps
Here are some ways to extend this project:
- Add Memory: Implement a vector database to give your agent long-term memory.
- Add Tools: Enable your agent to use tools (like web search, calculators, etc.).
- Optimize Cost: Implement a model router that uses cheaper models for simpler tasks.
- Improve Error Handling: Add retry logic and better error handling.
- Add User Feedback: Implement a feedback loop to improve the agent's responses.
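The "Optimize Cost" idea can start very simply: send short, low-stakes prompts to a cheap model and reserve the expensive one for complex requests. Here's a rough heuristic sketch; the threshold, keywords, and model IDs are all assumptions you would tune for your workload:

```python
CHEAP_MODEL = "mistralai/mistral-7b-instruct-v0.2"  # assumed low-cost option
STRONG_MODEL = "openai/gpt-4"                       # assumed high-capability option

def route_model(prompt, complexity_threshold=200):
    """Crude router: long prompts or analysis-style keywords get the strong model."""
    keywords = ("analyze", "compare", "plan", "multi-step")
    if len(prompt) > complexity_threshold or any(k in prompt.lower() for k in keywords):
        return STRONG_MODEL
    return CHEAP_MODEL

print(route_model("What's the capital of France?"))          # cheap model
print(route_model("Analyze these quarterly sales figures"))  # strong model
```

A production router would likely use a classifier or past-performance data rather than string heuristics, but even this naive version can cut costs noticeably.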
Conclusion
You've built a complete agentic AI workflow using the OpenRouter API! This approach lets you create more sophisticated AI applications that break complex problems down into manageable steps.
The key benefits of using OpenRouter for your agentic workflow include:
- Model Flexibility: Easily switch between models from different providers.
- Cost Optimization: Use the most cost-effective model for each task.
- Redundancy: If one provider is unavailable, your application can fall back to others.
- Simplified Integration: One API for many models means less code to maintain.
You now have the foundation to build more sophisticated AI agents that can solve complex problems through multi-step reasoning. The possibilities are endless!
Get the complete code on GitHub.
Happy building! 🚀
Open for Projects
I'm currently available to take on new projects in the following areas:
- Artificial Intelligence solutions (both no-code and custom development)
- No-code automation with n8n (and open to other automation platforms)
- React.js frontend development
- Node.js backend/API development
- WooCommerce development and customization
- Stripe payment integration and automation
- PHP applications and frameworks
- Python development
- Supabase, Vercel & GitHub integration
My Expertise
I'm a Senior Web Developer with growing expertise in AI/ML solutions, passionate about creating practical applications that leverage artificial intelligence to solve real-world problems. While relatively new to AI/ML development (less than a year of focused experience), I've quickly built a portfolio of functional projects that demonstrate my ability to integrate AI capabilities into useful applications. My specialized skills include:
- AI Integration: Connecting pre-trained AI models with web applications through APIs and direct implementation
- Computer Vision & NLP: Implementing image captioning, sentiment analysis, text summarization, chatbots, and language translation applications
- Agentic AI Workflows: Creating intelligent autonomous agents that can execute complex tasks through multi-step reasoning
- Full-Stack Development: Crafting seamless experiences with React.js frontends and Python/Flask or Node.js backends
- E-commerce Solutions: Expert in WooCommerce/Stripe integrations with subscription management and payment processing
- Automation Tools: Python scripts and n8n workflows for business-critical processes and data pipelines
- Content Automation: Creating AI-powered systems that generate complete content packages from blog posts to social media updates
Featured Projects
Personal AI Chatbot - A complete conversational AI application built with React and Flask, powered by Microsoft's DialoGPT-medium model from Hugging Face. This project demonstrates how to create an interactive chatbot with a clean, responsive interface that understands and generates human-like text responses.
Image Captioning App - A full-stack application that generates descriptive captions for uploaded images using AI. Built with React for the frontend and Flask for the backend, this app leverages Salesforce's BLIP model via Hugging Face's transformers library to analyze images and create natural language descriptions of their content.
Sentiment Analysis App - A lightweight full-stack application that performs sentiment analysis on user-provided text using React.js for the frontend and Flask with Hugging Face Transformers for the backend. This project demonstrates how easily powerful pre-trained NLP models can be integrated into modern web applications.
Agentic AI Workflow - A Python-based framework for building intelligent AI agents that can break down complex tasks into manageable steps and execute them sequentially. This project demonstrates how to leverage OpenRouter API to access multiple AI models (OpenAI, Anthropic, Google, etc.) through a unified interface, enabling more sophisticated problem-solving capabilities and better reasoning in AI applications.
WiseCashAI - A revolutionary privacy-first financial management platform that operates primarily in your browser, ensuring your sensitive financial data never leaves your control. Unlike cloud-based alternatives that collect and monetize your information, WiseCashAI offers AI-powered features like intelligent transaction categorization, envelope-based budgeting, and goal tracking while keeping your data local. Optional Google Drive integration with end-to-end encryption provides cross-device access without compromising privacy.
Content Automation Workflow Pro - AI-powered content generation system that transforms content creation with a single command. This Python-based workflow leverages OpenRouter and Replicate to generate SEO-optimized blog posts, custom thumbnail images, and platform-specific social media posts across 7+ platforms, reducing content creation time from hours to minutes.
Stripe/WooCommerce Integration Tools:
- Stripe Validator Tool - Cross-references WooCommerce subscription data with the Stripe API to prevent payment failures (78% reduction in failures)
- Invoice Notifier System - Automatically identifies overdue invoices and sends strategic payment reminders (64% reduction in payment delays)
- WooCommerce Bulk Refunder - Python script for efficiently processing bulk refunds with direct payment gateway API integration
Open-Source AI Mini Projects
I'm actively developing open-source AI applications that solve real-world problems:
- Image Captioning App - Generates descriptive captions for images using Hugging Face's BLIP model
- AI Resume Analyzer - Extracts key details from resumes using BERT-based NER models
- Document Summarizer - Creates concise summaries from lengthy documents using BART models
- Multilingual Translator - Real-time translation tool supporting multiple language pairs
- Toxic Comment Detector - Identifies harmful or offensive language in real-time
- Recipe Finder - AI-powered tool that recommends recipes based on available ingredients
- Personal AI Chatbot - Customizable chat application built with DialoGPT
All these projects are available on my GitHub with full source code.
Development Philosophy
I believe in creating technology that empowers users without compromising their privacy or control. My projects focus on:
- Privacy-First Design: Keeping sensitive data under user control by default
- Practical AI Applications: Leveraging AI capabilities to solve real-world problems
- Modular Architecture: Building systems with clear separation of concerns for better maintainability
- Accessibility: Making powerful tools available to everyone regardless of technical expertise
- Open Source: Contributing to the community and ensuring transparency
Technical Articles & Tutorials
I regularly share detailed tutorials on AI development, automation, and integration solutions:
- Building a Personal AI Chatbot with React and Flask - Complete guide to creating a conversational AI application
- Building an Image Captioning App with React, Flask and BLIP - Learn how to create a computer vision application that generates natural language descriptions of images
- Building a Sentiment Analysis App with React and Flask - Step-by-step guide to creating a full-stack NLP application
- Creating an Agentic AI Workflow with OpenRouter - Tutorial on building intelligent AI agents
- Getting Started with Content Automation Workflow Pro - Comprehensive guide to automated content creation
- Building Privacy-First AI Applications - Techniques for implementing AI features while respecting user privacy
I specialize in developing practical solutions that leverage AI and automation to solve real business problems and deliver measurable results. Find my tutorials on DEV.to and premium tools in my Gumroad store.
If you have a project involving e-commerce, content automation, financial tools, or custom AI applications, feel free to reach out directly at [email protected].