Alain Airom

AI Agent using Function Calling with Ollama, Granite and Terraform!

An adaptation of “Your First AI Agent using Function Calling”, written and published by “LM Pro” 👏

Introduction

This post draws significant inspiration from the article cited above, offering a practical demonstration of how generative AI can be leveraged for natural language conversations, intelligent agent actions, and the automation of tasks within the Infrastructure as Code (IaC) domain. It highlights the exciting potential of combining AI capabilities with operational workflows for simplified and efficient management.

To adapt the provided code to my specific use case, I run IBM Granite 3.2 locally through Ollama. This setup allows the AI agent to execute a bash script that calls the Terraform CLI, enabling it to initialize and apply infrastructure as code as an automated task.
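
For reference, the model used later in “main.py” (granite3.2:latest) must already be available locally. Assuming a standard Ollama installation listening on its default port, it can be pulled like this:

ollama pull granite3.2
ollama list   # verify that granite3.2:latest appears among the local models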

Implementation

Once again, all the merit goes to the original contributor; I simply adapted the code to my use case.

uv init agent

Once you have your Python project structure (in this case, an “agent” folder), run the following inside that folder to add the dependencies.

uv add python-dotenv ollama

You will obtain the following “pyproject.toml” file.

[project]
name = "agent"
version = "0.1.0"
description = "AI Agent for efficient IAC automation, using Ollama and Granite"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "python-dotenv>=0.23.5",
    "ollama", # Added for Ollama integration
]

Create a sub-folder to hold your HCL script and a bash file that drives the Terraform CLI.

mkdir test
cd test

Inside “test”, create a “main.tf” file:

# main.tf :) 
resource "local_file" "example" {
  content  = "Hello Terraform!"
  filename = "hello_terraform.txt"
}

And a “tf.sh” script:

#!/bin/bash
# tf.sh
terraform init
# -auto-approve lets the script run non-interactively from the agent;
# a plain "terraform apply" would block waiting for manual approval.
terraform apply -auto-approve

For simplicity (and ignoring security best practices), run the following command to make the bash file executable.

chmod +x tf.sh
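
Before wiring the script into the agent, it may help to verify the Terraform setup by hand (a quick sanity check, not part of the original walkthrough; run from the project root):

cd test
./tf.sh                    # initializes the local provider and applies the config
cat hello_terraform.txt    # should print "Hello Terraform!"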

This is the “tools.py” file adapted to my use case.

# tools.py
from pathlib import Path
import os
import subprocess # New import for running bash commands

# Define the base directory for file operations
base_dir = Path("./test")

def read_file(name: str) -> str:
    """Reads and returns the content of a file in the test directory.

    Args:
        name (str): The name of the file to read (relative to test directory).

    Returns:
        str: The file's content or an error message if the file cannot be read.
    """
    print(f"(read_file {name})")
    try:
        file_path = base_dir / name
        if not file_path.is_file():
            return f"Error: '{name}' is not a file or does not exist."
        with open(file_path, "r", encoding="utf-8") as f:
            return f.read()
    except Exception as e:
        return f"Error: Failed to read '{name}': {e}"

def list_files() -> list[str]:
    """Lists all files in the test directory and its subdirectories.

    Returns:
        list[str]: A list of file names relative to the test directory.
    """
    print("(list_files)")
    try:
        return [str(item.relative_to(base_dir)) for item in base_dir.rglob("*") if item.is_file()]
    except Exception as e:
        return [f"Error: Failed to list files: {e}"]

def rename_file(name: str, new_name: str) -> str:
    """Renames a file in the test directory.

    Args:
        name (str): The current name of the file (relative to test directory).
        new_name (str): The new name for the file (relative to test directory).

    Returns:
        str: A success message or an error message if the operation fails.
    """
    print(f"(rename_file {name} -> {new_name})")
    try:
        old_path = base_dir / name
        new_path = base_dir / new_name

        if not old_path.is_file():
            return f"Error: '{name}' does not exist or is not a file."
        # Resolve to absolute paths so "../" components cannot escape the test directory
        if not new_path.resolve().is_relative_to(base_dir.resolve()):
            return "Error: New path is outside the test directory."

        os.makedirs(new_path.parent, exist_ok=True) # Ensure target directory exists
        os.rename(old_path, new_path)
        return f"File '{name}' successfully renamed to '{new_name}'."
    except Exception as e:
        return f"Error: Failed to rename '{name}' to '{new_name}': {e}"

def write_file(name: str, content: str) -> str:
    """Writes content to a file in the test directory. Creates the file if it doesn't exist,
    or overwrites it if it does.

    Args:
        name (str): The name of the file to write (relative to test directory).
        content (str): The content to write to the file.

    Returns:
        str: A success message or an error message if the operation fails.
    """
    print(f"(write_file {name})")
    try:
        file_path = base_dir / name
        # Resolve to an absolute path so "../" components cannot escape the test directory
        if not file_path.resolve().is_relative_to(base_dir.resolve()):
            return "Error: Cannot write file outside the test directory."

        os.makedirs(file_path.parent, exist_ok=True) # Ensure target directory exists
        with open(file_path, "w", encoding="utf-8") as f:
            f.write(content)
        return f"Content successfully written to '{name}'."
    except Exception as e:
        return f"Error: Failed to write to '{name}': {e}"

def run_bash_command(command: str) -> str:
    """Executes a bash command within the test directory.

    Args:
        command (str): The bash command to execute.

    Returns:
        str: The stdout and stderr of the command, or an error message.
    """
    print(f"(run_bash_command: {command})")
    try:
        # Change to the test directory before running the command
        # This ensures terraform commands operate on files within 'test/'
        result = subprocess.run(
            command,
            shell=True,
            check=True, # Raise an exception for non-zero exit codes
            capture_output=True,
            text=True,
            cwd=base_dir # Run the command from the base_dir (./test)
        )
        output = f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
        return output
    except subprocess.CalledProcessError as e:
        return f"Error executing command: '{e.cmd}'\nSTDOUT:\n{e.stdout}\nSTDERR:\n{e.stderr}"
    except Exception as e:
        return f"An unexpected error occurred while running bash command: {e}"

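
As a quick smoke test (my own addition, not in the original article), the tools can be exercised directly from a Python shell in the project root:

# Hypothetical smoke test for tools.py, run from the project root:
from tools import list_files, run_bash_command

print(list_files())                            # e.g. ['main.tf', 'tf.sh']
print(run_bash_command("terraform -version"))  # confirms the Terraform CLI is on PATH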

Followed by the adapted “main.py” 🐍

# main.py
import os
import json
import re
from dotenv import load_dotenv
import ollama

# Import the updated file management tools, including the new ones
from tools import read_file, list_files, rename_file, write_file, run_bash_command

# Load environment variables from .env
load_dotenv()

# --- Ollama Model Configuration ---
OLLAMA_MODEL = "granite3.2:latest"
OLLAMA_HOST = "http://localhost:11434"

try:
    client = ollama.Client(host=OLLAMA_HOST)
    # Ping the server to ensure connectivity
    client.list() # This will raise an error if connection fails
except Exception as e:
    print(f"Error initializing Ollama client or connecting to server: {e}")
    print(f"Please ensure Ollama server is running at {OLLAMA_HOST} and model '{OLLAMA_MODEL}' is available.")
    exit(1)


# --- Define available tools for the LLM to understand ---
# This is crucial for prompt engineering to enable "function calling" with Ollama.
# We instruct the LLM to output a specific JSON format when it wants to call a tool.
TOOLS_DEFINITION = """
Available tools:
1. read_file(name: str) -> str: Reads and returns the content of a file in the test directory.
   Example usage: CALL_TOOL: {"tool_name": "read_file", "args": {"name": "abc.txt"}}
2. list_files() -> list[str]: Lists all files in the test directory and its subdirectories.
   Example usage: CALL_TOOL: {"tool_name": "list_files", "args": {}}
3. rename_file(name: str, new_name: str) -> str: Renames a file in the test directory.
   Example usage: CALL_TOOL: {"tool_name": "rename_file", "args": {"name": "old.txt", "new_name": "new.txt"}}
4. write_file(name: str, content: str) -> str: Writes content to a file in the test directory. Creates the file if it doesn't exist, or overwrites it.
   Example usage: CALL_TOOL: {"tool_name": "write_file", "args": {"name": "main.tf", "content": "resource \\"local_file\\" \\"example\\" {\\n  content = \\"Hello Terraform!\\"\\n  filename = \\"example.txt\\"\\n}"}}
5. run_bash_command(command: str) -> str: Executes a bash command within the test directory. Use this for commands like 'terraform init' or 'terraform apply'.
   Example usage: CALL_TOOL: {"tool_name": "run_bash_command", "args": {"command": "terraform init"}}

When you need to perform an action using a tool, your entire response MUST be a JSON string in the format:
CALL_TOOL: {"tool_name": "<tool_name>", "args": {<arguments>}}.
Do NOT include any other text if you are calling a tool.
If you are responding to the user directly (not calling a tool), do NOT use the CALL_TOOL format.
"""

# The system prompt guides the LLM's behavior and informs it about the available tools.
SYSTEM_PROMPT = (
    "You are an experienced programmer tasked with managing files in a test directory, "
    "including HCL Terraform files. You can list files, read their contents, "
    "write new files, rename them, and execute bash commands. "
    "Provide clear, concise responses and handle errors gracefully. "
    "For Terraform tasks, remember to first write the .tf file, then run 'terraform init', and then 'terraform apply -auto-approve'. "
    f"{TOOLS_DEFINITION}" # Include the tool definitions in the system prompt
)

# Cache to store file contents to avoid redundant reads
file_cache = {}

def run_ollama_agent(user_input: str) -> tuple[str, dict | None]:
    """
    Interacts with the Ollama model and executes tools based on its response.
    Returns a tuple: (AI_response_string, tool_execution_info_dict_or_None)
    - AI_response_string: The response to be displayed to the user.
    - tool_execution_info_dict_or_None: A dictionary containing details about the
      executed tool (name, args, result) if a tool was called, otherwise None.
    """
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input}
    ]

    try:
        # Call the Ollama model with the chat history
        response = client.chat(model=OLLAMA_MODEL, messages=messages)
        llm_response_content = response['message']['content'].strip()

        # Check if the LLM's response indicates a tool call
        if llm_response_content.startswith("CALL_TOOL:"):
            try:
                # Parse the JSON string for tool details
                tool_call_str = llm_response_content.replace("CALL_TOOL:", "").strip()
                tool_call_data = json.loads(tool_call_str)
                tool_name = tool_call_data.get("tool_name")
                tool_args = tool_call_data.get("args", {})

                tool_output = ""
                tool_info = {"tool_name": tool_name, "args": tool_args}

                # Execute the appropriate tool function
                if tool_name == "read_file":
                    tool_output = read_file(**tool_args)
                    tool_info["result"] = tool_output
                elif tool_name == "list_files":
                    tool_output = list_files(**tool_args)
                    tool_info["result"] = tool_output
                elif tool_name == "rename_file":
                    tool_output = rename_file(**tool_args)
                    tool_info["result"] = tool_output
                elif tool_name == "write_file":
                    tool_output = write_file(**tool_args)
                    tool_info["result"] = tool_output
                elif tool_name == "run_bash_command":
                    tool_output = run_bash_command(**tool_args)
                    tool_info["result"] = tool_output
                else:
                    tool_output = f"Error: The AI requested an unknown tool: '{tool_name}'."
                    tool_info = None

                return tool_output, tool_info
            except json.JSONDecodeError:
                return f"Error: The AI's tool call format was invalid. Response: {llm_response_content}", None
            except Exception as e:
                return f"Error executing tool '{tool_name}': {e}", None
        else:
            return llm_response_content, None

    except ollama.ResponseError as e:
        return f"Error communicating with Ollama server: {e}. Please check if Ollama is running and the model '{OLLAMA_MODEL}' is downloaded.", None
    except Exception as e:
        return f"An unexpected error occurred: {e}", None

def main():
    print("AI File Management Agent (using Ollama) - Enter commands to manage files in the test directory.")
    print("Available commands: list files, read <file>, rename <old_name> to <new_name>, write <file> <content>, run <command>, exit")

    while True:
        user_input = input("\nEnter your command (or 'exit' to quit): ").strip()
        if user_input.lower() == 'exit':
            print("Exiting AI File Management Agent.")
            break

        # --- Cache Lookup Logic (remains largely the same) ---
        cache_hit = False
        if any(keyword in user_input.lower() for keyword in ["read", "content", "function"]):
            match = re.search(r"read\s+(?:the\s+content\s+of\s+)?([\w\d\._-]+)", user_input, re.IGNORECASE)
            if match:
                file_name_for_lookup = match.group(1)
                if file_name_for_lookup in file_cache:
                    print(f"(cache_hit {file_name_for_lookup})")
                    print(f"AI Response: Cached content for '{file_name_for_lookup}':\n{file_cache[file_name_for_lookup]}")
                    cache_hit = True

        if cache_hit:
            continue

        # --- Call Ollama Agent ---
        ai_response, tool_info = run_ollama_agent(user_input)
        print("AI Response:", ai_response)

        # --- Cache Update Logic (updated for write_file and rename_file) ---
        if tool_info:
            if tool_info["tool_name"] == "read_file":
                file_name = tool_info["args"].get("name")
                content = tool_info["result"]
                if file_name and content and not content.startswith("Error:"):
                    file_cache[file_name] = content
                    print(f"(cache_updated {file_name})")
            elif tool_info["tool_name"] == "rename_file":
                old_name = tool_info["args"].get("name")
                if old_name in file_cache:
                    del file_cache[old_name]
                    print(f"(cache_removed {old_name})")
            elif tool_info["tool_name"] == "write_file":
                file_name = tool_info["args"].get("name")
                # When a file is written, its content in the cache might be stale or non-existent.
                # It's safer to remove it from cache so a subsequent 'read' forces a fresh read from disk.
                if file_name in file_cache:
                    del file_cache[file_name]
                    print(f"(cache_invalidated {file_name} due to write)")


if __name__ == "__main__":
    main()
    print("AI File Management Agent (using Ollama) - Done.")
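
A small design note: the if/elif chain in run_ollama_agent grows with every new tool. A possible simplification (my own sketch, not part of the original code) is a registry dict that maps tool names to functions:

# Hypothetical refactor of the dispatch block inside run_ollama_agent:
TOOL_REGISTRY = {
    "read_file": read_file,
    "list_files": list_files,
    "rename_file": rename_file,
    "write_file": write_file,
    "run_bash_command": run_bash_command,
}

tool_fn = TOOL_REGISTRY.get(tool_name)
if tool_fn is None:
    return f"Error: The AI requested an unknown tool: '{tool_name}'.", None
tool_output = tool_fn(**tool_args)
tool_info["result"] = tool_output
return tool_output, tool_info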

Now we can test the agent 🤖

uv run main.py

The output (as expected) 😁

uv run main.py
AI File Management Agent (using Ollama) - Enter commands to manage files in the test directory.
Available commands: list files, read <file>, rename <old_name> to <new_name>, write <file> <content>, run <command>, exit

Enter your command (or 'exit' to quit): lits test
(list_files)
AI Response: ['main-old.tf','tf.sh']

Enter your command (or 'exit' to quit): run tf.sh
(run_bash_command: ./tf.sh)
AI Response: Error executing command: './tf.sh'


Conclusion

This article demonstrates the combined power of generative AI, particularly large language models (LLMs), and practical AI agents. Together they can automate and simplify complex tasks, notably infrastructure automation through HCL Terraform scripts. The approach showcased here augments and assists operations teams, helping them benefit from modern tooling and substantially improve their productivity.
