Introduction
This document (notebook) shows how to do function calling workflows with OpenAI’s Large Language Models (LLMs).
The Raku package “WWW::OpenAI”, [AAp1], is used.
Outline of the overall process
The overall process is (supposed to be) simple:
1. Implement a “tool”, i.e. a function/sub.
   - The tool is capable of performing (say, quickly and reliably) certain tasks.
   - More than one tool can be specified.
2. Describe the tool(s) using a certain JSON format.
   - The JSON description is to be “understood” by the LLM.
   - JSON Schema is used for the arguments.
   - Using the description, the LLM figures out when to make requests for computations with the tool and with what parameters and corresponding values.
3. Make a first call to the LLM using suitably composed messages that have the tool JSON description(s).
4. Examine the response of the LLM:
   - If the response indicates that the (local) tool has to be evaluated:
     - Process the tool names and corresponding parameters.
     - Make a new message with the tool result(s).
     - Send the messages to the LLM.
     - Go to step 4.
   - Otherwise, give that “final” response.
(Currently) OpenAI indicates its tool evaluation requests by placing finish_reason => tool_calls in its responses.
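With the “raku” output format of “WWW::OpenAI” used below, that condition can be checked directly in the parsed result. A minimal sketch, assuming $response holds a chat completion result like the one shown later in this document:

if $response[0]<finish_reason> eq 'tool_calls' {
    # The LLM requests one or more local tool evaluations
}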
Diagram
Here is a Mermaid-JS diagram that shows single-pass LLM-and-tool interaction:

[Diagram: user message and tool specification(s) → LLM → tool-call request → local tool evaluation → tool result message → LLM → final answer]
Remark: Instead of a loop — as in the outline above — only one invocation of a local tool is shown in the diagram.
Examples and big picture
The rest of the document gives concrete code showing how to do function calling with OpenAI’s LLMs using Raku.
There are similar workflows with other LLM providers, like Google’s Gemini, [AAp2]. They follow the same structure, although there are some small differences. (Say, in the actual specifications of tools.)
It would be nice to have:
- A universal programming interface for those function calling interfaces.
- Facilitation of deriving tool descriptions:
  - Via Raku’s introspection (sketched below) or using suitable LLM prompts.
  - “LLM::Functions”, [AAp3], can be used for both approaches.
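As an illustration of the introspection route, here is a minimal sketch that derives a JSON-schema-like parameter description from a sub’s signature. (The sub signature-to-schema is hypothetical, written for this illustration; it is not part of “LLM::Functions”.)

use JSON::Fast;

# Map basic Raku types to JSON Schema type names; default to 'string'
my %schema-type = Str => 'string', Int => 'integer', Num => 'number', Bool => 'boolean';

# Hypothetical helper: derive an object schema from a sub's positional parameters
sub signature-to-schema(&func) {
    my %properties;
    my @required;
    for &func.signature.params.grep(*.positional) -> $p {
        my $name = $p.name.substr(1);    # drop the '$' sigil
        %properties{$name} = { type => %schema-type{$p.type.^name} // 'string' };
        @required.push($name) unless $p.optional;    # parameters with defaults are optional
    }
    return { type => 'object', :%properties, :@required };
}

sub get-current-weather(Str $location, Str $unit = "fahrenheit") { ... }    # stub for illustration

say to-json(signature-to-schema(&get-current-weather));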
This document belongs to a collection of documents describing how to do LLM function calling with Raku.
Setup
Load packages:
use WWW::OpenAI;
use JSON::Fast;
Choose a model:
my $model = "gpt-4.1";
Workflow
Define a local function
This is the “tool” to be communicated to OpenAI, i.e., the local function/sub:
sub get-current-weather(Str $location, Str $unit = "fahrenheit") returns Str {
    return "It is currently sunny in $location with a temperature of 72 degrees $unit.";
}
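The tool can be tested locally:

say get-current-weather('Boston, MA');
# It is currently sunny in Boston, MA with a temperature of 72 degrees fahrenheit.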
Define the function specification (as prescribed in OpenAI’s function calling documentation):
my $function-spec = {
    type => "function",
    function => {
        name => "get-current-weather",
        description => "Get the current weather for a given location",
        parameters => {
            type => "object",
            properties => {
                # The property names include the '$' sigil;
                # the LLM echoes them verbatim in its tool-call arguments (see the response below)
                '$location' => {
                    type => "string",
                    description => "The city and state, e.g., San Francisco, CA"
                },
                '$unit' => {
                    type => "string",
                    enum => ["celsius", "fahrenheit"],
                    description => "The temperature unit to use"
                }
            },
            required => ['$location']    # matches the '$location' property name above
        }
    }
};
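To inspect the JSON that this specification corresponds to, it can be serialized with “JSON::Fast” (a sanity check, not a required step):

say to-json($function-spec);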
First communication with OpenAI
Initialize messages and tools:
my @messages =
    {role => "system", content => "You are a helpful assistant that can provide weather information."},
    {role => "user", content => "What's the weather in Boston, MA?"};

my @tools = [$function-spec,];
Send the first chat completion request:
my $response = openai-chat-completion(
    @messages,
    :@tools,
    :$model,
    max-tokens => 4096,
    format => "raku",
    temperature => 0.45
);
# [{finish_reason => tool_calls, index => 0, logprobs => (Any), message => {annotations => [], content => (Any), refusal => (Any), role => assistant, tool_calls => [{function => {arguments => {"$location":"Boston, MA"}, name => get-current-weather}, id => call_ROi3n0iICSrGbetBKZ9KVG4E, type => function}]}}]
Refine the response with function calls
The following copy of the messages is not required, but it makes repeated experiments easier:
my @messages2 = @messages;
Process the response — invoke the tool, give the tool result to the LLM, get the LLM answer:
my $assistant-message = $response[0]<message>;

if $assistant-message<tool_calls> {
    # Record the assistant's tool-call request in the conversation
    @messages2.push: {
        role => "assistant",
        tool_calls => $assistant-message<tool_calls>
    };

    my $tool-call = $assistant-message<tool_calls>[0];
    my $function-name = $tool-call<function><name>;
    my $function-args = from-json($tool-call<function><arguments>);

    if $function-name eq "get-current-weather" {
        my $result = get-current-weather(
            $function-args{'$location'} // $function-args<location>,
            $function-args{'$unit'} // $function-args<unit> // "fahrenheit"
        );

        @messages2.push: {
            role => "tool",
            content => $result,
            tool_call_id => $tool-call<id>
        };

        # Send the second request with the function result
        my $final-response = openai-chat-completion(
            @messages2,
            :@tools,
            #tool_choice => "auto",
            :$model,
            format => "raku"
        );

        say "Assistant: $final-response[0]<message><content>";
    }
} else {
    say "Assistant: $assistant-message<content>";
}
# Assistant: The weather in Boston, MA is currently sunny with a temperature of 72
Show all messages:
.say for @messages2
# {content => You are a helpful assistant that can provide weather information., role => system}
# {content => What's the weather in Boston, MA?, role => user}
# {role => assistant, tool_calls => [{function => {arguments => {"$location":"Boston, MA"}, name => get-current-weather}, id => call_ROi3n0iICSrGbetBKZ9KVG4E, type => function}]}
# {content => It is currently sunny in Boston, MA with a temperature of 72 degrees fahrenheit., role => tool, tool_call_id => call_ROi3n0iICSrGbetBKZ9KVG4E}
In general, there should be an evaluation loop that checks the finish reason(s) in the LLM answers and invokes the tools as many times as required. (I.e., there might be several back-and-forth exchanges with the LLM, requiring different tools or different tool parameters.)
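Here is a minimal sketch of such a loop, assuming the single tool defined above and the “raku” output format (tool routing and error handling are elided):

my @msgs = @messages;
my $answer;

loop {
    my $response = openai-chat-completion(@msgs, :@tools, :$model, format => "raku");
    my $message = $response[0]<message>;

    if $response[0]<finish_reason> eq 'tool_calls' {
        # Record the assistant's request, then evaluate every requested tool call
        @msgs.push: { role => "assistant", tool_calls => $message<tool_calls> };
        for @($message<tool_calls>) -> $tool-call {
            my %args = from-json($tool-call<function><arguments>);
            my $result = get-current-weather(
                %args{'$location'} // %args<location>,
                %args{'$unit'} // %args<unit> // "fahrenheit"
            );
            @msgs.push: { role => "tool", content => $result, tool_call_id => $tool-call<id> };
        }
    } else {
        # No further tool requests: take the final answer and stop
        $answer = $message<content>;
        last;
    }
}

say "Assistant: $answer";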

References
[AAp1] Anton Antonov, WWW::OpenAI Raku package, (2023-2025), GitHub/antononcube.
[AAp2] Anton Antonov, WWW::Gemini Raku package, (2023-2025), GitHub/antononcube.
[AAp3] Anton Antonov, LLM::Functions Raku package, (2023-2025), GitHub/antononcube.