
@agentica/core

The simplest Agentic AI library, specialized in LLM Function Calling.

@agentica/core is an agent library built on the LLM function calling feature. Functions are supplied from Swagger/OpenAPI documents or TypeScript class types, and are enhanced by a compiler and validation-feedback strategy.

With these strategies, you can build an Agentic AI chatbot with nothing but Swagger documents or TypeScript class types. The complex agent workflows and graphs required in conventional AI agent development are unnecessary in @agentica/core. Just list your functions, and @agentica/core handles everything through function calling.

Look at the demonstration below to see how easy and powerful @agentica/core is. Users can search for and purchase products through conversation alone; the backend APIs and TypeScript class functions are called appropriately by LLM function calling.

src/main.ts
import { Agentica, assertHttpController } from "@agentica/core";
import { AgenticaPgVectorSelector } from "@agentica/pg-vector-selector";
import OpenAI from "openai";
import typia from "typia";

// ShoppingCounselor, ShoppingPolicy and ShoppingSearchRag are
// user-defined TypeScript classes serving the chatbot's features.
const main = async (): Promise<void> => {
  const agent = new Agentica({
    model: "chatgpt",
    vendor: {
      api: new OpenAI({ apiKey: "*****" }),
      model: "gpt-4o-mini",
    },
    controllers: [
      assertHttpController({
        model: "chatgpt",
        name: "shopping",
        document: await fetch(
          "https://shopping-be.wrtn.ai/editor/swagger.json",
        ).then((r) => r.json()),
        connection: {
          host: "https://shopping-be.wrtn.ai",
          headers: {
            Authorization: "Bearer *****",
          },
        },
      }),
      {
        protocol: "class",
        name: "counselor",
        application: typia.llm.application<ShoppingCounselor, "chatgpt">(),
        execute: new ShoppingCounselor(),
      },
      {
        protocol: "class",
        name: "policy",
        application: typia.llm.application<ShoppingPolicy, "chatgpt">(),
        execute: new ShoppingPolicy(),
      },
      {
        protocol: "class",
        name: "rag",
        application: typia.llm.application<ShoppingSearchRag, "chatgpt">(),
        execute: new ShoppingSearchRag(),
      },
    ],
    config: {
      executor: {
        select: AgenticaPgVectorSelector.boot<"chatgpt">(
          "https://your-connector-hive-server.com",
        ),
      },
    },
  });
  await agent.conversate("I wanna buy MacBook Pro");
};
main().catch(console.error);

Setup

Terminal
npm install @agentica/core @samchon/openapi typia
npx typia setup

To install @agentica/core, you also have to install @samchon/openapi and typia.

@samchon/openapi is an OpenAPI specification library that converts Swagger/OpenAPI documents into LLM function calling schemas. typia is a transformer (compiler) library that composes an LLM function calling schema from a TypeScript class type.

As typia is a transformer library that analyzes TypeScript source code at the compilation level, it needs the additional setup command npx typia setup. If you're using a non-standard TypeScript compiler (anything other than tsc) or developing the agent in a frontend environment, you have to set up @ryoppippi/unplugin-typia following its guide.
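To make the relationship between the two libraries concrete, here is a rough sketch of the kind of LLM function calling schema they produce from a Swagger operation or a class method. The shape below is simplified and illustrative, following the common JSON-Schema-based convention; it is not the libraries' exact output type (the real output is richer, e.g. `ILlmApplication`).

```typescript
// Illustrative only: a simplified, JSON-Schema-style function calling
// schema, similar in spirit to what @samchon/openapi and typia generate
// from an API operation or class method.
interface FunctionSchema {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: Record<string, { type: string; description?: string }>;
    required: string[];
  };
}

const createArticle: FunctionSchema = {
  name: "create",
  description: "Create a new article.",
  parameters: {
    type: "object",
    properties: {
      title: { type: "string", description: "Title of the article" },
      body: { type: "string", description: "Content body of the article" },
    },
    required: ["title", "body"],
  },
};
```

The point of the compiler approach is that this schema is derived from your types automatically, so it never drifts out of sync with the code.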

Facade Controller


@agentica/core/Agentica
/**
 * Agentica A.I. chatbot agent.
 *
 * `Agentica` is a facade class for the super A.I. chatbot agent
 * which performs the {@link conversate user's conversation function}
 * with LLM (Large Language Model) function calling and manages the
 * {@link getHistories prompt histories}.
 *
 * To understand and compose the `Agentica` class exactly, reference
 * the types below, concentrating on their documentation comments.
 * Especially, be careful about the {@link IAgenticaProps} type,
 * which is used in the {@link constructor} function.
 *
 * - Constructors
 *   - {@link IAgenticaProps}
 *   - {@link IAgenticaVendor}
 *   - {@link IAgenticaController}
 *   - {@link IAgenticaConfig}
 *   - {@link IAgenticaSystemPrompt}
 * - Accessors
 *   - {@link AgenticaOperation}
 *   - {@link AgenticaHistory}
 *   - {@link AgenticaEvent}
 *   - {@link AgenticaTokenUsage}
 *
 * @author Samchon
 */
export class Agentica<Model extends ILlmSchema.Model> {
  /**
   * Initializer constructor.
   *
   * @param props Properties to construct the agent
   */
  public constructor(private readonly props: IAgenticaProps<Model>);

  /**
   * Conversate with the A.I. chatbot.
   *
   * The user talks to the A.I. chatbot with the given content.
   *
   * When the user's conversation implies that the A.I. chatbot should
   * execute a function calling, the returned chat prompts will contain
   * the function calling information, like {@link AgenticaHistory.Execute}.
   *
   * @param content The content to talk about
   * @returns List of newly created chat prompts
   */
  public async conversate(content: string): Promise<AgenticaHistory<Model>[]>;

  /**
   * Add an event listener.
   *
   * Adds an event listener to be called whenever the event is emitted.
   *
   * @param type Type of event
   * @param listener Callback function to be called whenever the event is emitted
   */
  public on<Type extends AgenticaEvent.Type>(
    type: Type,
    listener: (
      event: AgenticaEvent.Mapper<Model>[Type],
    ) => void | Promise<void>,
  ): this;

  /**
   * Erase an event listener.
   *
   * Erases an event listener to stop calling the callback function.
   *
   * @param type Type of event
   * @param listener Callback function to erase
   */
  public off<Type extends AgenticaEvent.Type>(
    type: Type,
    listener: (
      event: AgenticaEvent.Mapper<Model>[Type],
    ) => void | Promise<void>,
  ): this;

  /**
   * Get the chatbot's prompt histories.
   *
   * Gets the list of chat prompts of the chatbot's conversation.
   *
   * @returns List of chat prompts
   */
  public getHistories(): AgenticaHistory<Model>[];

  /**
   * Get token usage of the A.I. chatbot.
   *
   * Entire token usage of the A.I. chatbot during the conversation
   * with the user through {@link conversate} method calls.
   *
   * @returns Cost of the A.I. chatbot
   */
  public getTokenUsage(): AgenticaTokenUsage;
}

API Vendor

When creating an Agentica class instance, you have to specify the LLM service vendor.

Agentica uses the OpenAI SDK, but that does not mean you can only use OpenAI's GPT models. The OpenAI SDK is just a connection tool to an LLM vendor's API, and you can use other LLM vendors by configuring the api.baseURL and vendor properties.

For example, if you want to use Llama instead of GPT, you can do it like below. Note that, as LLM schema models differ by vendor, you have to set the IAgenticaProps.model property accordingly, and compose the LLM function calling schemas following that vendor's specification.

src/main.llama.ts
import {
  Agentica,
  IAgenticaController,
  IAgenticaProps,
  IAgenticaVendor,
} from "@agentica/core";
import OpenAI from "openai";
import typia from "typia";

import { BbsArticleService } from "./services/BbsArticleService";

const agent: Agentica<"llama"> = new Agentica({
  model: "llama",
  vendor: {
    model: "llama3.3-70b",
    api: new OpenAI({
      apiKey: "********",
      baseURL: "https://api.llama-api.com",
    }),
  } satisfies IAgenticaVendor,
  controllers: [
    {
      protocol: "class",
      name: "bbs",
      application: typia.llm.application<BbsArticleService, "llama">(),
      execute: new BbsArticleService(),
    } satisfies IAgenticaController<"llama">,
  ],
} satisfies IAgenticaProps<"llama">);
await agent.conversate("I wanna buy MacBook Pro");

Function Controller

In @agentica/core, there is a concept of a controller: a set of LLM function calling schemas (an application schema) together with executors for the actual function calls. @agentica/core supports two controller protocols: HTTP server and TypeScript class.

When using an HTTP server controller, create the LLM application schema with the assertHttpController() or validateHttpController() function. These functions validate whether the target Swagger/OpenAPI document is correct. Then configure the connection information to the HTTP server: the destination URL and headers.

Otherwise, if you want to serve function calling from a TypeScript class, create the LLM application schema with the typia.llm.application<Class, Model>() function, and provide the class instance for the actual function calls.

For reference, IAgenticaController.name must be unique, because it identifies the controller within the agent. Also, if your controllers contain a large number of functions, it is better to configure executor.select with a vector selector plugin; otherwise, the agent will consume far more LLM tokens than necessary.
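As a sketch of why the unique-name rule matters: controller names identify controllers inside the agent, so any lookup structure keyed by name would have to reject duplicates. The registry below is illustrative only, not the library's actual code.

```typescript
// Illustrative only -- a minimal name-keyed registry showing why
// controller names must be unique: a duplicated name would make one
// of the two controllers unresolvable.
class ControllerRegistry {
  private readonly controllers = new Map<string, object>();

  public register(name: string, controller: object): void {
    if (this.controllers.has(name))
      throw new Error(`Duplicated controller name: ${name}`);
    this.controllers.set(name, controller);
  }

  public get(name: string): object | undefined {
    return this.controllers.get(name);
  }
}

const registry = new ControllerRegistry();
registry.register("shopping", {});
registry.register("counselor", {});
// registry.register("shopping", {}); // would throw: duplicated name
```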

src/main.ts
import { Agentica, assertHttpController } from "@agentica/core";
import { AgenticaPgVectorSelector } from "@agentica/pg-vector-selector";
import OpenAI from "openai";
import typia from "typia";

// ShoppingCounselor, ShoppingPolicy and ShoppingSearchRag are
// user-defined TypeScript classes serving the chatbot's features.
const main = async (): Promise<void> => {
  const agent = new Agentica({
    model: "chatgpt",
    vendor: {
      api: new OpenAI({ apiKey: "*****" }),
      model: "gpt-4o-mini",
    },
    controllers: [
      assertHttpController({
        name: "shopping",
        model: "chatgpt",
        document: await fetch(
          "https://shopping-be.wrtn.ai/editor/swagger.json",
        ).then((r) => r.json()),
        connection: {
          host: "https://shopping-be.wrtn.ai",
          headers: {
            Authorization: "Bearer *****",
          },
        },
      }),
      {
        protocol: "class",
        name: "counselor",
        application: typia.llm.application<ShoppingCounselor, "chatgpt">(),
        execute: new ShoppingCounselor(),
      },
      {
        protocol: "class",
        name: "policy",
        application: typia.llm.application<ShoppingPolicy, "chatgpt">(),
        execute: new ShoppingPolicy(),
      },
      {
        protocol: "class",
        name: "rag",
        application: typia.llm.application<ShoppingSearchRag, "chatgpt">(),
        execute: new ShoppingSearchRag(),
      },
    ],
    config: {
      executor: {
        select: AgenticaPgVectorSelector.boot<"chatgpt">(
          "https://your-connector-hive-server.com",
        ),
      },
    },
  });
  await agent.conversate("I wanna buy MacBook Pro");
};
main().catch(console.error);

Conversation

When you call the Agentica.conversate() function, the agent starts a multi-agent orchestration across its internal sub-agents, including function calling and execution, and returns the list of prompts newly created during the orchestration process.

If you want to archive the conversation state of the current agent, store the returned prompts in your database. When you want to restore the agent, create a new Agentica instance with the IAgenticaProps.histories property assigned.
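The archive/restore flow can be sketched roughly as below, assuming the returned prompts serialize to plain JSON (the library provides toJSON() / IAgenticaHistoryJson for this). The array is a stand-in for a real database, and the HistoryJson shape is illustrative.

```typescript
// Sketch of the archive/restore flow. Assumption: prompts returned by
// conversate() serialize to plain JSON. The `database` array below is a
// stand-in for your persistence layer; HistoryJson is illustrative.
type HistoryJson = { type: string; text?: string };

const database: string[] = [];

function archive(histories: HistoryJson[]): void {
  database.push(JSON.stringify(histories));
}

function restore(): HistoryJson[] {
  const latest = database[database.length - 1];
  return latest === undefined ? [] : (JSON.parse(latest) as HistoryJson[]);
}

archive([{ type: "text", text: "I wanna buy MacBook Pro" }]);
const restored = restore(); // pass this to `new Agentica({ histories, ... })`
```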

Also, if you want to trace the conversation process, add event listeners to the agent. The agent emits events when the conversation starts, when a function is selected or executed, and when the description prompt for a function calling result is created.

Configuration

@agentica/core/IAgenticaConfig
import type { ILlmSchema } from "@samchon/openapi";

import type { AgenticaContext } from "../context/AgenticaContext";
import type { IAgenticaExecutor } from "./IAgenticaExecutor";
import type { IAgenticaSystemPrompt } from "./IAgenticaSystemPrompt";

/**
 * Configuration for the Agentic Agent.
 *
 * `IAgenticaConfig` is an interface that defines the configuration
 * properties of the {@link Agentica}. With this configuration, you
 * can set the user's {@link locale}, {@link timezone}, and some of the
 * {@link systemPrompt system prompts}.
 *
 * Also, you can affect the LLM function selecting/calling logic by
 * configuring additional properties. For example, if you configure the
 * {@link capacity} property, the AI chatbot will divide the functions
 * into several groups of the configured capacity and select the proper
 * functions to call by operating multiple LLM function selecting
 * agents in parallel.
 *
 * @author Samchon
 */
export interface IAgenticaConfig<Model extends ILlmSchema.Model> {
  /**
   * Agent executor.
   *
   * Executor function of the Agentic AI's iteration plan over the
   * internal agents run by the {@link Agentica.conversate} function.
   *
   * If you want to customize the agent execution plan, you can do it
   * by assigning your own logic, entire or partial, to this property.
   * When customizing it, it is better to reference the
   * {@link ChatGptAgent.execute} function.
   *
   * @param ctx Context of the agent
   * @default ChatGptAgent.execute
   */
  executor?:
    | Partial<IAgenticaExecutor<Model>>
    | ((ctx: AgenticaContext<Model>) => Promise<void>);

  /**
   * System prompt messages.
   *
   * Configure this if you want to customize the system prompt
   * messages for each situation.
   */
  systemPrompt?: IAgenticaSystemPrompt<Model>;

  /**
   * Locale of the A.I. chatbot.
   *
   * If you configure this property, the A.I. chatbot will converse in
   * the given locale. You can get the locale value from:
   *
   * - Browser: `navigator.language`
   * - NodeJS: `process.env.LANG.split(".")[0]`
   *
   * @default your_locale
   */
  locale?: string;

  /**
   * Timezone of the A.I. chatbot.
   *
   * If you configure this property, the A.I. chatbot will consider the
   * given timezone. You can get the timezone value from
   * `Intl.DateTimeFormat().resolvedOptions().timeZone`.
   *
   * @default your_timezone
   */
  timezone?: string;

  /**
   * Retry count.
   *
   * If the arguments composed for an LLM function call are invalid,
   * the A.I. chatbot will retry the call with modified arguments.
   *
   * If you configure this to 0 or 1, the A.I. chatbot will not retry
   * the LLM function calling to correct the arguments.
   *
   * @default 3
   */
  retry?: number;

  /**
   * Capacity of the LLM function selecting.
   *
   * When the A.I. chatbot selects a proper function to call, if the
   * number of functions registered in
   * {@link IAgenticaProps.applications} is too great, the A.I.
   * chatbot often falls into hallucination.
   *
   * In that case, if you configure this property, `Agentica` will
   * divide the functions into several groups of the configured
   * capacity and select the proper functions to call by operating
   * multiple LLM function selecting agents in parallel.
   *
   * @default 100
   */
  capacity?: number;

  /**
   * Eliticism for the LLM function selecting.
   *
   * If you configure {@link capacity}, the A.I. chatbot will narrow
   * down the candidate functions to call which are selected by the
   * multiple LLM function selecting agents.
   *
   * Otherwise, if you configure this property as `false`, the A.I.
   * chatbot will not narrow down the candidates and will just accept
   * every candidate function selected by the multiple LLM function
   * selecting agents.
   *
   * @default true
   */
  eliticism?: boolean;
}
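The capacity behavior described above can be illustrated with a small grouping sketch. This is not the library's actual code; it only shows the arithmetic: with 250 registered functions and the default capacity of 100, three selector agents would each scan one group in parallel.

```typescript
// Illustrative sketch of the `capacity` idea: split the registered
// functions into groups of at most `capacity` members, so that multiple
// LLM selector agents can each scan a small group in parallel.
function divide<T>(items: T[], capacity: number): T[][] {
  const groups: T[][] = [];
  for (let i = 0; i < items.length; i += capacity)
    groups.push(items.slice(i, i + capacity));
  return groups;
}

const functions: string[] = Array.from({ length: 250 }, (_, i) => `func_${i}`);
const groups = divide(functions, 100); // default capacity is 100
// -> 3 groups: 100 + 100 + 50 functions
```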

Executor

As described in the Conversation section, calling the Agentica.conversate() function starts a multi-agent orchestration across internal sub-agents, including function calling and execution.

You can change some of the internal agents' behavior by configuring the IAgenticaConfig.executor property. For example, if you assign a null value to IAgenticaExecutor.initialize, the agent skips the initialize process and goes directly to the select process.

Likewise, if you set IAgenticaExecutor.select to another function such as the PG Vector Selector, the agent selects the functions to call using that selector's strategy. This approach is recommended when you have many controller functions; without it, the agent will consume a lot of LLM tokens.

System Prompts

You can change system prompts by configuring IAgenticaSystemPrompt properties.

This is useful when you want to configure the "tone and manner" of the AI chatbot, or when you need to restrict the agent to follow your specific rules.

For example, if you are developing a counseling chatbot, you can guide the agent to use a polite and gentle tone through the IAgenticaSystemPrompt.common property.

Locale and Timezone

You can configure locale and timezone properties.

These properties are delivered to the AI agent, so that it considers the user's locale and timezone. If you set the locale property to ko-KR, the AI agent will converse in Korean.

Likewise, if you set the timezone property to Asia/Seoul, the AI agent considers that location and timezone, which can affect how LLM function calling arguments are composed. For example, if you ask the AI agent to "recommend me a local food", it will recommend local food from Seoul, Korea.
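For reference, here is how to obtain those values at runtime with the standard Intl API; this is plain standard-library code, not an Agentica API.

```typescript
// Obtain the user's locale and timezone with the standard Intl API
// (works in both browsers and Node.js). In a browser you could also
// use navigator.language; in Node.js, process.env.LANG.
const resolved = Intl.DateTimeFormat().resolvedOptions();
const locale: string = resolved.locale;     // e.g. "ko-KR"
const timezone: string = resolved.timeZone; // e.g. "Asia/Seoul"
```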

Event Handling

@agentica/core/AgenticaEvent
export type AgenticaEvent<Model extends ILlmSchema.Model> =
  | AgenticaEvent.Text
  | AgenticaEvent.Initialize
  | AgenticaEvent.Select<Model>
  | AgenticaEvent.Call<Model>
  | AgenticaEvent.Execute<Model>
  | AgenticaEvent.Describe<Model>
  | AgenticaEvent.Request
  | AgenticaEvent.Response;
export namespace AgenticaEvent {
  export interface Text extends Base<"text"> {
    role: "user" | "assistant";
    stream: ReadableStream<string>;
    join(): Promise<string>;
  }
  export interface Initialize extends Base<"initialize"> {}
  export interface Select<Model extends ILlmSchema.Model>
    extends Base<"select"> {
    selection: AgenticaOperationSelection<Model>;
  }
  export interface Call<Model extends ILlmSchema.Model> extends Base<"call"> {
    id: string;
    operation: AgenticaOperation<Model>;
    arguments: Record<string, any>;
  }
  export interface Execute<Model extends ILlmSchema.Model>
    extends Base<"execute"> {
    id: string;
    operation: AgenticaOperation<Model>;
    arguments: Record<string, any>;
    value: any;
  }
  export interface Describe<Model extends ILlmSchema.Model>
    extends Base<"describe"> {
    executes: AgenticaHistory.Execute<Model>[];
    stream: ReadableStream<string>;
    join(): Promise<string>;
  }
  export interface Request extends Base<"request"> {
    source: AgenticaEventSource;
    body: OpenAI.ChatCompletionCreateParamsStreaming;
    options?: OpenAI.RequestOptions | undefined;
  }
  export interface Response extends Base<"response"> {
    source: AgenticaEventSource;
    body: OpenAI.ChatCompletionCreateResponse;
    options?: OpenAI.RequestOptions | undefined;
    stream: ReadableStream<OpenAI.ChatCompletionChunk>;
    join(): Promise<OpenAI.ChatCompletion>;
  }
  interface Base<Type extends string> {
    type: Type;
  }
}

Here is the list of events emitted by the Agentica class.

You can listen to these events by calling the Agentica.on() function, and remove a listener by calling the Agentica.off() function. The events are emitted only while the Agentica.conversate() function is in progress.

Even though event listening is not essential, I recommend you at least listen to the text and describe events, because they are the most important events, containing the conversation contents between the user and the AI agent.
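The on()/off() mechanism can be sketched as a minimal typed event emitter. This is illustrative only, not Agentica's internal implementation; the key point it demonstrates is that off() must receive the same function reference that was passed to on().

```typescript
// Illustrative sketch of the on()/off() listener pattern: listeners are
// stored per event type, and off() removes by function reference.
type Listener<E> = (event: E) => void | Promise<void>;

class Emitter<Events> {
  private readonly listeners = new Map<keyof Events, Set<Listener<any>>>();

  public on<K extends keyof Events>(type: K, listener: Listener<Events[K]>): this {
    if (!this.listeners.has(type)) this.listeners.set(type, new Set());
    this.listeners.get(type)!.add(listener);
    return this;
  }

  public off<K extends keyof Events>(type: K, listener: Listener<Events[K]>): this {
    this.listeners.get(type)?.delete(listener);
    return this;
  }

  public emit<K extends keyof Events>(type: K, event: Events[K]): void {
    this.listeners.get(type)?.forEach((listener) => listener(event));
  }
}

type MyEvents = { text: { role: string; content: string } };

const emitter = new Emitter<MyEvents>();
const received: string[] = [];
const onText = (e: MyEvents["text"]): void => { received.push(e.content); };

emitter.on("text", onText);
emitter.emit("text", { role: "assistant", content: "hello" });
emitter.off("text", onText);
emitter.emit("text", { role: "assistant", content: "ignored" });
// received is now ["hello"]
```

If you register an anonymous arrow function, you cannot remove it later, since off() compares by reference; keep the listener in a variable if you plan to unsubscribe.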

Prompt Histories

@agentica/core/AgenticaHistory
export type AgenticaHistory =
  | AgenticaHistory.Text
  | AgenticaHistory.Select
  | AgenticaHistory.Cancel
  | AgenticaHistory.Execute
  | AgenticaHistory.Describe;
export namespace AgenticaHistory {
  export interface Select extends Base<"select"> {
    id: string;
    selections: AgenticaOperationSelection[];
    toJSON(): IAgenticaHistoryJson.ISelect;
  }
  export interface Cancel extends Base<"cancel"> {
    id: string;
    selections: AgenticaOperationSelection[];
    toJSON(): IAgenticaHistoryJson.ICancel;
  }
  export interface Execute extends Base<"execute"> {
    id: string;
    operation: AgenticaOperation;
    arguments: Record<string, any>;
    value: any;
    toJSON(): IAgenticaHistoryJson.IExecute;
  }
  export interface Describe extends Base<"describe"> {
    executes: Execute[];
    text: string;
    toJSON(): IAgenticaHistoryJson.IDescribe;
  }
  export interface Text extends Base<"text"> {
    role: "assistant" | "user";
    text: string;
    toJSON(): IAgenticaHistoryJson.IText;
  }
  interface Base<Type extends string> {
    type: Type;
  }
}

Here is the list of prompt types returned by the Agentica.conversate() function.

As described in the Conversation section, calling the Agentica.conversate() function runs the orchestration process and returns the list of prompts newly created during it.

If you want to archive the conversation state of the current agent, store the returned prompts in your database. The prompt histories can be serialized from the AgenticaHistory type to the IAgenticaHistoryJson type.

When you want to restore the agent, create a new Agentica instance with the IAgenticaProps.histories property assigned. Agentica will deserialize the prompt histories from IAgenticaHistoryJson back to AgenticaHistory, so that the agent restores the conversation state.
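The serialization side relies on the standard toJSON() convention: JSON.stringify() automatically invokes toJSON() on any object that defines it, so each in-memory history type controls its own serialized form. A minimal sketch with an illustrative history class (not the library's own):

```typescript
// Sketch of the toJSON() convention the history types rely on.
// JSON.stringify() calls toJSON() automatically, so serializing a list
// of in-memory histories yields the plain JSON representation.
class TextHistory {
  public constructor(
    public readonly role: "user" | "assistant",
    public readonly text: string,
  ) {}

  public toJSON(): { type: "text"; role: string; text: string } {
    return { type: "text", role: this.role, text: this.text };
  }
}

const serialized: string = JSON.stringify([new TextHistory("user", "hello")]);
// -> '[{"type":"text","role":"user","text":"hello"}]'
```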

src/main.ts
import { Agentica, IAgenticaHistoryJson } from "@agentica/core";

const histories: IAgenticaHistoryJson[] = await getHistories();
const agent: Agentica<"chatgpt"> = new Agentica({
  ...,
  histories,
});
await agent.conversate("Summarize what we have done please.");