Companion Agent

The CompanionAgent class manages the core logic for an AI companion. It integrates the agent, memory, workflow, and duplicate detection to handle message processing and state generation.

import { CompanionAgent } from "@aikyo/server";

constructor(
  companion: CompanionCard,
  model: LanguageModel,
  history: Message[],
  config?: { maxTurn: number | null; enableRepetitionJudge: boolean }
)
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| companion | CompanionCard | Companion configuration | - |
| model | LanguageModel | LLM model from @ai-sdk/* | - |
| history | Message[] | Conversation history array | - |
| config | object | Configuration settings | See below |
| config.maxTurn | number \| null | Max turns | null |
| config.enableRepetitionJudge | boolean | Enable duplicate detection | true |

Configuration Details:

  • companion: Includes metadata, tools, and events
  • model: Language model instance from AI SDK providers
  • history: Array reference to conversation messages
  • config: Default is { maxTurn: null, enableRepetitionJudge: true }
import { CompanionAgent } from "@aikyo/server";
import { anthropic } from "@ai-sdk/anthropic";
import type { Message } from "@aikyo/server";

const history: Message[] = [];

const companion = new CompanionAgent(
  companionCard,
  anthropic("claude-3-5-haiku-latest"),
  history,
  {
    maxTurn: 20, // Terminate after 20 turns
    enableRepetitionJudge: true, // Enable duplicate detection
  }
);
Available AI SDK providers include:

  • Anthropic: @ai-sdk/anthropic
  • Google: @ai-sdk/google
companion: CompanionCard

The companion configuration card.

agent: Agent

An instance of the Mastra Agent responsible for managing interactions with the LLM.

repetitionJudge: RepetitionJudge

A judge for detecting conversation duplicates. See Duplicate Detection for details.

stateJudge: StateJudge

A judge that generates the companion’s state (State) based on conversation history. Used for turn-taking management.

history: Message[]

Conversation history array (reference).

memory: Memory

An instance of the Memory class managing long-term and working memory.

Persistence:

  • Creates a LibSQL database at db/<companion_id>.db
  • Utilizes LibSQLStore for storage and LibSQLVector for vector store
  • Supports similarity searches using the vector store

Working Memory Schema:

export const MemorySchema = z.object({
  messages: z.array(
    z.object({
      from: z.string().describe("ID of the companion who sent the message"),
      content: z.string().describe("Summary of the message content"),
    }),
  ),
});
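For illustration, a plain object satisfying this schema might look like the following (a sketch; the companion IDs and summaries are invented):

```typescript
// Sketch: an object shaped like the working memory schema above.
// The IDs and content summaries are invented for illustration.
interface WorkingMemory {
  messages: { from: string; content: string }[];
}

const memory: WorkingMemory = {
  messages: [
    { from: "companion-a", content: "Greeted the user and introduced itself" },
    { from: "companion-b", content: "Asked a follow-up question about hobbies" },
  ],
};

// Each entry pairs the sender's ID with a summary of what was said.
console.log(memory.messages.length); // 2
```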
runtimeContext: RuntimeContext

The runtime context referenced during tool execution. Contains the following information:

| Key | Type | Description |
|-----|------|-------------|
| id | string | Companion's ID |
| libp2p | Libp2p | libp2p instance |
| companions | Map<string, Metadata> | List of connected companions |
| pendingQueries | Map | Pending queries |
| agent | CompanionAgent | The agent itself |

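As a rough sketch, a tool handler might read these values from the context like this (the Map-based access and key names follow the table above; the helper function itself is hypothetical and not part of the actual API):

```typescript
// Sketch: reading values out of a runtime context during tool execution.
// Key names follow the table above; whoAmI is an invented helper.
const runtimeContext = new Map<string, unknown>([
  ["id", "companion-a"],
  ["companions", new Map<string, { name: string }>()],
]);

function whoAmI(ctx: Map<string, unknown>): string {
  const id = ctx.get("id") as string;
  const peers = ctx.get("companions") as Map<string, { name: string }>;
  return `${id} sees ${peers.size} connected companion(s)`;
}

console.log(whoAmI(runtimeContext)); // "companion-a sees 0 connected companion(s)"
```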
run: Run

A Run instance of the workflow generated by createToolInstructionWorkflow.

count: number

Current turn count (used when maxTurn is configured).

config: { maxTurn: number | null; enableRepetitionJudge: boolean }

Configuration passed to the constructor.

Generates a tool execution instruction by evaluating CEL expressions based on the received message.

async generateToolInstruction(input: Message): Promise<string>

Parameters:

  • input: The received message

Returns:

  • string: Tool execution instruction (e.g., “Introduce yourself. Use the tool to respond.”) or "failed" if event execution fails

Process Flow:

  1. In Workflow’s evaluateStep, the LLM evaluates the params schema
  2. In runStep, the conditions defined in checks are evaluated as CEL expressions
  3. The instructions of the matched conditions are concatenated and returned
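The matching step can be sketched as follows. This is a simplified stand-in: CEL expression strings are replaced by plain predicate functions, and the rule shape is invented for illustration:

```typescript
// Simplified sketch of the matching step: each rule pairs a condition
// with an instruction, and the instructions of all matching rules are
// concatenated. Real events use CEL expression strings, not functions,
// and "failed" is the documented value when event execution fails.
interface EventRule {
  condition: (params: Record<string, unknown>) => boolean;
  instruction: string;
}

function buildInstruction(
  rules: EventRule[],
  params: Record<string, unknown>,
): string {
  const matched = rules.filter((r) => r.condition(params));
  if (matched.length === 0) return "failed";
  return matched.map((r) => r.instruction).join(" ");
}

const rules: EventRule[] = [
  { condition: (p) => p.topic === "greeting", instruction: "Introduce yourself." },
  { condition: () => true, instruction: "Use the tool to respond." },
];

console.log(buildInstruction(rules, { topic: "greeting" }));
// "Introduce yourself. Use the tool to respond."
```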

Generates the companion’s state (State) based on the complete conversation history.

async generateState(): Promise<State>

Parameters:

None (internally references this.history)

Returns:

  • State: State information including speak/listen, importance, selected, and closing statuses

Process Flow:

  1. Performs duplicate detection if enableRepetitionJudge is true
  2. Adds closing instructions if the repetition score exceeds 0.7
  3. Generates State using StateJudge
  4. Checks for maxTurn limit (if configured)

For details, see Turn-Taking.
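The four steps can be sketched as follows (hypothetical: the judges are stubbed as plain functions and kept synchronous for brevity, whereas the real judges are LLM-backed and asynchronous; the State shape is also simplified):

```typescript
// Sketch of the generateState flow. The 0.7 threshold and maxTurn check
// come from the steps above; the State shape and judge stubs are invented.
interface State {
  action: "speak" | "listen";
  importance: number;
  closing: boolean;
}

function generateStateSketch(
  history: string[],
  config: { maxTurn: number | null; enableRepetitionJudge: boolean },
  count: number,
  repetitionScore: (h: string[]) => number, // 0-1 similarity score
  stateJudge: (h: string[], extra: string) => State,
): State {
  let extra = "";
  // Steps 1-2: duplicate detection, then closing instructions above 0.7.
  if (config.enableRepetitionJudge && repetitionScore(history) > 0.7) {
    extra = "Wrap up the conversation.";
  }
  // Step 3: generate the state via the judge.
  const state = stateJudge(history, extra);
  // Step 4: enforce the turn limit when one is configured.
  if (config.maxTurn !== null && count >= config.maxTurn) {
    state.closing = true;
  }
  return state;
}
```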

Receives a message and executes the LLM based on the tool execution instruction.

async input(message: Message): Promise<void>

Parameters:

  • message: The received message

Process Flow:

  1. Retrieves a tool execution instruction using generateToolInstruction
  2. Executes the LLM with the instruction and message
  3. The LLM automatically executes tools as needed
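The flow can be sketched end to end as follows (everything here is stubbed: the instruction generator and LLM call stand in for the real Mastra agent machinery, and the message shape is invented):

```typescript
// Sketch of input(): obtain the tool instruction, then pass the
// instruction plus message to the LLM. All names here are invented.
interface Msg {
  from: string;
  text: string;
}

async function inputSketch(
  message: Msg,
  generateToolInstruction: (m: Msg) => Promise<string>,
  callLLM: (prompt: string) => Promise<string>,
): Promise<string> {
  // Step 1: retrieve the tool execution instruction.
  const instruction = await generateToolInstruction(message);
  // Steps 2-3: run the LLM with instruction + message; in the real
  // agent the LLM then invokes tools on its own as needed.
  return callLLM(`${instruction}\n${message.from}: ${message.text}`);
}
```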