# Expose Agents via API
Source: https://docs.laragent.ai/core-concepts/agent-via-api
This document describes the feature introduced in v0.5 and explains how to expose your agents through an OpenAI-compatible endpoint.
## Expose API in Laravel
`LarAgent\API\Completions` handles OpenAI-compatible chat completion requests.
The class expects a valid `Illuminate\Http\Request` and an agent class name:
```php
use LarAgent\API\Completions;
public function completion(Request $request)
{
$response = Completions::make($request, MyAgent::class);
// Your code
}
```
Here `$response` is either an `array` (for non-streaming responses) or a `Generator` that yields chunks (for streaming responses).
### Base Controllers
So that you don't have to build controllers around the `Completions` class yourself, LarAgent provides abstract base controllers that you can extend to create endpoints quickly.
Both controllers implement a `completion(Request $request)` method that delegates work to `Completions::make()`
and automatically handles SSE streaming or JSON responses compatible with OpenAI API.
#### SingleAgentController
A simple controller for exposing a single agent, providing `completion` and `models` methods.
Once your agent is created, three steps are enough to expose it via the API.
Extend [`SingleAgentController`](https://github.com/MaestroError/LarAgent/tree/main/src/API/Completion/Controllers/SingleAgentController.php) when exposing a single agent:
1. Set `protected ?string $agentClass` property to specify the agent class.
2. Set `protected ?array $models` property to specify the models.
Controller Example:
```php
namespace App\Http\Controllers;
use LarAgent\API\Completion\Controllers\SingleAgentController;
class MyAgentApiController extends SingleAgentController
{
protected ?string $agentClass = \App\AiAgents\MyAgent::class;
protected ?array $models = ['gpt-4o-mini'];
}
```
3. Define the API routes in your Laravel application
Routes example:
```php
Route::post('/v1/chat/completions', [MyAgentApiController::class, 'completion']);
Route::get('/v1/models', [MyAgentApiController::class, 'models']);
```
#### MultiAgentController
When several agents share one endpoint, extend [`MultiAgentController`](https://github.com/MaestroError/LarAgent/tree/main/src/API/Completion/Controllers/MultiAgentController.php):
1. Set `protected ?array $agents` property to specify the agent classes.
2. Set `protected ?array $models` property to specify the models.
```php
namespace App\Http\Controllers;
use LarAgent\API\Completion\Controllers\MultiAgentController;
class AgentsController extends MultiAgentController
{
protected ?array $agents = [
\App\AiAgents\ChatAgent::class,
\App\AiAgents\SupportAgent::class,
];
protected ?array $models = [
'ChatAgent/gpt-4o-mini',
'SupportAgent/gpt-4.1-mini',
'SupportAgent',
];
}
```
The client specifies `model` as `AgentName/model` or simply as `AgentName`, in which case the default model defined in the agent class or provider is used.
3. Define the API routes in your Laravel application
Routes example:
```php
Route::post('/v1/chat/completions', [AgentsController::class, 'completion']);
Route::get('/v1/models', [AgentsController::class, 'models']);
```
### Storing chat histories
Since most clients manage the chat history on their side, this method **is not necessary**
if you don't want to store chats.
Without it, the session ID is a random string per request,
so you can simply set `in_memory` as the chat history type of your exposed agent and forget about it.
But if you want to store chat histories and maintain the state on your side,
set the session ID for the agent by overriding the `setSessionId` method in `SingleAgentController` or `MultiAgentController`.
```php
// @return string
protected function setSessionId()
{
$user = auth()->user();
if ($user) {
return (string) $user->id;
}
return "OpenWebUi-LarAgent";
}
```
### Streaming response
Streaming responses are sent as Server-Sent Events (SSE), where each event contains a JSON chunk matching OpenAI's streaming format.
Including `"stream": true` in the request returns a `text/event-stream` response in which each chunk is emitted as:
```php
echo "event: chunk\n";
echo 'data: '.json_encode($chunk)."\n\n";
```
Example of chunk:
```json
{
"id": "ApiAgent_OpenWebUi-LarAgent",
"object": "chat.completion.chunk",
"created": 1753446654,
"model": "gpt-4.1-nano",
"choices": [
{
"index": 0,
"delta": {
"role": "assistant",
"content": " can"
},
"logprobs": null,
"finish_reason": null
}
],
"usage": null
}
```
Note that the `usage` data is included only in the last chunk as in OpenAI API.
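Since the final chunk is the only one carrying `usage`, an illustrative last chunk (values are made up; the shape follows the OpenAI streaming format) could look like:

```json
{
  "id": "ApiAgent_OpenWebUi-LarAgent",
  "object": "chat.completion.chunk",
  "created": 1753446655,
  "model": "gpt-4.1-nano",
  "choices": [
    {
      "index": 0,
      "delta": {},
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 8,
    "total_tokens": 20
  }
}
```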
Use either controller according to your needs and point your OpenAI-compatible client to these routes.
### Calling from a Custom Controller
If you need more control you may call `Completions::make()` directly:
```php
use Illuminate\Http\Request;
use LarAgent\API\Completions;
class CustomController
{
public function chat(Request $request)
{
$response = Completions::make($request, \App\AiAgents\MyAgent::class);
if ($response instanceof \Generator) {
// stream Server-Sent Events
return response()->stream(function () use ($response) {
foreach ($response as $chunk) {
echo "event: chunk\n";
echo 'data: '.json_encode($chunk)."\n\n";
ob_flush();
flush();
}
}, 200, ['Content-Type' => 'text/event-stream']);
}
return response()->json($response);
}
}
```
For more references see [Completions](https://github.com/MaestroError/LarAgent/tree/main/src/API/Completions.php), [SingleAgentController](https://github.com/MaestroError/LarAgent/tree/main/src/API/Completion/Controllers/SingleAgentController.php), [MultiAgentController](https://github.com/MaestroError/LarAgent/tree/main/src/API/Completion/Controllers/MultiAgentController.php).
### Example Request
```bash
curl -X POST http://localhost/v1/chat/completions \
-H 'Content-Type: application/json' \
-d '{
"model": "MyAgent/gpt-4o-mini",
"messages": [
{"role":"user","content":"Hello"}
]
}'
```
### Example Response
```json
{
"id": "MyAgent_abcd1234",
"object": "chat.completion",
"created": 1753357877,
"model": "gpt-4o-mini",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hi!"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 5,
"completion_tokens": 10,
"total_tokens": 15
}
}
```
# Agents
Source: https://docs.laragent.ai/core-concepts/agents
Agents are the core of LarAgent. They represent an AI model that can be used to interact with users, systems, or any other source of input.
Agents are the foundation of LarAgent. They define how your AI assistant behaves, what tools it can use, and how it processes information.
## Creating an Agent
You can create a new agent manually by extending the `LarAgent\Agent` class. This is the parent class for building your custom AI agent with specific capabilities and behaviors.
Using the `make:agent` command is the recommended way to create agents.
```php
namespace App\AiAgents;
use LarAgent\Agent;
class MyAgent extends Agent
{
// Your agent implementation
}
```
For rapid development, you can use the artisan command to generate a new agent with a basic structure:
```bash
php artisan make:agent MyAgent
```
This will create a new agent class in the `App\AiAgents` directory with all the necessary boilerplate code:
```php YourAgentName.php
namespace App\AiAgents;

use LarAgent\Agent;

class YourAgentName extends Agent
{
    protected $model = 'gpt-4o-mini';

    protected $history = 'in_memory';

    protected $provider = 'default';

    protected $tools = [];

    public function instructions()
    {
        return "Define your agent's instructions here.";
    }

    public function prompt(string $message)
    {
        return $message;
    }
}
```
```php Instructions Method
// Define the agent's system instructions
// This sets the behavior, role, and capabilities of your agent
// For simple textual instructions, use the `instructions` property
// For more complex instructions or dynamic behavior, use the `instructions` method
public function instructions()
{
return "Define your agent's instructions here.";
}
```
```php Prompt Method
// Customize how messages are processed before sending to the AI
// Useful for formatting, adding context (RAG), or standardizing input
public function prompt(string $message)
{
return $message;
}
```
```php Model Method
// Decide which model to use dynamically with custom logic
// Or use property $model to statically set the model
public function model()
{
return $this->model;
}
```
```php API Key / URL
// Dynamically set the API URL for the driver
public function getApiUrl()
{
return $this->apiUrl;
}
// Dynamically set the API Key for the driver
public function getApiKey()
{
return $this->apiKey;
}
```
### Properties
#### Instructions
```php
/** @var string - Define the agent's behavior and role */
protected $instructions;
```
This property sets the system instructions for your agent, defining its behavior, personality, and capabilities.
```php
// Example
protected $instructions = "You are a helpful assistant specialized in weather forecasting.";
```
#### History
```php
/** @var string - Choose from built-in chat history types */
protected $history;
```
Choose from the built-in chat history implementations: `in_memory`, `session`, `cache`, `file`, or `json`.
```php
// Example
protected $history = 'cache';
// Or use a class
protected $history = \LarAgent\History\CacheChatHistory::class;
```
#### Driver
```php
/** @var string - Specify which LLM driver to use */
protected $driver;
```
The driver class that handles communication with the AI provider.
```php
// Example
protected $driver = \LarAgent\Drivers\OpenAi\OpenAiDriver::class;
```
#### Provider
```php
/** @var string - Select the AI provider configuration */
protected $provider = 'default';
```
References a provider configuration from your config file.
```php
// Example
protected $provider = 'openai-gpt4';
```
#### Model
```php
/** @var string - Choose which language model to use */
protected $model = 'gpt-4o-mini';
```
The specific model to use from your chosen provider.
```php
// Example
protected $model = 'gpt-4o';
```
#### Other properties
All config properties have corresponding chainable setter methods you can use at runtime. For example, `$maxCompletionTokens` -> `maxCompletionTokens(int $tokens)`, `$topP` -> `topP(float $topP)`, and so on.
All config properties can also be defined in the config file under provider settings. For example, `$maxCompletionTokens` -> `'max_completion_tokens' => 2000`.
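As a sketch (the agent and chat names here are illustrative; the setters map one-to-one to the config properties described above):

```php
// Override config properties at runtime via chainable setters:
// $maxCompletionTokens -> maxCompletionTokens(), $topP -> topP(), etc.
$response = WeatherAgent::for('runtime_config_chat')
    ->maxCompletionTokens(2000)
    ->topP(0.9)
    ->respond('Summarize the forecast for tomorrow.');
```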
##### **Tokens**
```php
/** @var int - Set the maximum number of tokens in the completion */
protected $maxCompletionTokens;
```
Limits the length of the AI's response.
```php
// Example
protected $maxCompletionTokens = 2000;
```
##### **Temperature**
```php
/** @var float - Control response creativity */
protected $temperature;
```
Controls randomness: 0.0 for focused responses, 2.0 for creative ones.
```php
// Example
protected $temperature = 0.7; // Balanced
```
##### **Message**
```php
/** @var string|null - Current message being processed */
protected $message;
```
##### **n**
```php
/** @var int|null */
protected $n;
```
> How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
If `$n` is greater than 1, the agent returns an array of responses.
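For illustration (a sketch using the `withN` mutator covered later in this document; names are illustrative):

```php
// With $n set to 3, respond() returns an array of three completions
$responses = WeatherAgent::for('brainstorm_chat')
    ->withN(3)
    ->respond('Suggest a catchy name for a weather app.');

foreach ($responses as $response) {
    echo $response . PHP_EOL;
}
```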
##### **top\_p**
```php
/** @var float|null */
protected $topP;
```
> An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering `$topP` or `$temperature` but not both.
##### **frequency\_penalty**
```php
/** @var float|null */
protected $frequencyPenalty;
```
> Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
##### **presence\_penalty**
```php
/** @var float|null */
protected $presencePenalty;
```
> Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
## Using an Agent
There are two ways to interact with your agent: direct response or chainable methods.
### Direct Response
The simplest way is to use the `for()` method to specify a chat history name and get an immediate response:
```php
// Using a specific chat history name
echo WeatherAgent::for('test_chat')->respond('What is the weather like?');
```
### Using Chainable Methods
For more control over the interaction, you can use the chainable syntax:
```php Basic Chaining
$response = WeatherAgent::for('test_chat')
->message('What is the weather like?') // Set the message
->temperature(0.7) // Optional: Override temperature
->respond(); // Get the response
```
```php Advanced Chaining
// Override default settings
echo WeatherAgent::for('user_1_chat')
->message('What is the weather like?')
->withModel('gpt-4o')
->withTemperature(0.7)
->withMaxTokens(2000)
->withN(3)
->withTool(new WeatherTool())
->respond();
```
#### Ready-made user message
Instead of passing a string as the message, you can build your own `UserMessage` instance.
This lets you attach metadata to the message, such as the user ID or request ID,
and use the various methods available on the `UserMessage` instance.
```php
use LarAgent\Message;

$userMessage = Message::user($finalPrompt, ['userRequest' => $userRequestId]);
$response = WeatherAgent::for('test_chat')->message($userMessage)->respond();
```
If you pass a `UserMessage` instance instead of a string, the `prompt` method is bypassed.
#### Return message
By default, the `respond` method returns:
* a `string` for a regular request
* an `array` of strings if `$n` is greater than 1
* an associative `array` matching the defined structured output schema
You can change the return type by calling the `returnMessage` method or setting the `$returnMessage` property to `true`,
which forces the `respond` method to return a `MessageInterface` instance.
```php
// Returns MessageInterface -> AssistantMessage instance
$response = WeatherAgent::for('test_chat')->returnMessage()->respond();
```
If `returnMessage` is true and structured output is defined, the `beforeStructuredOutput` hook does not fire,
because the structured output is not processed into an array.
### Image input
You can pass publicly accessible image URLs to the agent as an array of strings.
```php
$images = [
'https://example.com/image.jpg',
'https://example.com/image2.jpg',
];
$response = WeatherAgent::for('test_chat')->withImages($images)->respond();
```
Alternatively, you can pass images as base64-encoded data URLs.
```php
$imageUrls = [
'data:image/jpeg;base64,' . base64_encode($img)
];
$response = ImageAgent::for('test')->withImages($imageUrls)->respond("Analyze images");
```
Base64 image input contributed by [havspect](https://github.com/havspect) in [issue #74](https://github.com/MaestroError/LarAgent/issues/74).
### Audio input
You can pass base64 encoded audio data to the agent as an array of arrays containing the `format` and `data`.
Supported formats by OpenAI: "wav", "mp3", "ogg", "flac", "m4a", "webm"
```php
$audios = [
[
'format' => 'mp3',
'data' => $base64Audio
]
];
echo WeatherAgent::for('test_chat')->withAudios($audios)->respond();
```
## Agent Mutators Reference
Here are some chainable methods to modify the agent's behavior on the fly:
```php
/**
* Set the message for the agent to process
*/
public function message(string|UserMessage $message);
```
Sets the message that will be sent to the AI model.
```php
// Example
$agent = WeatherAgent::for('test_chat');
$agent->message('What is the weather like today?')->respond();
```
```php
/**
* Add images to the agent's input (message)
* @param array $imageUrls Array of image URLs
*/
public function withImages(array $imageUrls);
```
Adds images to the message for multimodal models.
```php
// Example
$agent->message("What's in this image?")
->withImages(['https://example.com/image.jpg'])
->respond();
```
```php
/**
* Add audios to the agent's input (message)
* Array of arrays: ['data' => 'base64', 'format' => 'wav']
* Possible formats: "wav", "mp3", "ogg", "flac", "m4a", "webm"
* @param array $audioStrings Array of audio data
*/
public function withAudios(array $audioStrings);
```
Adds audio inputs to the message; use with multimodal models.
```php
// Example
$audios = [
[
'format' => 'mp3',
'data' => $base64Audio
]
];
$agent->message("What's in this audio?")
->withAudios($audios)
->respond();
```
```php
/**
* Decide model dynamically in your controller
* @param string $model Model identifier (e.g., 'gpt-4o', 'gpt-3.5-turbo')
*/
public function withModel(string $model);
```
Overrides the default model for this specific call.
```php
// Example
$agent->message('Complex question')
->withModel('gpt-4o')
->respond();
```
```php
/**
* @param MessageInterface $message
*/
public function addMessage(MessageInterface $message);
```
Adds a custom message to the chat history. A `MessageInterface` instance is easily created with the `Message` class, e.g. `Message::system('Your message here')`.
Use with caution: the message is added directly to the chat history, which must keep the structure defined by the OpenAI API.
```php
// Example
use LarAgent\Message;
$agent->addMessage(Message::system('Remember to be concise'))
->message('Explain quantum computing')
->respond();
```
```php
public function clear();
```
Clears the chat history, removing all messages from it.
```php
// Example
$agent->clear()->message("Let's start fresh")->respond();
```
```php
/**
* Set other chat history instance
*/
public function setChatHistory(ChatHistoryInterface $chatHistory);
```
Replaces the current chat history with a different instance.
```php
// Example
$newHistory = new CacheChatHistory('backup-chat');
$agent->setChatHistory($newHistory)->respond('Continue from backup');
```
```php
/**
* Add tool to the agent's registered tools
*/
public function withTool(ToolInterface $tool);
```
Adds a tool for this specific call.
```php
// Example
$weatherTool = new WeatherTool();
$agent->withTool($weatherTool)
->message("What's the weather in New York?")
->respond();
```
```php
/**
* Remove tool for this specific call
*/
public function removeTool(string $name);
```
Removes a tool for this specific call.
```php
// Example
$agent->removeTool('get_weather')
->message('Tell me a story without checking the weather')
->respond();
```
```php
/**
* Override the temperature for this specific call
*/
public function temperature(float $temp);
```
Controls the randomness of the response (0.0 for focused, 2.0 for creative).
```php
// Example
$agent->temperature(1.5) // More creative response
->message('Write a poem about AI')
->respond();
```
## Agent Accessors Reference
You can access the agent's properties using these methods on an instance of the agent:
```php
/**
* Get the current chat session ID
* String like "[AGENT_NAME]_[MODEL_NAME]_[CHAT_NAME]"
* CHAT_NAME is defined by "for" method
* Example: WeatherAgent_gpt-4o-mini_test-chat
*/
public function getChatSessionId(): string;
```
Returns the unique identifier for the current chat session.
```php
// Example
$sessionId = $agent->getChatSessionId();
// Returns: "WeatherAgent_gpt-4o-mini_user-123"
```
```php
/**
* Returns the provider name
*/
public function getProviderName(): string;
```
Returns the name of the AI provider being used.
```php
// Example
$provider = $agent->getProviderName();
// Returns: "openai" or other provider name
```
```php
/**
* Returns an array of registered tools
*/
public function getTools(): array;
```
Returns all tools registered with the agent.
```php
// Example
$tools = $agent->getTools();
foreach ($tools as $tool) {
echo $tool->getName() . "\n";
}
```
```php
/**
* Returns current chat history instance
*/
public function chatHistory(): ChatHistoryInterface;
```
Returns the current chat history instance.
```php
// Example
$history = $agent->chatHistory();
$messageCount = $history->count();
```
```php
/**
* Returns the current message
*/
public function currentMessage(): ?string;
```
Returns the message currently being processed.
```php
// Example
$message = $agent->currentMessage();
if ($message) {
echo "Processing: " . $message;
}
```
```php
/**
* Returns the last message
*/
public function lastMessage(): ?MessageInterface;
```
Returns the last message in the chat history.
```php
// Example
$lastMessage = $agent->lastMessage();
if ($lastMessage) {
echo "Last message role: " . $lastMessage->getRole();
echo "Last message content: " . $lastMessage->getContent();
}
```
```php
/**
* Get all chat keys associated with this agent class
*/
public function getChatKeys(): array;
```
Returns all chat keys associated with this agent class.
```php
// Example
$chatKeys = $agent->getChatKeys();
// Returns: ["user_1_chat", "user_2_chat", ...]
// You can use this to list all active conversations
foreach ($chatKeys as $key) {
echo "Active chat: " . $key . "\n";
}
```
```php
/**
* Get all modalities associated with this agent class
*/
public function getModalities(): array;
```
Returns all modalities associated with this agent class.
```php
// Example
$modalities = $agent->getModalities();
// Returns: ["text", "image", "audio"]
```
```php
/**
* Get all audio associated with this agent class
*/
public function getAudio(): ?array;
```
Returns all audio associated with this agent class.
```php
// Example
$audio = $agent->getAudio();
// Returns: null or array of audio data containing arrays of format and data
// [
// [
// 'format' => 'mp3',
// 'data' => $base64Audio
// ]
// ]
```
## Example Configuration
```php
class WeatherAgent extends Agent
{
protected $model = 'gpt-4o';
protected $history = 'session';
protected $temperature = 0.7;
public function instructions()
{
return "You are a weather expert assistant. Provide accurate weather information.";
}
public function prompt(string $message)
{
return "Weather query: " . $message;
}
}
```
# Chat History
Source: https://docs.laragent.ai/core-concepts/chat-history
Chat history stores conversations between users and agents, enabling context-aware interactions across multiple sessions.
Chat history is a crucial component that allows your agents to maintain
context across multiple interactions, providing a more natural and coherent
conversation experience.
## Built-in Chat Histories
LarAgent provides several built-in chat history implementations to suit different needs:
Set it in your Agent class by name:
```php yourAgent.php
// Stores chat history temporarily in memory (lost after request)
protected $history = 'in_memory';
// Uses Laravel's session storage
protected $history = 'session';
// Uses Laravel's cache system
protected $history = 'cache';
// Stores in files (storage/app/chat-histories)
protected $history = 'file';
// Stores in JSON files (storage/app/chat-histories)
protected $history = 'json';
```
Or set it by class:
```php yourAgent.php
protected $history = \LarAgent\History\SessionChatHistory::class;
```
Or override the `createChatHistory()` method to add custom logic or configuration:
```php yourAgent.php
public function createChatHistory($name)
{
return new JsonChatHistory($name, ['folder' => __DIR__.'/json_History']);
}
```
You can also construct a chat history and pass it to `LarAgent::setup()` directly when using LarAgent outside of an agent class:
```php
use LarAgent\History\InMemoryChatHistory;
use LarAgent\History\JsonChatHistory;
use LarAgent\Drivers\OpenAi\OpenAiDriver;
use LarAgent\LarAgent;
$driver = new OpenAiDriver(['api_key' => 'your-api-key']);
// Stores chat history in memory (lost after each request)
$history = new InMemoryChatHistory('user-123');
// Or in JSON files
$history = new JsonChatHistory('user-123', ['folder' => __DIR__.'/json_History']);
// Setup LarAgent
$agent = LarAgent::setup($driver, $history, [
'model' => 'gpt-4', // or any other model
]);
```
## Chat History Configuration
You can configure how chat history behaves using these properties in your Agent class:
### Reinjecting Instructions
```php
/** @var int - Number of messages after which to reinject the agent's instructions */
protected $reinjectInstructionsPer;
```
Instructions are always injected at the beginning of the chat history. The `$reinjectInstructionsPer` property defines when to reinject these instructions to ensure the agent stays on track. By default, it's set to `0` (disabled).
Reinjecting instructions can be useful for long conversations where the agent
might drift from its original purpose or forget important constraints.
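For example, following the property-example style used above:

```php
// Example: reinject the system instructions after every 10 messages
protected $reinjectInstructionsPer = 10;
```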
### Managing Context Window Size
```php
/** @var int - Maximum number of tokens to keep in context window */
protected $contextWindowSize;
```
When the context window is exceeded, the oldest messages are removed until the conversation fits within the limit again. This helps manage token usage and keeps the conversation within the model's context limits.
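For example:

```php
// Example: prune the oldest messages once the history exceeds ~50,000 tokens
protected $contextWindowSize = 50000;
```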
### Managing the chat session ID
```php
/** @var bool - Whether to include model name in chat session id */
protected $includeModelInChatSessionId;
```
By default it is set to `false` (disabled). If you want to include the model name in the chat session ID, set it to `true`.
Before v0.5, the model name was included in the chat history key by default: `AgentName_ModelName_UserId`.
To keep old code compatible, set it to `true` in your agent classes.
The agent automatically adds the **agent class basename** and the **model name** to each message's metadata to keep track of the agent and model used.
You can access a message's metadata using the `getMetadata()` method, or store it in the chat history automatically by setting the `storeMeta` property to `true` in your agent class.
Check [Storing usage data](#storing-usage-data) section for more information.
## How chat history works
```mermaid
flowchart LR
%% Main components
User([User])
LLM([LLM provider])
%% Group Agent and ChatHistory together
subgraph Framework["Agent class"]
Agent([Agent])
ChatHistory[Chat History]
%% Internal relationship
end
%% External relationships
User -->|#1 sends prompt| Agent
Agent -->|#2 processes prompt and adds message| ChatHistory
ChatHistory -->|#3 provides all messages| LLM
LLM -->|#4 generates response| Agent
Agent -->|#5 adds response| ChatHistory
ChatHistory -->|#6 returns the last message| User
```
The flow of information in the chat history process works as follows:
1. **User to Agent**: The user sends a prompt to the agent.
2. **Agent to Chat History**: The agent processes and adds the user's message to the chat history.
3. **Chat History to Agent**: The agent retrieves all relevant messages from the chat history.
4. **Agent to LLM**: The agent sends these messages as context to the language model.
5. **LLM to Agent**: The language model generates a response based on the provided context.
6. **Agent to Chat History**: The agent processes and adds the LLM's response to the chat history.
7. **Agent to User**: The agent displays the last message from the chat history to the user.
Throughout this process, the Context Window Management system:
* Tracks all messages in the conversation
* Counts tokens to ensure they stay within model limits
* Prunes older messages when necessary to maintain the context window size
#### Extensibility
You can implement custom logic for context window management using [events](/customization/events) and the chat history instance inside your agent.
Or create [custom chat history](#creating-custom-chat-histories) implementations by implementing the `ChatHistoryInterface`.
## Using Chat History
### Per-User Chat History
One of the most common use cases is maintaining separate chat histories for different users:
```php
// Create a chat history for a specific user
$response = MyAgent::forUser(auth()->user())->respond('Hello, how can you help me?');
// Later, the same user continues the conversation
$response = MyAgent::forUser(auth()->user())->respond('Can you explain more about that?');
```
### Named Chat Histories
You can also create named chat histories for specific contexts or topics:
```php
// Start a conversation about weather
$response = MyAgent::for('weather_chat')->respond("What's the weather like today?");
// Start a separate conversation about recipes
$response = MyAgent::for('recipe_chat')->respond('How do I make pasta carbonara?');
// Continue the weather conversation
$response = MyAgent::for('weather_chat')->respond('Will it rain tomorrow?');
```
## Accessing and Managing Chat History
LarAgent provides several methods to access and manage chat history:
```php Accessing History
// Get the current chat history instance
$history = $agent->chatHistory();
// Get all messages in the chat history
$messages = $history->getMessages();
// Get the last message (MessageInterface)
$lastMessage = $history->getLastMessage();
// Count messages in history
$count = $history->count();
// Get the chat history identifier
$identifier = $history->getIdentifier();
// Convert messages to array
$messagesArray = $history->toArray();
// Convert messages to array with metadata
$messagesWithMeta = $history->toArrayWithMeta();
```
```php Managing History
// Clear the current chat history
$agent->clear();
// Set a different chat history instance at runtime
$agent->setChatHistory(new CustomChatHistory('custom_identifier'));
// Get all chat keys associated with this agent class
$chatKeys = $agent->getChatKeys();
// Add a message to chat history
$history->addMessage(new Message('user', 'Hello, agent!'));
// Set context window size (in tokens)
$history->setContextWindow(4000);
// Check if context exceeds the token amount
if (!$history->exceedsContextWindow(500)) {
// Safe to add more content
}
```
The `addMessage(MessageInterface $message)` method adds a new message to the
chat history instance. The message is saved automatically by the agent class if
you are using the history in an agent context; otherwise, you can save it manually
using the `writeToMemory()` method.
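As a sketch (assuming a history used outside of an agent's response cycle), a manual save might look like:

```php
use LarAgent\Message;

// Grab the agent's current chat history instance
$history = $agent->chatHistory();

// Messages added here are not persisted automatically
// outside of an agent response cycle
$history->addMessage(Message::user('Hello, agent!'));

// Persist the history manually
$history->writeToMemory();
```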
## Creating Custom Chat Histories
You can create your own chat history by implementing the `ChatHistoryInterface` and extending the `LarAgent\Core\Abstractions\ChatHistory` abstract class.
Check example implementations in [src/History](https://github.com/MaestroError/LarAgent/tree/main/src/History)
There are two ways to register your custom chat history with an agent. If your implementation uses the standard constructor with only a `$name` parameter, you can register it by class in the `$history` property or in the provider configuration:
**Agent Class**
```php
protected $history = \App\ChatHistories\CustomChatHistory::class;
```
**Provider Configuration (config/laragent.php)**
```php
'chat_history' => \App\ChatHistories\CustomChatHistory::class,
```
If you need any configuration other than `$name`, you can override `createChatHistory()` method:
```php
public function createChatHistory($name)
{
return new \App\ChatHistories\CustomChatHistory($name, [
// Added config
'folder' => __DIR__.'/history',
// Default configs used inside Agent class:
'context_window' => $this->contextWindowSize,
'store_meta' => $this->storeMeta,
'save_chat_keys' => $this->saveChatKeys,
]);
}
```
## Storing usage data
You can store usage data (if available) automatically in the chat history by setting the `store_meta` config option or the `$storeMeta` property to `true`:
```php
protected $storeMeta = true;
```
This stores usage data (if available) in the chat history as message metadata.
Each message supports the `toArrayWithMeta()` method, which returns an array including metadata:
```php
// Example
$message = $message->toArrayWithMeta();
// Result
[
// Other message properties
'usage' => [
'prompt_tokens' => 10,
'completion_tokens' => 20,
'total_tokens' => 30,
// Other usage details
],
]
```
Before v0.5, keys were stored as `promptTokens` (camelCase) instead of `prompt_tokens` (snake_case), so make sure to update your code if you were relying on them.
Alternatively, you can store usage data manually using `afterResponse` hooks in your agent class:
```php
protected function afterResponse($message)
{
$messageArray = $message->toArrayWithMeta();
// Store usage data
Usage::create([
'agent_name' => class_basename($this),
'user_id' => auth()->user()->id,
'prompt_tokens' => $messageArray['usage']['prompt_tokens'],
'completion_tokens' => $messageArray['usage']['completion_tokens'],
'total_tokens' => $messageArray['usage']['total_tokens'],
]);
}
```
## Best Practices
* **Do** choose the appropriate chat history implementation based on your needs (persistence, performance, etc.)
* **Do** set a reasonable context window size to balance coherence and token usage
* **Do** use unique identifiers for chat histories to prevent cross-contamination
* **Don't** store sensitive information in chat histories without proper encryption
* **Don't** neglect to clear chat histories when they're no longer needed
# Artisan Commands
Source: https://docs.laragent.ai/core-concepts/commands
Artisan commands available in LarAgent for managing AI agents
LarAgent provides several Artisan commands to help you create and manage your AI agents. These commands streamline the development process and make it easier to interact with your agents.
## Creating an Agent
You can quickly create a new agent using the `make:agent` command:
```bash
php artisan make:agent AgentName
```
This will create a new agent class in your `app/AiAgents` directory with the basic structure and methods needed to get started. The generated agent includes:
* Default model configuration
* In-memory chat history
* Default provider setting
* Empty tools array
* Placeholder for instructions and prompt methods
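For reference, the generated stub looks roughly like this (the exact contents may differ between LarAgent versions):

```php
namespace App\AiAgents;

use LarAgent\Agent;

class AgentName extends Agent
{
    protected $model = 'gpt-4o-mini';

    protected $history = 'in_memory';

    protected $provider = 'default';

    protected $tools = [];

    public function instructions()
    {
        return "Define your agent's instructions here.";
    }

    public function prompt($message)
    {
        return $message;
    }
}
```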
## Interactive Chat
You can start an interactive chat session with any of your agents using the `agent:chat` command:
```bash
# Start a chat with default history name
php artisan agent:chat AgentName
# Start a chat with a specific history name
php artisan agent:chat AgentName --history=weather_chat_1
```
The interactive chat session provides the following capabilities:
* Send messages to your agent
* Get responses in real-time
* Use any tools configured for the agent
* Type 'exit' to end the chat session
This is particularly useful for testing your agent's functionality and behavior directly from the command line without needing to implement a full UI.
## Clear Chat History
You can clear all chat histories for a specific agent using the `agent:chat:clear` command:
```bash
php artisan agent:chat:clear AgentName
```
This command clears all chat histories for the specified agent while preserving the chat history structure and keys. This is useful when you want to reset conversations but maintain the same history identifiers.
## Remove Chat History
You can completely remove all chat histories and keys for a specific agent using the `agent:chat:remove` command:
```bash
php artisan agent:chat:remove AgentName
```
This command removes all chat histories and their associated keys for the specified agent, effectively resetting the chat history completely. Use this when you want to completely remove all traces of previous conversations.
Both the `remove` and `clear` commands operate on the currently configured chat history type. For example, if you were using `session` chat history and later switched to `file`, both commands will act on the `file` storage (the current chat history type).
## Planned Commands
LarAgent has plans to add more commands in the future to enhance developer experience:
* `make:agent:tool` - Generate tool classes with ready-to-use stubs
* `make:agent:chat-history` - Scaffold custom chat history implementations
These upcoming commands will further simplify the development process and make it easier to extend LarAgent's functionality.
# LLM Drivers
Source: https://docs.laragent.ai/core-concepts/llm-drivers
LLM Drivers provide a flexible interface to connect with different AI providers while maintaining a consistent API for your application.
LLM Drivers allow you to switch between different AI providers (like OpenAI, Ollama, or OpenRouter) without changing your application code, providing flexibility and vendor independence.
## Understanding LLM Drivers
LLM Drivers provide a standardized interface for interacting with different language model providers, allowing you to easily switch between providers without changing your application code.
The built-in drivers implement this interface and provide a simple way to use various AI APIs with a consistent interface.
Check [Creating Custom Drivers](#creating-custom-drivers) for more details about building custom drivers.
## Available Drivers
* **OpenAiDriver** - the default driver for the OpenAI API. Works with minimal configuration; just add your `OPENAI_API_KEY` to your `.env` file.
* **OpenAiCompatible** - works with any OpenAI-compatible API, allowing you to use alternative backends with the same API format.
* **GeminiDriver** - works with the Google Gemini API via its OpenAI-compatible endpoint.
* **Groq** - works with the Groq platform API; add `GROQ_API_KEY` to your `.env` file and use the "groq" provider in your agents.
## Configuring Drivers
You can configure LLM drivers in two ways:
The `default` (OpenAI), `gemini`, and `groq` drivers come pre-configured in the `providers` array of `config/laragent.php`.
You can use them right away or delete the ones you don't need.
### 1. Global Configuration
Set drivers in the configuration file inside provider settings (`config/laragent.php`):
```php {5}
'providers' => [
'default' => [
'label' => 'openai',
'api_key' => env('OPENAI_API_KEY'),
'driver' => \LarAgent\Drivers\OpenAi\OpenAiDriver::class,
],
],
```
### 2. Per-Agent Configuration
Set the driver directly in your agent class:
```php app/AiAgents/YourAgent.php
namespace App\AiAgents;
use LarAgent\Agent;
use LarAgent\Drivers\OpenAi\OpenAiCompatible;
class YourAgent extends Agent
{
protected $driver = OpenAiCompatible::class;
// Other agent configuration
}
```
If you set the driver in the agent class, it will override the global configuration.
## Example Configurations
### Ollama (Local LLM)
```php
// File: config/laragent.php
'providers' => [
'ollama' => [
'label' => 'ollama-local',
'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
'api_key' => 'ollama', // Can be any string for Ollama
'api_url' => "http://localhost:11434/v1",
'default_context_window' => 50000,
'default_max_completion_tokens' => 100,
'default_temperature' => 1,
],
],
```
```php
// In your agent class
protected $provider = 'ollama';
protected $model = 'llama2'; // Or any other model available in your Ollama instance
```
### OpenRouter
```php
// File: config/laragent.php
'providers' => [
'openrouter' => [
'label' => 'openrouter-provider',
'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
'api_key' => env('OPENROUTER_API_KEY'),
'api_url' => "https://api.openrouter.ai/api/v1",
'default_context_window' => 50000,
'default_max_completion_tokens' => 100,
'default_temperature' => 1,
],
],
```
```php
// In your agent class
protected $provider = 'openrouter';
protected $model = 'anthropic/claude-3-opus'; // Or any other model available on OpenRouter
```
### Gemini
```php
'gemini' => [
'label' => 'gemini',
'model'=>'gemini-2.5-pro-preview-03-25',
'api_key' => env('GEMINI_API_KEY'),
'driver' => \LarAgent\Drivers\OpenAi\GeminiDriver::class,
'default_context_window' => 1000000,
'default_max_completion_tokens' => 10000,
'default_temperature' => 1,
],
```
```php
// In your agent class
protected $provider = 'gemini';
```
The Gemini driver doesn't support streaming yet.
## Fallback Provider
There is a fallback provider that is used when the current provider fails to process a request.
By default, it's set to the default provider:
```php
// File: config/laragent.php
'fallback_provider' => 'default',
```
You can set any provider as the fallback in the configuration file; just replace "default" with your provider name:
**Example configuration**
```php {32}
return [
// ...
'providers' => [
'default' => [
'name' => 'openai',
'model' => 'gpt-4o-mini',
'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
'api_key' => env('OPENAI_API_KEY'),
'default_context_window' => 50000,
'default_max_completion_tokens' => 100,
'default_temperature' => 1,
],
'ollama' => [
'name' => 'ollama-local',
// Required configs for fallback provider:
'model' => 'llama3.2',
'api_key' => 'ollama',
'api_url' => "http://localhost:11434/v1",
'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
],
'gemini' => [
'name' => 'gemini provider',
'model' => 'model-GEMINI',
'driver' => \LarAgent\Drivers\OpenAi\GeminiDriver::class,
'api_key' => env('GEMINI_API_KEY'),
'default_context_window' => 500000,
'default_max_completion_tokens' => 10000,
'default_temperature' => 0.8,
],
],
'fallback_provider' => 'ollama',
];
```
It is recommended to set a `model` in any provider used as a fallback.
You can disable the fallback provider by setting `fallback_provider` to `null` in the configuration file, or simply by removing the key.
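For example, disabling the fallback explicitly:

```php
// config/laragent.php
'fallback_provider' => null, // no provider is tried when the current one fails
```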
## LLM Drivers Architecture
```mermaid
flowchart TD
Agent[Agent] --> LlmDriver[LLM Driver]
subgraph AbstractionLayer["LLM Driver Abstraction"]
LlmDriver -->|#0 Set configs model, temperature, etc. | Config[Configuration]
LlmDriver -->|#1 Register Tools| Tools[Tools Registry]
LlmDriver -->|#2 Set Response Schema| Schema[Response Schema]
LlmDriver -->|#3 Format Tool Calls| Formatter[Tool Call structure]
end
subgraph Implementations["Provider Implementations"]
LlmDriver --> OpenAI[OpenAI Driver]
LlmDriver --> Compatible[OpenAI Compatible]
LlmDriver --> Anthropic[Anthropic Driver]
end
OpenAI --> LLMProvider[OpenAI API]
Compatible --> LLMCompat[Ollama/Gemini/Other API]
Anthropic --> LLMAnthropic[Anthropic API]
```
The LLM Driver architecture handles three key responsibilities:
1. **Tool Registration** - Register function calling tools that can be used by the LLM
2. **Response Schema** - Define structured output formats for LLM responses
3. **Tool Call Formatting** - Abstract away provider-specific formats for tool calls and results
This abstraction allows you to switch between different LLM providers without changing your application code.
## Creating Custom Drivers
If you need to integrate with an AI provider that doesn't have a built-in driver, you can create your own by implementing the `LlmDriver` interface:
```php
namespace App\LlmDrivers;
use LarAgent\Core\Abstractions\LlmDriver;
use LarAgent\Core\Contracts\LlmDriver as LlmDriverInterface;
use LarAgent\Core\Contracts\ToolCall as ToolCallInterface;
use LarAgent\Messages\AssistantMessage;
use LarAgent\Messages\StreamedAssistantMessage;
use LarAgent\Messages\ToolCallMessage;
use LarAgent\ToolCall;
class CustomProviderDriver extends LlmDriver implements LlmDriverInterface
{
public function sendMessage(array $messages, array $options = []): AssistantMessage
{
// Implement the API call to your provider
}
public function sendMessageStreamed(array $messages, array $options = [], ?callable $callback = null): \Generator
{
// Implement streaming for your custom provider
}
public function toolCallsToMessage(array $toolCalls): array
{
// Implement tool calls to message conversion
}
public function toolResultToMessage(ToolCallInterface $toolCall, mixed $result): array
{
// Implement tool result to message conversion
}
// Implement other helper methods...
}
```
Then register your custom driver in the configuration:
```php
// config/laragent.php
'providers' => [
'custom' => [
'label' => 'my-custom-provider',
'driver' => \App\LlmDrivers\CustomProviderDriver::class,
'api_key' => env('CUSTOM_PROVIDER_API_KEY'),
'api_url' => env('CUSTOM_PROVIDER_API_URL'),
'model' => 'model-name',
// Any other configuration your driver needs
],
],
```
Check [Base OpenAI driver](https://github.com/MaestroError/LarAgent/blob/main/src/Drivers/OpenAi/BaseOpenAiDriver.php) for example.
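Once the provider is configured, an agent can opt into it like any built-in provider; the `custom` key below must match the key used in the `providers` array (the class name is illustrative):

```php
namespace App\AiAgents;

use LarAgent\Agent;

class CustomAgent extends Agent
{
    // Matches the 'custom' key in config/laragent.php
    protected $provider = 'custom';
}
```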
## Best Practices
**Do** store API keys in environment variables, never hardcode them
**Do** set reasonable defaults for context window and token limits
**Do** consider implementing fallback mechanisms between providers
**Don't** expose sensitive provider configuration in client-side code
**Don't** assume all providers support the same features (like function calling or parallel tool execution)
# Streaming
Source: https://docs.laragent.ai/core-concepts/streaming
Receive AI responses in real-time chunks rather than waiting for the complete response, improving user experience for long interactions.
Streaming allows your application to display AI responses as they're being generated, creating a more responsive and engaging user experience.
## Basic Streaming
The simplest way to use streaming is with the `respondStreamed` method:
```php
$agent = WeatherAgent::for('user-123');
$stream = $agent->respondStreamed('What\'s the weather like in Boston and Los Angeles?');
foreach ($stream as $chunk) {
if ($chunk instanceof \LarAgent\Messages\StreamedAssistantMessage) {
echo $chunk->getLastChunk(); // Output each new piece of content
}
}
```
## Streaming with Callback
You can also provide a callback function to process each chunk:
```php
$agent = WeatherAgent::for('user-123');
$stream = $agent->respondStreamed(
'What\'s the weather like in Boston and Los Angeles?',
function ($chunk) {
if ($chunk instanceof \LarAgent\Messages\StreamedAssistantMessage) {
echo $chunk->getLastChunk();
// Flush output buffer to send content to the browser immediately
ob_flush();
flush();
}
}
);
// Consume the stream to ensure it completes
foreach ($stream as $_) {
// The callback handles the output
}
```
## Understanding Stream Chunks
The stream can yield three types of chunks:
* `StreamedAssistantMessage` - regular text content chunks from the AI assistant
* `ToolCallMessage` - tool call messages (handled internally by LarAgent)
* `array` - the final chunk when structured output is enabled
For most use cases, you only need to handle `StreamedAssistantMessage` chunks as shown in the examples above. Tool calls are processed automatically by LarAgent.
## Laravel HTTP Streaming
For Laravel applications, LarAgent provides the `streamResponse` method that returns a Laravel `StreamedResponse`, making it easy to integrate with your controllers:
```php
// In a controller
public function chat(Request $request)
{
$message = $request->input('message');
$agent = WeatherAgent::for(auth()->id());
// Return a streamable response
return $agent->streamResponse($message, 'plain');
}
```
The `streamResponse` method supports three formats:
* Plain Text
* JSON
* Server-Sent Events (SSE)
```php
// Simple text output
return $agent->streamResponse($message, 'plain');
```
Frontend implementation (JavaScript):
```javascript
fetch('/chat?message=What is the weather in Boston?')
.then(response => {
const reader = response.body.getReader();
const decoder = new TextDecoder();
function read() {
return reader.read().then(({ done, value }) => {
if (done) return;
const text = decoder.decode(value);
document.getElementById('output').textContent += text;
return read();
});
}
return read();
});
```
```php
// Structured JSON with delta and content
return $agent->streamResponse($message, 'json');
```
Example output:
```json
{"delta":"Hello","content":"Hello"}
{"delta":" there","content":"Hello there"}
{"delta":"!","content":"Hello there!"}
```
Frontend implementation:
```javascript
fetch('/chat?message=Greet me&format=json')
.then(response => {
const reader = response.body.getReader();
const decoder = new TextDecoder();
function read() {
return reader.read().then(({ done, value }) => {
if (done) return;
const text = decoder.decode(value);
const lines = text.split('\n').filter(line => line.trim());
lines.forEach(line => {
try {
const data = JSON.parse(line);
document.getElementById('output').textContent = data.content;
} catch (e) {
console.error('Error parsing JSON:', e);
}
});
return read();
});
}
return read();
});
```
```php
// Server-Sent Events format with event types
return $agent->streamResponse($message, 'sse');
```
Example output:
```
event: content
data: {"delta":"Hello","content":"Hello"}
event: content
data: {"delta":" there","content":"Hello there"}
event: content
data: {"delta":"!","content":"Hello there!"}
event: complete
data: {"content":"Hello there!"}
```
Frontend implementation:
```javascript
const eventSource = new EventSource('/chat?message=Greet me&format=sse');
eventSource.addEventListener('content', function(e) {
const data = JSON.parse(e.data);
document.getElementById('output').textContent = data.content;
});
eventSource.addEventListener('complete', function(e) {
eventSource.close();
});
eventSource.addEventListener('error', function(e) {
console.error('EventSource error:', e);
eventSource.close();
});
```
***
## Streaming with Structured Output
When using structured output with streaming, you'll receive text chunks during generation, and the final structured data at the end:
```php
$agent = ProfileAgent::for('user-123');
$stream = $agent->respondStreamed('Generate a profile for John Doe');
$finalStructuredData = null;
foreach ($stream as $chunk) {
if ($chunk instanceof \LarAgent\Messages\StreamedAssistantMessage) {
echo $chunk->getLastChunk(); // Part of JSON
} elseif (is_array($chunk)) {
// This is the final structured data
$finalStructuredData = $chunk;
}
}
// Now $finalStructuredData is array which contains the structured output
// For example: ['name' => 'John Doe', 'age' => 30, 'interests' => [...]]
```
When using SSE format with structured output, you'll receive a special event:
```
event: structured
data: {"type":"structured","delta":"","content":{"name":"John Doe","age":30,"interests":["coding","reading","hiking"]},"complete":true}
event: complete
data: {"content":{"name":"John Doe","age":30,"interests":["coding","reading","hiking"]}}
```
## Best Practices
**Do** use streaming for long responses to improve user experience
**Do** handle both text chunks and structured output appropriately
**Do** implement proper error handling in your streaming code
**Don't** forget to consume the entire stream, even when using callbacks
**Don't** rely on specific timing of chunks, as they can vary based on network conditions
# Structured Output
Source: https://docs.laragent.ai/core-concepts/structured-output
Define JSON schemas to receive structured, predictable responses from your AI agents instead of free-form text.
Structured output allows you to define a specific JSON schema that your AI responses must conform to, making it easier to integrate AI-generated content with your application logic.
## Defining Response Schemas
You can define the response schema in your agent class using the `$responseSchema` property, or implement the `structuredOutput()` method for more complex schemas.
```php Basic Schema
protected $responseSchema = [
'name' => 'weather_info',
'schema' => [
'type' => 'object',
'properties' => [
'temperature' => [
'type' => 'number',
'description' => 'Temperature in degrees'
],
],
'required' => ['temperature'],
'additionalProperties' => false,
],
'strict' => true,
];
```
```php Complex Schema
public function structuredOutput()
{
return [
'name' => 'weather_info',
'schema' => [
'type' => 'object',
'properties' => [
'temperature' => [
'type' => 'number',
'description' => 'Temperature in degrees'
],
'conditions' => [
'type' => 'string',
'description' => 'Weather conditions (e.g., sunny, rainy)'
],
'forecast' => [
'type' => 'array',
'items' => [
'type' => 'object',
'properties' => [
'day' => ['type' => 'string'],
'temp' => ['type' => 'number']
],
'required' => ['day', 'temp'],
'additionalProperties' => false,
],
'description' => '5-day forecast'
]
],
'required' => ['temperature', 'conditions'],
'additionalProperties' => false,
],
'strict' => true,
];
}
```
Pay attention to the `required`, `additionalProperties`, and `strict` properties. It's recommended by OpenAI to set these when defining schemas to get exactly the structure you need.
For complex schemas, it's recommended to use the `structuredOutput()` method instead of the property, as it provides more flexibility and can include dynamic logic.
Using a trait that provides the `structuredOutput()` method is a good way to avoid code duplication and keep your code maintainable.
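As a sketch of the trait approach, a trait can carry the schema so several agents share it (trait and class names are illustrative):

```php
use LarAgent\Agent;

// Reusable schema definition shared across agents
trait HasWeatherSchema
{
    public function structuredOutput()
    {
        return [
            'name' => 'weather_info',
            'schema' => [
                'type' => 'object',
                'properties' => [
                    'temperature' => [
                        'type' => 'number',
                        'description' => 'Temperature in degrees',
                    ],
                ],
                'required' => ['temperature'],
                'additionalProperties' => false,
            ],
            'strict' => true,
        ];
    }
}

class WeatherAgent extends Agent
{
    use HasWeatherSchema;
}
```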
#### Schema Configuration Options
The schema follows the JSON Schema specification and supports all its features:
* **Property types** - string, number, boolean, array, object
* **Required fields** - specify which fields must be present
* **Nested structures** - complex objects and arrays
* **Descriptions** - guide the AI on what each field should contain
## Using Structured Output
When structured output is defined, the agent's response will be automatically formatted and returned as an array according to the schema:
```php
// Example response when using the complex schema above
$response = $agent->respond("What's the weather like today?");
// Returns:
[
'temperature' => 25.5,
'conditions' => 'sunny',
'forecast' => [
['day' => 'tomorrow', 'temp' => 28],
['day' => 'Wednesday', 'temp' => 24]
]
]
```
## Runtime Schema Management
The schema can be accessed or modified at runtime.
### Access schema
The schema can be accessed using the `structuredOutput()` method:
```php
// Get current schema
$schema = $agent->structuredOutput();
// Check if structured output is enabled
if ($agent->structuredOutput()) {
// Handle structured response
}
```
### Modify or set schema
The schema can be modified using the `responseSchema()` method:
```php
// responseSchema(?array $schema)
$agent->responseSchema([
'name' => 'user_profile',
'schema' => [
'type' => 'object',
'properties' => [
'name' => ['type' => 'string'],
'age' => ['type' => 'number'],
'interests' => [
'type' => 'array',
'items' => ['type' => 'string']
],
],
'required' => ['name', 'age', 'interests'],
'additionalProperties' => false,
],
'strict' => true,
]);
```
## Example Use Cases
```php
protected $responseSchema = [
'name' => 'user_profile',
'schema' => [
'type' => 'object',
'properties' => [
'name' => ['type' => 'string'],
'age' => ['type' => 'number'],
'interests' => [
'type' => 'array',
'items' => ['type' => 'string']
],
],
'required' => ['name', 'age', 'interests'],
'additionalProperties' => false,
],
'strict' => true,
];
```
```php
protected $responseSchema = [
'name' => 'product_recommendations',
'schema' => [
'type' => 'object',
'properties' => [
'products' => [
'type' => 'array',
'items' => [
'type' => 'object',
'properties' => [
'id' => ['type' => 'string'],
'name' => ['type' => 'string'],
'reason' => ['type' => 'string'],
'score' => ['type' => 'number'],
],
'required' => ['id', 'name', 'reason'],
'additionalProperties' => false,
],
],
'category' => ['type' => 'string'],
],
'required' => ['products', 'category'],
'additionalProperties' => false,
],
'strict' => true,
];
```
```php
protected $responseSchema = [
'name' => 'content_analysis',
'schema' => [
'type' => 'object',
'properties' => [
'sentiment' => [
'type' => 'string',
'enum' => ['positive', 'neutral', 'negative'],
],
'topics' => [
'type' => 'array',
'items' => ['type' => 'string'],
],
'summary' => ['type' => 'string'],
'keyPoints' => [
'type' => 'array',
'items' => ['type' => 'string'],
],
],
'required' => ['sentiment', 'topics', 'summary'],
'additionalProperties' => false,
],
'strict' => true,
];
```
In most cases you should include all properties in `required`, skipping only those that are genuinely optional.
## Best Practices
**Do** make your schema as specific as possible
**Do** include descriptive property descriptions to guide the AI
**Do** set `additionalProperties` to `false` when you want to restrict the output to only the defined properties
**Don't** create overly complex (Deeply Nested) schemas that the AI might struggle to fulfill
**Don't** forget to set the `required` property for fields that must be present
# Tools
Source: https://docs.laragent.ai/core-concepts/tools
Tools extend your agent's capabilities, allowing it to perform tasks like sending messages, making API calls, or executing commands.
Tools (also known as function calling) allow your AI agents to interact with
external systems, APIs, and services, greatly expanding their capabilities
beyond simple text generation.
## Tool Configuration
Tools in LarAgent can be configured using these properties in your Agent class:
```php
/** @var bool - Controls whether tools can be executed in parallel */
protected $parallelToolCalls;
/** @var array - List of tool classes to be registered with the agent */
protected $tools = [];
```
You can set `$parallelToolCalls` to `null` to omit it from the
request, as some models (e.g., o1) do not support parallel tool calls.
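As a minimal sketch (the class name is illustrative):

```php
use LarAgent\Agent;

class ReasoningAgent extends Agent
{
    // null omits the parameter from the request entirely,
    // which is required for models like o1
    protected $parallelToolCalls = null;
}
```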
## Creating Tools
There are three ways to create and register tools in your agent:
1. **`#[Tool]` attribute** - best for service-based or static tools specific to a single agent.
2. **`registerTools()` method** - ideal for dynamically creating tools based on runtime conditions or user state.
3. **Tool classes** - perfect for complex tools that may be reused across multiple agents or require extensive logic.
### 1. Using the Tool Attribute
The simplest approach is using the `#[Tool]` attribute to transform your agent's methods into tools:
```php Basic Tool
use LarAgent\Attributes\Tool;
#[Tool('Get the current weather in a given location')]
public function weatherTool($location, $unit = 'celsius')
{
return 'The weather in '.$location.' is '.'20'.' degrees '.$unit;
}
```
```php Tool with Parameter Descriptions
use LarAgent\Attributes\Tool;
#[Tool(
'Get the current weather in a given location',
[
'location' => 'The city and state, e.g. San Francisco, CA',
'unit' => 'Unit of temperature'
]
)]
public function weatherTool($location, $unit = 'celsius')
{
return 'The weather in '.$location.' is '.'20'.' degrees '.$unit;
}
```
The agent will automatically register the tool with the given description and extract method information, including parameter names and types.
You can also add tools without properties (i.e., methods without parameters).
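For instance, a parameterless tool is just a method with no arguments (this example is illustrative):

```php
use LarAgent\Attributes\Tool;

#[Tool('Get the current server time in UTC')]
public function currentTimeTool()
{
    // Uses Laravel's now() helper; no parameters needed from the LLM
    return now()->utc()->toDateTimeString();
}
```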
#### Using Enum Types with Tools
You can use PHP Enums to provide the AI with a specific set of options to choose from, as well as provide separate descriptions for each property (argument):
```php
// app/Enums/Unit.php
namespace App\Enums;
enum Unit: string
{
case CELSIUS = 'celsius';
case FAHRENHEIT = 'fahrenheit';
}
// app/AiAgents/WeatherAgent.php
use LarAgent\Attributes\Tool;
use App\Enums\Unit;
// ...
#[Tool(
'Get the current weather in a given location',
[
'unit' => 'Unit of temperature',
'location' => 'The city and state, e.g. San Francisco, CA'
]
)]
public static function weatherToolForNewYork(Unit $unit, $location = 'New York')
{
return WeatherService::getWeather($location, $unit->value);
}
```
It's recommended to use the `#[Tool]` attribute with static methods if there's
no need for the agent instance (`$this`).
### 2. Using the registerTools Method
This method allows you to programmatically create and register tools using the `LarAgent\Tool` class:
```php
use LarAgent\Tool;
public function registerTools()
{
$user = auth()->user();
return [
Tool::create("user_location", "Returns user's current location")
->setCallback(function () use ($user) {
return $user->location()->city;
}),
Tool::create("get_current_weather", "Returns the current weather in a given location")
->addProperty("location", "string", "The city and state, e.g. San Francisco, CA")
->setCallback("getWeather"),
];
}
```
`setCallback` method accepts any [php
callable](https://www.php.net/manual/en/language.types.callable.php), such as
a function name, a closure, or a class method.
### 3. Using Tool Classes
For complex tools, you can create dedicated tool classes and add them to the `$tools` property:
```php
protected $tools = [
WeatherTool::class,
LocationTool::class
];
```
#### Example Tool Class
A tool creation Artisan command is coming soon...
```php
class WeatherTool extends LarAgent\Tool
{
protected string $name = 'get_current_weather';
protected string $description = 'Get the current weather in a given location';
protected array $properties = [
'location' => [
'type' => 'string',
'description' => 'The city and state, e.g. San Francisco, CA',
],
'unit' => [
'type' => 'string',
'description' => 'The unit of temperature',
'enum' => ['celsius', 'fahrenheit'],
],
];
protected array $required = ['location'];
protected array $metaData = ['sent_at' => '2024-01-01'];
public function execute(array $input): mixed
{
// Call the weather API
return 'The weather in '.$input['location'].' is '.rand(10, 60).' degrees '.$input['unit'];
}
}
```
## Tool choice
You can set the tool choice for your agent using the following methods:
```php
// Disable tools for this specific call:
WeatherAgent::for('test_chat')->toolNone()->respond('What is my name?');
// Require at least 1 tool call for this specific call:
WeatherAgent::for('test_chat')->toolRequired()->respond('Who is president of US?');
// Force specific tool to be used for this specific call:
WeatherAgent::for('test_chat')->forceTool('weatherToolForNewYork')->respond('What is weather in New York?');
```
The `forceTool` method requires the tool's name as a parameter.
`toolRequired` and `forceTool` apply only to the first call; after that, the tool choice automatically switches to 'auto' to avoid infinite loops.
If tools are registered, the default value is 'auto'; otherwise it's 'none'.
```php
protected $toolChoice = 'auto';
```
## Phantom Tools 👻
Phantom Tools are dynamically registered tools that are not executed on the LarAgent side; instead, they return a [`ToolCallMessage`](https://github.com/MaestroError/LarAgent/blob/ed4f0611f98ca93815443da3de6c987f0ee88450/src/Messages/ToolCallMessage.php#L7).
```php at runtime
use LarAgent\PhantomTool;
// ...
// Create a Phantom tool with properties and custom callback
$PhantomTool = PhantomTool::create('Phantom_tool', 'Get the current weather in a given location')
->addProperty('location', 'string', 'The city and state, e.g. San Francisco, CA')
->setRequired('location')
->setCallback("PhantomTool");
// Register the Phantom tool with the agent
$agent->withTool($PhantomTool);
```
```php in Agent class
use LarAgent\PhantomTool;
// ...
public function registerTools()
{
return [
PhantomTool::create('Phantom_tool', 'Get the current weather in a given location')
->addProperty('location', 'string', 'The city and state, e.g. San Francisco, CA')
->setRequired('location')
->setCallback("PhantomTool"),
];
}
```
Phantom Tools are particularly useful when:
* You need to integrate with external services dynamically
* You want to handle tool execution outside of LarAgent
* You need to make tool registration/execution available from API
Phantom Tools follow the same interface as regular tools, but instead of being executed automatically they return a [`ToolCallMessage`](https://github.com/MaestroError/LarAgent/blob/ed4f0611f98ca93815443da3de6c987f0ee88450/src/Messages/ToolCallMessage.php#L7) instance, giving you more flexibility
over when and how they are executed.
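A sketch of consuming the returned message yourself (assuming a `getToolCalls()` accessor on `ToolCallMessage`; the dispatch helper is hypothetical):

```php
$response = $agent->respond('What is the weather in Boston?');

if ($response instanceof \LarAgent\Messages\ToolCallMessage) {
    foreach ($response->getToolCalls() as $toolCall) {
        // Hand the call off to your own executor instead of LarAgent
        dispatchToExternalService($toolCall);
    }
}
```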
## Chainable Tool Methods
You can dynamically add or remove tools during runtime using `withTool` and `removeTool` methods:
**Add tool with instance**
```php
$tool = Tool::create('test_tool', 'Test tool')->setCallback(fn () => 'test');
$agent->withTool($tool);
```
**Add tool with predefined tool class**
```php
$agent->withTool(WeatherTool::class);
```
**Remove tool by name**
```php
$agent->removeTool('get_current_weather');
```
**Remove tool by instance**
```php
$tool = Tool::create('test_tool', 'Test tool')->setCallback(fn () => 'test');
$agent->withTool($tool);
if (!$needsTool) {
$agent->removeTool($tool);
}
```
**Remove tool by class name**
```php
$agent->removeTool(WeatherTool::class);
```
**Set toolChoice property**
```php
// @param string|array|null $toolChoice Tool choice configuration
$agent->setToolChoice('none');
```
**Enable/Disable parallel tool calls**
```php
// @param bool|null $parallel Parallel tool calls configuration
$agent->parallelToolCalls(true);
```
## Best Practices
**Do** create separate tool classes for complex functionality that might be
reused
**Do** provide clear, descriptive names and parameter descriptions
**Do** use Enums when you need to restrict the AI to specific options
**Don't** create tools with ambiguous
functionality or unclear parameter requirements
**Don't** expose sensitive operations
without proper validation and security checks
# Usage Without Laravel
Source: https://docs.laragent.ai/core-concepts/usage-without-laravel
Learn how to use LarAgent in non-Laravel PHP projects by leveraging the standalone engine.
**Important Notice**: LarAgent will soon drop support for direct usage outside of Laravel. However, the core engine will be released as a separate package with the same API (methods/functions), ensuring continued support for non-Laravel projects.
## Understanding engine
LarAgent's core functionality is powered by the `LarAgent\LarAgent` class, often referred to as the "LarAgent engine." This engine is a standalone component that contains all the abstractions and doesn't depend on Laravel. It manages agents, tools, chat histories, structured output, and other core features.
```mermaid
flowchart TB
subgraph "LarAgent Engine"
Engine["LarAgent Core"]
HistoryManager["Chat History"]
LLMBridge["LLM Driver"]
end
subgraph "Agent configuration"
StructuredOutput["Structured Output"]
ParallelToolCalls["Parallel Tool Calls"]
Other["Other"]
end
subgraph "Manages tools"
RegisterTools["Register Tools"]
ExecutesToolCalls["Executes Tool Calls"]
end
subgraph "Lifecycle Hooks"
beforeSend
beforeResponse
afterResponse
afterSend
beforeSaveHistory
end
subgraph "Lifecycle Hooks (Optional)"
beforeToolExecution
afterToolExecution
beforeStructuredOutput
beforeReinjectingInstructions
end
Application["Application"] --> Engine
Engine --> HistoryManager
Engine --> LLMBridge
Engine --> beforeSend
beforeSend --> beforeResponse
beforeResponse --> afterResponse
afterResponse --> afterSend
afterSend --> beforeSaveHistory
Engine --> RegisterTools
RegisterTools <--> ExecutesToolCalls
Engine --> StructuredOutput
StructuredOutput <--> ParallelToolCalls
ParallelToolCalls <--> Other
class Engine,HistoryManager,LLMBridge primary
class StructuredOutput,ParallelToolCalls,Other,RegisterTools,ExecutesToolCalls secondary
classDef primary fill:#4f46e5,stroke:#312e81,color:white
classDef secondary fill:#818cf8,stroke:#4f46e5,color:white
```
## Getting Started
To use LarAgent outside of Laravel, you'll need to handle some configuration and initialization manually:
```php
use LarAgent\Drivers\OpenAi\OpenAiDriver;
use LarAgent\History\InMemoryChatHistory;
use LarAgent\LarAgent;
use LarAgent\Message;
// Setup
$yourApiKey = 'your-openai-api-key'; // Replace with your actual API key
$driver = new OpenAiDriver(['api_key' => $yourApiKey]);
$chatKey = 'test-chat-history';
$chatHistory = new InMemoryChatHistory($chatKey);
$agent = LarAgent::setup($driver, $chatHistory, [
'model' => 'gpt-4o-mini',
]);
// Add a message and get a response
$userMessage = Message::user('Hello, how can you help me?');
$agent->withMessage($userMessage);
$response = $agent->run();
echo $response;
```
## Configuring Structured Output
You can define structured output schemas to get responses in a specific format:
```php
// Define a structured output schema
$weatherInfoSchema = [
'name' => 'weather_info',
'schema' => [
'type' => 'object',
'properties' => [
'locations' => [
'type' => 'array',
'items' => [
'type' => 'object',
'properties' => [
'city' => ['type' => 'string'],
'weather' => ['type' => 'string'],
],
'required' => ['city', 'weather'],
'additionalProperties' => false,
],
],
],
'required' => ['locations'],
'additionalProperties' => false,
],
'strict' => true,
];
// Use the schema with the agent
$userMessage = Message::user('What\'s the weather like in Boston and Los Angeles? I prefer celsius');
$agent->structured($weatherInfoSchema)->withMessage($userMessage);
$response = $agent->run();
// The response will be a structured array
print_r($response);
/* Outputs:
Array
(
[locations] => Array
(
[0] => Array
(
[city] => Boston, MA
[weather] => The weather is 22 degrees Celsius.
)
[1] => Array
(
[city] => Los Angeles, CA
[weather] => The weather is 22 degrees Celsius.
)
)
)
*/
```
## Adding Tools
You can add tools to the agent using the `Tool` class:
```php
use LarAgent\Tool;
// Create a tool
$toolName = 'get_current_weather';
$tool = Tool::create($toolName, 'Get the current weather in a given location');
$tool->addProperty('location', 'string', 'The city and state, e.g. San Francisco, CA')
->addProperty('unit', 'string', 'The unit of temperature', ['celsius', 'fahrenheit'])
->setRequired('location')
->setMetaData(['sent_at' => '2024-01-01'])
->setCallback(function ($location, $unit = 'fahrenheit') {
// "Call the weather API"
return 'The weather in '.$location.' is 72 degrees '.$unit;
});
// Add the tool to the agent
$agent->setTools([$tool]);
```
## Use Chat History
Outside of Laravel, you'll need to create a chat history instance and pass it into the setup method:
```php
use LarAgent\History\JsonChatHistory;
// Setup
$yourApiKey = 'your-openai-api-key'; // Replace with your actual API key
$driver = new OpenAiDriver(['api_key' => $yourApiKey]);
$chatHistory = new JsonChatHistory('test-chat-history');
$agent = LarAgent::setup($driver, $chatHistory, [
'model' => 'gpt-4o-mini',
]);
```
There are two chat history classes available for use outside of Laravel:
* `JsonChatHistory`
* `InMemoryChatHistory`
You can easily implement your own, check [custom-chat-history](customization/custom-chat-history) for more information.
## Differences from Laravel Usage
When using LarAgent outside of Laravel, be aware of these key differences:
* You need to provide all configuration directly instead of using Laravel's config system.
* You won't have access to the helpful artisan commands for generating agents and tools.
* Instead of agent classes, you create agents with `LarAgent::setup()`.
* You can't leverage Laravel's service container for dependency injection.
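For example, configuration that Laravel would normally resolve for you has to be assembled by hand. A minimal sketch of such a resolver (the `resolveSettings` helper and its defaults are illustrative, not part of LarAgent):

```php
// Hypothetical settings resolver, standing in for Laravel's config() helper.
// Key names mirror those used in config/laragent.php.
function resolveSettings(array $overrides = []): array
{
    $defaults = [
        'api_key' => getenv('OPENAI_API_KEY') ?: null,
        'model'   => 'gpt-4o-mini',
    ];

    $settings = array_merge($defaults, $overrides);

    // Fail fast: Laravel isn't around to validate configuration for you
    if ($settings['api_key'] === null) {
        throw new \InvalidArgumentException('api_key is required');
    }

    return $settings;
}
```

The resolved array can then be passed straight into the driver, e.g. `new OpenAiDriver(resolveSettings())`.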
## Future Plans
As mentioned at the top of this page, LarAgent will soon separate its core engine into a standalone package. This will provide a cleaner, more focused API for non-Laravel projects while maintaining full compatibility with the current methods and functions.
The standalone engine will offer:
* Improved documentation specifically for non-Laravel usage
* Simplified installation for PHP projects
* Reduced dependencies
* The same powerful features you're used to in LarAgent
Stay tuned to the [official repository](https://github.com/MaestroError/LarAgent) for announcements about this upcoming change.
## Complete Example
Here's a complete example combining structured output, tools, and custom instructions:
```php
use LarAgent\Drivers\OpenAi\OpenAiDriver;
use LarAgent\History\InMemoryChatHistory;
use LarAgent\LarAgent;
use LarAgent\Message;
use LarAgent\Messages\ToolCallMessage;
use LarAgent\Tool;
// Setup
$driver = new OpenAiDriver(['api_key' => $yourApiKey]);
$chatHistory = new InMemoryChatHistory('weather-chat');
$agent = LarAgent::setup($driver, $chatHistory, [
'model' => 'gpt-4o-mini',
]);
// Create a weather tool
$weatherTool = Tool::create('get_current_weather', 'Get the current weather in a given location')
->addProperty('location', 'string', 'The city and state, e.g. San Francisco, CA')
->addProperty('unit', 'string', 'The unit of temperature', ['celsius', 'fahrenheit'])
->setRequired('location')
->setCallback(function ($location, $unit = 'fahrenheit') {
return 'The weather in '.$location.' is 72 degrees '.$unit;
});
// Define structured output schema
$weatherInfoSchema = [
'name' => 'weather_info',
'schema' => [
'type' => 'object',
'properties' => [
'locations' => [
'type' => 'array',
'items' => [
'type' => 'object',
'properties' => [
'city' => ['type' => 'string'],
'weather' => ['type' => 'string'],
],
'required' => ['city', 'weather'],
'additionalProperties' => false,
],
],
],
'required' => ['locations'],
'additionalProperties' => false,
],
'strict' => true,
];
// Set up the agent with all components
$userMessage = Message::user('What\'s the weather like in Boston and Los Angeles? I prefer celsius');
$instructions = 'You are a weather assistant and always respond using celsius. If temperature is provided in fahrenheit, convert it to celsius.';
$agent->setTools([$weatherTool])
->structured($weatherInfoSchema)
->withInstructions($instructions)
->withMessage($userMessage);
// Get the structured response
$response = $agent->run();
```
# Custom Chat History
Source: https://docs.laragent.ai/customization/custom-chat-history
Learn how to create your own chat history implementation for LarAgent
# Creating a Custom Chat History
LarAgent allows you to create custom chat history implementations to store conversation data in your preferred storage mechanism. This guide will walk you through the process of creating a custom chat history implementation.
## Understanding the Chat History Architecture
The LarAgent framework uses a structured approach for chat history management:
* **ChatHistory Interface** (`LarAgent\Core\Contracts\ChatHistory`): Defines the contract all chat history implementations must follow
* **Abstract ChatHistory** (`LarAgent\Core\Abstractions\ChatHistory`): Provides common functionality for all chat history implementations
* **Concrete Implementations**: Implement storage-specific logic (e.g., `InMemoryChatHistory`, `JsonChatHistory`, `FileChatHistory`)
The code examples in this guide are simplified for educational purposes. Check the [actual implementations](https://github.com/MaestroError/LarAgent/tree/main/src/History) for more details.
## Creating Your Custom Chat History
### Step 1: Create the Chat History Class
First, create a new file for your custom chat history implementation:
```php
namespace App\ChatHistory;
use LarAgent\Core\Abstractions\ChatHistory;
class CustomChatHistory extends ChatHistory
{
protected mixed $customStorage;
public function __construct(string $name, array $options = [])
{
$this->customStorage = $options['custom_storage'] ?? null;
// Call parent constructor to handle common setup
parent::__construct($name, $options);
}
// Implement abstract methods...
}
```
### Step 2: Implement Required Methods
#### 2.1 Memory Management Methods
These methods handle reading from and writing to your storage mechanism:
```php
/**
* Read chat history from storage
*/
public function readFromMemory(): void
{
// Retrieve messages from your storage mechanism
$storedMessages = $this->retrieveFromStorage($this->getIdentifier());
// Set messages in the chat history
// If using a format that needs conversion, use buildMessages()
$this->setMessages($this->buildMessages($storedMessages) ?? []);
}
/**
* Write chat history to storage
*/
public function writeToMemory(): void
{
// Convert messages to a format suitable for storage
$messagesForStorage = $this->toArrayForStorage();
// Save to your storage mechanism
$this->saveToStorage($this->getIdentifier(), $messagesForStorage);
}
```
#### 2.2 Chat Key Management Methods
These methods handle tracking which chat histories exist:
```php
/**
* Save the chat key to storage
*/
public function saveKeyToMemory(): void
{
// Get current keys
$keys = $this->loadKeysFromMemory();
// Add current key if not already present
$key = $this->getIdentifier();
if (!in_array($key, $keys)) {
$keys[] = $key;
$this->saveKeysToStorage($keys);
}
}
/**
* Load chat keys from storage
*/
public function loadKeysFromMemory(): array
{
// Retrieve keys from your storage mechanism
return $this->retrieveKeysFromStorage() ?? [];
}
/**
* Remove a chat history and its key from storage
*/
public function removeChatFromMemory(string $key): void
{
// Remove the chat history data
$this->removeFromStorage($key);
// Remove the key
$this->removeChatKey($key);
}
```
### Step 3: Implement Storage-Specific Methods
These are helper methods specific to your storage mechanism:
```php
/**
* Remove a chat key from storage
*/
protected function removeChatKey(string $key): void
{
// Get current keys
$keys = $this->loadKeysFromMemory();
// Filter out the key to remove
$keys = array_filter($keys, fn($k) => $k !== $key);
// Save updated keys
$this->saveKeysToStorage($keys);
}
/**
* Retrieve data from storage
*/
protected function retrieveFromStorage(string $key)
{
// Implement based on your storage mechanism
}
/**
* Save data to storage
*/
protected function saveToStorage(string $key, array $data): void
{
// Implement based on your storage mechanism
}
/**
* Retrieve keys from storage
*/
protected function retrieveKeysFromStorage(): array
{
// Implement based on your storage mechanism
}
/**
* Save keys to storage
*/
protected function saveKeysToStorage(array $keys): void
{
// Implement based on your storage mechanism
}
/**
* Remove data from storage
*/
protected function removeFromStorage(string $key): void
{
// Implement based on your storage mechanism
}
```
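As a concrete illustration, the stubs above can be backed by any key-value store. A minimal array-backed sketch (the `ArrayStore` class is hypothetical, standing in for Redis, a database table, or similar):

```php
// Hypothetical in-process key-value store backing the storage stubs above.
class ArrayStore
{
    /** @var array<string, array> */
    private array $items = [];

    // Return stored data for a key, or null when nothing is stored
    public function get(string $key): ?array
    {
        return $this->items[$key] ?? null;
    }

    // Store (or overwrite) data under a key
    public function put(string $key, array $value): void
    {
        $this->items[$key] = $value;
    }

    // Remove a key and its data
    public function forget(string $key): void
    {
        unset($this->items[$key]);
    }
}
```

With such a store, `retrieveFromStorage()` becomes `return $this->store->get($key);`, `saveToStorage()` becomes `$this->store->put($key, $data);`, and `removeFromStorage()` becomes `$this->store->forget($key);`.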
## Real-World Example: FileChatHistory
Here's a complete implementation of a file-based chat history that uses Laravel's Storage facade:
```php
namespace App\ChatHistory;
use Illuminate\Support\Facades\Storage;
use LarAgent\Core\Abstractions\ChatHistory;
class FileChatHistory extends ChatHistory
{
protected string $disk;
protected string $folder;
protected string $keysFile = 'chat_history_keys.json'; // Index of stored chat keys
public function __construct(string $name, array $options = [])
{
$this->disk = $options['disk'] ?? config('filesystems.default'); // Default to 'local' storage
$this->folder = $options['folder'] ?? 'chat_histories'; // Default folder
parent::__construct($name, $options);
}
public function readFromMemory(): void
{
$filePath = $this->getFullPath();
if (Storage::disk($this->disk)->exists($filePath)) {
$content = Storage::disk($this->disk)->get($filePath);
try {
$messages = json_decode($content, true);
if (is_array($messages)) {
$this->setMessages($this->buildMessages($messages));
} else {
$this->setMessages([]);
}
} catch (\Exception $e) {
$this->setMessages([]);
}
} else {
$this->setMessages([]);
}
}
public function writeToMemory(): void
{
$filePath = $this->getFullPath();
try {
// Create directory if it doesn't exist
$this->createFolderIfNotExists();
// Write messages to the file
Storage::disk($this->disk)->put($filePath, json_encode($this->toArrayForStorage(), JSON_PRETTY_PRINT));
} catch (\Exception $e) {
throw new \RuntimeException("Failed to write chat history to file: {$filePath}");
}
}
public function saveKeyToMemory(): void
{
try {
$this->createFolderIfNotExists();
$keysPath = $this->folder.'/'.$this->keysFile;
$keys = $this->loadKeysFromMemory();
$key = $this->getIdentifier();
if (! in_array($key, $keys)) {
$keys[] = $key;
Storage::disk($this->disk)->put($keysPath, json_encode($keys, JSON_PRETTY_PRINT));
}
} catch (\Exception $e) {
throw new \RuntimeException("Failed to save chat history key: {$this->getIdentifier()}");
}
}
public function loadKeysFromMemory(): array
{
try {
$keysPath = $this->folder.'/'.$this->keysFile;
if (! Storage::disk($this->disk)->exists($keysPath)) {
return [];
}
$content = Storage::disk($this->disk)->get($keysPath);
return json_decode($content, true) ?? [];
} catch (\Exception $e) {
return [];
}
}
public function removeChatFromMemory(string $key): void
{
$safeName = preg_replace('/[^A-Za-z0-9_\-]/', '_', $key);
$filePath = $this->folder.'/'.$safeName.'.json';
if (Storage::disk($this->disk)->exists($filePath)) {
Storage::disk($this->disk)->delete($filePath);
}
$this->removeChatKey($key);
}
// Helper methods:
protected function createFolderIfNotExists(): void
{
$directory = $this->folder;
if (! Storage::disk($this->disk)->exists($directory)) {
Storage::disk($this->disk)->makeDirectory($directory);
}
}
protected function getSafeName(): string
{
$name = $this->getIdentifier();
return preg_replace('/[^A-Za-z0-9_\-]/', '_', $name); // Sanitize the name
}
protected function getFullPath(): string
{
return $this->folder.'/'.$this->getSafeName().'.json';
}
protected function removeChatKey(string $key): void
{
$keys = $this->loadKeysFromMemory();
$keys = array_filter($keys, fn ($k) => $k !== $key);
$keysPath = $this->folder.'/'.$this->keysFile;
Storage::disk($this->disk)->put($keysPath, json_encode($keys, JSON_PRETTY_PRINT));
}
}
```
## Registering Your Custom Chat History
Specify the chat history directly in your agent class:
```php
use App\ChatHistory\YourCustomChatHistory;
class YourAgent extends Agent
{
protected function createChatHistory($name): ChatHistory
{
return new YourCustomChatHistory($name, [
'custom_option' => 'value',
]);
}
}
```
## Best Practices
### Error Handling
* Always use try-catch blocks for storage operations
* Provide meaningful error messages
* Implement fallbacks for missing data
### Performance
* Consider caching for frequently accessed data
* Use efficient storage mechanisms for your use case
* Implement cleanup strategies for old chat histories
### Security
* Sanitize chat history identifiers before using as file names
* Validate input data before storage
* Consider encryption for sensitive conversation data
### Cleanup
* Implement automatic cleanup for old chat histories
* Provide methods for manual cleanup
* Ensure proper removal of both chat data and keys
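For the file-based example above, a cleanup routine might look like this sketch (the `pruneOldChatHistories` helper and its folder layout are illustrative, not part of LarAgent):

```php
// Hypothetical cleanup helper: deletes chat history JSON files older than
// $maxAgeSeconds and returns how many files were removed.
function pruneOldChatHistories(string $folder, int $maxAgeSeconds): int
{
    clearstatcache(); // Make sure filemtime() reads fresh values
    $removed = 0;
    foreach (glob($folder.'/*.json') ?: [] as $file) {
        if (time() - filemtime($file) > $maxAgeSeconds) {
            unlink($file);
            $removed++;
        }
    }
    return $removed;
}
```

A scheduled command could call this periodically; remember to also drop the corresponding entries from the keys file so data and keys stay in sync.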
## Testing Your Custom Chat History
Create test cases to verify your implementation:
```php
use Tests\TestCase;
use App\ChatHistory\YourCustomChatHistory;
class YourCustomChatHistoryTest extends TestCase
{
protected YourCustomChatHistory $history;
protected function setUp(): void
{
parent::setUp();
$this->history = new YourCustomChatHistory('test-chat');
}
public function testReadWriteToMemory(): void
{
// Add test messages
$this->history->addUserMessage('Hello');
$this->history->addAssistantMessage('Hi there');
// Write to storage
$this->history->writeToMemory();
// Create a new instance to read from storage
$newHistory = new YourCustomChatHistory('test-chat');
$newHistory->readFromMemory();
// Assert messages were stored correctly
$this->assertCount(2, $newHistory->getMessages());
$this->assertEquals('Hello', $newHistory->getMessages()[0]->content);
}
public function testKeyManagement(): void
{
// Save key
$this->history->saveKeyToMemory();
// Assert key was saved
$keys = $this->history->loadKeysFromMemory();
$this->assertContains('test-chat', $keys);
// Remove chat
$this->history->removeChatFromMemory('test-chat');
// Assert key was removed
$keys = $this->history->loadKeysFromMemory();
$this->assertNotContains('test-chat', $keys);
}
}
```
## Conclusion
Creating a custom chat history implementation allows you to integrate LarAgent with your preferred storage mechanism. By following the steps in this guide and using the provided examples, you can create robust and efficient chat history implementations that meet your specific needs.
Remember to check the [actual implementations](https://github.com/MaestroError/LarAgent/tree/main/src/History) in the LarAgent repository for more detailed examples and best practices.
# Custom LLM Drivers
Source: https://docs.laragent.ai/customization/custom-driver
Learn how to create your own LLM driver for LarAgent
# Creating a Custom LLM Driver
LarAgent allows you to integrate with various LLM providers by creating custom drivers. This guide will walk you through the process of creating a custom LLM driver for a new provider, similar to the existing OpenAI driver but tailored to your specific LLM provider.
## Understanding the LLM Driver Architecture
The LarAgent framework uses a driver-based architecture for LLM integrations:
* **LlmDriver Interface** (`LarAgent\Core\Contracts\LlmDriver`): Defines the contract all LLM drivers must implement
* **Abstract LlmDriver** (`LarAgent\Core\Abstractions\LlmDriver`): Provides common functionality for all drivers
* **Concrete Drivers**: Implement provider-specific logic (e.g., `OpenAiDriver`)
## Creating Your Custom Driver
The code below is a simplified example of a custom driver implementation. It is not a complete implementation and is intended for educational purposes only.
Check the real drivers [here](https://github.com/MaestroError/LarAgent/tree/main/src/Drivers/OpenAi).
### Step 1: Create the Driver Class
First, create a new directory for your provider, then create your driver class:
```php
namespace App\Drivers\YourProvider;
use LarAgent\Core\Abstractions\LlmDriver;
class YourProviderDriver extends LlmDriver
{
protected mixed $client;
public function __construct(array $settings = [])
{
parent::__construct($settings);
$this->client = $this->initializeClient($settings);
}
/**
* Initialize the client for your LLM provider
*/
protected function initializeClient(array $settings): mixed
{
// Example implementation:
$apiKey = $settings['api_key'] ?? null;
if (!$apiKey) {
return null;
}
// Return your initialized client
// This will depend on your provider's SDK
return new YourProviderClient($apiKey);
}
// Implement required methods...
}
```
### Step 2: Implement Required Methods
#### 2.1 Send Message Method
This is the core method for sending messages to the LLM and receiving responses:
```php
/**
* Send a message to the LLM and receive a response.
*
* @param array $messages Array of messages to send
* @param array $options Configuration options
* @return AssistantMessage The response from the LLM
*
* @throws \Exception
*/
public function sendMessage(array $messages, array $options = []): AssistantMessage
{
if (empty($this->client)) {
throw new \Exception('API key is required to use the YourProvider driver.');
}
// Prepare the payload with common settings
$payload = $this->preparePayload($messages, $options);
// Make an API call to your provider
$response = $this->client->createCompletion($payload);
$this->lastResponse = $response;
// Handle the response based on your provider's response format
// For example, if your provider supports tool calls:
if ($this->isToolCallResponse($response)) {
// Extract tool calls from the response
$toolCalls = $this->extractToolCalls($response);
// Build tool calls message
$message = $this->toolCallsToMessage($toolCalls);
return new ToolCallMessage($toolCalls, $message, $this->getResponseMetadata($response));
}
// For regular text responses:
$content = $this->extractContent($response);
return new AssistantMessage($content, $this->getResponseMetadata($response));
}
```
#### 2.2 Tool Result to Message Method
This method formats tool results for the LLM:
```php
/**
* Convert a tool result to a message format for the LLM
*
* @param ToolCallInterface $toolCall The tool call
* @param mixed $result The result from the tool
* @return array The formatted message
*/
public function toolResultToMessage(ToolCallInterface $toolCall, mixed $result): array
{
// Format depends on your provider's expected format
// Example for OpenAI-compatible format:
$content = json_decode($toolCall->getArguments(), true);
$content[$toolCall->getToolName()] = $result;
return [
'role' => 'tool',
'content' => json_encode($content),
'tool_call_id' => $toolCall->getId(),
];
}
```
#### 2.3 Tool Calls to Message Method
This method formats tool calls for the LLM:
```php
/**
* Convert tool calls to a message format for the LLM
*
* @param array $toolCalls Array of tool calls
* @return array The formatted message
*/
public function toolCallsToMessage(array $toolCalls): array
{
$toolCallsArray = [];
foreach ($toolCalls as $tc) {
$toolCallsArray[] = $this->toolCallToContent($tc);
}
// Format depends on your provider's expected format
// Example for OpenAI-compatible format:
return [
'role' => 'assistant',
'tool_calls' => $toolCallsArray,
];
}
```
#### Step 2.4 Tool Call to Content Method
This method formats a tool call for the LLM:
```php
/**
* Format a tool call for your provider's API payload
*/
public function formatToolForPayload(ToolInterface $tool): array
{
// Override the default implementation if your provider has a different format
// Example for a provider with a different tool format:
return [
'name' => $tool->getName(),
'description' => $tool->getDescription(),
'parameters' => $tool->getProperties(),
'required_params' => $tool->getRequired(),
];
}
```
#### 2.5 Streamed Message Method
For providers that support streaming:
```php
/**
* Send a message to the LLM and receive a streamed response.
*
* @param array $messages Array of messages to send
* @param array $options Configuration options
* @param callable|null $callback Optional callback function to process each chunk
* @return \Generator A generator that yields chunks of the response
*
* @throws \Exception
*/
public function sendMessageStreamed(array $messages, array $options = [], ?callable $callback = null): \Generator
{
if (empty($this->client)) {
throw new \Exception('API key is required to use the YourProvider driver.');
}
// Prepare the payload with common settings
$payload = $this->preparePayload($messages, $options);
// Add stream-specific options
$payload['stream'] = true;
// Create a streamed response
$stream = $this->client->createCompletionStream($payload);
// Initialize a streamed message
$streamedMessage = new StreamedAssistantMessage;
// Process the stream according to your provider's format
foreach ($stream as $chunk) {
$this->lastResponse = $chunk;
// Process the chunk and update the message
// This will depend on your provider's streaming format
$this->processStreamChunk($chunk, $streamedMessage);
// Execute callback if provided
if ($callback) {
$callback($streamedMessage);
}
// Yield the updated message
yield $streamedMessage;
}
}
```
### Step 3: Implement Helper Methods
These methods help with the core functionality:
```php
/**
* Prepare the payload for API request with common settings
*/
protected function preparePayload(array $messages, array $options = []): array
{
// Add model from provider settings if not provided via options
if (empty($options['model'])) {
$options['model'] = $this->getSettings()['model'] ?? 'default-model';
}
$this->setConfig($options);
$payload = array_merge($this->getConfig(), [
'messages' => $this->formatMessages($messages),
]);
// Set the response format if structured output is enabled
if ($this->structuredOutputEnabled()) {
$payload['response_format'] = $this->formatResponseSchema($this->getResponseSchema());
}
// Add tools to payload if any are registered
if (!empty($this->tools)) {
$payload['tools'] = array_map(
fn($tool) => $this->formatToolForPayload($tool),
$this->getRegisteredTools()
);
}
return $payload;
}
/**
* Format messages for your provider's expected format
*/
protected function formatMessages(array $messages): array
{
// Transform LarAgent message format to your provider's format if needed
// Return the formatted messages
return $messages;
}
/**
* Format the response schema for your provider
*/
protected function formatResponseSchema(array $schema): array
{
// Transform the schema to your provider's expected format
return [
'type' => 'json_schema',
'schema' => $schema,
];
}
```
### Step 4: Implement Provider-Specific Methods
These are methods specific to your provider's API:
```php
/**
* Check if a response contains tool calls
*/
protected function isToolCallResponse($response): bool
{
// Implement based on your provider's response format
// Example:
return isset($response->tool_calls) && !empty($response->tool_calls);
}
/**
* Extract tool calls from a response
*/
protected function extractToolCalls($response): array
{
// Implement based on your provider's response format
$toolCalls = [];
foreach ($response->tool_calls as $tc) {
$toolCalls[] = new ToolCall(
$tc->id ?? 'tool_call_' . uniqid(),
$tc->name ?? $tc->function->name ?? '',
$tc->arguments ?? $tc->function->arguments ?? '{}'
);
}
return $toolCalls;
}
/**
* Extract content from a response
*/
protected function extractContent($response): string
{
// Implement based on your provider's response format
// Example:
return $response->choices[0]->message->content ?? '';
}
/**
* Get metadata from a response
*/
protected function getResponseMetadata($response): array
{
// Extract usage information or other metadata
// Example:
return [
'usage' => [
'prompt_tokens' => $response->usage->prompt_tokens ?? 0,
'completion_tokens' => $response->usage->completion_tokens ?? 0,
'total_tokens' => $response->usage->total_tokens ?? 0,
],
];
}
/**
* Process a stream chunk
*/
protected function processStreamChunk($chunk, StreamedAssistantMessage $message): void
{
// Implement based on your provider's streaming format
// Example:
if (isset($chunk->content)) {
$message->appendContent($chunk->content);
}
if (isset($chunk->usage)) {
$message->setUsage([
'prompt_tokens' => $chunk->usage->prompt_tokens ?? 0,
'completion_tokens' => $chunk->usage->completion_tokens ?? 0,
'total_tokens' => $chunk->usage->total_tokens ?? 0,
]);
$message->setComplete(true);
}
}
```
## Testing Your Driver
Create tests for your driver to ensure it works correctly:
```php
namespace Tests\Unit;
use App\Drivers\YourProvider\YourProviderDriver;
use LarAgent\Messages\AssistantMessage;
use LarAgent\Messages\ToolCallMessage;
use Tests\TestCase;
class YourProviderDriverTest extends TestCase
{
protected YourProviderDriver $driver;
protected function setUp(): void
{
parent::setUp();
$this->driver = new YourProviderDriver([
'api_key' => 'test_key',
'model' => 'test_model',
]);
}
public function testSendMessage()
{
// Mock your provider's client response
$this->mockClientResponse();
$messages = [
['role' => 'system', 'content' => 'You are a helpful assistant.'],
['role' => 'user', 'content' => 'Hello!'],
];
$response = $this->driver->sendMessage($messages);
$this->assertInstanceOf(AssistantMessage::class, $response);
$this->assertEquals('Hello! How can I help you today?', $response->getContent());
}
public function testSendMessageWithToolCalls()
{
// Mock your provider's client response for tool calls
$this->mockClientToolCallResponse();
$messages = [
['role' => 'user', 'content' => 'What\'s the weather?'],
];
$response = $this->driver->sendMessage($messages);
$this->assertInstanceOf(ToolCallMessage::class, $response);
$this->assertEquals('get_weather', $response->getToolCalls()[0]->getToolName());
}
// Add more tests for other methods
}
```
## Registering Your Driver
To make your driver available in the LarAgent framework, you'll need to register it:
### In Laravel
Add your driver to the configuration file:
```php
// config/laragent.php
return [
// ...
'providers' => [
'your-provider' => [
'label' => 'your-provider-name',
'driver' => \App\Drivers\YourProvider\YourProviderDriver::class,
'api_key' => env('YOUR_PROVIDER_API_KEY'),
'model' => 'your-default-model',
// Other provider-specific settings
],
],
];
```
### In Agent Class
```php
namespace App\AiAgents;
use LarAgent\Agent;
class YourAgent extends Agent
{
protected $driver = \App\Drivers\YourProvider\YourProviderDriver::class;
// ...
}
```
## Best Practices
1. **Error Handling**: Implement robust error handling for API calls
2. **Rate Limiting**: Consider implementing rate limiting or retry logic
3. **Logging**: Add logging for debugging purposes
4. **Configuration**: Make your driver configurable with sensible defaults
5. **Documentation**: Document your driver's capabilities and limitations
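For instance, retry logic for transient provider failures can be wrapped around the API call. A minimal sketch with exponential backoff (the `withRetries` helper is illustrative, not part of LarAgent):

```php
// Hypothetical retry wrapper for provider API calls.
// Retries on RuntimeException with exponential backoff: 250ms, 500ms, 1s, ...
function withRetries(callable $apiCall, int $maxAttempts = 3, int $baseDelayMs = 250): mixed
{
    $attempt = 0;
    while (true) {
        try {
            return $apiCall();
        } catch (\RuntimeException $e) {
            if (++$attempt >= $maxAttempts) {
                throw $e; // Out of attempts: surface the last error
            }
            usleep($baseDelayMs * 1000 * (2 ** ($attempt - 1)));
        }
    }
}
```

Inside `sendMessage()` you could then call `withRetries(fn () => $this->client->createCompletion($payload))` instead of invoking the client directly.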
## Complete Example Implementation
Here's a simplified example of a complete driver implementation:
```php
namespace App\Drivers\YourProvider;
use LarAgent\Core\Abstractions\LlmDriver;
use LarAgent\Core\Contracts\ToolCall as ToolCallInterface;
use LarAgent\Messages\AssistantMessage;
use LarAgent\Messages\StreamedAssistantMessage;
use LarAgent\Messages\ToolCallMessage;
use LarAgent\ToolCall;
class YourProviderDriver extends LlmDriver
{
protected mixed $client;
public function __construct(array $settings = [])
{
parent::__construct($settings);
$apiKey = $settings['api_key'] ?? null;
$this->client = $apiKey ? new YourProviderClient($apiKey) : null;
}
public function sendMessage(array $messages, array $options = []): AssistantMessage
{
if (empty($this->client)) {
throw new \Exception('API key is required to use the YourProvider driver.');
}
$payload = $this->preparePayload($messages, $options);
$response = $this->client->createCompletion($payload);
$this->lastResponse = $response;
if (isset($response->tool_calls) && !empty($response->tool_calls)) {
$toolCalls = [];
foreach ($response->tool_calls as $tc) {
$toolCalls[] = new ToolCall(
$tc->id ?? 'tool_call_' . uniqid(),
$tc->function->name ?? '',
$tc->function->arguments ?? '{}'
);
}
$message = $this->toolCallsToMessage($toolCalls);
return new ToolCallMessage($toolCalls, $message, ['usage' => $response->usage]);
}
$content = $response->choices[0]->message->content ?? '';
return new AssistantMessage($content, ['usage' => $response->usage]);
}
public function sendMessageStreamed(array $messages, array $options = [], ?callable $callback = null): \Generator
{
if (empty($this->client)) {
throw new \Exception('API key is required to use the YourProvider driver.');
}
$payload = $this->preparePayload($messages, $options);
$payload['stream'] = true;
$stream = $this->client->createCompletionStream($payload);
$streamedMessage = new StreamedAssistantMessage;
foreach ($stream as $chunk) {
$this->lastResponse = $chunk;
if (isset($chunk->content)) {
$streamedMessage->appendContent($chunk->content);
}
if (isset($chunk->usage)) {
$streamedMessage->setUsage([
'prompt_tokens' => $chunk->usage->prompt_tokens,
'completion_tokens' => $chunk->usage->completion_tokens,
'total_tokens' => $chunk->usage->total_tokens,
]);
$streamedMessage->setComplete(true);
}
if ($callback) {
$callback($streamedMessage);
}
yield $streamedMessage;
}
}
public function toolResultToMessage(ToolCallInterface $toolCall, mixed $result): array
{
$content = json_decode($toolCall->getArguments(), true);
$content[$toolCall->getToolName()] = $result;
return [
'role' => 'tool',
'content' => json_encode($content),
'tool_call_id' => $toolCall->getId(),
];
}
public function toolCallsToMessage(array $toolCalls): array
{
$toolCallsArray = [];
foreach ($toolCalls as $tc) {
$toolCallsArray[] = [
'id' => $tc->getId(),
'type' => 'function',
'function' => [
'name' => $tc->getToolName(),
'arguments' => $tc->getArguments(),
],
];
}
return [
'role' => 'assistant',
'tool_calls' => $toolCallsArray,
];
}
protected function preparePayload(array $messages, array $options = []): array
{
if (empty($options['model'])) {
$options['model'] = $this->getSettings()['model'] ?? 'default-model';
}
$this->setConfig($options);
$payload = array_merge($this->getConfig(), [
'messages' => $messages,
]);
if ($this->structuredOutputEnabled()) {
$payload['response_format'] = [
'type' => 'json_schema',
'schema' => $this->getResponseSchema(),
];
}
if (!empty($this->tools)) {
foreach ($this->getRegisteredTools() as $tool) {
$payload['tools'][] = $this->formatToolForPayload($tool);
}
}
return $payload;
}
}
```
## Conclusion
By following this guide, you can create a custom LLM driver for any provider and integrate it with the LarAgent framework. This allows you to leverage the full power of LarAgent with your preferred LLM provider while maintaining compatibility with the existing architecture.
For more details, see the real drivers [here](https://github.com/MaestroError/LarAgent/tree/main/src/Drivers/OpenAi).
# Agent Events
Source: https://docs.laragent.ai/customization/events
Learn how to use Agent lifecycle events to customize behavior
LarAgent provides a comprehensive event system that allows you to hook into various stages of the agent's lifecycle. Agent events focus on the agent's lifecycle such as initialization, conversation flow, and termination. They are perfect for setting up agent-specific configurations, handling conversation state, and managing cleanup operations.
## Available Agent Events
You can override any of these methods in your agent class to customize behavior at different points in the agent's lifecycle.
### onInitialize
The `onInitialize` hook is called when the agent is fully initialized. This is the perfect place to set up any initial state or configurations your agent needs.
**Example: Set temperature dynamically based on user preferences**
```php
protected function onInitialize()
{
    if (auth()->check() && auth()->user()->prefersCreative()) {
        $this->temperature(1.4);
    }
}
```
### onConversationStart
This hook is triggered at the beginning of each `respond` method call, signaling the start of a new conversation step. Use it to prepare conversation-specific resources or logging.
**Example: Log conversation start**
```php
protected function onConversationStart()
{
    Log::info(
        'Starting new conversation',
        [
            'agent' => self::class,
            'message' => $this->currentMessage(),
        ]
    );
}
```
### onConversationEnd
Called at the end of each `respond` method call, this hook lets you perform cleanup, logging, or any other post-conversation logic your application needs. When streaming, it runs once the last chunk has been received.
**Example: Save conversation history**
```php
/** @param MessageInterface|array|null $message */
protected function onConversationEnd($message)
{
// Clean the history
$this->clear();
// Save the last response
DB::table('chat_histories')->insert(
[
'chat_session_id' => $this->chatHistory()->getIdentifier(),
'message' => $message,
]
);
}
```
### onToolChange
This hook is triggered whenever a tool is added to or removed from the agent. It receives the tool instance and a boolean indicating whether the tool was added (`true`) or removed (`false`).
**Example: Update tool metadata**
```php
/**
 * @param ToolInterface $tool
 * @param bool $added
 */
protected function onToolChange($tool, $added = true)
{
    // If the 'my_tool' tool was added
    if ($added && $tool->getName() == 'my_tool') {
        // Update its metadata
        $newMetaData = ['using_in' => self::class, ...$tool->getMetaData()];
        $tool->setMetaData($newMetaData);
    }
}
```
### onClear
Triggered before the agent's chat history is cleared. Use this hook to back up or otherwise process messages before they are removed.
**Example: Backup chat history**
```php
protected function onClear()
{
    // Backup chat history
    file_put_contents('backup.json', json_encode($this->chatHistory()->toArrayWithMeta()));
}
```
### onTerminate
This hook is called when the agent is being terminated. It's the ideal place to perform final cleanup, save state, or close connections.
**Example: Log termination**
```php
protected function onTerminate()
{
    Log::info('Agent terminated successfully');
}
```
### onEngineError
This hook is called when the provider (or underlying engine) fails to process a request, right before trying to call the fallback provider.
**Example: Log provider error**
```php
protected function onEngineError(\Throwable $th)
{
    Log::info('Provider failed', [
        'error' => $th->getMessage(),
    ]);
}
```
## Using Laravel Events with Agent Hooks
LarAgent hooks can be integrated with Laravel's event system to provide more flexibility and better separation of concerns. This allows you to:
* Decouple event handling logic from your agent class
* Use event listeners and subscribers
* Leverage Laravel's event broadcasting capabilities
* Handle events asynchronously using queues
### Basic Event Integration
First, define your event classes:
```php
// app/Events/AgentMessageReceived.php
class AgentMessageReceived
{
    use Dispatchable, InteractsWithSockets, SerializesModels;

    public function __construct(
        public ChatHistoryInterface $history,
        public MessageInterface $message
    ) {}
}
```
Then, implement the hook in your agent class:
```php
protected function afterSend($history, $message)
{
    // Dispatch a Laravel event
    AgentMessageReceived::dispatch($history, $message);

    return true;
}
```
**Note:** If you want to pass the agent in an event handler, use the `toDTO` method: `$this->toDTO()`
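For instance, a hypothetical event could carry the agent's state by accepting the DTO returned from `$this->toDTO()` (the event class and property names below are illustrative, not part of LarAgent):

```php
// Hypothetical event carrying the agent's state as a DTO
class AgentTerminated
{
    use Dispatchable, SerializesModels;

    public function __construct(
        public mixed $agentDto // the DTO returned by $this->toDTO()
    ) {}
}

// Inside your agent class:
protected function onTerminate()
{
    AgentTerminated::dispatch($this->toDTO());
}
```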
### Using Event Listeners
Create dedicated listeners for your agent events:
```php
// app/Listeners/LogAgentMessage.php
class LogAgentMessage
{
    public function handle(AgentMessageReceived $event)
    {
        Log::info('Agent message received', [
            'content' => $event->message->getContent(),
            'tokens' => Tokenizer::count($event->message->getContent()),
            'history_id' => $event->history->getIdentifier(),
        ]);
    }
}
```
Register the event-listener mapping in your `EventServiceProvider`:
```php
// app/Providers/EventServiceProvider.php
protected $listen = [
    AgentMessageReceived::class => [
        LogAgentMessage::class,
        NotifyAdminAboutMessage::class,
        // Add more listeners as needed
    ],
];
```
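Because these are ordinary Laravel events, a listener can be pushed to the queue simply by implementing `ShouldQueue`, letting heavier work (notifications, analytics) run outside the request cycle. A minimal sketch, assuming the `NotifyAdminAboutMessage` listener from the mapping above:

```php
// app/Listeners/NotifyAdminAboutMessage.php
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class NotifyAdminAboutMessage implements ShouldQueue
{
    use InteractsWithQueue;

    public function handle(AgentMessageReceived $event): void
    {
        // Runs on a queue worker, not during the HTTP request
        // e.g. Notification::send($admins, new AgentMessageNotification($event->message));
    }
}
```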
## Best Practices
1. **Keep hooks focused** - Each hook should have a single responsibility
2. **Use appropriate hooks** - Choose the right hook for the task at hand
3. **Handle exceptions** - Properly handle exceptions in your hooks to prevent breaking the agent
4. **Consider performance** - Be mindful of performance implications, especially for hooks that run frequently
5. **Use Laravel events for complex logic** - For complex event handling, consider using Laravel's event system
# Engine Hooks
Source: https://docs.laragent.ai/customization/hooks
Learn how to use Engine hooks to customize the conversation flow
LarAgent's engine hooks provide fine-grained control over the conversation flow, message handling, and tool execution. Unlike agent events that focus on lifecycle, engine hooks dive deeper into the conversation processing pipeline, allowing you to intercept and modify behavior at crucial points.
Each engine hook returns a boolean value where `true` allows the operation to proceed and `false` prevents it. In most cases, it's better to throw and handle exceptions instead of just returning `false`, since returning `false` silently stops execution.
## Available Engine Hooks
You can override any of these methods in your agent class to customize behavior at different points in the conversation flow.
### beforeReinjectingInstructions
This hook is called before the engine reinjects system instructions into the chat history. Use this to modify or validate the chat history before instructions are reinjected or even change the instructions completely.
Instructions are always injected at the beginning of the chat history. The `$reinjectInstructionsPer` property defines when to reinject the instructions again. By default, it is set to `0` (disabled).
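To enable reinjection, set the property on your agent class; for example, to reinject the instructions after every 10 messages:

```php
class MyAgent extends Agent
{
    // Reinject the system instructions after every 10 messages
    protected $reinjectInstructionsPer = 10;
}
```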
**Example: Modify instructions based on chat history**
```php
/**
 * @param ChatHistoryInterface $chatHistory
 * @return bool
 */
protected function beforeReinjectingInstructions($chatHistory)
{
    // Swap in different instructions for long chats
    if ($chatHistory->count() > 1000) {
        $this->instructions = view("agents/new_instructions", ['user' => auth()->user()])->render();
    }

    return true;
}
```
### beforeSend & afterSend
These hooks are called before and after a message is added to the chat history. Use them to modify, validate, or log messages.
**Example: Filter sensitive information and log messages**
```php
/**
 * @param ChatHistoryInterface $history
 * @param MessageInterface|null $message
 * @return bool
 */
protected function beforeSend($history, $message)
{
    // Filter out sensitive information
    if ($message && Checker::containsSensitiveData($message->getContent())) {
        throw new \Exception("Message contains sensitive data");
    }

    return true;
}

protected function afterSend($history, $message)
{
    // Log successful messages
    Log::info('Message sent', [
        'session' => $history->getIdentifier(),
        'content_length' => Tokenizer::count($message->getContent()),
    ]);

    return true;
}
```
### beforeSaveHistory
Triggered before the chat history is saved. Perfect for validation or modification of the history before persistence.
**Example: Add metadata before saving**
```php
protected function beforeSaveHistory($history)
{
    // Add metadata before saving
    $updatedMeta = [
        'saved_at' => now()->timestamp,
        'message_count' => $history->count(),
        ...$history->getMetadata(),
    ];
    $history->getLastMessage()->setMetadata($updatedMeta);

    return true;
}
```
### beforeResponse / afterResponse
These hooks are called before a message is sent to the LLM (at which point it has already been added to the chat history) and after the LLM's response is received. Use them for request/response manipulation or monitoring.
**Example: Log user messages and validate responses**
```php
/**
 * @param ChatHistoryInterface $history
 * @param MessageInterface|null $message
 */
protected function beforeResponse($history, $message)
{
    // Log the outgoing user message
    if ($message) {
        Log::info('User message: ' . $message->getContent());
    }

    return true;
}

/**
 * @param MessageInterface $message
 */
protected function afterResponse($message)
{
    // Process or validate the LLM response
    if (is_array($message->getContent())) {
        Log::info('Structured response received');
    }

    return true;
}
```
### beforeToolExecution / afterToolExecution
These hooks are triggered before and after a tool is executed. Perfect for tool-specific validation, logging, or result modification.
**Example: Check permissions and format results**
```php
/**
 * @param ToolInterface $tool
 * @return bool
 */
protected function beforeToolExecution($tool)
{
    // Check tool permissions
    if (!$this->hasToolPermission($tool->getName())) {
        Log::warning("Unauthorized tool execution attempt: {$tool->getName()}");

        return false;
    }

    return true;
}

/**
 * @param ToolInterface $tool
 * @param mixed &$result
 * @return bool
 */
protected function afterToolExecution($tool, &$result)
{
    // Modify or format tool results
    if (is_array($result)) {
        // Since $result is passed by reference, we can safely modify it
        $result = array_map(fn ($item) => trim($item), $result);
    }

    return true;
}
```
### beforeStructuredOutput
This hook is called before processing structured output. Use it to modify or validate the response structure.
**Example: Validate and enhance structured output**
```php
protected function beforeStructuredOutput(array &$response)
{
    // Return false if the response contains something unexpected
    if (!$this->checkArrayContent($response)) {
        return false; // Execution stops here and 'respond' will return null
    }

    // Add additional data to the output
    $response['timestamp'] = now()->timestamp;

    return true;
}
```
## Practical Use Cases
Here are some common use cases for engine hooks:
### Security and Validation
* Filter out sensitive information before sending messages
* Validate tool inputs and outputs
* Enforce rate limiting or usage quotas
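As a sketch of the rate-limiting case, `beforeSend` could lean on Laravel's `RateLimiter` facade; the key format and limits below are illustrative:

```php
use Illuminate\Support\Facades\RateLimiter;

protected function beforeSend($history, $message)
{
    $key = 'agent:' . $history->getIdentifier();

    // Allow at most 30 messages per minute per chat session (illustrative limits)
    if (! RateLimiter::attempt($key, 30, fn () => true, 60)) {
        throw new \Exception('Rate limit exceeded for this chat session.');
    }

    return true;
}
```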
### Logging and Monitoring
* Track token usage and conversation metrics
* Log user interactions for compliance
* Monitor tool execution performance
### Response Modification
* Enhance responses with additional context
* Format or standardize tool outputs
* Add metadata to structured responses
### Integration with External Systems
* Sync conversation data with CRM systems
* Trigger notifications based on conversation events
* Update analytics dashboards in real-time
## Best Practices
1. **Return values matter** - Always return `true` unless you explicitly want to halt execution
2. **Use exceptions for errors** - Throw exceptions where possible with clear messages instead of just returning `false`
3. **Keep hooks lightweight** - Avoid heavy processing in hooks that run frequently
4. **Be careful with references** - When modifying referenced parameters (like `&$result`), ensure you understand the implications
5. **Test thoroughly** - Hooks can have subtle effects on the conversation flow, so test them carefully
# Development
Source: https://docs.laragent.ai/development
Contribute to LarAgent development
# Contributing to LarAgent
We welcome contributions to LarAgent! Whether it's improving documentation, fixing bugs, or adding new features, your help is appreciated. This guide will walk you through the process of setting up your development environment and submitting contributions.
Need help contributing? Join our [Discord community](https://discord.gg/NAczq2T9F8), send "@Maintainer 🛠️ onboard me" in any channel, and we will help you get started.
## Development Setup
Follow these steps to set up your local development environment:
1. **Fork the repository** on GitHub
2. **Clone your fork**:
```bash
git clone https://github.com/YOUR_USERNAME/LarAgent.git
cd LarAgent
```
3. **Install dependencies**:
```bash
composer install
```
4. **Create a new branch** for your feature or bugfix:
```bash
git checkout -b feature/your-feature-name
```
## Coding Guidelines
When contributing to LarAgent, please follow these guidelines to ensure your code meets our standards:
### Code Style
* Use type hints and return types where possible
* Add PHPDoc blocks for classes and methods
* Keep methods focused and concise
* Follow PSR-12 coding standards
LarAgent uses PHP CS Fixer to maintain code style. You can run it with:
```bash
composer format
```
### Testing
All new features and bug fixes should include tests. LarAgent uses [PEST](https://pestphp.com/) for testing.
* Add tests for new features
* Ensure all tests pass before submitting:
```bash
composer test
```
* Maintain or improve code coverage
### Documentation
Good documentation is crucial for any project:
* Add PHPDoc blocks for new classes and methods
* Include examples for new features
* Consider updating the [official documentation](https://github.com/MaestroError/docs) for major changes
### Commit Guidelines
* Use clear, descriptive commit messages
* Reference issues and pull requests in your commits
* Keep commits focused and atomic
## Pull Request Process
Follow these steps to submit your contributions:
1. **Update your fork** with the latest changes from main:
```bash
git remote add upstream https://github.com/MaestroError/LarAgent.git
git fetch upstream
git rebase upstream/main
```
2. **Push your changes** to your fork:
```bash
git push origin feature/your-feature-name
```
3. **Create a Pull Request** with:
* Clear title and description
* List of changes and impact
* Any breaking changes highlighted
* Screenshots/examples if relevant
The maintainers aim to review all pull requests within 2 weeks.
## Getting Help
If you need assistance while contributing:
* Open an issue for bugs or feature requests
* Join discussions in existing issues
* Join our [Discord community](https://discord.gg/NAczq2T9F8)
* Reach out to maintainers for guidance
## Security Vulnerabilities
If you discover a security vulnerability, please review [our security policy](https://github.com/MaestroError/LarAgent/security/policy) on how to report it properly.
## Running Tests
You can run the test suite with:
```bash
composer test
```
For more specific tests:
```bash
# Run a specific test file
./vendor/bin/pest tests/YourTestFile.php
# Run with coverage report
composer test-coverage
```
Thank you for contributing to LarAgent! Your efforts help make the package better for everyone.
# Introduction
Source: https://docs.laragent.ai/introduction
LarAgent brings the power of AI agents to your Laravel projects with an elegant syntax. Create, extend, and manage AI agents with ease while maintaining Laravel's fluent API design patterns.
* Get started with LarAgent in minutes
* Learn how to create agents and tools
* Learn about structured output
* Check out our blog for tutorials and updates
## What is LarAgent?
What if you could create AI agents just like you create any other Eloquent model?
Why not?! 👇
```bash
php artisan make:agent YourAgentName
```
And it looks familiar, doesn't it?
```php
namespace App\AiAgents;
use LarAgent\Agent;
class YourAgentName extends Agent
{
protected $model = 'gpt-4';
protected $history = 'in_memory';
protected $provider = 'default';
protected $tools = [];
public function instructions()
{
return "Define your agent's instructions here.";
}
public function prompt($message)
{
return $message;
}
}
```
And you can tweak the configs, like `history`:
```php
// ...
protected $history = \LarAgent\History\CacheChatHistory::class;
// ...
```
Or add `temperature`:
```php
// ...
protected $temperature = 0.5;
// ...
```
Oh, and add a new tool:
```php
// ...
#[Tool('Get the current weather in a given location')]
public function exampleWeatherTool($location, $unit = 'celsius')
{
    return 'The weather in '.$location.' is 20 degrees '.$unit;
}
// ...
```
And run it, per user:
```php
use App\AiAgents\YourAgentName;

// ...
YourAgentName::forUser(auth()->user())->respond($message);
```
Or use your custom name for the chat history:
```php
use App\AiAgents\YourAgentName;

// ...
YourAgentName::for("custom_history_name")->respond($message);
```
Let's check the [quickstart](/quickstart) page 👍
# Quickstart
Source: https://docs.laragent.ai/quickstart
Get started with LarAgent in minutes
## Requirements
Before installing LarAgent, make sure your environment meets the following requirements:
* Laravel 10.x or higher
* PHP 8.3 or higher
* OpenAI API key or other [supported LLM provider](/core-concepts/llm-drivers#available-drivers) API key
## Installation
You can install LarAgent via Composer:
```bash
composer require maestroerror/laragent
```
After installing the package, publish the configuration file:
```bash
php artisan vendor:publish --tag="laragent-config"
```
This will create a `config/laragent.php` file in your application.
## Configuration
### OpenAi
If you are using the OpenAI API, just set your API key in your `.env` file and you are good to go:
```
OPENAI_API_KEY=your-openai-api-key
```
### Basic Configuration
The published configuration file contains the following default settings:
```php
// config for Maestroerror/LarAgent
return [
    'default_driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
    'default_chat_history' => \LarAgent\History\InMemoryChatHistory::class,

    'namespaces' => [
        'App\\AiAgents\\',
        'App\\Agents\\',
    ],

    'providers' => [
        'default' => [
            'label' => 'openai',
            'api_key' => env('OPENAI_API_KEY'),
            'driver' => \LarAgent\Drivers\OpenAi\OpenAiDriver::class,
            'default_context_window' => 50000,
            'default_max_completion_tokens' => 10000,
            'default_temperature' => 1,
        ],

        'gemini' => [
            'label' => 'gemini',
            'api_key' => env('GEMINI_API_KEY'),
            'driver' => \LarAgent\Drivers\OpenAi\GeminiDriver::class,
            'default_context_window' => 1000000,
            'default_max_completion_tokens' => 10000,
            'default_temperature' => 1,
        ],

        'groq' => [
            'label' => 'groq',
            'api_key' => env('GROQ_API_KEY'),
            'api_url' => 'https://api.groq.com/openai/v1',
            'model' => 'llama-3.1-8b-instant',
            'driver' => \LarAgent\Drivers\Groq\GroqDriver::class,
            'default_context_window' => 131072,
            'default_max_completion_tokens' => 131072,
            'default_temperature' => 1,
        ],
    ],

    'fallback_provider' => 'default',
];
```
### Custom Providers
You can configure additional providers with custom settings:
```php
// Example custom provider with all possible configurations
'custom_provider' => [
    // Just a name for reference, changes nothing
    'label' => 'custom',
    'api_key' => env('PROVIDER_API_KEY'),
    'api_url' => env('PROVIDER_API_URL'),
    // Defaults (can be overridden per agent)
    'model' => 'your-provider-model',
    'driver' => \LarAgent\Drivers\OpenAi\OpenAiDriver::class,
    'chat_history' => \LarAgent\History\InMemoryChatHistory::class,
    'default_context_window' => 15000,
    'default_max_completion_tokens' => 100,
    'default_temperature' => 0.7,
    // Enable/disable parallel tool calls
    'parallel_tool_calls' => true,
    // Store metadata with messages
    'store_meta' => true,
    // Save chat keys to memory via chatHistory
    'save_chat_keys' => true,
],
```
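An agent opts into a provider by name through its `$provider` property, so the custom entry above could be selected like this (sketch, using the `custom_provider` key defined above):

```php
class MyAgent extends Agent
{
    // Use the 'custom_provider' entry from config/laragent.php
    protected $provider = 'custom_provider';
}
```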
## Creating Your First Agent
### Using Artisan Command
The quickest way to create a new agent is using the provided Artisan command:
```bash
php artisan make:agent WeatherAgent
```
This will create a new agent class in the `App\AiAgents` directory with all the necessary boilerplate code.
## Basic Usage
### Simple Response
```php
use App\AiAgents\WeatherAgent;
// Create an instance for a specific user or chat session
$agent = WeatherAgent::for('user-123');
// Get a response
$response = $agent->respond('What\'s the weather like in Boston?');
echo $response; // "The weather in Boston is currently..."
```
### Adding Tools
Tools allow your agent to perform actions and access external data:
```php
namespace App\AiAgents;

use LarAgent\Agent;
use LarAgent\Attributes\Tool;

class WeatherAgent extends Agent
{
    // ... other properties

    #[Tool('Get the current weather in a given location')]
    public function getCurrentWeather($location, $unit = 'celsius')
    {
        // Call a weather API or service
        return "The weather in {$location} is 22 degrees {$unit}.";
    }
}
```
## Next Steps
Now that you have LarAgent set up, you can explore more advanced features:
* Learn more about creating and configuring agents
* Discover how to create powerful tools for your agents
* Get responses in structured formats like JSON
* Stream responses in real-time for better UX