LLM Drivers allow you to switch between different AI providers (like OpenAI, Ollama, or OpenRouter) without changing your application code, providing flexibility and vendor independence.

Understanding LLM Drivers

LLM Drivers provide a standardized interface for interacting with different language model providers.

The built-in drivers implement this interface, giving you a simple, consistent way to call various AI APIs.

See Creating Custom Drivers below for details on building your own driver.

Available Drivers

OpenAiDriver

The default driver for the OpenAI API. It works with minimal configuration: just add your OPENAI_API_KEY to your .env file.
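
For example, in your .env file:

# File: .env
OPENAI_API_KEY=your-openai-api-key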

OpenAiCompatible

Works with any OpenAI-compatible API, allowing you to use alternative backends with the same API format.

Configuring Drivers

You can configure LLM drivers in two ways:

1. Global Configuration

Set the driver in the provider settings of the configuration file (config/laragent.php):

'providers' => [
    'default' => [
        'label' => 'openai',
        'api_key' => env('OPENAI_API_KEY'),
        'driver' => \LarAgent\Drivers\OpenAi\OpenAiDriver::class,
    ],
],

2. Per-Agent Configuration

Set the driver directly in your agent class:

// File: app/AiAgents/YourAgent.php
namespace App\AiAgents;

use LarAgent\Agent;
use LarAgent\Drivers\OpenAi\OpenAiCompatible;

class YourAgent extends Agent
{
    protected $driver = OpenAiCompatible::class;
    // Other agent configuration
}

If you set the driver in the agent class, it will override the global configuration.

Example Configurations

Ollama (Local LLM)

// File: config/laragent.php
'providers' => [
    'ollama' => [
        'label' => 'ollama-local',
        'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
        'api_key' => 'ollama', // Can be any string for Ollama
        'api_url' => "http://localhost:11434/v1",
        'default_context_window' => 50000,
        'default_max_completion_tokens' => 100,
        'default_temperature' => 1,
    ],
],
// In your agent class
protected $provider = 'ollama';
protected $model = 'llama2'; // Or any other model available in your Ollama instance
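
Once configured, the agent is used like any other LarAgent agent; the driver handles the provider-specific details. (The agent and session names below are illustrative.)

// Anywhere in your application
$reply = YourAgent::for('demo-session')->respond('Hello from Ollama!');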

OpenRouter

// File: config/laragent.php
'providers' => [
    'openrouter' => [
        'label' => 'openrouter-provider',
        'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
        'api_key' => env('OPENROUTER_API_KEY'),
        'api_url' => "https://api.openrouter.ai/api/v1",
        'default_context_window' => 50000,
        'default_max_completion_tokens' => 100,
        'default_temperature' => 1,
    ],
],
// In your agent class
protected $provider = 'openrouter';
protected $model = 'anthropic/claude-3-opus'; // Or any other model available on OpenRouter

Gemini

// File: config/laragent.php
'providers' => [
    'gemini' => [
        'label' => 'gemini',
        'model' => 'gemini-2.5-pro-preview-03-25',
        'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
        'api_url' => 'https://generativelanguage.googleapis.com/v1beta/openai',
        'api_key' => env('GEMINI_API_KEY'),
    ],
],
// In your agent class
protected $provider = 'gemini';

Using Multiple Providers

You can configure multiple providers and switch between them as needed:

// Configure different agents to use different providers
class CreativeAgent extends Agent
{
    protected $provider = 'anthropic'; // Points to a provider in your config
    protected $model = 'claude-3-opus';
}

class EconomicalAgent extends Agent
{
    protected $provider = 'openai';
    protected $model = 'gpt-3.5-turbo';
}

You can also switch providers at runtime:

// Switch provider based on user preference or other conditions
$provider = $user->premium ? 'premium-provider' : 'standard-provider';
$agent = MyAgent::for('chat-session')
    ->withProvider($provider)
    ->respond('Hello, how can you help me?');

LLM Drivers Architecture

The LLM Driver architecture handles three key responsibilities:

  1. Tool Registration - Register function calling tools that can be used by the LLM
  2. Response Schema - Define structured output formats for LLM responses
  3. Tool Call Formatting - Abstract away provider-specific formats for tool calls and results

This abstraction allows you to switch between different LLM providers without changing your application code.
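
For example, tool registration is what lets a single tool definition work across providers: the active driver converts it into that provider's function-calling format. The sketch below assumes LarAgent's Tool helper and the registerTools() hook; check your version for the exact method names:

use LarAgent\Agent;
use LarAgent\Tool;

class WeatherAgent extends Agent
{
    // Tools returned here are handed to the active driver, which converts
    // them into the provider-specific function-calling format.
    public function registerTools()
    {
        return [
            Tool::create('get_weather', 'Get the current weather in a given city')
                ->addProperty('city', 'string', 'The city name, e.g. Tbilisi')
                ->setRequired('city')
                ->setCallback(fn (string $city) => "Sunny in {$city}"),
        ];
    }
}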

Creating Custom Drivers

If you need to integrate with an AI provider that doesn’t have a built-in driver, you can create your own by implementing the LlmDriver interface:

namespace App\LlmDrivers;

use LarAgent\Core\Abstractions\LlmDriver;
use LarAgent\Core\Contracts\LlmDriver as LlmDriverInterface;
use LarAgent\Core\Contracts\ToolCall as ToolCallInterface;
use LarAgent\Messages\AssistantMessage;
use LarAgent\Messages\StreamedAssistantMessage;
use LarAgent\Messages\ToolCallMessage;
use LarAgent\ToolCall;

class CustomProviderDriver extends LlmDriver implements LlmDriverInterface
{
    
    public function sendMessage(array $messages, array $options = []): AssistantMessage
    {
        // Implement the API call to your provider
    }
    
    public function sendMessageStreamed(array $messages, array $options = [], ?callable $callback = null): \Generator
    {
        // Implement streaming for your custom provider
    }

    public function toolCallsToMessage(array $toolCalls): array
    {
        // Implement tool calls to message conversion
    }
    
    public function toolResultToMessage(ToolCallInterface $toolCall, mixed $result): array
    {
        // Implement tool result to message conversion
    }
    // Implement other helper methods...
}
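
As a starting point, sendMessage can wrap a plain HTTP call. This is only a sketch: it assumes your provider exposes an OpenAI-style /chat/completions endpoint, that the provider settings are available as $this->config, and that AssistantMessage accepts the reply text as its first constructor argument; adjust all three for your provider and LarAgent version.

public function sendMessage(array $messages, array $options = []): AssistantMessage
{
    // Assumption: $this->config holds the provider settings from config/laragent.php.
    $response = \Illuminate\Support\Facades\Http::withToken($this->config['api_key'])
        ->post(rtrim($this->config['api_url'], '/') . '/chat/completions', [
            'model' => $options['model'] ?? $this->config['model'],
            'messages' => $messages,
        ])
        ->throw()
        ->json();

    // Assumption: AssistantMessage takes the reply text as its first argument.
    return new AssistantMessage($response['choices'][0]['message']['content'] ?? '');
}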

Then register your custom driver in the configuration:

// config/laragent.php
'providers' => [
    'custom' => [
        'label' => 'my-custom-provider',
        'driver' => \App\LlmDrivers\CustomProviderDriver::class,
        'api_key' => env('CUSTOM_PROVIDER_API_KEY'),
        'api_url' => env('CUSTOM_PROVIDER_API_URL'),
        'model' => 'model-name',
        // Any other configuration your driver needs
    ],
],
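
Then point an agent at it the same way as the built-in providers:

// In your agent class
protected $provider = 'custom';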

See the base OpenAI driver for a complete reference implementation.

Best Practices

Do store API keys in environment variables, never hardcode them

Do set reasonable defaults for context window and token limits

Do consider implementing fallback mechanisms between providers (see the sketch after this list)

Don’t expose sensitive provider configuration in client-side code

Don’t assume all providers support the same features (like function calling or parallel tool execution)
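
For example, a simple fallback can catch a failed call and retry with another configured provider (the provider names here are illustrative):

// Fall back to a second configured provider if the first call fails
try {
    $response = MyAgent::for('chat-session')
        ->withProvider('primary-provider')
        ->respond($userMessage);
} catch (\Throwable $e) {
    $response = MyAgent::for('chat-session')
        ->withProvider('fallback-provider')
        ->respond($userMessage);
}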