LLM Drivers
LLM Drivers provide a consistent interface for connecting your application to different AI providers (such as OpenAI, Ollama, or OpenRouter). Because the API your code sees stays the same, you can switch providers without changing application code, giving you flexibility and vendor independence.
Understanding LLM Drivers
An LLM Driver is a standardized adapter between LarAgent and a specific language model provider. Each built-in driver implements the same interface, so every provider's API is exposed to your application in a uniform way.
See Creating Custom Drivers below for details on building your own.
Available Drivers
OpenAiDriver
The default driver for the OpenAI API. It works with minimal configuration: just add your OPENAI_API_KEY to your .env file.
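For example, your .env file only needs the key itself (the value below is a placeholder):

```ini
OPENAI_API_KEY=sk-your-key-here
```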
OpenAiCompatible
Works with any OpenAI-compatible API, allowing you to use alternative backends with the same API format.
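For instance, pointing the OpenAiCompatible driver at an alternative backend might look like the sketch below. The provider keys (api_url, api_key, model) and the driver class path are assumptions based on typical LarAgent configuration; check your installed version for the exact names.

```php
// config/laragent.php — provider entry (sketch; keys assumed)
'custom-backend' => [
    'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
    'api_key' => env('CUSTOM_API_KEY'),
    // Any endpoint that speaks the OpenAI chat completions format
    'api_url' => 'https://my-backend.example/v1',
    'model' => 'my-model',
],
```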
Configuring Drivers
You can configure LLM drivers in two ways:
1. Global Configuration
Set the driver in the provider settings of the configuration file (config/laragent.php):
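A sketch of what that might look like — the exact key names and defaults here are illustrative and may differ between LarAgent versions:

```php
// config/laragent.php (sketch; key names illustrative)
return [
    'default_provider' => 'default',

    'providers' => [
        'default' => [
            'label' => 'openai',
            'api_key' => env('OPENAI_API_KEY'),
            // The driver class used for this provider
            'driver' => \LarAgent\Drivers\OpenAi\OpenAiDriver::class,
            'model' => 'gpt-4o-mini',
        ],
    ],
];
```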
2. Per-Agent Configuration
Set the driver directly in your agent class:
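A minimal sketch, assuming an agent class that extends LarAgent's base Agent (the property names shown are typical for LarAgent but may differ in your version):

```php
use LarAgent\Agent;
use LarAgent\Drivers\OpenAi\OpenAiCompatible;

class MyAgent extends Agent
{
    // Driver set here takes precedence over the provider's configured driver
    protected $driver = OpenAiCompatible::class;

    // Which provider entry from config/laragent.php to use
    protected $provider = 'default';

    protected $model = 'gpt-4o-mini';

    public function instructions()
    {
        return 'You are a helpful assistant.';
    }
}
```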
If you set the driver in the agent class, it will override the global configuration.
Example Configurations
Ollama (Local LLM)
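Ollama exposes an OpenAI-compatible endpoint at /v1, so it can be used through the OpenAiCompatible driver. The provider keys below are illustrative; the endpoint URL is Ollama's standard local default:

```php
// config/laragent.php — provider entry for a local Ollama instance (sketch)
'ollama' => [
    'label' => 'ollama-local',
    'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
    'api_key' => 'ollama', // Ollama ignores the key, but one must be present
    'api_url' => 'http://localhost:11434/v1', // Ollama's OpenAI-compatible endpoint
    'model' => 'llama3.1',
],
```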
OpenRouter
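OpenRouter also speaks the OpenAI API format. A sketch, with illustrative keys and model name:

```php
// config/laragent.php — provider entry for OpenRouter (sketch)
'openrouter' => [
    'label' => 'openrouter',
    'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
    'api_key' => env('OPENROUTER_API_KEY'),
    'api_url' => 'https://openrouter.ai/api/v1',
    'model' => 'meta-llama/llama-3.1-70b-instruct',
],
```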
Gemini
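Google's Gemini API offers an OpenAI-compatible endpoint as well, so the same driver can be pointed at it. Keys and model name below are illustrative:

```php
// config/laragent.php — provider entry for Gemini (sketch)
'gemini' => [
    'label' => 'gemini',
    'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
    'api_key' => env('GEMINI_API_KEY'),
    // Gemini's OpenAI-compatible endpoint
    'api_url' => 'https://generativelanguage.googleapis.com/v1beta/openai',
    'model' => 'gemini-1.5-flash',
],
```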
Using Multiple Providers
You can configure multiple providers and switch between them as needed:
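For example, the providers array can hold one entry per backend, and each agent class selects one via its provider property (a sketch; key names illustrative):

```php
// config/laragent.php (sketch)
'providers' => [
    'default'    => [ /* OpenAI settings */ ],
    'ollama'     => [ /* local Ollama settings */ ],
    'openrouter' => [ /* OpenRouter settings */ ],
],
```

```php
// In an agent class: pin this agent to the local provider
protected $provider = 'ollama';
```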
You can also switch providers at runtime:
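One pattern that needs no special API is to define one agent class per provider and pick between them at runtime. CloudAgent and LocalAgent below are illustrative names, not part of LarAgent; for-based session keys and respond() follow LarAgent's usual agent usage:

```php
// Choose an agent (each class pinned to a different provider) at runtime.
$agent = config('app.use_local_llm')
    ? LocalAgent::for($sessionKey)   // provider 'ollama'
    : CloudAgent::for($sessionKey);  // provider 'default'

$response = $agent->respond('Hello!');
```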
LLM Drivers Architecture
The LLM Driver architecture handles three key responsibilities:
- Tool Registration - Register function calling tools that can be used by the LLM
- Response Schema - Define structured output formats for LLM responses
- Tool Call Formatting - Abstract away provider-specific formats for tool calls and results
This abstraction allows you to switch between different LLM providers without changing your application code.
Creating Custom Drivers
If you need to integrate with an AI provider that doesn't have a built-in driver, you can create your own by implementing the LlmDriver interface:
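A rough skeleton of the shape such a driver takes. This is a sketch only: the base class path and method names below are illustrative, not the exact LlmDriver contract — consult the base OpenAI driver for the real signatures.

```php
namespace App\LlmDrivers;

use LarAgent\Core\Abstractions\LlmDriver; // assumed base class path

class MyProviderDriver extends LlmDriver
{
    // Send the chat history to the provider and map its reply
    // into the message format LarAgent expects.
    public function sendMessage(array $messages, array $options = [])
    {
        // Call your provider's HTTP API here, translating tool
        // definitions into the provider's function-calling schema
        // and tool calls/results back into LarAgent's common format.
    }
}
```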
Then register your custom driver in the configuration:
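Registration is just a matter of pointing a provider entry at your class (keys illustrative):

```php
// config/laragent.php (sketch)
'providers' => [
    'my-provider' => [
        'driver' => \App\LlmDrivers\MyProviderDriver::class,
        'api_key' => env('MY_PROVIDER_API_KEY'),
        'model' => 'my-model',
    ],
],
```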
Check the base OpenAI driver for a reference implementation.
Best Practices
Do store API keys in environment variables, never hardcode them
Do set reasonable defaults for context window and token limits
Do consider implementing fallback mechanisms between providers
Don’t expose sensitive provider configuration in client-side code
Don’t assume all providers support the same features (like function calling or parallel tool execution)