# yannxtrem/ollama-bridge

Laravel Client for the Ollama API.

Requires:
- php: ^8.2
- illuminate/http: ^11.0|^12.0
- illuminate/support: ^11.0|^12.0

Laravel Ollama Bridge is a robust HTTP client that connects your Laravel application to a remote Laravel server hosting LLMs via Ollama. It handles authentication via Laravel Sanctum, manages timeouts, provides fine-grained error handling, and allows dynamic model switching (Gemma, Mistral, Llama3, etc.) without code changes.
## 🚀 Features

- Secure: Built-in support for Laravel Sanctum (Bearer token authentication).
- Model Agnostic: Switch between models (e.g., from `gemma` to `mistral`) via config or at runtime.
- Robust Error Handling: Maps HTTP errors (401, 403, 422, 500) to descriptive PHP exceptions.
- Laravel Native: Includes a Service Provider, Facade, and fully typed config.
- Resilient: Configurable timeouts and connection checks.
## 📋 Requirements

- PHP 8.2 or higher
- Laravel 11.0 or higher
## 📦 Installation

Install the package via Composer:

```bash
composer require yannxtrem/ollama-bridge
```
## ⚙️ Configuration

### 1. Publish Configuration

Publish the configuration file to `config/ollama-bridge.php`:

```bash
php artisan vendor:publish --tag=ollama-bridge-config
```
### 2. Environment Variables

Add the following variables to your `.env` file:

```env
# The URL of your remote AI server
OLLAMA_BRIDGE_URL=https://api.your-ai-server.com

# The Sanctum token generated on the AI server
OLLAMA_BRIDGE_TOKEN=your-secret-sanctum-token

# The default model to use (gemma, mistral, llama3, etc.)
OLLAMA_BRIDGE_MODEL=gemma

# Request timeout in seconds (useful for long generations)
OLLAMA_BRIDGE_TIMEOUT=60
```
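For reference, the published `config/ollama-bridge.php` presumably maps these variables to config keys along the lines below. The exact key names are an assumption (only `default_model` is mentioned elsewhere in this README); check the published file.

```php
<?php

// Hypothetical shape of config/ollama-bridge.php; verify against the published file.
return [
    'url' => env('OLLAMA_BRIDGE_URL'),
    'token' => env('OLLAMA_BRIDGE_TOKEN'),
    'default_model' => env('OLLAMA_BRIDGE_MODEL', 'gemma'),
    'timeout' => env('OLLAMA_BRIDGE_TIMEOUT', 60),
];
```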
## 🛠 Usage

The main entry point of this package is the `ask()` method. It handles communication with the remote AI API.

### Method Signature

```php
public function ask(
    string $prompt,
    ?string $system = null,
    ?float $temperature = null
): string
```
### Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `$prompt` | `string` | Yes | The main input or question you want to send to the AI. |
| `$system` | `string` | No | The system instructions (persona). Defines how the AI should behave (e.g., "You are a senior developer"). |
| `$temperature` | `float` | No | Controls randomness/creativity. Range: 0.0 (deterministic/focused) to 1.0 (creative/random). |
### Examples

#### 1. Basic Request

The simplest way to use the client. It uses the default model configuration.

```php
use Yannxtrem\OllamaBridge\OllamaBridge;

public function index(OllamaBridge $ai)
{
    // Simple question
    $answer = $ai->ask("What is Laravel?");

    return $answer;
}
```
#### 2. Using a System Persona

Use the `$system` parameter to give context or define a role for the AI. This is crucial for getting specific types of answers.

```php
$prompt = "Explain the Singleton pattern.";
$system = "You are a sarcastic senior engineer who explains concepts using cooking metaphors.";

$answer = $ai->ask($prompt, $system);

// Output: "Listen up! A Singleton is like having only ONE master chef in the kitchen..."
```
#### 3. Controlling Creativity (Temperature)

Use the `$temperature` parameter to adjust the output.

- Low temperature (0.1 - 0.3): Best for code generation, facts, and technical documentation.
- High temperature (0.7 - 1.0): Best for creative writing, brainstorming, and poetry.

```php
// Very precise (deterministic)
$code = $ai->ask("Write a JSON parser in PHP", null, 0.1);

// Very creative
$poem = $ai->ask("Write a poem about servers", null, 0.9);
```
#### 4. Using Named Arguments (Recommended)

PHP 8 named arguments make the code much more readable, especially when you want to skip the `$system` parameter but set `$temperature`.

```php
$response = $ai->ask(
    prompt: "List 5 unique cat names",
    temperature: 0.8
);
```
## 🔄 Advanced: Dynamic Model Switching

Chain the `model()` method before calling `ask()` to switch the LLM for a specific request. This overrides the `default_model` in your config.

```php
// Use 'mistral' for a quick chat
$chat = $ai->model('mistral')->ask("Hello!");

// Use 'llama3' for complex reasoning
$analysis = $ai->model('llama3')->ask(
    prompt: "Analyze this large dataset...",
    system: "You are a data analyst.",
    temperature: 0.2
);
```
## ⚡ Checking Service Status

Before sending a request, you might want to check whether the remote inference server is reachable.

```php
if (! $ai->isOnline()) {
    return "The AI service is currently down for maintenance.";
}

return $ai->ask("Hello");
```
## 🔌 Server Requirements

This package acts as a client. It expects your remote Laravel server to expose endpoints compatible with the following structure.

URLs are constructed as `{base_url}/api/{model}/{action}`:

| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/{model}/ask` | Sends the prompt to the LLM. |
| GET | `/api/{model}/status` | Checks if the service is up. |
### Expected Server Response (Success)

The server must return a JSON object with a `data.response` key:

```json
{
    "data": {
        "response": "Here is the answer generated by the AI..."
    }
}
```
### Expected Server Response (Error)

Standard Laravel validation or error structure:

```json
{
    "message": "Validation Error",
    "errors": {
        "prompt": ["The prompt field is required."]
    }
}
```
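To make the contract concrete, a minimal server-side route satisfying it could look like the sketch below. Only the URL pattern, Sanctum authentication, and the `data.response` shape come from this README; the forwarding call to a local Ollama instance (its default `/api/generate` endpoint on port 11434) and the status payload are illustrative assumptions.

```php
<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Route;

// Hypothetical server-side routes matching {base_url}/api/{model}/{action}.
Route::middleware('auth:sanctum')->group(function () {
    Route::post('/api/{model}/ask', function (Request $request, string $model) {
        $validated = $request->validate(['prompt' => 'required|string']);

        // Forward the prompt to a local Ollama instance (assumed default port).
        $result = Http::post('http://localhost:11434/api/generate', [
            'model'  => $model,
            'prompt' => $validated['prompt'],
            'stream' => false,
        ]);

        // Return the shape the bridge expects: data.response.
        return response()->json([
            'data' => ['response' => $result->json('response')],
        ]);
    });

    Route::get('/api/{model}/status', fn () => response()->json(['status' => 'ok']));
});
```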
## ⚠️ Error Handling

The `ask()` method throws a standard `\Exception` with a translated message based on the HTTP status code:

- `401`: Invalid AI configuration (token rejected).
- `403`: Access denied (unauthorized IP or scope).
- `422`: Malformed request (validation error on the server).
- `500`: AI server error.
- `ConnectionException`: Network timeout or DNS issue.
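A minimal sketch of handling these failures in a controller, assuming the `ConnectionException` above is Laravel's `Illuminate\Http\Client\ConnectionException` and that all HTTP-status failures surface as a generic `\Exception` (as described in this section):

```php
use Illuminate\Http\Client\ConnectionException;
use Yannxtrem\OllamaBridge\OllamaBridge;

public function generate(OllamaBridge $ai)
{
    try {
        return $ai->ask("Summarize this article...");
    } catch (ConnectionException $e) {
        // Network timeout or DNS issue: the remote server is unreachable.
        return response("AI service unreachable, please try again later.", 503);
    } catch (\Exception $e) {
        // 401/403/422/500 are mapped to descriptive exception messages.
        report($e);
        return response("AI request failed: " . $e->getMessage(), 500);
    }
}
```

Catch `ConnectionException` first: it is more specific than `\Exception`, which would otherwise swallow it.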
## 🤝 Contribution

Contributions are welcome!

1. Fork the repository.
2. Create a feature branch.
3. Commit your changes.
4. Push to the branch.
5. Open a Pull Request.
## 📄 License

This package is open-sourced software licensed under the MIT license.