omgbwa-yasse / aibridge
Laravel / AI Bridge - Unified Laravel package for interacting with OpenAI, Ollama, ONN, Gemini, Grok, and Claude.
Requires
- php: >=8.1
- illuminate/container: ^8.0|^9.0|^10.0|^11.0|^12.0
- illuminate/http: ^8.0|^9.0|^10.0|^11.0|^12.0
- illuminate/support: ^8.0|^9.0|^10.0|^11.0|^12.0
- psr/http-message: ^1.1|^2.0
Requires (Dev)
- phpunit/phpunit: ^10.0
README
Unified Laravel package for interacting with multiple LLM APIs (OpenAI, Ollama, Gemini, Claude, Grok, etc.) with complete support for:
- Conversational chat with history
- Real-time streaming
- Embeddings for semantic search
- Image generation (DALL-E, Stable Diffusion via Ollama)
- Audio (Text-to-Speech and Speech-to-Text)
- Structured output (JSON mode with schema validation)
- Native and generic function calling
- Extensible tools system
- Laravel Facade AiBridge for simplified access
Status: Stable - Consolidated API after fixes (v1.0)
Installation
composer require omgbwa-yasse/aibridge
Configuration
Publish the configuration file:
php artisan vendor:publish --provider="AiBridge\AiBridgeServiceProvider" --tag=config
Environment Variables
Configure your API keys in .env:
# OpenAI
OPENAI_API_KEY=sk-...

# Other providers
OLLAMA_ENDPOINT=http://localhost:11434
GEMINI_API_KEY=...
CLAUDE_API_KEY=...
GROK_API_KEY=...
ONN_API_KEY=...

# OpenRouter
OPENROUTER_API_KEY=...
# Optional override (defaults to https://openrouter.ai/api/v1)
# OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
# Optional app discovery headers
# OPENROUTER_REFERER=https://your-app.example.com
# OPENROUTER_TITLE=Your App Name

# Ollama Turbo (SaaS)
OLLAMA_TURBO_API_KEY=...
# Optional override (defaults to https://ollama.com)
# OLLAMA_TURBO_ENDPOINT=https://ollama.com

# Custom providers (Azure OpenAI, etc.)
OPENAI_CUSTOM_API_KEY=...
OPENAI_CUSTOM_BASE_URL=https://your-azure-openai.openai.azure.com
OPENAI_CUSTOM_AUTH_HEADER=api-key
OPENAI_CUSTOM_AUTH_PREFIX=

# HTTP Configuration
LLM_HTTP_TIMEOUT=30
LLM_HTTP_RETRY=1
LLM_HTTP_RETRY_SLEEP=200
Basic Usage
Access via Laravel Container
Get the manager directly from the container:
$manager = app('AiBridge'); // AiBridge\AiBridgeManager instance

$resp = $manager->chat('openai', [
    ['role' => 'user', 'content' => 'Hello']
]);
Register a custom provider at runtime (advanced):
$manager->registerProvider('myprov', new MyProvider());
Or via dependency injection:
use AiBridge\AiBridgeManager;

class MyService
{
    public function __construct(private AiBridgeManager $ai) {}

    public function run(): array
    {
        return $this->ai->chat('openai', [
            ['role' => 'user', 'content' => 'Hello']
        ]);
    }
}
Basic Chat with Facade
use AiBridge\Facades\AiBridge;

$res = AiBridge::chat('openai', [
    ['role' => 'user', 'content' => 'Hello, who are you?']
]);
$text = $res['choices'][0]['message']['content'] ?? '';
Laravel Alias (Optional)
The AiBridge facade is available via auto-discovery. For a custom alias, add to config/app.php:
'aliases' => [
    // ...
    'AI' => AiBridge\Facades\AiBridge::class,
],
Normalized Response
use AiBridge\Support\ChatNormalizer;

$raw = AiBridge::chat('openai', [
    ['role' => 'user', 'content' => 'Hello']
]);
$normalized = ChatNormalizer::normalize($raw);
echo $normalized['text'];
Advanced Features
Fluent text builder (v2.1+)
Prefer short, explicit methods instead of large option arrays when generating text:
use AiBridge\Facades\AiBridge;

$out = AiBridge::text()
    ->using('claude', 'claude-3-5-sonnet-20240620', [
        'api_key' => getenv('CLAUDE_API_KEY')
    ])
    ->withSystemPrompt('You are concise.')
    ->withPrompt('Explain gravity in one sentence.')
    ->withMaxTokens(64)
    ->usingTemperature(0.2)
    ->asText();

echo $out['text'];
- using(provider, model, config) sets the provider, model, and optional per-call config (api_key, endpoint, base_url, ...).
- withPrompt appends a user message; withSystemPrompt prepends a system message.
- withMaxTokens, usingTemperature, usingTopP control generation.
- asText() returns a normalized array with text, raw, usage, and finish_reason.
- asRaw() returns the raw provider payload; asStream() yields string chunks.
This complements the classic API and can reduce errors versus large option arrays.
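For example, the same prompt can be read through the normalized asText() result or kept as the raw provider payload. A minimal sketch using the fields documented above (the model name is illustrative):

use AiBridge\Facades\AiBridge;

$out = AiBridge::text()
    ->using('openai', 'gpt-4o-mini', ['api_key' => getenv('OPENAI_API_KEY')])
    ->withPrompt('Summarize gravity in one line.')
    ->asText();

echo $out['text'];           // normalized text
print_r($out['usage']);      // token usage; exact shape varies by provider
echo $out['finish_reason'];  // e.g., "stop"

// Same call, but keep the provider payload untouched
$raw = AiBridge::text()
    ->using('openai', 'gpt-4o-mini', ['api_key' => getenv('OPENAI_API_KEY')])
    ->withPrompt('Summarize gravity in one line.')
    ->asRaw();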
Streaming Output (builder)
Show model responses as they generate:
use AiBridge\Facades\AiBridge;

$stream = AiBridge::text()
    ->using('openai', 'gpt-4o', ['api_key' => getenv('OPENAI_API_KEY')])
    ->withPrompt('Tell me a short story about a brave knight.')
    ->asStream();

foreach ($stream as $chunk) {
    // $chunk is AiBridge\Support\StreamChunk
    echo $chunk->text;
    if (function_exists('ob_flush')) { @ob_flush(); }
    if (function_exists('flush')) { @flush(); }
}
Laravel controller (Server-Sent Events):
use AiBridge\Facades\AiBridge;

return response()->stream(function () {
    $stream = AiBridge::text()
        ->using('openai', 'gpt-4o', ['api_key' => env('OPENAI_API_KEY')])
        ->withPrompt('Explain quantum computing step by step.')
        ->asStream();

    foreach ($stream as $chunk) {
        echo $chunk->text;
        @ob_flush();
        @flush();
    }
}, 200, [
    'Cache-Control' => 'no-cache',
    'Content-Type' => 'text/event-stream',
    'X-Accel-Buffering' => 'no',
]);
Laravel 12 Event Streams:
Route::get('/chat', function () {
    return response()->eventStream(function () {
        $stream = AiBridge::text()
            ->using('openai', 'gpt-4o', ['api_key' => env('OPENAI_API_KEY')])
            ->withPrompt('Explain quantum computing step by step.')
            ->asStream();

        foreach ($stream as $resp) {
            yield $resp->text;
        }
    });
});
Note: Packages that intercept Laravel HTTP client streams (e.g., Telescope) can consume the stream. Disable or exclude AiBridge requests for streaming endpoints.
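If you use Telescope, one option is to turn off its HTTP client watcher so streamed responses are not buffered. A sketch assuming a default Telescope install, in config/telescope.php:

'watchers' => [
    // ...
    // Disable recording of outgoing HTTP client requests,
    // which otherwise consumes/buffers AiBridge streams
    Laravel\Telescope\Watchers\ClientRequestWatcher::class => false,
],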
Real-time Streaming
foreach (AiBridge::stream('openai', [
    ['role' => 'user', 'content' => 'Explain gravity in 3 points']
]) as $chunk) {
    echo $chunk; // flush to SSE client
}
Event-based streaming from the manager (delta/end events):
foreach (app('AiBridge')->streamEvents('openai', [
    ['role' => 'user', 'content' => 'Stream me a short answer']
]) as $evt) {
    if ($evt['type'] === 'delta') echo $evt['data'];
    if ($evt['type'] === 'end') break;
}
Embeddings for Semantic Search
$result = AiBridge::embeddings('openai', [
    'First text to vectorize',
    'Second text to analyze'
]);
$vectors = $result['embeddings'];
Normalize embeddings across providers:
use AiBridge\Support\EmbeddingsNormalizer;

$raw = AiBridge::embeddings('openai', ['hello world']);
$norm = EmbeddingsNormalizer::normalize($raw);
$vectors = $norm['vectors'];
Image Generation
$result = AiBridge::image('openai', 'An astronaut cat in space', [
    'size' => '1024x1024',
    'model' => 'dall-e-3',
    'quality' => 'hd'
]);
$imageUrl = $result['images'][0]['url'] ?? null;
Normalize images from any provider:
use AiBridge\Support\ImageNormalizer;

$raw = AiBridge::image('openai_custom', 'A watercolor elephant');
$images = ImageNormalizer::normalize($raw);

foreach ($images as $img) {
    if ($img['type'] === 'url') { echo $img['url']; }
    if ($img['type'] === 'b64') { file_put_contents('out.png', base64_decode($img['data'])); }
}
Facade convenience for normalizers:
// Images
$imgs = AiBridge::normalizeImages($rawImage);

// Audio TTS
$tts = AiBridge::normalizeTTSAudio($rawTTS);

// Audio STT
$stt = AiBridge::normalizeSTTAudio($rawSTT);

// Embeddings
$emb = AiBridge::normalizeEmbeddings($rawEmb);
Audio Text-to-Speech
$result = AiBridge::tts('openai', 'Hello world', [
    'voice' => 'alloy',
    'model' => 'tts-1-hd'
]);
file_put_contents('output.mp3', base64_decode($result['audio']));
Normalize audio responses:
use AiBridge\Support\AudioNormalizer;

$raw = AiBridge::tts('openai', 'Hello world');
$audio = AudioNormalizer::normalizeTTS($raw);
file_put_contents('tts.mp3', base64_decode($audio['b64']));
Audio Speech-to-Text
$result = AiBridge::stt('openai', storage_path('app/audio.wav'), [
    'model' => 'whisper-1'
]);
$transcription = $result['text'];
Structured Output (JSON Mode)
With Schema Validation
$res = AiBridge::chat('openai', [
    ['role' => 'user', 'content' => 'Give me person info in JSON format']
], [
    'response_format' => 'json',
    'json_schema' => [
        'name' => 'person_schema',
        'schema' => [
            'type' => 'object',
            'properties' => [
                'name' => ['type' => 'string'],
                'age' => ['type' => 'number'],
                'city' => ['type' => 'string']
            ],
            'required' => ['name', 'age']
        ]
    ]
]);

// Check validation
if ($res['schema_validation']['valid'] ?? false) {
    $person = json_decode($res['choices'][0]['message']['content'], true);
    echo "Name: " . $person['name'];
} else {
    $errors = $res['schema_validation']['errors'] ?? [];
    echo "Validation errors: " . implode(', ', $errors);
}
Simple JSON Mode (Ollama)
$res = AiBridge::chat('ollama', [
    ['role' => 'user', 'content' => 'List 3 African countries in JSON']
], [
    'response_format' => 'json',
    'model' => 'llama3.1'
]);
Function Calling
OpenAI Native Function Calling
$tools = [
    [
        'name' => 'getWeather',
        'description' => 'Get weather for a city',
        'parameters' => [
            'type' => 'object',
            'properties' => [
                'city' => ['type' => 'string', 'description' => 'City name']
            ],
            'required' => ['city']
        ]
    ]
];

$resp = AiBridge::chat('openai', [
    ['role' => 'user', 'content' => 'What\'s the weather in Paris?']
], [
    'tools' => $tools,
    'tool_choice' => 'auto'
]);

if (!empty($resp['tool_calls'])) {
    foreach ($resp['tool_calls'] as $call) {
        $functionName = $call['name'];
        $arguments = $call['arguments'];
        // Execute function...
    }
}
Generic Tools System
Create a custom tool:
use AiBridge\Contracts\ToolContract;

class WeatherTool implements ToolContract
{
    public function name(): string
    {
        return 'get_weather';
    }

    public function description(): string
    {
        return 'Get current weather for a city';
    }

    public function schema(): array
    {
        return [
            'type' => 'object',
            'properties' => [
                'city' => ['type' => 'string']
            ],
            'required' => ['city']
        ];
    }

    public function execute(array $arguments): string
    {
        $city = $arguments['city'] ?? 'Paris';
        // Weather API call...
        return json_encode(['city' => $city, 'temp' => '22°C']);
    }
}
Register and use the tool:
$manager = app('AiBridge');
$manager->registerTool(new WeatherTool());

$result = $manager->chatWithTools('ollama', [
    ['role' => 'user', 'content' => 'What\'s the weather in Lyon?']
], [
    'model' => 'llama3.1',
    'max_tool_iterations' => 3
]);

echo $result['final']['message']['content'];
// Tool call history in $result['tool_calls']
Supported Providers
Provider | Chat | Stream | Embeddings | Images | Audio (TTS) | Audio (STT) | Tools |
---|---|---|---|---|---|---|---|
OpenAI | ✅ | ✅ | ✅ | ✅ (DALL-E) | ✅ | ✅ | ✅ Native |
Ollama | ✅ | ✅ | ✅ | ✅ (SD) | ❌ | ❌ | ✅ Generic |
Ollama Turbo | ✅ | ✅ | ✅ | ✅ (SD) | ❌ | ❌ | ✅ Generic |
Gemini | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ Generic |
Claude | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ Generic |
Grok | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ Generic |
OpenRouter | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ Native (OpenAI-compatible) |
ONN | ✅ | ✅ (simulated) | ❌ | ❌ | ❌ | ❌ | ❌ |
Custom OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ Native |
Advanced Configuration
Timeouts and Retry
# HTTP request timeout (seconds)
LLM_HTTP_TIMEOUT=30

# Number of retry attempts on failure
LLM_HTTP_RETRY=2

# Delay between retries (ms)
LLM_HTTP_RETRY_SLEEP=200
File Security
# Maximum file size (bytes)
LLM_MAX_FILE_BYTES=2097152

# Allowed MIME types for files
# (configured in config/aibridge.php)
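For illustration, the published config might carry entries along these lines (a sketch only; the 'files' and 'allowed_mime' key names are assumptions, so check the actual config/aibridge.php published by the package):

// config/aibridge.php (illustrative key names)
'files' => [
    'max_bytes' => env('LLM_MAX_FILE_BYTES', 2097152),
    'allowed_mime' => ['audio/wav', 'audio/mpeg', 'image/png', 'image/jpeg'],
],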
Custom Provider (Azure OpenAI)
OPENAI_CUSTOM_API_KEY=your-azure-key
OPENAI_CUSTOM_BASE_URL=https://your-resource.openai.azure.com
OPENAI_CUSTOM_AUTH_HEADER=api-key
OPENAI_CUSTOM_AUTH_PREFIX=
Ollama via OpenAI-compatible API
Ollama exposes an experimental, OpenAI-compatible API at http://localhost:11434/v1. You can use AiBridge's "Custom OpenAI" provider to call Ollama with OpenAI-shaped requests (chat/completions, streaming, embeddings, vision as content parts).
Environment example:
# Ollama OpenAI compatibility
OPENAI_CUSTOM_API_KEY=ollama  # required by client but ignored by Ollama
OPENAI_CUSTOM_BASE_URL=http://localhost:11434/v1
# The default paths already match Ollama's OpenAI-compat endpoints:
# /v1/chat/completions, /v1/embeddings, /v1/images/generations, etc.
# Keep defaults unless you run a proxy.
Usage example (PHP):
use AiBridge\AiBridgeManager;

$ai = new AiBridgeManager([
    'openai_custom' => [
        'api_key' => 'ollama',
        'base_url' => 'http://localhost:11434/v1',
        'paths' => [
            'chat' => '/v1/chat/completions',
            'embeddings' => '/v1/embeddings',
        ],
    ],
    'options' => [ 'default_timeout' => 30 ],
]);

// Chat
$resp = $ai->chat('openai_custom', [
    ['role' => 'user', 'content' => 'Say this is a test'],
], [ 'model' => 'llama3.2' ]);
echo $resp['choices'][0]['message']['content'] ?? '';

// Streaming
foreach ($ai->stream('openai_custom', [
    ['role' => 'user', 'content' => 'Explain gravity in one paragraph.'],
], [ 'model' => 'llama3.2' ]) as $chunk) {
    echo $chunk;
}

// Embeddings
$emb = $ai->embeddings('openai_custom', [
    'why is the sky blue?',
    'why is the grass green?',
], [ 'model' => 'all-minilm' ]);
$vectors = $emb['embeddings'];
Notes:
- Ollama supports base64 image content parts in chat messages (OpenAI-style). Provide an array of content parts with a data URL if needed.
- Not all OpenAI fields are supported (e.g., tool_choice, logprobs). See Ollama docs for the current matrix.
Vision (image content parts)
$imageB64 = base64_encode(file_get_contents('example.png'));

$messages = [
    [
        'role' => 'user',
        'content' => [
            [ 'type' => 'text', 'text' => "What's in this image?" ],
            [ 'type' => 'image_url', 'image_url' => 'data:image/png;base64,' . $imageB64 ],
        ],
    ],
];

$resp = $ai->chat('openai_custom', $messages, [ 'model' => 'llava' ]);
echo $resp['choices'][0]['message']['content'] ?? '';
Troubleshooting Ollama (OpenAI-compat)
- Ensure Ollama is started with the OpenAI-compatible API: it should expose http://localhost:11434/v1
- Use an arbitrary API key (e.g., "ollama"): some clients require a token header even if the server ignores it.
- If you see 404 on /v1/models, set paths in config to match your proxy or version.
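A per-manager override for that case might look like this (a sketch; the 'models' path key is assumed by analogy with the 'chat'/'embeddings' keys shown earlier, so verify it against config/aibridge.php):

use AiBridge\AiBridgeManager;

$ai = new AiBridgeManager([
    'openai_custom' => [
        'api_key' => 'ollama',
        'base_url' => 'http://localhost:11434/v1',
        // 'models' key name is an assumption, mirroring the documented path keys
        'paths' => [ 'models' => '/v1/models' ],
    ],
]);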
OpenRouter (OpenAI-compatible)
OpenRouter exposes an OpenAI-compatible API at https://openrouter.ai/api/v1 and is pre-wired in AiBridge via a CustomOpenAIProvider.
Environment example:
OPENROUTER_API_KEY=your-key
# Optional
# OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
# OPENROUTER_REFERER=https://your-app.example.com
# OPENROUTER_TITLE=Your App Name
Usage examples (PHP):
use AiBridge\Facades\AiBridge;

// Chat
$res = AiBridge::chat('openrouter', [
    ['role' => 'user', 'content' => 'Give me a one-liner joke']
], [ 'model' => 'openai/gpt-4o-mini' ]);
echo $res['choices'][0]['message']['content'] ?? '';

// Streaming
foreach (AiBridge::stream('openrouter', [
    ['role' => 'user', 'content' => 'Stream a haiku about the sea']
], [ 'model' => 'meta-llama/llama-3.1-8b-instruct' ]) as $chunk) {
    echo $chunk;
}

// Embeddings
$emb = AiBridge::embeddings('openrouter', [
    'hello world',
    'bonjour le monde'
], [ 'model' => 'text-embedding-3-small' ]);
$vectors = $emb['embeddings'];

// Images (if the routed model supports it)
$img = AiBridge::image('openrouter', 'A watercolor fox in the forest', [
    'model' => 'openai/dall-e-3'
]);

// Audio (TTS/STT) if available through OpenRouter for your chosen model
$tts = AiBridge::tts('openrouter', 'Hello from OpenRouter', [
    'model' => 'openai/tts-1',
    'voice' => 'alloy'
]);
Notes:
- Model IDs and capabilities depend on OpenRouter routing. Choose models accordingly.
- The Referer/Title headers are optional but recommended to surface your app in OpenRouter's ecosystem.
Models (list/retrieve) with OpenAI-compatible endpoints
// List models from an OpenAI-compatible base URL (e.g., Ollama /v1)
$models = $ai->models('openai_custom');
foreach (($models['data'] ?? []) as $m) {
    echo $m['id'] . PHP_EOL;
}

// Retrieve a single model
$model = $ai->model('openai_custom', 'llama3.2');
print_r($model);
Also works with built-in providers that speak the OpenAI schema, e.g. openrouter and openai.
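For instance, via the facade (a sketch assuming the facade forwards models()/model() to the manager, as it does for chat()):

use AiBridge\Facades\AiBridge;

$models = AiBridge::models('openrouter');
foreach (($models['data'] ?? []) as $m) {
    echo $m['id'] . PHP_EOL;
}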
Streaming events (OpenAI)
use AiBridge\Providers\OpenAIProvider;

$prov = new OpenAIProvider(env('OPENAI_API_KEY'));

foreach ($prov->streamEvents([
    ['role' => 'user', 'content' => 'Stream me a short answer.']
], [ 'model' => 'gpt-4o-mini' ]) as $evt) {
    if ($evt['type'] === 'delta') { echo $evt['data']; }
    if ($evt['type'] === 'end') { echo "\n[done]\n"; }
}
ONN Provider
Basic chat support with optional simulated streaming.
Environment:
ONN_API_KEY=your-onn-key
Usage:
use AiBridge\Facades\AiBridge;

$res = AiBridge::chat('onn', [
    ['role' => 'user', 'content' => 'Say hello']
]);
echo $res['response'] ?? '';

foreach (AiBridge::stream('onn', [
    ['role' => 'user', 'content' => 'Stream a short sentence']
]) as $chunk) {
    echo $chunk;
}
Practical Examples
Conversational Assistant with History
use AiBridge\AiBridgeManager;

class ChatbotService
{
    private array $conversation = [];

    public function __construct(private AiBridgeManager $ai) {}

    public function chat(string $userMessage): string
    {
        $this->conversation[] = ['role' => 'user', 'content' => $userMessage];

        $response = $this->ai->chat('openai', $this->conversation, [
            'model' => 'gpt-4',
            'temperature' => 0.7
        ]);

        $assistantMessage = $response['choices'][0]['message']['content'];
        $this->conversation[] = ['role' => 'assistant', 'content' => $assistantMessage];

        return $assistantMessage;
    }
}
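A minimal usage sketch (wiring via the container; the prompts are illustrative):

$bot = app(ChatbotService::class);
echo $bot->chat('Hi, my name is Ada.');
// The second call sees the first exchange because the service keeps history
echo $bot->chat('What is my name?');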
Semantic Search with Embeddings
use AiBridge\AiBridgeManager;

class SemanticSearch
{
    public function __construct(private AiBridgeManager $ai) {}

    public function search(string $query, array $documents): array
    {
        // Vectorize query and documents in a single call
        $inputs = [$query, ...$documents];
        $result = $this->ai->embeddings('openai', $inputs);

        $queryVector = $result['embeddings'][0];
        $docVectors = array_slice($result['embeddings'], 1);

        // Calculate cosine similarity
        $similarities = [];
        foreach ($docVectors as $i => $docVector) {
            $similarities[$i] = $this->cosineSimilarity($queryVector, $docVector);
        }

        // Sort by relevance
        arsort($similarities);

        return array_map(fn($i) => [
            'document' => $documents[$i],
            'score' => $similarities[$i]
        ], array_keys($similarities));
    }

    private function cosineSimilarity(array $a, array $b): float
    {
        $dotProduct = array_sum(array_map(fn($x, $y) => $x * $y, $a, $b));
        $normA = sqrt(array_sum(array_map(fn($x) => $x * $x, $a)));
        $normB = sqrt(array_sum(array_map(fn($x) => $x * $x, $b)));
        return $dotProduct / ($normA * $normB);
    }
}
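A minimal usage sketch (the documents are illustrative; results come back sorted by score, best match first):

$search = app(SemanticSearch::class);
$results = $search->search('feline pets', [
    'Cats are small domesticated carnivores.',
    'The stock market rallied today.',
    'Dogs and cats are common household animals.',
]);

echo $results[0]['document'] . ' (' . round($results[0]['score'], 3) . ')';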
Streaming for Real-time Interface
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::get('/chat-stream', function (Request $request) {
    $message = $request->input('message');

    return response()->stream(function () use ($message) {
        $manager = app('AiBridge');
        foreach ($manager->stream('openai', [
            ['role' => 'user', 'content' => $message]
        ]) as $chunk) {
            echo "data: " . json_encode(['chunk' => $chunk]) . "\n\n";
            ob_flush();
            flush();
        }
        echo "data: [DONE]\n\n";
    }, 200, [
        'Content-Type' => 'text/plain',
        'Cache-Control' => 'no-cache',
        'X-Accel-Buffering' => 'no'
    ]);
});
Testing
Run the test suite:
composer test
Or via PHPUnit directly:
./vendor/bin/phpunit
Development
Contributing
- Fork the project
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
Roadmap
- Native Claude Function Calling support
- Automatic embeddings caching
- Additional providers (Cohere, Hugging Face)
- Web administration interface
- Integrated metrics and monitoring
- Advanced multimodal support (vision, audio)
License
This package is open source under the MIT license.
Disclaimer
This package is not officially affiliated with OpenAI, Anthropic, Google, or other mentioned providers. Please respect their respective terms of service.
Support
- Complete Documentation
- Report a Bug
- Discussions
- Don't forget to star the project if it helps you!
Per-call overrides (v2.0+)
You can now pass provider credentials and endpoints directly on each call, without editing config:
- OpenAI: api_key, optional chat_endpoint
- Ollama: endpoint
- Ollama Turbo: api_key, optional endpoint
- Claude/Grok/ONN/Gemini: api_key, optional endpoint
- Custom OpenAI-compatible: api_key, base_url, optional paths, auth_header, auth_prefix, extra_headers
Examples:
$res = app('AiBridge')->chat('ollama', $messages, [
    'endpoint' => 'http://localhost:11434',
    'model' => 'llama3',
]);

$res = app('AiBridge')->chat('openai', $messages, [
    'api_key' => getenv('OPENAI_API_KEY'),
    'chat_endpoint' => 'https://api.openai.com/v1/chat/completions',
]);

$res = app('AiBridge')->chat('openai_custom', $messages, [
    'api_key' => 'ollama', // for Ollama OpenAI-compatible mode
    'base_url' => 'http://localhost:11434/v1',
    'paths' => [ 'chat' => '/chat/completions' ],
]);
See CHANGELOG.md for details.