lzhx00 / laravel-llm-client
Laravel LLM Client Package - A unified interface for multiple LLM providers
pkg:composer/lzhx00/laravel-llm-client
Requires
- php: >=8.1
- laravel/framework: ^10.0|^11.0|^12.0
Requires (Dev)
- orchestra/testbench: ^8.0|^9.0|^10.0
- phpunit/phpunit: ^10.0
README
A Laravel package providing a unified, chainable interface for multiple LLM (Large Language Model) providers: OpenAI, Anthropic (Claude), Gemini, and Ollama.
Requirements
- Laravel 10.x, 11.x, or 12.x
- PHP 8.1 or higher
Tested on Laravel 12.x. Other versions may work, but are not officially tested.
Installation
```bash
composer require lzhx00/laravel-llm-client
```
Laravel will auto-discover and register the package.
If you have disabled auto-discovery, add the following to config/app.php:
```php
'providers' => [
    // ...
    Lzhx00\LLMClient\LLMClientServiceProvider::class,
],

'aliases' => [
    // ...
    'LLMClient' => Lzhx00\LLMClient\Facades\LLMClient::class,
],
```
Configuration
Publish the config file (optional, for customization):
```bash
php artisan vendor:publish --tag=llm-client-config
```
Set your API keys and provider settings in .env or config/llm.php:
```env
LLM_DEFAULT_PROVIDER=openai
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...
OLLAMA_BASE_URL=http://localhost:11434
```
config/llm.php Example
Each provider has its own default_model, embedding_model, and options.
```php
return [
    'default' => env('LLM_DEFAULT_PROVIDER', 'openai'),

    'providers' => [
        'openai' => [
            'api_key'         => env('OPENAI_API_KEY'),
            'default_model'   => 'gpt-3.5-turbo',
            'embedding_model' => 'text-embedding-3-small',
            'options' => [
                'temperature' => 0.7,
                // ...other OpenAI-specific options
            ],
        ],

        'ollama' => [
            'base_url'        => env('OLLAMA_BASE_URL', 'http://localhost:11434'),
            'default_model'   => 'llama3',
            'embedding_model' => 'nomic-embed-text',
            'options' => [
                'temperature' => 0.5,
                // ...other Ollama-specific options
            ],
        ],

        // ...other providers
    ],
];
```
Usage
Basic Text Generation
```php
$response = LLMClient::generate('Say hello in English.');
```
Specify Provider
```php
$response = LLMClient::use('ollama')->generate('Say hello in English.');
```
Chainable Model/Options (Recommended)
```php
$response = LLMClient::model('llama3')->with(['temperature' => 0.5])->generate('Say hello.');
```
- `model()` only affects chat/completion requests.
- `embedModel()` only affects embeddings.
- `with()` sets provider-specific options (everything except model/embeddingModel).
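Under the hood, a chainable client like this typically buffers per-call overrides and merges them with the configured defaults when the request is built. The sketch below illustrates that merging behavior in plain PHP; the class and method names are hypothetical, not the package's actual internals:

```php
<?php

// Hypothetical sketch of a fluent client that merges per-call
// overrides with configured defaults. Illustrative only.
class FluentClientSketch
{
    private ?string $model = null;
    private array $options = [];

    public function __construct(private array $defaults)
    {
    }

    public function model(string $model): static
    {
        $this->model = $model;
        return $this;
    }

    public function with(array $options): static
    {
        // Later calls override earlier ones; model is managed separately.
        unset($options['model'], $options['embeddingModel']);
        $this->options = array_merge($this->options, $options);
        return $this;
    }

    public function buildPayload(string $prompt): array
    {
        // Per-call options win over defaults; model falls back to the default.
        return array_merge($this->defaults['options'], $this->options, [
            'model'  => $this->model ?? $this->defaults['default_model'],
            'prompt' => $prompt,
        ]);
    }
}

$client = new FluentClientSketch([
    'default_model' => 'llama3',
    'options'       => ['temperature' => 0.7],
]);

$payload = $client->with(['temperature' => 0.5])->buildPayload('Say hello.');
// The per-call temperature (0.5) overrides the configured default (0.7).
```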
Embeddings
```php
$vector = LLMClient::use('ollama')->embed('hello world');

// Specify embedding model
$vector = LLMClient::embedModel('nomic-embed-text')->embed('hello world');
```
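Embedding vectors are usually compared with cosine similarity. A small self-contained helper (plain PHP, independent of this package) for comparing two vectors such as those returned by `embed()`:

```php
<?php

/**
 * Cosine similarity between two equal-length embedding vectors.
 * Returns a value in [-1, 1]; closer to 1 means more similar.
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value * $value;
        $normB += $b[$i] * $b[$i];
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Identical vectors score 1.0; orthogonal vectors score 0.0.
$same       = cosineSimilarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]);
$orthogonal = cosineSimilarity([1.0, 0.0], [0.0, 1.0]);
```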
Streaming Response
```php
LLMClient::generateStream('Tell me a joke.', [], function ($chunk) {
    echo $chunk;
});
```
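The streaming API hands each chunk to a callback as soon as it arrives, rather than returning the full response at once. The callback pattern can be sketched in plain PHP, with a generator standing in for the provider's chunk stream (the generator and helper below are illustrative, not part of the package):

```php
<?php

// Simulated chunk stream; a real provider would yield tokens as they arrive.
function fakeChunkStream(): \Generator
{
    yield 'Why did the chicken ';
    yield 'cross the road? ';
    yield 'To get to the other side.';
}

// Mirrors the generateStream() callback shape: each chunk is passed
// to the callable immediately, and the full response is also assembled.
function streamWithCallback(iterable $chunks, callable $onChunk): string
{
    $full = '';
    foreach ($chunks as $chunk) {
        $onChunk($chunk);   // e.g. echo to the browser right away
        $full .= $chunk;    // accumulate the complete response
    }
    return $full;
}

$buffer = '';
$result = streamWithCallback(fakeChunkStream(), function (string $chunk) use (&$buffer) {
    $buffer .= $chunk;
});
```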
List Models
```php
$models = LLMClient::use('gemini')->models();
```
Supported Providers
- OpenAI (ChatGPT)
- Anthropic (Claude)
- Gemini (Google)
- Ollama
⚠️ Note: Only the Ollama provider has been fully tested. The other providers are implemented from their official API documentation but have not been verified against real API keys.
📄 License
MIT License