aitoolkit / laravel-ai-toolkit
A professional, production-ready Laravel package for integrating multiple AI providers with advanced features like async processing, real-time streaming, intelligent caching, and error handling.
pkg:composer/aitoolkit/laravel-ai-toolkit
Requires
- php: ^8.3
- anthropic-ai/sdk: v0.3.0
- filament/filament: ^4.0
- laravel/framework: ^11.0|^12.0
- openai-php/client: v0.17.0
Requires (Dev)
- laravel/pint: v1.25.0
- orchestra/testbench: ^9.0
- pestphp/pest: ^2.0
This package is not auto-updated.
Last update: 2025-11-02 06:31:50 UTC
README
A professional, production-ready Laravel package for integrating multiple AI providers with advanced features like async processing, real-time streaming, intelligent caching, and error handling.
Features
Multi-Provider AI Support
- OpenAI - GPT-4, GPT-3.5, and embeddings
- Anthropic - Claude 4 Sonnet, Claude 3 Opus, and more
- Groq - Ultra-fast inference with Mixtral and Llama models
- Easy provider switching via configuration
Performance & Reliability
- Async Queue Jobs - Non-blocking AI operations
- Intelligent Caching - Redis/Database cache with TTL and invalidation
- Circuit Breaker Pattern - Automatic failure protection
- Exponential Backoff - Smart retry logic with jitter
- Rate Limiting - Built-in API rate limit handling
Real-Time Features
- Streaming Responses - Real-time AI response streaming
- Laravel Reverb Integration - WebSocket broadcasting for live updates
- Event-Driven Architecture - Dispatch events for UI updates
Developer Experience
- Type-Safe Contracts - Interface-driven provider architecture
- Comprehensive CLI - Command-line tools for testing and management
- Laravel Facades - Easy dependency injection
- Rich Configuration - Environment-based provider settings
- Extensible Design - Easy to add new providers
Installation
Requirements
- PHP 8.3 or higher
- Laravel 11 or 12
- Composer
Install via Composer
```bash
composer require dvictor357/laravel-ai-toolkit
```
Publish Configuration
```bash
php artisan vendor:publish --provider="AIToolkit\AIToolkit\AiToolkitServiceProvider" --tag="config"
```
This creates config/ai-toolkit.php where you can configure your providers.
Environment Variables
Add your API keys to .env:
```env
# Default provider
AI_DEFAULT_PROVIDER=openai

# OpenAI
OPENAI_API_KEY=sk-your-openai-key

# Anthropic
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key

# Groq
GROQ_API_KEY=gsk_your-groq-key
```
Quick Start
Basic Usage
```php
use AIToolkit\AIToolkit\Contracts\AIProviderContract;

// Dependency injection
public function chat(AIProviderContract $provider)
{
    $response = $provider->chat('Tell me a joke about programming');

    return [
        'content' => $response['content'],
        'usage' => $response['usage'], // Token usage stats
        'model' => $response['model'],
    ];
}
```
Using Facades
```php
use AIToolkit\AIToolkit\Facades\AiCache;

// Cache AI responses
$result = AiCache::remember('chat', 'openai', 'Your prompt', function () {
    return app(AIProviderContract::class)->chat('Your prompt');
});
```
Async Jobs
```php
use AIToolkit\AIToolkit\Jobs\AiChatJob;

// Dispatch async job
AiChatJob::dispatch('Generate a report about AI trends', [
    'max_tokens' => 2000,
    'temperature' => 0.7,
], 'unique-result-id');

// Listen for completion
event(new AiChatCompleted($response, 'unique-result-id'));
```
Streaming Responses
```php
// Direct streaming
$stream = $provider->stream('Tell me a story about...');
return $stream; // Returns StreamedResponse

// Broadcasting via Reverb
$stream = $provider->streamBroadcast('Your prompt', [], 'my-channel');
```

```javascript
// Frontend JavaScript
window.Echo.channel('ai-stream')
    .listen('AiResponseChunk', (e) => {
        if (e.chunk === '__START__') {
            // Stream started
        } else if (e.chunk === '__END__') {
            // Stream ended
        } else {
            // Append chunk to UI
            document.getElementById('response').innerHTML += e.chunk;
        }
    });
```
CLI Usage
```bash
# Basic chat
php artisan ai:chat "What's the weather like today?"

# With options
php artisan ai:chat "Explain quantum computing" \
    --provider=anthropic \
    --model=claude-3-5-sonnet-20241022 \
    --max-tokens=1000 \
    --temperature=0.7

# Streaming response
php artisan ai:chat "Continue this story..." --stream

# JSON output
php artisan ai:chat "List the planets" --json
```
Configuration
Provider Settings
```php
// config/ai-toolkit.php
return [
    'default_provider' => env('AI_DEFAULT_PROVIDER', 'openai'),

    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'default_model' => env('OPENAI_DEFAULT_MODEL', 'gpt-4o'),
            'default_max_tokens' => env('OPENAI_DEFAULT_MAX_TOKENS', 1024),
            'default_temperature' => env('OPENAI_DEFAULT_TEMPERATURE', 0.7),
        ],

        'anthropic' => [
            'api_key' => env('ANTHROPIC_API_KEY'),
            'default_model' => env('ANTHROPIC_DEFAULT_MODEL', 'claude-3-5-sonnet-20241022'),
            'default_max_tokens' => env('ANTHROPIC_DEFAULT_MAX_TOKENS', 1024),
            'default_temperature' => env('ANTHROPIC_DEFAULT_TEMPERATURE', 1.0),
        ],

        'groq' => [
            'api_key' => env('GROQ_API_KEY'),
            'default_model' => env('GROQ_DEFAULT_MODEL', 'mixtral-8x7b-32768'),
            'default_max_tokens' => env('GROQ_DEFAULT_MAX_TOKENS', 1024),
            'default_temperature' => env('GROQ_DEFAULT_TEMPERATURE', 0.7),
        ],
    ],
];
```
Cache Settings
```php
'cache' => [
    'enabled' => env('AI_CACHE_ENABLED', true),
    'ttl' => env('AI_CACHE_TTL', 3600), // 1 hour
    'prefix' => env('AI_CACHE_PREFIX', 'ai_toolkit'),
],
```
Queue Settings
```php
'queue' => [
    'connection' => env('AI_QUEUE_CONNECTION', null),
    'timeout' => env('AI_QUEUE_TIMEOUT', 60),
    'tries' => env('AI_QUEUE_TRIES', 3),
],
```
Broadcasting Settings
```php
'broadcasting' => [
    'enabled' => env('AI_BROADCASTING_ENABLED', true),
    'channel' => env('AI_BROADCASTING_CHANNEL', 'ai-stream'),
],
```
Architecture
Provider Pattern
All AI providers implement AIProviderContract:
```php
interface AIProviderContract
{
    public function chat(string $prompt, array $options = []): array;

    public function stream(string $prompt, array $options = []): \Symfony\Component\HttpFoundation\StreamedResponse;

    public function embed(string $text): array;
}
```
Service Architecture
```
┌────────────────────┐
│    Controllers     │
└─────────┬──────────┘
          │
┌─────────▼──────────┐
│ AIProviderContract │
└─────────┬──────────┘
          │
    ┌─────┴─────┐
    │ Providers │
    ├───────────┤
    │ • OpenAI  │
    │ • Claude  │
    │ • Groq    │
    └───────────┘
```
Caching Strategy
```
Request ──┬── Cache Hit  ──▶ Return Cached Response
          │
          └── Cache Miss ──▶ Call API ──▶ Store & Return
```
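The cache-aside flow above can be sketched in a few lines of plain PHP. The `ArrayCache` class here is a hypothetical stand-in for the package's `AiCacheService`, with no TTL or key-prefix handling:

```php
<?php
// Cache-aside sketch: return the cached value on a hit; on a miss,
// run the callback (the "API call"), store its result, and return it.
// Hypothetical stand-in for AiCacheService, not the real service.
class ArrayCache
{
    private array $store = [];

    public function remember(string $key, callable $callback): mixed
    {
        if (array_key_exists($key, $this->store)) {
            return $this->store[$key];           // cache hit
        }

        return $this->store[$key] = $callback(); // cache miss: call & store
    }
}

$cache = new ArrayCache();
$apiCalls = 0;
$fetch = function () use (&$apiCalls) {
    $apiCalls++;               // stands in for the real provider call
    return 'AI response text';
};

$first  = $cache->remember('chat:openai:prompt', $fetch); // miss
$second = $cache->remember('chat:openai:prompt', $fetch); // hit
```

Both calls return the same response, but the callback runs only once; the real service layers TTL expiry and invalidation on top of this pattern.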
Retry Logic
```
Request ──┬── Success ──▶ Return Response
          │
          └── Failure ──▶ Wait (exponential backoff)
                           │
                           ├──▶ Retry (up to max attempts)
                           │
                           └──▶ Final Failure
```
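The retry flow above, with exponential backoff and jitter, can be sketched in plain PHP. This is illustrative only; the package's RetryService exposes the same idea through its own options:

```php
<?php
// Retry sketch: attempt the operation, and on failure sleep for
// baseDelay * 2^attempt seconds plus random jitter before retrying.
// After maxRetries failed retries, rethrow the last exception.
function retryWithBackoff(callable $op, int $maxRetries = 3, float $baseDelay = 1.0): mixed
{
    for ($attempt = 0; ; $attempt++) {
        try {
            return $op(); // success: return immediately
        } catch (\Throwable $e) {
            if ($attempt >= $maxRetries) {
                throw $e; // final failure after max attempts
            }
            // exponential backoff plus up to 100 ms of jitter
            $delay = $baseDelay * (2 ** $attempt) + mt_rand(0, 100) / 1000;
            usleep((int) ($delay * 1_000_000));
        }
    }
}

// Succeeds on the third attempt; a tiny base delay keeps the demo fast
$attempts = 0;
$result = retryWithBackoff(function () use (&$attempts) {
    if (++$attempts < 3) {
        throw new RuntimeException('transient failure');
    }
    return 'ok';
}, maxRetries: 3, baseDelay: 0.001);
```

The jitter spreads retries out in time, so many clients recovering from the same outage don't all hit the API at the same instant.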
Testing
Run the test suite:

```bash
composer test
```

Run with coverage:

```bash
composer test-coverage
```
The package includes:
- Unit Tests - Individual component testing
- Feature Tests - Integration testing
- Provider Tests - Mock API testing
- Service Tests - Caching, retry logic, and jobs
API Reference
AIProviderContract
chat(string $prompt, array $options = []): array
Send a chat message to the AI provider.
Parameters:
- $prompt - The user message
- $options - Additional parameters (model, max_tokens, temperature, etc.)
Returns:
```php
[
    'content' => 'AI response text',
    'usage' => [
        'prompt_tokens' => 150,
        'completion_tokens' => 50,
        'total_tokens' => 200,
    ],
    'model' => 'gpt-4o',
]
```
stream(string $prompt, array $options = []): StreamedResponse
Stream a chat response in real-time.
embed(string $text): array
Generate text embeddings (OpenAI only).
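Embedding vectors are typically compared with cosine similarity. The plain-PHP sketch below shows the idea; the vectors are toy values, not real embed() output:

```php
<?php
// Cosine similarity between two equal-length vectors: the dot product
// divided by the product of the vector norms. Scores near 1.0 mean
// the texts behind the embeddings are semantically similar.
function cosineSimilarity(array $a, array $b): float
{
    $dot = $normA = $normB = 0.0;
    foreach ($a as $i => $v) {
        $dot   += $v * $b[$i];
        $normA += $v * $v;
        $normB += $b[$i] * $b[$i];
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

// Identical vectors score 1.0; orthogonal vectors score 0.0
$sim = cosineSimilarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]);
```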
AiCacheService
remember(string $operation, string $provider, string $input, callable $callback, array $options = []): mixed
Cache AI responses with automatic TTL.
generateKey(string $operation, string $provider, string $input, array $options = []): string
Generate consistent cache keys.
invalidatePattern(string $pattern): int
Invalidate cache by pattern.
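A deterministic key scheme in the spirit of generateKey() can be sketched as follows. The function name and key format here are hypothetical; the package's actual implementation may differ:

```php
<?php
// Deterministic cache keys: sort the options so logically identical
// requests produce identical keys, then hash the whole payload.
// Hypothetical sketch, not the package's actual implementation.
function makeCacheKey(string $operation, string $provider, string $input, array $options = []): string
{
    ksort($options); // make the key insensitive to option order

    $payload = json_encode([$operation, $provider, $input, $options]);

    return "ai_toolkit:{$operation}:{$provider}:" . hash('sha256', $payload);
}

// Same request with options in a different order yields the same key
$k1 = makeCacheKey('chat', 'openai', 'Hello', ['temperature' => 0.7, 'max_tokens' => 100]);
$k2 = makeCacheKey('chat', 'openai', 'Hello', ['max_tokens' => 100, 'temperature' => 0.7]);
$k3 = makeCacheKey('chat', 'openai', 'Goodbye');
```

Keeping the operation and provider as plain-text prefixes is what makes pattern-based invalidation (e.g. everything under `ai_toolkit:chat:openai:`) possible.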
RetryService
execute(callable $callback, array $options = []): mixed
Execute operations with retry logic.
Options:
- max_retries - Maximum retry attempts (default: 3)
- base_delay - Base delay in seconds (default: 1)
- strategy - 'exponential', 'linear', or 'fixed'
- circuit_breaker - Enable circuit breaker (default: true)
Security
- API Key Validation - Validates provider keys on initialization
- Input Sanitization - All inputs are sanitized before API calls
- Rate Limiting - Built-in rate limit handling
- Error Handling - No sensitive data in error messages
- Caching - Cached responses don't include API keys
Performance Tips
1. Enable Caching
```php
config(['ai.cache.enabled' => true]);
```
2. Use Async Jobs for Long Operations
```php
AiChatJob::dispatch($prompt, $options, $resultId);
```
3. Monitor Cache Hit Rates
```php
$stats = AiCache::getStats();
Log::info('Cache stats', $stats);
```
4. Use Circuit Breakers
```php
// Circuit breaker automatically opens after 5 failures
// Resets after 60 seconds
```
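The behaviour described above (open after a run of consecutive failures, reset after a cooldown) can be sketched as a small state machine. The class below is illustrative, not the package's internal implementation:

```php
<?php
// Circuit breaker sketch: after $threshold consecutive failures the
// circuit "opens" and calls are rejected immediately, protecting the
// failing API. Once $resetAfter seconds pass, the next call is let
// through again. Hypothetical names; not the package's internals.
class CircuitBreaker
{
    private int $failures = 0;
    private ?float $openedAt = null;

    public function __construct(
        private int $threshold = 5,
        private float $resetAfter = 60.0,
    ) {}

    public function call(callable $op): mixed
    {
        if ($this->openedAt !== null) {
            if (microtime(true) - $this->openedAt < $this->resetAfter) {
                throw new RuntimeException('circuit open');
            }
            $this->openedAt = null; // cooldown elapsed: allow a probe call
        }

        try {
            $result = $op();
            $this->failures = 0;    // success closes the circuit
            return $result;
        } catch (\Throwable $e) {
            if (++$this->failures >= $this->threshold) {
                $this->openedAt = microtime(true); // open the circuit
            }
            throw $e;
        }
    }
}
```

While the circuit is open, the wrapped operation is never invoked, so a provider outage fails fast instead of burning retries.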
Advanced Usage
Custom Providers
Create a custom provider by implementing the contract:
```php
namespace App\AI;

use AIToolkit\AIToolkit\Contracts\AIProviderContract;

class CustomProvider implements AIProviderContract
{
    public function chat(string $prompt, array $options = []): array
    {
        // Your custom implementation
    }

    // ... implement other methods
}
```
Register in your service provider:
```php
$this->app->singleton(AIProviderContract::class, function () {
    return new CustomProvider();
});
```
Event Listeners
```php
// Listen for AI chat completion
Event::listen(AiChatCompleted::class, function (AiChatCompleted $event) {
    if ($event->failed) {
        Log::error('AI chat failed', $event->broadcastWith());
    } else {
        Log::info('AI chat completed', $event->broadcastWith());
    }
});
```
Custom Caching
```php
$cacheService = app('ai-cache');

// Pre-warm cache
$cacheService->warmCache([
    [
        'prompt' => 'What is Laravel?',
        'operations' => ['chat'],
        'options' => ['temperature' => 0.7],
    ],
]);
```
Troubleshooting
Common Issues
1. "Invalid AI provider" Error
- Check your ai.default_provider configuration
- Ensure the provider name matches exactly: 'openai', 'anthropic', or 'groq'
2. API Key Not Found
- Verify environment variables are set
- Check for typos in variable names
- Ensure the config file is published
3. Streaming Not Working
- Enable broadcasting in config
- Set up Laravel Reverb or Pusher
- Check browser console for WebSocket errors
4. Queue Jobs Not Running
- Ensure queue driver is configured
- Run php artisan queue:work
- Check Laravel logs for errors
Debug Mode
Enable detailed logging:
```php
config([
    'ai.logging.enabled' => true,
    'ai.logging.channel' => 'stack',
    'app.debug' => true,
]);
```
Contributing
- Fork the repository
- Create a feature branch
- Write tests for new functionality
- Ensure all tests pass
- Submit a pull request
Development Setup
```bash
git clone https://github.com/dvictor357/laravel-ai-toolkit.git
cd laravel-ai-toolkit
composer install
cp .env.example .env
php artisan key:generate
composer test
```
License
This package is open-sourced software licensed under the MIT license.
Support
- Issues: GitHub Issues
Acknowledgments
- OpenAI for their excellent APIs
- Anthropic for Claude
- Groq for lightning-fast inference
- Laravel for the amazing framework
Filament Admin Panel Integration
The Laravel AI Toolkit includes optional Filament Admin Panel integration for a beautiful web interface to manage AI providers, monitor usage, and test AI operations.
Quick Setup
```bash
# 1. Install Filament (if not already installed)
composer require filament/filament:"^4.0"

# 2. Publish and run migration
php artisan vendor:publish --provider="AIToolkit\AIToolkit\AIToolkitServiceProvider" --tag="ai-toolkit-migrations"
php artisan migrate

# 3. Seed initial providers (optional)
php artisan db:seed --class="AIToolkit\AIToolkit\Database\Seeders\AIProviderSeeder"

# 4. Access the admin panel at /admin
```
Features
- AI Provider Management - Configure multiple providers (OpenAI, Anthropic, Groq)
- Real-time Chat Dashboard - Interactive AI testing interface
- Usage Analytics - Monitor requests, cache hit rates, response times
- Provider Testing - Built-in connection testing for all providers
- Encrypted Storage - API keys encrypted in database
- Dynamic Configuration - Database-driven provider management
Database-Driven Architecture
The Filament integration uses a database-first approach:
- Providers stored in database (not config files)
- Encrypted API keys for security
- Dynamic configuration via admin UI
- Seeder creates initial providers from .env values
Documentation
Full documentation available in: docs/FILAMENT_INTEGRATION.md
Includes detailed setup, configuration, troubleshooting, and advanced usage examples.
Made with ❤️ for the Laravel community