ai-agent/ai-agent
A powerful Laravel package for AI integration with multiple providers
Requires
- php: ^8.1
- guzzlehttp/guzzle: ^7.5
- illuminate/console: ^8.0|^9.0|^10.0|^12.0
- illuminate/contracts: ^8.0|^9.0|^10.0|^12.0
- illuminate/support: ^8.0|^9.0|^10.0|^12.0
Requires (Dev)
- orchestra/testbench: ^8.36
- pestphp/pest: ^2.0
A Laravel package for integrating with various AI providers including OpenAI, Anthropic and Google's Gemini.
Features
- 🤖 Simple, unified API for different AI providers
- 🔄 Support for text generation, chat completions, and embeddings
- 📊 Built-in logging of AI interactions
- 🚦 Rate limiting for API requests
- 🔌 Easy to extend with new AI providers
- 🔧 Blade directives for simple template integration
- 🚀 HTTP API endpoints for integration with frontend frameworks
- 🧠 Memory caching with configurable TTL
- 📝 Comprehensive logging for auditing and debugging
Table of Contents
- Installation
- Configuration
- Basic Usage
- Blade Directives
- Advanced Usage
- HTTP API Endpoints
- Extending with Custom Providers
- Artisan Commands
- Troubleshooting
- API Reference
- Testing
- License
Installation
You can install the package via Composer:

```bash
composer require ai-agent/ai-agent
```
Publish the configuration file:
php artisan vendor:publish --provider="AiAgent\Providers\AiAgentServiceProvider" --tag="ai-agent-config"
Publish migrations (optional):
php artisan vendor:publish --provider="AiAgent\Providers\AiAgentServiceProvider" --tag="ai-agent-migrations"
Run migrations to create the AI interactions table:
```bash
php artisan migrate
```
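Once the migration has run, logged interactions can be inspected directly from the database. A minimal sketch, assuming the migration creates an `ai_interactions` table (check the published migration for the actual table and column names):

```php
use Illuminate\Support\Facades\DB;

// Hypothetical table name -- verify it against the published migration
$recent = DB::table('ai_interactions')
    ->latest('created_at')
    ->limit(10)
    ->get();

foreach ($recent as $interaction) {
    dump($interaction);
}
```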
Configuration
Configure your AI providers in the `config/ai-agent.php` file:

```php
return [
    'default_provider' => env('AI_DEFAULT_PROVIDER', 'openai'),

    'providers' => [
        'openai' => [
            'enabled' => true,
            'adapter' => \AiAgent\Adapters\OpenAiAdapter::class,
            'api_key' => env('OPENAI_API_KEY'),
            'model' => env('OPENAI_MODEL', 'gpt-4-turbo-preview'),
            'embedding_model' => env('OPENAI_EMBEDDING_MODEL', 'text-embedding-3-small'),
        ],
        'anthropic' => [
            'enabled' => true,
            'adapter' => \AiAgent\Adapters\AnthropicAdapter::class,
            'api_key' => env('ANTHROPIC_API_KEY'),
            'model' => env('ANTHROPIC_MODEL', 'claude-3-opus-20240229'),
        ],
        'gemini' => [
            'enabled' => true,
            'adapter' => \AiAgent\Adapters\GeminiAdapter::class,
            'api_key' => env('GEMINI_API_KEY'),
            'model' => env('GEMINI_MODEL', 'gemini-pro'),
            'embedding_model' => env('GEMINI_EMBEDDING_MODEL', 'embedding-001'),
        ],
    ],

    'logging' => [
        'enabled' => env('AI_LOGGING_ENABLED', true),
        'channel' => env('AI_LOGGING_CHANNEL', 'stack'),
    ],

    'rate_limiting' => [
        'enabled' => env('AI_RATE_LIMITING_ENABLED', true),
        'max_requests' => env('AI_RATE_LIMITING_MAX_REQUESTS', 60),
        'decay_minutes' => env('AI_RATE_LIMITING_DECAY_MINUTES', 1),
    ],

    'cache' => [
        'enabled' => env('AI_CACHE_ENABLED', true),
        'ttl' => env('AI_CACHE_TTL', 3600), // seconds
        'prefix' => env('AI_CACHE_PREFIX', 'ai_agent_'),
    ],

    'routes' => [
        'enabled' => env('AI_ROUTES_ENABLED', true),
        'prefix' => env('AI_ROUTES_PREFIX', 'api/ai'),
        'middleware' => ['api', 'throttle:60,1'],
    ],
];
```
Basic Usage
Text Generation
```php
use AiAgent\Facades\AiAgent;

// Using the default provider
$response = AiAgent::generate('Write a haiku about programming');

// Using a specific provider
$response = AiAgent::provider('anthropic')->generate('Write a haiku about programming');

// Using with options
$response = AiAgent::generate('Explain quantum computing', [
    'temperature' => 0.7,
    'max_tokens' => 500,
]);
```
Chat Completions
```php
$messages = [
    ['role' => 'user', 'content' => 'Hello, who are you?'],
    ['role' => 'assistant', 'content' => 'I am an AI assistant. How can I help you today?'],
    ['role' => 'user', 'content' => 'Tell me a joke about programming.'],
];

$response = AiAgent::chat($messages);

// Using a specific provider with options
$response = AiAgent::provider('gemini')->chat($messages, [
    'temperature' => 0.9,
]);
```
Embeddings
```php
// Single text embedding
$embedding = AiAgent::embeddings('Convert this text to a vector representation');

// Multiple text embeddings
$embeddings = AiAgent::embeddings([
    'This is the first text to embed',
    'This is the second text to embed',
]);
```
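Embedding vectors are usually compared with cosine similarity. Here is a small helper in plain PHP, assuming `embeddings()` returns one numeric vector per input (verify the exact return shape for your provider):

```php
use AiAgent\Facades\AiAgent;

// Cosine similarity between two equal-length numeric vectors
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

$vectors = AiAgent::embeddings([
    'How do I reset my password?',
    'Password reset instructions',
]);

$score = cosineSimilarity($vectors[0], $vectors[1]); // closer to 1.0 means more similar
```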
Blade Directives
The package provides several Blade directives for easy template integration:
```blade
{{-- First, add the styles for loading indicators in your layout --}}
@aiStyles()

{{-- Text generation with loading indicator --}}
@ai('Generate a tagline for a tech company')

{{-- Chat completion with loading indicator --}}
@aichat([
    ['role' => 'user', 'content' => 'Write a short bio for a software developer']
])

{{-- With a specific provider and the refresh option --}}
@ai('Generate a tagline for a tech company', 'anthropic', true)
```
The directives include a loading indicator that is displayed while the AI request is being processed, providing a better user experience.
Advanced Usage
Custom Options
Each AI provider supports different options that can be passed to customize the request:
```php
// OpenAI options
$options = [
    'temperature' => 0.7,       // Controls randomness (0.0 to 1.0)
    'max_tokens' => 500,        // Maximum length of the response
    'top_p' => 0.9,             // Controls diversity via nucleus sampling
    'frequency_penalty' => 0.5, // Reduces repetition of token sequences
    'presence_penalty' => 0.5,  // Encourages discussing new topics
];

$response = AiAgent::provider('openai')->generate('Write a story', $options);
```
Error Handling
Handle potential errors when working with AI providers:
```php
use AiAgent\Exceptions\ProviderNotFoundException;
use AiAgent\Exceptions\AdapterNotFoundException;

try {
    $response = AiAgent::provider('unknown')->generate('Test prompt');
} catch (ProviderNotFoundException $e) {
    // Handle provider not found error
    report($e);
    $response = 'Sorry, the AI provider is not available.';
} catch (\Exception $e) {
    // Handle general errors
    report($e);
    $response = 'Sorry, there was an error processing your request.';
}
```
Logging
Enable or disable logging at runtime:
```php
use AiAgent\Facades\AiAgent;

// Get the logger service
$logger = app(\AiAgent\Services\AiLoggerService::class);

// Disable logging for a specific operation
$logger->setEnabled(false);
$response = AiAgent::generate('This won\'t be logged');
$logger->setEnabled(true);

// Check if logging is enabled
if ($logger->isEnabled()) {
    // Do something
}
```
Rate Limiting
The package includes built-in rate limiting to prevent excessive API usage. You can configure this in the `config/ai-agent.php` file:

```php
'rate_limiting' => [
    'enabled' => true,
    'max_requests' => 60, // Maximum number of requests
    'decay_minutes' => 1, // Time window in minutes
],
```
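The package applies these limits internally. If you also want to guard your own calling code (for example, per authenticated user), Laravel's `RateLimiter` facade works independently of the package; a minimal sketch:

```php
use AiAgent\Facades\AiAgent;
use Illuminate\Support\Facades\RateLimiter;

$key = 'ai-generate:' . auth()->id();

// Allow at most 60 attempts per 60 seconds for this user
$result = RateLimiter::attempt($key, 60, function () {
    return AiAgent::generate('Summarise this article');
}, 60);

if ($result === false) {
    // Limit reached -- back off, queue the request, or return a cached response
}
```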
Caching Responses
Cache AI responses to reduce API costs and improve performance:
```php
use AiAgent\Facades\AiAgent;
use Illuminate\Support\Facades\Cache;

// With custom caching
$cacheKey = 'ai_response_' . md5('My prompt');

if (Cache::has($cacheKey)) {
    $response = Cache::get($cacheKey);
} else {
    $response = AiAgent::generate('My prompt');
    Cache::put($cacheKey, $response, now()->addHours(24));
}
```

```blade
{{-- Using built-in caching with Blade directives --}}
{{-- The third parameter (true) forces a refresh, bypassing the cache --}}
@ai('Generate a quote', 'openai', false) {{-- Uses cache if available --}}
```
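The same pattern can be written more concisely with Laravel's `Cache::remember`, which only calls the provider on a cache miss:

```php
use AiAgent\Facades\AiAgent;
use Illuminate\Support\Facades\Cache;

$response = Cache::remember('ai_response_' . md5('My prompt'), now()->addHours(24), function () {
    return AiAgent::generate('My prompt');
});
```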
HTTP API Endpoints
The package provides API endpoints for integration with frontend frameworks:
Get all providers
```http
GET /api/ai/providers
```
Response:
{ "providers": ["openai", "anthropic", "gemini"], "default": "openai" }
Generate text
```http
POST /api/ai/generate
```
Request:
{ "prompt": "Explain artificial intelligence", "provider": "openai", // optional "options": { "temperature": 0.7, "max_tokens": 500 } }
Response:
{ "result": "Artificial intelligence (AI) refers to..." }
Chat completion
```http
POST /api/ai/chat
```
Request:
{ "messages": [ {"role": "user", "content": "What is Laravel?"} ], "provider": "anthropic", // optional "options": { "temperature": 0.7 } }
Generate embeddings
```http
POST /api/ai/embeddings
```
Request:
{ "input": "Convert this text to a vector", "provider": "openai", // optional "options": { "model": "text-embedding-3-small" // optional } }
Extending with Custom Providers
You can create your own adapters to integrate additional AI providers:
1. Create a new adapter class that extends `BaseAdapter`:
```php
namespace App\Adapters;

use AiAgent\Adapters\BaseAdapter;

class CustomAdapter extends BaseAdapter
{
    public function generate(string $prompt, array $options = []): string
    {
        // Implement your custom logic to generate text
        // Example:
        $api_key = $this->getConfig('api_key');
        $model = $this->getConfig('model', 'default-model');

        // Make the API request, assign the generated text to $response...
        return $response;
    }

    public function chat(array $messages, array $options = []): array
    {
        // Implement chat functionality
    }

    public function embeddings($input, array $options = []): array
    {
        // Implement embeddings functionality
    }
}
```
2. Register your custom adapter in the config file:
```php
'providers' => [
    // Other providers...

    'custom' => [
        'enabled' => true,
        'adapter' => \App\Adapters\CustomAdapter::class,
        'api_key' => env('CUSTOM_API_KEY'),
        'model' => env('CUSTOM_MODEL', 'default-model'),
        // Add any other configuration your adapter needs
    ],
],
```
3. Use your custom provider:
```php
$response = AiAgent::provider('custom')->generate('Hello, custom AI!');
```
Artisan Commands
The package provides helpful Artisan commands:
```bash
# List all available AI providers with their features
php artisan ai:providers
```
Troubleshooting
Common Issues
- **API Key Not Found**

  Make sure you've set the appropriate API keys in your `.env` file:

  ```env
  OPENAI_API_KEY=your-openai-key
  ANTHROPIC_API_KEY=your-anthropic-key
  GEMINI_API_KEY=your-gemini-key
  ```

- **Rate Limiting Errors**

  If you're hitting rate limits, consider:

  - Increasing the rate limit in the configuration
  - Implementing better caching strategies
  - Optimizing your prompts to reduce token usage

- **Provider Not Found Error**

  Ensure the provider is correctly configured in `config/ai-agent.php` and that the `enabled` flag is set to `true` (see the snippet after this list for a quick way to check).
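A quick way to verify a provider's configuration is to inspect the merged config from `php artisan tinker`:

```php
config('ai-agent.default_provider');         // "openai"
config('ai-agent.providers.openai.enabled'); // should be true
config('ai-agent.providers.openai.api_key'); // should not be null
```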
Debugging
Enable more detailed logging to troubleshoot issues:
```env
# In your .env file
AI_LOGGING_ENABLED=true
AI_LOGGING_CHANNEL=stack
```
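If you want AI activity in its own log file, you can define a dedicated channel in `config/logging.php` and point `AI_LOGGING_CHANNEL` at it (a sketch using Laravel's standard `single` driver):

```php
// config/logging.php
'channels' => [
    // ...
    'ai' => [
        'driver' => 'single',
        'path'   => storage_path('logs/ai.log'),
        'level'  => 'debug',
    ],
],
```

Then set `AI_LOGGING_CHANNEL=ai` in your `.env` file.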
API Reference
The package provides the following core API:
AiAgent Facade
- `AiAgent::provider(string $name)`: Get a specific AI provider
- `AiAgent::generate(string $prompt, array $options = [], string $provider = null)`: Generate text
- `AiAgent::chat(array $messages, array $options = [], string $provider = null)`: Get a chat completion
- `AiAgent::embeddings(string|array $input, array $options = [], string $provider = null)`: Generate embeddings
- `AiAgent::getProviderNames()`: Get all available provider names
AiLoggerService
- `log(string $provider, string $type, $input, $output, array $options = [], int $tokensUsed = 0, float $duration = 0, bool $success = true, string $error = null, ?int $userId = null)`: Log an AI interaction
- `isEnabled()`: Check if logging is enabled
- `setEnabled(bool $enabled)`: Enable or disable logging
Testing
```bash
composer test
```
License
This package is open-sourced software licensed under the MIT license.
Issues
If you encounter any issues or have suggestions, please submit them through our GitHub Issues page.
Author
Created by Arseno25