aitoolkit/laravel-ai-toolkit


Latest version: 1.1.0 (2025-11-01 11:28 UTC)


A professional, production-ready Laravel package for integrating multiple AI providers with advanced features like async processing, real-time streaming, intelligent caching, and error handling.


✨ Features

🤖 Multi-Provider AI Support

  • OpenAI - GPT-4, GPT-3.5, and embeddings
  • Anthropic - Claude 4 Sonnet, Claude 3 Opus, and more
  • Groq - Ultra-fast inference with Mixtral and Llama models
  • Easy provider switching via configuration

⚡ Performance & Reliability

  • Async Queue Jobs - Non-blocking AI operations
  • Intelligent Caching - Redis/Database cache with TTL and invalidation
  • Circuit Breaker Pattern - Automatic failure protection
  • Exponential Backoff - Smart retry logic with jitter
  • Rate Limiting - Built-in API rate limit handling

🔄 Real-Time Features

  • Streaming Responses - Real-time AI response streaming
  • Laravel Reverb Integration - WebSocket broadcasting for live updates
  • Event-Driven Architecture - Dispatch events for UI updates

🔧 Developer Experience

  • Type-Safe Contracts - Interface-driven provider architecture
  • Comprehensive CLI - Command-line tools for testing and management
  • Laravel Facades - Easy dependency injection
  • Rich Configuration - Environment-based provider settings
  • Extensible Design - Easy to add new providers

📦 Installation

Requirements

  • PHP 8.3 or higher
  • Laravel 11 or 12
  • Composer

Install via Composer

composer require aitoolkit/laravel-ai-toolkit

Publish Configuration

php artisan vendor:publish --provider="AIToolkit\\AIToolkit\\AiToolkitServiceProvider" --tag="config"

This creates config/ai-toolkit.php where you can configure your providers.

Environment Variables

Add your API keys to .env:

# Default provider
AI_DEFAULT_PROVIDER=openai

# OpenAI
OPENAI_API_KEY=sk-your-openai-key

# Anthropic
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key

# Groq
GROQ_API_KEY=gsk_your-groq-key

🚀 Quick Start

Basic Usage

use AIToolkit\AIToolkit\Contracts\AIProviderContract;

// Dependency injection
public function chat(AIProviderContract $provider)
{
    $response = $provider->chat('Tell me a joke about programming');

    return [
        'content' => $response['content'],
        'usage' => $response['usage'], // Token usage stats
        'model' => $response['model'],
    ];
}

Using Facades

use AIToolkit\AIToolkit\Facades\AiCache;

// Cache AI responses
$result = AiCache::remember('chat', 'openai', 'Your prompt', function () {
    return app(AIProviderContract::class)->chat('Your prompt');
});

Async Jobs

use AIToolkit\AIToolkit\Jobs\AiChatJob;

// Dispatch async job
AiChatJob::dispatch('Generate a report about AI trends', [
    'max_tokens' => 2000,
    'temperature' => 0.7
], 'unique-result-id');

// On completion, the job dispatches an AiChatCompleted event carrying the
// response and your 'unique-result-id' (see Event Listeners under Advanced Usage)

Streaming Responses

// Direct streaming
$stream = $provider->stream('Tell me a story about...');

return $stream; // Returns StreamedResponse

// Broadcasting via Reverb
$stream = $provider->streamBroadcast('Your prompt', [], 'my-channel');

// Frontend JavaScript
window.Echo.channel('ai-stream')
    .listen('AiResponseChunk', (e) => {
        if (e.chunk === '__START__') {
            // Stream started
        } else if (e.chunk === '__END__') {
            // Stream ended
        } else {
            // Append chunk to UI
            document.getElementById('response').innerHTML += e.chunk;
        }
    });

CLI Usage

# Basic chat
php artisan ai:chat "What's the weather like today?"

# With options
php artisan ai:chat "Explain quantum computing" \
    --provider=anthropic \
    --model=claude-3-5-sonnet-20241022 \
    --max-tokens=1000 \
    --temperature=0.7

# Streaming response
php artisan ai:chat "Continue this story..." --stream

# JSON output
php artisan ai:chat "List the planets" --json

โš™๏ธ Configuration

Provider Settings

// config/ai-toolkit.php
return [
    'default_provider' => env('AI_DEFAULT_PROVIDER', 'openai'),

    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'default_model' => env('OPENAI_DEFAULT_MODEL', 'gpt-4o'),
            'default_max_tokens' => env('OPENAI_DEFAULT_MAX_TOKENS', 1024),
            'default_temperature' => env('OPENAI_DEFAULT_TEMPERATURE', 0.7),
        ],

        'anthropic' => [
            'api_key' => env('ANTHROPIC_API_KEY'),
            'default_model' => env('ANTHROPIC_DEFAULT_MODEL', 'claude-3-5-sonnet-20241022'),
            'default_max_tokens' => env('ANTHROPIC_DEFAULT_MAX_TOKENS', 1024),
            'default_temperature' => env('ANTHROPIC_DEFAULT_TEMPERATURE', 1.0),
        ],

        'groq' => [
            'api_key' => env('GROQ_API_KEY'),
            'default_model' => env('GROQ_DEFAULT_MODEL', 'mixtral-8x7b-32768'),
            'default_max_tokens' => env('GROQ_DEFAULT_MAX_TOKENS', 1024),
            'default_temperature' => env('GROQ_DEFAULT_TEMPERATURE', 0.7),
        ],
    ],
];

Cache Settings

'cache' => [
    'enabled' => env('AI_CACHE_ENABLED', true),
    'ttl' => env('AI_CACHE_TTL', 3600), // 1 hour
    'prefix' => env('AI_CACHE_PREFIX', 'ai_toolkit'),
],

Queue Settings

'queue' => [
    'connection' => env('AI_QUEUE_CONNECTION', null),
    'timeout' => env('AI_QUEUE_TIMEOUT', 60),
    'tries' => env('AI_QUEUE_TRIES', 3),
],

Broadcasting Settings

'broadcasting' => [
    'enabled' => env('AI_BROADCASTING_ENABLED', true),
    'channel' => env('AI_BROADCASTING_CHANNEL', 'ai-stream'),
],

๐Ÿ—๏ธ Architecture

Provider Pattern

All AI providers implement AIProviderContract:

interface AIProviderContract
{
    public function chat(string $prompt, array $options = []): array;
    public function stream(string $prompt, array $options = []): \Symfony\Component\HttpFoundation\StreamedResponse;
    public function embed(string $text): array;
}

Service Architecture

┌──────────────────────┐
│     Controllers      │
└──────────┬───────────┘
           │
┌──────────▼───────────┐
│  AIProviderContract  │
└──────────┬───────────┘
           │
     ┌─────┴─────┐
     │ Providers │
     ├───────────┤
     │ • OpenAI  │
     │ • Claude  │
     │ • Groq    │
     └───────────┘

Caching Strategy

Request ──┐
          │
          ├─→ Cache Hit ──→ Return Cached Response
          │
          └─→ Cache Miss ──→ Call API ──→ Store & Return
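
The flow above is the classic cache-aside ("remember") pattern. A minimal standalone sketch of the idea (illustrative only; the package's AiCacheService builds this on Laravel's cache backend with TTL and key prefixes, and `rememberAi` is a name invented here):

```php
// Cache-aside ("remember") sketch with an in-memory store. Illustrative only:
// the package's AiCacheService uses Laravel's cache with TTL support instead.
function rememberAi(array &$cache, string $key, callable $resolve)
{
    if (array_key_exists($key, $cache)) {
        return $cache[$key]; // Cache hit: return the stored response
    }

    return $cache[$key] = $resolve(); // Cache miss: call the API, store, return
}

$cache = [];
$calls = 0;
$fetch = function () use (&$calls) {
    $calls++;
    return 'AI response'; // Stand-in for a real provider call
};

rememberAi($cache, 'chat:openai:prompt', $fetch); // miss: invokes $fetch
rememberAi($cache, 'chat:openai:prompt', $fetch); // hit: no second invocation
```

After both calls, `$calls` is still 1, which is exactly the saving the diagram describes.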

Retry Logic

Request ──┐
          │
          ├─→ Success ──→ Return Response
          │
          └─→ Failure ──→ Wait (exponential backoff)
                     │
                     ├─→ Retry (up to max attempts)
                     │
                     └─→ Final Failure
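
In plain PHP the same flow looks roughly like this (a simplified sketch, not the package's RetryService, which adds jitter, strategy selection, and circuit breaking; `retryWithBackoff` is a name invented here for illustration):

```php
// Simplified retry loop with exponential backoff. Illustrative only.
function retryWithBackoff(callable $operation, int $maxRetries = 3, float $baseDelay = 1.0)
{
    $attempt = 0;
    while (true) {
        try {
            return $operation(); // Success: return the response
        } catch (\Exception $e) {
            if (++$attempt > $maxRetries) {
                throw $e; // Final failure: max attempts exhausted
            }
            // Wait baseDelay * 2^(attempt-1) seconds: 1s, 2s, 4s, ...
            usleep((int) ($baseDelay * (2 ** ($attempt - 1)) * 1_000_000));
        }
    }
}
```

Usage would look like `$response = retryWithBackoff(fn () => $provider->chat($prompt));`.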

🧪 Testing

Run the test suite:

composer test

Run with coverage:

composer test-coverage

The package includes:

  • Unit Tests - Individual component testing
  • Feature Tests - Integration testing
  • Provider Tests - Mock API testing
  • Service Tests - Caching, retry logic, and jobs

📚 API Reference

AIProviderContract

chat(string $prompt, array $options = []): array

Send a chat message to the AI provider.

Parameters:

  • $prompt - The user message
  • $options - Additional parameters (model, max_tokens, temperature, etc.)

Returns:

[
    'content' => 'AI response text',
    'usage' => [
        'prompt_tokens' => 150,
        'completion_tokens' => 50,
        'total_tokens' => 200,
    ],
    'model' => 'gpt-4o',
]

stream(string $prompt, array $options = []): StreamedResponse

Stream a chat response in real-time.

embed(string $text): array

Generate text embeddings (OpenAI only).

AiCacheService

remember(string $operation, string $provider, string $input, callable $callback, array $options = []): mixed

Cache AI responses with automatic TTL.

generateKey(string $operation, string $provider, string $input, array $options = []): string

Generate consistent cache keys.
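
One common way to get consistent keys is to hash the normalized inputs. A sketch of the idea (illustrative only; the service's actual key format and configurable prefix may differ, and `aiCacheKey` is a name invented here):

```php
// Sketch of deterministic cache-key generation. Illustrative only; the real
// AiCacheService key format (and its configurable prefix) may differ.
function aiCacheKey(string $operation, string $provider, string $input, array $options = []): string
{
    ksort($options); // Normalize option order so equivalent requests share a key

    return 'ai_toolkit:' . sha1(json_encode([$operation, $provider, $input, $options]));
}
```

Because the options are sorted before hashing, `['temperature' => 0.7, 'model' => 'gpt-4o']` and `['model' => 'gpt-4o', 'temperature' => 0.7]` produce the same key.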

invalidatePattern(string $pattern): int

Invalidate cache by pattern.

RetryService

execute(callable $callback, array $options = []): mixed

Execute operations with retry logic.

Options:

  • max_retries - Maximum retry attempts (default: 3)
  • base_delay - Base delay in seconds (default: 1)
  • strategy - 'exponential', 'linear', or 'fixed'
  • circuit_breaker - Enable circuit breaker (default: true)

🔒 Security

  • API Key Validation - Validates provider keys on initialization
  • Input Sanitization - All inputs are sanitized before API calls
  • Rate Limiting - Built-in rate limit handling
  • Error Handling - No sensitive data in error messages
  • Caching - Cached responses don't include API keys

🚀 Performance Tips

1. Enable Caching

config(['ai.cache.enabled' => true]);

2. Use Async Jobs for Long Operations

AiChatJob::dispatch($prompt, $options, $resultId);

3. Monitor Cache Hit Rates

$stats = AiCache::getStats();
Log::info('Cache stats', $stats);

4. Use Circuit Breakers

// Circuit breaker automatically opens after 5 failures
// Resets after 60 seconds

📖 Advanced Usage

Custom Providers

Create a custom provider by implementing the contract:

namespace App\AI;

use AIToolkit\AIToolkit\Contracts\AIProviderContract;

class CustomProvider implements AIProviderContract
{
    public function chat(string $prompt, array $options = []): array
    {
        // Your custom implementation
    }

    // ... implement other methods
}

Register in your service provider:

$this->app->singleton(AIProviderContract::class, function () {
    return new CustomProvider();
});

Event Listeners

// Listen for AI chat completion
Event::listen(AiChatCompleted::class, function (AiChatCompleted $event) {
    if ($event->failed) {
        Log::error('AI chat failed', $event->broadcastWith());
    } else {
        Log::info('AI chat completed', $event->broadcastWith());
    }
});

Custom Caching

$cacheService = app('ai-cache');

// Pre-warm cache
$cacheService->warmCache([
    [
        'prompt' => 'What is Laravel?',
        'operations' => ['chat'],
        'options' => ['temperature' => 0.7]
    ]
]);

๐Ÿ› Troubleshooting

Common Issues

1. "Invalid AI provider" Error

  • Check your ai.default_provider configuration
  • Ensure the provider name matches exactly: 'openai', 'anthropic', or 'groq'

2. API Key Not Found

  • Verify environment variables are set
  • Check for typos in variable names
  • Ensure the config file is published

3. Streaming Not Working

  • Enable broadcasting in config
  • Set up Laravel Reverb or Pusher
  • Check browser console for WebSocket errors

4. Queue Jobs Not Running

  • Ensure queue driver is configured
  • Run php artisan queue:work
  • Check Laravel logs for errors

Debug Mode

Enable detailed logging:

config([
    'ai.logging.enabled' => true,
    'ai.logging.channel' => 'stack',
    'app.debug' => true,
]);

๐Ÿค Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Write tests for new functionality
  4. Ensure all tests pass
  5. Submit a pull request

Development Setup

git clone https://github.com/dvictor357/laravel-ai-toolkit.git
cd laravel-ai-toolkit
composer install
cp .env.example .env
php artisan key:generate
composer test

📄 License

This package is open-sourced software licensed under the MIT license.


๐ŸŽ›๏ธ Filament Admin Panel Integration

The Laravel AI Toolkit includes optional Filament Admin Panel integration for a beautiful web interface to manage AI providers, monitor usage, and test AI operations.

Quick Setup

# 1. Install Filament (if not already installed)
composer require filament/filament:"^4.0"

# 2. Publish and run migration
php artisan vendor:publish --provider="AIToolkit\AIToolkit\AIToolkitServiceProvider" --tag="ai-toolkit-migrations"
php artisan migrate

# 3. Seed initial providers (optional)
php artisan db:seed --class="AIToolkit\AIToolkit\Database\Seeders\AIProviderSeeder"

# 4. Access the admin panel at /admin

Features

  • ✅ AI Provider Management - Configure multiple providers (OpenAI, Anthropic, Groq)
  • ✅ Real-time Chat Dashboard - Interactive AI testing interface
  • ✅ Usage Analytics - Monitor requests, cache hit rates, response times
  • ✅ Provider Testing - Built-in connection testing for all providers
  • ✅ Encrypted Storage - API keys encrypted in database
  • ✅ Dynamic Configuration - Database-driven provider management

Database-Driven Architecture

The Filament integration uses a database-first approach:

  1. Providers stored in database (not config files)
  2. Encrypted API keys for security
  3. Dynamic configuration via admin UI
  4. Seeder creates initial providers from .env values
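
In a Laravel app, point 2 is typically handled by the framework itself (the `encrypted` model cast or the Crypt facade). As a standalone illustration of the underlying idea of encrypting an API key at rest (`encryptApiKey`/`decryptApiKey` are names invented here, not package APIs):

```php
// Standalone sketch of symmetric API-key encryption. Illustrative only; in
// Laravel you would normally use the 'encrypted' model cast or the Crypt facade.
function encryptApiKey(string $plainKey, string $secret): string
{
    $iv = random_bytes(16); // Fresh IV per value, stored alongside the ciphertext
    $cipher = openssl_encrypt($plainKey, 'aes-256-cbc', hash('sha256', $secret, true), OPENSSL_RAW_DATA, $iv);

    return base64_encode($iv . $cipher);
}

function decryptApiKey(string $stored, string $secret): string
{
    $raw = base64_decode($stored);

    return openssl_decrypt(substr($raw, 16), 'aes-256-cbc', hash('sha256', $secret, true), OPENSSL_RAW_DATA, substr($raw, 0, 16));
}
```

The database column only ever sees the base64 blob; the plaintext key exists in memory at request time.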

Documentation

📖 Full documentation available in: docs/FILAMENT_INTEGRATION.md

Includes detailed setup, configuration, troubleshooting, and advanced usage examples.

Made with โค๏ธ for the Laravel community