easybdit/laraveleasyai

Unified AI chat for Laravel — Ollama, OpenAI, Anthropic (Claude), DeepSeek. One interface, any AI.

Maintainers

Package info

github.com/easybdit/laraveleasyai

pkg:composer/easybdit/laraveleasyai


1.0.0 2026-05-12 07:18 UTC

This package is auto-updated.

Last update: 2026-05-12 07:32:47 UTC


README

LaravelAI Banner

LaravelAI

One interface, any AI.
Unified AI chat for Laravel — Ollama, OpenAI (ChatGPT), Anthropic (Claude), DeepSeek

👨‍💻 Full Stack Laravel Vue Developer and DevOps Engineer


Quick Start • Chat UI • Projects • RAG • Providers • API Reference • Configuration

📘 Facebook Page • 👥 Facebook Group • 💬 WhatsApp Group

📺 Video Tutorials

  • 🖥️ Self-Hosted AI Server: Set up your own local AI server with Ollama
  • 🚀 Laravel AI Package Setup: Install and use LaravelAI in your project
  • 💬 Built-in Chat UI: Zero-setup ChatGPT-like app included

Why LaravelAI?

Building AI features in Laravel normally means separate SDKs, different formats, and custom error handling for every provider. LaravelAI eliminates all of that.

// Same code. Any provider. Just change the name.
$response = AI::provider('ollama')->chat($messages);    // Self-hosted, free
$response = AI::provider('openai')->chat($messages);    // ChatGPT
$response = AI::provider('anthropic')->chat($messages); // Claude
$response = AI::provider('deepseek')->chat($messages);  // DeepSeek

Built on Laravel's driver pattern — the same architecture as Mail, Cache, and Queue.

📦 Installation

Step 1: Install via Composer

composer require easybdit/laraveleasyai

Step 2: Publish config and assets

php artisan vendor:publish --tag=ai-config
php artisan vendor:publish --tag=ai-chat-assets

Step 3: Run migrations

php artisan migrate

Step 4: Add to .env

AI_PROVIDER=ollama
AI_OLLAMA_URL=http://127.0.0.1:11434
AI_OLLAMA_MODEL=qwen2:1.5b

Step 5: Visit /ai-chat in your browser ✅

Requirements

| Requirement | Version |
| --- | --- |
| PHP | 8.2+ |
| Laravel | 10, 11, 12, 13 |

🚀 Quick Start

use EasyAI\LaravelAI\Facades\AI;

$response = AI::chat([['role' => 'user', 'content' => 'What is Laravel?']]);
echo $response->content;

One-Liner Helper

$answer = ai('What is Laravel?');

Test in Tinker

php artisan tinker
>>> AI::provider('ollama')->health()
=> true
>>> ai('Say hello in 3 words')
=> "Hello there, friend!"

💬 Built-in Chat UI

New in v1.3.0 — a full ChatGPT-like chat app is included. Zero setup required.

What you get out of the box

| Feature | Description |
| --- | --- |
| 💬 Chat UI | ChatGPT-like sidebar with session history |
| ⚡ Streaming | Real-time typing effect |
| 📝 Markdown | Full rendering with syntax-highlighted code |
| 📋 Copy buttons | Per message and per code block |
| 🔄 Provider switcher | Switch Ollama / OpenAI / Claude / DeepSeek live |
| 💾 DB persistence | History survives page refresh |
| 🏷️ Auto-title | First message becomes session title |
| 📁 Projects | RAG-powered knowledge bases (v1.4.0) |
| 📦 Offline assets | No CDN dependency |

Customize the view

php artisan vendor:publish --tag=ai-chat-views
# → resources/views/vendor/laravelai/chat.blade.php

Routes registered automatically

| Method | URL | Description |
| --- | --- | --- |
| GET | /ai-chat | Chat UI |
| POST | /ai-chat/api/sessions | Create session |
| DELETE | /ai-chat/api/sessions/{id} | Delete session |
| GET | /ai-chat/api/stream | SSE streaming |
| POST | /ai-chat/api/provider | Switch provider |
| GET | /ai-chat/api/projects | List projects |
| POST | /ai-chat/api/projects | Create project |
| DELETE | /ai-chat/api/projects/{id} | Delete project |
| POST | /ai-chat/api/projects/{id}/files | Upload & ingest file |
| DELETE | /ai-chat/api/projects/{id}/files/{fid} | Delete file |

🗂️ Projects & Knowledge Bases

New in v1.4.0 — self-hosted, Claude-like Projects. Create knowledge bases, upload documents, and get RAG-powered answers scoped to each project.

How it works

Create Project → Upload Files → Chat Inside Project → RAG answers from your docs

  1. Click the + button next to Projects in the sidebar
  2. Upload .txt, .md, or .pdf files — they are auto-ingested into RAG on upload
  3. Click the project to start a new RAG-powered chat session
  4. Every message retrieves relevant context from that project's documents only
  5. Normal chats outside projects are completely unaffected
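Conceptually, each project is just a scoped RAG source, so the same flow can be driven from code with the package's scoping API. A sketch of roughly what the upload-and-chat flow does (the source name project_5 and the file path are illustrative):

```php
use EasyAI\LaravelAI\Facades\AI;

// Ingest a document under the project's source
// (roughly what the file-upload endpoint does for you)
AI::rag()->ingest(file_get_contents('storage/docs/manual.txt'), 'project_5');

// Ask inside the project: context is retrieved from that source only
$answer = AI::rag()->source('project_5')->ask('What does the manual cover?');

// Remove all of the project's documents
AI::rag()->flush('project_5');
```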

What you see in the UI

  • πŸ“ Projects section in sidebar with file count badge
  • 🧠 RAG ON badge in chat header when inside a project session
  • πŸ“Ž Manage Files button β€” upload, view ingestion status, delete files
  • 🟒 Status per file: pending β†’ ingested β†’ failed
  • Project context active indicator in the input footer

PDF support (optional)

composer require smalot/pdfparser

RAG Scoping API

$results = AI::rag()->source('project_5')->search('your query');
$answer  = AI::rag()->source('project_5')->ask('your question');
AI::rag()->flush('project_5');

🧠 RAG (Built-in)

No external vector database required — uses your existing SQL database.

Setup

ollama pull nomic-embed-text
php artisan migrate
AI_RAG_PROVIDER=ollama
AI_RAG_EMBED_MODEL=nomic-embed-text

Usage

// Store
AI::rag()->ingest('Laravel is a PHP framework using MVC.', 'docs');

// Ask
$answer = AI::rag()->ask('What is Laravel?');

// Search
$results = AI::rag()->search('MVC pattern');
// [['content' => '...', 'source' => 'docs', 'score' => 0.91]]

// Scoped
$results = AI::rag()->source('project_5')->search('your query');

// Flush
AI::rag()->flush();
AI::rag()->flush('project_5');

Artisan

php artisan ai:rag:ingest storage/docs/manual.txt --source=manual
php artisan ai:rag:ingest storage/docs/ --flush

RAG Configuration

| .env Key | Default | Description |
| --- | --- | --- |
| AI_RAG_PROVIDER | ollama | Embedding provider |
| AI_RAG_EMBED_MODEL | nomic-embed-text | Embedding model |
| AI_RAG_CHUNK_SIZE | 2000 | Max chars per chunk |
| AI_RAG_TOP_K | 3 | Chunks retrieved per query |
| AI_RAG_TABLE | ai_documents | Database table |
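To illustrate what AI_RAG_CHUNK_SIZE controls, here is a naive character-based chunker. This is only a sketch of the idea, not the package's actual splitter, which may break on sentence or paragraph boundaries instead:

```php
// Split text into fixed-size character chunks (illustrative only).
function chunkText(string $text, int $chunkSize = 2000): array
{
    $chunks = [];
    for ($i = 0; $i < strlen($text); $i += $chunkSize) {
        $chunks[] = substr($text, $i, $chunkSize);
    }
    return $chunks;
}

// A 5000-character document with chunk_size 2000 yields 3 chunks
// of 2000, 2000, and 1000 characters.
$chunks = chunkText(str_repeat('a', 5000), 2000);
```

Each chunk is then embedded and stored as one row in the ai_documents table, and AI_RAG_TOP_K controls how many of these rows are retrieved per query.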

🤖 Providers

Ollama — Self-Hosted & Free

AI_PROVIDER=ollama
AI_OLLAMA_URL=http://127.0.0.1:11434
AI_OLLAMA_MODEL=qwen2:1.5b
AI_OLLAMA_TIMEOUT=120

Note for small models (qwen2, qwen2.5): If you get 400 errors with RAG context, set num_ctx to match your model's context window:

ollama show qwen2:1.5b --modelfile > /tmp/modelfile
echo "PARAMETER num_ctx 2048" >> /tmp/modelfile
ollama create qwen2-fixed -f /tmp/modelfile

Then use AI_OLLAMA_MODEL=qwen2-fixed in .env.

OpenAI (ChatGPT)

AI_OPENAI_KEY=sk-your-api-key
AI_OPENAI_MODEL=gpt-4o-mini

Anthropic (Claude)

AI_ANTHROPIC_KEY=sk-ant-your-api-key
AI_ANTHROPIC_MODEL=claude-sonnet-4-20250514

DeepSeek

AI_DEEPSEEK_KEY=sk-your-api-key
AI_DEEPSEEK_MODEL=deepseek-chat

✨ Features

Fluent Builder API

$response = AI::provider('ollama')
    ->model('qwen2:1.5b')
    ->temperature(0.9)
    ->maxTokens(500)
    ->systemPrompt('You are a helpful Laravel expert.')
    ->chat([['role' => 'user', 'content' => 'Explain middleware']]);

Streaming

AI::provider('ollama')->stream(
    [['role' => 'user', 'content' => 'Write a poem']],
    function (string $chunk) { echo $chunk; }
);
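To push those chunks to a browser, the callback can be wrapped in a Laravel streamed response that emits server-sent events. A minimal sketch (the route path and headers are illustrative, not part of the package — the built-in Chat UI ships its own streaming endpoint):

```php
use EasyAI\LaravelAI\Facades\AI;
use Illuminate\Support\Facades\Route;

Route::get('/poem-stream', function () {
    return response()->stream(function () {
        AI::provider('ollama')->stream(
            [['role' => 'user', 'content' => 'Write a poem']],
            function (string $chunk) {
                // Emit each chunk as one SSE "data:" frame
                echo 'data: ' . json_encode($chunk) . "\n\n";
                ob_flush();
                flush();
            }
        );
    }, 200, [
        'Content-Type'      => 'text/event-stream',
        'Cache-Control'     => 'no-cache',
        'X-Accel-Buffering' => 'no', // disable nginx response buffering
    ]);
});
```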

Health Check + Fallback

foreach (['ollama', 'deepseek', 'openai'] as $provider) {
    try {
        if (!AI::provider($provider)->health()) continue;
        return AI::provider($provider)->chat($messages)->content;
    } catch (\Throwable $e) {
        Log::warning("{$provider} failed: {$e->getMessage()}");
    }
}

Token Estimation

$tokens = AI::estimateTokens('Hello world');
$tokens = AI::estimateTokens($messagesArray);
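A practical use for estimates is a rough pre-flight budget check before calling a paid provider. A sketch (the 4000-token ceiling is an arbitrary example, not a package default):

```php
use EasyAI\LaravelAI\Facades\AI;

$messages = [['role' => 'user', 'content' => $userInput]];

// Reject oversized prompts before spending money on an API call.
// Estimates are approximate, so leave headroom below real limits.
if (AI::estimateTokens($messages) > 4000) {
    abort(422, 'Prompt too long — please shorten your message.');
}

$response = AI::chat($messages);
```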

Ollama Advanced Features

AI::provider('ollama')->format('json')->chat($messages);
AI::provider('ollama')->embed('Hello world');
AI::provider('ollama')->keepAlive('10m')->chat($messages);
AI::provider('ollama')->options(['num_ctx' => 2048])->chat($messages);
AI::provider('ollama')->pullModel('llama3.1:8b');
AI::provider('ollama')->runningModels();
AI::provider('ollama')->deleteModel('old-model');

Error Handling

use EasyAI\LaravelAI\Exceptions\ConnectionException;
use EasyAI\LaravelAI\Exceptions\ProviderException;

try {
    $response = AI::provider('openai')->chat($messages);
} catch (ConnectionException $e) {
    Log::error("Connection failed: " . $e->getMessage());
} catch (ProviderException $e) {
    Log::error("Provider [{$e->getProvider()}]: " . $e->getMessage());
}

📖 API Reference

Facade Methods

| Method | Returns | Description |
| --- | --- | --- |
| AI::chat(array $messages) | AIResponse | Chat with default provider |
| AI::provider(string $name) | AIProvider | Switch provider |
| AI::estimateTokens(string\|array) | int | Estimate token count |
| AI::rag() | RAGManager | Access RAG system |

Provider Methods (Chainable)

| Method | Description |
| --- | --- |
| ->model($name) | Set the model |
| ->temperature($float) | Creativity (0–2) |
| ->maxTokens($int) | Max response tokens |
| ->systemPrompt($text) | Set system instructions |
| ->timeout($seconds) | Request timeout |
| ->chat(array $messages) | Send messages and get a response |
| ->stream(array $messages, callable) | Stream the reply token by token |
| ->health() | Check that the provider is reachable |
| ->models() | List available models |

RAG Methods

| Method | Description |
| --- | --- |
| ->ingest($text, $source) | Store text as embeddings |
| ->search($query) | Similarity search |
| ->ask($question) | RAG-powered Q&A |
| ->source($name) | Scope to one source |
| ->flush($source?) | Delete documents |

Ollama-Only Methods

| Method | Description |
| --- | --- |
| ->format('json') | Force JSON output |
| ->embed($text) | Generate embedding |
| ->keepAlive($duration) | Keep model in memory |
| ->options($array) | Raw Ollama options (e.g. num_ctx) |
| ->pullModel($name) | Download model |
| ->showModel($name) | Model details |
| ->deleteModel($name) | Remove model |
| ->copyModel($src, $dst) | Copy model |
| ->runningModels() | List loaded models |

AIResponse Object

| Property | Type | Description |
| --- | --- | --- |
| $response->content | string | AI reply text |
| $response->model | string | Model used |
| $response->promptTokens | int | Input tokens |
| $response->replyTokens | int | Output tokens |
| $response->totalTokens | int | Total tokens |
| $response->provider | string | Provider name |
| $response->getRaw() | array | Raw API response |
| (string) $response | string | Cast to string |

Helper Function

ai('Your question')
ai('Your question', 'openai')
ai('Your question', 'anthropic', 'claude-haiku-...')

⚙️ Configuration

// config/ai.php
return [
    'default' => env('AI_PROVIDER', 'ollama'),
    'providers' => [
        'ollama'    => ['driver' => 'ollama',    'url'     => env('AI_OLLAMA_URL'),    'model' => env('AI_OLLAMA_MODEL', 'qwen2:1.5b'),      'timeout' => env('AI_OLLAMA_TIMEOUT', 120)],
        'openai'    => ['driver' => 'openai',    'api_key' => env('AI_OPENAI_KEY'),    'model' => env('AI_OPENAI_MODEL', 'gpt-4o-mini'),      'timeout' => 60],
        'anthropic' => ['driver' => 'anthropic', 'api_key' => env('AI_ANTHROPIC_KEY'), 'model' => env('AI_ANTHROPIC_MODEL'),                  'timeout' => 60],
        'deepseek'  => ['driver' => 'deepseek',  'api_key' => env('AI_DEEPSEEK_KEY'),  'model' => env('AI_DEEPSEEK_MODEL', 'deepseek-chat'),  'timeout' => 60],
    ],
    'rag' => [
        'embed_provider' => env('AI_RAG_PROVIDER', 'ollama'),
        'embed_model'    => env('AI_RAG_EMBED_MODEL', 'nomic-embed-text'),
        'chat_provider'  => env('AI_RAG_CHAT_PROVIDER', null),
        'chunk_size'     => (int) env('AI_RAG_CHUNK_SIZE', 2000),
        'top_k'          => (int) env('AI_RAG_TOP_K', 3),
        'table'          => env('AI_RAG_TABLE', 'ai_documents'),
        'system_prompt'  => env('AI_RAG_SYSTEM_PROMPT', 'Answer using ONLY the context below. If unsure, say so.'),
    ],
];

Complete .env Reference

# Provider
AI_PROVIDER=ollama

# Ollama (self-hosted, free)
AI_OLLAMA_URL=http://127.0.0.1:11434
AI_OLLAMA_MODEL=qwen2:1.5b
AI_OLLAMA_TIMEOUT=120

# OpenAI
AI_OPENAI_KEY=sk-proj-xxxx
AI_OPENAI_MODEL=gpt-4o-mini

# Anthropic (Claude)
AI_ANTHROPIC_KEY=sk-ant-xxxx
AI_ANTHROPIC_MODEL=claude-sonnet-4-20250514

# DeepSeek
AI_DEEPSEEK_KEY=sk-xxxx
AI_DEEPSEEK_MODEL=deepseek-chat

# RAG
AI_RAG_PROVIDER=ollama
AI_RAG_EMBED_MODEL=nomic-embed-text
AI_RAG_CHUNK_SIZE=500
AI_RAG_TOP_K=1
AI_RAG_TABLE=ai_documents

# RAG for small models — reduce chunk size and limit context
# AI_OLLAMA_NUM_CTX=2048

🧪 Testing

vendor/bin/phpunit
vendor/bin/phpunit --filter=test_ollama_chat

Uses Http::fake() — no real API calls needed.
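A test in that style might look like the sketch below. The faked JSON mirrors Ollama's /api/chat response format; the exact shape the driver parses is an assumption here:

```php
use EasyAI\LaravelAI\Facades\AI;
use Illuminate\Support\Facades\Http;

// Inside a Laravel feature test class (extends Tests\TestCase)
public function test_ollama_chat_returns_content(): void
{
    // Intercept all requests to the local Ollama server
    Http::fake([
        '127.0.0.1:11434/*' => Http::response([
            'model'   => 'qwen2:1.5b',
            'message' => ['role' => 'assistant', 'content' => 'Hello!'],
            'done'    => true,
        ]),
    ]);

    $response = AI::provider('ollama')->chat([
        ['role' => 'user', 'content' => 'Hi'],
    ]);

    $this->assertSame('Hello!', $response->content);
}
```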

🗺️ Roadmap

| Version | Feature | Status |
| --- | --- | --- |
| v1.0 | Ollama, OpenAI, Anthropic, DeepSeek | ✅ Released |
| v1.1 | Laravel 12 & 13 support | ✅ Released |
| v1.2 | Built-in RAG system + Ollama advanced | ✅ Released |
| v1.3 | Built-in Chat UI | ✅ Released |
| v1.4 | Projects + RAG scoping (self-hosted Claude Projects) | ✅ Released |
| v2.0 | Function / Tool calling | 🔜 Planned |
| v2.0 | Vision / Image input | 🔜 Planned |
| v2.1 | Groq driver | 🔜 Planned |
| v2.1 | Google Gemini driver | 🔜 Planned |
| v2.2 | Response caching | 🔜 Planned |
| v3.0 | Image generation | 🔜 Planned |

❤️ Support

Donate · GitHub Sponsors

  • ⭐ Star this repo on GitHub
  • 🐛 Report bugs via Issues
  • 🔀 Submit a PR — contributions welcome
  • 📢 Share with your developer friends

👤 Credits

Md Murad Hosen — Full-Stack Laravel Vue Developer and DevOps Engineer from Chittagong, Bangladesh 🇧🇩

🌐 Website: easyit.com.bd · 📺 YouTube: EasyBD IT
📘 Facebook: Murad Hosen · 📱 WhatsApp: +8801827517700
💻 GitHub: muradbdinfo · 👥 FB Group: EITBD

📄 License

MIT License — free to use in personal and commercial projects. See LICENSE for details.

Made with ❤️ in Bangladesh 🇧🇩 · Built for the Laravel community worldwide