easybdit / laraveleasyai
Unified AI chat for Laravel: Ollama, OpenAI, Anthropic (Claude), DeepSeek. One interface, any AI.
Requires
- php: ^8.0
- illuminate/http: ^9.0 || ^10.0 || ^11.0 || ^12.0 || ^13.0
- illuminate/support: ^9.0 || ^10.0 || ^11.0 || ^12.0 || ^13.0
Requires (Dev)
- orchestra/testbench: ^7.0 || ^8.0 || ^9.0 || ^10.0
- phpunit/phpunit: ^9.5 || ^10.5 || ^11.0
Suggests
- smalot/pdfparser: Required for PDF file ingestion in Projects (^2.0)
README
LaravelAI
One interface, any AI.
Unified AI chat for Laravel: Ollama, OpenAI (ChatGPT), Anthropic (Claude), DeepSeek
Full-Stack Laravel Vue Developer and DevOps Engineer
Quick Start • Chat UI • Projects • RAG • Providers • API Reference • Configuration
Facebook Page • Facebook Group • WhatsApp Group
Video Tutorials
| Video | Description |
|---|---|
| Self-Hosted AI Server | Set up your own local AI server with Ollama |
| Laravel AI Package Setup | Install and use LaravelAI in your project |
| Built-in Chat UI | Zero-setup ChatGPT-like app included |
Why LaravelAI?
Building AI features in Laravel normally means separate SDKs, different formats, and custom error handling for every provider. LaravelAI eliminates all of that.
```php
// Same code. Any provider. Just change the name.
$response = AI::provider('ollama')->chat($messages);    // Self-hosted, free
$response = AI::provider('openai')->chat($messages);    // ChatGPT
$response = AI::provider('anthropic')->chat($messages); // Claude
$response = AI::provider('deepseek')->chat($messages);  // DeepSeek
```
Built on Laravel's driver pattern: the same architecture as Mail, Cache, and Queue.
Installation
Step 1: Install via Composer
```bash
composer require easybdit/laraveleasyai
```
Step 2: Publish config and assets
```bash
php artisan vendor:publish --tag=ai-config
php artisan vendor:publish --tag=ai-chat-assets
```
Step 3: Run migrations
```bash
php artisan migrate
```
Step 4: Add to .env
```env
AI_PROVIDER=ollama
AI_OLLAMA_URL=http://127.0.0.1:11434
AI_OLLAMA_MODEL=qwen2:1.5b
```
Step 5: Visit /ai-chat in your browser
Requirements
| Requirement | Version |
|---|---|
| PHP | 8.2+ |
| Laravel | 10, 11, 12, 13 |
Quick Start
```php
use EasyAI\LaravelAI\Facades\AI;

$response = AI::chat([['role' => 'user', 'content' => 'What is Laravel?']]);
echo $response->content;
```
One-Liner Helper
```php
$answer = ai('What is Laravel?');
```
Test in Tinker
```bash
php artisan tinker
>>> AI::provider('ollama')->health()
=> true
>>> ai('Say hello in 3 words')
=> "Hello there, friend!"
```
Built-in Chat UI
New in v1.3.0: a full ChatGPT-like chat app is included. Zero setup required.
What you get out of the box
| Feature | Description |
|---|---|
| Chat UI | ChatGPT-like sidebar with session history |
| Streaming | Real-time typing effect |
| Markdown | Full rendering with syntax-highlighted code |
| Copy buttons | Per message and per code block |
| Provider switcher | Switch Ollama / OpenAI / Claude / DeepSeek live |
| DB persistence | History survives page refresh |
| Auto-title | First message becomes the session title |
| Projects | RAG-powered knowledge bases (v1.4.0) |
| Offline assets | No CDN dependency |
Customize the view
```bash
php artisan vendor:publish --tag=ai-chat-views
# → resources/views/vendor/laravelai/chat.blade.php
```
Routes registered automatically
| Method | URL | Description |
|---|---|---|
| GET | `/ai-chat` | Chat UI |
| POST | `/ai-chat/api/sessions` | Create session |
| DELETE | `/ai-chat/api/sessions/{id}` | Delete session |
| GET | `/ai-chat/api/stream` | SSE streaming |
| POST | `/ai-chat/api/provider` | Switch provider |
| GET | `/ai-chat/api/projects` | List projects |
| POST | `/ai-chat/api/projects` | Create project |
| DELETE | `/ai-chat/api/projects/{id}` | Delete project |
| POST | `/ai-chat/api/projects/{id}/files` | Upload & ingest file |
| DELETE | `/ai-chat/api/projects/{id}/files/{fid}` | Delete file |
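The built-in UI consumes the `/ai-chat/api/stream` endpoint as Server-Sent Events. If you want to consume it from your own client, here is a minimal plain-PHP sketch; the query parameters and frame contents are assumptions for illustration, not a documented contract, so check the published routes and controllers for the real shape:

```php
// Hypothetical sketch: read the SSE stream with cURL.
// The "session" and "message" query parameters are assumed, not documented.
$ch = curl_init('http://localhost:8000/ai-chat/api/stream?session=1&message=Hi');
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, string $chunk): int {
    // SSE frames arrive as "data: ...\n\n"; print only the payload.
    foreach (explode("\n", $chunk) as $line) {
        if (str_starts_with($line, 'data: ')) {
            echo substr($line, 6);
        }
    }
    return strlen($chunk); // tell cURL the chunk was consumed
});
curl_exec($ch);
curl_close($ch);
```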
Projects & Knowledge Bases
New in v1.4.0: self-hosted, Claude-like Projects. Create knowledge bases, upload documents, and get RAG-powered answers scoped per project.
How it works
Create Project → Upload Files → Chat Inside Project → RAG answers from your docs
- Click **+** next to Projects in the sidebar
- Upload `.txt`, `.md`, or `.pdf` files; they are auto-ingested into RAG on upload
- Click the project to start a new RAG-powered chat session
- Every message retrieves relevant context from that project's documents only
- Normal chats outside projects are completely unaffected
What you see in the UI
- Projects section in the sidebar with a file-count badge
- RAG ON badge in the chat header when inside a project session
- Manage Files button: upload, view ingestion status, delete files
- Status per file: `pending` → `ingested` → `failed`
- Project-context-active indicator in the input footer
PDF support (optional)
```bash
composer require smalot/pdfparser
```
RAG Scoping API
```php
$results = AI::rag()->source('project_5')->search('your query');
$answer  = AI::rag()->source('project_5')->ask('your question');
AI::rag()->flush('project_5');
```
RAG (Built-in)
No external vector database required: it uses your existing SQL database.
Setup
```bash
ollama pull nomic-embed-text
php artisan migrate
```
```env
AI_RAG_PROVIDER=ollama
AI_RAG_EMBED_MODEL=nomic-embed-text
```
Usage
```php
// Store
AI::rag()->ingest('Laravel is a PHP framework using MVC.', 'docs');

// Ask
$answer = AI::rag()->ask('What is Laravel?');

// Search
$results = AI::rag()->search('MVC pattern');
// [['content' => '...', 'source' => 'docs', 'score' => 0.91]]

// Scoped
$results = AI::rag()->source('project_5')->search('your query');

// Flush
AI::rag()->flush();
AI::rag()->flush('project_5');
```
Artisan
```bash
php artisan ai:rag:ingest storage/docs/manual.txt --source=manual
php artisan ai:rag:ingest storage/docs/ --flush
```
RAG Configuration
| `.env` Key | Default | Description |
|---|---|---|
| `AI_RAG_PROVIDER` | `ollama` | Embedding provider |
| `AI_RAG_EMBED_MODEL` | `nomic-embed-text` | Embedding model |
| `AI_RAG_CHUNK_SIZE` | `2000` | Max chars per chunk |
| `AI_RAG_TOP_K` | `3` | Chunks retrieved per query |
| `AI_RAG_TABLE` | `ai_documents` | Database table |
Providers
Ollama: Self-Hosted & Free
```env
AI_PROVIDER=ollama
AI_OLLAMA_URL=http://127.0.0.1:11434
AI_OLLAMA_MODEL=qwen2:1.5b
AI_OLLAMA_TIMEOUT=120
```
Note for small models (qwen2, qwen2.5): if you get 400 errors with RAG context, set `num_ctx` to match your model's context window:

```bash
ollama show qwen2:1.5b --modelfile > /tmp/modelfile
echo "PARAMETER num_ctx 2048" >> /tmp/modelfile
ollama create qwen2-fixed -f /tmp/modelfile
```

Then use `AI_OLLAMA_MODEL=qwen2-fixed` in `.env`.
OpenAI (ChatGPT)
```env
AI_OPENAI_KEY=sk-your-api-key
AI_OPENAI_MODEL=gpt-4o-mini
```
Anthropic (Claude)
```env
AI_ANTHROPIC_KEY=sk-ant-your-api-key
AI_ANTHROPIC_MODEL=claude-sonnet-4-20250514
```
DeepSeek
```env
AI_DEEPSEEK_KEY=sk-your-api-key
AI_DEEPSEEK_MODEL=deepseek-chat
```
Features
Fluent Builder API
```php
$response = AI::provider('ollama')
    ->model('qwen2:1.5b')
    ->temperature(0.9)
    ->maxTokens(500)
    ->systemPrompt('You are a helpful Laravel expert.')
    ->chat([['role' => 'user', 'content' => 'Explain middleware']]);
```
Streaming
```php
AI::provider('ollama')->stream(
    [['role' => 'user', 'content' => 'Write a poem']],
    function (string $chunk) {
        echo $chunk;
    }
);
```
Health Check + Fallback
```php
foreach (['ollama', 'deepseek', 'openai'] as $provider) {
    try {
        if (! AI::provider($provider)->health()) {
            continue;
        }

        return AI::provider($provider)->chat($messages)->content;
    } catch (\Throwable $e) {
        Log::warning("{$provider} failed: {$e->getMessage()}");
    }
}
```
Token Estimation
```php
$tokens = AI::estimateTokens('Hello world');
$tokens = AI::estimateTokens($messagesArray);
```
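One pattern this enables is pre-flight prompt budgeting. A minimal sketch using only the methods documented here; the 4000-token budget is an arbitrary example value, not a package default:

```php
// Sketch: trim old turns until the estimated prompt fits a budget.
$budget = 4000; // example value; tune for your model's context window

while (AI::estimateTokens($messages) > $budget && count($messages) > 1) {
    array_shift($messages); // drop the oldest turn first
}

$response = AI::provider('ollama')
    ->maxTokens(500)
    ->chat($messages);
```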
Ollama Advanced Features
```php
AI::provider('ollama')->format('json')->chat($messages);
AI::provider('ollama')->embed('Hello world');
AI::provider('ollama')->keepAlive('10m')->chat($messages);
AI::provider('ollama')->options(['num_ctx' => 2048])->chat($messages);
AI::provider('ollama')->pullModel('llama3.1:8b');
AI::provider('ollama')->runningModels();
AI::provider('ollama')->deleteModel('old-model');
```
Error Handling
```php
use EasyAI\LaravelAI\Exceptions\ConnectionException;
use EasyAI\LaravelAI\Exceptions\ProviderException;

try {
    $response = AI::provider('openai')->chat($messages);
} catch (ConnectionException $e) {
    Log::error("Connection failed: " . $e->getMessage());
} catch (ProviderException $e) {
    Log::error("Provider [{$e->getProvider()}]: " . $e->getMessage());
}
```
API Reference
Facade Methods
| Method | Returns | Description |
|---|---|---|
| `AI::chat(array $messages)` | `AIResponse` | Chat with default provider |
| `AI::provider(string $name)` | `AIProvider` | Switch provider |
| `AI::estimateTokens(string\|array)` | `int` | Estimate token count |
| `AI::rag()` | `RAGManager` | Access RAG system |
Provider Methods (Chainable)
| Method | Description |
|---|---|
| `->model($name)` | Set the model |
| `->temperature($float)` | Creativity (0–2) |
| `->maxTokens($int)` | Max response tokens |
| `->systemPrompt($text)` | Set instructions |
| `->timeout($seconds)` | Request timeout |
| `->chat(array $messages)` | Send and get a response |
| `->stream(array $messages, callable)` | Stream token by token |
| `->health()` | Check the provider is reachable |
| `->models()` | List available models |
RAG Methods
| Method | Description |
|---|---|
| `->ingest($text, $source)` | Store as embeddings |
| `->search($query)` | Similarity search |
| `->ask($question)` | RAG-powered Q&A |
| `->source($name)` | Scope to one source |
| `->flush($source?)` | Delete documents |
Ollama-Only Methods
| Method | Description |
|---|---|
| `->format('json')` | Force JSON output |
| `->embed($text)` | Generate embedding |
| `->keepAlive($duration)` | Keep model in memory |
| `->options($array)` | Raw Ollama options (e.g. `num_ctx`) |
| `->pullModel($name)` | Download model |
| `->showModel($name)` | Model details |
| `->deleteModel($name)` | Remove model |
| `->copyModel($src, $dst)` | Copy model |
| `->runningModels()` | List loaded models |
AIResponse Object
| Property | Type | Description |
|---|---|---|
| `$response->content` | `string` | AI reply text |
| `$response->model` | `string` | Model used |
| `$response->promptTokens` | `int` | Input tokens |
| `$response->replyTokens` | `int` | Output tokens |
| `$response->totalTokens` | `int` | Total tokens |
| `$response->provider` | `string` | Provider name |
| `$response->getRaw()` | `array` | Raw API response |
| `(string) $response` | `string` | Cast to string |
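These properties combine naturally for simple usage logging. A minimal sketch using only the fields documented above:

```php
// Sketch: log token usage for every call, using documented
// AIResponse properties only.
$response = AI::chat([['role' => 'user', 'content' => 'What is Laravel?']]);

Log::info('AI call', [
    'provider' => $response->provider,
    'model'    => $response->model,
    'prompt'   => $response->promptTokens,
    'reply'    => $response->replyTokens,
    'total'    => $response->totalTokens,
]);

echo (string) $response; // same text as $response->content
```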
Helper Function
```php
ai('Your question');
ai('Your question', 'openai');
ai('Your question', 'anthropic', 'claude-haiku-...');
```
Configuration
```php
// config/ai.php
return [
    'default' => env('AI_PROVIDER', 'ollama'),

    'providers' => [
        'ollama' => [
            'driver'  => 'ollama',
            'url'     => env('AI_OLLAMA_URL'),
            'model'   => env('AI_OLLAMA_MODEL', 'qwen2:1.5b'),
            'timeout' => env('AI_OLLAMA_TIMEOUT', 120),
        ],
        'openai' => [
            'driver'  => 'openai',
            'api_key' => env('AI_OPENAI_KEY'),
            'model'   => env('AI_OPENAI_MODEL', 'gpt-4o-mini'),
            'timeout' => 60,
        ],
        'anthropic' => [
            'driver'  => 'anthropic',
            'api_key' => env('AI_ANTHROPIC_KEY'),
            'model'   => env('AI_ANTHROPIC_MODEL'),
            'timeout' => 60,
        ],
        'deepseek' => [
            'driver'  => 'deepseek',
            'api_key' => env('AI_DEEPSEEK_KEY'),
            'model'   => env('AI_DEEPSEEK_MODEL', 'deepseek-chat'),
            'timeout' => 60,
        ],
    ],

    'rag' => [
        'embed_provider' => env('AI_RAG_PROVIDER', 'ollama'),
        'embed_model'    => env('AI_RAG_EMBED_MODEL', 'nomic-embed-text'),
        'chat_provider'  => env('AI_RAG_CHAT_PROVIDER', null),
        'chunk_size'     => (int) env('AI_RAG_CHUNK_SIZE', 2000),
        'top_k'          => (int) env('AI_RAG_TOP_K', 3),
        'table'          => env('AI_RAG_TABLE', 'ai_documents'),
        'system_prompt'  => env('AI_RAG_SYSTEM_PROMPT', 'Answer using ONLY the context below. If unsure, say so.'),
    ],
];
```
Complete .env Reference
```env
# Provider
AI_PROVIDER=ollama

# Ollama (self-hosted, free)
AI_OLLAMA_URL=http://127.0.0.1:11434
AI_OLLAMA_MODEL=qwen2:1.5b
AI_OLLAMA_TIMEOUT=120

# OpenAI
AI_OPENAI_KEY=sk-proj-xxxx
AI_OPENAI_MODEL=gpt-4o-mini

# Anthropic (Claude)
AI_ANTHROPIC_KEY=sk-ant-xxxx
AI_ANTHROPIC_MODEL=claude-sonnet-4-20250514

# DeepSeek
AI_DEEPSEEK_KEY=sk-xxxx
AI_DEEPSEEK_MODEL=deepseek-chat

# RAG
AI_RAG_PROVIDER=ollama
AI_RAG_EMBED_MODEL=nomic-embed-text
AI_RAG_CHUNK_SIZE=500
AI_RAG_TOP_K=1
AI_RAG_TABLE=ai_documents

# RAG for small models: reduce chunk size and limit context
# AI_OLLAMA_NUM_CTX=2048
```
Testing
```bash
vendor/bin/phpunit
vendor/bin/phpunit --filter=test_ollama_chat
```
Tests use `Http::fake()`, so no real API calls are needed.
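Your own application tests can take the same approach. A hedged sketch using Laravel's `Http::fake()`; the faked URL pattern and response shape are assumptions about the Ollama chat API, not taken from this package's test suite:

```php
use EasyAI\LaravelAI\Facades\AI;
use Illuminate\Support\Facades\Http;

public function test_chat_returns_faked_reply(): void
{
    // Assumed Ollama-style response payload; adjust to the real shape.
    Http::fake([
        '127.0.0.1:11434/*' => Http::response([
            'message' => ['role' => 'assistant', 'content' => 'Hi there'],
        ]),
    ]);

    $response = AI::provider('ollama')->chat([
        ['role' => 'user', 'content' => 'Hello'],
    ]);

    $this->assertSame('Hi there', $response->content);
}
```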
Roadmap
| Version | Feature | Status |
|---|---|---|
| v1.0 | Ollama, OpenAI, Anthropic, DeepSeek | Released |
| v1.1 | Laravel 12 & 13 support | Released |
| v1.2 | Built-in RAG system + Ollama advanced | Released |
| v1.3 | Built-in Chat UI | Released |
| v1.4 | Projects + RAG scoping (self-hosted Claude Projects) | Released |
| v2.0 | Function / Tool calling | Planned |
| v2.0 | Vision / Image input | Planned |
| v2.1 | Groq driver | Planned |
| v2.1 | Google Gemini driver | Planned |
| v2.2 | Response caching | Planned |
| v3.0 | Image generation | Planned |
Support
- Star this repo on GitHub
- Report bugs via Issues
- Submit a PR (contributions welcome)
- Share with your developer friends
Credits
Md Murad Hosen, Full-Stack Laravel Vue Developer and DevOps Engineer from Chittagong, Bangladesh
| | | | |
|---|---|---|---|
| Website | easyit.com.bd | YouTube | EasyBD IT |
| Facebook | Murad Hosen | WhatsApp | +8801827517700 |
| GitHub | muradbdinfo | FB Group | EITBD |
License
MIT License: free to use in personal and commercial projects. See LICENSE for details.
Made with ❤️ in Bangladesh · Built for the Laravel community worldwide