irabbi360/laravel-ragent

A Retrieval-Augmented Generation (RAG) AI chatbot for Laravel applications.


Build intelligent, grounded AI chatbots powered by Retrieval-Augmented Generation (RAG). This Laravel package enables AI-powered conversations based on your own data, preventing hallucinations and ensuring responses are always sourced from your knowledge base.

Key Features:

  • 🎯 Prevents Hallucinations – Responses generated only from your data, never made up
  • 🔄 RAG Pipeline – Automatic vector embedding, semantic search, and context injection (illustrated in the sketch after this list)
  • 🔌 LLM Agnostic – Support for OpenAI, Google Gemini, and more
  • 📚 Easy Training – Ingest data from Eloquent models, text, or files
  • 🔐 Multi-tenant Ready – Scope isolation for teams and tenants
  • Production Ready – 52 tests, 138 assertions, built on Laravel best practices
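
The pipeline in the second bullet can be pictured with a small, self-contained toy. This is an illustration only, not the package's internal implementation: the hard-coded vectors stand in for a real embedding model, and the final prompt is what would be handed to the LLM.

// Toy illustration of the RAG flow (not the package's internals):
// 1) embed the question, 2) rank stored chunks by cosine similarity,
// 3) inject the best matches into the prompt sent to the LLM.
function cosine(array $a, array $b): float
{
    $dot = $na = $nb = 0.0;
    foreach ($a as $i => $v) {
        $dot += $v * $b[$i];
        $na  += $v * $v;
        $nb  += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($na) * sqrt($nb));
}

// Pretend these vectors came from an embedding model.
$chunks = [
    ['text' => 'Install with composer require irabbi360/laravel-ragent', 'vector' => [0.9, 0.1, 0.0]],
    ['text' => 'Gemini and OpenAI are both supported providers',         'vector' => [0.1, 0.8, 0.1]],
];
$question       = 'How do I install the package?';
$questionVector = [0.85, 0.15, 0.0];

// Semantic search: keep the chunk(s) most similar to the question.
usort($chunks, fn ($x, $y) => cosine($questionVector, $y['vector']) <=> cosine($questionVector, $x['vector']));
$context = $chunks[0]['text'];

// Context injection: the LLM is instructed to answer only from retrieved text.
$prompt = "Answer using ONLY this context:\n{$context}\n\nQuestion: {$question}";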

Support us

We invest a lot of resources into creating best-in-class open source packages. You can support us by buying one of our paid products.

We highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using. You'll find our address on our contact page. We publish all received postcards on our virtual postcard wall.

Installation

You can install the package via composer:

composer require irabbi360/laravel-ragent

You can publish and run the migrations with:

php artisan vendor:publish --tag="ragent-migrations"
php artisan migrate

You can publish the config file with:

php artisan vendor:publish --tag="ragent-config"

The published config file is config/ragent.php; the key options it contains are described in the Configuration section below.

Optionally, you can publish the views using

php artisan vendor:publish --tag="ragent-views"

Setup

Set your LLM API key in .env:

# For OpenAI (default)
OPENAI_API_KEY=your-openai-api-key

# For Google Gemini
GEMINI_API_KEY=your-gemini-api-key

Usage

1. Train Your Chatbot with Data

Train from Eloquent models:

use Irabbi360\LaravelRagent\Facades\LaravelRagent as Rag;

// Train from a Laravel model with specific fields
Rag::train(Post::class, ['fields' => ['title', 'content']]);

// Train from raw text
Rag::train(\Irabbi360\LaravelRagent\DataSources\TextSource::class, [
    'content' => 'Your documentation or knowledge base text here...'
]);

Or use Artisan commands:

# Train from a model
php artisan rag:train Post --fields=title,content

# Reset all embeddings
php artisan rag:reset

# Get statistics
php artisan rag:stats
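
If your source data changes regularly, you can keep embeddings fresh by scheduling the training command. A minimal sketch, assuming Laravel 11+ where scheduled tasks live in routes/console.php:

// routes/console.php
use Illuminate\Support\Facades\Schedule;

// Re-embed posts nightly so new or edited content becomes searchable.
Schedule::command('rag:train Post --fields=title,content')->daily();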

2. Chat with Your Data

Ask questions that will be answered from your trained data:

use Irabbi360\LaravelRagent\Facades\LaravelRagent as Rag;

// Simple chat
$response = Rag::chat('How do I use this package?')
    ->send();

echo $response->message;
// => "This package enables RAG-powered chatbots..."

// See which documents were used as sources
foreach ($response->sources as $source) {
    echo $source->title; // Original document
    echo $source->content; // Relevant excerpt
}
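
If you prefer to expose chat through your own endpoint instead of the package's built-in API routes (see below), a minimal controller sketch could look like this. It assumes the response object exposes message and sources exactly as shown above:

namespace App\Http\Controllers;

use Illuminate\Http\JsonResponse;
use Illuminate\Http\Request;
use Irabbi360\LaravelRagent\Facades\LaravelRagent as Rag;

class ChatController extends Controller
{
    public function __invoke(Request $request): JsonResponse
    {
        $validated = $request->validate(['message' => 'required|string|max:2000']);

        $response = Rag::chat($validated['message'])->send();

        return response()->json([
            'message' => $response->message,
            'sources' => collect($response->sources)
                ->map(fn ($source) => [
                    'title'   => $source->title,
                    'content' => $source->content,
                ])
                ->all(),
        ]);
    }
}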

3. Advanced Usage

Multi-tenant isolation:

$response = Rag::scope('tenant', $tenantId)
    ->chat('Question relevant to this tenant')
    ->send();

Configure LLM provider:

// In config/ragent.php
'llm_provider' => 'gemini', // or 'openai'
'model' => 'gemini-1.5-pro', // Use Gemini instead of GPT-4
'max_sources' => 5,
'similarity_threshold' => 0.7,

Manage chat sessions:

// Create a session
$session = Rag::session($userId)
    ->chat('First question')
    ->send();

// Continue conversation
$response = Rag::session($userId)
    ->chat('Follow-up question')
    ->send();

// Get conversation history
$history = Rag::session($userId)->history();

Custom prompts:

$response = Rag::chat('How does embedding work?')
    ->withSystemPrompt('You are a helpful AI expert in machine learning.')
    ->send();
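
These options can also be combined. The chain below is a hypothetical sketch; it assumes the builder methods shown above can be mixed in a single call:

// Hypothetical combination: tenant-scoped retrieval, a per-user session,
// and a custom system prompt in one chain.
$response = Rag::scope('tenant', $tenantId)
    ->session($userId)
    ->withSystemPrompt('Answer as a friendly support agent.')
    ->chat('How do I reset my password?')
    ->send();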

4. API Integration

Use the REST API in your frontend:

# POST /api/ragent/chat
curl -X POST http://localhost:8000/api/ragent/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "How do I train the chatbot?",
    "session_id": "optional-session-uuid"
  }'

Configuration

Key options in config/ragent.php:

return [
    'enabled' => true,
    
    // LLM Provider: 'openai', 'gemini'
    'llm_provider' => 'openai',
    'model' => 'gpt-4',
    
    // Embedding Settings
    'embedding_model' => 'text-embedding-3-small',
    'embedding_dimensions' => 1536,
    
    // RAG Behavior
    'max_sources' => 5,
    'similarity_threshold' => 0.7,
    'temperature' => 0.7,
    
    // Vector Storage
    'vector_store' => 'database',
    
    // Text Chunking
    'chunking' => [
        'strategy' => 'recursive',
        'chunk_size' => 512,
        'overlap' => 20,
    ],
];
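
To see what chunk_size and overlap control, here is a deliberately simplified, word-based chunker. It is an illustration only: the package's 'recursive' strategy also splits on natural boundaries such as paragraphs and sentences, and the real settings may count tokens rather than words.

// Simplified sliding-window chunking (illustration, not the package's code).
function chunkText(string $text, int $chunkSize = 512, int $overlap = 20): array
{
    $words  = preg_split('/\s+/', trim($text));
    $step   = max(1, $chunkSize - $overlap);
    $chunks = [];

    for ($i = 0; $i < count($words); $i += $step) {
        $chunks[] = implode(' ', array_slice($words, $i, $chunkSize));
    }

    return $chunks;
}

// A 1,000-word document with chunk_size 512 and overlap 20 produces chunks
// starting at words 0, 492 and 984, so adjacent chunks share up to 20 words.
$chunks = chunkText(str_repeat('lorem ipsum ', 500));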

Chat UI Integration

Step 1: Setup (One-Time)

Before using the chat component, run these commands in your Laravel application (not the package):

# 1. Publish migrations and run them
php artisan vendor:publish --tag="ragent-migrations"
php artisan migrate

# 2. Publish configuration
php artisan vendor:publish --tag="ragent-config"

# 3. Publish views (optional - for customizing the component)
php artisan vendor:publish --tag="ragent-views"

# 4. Clear view cache
php artisan view:clear

The chat component is automatically registered by the service provider, so no additional setup is needed.

Step 2: Add Chat to Your Page

Simply include the chat component in any Blade template:

<x-rag-chat 
    title="Documentation Assistant"
    placeholder="Ask about our docs..."
    welcome="Hi! I'm your documentation assistant. What can I help you with?"
/>

The component includes everything: responsive UI, message history, source tracking, and CSS/JS.

Troubleshooting: "Unable to locate a class or view for component [rag-chat]"

If you still get this error after setup, try these steps:

1. Make sure you're in a Laravel application (not the package directory)

2. Clear all caches:

php artisan config:clear
php artisan cache:clear
php artisan view:clear
php artisan optimize:clear

3. Verify API key is set in .env:

OPENAI_API_KEY=your-key
# OR
GEMINI_API_KEY=your-key

4. Check CSRF token is in your layout:

<head>
    <meta name="csrf-token" content="{{ csrf_token() }}">
</head>

5. If still not working, manually publish views:

php artisan vendor:publish --tag="ragent-views" --force
php artisan view:clear

Then verify the component exists at resources/views/vendor/ragent/components/rag-chat.blade.php

Full Integration Example

1. Create a page for the chatbot:

<!-- resources/views/chat.blade.php -->
<x-app-layout>
    <div class="py-12">
        <div class="max-w-7xl mx-auto">
            <h1>Chat with Our AI Assistant</h1>
            <p>Ask questions about our documentation or products.</p>
            
            <!-- Add the chat component -->
            <x-rag-chat />
        </div>
    </div>
</x-app-layout>

2. Add a route:

// routes/web.php
Route::get('/chat', function () {
    return view('chat');
})->middleware(['auth'])->name('chat');

3. Ensure migrations are run:

php artisan migrate

Component Props

Customize the chat component:

<x-rag-chat
    title="Support Bot"
    placeholder="Ask me anything..."
    welcome="Hello! How can I help?"
/>

  • title – header title
  • placeholder – input placeholder text
  • welcome – initial welcome message

Styling & Customization

The chat component includes beautiful default styling, but you can override it in your CSS:

/* Customize colors */
.rag-chat-header {
    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
}

/* Customize dimensions */
.rag-chat-wrapper {
    width: 400px;
    height: 500px;
}

/* Dark theme example */
.rag-chat-container {
    background: #1e1e1e;
    color: #ffffff;
}

API Endpoints

The component automatically uses these API routes:

POST   /api/ragent/chat                      Send a message
GET    /api/ragent/chat/{sessionId}/history  Get conversation history
DELETE /api/ragent/chat/{sessionId}          Clear a session

Example API call:

// Send a message
const response = await fetch('/api/ragent/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        message: 'How do I get started?',
        session_id: 'user-session-123'
    })
});

const data = await response.json();
console.log(data.message);      // AI response
console.log(data.sources);      // Documents used

Custom JavaScript Integration

If you want to build a custom UI, use the API directly:

// Initialize chat session
const sessionId = crypto.randomUUID();

async function sendMessage(message) {
    const response = await fetch('/api/ragent/chat', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content
        },
        body: JSON.stringify({ message, session_id: sessionId })
    });
    
    return response.json();
}

// Get chat history
async function getHistory() {
    const response = await fetch(`/api/ragent/chat/${sessionId}/history`);
    return response.json();
}

// Clear session
async function clearSession() {
    await fetch(`/api/ragent/chat/${sessionId}`, { method: 'DELETE' });
}

Multi-User Support

Sessions are automatically tied to authenticated users:

<!-- The component handles authentication automatically -->
<x-rag-chat />

<!-- Users can only see their own chat history -->
<!-- Scope isolation prevents data leakage -->
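
On the server side you can enforce the same isolation with the facade. A minimal sketch, assuming an arbitrary scope key is accepted (as with the 'tenant' example earlier) and that scope() and session() can be chained:

use Irabbi360\LaravelRagent\Facades\LaravelRagent as Rag;

// Tie both the retrieval scope and the conversation history to the logged-in user.
$userId = auth()->id();

$response = Rag::scope('user', $userId)
    ->session($userId)
    ->chat('What did I ask you yesterday?')
    ->send();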

Performance Tips

  1. Pre-train your chatbot before going live
  2. Use semantic search with a lower similarity threshold for broader results (see the config snippet after this list)
  3. Limit max sources (default: 5) to reduce the context window
  4. Cache embeddings in production to avoid recomputing them
  5. Use Gemini instead of OpenAI to save 98% on API costs
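
For tips 2 and 3, the relevant keys already exist in config/ragent.php (see the Configuration section above). For example:

// In config/ragent.php: trade a little precision for recall, keep prompts small
'similarity_threshold' => 0.6, // lower value = broader, less strict matches
'max_sources' => 3,            // fewer sources = smaller context window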

Providers Supported

Provider        Model              Context Window   Cost (vs OpenAI)
OpenAI          GPT-4              8K               1.00x
OpenAI          GPT-4 Turbo        128K             0.75x
Google Gemini   gemini-1.5-pro     1M               0.02x ✨
Google Gemini   gemini-1.5-flash   1M               0.01x ✨

Gemini provides incredible value with 98% cost savings!

Testing

composer test

Changelog

Please see CHANGELOG for more information on what has changed recently.

Contributing

Please see CONTRIBUTING for details.

Security Vulnerabilities

Please review our security policy on how to report security vulnerabilities.

Credits

License

The MIT License (MIT). Please see License File for more information.