mindwave / mindwave
Production AI utilities for Laravel: auto-fit prompts, streaming, tracing, and context discovery.
Requires
- php: ^8.2|^8.3|^8.4
- ext-zip: *
- brandembassy/file-type-detector: ^2.3
- guzzlehttp/guzzle: ^7.5
- helgesverre/mistral: ^2.0
- helgesverre/telefonkatalog: ^1.0
- hkulekci/qdrant: ^0.4.1
- illuminate/contracts: ^11.0
- laravel/prompts: ^0.3
- mozex/anthropic-php: ^1.0
- open-telemetry/exporter-otlp: ^1.0
- open-telemetry/sdk: ^1.0
- openai-php/client: ^0.10
- probots-io/pinecone-php: ^1.1.0
- smalot/pdfparser: ^2.5
- spatie/laravel-package-tools: ^1.14.0
- spatie/regex: ^3.1
- symfony/dom-crawler: ^6.2|^7.0
- teamtnt/laravel-scout-tntsearch-driver: ^13.0|^14.0
- teamtnt/tntsearch: ^3.0
- timkley/weaviate-php: ^0.10.0
- yethee/tiktoken: ^0.5
Requires (Dev)
- larastan/larastan: ^3.7
- laravel/pint: ^1.18
- laravel/telescope: ^5.15
- nunomaduro/collision: ^8.0
- orchestra/testbench: ^9.0
- pestphp/pest: ^3.0
- pestphp/pest-plugin-arch: ^3.0
- pestphp/pest-plugin-laravel: ^3.0
- phpro/grumphp: ^2.18
- phpstan/extension-installer: ^1.1
- phpstan/phpstan-deprecation-rules: ^2.0
- phpstan/phpstan-phpunit: ^2.0
This package is auto-updated.
Last update: 2026-03-10 18:29:43 UTC
README
Mindwave: Production AI Utilities for Laravel
The working developer's AI toolkit: long prompts, streaming, tracing, and context discovery made simple.
v1.0.0 Released - All 4 pillars complete with 1300+ tests.
Experimental - This package is under active development. APIs may change. Use in production at your own risk.
What is Mindwave?
Mindwave is a Laravel package that provides production-grade AI utilities for building LLM-powered features. Unlike complex agent frameworks, Mindwave focuses on practical tools that Laravel developers actually need:
- ✅ Auto-fit long prompts to any model's context window
- ✅ Stream LLM responses with 3 lines of code (SSE/EventSource)
- ✅ OpenTelemetry tracing with database storage for costs, tokens, and performance
- ✅ Ad-hoc context discovery from your database/CSV using TNTSearch
Why Mindwave?
Not another agent framework. Just batteries-included utilities for shipping AI features fast.
```php
// Write long prompts; Mindwave auto-fits to model limits
Mindwave::prompt()
    ->section('system', $instructions)
    ->section('context', $largeDocument, priority: 50, shrinker: 'summarize')
    ->section('user', $question)
    ->fit() // Auto-trims to context window
    ->run();

// Stream responses in 3 lines (backend)
return Mindwave::stream($prompt)->respond();

// View traces and costs
$traces = MindwaveTrace::expensive(0.10)->with('spans')->get();

// Pull context from your DB on-the-fly
Mindwave::prompt()
    ->context(TntSearchSource::fromEloquent(User::query(), fn ($u) => "Name: {$u->name}"))
    ->ask('Who has Laravel expertise?');
```
Installation
Install via Composer:
```bash
composer require mindwave/mindwave
```
Publish the config files:
```bash
php artisan vendor:publish --tag="mindwave-config"
```
Run migrations for tracing (optional but recommended):
```bash
php artisan migrate
```
Quick Start
1. Basic LLM Chat
```php
use Mindwave\Mindwave\Facades\Mindwave;

$response = Mindwave::llm()->chat([
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user', 'content' => 'Explain Laravel in one sentence.'],
]);

echo $response->content;
```
2. Streaming Responses
Backend:
```php
use Illuminate\Http\Request;
use Mindwave\Mindwave\Facades\Mindwave;

Route::get('/chat', function (Request $request) {
    return Mindwave::stream($request->input('message'))
        ->model('gpt-4')
        ->respond();
});
```
Frontend:
```js
const stream = new EventSource('/chat?message=' + encodeURIComponent(question));
stream.onmessage = e => output.textContent += e.data;
stream.addEventListener('done', () => stream.close());
```
3. Auto-Fit Long Prompts
```php
use Mindwave\Mindwave\Facades\Mindwave;

// Automatically handles token limits
Mindwave::prompt()
    ->reserveOutputTokens(500)
    ->section('system', 'You are an expert analyst', priority: 100)
    ->section('documentation', $longDocContent, priority: 50, shrinker: 'summarize')
    ->section('history', $conversationHistory, priority: 75)
    ->section('user', $userQuestion, priority: 100)
    ->fit() // Trims to the model's context window
    ->run();
```
4. View Costs & Traces
```php
use Mindwave\Mindwave\Observability\Models\Trace;
use Mindwave\Mindwave\Observability\Models\Span;

// Find expensive traces
$expensive = Trace::where('estimated_cost', '>', 0.10)
    ->with('spans')
    ->orderByDesc('created_at')
    ->get();

// Find slow LLM calls
$slow = Span::where('operation_name', 'chat')
    ->where('duration', '>', 5_000_000_000) // 5 seconds in nanoseconds
    ->with('trace')
    ->get();

// Daily cost summary
$dailyCosts = Trace::selectRaw('
        DATE(created_at) as date,
        COUNT(*) as total_traces,
        SUM(estimated_cost) as total_cost,
        SUM(total_input_tokens) as input_tokens,
        SUM(total_output_tokens) as output_tokens
    ')
    ->groupBy('date')
    ->orderByDesc('date')
    ->get();
```
5. Ad-Hoc Context Discovery
```php
use Mindwave\Mindwave\Context\Sources\TntSearchSource;
use Mindwave\Mindwave\Facades\Mindwave;

// Search your database on-the-fly
Mindwave::prompt()
    ->context(
        TntSearchSource::fromEloquent(
            Product::where('active', true),
            fn ($p) => "Product: {$p->name}, Price: {$p->price}"
        )
    )
    ->ask('What products under $50 do you have?');

// Or from CSV files
Mindwave::prompt()
    ->context(TntSearchSource::fromCsv('data/knowledge-base.csv'))
    ->ask('How do I reset my password?');
```
Core Features
🧩 Prompt Composer
Automatically manage context windows with priority-based section trimming:
- Token budgeting - Reserve tokens for output, auto-fit sections
- Smart shrinkers - Summarize, truncate, or compress content
- Priority system - Keep important sections, trim less critical ones
- Multi-model support - Works with GPT-4, Claude, Mistral, etc.
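Conceptually, the fit step works like a budget loop: sections are ranked by priority, and the least important ones are trimmed first until the total fits. A minimal sketch of the idea in plain PHP (an illustration only, not Mindwave's internal implementation; word count stands in for real tokenization):

```php
// Illustrative only: trims the lowest-priority sections until the prompt
// fits a token budget. Word count stands in for real tokenization.
function fitSections(array $sections, int $budget): array
{
    // Trim candidates first: ascending priority.
    usort($sections, fn ($a, $b) => $a['priority'] <=> $b['priority']);

    $total = array_sum(array_map(
        fn ($s) => str_word_count($s['content']),
        $sections
    ));

    foreach ($sections as &$section) {
        if ($total <= $budget) {
            break;
        }
        $words = str_word_count($section['content'], 1);
        $excess = min(count($words), $total - $budget);
        // Drop words from the end of this section.
        $section['content'] = implode(' ', array_slice($words, 0, count($words) - $excess));
        $total -= $excess;
    }
    unset($section);

    // Highest-priority sections first in the result.
    usort($sections, fn ($a, $b) => $b['priority'] <=> $a['priority']);

    return $sections;
}
```

In the real composer, a shrinker such as `summarize` replaces blunt truncation, but the priority ordering follows the same principle.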
🌊 Streaming (SSE)
Production-ready Server-Sent Events streaming:
- 3-line setup - Backend and frontend
- Proper headers - Works with Nginx/Apache out of the box
- Connection monitoring - Handles client disconnects
- Error handling - Graceful failure and retry
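Under the hood, SSE is a simple text framing over a long-lived HTTP response: each event is one or more `data:` lines followed by a blank line, served as `text/event-stream`. A hand-rolled framing helper for illustration (not part of Mindwave's API):

```php
// Formats one Server-Sent Event frame. Multi-line payloads must be
// split into one "data:" line each, per the SSE specification.
function sseFrame(string $data, ?string $event = null): string
{
    $frame = '';
    if ($event !== null) {
        $frame .= "event: {$event}\n";
    }
    foreach (explode("\n", $data) as $line) {
        $frame .= "data: {$line}\n";
    }
    return $frame . "\n"; // blank line terminates the event
}

// A streaming endpoint typically sends these headers; Nginx buffering
// is the usual culprit when events arrive all at once:
//   Content-Type: text/event-stream
//   Cache-Control: no-cache
//   X-Accel-Buffering: no
```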
📊 OpenTelemetry Tracing
Industry-standard observability with GenAI semantic conventions:
- Automatic tracing - All LLM calls tracked (zero configuration)
- Database storage - Query traces via Eloquent models
- OTLP export - Send to Jaeger, Grafana, Datadog, Honeycomb, etc.
- Cost tracking - Automatic cost estimation per call
- Token usage - Input/output/total tokens tracked
- PII protection - Configurable message capture and redaction
- Artisan commands - Export, prune, and analyze traces
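Cost estimation is essentially token counts multiplied by per-model rates. A hedged sketch of the arithmetic (the rate table below uses placeholder figures, not Mindwave's built-in pricing; check your provider's current price list):

```php
// Illustrative cost estimator. Rates are USD per 1M tokens and are
// placeholder values -- not Mindwave's actual pricing table.
function estimateCostUsd(string $model, int $inputTokens, int $outputTokens): float
{
    $rates = [
        // [input, output] per 1M tokens (hypothetical figures)
        'gpt-4-turbo' => [10.00, 30.00],
        'mistral-large-latest' => [4.00, 12.00],
    ];
    [$in, $out] = $rates[$model] ?? [0.0, 0.0];

    return ($inputTokens / 1_000_000) * $in
         + ($outputTokens / 1_000_000) * $out;
}
```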
Quick Start:
```php
use Mindwave\Mindwave\Observability\Models\Span;

// 1. Enable tracing in .env:
// MINDWAVE_TRACING_ENABLED=true

// 2. LLM calls are automatically traced
$response = Mindwave::llm()->generateText('Hello!');

// 3. Query traces
$expensive = Span::where('cost_usd', '>', 0.10)
    ->orderByDesc('cost_usd')
    ->get();
```
📖 Complete Tracing Guide - Querying, cost analysis, custom spans, OTLP setup
📐 Architecture Documentation - Technical deep dive
🔍 TNTSearch Context Discovery
Pull context from your application data without complex RAG setup:
- No infrastructure - Pure PHP, no external services
- Multiple sources - Eloquent, arrays, CSV files, VectorStores
- Fast indexing - Ephemeral indexes with automatic cleanup
- BM25 ranking - Industry-standard relevance scoring
- Auto-query extraction - Automatically extracts search terms from user messages
- OpenTelemetry tracing - Track search performance and results
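For intuition, BM25 combines term frequency (with saturation) and inverse document frequency, normalized by document length. A compact, self-contained scorer (illustrative only, not TNTSearch's implementation):

```php
// Minimal BM25 for illustration. $docs is an array of token arrays;
// $docIndex selects the document to score against $query terms.
function bm25Score(array $docs, int $docIndex, array $query, float $k1 = 1.2, float $b = 0.75): float
{
    $n = count($docs);
    $avgLen = array_sum(array_map('count', $docs)) / $n;
    $doc = $docs[$docIndex];
    $len = count($doc);
    $tf = array_count_values($doc); // term -> frequency in this doc

    $score = 0.0;
    foreach ($query as $term) {
        // Document frequency: how many docs contain the term.
        $df = count(array_filter($docs, fn ($d) => in_array($term, $d, true)));
        if ($df === 0 || !isset($tf[$term])) {
            continue;
        }
        // Standard BM25 idf with +1 smoothing to keep it non-negative.
        $idf = log(($n - $df + 0.5) / ($df + 0.5) + 1);
        $f = $tf[$term];
        $score += $idf * ($f * ($k1 + 1)) / ($f + $k1 * (1 - $b + $b * $len / $avgLen));
    }
    return $score;
}
```

Shorter documents that match a rare term score higher, which is why BM25 tends to surface the most specific rows first.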
Quick Example:
```php
use Mindwave\Mindwave\Context\Sources\TntSearch\TntSearchSource;
use Mindwave\Mindwave\Context\ContextPipeline;

// Search Eloquent models
$userSource = TntSearchSource::fromEloquent(
    User::where('active', true),
    fn ($u) => "Name: {$u->name}, Skills: {$u->skills}"
);

// Search CSV files
$docsSource = TntSearchSource::fromCsv('data/knowledge-base.csv');

// Combine multiple sources
$pipeline = (new ContextPipeline)
    ->addSource($userSource)
    ->addSource($docsSource)
    ->deduplicate() // Remove duplicates
    ->rerank();     // Sort by relevance

// Use in prompt (query auto-extracted from user message)
Mindwave::prompt()
    ->context($pipeline, limit: 5)
    ->section('user', 'Who has Laravel expertise?')
    ->run();
```
📖 Complete Context Discovery Guide - All source types, pipelines, advanced features
Configuration
LLM Configuration
```php
// config/mindwave-llm.php
return [
    'default' => env('MINDWAVE_LLM_DRIVER', 'openai'),

    'llms' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'model' => env('OPENAI_MODEL', 'gpt-4-turbo'),
            'max_tokens' => 4096,
            'temperature' => 0.7,
        ],
        'mistral' => [
            'api_key' => env('MISTRAL_API_KEY'),
            'model' => env('MISTRAL_MODEL', 'mistral-large-latest'),
        ],
    ],
];
```
Tracing Configuration
```php
// config/mindwave-tracing.php
return [
    'enabled' => env('MINDWAVE_TRACING_ENABLED', true),

    'database' => [
        'enabled' => true, // Store in database
    ],

    'otlp' => [
        'enabled' => env('MINDWAVE_TRACE_OTLP_ENABLED', false),
        'endpoint' => env('OTEL_EXPORTER_OTLP_ENDPOINT', 'http://localhost:4318'),
    ],

    'capture_messages' => false, // PII protection
    'retention_days' => 30,
];
```
Artisan Commands
```bash
# Export traces to CSV/JSON
php artisan mindwave:export-traces --since=yesterday --format=csv

# Prune old traces
php artisan mindwave:prune-traces --older-than=30days

# View trace statistics
php artisan mindwave:trace-stats

# View TNTSearch index statistics
php artisan mindwave:index-stats

# Clear old TNTSearch indexes (default: 24 hours)
php artisan mindwave:clear-indexes --ttl=24 --force
```
Use Cases
💬 AI-Powered Customer Support
```php
Mindwave::prompt()
    ->section('system', 'You are a helpful support agent')
    ->context(TntSearchSource::fromEloquent(
        FAQ::published(),
        fn ($f) => "Q: {$f->question}\nA: {$f->answer}"
    ))
    ->section('history', $conversation)
    ->section('user', $userMessage)
    ->fit()
    ->run();
```
📄 Document Q&A
```php
Mindwave::prompt()
    ->context(TntSearchSource::fromCsv('uploads/company-handbook.csv'))
    ->ask('What is the vacation policy?');
```
🔍 Data Analysis
```php
Mindwave::prompt()
    ->context(TntSearchSource::fromEloquent(
        Order::where('created_at', '>', now()->subMonth()),
        fn ($o) => "Order #{$o->id}: {$o->total}, Status: {$o->status}"
    ))
    ->ask('Summarize sales trends from last month');
```
Supported LLM Providers
- ✅ OpenAI (GPT-4, GPT-3.5, etc.)
- ✅ Mistral AI (Mistral Large, Small, etc.)
- ✅ Anthropic (Claude 3.5 Sonnet, Opus, Haiku, etc.)
- 🔄 Google Gemini (Coming soon)
Supported Vector Stores
- ✅ Qdrant - High-performance vector database with UUID-based IDs
- ✅ Weaviate - Open-source vector search engine
- ✅ Pinecone - Managed vector database service
- ✅ In-Memory - For testing and development
- ✅ File-based - JSON file storage for simple use cases
Vector Store Configuration:
All vector stores now support configurable embedding dimensions. Set the dimension in your `.env` file to match your embedding model:

```dotenv
# Common values: 1536 (OpenAI ada-002, text-embedding-3-small), 3072 (OpenAI text-embedding-3-large)
MINDWAVE_QDRANT_DIMENSIONS=1536
MINDWAVE_WEAVIATE_DIMENSIONS=1536
MINDWAVE_PINECONE_DIMENSIONS=1536
```
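The dimension must match your embedding model exactly; vectors of the wrong length are typically rejected at upsert time. A simple guard you might add in your own code before writing vectors (a hypothetical helper, not part of Mindwave's API):

```php
// Hypothetical guard against embedding/store dimension mismatches.
function assertDimensions(array $vector, int $expected): void
{
    if (count($vector) !== $expected) {
        throw new InvalidArgumentException(sprintf(
            'Embedding has %d dimensions, vector store expects %d. ' .
            'Check MINDWAVE_*_DIMENSIONS against your embedding model.',
            count($vector),
            $expected
        ));
    }
}
```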
Breaking Changes in v2.0
⚠️ Important: Version 2.0 introduces breaking changes:
- Removed `OPENAI_EMBEDDING_LENGTH` constant - Embedding dimensions are now configured per vector store in `config/mindwave-vectorstore.php` and environment variables.
- Qdrant ID generation changed - Now uses UUID strings instead of auto-incrementing integers. Existing Qdrant collections will need to be recreated.
- Weaviate dependency moved - `timkley/weaviate-php` is now in `require` instead of `require-dev` to prevent production crashes.
Migration Guide:
```bash
# 1. Update your .env file with dimension settings
MINDWAVE_QDRANT_DIMENSIONS=1536
MINDWAVE_WEAVIATE_DIMENSIONS=1536

# 2. Update your config (if you published it)
php artisan vendor:publish --tag="mindwave-config" --force

# 3. Rebuild Qdrant collections (if using Qdrant)
#    The new UUID-based IDs are incompatible with old integer IDs
```
Documentation
Full documentation available at mindwave.no (coming soon).
For now, see:
- PIVOT_PLAN.md - Implementation roadmap
- TRACING_ARCHITECTURE.md - OpenTelemetry details
Roadmap
v1.0 (December 2025) - RELEASED
- LLM abstraction (OpenAI, Anthropic, Mistral)
- Prompt Composer with auto-fitting
- Streaming SSE support
- OpenTelemetry tracing + database storage
- TNTSearch context discovery
- Laravel Telescope integration
- 500+ tests passing
v1.1 (Q1 2026)
- More LLM providers (Cohere, Groq)
- Advanced shrinkers (semantic compression)
- Cost budgets and alerts
- Grafana dashboard templates
v2.0 (Q2 2026)
- Multi-modal support (images, audio)
- Prompt testing framework
- A/B testing utilities
- Batch processing
Credits
- Helge Sverre - Creator
- OpenAI PHP Client - OpenAI integration
- TeamTNT/TNTSearch - Full-text search
- OpenTelemetry PHP - Observability
- Tiktoken PHP - Token counting
License
The MIT License (MIT). Please see License File for more information.
