mukundhanmohan / laravel-explainable-ai
Laravel-native AI orchestration and explainability toolkit.
Package info
github.com/mukundhan-mohan/laravel-explainable-ai
pkg:composer/mukundhanmohan/laravel-explainable-ai
Requires
- php: ^8.2
- illuminate/cache: ^11.0
- illuminate/config: ^11.0
- illuminate/console: ^11.0
- illuminate/contracts: ^11.0
- illuminate/database: ^11.0
- illuminate/http: ^11.0
- illuminate/pipeline: ^11.0
- illuminate/queue: ^11.0
- illuminate/support: ^11.0
- symfony/yaml: ^7.0 || ^8.0
Requires (Dev)
- mockery/mockery: ^1.6
- orchestra/testbench: ^9.0
- phpunit/phpunit: ^11.0
README
laravel-explainable-ai is a Laravel-native AI orchestration and explainability toolkit for teams that want more than a raw provider SDK. It wraps prompt templates, structured outputs, heuristic explainability metadata, audit logging, cost tracking, caching, queue dispatch, and test helpers behind a Laravel-friendly API.
The package is designed around a simple fluent developer experience:
use LaravelExplainableAI\Facades\AI;

$result = AI::prompt('summarize_feedback')
    ->input(['feedback' => $text])
    ->withExplanation()
    ->withConfidence()
    ->provider('openai')
    ->model('gpt-4.1-mini')
    ->execute();
Typical normalized output:
[
'content' => 'Escalate this complaint to support.',
'structured_output' => [
'summary' => 'Customer is unhappy',
'next_action' => 'Escalate this complaint to support.',
],
'explanation' => [
'summary' => 'Heuristic rationale based on refund, complaint, support.',
'factors' => ['refund', 'complaint', 'support', 'schema-compliant structured output'],
'confidence' => 0.86,
'risk_flags' => ['customer_churn_risk'],
'decision_trace' => [...],
],
'usage' => [
'input_tokens' => 100,
'output_tokens' => 25,
'total_tokens' => 125,
'estimated_cost' => 0.00008,
'currency' => 'USD',
],
'provider' => 'openai',
'model' => 'gpt-4.1-mini',
]
Why explainability matters
Most application teams do not need fake claims about model internals. They need operational clarity:
- What prompt version was used
- What evidence patterns influenced the answer
- How confident the system appears to be
- Which risk flags should trigger human review
- How much the request cost
- Whether the answer was cached
This package frames explainability as heuristic rationale, decision trace, and confidence estimation built from observable request and response data.
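To make the idea concrete, here is a minimal sketch of keyword-based factor and risk-flag extraction in the spirit of this heuristic layer. The keyword lists and flag names are illustrative assumptions, not the package's actual values or its `RiskFlagger` implementation:

```php
<?php
// Sketch of keyword-based risk flagging built from observable input data.
// Keyword lists and flag names below are illustrative assumptions only.

/**
 * Scan input text for operational risk signals.
 *
 * @param string $text Raw user input (e.g. customer feedback).
 * @return array{factors: string[], risk_flags: string[]}
 */
function flagRisks(string $text): array
{
    // Hypothetical mapping of flags to trigger keywords.
    $signals = [
        'customer_churn_risk' => ['refund', 'cancel', 'complaint'],
        'escalation_risk'     => ['lawyer', 'legal', 'escalate'],
    ];

    $factors = [];
    $flags = [];
    $haystack = mb_strtolower($text);

    foreach ($signals as $flag => $keywords) {
        foreach ($keywords as $keyword) {
            if (str_contains($haystack, $keyword)) {
                $factors[] = $keyword;   // evidence pattern observed
                $flags[$flag] = true;    // flag fires at most once
            }
        }
    }

    return ['factors' => $factors, 'risk_flags' => array_keys($flags)];
}
```

The point is that everything in the explanation is derived from observable text, so the rationale can be audited without any claim about model internals.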
Features
- Provider-agnostic orchestration via contracts
- Fluent AI facade and request builder
- YAML prompt templates in resources/prompts/*.yaml
- Structured output parsing and validation
- Heuristic explainability metadata
- Confidence scoring and risk flagging
- Database audit logging and cost tracking
- Cache-aware execution
- Queue-based async execution
- Middleware safeguards for rate limiting, PII redaction, and safety checks
- AI::fake() testing support
- Artisan install, prompt generation, and prompt test commands
Requirements
- PHP 8.2+
- Laravel 11+
Installation
Install the package through Composer:
composer require mukundhanmohan/laravel-explainable-ai
Publish the package assets:
php artisan ai:install
Or publish assets individually:
php artisan vendor:publish --tag=explainable-ai-config
php artisan vendor:publish --tag=explainable-ai-migrations
php artisan vendor:publish --tag=explainable-ai-prompts
php artisan migrate
Add provider credentials to .env:
OPENAI_API_KEY=your-key
OPENAI_MODEL=gpt-4.1-mini
EXPLAINABLE_AI_PROVIDER=openai
Configuration
The main package config lives at config/explainable-ai.php. It includes:
- Default provider selection
- Provider credentials and default models
- Prompt search paths
- Cache configuration
- Queue configuration
- Logging flags
- Explainability heuristics
- Rate limits
- Pricing tables for token cost estimation
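A plausible shape for that config file, assembled from the options listed above. The key names here are assumptions for illustration; consult the published config/explainable-ai.php for the actual structure:

```php
<?php
// Illustrative sketch of config/explainable-ai.php based on the options
// described above. Key names are assumptions, not the published file.

return [
    'default' => env('EXPLAINABLE_AI_PROVIDER', 'openai'),

    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'model'   => env('OPENAI_MODEL', 'gpt-4.1-mini'),
        ],
    ],

    // Where YAML prompt templates are searched for.
    'prompt_paths' => [resource_path('prompts')],

    'cache'   => ['enabled' => true, 'ttl' => 3600],
    'queue'   => ['connection' => null, 'queue' => 'default'],
    'logging' => ['requests' => true, 'costs' => true],
];
```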
Quick start
Basic prompt execution
use LaravelExplainableAI\Facades\AI;

$response = AI::prompt('summarize_feedback')
    ->input(['feedback' => 'This is my third complaint and I want a refund.'])
    ->provider('openai')
    ->model('gpt-4.1-mini')
    ->execute();

$response->toArray();
Explainability and confidence
$response = AI::prompt('summarize_feedback')
    ->input(['feedback' => $feedback])
    ->withExplanation()
    ->withConfidence()
    ->execute();

$summary = $response->explanation?->summary;
$confidence = $response->explanation?->confidence;
Structured output
You can rely on the schema from the prompt template or override it at runtime:
$response = AI::prompt('classify_urgency')
    ->input(['message' => $message])
    ->schema([
        'type' => 'object',
        'required' => ['urgency', 'rationale'],
        'properties' => [
            'urgency' => ['type' => 'string'],
            'rationale' => ['type' => 'string'],
        ],
    ])
    ->execute();

$urgency = $response->structuredOutput['urgency'];
Async execution
AI::prompt('recommend_next_action')
    ->input(['case' => $case])
    ->provider('openai')
    ->async()
    ->execute();
This dispatches LaravelExplainableAI\Queue\ExecuteAIRequestJob using the package queue settings.
Prompt templates
Prompt files live in resources/prompts/*.yaml.
Example:
name: summarize_feedback
version: 1.0.0
system: |
  You summarize customer feedback for support operations.
  Provide a concise recommendation. If a schema is provided, return valid JSON only.
user: |
  Summarize this feedback and recommend the next step:
  {{ feedback }}
schema:
  type: object
  required:
    - summary
    - next_action
  properties:
    summary:
      type: string
    next_action:
      type: string
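A minimal sketch of how {{ placeholder }} substitution in such a template could work. This is only an illustration of the idea; the package's actual renderer may handle escaping and missing keys differently:

```php
<?php
// Minimal sketch of {{ placeholder }} substitution for prompt templates.
// Illustrative only; the package's real renderer may differ.

function renderTemplate(string $template, array $input): string
{
    return preg_replace_callback(
        '/\{\{\s*(\w+)\s*\}\}/',
        // Replace {{ name }} with the matching input value, or leave the
        // placeholder untouched when no value was provided.
        fn (array $m) => array_key_exists($m[1], $input)
            ? (string) $input[$m[1]]
            : $m[0],
        $template
    );
}
```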
Included examples:
- resources/prompts/summarize_feedback.yaml
- resources/prompts/classify_urgency.yaml
- resources/prompts/recommend_next_action.yaml
Explainability model
The package does not claim access to model internals. Instead it produces a practical explanation layer with:
- Summary: a concise heuristic rationale
- Factors: keyword and response-shape signals observed in input/output
- Confidence: a heuristic score derived from completeness, schema compliance, and provider signals when available
- Risk flags: keyword-based operational flags like churn or escalation risk
- Decision trace: a step-by-step trace of prompt rendering, provider completion, output parsing, and heuristic scoring
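As a sketch of how such a heuristic confidence score might combine completeness, schema compliance, and provider signals. The weights and signals below are assumptions for illustration, not the formula used by the package's ConfidenceScorer:

```php
<?php
// Illustrative heuristic confidence score. Weights and signals are
// assumptions, not the package's actual ConfidenceScorer formula.

function scoreConfidence(array $structured, array $requiredKeys, bool $providerFinishedCleanly): float
{
    // Completeness: fraction of required keys present with non-empty values.
    $present = 0;
    foreach ($requiredKeys as $key) {
        if (isset($structured[$key]) && $structured[$key] !== '') {
            $present++;
        }
    }
    $completeness = $requiredKeys === [] ? 1.0 : $present / count($requiredKeys);
    $schemaCompliant = $present === count($requiredKeys);

    // Weighted blend of the three signals, clamped to [0, 1].
    $score = 0.6 * $completeness
        + 0.3 * ($schemaCompliant ? 1.0 : 0.0)          // full schema compliance
        + 0.1 * ($providerFinishedCleanly ? 1.0 : 0.0); // clean provider stop signal

    return round(min(1.0, max(0.0, $score)), 2);
}
```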
Core implementation lives in:
- src/Explainability/ExplainabilityEngine.php
- src/Explainability/ConfidenceScorer.php
- src/Explainability/RiskFlagger.php
Logging and observability
The package stores audit and cost data in four tables:
- ai_requests
- ai_responses
- ai_cost_records
- ai_prompt_versions
Logging classes:
Cache behavior
Responses are cached using a key derived from:
- Prompt name and version
- Input payload
- Provider and model
- Schema
- Explainability flags
- Rendered prompt content
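A minimal sketch of how a key could be derived by hashing those components together. The key prefix and hash inputs here are assumptions; the package's actual key format may differ:

```php
<?php
// Illustrative cache-key derivation from the components listed above.
// Prefix and encoding are assumptions, not the package's actual format.

function cacheKey(
    string $promptName,
    string $promptVersion,
    array $input,
    string $provider,
    string $model,
    ?array $schema,
    array $flags,
    string $renderedPrompt
): string {
    // Serialize every component that affects the response.
    $payload = json_encode([
        $promptName,
        $promptVersion,
        $input,
        $provider,
        $model,
        $schema,
        $flags,
        $renderedPrompt,
    ]);

    // Any change in prompt, input, provider/model, schema, flags, or the
    // rendered text yields a different key, so stale entries never match.
    return 'explainable-ai:' . hash('sha256', $payload);
}
```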
Cache implementation:
Testing
The package includes AI::fake() so application tests can avoid provider calls:
use LaravelExplainableAI\Facades\AI;

AI::fake([
    [
        'content' => 'Escalate this complaint to support.',
        'provider' => 'openai',
        'model' => 'gpt-4.1-mini',
    ],
]);

$response = AI::prompt('summarize_feedback')
    ->input(['feedback' => 'Repeated refund complaint'])
    ->execute();
Core test coverage is included for:
- Prompt rendering
- OpenAI normalization
- Output parsing and repair
- Explainability generation
- Cache hits
- Async dispatch
- Facade fake support
Tests live under tests/.
Artisan commands
- php artisan ai:install
- php artisan ai:prompt make {name}
- php artisan ai:test {prompt}
Architecture overview
Main runtime flow:
1. AI::prompt(...) builds an AIRequest
2. AIManager resolves provider/model defaults or dispatches async work
3. AIExecutionPipeline renders the prompt and applies middleware
4. Cache is checked
5. The provider executes the request
6. Structured output is parsed and validated
7. Explainability metadata is generated
8. Audit and cost data are stored
9. A normalized AIResponse is returned
Key entry points:
Extensibility
Contracts are defined for:
- src/Contracts/AIProviderInterface.php
- src/Contracts/PromptRepositoryInterface.php
- src/Contracts/ExplanationEngineInterface.php
- src/Contracts/OutputParserInterface.php
Placeholder providers are included for:
- Anthropic
- Gemini
- Ollama
The OpenAI path is the MVP implementation. The others are ready for provider-specific adapters.
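To show the shape of such an adapter, here is a self-contained sketch of a provider behind a contract. The real contract lives in src/Contracts/AIProviderInterface.php and its method signatures are not shown in this README, so the local interface, method name, and return shape below are illustrative assumptions only:

```php
<?php
// Sketch of a custom provider adapter. The interface below is a local
// stand-in for the package's AIProviderInterface, whose actual method
// signatures are not documented here; everything in this block is an
// illustrative assumption.

interface IllustrativeProviderContract
{
    /** Execute a rendered prompt and return a normalized response array. */
    public function complete(string $prompt, array $options): array;
}

final class EchoProvider implements IllustrativeProviderContract
{
    public function complete(string $prompt, array $options): array
    {
        // A trivial provider that echoes the prompt back. A real adapter
        // would call the vendor's HTTP API and normalize its response
        // into the package's content/provider/model/usage shape.
        return [
            'content'  => $prompt,
            'provider' => 'echo',
            'model'    => $options['model'] ?? 'echo-1',
        ];
    }
}
```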
Roadmap
- Provider-specific adapters for Anthropic, Gemini, and Ollama
- Streaming responses
- Conversation memory primitives
- Richer schema support
- Response review workflows
- Per-tenant policy layers
- Telemetry integrations
Local development status
The package source and tests are present in this repository. PHP syntax checks passed across src, tests, config, and database. Full PHPUnit execution was not run in this workspace because Composer dependencies are not installed yet.