llm-speak / open-router
A Laravel package for integrating OpenRouter into LLMSpeak
Requires
- php: ^8.2
LLMSpeak OpenRouter is a Laravel package that provides a fluent, Laravel-native interface for integrating with OpenRouter's unified AI model gateway. Built as part of the LLMSpeak ecosystem, it offers seamless access to 100+ AI models from providers like Anthropic, OpenAI, Google, Meta, and more through a single, consistent API.
Note: This package is part of the larger LLMSpeak ecosystem. For universal provider switching and standardized interfaces, check out the LLMSpeak Core package.
Features
- 🌐 Multi-Provider Access: Access 100+ models from Anthropic, OpenAI, Google, Meta, Mistral, and more
- 🚀 Laravel Native: Full Laravel integration with automatic service discovery
- 🔧 Fluent Interface: Expressive request builders with method chaining
- 📊 Laravel Data: Powered by Spatie Laravel Data for robust data validation
- 🛠️ Tool Support: Complete function calling capabilities with parallel execution
- 🧠 Reasoning Mode: OpenRouter's unique "thinking" tokens for enhanced reasoning
- 📈 Log Probabilities: Advanced probability analysis and token confidence scoring
- 🎛️ Advanced Sampling: Fine-grained control over model behavior with multiple sampling methods
- 💨 Streaming: Real-time streaming responses
- 🎯 Type Safety: Full PHP 8.2+ type declarations and IDE support
- 🔐 Secure: Built-in API key management and request validation
Get Started
Requires PHP 8.2+ and Laravel 10.x/11.x/12.x
Install the package via Composer:
```bash
composer require llm-speak/open-router
```
The package will automatically register itself via Laravel's package discovery.
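For reference, Laravel's package discovery works by reading the `extra.laravel` block of the package's `composer.json`, so no manual provider registration is needed. A sketch of what that registration typically looks like (the exact provider class name here is an assumption, not taken from the package source):

```json
{
    "extra": {
        "laravel": {
            "providers": [
                "LLMSpeak\\OpenRouter\\OpenRouterServiceProvider"
            ]
        }
    }
}
```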
Environment Configuration
Add your OpenRouter API key to your .env file:

```env
OPENROUTER_API_KEY=your_openrouter_api_key_here
```
Get your API key from OpenRouter.ai.
Usage
Basic Request
The simplest way to chat with any AI model through OpenRouter:
```php
use LLMSpeak\OpenRouter\OpenRouterCompletionsRequest;

$request = new OpenRouterCompletionsRequest(
    model: 'anthropic/claude-3.5-sonnet',
    messages: [
        ['role' => 'user', 'content' => 'Hello! What can you help me with?']
    ]
);

$response = $request->post();

echo $response->getTextContent(); // "Hello! I'm here to help..."
```
Model Selection
OpenRouter provides access to 100+ models. Choose the right model for your use case:
```php
// High-quality reasoning models
$request = new OpenRouterCompletionsRequest(
    model: 'anthropic/claude-3.5-sonnet',
    messages: $messages
);

// Fast, cost-effective models
$request = new OpenRouterCompletionsRequest(
    model: 'anthropic/claude-3.5-haiku',
    messages: $messages
);

// Cutting-edge experimental models
$request = new OpenRouterCompletionsRequest(
    model: 'openai/gpt-4o',
    messages: $messages
);

// Open-source models
$request = new OpenRouterCompletionsRequest(
    model: 'meta-llama/llama-3.1-70b-instruct',
    messages: $messages
);

// Specialized models
$request = new OpenRouterCompletionsRequest(
    model: 'google/gemini-pro-1.5',
    messages: $messages
);
```
Fluent Request Building
Build complex requests using the fluent interface:
```php
use LLMSpeak\OpenRouter\OpenRouterCompletionsRequest;

// Note: on PHP < 8.4, chaining directly on `new` requires wrapping parentheses.
$request = (new OpenRouterCompletionsRequest(
    model: 'anthropic/claude-3.5-sonnet',
    messages: [
        ['role' => 'user', 'content' => 'Write a creative story about time travel']
    ]
))
    ->setMaxTokens(2000)
    ->setTemperature(0.8)
    ->setTopP(0.9)
    ->setTopK(50)
    ->setFrequencyPenalty(0.1)
    ->setPresencePenalty(0.1);

$response = $request->post();

// Access response properties
echo $response->id;               // chat-completion-abc123
echo $response->model;            // anthropic/claude-3.5-sonnet
echo $response->getTotalTokens(); // 1850
echo $response->getTextContent(); // Generated story content
```
Batch Configuration
Set multiple parameters at once:
```php
$request = (new OpenRouterCompletionsRequest(
    model: 'openai/gpt-4o',
    messages: $conversation
))->setMultiple([
    'maxTokens' => 1500,
    'temperature' => 0.7,
    'topP' => 0.95,
    'frequencyPenalty' => 0.2,
    'presencePenalty' => 0.1,
    'stop' => ['Human:', 'Assistant:'],
    'seed' => 12345,
    'user' => 'user_123'
]);
```
Tool Calling
Enable models to use external functions and tools:
```php
$tools = [
    [
        'type' => 'function',
        'function' => [
            'name' => 'get_stock_price',
            'description' => 'Get the current stock price for a given symbol',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'symbol' => [
                        'type' => 'string',
                        'description' => 'Stock symbol (e.g., AAPL, GOOGL)'
                    ],
                    'currency' => [
                        'type' => 'string',
                        'enum' => ['USD', 'EUR'],
                        'description' => 'Currency for the price'
                    ]
                ],
                'required' => ['symbol']
            ]
        ]
    ]
];

$request = (new OpenRouterCompletionsRequest(
    model: 'openai/gpt-4o',
    messages: [
        ['role' => 'user', 'content' => "What's the current price of Apple stock?"]
    ]
))
    ->setTools($tools)
    ->setToolChoice('auto')
    ->setParallelFunctionCalling(true);

$response = $request->post();

// Check for tool usage
if ($response->usedTools()) {
    $toolCalls = $response->getToolCalls();

    foreach ($toolCalls as $call) {
        echo "Function: {$call['function']['name']}\n";
        echo "Arguments: " . json_encode($call['function']['arguments']) . "\n";
    }
}
```
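To complete the loop, you execute the function locally and send its result back in a follow-up request. Below is a minimal, self-contained sketch of that round trip, assuming OpenRouter's OpenAI-compatible message schema (`tool_calls` on the assistant message, a `tool` role with `tool_call_id` for the result); the `lookupStockPrice` helper and the literal tool-call data are hypothetical:

```php
// Hypothetical local implementation of the get_stock_price tool.
function lookupStockPrice(string $symbol, string $currency = 'USD'): array
{
    // A real app would query a market-data API here.
    return ['symbol' => $symbol, 'price' => 189.84, 'currency' => $currency];
}

$messages = [
    ['role' => 'user', 'content' => "What's the current price of Apple stock?"],
];

// A tool call as it might appear in the response (arguments arrive as a JSON string).
$toolCall = [
    'id' => 'call_abc123',
    'type' => 'function',
    'function' => [
        'name' => 'get_stock_price',
        'arguments' => '{"symbol":"AAPL","currency":"USD"}',
    ],
];

// Decode the arguments and run the local function.
$args = json_decode($toolCall['function']['arguments'], true, flags: JSON_THROW_ON_ERROR);
$result = lookupStockPrice($args['symbol'], $args['currency'] ?? 'USD');

// Append the assistant's tool call and your tool result to the conversation,
// then send the extended $messages in a second OpenRouterCompletionsRequest.
$messages[] = ['role' => 'assistant', 'content' => null, 'tool_calls' => [$toolCall]];
$messages[] = [
    'role' => 'tool',
    'tool_call_id' => $toolCall['id'],
    'content' => json_encode($result),
];

echo end($messages)['content']; // {"symbol":"AAPL","price":189.84,"currency":"USD"}
```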
Reasoning Mode
Enable OpenRouter's unique reasoning capabilities for enhanced problem-solving:
```php
$request = (new OpenRouterCompletionsRequest(
    model: 'anthropic/claude-3.5-sonnet',
    messages: [
        [
            'role' => 'user',
            'content' => 'Solve this step-by-step: A train leaves Station A at 2 PM traveling at 60 mph. Another train leaves Station B at 3 PM traveling at 80 mph toward Station A. If the stations are 300 miles apart, when do the trains meet?'
        ]
    ]
))
    ->setReasoning(['effort' => 'high'])
    ->setMaxTokens(3000);

$response = $request->post();

// Access reasoning content
$reasoning = $response->getReasoningContent();
$finalAnswer = $response->getTextContent();

echo "Reasoning process:\n" . $reasoning . "\n\n";
echo "Final answer:\n" . $finalAnswer;

// Check reasoning efficiency
$reasoningTokens = $response->getReasoningTokens();
$efficiency = $response->getReasoningEfficiency();
echo "Used {$reasoningTokens} reasoning tokens ({$efficiency}% of output)";
```
Advanced Sampling
Fine-tune model behavior with advanced sampling parameters:
```php
$request = (new OpenRouterCompletionsRequest(
    model: 'meta-llama/llama-3.1-70b-instruct',
    messages: $messages
))
    ->setTemperature(0.8)        // Creativity level (0.0-2.0)
    ->setTopP(0.9)               // Nucleus sampling (0.0-1.0)
    ->setTopK(40)                // Top-K sampling (0+)
    ->setMinP(0.05)              // Minimum probability threshold
    ->setTopA(0.2)               // Top-A sampling
    ->setRepetitionPenalty(1.1)  // Prevent repetition
    ->setFrequencyPenalty(0.1)   // Frequency-based penalty
    ->setPresencePenalty(0.1)    // Presence-based penalty
    ->setSeed(42);               // Deterministic output

$response = $request->post();
```
Log Probabilities
Analyze token probabilities and model confidence:
```php
$request = (new OpenRouterCompletionsRequest(
    model: 'openai/gpt-4o',
    messages: [
        ['role' => 'user', 'content' => 'Is this statement true or false: The Earth is flat?']
    ]
))
    ->setLogprobs(true)
    ->setTopLogprobs(5) // Get top 5 token probabilities
    ->setMaxTokens(100);

$response = $request->post();

// Analyze confidence
$logProbs = $response->getLogProbs();
$avgConfidence = $response->getAverageLogProb();
$tokenConfidence = $response->getTokenConfidence();

echo "Average confidence: " . ($avgConfidence * 100) . "%\n";
echo "High confidence tokens: " . count($tokenConfidence['high']) . "\n";
echo "Low confidence tokens: " . count($tokenConfidence['low']) . "\n";
```
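Keep in mind that log probabilities are natural logarithms, so a raw value like -0.05 is not itself a percentage; it maps to one through `exp()`. A small helper, independent of the package, showing the conversion:

```php
// Convert a natural-log probability into a rounded percentage.
function logprobToPercent(float $logprob): float
{
    return round(exp($logprob) * 100, 2);
}

echo logprobToPercent(-0.05) . "\n"; // 95.12 — the model was fairly sure
echo logprobToPercent(-2.3) . "\n";  // 10.03 — a low-confidence token
```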
Streaming Responses
Enable real-time streaming for long responses:
```php
$request = (new OpenRouterCompletionsRequest(
    model: 'anthropic/claude-3.5-sonnet',
    messages: [
        ['role' => 'user', 'content' => 'Write a detailed essay about renewable energy']
    ]
))
    ->setStream(true)
    ->setMaxTokens(4000);

$response = $request->post();

// Stream handling is processed by the CompletionsEndpoint;
// the response contains the streaming data format.
```
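Under the hood, OpenRouter streams OpenAI-style server-sent events: each chunk is a `data: {...}` line carrying a content delta, and the stream ends with a `data: [DONE]` sentinel. A self-contained sketch of accumulating those deltas (the raw lines here are illustrative, not captured output):

```php
// Illustrative SSE lines as they might arrive over the wire.
$sseLines = [
    'data: {"choices":[{"delta":{"content":"Renewable"}}]}',
    'data: {"choices":[{"delta":{"content":" energy"}}]}',
    ': keep-alive',              // SSE comment lines start with ':'
    'data: {"choices":[{"delta":{"content":" matters."}}]}',
    'data: [DONE]',
];

$text = '';
foreach ($sseLines as $line) {
    if (!str_starts_with($line, 'data: ')) {
        continue; // skip SSE comments / keep-alives
    }
    $payload = substr($line, 6);
    if ($payload === '[DONE]') {
        break; // end-of-stream sentinel
    }
    $chunk = json_decode($payload, true);
    $text .= $chunk['choices'][0]['delta']['content'] ?? '';
}

echo $text; // Renewable energy matters.
```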
Advanced Configuration
Configure advanced parameters for optimal performance:
```php
$request = (new OpenRouterCompletionsRequest(
    model: 'google/gemini-pro-1.5',
    messages: $conversationHistory
))
    ->setMaxTokens(8000)
    ->setTemperature(0.7)
    ->setTopP(0.95)
    ->setFrequencyPenalty(0.3)
    ->setPresencePenalty(0.2)
    ->setRepetitionPenalty(1.05)
    ->setStop(['[END]', '###', 'Human:'])
    ->setSeed(12345)
    ->setUser('analytics_user_456')
    ->setLogitBias([50256 => -100]) // Suppress specific tokens
    ->setResponseFormat(['type' => 'json_object']);

$response = $request->post();
```
Response Handling
Access comprehensive response data:
```php
$response = $request->post();

// Basic response info
$responseId = $response->id;
$modelUsed = $response->model;
$timestamp = $response->created;

// Content access
$textContent = $response->getTextContent();
$allChoices = $response->choices;
$firstChoice = $response->getFirstChoice();

// Token usage analysis
$totalTokens = $response->getTotalTokens();
$inputTokens = $response->getInputTokens();
$outputTokens = $response->getOutputTokens();
$reasoningTokens = $response->getReasoningTokens();

// Completion analysis
$finishReason = $response->getFinishReason();
$completedNaturally = $response->completedNaturally();
$hitTokenLimit = $response->reachedTokenLimit();
$wasStopped = $response->stoppedBySequence();

// Tool usage
$usedTools = $response->usedTools();
$toolCalls = $response->getToolCalls();

// Reasoning analysis (if enabled)
$hasReasoning = $response->hasReasoning();
$reasoningContent = $response->getReasoningContent();
$reasoningEfficiency = $response->getReasoningEfficiency();

// Log probability analysis (if enabled)
$logProbs = $response->getLogProbs();
$avgLogProb = $response->getAverageLogProb();
$tokenConfidence = $response->getTokenConfidence();

// Quality metrics
$responseQuality = $response->calculateQualityScore();
$confidenceLevel = $response->getConfidenceLevel();

// Convert to array for storage
$responseArray = $response->toArray();
```
Testing
The package provides testing utilities for mocking OpenRouter responses:
```php
use LLMSpeak\OpenRouter\OpenRouterCompletionsResponse;

// Create a mock response
$mockResponse = new OpenRouterCompletionsResponse(
    id: 'chatcmpl-test123',
    model: 'anthropic/claude-3.5-sonnet',
    created: time(),
    choices: [
        [
            'index' => 0,
            'message' => [
                'role' => 'assistant',
                'content' => 'Test response content'
            ],
            'finish_reason' => 'stop'
        ]
    ],
    usage: [
        'prompt_tokens' => 10,
        'completion_tokens' => 15,
        'total_tokens' => 25
    ]
);

// Test your application logic
$this->assertEquals('Test response content', $mockResponse->getTextContent());
$this->assertEquals(25, $mockResponse->getTotalTokens());
$this->assertTrue($mockResponse->completedNaturally());
```
Credits
- Project Saturn Studios
- OpenRouter.ai for providing the unified AI model gateway
License
The MIT License (MIT). Please see License File for more information.
Part of the LLMSpeak Ecosystem - Built with ❤️ by Project Saturn Studios