falahatiali / homa
Homa - The legendary bird that brings AI wisdom to Laravel. A simple and elegant AI assistant package.
pkg:composer/falahatiali/homa
Requires
- php: ^8.1
- ext-curl: *
- google-gemini-php/client: ^2.6
- grok-php/client: ^1.3
- illuminate/support: ^10.0|^11.0|^12.0
- openai-php/laravel: ^0.16.0
Requires (Dev)
- guzzlehttp/guzzle: ^7.10
- laravel/pint: ^1.25
- mockery/mockery: ^1.6
- orchestra/testbench: ^8.0|^9.0|^10.0
- phpstan/phpstan: ^2.1
- phpunit/phpunit: ^10.0
README
Homa 🦅
The legendary bird that brings AI wisdom to Laravel.
Homa is a simple and elegant AI assistant package for Laravel applications. Integrate multiple AI providers (OpenAI, Anthropic Claude, and more) with a clean, fluent API. Named after the mythical Persian bird that brings good fortune and wisdom to those it flies over.
✨ Features
- 🦅 Simple, Fluent API - Elegant interface inspired by Laravel's design philosophy
- 🔌 Multiple AI Providers - Support for OpenAI (GPT-4, GPT-3.5), Anthropic (Claude), Grok, Groq, Google Gemini, and Ollama (local)
- 💬 Conversation Management - Built-in context-aware multi-turn conversations
- ⚙️ Highly Configurable - Extensive configuration options for every use case
- 🧪 Fully Tested - 70 tests with 135 assertions covering all critical paths
- 📦 Zero Configuration - Works out of the box with sensible defaults
- 🎯 Extensible - Easy to add custom AI providers via the factory pattern
- 🔒 Type Safe - Full PHP 8.1+ type hints and return types
- 🏗️ SOLID Principles - Clean architecture following best practices
- ⚡ Production Ready - Uses the official OpenAI PHP client for reliability
- 📊 Code Quality - PHPStan level 5 + Laravel Pint for consistency
📋 Requirements
- PHP 8.1 or higher
- Laravel 10.x, 11.x, or 12.x
- API keys for your chosen AI provider(s)
📦 Installation
Install the package via Composer:

```bash
composer require falahatiali/homa
```

Publish Configuration (Optional)

```bash
php artisan vendor:publish --tag=homa-config
```

This will create a config/homa.php configuration file.
Quick Setup
1. Copy the example environment file:

```bash
cp .env.example .env
```

2. Add your API keys to .env:

```env
HOMA_PROVIDER=openai
OPENAI_API_KEY=sk-your-actual-api-key
```

3. Start using Homa:

```php
use Homa\Facades\Homa;

$response = Homa::ask('Hello!');
```
Configure Your API Keys
Copy the example environment file and add your AI provider API keys:
```bash
# Copy the example environment file
cp .env.example .env

# Edit with your actual API keys
nano .env
```
Required Environment Variables:
```env
# Choose your default provider (openai, anthropic, grok, groq, gemini, ollama)
HOMA_PROVIDER=openai

# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key-here
OPENAI_MODEL=gpt-4

# Anthropic Configuration
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key-here
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022

# Grok Configuration
GROK_API_KEY=xai-your-grok-api-key-here
GROK_MODEL=grok-2

# Groq Configuration (ultra-fast inference)
GROQ_API_KEY=gsk_your-groq-api-key-here
GROQ_MODEL=openai/gpt-oss-20b

# Gemini Configuration (Google AI with multimodal support)
GEMINI_API_KEY=your-gemini-api-key-here
GEMINI_MODEL=gemini-2.0-flash-exp

# Ollama (local, free)
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL=llama3
```
Get API Keys:
- OpenAI: platform.openai.com/api-keys
- Anthropic: console.anthropic.com
- Grok: console.x.ai
- Groq: console.groq.com
- Gemini: aistudio.google.com/apikey
🖥️ Use Ollama locally (free)
Ollama lets you run models such as Llama 3, Mistral, and Qwen locally, at no API cost.
- Install Ollama
```bash
# macOS/Linux
curl -fsSL https://ollama.com/install.sh | sh

# Windows: download the installer at https://ollama.com/download
```
- Download and run a model
```bash
ollama run llama3
# or: mistral:7b-instruct, qwen2.5:7b-instruct
```
- Configure Homa for Ollama
```env
HOMA_PROVIDER=ollama
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL=llama3
```
- Use in code
```php
$response = Homa::provider('ollama')->ask('Explain Laravel service container.');

echo $response->content();
```
Best local models (balanced): llama3.1:8b-instruct, mistral:7b-instruct, qwen2.5:7b-instruct, phi3:mini.
Browse models: https://ollama.com/library and curated lists at https://llm-explorer.com/.
🚀 Quick Start
Basic Usage
Ask a simple question:
```php
use Homa\Facades\Homa;

$response = Homa::ask('What is Laravel?');

echo $response->content();
```
Configure On-The-Fly
Chain configuration methods for custom behavior:
```php
$response = Homa::model('gpt-4')
    ->temperature(0.7)
    ->maxTokens(500)
    ->ask('Explain dependency injection in Laravel');

echo $response->content();
```
Switch Between Providers
Easily switch between different AI providers:
```php
// Use OpenAI
$openaiResponse = Homa::provider('openai')
    ->model('gpt-4')
    ->ask('What is Laravel?');

// Use Anthropic Claude
$claudeResponse = Homa::provider('anthropic')
    ->model('claude-3-5-sonnet-20241022')
    ->ask('What is Laravel?');

// Use Groq (ultra-fast inference)
$groqResponse = Homa::provider('groq')
    ->model('openai/gpt-oss-20b')
    ->ask('What is Laravel?');

// Use Gemini (Google AI with multimodal support)
$geminiResponse = Homa::provider('gemini')
    ->model('gemini-2.0-flash-exp')
    ->ask('What is Laravel?');
```
Custom System Prompts
Set custom system prompts for specialized behavior:
```php
$response = Homa::systemPrompt('You are a Laravel expert. Answer concisely.')
    ->ask('What is a service provider?');
```
Multi-Turn Conversations
Create context-aware conversations:
```php
$conversation = Homa::startConversation();

$response1 = $conversation->ask('Hello! My name is Ali.');
// AI: Hello Ali! Nice to meet you...

$response2 = $conversation->ask('What is my name?');
// AI: Your name is Ali.

// Access conversation history
$history = $conversation->history();

// Clear the conversation and start fresh
$conversation->clear();
```
Advanced Chat Control
For full control over the conversation, use the chat() method:
```php
$messages = [
    ['role' => 'system', 'content' => 'You are a helpful Laravel assistant.'],
    ['role' => 'user', 'content' => 'What are service containers?'],
    ['role' => 'assistant', 'content' => 'Service containers are...'],
    ['role' => 'user', 'content' => 'Can you give me an example?'],
];

$response = Homa::chat($messages);
```
Working with Responses
The AIResponse object provides several useful methods:
```php
$response = Homa::ask('Hello!');

// Get the response content
$content = $response->content();

// Get the model used
$model = $response->model();

// Get usage statistics (tokens, etc.)
$usage = $response->usage();

// Get the raw API response
$raw = $response->raw();

// Convert to array
$array = $response->toArray();

// Convert to JSON
$json = $response->toJson();

// Use as a string (automatically calls content())
echo $response;
```
⚙️ Configuration
Configuration File
After publishing the config file, you can customize every aspect of the package in config/homa.php. See .env.example for the full list of supported environment variables:
```php
return [
    // Default AI provider
    'default' => env('HOMA_PROVIDER', 'openai'),

    // Provider configurations
    'providers' => [
        'openai' => [
            'api_key' => env('OPENAI_API_KEY'),
            'api_url' => env('OPENAI_API_URL', 'https://api.openai.com/v1'),
            'model' => env('OPENAI_MODEL', 'gpt-4'),
            'temperature' => env('OPENAI_TEMPERATURE', 0.7),
            'max_tokens' => env('OPENAI_MAX_TOKENS', 1000),
            'timeout' => env('OPENAI_TIMEOUT', 30),
        ],

        'anthropic' => [
            'api_key' => env('ANTHROPIC_API_KEY'),
            'api_url' => env('ANTHROPIC_API_URL', 'https://api.anthropic.com/v1'),
            'model' => env('ANTHROPIC_MODEL', 'claude-3-5-sonnet-20241022'),
            'temperature' => env('ANTHROPIC_TEMPERATURE', 0.7),
            'max_tokens' => env('ANTHROPIC_MAX_TOKENS', 1000),
            'timeout' => env('ANTHROPIC_TIMEOUT', 30),
        ],

        'grok' => [
            'api_key' => env('GROK_API_KEY'),
            'model' => env('GROK_MODEL', 'grok-2'),
            'temperature' => env('GROK_TEMPERATURE', 0.7),
            'max_tokens' => env('GROK_MAX_TOKENS', 1000),
        ],

        'groq' => [
            'api_key' => env('GROQ_API_KEY'),
            'api_url' => env('GROQ_API_URL', 'https://api.groq.com/openai/v1'),
            'model' => env('GROQ_MODEL', 'openai/gpt-oss-20b'),
            'temperature' => env('GROQ_TEMPERATURE', 0.7),
            'max_tokens' => env('GROQ_MAX_TOKENS', 1000),
            'timeout' => env('GROQ_TIMEOUT', 30),
        ],

        'gemini' => [
            'api_key' => env('GEMINI_API_KEY'),
            'base_uri' => env('GEMINI_BASE_URI', 'https://generativelanguage.googleapis.com/v1beta'),
            'model' => env('GEMINI_MODEL', 'gemini-2.0-flash-exp'),
            'temperature' => env('GEMINI_TEMPERATURE', 0.7),
            'max_tokens' => env('GEMINI_MAX_TOKENS', 1000),
            'timeout' => env('GEMINI_TIMEOUT', 30),
        ],
    ],

    // Default system prompt
    'system_prompt' => env('HOMA_SYSTEM_PROMPT', 'You are a helpful AI assistant.'),

    // Logging configuration
    'logging' => [
        'enabled' => env('HOMA_LOGGING', false),
        'channel' => env('HOMA_LOG_CHANNEL', 'stack'),
    ],

    // Caching configuration
    'cache' => [
        'enabled' => env('HOMA_CACHE_ENABLED', false),
        'ttl' => env('HOMA_CACHE_TTL', 3600),
        'prefix' => 'homa_',
    ],
];
```
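Logging and response caching are disabled by default. A minimal .env fragment to switch both on, using the variable names from the configuration file (the TTL value here is illustrative):

```env
HOMA_LOGGING=true
HOMA_LOG_CHANNEL=stack
HOMA_CACHE_ENABLED=true
HOMA_CACHE_TTL=3600
```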
Available Models
OpenAI:
- gpt-5 - Latest, most advanced model
- gpt-5o - Optimized GPT-5 variant
- gpt-4o - Latest GPT-4 with vision capabilities
- gpt-4o-mini - Smaller, faster GPT-4o
- gpt-4-turbo - Fast GPT-4 variant
- gpt-4 - Most capable, best for complex tasks
- gpt-3.5-turbo - Fast and cost-effective
Anthropic:
- claude-3-5-sonnet-20241022 - Latest, most capable
- claude-3-opus-20240229 - Most powerful for complex tasks
- claude-3-sonnet-20240229 - Balanced performance
- claude-3-haiku-20240307 - Fastest, most cost-effective
Groq (Ultra-fast inference):
- openai/gpt-oss-20b - Large, capable model
- openai/gpt-oss-7b - Smaller, faster model
- llama-3.1-70b-versatile - Meta's Llama model
- llama-3.1-8b-instant - Fast Llama model
- mixtral-8x7b-32768 - Mixtral model
- gemma-7b-it - Google's Gemma model
Gemini (Google AI with multimodal capabilities):
- gemini-2.0-flash-exp - Latest, fastest (experimental)
- gemini-1.5-pro-latest - Most capable
- gemini-1.5-flash-latest - Balanced speed and capability
- gemini-1.5-pro - Stable pro model
- gemini-1.5-flash - Fast and efficient
- gemini-1.5-pro-002 - Versioned pro model
- gemini-1.5-flash-002 - Versioned flash model
🎯 Use Cases
Content Generation
```php
$blogPost = Homa::model('gpt-4')
    ->maxTokens(2000)
    ->ask('Write a blog post about Laravel best practices');
```
Code Assistance
```php
$response = Homa::systemPrompt('You are an expert PHP developer.')
    ->ask('Review this code and suggest improvements: ' . $code);
```
Customer Support Bot
```php
$conversation = Homa::systemPrompt('You are a helpful customer support agent.')
    ->startConversation();

$response = $conversation->ask($customerQuestion);
```
Data Analysis
```php
$analysis = Homa::model('claude-3-5-sonnet-20241022')
    ->ask("Analyze this data and provide insights: " . json_encode($data));
```
🏗️ Architecture
Package Structure
```
homa/
├── config/
│   └── homa.php                        # Configuration file
├── src/
│   ├── Contracts/
│   │   └── AIProviderInterface.php     # Provider interface
│   ├── Conversation/
│   │   └── Conversation.php            # Conversation manager
│   ├── Exceptions/
│   │   ├── AIException.php             # Base exception
│   │   └── ConfigurationException.php
│   ├── Facades/
│   │   └── Homa.php                    # Laravel facade
│   ├── Manager/
│   │   └── HomaManager.php             # Main manager class
│   ├── Providers/
│   │   ├── AnthropicProvider.php       # Anthropic implementation
│   │   └── OpenAIProvider.php          # OpenAI implementation
│   ├── Response/
│   │   └── AIResponse.php              # Response wrapper
│   └── HomaServiceProvider.php         # Laravel service provider
└── tests/                              # Comprehensive test suite
```
Adding Custom Providers
You can extend Homa with custom AI providers by implementing the AIProviderInterface:
```php
use Homa\Contracts\AIProviderInterface;
use Homa\Response\AIResponse;

class CustomProvider implements AIProviderInterface
{
    public function sendMessage(array $messages, array $options = []): AIResponse
    {
        // Your implementation
    }

    // Implement other required methods...
}
```
🧪 Testing
Run the test suite:
```bash
composer test
```
Or with PHPUnit directly:
```bash
./vendor/bin/phpunit
```
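The dev dependencies listed above also include Laravel Pint and PHPStan. Assuming the package's default configuration for each tool, they can be run from the project root the same way:

```shell
# Format code with Laravel Pint
./vendor/bin/pint

# Run static analysis with PHPStan (level 5, per the feature list)
./vendor/bin/phpstan analyse
```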
🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
1. Fork the repository
2. Create your feature branch (git checkout -b feature/amazing-feature)
3. Commit your changes (git commit -m 'Add some amazing feature')
4. Push to the branch (git push origin feature/amazing-feature)
5. Open a Pull Request
📝 Changelog
Please see CHANGELOG for more information on what has changed recently.
🔒 Security
If you discover any security-related issues, please email the maintainer instead of using the issue tracker.
📄 License
The MIT License (MIT). Please see License File for more information.
🙏 Credits
- Author: Ali Falahati
- Inspired by: The mythical Persian Homa bird, a symbol of wisdom and good fortune
📖 About Homa
In Persian mythology, the Homa (also spelled Huma) is a legendary bird that brings good fortune and wisdom to those fortunate enough to be graced by its shadow. The bird is said to never land, continuously soaring through the skies, much like how this package aims to elevate your Laravel applications with the power of AI.
Just as the Homa bird is known for its wisdom and grace, this package strives to bring intelligent, elegant solutions to your AI integration needs, making it effortless to incorporate cutting-edge AI capabilities into your Laravel applications.
May the wisdom of Homa guide your code! 🦅
Made with ❤️ for the Laravel community