gowelle / google-moderator
Laravel package for text and image moderation using Google AI APIs with opt-in blocklists, multi-language support, and internal engine switching.
Installs: 0
Dependents: 0
Suggesters: 0
Security: 0
Stars: 0
Watchers: 0
Forks: 0
Open Issues: 0
pkg:composer/gowelle/google-moderator
Requires
- php: ^8.2
- google/cloud-language: ^0.32
- google/cloud-vision: ^1.8
- illuminate/contracts: ^10.0|^11.0|^12.0
- illuminate/support: ^10.0|^11.0|^12.0
- spatie/laravel-package-tools: ^1.16
Requires (Dev)
- laravel/framework: 12.*
- laravel/pint: ^1.0
- orchestra/testbench: ^10.0
- pestphp/pest: ^2.0|^3.0
- pestphp/pest-plugin-laravel: ^2.0|^3.0
- phpstan/extension-installer: ^1.0
- phpstan/phpstan: ^1.0
- phpstan/phpstan-deprecation-rules: ^1.0
- phpstan/phpstan-phpunit: ^1.0
This package is auto-updated.
Last update: 2025-12-19 04:03:32 UTC
README
A Laravel package for text and image moderation using Google AI APIs, with opt-in blocklists, multi-language support, and internal engine switching.
Table of Contents
- Features
- Requirements
- Installation
- Configuration
- Quick Start
- ModerationResult API
- Blocklists
- Engine Comparison
- Thresholds
- Events
- Testing
- Changelog
Features
- Text Moderation - Analyze text for toxic, harmful, or inappropriate content
- Image Moderation - Detect adult, violent, or racy content in images
- Multi-Language Support - Swahili-first, with support for any language via custom blocklists
- Custom Blocklists - File or database-backed blocklists with regex support
- Engine Switching - Switch between Natural Language API, Vision API, or Gemini
- Caching - Built-in caching for blocklist terms
- Testable - Fully testable with mocked Google clients
Requirements
- PHP 8.2+
- Laravel 10.x, 11.x, or 12.x
- Google Cloud account with the required APIs enabled (see the example below)
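If the APIs are not enabled yet, you can turn them on from the Cloud Console or with the gcloud CLI. A minimal sketch for the default Natural Language and Vision engines (the Gemini engine may require a different API; check Google's documentation for the engine you use):

```bash
# Enable the Natural Language and Vision APIs on your project
# (replace your-project-id with your own Google Cloud project ID)
gcloud services enable language.googleapis.com vision.googleapis.com --project=your-project-id
```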
Installation
```bash
composer require gowelle/google-moderator
```
Publish the configuration:
```bash
php artisan vendor:publish --tag="google-moderator-config"
```
Publish the migrations:
```bash
php artisan vendor:publish --tag="google-moderator-migrations"
php artisan migrate
```
Configuration
Authentication
The package supports multiple authentication methods:
```php
// config/google-moderator.php
'auth' => [
    // Option 1: Path to service account JSON file
    'credentials_path' => env('GOOGLE_APPLICATION_CREDENTIALS'),

    // Option 2: Inline JSON (for serverless environments like Vapor)
    'credentials_json' => env('GOOGLE_CREDENTIALS_JSON'),

    // Option 3: Project ID for Application Default Credentials
    'project_id' => env('GOOGLE_CLOUD_PROJECT'),
],
```
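Typically only one of the three options needs to be set. A sketch of the corresponding .env entries, with placeholder values:

```ini
# Option 1: service account key file on disk
GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/google/service-account.json

# Option 2: inline JSON, e.g. for Laravel Vapor (paste the full key JSON)
# GOOGLE_CREDENTIALS_JSON={"type":"service_account", ...}

# Option 3: rely on Application Default Credentials and just set the project
# GOOGLE_CLOUD_PROJECT=my-gcp-project
```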
Engine Selection
```php
'engines' => [
    'text' => 'natural_language', // or 'gemini'
    'image' => 'vision',          // or 'gemini'
],
```
Quick Start
Text Moderation
```php
use Gowelle\GoogleModerator\Facades\Moderation;

$result = Moderation::text(
    text: 'This is some content to moderate',
    language: 'en'
);

if ($result->isSafe()) {
    // Content is safe
} else {
    // Content was flagged
    foreach ($result->flags() as $flag) {
        echo "{$flag->category}: {$flag->severity}";
    }
}
```
Image Moderation
```php
use Gowelle\GoogleModerator\Facades\Moderation;

// From file path
$result = Moderation::image('/path/to/image.jpg');

// From URL
$result = Moderation::image('https://example.com/image.jpg');

if ($result->isUnsafe()) {
    // Handle unsafe image
}
```
ModerationResult API
```php
$result = Moderation::text($content, 'sw');

// Safety checks
$result->isSafe();               // bool
$result->isUnsafe();             // bool
$result->confidence();           // float|null

// Flag access
$result->flags();                // array<FlaggedTerm>
$result->apiFlags();             // Flags from the Google API only
$result->blocklistFlags();       // Flags from the blocklist only
$result->highSeverityFlags();    // High-severity flags only
$result->hasHighSeverityFlags(); // bool

// Metadata
$result->provider();             // 'google' or 'blocklist'
$result->engine();               // 'natural_language', 'vision', 'gemini'

// Grouping
$result->flagsByCategory();      // array<string, array<FlaggedTerm>>

// Serialization
$result->toArray();
json_encode($result);
```
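As a usage sketch, here is one way to wire these checks into a store action. The controller and field names (PostController, body) are hypothetical; the facade call and result methods are the ones documented above:

```php
use Gowelle\GoogleModerator\Facades\Moderation;
use Illuminate\Http\Request;

class PostController
{
    public function store(Request $request)
    {
        $result = Moderation::text($request->input('body'), 'sw');

        if ($result->isUnsafe()) {
            // Reject the submission and report which categories were flagged
            return back()->withErrors([
                'body' => 'Content rejected: ' . implode(', ', array_keys($result->flagsByCategory())),
            ]);
        }

        // ... persist the post as usual
    }
}
```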
Blocklists
Bonus Feature: Google APIs don't provide customizable term blocking. This package includes a complete blocklist system so you can catch domain-specific terms, slang, or phrases that the AI might miss.
Why Blocklists?
- Domain-specific terms - Block product names, competitor mentions, or industry jargon
- Regional slang - Catch offensive terms in local dialects (especially useful for Swahili and other languages)
- Zero-tolerance words - Instantly flag specific terms regardless of AI confidence
- Runs after AI analysis - Combines AI intelligence with your custom rules
Enabling Blocklists
```php
'blocklists' => [
    'enabled' => true,
    'storage' => 'database', // or 'file'
    'languages' => ['en', 'sw', 'fr'], // any languages
],
```
File-Based Blocklists
Create JSON files in storage/blocklists/:
```json
// storage/blocklists/sw.json
{
    "language": "sw",
    "terms": [
        { "value": "offensive_word", "severity": "high" },
        { "value": "*partial_match*", "severity": "medium" }
    ]
}
```
Publish sample files:
```bash
php artisan vendor:publish --tag="google-moderator-blocklists"
```
Database Blocklists
Store terms in the database for easy management via admin panels.
Note: This package uses DB::table('blocklist_terms') directly for performance and does not include an Eloquent model. You can interact with the table using the DB facade or the provided Blocklist repository methods.
Ensure you have run the migrations:
```bash
php artisan migrate
```
Table schema:
```php
// Example of accessing the table directly
DB::table('blocklist_terms')->insert([
    'language'   => 'sw',       // string(10)
    'value'      => 'bad_word', // string
    'severity'   => 'high',     // enum('low', 'medium', 'high')
    'is_regex'   => false,      // boolean
    'created_at' => now(),
    'updated_at' => now(),
]);
```
```php
// Add terms programmatically
Moderation::blocklist()->addTerm('sw', 'neno_baya', 'high');
Moderation::blocklist()->addTerm('en', '*spam*', 'medium');
```
Import/export via Artisan:
```bash
php artisan moderator:blocklist:import storage/blocklists/sw.json --language=sw
php artisan moderator:blocklist:export --language=sw --output=exported.json
```
Pattern Matching
Blocklist terms support three matching modes:
| Pattern  | Example                  | Matches                                  |
|----------|--------------------------|------------------------------------------|
| Exact    | `badword`                | "This is badword here" ✅, "badwordy" ❌ |
| Wildcard | `*offensive*`            | "very offensive content" ✅              |
| Regex    | `/\b(bad\|terrible)\b/i` | "This is bad" ✅                         |
Engine Comparison
| Feature            | Natural Language   | Vision      | Gemini      |
|--------------------|--------------------|-------------|-------------|
| Text Moderation    | ✅                 | ❌          | ✅          |
| Image Moderation   | ❌                 | ✅          | ✅          |
| Toxicity Detection | ✅ (16 categories) | ❌          | ✅          |
| SafeSearch         | ❌                 | ✅          | ❌          |
| Multi-language     | ✅                 | N/A         | ✅          |
| Cost               | Per request        | Per image   | Per request |
| Default            | ✅ Text            | ✅ Image    | Optional    |
Thresholds
Configure sensitivity per category:
```php
'thresholds' => [
    // Text (0.0 - 1.0, lower = more strict)
    'toxic' => 0.7,
    'severe_toxic' => 0.5,
    'profanity' => 0.7,

    // Image (VERY_UNLIKELY to VERY_LIKELY)
    'adult' => 'LIKELY',
    'violence' => 'LIKELY',
    'racy' => 'POSSIBLE',
],
```
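The image values use the Vision API SafeSearch likelihood scale, which orders VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY. Assuming the configured value is the minimum likelihood at which content is flagged (not confirmed here), a stricter setup for images might look like:

```php
'thresholds' => [
    // Flag images as soon as SafeSearch reports POSSIBLE or higher
    'adult'    => 'POSSIBLE',
    'violence' => 'POSSIBLE',
    'racy'     => 'POSSIBLE',
],
```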
Events
The package dispatches a ContentFlagged event whenever content is flagged as unsafe:
```php
use Gowelle\GoogleModerator\Events\ContentFlagged;
use Illuminate\Support\Facades\Event;

// In your EventServiceProvider or a listener
Event::listen(ContentFlagged::class, function (ContentFlagged $event) {
    Log::warning('Unsafe content detected', [
        'type' => $event->type, // 'text' or 'image'
        'categories' => $event->categories(),
        'is_high_severity' => $event->isHighSeverity(),
        'flags' => $event->result->flags(),
    ]);

    // Take action: notify moderators, block submission, etc.
});
```
ContentFlagged Event Properties
```php
$event->result;   // ModerationResult DTO
$event->type;     // 'text' or 'image'
$event->content;  // Original text or image path
$event->language; // Language code (for text)
$event->metadata; // Additional context

// Helper methods
$event->isText();         // bool
$event->isImage();        // bool
$event->categories();     // array of flagged categories
$event->isHighSeverity(); // bool
```
Disabling Events
```php
// config/google-moderator.php
'events' => [
    'enabled' => false,
],
```
Testing
```bash
# Unit tests
composer test

# With coverage
composer test-coverage

# Static analysis
composer analyse

# Code style
composer format-test
```
Mocking in Tests
```php
use Gowelle\GoogleModerator\Facades\Moderation;
use Gowelle\GoogleModerator\DTOs\ModerationResult;

Moderation::shouldReceive('text')
    ->with('test content', 'en')
    ->andReturn(ModerationResult::safe('google', 'natural_language'));
```
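Combined with the mock above, a minimal Pest test sketch (using only the safe() factory and result methods documented earlier) could look like this:

```php
use Gowelle\GoogleModerator\Facades\Moderation;
use Gowelle\GoogleModerator\DTOs\ModerationResult;

it('treats mocked content as safe', function () {
    // Stub the facade so no real Google API call is made
    Moderation::shouldReceive('text')
        ->once()
        ->with('test content', 'en')
        ->andReturn(ModerationResult::safe('google', 'natural_language'));

    $result = Moderation::text('test content', 'en');

    expect($result->isSafe())->toBeTrue()
        ->and($result->engine())->toBe('natural_language');
});
```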
Changelog
Please see CHANGELOG for recent changes.
Contributing
Please see CONTRIBUTING for details.
Security
If you discover a security vulnerability, please send an email to gowelle.john@icloud.com.
Credits
License
The MIT License (MIT). Please see License File for more information.