badrshs / scribe-ai
A pluggable Laravel package for AI-powered content processing and multi-channel publishing (Blogger, WordPress, Facebook, Telegram, and more).
Requires
- php: ^8.2
- illuminate/console: ^11.0|^12.0
- illuminate/http: ^11.0|^12.0
- illuminate/pipeline: ^11.0|^12.0
- illuminate/queue: ^11.0|^12.0
- illuminate/support: ^11.0|^12.0
Requires (Dev)
- binarytorch/larecipe: ^2.8
- orchestra/testbench: ^9.0|^10.9
- phpunit/phpunit: ^11.0|^12.5
README
Scribe AI
A Laravel package that turns any URL into a published article - automatically.
Scribe AI scrapes a webpage, rewrites the content with AI, generates a cover image, optimises it for the web, saves the article to your database, and publishes it to one or more channels. One command. Zero manual steps.
Built for Laravel 11 & 12 · PHP 8.2+ · Queue-first · Fully extensible
The full documentation covers every stage, driver, provider, event, and extension in detail - with code examples, config references, and step-by-step guides for building custom integrations. badrshs.github.io/scribe-ai
Table of Contents
- Installation
- Quick Start
- How It Works
- Configuration
- AI Providers
- Events
- Usage
- Categories
- Content Sources (Input Drivers)
- Run Tracking & Resume
- Image Optimization
- Built-in Publish Drivers
- Architecture
- Extensions
- Testing
- License
Installation
```bash
composer require badrshs/scribe-ai
```
Interactive Setup (Recommended)
Run the install wizard - it publishes config/migrations, asks for your AI provider & API keys, configures publish channels, and writes everything to .env:
```bash
php artisan scribe:install
```
Manual Setup
Publish the config file and migrations, then migrate:
```bash
php artisan vendor:publish --tag=scribe-ai-config
php artisan vendor:publish --tag=scribe-ai-migrations
php artisan migrate
```
Quick Start
Add your AI provider key to .env:
```env
# OpenAI (default)
AI_PROVIDER=openai
OPENAI_API_KEY=sk-...

# Or use Claude, Gemini, or Ollama - see "AI Providers" below
```
Run the pipeline on any URL:
```bash
php artisan scribe:process-url https://example.com/article --sync
```

That's it. The article is scraped, rewritten, illustrated, stored, and published to the `log` channel by default. Swap `log` for real channels when you're ready.
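When you do switch, it's a one-line `.env` change - a sketch assuming you have already filled in the matching channel credentials from the Configuration section:

```env
# Illustrative - enable only the channels you have configured
PUBLISHER_CHANNELS=telegram,wordpress
PUBLISHER_DEFAULT_CHANNEL=telegram
```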
How It Works
Every URL passes through an ordered pipeline of stages. Each stage reads from an immutable ContentPayload DTO and passes a new copy to the next stage.
| # | Stage | What it does |
|---|---|---|
| 1 | Scrape | Extracts title, body, and metadata from the source URL |
| 2 | AI Rewrite | Sends the raw content to the configured AI provider and returns a polished article |
| 3 | Generate Image | Creates a cover image (DALL-E by default) based on article context |
| 4 | Optimise Image | Resizes, compresses, and converts the image to WebP |
| 5 | Create Article | Persists the article to the database with status, tags, and category |
| 6 | Publish | Pushes the article to every active publishing channel |
Stages are individually skippable, replaceable, and reorderable via config or at runtime.
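As a sketch, swapping or skipping stages via config might look like the following. The `pipeline.stages` key and the stage namespaces here are illustrative assumptions - check the published `config/scribe-ai.php` for the real structure (the stage class names themselves appear elsewhere in this README):

```php
// config/scribe-ai.php - illustrative key name and namespaces
'pipeline' => [
    'stages' => [
        Badr\ScribeAi\Pipeline\Stages\ScrapeStage::class,
        Badr\ScribeAi\Pipeline\Stages\AiRewriteStage::class,
        // GenerateImageStage / OptimizeImageStage omitted = skipped
        Badr\ScribeAi\Pipeline\Stages\CreateArticleStage::class,
        Badr\ScribeAi\Pipeline\Stages\PublishStage::class,
    ],
],
```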
Configuration
All config lives under config/scribe-ai.php. Key environment variables:
```env
# -- AI Provider ---------------------------------------
AI_PROVIDER=openai               # openai, claude, gemini, ollama
AI_IMAGE_PROVIDER=               # separate provider for images (optional)
AI_OUTPUT_LANGUAGE=English       # language for AI-written articles

# -- OpenAI --------------------------------------------
OPENAI_API_KEY=sk-...
OPENAI_CONTENT_MODEL=gpt-4o-mini # model for rewriting
OPENAI_IMAGE_MODEL=dall-e-3      # model for image generation

# -- Anthropic Claude ----------------------------------
ANTHROPIC_API_KEY=sk-ant-...

# -- Google Gemini -------------------------------------
GEMINI_API_KEY=AIza...

# -- Ollama (local) ------------------------------------
OLLAMA_HOST=http://localhost:11434

# -- Pipeline ------------------------------------------
PIPELINE_HALT_ON_ERROR=true      # stop on stage failure (default)
PIPELINE_TRACK_RUNS=true         # persist each run for resume support

# -- Content Sources -----------------------------------
CONTENT_SOURCE_DRIVER=web        # default input driver (web, rss, text)
WEB_SCRAPER_TIMEOUT=30
RSS_MAX_ITEMS=10

# -- Image ---------------------------------------------
IMAGE_OPTIMIZE=true              # set false to skip WebP conversion

# -- Publishing ----------------------------------------
PUBLISHER_CHANNELS=log           # comma-separated active channels
PUBLISHER_DEFAULT_CHANNEL=log

# -- Facebook ------------------------------------------
FACEBOOK_PAGE_ID=
FACEBOOK_PAGE_ACCESS_TOKEN=

# -- Telegram ------------------------------------------
TELEGRAM_BOT_TOKEN=
TELEGRAM_CHAT_ID=

# -- Google Blogger ------------------------------------
BLOGGER_BLOG_ID=
GOOGLE_APPLICATION_CREDENTIALS=

# -- WordPress -----------------------------------------
WORDPRESS_URL=
WORDPRESS_USERNAME=
WORDPRESS_PASSWORD=

# -- Telegram Approval Extension -----------------------
TELEGRAM_APPROVAL_ENABLED=false  # enable the RSS→Telegram workflow
TELEGRAM_APPROVAL_BOT_TOKEN=     # defaults to TELEGRAM_BOT_TOKEN
TELEGRAM_APPROVAL_CHAT_ID=       # defaults to TELEGRAM_CHAT_ID
TELEGRAM_WEBHOOK_URL=            # auto-resolved from APP_URL if empty
TELEGRAM_WEBHOOK_SECRET=         # optional verification secret
```
AI Providers
Scribe AI supports multiple AI backends via a driver-based AiProviderManager. Switch providers with a single env var - all internal code stays the same.
Built-in providers
| Provider | Text/Chat | Image Gen | Env Key |
|---|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, o1, o3, etc. | DALL-E 3 | OPENAI_API_KEY |
| Claude | Claude Sonnet/Opus/Haiku | - | ANTHROPIC_API_KEY |
| Gemini | Gemini 2.0 Flash, Pro, etc. | Imagen | GEMINI_API_KEY |
| Ollama | Llama, Mistral, Phi, etc. (local) | - | OLLAMA_HOST |
| PiAPI | - | Flux (via piapi.ai) | PIAPI_API_KEY |
Switching providers
```env
# Use Claude for text, OpenAI for images
AI_PROVIDER=claude
ANTHROPIC_API_KEY=sk-ant-...
AI_IMAGE_PROVIDER=openai
OPENAI_API_KEY=sk-...
```
Using different providers for text vs images
The AI_IMAGE_PROVIDER env var lets you use one provider for chat/rewriting and another for image generation. If not set, the default AI_PROVIDER is used for images too (and falls back to OpenAI if the default provider doesn't support images).
Registering custom AI providers
Create a class implementing Badr\ScribeAi\Contracts\AiProvider:
```php
use Badr\ScribeAi\Contracts\AiProvider;

class PerplexityProvider implements AiProvider
{
    public function __construct(protected array $config) {}

    public function chat(array $messages, string $model, int $maxTokens = 4096, bool $jsonMode = false): array
    {
        // Call Perplexity API and return an OpenAI-compatible format:
        return ['choices' => [['message' => ['content' => $text]]]];
    }

    public function generateImage(string $prompt, string $model, string $size, string $quality): ?string
    {
        return null; // Not supported
    }

    public function supportsImageGeneration(): bool
    {
        return false;
    }

    public function name(): string
    {
        return 'perplexity';
    }
}
```
Register it:
```php
use Badr\ScribeAi\Services\Ai\AiProviderManager;

app(AiProviderManager::class)->extend(
    'perplexity',
    fn (array $config) => new PerplexityProvider($config)
);
```
Then set `AI_PROVIDER=perplexity` in your `.env` and add config under `scribe-ai.ai.providers.perplexity`.
Events
Every pipeline stage dispatches a Laravel event, letting you hook into the content lifecycle with standard event listeners.
Available events
| Event | Fired when | Key properties |
|---|---|---|
| `PipelineStarted` | Pipeline begins execution | `payload`, `runId` |
| `PipelineCompleted` | Pipeline finishes successfully | `payload`, `runId` |
| `PipelineFailed` | Pipeline fails or content is rejected | `payload`, `reason`, `stage`, `runId` |
| `ContentScraped` | ScrapeStage fetches content | `payload`, `driver`, `contentLength` |
| `ContentRewritten` | AiRewriteStage completes | `payload`, `title`, `categoryId` |
| `ImageGenerated` | GenerateImageStage produces an image | `payload`, `imagePath` |
| `ImageOptimized` | OptimizeImageStage converts/resizes | `payload`, `originalPath`, `optimizedPath` |
| `ArticleCreated` | CreateArticleStage persists to DB | `payload`, `article` |
| `ArticlePublished` | Each channel publish attempt | `payload`, `result`, `channel` |
All events are in the Badr\ScribeAi\Events namespace.
Listening to events
Register listeners in your EventServiceProvider or use closures:
```php
use Badr\ScribeAi\Events\ArticleCreated;
use Badr\ScribeAi\Events\PipelineFailed;

// In EventServiceProvider::$listen
protected $listen = [
    ArticleCreated::class => [
        SendSlackNotification::class,
        UpdateSearchIndex::class,
    ],
    PipelineFailed::class => [
        AlertOpsTeam::class,
    ],
];
```
Or listen inline:
```php
use Illuminate\Support\Facades\Event;
use Badr\ScribeAi\Events\ContentRewritten;

Event::listen(ContentRewritten::class, function (ContentRewritten $event) {
    logger()->info("Article rewritten: {$event->title}", [
        'category' => $event->categoryId,
        'url'      => $event->payload->sourceUrl,
    ]);
});
```
Usage
Artisan Commands
```bash
# Process a URL (queued by default)
php artisan scribe:process-url https://example.com/article

# Process synchronously with live progress output
php artisan scribe:process-url https://example.com/article --sync

# Pass categories inline (id:name pairs)
php artisan scribe:process-url https://example.com/article --sync --categories="1:Tech,2:Health,3:Business"

# Force a specific source driver (auto-detected by default)
php artisan scribe:process-url https://blog.com/feed.xml --sync --source=rss

# Suppress progress output
php artisan scribe:process-url https://example.com/article --sync --silent

# List recent pipeline runs
php artisan scribe:runs
php artisan scribe:runs --status=failed

# Resume a failed run (picks up from the failed stage)
php artisan scribe:resume 42

# Publish an existing article by ID
php artisan scribe:publish 1

# Publish to specific channels only
php artisan scribe:publish 1 --channels=facebook,telegram

# Batch-publish approved staged content
php artisan scribe:publish-approved --limit=5
```
Programmatic API
```php
use Badr\ScribeAi\Data\ContentPayload;
use Badr\ScribeAi\Facades\ContentPipeline;
use Badr\ScribeAi\Facades\Publisher;
use Badr\ScribeAi\Services\Pipeline\ContentPipeline as Pipeline;

// Run the full pipeline
$payload = ContentPipeline::process(
    ContentPayload::fromUrl('https://example.com/article')
);

// Pass categories via the payload
$payload = new ContentPayload(
    sourceUrl: 'https://example.com/article',
    categories: [1 => 'Technology', 2 => 'Health', 3 => 'Business'],
    // Optional: the AI chooses the category that best fits your article.
);
$result = app(Pipeline::class)->process($payload);

// Resume a failed run
$result = app(Pipeline::class)->resume($pipelineRunId);

// Disable run tracking for a one-off call
$result = app(Pipeline::class)->withoutTracking()->process($payload);

// Listen to progress events
app(Pipeline::class)
    ->onProgress(function (string $stage, string $status) {
        echo "{$stage}: {$status}\n";
    })
    ->process($payload);

// Publish to a single channel
Publisher::driver('telegram')->publish($article);

// Publish to all active channels
Publisher::publishToChannels($article);
```
Custom Pipeline Stages
Create a class that implements Badr\ScribeAi\Contracts\Pipe:
```php
use Badr\ScribeAi\Contracts\Pipe;
use Badr\ScribeAi\Data\ContentPayload;
use Closure;

class TranslateStage implements Pipe
{
    public function handle(ContentPayload $payload, Closure $next): mixed
    {
        $translated = MyTranslator::translate($payload->content);

        return $next($payload->with(['content' => $translated]));
    }
}
```
Then use it at runtime or register it in the config:
```php
ContentPipeline::through([
    ScrapeStage::class,
    TranslateStage::class,
    CreateArticleStage::class,
])->process($payload);
```
Custom Publish Drivers
Implement Badr\ScribeAi\Contracts\Publisher and register the driver in a service provider:
```php
use Badr\ScribeAi\Facades\Publisher;

Publisher::extend('medium', fn (array $config) => new MediumDriver($config));
```
Then add `medium` to your `PUBLISHER_CHANNELS` env variable.
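A driver class for the example above might look like the following sketch. The `Publisher` contract's exact method signature isn't shown in this README, so the `publish()` signature and the `MediumApiClient` helper are illustrative assumptions:

```php
use Badr\ScribeAi\Contracts\Publisher;

class MediumDriver implements Publisher
{
    public function __construct(protected array $config) {}

    // Illustrative signature - check the Publisher contract in the package source
    public function publish($article): mixed
    {
        // Hypothetical API client - replace with your real Medium integration
        return MediumApiClient::createPost(
            token: $this->config['token'],
            title: $article->title,
            content: $article->content,
        );
    }
}
```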
Categories
Categories are fully optional. If no categories are provided, the AI writes freely without category constraints.
When categories are provided, the AI selects the most appropriate one from the list and includes category_id in its JSON response.
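The AI's JSON response then carries the chosen id alongside the article fields. The exact response shape is internal to the package, so this is only an illustrative sketch:

```json
{
  "title": "How WebP Cut Our Page Weight in Half",
  "content": "...",
  "category_id": 2
}
```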
How categories are resolved
The pipeline resolves categories in priority order - the first non-empty source wins:
| Priority | Source | Example |
|---|---|---|
| 1 | Payload - passed directly in code or CLI | `--categories="1:Tech,2:Health"` |
| 2 | Database - `categories` table | Rows seeded or added via your app |
| 3 | Config - `scribe-ai.categories` array | `[1 => 'Tech', 2 => 'Health']` |
| 4 | None - empty list | AI writes without category selection |
Passing categories
CLI:
```bash
php artisan scribe:process-url https://example.com --sync --categories="1:Tech,2:Health,3:Business"
```
Programmatic:
```php
$payload = new ContentPayload(
    sourceUrl: 'https://example.com/article',
    categories: [1 => 'Technology', 2 => 'Health', 3 => 'Business'],
);

app(Pipeline::class)->process($payload);
```
Config (config/scribe-ai.php):
```php
'categories' => [
    1 => 'Technology',
    2 => 'Health',
    3 => 'Business',
],
```
Content Sources (Input Drivers)
The input side of the pipeline uses the same extensible driver pattern as publishing. ContentSourceManager resolves a content-source driver for each identifier (URL, feed, raw text) - either by auto-detection or by explicit override.
```
Input:       ContentSourceManager  →  web, rss, text, your custom drivers
Processing:  ContentPipeline       →  scrape, rewrite, image, publish, ...
Output:      PublisherManager      →  log, telegram, facebook, ...
```
Built-in source drivers
| Driver | Identifier | What it does |
|---|---|---|
| `web` | Any HTTP(S) URL | Scrapes and cleans the HTML content |
| `rss` | Feed URL (`.xml`, `.rss`, `/feed`) | Parses RSS 2.0 / Atom, returns the latest entry |
| `text` | Any non-URL string | Passes raw text straight through (no network call) |
Auto-detection vs explicit override
By default the manager iterates drivers in order (`rss` → `web` → `text`) and picks the first one whose `supports()` returns true. You can force a specific driver instead:
CLI:
```bash
# Auto-detect (URL → web driver)
php artisan scribe:process-url https://example.com/article --sync

# Force RSS driver
php artisan scribe:process-url https://blog.com/feed.xml --sync --source=rss

# Force text driver (pipe content in via payload)
```
Programmatic:
```php
use Badr\ScribeAi\Data\ContentPayload;
use Badr\ScribeAi\Services\Pipeline\ContentPipeline;

// Auto-detect
$payload = ContentPayload::fromUrl('https://blog.com/feed.xml');
app(ContentPipeline::class)->process($payload);

// Force a specific driver
$payload = new ContentPayload(
    sourceUrl: 'https://blog.com/feed.xml',
    sourceDriver: 'rss',
);
app(ContentPipeline::class)->process($payload);
```
Fetch content without the pipeline:
```php
use Badr\ScribeAi\Facades\ContentSource;

// Auto-detect
$result = ContentSource::fetch('https://example.com/article');
// $result = ['content' => '...', 'title' => '...', 'meta' => [...]]

// Force driver
$result = ContentSource::driver('rss')->fetch('https://blog.com/feed.xml');
```
Registering custom source drivers
Create a class implementing Badr\ScribeAi\Contracts\ContentSource:
```php
use Badr\ScribeAi\Contracts\ContentSource;

class YouTubeTranscriptSource implements ContentSource
{
    public function __construct(protected array $config = []) {}

    public function fetch(string $identifier): array
    {
        // Fetch transcript from YouTube API...
        return ['content' => $transcript, 'title' => $videoTitle, 'meta' => [...]];
    }

    public function supports(string $identifier): bool
    {
        return str_contains($identifier, 'youtube.com')
            || str_contains($identifier, 'youtu.be');
    }

    public function name(): string
    {
        return 'youtube';
    }
}
```
Register it in a service provider:
```php
use Badr\ScribeAi\Services\Sources\ContentSourceManager;

app(ContentSourceManager::class)->extend(
    'youtube',
    fn (array $config) => new YouTubeTranscriptSource($config)
);
```
Configuration
```env
# Default source driver (used when no auto-detection match)
CONTENT_SOURCE_DRIVER=web

# Web driver settings
WEB_SCRAPER_TIMEOUT=30
WEB_SCRAPER_USER_AGENT="Mozilla/5.0 (compatible; ContentBot/1.0)"

# RSS driver settings
RSS_TIMEOUT=30
RSS_MAX_ITEMS=10
```
Run Tracking & Resume
Every pipeline execution is automatically persisted to the pipeline_runs table, giving you full visibility into what ran, what failed, and the ability to resume from the exact stage that failed.
How it works
- When `process()` starts, a `PipelineRun` record is created with status `Pending`.
- As each stage completes, the run's `current_stage_index` and `payload_snapshot` are updated.
- On success the status becomes `Completed`; on rejection, `Rejected`; on an uncaught exception, `Failed` (with `error_message` and `error_stage` recorded).
- Failed runs can be resumed - the pipeline rehydrates the payload from the last snapshot and continues from the failed stage.
Listing runs
```bash
# Show the 20 most recent runs
php artisan scribe:runs

# Filter by status
php artisan scribe:runs --status=failed

# Show more
php artisan scribe:runs --limit=50
```
Resuming a failed run
```bash
# Resume run #42 from the stage that failed
php artisan scribe:resume 42
```
Programmatic:
```php
use Badr\ScribeAi\Services\Pipeline\ContentPipeline;

$pipeline = app(ContentPipeline::class);

// Resume by run ID
$result = $pipeline->resume(42);

// Or pass the PipelineRun model directly
$run = PipelineRun::find(42);
$result = $pipeline->resume($run);
```
Disabling run tracking
Run tracking is enabled by default. To disable it:
```env
PIPELINE_TRACK_RUNS=false
```
Or disable it for a single call:
```php
app(ContentPipeline::class)->withoutTracking()->process($payload);
```
Note: When tracking is enabled, the `pipeline_runs` migration must exist. If the table is missing, the pipeline throws a `RuntimeException` at startup rather than failing silently mid-run.
Image Optimization
Generated cover images are automatically converted to WebP format with configurable quality and dimensions. This reduces file size while maintaining visual quality.
To disable image optimization (e.g., if you handle images externally):
```env
IMAGE_OPTIMIZE=false
```
When disabled, the OptimizeImageStage is silently skipped and the original image passes through unchanged.
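The quality and dimension settings live in `config/scribe-ai.php`. The key names below are illustrative assumptions - check your published config for the exact structure:

```php
// Illustrative - verify key names in your published config/scribe-ai.php
'image' => [
    'optimize' => env('IMAGE_OPTIMIZE', true),
    'quality'  => 80,    // WebP quality (0-100)
    'width'    => 1200,  // max width in pixels
],
```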
Built-in Publish Drivers
| Driver | Platform | Auth Method |
|---|---|---|
| `log` | Laravel Log (dev / testing) | None |
| `facebook` | Facebook Pages | Page Access Token |
| `telegram` | Telegram Bot API | Bot Token |
| `blogger` | Google Blogger | OAuth 2 Service Account |
| `wordpress` | WordPress REST API | Application Password |
Architecture
```
+-------------------------------------------------------------------+
|                       ContentSourceManager                        |
|                                                                   |
|   identifier --> auto-detect / forced driver                      |
|   driver('web')  --> WebDriver::fetch()                           |
|   driver('rss')  --> RssDriver::fetch()                           |
|   driver('text') --> TextDriver::fetch()                          |
+-------------------------------------------------------------------+
                                  |
                                  v
+-------------------------------------------------------------------+
|                          ContentPipeline                          |
|                                                                   |
|   ContentPayload --> Stage 1 --> Stage 2 --> ... --> Stage N      |
|   (DTO)              Scrape      Rewrite             Publish      |
|                                                                   |
|   Each stage tracked in PipelineRun (DB)                          |
|   Failed? --> snapshot saved --> resume from that stage           |
+-------------------------------------------------------------------+
                                  |
                                  v
+-------------------------------------------------------------------+
|                          PublisherManager                         |
|                                                                   |
|   driver('facebook') --> FacebookDriver::publish()                |
|   driver('telegram') --> TelegramDriver::publish()                |
|                                                                   |
|   Each result --> PublishResult DTO --> publish_logs table        |
+-------------------------------------------------------------------+
```
Key classes:
| Class | Role |
|---|---|
| `ContentSourceManager` | Resolves input drivers (`web`, `rss`, `text`, custom). Auto-detects or uses explicit override. |
| `AiProviderManager` | Resolves AI backends (`openai`, `claude`, `gemini`, `ollama`, custom). Separate text & image providers. |
| `ContentPayload` | Immutable DTO carrying state between stages. Supports `toSnapshot()` / `fromSnapshot()` for JSON serialisation. |
| `ContentPipeline` | Runs stages in sequence, tracks each step in a `PipelineRun`, supports resume from failure. Dispatches `Pipeline*` events. |
| `PipelineRun` | Eloquent model persisting run state, stage progress, and payload snapshots to `pipeline_runs`. |
| `PublisherManager` | Resolves and dispatches to channel publish drivers. |
| `PublishResult` | Per-channel outcome DTO, auto-persisted to `publish_logs`. |
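Because `ContentPayload` supports `toSnapshot()` / `fromSnapshot()`, a payload can round-trip through JSON - this is what makes resume possible. A minimal sketch, assuming the snapshot is a plain array and `fromSnapshot()` is a static constructor:

```php
use Badr\ScribeAi\Data\ContentPayload;

$payload = ContentPayload::fromUrl('https://example.com/article');

// Serialise to a JSON-safe snapshot (what lands in pipeline_runs.payload_snapshot)
$snapshot = $payload->toSnapshot();

// Later: rehydrate the immutable DTO and continue from the failed stage
$restored = ContentPayload::fromSnapshot($snapshot);
```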
Extensions
Extensions are optional modules that add complete workflows on top of the core pipeline. Each extension is loaded only when explicitly enabled, keeping the default footprint minimal.
Telegram Approval (RSS → AI → Telegram → Pipeline)
A two-phase human-in-the-loop workflow:
Phase 1: RSS feed → AI analysis → Telegram messages with ✅/❌ buttons → `StagedContent` (pending)

Phase 2: Human approves → pipeline dispatched with `web` driver → Article created & published
Enable the extension
```env
TELEGRAM_APPROVAL_ENABLED=true

# Uses the Telegram publish driver's bot_token/chat_id by default.
# Override if you want a separate bot for approvals:
TELEGRAM_APPROVAL_BOT_TOKEN=
TELEGRAM_APPROVAL_CHAT_ID=
```
Phase 1 - Fetch RSS & send for review
```bash
# Fetch RSS, filter entries from the last 7 days, send to Telegram
php artisan scribe:rss-review https://blog.com/feed.xml

# Use AI to summarise and rank entries, filter older than 3 days
php artisan scribe:rss-review https://blog.com/feed.xml --days=3 --ai-filter

# Limit to 5 entries
php artisan scribe:rss-review https://blog.com/feed.xml --limit=5 --ai-filter
```
Each entry appears in your Telegram chat with:
- Title, category, AI summary (when `--ai-filter` is used)
- Source URL
- ✅ Approve / ❌ Reject inline buttons

Entries are stored as `StagedContent` (pending). The pipeline does not run yet.
Phase 2 - Process decisions
Option A: Polling (no webhook needed, works locally)
```bash
# Continuous long-poll (Ctrl+C to stop)
php artisan scribe:telegram-poll

# Single pass - process pending decisions and exit
php artisan scribe:telegram-poll --once
```
Option B: Webhook (production - Telegram pushes decisions to your app)
The webhook is auto-configured when the first approval message is sent. By default it uses your APP_URL combined with the webhook path (api/scribe/telegram/webhook).
Override the URL only when APP_URL doesn't match your public-facing address (e.g. behind a reverse proxy or using ngrok):
```env
# Optional - only needed when APP_URL is not your public URL
TELEGRAM_WEBHOOK_URL=https://yourapp.com/api/scribe/telegram/webhook
TELEGRAM_WEBHOOK_SECRET=your-random-secret
```
You can also set or remove the webhook manually:
```bash
php artisan scribe:telegram-set-webhook
php artisan scribe:telegram-set-webhook --remove
```
When you tap ✅ Approve in Telegram:

- The `StagedContent` is marked as approved
- The full pipeline is dispatched using the `web` driver (the URL is already known)
- The article is created, optimised, and published to your configured channels

When you tap ❌ Reject, the entry is marked as processed and skipped.
Extension file structure
All extension code lives in a self-contained directory:
```
src/Extensions/TelegramApproval/
    TelegramApprovalExtension.php   # Extension contract implementation
    TelegramApprovalService.php     # Telegram Bot API interactions
    CallbackHandler.php             # Processes approve/reject decisions
    RssReviewCommand.php            # scribe:rss-review
    TelegramPollCommand.php         # scribe:telegram-poll
    SetWebhookCommand.php           # scribe:telegram-set-webhook
    TelegramWebhookController.php   # HTTP controller for webhook
routes/
    telegram-webhook.php            # Webhook route definition
```
Creating Custom Extensions
You can build your own extensions on top of the core pipeline. Every extension implements Badr\ScribeAi\Contracts\Extension:
```php
use Badr\ScribeAi\Contracts\Extension;
use Illuminate\Contracts\Foundation\Application;

class SlackApprovalExtension implements Extension
{
    public function name(): string
    {
        return 'slack-approval';
    }

    public function isEnabled(): bool
    {
        return (bool) config('scribe-ai.extensions.slack_approval.enabled', false);
    }

    public function register(Application $app): void
    {
        $app->singleton(SlackApprovalService::class);
    }

    public function boot(Application $app): void
    {
        // Register commands, routes, event listeners, etc.
        if ($app->runningInConsole()) {
            // $app->make(Kernel::class) -- register artisan commands
        }
    }
}
```
Register your extension in config/scribe-ai.php:
```php
'custom_extensions' => [
    App\Extensions\SlackApprovalExtension::class,
],
```
Or register it programmatically from any service provider:
```php
use Badr\ScribeAi\Services\ExtensionManager;

public function register(): void
{
    $this->app->booted(function () {
        app(ExtensionManager::class)
            ->register(new SlackApprovalExtension(), $this->app);
    });
}
```
The ExtensionManager calls register() and boot() only when isEnabled() returns true, so disabled extensions have zero overhead.
You can also query the registry at runtime:
```php
use Badr\ScribeAi\Services\ExtensionManager;

$manager = app(ExtensionManager::class);

$manager->all();                       // all registered extensions
$manager->enabled();                   // only enabled ones
$manager->isEnabled('slack-approval'); // check by name
```
Testing
The package ships with 22 unit tests (63 assertions) using Orchestra Testbench.
```bash
# Run all unit/feature tests
./vendor/bin/phpunit

# Run a specific test
./vendor/bin/phpunit --filter=test_full_pipeline_end_to_end
```
Integration tests (real OpenAI API)
Integration tests that call the real OpenAI API are excluded from the default test suite. To run them:
1. Copy `.env.testing.example` to `.env.testing` and set your real API key:

   ```env
   OPENAI_API_KEY=sk-your-real-key
   ```

2. Run only integration tests:

   ```bash
   ./vendor/bin/phpunit --group=integration
   ```

Integration tests are grouped with `#[Group('integration')]` and skipped automatically when no real API key is present.
License
Scribe AI is open-source software released under the MIT License - free to use, modify, and distribute in personal and commercial projects.
See the LICENSE file for the full license text.
Made with ❤️ for the Laravel community · Documentation · GitHub