rmh / vertex
A provider for Prism adding support for Google Vertex AI.
Requires
- php: ^8.2
- google/auth: ^1.0
- laravel/framework: ^11.0|^12.0
- prism-php/prism: >=0.99.16
Requires (Dev)
- laravel/pint: ^1.14
- mockery/mockery: ^1.6
- orchestra/testbench: ^9.4
- pestphp/pest: ^3.0
- pestphp/pest-plugin-arch: ^3.0
- pestphp/pest-plugin-laravel: ^3.0
- phpstan/extension-installer: ^1.3
- phpstan/phpdoc-parser: ^2.0
- phpstan/phpstan: ^2.1
- phpstan/phpstan-deprecation-rules: ^2.0
- projektgopher/whisky: ^0.7.0
- rector/rector: ^2.3
- spatie/laravel-ray: ^1.40
- symplify/rule-doc-generator-contracts: ^11.2
README
Prism Vertex
A standalone Google Vertex AI provider for the Prism PHP framework. Access Google Gemini and partner model families through a single configuration and unified interface.
Installation
```bash
composer require rmh/vertex
```
Configuration
Add the following to your Prism configuration (config/prism.php):
Standard mode (project + location required)
```php
'vertex' => [
    'project_id' => env('VERTEX_PROJECT_ID'),
    'location' => env('VERTEX_LOCATION', 'us-central1'),

    // Auth - choose one (or omit both to use Application Default Credentials):
    'credentials' => env('VERTEX_CREDENTIALS'), // path to service-account.json
    // 'api_key' => env('VERTEX_API_KEY'),
],
```
Express mode (API key only, Google models only)
When project_id and location are omitted, the package automatically uses Vertex AI Express Mode endpoints:
```php
'vertex' => [
    'api_key' => env('VERTEX_API_KEY'),
    // project_id and location omitted — triggers Express mode
],
```
Important
Express mode only supports Google Gemini models. Using partner model providers in Express mode will throw an exception.
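In Express mode the calling code is unchanged; only the endpoint the package targets differs. A minimal sketch (the model name is illustrative):

```php
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

// With only api_key configured, requests go to the Express Mode endpoints.
$response = Prism::text()
    ->using(Vertex::Gemini, 'gemini-2.5-flash')
    ->withPrompt('Hello from Express mode')
    ->asText();

// Partner constants (e.g. Vertex::Anthropic) throw an exception here,
// since Express mode serves Google Gemini models only.
```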
Usage
Use the provider constants from Prism\Vertex\Enums\Vertex with the Prism facade:
Text Generation
```php
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

// Google Gemini
$response = Prism::text()
    ->using(Vertex::Gemini, 'gemini-2.5-flash')
    ->withPrompt('Explain quantum computing in simple terms')
    ->asText();

echo $response->text;

// Anthropic Claude on Vertex AI
$response = Prism::text()
    ->using(Vertex::Anthropic, 'claude-3-5-sonnet@20241022')
    ->withPrompt('Explain quantum computing in simple terms')
    ->asText();

// Meta Llama on Vertex AI
$response = Prism::text()
    ->using(Vertex::Meta, 'llama-4-scout-17b-16e-instruct-maas')
    ->withPrompt('Explain quantum computing in simple terms')
    ->asText();
```
Structured Output (JSON)
```php
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;
use Prism\Prism\Schema\ObjectSchema;
use Prism\Prism\Schema\StringSchema;
use Prism\Prism\Schema\ArraySchema;

$schema = new ObjectSchema(
    name: 'languages',
    description: 'Top programming languages',
    properties: [
        new ArraySchema(
            'languages',
            'List of programming languages',
            items: new ObjectSchema(
                name: 'language',
                description: 'Programming language details',
                properties: [
                    new StringSchema('name', 'The language name'),
                    new StringSchema('popularity', 'Popularity description'),
                ]
            )
        ),
    ]
);

$response = Prism::structured()
    ->using(Vertex::Gemini, 'gemini-2.5-flash')
    ->withSchema($schema)
    ->withPrompt('List the top 3 programming languages')
    ->asStructured();

$data = $response->structured;
```
Embeddings
```php
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

$response = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput('The sky is blue')
    ->asEmbeddings();

$embeddings = $response->embeddings;
```
Streaming
Warning
Streaming is not yet supported. This feature is planned for a future release.
Supported Providers
All providers share one vertex config block. The provider constant determines which API schema and publisher endpoint is used:
| Constant | Publisher | Example Models | Schema |
|---|---|---|---|
| `Vertex::Gemini` | google | gemini-2.5-flash, text-embedding-005 | Gemini |
| `Vertex::Anthropic` | anthropic | claude-3-5-sonnet@20241022, claude-3-5-haiku@20241022 | Anthropic |
| `Vertex::Mistral` | mistralai | mistral-small-2503, codestral-2501 | OpenAI |
| `Vertex::Meta` | meta | llama-4-scout-17b-16e-instruct-maas | OpenAI |
| `Vertex::DeepSeek` | deepseek | deepseek-v3-0324-maas | OpenAI |
| `Vertex::AI21` | ai21 | jamba-1.5-mini@001, jamba-1.5-large@001 | OpenAI |
| `Vertex::Kimi` | kimi | kimi-k2-0711-maas | OpenAI |
| `Vertex::MiniMax` | minimax | minimax-m1-40k-0709-maas | OpenAI |
| `Vertex::OpenAI` | openai | gpt-oss-4o-mini-maas | OpenAI |
| `Vertex::Qwen` | qwen | qwen2.5-72b-instruct-maas | OpenAI |
| `Vertex::ZAI` | zaiorg | glm-4-plus-maas | OpenAI |
API Schemas
Prism Vertex uses three API schemas to handle the different formats used by Vertex AI publishers:
| Schema | Text | Structured | Embeddings |
|---|---|---|---|
| Gemini | yes | yes | yes |
| Anthropic | yes | yes | no |
| OpenAI | yes | yes | no |
- **Gemini** — Google's native `generateContent`/`predict` endpoints.
- **Anthropic** — Uses `:rawPredict` with the Anthropic Messages API format (`anthropic_version: vertex-2023-10-16`).
- **OpenAI** — Uses `:rawPredict` or `:chatCompletions` with an OpenAI-compatible format for partner models (Mistral, Meta, DeepSeek, AI21, Kimi, MiniMax, OpenAI-OSS, Qwen, ZAI).
The provider constant automatically selects the correct schema. You can override it via withProviderOptions():
```php
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;
use Prism\Vertex\Enums\VertexSchema;

$response = Prism::text()
    ->using(Vertex::Gemini, 'some-model')
    ->withProviderOptions(['apiSchema' => VertexSchema::Anthropic])
    ->withPrompt('Hello')
    ->asText();
```
Structured Output Details
Each schema handles structured output differently:
- **Gemini** — Native structured output via `response_mime_type: application/json` and `response_schema`. The model is constrained to produce valid JSON matching your schema.
- **OpenAI** — Uses `response_format: { type: "json_object" }` combined with a schema instruction message. All MaaS partner models support this mode.
- **Anthropic** — No native JSON mode. A prompt is appended instructing the model to respond with JSON conforming to the provided schema.
You can override the schema instruction prompt for Anthropic and OpenAI schemas using withProviderOptions():
```php
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

Prism::structured()
    ->using(Vertex::Anthropic, 'claude-3-5-sonnet@20241022')
    ->withSchema($schema)
    ->withProviderOptions([
        'jsonModeMessage' => 'My custom JSON instruction message',
    ])
    ->withPrompt('My prompt')
    ->asStructured();
```
Authentication
API Key
Set the api_key config option. The key is sent as a ?key= query parameter on every request. This works for both Standard and Express modes.
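For example, an API-key-only Standard-mode block (env variable names follow the examples above):

```php
'vertex' => [
    'project_id' => env('VERTEX_PROJECT_ID'),
    'location'   => env('VERTEX_LOCATION', 'us-central1'),
    'api_key'    => env('VERTEX_API_KEY'), // sent as ?key= on every request
],
```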
Service Account JSON file
Set the credentials config option to the path of your service account JSON key file. The package uses google/auth to obtain a Bearer token automatically.
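A corresponding config block might look like this (the key-file path is illustrative):

```php
'vertex' => [
    'project_id'  => env('VERTEX_PROJECT_ID'),
    'location'    => env('VERTEX_LOCATION', 'us-central1'),
    // e.g. VERTEX_CREDENTIALS=/path/to/service-account.json;
    // google/auth exchanges this key for a Bearer token automatically.
    'credentials' => env('VERTEX_CREDENTIALS'),
],
```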
Application Default Credentials (ADC)
When neither credentials nor api_key are set in Standard mode, the package automatically uses Application Default Credentials. This is the recommended approach for code running on Google Cloud Platform, since no credentials need to be configured at all.
ADC resolves credentials in this order:
1. `GOOGLE_APPLICATION_CREDENTIALS` environment variable (path to a JSON key file)
2. `gcloud auth application-default login` (local development)
3. Attached service account (GCE, GKE, Cloud Run, Cloud Functions, etc.)
```php
'vertex' => [
    'project_id' => env('VERTEX_PROJECT_ID'),
    'location' => env('VERTEX_LOCATION', 'us-central1'),
    // no api_key or credentials - ADC handles auth automatically
],
```
Per-Provider Config Overrides
All providers read from the shared vertex config. If you need different credentials for a specific provider, add a per-provider config block — it will override the shared config:
```php
'vertex' => [
    'project_id' => env('VERTEX_PROJECT_ID'),
    'location' => env('VERTEX_LOCATION', 'us-central1'),
    'credentials' => env('VERTEX_CREDENTIALS'),
],

// Override for Anthropic only (e.g. different region)
'vertex-anthropic' => [
    'location' => 'europe-west1',
],
```
Requirements
- PHP 8.2+
- Laravel 11 or 12
- prism-php/prism >= 0.99.16
License
The MIT License (MIT). Please see License File for more information.