1manfactory / ai-costs
Unofficial PHP library for calculating LLM API costs from usage payloads.
Requires
- php: ^8.2
Requires (Dev)
- phpmd/phpmd: ^2.15
- phpstan/phpstan: ^2.1
- phpunit/phpunit: ^11.5
- squizlabs/php_codesniffer: ^4.0
README
ai-costs is an unofficial PHP library for calculating LLM API costs from usage payloads.
It is designed to stay SDK-agnostic: feed it an OpenAI, Anthropic Claude, or Google Gemini usage payload, or a normalized usage object, and it returns a detailed cost breakdown.
Warning
This library provides estimates only. I do not guarantee the accuracy, completeness, or fitness of the calculated values for any billing purpose. The amounts actually billed by the provider are authoritative and always take precedence.
Units
All calculated cost values are returned as integers in usd_micros.
- 1 USD = 1_000_000 usd_micros
- Public variable and method names use explicit unit suffixes like InUsdMicros
- The package returns integers only
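As a rough illustration of the unit convention, a small formatting helper can turn an integer usd_micros amount into a dollar string. The helper below is not part of ai-costs; its name and existence are invented for this example.

```php
<?php

declare(strict_types=1);

// Hypothetical helper, NOT part of the ai-costs library: render an integer
// usd_micros amount as a human-readable USD string with six decimal places.
function formatUsdMicros(int $amountInUsdMicros): string
{
    return '$' . number_format($amountInUsdMicros / 1_000_000, 6);
}

echo formatUsdMicros(37_050), PHP_EOL;    // $0.037050
echo formatUsdMicros(1_000_000), PHP_EOL; // $1.000000
```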
Current scope
- OpenAI Responses API usage extraction
- OpenAI Chat Completions API usage extraction
- Anthropic Messages API usage extraction
- Google Gemini generateContent usage extraction
- Text token pricing for selected GPT-5.4 and GPT-5.5 family models
- Text token pricing for current Claude and Gemini 2.5 text-output models
- Automatic long-context pricing for gpt-5.5, gpt-5.4, gpt-5.4-pro, and gemini-2.5-pro
- Helper charges for web search, file search, and container sessions
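To make the long-context idea concrete: providers typically switch to a higher per-token input rate once the prompt exceeds a model-specific threshold. The sketch below is illustrative only and is not the library's implementation; every number in it (threshold and rates) is invented, and in ai-costs the real values live in the bundled price catalog.

```php
<?php

declare(strict_types=1);

// Illustrative sketch only -- NOT the ai-costs implementation.
// All numbers are made up; real thresholds and rates come from
// the versioned price catalog.
function pickInputRatePerMTokInUsdMicros(int $inputTokens): int
{
    $longContextThresholdTokens = 200_000; // hypothetical threshold
    $standardRate = 1_250_000;             // hypothetical usd_micros per 1M input tokens
    $longContextRate = 2_500_000;          // hypothetical long-context rate

    return $inputTokens > $longContextThresholdTokens
        ? $longContextRate
        : $standardRate;
}

echo pickInputRatePerMTokInUsdMicros(1_200), PHP_EOL;   // 1250000
echo pickInputRatePerMTokInUsdMicros(300_000), PHP_EOL; // 2500000
```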
Installation
```bash
composer require 1manfactory/ai-costs
```
Quick start
```php
<?php

declare(strict_types=1);

use AiCosts\CostCalculator;
use AiCosts\Enum\BillingMode;
use AiCosts\OpenAI\OpenAIResponsesUsageExtractor;
use AiCosts\OpenAI\OpenAIToolPricing;
use AiCosts\Pricing\StaticPriceProvider;
use AiCosts\Value\BillingContext;

$payload = [
    'model' => 'gpt-5.4',
    'usage' => [
        'input_tokens' => 1200,
        'input_tokens_details' => ['cached_tokens' => 200],
        'output_tokens' => 300,
        'output_tokens_details' => ['reasoning_tokens' => 50],
    ],
];

$extractor = new OpenAIResponsesUsageExtractor();
$usage = $extractor->extract($payload);

$calculator = new CostCalculator(StaticPriceProvider::default());
$breakdown = $calculator->calculate(
    $usage,
    new BillingContext(
        billingMode: BillingMode::STANDARD,
        additionalCharges: [
            OpenAIToolPricing::webSearchCalls(3),
        ],
    ),
);

echo $breakdown->totalCostInUsdMicros; // 37050
```
Included building blocks
- AiCosts\CostCalculator
- AiCosts\Pricing\StaticPriceProvider
- AiCosts\OpenAI\OpenAIResponsesUsageExtractor
- AiCosts\OpenAI\OpenAIChatCompletionsUsageExtractor
- AiCosts\Anthropic\AnthropicMessagesUsageExtractor
- AiCosts\Gemini\GeminiGenerateContentUsageExtractor
- AiCosts\OpenAI\OpenAIToolPricing
Notes
- The packaged price catalog is intentionally static and versioned in the repo.
- Long-context thresholds are only auto-applied where the threshold is explicitly documented in OpenAI model docs.
- Anthropic prompt caching reads plus 5-minute and 1-hour cache writes are supported when the API response includes the detailed usage.cache_creation breakdown.
- Gemini support currently targets the stable gemini-2.5-* text-output models included in the bundled catalog.
- gpt-5.5-pro pricing is included, but long-context auto-switching is intentionally disabled in this MVP until the threshold rule is modeled explicitly in the catalog.
- Explicit Gemini cache storage charges and grounding/tool surcharges are not auto-derived from a single response payload in this MVP.
- Audio token pricing is intentionally blocked in this MVP so the library does not undercount usage silently.
- All calculated values are estimates derived from official public pricing docs; the amounts actually billed by OpenAI, Anthropic, or Google remain authoritative.
- Canonical cost outputs are integer usd_micros.
- The package is not affiliated with or endorsed by OpenAI, Anthropic, or Google.
- This package is licensed under the MIT License.
Development
Run the local PHP checks manually with Composer:
```bash
composer lint:syntax
composer lint:phpstan
composer lint:phpmd
composer lint:phpcs
composer lint:all
```
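These commands map to Composer scripts. A composer.json "scripts" section along the following lines would wire them up; the exact tool invocations below are assumptions based on the dev dependencies, not copied from this repository:

```json
{
    "scripts": {
        "lint:syntax": "find src tests -name '*.php' -print0 | xargs -0 -n1 php -l",
        "lint:phpstan": "phpstan analyse",
        "lint:phpmd": "phpmd src text phpmd.xml",
        "lint:phpcs": "phpcs",
        "lint:all": [
            "@lint:syntax",
            "@lint:phpstan",
            "@lint:phpmd",
            "@lint:phpcs"
        ]
    }
}
```

Defining lint:all as an array of @-references lets Composer run the checks in order and stop on the first non-zero exit code, which is what makes the pre-commit hook useful.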
The versioned Git hook lives in .githooks/pre-commit and this repository is configured to use it via git config core.hooksPath .githooks.
Every git commit runs composer lint:all; the commit is blocked as soon as any check fails.
Release flow
This repository uses a GitHub Actions workflow at .github/workflows/release-tag.yml to create a new patch tag on every push to main.
- First release tag: v0.1.0
- Later pushes on main: v0.1.1, v0.1.2, v0.1.3, ...
That means the publishing path is:
local repository -> GitHub main -> automatic Git tag -> Packagist update
After the repository is submitted once on Packagist, each new Git tag becomes a new Composer-installable package version.
Pricing sources
The bundled pricing catalog is based on official provider docs as of 2026-05-02.
- OpenAI pricing overview: https://developers.openai.com/api/docs/pricing
- OpenAI GPT-5.4 model page: https://developers.openai.com/api/docs/models/gpt-5.4/
- OpenAI GPT-5.4 pro model page: https://developers.openai.com/api/docs/models/gpt-5.4-pro
- Anthropic pricing: https://platform.claude.com/docs/en/about-claude/pricing
- Anthropic model overview: https://platform.claude.com/docs/en/about-claude/models/overview
- Gemini pricing: https://ai.google.dev/gemini-api/docs/pricing
- Gemini generateContent response schema: https://ai.google.dev/api/generate-content