kreuzberg/liter-llm

High-performance LLM client for PHP. Unified interface for streaming, tool calling, and provider routing across OpenAI, Anthropic, and 142+ providers. Powered by Rust core.

Package info

github.com/kreuzberg-dev/liter-llm

Language: Rust

Type: php-ext

Ext name: ext-liter_llm_php

pkg:composer/kreuzberg/liter-llm

Statistics

Installs: 0

Dependents: 0

Suggesters: 0

Stars: 4

Open Issues: 0

1.0.0-rc.8 2026-03-28 16:51 UTC

This package is auto-updated.

Last update: 2026-03-28 17:05:15 UTC


README


A lighter, faster, safer universal LLM API client -- one Rust core, 11 native language bindings, 142 providers.

Why liter-llm?

A universal LLM API client, built from the ground up in Rust. No interpreter, no transitive dependency tree, no supply-chain surface area. One binary, 11 native language bindings, 142 providers.

  • Compiled Rust core. No pip install supply chain. No .pth auto-execution hooks. No runtime dependency tree to compromise. The kind of supply chain attack that hit litellm in 2026 is structurally impossible here.
  • Secrets stay secret. API keys are wrapped in secrecy::SecretString -- zeroed on drop, redacted in logs, never serialized.
  • Polyglot from day one. Python, TypeScript, Go, Java, Ruby, PHP, C#, Elixir, WebAssembly, C/FFI -- all thin wrappers around the same Rust core. No reimplementation drift.
  • Observability built in. Production-grade OpenTelemetry with GenAI semantic conventions -- not an afterthought callback system.
  • Composable middleware. Rate limiting, caching, cost tracking, health checks, and fallback as Tower layers you stack like building blocks.
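The middleware itself is Rust (Tower layers), but the composition model is easy to picture in Python terms. The sketch below is conceptual only, not liter-llm's API: a "service" is a function from request to response, and each "layer" wraps one service to produce another, so layers stack like building blocks.

```python
from typing import Callable

# A service maps a request to a response; a layer wraps a service.
Service = Callable[[str], str]

def rate_limit_layer(inner: Service, max_calls: int) -> Service:
    calls = {"n": 0}
    def service(request: str) -> str:
        if calls["n"] >= max_calls:
            raise RuntimeError("rate limit exceeded")
        calls["n"] += 1
        return inner(request)
    return service

def cache_layer(inner: Service) -> Service:
    cache: dict[str, str] = {}
    def service(request: str) -> str:
        if request not in cache:
            cache[request] = inner(request)  # miss: call through
        return cache[request]
    return service

def model_service(request: str) -> str:
    # Stand-in for the actual provider call.
    return f"response to {request!r}"

# Stack: caching wraps rate limiting wraps the model call.
client = cache_layer(rate_limit_layer(model_service, max_calls=2))
print(client("hello"))  # computed once
print(client("hello"))  # repeat is served from cache
```

Because each layer only sees the service beneath it, reordering the stack (e.g. rate-limiting cache misses only, as above) is a one-line change.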

We give credit to litellm for proving the category -- our provider registry was bootstrapped from theirs. See ATTRIBUTIONS.md.

Feature Comparison

An honest look at where things stand. We're newer and leaner -- litellm has breadth we haven't matched yet, and we have depth they can't easily retrofit.

| Feature | liter-llm | litellm |
| --- | --- | --- |
| Language | Rust (compiled, memory-safe) | Python |
| Bindings | 11 native (Rust, Python, TS, Go, Java, Ruby, PHP, C#, Elixir, WASM, C) | Python (+ OpenAI-compatible proxy) |
| Providers | 142 (compiled at build time) | 100+ (runtime resolution) |
| Streaming | SSE + AWS EventStream binary protocol | SSE |
| Observability | Built-in OpenTelemetry (GenAI semconv) | 51+ callback integrations |
| API key safety | secrecy::SecretString (zeroed, redacted) | Plain strings |
| Middleware | Composable Tower stack | Built-in, non-composable |
| Proxy / Gateway | -- | Yes |
| Guardrails | -- | 35+ hooks |
| Semantic caching | -- | Redis + Qdrant backends |
| Virtual key mgmt | -- | Yes |
| Management API | -- | Multi-tenant (teams, budgets, keys) |
| Fine-tuning API | -- | Yes |
| Load balancer | Fallback middleware | Full router with strategies |
| Cost tracking | Embedded pricing + OTEL spans | Per-key/team/model budgets |
| Rate limiting | Per-model RPM/TPM (Tower layer) | Per-key/user/team/model |
| Caching | In-memory LRU (Tower layer) | 16 backends (Redis, S3, disk, ...) |
| Tool calling | Parallel tools, structured output, JSON schema | Full support |
| Embeddings | Yes | Yes |
| Batch API | Yes | Yes |
| Audio / Speech | Yes | Yes |
| Lifecycle hooks | onRequest/onResponse/onError per-client | Callback integrations |
| Budget tracking | Per-model and global cost limits | Per-key/team budgets |
| Response cache | In-memory LRU with TTL | 16 backends |
| Custom providers | Runtime register_provider API | Config-based |
| Image generation | Yes | Yes |
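The "compiled at build time" registry means provider resolution involves no runtime lookups, but the routing convention on the caller's side is just a prefixed model string. A minimal illustrative parser of that convention (not the library's code; the provider names are examples only):

```python
def split_model(model: str) -> tuple[str, str]:
    """Split a 'provider/model' string on the FIRST slash, so model
    names that themselves contain slashes still route correctly."""
    provider, sep, name = model.partition("/")
    if not sep:
        raise ValueError(f"expected 'provider/model', got {model!r}")
    return provider, name

print(split_model("openai/gpt-4o"))
# ('openai', 'gpt-4o')
print(split_model("together_ai/meta-llama/Llama-3-70b"))
# ('together_ai', 'meta-llama/Llama-3-70b')
```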

Key Features

  • 142 providers -- OpenAI, Anthropic, Google, AWS Bedrock, Groq, Mistral, Together AI, Fireworks, Perplexity, DeepSeek, Cohere, and 130+ more
  • 11 native bindings -- Rust, Python, TypeScript/Node.js, Go, Java, Ruby, PHP, C#, Elixir, WebAssembly, C/FFI
  • First-class streaming -- SSE and AWS EventStream binary protocol with zero-copy buffers
  • OpenTelemetry -- GenAI semantic conventions, cost tracking spans, HTTP-level tracing
  • Tower middleware -- Rate limiting, LRU caching, cost estimation, health checks, cooldowns, fallback -- all composable
  • Tool calling -- Parallel tools, structured outputs, JSON schema validation
  • Embeddings -- Dimension selection, base64 format, multi-provider support
  • Schema-driven -- Provider registry and API types compiled from JSON schemas, no runtime lookups
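On the base64 embedding format: in the OpenAI-style convention (which many providers mirror, and which we assume liter-llm follows here), the vector is packed as little-endian float32 values and base64-encoded. Decoding is a few lines of stdlib Python; the example round-trips a small vector to show the encoding:

```python
import base64
import struct

def decode_embedding(b64: str) -> list[float]:
    """Unpack a base64 string of little-endian float32 values."""
    raw = base64.b64decode(b64)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))

# Round trip: values chosen to be exactly representable in float32.
vector = [0.25, -1.5, 3.0]
encoded = base64.b64encode(struct.pack("<3f", *vector)).decode()
print(decode_embedding(encoded))  # [0.25, -1.5, 3.0]
```

The base64 form is worth requesting when available: it is roughly a third the size of a JSON float array on the wire.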

Architecture

liter-llm/
├── crates/
│   ├── liter-llm/           # Rust core library
│   ├── liter-llm-py/        # Python (PyO3) core
│   ├── liter-llm-node/      # Node.js (NAPI-RS) core
│   ├── liter-llm-ffi/       # C-compatible FFI layer
│   ├── liter-llm-php/       # PHP (ext-php-rs) core
│   └── liter-llm-wasm/      # WebAssembly (wasm-bindgen) core
├── packages/
│   ├── python/               # Python package
│   ├── typescript/           # TypeScript/Node.js package
│   ├── go/                   # Go (cgo) module
│   ├── java/                 # Java (Panama FFI) package
│   ├── ruby/                 # Ruby (Magnus) gem
│   ├── elixir/               # Elixir (Rustler NIF) package
│   ├── csharp/               # .NET (P/Invoke) package
│   └── php/                  # PHP (Composer) package
└── schemas/                  # Provider registry and API schemas

Quick Start

Install in your language of choice:

| Language | Install |
| --- | --- |
| Python | `pip install liter-llm` |
| Node.js | `pnpm add @kreuzberg/liter-llm` |
| Rust | `cargo add liter-llm` |
| Go | `go get github.com/kreuzberg-dev/liter-llm/packages/go` |
| Java | `dev.kreuzberg:liter-llm` (Maven/Gradle) |
| Ruby | `gem install liter_llm` |
| PHP | `composer require kreuzberg/liter-llm` |
| C# | `dotnet add package LiterLlm` |
| Elixir | `{:liter_llm, "~> 0.1"}` in `mix.exs` |
| WASM | `pnpm add @kreuzberg/liter-llm-wasm` |
| C/FFI | Build from source -- see FFI crate |

Usage

from liter_llm import LlmClient

client = LlmClient()

# Chat with any provider using the provider/model prefix
response = client.chat(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

# Stream responses
for chunk in client.chat_stream(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Tell me a story"}],
):
    print(chunk.delta, end="", flush=True)

The same API is available in all 11 languages -- see the language READMEs below for idiomatic examples.
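Tool calling uses JSON-schema tool definitions. The sketch below builds one in the OpenAI-style shape; note that the exact parameter for passing tools to `client.chat()` is an assumption here (shown commented out), not documented API, while the schema dict and the JSON-encoded arguments are standard:

```python
import json

# A tool definition in the JSON-schema style the feature list describes.
get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# Hypothetical call shape -- the `tools=` parameter is an assumption:
# response = client.chat(
#     model="openai/gpt-4o",
#     messages=[{"role": "user", "content": "Weather in Berlin?"}],
#     tools=[get_weather],
# )

# Providers return tool calls as a name plus JSON-encoded arguments:
arguments = json.loads('{"city": "Berlin", "unit": "celsius"}')
print(arguments["city"])  # Berlin
```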

Core API

All bindings expose a unified chat() function:

| Language | Usage |
| --- | --- |
| Rust | `DefaultClient::new(config).chat(messages, options).await` |
| Python | `LlmClient(api_key=...).chat(messages, config)` |
| Node.js | `new LlmClient({ apiKey }).chat(messages, config)` |
| Go | `client.Chat(ctx, messages, config)` |
| Java | `client.chat(messages, configJson)` |
| Ruby | `LiterLlm::LlmClient.new(api_key, config).chat(messages)` |
| Elixir | `LiterLlm.chat(messages, config)` |
| PHP | `LiterLlm\LlmClient::new($apiKey)->chat($messages, $config)` |
| C# | `new LlmClient(apiKey).ChatAsync(messages, config)` |
| WASM | `new LlmClient({ apiKey }).chat(messages, config)` |
| C FFI | `liter_llm_chat(client, messages_json, config_json)` |

Language READMEs

| Language | README | Binding |
| --- | --- | --- |
| Python | packages/python | PyO3 |
| TypeScript / Node.js | crates/liter-llm-node | NAPI-RS |
| Go | packages/go | cgo |
| Java | packages/java | Panama FFI |
| Ruby | packages/ruby | Magnus |
| Elixir | packages/elixir | Rustler NIF |
| PHP | packages/php | ext-php-rs |
| .NET (C#) | packages/csharp | P/Invoke |
| WebAssembly | crates/liter-llm-wasm | wasm-bindgen |
| C/C++ (FFI) | crates/liter-llm-ffi | C ABI |

Part of kreuzberg.dev

liter-llm is built by the kreuzberg.dev team -- the same people behind Kreuzberg (document extraction for 91+ formats), tree-sitter-language-pack (multilingual parsing), and html-to-markdown. All our libraries share the same Rust-core, polyglot-bindings architecture. Visit kreuzberg.dev or find us on GitHub.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.

Join our Discord community for questions and discussion.

License

MIT -- see LICENSE for details.