bluefly/ai_provider_langchain

LangChain provider for the Drupal AI platform with advanced chain orchestration


Provides LangChain integration for chains, agents, and RAG workflows within Drupal's AI ecosystem.

Overview

The AI Provider LangChain module integrates LangChain's powerful language model orchestration capabilities into Drupal. It provides a native Drupal implementation that works with any LangChain-compatible API endpoint, including LM Studio, Ollama, and LangChain Server.

Features

  • Real API Integration: Full HTTP client implementation for LangChain-compatible endpoints
  • Secure Key Management: Integration with Drupal's Key module for secure API credential storage
  • Chat Completions: Support for conversational AI interactions
  • Embeddings Generation: Create vector embeddings for semantic search and RAG workflows
  • Token Usage Tracking: Monitor API usage and costs
  • Multiple Provider Support: Works with LM Studio, Ollama, LangChain Server, and any OpenAI-compatible endpoint
  • Native Drupal Patterns: Built using Drupal best practices with dependency injection, configuration management, and logging

Requirements

  • Drupal 10.3 or higher / Drupal 11
  • PHP 8.1 or higher
  • AI module (drupal/ai) ^1.0
  • Key module (drupal/key) - Recommended for secure API key storage

Suggested Modules

  • AI Automators (drupal/ai_automators) - For field automation workflows
  • Search API (drupal/search_api) - For advanced search integration
  • Vector Database - For embedding storage and similarity search

Installation

  1. Install via Composer:

    composer require bluefly/ai_provider_langchain
    
  2. Enable the module:

    drush en ai_provider_langchain
    
  3. Configure the provider at /admin/config/ai/provider/langchain

Configuration

API Settings

Navigate to Configuration > AI > Providers > LangChain (/admin/config/ai/provider/langchain):

  1. Enable Provider: Toggle to activate the LangChain integration
  2. API Endpoint: Set your LangChain-compatible endpoint (default: http://localhost:1234/v1)
  3. API Key:
    • Select a key from Key module (recommended)
    • Or enter directly (less secure)
  4. Model Settings:
    • Chat Model (e.g., gpt-3.5-turbo or llama2)
    • Embedding Model (e.g., text-embedding-ada-002)
    • Temperature (0.0 - 1.0)
    • Max Tokens
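
For scripted deployments, the same settings can be applied programmatically. A minimal sketch; the ai_provider_langchain.settings config name, its setting keys, and the langchain_api key ID are assumptions, not confirmed names (check the module's config/install directory for the real ones):

// Apply provider settings programmatically (config name is an assumption).
$config = \Drupal::configFactory()->getEditable('ai_provider_langchain.settings');
$config
  ->set('api_endpoint', 'http://localhost:1234/v1')
  ->set('chat_model', 'gpt-3.5-turbo')
  ->set('embedding_model', 'text-embedding-ada-002')
  ->set('temperature', 0.7)
  ->save();

// Read an API key stored through the Key module ('langchain_api' is an
// illustrative key ID).
$api_key = \Drupal::service('key.repository')->getKey('langchain_api')->getKeyValue();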

Supported Endpoints

The module works with any OpenAI-compatible API, including:

  • LM Studio: Local model serving (http://localhost:1234/v1)
  • Ollama: With OpenAI compatibility layer (http://localhost:11434/v1)
  • LangChain Server: Full LangChain functionality
  • OpenAI API: Direct OpenAI integration
  • Custom Endpoints: Any OpenAI-compatible service

Usage

Basic Chat Completion

// Get the AI provider service
$provider_manager = \Drupal::service('ai.provider');
$langchain = $provider_manager->createInstance('langchain');

// Simple chat
$response = $langchain->chat('Explain Drupal in simple terms', 'gpt-3.5-turbo');
echo $response;

// Chat with context
$messages = [
  ['role' => 'system', 'content' => 'You are a Drupal expert.'],
  ['role' => 'user', 'content' => 'What are the benefits of Drupal?']
];
$response = $langchain->chatMessages($messages, 'gpt-3.5-turbo');

Generating Embeddings

// Generate embeddings for text
$text = 'Drupal is a powerful open-source CMS';
$embeddings = $langchain->embeddings($text, 'text-embedding-ada-002');

// Process multiple texts
$texts = [
  'Drupal provides flexibility',
  'Drupal has strong security',
  'Drupal supports multilingual content'
];
foreach ($texts as $text) {
  $embedding = $langchain->embeddings($text);
  // Store in vector database for similarity search
}

Integration with AI Module

The provider integrates seamlessly with Drupal's AI module ecosystem:

// Use through AI module's unified interface
$ai_service = \Drupal::service('ai.service');
$response = $ai_service->process([
  'provider' => 'langchain',
  'model' => 'gpt-3.5-turbo',
  'prompt' => 'Generate a Drupal module description'
]);

RAG (Retrieval-Augmented Generation) Workflow

// 1. Generate embeddings for content (requires: use Drupal\node\Entity\Node;)
$node = Node::load($nid);
$embedding = $langchain->embeddings($node->body->value);

// 2. Store in a vector database (service name depends on your vector storage module)
$vector_storage = \Drupal::service('vector_database.storage');
$vector_storage->store($node->id(), $embedding);

// 3. Query with similarity search
$query_embedding = $langchain->embeddings($user_query);
$similar_content = $vector_storage->search($query_embedding, 5);

// 4. Generate a response grounded in the retrieved context
$context = implode("\n", array_column($similar_content, 'content'));
$prompt = "Based on this context: $context\n\nQuestion: $user_query";
$response = $langchain->chat($prompt);

Error Handling

The module provides comprehensive error handling:

try {
  $response = $langchain->chat($prompt);
} catch (\Exception $e) {
  \Drupal::logger('ai_provider_langchain')->error('Chat failed: @message', [
    '@message' => $e->getMessage()
  ]);
}

Logging and Debugging

Enable debug logging in settings.php:

$settings['ai_provider_langchain.debug'] = TRUE;

View logs at /admin/reports/dblog filtered by type ai_provider_langchain.

Token Usage Tracking

The module tracks token usage for cost monitoring:

// The raw response array includes usage metadata when the endpoint reports it.
$response = $langchain->chat($prompt);
$usage = $response['usage'] ?? [];
// $usage['prompt_tokens'], $usage['completion_tokens'], $usage['total_tokens']

Extending the Module

Custom Chain Implementation

namespace Drupal\my_module\Plugin\LangChain\Chain;

use Drupal\ai_provider_langchain\Plugin\LangChain\ChainBase;

/**
 * @LangChainChain(
 *   id = "my_custom_chain",
 *   label = @Translation("My Custom Chain"),
 *   description = @Translation("Custom LangChain workflow")
 * )
 */
class MyCustomChain extends ChainBase {

  public function execute(array $inputs) {
    // Custom chain logic; return the chain's output.
    return $inputs;
  }

}

Troubleshooting

Common Issues

  1. Connection Refused
    • Verify your LangChain service is running
    • Check the endpoint URL in configuration
    • Ensure the firewall allows the connection
  2. Invalid API Key
    • Verify the key in the Key module or configuration
    • Check API key permissions
    • Ensure the key is properly formatted
  3. Model Not Found
    • Verify the model name matches an available model
    • Check the endpoint documentation for supported models
    • Query the /v1/models endpoint to list available models (see the sketch below)
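
To confirm connectivity and see which models an endpoint exposes, you can query it with Drupal's HTTP client. A minimal sketch, assuming the endpoint follows the OpenAI /v1/models convention:

// List the models advertised by the configured endpoint.
$client = \Drupal::httpClient();
$result = $client->get('http://localhost:1234/v1/models', [
  'headers' => ['Authorization' => 'Bearer ' . $api_key],
]);
$models = json_decode((string) $result->getBody(), TRUE);
foreach ($models['data'] ?? [] as $model) {
  echo $model['id'] . PHP_EOL;
}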

Performance Considerations

  • Use caching for repeated queries (see the sketch after this list)
  • Batch embedding generation for multiple texts
  • Monitor token usage to control costs
  • Consider local models for high-volume usage
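
As an example of the caching point above, identical prompts can be served from Drupal's Cache API instead of repeating the API call. A minimal sketch; the cache ID prefix and one-hour TTL are illustrative:

// Serve repeated prompts from cache to avoid redundant API calls.
$cid = 'ai_provider_langchain:chat:' . hash('sha256', $prompt);
$cache = \Drupal::cache();
if ($hit = $cache->get($cid)) {
  $response = $hit->data;
}
else {
  $response = $langchain->chat($prompt);
  // Cache for one hour (TTL is illustrative; tune for your content).
  $cache->set($cid, $response, time() + 3600);
}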

Security

  • Always use Key module for API credentials
  • Never commit API keys to version control
  • Use HTTPS for remote endpoints
  • Implement rate limiting for public-facing features (see the sketch below)
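
For the rate-limiting point, Drupal's flood service provides a simple per-client throttle. A minimal sketch; the event name and the 10-requests-per-60-seconds limit are illustrative:

// Throttle chat requests: at most 10 per 60 seconds per client.
$flood = \Drupal::flood();
if (!$flood->isAllowed('ai_provider_langchain.chat', 10, 60)) {
  throw new \RuntimeException('Rate limit exceeded. Try again later.');
}
$flood->register('ai_provider_langchain.chat', 60);
$response = $langchain->chat($prompt);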

License

This project is licensed under GPL-2.0-or-later.