drupal/ai_recipe_validations_image_safety

Blocks image uploads to the Media: Image bundle that contain nudity, sexually suggestive content, gore, graphic violence, or graphic medical imagery. Uses the AI Validations module with any vision-capable AI provider.

Package info

  • Repository: git.drupalcode.org/project/ai_recipe_validations_image_safety.git
  • Type: drupal-recipe
  • Package: pkg:composer/drupal/ai_recipe_validations_image_safety
Latest release: 1.0.x-dev (2026-04-27 15:20 UTC)



README


Requirements

  • Drupal 10.3+ or 11
  • AI module 1.3 or newer (provides the verifySetupAi config action)
  • A configured default provider for the chat_with_image_vision operation type (OpenAI GPT-4o, Anthropic Claude 3.5 Sonnet, Google Gemini 1.5 Pro, etc.) at /admin/config/ai/settings → Default Providers
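For reference, the default-provider choice is stored in the AI module's configuration. An export might look roughly like the sketch below; the key names are assumptions, not the authoritative ai.settings schema, so verify against your own site's exported configuration:

```yaml
# Illustrative sketch only; key and value names are assumptions.
default_providers:
  chat_with_image_vision:
    provider_id: openai
    model_id: gpt-4o
```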

Apply

composer require drupal/ai_recipe_validations_image_safety
php core/scripts/drupal recipe recipes/contrib/ai_recipe_validations_image_safety
drush cache:rebuild

If chat_with_image_vision has no default model configured, applying the recipe aborts and rolls back with a message from the AI module's verifySetupAi action. Configure a default model and re-run.
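The abort comes from a config action declared in the recipe itself. A minimal sketch of how a recipe can invoke verifySetupAi in its recipe.yml follows; the argument shape shown is an assumption for illustration, not a copy of this recipe's actual file:

```yaml
# Sketch: assumed invocation of the AI module's verifySetupAi config action.
config:
  actions:
    ai.settings:
      verifySetupAi:
        operation_type: chat_with_image_vision
```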

What it does

  • Depends on drupal_cms_media (applied transitively).
  • Installs ai, ai_validations, and field_validation.
  • Creates a field validation rule set media_image_ai_safety attached to the Media: Image bundle's field_media_image field.
  • Runs the AI image constraint on the target_id column in entity validation mode — catches UI uploads, JSON:API posts, and programmatic saves alike.
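The rule set created above is an ordinary field_validation config entity, so it can be inspected or overridden through configuration management. A hypothetical export shape (property names are illustrative, not the module's exact schema):

```yaml
# Hypothetical export of the media_image_ai_safety rule set; inspect the
# real config entity on your site rather than copying this verbatim.
id: media_image_ai_safety
entity_type: media
bundle: image
field_validation_rules:
  ai_image_safety:
    field_name: field_media_image
    column: target_id
```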

Adjusting the threshold

Default behavior blocks anything outside the "Safe" tier (Flickr-style classification). To allow "Moderate" content and block only "Restricted":

  1. Go to Structure → Field Validation → media_image_ai_safety
  2. Edit the ai_image_safety rule
  3. Replace the === DECISION === block of the prompt with:
    - If the image is SAFE or MODERATE, respond with: XTRUE
    - If the image is RESTRICTED, respond with: XFALSE
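If you track configuration in code, the same prompt edit can be made in the exported rule-set YAML instead of the UI. The file and property names below are assumptions, so match them against your actual export before editing:

```yaml
# Assumed fragment of the exported rule-set config; the `prompt` key
# name is illustrative and may differ in the real schema.
field_validation_rules:
  ai_image_safety:
    prompt: |
      ...
      === DECISION ===
      - If the image is SAFE or MODERATE, respond with: XTRUE
      - If the image is RESTRICTED, respond with: XFALSE
```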
    

Cost note

Every image upload triggers one vision-model API call, so bulk migrations, imports, and high-volume user upload flows incur proportional cost. Estimate against your provider's pricing before rolling out to production.
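As a rough sizing sketch (the per-call price below is a hypothetical placeholder, not a real provider rate), cost scales linearly with upload volume:

```python
# Back-of-envelope estimate: one vision-model API call per uploaded image.
# The price used here is a hypothetical placeholder, not a provider's rate.
def monthly_cost(uploads_per_month: int, price_per_call: float) -> float:
    """Total API spend for one month of image uploads."""
    return uploads_per_month * price_per_call

# e.g. 10,000 uploads at a hypothetical $0.005 per vision call
print(f"${monthly_cost(10_000, 0.005):.2f}")
```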

Issue queue

Bugs and feature requests: https://www.drupal.org/project/issues/ai_recipe_validations_image_safety