Image Moderation Principles

Introduction

Sightengine's Image Moderation API is used to moderate images and detect whether they contain unwanted content (such as adult content, offensive content, commercial text, children, weapons...).

The API works both with standard images (such as JPG, PNG, WebP...) and animated multi-frame images such as GIF images.

How the API works

Sightengine Image Moderation is very straightforward to use:

  1. You submit an image to the Sightengine API, along with the list of models you wish to apply.
  2. You immediately get a detailed response from the API describing what was found (if anything), along with moderation scores.

All needed data is given in the API response. There are no callbacks, no moderation queues, no need to wait for updates or track state.
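The request/response flow above can be sketched in a few lines. This is a minimal sketch assuming Sightengine's `check.json` endpoint with its `url`, `models`, `api_user` and `api_secret` parameters; the credentials and image URL below are placeholders:

```python
# Build a single moderation request (a sketch; credentials are placeholders).
from urllib.parse import urlencode

API_ENDPOINT = "https://api.sightengine.com/1.0/check.json"

def build_check_request(image_url, models, api_user, api_secret):
    """Return the full GET URL for one moderation call.

    `models` is the list of model names to apply, sent comma-separated.
    """
    params = {
        "url": image_url,
        "models": ",".join(models),
        "api_user": api_user,        # placeholder credential
        "api_secret": api_secret,    # placeholder credential
    }
    return API_ENDPOINT + "?" + urlencode(params)

request_url = build_check_request(
    "https://example.com/photo.jpg",
    ["nudity", "offensive"],
    api_user="YOUR_API_USER",
    api_secret="YOUR_API_SECRET",
)
# Sending this request (e.g. with urllib.request or the requests library)
# returns one synchronous JSON payload: no callbacks, no moderation queues.
```

Because the call is synchronous, the moderation decision can be made inline in an upload handler, with no state to track afterwards.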

Head to our Quickstart guide to start using the API with a few lines of code, or head to the API reference for a deeper look into the structure of requests and responses.

Models

A Model is a filter that looks for a specific type of content in your image. For instance, nudity is the name of a model trained to detect adult, racy and suggestive content, flagging scenes ranging from explicit to mild nudity.

By specifying the list of Models you wish to apply to an image, you tell the API what you would like to detect and filter.

Head to our Model reference to see all the available Models along with their detection capabilities.
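As an illustration, the scores returned for a model can be turned into an accept/reject decision like this. The sample payload below is illustrative rather than a captured API response (exact fields depend on the model version); scores are probabilities between 0 and 1, and the 0.9 threshold is an application-level choice, not an API default:

```python
import json

# Illustrative response shape for a request that applied the "nudity" model.
sample_response = json.loads("""
{
  "status": "success",
  "nudity": {"raw": 0.01, "partial": 0.02, "safe": 0.97}
}
""")

def is_safe(response, threshold=0.9):
    """Accept the image only when the model's "safe" score clears the
    chosen threshold (the threshold is a hypothetical app-level setting)."""
    return (
        response.get("status") == "success"
        and response.get("nudity", {}).get("safe", 0.0) >= threshold
    )

print(is_safe(sample_response))
```

Tightening or loosening the threshold is how an application trades false positives against false negatives for its own audience.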
