June 3rd, 2018


Offensive Detection


The offensive detection model determines if an image contains offensive content and gives details on the position and the type of the offensive content it has found. The types of offensive content are:

  • Offensive gestures
  • Offensive flags
  • Offensive symbols
Example images: a Confederate flag (offensive flag) and a raised middle finger (offensive gesture).


Offensive Detection does not use any image metadata to determine the presence of offensive content in an image. The file name, extension and metadata do not influence the result. Classification relies solely on the pixel content of the image or video.

On most sites and apps, images containing offensive content will be systematically removed.

Offensive Detection works with black and white images as well as color images or images with filters.

What type of content is detected?

  • Offensive gestures (middle fingers)
  • Offensive symbols (Nazi symbols, KKK)
  • Offensive flags (Confederate flag, ISIS flag)


Use cases

  • Block or detect users who submit images or videos containing offensive content
  • Hide, Blur or Filter hateful symbols and references in images and videos
  • Protect your users from unwanted content


Limitations

  • Elements smaller than 5% of the width or height of the image may not be detected.

Offensive probability

For each image or meaningful video frame, the model returns an "offensive probability" between 0 and 1. Values close to 1 indicate a high probability that the image contains offensive content, while values close to 0 indicate a low probability.


Along with the offensive probability, the model will return a list of all the offensive elements found in the image (if any), along with their positions and type.
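As a sketch of how the probability might be consumed, the helper below maps it to a moderation action. The 0.85 and 0.5 thresholds are illustrative assumptions, not values recommended by the API; tune them on your own traffic.

```python
def moderation_decision(prob, reject_above=0.85, review_above=0.5):
    """Map an offensive probability (0..1) to a moderation action.

    Thresholds are illustrative assumptions, not API recommendations.
    """
    if prob >= reject_above:
        return "reject"   # high confidence: block automatically
    if prob >= review_above:
        return "review"   # uncertain: queue for human review
    return "accept"       # low probability: let the content through
```

A three-way split like this is a common pattern: fully automate only the confident decisions and route the ambiguous middle band to human moderators.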

Offensive Gestures

Detection of offensive gestures such as middle fingers in images and videos.

Example of offensive gesture

Offensive Flags

Detection of offensive flags such as ISIS flags and Confederate flags.

Example of offensive flag

Offensive Symbols

Detection of offensive symbols such as Nazi-era symbols (swastika, SS bolts, Iron Cross) and Ku Klux Klan symbols in images and videos.

Example of offensive symbol

Use the model

If you haven't already, create an account to get your own API keys. You should then install the SDK that corresponds to your programming language. You can also implement your own logic to interact with our API if you prefer. Have a look at our API reference for more details.

# install cURL: https://curl.haxx.se/download.html

pip install sightengine

composer require sightengine/client-php

npm install sightengine --save

Detect offensive content in an image

Let's say you want to moderate the following image:

You can send the image by pointing to a public URL or uploading the byte content of the image.
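If you would rather call the API directly than use an SDK, a minimal sketch using only Python's standard library might look like the following. `build_check_url` simply assembles the same query string shown in the cURL example below; the function names are illustrative, not part of any SDK.

```python
import json
import urllib.parse
import urllib.request

API_ENDPOINT = "https://api.sightengine.com/1.0/check.json"

def build_check_url(image_url, api_user, api_secret):
    # Assemble the same query string used in the cURL example
    params = urllib.parse.urlencode({
        "models": "offensive",
        "api_user": api_user,
        "api_secret": api_secret,
        "url": image_url,
    })
    return API_ENDPOINT + "?" + params

def check_offensive(image_url, api_user, api_secret):
    # Perform the GET request and decode the JSON response
    request_url = build_check_url(image_url, api_user, api_secret)
    with urllib.request.urlopen(request_url) as resp:
        return json.load(resp)
```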

curl -X GET -G 'https://api.sightengine.com/1.0/check.json' \
    -d 'models=offensive' \
    -d 'api_user={api_user}&api_secret={api_secret}' \
    -d 'url=https://sightengine.com/assets/img/doc/offensive/offensive1.jpg'

# if you haven't already, install the SDK with 'pip install sightengine'
from sightengine.client import SightengineClient
client = SightengineClient('{api_user}','{api_secret}')
output = client.check('offensive').set_url('https://sightengine.com/assets/img/doc/offensive/offensive1.jpg')

// if you haven't already, install the SDK with 'composer require sightengine/client-php'
use \Sightengine\SightengineClient;
$client = new SightengineClient('{api_user}','{api_secret}');
$output = $client->check(['offensive'])->set_url('https://sightengine.com/assets/img/doc/offensive/offensive1.jpg');

// if you haven't already, install the SDK with 'npm install sightengine --save'
var sightengine = require('sightengine')('{api_user}','{api_secret}');
sightengine.check(['offensive']).set_url('https://sightengine.com/assets/img/doc/offensive/offensive1.jpg').then(function(result) {
    // The API response (result)
}).catch(function(err) {
    // Handle error
});

The API will then return a JSON response:

{
    "status": "success",
    "request": {
        "id": "req_24DNGegGf1Mo0n4rpaRwZ",
        "timestamp": 1512898748.652,
        "operations": 1
    },
    "offensive": {
        "prob": 0.97,
        "details": [
            {
                "type": "confederate",
                "prob": 0.96
            }
        ]
    },
    "media": {
        "id": "med_24DNJfN2BlCGPGQBoZ5dO",
        "uri": "https://sightengine.com/assets/img/doc/offensive/offensive1.jpg"
    }
}
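To act on this response in code, a small helper can pull out the overall probability and any element types above a threshold. This is a sketch assuming the response structure shown in the sample above; the 0.5 threshold is an illustrative choice.

```python
def summarize_offensive(response, threshold=0.5):
    """Return the overall probability and the flagged element types
    from a check.json response (structure as in the sample above)."""
    offensive = response.get("offensive", {})
    flagged = [d["type"]
               for d in offensive.get("details", [])
               if d.get("prob", 0.0) >= threshold]
    return {"prob": offensive.get("prob", 0.0), "flagged": flagged}
```

For the sample response above, this yields a probability of 0.97 with `"confederate"` as the single flagged type.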

Any other needs?

See our full list of models for details on other filters and checks you can run on your images and videos.
