Offensive Content Detection


The offensive model determines whether an image or a video contains offensive or hateful content. In addition to detecting such content, the model returns the position and type of each offensive element found.

The offensive content detected falls broadly into the following categories:

  • Nazi-era symbols
  • Confederate symbols
  • Supremacist symbols
  • Offensive gestures
  • Terrorist symbols

See the concept section for the full list of detected concepts.


Concepts

This section lists the concepts detected by the Offensive Content Detection model.

Nazi-era symbols

Nazi hakenkreuz

Nazi symbol that can be found on objects from the Nazi era: costumes, drawings, cartoons, tattoos...


Nazi flags

Flags from the Nazi era, such as the Nazi party flag, the Reichskriegsflagge, the Reichsdienstflagge...


Iron cross

Military decoration of Prussia, later used in Nazi Germany. Present on Nazi flags, costumes, medals and other objects.


SS bolts

Schutzstaffel sig runes. They symbolised victory and are present on Nazi-era objects such as helmets, medals, costumes...



Sonnenrad

Sonnenrad or sunwheel. Ancient European symbol appropriated by the Nazis.



SA emblem

Emblem of Hitler's SA troops. Some neo-Nazis and white supremacists currently use this symbol.


Supremacist symbols


Ku Klux Klan


Burning cross

Cross burning is a practice associated with the Ku Klux Klan.


Blood drop cross

Primary symbol associated with Ku Klux Klan groups.


Celtic cross

There are many types of Celtic crosses, with different meanings. The variant detected here is one of the most commonly used white supremacist symbols.



Valknut

Valknut or valknot. Old Norse symbol appropriated by some white supremacists.


Odal rune

Odal rune, also known as the othala rune. Originally part of the runic alphabet system but appropriated by the Nazis as a symbol for Aryanism.



Old runic symbol appropriated during the Nazi era. It has become a symbol of choice for neo-Nazis.


Confederate symbols

Confederate flag

Flag of the Confederate States of America.



Offensive gestures

Middle finger

Obscene hand gesture in Western culture.


Terrorist symbols

ISIS flag

The IS / ISIL / ISIS / Daesh version of the Black Standard.



Offensive Content Detection does not use any image metadata to determine whether an image contains offensive content. The file extension, metadata and file name do not influence the result. Classification is based solely on the pixel content of the image or video.

On most sites and apps, images containing offensive content will be systematically removed.

Offensive Content Detection works with black-and-white images as well as color images and images with filters.


Use cases

  • Block or detect users who submit images or videos containing offensive content
  • Hide, Blur or Filter hateful symbols and references in images and videos
  • Protect your users from unwanted content


Limitations

  • Elements smaller than 5% of the width or height of the image may not be detected.

Recommended threshold

When processing the "offensive" value returned by the API, users generally set a threshold. Images or videos with a value above this threshold will be flagged as potentially containing offensive content, while those with a value below it will be considered safe.

Thresholds need to be fine-tuned for each individual use case: adapt them depending on your tolerance for false positives and false negatives.

If you want to reduce false negatives, you may want to start with a threshold of 0.5 (meaning that images with an "offensive" value above 0.5 would be flagged).

If you want to reduce false positives, you may want to start with a threshold of 0.8.

If there are concepts you want to allow, or if you want to set different thresholds for different concepts, you can do so as well, since the API returns a score for each element found, as sketched below.
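For illustration, here is a minimal Python sketch of this flagging logic. The overall threshold, the per-concept override and the function name are hypothetical example values, not recommendations from the API:

# Minimal sketch: flagging logic applied to the API's JSON output.
# The thresholds below are example values; tune them for your use case.
OVERALL_THRESHOLD = 0.8          # flag if the overall "prob" exceeds this
CONCEPT_THRESHOLDS = {           # hypothetical per-concept overrides
    'confederate': 0.9,
}

def is_flagged(offensive):
    # offensive: the "offensive" section of the API response
    if offensive.get('prob', 0) >= OVERALL_THRESHOLD:
        return True
    # each detected element carries its own label and score
    for box in offensive.get('boxes', []):
        threshold = CONCEPT_THRESHOLDS.get(box['label'], OVERALL_THRESHOLD)
        if box['prob'] >= threshold:
            return True
    return False

# usage: is_flagged(output['offensive'])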


In addition to the overall offensive probability, the model returns a list of the offensive elements found in the image (if any), together with their positions and types.

Use the model

If you haven't already, create an account to get your own API keys.

Detect offensive content in an image

Let's say you want to moderate the following image:

You can send the image by pointing to a public URL or uploading the byte content of the image.

curl -X GET -G '' \
    -d 'models=offensive' \
    -d 'api_user={api_user}&api_secret={api_secret}' \
    --data-urlencode 'url='

# this example uses requests
import requests
import json

params = {
  'url': '',
  'models': 'offensive',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
r = requests.get('', params=params)

output = json.loads(r.text)

$params = array(
  'url' => '',
  'models' => 'offensive',
  'api_user' => '{api_user}',
  'api_secret' => '{api_secret}',
);

// this example uses cURL
$ch = curl_init(''.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);

$output = json_decode($response, true);

// this example uses axios
const axios = require('axios');

axios.get('', {
  params: {
    'url': '',
    'models': 'offensive',
    'api_user': '{api_user}',
    'api_secret': '{api_secret}',
  }
})
.then(function (response) {
  // on success: handle response
})
.catch(function (error) {
  // handle error
  if (error.response) console.log(error.response.data);
  else console.log(error.message);
});
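The examples above reference the image through a public URL. To upload the raw bytes instead, here is a minimal Python sketch; the POST flow, the multipart field name 'media' and the file path are assumptions for illustration, since the upload variant is not shown above:

# Hypothetical sketch: upload the image bytes instead of passing a URL.
# Assumes the endpoint accepts POST with a multipart 'media' file field.
import requests
import json

params = {
  'models': 'offensive',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}

with open('/path/to/image.jpg', 'rb') as image_file:  # placeholder path
    r = requests.post('', files={'media': image_file}, data=params)

output = json.loads(r.text)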

The API will then return a JSON response:

    "status": "success",
    "request": {
        "id": "req_24DNGegGf1Mo0n4rpaRwZ",
        "timestamp": 1512898748.652,
        "operations": 1
    "offensive": {
        "prob": 0.86,
        "nazi": 0.01,
        "confederate": 0.81,
        "supremacist": 0.86,
        "terrorist": 0.01,
        "middle_finger": 0.01,
        "boxes": [
                "x1": 0.686,
                "y1": 0.802,
                "x2": 0.772,
                "y2": 0.923,
                "label": "blooddropcross",
                "prob": 0.86
                "x1": 0.004,
                "y1": 0.023,
                "x2": 0.832,
                "y2": 0.864,
                "label": "confederate",
                "prob": 0.79
                "x1": 0.582,
                "y1": 0.363,
                "x2": 0.631,
                "y2": 0.423,
                "label": "confederate",
                "prob": 0.81
    "media": {
        "id": "med_24DNJfN2BlCGPGQBoZ5dO",
        "uri": ""

Any other needs?

See our full list of Image/Video models for details on other filters and checks you can run on your images and videos. You might also want to check our Text models to moderate text-based content: messages, reviews, comments, usernames...
