Offensive Content Detection
Deprecated: the offensive model. Automatically detect images and videos depicting hateful content, such as Nazi symbols, offensive gestures, and supremacist imagery.
Overview
The offensive model determines whether an image or a video contains offensive or hateful content. In addition to detecting such content, the model returns details on the position and type of each offensive element found.
The offensive content detected falls broadly into the following categories:
- Nazi era symbols
- Confederate symbols
- Supremacist symbols
- Offensive gestures
- Terrorist symbols
See the Concepts section below for the full list of detected concepts.
Concepts
This section lists the concepts detected by the Offensive Detection Model.
Nazi-era symbols
| Concept | Description | Class |
| Nazi hakenkreuz | Nazi symbol that can be found on objects from the Nazi era, costumes, drawings, cartoons, tattoos... | nazi |
| Nazi flags | Flags from the Nazi era, such as the Nazi party flag, the Reichskriegsflagge, the Reichsdienstflagge... | nazi |
| Iron cross | Military decoration of Prussia, later used in Nazi Germany. Present on Nazi flags, costumes, medals and other objects. | nazi |
| SS bolts | Schutzstaffel sig runes. They symbolised victory and are present on Nazi-era objects such as helmets, medals, costumes... | nazi |
| Sonnenrad | Sonnenrad or sunwheel. Ancient European symbol appropriated by the Nazis. | nazi |
| Sturmabteilung | Emblem of Hitler's SA troops. Some neo-Nazis and white supremacists currently use this symbol. | nazi |
Supremacist symbols
| Concept | Description | Class |
| KKK | Ku Klux Klan. | supremacist |
| Burning cross | Cross burning is a practice associated with the Ku Klux Klan. | supremacist |
| Blood drop cross | Primary symbol associated with Ku Klux Klan groups. | supremacist |
| Celtic cross | There are many types of Celtic crosses, with different meanings. The one depicted is one of the most commonly used white supremacist symbols. | supremacist |
| Valknut | Valknut or valknot. Old Norse symbol appropriated by some white supremacists. | supremacist |
| Odal rune | Odal rune, also known as othala rune. Originally part of the runic alphabet, later appropriated by the Nazis as a symbol of Aryanism. | supremacist |
| Wolfsangel | Old runic symbol appropriated during the Nazi era. It has become a symbol of choice for neo-Nazis. | supremacist |
Confederate symbols
| Concept | Description | Class |
| Confederate flag | Flag of the Confederate States of America. | confederate |
Gestures
| Concept | Description | Class |
| Middle finger | Obscene hand gesture in Western culture. | middle_finger |
Other symbols
| Concept | Description | Class |
| ISIS flag | IS / ISIL / ISIS / Daesh version of the Black Standard. | terrorist |
Principles
Offensive Detection does not use any image metadata to determine whether an image contains offensive content. The file extension, the metadata and the file name do not influence the result. The classification is based solely on the pixel content of the image or video.
On most sites and apps, images containing offensive content are systematically removed.
Offensive Detection works with black-and-white images as well as color images and images with filters.
Use-cases
- Block or detect users who submit images or videos containing offensive content
- Hide, blur or filter hateful symbols and references in images and videos
- Protect your users from unwanted content
Limitations
- Elements smaller than 5% of the width or height of the image may not be detected.
Recommended threshold
When processing the "offensive" value returned by the API, users generally set a threshold. Images or videos with a value above this threshold will be flagged as potentially containing offensive content while images or videos with a value below will be considered to be safe.
Thresholds need to be fine-tuned for each individual use-case. Depending on your tolerance to false positives or false negatives, the threshold should be adapted.
If you want to reduce false negatives, you may want to start with a threshold of 0.5 (meaning that images with an "alcohol" value above 0.5 would be flagged)
If you want to reduce false positives, you may want to start with a threshold of 0.8
If there are some concepts that you want to allow, or if you want to set different thresholds for different concepts, this is also something you can do, as the API will return a score for each object found.
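As an illustration, here is a minimal sketch in Python of how such thresholds could be applied. It assumes output is the parsed JSON response shown in the "API response" section below; the threshold values are examples, not recommendations.
# Minimal sketch: applying a global and per-class threshold to the model output.
# Assumes `output` is the parsed JSON response shown in the "API response" section below.
def is_offensive(output, default_threshold=0.5, per_class_thresholds=None):
    scores = output.get('offensive', {})
    if scores.get('prob', 0) >= default_threshold:
        return True
    # optional per-class thresholds, e.g. to be stricter on specific concepts
    for concept, threshold in (per_class_thresholds or {}).items():
        if scores.get(concept, 0) >= threshold:
            return True
    return False

# example: flag anything above 0.5 overall, and be stricter on the "nazi" class
flagged = is_offensive(output, default_threshold=0.5, per_class_thresholds={'nazi': 0.3})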
Details
Along with the offensive probability, the model returns a list of the offensive elements found in the image (if any), together with their position and type.
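To give an idea of how the returned elements could be consumed, here is a minimal sketch in Python. The exact fields of each entry in the boxes array are an assumption here (relative coordinates x1, y1, x2, y2 plus a concept label); refer to the API reference for the authoritative schema.
# Minimal sketch: converting detected elements to pixel coordinates.
# The box fields used below (x1, y1, x2, y2 as relative coordinates and
# a concept label) are an assumption; check the API reference.
def offensive_elements(output, image_width, image_height):
    elements = []
    for box in output.get('offensive', {}).get('boxes', []):
        elements.append({
            'label': box.get('label'),                    # e.g. 'nazi'
            'left': int(box.get('x1', 0) * image_width),
            'top': int(box.get('y1', 0) * image_height),
            'right': int(box.get('x2', 0) * image_width),
            'bottom': int(box.get('y2', 0) * image_height),
        })
    return elements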
Use the model (images)
If you haven't already, create an account to get your own API keys.
Detect hate & offensive content
Let's say you want to moderate the following image: https://sightengine.com/assets/img/examples/example-fac-1000.jpg
You can either share a URL to the image, or upload the image file.
Option 1: Send image URL
Here's how to proceed if you choose to share the image URL:
curl -X GET -G 'https://api.sightengine.com/1.0/check.json' \
-d 'models=offensive' \
-d 'api_user={api_user}&api_secret={api_secret}' \
--data-urlencode 'url=https://sightengine.com/assets/img/examples/example-fac-1000.jpg'
# this example uses requests
import requests
import json
params = {
'url': 'https://sightengine.com/assets/img/examples/example-fac-1000.jpg',
'models': 'offensive',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
r = requests.get('https://api.sightengine.com/1.0/check.json', params=params)
output = json.loads(r.text)
$params = array(
'url' => 'https://sightengine.com/assets/img/examples/example-fac-1000.jpg',
'models' => 'offensive',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/check.json?'.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios
const axios = require('axios');
axios.get('https://api.sightengine.com/1.0/check.json', {
params: {
'url': 'https://sightengine.com/assets/img/examples/example-fac-1000.jpg',
'models': 'offensive',
'api_user': '{api_user}',
'api_secret': '{api_secret}',
}
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
| Parameter | Type | Description |
| url | string | URL of the image to analyze |
| models | string | comma-separated list of models to apply |
| api_user | string | your API user id |
| api_secret | string | your API secret |
Option 2: Send image file
Here's how to proceed if you choose to upload the image file:
curl -X POST 'https://api.sightengine.com/1.0/check.json' \
-F 'media=@/path/to/image.jpg' \
-F 'models=offensive' \
-F 'api_user={api_user}' \
-F 'api_secret={api_secret}'
# this example uses requests
import requests
import json
params = {
'models': 'offensive',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
files = {'media': open('/path/to/image.jpg', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/check.json', files=files, data=params)
output = json.loads(r.text)
$params = array(
'media' => new CurlFile('/path/to/image.jpg'),
'models' => 'offensive',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/check.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');
const data = new FormData();
data.append('media', fs.createReadStream('/path/to/image.jpg'));
data.append('models', 'offensive');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');
axios({
method: 'post',
url:'https://api.sightengine.com/1.0/check.json',
data: data,
headers: data.getHeaders()
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
| Parameter | Type | Description |
| media | file | image to analyze |
| models | string | comma-separated list of models to apply |
| api_user | string | your API user id |
| api_secret | string | your API secret |
API response
The API will then return a JSON response with the following structure:
{
"status": "success",
"request": {
"id": "req_24DNGegGf1Mo0n4rpaRwZ",
"timestamp": 1512898748.652,
"operations": 1
},
"offensive": {
"prob": 0.01,
"nazi": 0.01,
"confederate": 0.01,
"supremacist": 0.01,
"terrorist": 0.01,
"middle_finger": 0.01,
"boxes": []
},
"media": {
"id": "med_24DNJfN2BlCGPGQBoZ5dO",
"uri": "https://sightengine.com/assets/img/examples/example-fac-1000.jpg"
}
}
Successful Response
Status code: 200, Content-Type: application/json
| Field | Type | Description |
| status | string | status of the request, either "success" or "failure" |
| request | object | information about the processed request |
| request.id | string | unique identifier of the request |
| request.timestamp | float | timestamp of the request in Unix time |
| request.operations | integer | number of operations consumed by the request |
| offensive | object | results for the model |
| media | object | information about the media analyzed |
| media.id | string | unique identifier of the media |
| media.uri | string | URI of the media analyzed: either the URL or the filename |
Error
Status codes: 4xx and 5xx. See how error responses are structured.
Use the model (videos)
Detecting Hate & Offensive content in videos
Option 1: Short video
Here's how to proceed to analyze a short video (less than 1 minute):
curl -X POST 'https://api.sightengine.com/1.0/video/check-sync.json' \
-F 'media=@/path/to/video.mp4' \
-F 'models=offensive' \
-F 'api_user={api_user}' \
-F 'api_secret={api_secret}'
# this example uses requests
import requests
import json
params = {
# specify the models you want to apply
'models': 'offensive',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
files = {'media': open('/path/to/video.mp4', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/video/check-sync.json', files=files, data=params)
output = json.loads(r.text)
$params = array(
'media' => new CurlFile('/path/to/video.mp4'),
// specify the models you want to apply
'models' => 'offensive',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/video/check-sync.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');
const data = new FormData();
data.append('media', fs.createReadStream('/path/to/video.mp4'));
// specify the models you want to apply
data.append('models', 'offensive');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');
axios({
method: 'post',
url:'https://api.sightengine.com/1.0/video/check-sync.json',
data: data,
headers: data.getHeaders()
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
| Parameter | Type | Description |
| media | file | video to analyze |
| models | string | comma-separated list of models to apply |
| interval | float | frame interval in seconds, one of 0.5, 1, 2, 3, 4, 5 (optional) |
| api_user | string | your API user id |
| api_secret | string | your API secret |
Option 2: Long video
Here's how to proceed to analyze a long video. Note that if the video file is very large, you might first need to upload it through the Upload API.
curl -X POST 'https://api.sightengine.com/1.0/video/check.json' \
-F 'media=@/path/to/video.mp4' \
-F 'models=offensive' \
-F 'callback_url=https://yourcallback/path' \
-F 'api_user={api_user}' \
-F 'api_secret={api_secret}'
# this example uses requests
import requests
import json
params = {
# specify the models you want to apply
'models': 'offensive',
# specify where you want to receive result callbacks
'callback_url': 'https://yourcallback/path',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
files = {'media': open('/path/to/video.mp4', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/video/check.json', files=files, data=params)
output = json.loads(r.text)
$params = array(
'media' => new CurlFile('/path/to/video.mp4'),
// specify the models you want to apply
'models' => 'offensive',
// specify where you want to receive result callbacks
'callback_url' => 'https://yourcallback/path',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/video/check.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');
const data = new FormData();
data.append('media', fs.createReadStream('/path/to/video.mp4'));
// specify the models you want to apply
data.append('models', 'offensive');
// specify where you want to receive result callbacks
data.append('callback_url', 'https://yourcallback/path');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');
axios({
method: 'post',
url:'https://api.sightengine.com/1.0/video/check.json',
data: data,
headers: data.getHeaders()
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
| Parameter | Type | Description |
| media | file | video to analyze |
| callback_url | string | callback URL to receive moderation updates (optional) |
| models | string | comma-separated list of models to apply |
| interval | float | frame interval in seconds, one of 0.5, 1, 2, 3, 4, 5 (optional) |
| api_user | string | your API user id |
| api_secret | string | your API secret |
Option 3: Live-stream
Here's how to proceed to analyze a live-stream:
curl -X GET -G 'https://api.sightengine.com/1.0/video/check.json' \
--data-urlencode 'stream_url=https://domain.tld/path/video.m3u8' \
-d 'models=offensive' \
-d 'callback_url=https://your.callback.url/path' \
-d 'api_user={api_user}' \
-d 'api_secret={api_secret}'
# if you haven't already, install the SDK with 'pip install sightengine'
from sightengine.client import SightengineClient
client = SightengineClient('{api_user}','{api_secret}')
output = client.check('offensive').video('https://domain.tld/path/video.m3u8', 'https://your.callback.url/path')
// if you haven't already, install the SDK with 'composer require sightengine/client-php'
use \Sightengine\SightengineClient;
$client = new SightengineClient('{api_user}','{api_secret}');
$output = $client->check(['offensive'])->video('https://domain.tld/path/video.m3u8', 'https://your.callback.url/path');
// if you haven't already, install the SDK with 'npm install sightengine --save'
var sightengine = require('sightengine')('{api_user}', '{api_secret}');
sightengine.check(['offensive']).video('https://domain.tld/path/video.m3u8', 'https://your.callback.url/path').then(function(result) {
// The API response (result)
}).catch(function(err) {
// Handle error
});
See request parameter description
| Parameter | Type | Description |
| stream_url | string | URL of the video stream |
| callback_url | string | callback URL to receive moderation updates (optional) |
| models | string | comma-separated list of models to apply |
| interval | float | frame interval in seconds, one of 0.5, 1, 2, 3, 4, 5 (optional) |
| api_user | string | your API user id |
| api_secret | string | your API secret |
Moderation result
The moderation result will be provided either directly in the request response (for synchronous calls) or through the callback URL you provided (for asynchronous calls).
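For asynchronous calls, you will typically expose an HTTPS endpoint matching the callback_url you submitted. Below is a minimal, hypothetical sketch of such a receiver using Flask; the endpoint path is only an example, and the payload follows the structure shown below.
# Hypothetical callback receiver sketch (using Flask).
# The '/moderation-callback' path is an example and must match the
# callback_url submitted with the video or stream.
from flask import Flask, request

app = Flask(__name__)

@app.route('/moderation-callback', methods=['POST'])
def moderation_callback():
    payload = request.get_json(force=True)
    # handle the moderation result, e.g. store it or flag the corresponding media
    print(payload.get('status'), payload.get('media', {}).get('id'))
    return '', 200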
Here is the structure of the JSON response with moderation results for each analyzed frame under the data.frames array:
{
"status": "success",
"request": {
"id": "req_gmgHNy8oP6nvXYaJVLq9n",
"timestamp": 1717159864.348989,
"operations": 21
},
"data": {
"frames": [
{
"info": {
"id": "med_gmgHcUOwe41rWmqwPhVNU_1",
"position": 0
},
"offensive": {
"prob": 0.01,
"nazi": 0.01,
"confederate": 0.01,
"supremacist": 0.01,
"terrorist": 0.01,
"middle_finger": 0.01,
"boxes": []
}
},
...
]
},
"media": {
"id": "med_gmgHcUOwe41rWmqwPhVNU",
"uri": "yourfile.mp4"
}
}
You can use the classes under the offensive object to detect hateful or offensive content in the video.
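As an illustration, here is a minimal sketch in Python, assuming output is the parsed structure shown above, that collects the frames whose offensive probability exceeds a threshold:
# Minimal sketch: collecting frames above a threshold.
# Assumes `output` is the parsed JSON structure shown above; the 0.5
# threshold is an example, not a recommendation.
def flagged_frames(output, threshold=0.5):
    frames = output.get('data', {}).get('frames', [])
    return [
        frame['info']['position']   # frame position as reported by the API
        for frame in frames
        if frame.get('offensive', {}).get('prob', 0) >= threshold
    ]

print(flagged_frames(output))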