Graphic Violence & Gore Detection

This page describes an old version of the Gore Detection model. The latest version is available here.

The Graphic Violence and Gore model helps you determine if an image or video contains horrific imagery such as blood, guts, self-harm, wounds, or human skulls.

Scenes that contain weapons such as firearms and knives but no blood or harm will not be flagged by this model. To flag weapons, use the Weapon Detection model.


When processing the gore value returned by the API, users generally set a threshold. Images or videos with a value above this threshold are flagged as too graphic, while those with a value below it are considered safe.

Thresholds need to be fine-tuned for each individual use case: depending on your tolerance for false positives and false negatives, the threshold should be adapted.

We recommend starting with a threshold of 0.5, meaning that images and videos with a gore value above 0.5 are flagged.
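As a minimal sketch of this thresholding logic (the function name and the 0.5 default are illustrative, not part of the API):

```python
GORE_THRESHOLD = 0.5

def is_too_graphic(gore_prob, threshold=GORE_THRESHOLD):
    # Flag media whose gore value is strictly above the threshold.
    return gore_prob > threshold

print(is_too_graphic(0.01))  # False: well below the threshold
print(is_too_graphic(0.87))  # True: above the threshold
```

Lowering the threshold catches more borderline content at the cost of more false positives; raising it does the opposite.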

Use the model

If you haven't already, create an account to get your own API keys.

Detect violent/gore content

Let's say you want to moderate the following image:

You can either provide a public URL to the image, or upload the raw image binary. Here's how to proceed if you choose to share the image's public URL:

curl -X GET -G '' \
    -d 'models=gore' \
    -d 'api_user={api_user}' \
    -d 'api_secret={api_secret}' \
    --data-urlencode 'url='

# this example uses requests
import requests
import json

params = {
  'url': '',
  'models': 'gore',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
r = requests.get('', params=params)

output = json.loads(r.text)

$params = array(
  'url' => '',
  'models' => 'gore',
  'api_user' => '{api_user}',
  'api_secret' => '{api_secret}',
);

// this example uses cURL
$ch = curl_init(''.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);

$output = json_decode($response, true);

// this example uses axios
const axios = require('axios');

axios.get('', {
  params: {
    'url': '',
    'models': 'gore',
    'api_user': '{api_user}',
    'api_secret': '{api_secret}',
  }
})
.then(function (response) {
  // on success: handle response
})
.catch(function (error) {
  // handle error
  if (error.response) console.log(error.response.data);
  else console.log(error.message);
});

The API will then return a JSON response:

    "status": "success",
    "request": {
        "id": "req_1OjggusalNb2S7MxwLq2h",
        "timestamp": 1509132120.6988,
        "operations": 1
    "gore": {
        "prob": 0.01
    "media": {
        "id": "med_1OjgEqvJtOhqP7sfNe3ga",
        "uri": ""

Any other needs?

See our full list of Image/Video models for details on other filters and checks you can run on your images and videos. You might also want to check our Text models to moderate text-based content: messages, reviews, comments, usernames...
