
Weapon Alcohol Drug Detection

Overview

The Weapon Alcohol Drug detection model helps you determine whether an image or video contains displays of weapons, alcoholic beverages, or recreational or medical drugs.


Weapon Detection

The API will return a weapon value between 0 and 1. This value reflects the API's confidence. Media with a weapon value closer to 1 have a higher probability of containing a weapon while media with a value closer to 0 have a lower probability.

Example images with weapon values of 0.005, 0.415, 0.684, and 0.955.

Detected elements

The API has been designed to detect displays of wearable or personal weapons/arms typically found in user-submitted photos. These include:

  • Rifles
  • Handguns, Revolvers, Pistols
  • Portable machine guns
  • Tools that are potential weapons or convey violence: some types of daggers, scabbards, chainsaws, cleavers, hatchets, axes
Example images: rifle, pistol, portable machine gun, axe.

The detection works even for images with modified color schemes (black-and-white, changed saturation, color filters...).

The detection is also robust to varying levels of zoom, blur, and rotation.

The API has not been designed to detect the following concepts:

  • Military equipment such as planes, armored vehicles, bombs
  • Small knives and non-threatening cutlery
  • Razor blades
  • Toys (except for toys intended to look like real weapons)

Use cases

This model is usually used to moderate user-submitted images or videos and prevent users from posting or displaying unwanted content. Specific use-cases include:

  • Detecting suicidal poses
  • Detecting any glorification of weapons
  • Detecting threatening or violent content
  • Detecting depictions of attacks or armed groups

You might also want to detect scenes containing blood, harm or horrific imagery. To do so, use the Gore and Graphic Violence Detection Model.

Limitations

  • Weapons that are smaller than 10% of the image dimensions, are very blurry or have a low contrast with the background may go undetected
  • Weapons that are not visible to the human eye (for instance mostly hidden, or too small) might not be detected
  • The API has not been optimized for illustrations. Using the API to detect weapons in logos or drawings is not recommended

Recommended thresholds

When processing the weapon value returned by the API, users generally set a threshold. Images or videos with a value above this threshold will be flagged as potentially containing a weapon while images or videos with a value below will be considered to be safe.

Thresholds need to be fine-tuned for each individual use-case. Depending on your tolerance to false positives or false negatives, the threshold should be adapted.

  • If you want to reduce false negatives, you may want to start with a threshold of 0.4 (meaning that images with a weapon value above 0.4 would be flagged)
  • If you want to reduce false positives, you may want to start with a threshold of 0.8
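As a sketch, the thresholding logic above can be expressed in a few lines of Python (the function name is illustrative; the values are taken from the example images earlier in this section):

```python
# Flag media based on the weapon value returned by the API.
# A threshold of 0.4 reduces false negatives; 0.8 reduces false positives.

def is_flagged(weapon_value: float, threshold: float = 0.4) -> bool:
    """Return True if the media should be flagged as potentially containing a weapon."""
    return weapon_value >= threshold

# Weapon values from the example images above
samples = [0.005, 0.415, 0.684, 0.955]
print([is_flagged(v) for v in samples])       # → [False, True, True, True]
print([is_flagged(v, 0.8) for v in samples])  # → [False, False, False, True]
```

The same pattern applies unchanged to the alcohol and drugs values described below; only the threshold you pick per model changes.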

Alcohol Detection

The API will return an alcohol value between 0 and 1. This value reflects the API's confidence: images with an alcohol value closer to 1 are more likely to contain alcohol, while images with a value closer to 0 are less likely to.

Example images with alcohol values of 0.1, 0.137, 0.518, and 0.85.

Detected elements

The API detects the following concepts:

  • Wine and Champagne, both in glasses and in bottles
  • Beer, both in glasses and in bottles
  • Cocktails, including cocktail shakers
Example images: wine, beer, cocktail.

The detection works even for images with modified color schemes (black-and-white, changed saturation, color filters...).

The detection is also robust to varying levels of zoom, blur, and rotation.

The API has *not* been designed to detect drunk people or other effects of alcohol consumption.

Use cases

This model is usually used to moderate user-submitted images or videos and prevent users from posting or displaying unwanted content. Specific use-cases include:

  • Respecting legislation that prohibits displays of alcoholic beverages
  • Detecting attempts by users to advertise for alcoholic beverages
  • Detecting glorifications of alcohol
  • Filtering messages encouraging users to consume alcohol
  • Protecting your brand
  • Protecting your advertisers or advertising networks (ads are typically not allowed to be displayed alongside depictions of alcoholic beverages)

Limitations

  • Beverages that are smaller than 5% of the image dimensions, are very blurry or have a low contrast with the background may go undetected
  • Beverages that are not visible to the human eye (for instance mostly hidden, or too small) might not be detected
  • There may be some confusion when non-alcoholic drinks look like alcoholic drinks (for instance a non-alcoholic beer, or grape juice in a wine glass) or when alcoholic drinks look like non-alcoholic drinks (for instance a transparent alcohol such as vodka in a standard water glass)
  • The API has not been optimized for illustrations. Using the API to detect alcoholic beverages in logos or drawings is not recommended

Recommended threshold

When processing the alcohol value returned by the API, users generally set a threshold. Images or videos with a value above this threshold will be flagged as potentially containing alcoholic beverages while images or videos with a value below will be considered to be safe.

Thresholds need to be fine-tuned for each individual use-case. Depending on your tolerance to false positives or false negatives, the threshold should be adapted.

  • If you want to reduce false negatives, you may want to start with a threshold of 0.4 (meaning that images with an alcohol value above 0.4 would be flagged)
  • If you want to reduce false positives, you may want to start with a threshold of 0.8

Drug Detection

The API will return a drugs value between 0 and 1. Images with a value closer to 1 are likely to contain displays of drugs, while images with a value closer to 0 are considered to be safe.

Example images with drugs values of 0.01, 0.4, 0.7, and 0.9.

Concepts

The API detects the following concepts:

Cannabis leaf

Symbol used to represent marijuana / cannabis or more generally drugs.

Dried cannabis

Dried buds, cannabis crystals, kief.

Joints

Cannabis cigarette, also known as a spliff or doobie. The model is designed to differentiate regular tobacco cigarettes from cannabis joints.

Bongs & glass pipes

Devices used to smoke cannabis and other herbal substances.

Syringes

Medical syringes, usually made of plastic, that might be used for recreational purposes such as drug injection.

Pills and pill bottles

Pills, tablets, and other small doses of medicine.

Snorting

Self-administration of recreational drugs such as ketamine or cocaine.

The detection works even for images with modified color schemes (black-and-white, changed saturation, color filters...).

The detection is also robust to varying levels of zoom, blur, and rotation.

Use cases

  • Detecting attempts to advertise drugs
  • Protecting your brand

Limitations

  • Drug-related items that are smaller than 5% of the image dimensions, are very blurry or have a low contrast with the background may go undetected
  • Items that are not visible to the human eye (for instance mostly hidden, or too small) might not be detected

Recommended threshold

When processing the drugs value returned by the API, users generally set a threshold. Images or videos with a value above this threshold will be flagged as potentially containing drugs while images or videos with a value below will be considered to be safe.

Thresholds need to be fine-tuned for each individual use-case. Depending on your tolerance to false positives or false negatives, the threshold should be adapted.

  • If you want to reduce false negatives, you may want to start with a threshold of 0.4 (meaning that images with a drugs value above 0.4 would be flagged)
  • If you want to reduce false positives, you may want to start with a threshold of 0.8

Use the model

If you haven't already, create an account to get your own API keys.

Detect offensive content

Let's say you want to moderate the following image:

You can either share the image's public URL or upload the raw image binary. Here's how to proceed if you choose to share the image's public URL:


curl -X GET -G 'https://api.sightengine.com/1.0/check.json' \
    -d 'models=wad' \
    -d 'api_user={api_user}&api_secret={api_secret}' \
    --data-urlencode 'url=https://sightengine.com/assets/img/examples/example-tt-1000.jpg'


# this example uses requests
import requests
import json

params = {
  'url': 'https://sightengine.com/assets/img/examples/example-tt-1000.jpg',
  'models': 'wad',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
r = requests.get('https://api.sightengine.com/1.0/check.json', params=params)

output = json.loads(r.text)


$params = array(
  'url' =>  'https://sightengine.com/assets/img/examples/example-tt-1000.jpg',
  'models' => 'wad',
  'api_user' => '{api_user}',
  'api_secret' => '{api_secret}',
);

// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/check.json?'.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

$output = json_decode($response, true);


// this example uses axios
const axios = require('axios');

axios.get('https://api.sightengine.com/1.0/check.json', {
  params: {
    'url': 'https://sightengine.com/assets/img/examples/example-tt-1000.jpg',
    'models': 'wad',
    'api_user': '{api_user}',
    'api_secret': '{api_secret}',
  }
})
.then(function (response) {
  // on success: handle response
  console.log(response.data);
})
.catch(function (error) {
  // handle error
  if (error.response) console.log(error.response.data);
  else console.log(error.message);
});

The API will then return a JSON response:

{
    "status": "success",
    "request": {
        "id": "req_1OjggusalNb2S7MxwLq2h",
        "timestamp": 1509132120.6988,
        "operations": 1
    },
    "weapon": 0.773,
    "alcohol": 0.001,
    "drugs": 0,
    "media": {
        "id": "med_1OjgEqvJtOhqP7sfNe3ga",
        "uri": "https://sightengine.com/assets/img/examples/example-tt-1000.jpg"
    }
}
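Once parsed, the response can be checked against per-model thresholds. A minimal Python sketch (the function name is illustrative; the thresholds are the 0.4 starting points suggested in the "Recommended thresholds" sections above):

```python
# Apply per-model thresholds to a parsed API response.
THRESHOLDS = {'weapon': 0.4, 'alcohol': 0.4, 'drugs': 0.4}

def flagged_models(output, thresholds=THRESHOLDS):
    """Return the models whose value meets or exceeds their threshold."""
    return [m for m, t in thresholds.items() if output.get(m, 0) >= t]

# Values from the sample response above
response = {'status': 'success', 'weapon': 0.773, 'alcohol': 0.001, 'drugs': 0}
print(flagged_models(response))  # → ['weapon']
```

Missing model keys default to 0, so the same check works for responses that only include a subset of models.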

Any other needs?

See our full list of Image/Video models for details on other filters and checks you can run on your images and videos. You might also want to check our Text models to moderate text-based content: messages, reviews, comments, usernames...
