The Face Analysis model detects faces in an image or video and provides additional insights, including whether each face is real or artificial, its position and facial landmarks, and its obstruction, quality, angle, and filter attributes.
This model can be used to validate profile pictures or assess face recognizability.
These classes categorize faces as either real or artificial. Only human-like faces are considered; animal faces are automatically disregarded.
Coordinates are provided for all detected faces. The additional analyses (obstruction, quality, angle, filters), however, are applied only to real faces, and can be used to define profile-image acceptance criteria.
For all detected faces, coordinates define their position within the frame. These consist of two points forming a bounding box:
For real faces, additional major facial landmarks (eyes, nose tip, mouth corners) are also provided:
If any landmark is obstructed, its position is approximated.
All coordinates are normalized between 0 and 1, relative to the image's width and height, where x increases from left to right and y increases from top to bottom.
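For example, converting a normalized bounding box to pixel positions is a simple multiplication by the image's dimensions. Here is a minimal Python sketch (the face dict mirrors the API response shown further below; width and height stand for your own image's pixel dimensions):
# minimal sketch: convert normalized face coordinates to pixels
def to_pixels(face, width, height):
    # x values scale with the image width, y values with the image height
    return {
        'x1': round(face['x1'] * width),
        'y1': round(face['y1'] * height),
        'x2': round(face['x2'] * width),
        'y2': round(face['y2'] * height),
    }

# e.g. with a 1000x800 image, x1=0.435 maps to pixel column 435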
These classes quantify facial obstruction based on the visibility of key recognizable features (mouth, nose, eyes) rather than the overall percentage of coverage. Obstruction occurs when objects overlay the face or when parts of the face are out of frame.
Preliminary considerations:
The following images illustrate cases where no obstruction is detected:
Obstruction is categorized into six levels of increasing severity. The red-highlighted areas in the images below indicate obstructed regions.
These classes define the objective visual quality of a face, independent of artistic intent. Factors considered include lighting color and intensity, noise, blurring, and distortions caused by filters. A key criterion is facial recognizability, but face rotation and obstructions are not considered.
Faces are categorized into four quality levels, ranked from highest to lowest:
These classes describe the face's position relative to the viewer, based on expected eye visibility. Obstructions or visual quality issues (e.g., a blindfold) are not considered.
These classes indicate whether a face has post-processing overlays, referred to as filters.
If you haven't already, create an account to get your own API keys.
Let's say you want to analyze the following image:
You can either share the image's URL or upload the raw image binary.
Here's how to proceed if you choose to share the image URL:
curl -X GET -G 'https://api.sightengine.com/1.0/check.json' \
-d 'models=face-analysis' \
-d 'api_user={api_user}&api_secret={api_secret}' \
--data-urlencode 'url=https://sightengine.com/assets/img/examples/example7.jpg'
# this example uses requests
import requests
import json
params = {
'url': 'https://sightengine.com/assets/img/examples/example7.jpg',
'models': 'face-analysis',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
r = requests.get('https://api.sightengine.com/1.0/check.json', params=params)
output = json.loads(r.text)
$params = array(
'url' => 'https://sightengine.com/assets/img/examples/example7.jpg',
'models' => 'face-analysis',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/check.json?'.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios
const axios = require('axios');
axios.get('https://api.sightengine.com/1.0/check.json', {
params: {
'url': 'https://sightengine.com/assets/img/examples/example7.jpg',
'models': 'face-analysis',
'api_user': '{api_user}',
'api_secret': '{api_secret}',
}
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
Parameter | Type | Description
--- | --- | ---
url | string | URL of the image to analyze
models | string | comma-separated list of models to apply
api_user | string | your API user id
api_secret | string | your API secret
Here's how to proceed if you choose to upload the raw image:
curl -X POST 'https://api.sightengine.com/1.0/check.json' \
-F 'media=@/path/to/image.jpg' \
-F 'models=face-analysis' \
-F 'api_user={api_user}' \
-F 'api_secret={api_secret}'
# this example uses requests
import requests
import json
params = {
'models': 'face-analysis',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
files = {'media': open('/path/to/image.jpg', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/check.json', files=files, data=params)
output = json.loads(r.text)
$params = array(
'media' => new CurlFile('/path/to/image.jpg'),
'models' => 'face-analysis',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/check.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');
const data = new FormData();
data.append('media', fs.createReadStream('/path/to/image.jpg'));
data.append('models', 'face-analysis');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');
axios({
method: 'post',
url:'https://api.sightengine.com/1.0/check.json',
data: data,
headers: data.getHeaders()
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
Parameter | Type | Description
--- | --- | ---
media | binary | image to analyze
models | string | comma-separated list of models to apply
api_user | string | your API user id
api_secret | string | your API secret
The API will then return a JSON response with the following structure:
{
"status": "success",
"request": {
"id": "req_gcTp4s63IAAni0lFOT7KK",
"timestamp": 1714997478.552115,
"operations": 1
},
"faces": [
{
"x1": 0.435,
"y1": 0.2439,
"x2": 0.5675,
"y2": 0.4991,
"features": {
"left_eye": {
"x": 0.5288,
"y": 0.334
},
"right_eye": {
"x": 0.4713,
"y": 0.3377
},
"nose_tip": {
"x": 0.5,
"y": 0.3677
},
"left_mouth_corner": {
"x": 0.5275,
"y": 0.4184
},
"right_mouth_corner": {
"x": 0.475,
"y": 0.4221
}
},
"attributes": {
"glasses": {
"sunglasses": 0.01,
"no_sunglasses": 0.99
},
"angle": {
"back": 0,
"side": 0.001,
"straight": 0.999
},
"filter": {
"false": 1,
"true": 0
},
"obstruction": {
"complete": 0,
"extreme": 0,
"heavy": 0,
"light": 0.335,
"medium": 0.006,
"none": 0.659
},
"quality": {
"high": 0.413,
"low": 0.001,
"medium": 0.001,
"perfect": 0.585
}
}
}
],
"artifical_faces": [],
"media": {
"id": "med_gcTpqyOZ18IMsiMe4Ar28",
"uri": "https://sightengine.com/img/examples/example7.jpg"
}
}
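Once the response is parsed (the output variable in the examples above), you can apply your own acceptance rules. Below is a minimal sketch of a profile-picture check in Python; the 0.5 thresholds and the exact rule set are illustrative assumptions, not recommendations:
# minimal sketch: accept an image as a profile picture only if it
# contains exactly one real face that is mostly unobstructed,
# unfiltered, front-facing and of good quality
def is_valid_profile_picture(output):
    faces = output.get('faces', [])
    if len(faces) != 1 or output.get('artificial_faces'):
        return False
    attrs = faces[0]['attributes']
    obstruction = attrs['obstruction']
    quality = attrs['quality']
    return (
        obstruction['none'] + obstruction['light'] > 0.5
        and attrs['filter']['false'] > 0.5
        and attrs['angle']['straight'] > 0.5
        and quality['perfect'] + quality['high'] > 0.5
    )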
Here's how to proceed to analyze a short video (less than 1 minute):
curl -X POST 'https://api.sightengine.com/1.0/video/check-sync.json' \
-F 'media=@/path/to/video.mp4' \
-F 'models=face-analysis' \
-F 'api_user={api_user}' \
-F 'api_secret={api_secret}'
# this example uses requests
import requests
import json
params = {
# specify the models you want to apply
'models': 'face-analysis',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
files = {'media': open('/path/to/video.mp4', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/video/check-sync.json', files=files, data=params)
output = json.loads(r.text)
$params = array(
'media' => new CurlFile('/path/to/video.mp4'),
// specify the models you want to apply
'models' => 'face-analysis',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/video/check-sync.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');
const data = new FormData();
data.append('media', fs.createReadStream('/path/to/video.mp4'));
// specify the models you want to apply
data.append('models', 'face-analysis');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');
axios({
method: 'post',
url:'https://api.sightengine.com/1.0/video/check-sync.json',
data: data,
headers: data.getHeaders()
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
Parameter | Type | Description
--- | --- | ---
media | binary | video to analyze
models | string | comma-separated list of models to apply
interval | float | frame interval in seconds, out of 0.5, 1, 2, 3, 4, 5 (optional)
api_user | string | your API user id
api_secret | string | your API secret
Here's how to proceed to analyze a long video. Note that if the video file is very large, you might first need to upload it through the Upload API.
curl -X POST 'https://api.sightengine.com/1.0/video/check.json' \
-F 'media=@/path/to/video.mp4' \
-F 'models=face-analysis' \
-F 'callback_url=https://yourcallback/path' \
-F 'api_user={api_user}' \
-F 'api_secret={api_secret}'
# this example uses requests
import requests
import json
params = {
# specify the models you want to apply
'models': 'face-analysis',
# specify where you want to receive result callbacks
'callback_url': 'https://yourcallback/path',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
files = {'media': open('/path/to/video.mp4', 'rb')}
r = requests.post('https://api.sightengine.com/1.0/video/check.json', files=files, data=params)
output = json.loads(r.text)
$params = array(
'media' => new CurlFile('/path/to/video.mp4'),
// specify the models you want to apply
'models' => 'face-analysis',
// specify where you want to receive result callbacks
'callback_url' => 'https://yourcallback/path',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/video/check.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');
const data = new FormData();
data.append('media', fs.createReadStream('/path/to/video.mp4'));
// specify the models you want to apply
data.append('models', 'face-analysis');
// specify where you want to receive result callbacks
data.append('callback_url', 'https://yourcallback/path');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');
axios({
method: 'post',
url:'https://api.sightengine.com/1.0/video/check.json',
data: data,
headers: data.getHeaders()
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
See request parameter description
Parameter | Type | Description
--- | --- | ---
media | binary | video to analyze
callback_url | string | callback URL to receive moderation updates (optional)
models | string | comma-separated list of models to apply
interval | float | frame interval in seconds, out of 0.5, 1, 2, 3, 4, 5 (optional)
api_user | string | your API user id
api_secret | string | your API secret
Here's how to proceed to analyze a live-stream:
curl -X GET -G 'https://api.sightengine.com/1.0/video/check.json' \
--data-urlencode 'stream_url=https://domain.tld/path/video.m3u8' \
-d 'models=face-analysis' \
-d 'callback_url=https://your.callback.url/path' \
-d 'api_user={api_user}' \
-d 'api_secret={api_secret}'
# if you haven't already, install the SDK with 'pip install sightengine'
from sightengine.client import SightengineClient
client = SightengineClient('{api_user}','{api_secret}')
output = client.check('face-analysis').video('https://domain.tld/path/video.m3u8', 'https://your.callback.url/path')
// if you haven't already, install the SDK with 'composer require sightengine/client-php'
use \Sightengine\SightengineClient;
$client = new SightengineClient('{api_user}','{api_secret}');
$output = $client->check(['face-analysis'])->video('https://domain.tld/path/video.m3u8', 'https://your.callback.url/path');
// if you haven't already, install the SDK with 'npm install sightengine --save'
var sightengine = require('sightengine')('{api_user}', '{api_secret}');
sightengine.check(['face-analysis']).video('https://domain.tld/path/video.m3u8', 'https://your.callback.url/path').then(function(result) {
// The API response (result)
}).catch(function(err) {
// Handle error
});
See request parameter description
Parameter | Type | Description
--- | --- | ---
stream_url | string | URL of the video stream
callback_url | string | callback URL to receive moderation updates (optional)
models | string | comma-separated list of models to apply
interval | float | frame interval in seconds, out of 0.5, 1, 2, 3, 4, 5 (optional)
api_user | string | your API user id
api_secret | string | your API secret
The moderation result will be provided either directly in the request response (for sync calls) or through the callback URL you provided (for async calls).
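If you opt for async calls, you need an HTTP endpoint to receive these callbacks. Here is a minimal receiver sketch using Flask (the framework choice and the route are assumptions; the callback body is assumed to be JSON with the structure shown below):
# minimal callback receiver sketch; any HTTP server works
from flask import Flask, request

app = Flask(__name__)

@app.route('/path', methods=['POST'])
def moderation_callback():
    result = request.get_json(force=True)
    # persist or act on the moderation result here
    print(result.get('status'), result.get('media', {}).get('id'))
    return '', 200

if __name__ == '__main__':
    app.run(port=8080)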
Here is the structure of the JSON response with moderation results for each analyzed frame under the data.frames array:
{
"status": "success",
"request": {
"id": "req_gmgHNy8oP6nvXYaJVLq9n",
"timestamp": 1717159864.348989,
"operations": 21
},
"data": {
"frames": [
{
"info": {
"id": "med_gmgHcUOwe41rWmqwPhVNU_1",
"position": 0
},
"faces": [
{
"x1": 0.435,
"y1": 0.2439,
"x2": 0.5675,
"y2": 0.4991,
"features": {
"left_eye": {
"x": 0.5288,
"y": 0.334
},
"right_eye": {
"x": 0.4713,
"y": 0.3377
},
"nose_tip": {
"x": 0.5,
"y": 0.3677
},
"left_mouth_corner": {
"x": 0.5275,
"y": 0.4184
},
"right_mouth_corner": {
"x": 0.475,
"y": 0.4221
}
},
"attributes": {
"glasses": {
"sunglasses": 0.01,
"no_sunglasses": 0.99
},
"angle": {
"back": 0,
"side": 0.001,
"straight": 0.999
},
"filter": {
"false": 1,
"true": 0
},
"obstruction": {
"complete": 0,
"extreme": 0,
"heavy": 0,
"light": 0.335,
"medium": 0.006,
"none": 0.659
},
"quality": {
"high": 0.413,
"low": 0.001,
"medium": 0.001,
"perfect": 0.585
}
}
}
],
"artifical_faces": [],
},
...
]
},
"media": {
"id": "med_gmgHcUOwe41rWmqwPhVNU",
"uri": "yourfile.mp4"
}
}
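Since video results are reported frame by frame, you will typically aggregate them before acting on them. Here is a minimal Python sketch (field names follow the response structure above):
# minimal sketch: summarize the per-frame results of a video response
def summarize_frames(output):
    summary = []
    for frame in output['data']['frames']:
        faces = frame.get('faces', [])
        entry = {
            'position': frame['info']['position'],
            'face_count': len(faces),
        }
        if faces:
            # highest-scoring quality class of the first detected face
            quality = faces[0]['attributes']['quality']
            entry['quality_class'] = max(quality, key=quality.get)
        summary.append(entry)
    return summary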
See our full list of Image/Video models for details on other filters and checks you can run on your images and videos. You might also want to check our Text models to moderate text-based content: messages, reviews, comments, usernames...