Docs / AI Content Detection / AI-Generated Video Detection

AI-Generated Video Detection

Model: genai

Detect whether a video was generated with an AI model such as Sora, Veo, Runway, Pika, and more.

This page describes AI video detection. AI image detection is also available.

Overview

The AI-Generated Video Detection Model helps you determine whether a video was entirely generated by an AI model or is a real video. The model was trained on millions of AI-generated and human-created videos spanning a wide range of content: real-life footage, art, cartoons, and more.

The Model works by analyzing the visual (pixel) content of the video. No metadata is used in the analysis, so tampering with metadata such as EXIF data has no effect on the scoring.

The Model was trained to detect videos created by the main generators currently in use, including Veo, Sora, Runway, Pika, Midjourney, and Kling. Additional generators will be added over time as they become available.

Use cases

  • Tag AI-generated videos as such, to limit the spread of misinformation and fake news
  • Implement stricter moderation rules on AI-generated videos
  • Detect potential fraud with fake video verifications
  • Limit AI-generated spam
  • Enact bans on AI-generated videos

Related model

To detect AI-generated images, use the AI-Image Detection Model.

Generator-specific information

Sightengine's AI detection models compute per-generator confidence scores alongside a global AI probability score. For every image or video analyzed, the API response includes individual scores for each supported generator, giving you a complete fingerprint of the content.
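For instance, if the per-generator scores are exposed as a mapping of generator names to confidence values, you could pick the most likely generator as sketched below. Note that the "generators" field name and layout here are illustrative assumptions, not the documented schema; check the actual API response for the exact structure.

```python
# Sketch: pick the most likely generator from per-generator scores.
# NOTE: the "generators" field name and layout below are assumptions for
# illustration only; consult the API response for the exact schema.

def top_generator(frame_type: dict) -> tuple:
    """Return the (generator, score) pair with the highest confidence."""
    generators = frame_type.get("generators", {})
    if not generators:
        return ("unknown", 0.0)
    name = max(generators, key=generators.get)
    return (name, generators[name])

frame_type = {
    "ai_generated": 0.99,
    "generators": {"sora": 0.91, "veo": 0.04, "runway": 0.02},
}
print(top_generator(frame_type))  # ('sora', 0.91)
```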

The list of supported generators spans both images and videos, covering major commercial tools, open-source models, and older GAN-based architectures:

Video generators

Generator         Creator          Example versions detected
Higgsfield        Higgsfield AI    Higgsfield 1.0, Higgsfield Soul Cinema...
Kling             Kuaishou         Kling 1.0, Kling 1.5, ...
Midjourney        Midjourney       Midjourney Video, ...
Pika              Pika             Pika 1.0, Pika 1.5, ...
Runway            Runway           Gen-2, Gen-3, Gen-4...
Seedance          ByteDance        Seedance 1.5, Seedance 2.0, ...
Sora              OpenAI           Sora, Sora 2, ...
Veo               Google           Veo 1, Veo 2, Veo 3, ...
Wan               Alibaba          Wan 2.1, Wan 2.2, ...
Other generators  Various          Demamba, HotShot, LaVie, Hunyuan, Ray...

New generators are added continuously as they appear in the wild.

Image generators

Generator             Creator            Example versions detected
DALL-E                OpenAI             DALL-E 2, DALL-E 3, ...
Firefly               Adobe              Firefly 2, Firefly 3, ...
Flux                  Black Forest Labs  Flux.1 Dev, Flux.1 Schnell, Flux Pro, ...
GPT image generation  OpenAI             GPT-4o, GPT-1.5 image...
Grok Imagine          xAI                Imagine, Imagine Pro...
Higgsfield            Higgsfield AI      Higgsfield Soul...
Ideogram              Ideogram           Ideogram 2.0, Ideogram 3.0, ...
Imagen                Google             Imagen 2, Imagen 3, ...
Kling                 Kuaishou           Kling 2.0, Kling 3.0, ...
Midjourney            Midjourney         Midjourney v5, v6, v7, ...
Nano Banana           Google             Nano Banana 2, Nano Banana Pro, ...
Qwen                  Alibaba            Qwen2-VL, ...
Recraft               Recraft            Recraft V3, ...
Reve                  Reve               Reve Image 1.0, ...
Seedream              ByteDance          Seedream 2.0, Seedream 3.0, ...
Stable Diffusion      Stability AI       SD 1.5, SD 2.1, SDXL, SD3, ...
StyleGAN              NVIDIA             StyleGAN2, StyleGAN3, ...
Z-image               Alibaba            Z-image, Z-image Turbo, ...
Other generators      Various            Generators with a smaller audience

New generators are added continuously as they appear in the wild.

Use the model

If you haven't already, create an account to get your own API keys.

Detect if a video was AI-generated

Option 1: Short video

Here's how to analyze a short video (under 1 minute):


curl -X POST 'https://api.sightengine.com/1.0/video/check-sync.json' \
  -F 'media=@/path/to/video.mp4' \
  -F 'models=genai' \
  -F 'api_user={api_user}' \
  -F 'api_secret={api_secret}'


# this example uses requests
import requests

params = {
  # specify the models you want to apply
  'models': 'genai',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
# open the file in a context manager so it is closed after the request
with open('/path/to/video.mp4', 'rb') as media:
  files = {'media': media}
  r = requests.post('https://api.sightengine.com/1.0/video/check-sync.json', files=files, data=params)

output = r.json()


$params = array(
  'media' => new CurlFile('/path/to/video.mp4'),
  // specify the models you want to apply
  'models' => 'genai',
  'api_user' => '{api_user}',
  'api_secret' => '{api_secret}',
);

// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/video/check-sync.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);

$output = json_decode($response, true);


// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');

const data = new FormData();
data.append('media', fs.createReadStream('/path/to/video.mp4'));
// specify the models you want to apply
data.append('models', 'genai');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');

axios({
  method: 'post',
  url:'https://api.sightengine.com/1.0/video/check-sync.json',
  data: data,
  headers: data.getHeaders()
})
.then(function (response) {
  // on success: handle response
  console.log(response.data);
})
.catch(function (error) {
  // handle error
  if (error.response) console.log(error.response.data);
  else console.log(error.message);
});

Request parameters

Parameter   Type    Description
media       file    video to analyze
models      string  comma-separated list of models to apply
interval    float   frame interval in seconds, one of 0.5, 1, 2, 3, 4, 5 (optional)
api_user    string  your API user id
api_secret  string  your API secret
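The interval parameter only accepts the listed values. If you compute a desired sampling rate programmatically, you can snap it to the nearest accepted value; a minimal sketch (the helper name is ours, not part of the API):

```python
# Snap a desired frame-sampling interval to the nearest value the API
# accepts (per the parameter table: 0.5, 1, 2, 3, 4 or 5 seconds).
ALLOWED_INTERVALS = (0.5, 1, 2, 3, 4, 5)

def nearest_interval(desired: float) -> float:
    return min(ALLOWED_INTERVALS, key=lambda v: abs(v - desired))

print(nearest_interval(1.3))  # 1
print(nearest_interval(4.6))  # 5
```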

Option 2: Long video

Here's how to analyze a long video. Note that if the video file is very large, you may first need to upload it through the Upload API.


curl -X POST 'https://api.sightengine.com/1.0/video/check.json' \
  -F 'media=@/path/to/video.mp4' \
  -F 'models=genai' \
  -F 'callback_url=https://yourcallback/path' \
  -F 'api_user={api_user}' \
  -F 'api_secret={api_secret}'


# this example uses requests
import requests

params = {
  # specify the models you want to apply
  'models': 'genai',
  # specify where you want to receive result callbacks
  'callback_url': 'https://yourcallback/path',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
# open the file in a context manager so it is closed after the request
with open('/path/to/video.mp4', 'rb') as media:
  files = {'media': media}
  r = requests.post('https://api.sightengine.com/1.0/video/check.json', files=files, data=params)

output = r.json()


$params = array(
  'media' => new CurlFile('/path/to/video.mp4'),
  // specify the models you want to apply
  'models' => 'genai',
  // specify where you want to receive result callbacks
  'callback_url' => 'https://yourcallback/path',
  'api_user' => '{api_user}',
  'api_secret' => '{api_secret}',
);

// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/video/check.json');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
$response = curl_exec($ch);
curl_close($ch);

$output = json_decode($response, true);


// this example uses axios and form-data
const axios = require('axios');
const FormData = require('form-data');
const fs = require('fs');

const data = new FormData();
data.append('media', fs.createReadStream('/path/to/video.mp4'));
// specify the models you want to apply
data.append('models', 'genai');
// specify where you want to receive result callbacks
data.append('callback_url', 'https://yourcallback/path');
data.append('api_user', '{api_user}');
data.append('api_secret', '{api_secret}');

axios({
  method: 'post',
  url:'https://api.sightengine.com/1.0/video/check.json',
  data: data,
  headers: data.getHeaders()
})
.then(function (response) {
  // on success: handle response
  console.log(response.data);
})
.catch(function (error) {
  // handle error
  if (error.response) console.log(error.response.data);
  else console.log(error.message);
});

Request parameters

Parameter     Type    Description
media         file    video to analyze
callback_url  string  callback URL to receive moderation updates (optional)
models        string  comma-separated list of models to apply
interval      float   frame interval in seconds, one of 0.5, 1, 2, 3, 4, 5 (optional)
api_user      string  your API user id
api_secret    string  your API secret
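For asynchronous calls, results arrive as POST requests to your callback_url. Below is a minimal, standard-library-only sketch of a receiver; the payload handling assumes the frames structure shown in the Moderation result section, and the 0.5 threshold, host, and port are illustrative choices, not API requirements.

```python
# Minimal callback receiver sketch using only the standard library.
# The frames structure matches the documented moderation result; the
# 0.5 threshold, host and port are illustrative choices.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_payload(payload: dict) -> bool:
    """Return True if any analyzed frame scores >= 0.5 on ai_generated."""
    frames = payload.get("data", {}).get("frames", [])
    return any(f.get("type", {}).get("ai_generated", 0) >= 0.5 for f in frames)

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        flagged = handle_payload(payload)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"flagged" if flagged else b"ok")

# To run: HTTPServer(("0.0.0.0", 8080), CallbackHandler).serve_forever()
```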

Option 3: Live-stream

Here's how to analyze a live-stream:


curl -X GET -G 'https://api.sightengine.com/1.0/video/check.json' \
    --data-urlencode 'stream_url=https://domain.tld/path/video.m3u8' \
    -d 'models=genai' \
    -d 'callback_url=https://your.callback.url/path' \
    -d 'api_user={api_user}' \
    -d 'api_secret={api_secret}'


# this example uses requests
import requests

params = {
  'stream_url': 'https://domain.tld/path/video.m3u8',
  # specify the models you want to apply
  'models': 'genai',
  # specify where you want to receive result callbacks
  'callback_url': 'https://your.callback.url/path',
  'api_user': '{api_user}',
  'api_secret': '{api_secret}'
}
# the live-stream endpoint is called with GET, matching the curl example
r = requests.get('https://api.sightengine.com/1.0/video/check.json', params=params)

output = r.json()


$params = array(
  'stream_url' => 'https://domain.tld/path/video.m3u8',
  // specify the models you want to apply
  'models' => 'genai',
  // specify where you want to receive result callbacks
  'callback_url' => 'https://your.callback.url/path',
  'api_user' => '{api_user}',
  'api_secret' => '{api_secret}',
);

// this example uses cURL; the live-stream endpoint is called with GET,
// matching the curl example
$ch = curl_init('https://api.sightengine.com/1.0/video/check.json?'.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

$output = json_decode($response, true);


// this example uses axios; the live-stream endpoint is called with GET,
// matching the curl example
const axios = require('axios');

axios.get('https://api.sightengine.com/1.0/video/check.json', {
  params: {
    'stream_url': 'https://domain.tld/path/video.m3u8',
    // specify the models you want to apply
    'models': 'genai',
    // specify where you want to receive result callbacks
    'callback_url': 'https://your.callback.url/path',
    'api_user': '{api_user}',
    'api_secret': '{api_secret}'
  }
})
.then(function (response) {
  // on success: handle response
  console.log(response.data);
})
.catch(function (error) {
  // handle error
  if (error.response) console.log(error.response.data);
  else console.log(error.message);
});

Request parameters

Parameter     Type    Description
stream_url    string  URL of the video stream
callback_url  string  callback URL to receive moderation updates (optional)
models        string  comma-separated list of models to apply
interval      float   frame interval in seconds, one of 0.5, 1, 2, 3, 4, 5 (optional)
api_user      string  your API user id
api_secret    string  your API secret

Moderation result

The moderation result is returned either directly in the request response (for synchronous calls, see below) or through the callback URL you provided (for asynchronous calls).

Here is the structure of the JSON response with moderation results for each analyzed frame under the data.frames array:

{
  "status": "success",
  "request": {
    "id": "req_gmgHNy8oP6nvXYaJVLq9n",
    "timestamp": 1717159864.348989,
    "operations": 40
  },
  "data": {
    "frames": [
      {
        "info": {
          "id": "med_gmgHcUOwe41rWmqwPhVNU_1",
          "position": 0
        },
        "type": {
          "ai_generated": 0.99,
        }
      },
      ...
    ]
  },
  "media": {
    "id": "med_gmgHcUOwe41rWmqwPhVNU",
    "uri": "yourfile.mp4"
  }
}

You can use the scores under the type object to detect AI-generated content in the video.
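For example, you could collect the positions of frames the model flags, as sketched below. The 0.5 threshold is an illustrative choice, not an API default; tune it to your tolerance for false positives.

```python
# Flag frames from the sync response above: collect positions of frames
# whose ai_generated score meets a threshold (0.5 here, an illustrative
# choice, not an API default).
def flagged_frames(response: dict, threshold: float = 0.5) -> list:
    frames = response.get("data", {}).get("frames", [])
    return [
        f["info"]["position"]
        for f in frames
        if f.get("type", {}).get("ai_generated", 0) >= threshold
    ]

response = {
    "status": "success",
    "data": {"frames": [
        {"info": {"position": 0}, "type": {"ai_generated": 0.99}},
        {"info": {"position": 1}, "type": {"ai_generated": 0.12}},
    ]},
}
print(flagged_frames(response))  # [0]
```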