Sightengine's Image Moderation API is used to moderate images and detect whether they contain unwanted content (such as adult content, offensive content, commercial text, children, weapons...).
The API works both with standard images (such as JPG, PNG, WebP...) and animated multi-frame images such as GIF images.
Sightengine Image Moderation is straightforward to use: the API is synchronous, so all the data you need is returned directly in the API response. There are no callbacks, no moderation queues, and no need to wait for updates or track state.
Head to our Quickstart guide to start using the API with a few lines of code, or head to the API documentation for a deeper look into the structure of requests and responses.
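As a minimal sketch of this synchronous flow, the snippet below builds a request to the check endpoint and reads a result from a single response. The endpoint path and parameter names (`models`, `api_user`, `api_secret`, `url`) follow Sightengine's public documentation, but the credentials are placeholders and the JSON shown is an illustrative example, not real API output:

```python
import json
from urllib.parse import urlencode

# Build the request a client would send. Credentials are placeholders.
params = {
    "url": "https://example.com/photo.jpg",   # publicly reachable image URL
    "models": "nudity,weapon,offensive",      # comma-separated list of models
    "api_user": "YOUR_API_USER",
    "api_secret": "YOUR_API_SECRET",
}
request_url = "https://api.sightengine.com/1.0/check.json?" + urlencode(params)

# Illustrative response shape (not real output): everything arrives in one
# synchronous reply, so there is nothing to poll and no state to track.
sample_response = json.loads("""
{
  "status": "success",
  "nudity": {"raw": 0.01, "partial": 0.02, "safe": 0.97}
}
""")

is_safe = (
    sample_response["status"] == "success"
    and sample_response["nudity"]["safe"] > 0.9
)
```

Because the full result is in the response body, the moderation decision (`is_safe` here) can be made inline, in the same request cycle that uploaded or referenced the image.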
A Model is a filter that looks for a specific type of content in your image. For instance, nudity is the name of a model trained to detect adult, racy and suggestive content, flagging scenes ranging from explicit to mild nudity.
By specifying the list of Models you wish to apply to an image, you tell the API what you would like to detect and filter.
Head to our Model reference to see all the available Models along with their detection capabilities.
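The per-model scores in the response can then drive your own moderation rules. The sketch below shows one hypothetical rule; the response structure and score names are illustrative stand-ins (check the Model reference for the actual fields each model returns):

```python
# Illustrative API response fragment: each requested model contributes
# its own scores (values between 0 and 1).
response = {
    "nudity": {"raw": 0.85, "partial": 0.10, "safe": 0.05},
    "weapon": 0.02,
}

# Hypothetical threshold: tune per model and per use case.
REJECT_THRESHOLD = 0.5

def should_reject(resp):
    """Reject when the explicit-nudity or weapon score exceeds the threshold."""
    if resp["nudity"]["raw"] > REJECT_THRESHOLD:
        return True
    if resp["weapon"] > REJECT_THRESHOLD:
        return True
    return False

print(should_reject(response))  # True: the raw nudity score exceeds 0.5
```

Keeping the rule in your own code like this is the simplest approach; as noted below, you can also define such rules from the online dashboard instead.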
Generate code snippets to start using the Image Moderation API with this online builder.
Explore all the image moderation categories and models you can choose from.
Define your moderation rules directly from your online dashboard instead of defining them in your back-end code.
Learn how to create lists of images to catch duplicates, spam and disallowed images.