Our competitors take a one-size-fits-all approach to nudity detection, typically sorting images into just two buckets: good or bad.
We have found that reality is more complex: a vast array of situations falls in between. Not all customers share the same definition of nudity, nor the same expectations about what should be allowed and what should be banned. That is why we return a precise description of the type of content found.
For example, some apps and websites prevent users from posting photos showing suggestive cleavage, while others allow them. Rather than deciding on our own what to accept, we return a detailed description stating that an image contains suggestive cleavage. Your app logic can then accept or reject the image.
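To illustrate how app logic can act on such a detailed description, here is a minimal sketch. The response fields (`raw`, `partial`, `suggestive`, `suggestive_classes`) and the threshold values are assumptions for illustration, not the actual API schema:

```python
# Hypothetical moderation response; field names and values are
# assumptions for illustration, not the real API schema.
response = {
    "nudity": {
        "raw": 0.01,          # explicit nudity score
        "partial": 0.02,      # partial nudity score
        "suggestive": 0.85,   # suggestive content score
        "suggestive_classes": ["cleavage"],
    }
}

SUGGESTIVE_THRESHOLD = 0.5

def is_allowed(resp, blocked_classes, threshold=SUGGESTIVE_THRESHOLD):
    """Return True if the image passes this app's own moderation policy."""
    nudity = resp["nudity"]
    # Reject clearly explicit content regardless of policy.
    if nudity["raw"] > threshold or nudity["partial"] > threshold:
        return False
    # For suggestive content, each app decides which classes to block.
    if nudity["suggestive"] > threshold:
        return not blocked_classes.intersection(nudity["suggestive_classes"])
    return True

# A strict app blocks suggestive cleavage; a permissive app does not.
print(is_allowed(response, {"cleavage"}))  # strict policy -> False
print(is_allowed(response, set()))         # permissive policy -> True
```

The point is that the same response supports both policies: the decision lives in your application code, not in the API.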
See our product page on nudity detection for an illustration of all the concepts we can help you detect, from the very explicit to the more suggestive.