How does Sightengine compare to competitors?
We are not the only company doing Image Processing. Google and Microsoft, for instance, offer general-purpose computer vision APIs.
But we are different in that we specialize in Image Moderation. Understanding what meaning and emotions an image conveys (by being suggestive, violent, etc.) is a task that is very different from traditional object detection.
Our work is not to recognize cats and dogs in photos, it is to infer the meaning and intent of an image from subtler visual cues. That is what we excel at.
How does Sightengine compare to Human Moderation?
Our service is more than 10x cheaper than Human Moderators and 20x faster.
The service is always on, and results are consistent.
Images remain private. No humans look at your images to moderate them.
What is Sightengine's Return on Investment?
We believe our solution can help:
Here are a few examples of what we have been told by our users:
How fast is Sightengine?
Sightengine has been built with speed in mind. Images are usually processed within a few hundred milliseconds. This is much faster than what human moderators can achieve.
We achieve that by building optimized algorithms and running them on specialized hardware (high-end GPUs).
There are a few things to keep in mind to make sure you get the quickest responses:
What is the underlying technology?
The Technology that Sightengine is built upon is called Deep Learning. It is a type of Artificial Intelligence that makes use of so-called Deep Convolutional Neural Networks.
Neural Networks have been around since the 1950s. They were an attempt to mimic, in a very simplistic way, how neurons work and interact in the brain. Early networks achieved interesting results, but the limited computing power of the time restricted them to just a few neurons, which constrained what they could do.
It is only recently that the processing power of computers, combined with new smart algorithms, has made it possible to build extremely large networks with billions of neurons. Those neurons are arranged into successive layers (hence the "Deep") in an attempt to mimic the inner workings of the brain.
It is those networks that have helped achieve huge leaps in computer vision, one of which was achieved in 2012, right at the time when Sightengine was being built.
How long have you been doing this?
Sightengine went live in 2013 and has been continuously improving since then.
Is it possible to customize the Moderation API to my needs?
We know that there is no one-size-fits-all solution with Image Moderation. Expectations will vary between countries, cultures and applications.
This is why our API endpoints do not simply return a binary classification. They send you more information so that you can make a fine-grained decision on how to handle each case. The Nudity Detection, for instance, will tell you what 'level' of nudity it has encountered. When faced with 'partial nudity', the algorithms will tell you exactly what the image contains, so that you can take appropriate action - or simply accept the photo.
How long does it take to integrate Sightengine?
This will depend on how your back-end and your current image handling work.
We have worked hard to make the API simple and straightforward to use. You could be up and running in 5 minutes. In some cases we have had users who took a few days to perform a deeper integration.
Most of the time, users need no more than a few hours to be live.
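As an illustration, a minimal integration could look like the Python sketch below. The endpoint URL and parameter names (`url`, `models`, `api_user`, `api_secret`) are assumptions based on a typical REST moderation API; check the API reference for the exact names before using them.

```python
# Minimal sketch of calling an image-moderation REST endpoint.
# NOTE: the endpoint URL and parameter names are illustrative
# assumptions, not guaranteed to match the live API.
import json
import urllib.parse
import urllib.request

API_ENDPOINT = "https://api.sightengine.com/1.0/check.json"  # assumed

def build_check_params(image_url, models, api_user, api_secret):
    """Assemble the query parameters for a moderation request."""
    return {
        "url": image_url,
        "models": ",".join(models),  # e.g. "nudity,type"
        "api_user": api_user,
        "api_secret": api_secret,
    }

def check_image(image_url, models, api_user, api_secret):
    """Send one image URL for analysis and return the parsed JSON response."""
    params = build_check_params(image_url, models, api_user, api_secret)
    query = urllib.parse.urlencode(params)
    with urllib.request.urlopen(f"{API_ENDPOINT}?{query}") as resp:
        return json.load(resp)
```

A deeper integration would add error handling and retries, but the request itself stays this simple.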
Are there libraries that can be used?
Yes there are. If you are using PHP, Python or Node.js, you can head to our Github repository for more. Our libraries are open-source and you can freely use them in your projects.
You are also welcome to share your feedback and contribute to those projects.
What image types are supported?
We aim to support all types of images. We currently support all the common formats (JPEG, GIF, PNG) as well as less common formats such as WebP.
If you have some other format that is not yet supported, send us a message.
Do you analyze only one frame for GIFs?
When it comes to GIF images we analyze all frames. We know that offensive content can be present in just a small subset of frames, so for safety's sake we will process them all.
Remember that for GIF images each frame is analyzed, so we count one operation per frame (or more if you run additional models on your GIF).
What are the acceptable / recommended image dimensions?
There is no upper bound on the width and height of the image.
There is, though, a lower limit to provide you with acceptable levels of accuracy. Your images need to have a width or height of at least 200 pixels. Our recommendation is to send images with a min(width,height)=400. This strikes the right balance between accuracy and bandwidth usage.
If your image is deemed too small and is rejected by our moderation API, we will not count an operation.
What are the constraints on the image size?
The image has to be less than 20 megabytes. We do recommend, though, that you send an optimized version of your image (typically less than 400kB) to reduce latency and reduce bandwidth usage.
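To hit both the 200-pixel minimum and the recommended min(width, height) = 400 while keeping uploads small, you can downscale images before sending them. Here is a small helper that computes the target size (the numbers come from the answers above); apply the resulting dimensions with whatever image library you already use:

```python
# Compute the dimensions an image should be resized to before upload,
# so that min(width, height) == 400 (the recommended balance between
# accuracy and bandwidth). Images already at or below that are untouched.
TARGET_MIN_SIDE = 400   # recommended short side
HARD_MIN_SIDE = 200     # below this, the API rejects the image

def upload_size(width, height, target=TARGET_MIN_SIDE):
    """Return (new_width, new_height) for upload, or None if the
    image is too small to be analyzed at all."""
    short_side = min(width, height)
    if short_side < HARD_MIN_SIDE:
        return None  # would be rejected; no operation is counted
    if short_side <= target:
        return (width, height)  # never upscale
    scale = target / short_side
    return (round(width * scale), round(height * scale))
```

Downscaling a 1200x800 photo this way yields 600x400, which will usually also bring the file size well under the 400kB recommendation once re-encoded.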
Can I run Sightengine on my own servers (on-premise)?
No, Sightengine is a SaaS offering.
That being said, with our Enterprise Plan you can have Sightengine run on dedicated hardware instead of common shared servers.
With our Pro Plan you can choose in what countries we will process and analyze your images.
What is an Operation?
An Operation is counted when an analysis is performed on an image. For instance, running Nudity Detection on 1 image counts as 1 operation.
Some API endpoints may bundle several operations (such as nudity detection, face detection, type detection, etc.). Each API call that consumes operations will return a specific field to let you know how many operations were used.
For videos, operations are counted each time a frame is analyzed, which typically happens once a second.
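The accounting above can be sketched as follows. The frame counts and the one-frame-per-second video rate are taken from this FAQ; in production, always read the actual count from the field returned by each API call rather than recomputing it:

```python
# Sketch of how operations are counted, per the rules in this FAQ.

def image_operations(models_run):
    """One operation per analysis performed on an image."""
    return len(models_run)

def gif_operations(frame_count, models_run):
    """Every frame of a GIF is analyzed, so operations scale with frames."""
    return frame_count * len(models_run)

def video_operations(duration_seconds, models_run, frames_per_second=1):
    """Video frames are typically analyzed once per second."""
    return int(duration_seconds * frames_per_second) * len(models_run)
```

So a 24-frame GIF run through nudity detection alone consumes 24 operations, and a 30-second video consumes about 30.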
How does the Moderation Engine work?
The Moderation Engine is entirely automated. There are no humans involved in the analysis of your images.
We have developed and fine-tuned so-called Deep Neural Networks with huge image sets. They have been trained to identify subtle visual cues and determine how an image could be perceived.
Do you use image meta-data?
No, we don't. Image meta-data is not reliable as it is easily changed or removed. Just as humans do not need meta-data to determine what an image contains, neither do we. You can name your image 'safe.jpg', 'nsfw.jpg' or anything else; that won't change the result.
How relevant are the results?
The outputs we provide are probabilities. A 'nudity score' of 0.98 roughly means that there is a 98% probability that the image contains nudity.
I have come across a false positive / false negative, can I report it?
Yes, we have a feedback API that you can use to report images you believe were misclassified.
This feedback is precious and will help us continue to improve our algorithms.
What probability thresholds should I use?
Thresholds should be adapted to your specific use and will depend on how tolerant you are to false negatives or false positives.
For nudity detection for instance, users will usually start with a nudity threshold somewhere between 0.75 and 0.9.
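In practice this often becomes a three-way decision: auto-reject above a high threshold, auto-approve below a low one, and queue the gray zone for manual review. A sketch, where the 0.85 and 0.50 cut-offs are purely illustrative starting points to tune against your own tolerance for false positives and false negatives:

```python
def moderate(nudity_score, reject_above=0.85, review_above=0.50):
    """Map a probability score to a moderation decision.
    The default thresholds are illustrative, not recommendations."""
    if nudity_score >= reject_above:
        return "reject"
    if nudity_score >= review_above:
        return "review"  # gray zone: route to a human or a stricter check
    return "approve"
```

Tightening `reject_above` reduces false positives at the cost of letting more borderline images through, and vice versa.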
What is your uptime? Do you provide a Service Level Agreement?
We work hard to maintain a perfect uptime. We do so by:
Some of our plans come with a Service Level Agreement. More details on the way the SLA is structured are available on Plan pages.
Do you throttle requests?
Our throttling strategy is very liberal, so you shouldn't have to worry about it unless you deliberately concentrate all your requests over a short timespan.
If we do throttle your request, you will get an HTTP 429 error code along with an error message telling you that you were throttled. You should then retry with an exponential backoff strategy, meaning that you retry after 4 seconds, then 8 seconds, then 16 seconds, and so on.
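The retry schedule above (4s, 8s, 16s, doubling each time) can be implemented like this. `ThrottledError` and the `send_request` callable are placeholders for however your HTTP client surfaces a 429 response:

```python
import time

class ThrottledError(Exception):
    """Placeholder for your HTTP client's 429 (throttled) error."""

def call_with_backoff(send_request, max_retries=5, sleep=time.sleep):
    """Retry a throttled request with exponential backoff:
    wait 4s, then 8s, then 16s, and so on, doubling each time."""
    delay = 4  # seconds before the first retry, per the schedule above
    for _ in range(max_retries):
        try:
            return send_request()
        except ThrottledError:
            sleep(delay)
            delay *= 2
    return send_request()  # final attempt; let any error propagate
```

Injecting `sleep` as a parameter keeps the function testable without actually waiting.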
How does this differ from Skin Detection?
Skin Detection is a flawed approach as it will incorrectly flag images of faces, hands, arms or people at the beach as being Nude images. Our approach was designed to be smarter.
Moreover, Skin Detection mostly works by recognizing pixels of a specific color, which easily leads to false positives when things like sand, wood or walls have a skin-like color.
How do you define nudity?
We have worked to set up a classification of Nudity that is aligned with what our users expect. We define 3 levels of nudity:
Do you detect soft nudity / partial nudity?
Yes we do. Soft/Partial nudity will be flagged as such by our API along with additional information describing what type of partial nudity was encountered. This way you can choose to treat bikinis differently from bare-chested males for instance.
How do you handle images of women in bikinis?
We flag them as Partial Nudity and will return a Tag field in the API response to tell you that we found a woman in a bikini in the image.
How do you handle images of men with bare chests?
We flag them as Partial Nudity and will return a Tag field in the API response to tell you that we found a bare-chested male in the image.
How do you handle images of suggestive cleavages?
We flag them as Partial Nudity and will return a Tag field in the API response to tell you that we found a suggestive cleavage in the image.
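Putting the three answers above together, response handling could look like the sketch below. The field names (`raw`, `partial`, `partial_tag`) and tag values are assumptions for illustration only; check the API reference for the actual response schema:

```python
# Sketch of acting on a partial-nudity classification.
# Field names and tag values below are assumed for illustration.
ALLOWED_PARTIAL_TAGS = {"bikini", "male_chest"}  # example policy

def accept_image(nudity_result, threshold=0.75):
    """Accept or refuse an image based on a hypothetical nudity response."""
    if nudity_result.get("raw", 0) >= threshold:
        return False  # explicit nudity: always refuse
    if nudity_result.get("partial", 0) >= threshold:
        # Partial nudity: accept only the tags this policy allows
        return nudity_result.get("partial_tag") in ALLOWED_PARTIAL_TAGS
    return True
```

This is exactly the kind of fine-grained decision the tags enable: the same "partial nudity" score leads to different outcomes for a bikini photo and a suggestive cleavage.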
Does nudity detection work with drawings/cartoons?
The Nudity Detection works best on natural photos. That is what it was designed for. Nudity in drawings and paintings may be detected, though the accuracy will be lower.
When should I use People detection?
The People Detection model was developed to help dating apps, messaging apps and similar services determine if a given image can be used as a profile image. By profile image we mean an image that contains exactly one person. Group images and images with nobody would be rejected.
How is this different from other Face detection algorithms?
This endpoint is not intended to work like a face detection API. It therefore works even in cases where a user's face is not detected by traditional face detection algorithms.
Do you detect faces with low light / glasses / hats?
Yes, those should be detected by the API.
When should I use Type detection?
Type Detection helps you determine if a given image is a natural photo or if it is an illustration.
What is an Illustration?
An illustration is an image that is not a natural photo. This includes drawings, cliparts, cartoons, paintings, icons, screenshots, etc.
Where will my images be processed? I need to make sure my images stay in Europe / North America / China...
Our default API endpoint will direct your request to currently available clusters.
If you need to control exactly where your images are analyzed, you can sign up to our Pro plans. You will then be able to choose the most appropriate datacenters so that your data does not leave a specific geographic area.
Is my data safe and secure?
The security and safety of our customers are our highest priorities.
All our API endpoints are accessible over HTTPS (TLS 1.0, TLS 1.1, TLS 1.2).
Transfers between servers are encrypted with the strongest encryption in the industry - AES-256. Even transfers within the same rack are encrypted.
Our infrastructure is hosted by providers such as AWS that have the highest levels of security and monitoring, and that have built their datacenters following industry security standards.
What happens if I don't use all operations in my plan?
Nothing special will happen. You will get your new quota for the next billing cycle. Unused operations will not roll over to the new cycle.
What happens if I exceed a plan?
No worries if you exceed your quota. We know this can happen and your service shouldn't be affected.
If you exceed your quota we will simply apply the per-operation rate and charge you for this additional usage a few days later.
Do you offer a free trial?
Yes, there is a Free plan that you can use to test our API.
Can I stay on the Free plan forever?
Yes you can. There is no time limit.
Do failed requests / errors count as an Operation?
No, when the API returns an error (4xx codes), we will not count an operation.
When is my card charged?
When you sign up for a paid plan, your card is immediately charged for the amount of the first billing cycle. Your card will then be charged once a month on the renewal day.
If you exceed your monthly quota by a large amount, we may have to perform a one-time charge before the end of the billing cycle to make sure your services stay live.
Can I pay annually up front?
Yes. Please get in touch for more details.
How do I cancel?
Canceling is very simple. Just log in to your Sightengine account, where you have an option to cancel your plan.
What happens if I cancel my plan?
If you cancel your plan, we will stop charging your card (unless there is remaining beyond-quota usage that has not been paid for).
Your access will continue to work until the last day of the ongoing subscription cycle.
Can I use Sightengine on multiple domains / applications / projects?
Yes, you can use your Sightengine account for any number of domains, applications or projects.
Do you have plans for non-commercial use?
If you are working in Academia or for a Non-Profit, we can give you free access to one of our larger plans.
We are happy to support Academia, Research and Non-profits. Send us a message and we will explore options.
Is it possible to perform Bulk Analysis?
If you have a set of images that you need to moderate as part of a one-off job, we can come up with a custom solution for you.
This is a great way to get started and this is how many customers actually started working with us.