How fast is Sightengine? How can I make Sightengine requests faster?

How fast is Sightengine?

Sightengine has been built with speed in mind. Images submitted to our Nudity Detection API, for instance, are usually processed within a few hundred milliseconds. This is much faster than what human moderators can achieve.

We achieve this through a series of steps:

  • our inference algorithms have been optimized both for accuracy and speed
  • we run them on specialized hardware (such as high-end CPUs and GPUs)
  • we hate queues. Whereas other vendors typically place incoming requests in a queue until a resource becomes available, we provision enough capacity that you never have to wait in line. Queue-less processing is our default
  • we control the complete stack end-to-end, obsessing over every detail to build high-quality infrastructure (powerful bare-metal servers)
  • we optimize request latency with our geo-located Inference Network, which lets you choose the location of the inference servers you use (this is an Enterprise-only option)

Some of our models are heavier and take a bit more time to run than Nudity Detection. The largest ones are currently the Text Detection, Offensive and Weapon Detection models; those take closer to 2 seconds to process on our standard plans.

Make requests even faster

When talking about speed, several elements influence the latency of requests to Sightengine:

  • The network overhead. This is the time it takes for your request to reach our servers and for the response to travel back. It is influenced by the connection setup steps (DNS lookup, TCP round-trips, SSL negotiation...) and the physical distance between your servers and ours (see the timing sketch after this list)
  • Retrieving the image or video. Depending on how you submit the image, our servers might have to perform an extra step to retrieve it
  • The actual image processing. This is the most important step, where we process your image or video
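
If you want to see how much of your total latency comes from network overhead rather than processing, a simple test is to time two identical requests sent over a single re-used connection: the first one pays the one-off connection costs (DNS, TCP, SSL), while the second mostly reflects retrieval and processing. Below is a minimal Python sketch of this test using the requests library. It assumes the check.json endpoint with the models, api_user, api_secret and url parameters from our API reference; the credentials, image URL and model name are placeholders to replace with your own.

    import time
    import requests

    API_URL = "https://api.sightengine.com/1.0/check.json"
    params = {
        "models": "nudity",                      # placeholder: use the model(s) on your plan
        "api_user": "YOUR_API_USER",             # placeholder credentials
        "api_secret": "YOUR_API_SECRET",
        "url": "https://example.com/image.jpg",  # placeholder public image URL
    }

    # A Session keeps the underlying TLS connection open between calls.
    with requests.Session() as session:
        for label in ("cold (DNS + TCP + SSL setup)", "warm (connection re-used)"):
            start = time.perf_counter()
            response = session.get(API_URL, params=params, timeout=10)
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{label}: {elapsed_ms:.0f} ms, HTTP {response.status_code}")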

That being said, there are a few things you can do to make sure you get the quickest responses:

  • Properly re-use connections between subsequent requests, so that you don't perform a new SSL negotiation/handshake on each request (see the sketch after this list)
  • Prefer POSTing the images directly to the API, rather than sending a public URL to the image. This way our servers don't need the extra time to fetch the image
  • Make sure you optimize the size and content of the images or videos you submit, as this will save both time and bandwidth.
  • If you need to run multiple checks on a given image or video, then make sure you batch those checks in a single request rather than doing separate API requests for each
  • With our Enterprise plans, you can choose exactly where your images are processed, and thus minimize latency by choosing a datacenter near your back-end. You can also ask for priority processing on Enterprise plans.
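
As an illustration, here is a minimal Python sketch that combines three of the tips above: it re-uses a single HTTPS connection across requests, POSTs the image bytes directly instead of a public URL, and batches several checks into one request by listing multiple models. It assumes the check.json endpoint with the media, models, api_user and api_secret parameters from our API reference; the credentials, file path and model names are placeholders to adapt to your own plan.

    import requests

    API_URL = "https://api.sightengine.com/1.0/check.json"
    AUTH = {"api_user": "YOUR_API_USER", "api_secret": "YOUR_API_SECRET"}  # placeholders

    # One long-lived Session: the TLS connection is kept open between requests.
    session = requests.Session()

    def check_image(path, models=("nudity", "weapon", "offensive")):
        """Run several checks on a local image in a single round-trip."""
        with open(path, "rb") as image_file:
            response = session.post(
                API_URL,
                files={"media": image_file},                # direct upload, no URL fetch needed
                data={**AUTH, "models": ",".join(models)},  # batched checks in one request
                timeout=10,
            )
        response.raise_for_status()
        return response.json()

    # Example usage (hypothetical local file):
    # result = check_image("photo.jpg")

Keeping a single long-lived Session (or an equivalent connection pool in your language of choice) is usually the cheapest change to make: it removes the DNS lookup and SSL handshake from every request after the first.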
