Sexual content includes both legal sex and sexual abuse. Sexual abuse is characterized by its non-consensual or illegal nature, for instance because it involves exploitation, violence or trafficking.
The different types of sexual abuse between adults are:
Sexual abuse also covers Child Sexual Exploitation (CSE) content, including:
Note that some categories, such as sextortion, could also apply to minors.
The first step is to decide whether you want to distinguish between abusive or illegal sexual content and legal sex.
Decisions differ from one platform to another and are often nuanced. For example:
When it comes to sexual abuse, platforms have a legal and ethical responsibility to detect it, as it mainly refers to activities and behaviors that are illegal in the UK, US, EU and most of the world.
Sexual content moderation really depends on your platform, but one thing is certain: if your platform or application hosts minors, you must be very careful with this content, as they should not be able to see it. Minors should also be protected from CSE. Failure to do so will bring serious legal consequences. There are more and more initiatives to help make the internet safer for children:
The main challenge in detecting sexual abuse and unsolicited sexual content is differentiating it from legal and consensual adult content.
When it comes to protecting minors, image-identification and content-filtering technologies such as MD5, SHA-1, PhotoDNA or PDQ hashes are widely used to detect CSAM by matching content against databases of known images.
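As a rough illustration, exact cryptographic hashes can be compared against a set of known hashes obtained through an industry hash-sharing program. The snippet below is a minimal sketch; the hash value and set name are placeholders, and perceptual hashes such as PhotoDNA or PDQ require dedicated libraries that are not shown here.

```python
import hashlib

# Hypothetical database of hashes of known abusive images, obtained through an
# industry hash-sharing program (the value below is a placeholder, not real data).
known_image_hashes = {
    "9e107d9d372bb6826bd81d3542a419d6",
}

def is_known_image(image_bytes: bytes) -> bool:
    """Return True if the exact image matches a known hash (MD5 used here for brevity)."""
    digest = hashlib.md5(image_bytes).hexdigest()
    return digest in known_image_hashes

# Exact-match hashes break as soon as an image is resized or re-encoded, which is
# why perceptual hashes such as PhotoDNA or PDQ are preferred in production.
```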
That said, apps and platforms typically resort to four levels of detection measures:
Platforms should rely on the reports coming from their users. Users can easily tell the difference between what is legal and what is abuse and report it to the trust and safety team, so that these reports are reviewed by moderators, handled by the platform and/or reported to the authorities. It is a simple way to do moderation; the user just needs a way to submit the report. The main disadvantage of user reports is the delay before the content is reported and handled by the platform.
To simplify the handling of user reports and to distinguish between the types of content being reported, it is important that users have the option to categorize their reports, with categories such as legal content, abuse or CSE for instance.
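For illustration, a report object could carry the category chosen by the user and drive the triage queue. The sketch below is an assumption based on the distinctions made in this guide; the category and queue names are illustrative only.

```python
from dataclasses import dataclass
from enum import Enum

class ReportCategory(Enum):
    LEGAL_SEXUAL_CONTENT = "legal_sexual_content"   # legal but possibly unwanted content
    SEXUAL_ABUSE = "sexual_abuse"                    # non-consensual or otherwise illegal content
    CSE = "child_sexual_exploitation"                # always escalated with the highest priority

@dataclass
class UserReport:
    reporter_id: str
    content_id: str
    category: ReportCategory
    comment: str = ""

def triage(report: UserReport) -> str:
    """Route a report to the right review queue; CSE reports jump ahead of everything else."""
    if report.category is ReportCategory.CSE:
        return "escalation_queue"
    return "standard_review_queue"
```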
Human moderators can also help detect content related to sexual abuse, as they are experts at distinguishing between what is legal and illegal in terms of sexual content.
However, human moderation has some common issues, such as a lack of speed and consistency. To assist human moderators, it is important to have well-defined guidelines and examples, as well as resources documenting grey-area cases that have already been encountered. Platforms should also take into consideration the well-being of moderators, especially those working with CSE content.
Keywords are a good way to filter the data by detecting the words related to legal and illegal sex, so that a shorter sample can be verified by human moderators. However, they are not very efficient at distinguishing the two types of content:
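Despite these limitations, a keyword pre-filter is cheap to set up. The sketch below uses a small, hypothetical term list and simple word matching to flag messages for human review; the terms are illustrative only.

```python
import re

# Hypothetical term list; a real deployment maintains a much larger, regularly
# reviewed list, including misspellings and slang in several languages.
SEXUAL_TERMS = {"nude", "nudes", "sexting", "escort"}

def flag_for_review(message: str) -> bool:
    """Flag a message for human review if it contains any listed term."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return bool(words & SEXUAL_TERMS)

# The filter cannot tell consensual adult content from abuse: a message between
# consenting adults and a coercive demand for nudes are both flagged, which is
# why the flagged sample still needs human moderators.
print(flag_for_review("send me your nudes or else"))  # True
```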
ML models can complement the keyword and human moderation approaches by detecting sexual content with the help of annotated datasets containing different types of sexual content, abusive or not. It is very important to remember that collecting CSAM to train your models is illegal.
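As a toy illustration of that approach, the sketch below trains a linear text classifier on a tiny, made-up annotated dataset using scikit-learn. Real training data requires thousands of carefully labeled examples and must, of course, never contain CSAM.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up annotated dataset: 1 = abusive / unsolicited, 0 = legal adult content.
texts = [
    "send nudes or I will share your photos",        # sextortion-style threat
    "you have no choice, do what I say on camera",
    "two consenting adults chatting privately",
    "adult content shared on an opt-in channel",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Probability that a new message is abusive; thresholds are tuned per platform.
proba = model.predict_proba(["share the video or everyone will see your pictures"])[0][1]
print(round(proba, 2))
```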
Automated moderation can also be used to moderate images and videos submitted by users, by detecting depictions of any type of nudity (nudity in a sexual context, as well as people in underwear, for instance) or of minors.
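A common pattern is to send each uploaded image to an automated moderation service and act on the returned scores. In the sketch below, the endpoint, model names and thresholds are placeholders, not a real API.

```python
import requests

MODERATION_ENDPOINT = "https://moderation.example.com/v1/check"  # placeholder URL

def review_image(image_path: str) -> str:
    """Submit an image to a hypothetical moderation API and map its scores to an action."""
    with open(image_path, "rb") as f:
        response = requests.post(
            MODERATION_ENDPOINT,
            files={"media": f},
            data={"models": "nudity,minor"},  # hypothetical model names
            timeout=10,
        )
    scores = response.json()  # e.g. {"nudity": 0.97, "minor": 0.02}
    if scores.get("minor", 0) > 0.5:
        return "escalate"   # possible minor depicted: human review and reporting flow
    if scores.get("nudity", 0) > 0.8:
        return "remove"     # explicit nudity on a platform that disallows it
    return "allow"
```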
Sanctions of varying severity can be applied by platforms and applications depending on the case. Removing the sexual comment, suspending the user's account temporarily or permanently, or reporting the user to the legal authorities are examples of sanctions that can be applied.
Several criteria should be taken into account when deciding on the sanction:
Sexual content involving minors is illegal in the US, UK, EU and in most regions of the world. It should therefore lead to serious consequences.
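To make these criteria concrete, the sketch below maps a reviewed case to the sanctions mentioned above. The criteria and the escalation order are illustrative assumptions, not a prescribed policy.

```python
from dataclasses import dataclass

@dataclass
class ReviewedCase:
    involves_minor: bool      # CSE / CSAM, grooming, or sextortion aimed at a minor
    is_abuse: bool            # non-consensual or otherwise illegal content
    prior_violations: int     # number of previous confirmed violations by this user

def decide_sanction(case: ReviewedCase) -> list[str]:
    """Return the sanctions to apply, from content removal up to reporting to authorities."""
    if case.involves_minor:
        return ["remove_content", "permanent_ban", "report_to_authorities"]
    if case.is_abuse:
        if case.prior_violations > 0:
            return ["remove_content", "permanent_ban"]
        return ["remove_content", "temporary_suspension"]
    return ["remove_content"]  # legal but disallowed content: removal only
```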
If you encounter content related to CSE on your platform, and especially CSAM, grooming or sextortion directed at minors, you should report the case to the local authorities and to specialized NGOs such as:
Proactive detection and prevention of sexual abuse should be one of your priorities when building your platform and its community guidelines, especially if your platform hosts minors, who are more vulnerable. Remember that this type of behavior may have a serious negative impact on the lives of affected users.