Spam refers to "unsolicited usually commercial messages (such as emails, text messages, or Internet postings) sent to a large number of recipients or posted in a large number of places", typically sent with the intent of promoting misleading content.
In a survey of more than 1,000 people about spam, conducted by Orbit Media, QuestionPro and Foundation, 35% of respondents indicated seeing spam comments on social media every day (see graphic below).
Figure: number of people (out of ~1,000) seeing spam on social media daily, weekly, monthly or rarely.

For platforms and applications, spam mainly corresponds to unwanted advertising or promotion, but it also covers problematic behaviors such as:
Moreover, these behaviors often rely on platform circumvention: users are led to leave the platform for another one.
Platforms and applications must decide whether to detect promotional messages and spam: although sending spam is illegal in most countries, platforms have no legal obligation to detect it. However, promotion and spam can affect both the reputation of the platform and the safety of its users. A platform that chooses not to detect such content risks damage to its reputation, as users may leave because of a "negative customer experience", as Meta puts it.
Below are some facts about online promotion and spam, and arguments in favor of detecting them:
More generally, platform circumvention should be prohibited on platforms and applications, both to maintain safety and comfort for users and to protect the platform's reputation and prevent unfair competition. Upwork, a marketplace for freelancers, says that freelancers getting paid by clients outside its platform is dangerous and against its terms of service for two reasons:
Apps and platforms generally resort to four levels of detection measures to detect online promotion and spam, as well as fraud attempts: user reports, human moderation, keyword-based filtering, and machine learning models.
Platforms should consider relying on the help that other users can provide: users can report potential spam, promotion or platform circumvention attempts to the trust and safety team, so that these are reviewed by moderators and handled by the platform. It's a simple way to moderate; the user just needs a way to submit the report. The main disadvantage is the delay needed to properly handle a potentially large volume of reports.
To simplify the handling of user reports and distinguish between types of issues, it is important that users have the option to categorize their reports, with categories such as scam / phishing, sexual promotion or company promotion, as the sketch below illustrates.
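As an illustration, here is a minimal sketch of how categorized reports could be represented, assuming a Python backend; the `ReportCategory` and `UserReport` names are hypothetical, and the categories are the ones mentioned above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ReportCategory(Enum):
    """Hypothetical category taxonomy; adapt to the platform's needs."""
    SCAM_PHISHING = "scam_phishing"
    SEXUAL_PROMOTION = "sexual_promotion"
    COMPANY_PROMOTION = "company_promotion"
    OTHER = "other"

@dataclass
class UserReport:
    reporter_id: str            # who filed the report
    content_id: str             # the comment, post or profile being reported
    category: ReportCategory    # chosen by the reporter from a fixed list
    detail: str = ""            # optional free-text explanation
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: a user flags a comment as a scam / phishing attempt.
report = UserReport("user_123", "comment_456", ReportCategory.SCAM_PHISHING)
print(report.category.value)  # "scam_phishing"
```

Fixed categories let the trust and safety team route each report to the right review queue instead of triaging free-text descriptions.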
Human moderators are also good at detecting this kind of content, especially because spam and promotion comments often look alike and are easy to recognize. Note, however, that human moderation has well-known limitations, such as lack of speed and consistency.
When it comes to fraud attempts such as catfishing, identifying such cases can be challenging, since attackers frequently use techniques to make their profiles seem real. Yet there are some warning signs that content moderators can watch out for, like:
Keywords are actually not a very good way to detect such content, as there are no specific words that reliably filter the data; what characterizes spam is rather syntactic patterns such as:
A list of words such as follow, share or click could still be used to extract a smaller sample of data for human moderators to verify, but it would probably miss many examples and catch too many false positives (i.e. comments containing these words that are neither spam nor promotion), as the sketch below illustrates.
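Here is a minimal sketch of such a keyword prefilter in Python. The `SPAM_KEYWORDS` list and the `needs_review` helper are hypothetical; the example mainly shows how this approach shrinks the review queue while still flagging innocent comments.

```python
import re

# Hypothetical keyword list; as noted above, these words also appear in
# legitimate comments, so matches only narrow down what humans review.
SPAM_KEYWORDS = {"follow", "share", "click", "subscribe", "promo"}

def needs_review(comment: str) -> bool:
    """Flag a comment for human review if it contains any keyword."""
    tokens = set(re.findall(r"[a-z']+", comment.lower()))
    return not tokens.isdisjoint(SPAM_KEYWORDS)

comments = [
    "Click the link in my bio and follow me!",   # likely spam -> flagged
    "I'll share this recipe with my mom.",       # false positive -> flagged
    "Beautiful photo, congrats!",                # clean -> not flagged
]
for c in comments:
    print(needs_review(c), "-", c)
```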
The use of ML models would likely be more relevant than a keyword approach for this topic: trained on annotated datasets containing spam and promotion comments, as well as circumvention attempts, they can detect such content directly.
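As a sketch of what such a model could look like, here is a minimal text classifier using scikit-learn, assuming a TF-IDF plus logistic regression pipeline (a common baseline, not necessarily what any given platform uses). The tiny dataset and its labels are invented for illustration; a real model would be trained on a large annotated corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated dataset (1 = spam/promotion/circumvention, 0 = legitimate).
texts = [
    "Follow me and click the link for free crypto",
    "DM me on Telegram to continue this deal",      # circumvention attempt
    "Check out my shop, 50% off today only",
    "Great article, thanks for sharing your view",
    "I disagree with the second point you make",
    "Can you post the source for this claim?",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(),
)
model.fit(texts, labels)

# predict_proba yields a spam score that can drive thresholds, e.g.
# auto-remove above 0.95, send to human review above 0.5, else allow.
print(model.predict_proba(["join my channel, link in bio"])[0][1])
```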
Automated moderation can also be applied to images and videos submitted by users, for instance to detect cases of impersonation or scams.
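One common technique for spotting impersonation is perceptual hashing: comparing an uploaded profile photo against photos of known accounts. The sketch below uses the third-party ImageHash library; the file paths and the `known_hashes` store are hypothetical placeholders.

```python
from PIL import Image
import imagehash  # third-party: pip install ImageHash

# Perceptual hashes of profile photos belonging to verified or previously
# flagged accounts (hypothetical store; would be a database in practice).
known_hashes = {
    "verified_account_42": imagehash.phash(Image.open("known/account42.jpg")),
}

def looks_like_impersonation(upload_path: str, max_distance: int = 8) -> bool:
    """Return True if the uploaded photo is near-identical to a known one."""
    uploaded = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash objects gives their Hamming distance.
    return any(uploaded - known <= max_distance
               for known in known_hashes.values())

if looks_like_impersonation("uploads/new_profile.jpg"):
    print("Route account to manual review for possible impersonation")
```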
When identifying cases of scam, fraud or phishing, platforms and apps should always restrict or permanently disable the responsible user accounts. Where possible, platforms should also record the user's IP address to hinder the creation of new fraudulent accounts.
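Below is a minimal sketch of this idea, assuming a simple in-memory blocklist; a real system would persist the list and account for shared or rotating IPs (VPNs, mobile carriers), which make IP-based bans imperfect on their own.

```python
# IPs last seen on banned accounts (in-memory stand-in for a database).
banned_ips: set[str] = set()

def ban_account(user_id: str, ip_address: str) -> None:
    """Permanently disable a fraudulent account and remember its IP."""
    banned_ips.add(ip_address)
    print(f"Account {user_id} permanently disabled")

def allow_signup(ip_address: str) -> bool:
    """Reject new registrations coming from an IP tied to a banned account."""
    return ip_address not in banned_ips

ban_account("fraudster_01", "203.0.113.7")
print(allow_signup("203.0.113.7"))  # False: likely the same actor returning
```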
In the worst-case scenario, when a user has been tricked and their money or personal data stolen, the platform should help facilitate procedures with the local authorities by providing as much information as possible to identify the fraudster or scammer in real life.
Actions that can be taken when encountering spam and promotion in comments are mainly the following:
Spam content, as well as fraud, should always be taken seriously, as it can tarnish a company's reputation and brand. Spam content can also spread misinformation and be used to propagate fake news. More generally, users scammed on a platform may start feeling insecure, lose trust and leave. Detecting this kind of content is therefore essential from the company's point of view.