Our Rule-based Text Moderation is substantially stronger than simple word-based filters. It uses advanced language analysis to detect objectionable content, even when users deliberately attempt to circumvent your filters.
As an example, for each word we check millions of variations that might be used to evade filtering, while smartly ignoring situations that would generate false positives (a simplified sketch of this normalization approach follows the list). Here is a partial list of the situations we cover:
Characters being repeated to avoid basic word filtering
Replacement of characters with typographical symbols
Adding spaces, punctuation and more within words
Unusual non-ASCII characters used to evade basic word filters
Changing word spellings while retaining their original meaning or pronunciation
Replacing some alphabetical characters with a combination of punctuation, digits and letters
Catching profanity embedded inside longer strings, while smartly ignoring potential false positives such as "bass guitar", "amass"...
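To make the idea concrete, here is a minimal, self-contained sketch of the kind of normalization these checks rely on. It is an illustration only: the blocklist, the substitution map and the regex rules are hypothetical toy versions, far simpler than the millions of variations covered by the actual service.

```python
import re
import unicodedata

# Hypothetical blocklist, for illustration only.
BLOCKLIST = {"badword"}

# A tiny sample of digit/symbol substitutions (a real system covers far more).
SUBSTITUTIONS = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s", "!": "i",
})

def normalize(text: str) -> str:
    """Reduce common filter-evasion tricks to a canonical form."""
    # Fold unusual non-ASCII look-alikes (accents, fullwidth forms) to ASCII.
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    # Map digits and typographical symbols back to letters: "b4d" -> "bad".
    text = text.lower().translate(SUBSTITUTIONS)
    # Drop spaces and punctuation inserted inside words: "b.a d" -> "bad".
    text = re.sub(r"[\W_]+", "", text)
    # Collapse repeated characters: "baaadword" -> "badword".
    text = re.sub(r"(.)\1+", r"\1", text)
    return text

def is_flagged(text: str) -> bool:
    # Note: a production system also needs allowlists so that embedded
    # matches ("bass guitar", "amass") do not trigger false positives.
    normalized = normalize(text)
    return any(word in normalized for word in BLOCKLIST)

print(is_flagged("B.@.D W-0-R-D"))          # True: symbols, spacing and leetspeak undone
print(is_flagged("a perfectly fine note"))  # False
```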
Machine learning models take context into account, so they can detect problematic content that would be missed, or incorrectly flagged, by simple keyword filtering and even by rule-based models.
As an example, words such as "dick", "kill" or "failure" are understood in context:
| Flagged ❌ | Accepted ✔️ |
| --- | --- |
| this is my Dick | I grew up reading Dick Tracy's comics |
| I read a book to kill my neighbor | I read a book to kill time |
| you are such a failure | I'm not used to failure |
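For illustration, here is a minimal sketch of this kind of context-aware classification using the open-source Hugging Face `transformers` library. The model choice (`unitary/toxic-bert`), its label names and the 0.5 threshold are assumptions made for the example, not the model behind this service.

```python
# A toy context-aware moderation pass using an off-the-shelf toxicity
# classifier. Model, labels and threshold are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

examples = [
    "I grew up reading Dick Tracy's comics",  # benign use of an ambiguous word
    "I read a book to kill time",             # benign use of "kill"
    "you are such a failure",                 # context makes this an insult
]

for text in examples:
    result = classifier(text)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
    flagged = result["label"] == "toxic" and result["score"] > 0.5
    print(f"{'FLAG' if flagged else 'PASS'}  {result['score']:.2f}  {text}")
```

Because the classifier scores the whole sentence rather than matching isolated keywords, "kill time" passes while the insult is flagged, which is precisely what keyword filters cannot do.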