
Trust & Safety changelog

This changelog is in beta. As we iterate to make it as useful as possible, let us know if you have feedback.

Stay on top of the latest changes affecting your Trust & Safety operations. This changelog is crowdsourced; feel free to suggest missing items.


March 2023

Yubo and the AFNOR Group are forming a working group to create a new safety standard for preventing risks to, and protecting, minors on social networks. Read more [World · AFNOR · Yubo · Product · Child Safety]
Meta is introducing a new way for users to authenticate their accounts with a missed call. Read more [World · Meta · Product · Privacy]
The House of Representatives is passing a bill to protect online speech from interference by federal officials. Read more [US · Regulatory · Privacy]
Snapchat is publishing guidelines in its Family Center detailing how content gets algorithmically recommended to minors. Read more [World · Snapchat · Product · Child Safety]
Texas is proposing a bill requiring platforms to protect minors from harmful content and to obtain parental consent before collecting their data. Read more [US · Regulatory · Child Safety · Privacy]
Wisconsin is joining other states in requesting a court order compelling TikTok to comply with an investigation into its communication practices. Read more [US · TikTok · Product · Child Safety · Privacy]
Meta's Oversight Board is reviewing the moderation of the Arabic word "shaheed" ("martyr" in English), the single word associated with the most content removals. Read more [World · Meta · Product · Extremism]
WhatsApp is agreeing to be more transparent about changes to its terms of service, according to the European Commission. Read more [World · Meta · Product]
Facebook is revamping its "cross-check" moderation system after facing criticism for applying different review processes to VIP and regular users. Read more [World · Meta · Product]
Twitter is now prohibiting "wishes of harm" in its new violent speech policy, banning users from expressing a desire for harm to others. Read more [World · Twitter · Product · Hate · Violence]
TikTok is setting a new default 60-minute daily screen-time limit for minors. Read more [World · TikTok · Product · Child Safety]

February 2023

Meta is reforming its penalty system, known as "Facebook Jail", saying users will now receive a warning first for most violations. Read more [World · Meta · Product]
Tennessee is examining a bill asking phone manufacturers to track and block harmful content from reaching minors. Read more [US · Regulatory · Adult · Child Safety · Violence]
Meta is rolling out a new version of its ad-matching tool, providing more information about how users' activities feed its machine-learning models. Read more [World · Meta · Product · Privacy]
Meta's Oversight Board is announcing it will now review more types of content moderation cases and publish some decisions on an expedited basis. Read more [World · Meta · Product]
Arkansas is proposing a new bill requiring ID to access pornographic websites. Read more [US · Regulatory · Adult · Child Safety]
California is attempting for the second time to hold social media companies liable for addicting child users to their products. Read more [US · Regulatory · Child Safety]
Google is introducing a blur feature that helps users avoid explicit images while using the search engine. Read more [World · Google · Product · Adult · Violence]
Meta is launching new comment moderation tools allowing creators on Facebook to view moderation statistics and manage conversations. Read more [World · Meta · Product]
California is examining a bill targeting fentanyl and firearms sales, and the promotion of harm to children, on social media. Read more [US · Regulatory · Child Safety · Substances · Violence]
TikTok is rolling out a revamped account enforcement system, including a new strike system and features for handling recommendations. Read more [World · TikTok · Product]
TikTok is opening its transparency and accountability centers to visitors. Read more [World · TikTok · Product]

January 2023

The Supreme Court is taking up Section 230 in two cases to be heard in February, both involving social media's relationship to terrorist activity. Read more [US · Regulatory · Extremism]
Twitter is planning to limit permanent suspensions of accounts that break its rules, adding that any user will be able to appeal an account suspension. Read more [World · Twitter · Product]
Meta's Oversight Board is announcing the removal of the Ukrainian far-right military group Azov Regiment from Meta's list of dangerous individuals and organizations. Read more [World · Meta · Product · Extremism]
The Supreme Court is asking the Biden administration to weigh in on two content moderation cases. Read more [US · Regulatory]
CNIL, the French data protection authority, is warning that some age verification methods requiring face scans to access pornographic websites could be used for blackmail. Read more [EU · Regulatory · Adult · Child Safety]
Meta's Oversight Board is sharing its decision recommending that Facebook and Instagram redefine their community rules on nudity in a less discriminatory way. Read more [World · Meta · Product · Adult]
Louisiana is passing a new law requiring residents to provide proof of age with an official ID to access pornographic websites. Read more [US · Regulatory · Adult · Child Safety]
The United Nations' 2022 Internet Governance Forum (IGF) is discussing internet fragmentation, both among democracies and as driven by authoritarian states. Read more [World · Regulatory]
Google is developing a free moderation tool to help smaller platforms identify and remove terrorist material. Read more [World · Google · Product · Extremism]
TikTok is announcing that creators can now restrict their videos to adult viewers. Read more [World · TikTok · Product · Child Safety]

December 2022

Meta is launching HMA (Hasher-Matcher-Actioner), a new free tool helping platforms identify and remove violating content. Read more [World · Meta · Product · Extremism]
Apple is cancelling its plan to scan photos stored in iCloud to detect CSAM. Read more [World · Apple · Product · Child Safety]
Meta's Oversight Board is suggesting that Meta's cross-check programme is more commercially driven than committed to human rights. Read more [World · Meta · Product]
TikTok and Bumble are joining Meta in fighting revenge porn by blocking images from StopNCII.org's bank of hashes. Read more [World · TikTok · Bumble · Meta · Product · Adult]

November 2022

Naver Z is introducing its Safety Advisory Council, which provides expertise on the company's policies and features. Read more [World · Naver · Product]
Twitter is announcing that its Covid misinformation policy is no longer enforced. Read more [World · Twitter · Product · Misinformation]
Teleperformance is announcing it will no longer accept any new highly egregious content moderation work. Read more [World · Teleperformance · Product]

October 2022

TikTok is introducing new and updated features and policies for its LIVE community. Read more [World · TikTok · Product · Child Safety]
Twitter is reviewing its policies on permanently banning users. Read more [World · Twitter · Product]
Spotify is acquiring Kinzen, a firm specialized in identifying harmful audio content. Read more [World · Spotify · Product · Hate · Misinformation]

September 2022

Tumblr is creating community labels that let users avoid seeing unwanted content. Read more [World · Tumblr · Product · Adult · Substances · Violence]
Twitter is opening the Twitter Moderation Research Consortium (TMRC) to researchers. Read more [World · Twitter · Product]
Facebook is experimenting with asking 250 users to help moderate climate speech. Read more [World · Meta · Product · Misinformation]
Instagram is developing a feature to protect users from receiving unsolicited nude photos. Read more [World · Meta · Product · Adult]
YouTube is announcing updated content moderation policies prohibiting violent extremist content. Read more [World · YouTube · Product · Extremism]
California is requiring platforms to report how they moderate hate speech, extremism, harassment and other objectionable behavior. Read more [US · Regulatory · Extremism · Harassment · Hate]
Twitter is expanding its fact-checking feature Birdwatch, allowing users to add additional context to tweets. Read more [World · Twitter · Product · Misinformation]

August 2022

OpenAI is introducing a content moderation endpoint that assesses whether content is sexual, hateful or promotes self-harm. Read more [World · OpenAI · Product · Adult · Hate · Self-Harm]
TikTok is said to be training its moderators to detect CSAM using graphic images and videos as a reference guide. Read more [World · TikTok · Product · Child Safety]
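For readers curious what the OpenAI moderation endpoint above looks like in practice: it accepts a piece of text and returns per-category flags. A minimal sketch, assuming the public `/v1/moderations` request shape; the `extract_flagged` helper and the sample response are illustrative, not part of OpenAI's SDK:

```python
import json
import urllib.request

def moderate(text: str, api_key: str) -> dict:
    """POST text to OpenAI's /v1/moderations endpoint; returns parsed JSON."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/moderations",
        data=json.dumps({"input": text}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_flagged(response: dict) -> list[str]:
    """Return the names of the categories the endpoint flagged as true."""
    result = response["results"][0]
    return [name for name, hit in result["categories"].items() if hit]

# A sample response shaped like the endpoint's documented output
# (hypothetical values, no API call made here):
sample = {
    "results": [{
        "flagged": True,
        "categories": {"sexual": False, "hate": True, "self-harm": False},
        "category_scores": {"sexual": 0.01, "hate": 0.92, "self-harm": 0.0},
    }]
}
print(extract_flagged(sample))  # ['hate']
```

A platform would typically run this check before publishing user content and route flagged items to human review.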

See missing items? You can add them here.
Want to keep an eye on what's going on? Consider subscribing to Ben Whitelaw's newsletter.
