AI-generated (GenAI) images are becoming increasingly common online. While they open up possibilities for creative expression and exploration, they can also become a tool for spreading misinformation and generating illegal imagery. As such, Trust & Safety teams will need to keep close tabs on this technology, the regulatory environment, and approaches to combating nefarious GenAI.
Generative AI refers to a subset of deep learning models that learn the characteristics of their training data in order to create new content with similar patterns.
This article focuses on image and video generation. The most popular models here are:
| Model | Latest versions | Description |
|---|---|---|
| Stable Diffusion (Stability AI) | SDXL, SD 2.1 | Open-source text-to-image deep learning model based on generative diffusion techniques; one of the most popular among users, it has led to a large number of variants and submodels |
| MidJourney (MidJourney Inc) | 6.0 | Closed-source text-to-image tool accessible through a Discord bot; gives users several variations and engines to choose from, making it very versatile |
| DALL-E (OpenAI) | 3 | Closed-source text-to-image model available through OpenAI APIs, ChatGPT and Bing Image Creator; allows great customization and advanced capabilities |
| Imagen (Google) | | Closed-source text-to-image diffusion model with great photorealism capability |
| Firefly (Adobe) | | Text-to-image model mainly used in the design field and embedded into Adobe products |
To generate an image with one of these models, you use a prompt, i.e. a natural-language input given to the GenAI model so that it can generate an output (here, a visual one).
Let's say we want to generate an image with the following prompt: "A teenager on a computer being tricked by AI-generated images, cartoon style". Below are examples of AI-generated images that we would obtain.
[Example images generated from this prompt by Stable Diffusion, DALL-E 3, MidJourney 5.2 and Firefly]
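As an illustration, here is a minimal sketch of generating an image from this prompt with the OpenAI Python SDK (one of the tools listed above). It assumes the openai package is installed and an OPENAI_API_KEY environment variable is set; other models expose similar prompt-in, image-out interfaces.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt=(
        "A teenager on a computer being tricked by AI-generated images, "
        "cartoon style"
    ),
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # URL of the generated image
```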
All these tools and platforms have become very popular by allowing anyone to generate images from a description in natural language. According to this Everypixel study, as of August 2023, more than 15 billion images had been created using text-to-image algorithms, which is as many images as photographers took in the roughly 150 years from the first photograph in 1826 to 1975.
Number of AI-generated images as of August 2023 (Everypixel, 2023)

Anyone can become an artist, creating superb and impressive images. But because these tools are very new, accessible to the general public, and used by a growing number of people, they also come with negative, even dangerous, aspects that urgently need to be regulated.
Generating funny, informative or artistic visual content has become increasingly common, thanks to how easy these tools are to use and how impressive their results can be. However, GenAI comes with its own set of challenges, including:
Note that what we call deepfakes includes both AI-generated content that spreads false information about someone and any other content that has been digitally modified by a human to mislead other users.
AI-generated images and videos are subject to the same rules as real ones. The fact that an image is fictional and that the people in it are not "real" does not change its legality. In short:
In the EU, the use of AI will soon be regulated by the AI Act, the world’s first comprehensive law on the development and use of AI technologies. GenAI falls under the scope of these regulations.
Companies hosting user-generated content will need to indicate whether a given image or video was AI-generated. This means that companies will have to ask users to label images accordingly and/or automatically detect AI-generated imagery.
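In practice, this requirement could translate into a simple decision rule that combines a user's self-declared label with an automated detector's score. Below is a hypothetical sketch; the data model, helper function and threshold are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class UploadedImage:
    url: str
    user_declared_ai: bool  # label supplied by the uploader at submission time

def detector_score(image_url: str) -> float:
    """Placeholder for whatever AI-image detector the platform integrates
    (an in-house classifier or a third-party API); returns a 0-1 score."""
    return 0.0  # replace with a real detector call

def should_label_as_ai(image: UploadedImage, threshold: float = 0.5) -> bool:
    """Label an image as AI-generated if the uploader says so, or if the
    automated detector is sufficiently confident."""
    if image.user_declared_ai:
        return True
    return detector_score(image.url) >= threshold
```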
Companies developing or deploying GenAI models will further have to comply with the following:
Depending on their scope and risks, some GenAI tools could also be banned, or assessed both before and after being put on the market, for instance if they are used to manipulate vulnerable groups or involve biometric identification.
You should in any case stay informed about relevant laws and regulations in the regions where your platform operates. Failing to comply with these requirements could have serious legal consequences and damage your reputation.
Most platforms currently allow AI-generated content, as long as it adheres to the platform's community standards and advertising policies:
However, some platforms have introduced new rules or tools specific to such content:
We expect platforms to further update their rules and processes ahead of the upcoming regulations, especially as these issues have already surfaced, for instance with political deepfakes: in early October 2023, US legislators called on Meta to explain why deceptive AI-generated political advertisements were not flagged on Facebook and Instagram.
To stay up-to-date, don't forget to check our Trust and Safety changelog.
Given current and upcoming regulations around AI and GenAI, we expect that platforms hosting user-generated content will need to systematically identify and flag AI-generated images and videos.
While platforms will in some cases ask users to self-flag or self-report GenAI content, additional tools will be needed. Two possible approaches are watermarking and AI-based detection.
Watermarking can be used to mark an image as being AI-generated, and later help platforms identify the source of the image.
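Conceptually, an invisible watermark embeds a short payload into the pixel data at generation time, which a platform can later try to decode. Below is a minimal sketch using the open-source invisible-watermark Python package (the library used in Stability AI's reference Stable Diffusion pipeline); the file names and payload are illustrative placeholders.

```python
import cv2  # pip install invisible-watermark opencv-python
from imwatermark import WatermarkDecoder, WatermarkEncoder

# Embed a 4-byte payload into the image's frequency domain (DWT + DCT)
bgr = cv2.imread("generated.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"SDV2")  # e.g. a tag identifying the generator
bgr_marked = encoder.encode(bgr, "dwtDct")
cv2.imwrite("generated_marked.png", bgr_marked)

# Later, try to recover the payload from an uploaded image
decoder = WatermarkDecoder("bytes", 32)  # payload length in bits
payload = decoder.decode(cv2.imread("generated_marked.png"), "dwtDct")
print(payload.decode("utf-8", errors="replace"))  # "SDV2" if the mark survived
```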
While watermarks originally consisted of designs or signs simply overlaid on digital images, making them easy to edit out, newer watermarks are invisible to the naked eye and embedded into the image itself. They can also be used to identify the generator of the image. Examples include:
There are some limitations to the use of watermarks for AI-generated image detection:
AI-generated content can also be detected by leveraging AI models specifically trained to recognize AI-generated imagery.
At Sightengine, we have trained a model on millions of examples of real and generated images, spanning photography, digital art, memes and illustrations. Platforms and apps can use the AI-generated image detection model to flag AI-generated content and/or apply specific moderation processes to generated imagery.
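A platform can call such a detection model over HTTP. Here is a minimal sketch against Sightengine's image check endpoint; the model name ("genai") and the response field shown follow the public documentation at the time of writing and should be verified against the current docs, and the credentials and file name are placeholders.

```python
import requests

params = {
    "models": "genai",             # AI-generated image detection model
    "api_user": "YOUR_API_USER",   # placeholder credentials
    "api_secret": "YOUR_API_SECRET",
}

with open("image.jpg", "rb") as f:
    response = requests.post(
        "https://api.sightengine.com/1.0/check.json",
        files={"media": f},
        data=params,
        timeout=30,
    )

result = response.json()
score = result["type"]["ai_generated"]  # 0.0 (likely real) to 1.0 (likely generated)
if score >= 0.5:  # example threshold; tune to your platform's needs
    print(f"Likely AI-generated (score={score:.2f})")
else:
    print(f"Likely real (score={score:.2f})")
```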