Find out what tricks you can use to manually detect AI-generated images! This blog post brings together all the clues you need and also explains the difficulties and limitations of human screening.
The rise of AI-generated images and deepfakes has introduced new complexities to the digital world. According to this Everypixel study, people are creating an average of 34 million images per day using text-to-image tools such as DALL·E 2 or Adobe Firefly. These images, while fascinating and often highly realistic, can be used to mislead or deceive people.
GenAI has become an essential matter in Trust & Safety. New regulations are currently being developed and applied, and platforms hosting user-generated content have begun implementing new features to keep people informed and aware of misleading content. But knowing how to identify AI-generated content is crucial, whether you’re a journalist, a researcher, or just a curious user. In this article, we'll explore how human perception can help in identifying AI-generated images, why this task is becoming increasingly hard, and the tools available to verify the authenticity of images.
Human perception is our primary defense against AI-generated images. It’s also what we might call “intuition”, and even if you think otherwise, everyone has some; it just takes a little practice! While AI has become incredibly sophisticated, there are still subtle details that humans can pick up on, revealing an image's artificial origin.
One of the common giveaways in AI-generated images is the image texture. AI often struggles with replicating natural textures, leading to surfaces that appear unnaturally smooth or inconsistent.
The most common example is the representation of human skin, which often lacks pores or appears overly polished, far from reality. Although flawless skin is often depicted in magazines and on social networks, pimples are common in real life. AI-generated human faces are sometimes so perfect they look like they come straight out of an animated movie.
The person in the image below ticks all the boxes: no flaws, no pores, perfect, smooth, etc.
AI image showing a woman cooking
But the image also shows that texture issues are not only skin-related: every visible object in this image seems to have a texture problem.
These are just a few examples of texture issues you could find in AI-generated images. You can use them as an inspiration to better identify such glitches in the images you encounter online.
AI-generated human faces are getting more convincing, but they’re not perfect.
Beyond skin texture, you should always pay attention to features like ears, hands, or eyes, as these areas often show subtle distortions. Eyes might lack symmetry or have unusual reflections, while fingers can be misshapen or positioned in unnatural ways.
Fingers and hands are probably the most difficult body parts for AI to generate. The most common mistakes include an incorrect number of fingers (too many or too few), fused or merged fingers, joints bent at unnatural angles, and hands that blend into the objects they hold.
AI typically struggles with generating text within images. Common mistakes include nonsensical or misspelled words, garbled characters that only resemble letters, and fonts or letter sizes that change inconsistently within the same sign.
Let’s practice! See the image example below. Everything looks normal at first sight. But if you take a closer look, you might notice signs that, when zoomed in, contain only random, nonsensical text, with a strange font and inconsistent sizing.
AI image showing the front of a restaurant
The background in AI-generated images can often appear strange or out of place. This can result in a disjointed or surreal image that feels "off."
Common background issues in AI images include warped or melting structures, duplicated objects and repeating patterns, inconsistent lighting and shadows, and elements that blur or merge into one another.
All the features above are good indicators of whether an image is likely to be real. But sometimes, AI images don’t show any of these hints.
The last resort is then to ask whether the content of the image is plausible. In other words, the question you should ask yourself, as a human with world knowledge and good intuition, is: “Would it be possible for me to see this in real life?”
The example below shows improbably large snowdrops growing in the wild, and even on a woman’s hat… It feels particularly unreal!
AI image showing a woman surrounded by snowdrops
Let’s have a look at a few famous fake photos that appeared online in recent months.
Image 1 shows Donald Trump, a well-known American politician, being arrested by police officers. The background is blurred and lacks sharp detail; it also looks like a copy-paste of police officers and uniforms, which seems a little strange. The face in the foreground is quite realistic, but the fingers of his right hand look incomplete.
Image 2 also shows Donald Trump, in an outfit that looks like a mix between a suit and a jail uniform, which again feels very odd. The background is blurred and shows the judges’ table with far too many armchairs to be true. And let’s talk about Trump’s arms and hands: his arms seem too short and some fingers are missing…
Finally, Image 3 shows Pope Francis wearing an exaggerated white down jacket, which is unlikely. The texture of the image is a little too smooth, and the foreground lighting seems far too strong compared to the background.
All these indicators should be easy to spot for trained eyes but can go unnoticed if you just take a quick look.
Image showing three known fakes
Despite the clues outlined above, spotting AI-generated images is becoming increasingly difficult. The evolution of AI technology and the increasing number of models make these images more realistic than ever.
One way to gauge the difficulty of this task is to look at how well humans perform. At Sightengine, we decided to build an “AI or not?” game to challenge our website’s visitors and test their GenAI detection skills.
The results are based on 150 images, half of them AI-generated and half of them real. The 2,500 participants who took the test achieved an average accuracy of 71%. This is a good score, but a closer look at the stats shows that 20% of the images fooled most players, meaning more than half of the participants mislabelled them. People also seem to have more doubts about real images: only 23% of real images were correctly identified by at least 80% of players, compared with 45% of AI images.
The above takeaways suggest that distinguishing between real and generated content is becoming increasingly challenging, even for trained eyes.
AI-generated images have seen remarkable advancements in recent years, thanks to improvements in models like GANs (Generative Adversarial Networks) and diffusion models. These advancements make differentiating real images from AI-generated ones more difficult, especially because they have significantly enhanced image resolution, texture realism, and the rendering of fine details such as faces and hands.
The four images below are examples of how models can evolve. The same prompt “photo close up of an old sleepy fisherman's face at night” was used to generate each one of these images with different versions of Midjourney models:
Image showing results of the same prompt with four different MJ model versions
These are good examples of how quickly and impressively genAI results have evolved. The generated face shows more and more realistic details. At first, the face looks almost animated because of how smooth the skin is. The final image shows human facial flaws similar to those you would find on any older man’s face, which makes the image very authentic, closer to what we see in real life.
When human judgment is not enough, several tools can help verify the authenticity of an image.
Reverse image search tools such as Google Lens allow you to upload an image and see where it has appeared online.
Using Google Lens couldn’t be simpler. There are two ways: go to images.google.com, click the camera icon, and upload your image or paste its URL; or right-click an image in Chrome and select “Search image with Google”.
After searching with your image, you get a list of links where the image seems to appear. This can help you determine if the image has been altered, reused, or taken out of context. If an image appears on multiple unrelated websites, it may be a sign that it has been manipulated. Sometimes, known deepfakes are even indicated as such on some web pages.
Image showing a reverse search example with Google Lens
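If you need to run this kind of lookup repeatedly, you can also open a Lens search from a script. Here is a minimal sketch in Python; it assumes Google’s public uploadbyurl endpoint, which may change over time, so treat it as a convenience rather than a guarantee:

```python
import urllib.parse
import webbrowser

def reverse_search(image_url: str) -> None:
    """Open a Google Lens reverse image search for a publicly accessible image URL.

    Assumption: the `uploadbyurl` endpoint accepts a `url` query parameter;
    Google may change this behavior at any time.
    """
    query = urllib.parse.urlencode({"url": image_url})
    webbrowser.open(f"https://lens.google.com/uploadbyurl?{query}")

# Example: check where a suspicious image appears online (hypothetical URL)
reverse_search("https://example.com/suspicious-image.jpg")
```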
There are also specific AI detection tools designed to analyze images and determine whether they are AI-generated. They can be useful, especially because many images do not contain any clues indicating they could be AI-generated, and because, as our game results suggest, human judgment alone is not always reliable.
We at Sightengine have been working on an AI-generated image detection model that was trained on millions of artificially-created and human-created images spanning all sorts of content such as photography, art, drawings, memes and more.
Image showing Sightengine's AI detector
This model can be used personally, to check that images encountered online are real. It can also be used by platforms and apps seeking to protect their users from deepfakes, for instance: the model flags AI images, and specific moderation processes can then be applied to the detected images.
The algorithm does not rely on watermarks, i.e. visible or invisible marks added to digital images to identify the generator. Watermarks are not always present and can easily be removed. Instead, it is the visual content itself that is analyzed, making the approach much more scalable.
Images generated by the main models currently in use (Stable Diffusion, Midjourney or GANs, for instance) are flagged and given a probability score.
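For illustration, here is a minimal sketch of how such a check could look in Python. The endpoint follows Sightengine’s API conventions, but the exact model name (`genai`) and response fields are assumptions to verify against the documentation linked below:

```python
import requests

# Minimal sketch: the `genai` model name and the `type.ai_generated` response
# field are assumptions; check the official documentation for exact names.
response = requests.get(
    "https://api.sightengine.com/1.0/check.json",
    params={
        "models": "genai",                       # AI-generated image detection
        "url": "https://example.com/image.jpg",  # image to analyze (hypothetical URL)
        "api_user": "YOUR_API_USER",
        "api_secret": "YOUR_API_SECRET",
    },
)
data = response.json()

# The model returns a probability score between 0 and 1.
score = data.get("type", {}).get("ai_generated", 0.0)

# A platform could then apply its own moderation threshold to flagged images.
if score > 0.5:
    print(f"Likely AI-generated (score: {score:.2f})")
else:
    print(f"Likely authentic (score: {score:.2f})")
```

The 0.5 threshold here is arbitrary; a platform would tune it depending on how strict its moderation needs to be.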
Have a look at our documentation to learn more about the model and how the API works.
As AI continues to evolve, the line between reality and artificiality blurs. While human perception can still recognize inconsistencies in AI-generated images, the task is becoming increasingly challenging, even when you know all the tips! Engaging with tools such as AI detectors can provide additional assurance in verifying the authenticity of an image. Staying informed and vigilant is key to safely navigating the internet.