
Faked photos and videos: what's possible today and how to spot fakes

David Lee
31.7.2018
Translation: machine translated

Advances in software, hardware and artificial intelligence are bringing us images and videos that appear to be real, but which are ultimately manipulated or generated by computers. How do we deal with this? A thorny question. In any case, it's important to know as much as possible about falsification techniques, in order to separate the real from the fake.

Not every photo retouch is a forgery, because forgery requires fraudulent intent. Opinions differ, of course, on how far photos may be retouched, for fashion shoots for example, and the line between "optimisation" and "deception" remains blurred. But that's another subject, and I won't go into it in this article. The main question here is whether image forgeries can be detected and, if so, how.

Seeing is believing - our inner fact-checker is lax

Whether a forgery can be recognised obviously depends on its quality. Experts are unanimous on this point: a good forgery is extremely difficult to detect unless you have the original image for comparison. The same applies to paintings, as the case of art forger Wolfgang Beltracchi shows. In a fascinating interview with the NZZ, Beltracchi explains how art experts stubbornly insisted in their appraisals that paintings were authentic, until he admitted they were forgeries.

Of course, it all starts with a suspicion. Normally, you become suspicious when a motif looks unrealistic, when geometry and proportions seem off, or when the source seems unreliable. Yet our inner fact-checker is far more lax with images than with text, even though images can mislead just as much as words. Why do we trust images more? The most obvious reason: we can't pay attention to everything. We only ever take in a few details of an image.

Our credulity towards images may also have something to do with evolution. For as long as humans have used speech, they have manipulated with words; with images, this is a much more recent exercise. What's more, many people simply take a photograph to be a reproduction of reality. That belief has always been naïve and, strictly speaking, wrong: even so-called "real" photos are artificial creations, just like paintings or fairy tales. Especially in the digital age.

Finally, the type of image manipulation is decisive in detecting falsification. Let's take a closer look at some important techniques.

Photo montage

In a photo montage, an object from another photo is inserted into the image. To achieve a realistic result, several things must match:

  • proportions: this is not always simple, as objects closer to the camera have to be larger, but how much larger?
  • angle and perspective: if person A is seen from a slight low angle, then person B must be shown from a slight low angle too
  • the direction of the light (clearly visible from the shadows) and the type of light (hard or soft, cold or warm)
  • the brightness, contrast, saturation, colour balance and sharpness of the inserted section, which should roughly match the rest of the image

Kenyan Seve Kinya posted this photo on Facebook on 2 March 2016. The lovely story behind it, if the media are to be believed: Seve would have loved to travel to China, but she had no money, not even a passport. So she travelled in her imagination, with home-made photo montages. Of course, anyone can spot the forgery at a glance, because the contrast, colours and light don't match. For some reason, the montage went viral, and people inserted the woman into countless other photos. For Seve, it was a stroke of luck: a compatriot, in a fit of altruism, collected donations for her. That same year, Seve was able to post a real photo of herself on the Great Wall.

Comparing it with this photo, you can immediately see that the proportions in the faked image above don't match.

If any of the points above is botched, you can recognise a forgery with the naked eye. The edges of the inserted object, its contours, are an important telltale feature. Before an object can be inserted into an image, it must first be completely cut out from its original background. This task alone can be extremely tedious, depending on the contours and the background; when cutting out a person, you have to work around individual locks of hair. The exercise is much easier when the original surroundings form a homogeneous background (in a studio, for example).

Retouching

Retouching is, so to speak, the opposite of photo montage: an object is removed from the image. It shouldn't leave a hole; instead, the area has to be rebuilt with plausible image content. Depending on the size of the object and the image, this can range from very easy to very difficult. Anyone can retouch a pimple on a face: select the healing brush, click on the blemish, and that's it. The tool takes a piece of surface from the surroundings and copies it over the area to be removed. Several similar tools in Photoshop work the same way, for example the clone stamp.

Making whole people or large objects disappear is difficult, depending on the background. With a homogeneous background, Photoshop does it fully automatically with the "Content-Aware Fill" function.

Like the clone stamp and the healing brush, Content-Aware Fill also uses the surrounding image areas for retouching. This is why patterns repeat: you might see two perfectly identical clouds, or two perfectly identical waves in the water. In most cases our eyes won't notice, unless we are suspicious to begin with. A computer, however, can detect this type of retouching quite easily. More on this in the section on forensic tools.

Distortions

With Photoshop's Liquify tool, it's very easy to modify contours.

Such tools (and similar ones) are readily used to give women a flattering silhouette, and even more often to turn women who already have a flattering silhouette into surreal creatures.

Lengthen your legs by all means, but not too much! These images illustrate very well what I'm describing in this article. Screenshot: aliexpress.com

It has to be said, though, that these manipulations are often easy to spot. Here, the background has also been distorted. Unless we're out in nature, we're surrounded by lines that should be straight: door and window frames, table edges, flagstones and so on. When these lines are curved strangely around the chest or hips, you know right away what's going on. Likewise if the waist is very thin and the forearms too wide.

This photo shows a dress for sale on aliexpress.com. The model was later given a more feminine shape, as you can see from the curved lines of the cabinet to her right.

On a neutral background, under certain conditions, it is possible to directly deform the object without it being noticeable. However, these days, models who appear exclusively in front of such backgrounds are immediately considered suspect.

Forensic tools

Various forensic methods make it easier to detect image forgeries. Swiss software engineer Jonas Walker has developed an easy-to-use browser interface that lets you try out some common methods for yourself. But first, a warning against unrealistic expectations: these tools are only useful if you know what to look for. What's more, they only give you an indication of possible manipulation, not proof. Finally, these methods work best on original files; for images picked up somewhere on the web, they often prove ineffective.

Clone detection: automatically finds duplicated image areas. The function flags regions from which objects have been removed with the clone stamp or a similar tool.
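The underlying idea can be sketched in a few lines. This toy version (plain Python on a tiny greyscale grid, my own illustration) splits the image into small blocks and reports blocks whose content recurs at a different position; real clone-detection tools compare perceptual features and tolerate noise and slight rescaling, which this exact-match sketch does not.

```python
def find_clones(image, block=2):
    """Return pairs of top-left coordinates of identical block x block
    regions in a 2D greyscale image (a list of equal-length rows)."""
    seen = {}    # block contents -> first position where it occurred
    clones = []
    height, width = len(image), len(image[0])
    for y in range(height - block + 1):
        for x in range(width - block + 1):
            key = tuple(tuple(row[x:x + block])
                        for row in image[y:y + block])
            if key in seen:
                clones.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return clones

# a 4x4 "image" where the top-left 2x2 block was stamped onto the bottom-right
img = [
    [10, 20,  0,  0],
    [30, 40,  0,  0],
    [ 0,  0, 10, 20],
    [ 0,  0, 30, 40],
]
pairs = find_clones(img)   # contains the pair ((0, 0), (2, 2))
```

Because overlapping blocks of flat background also repeat, real tools additionally filter out uniform areas and require a minimum distance between the matched regions.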

Error level analysis (ELA): detects JPEG artefacts created by (repeated) compression. In themselves, artefacts are not yet a sign of manipulation. What is suspicious is when one area contains significantly different artefacts from another area that shows something very similar.

In the ELA view, areas get darker with each save. When an object has been inserted into the image, it appears much brighter in the ELA than it should.

Bear in mind, however, that one ELA is not like another: images on the Internet have usually already been saved and modified several times anyway. Conversely, the absence of such irregularities is no indication that the image has not been manipulated.
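The principle behind ELA can be illustrated without real JPEG machinery. In the toy sketch below (plain Python, my own illustration), a coarse quantisation step stands in for lossy compression: re-"compressing" the image and looking at the per-pixel difference shows that regions which survived an earlier save barely change, while freshly inserted, never-compressed material produces much larger error levels.

```python
def quantize(pixels, step=16):
    """Toy stand-in for lossy JPEG compression: snap each greyscale
    value to the nearest multiple of `step`."""
    return [round(v / step) * step for v in pixels]

def error_levels(pixels, step=16):
    """Re-'compress' the image and return the per-pixel difference.
    Already-compressed areas barely change; fresh material lights up."""
    return [abs(v - q) for v, q in zip(pixels, quantize(pixels, step))]

# a strip of image that was saved (quantised) once before ...
background = quantize([23, 87, 140, 201])
# ... with two fresh, never-compressed pixels spliced into the middle
image = background[:2] + [155, 77] + background[2:]
ela = error_levels(image)   # only the spliced-in pixels show an error level
```

Real ELA works the same way with an actual JPEG encoder at a fixed quality setting, and the resulting difference image is amplified so the bright (high-error) regions become visible.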

Principal component analysis: in this post, Jonas Walker explains how principal component analysis (PCA) works and what it is useful for. His example highlights Photoshop's aforementioned Content-Aware Fill. Even though this analysis reveals more about manipulations, you still need to know what to look out for.
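As a rough illustration of what PCA does with image data (plain Python with power iteration instead of a linear-algebra library; the function and sample pixels are my own, not from the post mentioned above): treat each pixel as a point in RGB space, compute the 3x3 covariance matrix, and extract the dominant principal component, i.e. the colour direction along which the pixels vary most. Forensic viewers then display the image projected onto such components, where retouched regions can stand out.

```python
def principal_component(pixels, iters=100):
    """Dominant principal component of a list of RGB pixels, found via
    power iteration on the 3x3 covariance matrix."""
    n = len(pixels)
    means = [sum(p[i] for p in pixels) / n for i in range(3)]
    centred = [[p[i] - means[i] for i in range(3)] for p in pixels]
    # 3x3 covariance matrix of the colour channels
    cov = [[sum(c[i] * c[j] for c in centred) / n for j in range(3)]
           for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        # repeatedly multiply by the covariance matrix and renormalise;
        # this converges to the eigenvector with the largest eigenvalue
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# pixels that vary almost only along the red channel
pixels = [(10, 100, 50), (60, 101, 49), (110, 99, 51), (160, 100, 50)]
pc = principal_component(pixels)   # points almost exactly along red
```

Projecting each pixel onto the second or third component strips away the dominant colour variation, which is exactly where subtle artefacts of Content-Aware Fill can become visible.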

Automated tools make faked videos possible: deepfakes

So a photo montage that looks authentic isn't easy to make. That makes it all the more astonishing that tools have recently been developed to manipulate not just individual images, but entire video clips. The key word here is [deepfake](https://www.sciencesetavenir.fr/high-tech/intelligence-artificielle/deepfake-le-pouvoir-de-manipulation-de-l-intelligence-artificielle-en-un-mot_124308): deep-learning methods are used to produce the fakes.

For many, it came as a shock to see porn videos circulating on the net with a celebrity's face inserted more or less convincingly. But this didn't happen overnight. Software that automatically recognises faces, regardless of viewing angle and other variables, has been around for a long time. Mapping the automatically recognised face onto another existing image at the right angle was the next logical step, and that too has existed for a while, for example in the various morphing apps that distort faces, or in Face Swap in the Snapchat app, a function that swaps faces. With deep learning, the software is trained until it can composite faces without any human assistance. And once that works, the technique can be applied to an entire video.

My point is: it was many small evolutionary steps, not one big leap, that made these deepfake videos possible. Now that the technique has matured enough to produce falsified porn, however, the media stir (or rather excitement) is at its peak.

Until now, it was usually easy to see that these videos were faked: they're blurry, everything seems to float about haphazardly, and sometimes you see bizarre, unrealistic blends. Even so, these videos have great potential to create a sense of insecurity, because we are even more gullible with videos than with photos. Until now, after all, videos, unlike photos, could only be faked with extreme effort. That is no longer the case.

The future: are we losing our sense of reality?

Results will certainly become ever more sophisticated, and detecting forgeries ever more difficult. If one day authentic photos and videos can no longer be distinguished from manipulated ones, the justice system will lose one of its most important forms of evidence. Progress as a step backwards? For jurisprudence, it would set the clock back by more than a century. And of course, generally speaking, it does a society no good when there is no longer a consensus on what is real and what is not.

But computerised detection of forgeries is also constantly improving; it has become a real arms race. In my opinion, that's no reason to paint everything black. Our human perception adapts, too. Imagine someone from 1990 watching a well-made video game from 2018: they'd probably mistake the scene for a film. A 20-year-old today, on the other hand, can tell a film from a video game far more easily. So even if the virtual world keeps moving closer to the real one, it doesn't follow that we will one day be unable to tell them apart. Future generations will develop new abilities in this area.

Ed is a 3D human, a creation of artist Chris Jones.

My interest in IT and writing landed me in tech journalism early on (2000). I want to know how we can use technology without being used. Outside of the office, I’m a keen musician who makes up for lacking talent with excessive enthusiasm.
