Photoshop is renowned for its ability to help people edit images in weird and wonderful ways. As one of the tools that brought photo manipulation into the 21st century, it has shaped the world's visual culture.
But with image and video fakery quickly reaching epic proportions (think deepfakes and face swapping), manipulated content is becoming a major issue.
Adobe and researchers at UC Berkeley in the United States have teamed up to create a way of detecting image edits made with Photoshop's Face Aware Liquify feature.
Adobe has already researched image manipulation detection based on cloning, splicing and removal techniques. This research focuses on Face Aware Liquify because it's a popular way to adjust faces.
Researchers believe that a deep learning method called a convolutional neural network (CNN) can be trained to recognise faces that have been manipulated.
The researchers programmed Photoshop to run Face Aware Liquify on thousands of images from the internet. Some of those images were chosen to train the CNN, with an artist also editing a subset by hand to add a human touch to the training data.
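At its core, the detector is a binary classifier: given a face image, it outputs the probability that the face has been warped. The paper's actual model is a deep CNN, but the idea can be sketched in a few lines of numpy; all names here are illustrative, and the single convolution layer is a stand-in for a much deeper network:

```python
import numpy as np

def conv2d(img, kernel):
    # naive "valid" 2-D cross-correlation, for illustration only
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fake_probability(img, kernel, w, b):
    # one conv layer -> ReLU -> global average pool -> logistic output;
    # a real detector stacks many such layers and learns the weights
    feat = np.maximum(conv2d(img, kernel), 0.0)
    pooled = feat.mean()
    return sigmoid(w * pooled + b)
```

In training, the kernel and the weights `w`, `b` would be fitted on pairs of original and Liquify-edited faces; at test time, a probability above 0.5 flags the image as edited.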
"We started by showing image pairs (an original and an alteration) to people who knew that one of the faces was altered," explains Adobe researcher Oliver Wang. "For this approach to be useful, it should be able to perform significantly better than the human eye at identifying edited faces.
The neural network detected manipulated images about 99% of the time, while humans could only pick out the fakes 53% of the time.
The neural network was also able to show where faces had been edited, and it could even 'undo' edits, essentially reverting images back to their original state.
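The 'undo' works by predicting how each pixel was displaced and then warping the image back the other way. A toy numpy sketch of that reversal, under the simplifying assumption that the displacement field is already known and constant (nearest-neighbour sampling; all names are illustrative):

```python
import numpy as np

def warp(img, flow):
    # pull-back warping: output pixel (y, x) samples img at (y + dy, x + dx)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[1]).astype(int), 0, w - 1)
    return img[src_y, src_x]

# hypothetical edit: shift the image content 2 px to the right, then undo it
img = np.arange(36, dtype=float).reshape(6, 6)
flow = np.zeros((2, 6, 6))
flow[1] = -2                      # the "edit": shift content right
edited = warp(img, flow)
restored = warp(edited, -flow)    # the "undo": warp back with the negated flow
```

Interior pixels are restored exactly; information clipped at the image border is lost, which is one reason a real undo is only approximate.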
"This is an important step in being able to detect certain types of image editing, and the undo capability works surprisingly well," explains Adobe's head of research Gavin Miller.
UC Berkeley researcher Professor Alexei A. Efros explains that detecting image fakery may seem impossible because there are many elements to facial geometry.
"But, in this case, because deep learning can look at a combination of low-level image data, such as warping artifacts, as well as higher level cues such as layout, it seems to work.
"The idea of a magic universal 'undo' button to revert image edits is still far from reality," Richard adds. "But we live in a world where it's becoming harder to trust the digital information we consume, and I look forward to further exploring this area of research.
"Beyond technologies like this, the best defence will be a sophisticated public who know that content can be manipulated — often to delight them, but sometimes to mislead them," Miller adds.
Although the research is in the early stages, it highlights a broader effort to better detect image, video, audio, and document manipulations.
"Adobe is firmly committed to finding the most useful and responsible ways to bring new technologies to life – continually exploring the use of new technologies, such as artificial intelligence (AI), to increase trust and authority in digital media," Adobe concludes.