Doubting whether the news is "real" has gone from the stuff of conspiracy theory to a genuine concern. Governments have demanded that the big publishers do more to stop the spread of "fake news", but new technology is making the fakes ever harder to spot.
Old tricks, new tech
Doctoring images with programs like Photoshop has been a mainstay of the advertising industry for decades. We've always had to take what we see with a pinch of salt: we know that toothpaste won't make our teeth that white, and that the latest burger from our favourite restaurant won't look nearly as appetising in real life. That technology has also become more accessible and easier to use. Your phone can probably remove red eye automatically and turn you into a cat, and there are even apps that predict what you'll look like in 40 years.
But now, thanks to advances in artificial intelligence (AI) and machine learning, we're starting to see a whole new class of fakery. Generative Adversarial Networks (GANs) can create entirely new images based on things they've seen, whether that's works of art or photographs of people, and the results are often very hard to tell from the real thing. A GAN pits two networks against each other: the generative network creates a new image, and the discriminative network tries to pick holes in it. They go back and forth until the discriminative network can't find any more faults.
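That back-and-forth can be illustrated with a deliberately tiny, hand-rolled sketch. Real GANs use deep networks trained with automatic differentiation; here, purely for illustration, the "generator" and "discriminator" are one-parameter models on one-dimensional data, the target distribution and learning rates are made up, and the gradients are derived by hand so the example stays self-contained:

```python
import numpy as np

# Toy sketch of the adversarial loop: a tiny "generator" learns to mimic
# samples from a normal distribution N(4, 0.5), while a logistic
# "discriminator" learns to tell real samples from generated ones.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = a*z + b, i.e. it learns a scale and a mean.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the probability that x is real.
w, c = 0.1, 0.0

lr, batch = 0.02, 64
for step in range(3000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator update: ascend log D(real) + log(1 - D(fake)),
    # i.e. get better at spotting the fakes.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: ascend log D(fake),
    # i.e. get better at fooling the discriminator.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean: {samples.mean():.2f} (target 4.0)")
```

Over the loop, the discriminator's feedback pushes the generator's output distribution toward the real one. The same dynamic, scaled up to deep convolutional networks and image data, is what lets full-size GANs produce photorealistic faces.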
Don’t believe your eyes
This technology has many positive uses, from the videos generated to test and train self-driving cars to apps built purely for entertainment, like FaceApp. If you've ever wondered what your favourite movie would look like with a different actor, well, there's a deepfake for that too: Jim Carrey in The Shining is particularly convincing.
But there are many people out there who are inclined to abuse the technology. Even simple fakes, like the video that was slowed down to make Nancy Pelosi appear drunk, have fooled plenty of people. So what chance do we have against something as sophisticated as this video featuring former president Barack Obama?
Thanks to AI, director Jordan Peele was able to put words into Obama's mouth in real time: the programme takes the audio, alters it to sound like Obama, and generates video of his mouth moving to match the words. Fortunately, Peele had teamed up with Buzzfeed to warn about the dangers of this technology.
The rise of deepfakes
Over the last few weeks we've seen the worrying case of the DeepNude program, which can turn an innocent photo into one baring all (not terribly surprisingly, it only works on women). That followed the open-source software that superimposed the faces of celebrities onto porn videos.
What's most alarming about these examples is that they're being made by members of the general public. With some open-source software, readily available hardware and a modicum of knowledge, anybody can create realistic images, videos and audio that are hard to distinguish from reality. This isn't just harmless fun; it could have huge consequences.
What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?
The next steps
Unfortunately, there's no simple answer to the rise of deepfakes. Unlike a fake Old Master, there's nothing to carbon-date and no canvas to analyse (a pixel is a pixel), so there's unlikely to be a purely technological solution. And we all know that governments struggle to keep up with rapid changes in technology: good regulation takes time to write and put into force. Even when legislation exists, it's often impossible to bring perpetrators to justice; either they can't be found, or they reside beyond the reach of the law.
So it's down to us, the general public, to be more critical of what we consume and, more importantly, what we share. As part of this, we all need to be aware of our own biases and how they affect our propensity to believe what we see. We can't stop the creation of deepfakes, but we can all play our part in stopping the harmful ones from spreading.
Posted by Katie on 27 August 2019