Deepfake is one of the newest storms to hit the internet. So, what is it?
Most of us have by now seen the face-ageing and face-swapping apps available for smartphones. Camera apps are becoming increasingly sophisticated, making these pictures easy to create, although the images they generate are not especially realistic. There is, however, a technology that can make a person look and/or sound entirely like someone else by swapping faces in videos.
“Deepfake” is an amalgamation of “deep learning” and “fake”. Deep learning is a branch of AI (artificial intelligence) that aims to mimic how the human brain learns: given enough examples, a deep-learning system can decipher patterns in supplied information without a human doing it for it. The “fake” part is exactly what it says: fake images, fake speech, and even fake people. There are different ways to face-swap, and not all of them use AI, but deepfakes do. Simply put, a deepfake is a way of making one thing look like another, whether it is an image or a video clip.
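The classic face-swap recipe behind many deepfakes is an autoencoder with one shared encoder and a separate decoder per person. The sketch below is a minimal, untrained NumPy illustration: the image size, latent size and random weights are all made up for demonstration, and a real system would use trained convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64 * 64, 128  # hypothetical 64x64 grayscale face, 128-d latent code

# One shared encoder learns features common to both faces (pose, expression,
# lighting); each person then gets their own decoder.
W_enc = rng.normal(0, 0.01, (DIM, LATENT))
W_dec_a = rng.normal(0, 0.01, (LATENT, DIM))
W_dec_b = rng.normal(0, 0.01, (LATENT, DIM))

def encode(face):
    # Compress a face into a shared latent representation.
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    # Reconstruct a face image from the latent code.
    return latent @ W_dec

def swap_a_to_b(face_a):
    # The core trick: encode person A's expression and pose,
    # then reconstruct the frame with person B's decoder.
    return decode(encode(face_a), W_dec_b)

face_a = rng.random(DIM)
swapped = swap_a_to_b(face_a)   # B's "face" wearing A's expression
```

During training, each decoder only ever sees its own person, so at swap time decoder B has no choice but to render person B from whatever latent code it is given.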
The implications of this technology are astonishingly far-reaching, both positive and negative. Unfortunately, the negatives currently appear to outweigh the positives.
Deepfakes have gathered infamy for being at the centre of fake news, hoaxes, fraud, celebrity porn and revenge porn. It would appear that “seeing is no longer believing”: celebrities and politicians are finding themselves the stars of videos they never participated in. Clips are emerging online, such as:
- US House Speaker Nancy Pelosi found herself at the centre of a backlash for allegedly slurring and stammering throughout a speech, when in reality the video had been slowed to 75% of its original speed. Trump even tweeted the video with the caption “PELOSI STAMMERS THROUGH NEWS CONFERENCE”. The clip was viewed two and a half million times on social media.
- Angela Merkel’s head superimposed with Donald Trump’s.
- Famous actresses are finding their faces pasted over porn stars’ bodies in illicit videos.
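The Pelosi clip shows how little “deep” technology some fakes actually need: slowing a video to 75% speed also lowers every audio frequency to 75% of its original value, which is what made the speech sound slurred. The effect can be demonstrated with a short NumPy sketch (the 220 Hz “voice” tone and 8 kHz sample rate are purely illustrative):

```python
import numpy as np

SR = 8000                              # sample rate in Hz
t = np.arange(SR) / SR                 # one second of audio
voice = np.sin(2 * np.pi * 220 * t)    # toy "voice": a pure 220 Hz tone

# Slowing the clip to 75% speed stretches the waveform by 1/0.75,
# so every frequency in it drops to 75% of its original value.
slow_t = np.arange(int(SR / 0.75)) / SR
slowed = np.interp(slow_t * 0.75, t, voice)

def dominant_hz(signal, sr=SR):
    # Find the strongest frequency via a discrete Fourier transform.
    spectrum = np.abs(np.fft.rfft(signal))
    return np.fft.rfftfreq(len(signal), 1 / sr)[np.argmax(spectrum)]

print(dominant_hz(voice))    # ~220 Hz
print(dominant_hz(slowed))   # ~165 Hz, i.e. 220 * 0.75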
Fraud has even been committed by imitating voices over the telephone, lulling customers into believing the voice belongs to someone they know and parting with money and performing requested bank transfers.
Deepfake news clips can become very harmful with a lack of knowledge about what they are. Professor Hao Li from the University of California predicts that within a few months, deepfakes will become almost indistinguishable from original videos. Andrea Hickerson, The Director of School of Journalism and Communications in South Carolina Says they are “Lies designed to look like the truth”.
When used maliciously, the effects could be devastating. Imagine seeing a political leader, touting hate speech, and inciting violence. What if another country then decides to take action? If we take these videos as the truth, consequences could be dire. If you believe it, why shouldn’t the next person? Once shared on social media, it becomes worldwide news in no time.
Facebook has recently announced that it will contribute $10 million to a fund that aims to improve deepfake detection technology. In addition to this, it plans to work with other organisations to expose the people behind it. Meanwhile, Google, Twitter, Reddit, and Discord are blocking and banning such videos with rumours of bringing charges too, such as identity theft, harassment, cyberstalking and revenge porn.
In spite of the negativity surrounding deepfake, there are many positives to emerge from this. AI is very likely to to be featured heavily with the movie industry in the future. Makeup and clothing are areas where this could easily be adapted. With an ever-evolving world of technology, the tools within video editing available will become more accessible and easier to use.
Digitally altered humans have already appeared on our screens: Harrison Ford as his younger self in Han Solo, a de-aged Princess Leia in Rogue One and Samuel L Jackson in Captain Marvel, to name but a few. Peter Cushing was resurrected for his role in Rogue One in 2017, despite having died in 1994.
David Beckham recently voiced a petition with Malaria No More in which his own voice was translated into 9 languages, using AI with a company called Synthesia.
Gamers could find that they can create life-like avatars sporting their own image. AI could change the ‘face’ of fashion by creating avatars to model new designs.
AI could be used within medical fields with people that have a poor body image. It can create an image they are more comfortable with, and this could be used to inspire, for example, dietary changes etc.
Despite the dubious nature of AI and deepfake, it needs to be better understood for its positives and not just feared for the negatives. Currently, technology is fighting against technology in a seemingly never-ending battle. The algorithms used to determine deepfake are very similar to those that deepfake uses and both sides are continually evolving.
On that basis, what can we do to differentiate the two?
There are tell-tale signs on some videos, such as:
- The lighting flickers.
- The face just “doesn’t seem to move quite right”.
- Eyes can move independently of each other.
- Deeper than normal voices could mean the video has been slowed down.
- Frequently done at a lower resolution to disguise the issues above.
At the same time, we also need to be more self-aware of what we read and repost on social media. It is time for us to be much more critical and fact check authenticity before sharing sensational and inflammatory posts. A few seconds to double-check could stop the sharing of a false post by the thousands. If you can’t determine the truth, don’t share it, simple. The rest of the battle remains firmly in the hands of the tech people dedicated to fighting cybercrime, fake news, fraud and too many others to mention!