
Things are not as they may seem…
Imagine this: you turn the news on after work. First the broadcaster speaks on the weather conditions for tomorrow. Next they mention the station will be showing a clip of the president from a press conference.
You continue to listen to the news while doing things around the house. The channel announces that President Donald Trump is beginning to speak. After finishing your tasks, you sit down to watch the end of the press conference. But, what you see is extremely alarming.
On the TV, it looks as though Putin is speaking instead of President Trump. How can this be? It’s Trump’s voice, but Putin’s face on his body. Even so, it looks so real it’s unbelievable.
How could this happen?
The trick is, it is still Trump in the video speaking and interacting. But someone has created a deep fake of the video and put Putin’s face on Trump’s body.
This type of tactic can be used maliciously in the news and on other platforms. Let’s break down what a deep fake is, how they are created, and why they can be dangerous.
What is a deep fake?
To begin, a deep fake is a video that has been altered with AI to “produce or alter video content” so that it depicts something that, in fact, never occurred.
This is exactly what happened in the example above: someone took a video of President Trump and used AI to create a deep fake that makes it look like Putin is there instead of him.
The takeaway is that someone can take a video and, through the use of AI, alter the subject (like Trump) to say or look like almost anything.
Scary isn’t it?
Here is an example:
In the video, you can see how easily Bill Hader becomes Tom Cruise and Seth Rogen. The transition is seamless and extremely real-looking.
How do you create a deep fake?
To begin, deep fakes are created through the use of artificial intelligence. CSO Online delivers a great breakdown of the technical aspect of a deep fake:
Deep fakes exploit this human tendency using generative adversarial networks (GANs), in which two machine learning (ML) models duke it out. One ML model trains on a data set and then creates video forgeries, while the other attempts to detect forgeries. The forger creates fakes until the other ML model can’t detect the forgery.
In simpler terms, one computer keeps blending false images or audio into the real, original footage until another computer can no longer detect any abnormalities.
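To make the “duke it out” idea concrete, here is a deliberately tiny sketch of the adversarial loop, in Python. This is an illustration of the back-and-forth dynamic only, not a real deep fake pipeline: the “forger” produces a single fake number, the “detector” learns what real data looks like, and the forger keeps adjusting until the detector can no longer tell the difference. All names and numbers here are invented for the example.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # what "real" data looks like in this toy world

def real_sample():
    # Real data: noisy values centered on REAL_MEAN
    return REAL_MEAN + random.uniform(-0.5, 0.5)

forger_value = 0.0    # the forger's current fake (starts far from real)
detector_mean = 0.0   # the detector's learned notion of "real"

for step in range(1000):
    real = real_sample()
    # Detector trains on real data: refine its estimate of "real".
    detector_mean += 0.05 * (real - detector_mean)
    # Detector scores the fake by how far it is from that estimate...
    error = detector_mean - forger_value
    # ...and the forger updates to shrink that gap and fool the detector.
    forger_value += 0.05 * error

print(forger_value)   # ends up close to REAL_MEAN: the fake passes as real
```

Real GANs do the same dance with neural networks over images instead of a single number, but the feedback loop — detector sharpens, forger adapts, repeat — is the same.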
Why is this dangerous?
Deep fakes have been around for a while now. They were originally created either to be funny or to be used for revenge. Users have made videos of political figures saying funny things, but others have used deep fakes to create revenge porn to break up marriages or win court cases.
The range of what deep fakes can be used for is vast.
In recent news, attackers created a deep fake of a CEO’s voice. With it, they crafted messages asking employees to transfer funds out of the company. Unsuspecting, the employees complied, under the presumption that the request came from the CEO. The money lost to the deep fake ended up totaling half a million dollars.
Now, we can see the damage that a deep fake can do in a large corporation. But, this type of media can also be used to sabotage the news or even the government.
Deep fakes of the president can be created, like in the example that I mentioned at the beginning of the article. Imagine if a terrorist group chose to create an extremely realistic deep fake of the president and used it to target the US with misinformation that could cause mass panic. This is why so much concern exists around deep fakes.
How do you defend against deep fakes?
Many companies are taking the initiative to fight the spread of malicious deep fakes. For example, Facebook recently partnered with Microsoft to fight the creation of such media.
Facebook opened a deep fake detection challenge, sponsored by major companies that have put forth million-dollar rewards. The hope of this initiative is to draw in other sponsors who will aid in the fight against these fake videos.
Social media platforms have also asked users to report malicious videos and to always check the sources of the content they consume. This is a small but meaningful step toward preventing the spread of deep fakes.
Takeaway
All in all, as consumers we must exercise caution with this type of media. Deep fakes can be used to make funny videos, but also harmful ones. Social media is an enormous platform, and technology is ever evolving, so we must stay cautious online and report suspicious content to stop the spread of misinformation.
By Taylor Ritchey
Check out our article on ransomware to review the basics and find out how to protect yourself from hackers!