
Deepfakes are getting busted . . . by an upcoming Swedish app

 
 
 
 
DEEPFAKE vs. DEEPREAL? We have all seen the videos on the internet: Obama making fun of Trump, or Mark Zuckerberg speaking into a camera about data control and seemingly confirming every conspiracy theory about his ulterior motives. Deepfakes, fake videos made with AI technology, are only taking their first steps, and they have already unleashed rounds of debate over their use for nefarious purposes and machinations. This is no longer just about manipulating footage to embarrass politicians: global events can now be skewed or distorted badly enough to create security problems for entire governments and for geopolitics at large. Fortunately, that has not happened yet, but the clock on this ticking time bomb is running down. Luckily, there seems to be an answer from a Swedish app that aims to fight fake news with technology of its own. It does not have a name yet, so at Evolvera we propose “Deepreal” . . .
 
“President Trump is a total and complete dipshit,” Obama said in last year’s viral deepfake video, which looked remarkably realistic, were it not for Jordan Peele’s voice giving it away. Buzzfeed’s video showed just how far we have come in the realm of deepfakes. The term was coined in 2017, and the underlying techniques came out of academic research in computer vision. It was not long, however, before intrigued amateurs online started doing similar things for exactly the reason you might guess: pornography. Face-swapped clips were being shared on Reddit, and not only with the faces of supermodels morphed in; no, that would be too easy, but with the actor Nicolas Cage. The internet delivered.
 
 
The term deepfake ultimately derives from the type of AI algorithm used to create the videos and the process behind it, called deep learning. In simple terms, a deep-learning model learns to recreate a person’s face by analyzing thousands of images over and over again until it reaches the desired result. The better the deepfake AI, the more realistic the output, and this realism is why many are worried about abuses. What happens when a deepfake shows, say, the leader of a nation saying something explosive or sensitive? Interestingly enough, the first three viral deepfake videos featured the leaders controlling the world’s largest nuclear arsenals: Barack Obama, Vladimir Putin and Donald Trump. You can see why the wrong words could trigger a dangerous scenario. Fortunately, there is someone to save the day when that happens.
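For the technically curious, here is a minimal sketch of the face-swapping idea that most early deepfake tools were built on: one shared encoder learns a compact representation of “a face in general”, and one decoder per person learns to rebuild that specific person’s face from it. Every detail here, the layer sizes, the 64×64 images, the function names, is a simplifying assumption for illustration, not the code of any real deepfake tool.

```python
# Minimal, illustrative deepfake-style autoencoder: shared encoder, two decoders.
# Real pipelines add face detection/alignment, convolutional networks and GAN losses.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
            nn.Linear(1024, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()   # trained only on images of person A
decoder_b = Decoder()   # trained only on images of person B

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    """One step: each decoder learns to reconstruct its own person's face."""
    optimizer.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()
    return loss.item()

def swap_face(frame_a):
    """The 'fake': encode a frame of person A, decode it with person B's decoder."""
    with torch.no_grad():
        return decoder_b(encoder(frame_a))
```

After enough training on both sets of images, feeding person A’s frames through person B’s decoder is what produces the swapped face, which is why the quality depends so heavily on how many images the model has seen.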
 
Researchers at Lund University in southern Sweden are developing an app that should be able to reveal these manipulations and fakes. In the autumn they will begin work on their own AI algorithm that can tell whether a clip or video of a person is authentic. As one of the researchers on the project, Kalle Åström, puts it: “We may have to get used to the fact that we cannot trust an image or an audio file.” Getting used to that is something most of us have not yet done, and the Swedish team is not the only one battling deepfakes; the US government has its own tools as well. The Swedish team is focusing on deepfakes in news material and will use a technique called a GAN (generative adversarial network), in which two networks in an AI system continuously compete against each other to produce ever more realistic material. The result will be an app, set to be completed in 2023.
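To make the “two networks competing” idea concrete, here is a minimal GAN training loop. It works on plain vectors rather than video frames, and all the names, sizes and thresholds are assumptions for illustration; the Lund researchers’ actual model has not been published.

```python
# Minimal GAN sketch: a generator learns to produce fakes, a discriminator
# learns to tell fakes from real samples, and each improves against the other.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 128

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(real_batch):
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real samples from generated ones.
    d_opt.zero_grad()
    fake_batch = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into saying "real".
    g_opt.zero_grad()
    fake_batch = generator(torch.randn(batch, latent_dim))
    g_loss = bce(discriminator(fake_batch), real_labels)
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

The same adversarial pressure that makes GAN-generated fakes more convincing is what a detection team can harness in reverse: the discriminator half of the setup is, in effect, a trained fake-spotter.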
 
 
 
In other cases, the US Defense Department has been studying deepfake algorithms and initially had a hard time developing an effective tool against them, even after extensively studying 50 deepfake videos. That is, until they stumbled upon a rather simple trick: look at the eyes. The people in deepfakes rarely, if ever, blink, according to MIT Technology Review. Spotting a fake also becomes easier once you notice strange head movements or other peculiarities of a specific video. Building on this, they have developed an effective tool which, for the moment, remains a secret. They say they currently have an advantage over the forgers, but for how long? What will happen when the AI technology becomes even more advanced? Will the Swedish app make deepfake debunking accessible to all in a time of crisis? Will it be downloaded en masse before election cycles to help distinguish fact from fiction? Whatever the case, it should be welcomed at a time when fake news dominates our headlines.
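As a rough sketch of how the eye trick can be automated: track the eye landmarks frame by frame, compute an “eye aspect ratio” that drops when the eye closes, and flag clips in which the subject essentially never blinks. The landmark format (a dlib-style 6-point eye contour) and the thresholds below are assumptions for illustration, not the Pentagon’s actual tool.

```python
# Blink-based heuristic: a clip where the subject almost never blinks is suspect.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of eye-contour landmarks for one frame."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    """Count runs of consecutive 'eye closed' frames as blinks."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    return blinks

def looks_suspicious(ear_per_frame, fps=25.0):
    """People blink roughly every few seconds; near-zero blinking is a red flag."""
    duration_min = len(ear_per_frame) / fps / 60.0
    return count_blinks(ear_per_frame) / duration_min < 2.0
```

Of course, once forgers train their models on footage that includes blinking, this particular tell weakens, which is exactly the cat-and-mouse dynamic the researchers describe.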
 
Evolvera – evolve in a new era  . . .
 
 
 
 
 
 
