Foreign and Security Policy

“At the tip of the iceberg with regard to disinformation”


Alina Polyakova regards the USA and Europe as inadequately prepared for the threat posed by so-called Deep Fakes. Polyakova, the David M. Rubenstein Fellow at the Brookings Institution, analyzes disinformation campaigns produced with the help of Artificial Intelligence. Both social media companies and policy makers have a responsibility to act, she says.

Interview: Tyson Barker


Dr. Polyakova, when we talk about disinformation campaigns and political warfare, the year 2016 stands out. The presidential election in the United States and the Brexit referendum were the wake-up call for the West. What has happened since, and what is the next iteration of this threat?

What happened in 2016 was not new in terms of Russia’s long-standing desire to undermine and influence politics in other countries such as Ukraine, Georgia and other states closer to Russia, which it considers legitimate testing grounds for these kinds of operations. This tool kit has since become very diffuse. We are seeing other state actors using these kinds of techniques, and Iran, North Korea and China will eventually enter this space in a significant way as well. If you look at what Russian intentions and activities have been in places like Ukraine, we can make some assertions about what we might expect.

First of all, disinformation campaigns, especially on social media, have not stopped. Secondly, they have certainly evolved: we now see a lot of content that we can identify as disinformation from the Russian side jumping across platforms and being amplified on Facebook, Twitter and Instagram. Google results are being manipulated for certain search terms. The social media companies have responded to a certain extent. In my view, their response is relatively superficial and minimal. They have started taking down disinformation networks and down-ranking automated accounts. However, we are quite far behind in truly addressing this threat.
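To make the point about down-ranking automated accounts concrete, here is a minimal, purely illustrative Python sketch of the kind of heuristic signals such a system might combine; the field names, weights and thresholds are hypothetical and do not describe any platform’s actual ranking system.

```python
# Toy heuristic for flagging likely automated accounts.
# All fields, weights and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float          # average posting rate
    duplicate_share: float        # fraction of posts duplicating other accounts' text
    follower_following_ratio: float
    account_age_days: int

def automation_score(a: Account) -> float:
    """Return a 0..1 score; higher means more bot-like (toy heuristic)."""
    score = 0.0
    if a.posts_per_day > 50:               # sustained high-volume posting
        score += 0.4
    if a.duplicate_share > 0.5:            # mostly copied or amplified content
        score += 0.3
    if a.follower_following_ratio < 0.1:   # follows many, followed by few
        score += 0.2
    if a.account_age_days < 30:            # very new account
        score += 0.1
    return min(score, 1.0)

suspect = Account(posts_per_day=120, duplicate_share=0.8,
                  follower_following_ratio=0.05, account_age_days=12)
print(automation_score(suspect))  # 1.0 -> candidate for down-ranking or review
```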


Ukraine has been the laboratory for these kinds of tools and techniques. Do we see new mutations of these tools being used in Ukraine?

One positive outcome of the exposure to these influence operations in the information space has been hyperawareness. There has been a general acknowledgement that Ukraine has been the primary victim and target of Russian efforts. There have been attacks on critical infrastructure, especially on electrical grids. We are going to see far more of these kinds of aggressive attacks. The presidential elections in Ukraine in March are just around the corner. This is going to be an event to watch.

You wrote an excellent report at Brookings entitled “Weapons of the Weak” in which you highlighted the return on investment of so-called Deep Fakes. What exactly are we talking about here?

‘Deep Fakes’ has become a catch-all term for the manipulation of audio and video content online. Deep Fakes are A.I.-driven, A.I.-produced content that is incredibly advanced and appears original. Doctored videos and images are nothing new. But this concept is profoundly different because our ability to detect manipulated video and audio has not caught up with the ability of these algorithms to produce false audio and video material.


Can you give us an example?

Here is a quick and famous one: If you google Obama and Deep Fake, you will see four videos of President Obama speaking. I regularly show these and give talks about them. I always ask the audience: which one is the real Obama? People pick one, but all of them are fake, of course! This signals to us that A.I. is closely linked to Big Data. If you can feed a huge amount of data into such an algorithm, it can produce a new video. It looks so real and so convincing that we can’t detect that it is false content.
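As a rough illustration of the mechanism described here, the sketch below shows the generative-adversarial training loop that underlies many deep fake systems, reduced to one-dimensional toy data. It assumes PyTorch is installed and is in no way a production deep fake pipeline, only a sketch of how a generator learns from data to produce convincing fakes.

```python
# Minimal GAN sketch on 1-D toy data: a generator learns to produce samples
# the discriminator can no longer tell apart from the "real" data.
# Illustrative only; real deep fakes use far larger models and video datasets.
import torch
import torch.nn as nn

real_data = torch.randn(10_000, 1) * 0.5 + 3.0   # stand-in for "a huge amount of data"

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2_000):
    real = real_data[torch.randint(0, len(real_data), (64,))]
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator step: learn to separate real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to produce samples the discriminator accepts as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 8)).detach().squeeze())  # fakes near the real mean of ~3.0
```

The same adversarial dynamic is part of why detection lags behind generation: whatever a detector learns to spot, a generator can in principle be trained against.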


Frankly, some implications of this phenomenon are frightening. An audio voicemail left for troops serving in the Baltic states, or an e-mail arriving with a video of the commanding officer sharing what is actually false and misleading information, takes some time to verify. Meanwhile, it serves to disorient potential military operations. Fake videos are often much more fun to look at because they are more sensationalist than accurate content. They spread much faster than accurate content, and it takes longer to debunk their false narratives. The underlying technology is open source, so it is out there. We have to prepare ourselves for this.

Russia will probably not be the developer of this technology but more of an early adopter and weaponizer of it. Who is developing this technology?

Russia has great military capacities and of course a nuclear arsenal. But if you look at its financial capabilities, its struggling economy and its declining demographics, Russia doesn’t look like a great power. From the Russian perspective, investing in these kinds of technologies and asymmetric threats is a great way to balance out the other inequities that the country experiences vis-à-vis Western Europe and the United States.


Well, who are the innovators? Certainly, it is a global market. If we look at a few different indicators, it becomes very clear who is going to be an innovator. Obviously, you need well-trained, sophisticated, highly skilled people in machine learning. You also need access to massive amounts of data. A.I. has been around for decades; some of the first A.I. algorithms were written in the 1950s. In fact, the new algorithms are not that different. However, the critical change is that we now have the computing power to process data and a mass availability of data. China will consequently lead in this sphere. Whoever controls the data will control the world.

Another leader in this space is the United States. How should governments regulate A.I. to protect our democracy from these kinds of technologies being weaponized in civil society?

I haven’t seen any serious regulatory or legislative efforts in the United States so far to grapple with this problem, mainly because we don’t see a clear technological solution. On the regulatory side it is very difficult to see where this will go. There is a general reluctance among policy makers in both the Republican and Democratic parties to pursue regulatory efforts aimed at the big tech firms, which are driving so much of the economic growth in the United States and are also funding many congressional campaigns.


We have to start from the very beginning. This means policy makers need to know what they are talking about before they start legislating. Right now, there is a serious gap between tech and policy when it comes to understanding the technology. There is some precedent for dealing with big new industries and big revolutionary changes, such as forcing companies to adopt common terms of use. The U.S. Congress forced this on credit card companies, and it also forced regulatory measures on big tobacco back in the 1960s.

Is there increasing pressure on the companies to take more responsibility?

The pressure is certainly there. Facebook and Twitter have made some efforts to deal with this. If I had to rank them in terms of how cooperative they have been with researchers and government, Twitter is the most transparent in sharing data on some of the networks identified as malicious. The companies now describe this kind of manipulative behavior as coordinated inauthentic activity. We have heard about a number of highly publicized takedowns of networks and pages on Facebook associated with Sputnik, for example. But we don’t have a good sense of whether this really makes a difference. We are very much at the tip of the iceberg. The platforms are perfectly designed to be the vectors of the diffusion of disinformation.


How do you deter an authoritarian state where civil society doesn’t really exist?

That is where governments can be most effective. They should send very clear messages, via intelligence channels, about the consequences of specific actions against our critical infrastructure. You have to convey that there is a price to pay if you attack our societies. There has to be an economically punitive component. Others have suggested not so much a defensive strategy as an offensive one. But that is very dangerous.


Tyson Barker is Deputy Director and Fellow at the Aspen Institute Germany. He is a foreign and economic policy professional with in-depth experience working at senior levels in government and the think tank community. He is a 2014 Young Leader Alumnus of Atlantik-Brücke.
