Economy & Innovation

“It’s the job of the state to secure society”

At the London Regional Chapter event “Artificial Intelligence and the Future of Espionage”, held on April 27 at the Warburg Institute, Sir Richard Dearlove, former Chief of the British Secret Intelligence Service (MI6), and Dr. Anthony Vinci, former Chief Technology Officer and Associate Director for Capabilities at the National Geospatial-Intelligence Agency and author of “The Fourth Intelligence Revolution: The Future of Espionage and the Battle to Save America”, discussed how artificial intelligence is reshaping the future of espionage. The discussion was moderated by Ray Eitel-Porter. Read a short interview with Anthony Vinci on this topic below.

Dear Dr. Anthony Vinci, how do you assess the trade-off between open AI development and closed, controlled systems in terms of national security and global stability—and where do you think the line should be drawn?

Open versus closed AI development is the most important question of our time. Open AI development, as in the US, moves much faster and is likely to achieve AGI sooner – a world-changing event. But AI, especially AGI, is a very powerful technology with immense downsides: the possibility of dystopian levels of total surveillance, intractable disinformation operations and new forms of warfare. While China is already closing off its AI for internal security concerns, the US is very open – so far – but I assess that it will soon begin to regulate and perhaps limit public access to the most powerful model features.

From an intelligence perspective, does restricting access to advanced AI models—like the approach taken by Anthropic—actually reduce risk, or does it create blind spots compared to more open ecosystems historically associated with OpenAI?

It’s the job of the state to secure society, and I believe that a new role of intelligence agencies should be to help assess the potential damage from releasing new technologies, particularly the most powerful AI models. CISA and the NSA, for example, may be best placed to assess the ramifications of Mythos’ cyber security threat. I do think that governments should retain the ability to use such models in order to best ensure national security. But as with any potentially dangerous technology, such as bioweapons, governments should also have the ability to limit access by the public or other nations. This seems prudent to me.

If a Stanislav Petrov–type scenario occurred today in an AI-driven early warning system, would a human still realistically have the time, authority, and willingness to override it—or have we designed modern defense systems in a way that makes that kind of intervention unlikely?

Yes, I think things are already moving too fast for human decisions in some spheres, certainly at the tactical level. The world will only move faster going forward. However, one useful aspect of AI is that it helps us to comprehend very complex information and situations, including thinking through unlikely events. I think this can help us to wargame out possible Stanislav Petrov–type scenarios. It is not foolproof, of course, but thinking ahead is the best way to prepare to make such decisions.