The geopolitics of artificial intelligence.
Brave new world.

In this week’s Not in Dispatches, we look at the relationship between geopolitics and artificial intelligence. And speaking of major technological advances, you can now follow Geopolitical Dispatch on Instagram (we’re also on the platform formerly known as Twitter).
HAL hath no fury like a robot scorned.
Artificial intelligence may one day be viewed as humanity’s greatest invention and most powerful tool.
Indeed, it’s not uncommon to hear that the wheel, the printing press, penicillin, and nuclear weapons could all pale in comparison to computers that may one day talk, reason and invent as well as – or better than – humans.
King Charles, opening the UK’s AI safety summit this week, was surely right to say: “we are witnessing one of the greatest technological leaps in the history of human endeavour”.
Predicting how artificial intelligence will ultimately “evolve”, however, is challenging.
Optimists hope AI will boost productivity, accelerate scientific progress and free humans from the shackles of work. Pessimists fear AI will outsmart humans, steal our jobs and, like Frankenstein’s monster, turn on its creators.
Elon Musk, in conversation with UK Prime Minister Rishi Sunak at the summit, embodied both hopes and fears, imagining a world where his awkward son would get a new robot friend while worrying it may suddenly become “not so friendly any more”.
While artificial intelligence represents a technological leap with unpredictable consequences, the fundamental human drives – from the quest for power, to the fear of the unknown – will remain unchanged.
Intrinsic human characteristics will dictate how societies integrate AI, whether we approach its potential with optimism or scepticism, and whether we wield it as a tool for good or evil. And the fundamental drivers of geopolitics – promoting national security and prosperity – will shape both the development of AI and geopolitics itself.
Fear and loathing in Bletchley Park.
The UK’s AI safety summit – the first major diplomatic conference on AI – provides a taste of how governments will compete and cooperate on artificial intelligence.
Twenty-eight governments signed an official declaration that spoke to the dystopian risks posed by AI with its “potential for serious, even catastrophic, harm” and committed to “international cooperation”. The United Nations signalled an intention to create an expert panel akin to the Intergovernmental Panel on Climate Change. Leading AI firms signed up to a voluntary agreement to allow governments to test their latest models for social and national security risks.
Indeed, the summit had the tenor of early climate change conferences: sounding the alarm to newly discovered “existential” risks; promising international cooperation; and proposing to act on an AI version of the “precautionary principle”.
But, as with climate change, national interests may trump the common good.
Despite convening the conference, the UK expressed its ambition to become an “AI superpower”. In the same week, the US and EU moved forward with their own laws and regulations. And some business participants criticised China’s involvement, saying the focus should be on supporting local industry rather than on managing far-off existential threats.
All-too-human ambition, control and self-interest were on display as much as artificial intelligence.
Written by former diplomats and industry specialists, Geopolitical Dispatch gives you the global intelligence for business and investing you won’t find anywhere else.
One small step for AI.
Nations already see AI as a pathway to economic growth, military prowess, and global influence.
Great powers will undoubtedly push for technological supremacy, while those lagging in adoption might face economic stagnation or dependency on AI-leading nations. And so, the incentives to power ahead even in the face of acknowledged risks may well be overwhelming – just as the world has continued to burn fossil fuels while ignoring the warnings of the IPCC’s climate change experts.
But even putting aside the (hopefully) long-term risk of algorithms transforming into malign robot overlords, artificial intelligence will almost certainly affect power dynamics between states. It will also pose major international security dilemmas. And it may even change fundamental features of the international system.
The rise and fall of nations are often driven by economic, political and external pressures.
Since the relative power of states is largely determined by their economic base, nations that effectively deploy artificial intelligence across industry to drive productivity will become more powerful. And those that don’t will fall behind.
If economic history is any guide, nations with robust intellectual property laws, research institutions and supportive regulatory environments might have an edge in AI innovation and deployment.
With the brevity of a media digest, but the depth of an intelligence assessment, Daily Assessment goes beyond the news to outline the implications.
But, as with the space or the nuclear arms races of the twentieth century, state-driven economic models could give more laissez-faire approaches a run for their money.
After all, the USSR beat the US into space, but never made it to the moon, while the US beat the USSR to the atom bomb, but quickly faced a competitor with nuclear parity.
America’s present edge over China on generative AI may be similarly short-lived. And, as during the Cold War, competition may begin as a two-horse race, but technology will ultimately spread. Today, nine countries have nuclear weapons and there are between 500 million and 2.5 billion Teflon pans in the world (according to ChatGPT’s estimates).
How nations handle the potential social, cultural and political disruptions from AI will also impact their relative power.
Democratic societies may face challenges from AI-generated misinformation during elections, not to mention mass unemployment or societal pushback from rapid technological change. Authoritarian regimes may find AI to be an unwelcome empowering tool for individuals. And poorer countries may suffer from a rapidly widening “digital divide” leading to even greater economic inequality.
Nations will pursue AI-driven development according to their prevailing political, societal and economic models.
China became the first to regulate generative AI this summer, doing so with classic Chinese characteristics: algorithms must be assessed for “public opinion or social mobilisation attributes” while AI-generated content must “adhere to core socialist values” and “not incite the subversion of state power”.
EU technocrats have taken a typically more languorous path: spending the past four years studying AI with a view to legislating the findings soon, focusing on protecting citizens’ data and privacy.
And the White House, facing an uncooperative Congress on all things, has skipped the legislative path. This week, Joe Biden issued an executive order requiring developers of AI models that pose national security risks to share their safety test results with the government, while also flagging an “AI Bill of Rights”.
Businesses must adapt not only to exponential technological advances, but also to rapidly changing, fragmenting and inconsistent regulatory approaches from governments as they play catch-up.
The Man Who Saved the World.
As nations invest heavily in AI-driven defence capabilities, such as autonomous weapons and surveillance systems, they may inadvertently escalate tensions.
George Orwell’s novel 1984 is often cited as a cautionary tale about the consequences of totalitarianism and mass surveillance enabled by technology. As artificial intelligence becomes more powerful, the risk increases that it will be used as a tool of repression by authoritarian states. And in a world increasingly divided along geopolitical lines (and eerily similar to Orwell’s Oceania, Eurasia and Eastasia), AI’s potential to empower authoritarian states could increase hostility and friction.
But even more relevant than 1984 is 1983.
In that year, the Soviet early-warning satellite system twice reported to its command centre near Moscow that five US intercontinental ballistic missiles were heading towards the Soviet Union. Stanislav Petrov, the duty officer that night, decided to wait for corroborating evidence, which never arrived, rather than immediately relaying the warning up the chain of command – a decision that most likely prevented a retaliatory nuclear strike that would have triggered a full-scale nuclear war.
As artificial intelligence becomes both more powerful and ubiquitous, governments may be tempted to replace Petrovs with C-3POs.
Taking humans “out-of-the-loop” – not just for battlefield decisions with autonomous weapons but for strategic ones like how to respond to suspected nuclear attacks – may be judged wise to deter adversaries and demonstrate strength. But it could also lead to AI mistaking signals, misinterpreting data or miscalculating risks – with the potential for unintended escalations.
Just as the early years of the nuclear arms race created instability until the two superpowers stumbled on Mutual Assured Destruction, the present AI arms race may lead to instability until an equilibrium is found and governments develop mutually understood protocols for signalling, escalating, and de-escalating.
The End of Geography?
The rising prominence of artificial intelligence – like the invention of the internet before it – will also alter the “geo” in “geopolitics”.
Access to resources like data, computing power and skilled human capital will become even more crucial. Nations with vast digital data resources (like China) will have a distinct advantage in training sophisticated AI models. Nations that are more interconnected and open to international collaborations (like the US) might benefit from faster AI technology diffusion. Traditional trade agreements promoting the free exchange of goods, services and capital could become less important than those promoting data flows.
Back in 1919, Halford John Mackinder, the father of geopolitics, summarised his “heartland theory” of international relations by saying that whoever controls eastern Europe controls the heartland (the centre of the interlinked continents of Asia, Africa and Europe), and whoever controls the heartland commands the world.
In the age of AI, the theory may need a software update.
Control of the “digital heartlands” – key digital infrastructures, data centres, internet chokepoints, and satellite launch facilities – may end up being the most important factor in determining the fate of nations. Or, as Vladimir Putin, no stranger to control of the traditional heartland, recently said: “Whoever becomes the leader in AI will rule the world”.
We hope you are enjoying Not in Dispatches. (And yes, this was written by humans).
Best,
Michael, Cameron, Damien, Yuen Yi, Andrea, and Kim.

