Digitalisation & Technology

Bursting the AI Bubble: The Policy Debates Surrounding AI That Everyone Should Know About

AI policy discussions are often obscured by technical jargon and can be difficult to navigate. In this piece, Bruno Steel summarises the three key debates surrounding AI at the moment: the "hollowing out" of entry-level career paths, the escalating environmental and geopolitical costs of physical infrastructure, and the race towards artificial general intelligence. He argues that the pace of development has far outrun the seriousness of our deliberation and suggests we are repeating the mistakes of the nuclear age by sprinting towards a technological frontier without the international institutions necessary to manage the fallout.

Bruno Steel
Apr 29, 2026
12 min read
[Image: a graph surrounded by bubbles representing AI company logos.]

I have spent the last three months working in the tech policy sector, sitting in on panel events and reading everything I can find on artificial intelligence. What has struck me most is how closed off these debates are to the average person. AI policy discussions get buried in technical jargon, and most writing about AI swings between breathless excitement and apocalyptic panic, neither of which is particularly useful.

There is a saying that goes, ‘You do not truly understand something if you cannot explain it to a six-year-old’. So here is my attempt at explaining, in simple terms, the three key policy debates surrounding AI right now. These debates are worth understanding. They are moving faster than the political systems designed to manage them, and they will affect most people long before those systems catch up.

AI and the Economy: Will It Take Your Job?

The honest answer is that AI will likely not take your career in its entirety, but it might take the entry-level job you were hoping to start with.

Previous waves of automation replaced physical and routine work: factory lines, data entry and basic administrative tasks. What makes modern AI different is that it is coming for non-routine cognitive work. The graduate-level, white-collar roles that previous generations were told were ‘safe’, such as research, financial analysis, journalism and coding, are now squarely in AI’s sights. Law firms are already using AI to handle document reviews that would previously have employed dozens of junior lawyers. The ‘Big Four’ consultancies, Deloitte, KPMG, PwC and EY, advertised 44% fewer graduate vacancies in 2025 than two years prior, with AI explicitly cited as a factor.

For students, the clearest manifestation of this concern is entry-level work. The first rungs of the career ladder, roles such as junior analyst, trainee solicitor and graduate researcher, are all disproportionately exposed to AI substitution. The worry is not mass unemployment in a dramatic sense. It is something quieter and harder to fix: a hollowing out of the career pathways that previous generations used to build skills and climb the ladder.

The students most likely to remain indispensable are those who engage directly with these tools, who understand what AI can and cannot do and who can work alongside it rather than in ignorance of it. For now, the responsibility to adapt sits with individuals rather than with our educational institutions, and that is a political failure worth naming.

Regulation and the Infrastructure Surrounding AI

When people think about AI regulation, they tend to imagine debates about chatbots saying offensive things or deepfake videos of politicians. The real infrastructure debates are stranger and more consequential.

Let’s begin with the physical reality of AI. These systems are not magic: they function only because of huge data centres that consume extraordinary quantities of electricity and water. Training a single frontier AI model can consume as much energy as hundreds of homes use in a lifetime. Google's greenhouse gas emissions rose nearly 50% in the past five years and Microsoft's by 23% since 2020, increases both companies largely attribute to AI infrastructure, despite having net-zero pledges officially on the books. Climate goals and AI infrastructure are now on a collision course; in modern policy discourse, the two have become inseparable.
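The "hundreds of homes" comparison is easy to sanity-check with back-of-envelope arithmetic. Here is a minimal sketch in Python, where every figure is an assumption for illustration rather than a reported number: roughly 50 GWh for one frontier training run, a household using about 4,000 kWh a year, and an eighty-year lifetime.

# Back-of-envelope check of the "hundreds of homes" claim.
# All figures are rough assumptions for illustration only.
training_energy_kwh = 50_000_000   # assumed: ~50 GWh for one frontier training run
household_kwh_per_year = 4_000     # assumed: typical annual household electricity use
household_lifetime_years = 80      # assumed: one human lifetime

lifetime_kwh = household_kwh_per_year * household_lifetime_years
homes = training_energy_kwh / lifetime_kwh
print(f"roughly {homes:.0f} home-lifetimes of electricity")  # -> roughly 156

On those assumptions, a single training run lands in the hundreds of home-lifetimes; reasonable alternative assumptions move the number around, but not out of that order of magnitude.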

Then there is ‘compute’, the industry's shorthand for the processing power required to train and run AI models, which has become the defining constraint in frontier AI development. The vast majority of the world's most advanced AI chips are fabricated by a single company: Taiwan Semiconductor Manufacturing Company (TSMC). The United States has imposed sweeping export controls on advanced chips to China, some of the most aggressive industrial policy in decades. If you want to understand why Taiwan sits at the centre of US-China tensions, look no further than the global chip supply chain. Scholars have dedicated entire careers to studying this; Chris Miller's Chip War is a good starting point for understanding this geopolitical chokepoint.

But what about regulation? The European Union has passed the AI Act, the first comprehensive piece of AI law in the world. Meanwhile, the United States is moving in a deregulatory direction, and the United Kingdom is attempting a middle ground: pro-innovation but safety-conscious. There is no enforced global framework, no international treaty and no agreed set of rules. For a technology developing this quickly, that absence is alarming.

AGI and Existential Risk: The Debate You Are Allowed to Take Seriously

This is the section most likely to make you feel like you have wandered into a science fiction novel, but stay with me here.

Artificial General Intelligence (AGI) refers to a system capable of performing any intellectual task a human can, across any domain, and potentially surpassing human performance in all of them. Nobody has built it yet, but the organisations closest to doing so are OpenAI, Google DeepMind and Anthropic. Each has AGI as an explicit, stated goal in its founding documents. This fact alone deserves more public attention than it receives.

The concern, known as the alignment problem, is that a sufficiently powerful AGI system could optimise for a poorly specified goal and cause catastrophic harm without any malicious intent.

Think less Terminator (a system built for broad destruction) and more hungry toddler: an AGI could prioritise a narrow objective and pursue it so efficiently that it crowds out everything else we value. Geoffrey Hinton, who won the Nobel Prize in Physics in 2024 and is widely regarded as a founding figure of modern deep learning, resigned from Google specifically so he could speak freely about exactly this risk. Yoshua Bengio, another giant of the field, has become one of the most prominent voices calling for safety-first AI policy. These are not science fiction writers or conspiracy theorists; they are the people who built the foundations of the technology. When they are scared, we should all be paying attention.
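For readers who think in code, here is a deliberately toy sketch of that dynamic in Python, with every action name and number purely hypothetical. An optimiser that picks whichever action scores highest under its reward function will cheerfully choose the harmful option if the reward omits something we care about:

# A toy illustration of the alignment problem, not how any real system works.
# The reward below is misspecified: it measures tidiness but forgets to
# penalise breakage, so the optimiser picks the harmful action.
actions = {
    "clean the room carefully":    {"mess_removed": 8,  "vases_broken": 0},
    "sweep everything into a bin": {"mess_removed": 10, "vases_broken": 3},
}

def misspecified_reward(outcome):
    return outcome["mess_removed"]  # missing: a penalty for outcome["vases_broken"]

best_action = max(actions, key=lambda a: misspecified_reward(actions[a]))
print(best_action)  # -> "sweep everything into a bin", vases and all

No malice appears anywhere in that snippet; the harm comes entirely from what the objective leaves out, which is the whole point.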

There is a concept in astrophysics called the Fermi Paradox: given the vast age and size of the universe, where is everyone? If intelligent life is common, why have we detected no sign of other civilisations? One proposed answer, known as the Great Filter, is the idea that civilisations advance up to a certain point and then engineer their own destruction.

I do not raise this argument to be dramatic. I raise it because at a recent AI Policy Initiative event, I found myself drawing a parallel between the development of AGI and the creation of nuclear weapons: two technologies powerful enough to be candidates for a ‘filter’ of our own making. An AI risk researcher in the room pushed back on my comparison. Those building AGI, he argued, tend to dismiss the Great Filter framing because a superintelligent system wouldn't automatically develop the desire to destroy humanity. A sufficiently advanced AGI might simply choose to exist quietly, operating within our digital systems in ways undetectable to us, prioritising its own survival but not at the expense of ours.

While intellectually stimulating, such theories offer little comfort to policymakers tasked with managing the immediate "bad actors" problem: the risk that individual states or groups may weaponise this technology before a global governance framework exists to constrain them. If we cannot establish international rules for AGI, the Great Filter may not be the technology itself but our own failure to build the institutions necessary to manage it.

We Have Been Here Before

In a recent interview, Demis Hassabis of Google DeepMind estimated that AGI could arrive by the end of this decade. Dario Amodei of Anthropic has suggested it could be as little as one to two years away. What is striking is not the disagreement on timelines but the shared willingness to press ahead regardless. Much of this willingness stems from a competitive logic: companies such as OpenAI have already shown they will push forward, and the others believe that matching this pace is a matter of survival.

I find myself wondering whether the pace of development has outrun the seriousness of the deliberation surrounding it. Hassabis speaks eloquently about how AGI will eventually allow us to look outward toward the stars. Personally, I believe we should fix the problems on our own planet before reaching for others, and we have no shortage of them here.

J. Robert Oppenheimer, watching the first nuclear detonation at Trinity in 1945, recalled the words of the Bhagavad Gita: 'Now I am become Death, the destroyer of worlds.' We built nuclear weapons without adequate international institutions and spent the following decades constructing those institutions in retrospect, all the while living in the shadow of a danger that already existed. We are making the same choice again, faster and with less public deliberation. We are weighing the undeniable benefits of technological breakthroughs against costs we have not yet bothered to calculate, from environmental degradation to the loss of human agency. The least we can do is be better informed about the path we are on.

Bruno Steel (MPP 2025) was born and raised in Edinburgh, Scotland and currently works at Interface, a tech policy think tank in Berlin, supporting the fundraising and partnerships team. His policy interests span technology governance, migration and human rights, informed by experience in the humanitarian and third sectors, including refugee aid programmes in Lesvos, Greece. In his spare time, he enjoys travelling and climbing mountains in the Scottish Highlands.

Enjoyed this piece? The Hertie School AI Policy Initiative runs regular events bringing students into contact with researchers and policymakers working on exactly these policy debates.
