
Image - James Davis
Have you ever wondered if AI is going to destroy the world? It’s a question I have found myself considering at times. Luckily for me, behind the windows of the second-floor lecture theatre at Leicester University, my questions will finally be answered. “We’ll just wait ten more minutes as we did have responses from about fifty people,” an organiser announces. It is the first open meeting of the Leicester branch of ‘PauseAI’, and an eager crew of students are preparing to explain to the great and good of Leicester’s student community, at least those who just so happen to be available at 2pm on a Wednesday afternoon, why AI poses an existential threat to humanity and why we should pause it.
I can only assume the twenty students who didn’t come were already so assured of the threat that a lecture on its risk was too much to bear. Among those who were willing to discover how the human race could become extinct, there was an air of tension as we waited to begin. They say you can’t tell what subject students study from their appearance and behaviour, but in the ten-minute wait between when the talk was supposed to begin and when it actually did, trying to do just that seemed the best way to fill the time. The boys with glasses and River Island hoodies to my right oozed engineering vibes. The group sitting at the front in blazers screamed science. The crew perched at the back, as far away from the front as possible, had to be maths students, given the sticker emblazoned on the back of one of their laptops – ‘Mistakes Allow Thinking to Happen, MATH’. This left me, a student of the social sciences dressed in my social science-y, Primark-sourced attire, out of my depth among these AI sceptics of the STEM variety. Nonetheless, my social sciences brain was calibrated and, as the hush ended and the talk began, I opened my mind to try and understand the specifics of exactly how, and why, AI will spell doom for humanity.
The driver of the campaign to halt AI is the ‘alignment problem’: if we carry on developing AI, it will develop goals different from our own, with consequences that we may not be able to control. The speakers explained how AI is developed and how large language models are created. The script was simple: AI models are constantly being upgraded to become stronger. At some point, they will diverge from human goals in a way that poses an existential risk to humanity and will, by then, be too powerful to turn off. Because of this, action must happen now.
On PauseAI’s website, it says: “We don’t think current AI models are an existential threat” but adds, “If we keep building more and more powerful AI systems, we will reach a point where one will become an existential threat.” I didn’t know I was capable of mirroring an orangutan’s contortions but, as this talk went on, I found myself sitting closer and closer to the edge of my seat, hunching further and further forward as I waited for the killer point: how will we know that AI is too powerful, and why won’t we be able to turn it off?
An analogy was offered: “you may say that if it goes too far, why can’t we turn it off? After all, it’s only a computer. But an AI is no more inside a computer than we are a brain inside a head.” What I expected to follow was a clear description of the warning sign. What was it that would tell us that it has gone too far? Would it be a threshold breach of some kind? Would it be a policy event? Is there a timeline for this? My questions remained unanswered. In this case, I may be a victim of a social science brain in a talk designed for the more technical STEM mind, but to me, concerns about AI appear to be missing the obvious. The biggest problem we face is AI’s ability to remove the need for human thought in the workplace. I cannot help but think that we are panicking about an undefined and hypothetical AI-induced extinction event whilst overlooking the prospect of half of the room’s degrees being useless in ten years.
Nobody in that room thought that the world was ending tomorrow, but it was definitely ending soon. It was just difficult to say when, how, and why we couldn’t have stopped it. I hope I can be forgiven for not entirely buying into the idea that current AI models are not an existential threat, that future models will be, and that we must therefore ‘pause’ AI now because of this unspecified and unclear threat. It does feel as if there is an element of “it’s not happening now, but just trust me, it will at some point.”
I am open to the idea that I am a fool, and that I may end up being that one guy in 50 years’ time whom everyone blames for refusing to listen. I did, though, find myself rethinking the premise of the brain-computer analogy, used to argue that AI cannot be turned off, given that AI is famously dependent on massive amounts of energy and water to function. “Surely if we ran out of water and power, AI would cease to function?” I thought, testing the ChatGPT app on my phone by closing it down and seeing if it could turn back on by itself. It didn’t.
‘Regulation’ is a word that appears repeatedly in discussions of AI and its dangers. It is undeniably true that an ideal world would bring regulation of AI in all areas deemed necessary – to allow the best of AI to develop without the pitfalls of immense intellectual, military and political power being held in the hands of a few. This well-reasoned intention, though, clashes with the reality that we are already discussing these issues: proof that regulation isn’t being overlooked. For all the flaws of humanity, I suspect that if we accidentally created an immensely hostile and intelligent tool intent on destroying us, then we would be motivated to alter it – especially given that we developed it in the first place. ‘We need regulation’ feels to me much like saying that ‘we need rain’ after a drought without specifying whether we need a light drizzle or a Noah’s Ark-style flooding situation.
Is the refinement we see now not regulation? OpenAI changed their large language models after gaps allowed people to form intimate relationships with ChatGPT, leaving some heartbroken but keeping AI within the realms of acceptability. Grok, Elon Musk’s AI tool, halted the ability to produce sexualised images of children and women after massive backlash and government pressure. I see no reason why we would suddenly lose our ability to spot the negatives of AI models or become blind to any future hostility. In many ways, the fact that this discussion is even being had is strong evidence that, should such dangers in this “superhuman AI” develop, we would be able to identify the problems and act.
What would pausing AI look like in practice? Would we gather the world’s tech developers together, sit them down and politely ask them to stop thinking? The vision of the giants of Silicon Valley begrudgingly complying, like a misbehaving child told to sit on the naughty step to think about their actions, is hard to imagine. The more likely comparison is the adolescent who continues to play Roblox on their iPad under the sheets after dark on a school night.
It feels as if two separate debates are taking place over AI. On the one hand, there are warnings of an unspecified AI superpower and its threat to the existence of humanity. On the other hand, there is the overlooked and imminent change to the world of work and the employment settlement that has existed since industrialisation. The way AI development is going, there is a very real likelihood that in ten years, many jobs that were once a staple of our service economy will cease to exist. Anything involving data will be easily replaced by AI bots.
Maybe I’m wrong. I often am. Maybe AI will end the world. Whether it does or not, we are going to have to learn to live with it. ‘What if AI ends humanity?’ was the question being asked as I left the building that Wednesday afternoon. Instead, I thought, we should be asking: what do we do when AI leaves half of us unemployed?
James is a third year Politics and International Relations student at the University of Leicester. Interested in British politics and political parties.