Let’s start with the big question. Are we just a few years away from machines becoming smarter than us? Some say yes. Some say not quite. But the fact that we’re even asking seriously tells you just how fast things are moving.
For decades, the idea of the singularity lived in the realm of sci-fi. It was a plot device. A tech prophecy. A cocktail-party concept people threw around after watching Her or Ex Machina. But now the lines are blurring, and quickly.
The singularity refers to a hypothetical moment when artificial intelligence surpasses human intelligence, becoming self-improving and, potentially, uncontrollable. Sounds dramatic, sure. Yet many leading voices in AI think we’re heading straight for it.
Back in 2005, futurist Ray Kurzweil said we’d hit artificial general intelligence (AGI) by 2029, with the full-blown singularity arriving in 2045. A bold claim at the time. Less so now.
Recent predictions have pulled the timeline forward. A March 2024 report from the AI forecasting group Epoch suggests we might reach AGI by 2033. Others are even more bullish. Some leaders at Anthropic and OpenAI hint at real AGI-level systems being possible in the second half of this decade. As in, before you’ve replaced your next phone.
That’s not wild speculation. These forecasts come from the people building the tools, the ones watching the rate of progress inside their own labs.
In a 2022 survey of over 700 AI researchers, half believed AGI would arrive by 2061. Almost all thought it would happen before the century is out. What’s shifted since? Language models like GPT, Claude, and Gemini have made gains that seemed years, if not decades, ahead of schedule.
Let’s be honest. The idea that machines could soon outthink us is unsettling. Not just because of what they might do, but because of what it says about where we are.
There are upsides, of course. An AGI that works for us could revolutionise medicine, education, climate science and pretty much every field you can name. Imagine cancer treatments designed in days. Personal tutors for every child. Supply chains that predict global disruptions weeks before they happen. It’s not just wishful thinking. These are the kinds of scenarios researchers actually model.
But there are risks too. Big ones.
Geoffrey Hinton, often called the godfather of deep learning, left Google in 2023 to speak openly about his concerns. He’s not worried about killer robots. He’s worried about systems that act in ways we don’t fully understand. That optimise for things we never quite meant. That are smarter than us in a way we can’t unpick.
In May 2023, over 350 AI experts signed a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” No dramatic language. No long explanation. Just that.
That one sentence reignited the debate. Some insiders reportedly think AGI is close; closer than we’d like to admit. Others caution that this kind of intelligence takes more than processing power. It needs deeper reasoning, long-term memory, and better grounding in the real world.
They’re probably both right in some way. We may be nowhere near true AGI and yet also standing on the edge of something that feels a lot like it.
It depends on how you look at progress.
If you see AI as a tool, then these leaps are thrilling. If you see it as a force, they’re more complicated. The truth is, we’re still figuring out what kind of thing AI really is.
But here’s what’s clear: the singularity is no longer just a thought experiment. It’s a real fork in the road. Whether it’s five years away or fifty, the work we do now (how we shape these tools, what rules we set, what we prioritise) will decide which way we go.
So no, this isn’t panic time. It’s planning time. Because you only get one first contact with something smarter than yourself.