
"AI Superintelligence”: what it is and why we are not ready for it yet

When the Future of Life Institute (FLI) published a new statement calling for a ban on AI superintelligence, the announcement instantly went viral, not just because of what it said but because of who signed it.

 

Among the signatories were Prince Harry and Meghan Markle, Geoffrey Hinton (the “Godfather of AI”), Steve Wozniak, Steve Bannon, and even former U.S. National Security Adviser Susan Rice — a mix of scientists, political figures, economists, and cultural icons.

 

The message was simple but explosive:

 

No one should develop “superintelligent” AI until we have scientific consensus that it can be done safely, and strong public support for doing so.

It’s a call that has already gathered more than 60,000 signatures.

But behind the headlines lies a deeper, more complicated story about how we talk about AI, how we fear it, and how language can both clarify and distort one of the most important debates of our time.

 

Understanding the Evolution of Intelligence

 

Artificial Intelligence didn’t start as an existential threat. It began as a technical dream: building machines that could perform tasks that normally require human intelligence.

 

In the scientific literature, we often find three broad stages:

 

  1. Narrow AI – what we have today. AI that excels at specific tasks: writing text, recognizing faces, predicting protein structures. Systems like ChatGPT, AlphaFold, or Midjourney are brilliant but specialized. They don’t “understand” the world the way humans do.
  2. Artificial General Intelligence (AGI) – the next theoretical milestone. An AGI would be able to reason, learn, and adapt across multiple domains like a human mind. It could write poetry one moment and design software the next, understanding both the logic and the emotion behind each task.
  3. Artificial Superintelligence (ASI) – a leap beyond humanity. A hypothetical form of intelligence that surpasses us in every dimension: scientific creativity, strategic reasoning, emotional insight, and even self-improvement.

 

At that stage, we’re talking about a form of intelligence so advanced it might reshape civilization itself.

 

That’s why some call ASI a “digital god.”

Not in a religious sense, but as a metaphor for something omniscient, autonomous, and potentially uncontrollable.

 

The Problem with the Term “Superintelligence”

 

And this is precisely where the debate becomes tricky.

The word “superintelligence” is emotionally charged. It evokes images of omnipotent machines, human extinction, or a silicon deity ruling the planet.

But scientifically, we’re still struggling to define what “general intelligence” even means, let alone how to create something that surpasses it.

So when public figures or organizations use “superintelligence” as if it were imminent, it risks distorting the conversation. It collapses decades of scientific research, ethics, and governance into a single fear-driven term.

It’s not that the danger is fake; it’s that the framing can become sensationalistic.

And sensationalism can be both powerful and dangerous: it grabs attention, but it can also mislead.

 

 

By calling for a “ban on AI superintelligence,” FLI pushes the conversation out of the tech bubble and into the mainstream. It brings AI risk, usually confined to research papers and ethics panels, into headlines, podcasts, and dinner-table conversations.

In other words: they use “superintelligence” not because it’s the perfect scientific term, but because it’s the one people hear.

Still, even the best intentions face harsh reality.

Calling for a global prohibition on the development of “superintelligence” sounds noble, but turning that into enforceable policy is almost impossible right now.

 

The obstacles are enormous:

 

• Technical ambiguity: We don’t yet have clear metrics to define what counts as “superintelligent.”

• Legal limitations: No international law exists to regulate the boundaries of AI capability.

• Economic incentives: The AI race is deeply tied to national competitiveness and economic power.

• Geopolitical fragmentation: Expecting the U.S., China, Europe, and others to agree on a shared AI moratorium is, at least for now, unrealistic, especially while some of these countries are racing to build AGI.

 

Implementation would require unprecedented global coordination, the kind of collaboration humanity has rarely achieved, even in the face of existential risks like climate change.

 

Why It Still Matters

 

And yet, the symbolic power of this call shouldn’t be underestimated.

 

Signing the statement is not just about policy; it’s about public awareness.

It signals that people across the political and cultural spectrum, from scientists to celebrities, recognize that the trajectory of AI development affects everyone.

 

Even if the ban never materializes, the conversation it triggers is vital. It forces society to reflect on where we are, where we’re going, and what “progress” really means.

 

Whether you view the idea of “AI superintelligence” as a legitimate concern or as futuristic speculation, one truth remains: if we let fear lead, we risk making AI discourse a mirror of our collective fears instead of our collective wisdom. But if we ignore the issue entirely, we risk walking blindly into a future we don’t understand.

 

We don’t need to build a digital god, but we do need to build digital systems that respect human dignity, safety, and accountability.

 

In the end, the real challenge is not about banning “superintelligence.” It’s about governing intelligence with humility, precision, and courage.