Governing the Machine: Why We Need a Global Charter for the Ethical Use of AI

Artificial intelligence is no longer an experiment confined to research labs. It has become embedded in the everyday decisions that shape people’s lives. Algorithms now influence who gets hired, who qualifies for a loan, which patient receives urgent medical attention, and how governments handle matters of national security. AI is already part of the infrastructure of modern society, and while it promises efficiency and innovation, it also introduces new vulnerabilities.

The world now faces two interlocking dangers. The first is the lack of a unified global framework for AI regulation. Different countries are writing their own rules, driven as much by national interests as by safety concerns. This patchwork will produce loopholes and incentives for companies to “forum shop,” routing their operations through jurisdictions with the weakest oversight and pushing compliance-light products into global markets. The second danger is the speed of AI itself. While engineers and corporations iterate at breakneck pace, lawmakers, regulators, and courts move far more slowly. The result is a dangerous lag: a period in which untested and unverified systems are deployed on real people, and the harms become clear only after they have spread.

Together, fragmentation and lag create a vicious cycle. AI is deployed quickly and unevenly, harms are discovered late, and when regulators finally act, their efforts are already outdated or out of sync with the global market. The absence of coordination means no single jurisdiction’s rules can fully contain transnational risks. A model trained under loose oversight in one country can spread across borders instantly, carrying its flaws and dangers with it.

These are not abstract worries. History has already given us warning shots. IBM’s Watson for Oncology (https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/) was marketed to hospitals even though it sometimes recommended unsafe treatments — a vivid illustration of hype moving faster than independent validation. The COMPAS recidivism tool (https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm) in the United States embedded racial biases into sentencing and parole decisions, proving that “black-box” algorithms can reinforce structural injustices. Clearview AI (https://www.edpb.europa.eu/news/national-news/2022/french-sa-fines-clearview-ai-eur-20-million_en) built a business on scraping billions of facial images without consent, only to face fines and bans in Europe after the damage had already been done. These cases underscore a basic truth: voluntary standards and market pressures are not enough to prevent harm when the stakes are health, liberty, and democratic trust.

And yet, despite these lessons, the world continues to regulate AI in fragmented and inconsistent ways. Europe has adopted the most ambitious statute to date, the AI Act, which classifies uses by level of risk and imposes binding obligations on “high-risk” systems. The United States, by contrast, leans on sectoral oversight and voluntary standards to preserve speed and innovation, leaving significant gaps for cross-cutting harms. The United Kingdom has embraced a “pro-innovation” philosophy that emphasises flexibility over heavy legislation. China, meanwhile, pursues a radically different path, embedding AI in a framework of political control, national security priorities, and strict data governance. Smaller and emerging economies experiment with their own combinations of guidance and nascent legislation.

The outcome is predictable: a global mosaic of rules that do not align, cannot easily interoperate, and leave plenty of room for exploitation. Companies can arbitrage between jurisdictions, regulators struggle to keep up, and citizens everywhere are exposed to uneven protections. Worse still, the sheer pace of AI development means that even well-crafted statutes risk becoming obsolete before they are fully enforced. Lawmakers legislate for yesterday’s risks while new capabilities race ahead, ungoverned and unexamined.

This is why proof, not promise, must become the foundation of AI’s legitimacy. No society should have to rely on corporate marketing claims when public safety, civil rights, and democratic institutions are at stake. Independent audits, continuous monitoring, and enforceable transparency obligations are not bureaucratic luxuries; they are the minimal safeguards required when deploying systems that can shape the most intimate aspects of human life. Trust in AI will not come from slogans but from verifiable evidence that these systems are safe, fair, and accountable.

But isolated national approaches are no longer enough. AI is inherently transnational. Its development pipelines cross borders, its deployment is global by default, and its risks cannot be contained within any one jurisdiction. What we need is a shared framework that establishes common ethical principles, minimum safety standards, and mechanisms for coordination across borders. In short, we need a Global Charter for the Ethical Use of AI.

Such a charter would not erase national differences. Countries will always bring their own values, priorities, and political structures to the table. But it would provide a foundation: a set of commitments around transparency, accountability, human rights, and safety that all states agree to uphold. It would make it harder for companies to exploit regulatory gaps and easier for regulators to cooperate when harms cross borders. And, crucially, it would signal to the public that AI is being governed with seriousness and foresight, not left to the mercy of market incentives.

We are at an inflection point. The path of least resistance is to let the current patchwork continue, hoping that market discipline and national regulators will be enough. But history shows that when safeguards are added only after disasters, the costs — measured in human lives, rights lost, and trust eroded — are far greater. The alternative is to act now: to build a global framework that recognises AI’s velocity, respects its transnational character, and sets enforceable floors for safety and accountability.

This is the purpose of our open letter: a call to action for governments, industry leaders, researchers, and civil society to come together and establish a Global Charter for Ethical AI, backed by a system of international cooperation capable of governing, not merely chasing, the machine. We do not have the luxury of waiting until the next scandal forces our hand. Proof, not promise, must guide us, and coordination, not fragmentation, must be the principle.

AI will shape the next century of human life. The question is whether we will shape it together — or allow it to shape us, ungoverned and unchecked.