When people talk about artificial intelligence, they usually focus on the shiny parts: the chatbots, the creative tools, the promise of machines that can think. But the more important story is happening behind the curtain. It’s a story not about algorithms, but about power. And it’s a story that should worry all of us.
Right now, the infrastructure that makes advanced AI possible, from large-scale models to compute clusters, is increasingly controlled by a handful of companies.
The gates of innovation are locked, and only the richest corporations have the keys. Training a top-tier model doesn’t cost thousands—it costs hundreds of millions. That puts academia, nonprofits, and even most governments out of the race.
What happens when only a few players can afford to innovate?
The “AI divide” isn’t just about who has access to the technology. It’s about who gets to decide what AI is for and how it should be developed. And this concentration of AI capabilities enables forms of control that extend far beyond business. Whoever controls the infrastructure controls the pace, direction, and purpose of progress.
This divide risks creating a two-tier system: one in which a small set of actors push the boundaries of what is possible, while the rest of the world is relegated to implementing whatever tools are handed down. The result is not global participation in AI, but global dependency on it.
We like to romanticize startups as the challengers that disrupt monopolies. But in AI, the challengers rarely make it to adulthood. They get acquired. And each acquisition isn't just a headline; it's one less independent vision of what AI could be. Competition shrinks, diversity of approaches shrinks, and soon we are left with the same few firms making the same kinds of choices, always optimized for scale and profit. It's like an ecosystem where every plant has been replaced by the same invasive species.
The concentration of AI power also maps onto a geography of inequality. Advanced infrastructure is clustered in a small number of regions, while much of the world depends on systems developed elsewhere. This creates a form of digital dependency that is harder to break than traditional economic inequality, because it is not just about access to tools but about control over knowledge itself.
That dependency is not only economic; it is also a loss of cultural agency.
Nations import systems trained on data from elsewhere, coded with someone else’s values, run on servers they don’t own. They get the tool, but not the agency.
Most troubling of all is runaway inequality. If the wealth created by AI accrues overwhelmingly to those who own the infrastructure, while millions lose jobs or face displacement without redistribution, the economic gap widens to dangerous levels.
Can a Few Companies Define Ethics for the World?
Big Tech loves to publish "AI principles." They read well. They talk about fairness, transparency, and responsibility. But drafting those principles means deciding what those words mean in practice. If these decisions are concentrated in the hands of a small number of organizations, then so too are the definitions of fairness, responsibility, and safety.
No matter how well-intentioned, a handful of perspectives cannot reflect the values of a diverse global society. Ethics becomes a set of corporate guidelines rather than a collective conversation. And once embedded in technology, these choices scale silently and globally, shaping millions of lives without debate.
Above all, the governance of AI must be transparent, pluralistic, and inclusive. Ethics should not be decided in a boardroom, but through processes that bring together civil society, academia, industry, and communities worldwide.
If these dynamics persist, the consequences extend far beyond today’s markets or tomorrow’s politics. Centralized AI could entrench a techno-feudal order, where a handful of actors permanently control the most powerful general-purpose technology humanity has ever built.
Imagining a Democratic AI
It would start with recognizing AI as a public concern, not just a private product. Governments could invest in public compute infrastructure the way they invest in universities. Open-source projects could be funded at scale, and regulation could be coordinated internationally.
Most importantly, civil society—the people actually living with the consequences of AI—would have a seat at the table. Not just academics and CEOs, but teachers, workers, activists, and voices from all around the world.
The Future We Choose
The concentration of power we see today is not inevitable—it is a political and economic choice. But if we don’t act, the choice will be made for us.
The danger isn’t just that a few corporations will control the future of technology. The danger is that, in doing so, they will control the future of society: who has opportunity, who has agency, who has dignity.
AI should belong to all of us. It should be developed in the open, regulated in the public interest, and deployed with fairness as its foundation.
If we don’t demand that now, we may wake up in a decade to discover that the most powerful technology of our time has already been captured—and that democracy has been quietly rewritten by machines we don’t own, run by people we didn’t elect.
The time to decide is now—before the gates close entirely.