Artificial intelligence is embedded in our economies, our public administrations, our security systems, and increasingly in our daily lives. Just as industrial accidents did not halt industrialization, AI-related harms will not reverse the technological trajectory we are already on.
The real question, therefore, is not whether AI will continue to develop, but how society, lawmakers, and legal systems will respond when things go wrong.
Society, Accidents, and Legal Expectations
Historically, societies have tended to react to technological risks only after harm has occurred. AI law may follow the same pattern. The legal system will be asked to answer questions it is not yet fully prepared to address:
• Who is responsible when an AI system causes damage?
• How should liability be allocated?
• Can existing legal frameworks absorb these challenges, or do we need fundamentally new concepts?
These questions become even more urgent in areas such as law enforcement, automated decision-making, healthcare, and critical infrastructure, where AI errors may cost not only money, but human lives.
Lawmakers and the Knowledge Gap
A fundamental concern remains: do lawmakers truly understand what they are regulating?
In many cases, regulation appears to be driven by fear rather than by technical understanding. Legislators are under pressure to “protect society” from economic, social, and environmental risks associated with AI. This protective instinct is legitimate, but fear-based regulation carries its own risks.
If laws are drafted without a clear understanding of how AI systems work, they may either:
• Fail to prevent real harms, or
• Overregulate, stifling innovation without achieving meaningful protection.
Law-making is slow by nature. Technological innovation, by contrast, evolves at an exponential pace. A regulatory process that takes years may already be obsolete by the time it comes into force. This structural lag creates frustration not only among innovators, but also among regulators themselves.
Developers, Regulation, and Regulatory Flight
The role of developers is often overlooked in legal debates. Yet developers are highly sensitive to regulatory environments. If regulation is too strict, overly punitive, or obligation-heavy, developers and companies may simply relocate—to other states, jurisdictions, or regulatory “safe havens.”
This raises an uncomfortable but necessary question for regulators:
What impact will our rules have on AI development itself?
Regulation must strike a delicate balance. If it is too permissive, it may fail to prevent harm. If it is too restrictive, it may drive innovation elsewhere, leaving certain regions technologically dependent rather than technologically sovereign.
Responsibility and the “Human at the End of the Pipe”
Perhaps the most critical legal issue is responsibility. When an AI system causes damage, who should be held liable?
Several options are debated:
• The developer
• The user
• Both jointly
• Or, more radically, the AI system itself through a form of legal personality
In practice, liability often defaults to the human “at the end of the pipe”: the operator or end user who last interacted with the system, even when the true cause lies upstream in design, training, or deployment. Whether that default is fair, or merely convenient, is exactly what the legal debate must resolve.
Environmental Costs and Contradictions
AI regulation cannot ignore environmental impacts. AI systems consume vast amounts of electricity, water, and hardware resources, and they generate electronic waste. At the same time, AI offers powerful tools for environmental protection, such as climate modeling, deforestation prediction, and energy optimization.
This creates a fundamental contradiction:
Can the environmental harm caused by AI be justified by its environmental benefits?
At present, there is no definitive answer. What is clear is that environmental sustainability must become part of the AI regulatory conversation, not an afterthought.
The International Regulatory Vacuum
From an international law perspective, the absence of a global AI treaty is striking. AI is inherently transnational, yet regulation remains fragmented and national. While the EU AI Act represents a bold and pioneering step with its risk-based approach, it cannot function as a global solution on its own.
A future international framework—if it emerges—will likely need to establish at least a minimum common standard, acceptable to jurisdictions as diverse as the EU, the United States, China, and developing economies. Without such coordination, regulatory fragmentation will persist.
Conclusion
We are still at the very beginning of the AI regulatory era. The technology is far ahead; the law is only starting to catch up. For lawyers, policymakers, and scholars, this moment represents both a challenge and an opportunity.
The key task ahead is clear:
to regulate AI not through fear, but through understanding—protecting fundamental rights while allowing innovation to flourish.
Whether we succeed will shape not only the future of AI, but the future of law itself.
Find out more in this speech by Dr Gabor Kecskes.