Can Nations Govern Autonomous AI?

India-AI Impact Summit confronts the risks of self-directed systems and synthetic media

In a world increasingly shaped by algorithms rather than assemblies, more than 45 national delegations have converged in New Delhi for the India-AI Impact Summit, a high-stakes diplomatic gathering focused on one pressing question: Can the world cooperate fast enough to manage artificial intelligence before its risks outpace its rewards?

The summit, hosted by India, brings together ministers, technology leaders, civil society representatives, and the Secretary-General of the United Nations at a moment of intensifying global concern. Its timing is no accident. Days before the summit opened, the 2026 International AI Safety Report warned that the rapid rise of “agentic AI” systems and hyper-realistic deepfakes could destabilize economies, democracies, and public trust if governance frameworks remain fragmented.

The message was blunt: AI is no longer an experimental technology; it is geopolitical infrastructure.

From Innovation Race to Governance Race

Over the past three years, the AI narrative has shifted dramatically. What began as a race to deploy chatbots and generative models has evolved into a race to regulate autonomous systems capable of acting, reasoning, and executing tasks with minimal human supervision.

“Agentic AI,” the term for systems that can independently plan and carry out multi-step objectives, has moved from theoretical discussion to enterprise reality. Corporations are integrating AI agents into logistics, customer support, compliance monitoring, and financial trading.

But as capability accelerates, so do concerns.

The AI Safety Report highlights three primary risk zones:

  • Autonomous decision-making without adequate oversight
  • Synthetic media manipulation at political scale
  • Cross-border regulatory fragmentation

These concerns form the backbone of the summit’s agenda.

Deepfakes: The Erosion of Shared Reality

If misinformation was once a matter of misleading headlines, deepfakes now threaten to rewrite visual and auditory truth itself.

Advanced generative systems can fabricate video evidence indistinguishable from authentic footage. In election cycles across multiple continents, governments have reported coordinated deepfake campaigns aimed at undermining political stability.

The summit’s delegates are expected to debate:

  • International watermarking standards
  • Cross-border takedown protocols
  • Real-time detection infrastructure
  • Criminal liability for malicious AI deployment

For developing nations in particular, the stakes are acute. Weak detection infrastructure combined with high social media penetration creates fertile ground for information chaos.

Agentic AI: From Tools to Actors

Unlike earlier AI systems that required human prompting for each action, agentic AI can:

  • Formulate strategies
  • Execute sequences
  • Adjust to feedback
  • Operate across digital systems autonomously

In enterprise environments, this translates into productivity gains. In geopolitical terms, it introduces complexity.

The AI Safety Report warns that agentic systems operating at scale could:

  • Amplify cyber vulnerabilities
  • Exploit financial market gaps
  • Trigger automated escalation in digital conflicts
  • Act unpredictably in poorly supervised contexts

The summit’s core question: How do nations regulate systems that do not reside neatly within their borders?

India’s Strategic Positioning

By hosting the summit, India positions itself as a bridge between advanced AI powers and emerging economies.

New Delhi’s message is carefully calibrated: AI governance must not become a tool of technological protectionism. Developing nations argue that restrictive compliance regimes risk widening the digital divide.

India has proposed a three-pillar framework:

  1. Equitable access to AI infrastructure
  2. Global safety standards
  3. Shared research and capacity building

This approach reflects India’s dual ambition: to lead in AI innovation while advocating for inclusive development.

Fragmented Governance, Shared Risks

The global regulatory landscape is already patchwork:

  • The European Union enforces risk-tiered AI legislation.
  • The United States emphasizes voluntary safety commitments.
  • China promotes state-guided AI oversight.
  • Emerging economies seek flexible frameworks that allow experimentation.

Without coordination, companies face compliance conflicts, and malicious actors exploit gaps between jurisdictions.

Delegates in New Delhi are expected to discuss whether a global AI coordination body, potentially under UN auspices, is necessary.

Such a move would echo climate governance models, where national sovereignty coexists with shared accountability.

Energy, Infrastructure, and Inequality

Beyond safety, the summit also addresses AI’s physical footprint.

Data centers powering generative models now consume electricity at unprecedented levels. As AI adoption scales, so does its environmental impact.

Developing countries worry that AI infrastructure investment could:

  • Concentrate power among hyperscalers
  • Create dependency on foreign compute resources
  • Exacerbate energy inequality

The safety report underscores that AI risk is not purely algorithmic; it is infrastructural.

AI governance must therefore extend beyond software ethics into hardware and energy strategy.

Diplomacy in the Age of Algorithms

Historically, global summits addressed tangible crises: nuclear arms, climate change, financial instability.

AI presents a subtler challenge: it is not a singular threat but a multiplier.

It influences:

  • Elections
  • Defense systems
  • Supply chains
  • Healthcare diagnostics
  • Education platforms

Unlike weapons treaties, AI governance cannot simply limit stockpiles. It must balance innovation with precaution.

That balance is delicate.

Overregulation risks stifling growth. Underregulation risks systemic instability.

Trust as the Defining Variable

At its core, the India-AI Impact Summit is about trust.

Trust that:

  • AI systems behave predictably
  • Governments cooperate rather than compete recklessly
  • Enterprises prioritize safety alongside profit
  • Citizens retain agency in algorithm-mediated environments

The 2026 AI Safety Report concludes with a stark warning: the window for coordinated action is narrowing as capabilities compound.

The report does not advocate halting AI development. It urges synchronized governance before self-directed systems outpace human oversight capacity.

What Success Would Look Like

A successful summit would deliver:

  • A shared definition of high-risk agentic AI
  • International deepfake authentication standards
  • Cross-border research collaboration
  • Agreement on transparency and reporting obligations

Even symbolic alignment would signal progress.

Failure, by contrast, would reinforce regulatory fragmentation and deepen geopolitical AI competition.

A Turning Point, Not a Conclusion

As delegates debate frameworks and language in New Delhi, the world outside the conference halls continues to accelerate.

New AI agents are being deployed daily. Synthetic media tools grow more powerful by the month. Autonomous systems are integrating into finance, defense, and healthcare.

The summit will not resolve every tension. But it may mark a psychological shift: from viewing AI solely as an economic engine to recognizing it as a shared global responsibility.

The question is no longer whether AI will shape the future.

It is whether that future will be coordinated or contested.