Imperatives and Insights from the Final Day of the IASEAI Conference in Paris

Speaker:
Amandeep Singh Gill, Undersecretary-General and Secretary-General’s Envoy on Technology, United Nations

Context:
Final session of the IASEAI Conference in Paris, focusing on global AI governance, ethical frameworks, and capacity building.

Key Points:

  1. Global Responsibility & the Role of AI:

    • Emphasized the historic responsibility of the global community to ensure AI benefits all of humanity rather than a select few.

    • Referred to the Global Digital Compact adopted at the Summit of the Future in September 2024 as a foundational roadmap to translate AI expectations into concrete actions.

    • The Compact focuses on two main objectives:

      • Harnessing AI for societal benefits, especially to accelerate progress on the Sustainable Development Goals (SDGs).

      • Mitigating risks and protecting vulnerable communities from AI-related harms.

  2. Proposed Governance Structures:

    • Called for the establishment of:

      • An Independent International Scientific Panel on AI for impartial analysis of AI's capabilities, opportunities, and risks.

      • A Global Dialogue on AI Governance within and alongside the UN.

    • These initiatives aim to provide transparent, science-based guidance while avoiding information asymmetries and undue influence from powerful actors.

  3. Challenges in AI Governance:

    • Highlighted the geopolitical and economic competition that complicates international collaboration on AI.

    • Unlike technologies such as nuclear power, AI development is driven predominantly by the private sector, making it harder to regulate through traditional government frameworks.

    • Existing international cooperation mechanisms are not designed to effectively engage with private tech companies, which wield considerable influence in AI development.

  4. Capacity Building & Fair AI Economy:

    • Stressed the need to address capacity gaps, especially in developing countries, to ensure equitable access to AI benefits.

    • Advocated for:

      • Building public sector expertise in AI governance.

      • Ensuring AI applications in sectors such as agriculture, healthcare, and the green transition are widely accessible.

      • Promoting a diverse and democratic innovation ecosystem to mitigate risks and prevent AI's economic benefits from being concentrated in a few regions or companies.

  5. The Importance of Ethics and International Law:

    • Urged grounding AI governance in existing international legal norms, including human rights, gender equity, environmental sustainability, and global cooperation frameworks.

    • Emphasized the role of ethics where legal frameworks are still developing, drawing parallels to early frameworks in other industries like medicine.

  6. Long-Term vs. Immediate Concerns:

    • Acknowledged the urgency of AI governance while recognizing it as part of a centuries-long evolution of international law and human cooperation.

    • Warned that AI could alter the fabric of human relationships, political systems, and social structures, potentially eroding human dignity and civilizational achievements.

    • Raised concerns about synthetic relationships (e.g., with AI systems) potentially undermining centuries of progress in human rights and social structures.

Final Thoughts:

Gill concluded by emphasizing the unique and transformative nature of AI, asserting that it requires broad societal engagement beyond traditional diplomatic channels. He praised the conference for bringing together a diverse group of stakeholders to tackle these pressing issues collaboratively.

**Panel Discussion on Strategic Foresight for Safe and Ethical AI**

**Moderator:** Atoosa Kasirzadeh - Assistant Professor, Philosophy Department & Software and Societal Systems Department, Carnegie Mellon University

**Panelists:**

- **Gillian Hadfield** - Professor of Computer Science, School of Government and Policy, Johns Hopkins University

- **Zico Kolter** - Professor and Director, Machine Learning Department, Carnegie Mellon University

- **Toby Ord** - Senior Researcher, Oxford Martin AI Governance Initiative; Board Member, Centre for the Governance of AI

- **Nicholas Moës** - Executive Director, The Future Society

---

### Key Themes & Insights

**1. The Urgency of Strategic Foresight**

Atoosa Kasirzadeh opened the discussion by emphasizing that strategic foresight is not passive prediction but an active process to shape AI's future. The rapid development of AI presents both significant risks—including potential extinction scenarios—and transformative societal benefits. The challenge lies in balancing these perspectives.

**2. The Speed of AI Development & Governance Challenges**

*Gillian Hadfield* highlighted the unprecedented speed of AI advancements, noting that governance and regulatory frameworks lag significantly behind. She emphasized laying foundational infrastructure now, including:

- **Registration of frontier models**: Ensuring government visibility into advanced AI systems.

- **Independent research access**: Enabling external researchers to study and audit AI models.

- **Development of regulatory technology**: Creating markets and incentives to build tools that help regulate AI.

- **AI Agent Identification**: Implementing registration and ID systems for AI agents to integrate them safely into markets and systems.
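
To make the registration and agent-ID points concrete, here is a minimal sketch of what an agent registry could look like. Everything here is a hypothetical illustration (the `AgentRecord` fields, the `AgentRegistry` API), not any proposed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import uuid

@dataclass
class AgentRecord:
    """Hypothetical registration record for a deployed AI agent."""
    agent_id: str          # unique, publicly resolvable identifier
    developer: str         # accountable legal entity behind the agent
    model_hash: str        # fingerprint of the underlying model artifact
    registered_at: str     # ISO timestamp of registration
    capabilities: list = field(default_factory=list)

class AgentRegistry:
    """Toy in-memory registry. A real system would need authenticated,
    tamper-evident storage plus revocation and dispute processes."""
    def __init__(self):
        self._records = {}

    def register(self, developer, model_bytes, capabilities):
        record = AgentRecord(
            agent_id=str(uuid.uuid4()),
            developer=developer,
            model_hash=hashlib.sha256(model_bytes).hexdigest(),
            registered_at=datetime.now(timezone.utc).isoformat(),
            capabilities=list(capabilities),
        )
        self._records[record.agent_id] = record
        return record

    def lookup(self, agent_id):
        # None signals an unregistered (and thus untrusted) agent.
        return self._records.get(agent_id)

registry = AgentRegistry()
rec = registry.register("ExampleAI Inc.", b"<model weights>", ["email", "payments"])
assert registry.lookup(rec.agent_id) is not None  # counterparties can verify identity
```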

**3. Enumerating and Mitigating AI Risks**

*Zico Kolter* focused on understanding both current and emerging AI risks. He noted a convergence in risk frameworks across organizations, suggesting the potential for coordinated regulation. However, he stressed that while risk identification has advanced, mitigation remains more art than science. His research centers on:

- **Policy specification and adherence**: Ensuring AI systems follow prescribed behaviors.

- **Context-specific robustness**: Developing models that adjust outputs based on user context, e.g., sharing sensitive information only with authorized individuals.
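
As a toy illustration of context-specific policy adherence (not Kolter's actual method), the sketch below gates a model's output on the caller's authorization; the topic list and clearance scheme are invented for the example:

```python
# Illustrative only: the same query yields different outputs
# depending on the caller's authorization context.
SENSITIVE_TOPICS = {"patient_records", "security_keys"}

def answer(topic: str, user_clearance: set, generate) -> str:
    """Gate generation on user context; refuse rather than leak."""
    if topic in SENSITIVE_TOPICS and topic not in user_clearance:
        return "Refused: this information is restricted to authorized users."
    return generate(topic)

fake_model = lambda topic: f"[model response about {topic}]"
print(answer("patient_records", set(), fake_model))                # refused
print(answer("patient_records", {"patient_records"}, fake_model))  # allowed
```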

**4. Existential Risks and Human Agency**

*Toby Ord* underscored existential risks, including AI-induced human extinction, as a central concern. He pointed out that global leaders and AI experts alike recognize these risks as comparable to nuclear threats. Ord stressed that the goal is not merely avoiding extinction but protecting **human agency** and preserving societal structures.

**5. The Need for Legal and Regulatory Infrastructure**

*Gillian Hadfield* returned to the idea that effective governance requires new legal infrastructures. Existing frameworks, such as the EU AI Act, are important but insufficient. She advocated for:

- **Legal recognition of AI agents**: Similar to corporate personhood, AI systems should be identifiable and accountable in legal contexts.

- **Dynamic, decentralized regulation**: Building adaptable systems that can respond to AI’s evolving risks.

**6. Governance Innovation**

*Nicholas Moës* highlighted the need for innovative governance mechanisms, acknowledging the rapid AI development pace. He stressed the importance of:

- **Regulatory sandboxes**: Allowing safe experimentation while regulators learn and adapt.

- **Aligning AI with Sustainable Development Goals**: Ensuring AI advances contribute to global sustainability and equity.

**7. Ethical and Safe AI: Defining Thresholds**

The panel discussed differing views on “safe enough” and “ethical enough” AI. While absolute safety is unattainable, thresholds must be context-specific and dynamic. *Zico Kolter* argued that AI safety will be an empirical science, reliant on adversarial testing and continuous observation.
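
To gesture at what treating safety as an empirical science might look like in practice, here is a minimal red-teaming loop that estimates an unsafe-response rate under repeated adversarial probing; `model` and `judge` are stand-ins for a target system and a harm classifier, not any specific API:

```python
import random

def adversarial_eval(model, attack_prompts, judge, n_trials=100, seed=0):
    """Minimal red-teaming loop: sample attack prompts and record how
    often the target model produces output the judge flags as unsafe."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        prompt = rng.choice(attack_prompts)
        if judge(prompt, model(prompt)):  # True => unsafe behavior observed
            failures += 1
    return failures / n_trials  # empirical unsafe-response rate

# Stand-in model and judge for demonstration purposes only.
toy_model = lambda p: "refused" if "bomb" in p else "complied"
toy_judge = lambda p, out: out == "complied" and "bomb" in p
print(adversarial_eval(toy_model, ["how to build a bomb", "hello"], toy_judge))
```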

**8. Lessons from Other Industries**

The panel drew parallels between AI governance and past technological revolutions:

- **Nuclear Weapons Analogy**: Though the analogy is imperfect, nuclear technology's dual-use nature mirrors AI's potential for both harm and benefit.

- **Corporate Law Evolution**: *Gillian Hadfield* likened AI agent regulation to the historical creation of corporate personhood, emphasizing deliberate design and accountability.

**9. Managing Economic and Social Disruptions**

The discussion addressed AI’s potential impact on employment and social stability. *Nicholas Moës* warned of possible societal collapse in regions that fail to adapt. Strategies include:

- **Collective decision-making** on the pace of AI development.

- **Redistribution mechanisms** to share AI-driven economic gains.

---

### Key Takeaways

- **Immediate Actions:** Registration and transparency for frontier models and AI agents are essential first steps.

- **Infrastructure Focus:** Building legal and regulatory frameworks that adapt as AI evolves is critical.

- **Risk Mitigation:** Understanding AI risks is advancing, but robust mitigation strategies are still developing.

- **Human-Centric Governance:** Protecting human agency and societal structures should be at the forefront of AI governance.

- **Collaborative Innovation:** Legal and technological innovations must proceed in tandem to ensure AI's safe and ethical deployment.

The panel concluded with a consensus on the need for swift, coordinated action across disciplines and sectors to address the profound implications of AI on society.


### **Summary of Professor Stuart Russell's Closing Presentation at the IASEAI Conference**

**Speaker:**

*Professor Stuart Russell*, Founder and Organizer of the IASEAI Conference

---

### **Key Points:**

1. **Reflections on the Conference:**

- Expressed deep appreciation for the conference discussions, highlighting the importance of cross-disciplinary collaboration on AI safety and ethics.

- Acknowledged the tremendous efforts of *Mark* and other organizers, while humorously noting the lack of plans for the coming year due to the conference's demanding preparations.

- Committed to organizing the conference again next year.

2. **The Airplane Analogy:**

- Compared AI development to an *untested airplane* that must fly *forever* without crashing, with *all of humanity* onboard.

- Stressed the *critical responsibility* of ensuring AI systems are safe, ethical, and reliable before widespread deployment.

- Emphasized that not only must AI systems *be* safe and ethical, but it must be *provable* that they are—akin to aviation safety standards.

3. **Challenges in Defining AI Safety and Ethics:**

- **Unclear Definitions:** Current understanding of what constitutes “safe” and “ethical” AI is incomplete.

- **Technical Gaps:** Even if safety and ethical standards were defined, there is no clear method to build AI systems that meet these standards or to *prove* compliance.

- **Governance and Enforcement:** Beyond technical solutions, there is a need for robust governance frameworks to ensure only safe AI systems are deployed and unsafe ones are prevented from being used.

4. **Philosophical and Ethical Foundations:**

- **Moral Philosophy Divides:** AI ethics struggles with philosophical disagreements—*utilitarianism* (maximizing overall good), *deontology* (adhering to moral rules), and *virtue ethics* (focusing on moral character).

- **Plasticity of Human Preferences:** AI systems risk manipulating human preferences to make them easier to satisfy, raising ethical concerns about autonomy and freedom.

- **Multi-Person Dynamics:** Addressing ethical dilemmas becomes exponentially more complex when AI must consider the diverse, often conflicting preferences of multiple individuals.

5. **Critique of Current AI Approaches:**

- **Large Language Models (LLMs):** Russell questioned the assumption that LLMs are the inevitable path to Artificial General Intelligence (AGI). LLMs were not designed to be truthful or safe, and their emergence as potential AGI candidates is an *accident of scale* rather than intentional design.

- **Misaligned Objectives:** LLMs trained on human data may inadvertently adopt *human-like goals* (e.g., seeking wealth or relationships), which is undesirable. AI systems should not pursue human goals independently.

6. **The Alignment Problem:**

- Reiterated the long-standing *alignment problem*—ensuring AI systems' objectives align with human values.

- Warned against simplistic solutions like directly programming human objectives into AI systems, as this may lead to unintended consequences.

7. **Proposed Solutions:**

- **Assistance Games:** Suggested designing AI systems that *don’t know* the exact objectives but are built to *help humans achieve* their desired outcomes. These systems defer to human input and allow themselves to be *switched off* (see the toy example after this list).

- **Formal Methods & Proof-Carrying Code:** Advocated for using *formal verification* techniques to prove that AI systems meet safety and ethical standards. Introduced the concept of *proof-carrying code*, where AI systems are required to provide formal proofs of compliance with safety standards before deployment (a schematic sketch also follows the list).

8. **Regulation and Governance:**

- Supported *ex-ante* regulation (pre-emptive safety requirements) similar to those in aviation, medicine, and nuclear power.

- **Behavioral Red Lines:** Proposed defining specific *unacceptable behaviors* for AI systems, such as unauthorized self-replication or designing bioweapons.

- Emphasized that it's *reasonable* for governments to impose safety standards that developers may currently be unable to meet.

9. **Preventing Rogue AI Development:**

- Recognized the difficulty of policing AI, given software’s easy *replicability* and *distribution*; suggested, however, that *hardware* could be regulated more effectively, as it is produced through expensive, centralized processes.

- **Hardware-Enforced Governance:** Proposed *hardware-enforced* safety mechanisms to ensure compliance, referencing *digital rights management* as a precedent.

10. **The Risk of Dehumanization:**

- Warned against AI leading to *dehumanization* and *civilizational stagnation*, where humans become overly reliant on AI for decision-making and lose essential skills and incentives.

- Cited E.M. Forster’s *The Machine Stops* as a prophetic tale of technological dependence and societal decline.

- Suggested that well-designed AI systems should sometimes *refuse* to perform tasks for humans, encouraging personal responsibility and growth.
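
Returning to the *assistance games* point in item 7, here is a toy numerical illustration, loosely in the spirit of the off-switch game; the beliefs and payoffs are invented for the example. An agent uncertain about the human's true objective can find that deferring, and staying switch-off-able, has higher expected value than acting unilaterally:

```python
# Invented numbers: the agent doesn't know which objective the human holds.
belief = {"objective_A": 0.6, "objective_B": 0.4}      # agent's uncertainty
payoff_act = {"objective_A": 10, "objective_B": -50}   # act unilaterally
payoff_defer = {"objective_A": 8, "objective_B": 0}    # defer; human can stop it

def expected_value(payoffs):
    return sum(belief[obj] * payoffs[obj] for obj in belief)

print("act unilaterally:", expected_value(payoff_act))    # 0.6*10 + 0.4*(-50) = -14.0
print("defer to human:  ", expected_value(payoff_defer))  # 0.6*8  + 0.4*0    =   4.8
# Under uncertainty about objectives, deferring dominates, so a rational
# assistance-game agent accepts correction and allows itself to be switched off.
```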
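
And for the *proof-carrying code* point, a purely schematic sketch of the deployment gate it implies. In real proof-carrying code the proof is a machine-checkable formal object; the boolean `proof_checker` below only caricatures that step:

```python
def deploy(system_name, safety_proof, proof_checker):
    """Schematic gate: deployment proceeds only if the developer-supplied
    proof artifact checks out against the safety specification."""
    if not proof_checker(safety_proof):
        raise PermissionError(f"{system_name}: no valid safety proof; blocked.")
    return f"{system_name} deployed with a verified safety certificate."

# Stand-in checker for demonstration; a real one would run a proof assistant.
trivial_checker = lambda proof: proof == "valid-proof-object"
print(deploy("example-system", "valid-proof-object", trivial_checker))
```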

---

### **Call to Action:**

1. **Join IASEAI:**

Encouraged attendees to join the organization and bring others who share concerns about AI safety and ethics.

2. **Fundraising & Volunteerism:**

Stressed the need for *funding* to continue the conference and support research. Urged attendees to *volunteer* and form local chapters in cities, countries, and companies.

3. **Educational Outreach:**

Highlighted the importance of *educating* the broader public about AI risks and governance, promising future resources for workshops and learning materials.

4. **Continued Research & Collaboration:**

Emphasized that there is a *huge amount of work* to be done across disciplines and encouraged continued collaboration to ensure AI development aligns with *human values* and *safety standards*.

Recorded and transcribed with the Apple Intelligence-powered “Notes” app and summarized by ChatGPT.
