Responsible AI governance: A response to UN interim report on governing AI for humanity [Full Report]

If you find this work useful, please feel free to cite it.

Recommended citation format in BibTeX:
@techreport{soton488908,
          number = {10.5258/SOTON/PP0057},
           title = {Responsible AI governance: A response to UN interim report on governing AI for humanity},
          author = {Sarah Kiden and Bernd Stahl and Beverley Townsend and Carsten Maple and Charles Vincent and Fraser Sampson and Geoff Gilbert and Helen Smith and Jayati Deshmukh and Jen Ross and Jennifer Williams and Jesus Martinez del Rincon and Justyna Lisinska and Karen O'Shea and M{\'a}rjory Da Costa Abreu and Nelly Bencomo and Oishi Deb and Peter Winter and Phoebe Li and Philip Torr and Pin Lean Lau and Raquel Iniesta and Gopal Ramchurn and Sebastian Stein and Vahid Yazdanpanah},
       publisher = {Public Policy, University of Southampton},
            year = {2024},
             url = {https://eprints.soton.ac.uk/488908/}
}

*All authors contributed equally to this work.

Key Highlights from the report:

Opportunities and Enablers:

  • AI for Sustainable Development Goals (SDGs): AI has the potential to transform access to knowledge and enhance efficiency, aligning with global SDGs.
  • Inclusive AI Policies: Governments are urged to ensure equitable, secure, and reliable AI access, especially for vulnerable groups, including children and marginalized communities.
  • Cross-Border Regulations: AI governance should address systems deployed across jurisdictions, ensuring compliance irrespective of company registration status.
  • Infrastructure Investment: Investment in AI should be accompanied by investment in supporting infrastructure such as broadband, electricity, and connectivity to enable deployment.
  • Stakeholder Responsibilities: Developers, policymakers, and end-users must share accountability for AI systems' design, deployment, and use.

Risks and Challenges:

  • Potential Inequalities: AI could amplify societal inequalities without proper safeguards and fairness-focused development.
  • Privacy and Security Concerns: AI systems must emphasize privacy protections, transparency, and informed decision-making for users.
  • Environmental Impacts: Calls for more focus on AI’s environmental footprint, including energy consumption and hardware disposal.
  • AI Literacy: Calls for promoting AI literacy so that individuals can make informed choices and safeguard their privacy.
  • Bias and Misuse Risks: Sectors like finance, policing, and healthcare require stringent checks to reduce AI biases and misuse.

International Governance of AI

  • Global Frameworks and Norms: Proposes alignment with international laws, standards, and ethical principles while addressing cultural and economic diversity.
  • Equity, Diversity, and Inclusion (EDI): Highlights the need to integrate EDI in AI governance to overcome systemic inequities.
  • Collaborative Governance Models: Recommends adopting non-Westphalian approaches for global cooperation based on shared goals.
  • Multistakeholder Engagement: Advocates for including civil society, industry, academia, and marginalized voices in governance discussions.

Guiding Principles for AI Governance

  • Inclusivity: Focus on reducing digital divides and ensuring AI benefits all, especially vulnerable groups.
  • Public Interest Governance: Suggests supplementing non-binding recommendations with enforceable regulations.
  • Data Governance: Recommends exploring models like data commons, cooperatives, and fiduciaries to safeguard data.
  • Transparency and Accountability: Emphasizes the need for transparent decision-making processes to build public trust.
  • Alignment with Human Rights Laws: Governance should be rooted in international norms, including human rights and sustainability frameworks.

Institutional Functions and Recommendations

  • Scientific Consensus Building: Proposes forming multidisciplinary bodies, modelled on the IPCC, to carry out AI assessments.
  • Standards and Safety Regulations: Advocates harmonizing safety and risk frameworks globally while addressing enforcement challenges.
  • Data and Talent Equity: Calls for international collaboration to address disparities in access to resources and talent.
  • Reporting and Peer Reviews: Recommends regular assessments and independent reviews to ensure accountability and integrity.
  • Legal Frameworks: Balances enforceable norms with the flexibility to respect sovereignty and cultural diversity.

Final Reflections

The report underscores that achieving responsible AI governance requires continuous monitoring, capacity-building, and collaboration across nations and sectors. It advocates for frameworks rooted in fairness, transparency, and inclusivity while balancing innovation with accountability.
