Center for AI and Digital Policy’s Post

📢 In a statement to the UN, CAIDP Calls for an Immediate Moratorium on Lethal Autonomous Weapons Systems (LAWS) and Classification of 'Loitering' AI Missile Systems as Weapons of Mass Destruction

"The upcoming meeting in Geneva is a pivotal moment to address the ethical, legal, and security challenges posed by increasingly autonomous military technologies. Rapid advancements in AI have led to complex applications in warfare, as seen in recent conflicts like Ukraine and Gaza."

Key Concerns with Lethal Autonomous Weapons
⚠️ ❗ Unpredictability and Lack of Control
⚠️ ❗ Exponential Lethality
⚠️ ❗ Ethical and Legal Implications

Recommendations
1️⃣ Immediate Moratorium: Enact a temporary ban on deploying LAWS until comprehensive regulations are established.
2️⃣ Classification as WMDs: Classify lethal autonomous weapons, like 'loitering' AI missile systems, as weapons of mass destruction due to their scalable lethality.
3️⃣ Ban Non-Compliant AI Systems: Prohibit AI systems that cannot adhere to international human rights and humanitarian laws.
4️⃣ Monitoring Framework: Implement standardized reporting and allow independent oversight of AI in military operations.
5️⃣ Appoint a UN Special Rapporteur on AI and Human Rights: Encourage transparency and human rights alignment.
6️⃣ Promote Democratic Accountability: Ensure civilian control and prevent unverified AI systems from influencing military decisions.

"The majority of UN Member States support regulating LAWS despite opposition from a few powerful countries. Immediate action is crucial to prevent an AI arms race, protect human rights, and maintain international peace and security."

Merve Hickok Marc Rotenberg Ayca Ariyoruk Dominique Greene-Sanders, Ph.D. Nana Khechikashvili Nidhi Sinha Heramb Podar Pat Szubryt MBA

#aigovernance #PeaceAndSecurity United Nations

Simon Falk

Chief Executive Architect | Metaphorically Significant™ FrameWork | Business Owner at YourFinestOut | DeepTech Leader | Pioneering Ethical AI & Multidimensional Data Science

1mo

The urgent call for a moratorium on Lethal Autonomous Weapons Systems (LAWS) reflects the critical need for ethical alignment in AI. It aligns with the guiding principles of the 'Joint Planetarian Ethics Initiative', from which the 'Dimensional Ethics™ Engine' derives. That document, currently drafted for training purposes, could perhaps serve as a vision for the future.

The Dimensional Ethics™ Engine is, in turn, one of the core pillars of Inter Dimensional Computation™ | IDC™. This technology safeguards human rights and ties ethical values to the very core of future intelligent systems.

We are kindly reaching out to invite institutions to help shape a comprehensive training document for this system. Your expertise will be invaluable in strengthening our commitment to responsible AI. IDC™ is built with adaptability in mind, prepared to conform to any regulations set forth, ensuring that ethical values and human rights are inextricably ingrained at the computational core of this technology. Together, we can pave the way for AI systems that prioritize humanity's greatest ideals and operate within these boundaries.


I wrote my entire PhD thesis advocating for these steps in 2020. Thank you for keeping the issue centre-stage.

Very urgent and important initiative!
