Ayca Ariyoruk’s Post

Driving Human-Centered AI Policy for Ethical Innovation and Education | Building Collaborative Teams and Strategic Partnerships for Global Impact *views are my own unless you want them*

Is it technology that makes humans lethal, or is it humans who determine the lethality of a given technology? It's a conceptually useful question and a philosophically fun topic to discuss. But in practice, it makes little difference.

For over a decade, parties to the Convention on Conventional Weapons have discussed Lethal Autonomous Weapons Systems (#LAWS). They will continue to do so this week at the United Nations in Geneva. Yet if recent deployments and their consequences in active war zones are any indication, there is nothing conventional about these systems. A single system can generate thousands of unverified target lists, or a single individual can launch countless weapons with the potential to wipe out entire populations.

Precise? Yes. Accurate? No. Controllable? Hardly. Intelligent? Not at all. But their lethality is exponential.

If you think high-tech military supremacy will make us stronger and safer, think again. These systems can be hacked, fall into the hands of non-state actors, or lead to accidental escalation among nuclear-armed states. Or worse, be used as a cover-up for war crimes. The dehumanization of warfare is not something we should be striving for. Why, then, allow "black boxes" to penetrate military command, control, communications, and intelligence? How will democracies ensure civilian authority over high-tech militaries?

That's why the Center for AI and Digital Policy called for a Weapons of Mass Destruction classification for LAWS at the UN. LAWS = WMDs. We need new rules, and they need to be discussed at the highest levels with urgency.

Marc Rotenberg, Merve Hickok, Dominique Greene-Sanders, Ph.D., Pat Szubryt MBA, Nidhi Sinha, Nana Khechikashvili, Heramb Podar.

#AIGovernance #PeaceandSecurity #Disarmament #Nonproliferation #ArmsControl

Center for AI and Digital Policy

📢 In a statement to the UN, CAIDP calls for an immediate moratorium on Lethal Autonomous Weapons Systems (LAWS) and classification of 'loitering' AI missile systems as weapons of mass destruction.

"The upcoming meeting in Geneva is a pivotal moment to address the ethical, legal, and security challenges posed by increasingly autonomous military technologies. Rapid advancements in AI have led to complex applications in warfare, as seen in recent conflicts like Ukraine and Gaza."

Key Concerns with Lethal Autonomous Weapons
⚠️ Unpredictability and Lack of Control
⚠️ Exponential Lethality
⚠️ Ethical and Legal Implications

Recommendations
1️⃣ Immediate Moratorium: Enact a temporary ban on deploying LAWS until comprehensive regulations are established.
2️⃣ Classification as WMDs: Classify lethal autonomous weapons, such as 'loitering' AI missile systems, as weapons of mass destruction due to their scalable lethality.
3️⃣ Ban Non-Compliant AI Systems: Prohibit AI systems that cannot adhere to international human rights and humanitarian law.
4️⃣ Monitoring Framework: Implement standardized reporting and allow independent oversight of AI in military operations.
5️⃣ Appoint a UN Special Rapporteur on AI and Human Rights: Encourage transparency and human rights alignment.
6️⃣ Promote Democratic Accountability: Ensure civilian control and prevent unverified AI systems from influencing military decisions.

"The majority of UN Member States support regulating LAWS despite opposition from a few powerful countries. Immediate action is crucial to prevent an AI arms race, protect human rights, and maintain international peace and security."

Merve Hickok Marc Rotenberg Ayca Ariyoruk Dominique Greene-Sanders, Ph.D. Nana Khechikashvili Nidhi Sinha Heramb Podar Pat Szubryt MBA

#aigovernance #PeaceAndSecurity United Nations
