Global majority united on multilateral regulation of AI weapons
Foreign ministers and civil society representatives say that multilateralism is key to controlling the proliferation and use of AI-powered autonomous weapons, but that a small number of powerful countries are holding back progress.
https://lnkd.in/eP5dpark
Sebastian Klovig Skelton’s Post
More Relevant Posts
-
Our latest report from the Center for AI and Digital Policy examines the urgent issue of Lethal Autonomous Weapons Systems (LAWS). We analyze the ethical and regulatory challenges these systems pose, especially the critical need for clear international standards that uphold human oversight and accountability. As AI and autonomous systems advance, it’s crucial that policy aligns with human rights and global security. #AIethics #DigitalPolicy #ResponsibleAI
📢 In a statement to the UN, CAIDP calls for an immediate moratorium on Lethal Autonomous Weapons Systems (LAWS) and the classification of 'loitering' AI missile systems as weapons of mass destruction.

"The upcoming meeting in Geneva is a pivotal moment to address the ethical, legal, and security challenges posed by increasingly autonomous military technologies. Rapid advancements in AI have led to complex applications in warfare, as seen in recent conflicts like Ukraine and Gaza."

Key concerns with lethal autonomous weapons:
⚠️ Unpredictability and lack of control
⚠️ Exponential lethality
⚠️ Ethical and legal implications

Recommendations:
1️⃣ Immediate moratorium: Enact a temporary ban on deploying LAWS until comprehensive regulations are established.
2️⃣ Classification as WMDs: Classify lethal autonomous weapons, such as 'loitering' AI missile systems, as weapons of mass destruction due to their scalable lethality.
3️⃣ Ban non-compliant AI systems: Prohibit AI systems that cannot adhere to international human rights and humanitarian law.
4️⃣ Monitoring framework: Implement standardized reporting and allow independent oversight of AI in military operations.
5️⃣ Appoint a UN Special Rapporteur on AI and Human Rights: Encourage transparency and alignment with human rights.
6️⃣ Promote democratic accountability: Ensure civilian control and prevent unverified AI systems from influencing military decisions.

"The majority of UN Member States support regulating LAWS despite opposition from a few powerful countries. Immediate action is crucial to prevent an AI arms race, protect human rights, and maintain international peace and security."

Merve Hickok Marc Rotenberg Ayca Ariyoruk Dominique Greene-Sanders, Ph.D. Nana Khechikashvili Nidhi Sinha Heramb Podar Pat Szubryt MBA #aigovernance #PeaceAndSecurity United Nations
-
The 𝐂𝐞𝐧𝐭𝐞𝐫 𝐟𝐨𝐫 𝐀𝐈 𝐚𝐧𝐝 𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐏𝐨𝐥𝐢𝐜𝐲 (𝐂𝐀𝐈𝐃𝐏) has called on the UN to address the dangers of 𝐋𝐞𝐭𝐡𝐚𝐥 𝐀𝐮𝐭𝐨𝐧𝐨𝐦𝐨𝐮𝐬 𝐖𝐞𝐚𝐩𝐨𝐧𝐬 𝐒𝐲𝐬𝐭𝐞𝐦𝐬 (𝐋𝐀𝐖𝐒). Their recommendations include a moratorium on deployment, classifying "loitering" AI missiles as WMDs, and ensuring independent oversight. Ethical and legal safeguards are vital as AI technologies advance to protect global peace and human rights. Encode Justice India 𝐞𝐧𝐝𝐨𝐫𝐬𝐞𝐬 𝐭𝐡𝐢𝐬 𝐩𝐨𝐬𝐢𝐭𝐢𝐨𝐧 and urges everyone to join in building a future free of smart weapons. #AI #HumanRights #Peace #AIGovernance #AISafety SHASHWAT MISHRA Heramb Podar Navis Rohan Aditya Anil Sanjith Chandrashekar Srividhya Perumal Joshua Amo Avi Shah Shreya Sampath Shruti Jagtap Ishita Pai Raikar Rupali Lekhi Immanuel Aquino Keerti N Dr. Ikpenmosa Piyal Uddin John Okyere , Adrian Ali Klaits, Ayca Ariyoruk, Merve Hickok
-
Is it technology that makes humans lethal, or is it humans who determine the lethality of a given technology? A conceptually useful question, and a philosophically fun topic to discuss. But in practice, it makes little difference. For over a decade, parties to the Convention on Certain Conventional Weapons have discussed Lethal Autonomous Weapons Systems (#LAWS). They will continue to do so this week at the United Nations in Geneva. Yet if recent deployments and their consequences in active war zones are any indication, there is nothing conventional about these systems. A single system can generate thousands of unverified target lists, and a single individual can launch countless weapons with the potential to wipe out entire populations. Precise? Yes. Accurate? No. Controllable? Hardly. Intelligent? Not at all. But their lethality is exponential. If you think high-tech military supremacy will make us stronger and safer, think again. These systems can be hacked, fall into the hands of non-state actors, or lead to accidental escalation among nuclear-armed states. Or, worse, be used as cover for war crimes. The dehumanization of warfare is not something we should be striving for. Why, then, allow "black boxes" to penetrate military command, control, communications, and intelligence? How will democracies ensure civilian authority over high-tech militaries? That's why the Center for AI and Digital Policy called for a weapons of mass destruction classification for LAWS at the UN. LAWS = WMDs. We need new rules, and they need to be discussed at the highest levels with urgency. Marc Rotenberg, Merve Hickok, Dominique Greene-Sanders, Ph.D., Pat Szubryt MBA, Nidhi Sinha, Nana Khechikashvili, Heramb Podar. #AIGovernance #PeaceandSecurity #Disarmament #Nonproliferation #ArmsControl
-
The US Wants China to Start Talking About AI Weapons
When US president Joe Biden meets with his Chinese counterpart Xi Jinping in the San Francisco Bay Area this week, the pair will have a long list of matters to discuss, including the Israel-Hamas war and Russia’s ongoing invasion of Ukraine. Behind the scenes at the APEC summit, however, US officials hope to strike up a dialogue with China about placing guardrails around military use of artificial intelligence, with the ultimate goal of lessening the potential risks that rapid adoption, and reckless use, of the technology might bring. “We have a collective interest in reducing the potential risks from the deployment of unreliable AI applications” because of the risk of unintended escalation, says a senior State Department official familiar with recent efforts to broach the issue, who spoke on condition of anonymity. “We very much hope to have a further conversation with China on this issue.” Biden’s meeting with Xi this week may provide momentum for more military dialogue. “We're really looking forward to hopefully a positive leaders meeting,” the State Department official says. “We can really understand from that conversation, where our possible bilateral arms control and non-proliferation conversation could pro...
The US Wants China to Start Talking About AI Weapons - AIPressRoom
-
My latest opinion piece: "The boundaries between truth and lies are increasingly blurred in our 24/7 digital and media environment. The strategic manipulation of information continues to be a threat in today’s conflicts." "This can be seen in attacks on faith, the Israel-Hamas conflict, and the Russian invasion of Ukraine." "Understanding the foundations of Soviet-style active measures and how they are still used in today’s online environment is crucial for developing effective countermeasures." [Excerpt] Read my opinion piece in the Washington Times' Higher Ground to find key terms for how we are manipulated, along with potential AI solutions that respect freedom of speech. Link to article in comments.
-
While there are many points of contention between the United States and China on the future of AI, neither power wants to see chemical and biological weapons spread any further than they already have. Talks on strategic nuclear controls may be difficult, but this is one area in which there are clear, shared interests. #cbrn #cbrne #bioterrorism #terrorism #wmd #intelligencecommunity #defenseindustry #defence CBRNE ATLANTIC BRIDGE (CAB)
The 2024 China-US AI Dialogue Should Start With an Eye on Chem-Bio Weapons
thediplomat.com
-
https://lnkd.in/gM_VvAY3 The Global Conference on the Role of Artificial Intelligence in Advancing the Implementation of the Chemical Weapons Convention (CWC) will bring together scientists, industry, and policymakers to examine challenges and opportunities posed by Artificial Intelligence (AI) in chemical disarmament and non-proliferation.
Global Conference on AI in CWC implementation
opcw.org
-
🌍 New Article Published on Medium! I’m excited to share my latest article titled "Martens Clause, Murkier, and Mistier: Navigating Humanity and Technology in Ukraine's Defense Strategy During the Russo-Ukrainian War." This commentary explores the complex interplay between humanitarian principles and the integration of artificial intelligence (AI) in Ukraine's military strategy amid ongoing conflict. 📌 Key topics covered: The relevance of the Martens Clause in contemporary warfare and its implications for AI deployment; Ethical dilemmas surrounding the use of AI in military operations and its impact on humanitarian norms; Case studies illustrating Ukraine's innovative use of AI technology in defense strategies; and The broader implications for international law and the future of warfare in an increasingly tech-driven landscape. 📖 Read the full article here: https://lnkd.in/g8mSmYDX I look forward to your thoughts and engaging in a meaningful discussion about the intersection of technology, humanity, and military strategy in today’s world!
Martens Clause, Murkier, and Mistier: Navigating Humanity and Technology in Ukraine’s Defense…
link.medium.com
-
Q: How do you find scallywags at sea? A: Get a great geospatial intelligence platform and employ a team of excellent analysts. Every day, our team at #RokeIntelligence provides our clients with accurate intelligence and comprehensive investigations into nefarious actors' activity in the maritime domain. We've uncovered Russian arms smuggling, Iranian sanctions-evasion networks, drug trafficking from South America, and countless cases of non-compliance with the widespread (and growing) international sanctions regimes. This has saved our clients millions of pounds and protected hard-earned reputations. I'm often asked: how do we do it? The simple formula is this: we take an excellent team of analysts operating to military intelligence standards, give them a leading geospatial intelligence platform, and the result is reporting that enables high-confidence decision-making for our clients. We take well-trained, experienced human expertise and combine it with the latest AI and ML technology in our Geollect platform, which is also loaded with state-of-the-art satellite imagery and other cheeky data sets that 10 years ago would have been a top-secret military capability. Do you need this capability? Whether it's outsourcing analysis, access to the Geollect platform, or both, we're always eager to help. Please get in touch with me and we can arrange a call. For more on Geollect, please check out the URL below... https://lnkd.in/evEV_HRu ... or watch the glossy video attached. #compliance #noncompliance #geospatialintelligence #sanctions #sanctionsevasion #maritime #intelligence #investigations #intelligenceasaservice #roke
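One building block commonly used in this kind of maritime analysis (a generic illustration, not Geollect's actual method; the vessel identifiers and timestamps below are hypothetical) is flagging "dark" periods, where a ship's AIS transponder goes silent for longer than expected, which is a frequent signature of sanctions evasion:

```python
from datetime import datetime, timedelta

# Hypothetical AIS position reports: (vessel MMSI, UTC timestamp of report)
reports = [
    ("244660000", datetime(2024, 5, 1, 0, 0)),
    ("244660000", datetime(2024, 5, 1, 0, 30)),
    ("244660000", datetime(2024, 5, 1, 9, 0)),   # 8.5-hour silence before this report
    ("563012345", datetime(2024, 5, 1, 0, 0)),
    ("563012345", datetime(2024, 5, 1, 1, 0)),
]

def find_dark_periods(reports, threshold=timedelta(hours=6)):
    """Return (mmsi, gap_start, gap_end) for every silence longer than threshold."""
    by_vessel = {}
    for mmsi, ts in reports:
        by_vessel.setdefault(mmsi, []).append(ts)
    gaps = []
    for mmsi, times in by_vessel.items():
        times.sort()
        for prev, curr in zip(times, times[1:]):
            if curr - prev > threshold:
                gaps.append((mmsi, prev, curr))
    return gaps

print(find_dark_periods(reports))  # flags the 8.5-hour gap for MMSI 244660000
```

In practice an analyst would triage these flags against satellite imagery and port-call records, since AIS gaps also occur for innocent reasons such as poor coverage; the automation narrows the haystack, the human makes the call.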