Expert Consultation Report – Artificial Intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed Conflicts: Current Developments and Potential Implications https://lnkd.in/eCe6c4pd
Christina Ayiotis, Esq., CRM, CIPP/E, AIGP’s Post
-
🌍 New Article Published on Medium! I’m excited to share my latest article titled "Martens Clause, Murkier, and Mistier: Navigating Humanity and Technology in Ukraine's Defense Strategy During the Russo-Ukrainian War." This commentary explores the complex interplay between humanitarian principles and the integration of artificial intelligence (AI) in Ukraine's military strategy amid ongoing conflict. 📌 Key topics covered: The relevance of the Martens Clause in contemporary warfare and its implications for AI deployment; Ethical dilemmas surrounding the use of AI in military operations and its impact on humanitarian norms; Case studies illustrating Ukraine's innovative use of AI technology in defense strategies; and The broader implications for international law and the future of warfare in an increasingly tech-driven landscape. 📖 Read the full article here: https://lnkd.in/g8mSmYDX I look forward to your thoughts and engaging in a meaningful discussion about the intersection of technology, humanity, and military strategy in today’s world!
Martens Clause, Murkier, and Mistier: Navigating Humanity and Technology in Ukraine’s Defense…
link.medium.com
-
Experts on the laws of war, already alarmed by the emergence of AI in military settings, are concerned that its use in Gaza, as well as in Ukraine, may be establishing dangerous new norms that could become permanent if not challenged. My latest for TIME: https://lnkd.in/eY9tgFZE
What Israel's Use of AI in Gaza May Mean for the Future of War
time.com
-
The reported use of #AI-enabled decision-support systems (#AIDSS), particularly the #Gospel and #Lavender systems, by the Israel Defense Forces (#IDF) in their military operations in #Gaza has been controversial. Many expert interventions have emerged in response to the IDF’s reported use of AI-DSS. Klaudia Klonowska explores the main points of contention and the framing of legal issues surrounding these technologies. She concludes that it is important to avoid exalting either humans or AI when formulating legal arguments on the topic of military AI. Read more: https://lnkd.in/eXBynFHJ #articlesofwar
Israel-Hamas 2024 Symposium - AI-Based Targeting in Gaza: Surveying Expert Responses and Refining the Debate - Lieber Institute West Point
https://lieber.westpoint.edu
-
🔎 How could States govern the use of AI in the military domain? Our latest Occasional Paper, “Governance of AI in the Military Domain,” outlines three governance options: 1️⃣ United Nations disarmament forums. Various options are available to States depending on the content, scope, mandate and structure of discussions: UN General Assembly, Conference on Disarmament, UN Disarmament Commission, and existing treaties and conventions. 2️⃣ A new UN body, in the form of an agency or an intergovernmental scientific body. 3️⃣ Governance outside of the UN, including ongoing bilateral & multilateral arrangements, regional regulation measures, and national/sector-specific governance. 📄 Read the report here to learn more: https://lnkd.in/dpdPtSn2
-
🌍 International Guidelines for Military AI Discussed
Over 60 nations, including Australia, Japan, the UK, and the US, have endorsed a blueprint for the ethical use of AI in military applications at the REAIM summit in Seoul, South Korea. The guidelines stress human control, risk assessments, and safeguards to prevent AI from threatening peace and violating human rights. While nonbinding, this framework encourages collaboration to manage risks, including preventing the spread of AI in weapons of mass destruction. The blueprint leaves individual countries responsible for creating their own technical standards but emphasises accountability and oversight to prevent escalation and misuse. Notably, China and over 30 other countries chose not to sign, highlighting the complexity of international cooperation on AI in defence.
🛡️ Key points:
- Human oversight and control of AI in military use
- Risk management and safeguards against misuse
- National strategies aligned with human rights laws
- Prevention of AI escalation in arms races
While this blueprint is a step forward, AI-driven military technologies like autonomous weapons and targeting systems already highlight the need for international norms. We need a global dialogue to ensure that AI enhances rather than undermines security and humanitarian efforts. Learn more about the EU’s AI Act and the Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. Read the full article here: https://lnkd.in/efdN_AhH #AI #MilitaryAI #HumanRights #AIethics #GlobalSecurity #Technology #InternationalLaw #REAIM #Innovation
REAIM Summit 2024
reaim2024.kr
-
It is official: Dr. Jessica Dorsey, co-founder and co-project leader of our Realities of Algorithmic Warfare (RAW) project at Utrecht University, is one of the key expert advisors for the GC REAIM - Global Commission on Responsible Artificial Intelligence in the Military Domain. This is exactly the kind of impact we aim for at RAW! At a time when we are witnessing the realities of how AI-enabled military systems increase the speed of decision-making and the scale of destruction in Gaza, we need scholars, now more than ever, to reflect on the lived experiences of these systems in the context of war and to trace the lines of responsibility and accountability back to the military-industrial-commercial complex that is innovating and deploying them. Read about how we conduct research on this at RAW here: https://lnkd.in/e8nVYiXX And find a list of our research and impact output here: https://lnkd.in/eaGC7Mnv It is really important that Jessica will be able to bring our academic insights into public and political debates at REAIM.
I am deeply honored and excited to announce my appointment as Expert Advisor for the GC REAIM - Global Commission on Responsible Artificial Intelligence in the Military Domain. This Commission was announced during the international REAIM Summit on Responsible AI in the Military Domain, held in The Hague last year, during which our research project, The Realities of Algorithmic Warfare, coordinated an expert panel to discuss some of the most pressing issues regarding the realities of the use, impact and regulation of military AI. I hope to take these messages forward in my work. The Commission’s objectives are to serve as an international forum to promote mutual awareness and understanding among the various AI communities, thereby contributing to an essential global task: supporting fundamental norm development and policy coherence in this field. At a time in history when we are seeing in parallel the rapid development and deployment of emerging and disruptive technologies like AI on the battlefield and the devastating and widespread effects of armed conflict on civilians, we are faced with complicated questions about our own humanity and the way forward. I am privileged to add my voice and expertise to initiatives within the Commission’s work to center considerations of the realities of these technological developments and their implications for civilians caught up in warfare, and to offer ways to view the development and deployment of military AI through a human-centric lens on the legitimacy of military operations.
I am thrilled to join a group of exceptional practitioners, scholars, and experts in this space, some of whom I’ve had the pleasure to connect with already, like Yasmin Afina, Nehal Bhuta, Dr Ingvild Bode, Thompson Chengeta, Rain Liivoja, Roy Lindelauf, Mary Ellen O'Connell, Stuart Russell, and Marietje Schaake, as well as those I don’t yet know but very much look forward to working with, like Patricia Adusei Poku MBA, MSc, BSc, CIPM, CIPP, Prince2, Vincent Boulanin, Ariel Conn, Missy Cummings, Denise Garcia, jeroen van den hoven, Adam J. Hepworth, PhD, James Johnson, Matthijs M. Maas, Illah Nourbakhsh, Mun-eon Park, Kenneth Payne, Giacomo Persi Paoli, Edson Prestes, Michael Raska, Emma Ruttkamp-Bloem, Nayat Sanchez-Pi, Mohammed Soliman, Maria Vanina Martinez, Jimena Viveros LL.M., and Toby Walsh. Keep your eyes on this website as the group continues to grow and we begin as a Commission to give form to shaping policy on and regulating some of the current challenges and those that lie ahead: https://lnkd.in/ecwJweCf #artificialintelligence #ai #responsibleai #aigovernance #internationallaw #militaryai #GCREAIM #REAIM
Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM) - Commissioners - HCSS
https://hcss.nl
-
According to +972 and Local Call, the IDF judged it permissible to kill more than 100 civilians in attacks on top-ranking Hamas officials. “We had a calculation for how many [civilians could be killed] for the brigade commander, how many [civilians] for a battalion commander, and so on,” one source said. “There were regulations, but they were just very lenient,” another added. “We’ve killed people with collateral damage in the high double digits, if not low triple digits. These are things that haven’t happened before.” There appear to have been significant fluctuations in the figure that military commanders would tolerate at different stages of the war. 👉How can Israel claim to respect international humanitarian law when a target is decided in 20 seconds? 👉Where is the mandatory principle of proportionality? These questions must be answered now. All States have the duty to ensure respect for IHL. #IHL #gaza #ceasefire International Committee of the Red Cross - ICRC
‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets
theguardian.com
-
Today, Marta Bo and I published "The ‘Need’ for Speed – The Cost of Unregulated AI-Decision Support Systems to Civilians" in the Opinio Juris symposium on Military AI and the Law of Armed Conflict. Unfortunately, given the recent news about Israel's reported use of AI-enabled decision-support systems, our post is all the more relevant. In our contribution, we argue that these kinds of systems must receive more attention from a regulatory perspective because of the humanitarian consequences their use may entail. Read here: https://lnkd.in/eCXhW8WU #LAWS #AWS #IHL #AIDSS #responsibleAI #militaryAI #algorithmicwarfare
Symposium on Military AI and the Law of Armed Conflict: The ‘Need’ for Speed – The Cost of Unregulated AI-Decision Support Systems to Civilians
http://opiniojuris.org
-
The downward spiral continues. An untested #AI system, “Lavender,” with a reported error rate of 10%, has been used by Israel to draw up kill lists of up to 37,000 people in Gaza, reports +972 Magazine. Direct quote: “The [Israeli] army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any ‘collateral damage’ during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.” If the report’s details are true, “many Israeli strikes in Gaza would constitute the war crimes of launching disproportionate attacks,” according to Professor Ben Saul. We need a #moratorium on the use of AI in warfare now. We need a #ceasefirenow. We need accountability for this #genocide.
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza
972mag.com