THE UN SECRETARY-GENERAL: "Nothing can justify the collective punishment of the Palestinian people. I am also deeply troubled by reports that the Israeli military's bombing campaign includes Artificial Intelligence as a tool in the identification of targets, particularly in densely populated residential areas, resulting in a high level of civilian casualties. No part of life and death decisions which impact entire families should be delegated to the cold calculation of algorithms. I have warned for many years of the dangers of weaponizing Artificial Intelligence and reducing the essential role of human agency. AI should be used as a force for good to benefit the world; not to contribute to waging war on an industrial level, blurring accountability." @iacongresoucm24 #InteligenciaArtificial #IA #IACongresoUCM #ArtificialIntelligence #TransformaciónDigital #Digitalización
-
The investigation reveals that the Israeli military has developed an artificial intelligence program called "Lavender" to mark tens of thousands of Palestinians in Gaza as potential targets for assassination during the current war. Lavender uses data collected through mass surveillance to assess the likelihood that a person is affiliated with Hamas or Palestinian Islamic Jihad and gives them a rating. Those with high ratings are automatically marked as potential targets. Once someone is flagged by Lavender, systems like "Where's Daddy?" track them and alert operators when they enter their homes, prompting airstrikes aimed at killing entire families.

Intelligence sources say alleged junior militants marked this way were often struck with unguided "dumb" bombs that destroy whole buildings, as it was considered too costly to use precision munitions on low-ranking targets. Shockingly, the sources state that in the early weeks the army approved killing up to 15-20 civilians for each junior target and hundreds for senior commanders - an unprecedented policy driving mass casualties. Automated tools provided imprecise estimates of the civilians present, and there was often no real-time verification before strikes occurred hours later, when targets may have moved.

The sources describe a "permissive" atmosphere after Hamas' October 7th attacks, with intense pressure to rapidly generate targets. Many Palestinians marked by the imperfect AI were not militants at all. Human oversight was severely lacking - officers often just checked the target's gender before approving airstrikes based on Lavender's determinations.

The scale of deaths was staggering - 15,000 Palestinians, mostly civilians, were killed in the first six weeks, largely due to this systematic policy of bombing homes. Whole families were wiped out even when targets were absent, in strikes described as "disproportionate" and driven by "hysteria" and "revenge" after the Hamas attacks.

While the military denies the claims, multiple sources give firsthand accounts of these practices, saying the army knew it would kill many civilians yet continued anyway, sacrificing precision for lethality against Hamas. The investigation raises serious questions about potential violations of international law. #IDF #israel #gaza #palestine #war #ai #lavender https://lnkd.in/gCEHNvqV
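To see how thin the described decision chain is, consider a minimal hypothetical Python sketch of the pipeline as the sources describe it. The names, the 0.9 score threshold, and the per-rank civilian quotas are invented for illustration, not taken from the actual system:

from dataclasses import dataclass

@dataclass
class Person:
    person_id: str
    affiliation_score: float  # classifier output in [0, 1], built from mass-surveillance features
    rank: str                 # "junior" or "senior"

SCORE_THRESHOLD = 0.9                            # invented cutoff for automatic flagging
CIVILIAN_LIMIT = {"junior": 15, "senior": 100}   # invented per-rank "collateral damage" quotas

def flag_targets(population: list[Person]) -> list[Person]:
    # Everyone scoring above the threshold is marked automatically - no human review here.
    return [p for p in population if p.affiliation_score >= SCORE_THRESHOLD]

def authorize_strike(target: Person, estimated_civilians: int) -> bool:
    # The only gate before a strike: is the estimated civilian toll within the quota?
    return estimated_civilians <= CIVILIAN_LIMIT[target.rank]

population = [Person("A", 0.93, "junior"), Person("B", 0.91, "senior"), Person("C", 0.40, "junior")]
for target in flag_targets(population):
    print(target.person_id, authorize_strike(target, estimated_civilians=12))

The point of the sketch is how little stands between a statistical score and an authorization: the classifier's error rate and the accuracy of the civilian estimate never appear in the decision path, which matches the sources' account of strikes proceeding on stale or imprecise data.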
-
Statement by the Syrian Minister of Defense Regarding the Fall of Hama: This evening, addressing the Syrian public in light of the recent developments in Hama, the Syrian Minister of Defense sought to reassure citizens: "Our situation on the ground remains stable and under control! What occurred today in Hama (the Syrian army's withdrawal or strategic repositioning) is part of a temporary tactical maneuver. Do not be misled by the numerous fake videos circulating on social media; many of them are fabricated using AI technology."

In contrast to this claim, however, a video from the rebels, recorded at the governor's palace in Hama, shows them boldly declaring to Assad: "We are coming for Damascus!" This video is evidently not a product of AI manipulation, reflecting the gravity of the situation.
-
"To a man with a hammer, everyone starts looking like a nail" is what I thought when I read this story about an #AI software that has a very flexible and adjustable threshold for civil collateral damage and for false positives when automatically pre-selecting human targets for military killing - what is the proper ratio for killed civillian collateral damage - 15:1, 20:1 or a 100:1 per killed "terrorist"? "(...) the IDF judged it permissible to kill more than 100 civilians in attacks on a top-ranking Hamas officials. “We had a calculation for how many [civilians could be killed] for the brigade commander, how many [civilians] for a battalion commander, and so on,” one source said." “So you’re willing to take the margin of error of using artificial intelligence, risking collateral damage and civilians dying, and risking attacking by mistake, and to live with it,” they added. "One source said that the limit on permitted civilian casualties “went up and down” over time, and at one point was as low as five. During the first week of the conflict, the source said, permission was given to kill 15 non-combatants to take out junior militants in Gaza." - "But at one stage earlier in the war they were authorised to kill up to “20 uninvolved civilians” for a single operative, regardless of their rank, military importance, or age." I do not even ask HOW the AI classifier has been evaluated scientifically for correctness, and how the training data had been quality-assured before the bombs were thrown. Let's see how this story develops in the investigative news channels - and what we will learn from it for building #ethical and safer Artificial Intelligence! I think we need to have a serious conversation about the global consequences of the recent #armsproliferation of destructive, anti-democracy and anti-human rights digital tools and services we have seen emerging into the commercial domain out of Israel's intelligence services (Pegasus, Team Jorge, Lavender). https://lnkd.in/dbcm7e3q #ethicalai #collateraldamage #targetselection #killerrobots #aiweapons #war #terrorists #automatedkilling #artificialintelligence #ai
‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets
theguardian.com
-
AI-powered tools to create kill lists in war - what could possibly go wrong? Project Lavender, whether it exists or not, is a harbinger of what is to come, with militaries around the globe using AI-powered tools to determine whom to bomb, what to bomb, and more... The world of military AI that is coming is terrifying, and the death and destruction it will wreak will be far worse than anything conflicts to date have produced. #Democracy #Fascism
Analysis | Israel offers a glimpse into the terrifying world of military AI
washingtonpost.com
-
"Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses. Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences. The result, as the sources testified, is that thousands of Palestinians — most of them women and children or people who were not involved in the fighting — were wiped out by Israeli airstrikes, especially during the first weeks of the war, because of the AI program’s decisions." A genocide being enacted through the use of AI with little to no human oversight to maximize the death toll and maximize damage to civilian infrastructure. As data centre designers you have to ask yourself... where are systems like Lavender and Where's Daddy hosted? Is the work we are doing contributing in some way to this genocide?
‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza
972mag.com
-
According to the Israeli military, 555,000 murdered civilians in Gaza would be acceptable. Here is how I arrived at that number: Israel is using #AI to identify and target 37,000 potential Hamas militants. For each of them, it sets parameters that allow 15 civilians to be killed as acceptable "collateral damage". Family homes were deliberately targeted because, according to the AI, this increased the likelihood of "success". Is this the world you want to live in? Is this the military you want to finance?
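Spelled out, the arithmetic is simply:

37,000 flagged targets × 15 permitted civilian deaths per target = 555,000

Note that this applies the junior-target ratio uniformly; the reporting cited here says the permitted numbers for senior commanders ran into the hundreds, so on its own assumptions the figure is, if anything, conservative.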
‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets
theguardian.com
-
AI, Foreign Malign Influence, & Narrative Intelligence

1. "Beyond Ukraine, we continue to work together to disrupt the reckless campaign of sabotage across Europe being waged by Russian intelligence, and its cynical use of technology to spread lies and disinformation designed to drive wedges between us."
2. Geopolitical events and debates in particular are disproportionately targeted by foreign malign influence.
3. AI is increasingly weaponized for FMI.
4. FMI campaigns often result in physical disruption, if not coercion.
5. Narrative Intelligence is the first line of defense, if not offense.

EdgeTheory Joe Stradinger William Usher Libby Lange Meaghan Waff Randy B Sue Gordon Andy Lee https://edgetheory.com/

H/T Dave Schroeder 🇺🇸 H/T Jake Shapiro What you established during your "sabbatical" is finally helping us understand the cognitive dimension of strategic competition.

#generativeAI #artificialintelligence #nationalsecurity #mediaintegrity #cognitivewar #democraticresilience #informationwarfare #narrativeintelligence #strategiccompetition #disinformation #FMI #ForeignMalignInfluence
Heads of CIA and MI6 say world order 'under threat not seen since Cold War'
bbc.com
-
My latest opinion piece: "The boundaries between truth and lies are increasingly blurred in our 24/7 digital and media environment. The strategic manipulation of information continues to be a threat in today's conflicts." "This can be seen in attacks on faith, the Israel-Hamas conflict, and the Russian invasion of Ukraine." "Understanding the foundations of Soviet-style active measures and how they are still used in today's online environment is crucial for developing effective countermeasures." [Excerpt] Read my opinion piece in the Washington Times' Higher Ground for the key terms describing how we are manipulated, and for potential AI solutions that respect freedom of speech. Link to article in comments.
-
#machinelearning accurately predicts #terrorism. But the enemy responds strategically too. This is why it's important to understand causality in #gametheory-based counterterrorism. Mere predictive models are naive at best.
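A toy illustration of that last point, with everything invented for the example (real counterterrorism models are far richer): a defender who simply "predicts" the next attack from historical frequencies is fully exploitable by an adversary who adapts, while even a simple game-theoretic mixed strategy is not.

import random

random.seed(1)
SITES = ["market", "checkpoint"]

def strategic_attacker(cover_prob):
    # A strategic adversary best-responds: attack the site least likely to be defended.
    return min(SITES, key=lambda site: cover_prob[site])

def attacker_success_rate(cover_prob, rounds=10_000):
    successes = 0
    for _ in range(rounds):
        covered = random.choices(SITES, weights=[cover_prob[s] for s in SITES])[0]
        if strategic_attacker(cover_prob) != covered:
            successes += 1
    return successes / rounds

# (a) Purely predictive defender: history says the market was hit most, so always cover it.
print("always cover market:", attacker_success_rate({"market": 1.0, "checkpoint": 0.0}))
# (b) Game-theoretic defender: with symmetric stakes the equilibrium is to mix 50/50,
#     leaving the adversary no exploitable gap.
print("mix 50/50:", attacker_success_rate({"market": 0.5, "checkpoint": 0.5}))

The static policy fails every round because the adversary routes around it; the mixed strategy caps the attacker's success at about 50%. Prediction answers "where did attacks happen?"; the game-theoretic view answers "where will they happen once the adversary has seen my defenses?".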
How America built an AI tool to predict Taliban attacks
economist.com