How far does the AI system go… is it behind the decision to starve the population of Gaza?
And if it is behind the strategy of starvation as a tool of war, is it also behind the decision to kill the aid workers who are trying to feed the starving?
How far does the AI system go?
Also, can an AI commit a war crime? Is it any defence to say, “The computer did it!” or “I was just following AI’s orders!”?
There’s so much about this death machine AI I would like to know.
> How far does the AI system go… is it behind the decision to starve the population of Gaza?
No, the point of this program seems to be to find targets for assassination, removing the human bottleneck. I don't think bigger strategic decisions, like starving the population of Gaza, were bottlenecked in the same way as finding and deciding on bombing targets.
> is it also behind the decision to kill the aid workers who are trying to feed the starving?
It would seem this program gives whoever is responsible for the actual bombing a list of targets to choose from, so supposedly a human was behind that decision, aided by a computer. Then it turns out (according to the article, at least) that the responsible parties mostly rubber-stamped those lists without further verification.
> can an AI commit a war crime?
No, war crimes are about making individuals responsible for their choices, not about making programs responsible for their output. At least currently.
The users/makers of the AI could surely be held in violation of the laws of war, though, depending on what they did or are doing.
> No, the point of this program seems to be to find targets for assassination, removing the human bottleneck.
There is also another AI system that tracks when these targets get home.
> Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.
I think "assassination" colloquially means to pinpoint and kill one individual target. I don't mean to say you are implying this, but I do want to make it clear to other readers that according to the article, they are going for max collateral damage, in terms of human life and infrastructure.
> “The only question was, is it possible to attack the building in terms of collateral damage? Because we usually carried out the attacks with dumb bombs, and that meant literally destroying the whole house on top of its occupants. But even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”
Yeah, I wasn't 100% sure about using the "assassination" wording in my comment, but after thinking about it I felt the most neutral approach was to use the same wording they use in the article itself, in order not to add my own subjective opinion about this whole saga.
> In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.
I'd agree with you that once you decide it's worth killing 100 civilians for one target, it's really hard to call it "assassination" at that point...
> Also, can an AI commit a war crime? Is it any defence to say, “The computer did it!” or “I was just following AI’s orders!”?
It's not that the "AI" described here is an autonomous actor.
> During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing.
Obviously all this is to be taken with a grain of salt, who knows if it's even true.
"An AI" doesn't exist. What is being labeled "AI" here is a statistical model. A model can't do anything; it can only be used to sift data.
No matter where in the chain of actions you put a model, you can't offset human responsibility to that model. If you try, reasonable people will (hopefully) call you out on your bullshit.
> There’s so much about this death machine AI I would like to know.
The death machine here is Israel's military. That's a group of people who don't get to hide behind the facade of "an AI told me". It's a group of people who need to be held responsible for naively using a statistical model to choose who they murder next.
> I know the following question sounds absurd, but they say there’s no such thing as a silly question…
People say a lot of things.
Some questions are ill-posed; some bake in false assumptions.
What you do _after_ you concoct a question is important. Is it worth our time answering?
> Does the AI use regular power to run, or does it run on the tears and blood of combatant 3 year old children - I mean terrorists?
From where I sit, the question above appears to be driven mostly by rhetoric or confusion. I'm interested in reasoning, evidence, and philosophy, and I don't see much of that in it. There are better questions, such as:
To what degree does a particular AI system have values? To what degree are these values aligned with people's values? And which people are we talking about? How are the values of many people aggregated? And how do we know with confidence that this is so?
If you believe your question is worth pursuing, then do so. From where I sit, it was ill-posed at best, most likely just heated rhetoric, and maybe even pure confusion. But I was willing to spend some time in the hopes of pointing you in a better* direction.
You can burn tremendous amounts of time on poorly framed questions, but why do that? Perhaps you don't want to answer the question, though, because you didn't ask it. You get to ask questions of us, but you don't reply to follow-up questions that push back on your point of view?
* Subjectively better, of course, from my point of view. But not just some wing-nut point of view. What I'm saying is aligned with many (if not most) deep thinkers you'll come across. In short, if you ask poorly-framed questions, you'll get nonsense out the other end.
P.S. Your profile says "I’m too honest. Deal with it." which invites reciprocity.
The IDF flags everything. You just need to watch the live comments on any YT video about what they are doing in Gaza to see they have a mammoth operation going on, silencing everything and putting up lots of anti-Muslim vitriol.
Yes, but the Palestinian impact is specifically why it's such a terrible policy for Israel, unless you assume their goal is perpetual war. Most people do not want to kill other people, but each innocent killed like this leaves behind friends, family, and neighbors who will want vengeance, and some fraction of them will decide they need to resort to violence because the other mechanisms aren't being used. Watching this happen has been incredibly depressing, as you can pretty much mathematically predict a revenge period measured in decades.
This assumes they're going to leave enough people alive to even enact vengeance. If they murder everyone, then there's no need to worry about any Gazan revenge; there will be no Gazans.
It's very plausible. Keep in mind that from the get-go, the major global powers (including Russia!) have adopted the mindset that Israel can do no wrong and we can't criticize them at all.
Israel could glass the entire Gaza Strip and the reaction would be a slap on the wrist at best.
There are millions of Palestinians living in the West Bank or as refugees abroad, expelled or descended from those expelled in previous rounds of ethnic cleansing. Even if the IDF goes final solution on the 2 million Palestinians living in the Gaza ghetto, this will not be the end of all Palestinians or of the Palestinian struggle. See: https://en.m.wikipedia.org/wiki/Palestinian_diaspora
Could you please clarify what you mean by "Hamas wanted in the first place"? If I'm not mistaken, you're referring to the attack on the 7th of October, right? May I perhaps add that, just in the days preceding that attack, Israelis killed a Palestinian in the West Bank[0]. So it was not really peaceful before that specific date.
I don't see any point in rehashing the I/P infinite regress. You end up trying to figure out where Bronze Age tribes lived. For the purposes of this conversation, I'm going to step away from I/P and look at the strategy that I believe was used, in the abstract.
You've got two sides, Able and Baker, with a range of opinions on both sides, from a moderate majority to an extreme minority.
Able extremists attack Baker in a way which is big, shocking and violent.
Baker is provoked into retaliation against Able. Crucially, the retaliation is against the whole of Able, including the moderates.
When it all dies down, there are fewer Able moderates and more Able extremists. (Because if someone dropped an Acme piano on my family, I'd be tempted to strap on the Acme exploding underpants, too).
This "leverage your enemy's strength to radicalize your own people" approach is common. 9/11 is probably the clearest example, but you could even see the non-violent Civil Rights protests in America in this light (march, provoke violent response, gain converts and sympathy). If this wasn't one of the factors behind the October attacks, Hamas are dumber than I give them credit for.
Thus, I see "the Palestinian people will not forget this" as "the cycle of violence is locked in for another generation".
I agree with you. Let's not rehash this finger pointing argument. It doesn't get us anywhere.
However, I disagree with the framing in the example. Starting from the event that Able attacked Baker without mentioning the reasons or the context clearly portrays Baker as not having done anything to provoke such an attack. Nothing ever happens in a vacuum.
> Starting from the event that Able attacked Baker without mentioning the reasons or the context clearly portrays Baker as not having done anything to provoke such an attack.
Adding a “step zero” with that information would not change my argument at all. The relative righteousness of the two sides has nothing to do with strategy selection. For the purposes of this abstract argument, it's unnecessary fluff.
I wasn't necessarily trying to point fingers at a specific party. I wanted to better understand the parent's comment, and while doing so I wrote down what I assumed they meant. I agree that to solve this issue, which has been going on for many, many years, we will have to go to the root cause and address it.