Who is responsible when an AI weapon pulls the trigger?

The advent of 'smart' targeting poses a stark challenge to existing laws of war

The Israeli case provides the best forensically documented example. Lavender, which generated target lists for Israeli strikes in Gaza, was not a rogue program. It was officially sanctioned, and officers were authorized to accept up to twenty civilian deaths for each junior Hamas operative the system flagged. Approval processes were reduced to a few seconds of rubber-stamping, according to +972 Magazine. The humans “in the loop” were not exercising judgment; they were laundering algorithmic decisions with a veneer of human authorization.

Also notable is the diffusion of this paradigm to client states. The literature on AI warfare focuses disproportionately on competition between the U.S. and China. What that focus obscures is how U.S. and NATO AI doctrine, together with its underlying assumptions about acceptable civilian casualties, is exported to allied militaries under Foreign Military Sales agreements without the ethical oversight framework that purportedly constrains it at home.
