A United Nations report suggested that a drone, used against militia fighters in Libya’s civil war, may have selected a target autonomously.
Should lethal autonomous weapon systems be programmed to strike and kill without direct human oversight? If so, when and how? Should the algorithms used to make automated battlefield decisions be subject to review and inspection? If so, by whom?
The US Army is developing robots that can operate autonomously and ask questions to resolve ambiguities, which improves efficiency in human-robot teams, but the systems are still too slow and not resilient enough for field use.
Slaughterbots is a 2017 arms-control advocacy video presenting a dramatized near-future scenario where swarms of inexpensive microdrones use artificial intelligence and facial recognition to assassinate political opponents based on preprogrammed criteria.
The Asilomar AI Principles, drafted at the 2017 Asilomar conference, are a set of principles intended to guide the development of future artificial intelligence.
Under what circumstances should militaries delegate the decision to take a human life to machines? It’s a moral leap that the international community is grappling with.
The seven-minute video tells a dramatic story about tiny drones that are programmed to kill and require no human guidance. In the fictional scenario, the technology was developed for good but is taken over by unknown forces that use it to carry out mass killings.