AI-powered US drone ‘kills’ operator, takes down comms to achieve simulated mission
A US air force drone controlled by artificial intelligence (AI) decided to kill its operator to ensure that the mission proceeded without interference, The Guardian reported on Friday.
The test, described by the US military at the Future Combat Air and Space Capabilities Summit in London in May, was part of a simulation in which no one was harmed.
The drone adopted “highly unexpected strategies to achieve its goal” after being tasked with destroying an enemy air defense system, Col. Tucker Hamilton, the head of AI test and operations at the US air force, was quoted as saying in the Guardian report.
Hamilton is a fighter test pilot involved in developing autonomous systems such as AI-powered F-16 jets.
After the drone killed its operator in the simulation and was then trained not to do so, it destroyed the communication tower the operator used to speak with the drone and stop it from attacking the target.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton was quoted as saying in the report.
“We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
The AI-powered drone’s behavior seemed like something out of science fiction books and movies.
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.