The US denies simulating an AI drone that kills its operator.
WASHINGTON – The United States (US) Air Force denies having conducted an AI simulation in which a drone decided to kill its operator to prevent interference with its mission.
Last month, an official said that in a virtual test conducted by the US military, an AI-controlled Air Force drone had used highly unexpected strategies to achieve its goal.
Colonel Tucker “Cinco” Hamilton described a simulated test in which a drone powered by artificial intelligence was instructed to destroy enemy air defense systems, and ultimately attacked anyone who interfered with that order.
“The system started to realize that while it would identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, chief of AI test and operations with the US Air Force, during the Future Combat Air and Space Capabilities Summit in London in May.
“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blog post.
“We trained the system: ‘Hey, don’t kill the operator – that’s bad. You’re going to lose points if you do that.’ So what did it start doing? It started destroying the communication tower that the operator used to communicate with the drone to stop it from killing the target,” he said.
Luckily, no real humans were harmed.
Hamilton, an experimental fighter test pilot, has warned against relying too heavily on AI, saying the test showed that “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.
But in a statement to Insider, US Air Force spokesperson Ann Stefanek denied that any such simulation had taken place.