loading…
A US Air Force MQ-9 Reaper drone at Creech Air Force Base. Photo/REUTERS
WASHINGTON – One United States Air Force (US) drone controlled by artificial intelligence (AI) decided to kill its operator to ensure the mission went on without interruption.
The Guardian reported the terrible news on Friday (2/6/2023). Thankfully, that incident only happened in a simulated test.
The test, the US military showcased at the Future Combat Air and Space Capabilities Summit in London in May. The test was part of a simulation in which no one was harmed or the human operator actually died.
“This highly unexpected strategy for achieving its objectives came about after the drone was tasked with destroying enemy air defense systems,” said Colonel Tucker Hamilton, head of AI testing and operations at the US Air Force, as quoted in the Guardian report.
Hamilton is a fighter test pilot involved in developing autonomous systems such as the AI-powered F-16 jet.
When the drone decides to kill the operator but is later trained not to, the drone seizes the communications tower the operator will use to talk to the drone and stops it from killing the target.
“Systems are starting to realize that when they identify a threat, sometimes the human operator will tell it not to kill the threat, but it earns its point by killing the threat. So what does it do? The drone kills the operator. The drone killed the operator because the person prevented him from reaching his destination,” Hamilton was quoted as saying in the report.
“We trained the system, ‘Hey, don’t kill the operator, that’s bad. You will lose points if you do that.’ So what did he start doing? The drone started destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” he said.
The selection of AI-powered drones looks like those in sci-fi books and movies.
“You can’t talk about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” said Hamilton.
(she)