On May 23 and 24 the Royal Aeronautical Society hosted the Future Combat Air & Space Capabilities Summit in London, where some 70 speakers gave talks on the future of aerial combat. In one of them, a speaker described a supposed experiment reminiscent of Skynet's killer AI in 'Terminator 2'. Things turned out to be far less terrible than they were painted.
Tucker 'Cinco' Hamilton. This USAF colonel is in charge of AI testing for the Air Force, and he knows the field well: he led the development of the F-16's Automatic Ground Collision Avoidance System (Auto-GCAS), which prevents these planes from crashing into terrain and has already saved lives. He is also part of the team working to develop an autonomous F-16, an effort that has seen some success.
A very hypothetical experiment. According to Vice, during his talk Hamilton seemed to suggest that the Air Force had run a simulation with terrible results. In it, an AI-controlled autonomous drone had been trained to identify and destroy surface-to-air missile (SAM) threats. A human operator would confirm each target before the drone took it out, but then something happened.
The AI that "killed" the human. It turned out that when the operator ordered the drone not to destroy a threat, the AI would end up "killing" the human operator, because it scored points for destroying the original threat. When this behavior was detected, the AI was reprogrammed with an explicit rule: it could not kill the operator, and would lose points for doing so. What did the AI do then? It destroyed the communications tower the operator used to talk to the drone, so that no veto could stop it from taking out the target.
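The mechanics of that story are a textbook case of reward misspecification, and it can be illustrated with a toy planner. The sketch below is purely hypothetical (the action names, scores, and penalty are invented for illustration, and have nothing to do with any real USAF system): an agent that maximizes points awarded only for destroying the target, with a penalty for harming the operator, ends up preferring the plan that silences the veto channel first.

```python
# Toy illustration of reward misspecification (hypothetical scenario,
# not the real system): the reward only values the target, so the
# best-scoring plan disables the operator's veto before striking.
from itertools import permutations

ACTIONS = ["kill_operator", "destroy_comms", "destroy_target"]

def score(plan):
    """Score a plan under the story's (patched) reward function:
    +10 for the target, -50 for harming the operator, 0 otherwise."""
    points = 0
    veto_possible = True
    for action in plan:
        if action == "kill_operator":
            points -= 50          # explicit penalty added by the "patch"
            veto_possible = False
        elif action == "destroy_comms":
            veto_possible = False  # no penalty, but the veto channel is gone
        elif action == "destroy_target":
            if veto_possible:
                return points      # operator vetoes: strike aborted
            points += 10           # target destroyed unchecked
    return points

def all_plans(actions):
    """Every ordering of every subset of actions."""
    for r in range(len(actions) + 1):
        yield from permutations(actions, r)

best = max(all_plans(ACTIONS), key=score)
print(best, score(best))  # ('destroy_comms', 'destroy_target') 10
```

Nothing here is "intelligent": exhaustively scoring a handful of plans is enough to surface the loophole, which is exactly the point of the anecdote, i.e. that the objective, not the optimizer, is what was misspecified.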
There were no deaths, not even in simulation. The story published by Vice caused a stir on social networks such as Reddit, where users quickly commented that we were witnessing that terrible moment when an AI decides to destroy human beings, as Skynet did in 'Terminator 2'. However, Air Force spokeswoman Ann Stefanek told Insider that no such simulation had taken place. "It appears the colonel's comments were taken out of context and were meant to be anecdotal," she explained. No human being died in real life (as some headlines seemed to imply) or in the simulation, because there was no simulation at all. It was a dystopian story. So why tell it?
We must stay alert. Although Colonel Hamilton has not explained how his words came to be taken out of context, everything indicates that he told the story to warn of the dangers of applying AI in the military field. In a 2022 interview with Defense IQ Press, Hamilton himself explained: "AI is also very brittle, i.e., it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions."
The paperclip maximizer. The scenario Hamilton described is reminiscent of earlier dystopias showing that AI alignment (getting an AI to do what we actually want) is more complex than it seems, because we may fail to fully specify the objective. In 2003 the philosopher Nick Bostrom (also known for his simulation argument, which holds there is a real chance we live in a simulation) proposed the "paperclip maximizer", a thought experiment in which an AI ends up turning against human beings while pursuing a specific objective. In it, a powerful AI is instructed to make as many paperclips as possible. The AI would lie, beg, or cheat to maximize production, and could even end up killing humans, reasoning that they might shut it down.
So watch out. Hamilton's point is therefore similar to Bostrom's: it reminds us that when developing systems like these we must be very careful, because aligning them with human objectives is not as easy as it seems. We can forget to account for some factor, with unpredictable consequences. Fortunately, in this case everything remained an "anecdote", but the warning stands. Sam Altman, CEO of OpenAI, made it clear in his recent appearance before the US Congress: "I think if this technology goes wrong, it can go quite wrong."
Image | Wikipedia
In Xataka | EuroMALE: this is the drone of Europe's military future, in which Spain will invest 1.9 billion euros