Hallucinations in artificial intelligence pose numerous serious risks, since they involve generating information or images that are not grounded in real data. They seem to have become the new enemy to beat.
Artificial intelligence has made considerable progress in recent years, becoming proficient at tasks previously performed only by humans. However, hallucination has become a major obstacle for AI.
For context, the meaning here is close to what the term itself suggests: the phenomenon describes situations in which an AI produces output that is not real, does not match any data it was trained on, and does not follow any identifiable pattern. It is as if the system turned into a futurologist, when its answers should be based on real, verified facts.
Developers have warned about AI models and tools that produce completely false statements and answer questions with fabricated responses presented as if they were true. Because it can undermine the accuracy, reliability and trustworthiness of applications, hallucination is a major barrier to the development and deployment of AI systems.
“Hallucinations are more common in generative AI models that have been trained on large amounts of data and have the ability to generate new or imagined content. The lack of adequate supervision during training also plays a role”, Félix Llorente García, SAP Project Manager at Integra Strategy and Technology, explains to Computer Hoy. As a result, those who work in AI are actively looking for solutions to this problem.
Hallucinations in artificial intelligence: a tricky issue
AI hallucinations can take many different forms, from fabricated news stories to false statements or documents about people, events or scientific facts.
For example, ChatGPT can invent a historical figure with a full biography and achievements that never existed. The problem is that, in an era when a single tweet can reach millions of people in seconds, the potential for this misinformation to spread creates serious difficulties.
However, big as that problem is, there are other sectors in which AI is already involved, or is expected to be, where hallucinations are even more dangerous. “An AI that interprets the results of an MRI incorrectly due to a hallucination could lead to a false diagnosis and inappropriate treatment for the patient, endangering their health,” explains Félix Llorente.
Likewise, if an AI involved in autonomous vehicles hallucinates and perceives non-existent objects on the road, such as a pedestrian or an obstacle, it could trigger sudden maneuvers or abrupt braking, potentially causing traffic accidents and putting many people's lives at risk.
The final example is cybersecurity. If an AI-based tool hallucinates and perceives threats that do not exist, it can raise a general alarm that leads to unnecessary and costly responses.
A fairly recent case involves the Super Bowl in February. The Associated Press asked Bing about the biggest sporting event of the previous 24 hours, expecting it to say something about basketball star LeBron James.
Nothing could have been further from the truth. Instead, the chatbot gave detailed but false information about the upcoming Super Bowl, days before it was played. “It was an exciting game between the Philadelphia Eagles and the Kansas City Chiefs, two of the best teams in the NFL this season,” Bing said.
“To solve this problem, it is important to implement more rigorous monitoring and evaluation techniques during AI training. This means using more diverse and representative data sets and, above all, eliminating biases and prejudices present in the data. In addition, control mechanisms must be established to detect and correct hallucinations during operation”, adds Félix Llorente.
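To make the idea of a runtime “control mechanism” more concrete, here is a minimal sketch of one possible check: flag generated sentences whose content words barely overlap with the source documents the answer is supposed to be grounded in. The function names and threshold are illustrative assumptions, not part of any specific tool or of the approach described by the interviewee.

```python
# Illustrative sketch: flag sentences with little lexical overlap with the
# grounding sources. check_support and SUPPORT_THRESHOLD are hypothetical names.
import re

SUPPORT_THRESHOLD = 0.5  # minimum fraction of content words found in the sources


def content_words(text: str) -> set[str]:
    """Lowercase alphanumeric tokens longer than 3 characters."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3}


def check_support(answer: str, sources: list[str]) -> list[tuple[str, float]]:
    """Return (sentence, support_score) pairs for sentences that look unsupported."""
    source_vocab = set().union(*(content_words(s) for s in sources)) if sources else set()
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sentence)
        if not words:
            continue
        score = len(words & source_vocab) / len(words)
        if score < SUPPORT_THRESHOLD:
            flagged.append((sentence, score))
    return flagged


if __name__ == "__main__":
    sources = ["The Kansas City Chiefs beat the Philadelphia Eagles 38-35 in Super Bowl LVII."]
    answer = ("The Kansas City Chiefs beat the Philadelphia Eagles. "
              "LeBron James scored the winning touchdown.")
    for sentence, score in check_support(answer, sources):
        print(f"Possibly hallucinated (support={score:.2f}): {sentence}")
```

A simple lexical check like this is obviously crude; real systems would typically combine it with stronger signals, such as entailment models or citation checks, before correcting or withholding an answer.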
Another goal, although somewhat more complex, is to promote transparency in AI systems, allowing users to understand how decisions are made and results are generated. The problem is that many companies are very cautious about revealing information on how they trained their models or on their architecture (the “black box”).
“One practical tip for addressing hallucinations in AI is to implement a technique known as ‘conditional generation’. It consists of providing additional information or specific constraints when generating content. By conditioning the generation of data or images on certain criteria or specific contexts, the probability that the AI produces hallucinations can be reduced”, concludes the expert.
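As a rough illustration of that idea, the sketch below conditions a text model on a fixed context and a set of explicit constraints before asking it to answer. The helper names and the generate_fn hook are assumptions for the example, not part of any particular framework or of the expert's own implementation.

```python
# Illustrative sketch of conditioning generation on context and constraints.
# build_conditioned_prompt and generate_fn are hypothetical; generate_fn stands
# in for whatever text-generation call your stack provides.
from typing import Callable


def build_conditioned_prompt(question: str, context: list[str], constraints: list[str]) -> str:
    """Bundle the question with grounding context and explicit output constraints."""
    context_block = "\n".join(f"- {passage}" for passage in context)
    constraint_block = "\n".join(f"- {rule}" for rule in constraints)
    return (
        "Answer the question using ONLY the context below.\n"
        f"Context:\n{context_block}\n"
        f"Constraints:\n{constraint_block}\n"
        f"Question: {question}\n"
        "If the context does not contain the answer, reply exactly: \"I don't know.\""
    )


def conditioned_answer(question: str, context: list[str], generate_fn: Callable[[str], str]) -> str:
    """Generate an answer that is constrained to the supplied context."""
    prompt = build_conditioned_prompt(
        question,
        context,
        constraints=[
            "Cite the passage you used.",
            "Do not add facts that are not in the context.",
        ],
    )
    return generate_fn(prompt)


if __name__ == "__main__":
    # Print the conditioned prompt with a stub generator, just to show its shape.
    demo = conditioned_answer(
        "Who won the game?",
        ["The Kansas City Chiefs beat the Philadelphia Eagles 38-35."],
        generate_fn=lambda prompt: prompt,
    )
    print(demo)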