A few weeks ago we reported on Xataka how artificial intelligence is being used for fraud in various ways. In that article we told the story of Ruth Card, 73, an older woman who received a call from what appeared to be her grandson Brandon but was actually a group of con men: “Grandma, I’m in jail, with no wallet, no phone. I need bail money.” The scam was carried out with an audio deepfake that mimicked her grandson’s voice in real time during the call.
It has not stopped there. This type of scam has evolved rapidly in recent months with the emergence of new AI tools, and voice and even video clones of a person are already being used to scam their relatives over video calls.
AI built to defraud. As El Mundo reports, a man from northern China received a WeChat video call from his wife. She asked him for 3,600 euros because she had supposedly been in a car accident and needed to settle the matter with the other driver. Of course, it was not his wife. Although the call came from a different account, the man fell into the trap: the face that appeared in the video call, gesturing and speaking with his wife’s tone of voice, was hers. It was another AI imitation.
Apparently, the scammers had been watching the couple and knew their habits. The woman also ran a fairly popular cooking channel on a social network, from which they captured her face and voice to build the deepfake.
A global trend. A new wave of scams using AI-generated voices is growing around the world. The Washington Post has compiled several recent cases and warns that, according to FTC data, impersonation fraud was the second most common type of scam in 2022, with more than 36,000 complaints from people who were deceived (or nearly deceived) by others posing as friends or relatives. In 2021, scammers managed to steal $35 million from a bank using this technology.
How does it work? Advances in artificial intelligence already make it possible to replicate a voice from an audio sample of just a few sentences (something easily harvested from the person’s social networks). Speech-generation software analyzes what makes a voice unique (age, gender, accent), then searches a vast database of voices to find similar ones and predict patterns. It can then recreate the person’s pitch, timbre, and individual sounds to produce a convincing likeness. From there, the scammer can make that voice say whatever they want.
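To illustrate how low the barrier has become, here is a minimal sketch of this kind of one-shot voice cloning using the open-source Coqui TTS library and its XTTS v2 model. This is not a tool named in this article, and the file paths and text are illustrative assumptions; it simply shows the general pattern: a short reference clip in, arbitrary speech out.

```python
# Minimal voice-cloning sketch with the open-source Coqui TTS package
# (pip install TTS). Illustrative only: paths and text are made up.
from TTS.api import TTS

# Load XTTS v2, a multilingual model that supports voice cloning
# from a short reference recording.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of the target's voice, e.g. lifted from a public video
# (hypothetical path).
reference_clip = "target_voice_sample.wav"

# Synthesize arbitrary text in the cloned voice and write it to a file.
tts.tts_to_file(
    text="Hi, it's me. I'm in trouble and I need money urgently.",
    speaker_wav=reference_clip,
    language="en",
    file_path="cloned_voice.wav",
)
```

A handful of lines like these, plus a publicly posted clip, are enough to produce audio that can pass for a relative’s voice over a phone line, which is exactly the pattern the scams described here exploit.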
In most cases the fake is almost impossible to distinguish from the real voice, especially when the caller conveys a sense of urgency. And it is even harder for an older person unfamiliar with these technologies to recognize the danger. Companies like ElevenLabs, an AI speech-synthesis startup, turn a short voice sample into a synthetically generated voice for a modest price, from 5 to 300 euros per month depending on the audio allowance.
Concerns in China. There, the phenomenon already worries the authorities, who have begun advising the public through posts on Weibo, China’s equivalent of Twitter, to “be cautious when providing biometric information and refrain from sharing videos and other images of themselves on the Internet.” The cases vary widely. One has stirred controversy in the e-commerce industry: some users are cloning the faces of famous streamers to sell products.
Another high-profile case was the arrest of a man who had used ChatGPT to fabricate an article about a train accident that killed nine people. Not only that: he had managed to push it to the top of Baidu’s search results.
Legislation. It is the biggest obstacle to stopping this scourge. Experts say regulators, law enforcement, and the courts lack the resources to curb the growing phenomenon. First, because it is very difficult to identify the scammers or trace the calls, which can originate anywhere in the world, and a country’s jurisdiction does not always reach that far. And second, because the technology is new and there is not enough case law for courts to hold companies responsible.
China is leading the battle against this type of fraud. The country has approved a new law regulating AI technologies that generate text, images, and video. The law, issued by the Cyberspace Administration of China, the country’s Internet regulator, was passed shortly after the launch of ChatGPT, the OpenAI chatbot, which is censored there, although many people access it illicitly.
In Xataka | Telling the real Chicote apart from the AI-generated deepfake Chicote is already almost impossible (and that is a problem)