Last Monday morning a worrying message appeared on Twitter. According to the verified Bloomberg Feed account (among others), there had been an explosion at the Pentagon. Beneath the eye-catching headline was a photo of a large column of black smoke right next to a building whose construction resembled the Pentagon's. It was all a lie: the account has no relationship with Bloomberg, despite what its name suggests, and Twitter suspended it shortly after it published the false news.
A dangerous deepfake. The message posted by that verified account began to spread virally at a dizzying pace. Although no major US news outlet carried it, many users did, as did the Russian media organization Russia Today (RT). The impact of the news was significant in financial markets.
A sudden stock market dip. Within minutes, several stock indices dropped. The S&P 500 fell 0.3%, while the Dow Jones index also fell sharply in just four minutes. Both recovered just as quickly once the news was revealed to be false.
Light damage. Despite those sudden drops, the actual damage appears to have been minimal. The Arlington County Fire Department (the county where the Pentagon is located) quickly denied the news. One of its members, Captain Nate Hiner, explained: “Just looking at the image, that’s not the Pentagon. I have no idea what building it is. There’s not a building like it in Arlington.”
It was all a lie. A close inspection of the photo made it clear that it was a deepfake, a fake image created digitally. It could apparently have been created by a generative AI model, but as the experts explained, that was not really the problem. Renée DiResta, research director at the Stanford Internet Observatory and an expert in how disinformation circulates, made the point in The Washington Post:
“This is not an AI problem per se. Anyone with Photoshop experience could have made that image; ironically, they probably could have done it better. But it’s an example of how the signals that help people decide whether breaking news information is trustworthy on Twitter have become useless, just as the ability to create high-resolution unreality has become available to everyone.”
Whose fault is it? Not the AI’s, of course. Part of the blame lies with Twitter, which after the massive layoffs has been left without the teams that moderated content and fought disinformation. It is also a problem that anyone can now get a verified account by paying for Twitter Blue’s $8/month subscription and confirming a mobile phone number. That has already led to some recent pranks.
Beware of retweeting. Blame also lies with the way the virality of these images works: if nobody checks whether a tweet is real, the problem gets worse. As WaPo noted, other verified accounts such as OSINTdefender retweeted the message to their 336,000 followers.
They later deleted the message (it had already been viewed at least 730,000 times), explaining that they had seen the news on Discord from someone named “Walter Bloomberg”, and that the image of the explosion came from someone who had posted it on Facebook and claimed to work in Arlington. That Facebook page was later deleted.
If you don’t have a blue check, I don’t (quite) believe you. Other accounts with thousands of followers, whether bots or not, republished the news, and some of them carried the verified mark. Meanwhile, the Pentagon Force Protection Agency, the body that actually protects the Pentagon, does not: Twitter has not granted it the gray check that identifies verified institutions.
That agency retweeted the Arlington County Fire Department’s message denying the news. That tweet now has more than 117,000 views, far fewer than the original tweets that spread the false story and were replicated by a multitude of accounts. Twitter has since flagged several of the tweets that spread the false information and added a “Stay informed” box warning that the message is probably false.
An old problem that is likely to become more prevalent. Although in this case the image might deceive at first glance, detecting that it was fake was relatively easy. That is becoming ever harder with generative AI models, and in the short term it will probably be almost impossible to tell that an image is fake. But that will not be the real problem: users, and especially the media and official organizations, will be the ones who have to be very careful not to help false information spread. Platforms like Twitter would do well to dedicate more staff to the problem, especially now that these AI models will likely make it far more frequent. Meanwhile, systems that validate, or at least label, AI-generated content are starting to appear.
In Xataka | Verifying Internet information in times of ‘fake news’: a problem that neither Facebook nor Google is going to solve