A Meta security report released this week revealed that Russia is trying to use generative AI in its online deception campaigns, but these efforts have so far been largely ineffective.
Meta, the parent company of Facebook and Instagram, found that AI-powered tactics have yielded only minor gains for the scammers, and the company has managed to disrupt many deceptive influence operations.
Russia is a leading source of fake activity on social media, operating networks of fake accounts on Facebook and Instagram. Since its invasion of Ukraine in 2022, that activity has focused on harming Ukraine and its allies.
With the US election approaching, Meta fears that Russian-backed deception campaigns will target candidates who support Ukraine, according to a report by The Guardian.
Russia’s online deception campaigns focus on harming Ukraine and its supporters – Image: evan_huang/Shutterstock
Generative AI: A Dangerous Weapon of Disinformation
Facebook has long been criticized as a vector for electoral disinformation, with Russia using the platform to stoke political divisions, including during the 2016 US election. Experts worry that disinformation will increase because AI tools such as ChatGPT and DALL-E make it easy to generate false content quickly. Generative AI is already being used to create fake images, videos, text and news stories.
Meta investigates these operations by analyzing how accounts behave rather than what they post. Influence campaigns often span multiple platforms, and the company has observed posts on X (formerly Twitter) being used to make fabricated content appear more credible.
Meta shares its findings with X and other companies so that misinformation can be fought in a coordinated way.
Meta’s David Agranovich noted that X is still adjusting after cutting the staff and resources it devoted to content moderation, cuts that contribute to the spread of misinformation.
Meta steps up fight against fake accounts and misleading online campaigns – Image: Angga Budhiyanto/Shutterstock