Khushboo Pareek

OpenAI thwarted 5 international disinformation campaigns that used its tools


OpenAI has identified and disrupted five online campaigns that used its generative AI technology to manipulate public opinion and influence geopolitics.


These campaigns, run by state actors and private companies from Russia, China, Iran, and Israel, leveraged OpenAI’s technology to create social media posts, translate and edit articles, write headlines, and debug computer programs, the company said in a release on Thursday.


The report by OpenAI marks the first time a major AI company has disclosed how its tools were used for online deception, according to social media researchers.


The rise of generative AI has sparked concerns about its role in spreading disinformation, particularly in a year with major global elections.


Ben Nimmo, a principal investigator for OpenAI, said that the company aimed to reveal the realities of how generative AI is changing online deception after much speculation on the topic.


"Our case studies provide examples from some of the most widely reported and longest-running influence campaigns that are currently active," he said.


The campaigns frequently used OpenAI’s technology to post political content, Nimmo said, but the company had trouble determining whether they aimed to sway specific elections or simply to provoke people.


He added that the campaigns did not gain much traction and the AI tools did not seem to increase their reach or impact.


Graham Brookie, senior director of the Atlantic Council’s Digital Forensic Research Labs, warned that the online disinformation landscape could change as generative AI technology becomes more powerful.


OpenAI reported that its tools were utilised in long-standing influence campaigns, notably the Russian Doppelganger and Chinese Spamouflage efforts.


In the Doppelganger campaign, OpenAI’s technology generated anti-Ukraine comments posted across various platforms in multiple languages.


These tools were also employed to translate pro-Russian articles into English and French, and to convert anti-Ukraine news into Facebook posts.


OpenAI’s tools were also used in a previously unreported Russian campaign targeting individuals in Ukraine, Moldova, the Baltic States, and the United States, primarily through the messaging app Telegram.


This campaign generated comments in Russian and English concerning the Ukraine conflict and Moldovan politics, and debugged computer code for automated Telegram posts.


The Chinese campaign, Spamouflage, used OpenAI to debug code and create social media posts criticising government critics.


Spamouflage is a term for a covert online influence operation, most often associated with China, that disguises propaganda or misinformation as genuine content to sway public opinion on social media platforms.


An Iranian campaign by the International Union of Virtual Media produced and translated articles promoting pro-Iranian, anti-Israeli, and anti-US sentiments.


The Israeli campaign, dubbed Zero Zeno, created fictional personas using OpenAI to post anti-Islamic content on social media in Israel, Canada, and the United States.


Image Source: Unsplash 


