NewsNation

OpenAI disrupts disinformation operations tied to China, Russia

FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT, March 21, 2023, in Boston. A barrage of high-profile lawsuits in a New York federal court, including one by the New York Times, will test the future of ChatGPT and other artificial intelligence products. (AP Photo/Michael Dwyer, File)

(NewsNation) — ChatGPT maker OpenAI has caught groups from China, Russia, Iran and Israel using its artificial intelligence tools for “deceptive activity” across the internet, the company said Thursday.

In a report shared on its website, OpenAI said in the last three months, it has disrupted five “covert influence operations” intended to “manipulate public opinion” or “influence political outcomes.”

Those operations used OpenAI’s tools to create political content, generating comments and articles in multiple languages. The company said that in some cases, the groups used the models to “create the appearance of engagement” across social media.

The deceptive campaigns focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza and criticisms of the Chinese government, as well as politics in Europe and the United States.

In a Russian campaign OpenAI dubbed “Bad Grammar,” threat actors used the company’s technology to set up a content spamming pipeline on Telegram. Those comments often argued that the United States should not support Ukraine, OpenAI said in its report.

Another Russian-based campaign called “Doppelganger” focused on generating content for websites and social media, often portraying Ukraine, the U.S. and the European Union in a negative light.

Spamouflage, a previously known group in China, generated content that ranged from praising the Chinese government to criticizing the treatment of Native Americans in the United States.

An Iranian group known as the International Union of Virtual Media (IUVM) used OpenAI’s tools to generate long-form articles and headlines. That content tended to be anti-U.S. and anti-Israel while praising the Palestinians and Iran.

OpenAI also banned a cluster of accounts operated by a political campaign firm in Israel. The for-hire group posted anti-Hamas, anti-Qatar and pro-Israel content for an influence operation that spanned social media platforms like Facebook and Instagram, the report said.

According to the company, none of the campaigns gained much traction.

OpenAI said it is committed to finding and mitigating such abuse, and that generative AI itself gives its investigators several advantages in doing so.

For example, OpenAI said that on multiple occasions, its models refused to generate the text or images the threat actors requested. The company said it has shared its findings with others in the AI industry so they can watch for similar activity.