OpenAI Says It Caught a ChatGPT-Powered ‘Iranian Influence Operation’

OpenAI said on Friday that it caught an “Iranian influence operation” using ChatGPT. The group, known as Storm-2035, generated articles and social-media comments to shape public opinion around Vice President Kamala Harris and former President Donald Trump, according to OpenAI. In addition to targeting 2024 U.S. presidential candidates, OpenAI said Storm-2035 generated content around Israel’s invasion of Gaza and Israel’s presence at the 2024 Olympics, the rights of U.S.-based Latinx communities, Venezuelan politics, and Scottish independence from the U.K.

Most of the posts and articles spotted by OpenAI received little pickup from real people, the company said. Still, it described the incident in detail on its blog, writing that it found a dozen X (formerly Twitter) accounts posing as conservatives and progressives and using hashtags such as “#DumpTrump” and “#DumpKamala.” Storm-2035 also tapped at least one Instagram account to spread AI-generated content, per OpenAI.

OpenAI has previously described “state-affiliated threat actors” using its tools, but this is the first time it’s disclosed a specific election interference campaign utilizing ChatGPT.

OpenAI said it responded to the discovery by banning a “cluster” of accounts that created the content; the company also said it “shared threat intelligence with government, campaign, and industry stakeholders.” The firm did not name those stakeholders, but it did share screenshots of a few of the posts, which showed view counts ranging from 8 to 207 and hardly any likes.

OpenAI’s screenshot of X posts generated with ChatGPT to influence the election. © OpenAI

OpenAI said Storm-2035 also shared ChatGPT-generated articles across several websites that “posed as both progressive and conservative news outlets.” The firm added, “The majority of social media posts that we identified received few or no likes, shares, or comments. We similarly did not find indications of the web articles being shared across social media.”

An August 6 report from Microsoft described Storm-2035 in similar terms: an Iranian network with “four websites masquerading as news outlets.” According to Microsoft, the network created “polarizing” posts about the election, LGBTQIA+ rights, and Israel’s invasion of Gaza.

Reports of online foreign interference in U.S. elections are now virtually commonplace. Microsoft’s August 6 report, for example, also detailed an Iran-linked phishing attack that targeted an unnamed, “high-ranking” U.S. campaign official. Shortly after Microsoft dropped the report, the Trump campaign announced that “foreign sources” had stolen some of its emails and documents in an attempt to influence the 2024 presidential election. Eight years earlier, Russia-linked hackers operating under the “Guccifer 2.0” persona made off with Democratic National Committee emails through a similar phishing attack, ultimately leaking thousands of DNC emails and documents ahead of the 2016 Democratic National Convention.

Under pressure from lawmakers, big tech companies have launched various efforts over the years in response to such incidents. Those efforts include meme fact-checks, wishful thinking, a short-lived political ad ban, a “war room,” and collaborations with rivals and cops alike.
