INTRODUCTION
In this news blog, we are going to tell you about OpenAI's accusation that an Iranian group used ChatGPT for election meddling. For more news updates like this, stay tuned to Observer Time.
OpenAI Accuses Iranian Group of Election Meddling Using ChatGPT
The AI company found its chatbot was used to make content for websites and social media posts aimed at increasing U.S. political polarization but says the effort didn’t gain traction.
Artificial intelligence company OpenAI said Friday that an Iranian group had used its ChatGPT chatbot to generate content to be posted on websites and social media, seemingly aimed at stirring up polarization among American voters in the presidential election.
The sites and social media accounts that OpenAI discovered posted articles and opinions made with help from ChatGPT on topics including the conflict in Gaza and the Olympic Games.
They also posted material about the U.S. presidential election, spreading misinformation and writing critically about both candidates, a company report said.
Some appeared on sites that Microsoft last week said were used by Iran to post fake news articles intended to amp up political division in the United States, OpenAI said.
The AI company banned the ChatGPT accounts associated with the Iranian efforts and said their posts had not gained widespread attention from social media users.
OpenAI found “a dozen” accounts on X and one on Instagram that it linked to the Iranian operation and said all appeared to have been taken down after it notified those social media companies.
Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, said the activity was the first case of the company detecting an operation that had the U.S. election as a primary target.
“Even though it doesn’t seem to have reached people, it’s an important reminder that we all need to stay alert but stay calm,” he said.
The OpenAI report adds to recent evidence of tech-centric Iranian attempts to influence the U.S. election, detailed in reports from Microsoft and Google.
One website flagged Friday by OpenAI, Teorator, bills itself as “your ultimate destination for uncovering the truths they don’t want you to know” and has posted articles critical of Democratic vice-presidential candidate Tim Walz.
Another site called Even Politics posted articles critical of Republican candidate Donald Trump and other conservative figures such as Elon Musk.
In May, OpenAI first detailed attempts by government actors to use its AI to create propaganda, saying it detected groups from Iran, Russia, China, and Israel using ChatGPT to create content in multiple languages.
None of those influence operations got widespread traction with internet users, Nimmo said at the time. OpenAI also has acknowledged that the company may have failed to detect stealthier operations using its technology.
As billions of people vote in elections around the world this year, democracy advocates, politicians, and AI researchers have raised concerns that AI could make it easier to generate large amounts of propaganda that appears to be written by real people.
So far, authorities have not reported widespread evidence that foreign governments are succeeding in influencing Americans to vote a certain way.
For regular updates, subscribe to the Observer Time newsletter.