More than 250,000 requests to OpenAI platforms to make deepfakes of US election candidates were rejected, the company says.

ChatGPT refused more than 250,000 requests to generate images of the US election candidates, according to the artificial intelligence (AI) chatbot's maker.

OpenAI, the company behind the AI chatbot, said in a blog update on Friday that its image-generation platform DALL-E rejected requests to make images of president-elect Donald Trump, his pick for vice president, JD Vance, current President Joe Biden, Democratic candidate Kamala Harris, and her vice-presidential running mate, Tim Walz.

The refusals were due to “safety measures” that OpenAI put in place before election day, the blog post said.

“These guardrails are especially important in an elections context and are a key part of our broader efforts to prevent our tools being used for deceptive or harmful purposes,” the update read.

OpenAI said its teams “have not seen evidence” of any US election-related influence operations going viral by using its platforms, the blog continued.

The company said in August it stopped an Iranian influence campaign called Storm-2035 from generating articles about US politics and posing as conservative and progressive news outlets.

Accounts related to Storm-2035 were later banned from using OpenAI’s platforms.

Another update in October disclosed that OpenAI had disrupted more than “20 operations and deceptive networks” from across the globe that were using its platforms.

Among these networks, the US election-related operations it identified were unable to generate “viral engagement”, the report found.
