New Era of AI Deepfakes Complicates 2024 Elections

Deceptive videos, audio and images are more sophisticated, easier to make as tech industry wrestles with how to keep up

The explosion of artificial-intelligence technology is making it easier than ever to deceive people on the internet and turning the 2024 U.S. presidential election into an unprecedented test of how to police deceptive content.

An early salvo was fired last month in New Hampshire. Days before the state’s presidential primary, an estimated 5,000 to 25,000 calls went out telling recipients not to bother voting.

“Your vote makes a difference in November, not this Tuesday,” the voice said. It sounded like President Biden, but it was created by AI, according to an analysis by security firm Pindrop. The message also discouraged independent voters from participating in the Republican primary.

On social media, however, the call’s origin was up for debate. On Meta Platforms’ Threads app, some users saw an attempt to suppress voter turnout. “This IS election interference,” wrote one. On former President Donald Trump’s site Truth Social, some users blamed Democrats for the call. “Probably not fake,” one posted.

When Pindrop analyzed the audio, its analysts found telltale signs the call was phony. The Biden voice pronounced the noisy fricative sounds that make up letters such as S and F, for example, in a way no human speaker would.
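Pindrop hasn’t published the full details of its method, but as a rough illustration of the underlying idea, the Python sketch below measures how much of a recording’s energy sits in the high-frequency band where fricatives such as /s/ and /f/ concentrate. The file name call.wav, the 4 kHz cutoff and the 0.6 threshold are assumptions made for this example, not parameters from Pindrop’s analysis.

```python
# Illustrative sketch only: a crude measure of high-frequency (fricative-band)
# energy in a recording. Commercial audio-forensics tools are far more sophisticated.
import numpy as np
import librosa

# "call.wav", the 16 kHz sample rate and the band/threshold values below are
# assumptions for this example.
y, sr = librosa.load("call.wav", sr=16000)
S = np.abs(librosa.stft(y, n_fft=512, hop_length=160)) ** 2   # power spectrogram
freqs = librosa.fft_frequencies(sr=sr, n_fft=512)

high = S[freqs >= 4000].sum(axis=0)   # energy in the band where /s/ and /f/ live
total = S.sum(axis=0) + 1e-10
ratio = high / total                  # per-frame share of high-band energy

# Frames dominated by high-band noise are fricative-like; an analyst could
# compare their spectral shape against recordings of natural speech.
fricative_frames = ratio > 0.6
print(f"{fricative_frames.mean():.1%} of frames look fricative-dominated")
```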

Two weeks later, the New Hampshire attorney general’s office said it had identified a Texas-based company named Life Corp. as the source of the calls and had issued a cease-and-desist order citing a law against voter suppression. Representatives for Life Corp. didn’t respond to emails seeking comment.

Thanks to recent advances in generative AI, virtually anyone can create increasingly convincing but fake images, audio and videos, as well as fictional social-media users and bots that appear human. With a busy year for elections worldwide in 2024, voters are already running into AI-powered falsehoods that risk confusing them, according to researchers and U.S. officials.

The proliferation of AI fakes also comes as social-media companies are trying to avoid having to adjudicate thorny content issues around U.S. politics. Platforms also say they want to respect free-speech considerations.

Around 70 countries, collectively home to nearly half the world’s population of roughly four billion people, are set to hold national elections this year, according to the International Foundation for Electoral Systems.

While AI makers and social-media platforms often have policies against using AI in deceptive ways or misleading people about how to vote, how well those companies can enforce those rules is uncertain.

OpenAI Chief Executive Sam Altman said at a Bloomberg event in January during the World Economic Forum’s annual meeting in Davos, Switzerland, that while OpenAI is preparing safeguards, he’s still wary about how his company’s tech might be used in elections. “We’re going to have to watch this incredibly closely this year,” Altman said.

OpenAI says it is taking a number of measures to prepare for elections, including prohibiting the use of its tools for political campaigning; encoding details about the provenance of images generated by its DALL-E tool; and addressing questions about how and where to vote in the U.S. with a link to CanIVote.org, operated by the National Association of Secretaries of State.

In early February, the oversight board of Facebook parent Meta Platforms called the platform’s rules around doctored content incoherent, after reviewing an incident last year in which Facebook didn’t remove an altered video of Biden.

The board, an outside body created by the company, found that Facebook abided by existing policy, but said the platform should act quickly to clarify its policy around manipulated content before upcoming elections. A Meta spokesman said the company was reviewing the board’s guidance and would respond within 60 days.

Meta says its plan for elections in 2024 is largely consistent with previous years. For example, it will prohibit new political ads in the final week before the U.S.’s November contest. Meta also labels photorealistic images created using its AI feature.

People who’ve studied elections debate how much an AI deepfake could actually sway someone’s vote, especially in America, where most people say they’ve likely already decided who they’ll support for president. Yet the very possibility of AI-generated fakes could also muddy the waters in a different way by leading people to question even real images and recordings.

Claims about AI are being used to “discredit things people don’t want to believe”—for example, legitimate video shot around the Oct. 7 Hamas attacks on Israel, said Renée DiResta, research manager at the Stanford Internet Observatory.

Social-media giants have been struggling for years with questions around political content. In 2020, they went to aggressive lengths to police political discourse, partly in response to reports of Russian interference in the U.S. election four years earlier.

Now, they’re easing up on some counts, particularly at Elon Musk’s X.

Since his 2022 acquisition of Twitter, Musk has renamed the site and rolled back many of its previous restrictions in the name of free speech. X has reinstated many previously suspended accounts and begun selling verification check marks that were previously reserved for notable figures. X also cut over 1,200 trust and safety workers, according to figures it disclosed to an Australian online-safety regulator last year, part of widespread layoffs Musk said were needed to stabilize the company’s financial situation.

More recently, X has said it is hiring more safety staffers, including some 100 content moderators who will work in Austin, Texas, as well as people in other positions globally.

YouTube said it stopped removing videos claiming widespread fraud occurred in the 2020 and other past U.S. elections, citing concerns about curtailing political speech. Meta took a similar stance when it decided to allow political ads questioning the legitimacy of Biden’s 2020 victory.

Meta also let go many employees who were working on election policy during broader layoffs starting in late 2022, though the company says its overall trust and safety efforts have expanded.

X, Meta and YouTube all have reinstated Trump after banning him following the Jan. 6, 2021, attack on the U.S. Capitol, citing reasons including that the public should be able to hear what candidates are saying. Trump has repeatedly made the false claim that he won the 2020 election or that it was “rigged.”

Katie Harbath, a former Facebook public-policy director, said she thinks platforms have gotten exhausted trying to adjudicate issues around political content. There’s no clear agreement around exactly what the rules and penalties should be, she added.

“A lot of them have been more like, ‘It’s probably better for us to be as hands-off as possible,’” Harbath said.

The companies say they remain committed to fighting deceptive content and helping users get trustworthy information about how and where to vote. X says its efforts include bolstering its fact-checking feature Community Notes, which relies on volunteers to add context to posts.

Critics, including Musk and many conservatives, have assailed steps that social-media giants, particularly Twitter, took to manage political content around 2020. They have pointed, for example, to an episode shortly before the November 2020 vote, when Twitter temporarily blocked links to New York Post articles about Hunter Biden, son of now-President Biden.

(The Post and The Wall Street Journal are both owned by News Corp.)

Twitter executives later conceded they had overstepped but said they had acted out of concern around possibly hacked materials, not due to political leanings.

Other changes this election cycle have come out of a lawsuit led by the Republican attorneys general of Missouri and Louisiana, who allege that Biden administration officials policed social-media posts in ways that amounted to unconstitutional censorship. Lower courts issued rulings imposing limits on how the federal government could communicate with social-media platforms, though the Supreme Court later put those decisions on hold while the case remains pending before it. Congressional Republicans also have been investigating anti-disinformation efforts.

“We’re having some interaction with social-media companies, but all of those interactions have changed fundamentally in the wake of the court’s ruling,” Federal Bureau of Investigation Director Christopher Wray said during a Senate hearing in October. He said the agency was acting “out of an abundance of caution.”

Democratic officials and disinformation researchers say such communications are critical for combating nefarious online activity, including foreign influence efforts.

Federal authorities say they’re on alert. So far, the U.S. hasn’t detected a major foreign-backed interference operation targeting the 2024 election, according to senior intelligence officials.

Gen. Paul Nakasone, the recently retired chief of U.S. Cyber Command and the National Security Agency, vowed before stepping down that the 2024 U.S. election would be “the most secure election we’ve had to date” from foreign interference. “If this isn’t necessarily going to work in the same methodology it did in ’22 or ’20,” he added, “then we’ve got to find new ways to do it.”

Write to Robert McMillan at robert.mcmillan@wsj.com, Alexa Corse at alexa.corse@wsj.com and Dustin Volz at dustin.volz@wsj.com
