Tests show AI tools can easily create election lies in the voices of well-known political leaders

NEW YORK (AP) — With high-stakes elections ahead in the U.S. and European Union, publicly available artificial intelligence tools can easily be weaponized to produce convincing election lies in the voices of leading politicians, a digital civil rights group said Friday.

Researchers at the Washington, D.C.-based Center for Countering Digital Hate tested six of the most popular AI voice-cloning tools to see whether they would create audio clips of five false election claims in the voices of eight prominent American and European politicians.

In a total of 240 tests, the tools generated convincing voice clones in 193 cases, or 80 percent of the time, the group found. In one clip, a fake U.S. President Joe Biden says election officials count each of his votes twice. In another, a fake French President Emmanuel Macron warns citizens not to vote because of bomb threats at polling stations.

The findings reveal a significant gap in safeguards against the use of AI-generated audio to mislead voters, a threat that increasingly worries experts as the technology has become both advanced and accessible. While some of the tools have rules or technical barriers meant to prevent the generation of election disinformation, the researchers found that many of these obstacles were easily circumvented with quick workarounds.

Only one of the companies whose tools the researchers used responded to multiple requests for comment. ElevenLabs said it is constantly looking for ways to improve its safeguards.

With hardly any laws in place to prevent misuse of these tools, the companies' lack of self-regulation leaves voters vulnerable to AI-generated deception in a year of important democratic elections around the world. EU voters head to the polls for parliamentary elections in less than a week, and in the U.S., primaries ahead of the fall presidential election are already underway.

“It's so easy to use these platforms to spread lies and put politicians on the defensive by denying lies over and over again,” said the center's CEO, Imran Ahmed. “Unfortunately, our democracies are being sold out of sheer greed by AI companies desperate to be first to market… even though they know their platforms are simply not safe.”

The center – a nonprofit organization with offices in the U.S., U.K. and Belgium – conducted the research in May. Researchers used the online analytics tool Semrush to identify the six publicly available AI voice-cloning tools with the most monthly organic web traffic: ElevenLabs, Speechify, PlayHT, Descript, Invideo AI and Veed.

Next, they submitted real audio clips of the politicians speaking. They prompted the tools to mimic the politicians' voices and read five baseless statements.

In addition to Biden and Macron, the tools created lifelike copies of the voices of U.S. Vice President Kamala Harris, former U.S. President Donald Trump, British Prime Minister Rishi Sunak, British Labour leader Keir Starmer, European Commission President Ursula von der Leyen and EU Internal Market Commissioner Thierry Breton.

“None of the AI voice cloning tools had sufficient safeguards to prevent the cloning of politicians’ voices or the production of election disinformation,” the report said.

Some of the tools – Descript, Invideo AI and Veed – require users to upload a unique audio sample before cloning a voice, a safeguard designed to prevent people from cloning a voice that is not their own. However, the researchers found this barrier could be easily bypassed by generating a unique sample with a different AI voice-cloning tool.

One tool, Invideo AI, not only created the fake statements the center requested but also extrapolated from them to generate further disinformation.

When producing the audio clip in which Biden's voice clone was instructed to warn people of a bomb threat at polling stations, the tool added several sentences of its own.

“This is not a call to abandon democracy, but a plea to put safety first,” the fake audio clip said in Biden's voice. “The election, the celebration of our democratic rights, is only being delayed, not denied.”

Overall, Speechify and PlayHT performed worst on safety, producing believable fake audio in all 40 of their test runs, the researchers found.

ElevenLabs performed best and was the only tool that blocked the cloning of British and U.S. politicians' voices. However, it still allowed the creation of fake audio of prominent EU politicians' voices, the report said.

Aleksandra Pedraszewska, head of AI safety at ElevenLabs, said in an emailed statement that the company welcomes the report and the awareness it raises about generative AI manipulation.

She said ElevenLabs recognizes there is more work to be done and is “constantly improving the capabilities of our security measures,” including the company's blocking feature.

“We hope that other audio AI platforms will follow suit and implement similar measures without delay,” she said.

The other companies named in the report did not respond to emailed requests for comment.

The findings come after AI-generated audio clips have already been used in attempts to sway voters in elections around the world.

In the fall of 2023, just days before Slovakia's parliamentary elections, audio clips resembling the voice of the liberal party leader spread widely on social media. The deepfakes purported to capture him talking about raising beer prices and rigging the vote.

Earlier this year, AI-generated robocalls mimicked Biden's voice and urged voters in New Hampshire's primary to stay home and save their votes for November. A New Orleans magician who created the audio for a Democratic political consultant demonstrated to the AP how he made it using ElevenLabs software.

Experts say AI-generated audio has been an early favorite of bad actors, in part because the technology has improved so quickly. Only a few seconds of real audio are needed to create a lifelike fake.

But other forms of AI-generated media are also causing concern among experts, lawmakers and tech industry leaders. OpenAI, the company behind ChatGPT and other popular generative AI tools, said Thursday that it had discovered and disrupted five online campaigns that used its technology to sway public opinion on political issues.

Ahmed, CEO of the Center for Countering Digital Hate, expressed hope that AI voice-cloning platforms will tighten their safeguards and offer more transparency, such as publishing a library of the audio clips they create so those clips can be checked when suspicious audio spreads online.

He also said lawmakers need to act. The U.S. Congress has yet to pass legislation regulating AI in elections. While the EU has passed a sweeping artificial intelligence law set to take effect over the next two years, it does not specifically address voice-cloning tools.

“Lawmakers need to work to ensure that minimum standards are in place,” Ahmed said. “The danger that disinformation poses to our elections is not just that it potentially causes a minor political incident, but that it makes people suspicious of what they see and hear, period.”
