The patriotic image shows megastar Taylor Swift dressed as Uncle Sam, falsely suggesting that she supports Republican presidential candidate Donald Trump.
“Taylor wants you to vote for Donald Trump,” says the image, which appears to have been generated by artificial intelligence.
Over the weekend, Trump reinforced the lie when he shared the image, along with others showing purported support from Swift fans, with his 7.6 million followers on his social network Truth Social.
Deception has long played a role in politics, but the rise of artificial intelligence tools that let people quickly create fake images or videos by typing a sentence adds another layer of complexity to a familiar problem on social media. Known as deepfakes, these digitally altered images and videos can make it appear as if someone is saying or doing something they are not.
As the race between Trump and Democratic candidate Kamala Harris heats up, disinformation experts are sounding the alarm about the risks of generative AI.
“I'm worried that the closer we get to the election, the worse the situation will be,” said Emilio Ferrara, a computer science professor at the USC Viterbi School of Engineering. “It's going to get a lot worse than it is now.”
Platforms such as Facebook and X, formerly known as Twitter, have rules against manipulated images, audio and video, but they are struggling to enforce those policies as AI-generated content floods the web. Facing accusations that they censor political speech, they have focused more on labeling content and fact-checking than on removing posts. And there are exceptions to the rules, such as satire, that allow people to create and share fake images online.
“We have all the problems of the past, all the myths and disagreements and general stupidity that we've been dealing with for the last 10 years,” said Hany Farid, a UC Berkeley professor who focuses on disinformation and digital forensics. “Now the whole thing is being supercharged with generative AI, and we are really, really polarized.”
Amid surging interest in OpenAI, the maker of the popular generative AI tool ChatGPT, technology companies are encouraging people to use new AI tools that can generate text, images and videos.
Farid, who analyzed the Swift images shared by Trump, said they appear to be a mix of real and fake images, a “sneaky” way to spread misleading content.
People share fake images for a variety of reasons. They may do so to go viral on social media or to troll others. Visual images are a powerful part of propaganda and warp people's views on politics, including the legitimacy of the 2024 presidential election, he said.
On X, images apparently generated by artificial intelligence show Swift hugging Trump, holding his hand or singing a duet while the Republican strums a guitar. Social media users have also used other methods to falsely claim Swift supports Trump.
X labeled as “manipulated media” a video that falsely claimed Swift was supporting Trump. The video, released in February, uses footage of Swift at the 2024 Grammys and makes it appear as if she is holding a sign that says, “Trump won. The Democrats cheated!”
The presidential campaigns are preparing for AI's potential impact on the election.
Vice President Harris' campaign has a cross-departmental team “to prepare for the potential impact of AI in this election, including the threat of malicious deepfakes,” spokeswoman Mia Ehrenberg said in a statement. The campaign allows the use of AI only for “productivity tools” such as data analysis, she added.
Trump's campaign did not respond to a request for comment.
Part of the challenge in curbing fake or manipulated videos is that the federal law governing social media does not specifically address deepfakes. The Communications Decency Act of 1996 shields social media companies from liability for hosting content as long as they do not endorse or control those who posted it.
But over the years, technology companies have come under fire for what is published on their platforms, and many social media companies have adopted content moderation policies in response, such as bans on hate speech.
“It's really a balancing act for social media companies and online operators,” said Joanna Rosen Forster, a partner at the law firm Crowell & Moring.
Lawmakers are trying to address the problem with legislation that would require social media companies to remove unauthorized deepfakes.
Gov. Gavin Newsom said in July that he supported a bill that would make it illegal to manipulate a person's voice with artificial intelligence in campaign ads. The remarks came in response to a video shared by billionaire Elon Musk, the owner of X, that used artificial intelligence to clone Harris' voice. Musk, who supports Trump, later clarified that the video he shared was a parody.
The Screen Actors Guild-American Federation of Television and Radio Artists is one of the groups pushing for legislation against deepfakes.
Duncan Crabtree-Ireland, national executive director and chief negotiator for SAG-AFTRA, said social media companies are not doing enough to address the problem.
“Misinformation and outright lies spread by deepfakes can never be truly undone,” Crabtree-Ireland said. “Especially when elections are often decided by narrow majorities and by complex, secretive systems like the Electoral College, these lies fueled by deepfakes can have devastating real-world consequences.”
Crabtree-Ireland has experienced the problem firsthand. Last year, during a campaign to ratify a collective bargaining agreement, he was the subject of a deepfake video that circulated on Instagram. The video, which showed fake images of Crabtree-Ireland urging members to vote against a contract he had negotiated, was viewed tens of thousands of times. And though it was captioned “deepfake,” he received dozens of messages from union members asking him about it.
It took several days for Instagram to remove the deepfake video, he said.
“I found that very offensive,” Crabtree-Ireland said. “They should not abuse my voice and my face to promote a point of view that I disagree with.”
Given the neck-and-neck race between Harris and Trump, it is not surprising that both candidates are leaning on celebrities to appeal to voters. Harris' campaign embraced pop star Charli XCX's description of the candidate as “brat” and has used popular songs such as Beyoncé's “Freedom” and Chappell Roan's “Femininomenon” to promote the Democratic Black and Asian American presidential candidate. Musicians Kid Rock, Jason Aldean and Ye, formerly known as Kanye West, have expressed their support for Trump, who survived an assassination attempt in July.
Swift, who has been a target of deepfakes, has not publicly endorsed a candidate in the 2024 presidential election, but she has criticized Trump in the past. In the 2020 documentary “Miss Americana,” Swift says in a tearful conversation with her parents and team that she regrets not speaking out against Trump in the 2016 election, and calls Tennessee Republican Marsha Blackburn, who was then running for U.S. Senate, “Trump in a wig.”
Swift's publicist, Tree Paine, did not respond to a request for comment.
AI-powered chatbots from Meta, X and OpenAI make it easy for people to create fictitious images. And while news outlets have found that X's AI chatbot Grok will generate images of election fraud, other chatbots are more restrictive.
Meta AI's chatbot refused to create images of Swift supporting Trump.
“I cannot generate images that could be used to spread misinformation or create the impression that a public figure supports a particular political candidate,” Meta AI’s chatbot responded.
Meta and TikTok cited their efforts to label AI-generated content and work with fact-checkers. TikTok, for example, said an AI-generated video that falsely depicts a public figure's political endorsement would not be allowed. X did not respond to a request for comment.
Asked how Truth Social moderates AI-generated content, the platform's parent company, Trump Media and Technology Group Corp., accused journalists of “calling for more censorship.” Truth Social's community guidelines include rules against fraud and spam but do not address how AI-generated content is handled.
Given the threat of regulation and lawsuits facing social media platforms, some disinformation experts are skeptical that social networks are willing to adequately moderate misleading content.
Social networks make most of their money from advertising, so keeping users on the platforms longer is “good for business,” Farid said.
“What people are drawn to is the most conspiracy-theoretical, hateful, offensive, angry content,” he said. “That's who we are as people.”
It's a harsh reality that even Swifties can't shake.
____
©2024 Los Angeles Times. Visit www.latimes.com. Distributed by Tribune Content Agency, LLC.