Meta now has an AI chatbot. Experts say prepare for more AI-powered social media

When you open Facebook Messenger today, you're greeted by a new prompt: “Ask Meta AI anything.”

You may have opened the app to text a friend, but Meta's new artificial intelligence-powered chatbot beckons with encyclopedic knowledge just a few keystrokes away.

Meta, the parent company of Facebook, has integrated its homegrown chatbot into its WhatsApp and Instagram services as well. Now billions of internet users can open any of these free social media platforms and call on Meta AI to serve as a dictionary, guide or illustrator, among many other tasks it can perform – though it is not always reliable or infallible.

“Our goal is to build the world’s leading AI and make it accessible to everyone,” Meta CEO Mark Zuckerberg said when he announced the chatbot’s launch two weeks ago. “We believe that Meta AI is now the smartest AI assistant you can freely use.”

As Meta's moves suggest, generative AI is making its way into social media. TikTok has an engineering team focused on developing large language models that can recognize and generate text, and it is hiring writers and reporters who can annotate and improve the performance of those AI models. Instagram's help page states that Meta may use [user] information to train its AI models to help improve them.

TikTok and Meta did not respond to requests for comment, but AI experts said social media users can expect the technology to have a greater impact on their experience – for better or worse.

One of the reasons social media apps are investing in AI is to become more appealing to consumers, said Ethan Mollick, a professor at the University of Pennsylvania's Wharton School who teaches entrepreneurship and innovation. Apps like Instagram try to keep users on their platforms for as long as possible because the attention they capture generates advertising revenue, he said.

On Meta's first-quarter earnings call, Zuckerberg said it would take time for the company to see a return on its investments in the chatbot and other AI applications, but it has already seen the technology influence user experiences on its platforms.

“Currently, about 30% of posts in the Facebook feed are powered by our AI recommendation system,” Zuckerberg said, referring to the behind-the-scenes technology that shapes what Facebook users see. “And for the first time ever, more than 50% of the content people see on Instagram is now AI recommended.”

In the future, AI will not only personalize user experiences, said Jaime Sevilla, head of Epoch, a research institute that studies AI technology trends. In the fall of 2022, millions of users were captivated by Lensa's AI capabilities, which created whimsical portraits from selfies. Expect to see more of the same, Sevilla said.

“I think in the end there will be completely AI-generated people posting AI-generated music and so on,” he said. “We could live in a world where the role people play on social media is just a small part of the whole.”

Mollick, author of the book “Co-Intelligence: Living and Working with AI,” said such chatbots already produce some of what people read online. “AI is increasingly driving online communication,” he said. “[But] We actually don’t know how much AI writing there is.”

Sevilla said generative AI is unlikely to replace the digital town square created by social media. People crave authenticity in their online interactions with family and friends, he said, and social media companies want to maintain a balance between that and AI-generated content and targeted advertising.

Although AI will help consumers find more useful products in everyday life, there is also a dark side to the technology's allure that can lead to coercion, Sevilla said.

“The systems can be pretty good at persuading,” he said. A recently published study by AI researchers at the Swiss Federal Institute of Technology in Lausanne found that GPT-4 was 81.7% more effective than a human at persuading someone to agree with it in a debate. Although the study has not yet been peer-reviewed, Sevilla said the results are concerning.

“That raises concerns that [AI] will significantly expand the ability of fraudsters to contact many victims and commit more and more fraud,” he added.

Sevilla said policymakers should pay attention to the risks of AI spreading misinformation as the United States faces another politically charged election season this fall. Other experts warn that it is not a matter of if, but of how, AI could play a role in influencing democratic systems around the world.

In Reddy's experience, AI is good at detecting things like bias and pornography on online platforms. Her company has used AI for content moderation since 2016, when it released an anonymous social networking app called Candid that relied on natural language processing to detect misinformation.

Regulators should ban people from using AI to create deepfakes of real people, Reddy said. But she is critical of laws such as the European Union's that place far-reaching restrictions on the development of AI. In her view, it is dangerous for the United States to lag behind competing countries such as China and Saudi Arabia, which are pouring billions of dollars into the development of AI technology.

So far, the Biden administration has released a “Blueprint for an AI Bill of Rights,” which recommends safeguards the public should have, including protections for data privacy and against algorithmic discrimination. It is unenforceable, though there are hints of possible legislation to come.

Sevilla acknowledged that AI moderators can be trained with a company's biases, resulting in some views being censored. But human moderators have shown political bias, too.

In 2021, for example, The Times reported on complaints that pro-Palestinian content was hard to find on Facebook and Instagram. And conservative critics accused Twitter of political bias in 2020 after it blocked links to a New York Post article about the contents of Hunter Biden's laptop.

“We can actually study what kind of bias [AI] reflects,” Sevilla said.

Still, he said, AI could become so effective that it could severely suppress free speech.

“What happens when everything in your timeline is perfectly within company policy?” Sevilla said. “Is this the type of social media you want to consume?”
