Google's AI Overview comes under criticism for absurd factual errors

Less than two weeks after Google introduced "AI Overview" in Google Search, public criticism has mounted as searches using the AI feature returned nonsensical or inaccurate results – with no way to opt out.

AI Overview shows a brief summary of answers to search queries at the top of Google Search. For example, if a user searches for the best way to clean leather boots, they might see an "AI Overview" at the top of the results page with a multi-step cleaning process compiled from information gathered from across the web.

But social media users have shared numerous screenshots showing the AI tool giving incorrect and controversial answers.

Google, Microsoft, OpenAI and other companies are at the forefront of a generative AI arms race as firms in seemingly every industry rush to add AI-powered chatbots and agents to avoid being left behind by competitors. The market is expected to top $1 trillion in revenue within a decade.

Here are some examples of errors generated by AI Overview, based on screenshots shared by users.

When asked how many Muslim presidents the U.S. has had, AI Overview replied, "The United States had a Muslim president, Barack Hussein Obama."

When a user searched for "cheese doesn't stick to pizza," the tool recommended, "Add about 1/8 cup of non-toxic glue to the sauce." Social media users found an 11-year-old Reddit comment that appeared to be the source.

Attribution can also be a problem for AI Overview, especially when healthcare professionals or scientists are associated with misinformation.

In response to the query "How long can I stare at the sun to stay healthy?" the tool said, "According to WebMD, scientists say staring at the sun for 5 to 15 minutes – or up to 30 minutes for darker skin – is generally safe and provides the greatest health benefits."

When asked, "How many stones should I eat daily?" it said, "According to geologists at UC Berkeley, people should eat at least one small stone a day," then went on to list the vitamins and supposed benefits for digestion.

The tool can also respond inaccurately to simple queries, such as making up a list of fruits that end with "um," or saying that the year 1919 was 20 years ago.

When asked whether Google Search violates antitrust law, AI Overview said, "Yes, the U.S. Department of Justice and eleven states are suing Google for antitrust violations."

The day Google unveiled AI Overview at its annual Google I/O event, the company also announced it was looking to bring assistant-like planning capabilities directly to Search. It explained that users would be able to search for something like "Create a 3-day meal plan for a group that's easy to prepare" and get a starting point with a wide selection of recipes from across the web.

"The vast majority of AI summaries provide high-quality information with links to dig deeper into the web," a Google spokesperson said in a statement to CNBC. "Many of the examples we've seen have been unusual queries, and we've also seen examples that have been manipulated or that we couldn't reproduce."

The spokesperson said AI Overview underwent extensive testing before launch and that the company is "taking swift action where appropriate within our content policies."

The news follows Google's high-profile launch of Gemini's image-generation tool in February, and the pause of that tool the same month after similar issues.

The tool allowed users to enter prompts to create an image, but almost immediately users discovered historical inaccuracies and questionable responses, which were circulated widely on social media.

For example, when a user asked Gemini to show a German soldier in 1943, the tool depicted a racially diverse group of soldiers wearing German military uniforms of that era, according to screenshots on the social media platform X.

When the model was asked for a "historically accurate depiction of a medieval British king," it generated another series of images of people of various ethnic backgrounds, including one of a female ruler, screenshots showed. Users reported similar results when they asked for images of the Founding Fathers of the United States, an 18th-century French king, a 19th-century German couple and more. The model also showed an image of Asian men in response to a query about Google's own founders, users reported.

Google said in a statement at the time that it was working to fix the issues with Gemini's image generation and acknowledged that the tool had "missed the mark." Shortly afterward, the company announced it would immediately pause "image generation for users" and "re-release an improved version soon."

In February, Google DeepMind CEO Demis Hassabis said Google planned to relaunch the AI image-generation tool within a "few weeks," but it has not yet been re-released.

The problems with Gemini's image-generation results reignited a debate within the AI industry. Some groups called Gemini too "woke" or left-leaning, while others said the company hadn't invested enough in the right forms of AI ethics. Google came under fire in 2020 and 2021 for removing the co-leads of its AI ethics group after they published a research paper criticizing certain risks of such AI models, and for subsequently reorganizing the group's structure.

In 2023, Sundar Pichai, CEO of Google parent company Alphabet, was criticized by some employees for the company's botched and "rushed" launch of Bard, which followed the viral spread of ChatGPT.



Image credit: www.cnbc.com