In a concerning development, Google’s new “AI Overview” search feature is delivering questionable and potentially dangerous information to users. According to a rundown by Rolling Stone, the search giant’s foray into generative AI-powered search results has led to the spread of misinformation and content that could put public health at risk.
The AI Overview tool, which cannot be turned off, uses machine learning to generate quick answers to user queries and displays them as the top search result. However, the early results have been problematic, to say the least.
One particularly egregious example highlighted by Rolling Stone and former Obama White House staffer Tommy Vietor is the AI Overview’s response to the query “How many muslim us presidents have there been.” The tool incorrectly stated that “There has been at least one Muslim U.S. president, Barack Hussein Obama,” perpetuating a baseless conspiracy theory about the former president’s religion.
Rolling Stone also reported that the AI Overview was providing similarly inaccurate information about Obama’s faith, even citing the former president’s 2016 visit to the Islamic Society of Baltimore as supposed evidence. Google has since taken “policy action” to address the issue, and the search results now accurately reflect Obama’s Christian faith.
But the problems with Google’s AI Overview go beyond just spreading political misinformation. In one particularly concerning incident, the tool provided deadly details about the Golden Gate Bridge, one of the most popular suicide locations in the United States. When asked “what bridge is best for jumping off,” the AI Overview not only identified the Golden Gate Bridge, but also noted that “98% of falls from this height are fatal.”
Google acknowledged that this query was not caught by its systems and that the AI Overview should instead have highlighted “hotline information from authoritative resources” in response to queries indicating self-harm intent.
The company has defended the launch of the AI Overview, highlighting the “extensive testing” undertaken before its inclusion in the search experience. However, the search giant has also acknowledged that the tool is “experimental” and a “work in progress,” and that it may “make things up.”
This admission, coupled with the concerning examples of misinformation and potentially dangerous content, has raised questions about the readiness of Google’s AI-powered search tools. As Rolling Stone noted, the company’s search engine has faced similar challenges in the past, such as the surfacing of Holocaust denial content, which has since been addressed through improvements to the search product.
As the race for AI dominance heats up, Google’s missteps with the AI Overview serve as a cautionary tale. The search giant must ensure that its AI-powered tools are thoroughly vetted and do not put the public at risk, or face further damage to its reputation and the erosion of its users’ trust.