Google’s AI Overviews

Google has faced criticism for the inaccurate, humorous, and bizarre responses provided by AI Overviews, the AI-generated search summaries introduced at Google I/O 2024 and rolled out more broadly earlier this month. The feature has produced mixed outcomes. For instance, a user seeking advice on getting cheese to stick to pizza was told to use glue, based on an old Reddit post. Another user was advised to eat “one small rock per day,” a suggestion taken from The Onion.

Don’t be discouraged if you can’t replicate these viral searches or get similar answers. Google is actively working to eliminate such inaccuracies. A company spokesperson stated that they are taking “swift action” and “using these examples to develop broader improvements to our systems.”

The spokesperson from Google also said, “The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback.”

It’s safe to assume these results will improve over time; it’s also likely that some of the social media screenshots were exaggerated for comedic effect.

However, all these AI search results made me question their purpose. Even if they were flawless, how would they be better than traditional web search?

Google’s goal is to provide answers directly, eliminating the need to scroll through multiple pages. In early tests of AI Overviews, the company noted that “people use Search more, and are more satisfied with the results.”

The concept of eliminating the 10 blue links has been around for a while. Although Google has already made them less central, I believe it’s too soon to retire them completely.


Consider a self-serving search: “what is TechCrunch.” The summary provided is mostly accurate but feels padded, like a student trying to meet a page count, with traffic numbers seemingly sourced from a Yale career website. Similarly, searching “how do I get a story in TechCrunch” yields an overview that quotes an outdated article about submitting guest columns, which TechCrunch no longer accepts.

[Screenshot of Google AI Overviews. Image: Google]

The goal isn’t merely to point out additional ways AI Overviews can be incorrect, but to highlight that many of their errors will be less dramatic and amusing, and more mundane. To Google’s credit, the overviews include links to the pages that provide the source material for the AI answers. However, identifying which answer corresponds to which source still requires considerable clicking.

Google also notes that the inaccurate results highlighted on social media often occur in data voids—topics where accurate information is scarce online. While this is a valid point, it emphasizes that AI, like traditional search, relies on a robust, open web filled with accurate information.

Unfortunately, AI could pose an existential threat to the open web. There’s much less incentive to write accurate how-to articles or break big investigative stories if people primarily read AI-generated summaries, accurate or otherwise.

Google claims that with AI Overviews, “people are visiting a greater diversity of websites for help with more complex questions” and that “the links included in AI Overviews get more clicks than if the page had appeared as a traditional web listing for that query.” I hope this is true. But if it isn’t, no amount of technical improvements can compensate for the potential loss of vast swaths of the web.