Eat rocks and run with scissors — Google’s AI Overviews are wild
From getting basic US history wrong to surfacing racist conspiracy theories, the results are not encouraging.
Earlier this month Google began rolling out its AI Overview feature to the masses — and it’s going poorly.
Google, in some instances, has been using generative AI to answer questions at the top of people’s searches, rather than surfacing relevant links there and showing tidbits of that information like it used to. The responses are direct and in plain language, offering an air of authority. The problem is that when you “let Google do the Googling for you,” the results can be at best hilarious and at worst outright dangerous.
A Google spokesperson told me these errors come from “generally very uncommon queries, and aren’t representative of most people’s experiences.” But that doesn’t acknowledge just how widely and wildly Google Search is used. “We conducted extensive testing before launching this new experience, and will use these isolated examples as we continue to refine our systems overall,” the spokesperson said.
Naturally, people have been having a field day seeing just how bad the AI’s responses can be. Here are some fun and scary examples of Google’s AI Overview gone wrong that I’ve been able to confirm are real:
Apparently people should “eat at least one small rock a day” (it told me ingesting “pea gravel slowly” was fine), which suggests it’s pulling answers from the satirical publication The Onion. It also apparently said that the CIA uses black highlighters, which would have come from this Onion story, but I wasn’t able to replicate that. Google didn’t respond to a question about whether it trained its AI on The Onion.
her pic.twitter.com/FGbvO923gk
— Tim Onion (@oneunderscore__) May 23, 2024
Here’s AI Overview telling me running with scissors is just fine!
It said President Barack Obama is Muslim, a known conspiracy theory. Google told me it has since taken this answer down because it violates company policies.
Google, FFS. pic.twitter.com/UHtLQ5SdpG
— Melanie Mitchell (@MelMitchell1) May 23, 2024
It suggested many US presidents have been non-white. This bears some similarity to Google’s ill-fated “woke” image generator that showed Black founding fathers and Nazis. Google subsequently paused the feature.
I'm learning a lot about American history with Google's AI Overview pic.twitter.com/37vmpepdHK
— Bobby Allyn (@BobbyAllyn) May 23, 2024
It suggested adding glue to get the cheese to stick to pizza, a result apparently pulled directly from an 11-year-old Reddit post. Google pays Reddit $60 million a year to use its content.
Google AI overview suggests adding glue to get cheese to stick to pizza, and it turns out the source is an 11 year old Reddit comment from user F*cksmith 😂 pic.twitter.com/uDPAbsAKeO
— Peter Yang (@petergyang) May 23, 2024
It said there’s no country in Africa that starts with a “K.” Sorry Kenya!
how is Google so god damn shitty at its job pic.twitter.com/bdx97oZNv6
— Ed Zitron (@edzitron) May 23, 2024
It is bad at spelling fruit.
Google’s AI Overview also says that Google violates antitrust law. The “yes” here actually goes on to say “yes, the U.S. Justice Department and 11 states are suing Google for antitrust violations.” That’s partly true, but the overview doesn’t mention a second, near-identical lawsuit involving 35 states.
Found one thing Google's new AI Overview definitely gets right pic.twitter.com/ixJZjCsquM
— Brian Merchant (@bcmerchant) May 23, 2024
Google has shut off AI Overview for many of these queries after they went viral.
“Our systems aim to automatically prevent policy-violating content from appearing in AI Overviews,” the Google spokesperson said. “If policy-violating content does appear, we will take action as appropriate.”
For now, it seems like a game of Whac-A-Mole. Google didn’t respond to a question about whether it would keep the AI Overview feature up and running.