You’ve decided to make a homemade pizza on Friday night. You carefully prepare your pie, bake it, and eagerly get ready to enjoy it. But as you take your first bite, the cheese slides right off! Frustrated, you turn to Google for a quick fix.
Google’s advice? “Add some glue.” Google’s new AI Overviews feature suggests mixing about 1/8 cup of Elmer’s glue into your sauce. This bizarre tip traces back to a decade-old Reddit joke, not a legitimate solution.
Since Google rolled the feature out widely, it has been riddled with such comical errors. Other examples include claims that James Madison graduated from the University of Wisconsin 21 times, that a dog once played in major professional sports leagues, and that Batman is a cop.
Google spokesperson Meghann Farnsworth downplays these errors as rare and not representative of most users’ experiences, adding that the company is using the incidents to refine the AI. But despite a “Generative AI is experimental” disclaimer, the mistakes highlight the tool’s current unreliability.
Google isn’t alone—other tech giants like OpenAI and Meta also struggle with AI inaccuracies. However, Google’s large-scale deployment makes its blunders particularly noticeable.
For now, it seems we should be cautious about taking AI-generated advice at face value. Optimists believe the technology will eventually live up to its potential, but until it does, someone out there may be gluing cheese to their pizza. Welcome to the wild world of the internet.