Whether you’re searching why dolphins jump or what stainless steel is made of, chances are an AI overview will be the first thing you see. And while it’s common knowledge that you can’t trust everything on the internet, technology like AI summaries and ChatGPT has made it easier than ever to forget that misinformation is everywhere.
ChatGPT can be useful, but its limitations shouldn’t be overlooked. ChatGPT is a large language model: drawing on the text it was trained on, it predicts what its next word will be. It can’t comprehend what it outputs, because its responses are generated from statistical probability. ChatGPT can and does generate false responses. Made-up citations, books, and even people creep in among genuine facts. A Purdue University study found that 52% of ChatGPT’s answers to programming questions, one of its most common uses, contained incorrect information. Even asking something as simple as “When was the most recent earthquake in the U.S.?” prompts ChatGPT to claim that a magnitude 7.0 earthquake hit Alaska on January 23, 2025. This is blatantly untrue, yet ChatGPT will sign off with a cheerful “Let me know if you want more details on recent seismic events or earthquake preparedness!”

OpenAI, the company that created ChatGPT, states that “ChatGPT can be a helpful tool, but it’s not perfect,” and encourages users to consult other sources alongside it. AI should not be used to do all the work. It can help point you in the right direction: ask it for physics websites, articles on the Civil War, or places to go for writing help, then use Google to double-check any information it gives you. Just remember that ChatGPT makes mistakes, and it makes them often.
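To see what “predicting the next word from statistical probability” means in practice, here is a toy sketch in Python. It is a deliberately simplified illustration, not how ChatGPT actually works; real large language models use neural networks trained on enormous amounts of text, but the underlying idea of choosing a statistically likely next word is the same.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": count which word tends to follow which
# in a tiny made-up corpus. (Illustrative only, not ChatGPT's method.)
corpus = (
    "dolphins jump to communicate . dolphins jump to save energy . "
    "dolphins jump for fun ."
).split()

next_words = defaultdict(Counter)
for word, following in zip(corpus, corpus[1:]):
    next_words[word][following] += 1

def generate(start, length=6):
    """Generate text by repeatedly sampling a statistically likely next word."""
    out = [start]
    for _ in range(length):
        counts = next_words[out[-1]]
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("dolphins"))
# Possible output: "dolphins jump to communicate . dolphins jump"
# The model has no idea what a dolphin is; it only follows the statistics.
```

A generator like this can produce fluent-looking text while having no concept of truth at all, which is exactly why a confident-sounding answer can still be completely wrong.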
But Google’s AI overview that pops up during searches is even more unreliable. It often draws from untrustworthy sources, and those sources can vary even when the same question is asked. It isn’t unusual for Facebook or Reddit posts to show up among the overview’s sources. Even typos can heavily influence the results: a misspelled search for “how to care for a rat snake” produced a summary built from nine sources, three of which were consistently Reddit or YouTube posts, while the correctly spelled query drew on only four sources, all of them vet websites or reptile centers. More concerning, according to an AP News article, the overview called Barack Obama the first Muslim president of the United States when the feature was first released. And while that particular issue has been resolved, countless other AI overviews continue to generate errors.
AI draws from existing sources, mainly the internet, and doesn’t know how to filter out misinformation the way we can. ChatGPT can’t double-check its responses to ensure everything is real; Google’s AI overview can’t sort fact from fiction. It takes only a few minutes to verify a claim, and only a few seconds to scroll past. So why not?