Keeping AI Honest: Four Ways to Spot and Stop Hallucinations

By JP Snow, Principal & Founder at Customer Catalytics, April 24, 2025

Gen AI tools can make things up with stunning confidence. While researching a book I'm working on, I recently prompted an AI engine to help me find supporting points. I specifically asked it to use only credible sources and provide citations. I was delighted to receive five relevant article titles, attributed to publications including the Wall Street Journal, the New York Times, and Forbes. Unfortunately, none of the stories were real. The AI made up the stories, the titles, and even the hyperlinks.

As AI systems become more embedded in our business workflows, the risk from these "hallucinations" grows. In response, we need to sharpen our skills for vetting what they generate.

Here are four strategies you can use today to spot and stop AI hallucinations in your business.

1. Keep a Skeptical Human in the Loop

You're ultimately responsible for what you publish or act on. Check claims for plausibility and verify sources before using AI output in high-stakes work. The costliest hallucinations slip through when we trust too much, typically because we're moving too quickly. Build a habit of asking "How would I know if this were wrong?" before accepting what AI tells you. Your skepticism is your strongest safeguard.

2. Catch Lies Early

AI tools work by predicting the most plausible next words, one after another. Once they get off track with a hallucination, they tend to keep building on it, much like a liar covering one lie with more lies. Spot-check facts early in your chat stream. If you find an error, start a fresh prompt rather than trying to correct course. Breaking the chain of falsehoods is easier than fixing them mid-stream.

3. Include Guardrails in Your Prompts

ChatGPT launched just 29 months ago, and prompt engineering keeps evolving. Experiment with techniques like adjusting "temperature" settings (lower values produce more predictable, less inventive output), explicitly asking for source verification, and clearly stating when you want facts versus creative ideation. The best practices of today will change tomorrow, but the core principle won't: tell AI the rules of the game you're playing.
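
If you call models through an API, here's one way those guardrails can look in practice. This is a minimal sketch using OpenAI's Python SDK; the model name and prompt wording are illustrative choices, not the only way to do it.

```python
# A minimal sketch of prompt guardrails: low temperature plus explicit
# rules in the system prompt. Model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0.2,  # lower temperature favors predictable, factual output
    messages=[
        {
            "role": "system",
            # State the rules of the game up front: facts only, real
            # sources, and permission to admit uncertainty.
            "content": (
                "Answer with verifiable facts only. Cite real, checkable "
                "sources for every claim. If you are not certain, say "
                "'I don't know' rather than guessing."
            ),
        },
        {"role": "user", "content": "Find supporting points on AI adoption."},
    ],
)

print(response.choices[0].message.content)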

4. Fight Fire with Fire

Use a second AI system to validate the output of the first. Different models have different training data and knowledge, which makes them useful cross-checks. In the near future, we can expect AI providers to build fact-checking agents into their systems to verify claims automatically. Until then, this manual double-check adds an extra layer of safety for critical information.
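
Here's a sketch of what that cross-check can look like in code: draft an answer with one provider, then ask an independently trained model to audit it. Both model names and the prompt text are illustrative assumptions, and a human should still make the final call.

```python
# A minimal sketch of cross-model fact-checking: draft with one provider,
# then ask a second, independently trained model to flag dubious claims.
# Model names and prompt wording are illustrative, not prescriptive.
from openai import OpenAI
import anthropic

draft_client = OpenAI()               # assumes OPENAI_API_KEY is set
check_client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

question = "Summarize recent coverage of AI hallucinations, with citations."

# Step 1: get a draft answer from the first model.
draft = draft_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# Step 2: ask an unrelated model to audit the draft for unsupported claims.
audit = check_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1000,
    messages=[{
        "role": "user",
        "content": (
            "Review the following AI-generated answer. List any claims, "
            "citations, or links that you cannot verify or that look "
            f"fabricated:\n\n{draft}"
        ),
    }],
).content[0].text

print(audit)  # a human still decides what to trust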

Activate Your Future

Thanks for reading. Leaders work with me to drive faster growth through data and scale. My approach is built on what works: Data Decides. Insights Inform. Moments Matter. Systems Sustain. Talent Transforms.

🔹 Stay Informed: Subscribe to this newsletter and follow me on LinkedIn.

🔹 Ready for Results? Book a call with me at customercatalytics.com/connect to schedule an introductory meeting.

© 2025 Customer Catalytics. All rights reserved.