9 Ways to Know You’re Under-using Generative AI Right Now
You aren’t using AI to check, challenge or improve your own thinking. You’re prompting based on viral tips rather than your own testing.
In the previous parts of this series, I’ve covered how AI accelerates data gathering, analysis and segmentation. All that value is academic until it’s converted into strategic decisions that get implemented. In this edition, I’ll show how generative AI helps you stress-test customer strategy decisions before you commit resources.
In my consulting engagements, I’ve used generative AI for market sizing, business modeling, product naming, and growth forecasting. I can’t think of an analytics application more naturally suited to AI than customer segmentation.
The latest AI systems automatically coordinate multiple specialized models to tackle complex problems, combining best-of-breed language models with specialized tools for search, analysis, coding, and reasoning. The industry has realized that no single AI excels at everything. The same concept applies to the gen AI tools themselves. Here are three reasons for making a practice of using more than one generative AI tool.
For the past few decades, SAS, Python and R have dominated exploratory analytics and modeling, supplemented by a progression of interactive tools, starting with OLAP in the 1990s and culminating in visual tools like Tableau in recent years. Even the most productive of these solutions still required weeks of researching data sources, downloading them (often for a fee), running multiple trial-and-error cycles on quality, and wrestling with mismatched keys across datasets. Once the data was ready, producing and presenting insights required laborious hand-coding.
Since ChatGPT launched just 30 months ago, leaders have been asking whether it fundamentally changes how customer analytics works. The short answer is: yes. While analytics has used AI for decades in the form of regression models and predictive algorithms, what I’ve found through recent client work is striking: when wielded by skilled practitioners, generative language models don’t replace human insight – they amplify it. These tools apply in numerous and rapidly growing ways that aren’t immediately obvious to everyone. The key difference? Gen AI dramatically shortens the path from question to insight, allowing us to focus more on the “so what” than the “how to.”
The Consumer Financial Protection Bureau (CFPB) has emerged as a focal point of federal agency reductions in early 2025, following the launch of the new administration’s DOGE initiative. As political and media scrutiny of the Bureau intensified, public engagement with the agency also surged.
Gen AI tools can make things up with stunning confidence. In researching a book I’m working on right now, I recently prompted an AI engine to help me find some supporting points. I specifically asked it to use only credible sources and provide citations. I was delighted to receive five relevant article titles, attributed to publications including The Wall Street Journal, The New York Times, and Forbes. Unfortunately, none of the stories were real. The AI fabricated the stories, the titles, and even the hyperlinks.