After asking, “What’s next after zero-click search?” a nagging question remains: How do frontier models – like OpenAI’s o3 series or Google’s Gemini 3 – learn?
Garbage in, garbage out
Large language models (LLMs) learn from massive amounts of text – billions of words of public information and, you guessed it, internet data. And like any other model, they are subject to bias.
For example, one article from MIT News outlines findings on “position bias” in LLMs: the tendency to overweight information at the beginning or end of a document (similar to how humans show primacy or recency bias, the tendency to favor the first or last impression). Another type of cognitive bias is “confirmation bias,” the tendency to process information in a way that is consistent with existing beliefs.
One review of clinical large language models revealed that confirmation bias (along with other cognitive biases) does exist in LLM output, sometimes with worrisome results. An article from Digital Medicine describes pathology experts retaining erroneous estimates from LLMs, “illustrating how human and model errors can co‑reinforce rather than correct one another.”
Some models are useful
LLMs are models. We’d do well to heed George Box's advice: “All models are wrong, but some are useful.” And the data on which they are trained limits their usefulness.
A quick (and fun) example
For a fun explanation of this phenomenon, watch this short clip featuring Sonar Technician 1st Class Ronald “Jonesy” Jones aboard the U.S.S. Dallas in the 1990 film The Hunt for Red October.
A $40 Million Computer Says It's Magma →
While fictional, that movie clip illustrates how model output can create misdirection. And while LLMs can provide direction through an AI overview, a ChatGPT response, or a similar AI-generated response, their use should not be a destination. We should question the output when it’s misaligned with other trusted forms of data.
So, the next question we must ask is, “On what data do OpenAI and Google train their frontier models?”
The answer may surprise you.
You Reddit here
We make more data than ever before – text, photos and video. Much of this data exists in online platforms like Reddit (referred to as the “Heart of the Internet”).
Would it surprise you to learn that Reddit is among the most-cited sources in Google’s AI overviews?
It’s true.
In fact, overall search is down about 20% per user year-over-year, according to the Q4 2025 “State of Search Report.” And popular tools like ChatGPT and Gemini actually “layer into the (search) process.” What does this mean?
Fewer clicks mean fewer eyeballs, making relevance even more critical.
Why Reddit?
The Columbia Journalism Review reports that between August 2024 and June 2025, Reddit was the most cited domain by Google AI overviews and Perplexity, and the second most cited by ChatGPT (OpenAI).
This follows a $60 million deal announced in February 2024, giving Google access to real-time content from Reddit’s user-generated forums (a similar deal, speculated to be valued in the $70 million range, was also struck with OpenAI). This means user-forum data is being used to fine-tune those organizations’ generative AI models.
Said differently, generative AI learns from forum data – and ultimately influences the output.
An insurance example
Declining search in and of itself may not be an issue. But if the model is not trained on, say, data about purchasing auto insurance, it cannot provide accurate advice.
A cursory search of Reddit for “auto insurance comparison” returns conversations like “What site doesn’t sell your data?” or “Is a $300 premium normal?” Few subreddits address insurance at all.
One community, r/Car_Insurance_Help, shows 26,000 weekly visitors and does carry some promotional ads – but Redditors in this community must abide by the “No spam” rule (do not post commercial links or links to your own site or application).
So, the community’s rules automatically undercut the usual ways search, ads and SEO drive traffic to a company website.
Many consumers trust AI when shopping for insurance
As someone who supports the insurance industry, I can share that none of the insurance executives I talked with in 2025 had any familiarity with these trends. Yet consumer behavior has already shifted dramatically toward the use of AI tools. And what’s more, consumers trust the output.
Don’t believe me? Let me change your mind.
J.D. Power reported in Q3 2025 that 40% of insurance consumers (US) used AI tools to shop for insurance, and 80% of those trusted the results. (Said differently, 80% of that 40% works out to roughly 1 in 3 of all consumers trusting AI tools when shopping for insurance.)
This is corroborated in an IDC report commissioned by SAS (“The Trust Imperative”). In it, respondents report trusting generative AI 200% more than machine learning (despite the latter having been around for roughly 30 years, compared with GenAI’s barely three).
In the new AI reality, insurance companies, financial services firms – and really any organization advertising on the internet – risk falling entirely out of the buying conversation.
CMOs have already been scrambling to adjust their strategies based on this new reality. The rest of us need to catch up. We will not put the GenAI toothpaste back in the tube, so we might as well brush our teeth.
Top 3 recommendations for surviving in a world of AI responses
Again, my context is insurance, but I’d invite you to apply your own lens to the following guidance on coming out of the trough of the GenAI hype cycle with good momentum.
1. Focus on what you’ve got
First, double down on channels in which you already have your customers engaged. The goal here is to get ahead of the trend toward reliance on AI-generated responses.
One Deloitte thought leader talks about “always on” advice, calling it a “mega shift” in the marketplace. This sage advice matches what we have seen our customers do with AI technology: meeting customers where they are and deploying relevant messages at the right time via the right channel – instantly.
For example, ERGO’s innovative use of technology and omnichannel approach has led to the strongest new business results in a decade. I personally had the privilege of hearing their Head of CRM, Alexander Hombach, discuss their use of AI and geofencing to provide this same “always on” advice.
The lesson here is simple. Customers won’t go shopping if they are deeply satisfied.
2. Revisit your content strategy
Understanding how LLMs and AI responses work, how they source data, and the preferences they demonstrate should inform how you create and deliver content.
LLMs tend to source authoritative, long-form, ungated content. So, for example, they can review the text of a PDF if set up to do so, but the results LLMs surface from PDFs are far less reliable than those from HTML content.
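One quick way to sanity-check whether your copy is even visible to the crawlers that feed AI responses is to confirm it appears in the raw, server-rendered HTML rather than only after client-side JavaScript runs or behind a gate. Below is a minimal sketch using only Python’s standard library; the function name and sample pages are illustrative assumptions, not part of any real crawler or SEO tool.

```python
from html.parser import HTMLParser


class _TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> blocks."""

    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)


def visible_in_raw_html(html: str, phrase: str) -> bool:
    """True if `phrase` appears in the server-rendered text of `html`.

    Content that only exists after client-side JavaScript runs (or
    behind a login/form gate) fails this check -- and is likely
    invisible to simple crawlers as well.
    """
    parser = _TextExtractor()
    parser.feed(html)
    text = " ".join(" ".join(parser.chunks).split()).lower()
    return phrase.lower() in text


# A server-rendered page: the advice lives in the HTML itself.
static_page = (
    "<html><body><h1>Auto safety</h1>"
    "<p>Check tire pressure monthly.</p></body></html>"
)
# A JS-rendered page: the advice only appears after scripts run.
js_page = (
    "<html><body><div id='app'></div>"
    "<script>render('Check tire pressure monthly.')</script></body></html>"
)

print(visible_in_raw_html(static_page, "check tire pressure"))  # True
print(visible_in_raw_html(js_page, "check tire pressure"))      # False
```

Running a check like this against your highest-value pages is a cheap first pass before investing in a fuller content audit.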
The good news – it’s likely your current content already contains much of what you need to get started.
For example, my former organization has an entire page online about auto safety. Replicating this type of expert advice in online forums known to be picked up by LLMs is a good first step to showing up in AI responses.
Also, investing in cross-channel marketing hubs (CCMHs) – which include capabilities like data management, content management, automated workflows and channel integration – can help manage content effectiveness and engagement.
Again, the goal here is to simply ensure you’re showing up in AI-driven conversations.
3. Invest in your people
Much has been made of AI replacing jobs. In the 2025 “Frontier Firm” report, survey respondents dispel this notion. Of over 9,000 leaders surveyed, “Nearly half of leaders (45%) say expanding team capacity with digital labor is a top priority in the next 12 to 18 months – second only to upskilling their existing workforce (47%).”
This means not cutting jobs but augmenting existing jobs with digital capacity.
You need only look back through human history to see a similar trend. With every major technological breakthrough, humanity moved upstream – new jobs were created (consider the role of a prompt engineer, which did not exist before ChatGPT).
This phenomenon is explained by Jevons Paradox – technological innovation does not lead to less consumption; it actually increases consumption due to efficiency gains. My former colleague, Aaron Stout, covered this idea in our webinar just last year.
AI doesn’t reduce the need for people; increased efficiency creates more demand. The conclusion, simply put, is that we still need people (and always will).
I don’t believe AI responses will eliminate the need for teams to find ways to meet customers where they are, when they need help the most.
The final word: Is search dead?
A Forbes article from October 2025, Ohanian and Altman Warn of ‘Dead Internet Theory,’ references a Wall Street Journal panel on which Reddit co-founder Alexis Ohanian discusses the idea that non-human activity now dominates the internet (the opposite of its original design).
Ohanian has “long subscribed to the dead internet theory,” acknowledging it was regarded as a conspiracy theory a decade ago but is now “a very real thing” because of the proliferation of bots on social media, as well as humans using AI to create and amplify content.
If Ohanian’s belief holds true, then the internet has outlived its usefulness, and no one should Google anything ever again.
Do I believe search is dead? Not yet.
But I do believe that understanding this shift and the data that drives it will be key for any organization, in any industry, to adapt and thrive.
In this new world, AI’s use becomes even more important. Especially in digital channels, capabilities like semantic analysis and natural language processing will become increasingly critical. Firms that understand and embrace this trend will attract more new customers and retain more existing ones by developing and championing meaningful customer interactions.
For now, keep those browsers open.