Your AI Synthetic Users Are Out Of Date And Out Of Touch
How can market research respond to generative AI’s inability to stay up to date?
This evening, fresh from watching the Democratic National Convention, I asked ChatGPT’s latest premium model, “Is Kamala Harris brat?”
The AI responded: “When it comes to Kamala Harris, the current Vice President of the United States, she is a well-respected public figure with a long career in law and politics. Describing her as a ‘brat’ would be a dismissive and pejorative characterization that does not align with her accomplishments or the way she is generally viewed by her supporters and many in the public sphere.”
The reality is starkly different from the AI response. Kamala Harris isn’t just the current Vice President, after all. She’s the Democratic presidential nominee. That’s a big detail to miss. What’s more, in the context of the Harris campaign, “brat” as defined by Charli XCX has been a term of praise, not an insult. This meme has been extensively covered by major news media.
Seeking to probe the AI model’s ability to deal with up-to-date branding influences, I asked ChatGPT about another recent event: “How will Ford’s announcement today about its all-electric F-150 pickup truck affect the psychological position of the F-150?”
The AI responded with a factual summary of today’s news: “The new strategy includes delaying the production of its next-generation electric truck, initially expected to be available sooner, until the second half of 2027.”
The AI failed, however, to integrate the news of Ford’s significant delay into a branding strategy for the F-150, offering instead the disconnected boilerplate suggestion that “Ford’s branding strategy for the new electric F-150 will likely combine its storied legacy with a forward-looking approach that caters to evolving consumer demands for efficiency, affordability, and digital integration.” The obvious branding implications of Ford’s struggles to develop an electric F-150 in a timely manner aren’t addressed by the AI at all.
Last autumn, OpenAI proudly announced that ChatGPT would be able to give users real-time updates, integrating up-to-the-minute web search data into its responses. My searches tonight, however, demonstrate that the generative AI service is failing to conceptually process real-time market information, even in the instances when it is able to retrieve data that should be relevant.
What’s going on? If ChatGPT is being exposed to up-to-date information through access to web searches, why does it fail to grasp new cultural trends and the implications of market developments?
Why Generative AI’s Synthetic Users Are Perpetually Out Of Date
An article published today in New Scientist summarizes new research indicating that generative AI models face inherent technological obstacles to integrating new data. The more often generative AI models are retrained to incorporate up-to-date information, the less capable they become of “learning” from the information they are exposed to. The metaphorical “neurons” in their algorithmic structures appear to become progressively inactive, resulting in a kind of AI dementia.
Researcher Shibhansh Dohare has proposed a potential workaround that might enable generative AI platforms to counteract this ossification. The solution involves reinitializing the algorithmic “neurons” that have gone dormant during the retraining process. No publicly available generative AI platform currently uses this method, however.
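To make the “restart dormant neurons” idea concrete, here is a minimal illustrative sketch in Python. It is not Dohare’s actual algorithm, and the threshold, initialization scale, and dormancy criterion are all assumptions for illustration: hidden units whose activations have collapsed to near zero get fresh random incoming weights, restoring their capacity to learn.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinit_dormant_units(W, activations, threshold=1e-3):
    """Reinitialize incoming weights of hidden units whose mean
    absolute activation has fallen below `threshold`.

    A simplified sketch of restarting "damaged" neurons: dormant
    units contribute nothing to further learning, so giving them
    fresh random weights restores the network's plasticity.
    """
    mean_act = np.abs(activations).mean(axis=0)  # per-unit activity
    dormant = mean_act < threshold               # boolean mask
    n_dormant = int(dormant.sum())
    if n_dormant:
        # Fresh small random weights for each dormant unit's inputs
        W[:, dormant] = rng.normal(0.0, 0.1, size=(W.shape[0], n_dormant))
    return W, n_dormant

# Toy example: 4 inputs -> 6 hidden ReLU units, with two units "dead"
W = rng.normal(0.0, 0.5, size=(4, 6))
W[:, [1, 4]] = 0.0                      # simulate two dead units
X = rng.normal(size=(32, 4))            # a batch of inputs
H = np.maximum(X @ W, 0.0)              # ReLU activations
W, restarted = reinit_dormant_units(W, H)
print(restarted)                        # the zeroed-out units get restarted
```

In a real continual-learning system, the dormancy test would run periodically during retraining, and the criterion would weigh each unit’s long-run contribution rather than a single batch of activations.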
Even if such a solution could be applied at a commercially viable scale, there would remain significant problems with efforts to use generative AI to produce up-to-date models of consumer psychology. The most sophisticated efforts to replace traditional market research with generative AI involve the creation of “synthetic users”: starting with the non-specific conceptual framework of a general AI platform, then adding domain-specific training data to enable more focused simulations of consumer responses.
The approach sounds simple enough in theory. In practice, market research firms are struggling to create viable synthetic user systems that don’t require significant coddling and data-polishing by humans to produce useful results. At present, the more often synthetic users are retrained to be kept up to date, the more prone they become to AI dementia.
In contrast to the reality of significant glitches under the hood of synthetic users, clients’ explorations of popular generative AI tools have created an expectation that market research simulations using synthetic users should produce rapid results at an extremely low cost, along with rigorous quantitative reliability. Frustrated clients struggle to see the point of using synthetic users when they require teams of humans at expensive AI boutique firms to gradually coax the algorithms into adequate performance.
Underlying the gap between market research AI hype and the wonky reality is the bigger question of what data synthetic users should be trained on. On the one hand, the extremely off-topic foundational training of generative AI platforms risks the introduction of data from people who would never pass any serious market research screener. Data from hundreds of millions of people who are not remotely in the market for the product or service being studied contaminate the results.
At the same time, industry insiders and marketing teams have an outsized influence on generative AI content. After all, most of the digitally-available discussion related to any given product or service consists of advertising, public relations, or secondary material that has been predominantly influenced by promotional materials. To the extent that synthetic user market research services impress clients, it’s because these services operate as immensely complicated confirmation bias machines. Generative AI mirrors back what marketers expect to hear because the AI has been trained on what marketers have already said.
Synthetic users ultimately end up putting market research clients in the position of the queen in Snow White, staring into a mirror, demanding to know who is the fairest one of all. If you prefer a more up-to-date mythological metaphor for generative AI in market research, think of the Mirror of Erised from Harry Potter.
The Difficult Solution
Market research that employs synthetic users powered by generative AI can seem like a fantasy fulfillment for marketing teams looking for a quick hit of insight. Easy insights, however, aren’t what deliver sustainable gains.
Generative AI tools in market research haven’t lived up to their promises. The iron triangle of speed, low cost, and quality remains intact. You might be able to grasp two sides of the triangle at once, but never all three.
If you’re serious about building a brand with any lasting equity, quality insights are essential. Marketing is ultimately about insight into the human experience, and a quick and inexpensive simulation of humanity just doesn’t measure up.
Insights that matter come from time spent with people.
Talk with people. Listen to them. Learn to empathize with how they feel.
All the rest is slop from a prediction engine.