Minimizing Risk and Maximizing Insight: A Conversation on AI in Market Research
AI is reshaping market research and consulting—improving hypothesis development, strengthening methodological rigor, and enabling better decision-making in healthcare and regulated industries.
We sat down with Rebecca Gould, Executive Vice President, Corporate Development & Strategy at Fulcrum Research Group, a division of SAI, to discuss how artificial intelligence is transforming market research, where it adds the most value, and why human oversight remains essential.
AI Is Changing Consulting by Compressing the “Think” Phase
SAI: AI is everywhere in consulting right now. From your perspective, how has traditional consulting actually changed since AI's arrival?
Rebecca: The biggest shift isn’t that AI has replaced consulting – it’s that it has accelerated the early stages of thinking. Traditionally, we’d form a hypothesis, design research to test it, wait for results, refine, and repeat. AI allows us to stress-test those hypotheses much earlier. We can explore multiple strategic directions before committing real-world budget to fieldwork.
In practice, that means fewer weak ideas make it into expensive and time-intensive research. It's not about automation for its own sake; it's about sharper preparation. Human judgment and evaluation are still central. What's changed is how quickly we can pressure-test our thinking and identify solutions that work.
Finding the Most Suitable Product in a Crowded Market
SAI: When you went looking for AI platforms, what criteria mattered most?
Rebecca: We were skeptical from the start – and I think that helped. There’s a lot of AI that looks impressive on the surface but doesn’t stand up to scrutiny. So we asked some very basic but important questions:
- Where is the information coming from?
- How is the output generated?
- Is this secure enough for regulated industries like pharma?
- And most importantly – is it genuinely different, and does it provide value beyond what we can attain with traditional methods?
We reviewed many platforms and AI solutions that could not stand up to those criteria. But we were blown away by the capabilities and rigor behind Synthetic Users, which is why we forged a partnership with them to bring their technology to healthcare.
We expected the same skepticism from clients that we had ourselves, and that's what we've experienced. We've had lots of questions, such as "Isn't this just another wrapper around a general model?" or "What makes this better?" If we couldn't answer those questions clearly, we wouldn't use it. We weren't looking for novelty. We were looking for something of value that is methodologically defensible.
AI Is Best for Message and Concept Optimization, Not Edge-Case Discovery
SAI: Are you concerned that AI won’t always be the right solution?
Rebecca: That’s exactly the right mindset – it isn’t always the right solution. AI doesn’t replicate the full spectrum of human behavior. I often describe it as performing very well in the middle of the bell curve – the statistically dominant response – but less well at the extremes.
So it's not ideal for uncovering rare archetypes or highly unconventional patterns of human behavior. It's also not something we'd rely on solely to forecast edge-case scenarios.
But where it excels is optimization. It helps refine messaging, concepts, and ideas; it eliminates obviously weak stimuli and allows us to enter live research better prepared.
Used appropriately, it improves research quality. Used blindly, it can create false confidence. The difference is the layer of human oversight.
Why “Behavior Simulation” Matters for Healthcare Market Research
SAI: How did you ultimately land on Synthetic Users?
Rebecca: What stood out was that it simulates behavior rather than simply generating answers. That distinction matters. It incorporates multiple modeling layers, including psychometric dimensions. In plain terms, it attempts to model emotional response patterns, not just rational reasoning.
In healthcare in particular, decisions aren’t purely logical. So that emotional modeling layer is really important.
It also has a system that leverages multiple large language models with healthcare-directed training components. When clients ask, “Where is this getting its information from?” we have a credible answer.
And credibility matters. We first encountered the platform through Harvard Business Review coverage, which gave us confidence in its intellectual foundations. Beyond that, it has partnerships with major enterprise organizations. This isn’t a side project, it’s serious technology.
Importantly, we didn’t try to build AI ourselves. We’re market researchers. We partnered with technologists who specialize in this. That division of expertise matters.
Iterative AI, Psychometrics, and Governance Are What Make Outputs Defensible
SAI: What aspects of the technology were most compelling?
Rebecca: Three things. First, it isn’t single-pass generation. It uses multiple modeling cycles – which means outputs aren’t simply the first plausible answer, but something iterated.
Second, the psychometric integration. That layer attempts to capture emotional variability and influence rather than logic alone.
Third, the security and governance. In regulated industries, data security and source integrity matter enormously.
Interestingly, early conversations are usually with commercial or insights teams. But once discussions progress, clients often bring in their technical stakeholders – and we actively encourage that. Tech-to-tech conversations are important. If a platform can’t withstand that scrutiny, it doesn’t belong in healthcare.
The Next Wave of AI in Market Research Will Require Human Oversight by Design
SAI: Where does the product still have limitations – and what are you working toward next?
Rebecca: The next major leap in AI won't come from simply making models larger or more powerful. It will come from something close to metacognition – the ability of systems to reflect on their own reasoning. In other words, to recognize when a question is straightforward enough for a simple answer, and when it's complex enough to require slowing down and second-guessing. To flag uncertainty. To understand its own limitations.
We’re not there yet.
And that’s where the human layer remains essential. Our role today is to interrogate outputs, challenge assumptions, and apply research discipline. That “human in the loop” function isn’t going away. What will evolve is the nature of that role. Instead of manually generating every insight, we’re increasingly directing, interpreting, and stress-testing intelligent systems.
The future isn’t AI replacing researchers. It’s researchers operating at a higher level of abstraction – supervising systems that can explore far more than any individual could alone.