Humanizing AI to Improve Market Research Outcomes 


Market research teams are under more pressure than ever.

Timelines are shrinking. Budgets are tightening. Expectations for certainty keep rising — even though early decisions are often made with limited evidence. In healthcare especially, the gap between what people say and what they actually do continues to undermine insight quality.

AI has promised to help. But “smarter” AI alone hasn’t delivered better research. In many cases, it’s created a new problem: false confidence rooted in overly rational thinking.

The Core Problem: Humans Don’t Act Rationally

Decades of behavioral science point to the same truth: people don’t make decisions the way traditional research — or logical AI — assumes they do.

Most real-world choices are driven by System 1 thinking:

· Fast

· Instinctive

· Emotionally guided

In healthcare, this is amplified by:

· Trust and credibility

· Habit and prior experience

· Risk aversion

· Time pressure

Rational explanations (System 2) usually come later — often as justification for a decision that’s already been made.


Why “Smarter AI” Isn’t Better Research

Logical AI excels at what it was built to do:

· Clear reasoning

· Consistency

· Data-driven conclusions

It can explain why a product should work:

“The product offers superior efficacy, competitive pricing, and convenient dosing.”

But market research doesn’t fail because teams lack logic. It fails because emotional resistance, habit, and discomfort are underestimated or missed entirely.

Logical AI assumes rational decision-making. Humans don’t operate that way.

The result? Early confidence — and hidden risk.


Introducing Humanized Synthetic Users

To close this gap, Fulcrum Research Group is excited to announce our partnership with Synthetic Users: synthetic research data that is “humanized” to go beyond merely “smart” AI.

These are AI-generated respondents designed not to be perfectly rational — but intentionally human.

They combine:

· Multiple LLMs trained in healthcare

· Psychometric data

· Embedded bias, memory, and emotional state

· Bounded rationality that mirrors real-world pressure

Instead of optimizing for correctness, they’re designed to reflect how people actually react.
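The post doesn’t disclose how Synthetic Users are built, but the description above maps to a familiar pattern: persona-conditioned generation. Below is a minimal, hypothetical Python sketch of how an LLM respondent might be conditioned on a role, psychometric traits, memory, and an emotional state. Every name in it is illustrative, not the product’s actual API.

```python
# A minimal sketch, not Synthetic Users' actual implementation (the post
# doesn't disclose it): one plausible way to condition an LLM respondent
# on a persona, psychometric traits, memory, and emotional state.
from dataclasses import dataclass, field

@dataclass
class SyntheticRespondent:
    role: str             # e.g. "community oncologist, 12 years in practice"
    risk_aversion: float  # psychometric trait, 0.0 (bold) to 1.0 (cautious)
    mood: str             # transient emotional state, e.g. "rushed", "skeptical"
    history: list[str] = field(default_factory=list)  # prior experiences that shape decisions

    def system_prompt(self) -> str:
        # Persona and state are injected as instructions so the model answers
        # in character instead of as a neutral, perfectly rational analyst.
        memories = "; ".join(self.history) or "no notable prior experience"
        return (
            f"You are a {self.role}. You are feeling {self.mood}. "
            f"Your risk aversion is {self.risk_aversion:.1f} on a 0-1 scale. "
            f"Relevant history: {memories}. "
            "React instinctively, stay in character, and keep deliberation "
            "brief -- real people under time pressure do not reason exhaustively."
        )

def complete(system: str, user: str) -> str:
    # Hypothetical stand-in for whatever LLM chat API is actually used.
    raise NotImplementedError("swap in your provider's chat-completion call")

def ask(respondent: SyntheticRespondent, stimulus: str) -> str:
    return complete(system=respondent.system_prompt(), user=stimulus)
```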


What Synthetic Users Do Differently

They surface gut-first reactions.

Synthetic Users respond quickly and instinctively — revealing:

· Trust or distrust

· Comfort or discomfort

· Anxiety, resistance, or openness

They operate within emotional “moods,” carry histories that shape decisions, and limit deliberation the way real people do.

The feedback feels human because it is human-like:

“I know the data looks good, but I’m comfortable with what I use now. I’d need a really strong reason to switch.”

That kind of resistance often goes unspoken — until it’s too late.
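Continuing that hypothetical sketch, one way to capture gut-first behavior is to elicit the System 1 reaction and the System 2 justification in separate, deliberately constrained passes, so the instinctive answer comes before any analysis:

```python
# Elicit the fast, gut-level (System 1) reaction first, under a tight
# instruction, then ask for the rational (System 2) justification
# separately so the instinctive answer isn't contaminated by analysis.
# Reuses SyntheticRespondent and ask() from the sketch above.
def gut_reaction(r: SyntheticRespondent, concept: str) -> str:
    return ask(r, (
        "In one or two sentences, give your immediate gut reaction to this "
        f"concept. No analysis -- just how it makes you feel:\n{concept}"
    ))

def rational_review(r: SyntheticRespondent, concept: str, gut: str) -> str:
    return ask(r, (
        f"Earlier you reacted: '{gut}'. Now explain that reaction: what about "
        f"this concept drives your trust, comfort, or resistance?\n{concept}"
    ))
```

Keeping the two passes separate mirrors the behavioral-science point above: the rational explanation arrives after, and often in service of, the gut decision.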


Why Creative Concepts Were Our First Test Case

Creative concepts aren’t chosen because they’re “correct.” They’re chosen because something feels relevant, empowering, or reassuring.

If an audience can’t emotionally connect with a concept, no amount of rational explanation will save it.

That’s why creative testing is a powerful litmus test:

· If Synthetic Users can’t feel a concept, they can’t truly evaluate it

· And neither can real humans


What the Results Showed

We evaluated creative concepts with both real human respondents and similarly designed Synthetic Users. Synthetic Users closely mirrored human respondents in both:

System 1 (Emotional)

· Initial impressions

· Tone and resonance

· Urgency to act

System 2 (Rational)

· Concept ranking

· Believability and uniqueness

· Strength of clinical data

· Overall understanding

But they went further.

Synthetic Users provided more detailed, actionable optimization feedback:

· How to adjust visuals to better support messaging

· New metaphor ideas for disease education

· Revised language (in native tone) to soften or strengthen impact

This is feedback human respondents often feel — but struggle to articulate.


The Real Impact: Reducing Research Waste

This isn’t about replacing human research.

It’s about adding a “Phase 0” exploration layer that helps teams:

· Identify emotional risk early

· Eliminate weaker concepts before large-scale testing

· Enter primary research with sharper hypotheses

· Save time and budget without sacrificing rigor

Synthetic Users reduce waste; they don’t replace validation.
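For a rough sense of what that “Phase 0” layer could look like in code, here is a hypothetical screen built on the sketches above. The keyword-based scoring is a toy placeholder; a real pipeline would elicit structured ratings rather than keyword-matching free text.

```python
# Hypothetical "Phase 0" screen: run each concept past a small synthetic
# panel and carry only the strongest forward into human research.
RESISTANCE_CUES = ("comfortable with what i use", "need a really strong reason",
                   "not convinced", "too risky")

def panel_receptivity(panel: list[SyntheticRespondent], concept: str) -> float:
    reactions = [gut_reaction(r, concept).lower() for r in panel]
    resistant = sum(any(cue in rx for cue in RESISTANCE_CUES) for rx in reactions)
    return 1.0 - resistant / len(reactions)  # share of panelists not resisting

def phase_zero(panel: list[SyntheticRespondent], concepts: list[str],
               keep: int = 3) -> list[str]:
    # Rank concepts by synthetic-panel receptivity and keep the top few
    # for large-scale testing with real respondents.
    ranked = sorted(concepts, key=lambda c: panel_receptivity(panel, c),
                    reverse=True)
    return ranked[:keep]
```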


The Bottom Line

Logical AI can tell you what should work.

Humanized Synthetic Users show you what might actually fail — early enough to do something about it.

And in market research, that difference can mean everything.

