What it is:
Synthetic Users built for pharma research: AI that behaves like real, emotional respondents.

Synthetic Users are AI-generated representations of physicians or patients that participate in interactive research interviews. These Users respond naturally to moderator questions, provide nuanced feedback, and allow follow-up probing, delivering powerful, scalable insights that mirror real-world interactions at 92% parity with human responses. The platform integrates multiple LLMs, life-science sources, and psychometric data on human personality types so that personas mimic real human emotion, behavior, knowledge, and attitudes.
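The vendor does not publish its implementation, but the core pattern, a persona-conditioned LLM answering moderator questions turn by turn, can be sketched in a few lines. Everything below (the persona fields, the gpt-4o model, the OpenAI client) is an illustrative assumption, not the platform's actual stack:

```python
# Minimal sketch of a persona-conditioned synthetic interview turn.
# Persona fields, model name, and the OpenAI client are illustrative
# assumptions, not the platform's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona = {
    "role": "community oncologist, 14 years in practice",
    "traits": "high conscientiousness, moderate risk aversion",  # psychometric conditioning
    "context": "sees ~30 patients/week, time-pressured, skeptical of new data without peer validation",
}

system_prompt = (
    "You are a synthetic research respondent. Stay in character as: "
    f"{persona['role']}. Personality: {persona['traits']}. "
    f"Practice context: {persona['context']}. "
    "Answer the moderator's questions naturally, with the hesitations, "
    "emotions, and knowledge limits a real respondent would show."
)

def interview_turn(history: list[dict], moderator_question: str) -> str:
    """Send one moderator question and return the persona's reply."""
    messages = [{"role": "system", "content": system_prompt}, *history,
                {"role": "user", "content": moderator_question}]
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Example probe plus a follow-up, preserving conversational history:
history: list[dict] = []
question = "What is your first reaction to this dosing message?"
answer = interview_turn(history, question)
history += [{"role": "user", "content": question},
            {"role": "assistant", "content": answer}]
follow_up = interview_turn(history, "What would make you trust that claim more?")
```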
Validated science, commercial utility:
At ICML 2025, researchers from Stanford, University of Chicago, Princeton, and Santa Fe Institute released a position paper arguing that large language models can already simulate human behavior accurately enough for exploratory social science. Around the same time, in Nature, researchers from the Max Planck Institute, NYU, Princeton, and Google DeepMind introduced Centaur, a foundation model of human cognition—fine-tuned on trial-by-trial data from over 60,000 participants across 160 experiments. Together, these two papers (see bottom of this article for links) mark a turning point for anyone working on Synthetic Users, agents, or simulated research.
The ICML paper outlines five key challenges for LLM-based human simulation:
- Diversity
- Bias
- Sycophancy
- Alienness
- Generalization
But instead of treating them as fatal flaws, the authors frame them as tractable engineering and methodological problems—solvable with context-rich prompts, fine-tuning, and iterative evaluation.
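One way to picture the "iterative evaluation" step is a simple check of how a synthetic panel's answer distribution lines up with a human benchmark on the same closed-ended question. The sketch below is a toy illustration with invented answers; the overlap metric and data are assumptions, not anything prescribed by the paper:

```python
# Illustrative evaluation step: compare how a panel of synthetic respondents
# answers a closed-ended question against a human benchmark, using simple
# distribution overlap. All data here is invented for illustration.
from collections import Counter

def response_overlap(synthetic: list[str], human: list[str]) -> float:
    """Overlap between two response distributions (1.0 = identical shares)."""
    syn, hum = Counter(synthetic), Counter(human)
    n_syn, n_hum = len(synthetic), len(human)
    options = set(syn) | set(hum)
    return sum(min(syn[o] / n_syn, hum[o] / n_hum) for o in options)

# Hypothetical answers to "Would you prescribe this as first-line therapy?"
synthetic_answers = ["yes", "no", "unsure", "yes", "yes", "no", "unsure", "yes"]
human_answers     = ["yes", "yes", "no", "unsure", "yes", "no", "no", "no"]

print(f"Distribution overlap: {response_overlap(synthetic_answers, human_answers):.0%}")
# Prints 75% for this toy data. Low overlap flags bias or alienness on that
# question; prompts or fine-tuning are then revised and the check is rerun.
```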
Meanwhile, the Centaur team showed what that looks like in practice:
- Centaur outperforms traditional cognitive models in nearly every held-out experiment
- It generalizes across cover stories, task structures, and even entire domains
- Its internal representations align more closely with human fMRI activity
- It supports interpretable, model-guided scientific discovery
They fine-tuned Llama 3.1-70B on 10 million decisions from their Psych-101 dataset: no prompt hacking, just proper training on structured behavioral data (a minimal sketch of that recipe follows the list below). The takeaway? Synthetic Users are no longer theoretical. They are a new class of method, and the first serious, empirically validated toolkits are already here. They won't replace human participants, but they can meaningfully expand what's possible in:
- Pilot studies
- Counterfactuals
- Theory development
- UX research
- Scaling social science
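For readers who want to see what "proper training on structured behavioral data" can look like in code, here is a loose, minimal sketch in the spirit of the Centaur recipe: render trial-by-trial decisions as text and fine-tune a causal LM with LoRA adapters. The model name, example records, and hyperparameters are stand-ins (the paper used Llama 3.1-70B on the full Psych-101 dataset), not the authors' released code:

```python
# Sketch of Centaur-style fine-tuning on trial-by-trial behavioural data.
# A small stand-in model and two invented records are used so the pattern
# is clear; all names and hyperparameters are assumptions.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

def render_trial(record: dict) -> str:
    """Render one behavioural trial as text, with the human choice as the target."""
    return (f"Task: {record['task']}\nOptions: {', '.join(record['options'])}\n"
            f"Participant chose: {record['choice']}")

trials = [  # invented records standing in for Psych-101-style data
    {"task": "two-armed bandit, trial 12", "options": ["left", "right"], "choice": "left"},
    {"task": "risky gamble vs sure payout", "options": ["gamble", "sure"], "choice": "sure"},
]
dataset = Dataset.from_dict({"text": [render_trial(t) for t in trials]})

model_name = "meta-llama/Llama-3.1-8B"  # stand-in; the paper used the 70B variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8,
                                         target_modules=["q_proj", "v_proj"]))

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="centaur-sketch",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```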
The Synthetic User platform was even discussed as a prime example of this type of cutting-edge work in Harvard Business Review's "How Gen AI Is Transforming Market Research."
If you’re still thinking of AI Market Research as a gimmick, it’s time to revisit that position.
Centaur paper (Nature): https://www.nature.com/articles/s41586-025-09215-4
ICML position paper (arXiv): https://arxiv.org/abs/2504.02234
HBR article: https://hbr.org/2025/05/how-gen-ai-is-transforming-market-research
Where Synthetic Users add the most value in pharma:
- Add a “Phase 0” to research programs (before market research) to:
• Pressure-test and optimize stimuli early (e.g., messaging, concepts, positioning ideas)
• Identify trust and risk concerns
• Explore HCP or patient segment differences
• Refine qual discussion guides
• Reduce unnecessary quant testing
- Explore questions that arise after market research
- Reliable ‘human’ insights at lightning speed when primary market research isn’t feasible
Earlier insight, better downstream research
Contact us and we'll show you how adding Synthetic Users to your research program can speed up and sharpen insights.

Note: As with all our market research, we ensure our AI tools follow all necessary security, ethics, governance, and compliance requirements.