Minimizing Bias in Market Research: An Interview with Erik Coats


By The Fulcrum Research Division

Erik Coats is one of the co-founders of Fulcrum Research Group. We sat down with him to discuss some of the most important topics in market research and how they relate to the approach Fulcrum takes – and will continue to take – in the future. 

This is a portion of our interview and has been edited for length and clarity. 

Interviewer: How has your background in academic psychology influenced what you do in market research at Fulcrum? 

Erik: I studied social experimental psychology and personality, and a lot of our program was social cognition. That's roughly what people today call behavioral economics: understanding how people actually think in the world, how they make decisions, and how those decisions aren't always rational. Of all the things I did in grad school, that's probably the most applicable to market research.

Minimizing bias in market research 

Erik: When you’re an academic psychologist, you’re always trying to minimize bias and measure things as accurately as possible. In market research, there’s no mechanism for validation, and there are time and cost constraints, but we can apply best practices. This is what we try to do at Fulcrum by recommending approaches to our clients that use psychology to minimize bias.

Interviewer: Could you give an example? 

Erik: We do a lot of research talking to doctors about highly detailed product profiles for potential new products that could come to market. Two issues matter here: one is how you present the information, and the other is how you measure people’s answers.

We show doctors information and expect them to process it – to think about it and what it means. And we just know for a fact that’s not happening.


It’s worse in quantitative research. In quantitative studies, there’s no incentive for respondents to be thoughtful. You’re paying them to complete the survey, not by the hour as you would in a qualitative interview. Studies show that many doctors only scan the information in a target product profile (TPP) in a survey. They go right to efficacy, read it quickly, and move on.

A common approach I see to address this is not letting respondents move on for 15 seconds, to slow them down. But that’s not really making them think. You’re not necessarily even making them read it for those 15 seconds. You’re just making them sit there and get angry, pushing buttons and wanting to move on.

Interviewer: How have you addressed this at Fulcrum? 

Erik: At Fulcrum, we asked: why are we making it so difficult for respondents? For a doctor, adopting a new product is very much System 2 thinking – logical and rational. We want our research to reflect a System 2, real-world decision. Qualitative interviews are best for this: we’re paying doctors by the hour, and there’s a moderator with them, so they move more slowly. It’s clearly System 2 in qualitative.

But we need to make the information we present more interesting, especially in quantitative surveys like demand research. To address this, we introduced the video product profile (VPP): an animated presentation, with audio and visual displays, that walks doctors through the product. It makes the process more engaging. It costs more and takes longer, but we’ve seen differences in the results.

For example, we recently had a situation where we used regular static TPPs in a quantitative study for a client where we typically use VPPs. One product we tested really wasn’t that good, but it tested well – doctors said they would use it. In follow-up qualitative interviews, where we took the doctors through the product, they said, “No, I would never use that product.” In the survey, doctors had assumed that any product being tested was a good product, and that they always use new products. They went along without reading it closely; they took a shortcut. And we got a quantitative study that’s clearly flawed because of that. The lesson is that you need to think about what you’re showing respondents, and how you show it to them, to ensure your responses are valid.
