A few years ago, Procter & Gamble publicly stated that it had seen inconsistent results from successive online research projects. Other organizations shared similar experiences, and questions were raised about “professional respondents.” With the trustworthiness of online research in doubt, multiple initiatives emerged. Over the past two years, we've seen a lot of debate on this topic, and associations such as ESOMAR and the ARF have developed protocols that all reputable panels should follow, and many now do. But what does this mean from a client perspective? How have initiatives such as the ARF's Quality Enhancement Process and MarketTools' TrueSample, or techniques such as machine fingerprinting, changed the industry?

Next month, I'm hosting a panel at Forrester's Marketing Forum 2010 with participants from Microsoft, Procter & Gamble, and the ARS Group to understand what the challenges with online sampling are today and how the industry is adapting to them.

Questions I will discuss with the panel include the following:

  • What made you realize there was an issue with the quality of online panels?
  • Existing quality marks solve only part of the issue; another problem is the small pool of active respondents across panels. What actions should the industry take to address this broader issue?
  • Online qualitative research is expected to see significant uptake over the next few years. Will the quality issues we've seen with online panels also surface in online community research?


I'd love to see you on April 22 in Los Angeles. If you can't make it, let me know which questions you think I should be asking the panel. I'll be blogging about it afterwards!