Is the quality of data you get from your online panelists good enough?
BY: DARREN BRIDGER, VP RESEARCH AND SCIENCE
Published on: November 3 2023

At CloudArmy, we are pioneers in online response-time testing for market research. Our clients value the ability to get deeper emotional and intuitive consumer reactions, quickly, and anywhere in the world through our reaction time and Implicit tests. 


Yet making sure we are capturing good quality data is challenging, and getting more so. There are several different challenges, and each requires a different approach.


In this blog we’ll examine each of these challenges, including the toughest of all: ensuring that we are capturing accurate reaction times.

Challenge One: Bots and fake panelists

Online bots pose a threat to market research by automatically submitting high volumes of fake survey responses, and they are becoming harder to detect. Their simple goal is to accumulate rewards at scale. Estimates suggest bots could account for 10-30% or more of all survey responses, meaning a significant proportion of survey data may be corrupted. Bots can even generate convincing open-ended comments using natural language processing. Detecting and excluding bots is an endless game of whack-a-mole.


Those little ‘CAPTCHA’ forms - where you have to perform a task like clicking on all the images that contain a bridge - are only part of the solution.
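Beyond CAPTCHAs, one simple heuristic (illustrative only, not CloudArmy's actual detection pipeline) is to look for tell-tale duplication: bot farms often submit the same open-ended comment across many responses. A minimal sketch:

```javascript
// Illustrative sketch: find open-ended comments that appear more than once
// in a batch of responses — a common signature of scripted submissions.
// (Hypothetical data shape; real screening combines many signals.)
function findDuplicateComments(responses) {
  const counts = new Map();
  for (const r of responses) {
    const key = r.comment.trim().toLowerCase(); // normalize before comparing
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return [...counts].filter(([, n]) => n > 1).map(([text]) => text);
}

const batch = [
  { comment: "Great product" },
  { comment: "great product " },
  { comment: "I liked the packaging" },
];
console.log(findDuplicateComments(batch)); // ["great product"]
```

In practice this would be one signal among many (IP reputation, response timing, device fingerprints), since sophisticated bots vary their output.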

Challenge Two: Inattentive panelists

Even genuine humans can corrupt data by responding thoughtlessly. Many participants approach surveys with a “get it done quick” mindset, speeding through and providing lazy responses simply to finish faster.


While not outright fabrication like bots, such inattentive responses still produce low-quality data that fails to capture true opinions, attitudes and behaviors.


People are also more attention starved than ever, barraged by distractions ranging from social media to streaming entertainment and 24/7 news cycles. Just focusing on daily life has become a challenge, let alone diligently completing tedious surveys. With multitasking the norm and technology engineered for non-stop engagement, people have less patience for monotony. Securing attentive responses in online research may be harder today than ever before. Market researchers must empathize with overwhelmed respondents and craft engaging experiences that respect participants’ time and limited cognitive bandwidth. Otherwise, poor attention and high dropout rates will persist.


This too stems from economic incentives. Participants are motivated to minimize time spent on monotonous surveys that offer little reward, rather than providing thoughtful consideration. 
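Two common symptoms of this behavior are "speeding" (finishing far faster than thoughtful completion allows) and "straight-lining" (giving the same answer to every question). A minimal sketch of how such responses might be flagged (hypothetical thresholds and data shape, not CloudArmy's actual quality checks):

```javascript
// Illustrative check for inattentive responses: flag a response if it was
// completed implausibly fast, or if every answer is identical
// ("straight-lining"). The 60-second floor is a made-up example threshold;
// real cutoffs would be calibrated per survey.
function flagInattentiveResponse(response, minSeconds = 60) {
  const tooFast = response.completionSeconds < minSeconds;
  const straightLined = new Set(response.answers).size === 1;
  return tooFast || straightLined;
}

const speeder = { completionSeconds: 20, answers: [3, 3, 3, 3, 3] };
const careful = { completionSeconds: 240, answers: [2, 4, 3, 5, 1] };
console.log(flagInattentiveResponse(speeder)); // true
console.log(flagInattentiveResponse(careful)); // false
```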

Challenge Three: Getting accurate reaction-time measures online

Measuring reaction times has clear advantages, yet the bar for accurate reaction-time and Implicit testing is high. 


Reaction-time testing was developed in academic cognitive science labs. However, getting good quality data in lab conditions is far easier than online. In labs, researchers can rely on using standardized equipment that has been tested and calibrated to ensure accurate, consistent reaction-time measurements. And the lab environment itself can be controlled to minimize distractions and provide supervision for participants, to help make sure they are focusing on the task at hand and to answer questions about anything they don’t understand.


For online market research, we do not have these advantages. There are many more points of potential failure or unreliability in delivering the test to the respondent, and then transmitting the reaction times back. So how do we ensure that we capture the high-quality reaction-time data we need in order to give our clients the best insights?


As the variety of devices and browsers used to connect to the internet proliferates, it can be challenging to ensure that your online surveys behave equivalently and capture accurate reaction times, even when a panelist has an intermittently poor internet connection. Time lags and inconsistent technical performance make recording precise timings harder.
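To make the timing problem concrete: in the browser, `Date.now()` follows the system clock and can jump if the clock is adjusted, whereas `performance.now()` is monotonic with sub-millisecond resolution, which is why it is the standard choice for measuring stimulus-to-response intervals. A minimal sketch (not CloudArmy's implementation; real tests must also account for rendering delays, input-event latency, and browsers deliberately coarsening timer precision):

```javascript
// Minimal client-side reaction-time sketch using performance.now(), a
// monotonic, sub-millisecond clock unaffected by system-clock changes.
function makeTrialTimer() {
  let shownAt = null;
  return {
    // Call at the moment the stimulus is displayed.
    stimulusShown() { shownAt = performance.now(); },
    // Call from the response event handler; returns elapsed milliseconds.
    responseReceived() { return performance.now() - shownAt; },
  };
}

const timer = makeTrialTimer();
timer.stimulusShown();
// ...in a real test, the respondent's key press or tap fires the handler...
const reactionMs = timer.responseReceived();
console.log(reactionMs >= 0); // true
```

Even with a monotonic clock, the measured interval includes device- and browser-specific overhead, which is exactly why cross-device calibration matters for online Implicit testing.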


It's like trying to time Olympic sprinters with stopwatches while they run on treadmills in their own homes across different time zones. Variations in treadmill calibration and connection lag mean the times may not be precise or comparable.


Online Implicit researchers must ensure that we are genuinely capturing accurate response times, across myriad technical connections, to get the best of both worlds: lab accuracy and the speed and reach of online market access. 

What's next?

In summary, there are now multiple challenges to getting good quality market research results online. For market researchers who rely on these data for business insights, the implications are concerning. Findings based on low-quality data cannot be relied upon.


In upcoming posts, we will explore how CloudArmy has developed methods to overcome these hurdles and reliably gather insightful implicit and reaction-time data in online surveys. Stay tuned to learn how we have cracked the code on biometric testing for market research in the digital age.