Detection of Careless Responses in Online Surveys Using Answering Behavior on Smartphone

Masaki Gogami, Yuki Matsuda, Yutaka Arakawa, Keiichi Yasumoto: Detection of Careless Responses in Online Surveys Using Answering Behavior on Smartphone. In: IEEE Access, pp. 1-1, 2021, ISSN: 2169-3536.

Abstract

Some respondents give careless responses due to satisficing, an attempt to complete a questionnaire as quickly and easily as possible. To obtain results that reflect reality, it is necessary to detect satisficing and exclude such responses from analysis. One established method detects satisficing by adding questions that check for violations of instructions and inconsistencies. However, this approach may cause respondents to lose motivation and itself prompt them to satisfice. Moreover, a deep learning model that automatically answers these screening questions has been reported, threatening the reliability of this conventional method. To detect careless responses without inserting such screening questions, a previous study attempted machine learning (ML) detection using data obtained from answer results, achieving a detection rate of 55.6%, which is insufficient for practical use. We therefore hypothesized that a supervised ML model with a higher detection rate could be constructed by using on-screen answering behavior as features. However, (1) no existing questionnaire system can record on-screen answering behavior, and (2) even if such behavior can be recorded, it is unclear which behavioral features are associated with satisficing. We developed an answering-behavior recording plug-in for LimeSurvey, an online questionnaire system used worldwide, and collected a large dataset (5,692 respondents) in Japan. We then examined and generated a variety of features from the answering behavior and constructed ML models to detect careless responses. We call this detection method the ML-ABS (ML-based answering behavior scale). Evaluation by cross-validation demonstrated a detection rate of 85.9% for careless responses, much higher than the previous ML method.
Among the features we proposed, reselecting Likert-scale options and scrolling contributed particularly strongly to the detection of careless responses.
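The abstract describes turning raw on-screen answering behavior into features (e.g. Likert-scale reselections and scrolling) for a supervised ML detector. A minimal sketch of such feature extraction is shown below; the event schema, event names, and feature definitions are illustrative assumptions, not the authors' actual plug-in format or feature set.

```python
# Hedged sketch: summarizing one respondent's answering-behavior event log
# into candidate features like those the paper highlights (reselections,
# scrolling). The event dictionaries here are an assumed schema, not the
# LimeSurvey plug-in's real output.
from collections import Counter
from typing import Dict, List

def extract_features(events: List[dict]) -> Dict[str, float]:
    """Summarize a respondent's event log into per-respondent features."""
    kinds = Counter(e["type"] for e in events)
    answers = [e for e in events if e["type"] == "select"]
    # A "reselection" = choosing an option for a question already answered.
    seen, reselections = set(), 0
    for e in answers:
        if e["question"] in seen:
            reselections += 1
        seen.add(e["question"])
    duration = events[-1]["t"] - events[0]["t"] if events else 0.0
    return {
        "reselection_count": float(reselections),
        "scroll_count": float(kinds["scroll"]),
        "total_seconds": float(duration),
    }

# Toy event log: the respondent answers q1, scrolls, changes q1, answers q2.
events = [
    {"type": "select", "question": "q1", "t": 0.0},
    {"type": "scroll", "question": None, "t": 1.2},
    {"type": "select", "question": "q1", "t": 2.5},  # reselection of q1
    {"type": "select", "question": "q2", "t": 4.0},
]
feats = extract_features(events)
print(feats)  # one reselection, one scroll event, 4.0 seconds elapsed
```

Feature vectors like this would then feed a standard supervised classifier evaluated by cross-validation, as the abstract describes.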

BibTeX (Download)

@article{9387296,
title = {Detection of Careless Responses in Online Surveys Using Answering Behavior on Smartphone},
author = {Masaki Gogami and Yuki Matsuda and Yutaka Arakawa and Keiichi Yasumoto},
doi = {10.1109/ACCESS.2021.3069049},
issn = {2169-3536},
year = {2021},
date = {2021-01-26},
journal = {IEEE Access},
pages = {1-1},
abstract = {Some respondents give careless responses due to satisficing, an attempt to complete a questionnaire as quickly and easily as possible. To obtain results that reflect reality, it is necessary to detect satisficing and exclude such responses from analysis. One established method detects satisficing by adding questions that check for violations of instructions and inconsistencies. However, this approach may cause respondents to lose motivation and itself prompt them to satisfice. Moreover, a deep learning model that automatically answers these screening questions has been reported, threatening the reliability of this conventional method. To detect careless responses without inserting such screening questions, a previous study attempted machine learning (ML) detection using data obtained from answer results, achieving a detection rate of 55.6%, which is insufficient for practical use. We therefore hypothesized that a supervised ML model with a higher detection rate could be constructed by using on-screen answering behavior as features. However, (1) no existing questionnaire system can record on-screen answering behavior, and (2) even if such behavior can be recorded, it is unclear which behavioral features are associated with satisficing. We developed an answering-behavior recording plug-in for LimeSurvey, an online questionnaire system used worldwide, and collected a large dataset (5,692 respondents) in Japan. We then examined and generated a variety of features from the answering behavior and constructed ML models to detect careless responses. We call this detection method the ML-ABS (ML-based answering behavior scale). Evaluation by cross-validation demonstrated a detection rate of 85.9% for careless responses, much higher than the previous ML method. Among the features we proposed, reselecting Likert-scale options and scrolling contributed particularly strongly to the detection of careless responses.},
keywords = {Answering behavior, Approximation algorithms, careless response, Data models, Deep learning, Feature extraction, Licenses, online questionnaire, Psychology, Reliability, satisficing, smartphone, supervised machine learning, touchscreen},
pubstate = {published},
tppubtype = {article}
}