Identifying non-cooperative participation in web-based elicitation of Acceptability Judgments – How to get rid of noise in your data
Jutta Pieper, Alicia Börner, Tibor Kiss
February 2023
 

In this paper, we discuss different sources of noise and other detrimental effects in the elicitation of experimental data: such effects may emerge due to the loss of control in the now widely favored unsupervised web-based elicitation. But noise may also be task-related, namely if participants lack understanding of the task and do not satisfy its underlying assumptions. Finally, the researchers themselves may also be held responsible for noise if they employ a poor questionnaire design that fails to control for biases and adverse effects and lacks sufficient means to identify inapt participants. We describe a stepwise process for obtaining elicited data of the highest attainable quality in web-based Acceptability Judgment Tasks (AJTs). In the first step, the questionnaire design, we focus on the careful construction of appropriate filler and control items, and introduce an alternative to instructional manipulation checks that is appropriate for AJTs, namely attention items. The second step is the choice of the right platform from which to elicit experimental data. Lastly, we present reflections on how to employ analyses of general response times as well as of responses to the specialized items, so that potentially inapt participants are reliably detected by latency- and response-based methods.

Supplementary materials to the methods described in Chapter 5 (Control by analyses: latency- and response-based identification of non-cooperative participants) can now be found on GitHub (https://github.com/Linguistic-Data-Science-Lab/AJTs-eligibility-screening). This repository contains, in particular, source code for ReMFOD (recursive multi-factorial outlier detection), which identifies underperforming participants by means of RTs (see section 5.1), and for the computation of reasonable thresholds in terms of the probabilities of passing control/attention trials by chance, so that participants who failed on these can be reliably determined (see section 5.2). The markdown files show how the tables and figures of the paper have been created.
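To illustrate the idea behind the chance-based thresholds for control/attention trials mentioned above, the sketch below is a minimal illustration rather than the code from the repository: it assumes a simple binomial model in which a purely random responder passes each trial independently with probability p (e.g., because a subset of scale points counts as a pass), and it searches for the smallest number of passed trials that such a responder would be unlikely to reach. The function names and the example parameters (8 trials, 3 of 7 scale points, a 1% tolerance) are hypothetical.

```python
import math

def chance_pass_probability(k: int, n: int, p: float) -> float:
    """Probability of passing at least k of n control/attention trials
    under random responding with per-trial pass probability p (binomial model)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def required_passes(n: int, p: float, alpha: float = 0.01) -> int:
    """Smallest number of passed trials k such that a random responder
    reaches k or more passes with probability below alpha."""
    for k in range(n + 1):
        if chance_pass_probability(k, n, p) < alpha:
            return k
    return n + 1  # even passing all n trials is still likely by chance

# Hypothetical example: 8 control trials, 3 of 7 scale points counted as a pass
print(required_passes(n=8, p=3 / 7, alpha=0.01))
```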
Format: [ pdf ]
Reference: lingbuzz/006514
(please use that when you cite this article)
Published in: Linguistic Data Science Lab, Ruhr-University Bochum
keywords: acceptability judgments, questionnaire design, eligibility screening, online participant pools, attention checks, control items, syntax
previous versions: v2 [May 2022]
v1 [March 2022]
Downloaded: 1023 times

 
