2017 • No. 17–2
Research Data Reports
Battery Order Effects on Relative Ratings in Likert Scales
Many fields of research, especially in the social sciences, rely on surveys as a means of collecting data. Indeed, certain types of information, such as attitudes or self-assessments, can only be gathered in this manner. Experience has shown that response patterns are heavily influenced by questionnaire design, with variations in the instrument often introducing systematic tendencies, a "survey effect," into the distribution of sample statistics. Aspects ranging from something as fundamental as the survey mode (Bowling 2005) to seemingly trivial details of question presentation are known to make a difference. As a result, an entire field of research, survey methodology, has emerged to better understand these aspects of data collection and to establish conventions for consistency. One prominent theme in the survey design literature is that order often matters.
In this work, the author focuses on order effects within a very narrow, but common, form of survey question: a battery of Likert-scale questions. A Likert-scale question asks a respondent to select a response from a set of categorical, ordered options. Likert-scale batteries allow respondents to efficiently provide ratings for a group of items in the context of one another. For this reason, analyses often center on the relative rating distributions of two items in the battery, or how often one is given a lower rating than the other. Unlike mean ratings or even the distribution of ratings themselves, relative rating distributions provide direct insight into how the population feels toward one item relative to another and avoid issues caused by heterogeneity of responses, in which certain individuals tend to give high ratings, while others tend to give low ratings. The author studies how different orderings of the items within a battery and, in particular, the relative location of items affect relative rating distributions.
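The relative rating distribution described above can be made concrete with a small sketch. The following is not the author's code; it is a minimal illustration, using hypothetical item names and ratings, of how one might tabulate how often one battery item is rated lower than, equal to, or higher than another across respondents:

```python
from collections import Counter

def relative_rating_distribution(responses, item_a, item_b):
    """Share of respondents rating item_a lower than, equal to,
    or higher than item_b. Each response is a dict mapping item
    names to ordinal Likert ratings (e.g., 1-5)."""
    counts = Counter()
    for r in responses:
        a, b = r[item_a], r[item_b]
        counts["lower" if a < b else "equal" if a == b else "higher"] += 1
    n = len(responses)
    return {k: counts[k] / n for k in ("lower", "equal", "higher")}

# Hypothetical ratings of two items on a 1-5 scale.
responses = [
    {"cash": 4, "card": 5},
    {"cash": 2, "card": 2},
    {"cash": 5, "card": 3},
    {"cash": 1, "card": 4},
]

print(relative_rating_distribution(responses, "cash", "card"))
# {'lower': 0.5, 'equal': 0.25, 'higher': 0.25}
```

Because each respondent's two ratings are compared to each other, a person's overall tendency to rate everything high or low cancels out, which is why this statistic sidesteps the response-heterogeneity issue noted above.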
Key Findings
- Ordering effects are real and consistent across years. The most prominent effect relating to relative locations of items is that the farther one item is placed after another item, the more likely that item is to have a lower rating.
Implications
The results show changes large enough to bias comparisons of groups in which data were collected under different orderings. For example, if a researcher were interested in comparing relative ratings of two items in one sub-population to those in a second sub-population, but responses for the two were collected under different battery orderings, such comparisons would likely be unfair. While the order-specific differences are impossible to predict, changes in reported attitudes based on item distances can be adjusted for. Otherwise, sub-populations with similar tendencies may be incorrectly marked as different solely because of the different relative locations of items in the survey batteries. At the very least, researchers who use similar Likert-scale batteries should be aware of these effects when writing questionnaires and analyzing survey data.
From the viewpoint of a survey methodologist, it is unclear how best to collect responses for a series of Likert-scale questions. One option is to include an ordering effect of the kind described in this paper directly in the stochastic model used to describe relative response tendencies. Randomizing the item order will also be effective at averaging out ordering effects, as long as sample sizes are sufficiently large. Finally, because the author estimates that the effects grow with the distance between items, shorter Likert batteries could be used. However, it is unclear how the order of the blocked Likert batteries would affect results, and more research is necessary to ascertain the potential benefits of such an approach.
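The randomization strategy mentioned above can be sketched briefly. This is not the survey's actual instrument logic, just a minimal illustration, with hypothetical item names, of drawing an independent random item order for each respondent so that distance-based ordering effects average out across the sample:

```python
import random

def randomized_battery(items, rng=random):
    """Return a fresh random ordering of the battery items for one
    respondent. Over many respondents, each pair of items appears at
    each relative distance with equal frequency, so ordering effects
    average out."""
    order = list(items)  # copy so the master list is untouched
    rng.shuffle(order)
    return order

# Hypothetical battery items.
items = ["cost", "convenience", "security", "speed"]
for respondent in range(3):
    print(randomized_battery(items))
```

Each call returns a permutation of the same items, so the battery content is identical across respondents; only the presentation order varies.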
Abstract
Likert-scale batteries, sequences of questions with the same ordinal response choices, are often used in surveys to collect information about attitudes on a related set of topics. Analysis of such data often focuses on the study of relative ratings, or the likelihood that one item is given a lower (or higher) rating than another item. This work studies how different orderings of the items within a battery and, in particular, the relative location of items affect relative rating distributions. We take advantage of data from the 2012–2014 Survey of Consumer Payment Choice, in which item order in six Likert-scale batteries is varied among respondents. We find that ordering effects are real and consistent across years. The most prominent effect relating to relative locations of items is that the farther one item is placed after another item, the more likely that item is to have a lower rating.