Battery Order Effects on Relative Ratings in Likert Scales
Many fields of research, especially those in the social sciences, rely on surveys as a means of collecting data. Indeed, certain types of information, such as attitudes or self-assessments, can only be gathered in this manner. Experience has shown that response patterns are heavily influenced by questionnaire design, with variations in the instrument often introducing systematic tendencies and a "survey effect" in the distribution of sample statistics. Aspects ranging from something as fundamental as the survey mode (Bowling 2005) to seemingly trivial details of question presentation are known to make a difference. As a result, an entire field of research, survey methodology, has emerged to better understand these aspects of data collection and to establish conventions for consistency. One of the prominent themes in the survey design literature is that order often matters.
In this work, the author focuses on order effects within a very narrow, but common, form of survey question: a battery of Likert-scale questions. A Likert-scale question asks a respondent to select a response from a set of categorical, ordered options. Likert-scale batteries allow respondents to efficiently provide ratings for a group of items in the context of one another. For this reason, analyses often center on the relative rating distributions of two items in the battery, or how often one is given a lower rating than the other. Unlike mean ratings or even the distribution of ratings themselves, relative rating distributions provide direct insight into how the population feels toward one item relative to another and avoid issues caused by heterogeneity of responses, in which certain individuals tend to give high ratings, while others tend to give low ratings. The author studies how different orderings of the items within a battery and, in particular, the relative location of items affect relative rating distributions.
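To make the quantity of interest concrete, the following sketch tallies a relative rating distribution from paired Likert responses: for each respondent who rated both items, it records whether item A was rated below, equal to, or above item B. The function name, the illustrative data, and the three-category summary are choices made for this example, not part of the work described above.

```python
from collections import Counter

def relative_rating_distribution(ratings_a, ratings_b):
    """Share of paired respondents rating item A below,
    equal to, or above item B (an illustrative sketch)."""
    counts = Counter()
    for a, b in zip(ratings_a, ratings_b):
        if a < b:
            counts["A < B"] += 1
        elif a > b:
            counts["A > B"] += 1
        else:
            counts["A = B"] += 1
    n = len(ratings_a)
    return {k: counts[k] / n for k in ("A < B", "A = B", "A > B")}

# Hypothetical data: five respondents rate items A and B on a 1-5 scale.
a = [3, 4, 2, 5, 3]
b = [4, 4, 3, 2, 5]
print(relative_rating_distribution(a, b))
# → {'A < B': 0.6, 'A = B': 0.2, 'A > B': 0.2}
```

Because each respondent serves as their own baseline, a reviewer who rates everything one point higher than another reviewer still contributes the same within-pair comparison, which is why this summary sidesteps the response heterogeneity mentioned above.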