- MELANIE ABEYSUNDERA, Statistics Canada
Using Total Survey Error Frameworks to Study Mode Effect [PDF]
-
An increasing number of surveys at Statistics Canada have introduced electronic questionnaires as the primary mode of data collection. This change has raised questions about potential differences in response outcomes due to the mode of collection. Definitions of mode and mode effect are proposed. The relationship between mode and the components of total survey error (TSE) is studied using a proposed extension of the Groves and Lyberg (2010) framework that includes an additional branch for collection. Relationships between the components of TSE and the stochastic mechanisms associated with the sampling process are also examined. An example of how the framework was used to evaluate the effect of a change of sampling frame in the General Social Survey will be presented.
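As background for readers less familiar with TSE, the decomposition that frameworks such as Groves and Lyberg's organize is, at its core, the standard mean-squared-error identity below; this is textbook material, not a result from the talk, and the proposed collection branch is not reproduced here.

```latex
% Standard MSE identity underlying TSE frameworks: each error source
% (coverage, sampling, nonresponse, measurement, processing) contributes
% to the bias term, the variance term, or both.
\[
  \operatorname{MSE}(\hat{\theta})
    = \mathbb{E}\bigl[(\hat{\theta} - \theta)^2\bigr]
    = \underbrace{\bigl(\mathbb{E}[\hat{\theta}] - \theta\bigr)^{2}}_{\text{squared bias}}
    + \underbrace{\operatorname{Var}(\hat{\theta})}_{\text{variance}}
\]
```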
- JENNY THOMPSON, U.S. Census Bureau
Using Response Rates and Other Quality Metrics to Assess the Effects of Mixed Collection Modes for Business Surveys: A Case Study [PDF]
-
In the last decade, offering multiple modes of data collection has become increasingly popular. However, the benefits of offering multiple modes should not come at the cost of data quality. Using historical data from two federal business surveys, we investigate data quality as a function of collection mode using several quality measures, including the unit response rate (the unweighted proportion of responding reporting units) and the quantity response rate (the weighted proportion of an estimate obtained from reported data). For these analyses, we measure data quality as the percentage of reported data retained after processing. The results suggest mode-based differences in data quality. We discuss the implications of these results for multi-mode data collection.
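To make the two response-rate metrics concrete, here is a minimal sketch in Python. The toy data frame and its column names (responded, weight, reported_value, imputed_value) are illustrative assumptions, not the surveys' actual processing variables.

```python
import pandas as pd

# Hypothetical reporting-unit data for one survey item; values are made up.
units = pd.DataFrame({
    "responded": [1, 1, 0, 1, 0, 1],            # 1 = reporting unit responded
    "weight": [1.0, 2.5, 1.8, 3.0, 1.2, 2.0],   # sampling weight
    "reported_value": [120.0, 80.0, 0.0, 45.0, 0.0, 60.0],
    "imputed_value": [0.0, 0.0, 55.0, 0.0, 30.0, 0.0],  # fills in nonresponse
})

# Unit response rate: unweighted proportion of responding reporting units.
urr = units["responded"].mean()

# Quantity response rate: weighted share of the estimated total that comes
# from reported (rather than imputed) data.
reported_total = (units["weight"] * units["reported_value"]).sum()
estimate_total = (units["weight"]
                  * (units["reported_value"] + units["imputed_value"])).sum()
qrr = reported_total / estimate_total

print(f"Unit response rate: {urr:.2%}")      # 66.67% in this toy example
print(f"Quantity response rate: {qrr:.2%}")  # ~80.99% in this toy example
```

Because the unit response rate is unweighted while the quantity response rate weights each unit's contribution to the estimate, a few large nonresponding units can pull the two metrics apart, which is why business surveys track both.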
- BRADY WEST, University of Michigan
New Methodologies for the Study and Decomposition of Interviewer Effects in Surveys [PDF]
-
Methodological studies of the effects that human interviewers can have on the quality of survey data have long been limited by two critical assumptions: that interviewers in a given survey are assigned random subsets of the larger overall sample being studied (also known as interpenetrated assignment), and that interviewer effects arise entirely from measurement difficulties rather than from selection effects due to differential sample assignments or nonresponse. In this presentation, we will introduce two new ideas for overcoming a lack of interpenetrated assignment when estimating interviewer effects, and we will discuss an approach that uses multilevel modeling and multiple imputation to decompose interviewer effects. Selected methods will be illustrated using data from the 2012 Behavioral Risk Factor Surveillance System (BRFSS).
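As a rough illustration of the multilevel-modeling component only (simulated data, not the BRFSS, and not the presenters' method for handling non-interpenetrated assignment), a random-intercept model can recover the interviewer variance share, often summarized as the interviewer intraclass correlation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated data: 50 interviewers with 40 respondents each. Interviewer
# effects enter as random intercepts with variance tau2, mimicking the
# classical interviewer-variance setup under interpenetrated assignment.
n_int, n_resp = 50, 40
tau2, sigma2 = 0.25, 1.0
interviewer = np.repeat(np.arange(n_int), n_resp)
u = rng.normal(0.0, np.sqrt(tau2), n_int)
y = 5.0 + u[interviewer] + rng.normal(0.0, np.sqrt(sigma2), n_int * n_resp)
df = pd.DataFrame({"y": y, "interviewer": interviewer})

# Random-intercept model: y_ij = mu + u_j + e_ij, with interviewers as
# grouping factor. rho = tau2 / (tau2 + sigma2) is the interviewer
# intraclass correlation.
fit = smf.mixedlm("y ~ 1", df, groups=df["interviewer"]).fit()
tau2_hat = fit.cov_re.iloc[0, 0]   # estimated interviewer variance
sigma2_hat = fit.scale             # estimated residual variance
rho = tau2_hat / (tau2_hat + sigma2_hat)

print(f"Estimated interviewer variance: {tau2_hat:.3f}")
print(f"Estimated intraclass correlation: {rho:.3f}")
```

When assignments are not interpenetrated, the random-intercept estimate above confounds measurement and selection sources, which is precisely the limitation the presentation's decomposition approach is meant to address.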