NCES Handbook of Survey Methods - National Household Education Surveys Program (NHES)
Every household survey is subject to some undercoverage bias, the result of some members of the target population being either deliberately or inadvertently missed in the survey. Telephone surveys, such as NHES administrations prior to the change to mail data collection, are subject to an additional source of bias because not all households in the United States have telephones.
Raking adjustments can reduce such coverage bias, though no adjustments have been found that adequately reduce the amount of bias across all measures that might be affected by coverage issues. Additionally, as the coverage bias increases, it becomes more difficult for raking to adjust adequately. The NHES sample contains all types of residential addresses in order to ensure the best possible coverage of households in the United States.
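The raking procedure referred to here is iterative proportional fitting: weights are rescaled one dimension at a time until the weighted totals match each set of control totals. The sketch below is a minimal illustration with invented dimensions, categories, and totals; it is not the actual NHES weighting program.

```python
# Minimal raking (iterative proportional fitting) sketch.
# All category names and totals are illustrative, not NHES values.

def rake(weights, categories, control_totals, max_iter=50, tol=1e-8):
    """Adjust weights so weighted totals match each dimension's controls.

    weights        : list of base weights, one per respondent
    categories     : list of dicts, one per respondent, mapping
                     dimension name -> category label
    control_totals : dict mapping dimension name -> {category: total}
    """
    w = list(weights)
    for _ in range(max_iter):
        max_shift = 0.0
        for dim, targets in control_totals.items():
            # Current weighted total in each category of this dimension.
            sums = {c: 0.0 for c in targets}
            for wi, cat in zip(w, categories):
                sums[cat[dim]] += wi
            # Scale each respondent's weight toward the control total.
            for i, cat in enumerate(categories):
                factor = targets[cat[dim]] / sums[cat[dim]]
                max_shift = max(max_shift, abs(factor - 1.0))
                w[i] *= factor
        if max_shift < tol:
            break
    return w

# Four respondents cross-classified by two invented dimensions.
cats = [{"region": "NE", "tenure": "own"},
        {"region": "NE", "tenure": "rent"},
        {"region": "S",  "tenure": "own"},
        {"region": "S",  "tenure": "rent"}]
controls = {"region": {"NE": 40.0, "S": 60.0},
            "tenure": {"own": 70.0, "rent": 30.0}}
raked = rake([25.0] * 4, cats, controls)
print([round(x, 2) for x in raked])  # -> [28.0, 12.0, 42.0, 18.0]
```

After convergence the raked weights reproduce both sets of margins at once: the two NE weights sum to 40 and the two "own" weights sum to 70.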
The mode of data collection has also changed from an interviewer-led telephone interview to a self-administered paper-and-pencil questionnaire mailed to respondents. Raking of the person-level weights was still required to align them with the person-level control totals and to adjust for differential coverage rates at the person level. Nonresponse error. Nonresponse in NHES surveys is handled in ways designed to minimize the impact on data quality: through weighting adjustments for unit nonresponse and through imputation for item nonresponse.
Unit nonresponse. Household members are identified for extended interviews in a two-stage process.
First, screener interviews are conducted to enumerate households and sample their members for the extended interviews. Failure to complete the first-stage screener means that it is not possible to enumerate and interview members of the household. The completion rate for the first stage is the percentage of households that complete the screener; the completion rate for the second stage is the percentage of sampled and eligible persons with completed interviews.
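Because the two stages are sequential, the overall unit response rate is the product of the stage-level completion rates. A minimal illustration with invented counts:

```python
# Hypothetical counts, used only to show how two-stage rates combine;
# these are not NHES figures.
households_sampled = 1000        # addresses that received a screener
screeners_completed = 650        # first-stage completions
persons_sampled_eligible = 400   # persons sampled for extended interviews
extended_completed = 300         # second-stage completions

screener_rate = screeners_completed / households_sampled       # 0.65
extended_rate = extended_completed / persons_sampled_eligible  # 0.75

# Overall unit response rate is the product of the stage rates.
overall_rate = screener_rate * extended_rate
print(f"{overall_rate:.2%}")  # -> 48.75%
```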
For NHES, sampling frame variables were used for the unit nonresponse bias analysis for the screener and topical surveys. The analysis showed evidence of bias in the distributions of sample characteristics for survey respondents when compared with the full eligible sample. However, this bias was greatly reduced by the nonresponse weighting adjustments.
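One common form of such a nonresponse weighting adjustment is the weighting-class method: within cells defined by frame variables, respondents' base weights are inflated so that they carry the weight of the cell's nonrespondents as well. The sketch below uses invented cells and weights and is not the NHES adjustment itself.

```python
# Weighting-class nonresponse adjustment sketch; cells and weights are
# made up for illustration.
from collections import defaultdict

# (cell, base_weight, responded) for each sampled case.
sample = [
    ("urban", 10.0, True), ("urban", 10.0, False), ("urban", 10.0, True),
    ("rural", 20.0, True), ("rural", 20.0, False),
]

total_w = defaultdict(float)   # total base weight in each cell
resp_w = defaultdict(float)    # respondent base weight in each cell
for cell, w, responded in sample:
    total_w[cell] += w
    if responded:
        resp_w[cell] += w

# Inflate each respondent's weight by the cell-level adjustment factor.
adjusted = [
    (cell, w * total_w[cell] / resp_w[cell])
    for cell, w, responded in sample if responded
]
print(adjusted)  # -> [('urban', 15.0), ('urban', 15.0), ('rural', 40.0)]
```

After adjustment, the respondent weights in each cell sum to the cell's full base-weight total, so no weight is lost to nonresponse.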
For most of the items collected in the NHES surveys, the item response rate is high. Measurement error. In order to assess item reliability and inform future NHES surveys, many administrations also included a subsample of respondents for a reinterview. In a reinterview, the respondent is asked to respond to the same items on different occasions.
In order to limit the response burden of the reinterview program, only selected items are included in the reinterview. The item selection criteria focus on the inclusion of key survey statistics. The results of the reinterviews are used to modify subsequent NHES surveys and to give users some guidance about the reliability of responses for specific items in the data files. However, the reinterview procedure does not account for all measurement errors in the interviewing process, such as systematic errors that would be made in both the original interview and the reinterview.
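A standard summary of reinterview consistency for a categorical item is the gross difference rate: the share of cases whose answer changed between the original interview and the reinterview. A minimal sketch with invented responses:

```python
# Gross difference rate (GDR) for one categorical item.
# The paired responses below are illustrative, not NHES data.
original    = ["yes", "no", "yes", "yes", "no", "yes"]
reinterview = ["yes", "no", "no",  "yes", "no", "yes"]

# Count cases whose answer changed between the two interviews.
changed = sum(a != b for a, b in zip(original, reinterview))
gdr = changed / len(original)
print(f"GDR = {gdr:.1%}")  # -> GDR = 16.7%
```

A low GDR suggests a reliable item; note that, as the text cautions, errors repeated identically in both interviews leave the GDR unchanged and so go undetected.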
NHES included a bias analysis to evaluate whether nonresponse at the unit and item levels affected the estimates. For a given estimate, the bias is the difference between the estimate based on respondents only and the estimate that would have been obtained if all sampled households had responded. Since NHES is based on a sample, the bias is defined as the expected or average value of this difference over all possible samples. Unit nonresponse bias, or the bias due to the failure of some persons or households in the sample to respond to the survey, can be substantial if either the difference between respondents and nonrespondents or the unit nonresponse rate is relatively large. Several analyses were conducted to assess this bias.
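The two drivers named here combine in the usual first-order approximation of unit nonresponse bias for a mean: the nonresponse rate times the difference between respondent and nonrespondent means. The values below are invented for illustration.

```python
# First-order approximation of unit nonresponse bias for a mean:
#   bias ~= (nonresponse rate) * (respondent mean - nonrespondent mean)
# All values are hypothetical.
response_rate = 0.60
mean_respondents = 0.60      # e.g., proportion with some characteristic
mean_nonrespondents = 0.45

bias = (1 - response_rate) * (mean_respondents - mean_nonrespondents)
print(round(bias, 3))  # -> 0.06
```

The approximation makes the text's point concrete: the bias vanishes only if nonrespondents resemble respondents or if nearly everyone responds.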
At the screener phase, significant differences were observed between respondents and the eligible sample in the distributions of characteristics available in or linked to the sample frame. Similarly, for each topical survey, significant differences were observed between respondents and the eligible sample in the distributions of characteristics available in or linked to the sample frame or collected on the screener.
However, this observed bias was greatly reduced by the nonresponse weighting adjustments. In another set of analyses, base-weighted key survey estimates for each topical survey were compared between (1) early and late screener respondents, to assess the potential for bias resulting from screener-level nonresponse, and (2) early and late topical respondents, to assess the potential for bias resulting from topical-level nonresponse. To the extent that late respondents resemble nonrespondents in the characteristics measured by the NHES survey instruments, differences between early and late respondents suggest a potential for unit nonresponse bias in the estimates.
In another set of bias analyses, key survey estimates computed with the base weights were compared with the same estimates computed with the nonresponse-adjusted weights. Only a small number of measurable differences were observed, suggesting that few of these variables were powerful predictors of unit response.