Board of Governors of the Federal Reserve System

Report on the Economic Well-Being of U.S. Households in 2013

Appendix 1: Technical Appendix on Survey Methodology

The SHED was designed by Board staff and administered by GfK, an online consumer research company, on behalf of the Board. To create a nationally representative probability-based sample, GfK's KnowledgePanel® selected respondents based on both random digit dialing and address-based sampling (ABS); since 2009, new respondents have been recruited using ABS. To recruit respondents, GfK sends mailings to a random selection of residential postal addresses. Out of every 100 mailings, approximately 14 households contact GfK and express interest in joining the panel, and three-quarters of those who make contact complete the process and become members of the panel.14 If the person contacted is interested in participating but does not have a computer or Internet access, GfK provides him or her with a laptop and Internet access. Because panel respondents are continuously lost to attrition and new members are added to replenish the panel, the recruitment rate and enrollment rate may vary over time.

For this survey, a total of 6,912 e-mail solicitations were sent to a random selection of KnowledgePanel respondents on September 17, 2013. Data collection was terminated on October 4, 2013, with 4,134 surveys fully completed, for a completion rate of 59.8 percent (see table 1 of the main text). To enhance the completion rate, GfK sent e-mail reminders to non-responders on days three and six of the field period, and respondents were offered a $10 incentive for completing the survey. The recruitment rate for this study, reported by GfK, was 13.7 percent and the profile rate was 66.0 percent, for a cumulative response rate of 5.4 percent.
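The cumulative response rate is the product of the three component rates reported above. A quick check (using the figures from the text; the rounding convention is an assumption) reproduces the reported 5.4 percent:

```python
# Component rates reported in the text, expressed as proportions.
recruitment_rate = 0.137  # households that join the panel per 100 invitations
profile_rate = 0.660      # recruited households completing the initial profile
completion_rate = 0.598   # sampled panel members completing this survey

# The cumulative response rate is the product of the stages.
cumulative = recruitment_rate * profile_rate * completion_rate
print(f"{cumulative:.1%}")  # → 5.4%
```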

As with any survey method, probability-based Internet panel surveys are subject to potential survey error, such as non-coverage and non-response due to the panel recruitment methods and due to panel attrition. In order to address these potential sources of error, a post-stratification adjustment is applied based on demographic distributions from the most recent (August 2013) data from the Current Population Survey (CPS). The variables used include gender, age, race/ethnicity, education, census region, residence in a metropolitan area, and access to the Internet. The Panel Demographic Post-Stratification weight is applied prior to a probability proportional to size (PPS) selection of a study sample from KnowledgePanel. This weight is designed for sample selection purposes.

Once the sample has been selected and fielded, and all the study data are collected and made final, a post-stratification process is used to adjust for any survey non-response, as well as any non-coverage or under- and over-sampling resulting from the study-specific sample design. Demographic and geographic distributions for the non-institutionalized, civilian population ages 18 and over from the most recent CPS are used as benchmarks in this adjustment.

Comparable distributions are calculated by using all completed cases from the field data. Using the base weight as the starting weight, this procedure adjusts the sample data back to the selected benchmark proportions. Through an iterative convergence process, the weighted sample data are optimally fitted to the marginal distributions.
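The iterative convergence procedure described above is commonly implemented as raking (iterative proportional fitting): weights are repeatedly scaled so that each weighted category share matches its benchmark proportion. A minimal sketch under that assumption follows; the variable names, category labels, and benchmark shares are illustrative only, not the Board's or GfK's actual code or targets:

```python
import numpy as np

def rake(base_weights, margins, targets, max_iter=200, tol=1e-10):
    """Raking (iterative proportional fitting) sketch.

    margins -- list of 1-D arrays of category labels, one per variable
    targets -- list of dicts mapping category label -> benchmark share
    Repeatedly scales weights so the weighted share of each category
    matches its benchmark, cycling over margins until factors converge.
    """
    w = np.asarray(base_weights, dtype=float).copy()
    for _ in range(max_iter):
        max_adj = 0.0
        for labels, shares in zip(margins, targets):
            total = w.sum()  # fixed during one pass over this margin
            for level, share in shares.items():
                mask = labels == level
                factor = share * total / w[mask].sum()
                w[mask] *= factor
                max_adj = max(max_adj, abs(factor - 1.0))
        if max_adj < tol:  # all marginal fits have converged
            break
    return w

# Illustrative example: six respondents, two margins (gender, region).
gender = np.array(["m", "m", "f", "f", "f", "m"])
region = np.array(["ne", "so", "ne", "so", "ne", "so"])
w = rake(np.ones(6), [gender, region],
         [{"m": 0.49, "f": 0.51}, {"ne": 0.60, "so": 0.40}])
```

After convergence, the weighted shares of men and of the "ne" region match the 0.49 and 0.60 benchmarks, respectively.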

After this final post-stratification adjustment, the distribution of the calculated weights is examined to identify and, if necessary, trim outliers at the extreme upper and lower tails of the weight distribution. The post-stratified and trimmed weights are then scaled to sum to the total number of eligible respondents.
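The trimming and rescaling steps might look like the following sketch. The percentile cutoffs are illustrative assumptions; the report does not specify the actual trimming rule:

```python
import numpy as np

def trim_and_scale(weights, lower_pct=1.0, upper_pct=99.0):
    """Cap weights at assumed lower/upper percentile cutoffs, then
    rescale so the weights sum to the number of eligible respondents."""
    lo, hi = np.percentile(weights, [lower_pct, upper_pct])
    trimmed = np.clip(weights, lo, hi)  # pull in extreme tails
    return trimmed * (len(trimmed) / trimmed.sum())

# Illustrative example: one extreme weight (9.0) gets pulled in,
# and the final weights sum to the sample size of 6.
final = trim_and_scale(np.array([0.2, 0.8, 1.0, 1.1, 1.3, 9.0]))
```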

There are several reasons that a probability-based Internet panel was selected as the method for this survey rather than an alternative survey method. The first reason is that these types of Internet surveys have been found to be representative of the population.15 The second reason is that the ABS Internet panel allows the same respondents to be re-interviewed in subsequent surveys with relative ease, as they remain in the panel for several years. The third reason is that Internet panel surveys have numerous existing data points on respondents from previously administered surveys, including detailed demographic and economic information. This allows for the inclusion of additional information on respondents without increasing respondent burden. Lastly, collecting data through an ABS Internet panel survey is cost effective, and can be done relatively quickly.


References

14. For further details on the KnowledgePanel sampling methodology and comparisons between KnowledgePanel and telephone surveys, see www.knowledgenetworks.com/accuracy/spring2010/disogra-spring10.html.

15. David S. Yeager, Jon A. Krosnick, LinChiat Chang, Harold S. Javitz, Matthew S. Levendusky, Alberto Simpser, and Rui Wang (2011), "Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples," Public Opinion Quarterly, vol. 75, no. 4, pp. 709-47.

Last update: August 15, 2014
