Appendix A: Technical Appendix on Survey Methodology

The Survey of Household Economics and Decisionmaking (SHED) was designed by Board staff and administered by GfK, an online consumer research company, on behalf of the Board. To create a nationally representative probability-based sample, GfK's KnowledgePanel recruited respondents using both random digit dialing and address-based sampling (ABS); since 2009, new respondents have been recruited using ABS only. To recruit respondents, GfK sends mailings to a random selection of residential postal addresses. Respondents who reply to the mailing and complete a profile survey are then included in the GfK panel.49 If a person contacted is interested in participating but does not have a computer or Internet access, GfK provides him or her with a laptop and Internet access. Panel members are continuously lost to attrition and new members are added to replenish the panel, so the recruitment rate and enrollment rate may vary over time.

There are several reasons that a probability-based Internet panel was selected for this survey rather than an alternative survey method. First, these types of Internet surveys have been found to be representative of the population.50 Second, the ABS Internet panel allows the same respondents to be re-interviewed in subsequent surveys with relative ease, as they remain in the panel for several years. Third, Internet panel surveys have numerous existing data points on respondents from previously administered surveys, including detailed demographic and economic information, which allows additional information on respondents to be included without increasing respondent burden. Lastly, collecting data through an ABS Internet panel survey is cost-effective and can be done relatively quickly.

A total of 11,882 KnowledgePanel members received e-mail invitations to complete this survey, including an oversample of respondents with a household income less than $40,000. The contacted sample included a random selection of 2,857 KnowledgePanel respondents who participated in the Board's 2015 SHED (excluding those who were in the 2015 lower-income oversample) and an additional 5,608 randomly selected KnowledgePanel respondents. It also included 3,417 randomly selected KnowledgePanel respondents whose household income was less than $40,000. (See table 1 in main text.) The lower-income oversample was included in the study to ensure sufficient coverage of this population for key questions of interest.

From these three components of the sample, a total of 6,643 people responded to the e-mail request and completed the survey, yielding a final-stage completion rate of 55.9 percent. The recruitment rate for the primary sample, reported by GfK, was 12.2 percent, and the profile rate was 64.2 percent, for a cumulative response rate of 4.4 percent.
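For illustration, the short sketch below (in Python) reproduces this arithmetic from the counts reported above, assuming that the cumulative response rate is the product of the recruitment, profile, and final-stage completion rates; all figures are those given in this appendix.

```python
# Illustrative arithmetic only: reproduces the rates reported in the text
# from the sample counts, assuming the cumulative response rate is the
# product of the recruitment, profile, and final-stage completion rates.

invited = 2857 + 5608 + 3417        # three sample components = 11,882 invitations
completed = 6643                    # completed interviews

completion_rate = completed / invited          # ~0.559 (55.9 percent)
recruitment_rate = 0.122                       # reported by GfK
profile_rate = 0.642                           # reported by GfK

cumulative_response_rate = recruitment_rate * profile_rate * completion_rate

print(f"Invited: {invited}")
print(f"Final-stage completion rate: {completion_rate:.1%}")
print(f"Cumulative response rate: {cumulative_response_rate:.1%}")   # ~4.4 percent
```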

To enhance the completion rate, GfK sent e-mail reminders to non-responders over the course of the field period.51 GfK maintains a modest ongoing incentive program to encourage KnowledgePanel members to participate; incentives take the form of raffles and lotteries with cash and other prizes. KnowledgePanel members were offered an additional $5 for completing this survey on top of the standard incentives offered by GfK. Re-interviewed respondents who had participated in the 2013 or 2014 SHED were provided with a further $5 incentive, for a total of $10.52 The median time to complete the survey was approximately 23 minutes.

Significant resources and infrastructure are devoted to the recruitment process for the KnowledgePanel so that the resulting panel properly represents the adult population of the United States. Consequently, the raw distribution of the KnowledgePanel mirrors that of U.S. adults fairly closely, apart from occasional disparities that may emerge for certain subgroups because of differential attrition among recruited panel members.

The selection methodology for general population samples from the KnowledgePanel ensures that the resulting samples behave as equal probability of selection method (EPSEM) samples. This methodology starts by weighting the entire KnowledgePanel to benchmarks secured from the latest March supplement of the Current Population Survey along several dimensions, so that the weighted distribution of the KnowledgePanel matches that of U.S. adults. Typically, the geo-demographic dimensions used for weighting the entire KnowledgePanel include gender, age, race, ethnicity, education, census region, household income, home ownership status, and metropolitan area status.
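GfK's exact weighting algorithm is not described here. As a rough illustration of the idea, the sketch below aligns a toy panel with a benchmark distribution along a single dimension by setting each member's base weight to the ratio of the benchmark share to the panel share for that member's cell; the categories and shares are hypothetical, and a real application would balance several dimensions at once.

```python
# Minimal sketch of benchmark weighting along one dimension (census region).
# The shares below are hypothetical; in practice the full panel is weighted
# to CPS benchmarks across several geo-demographic dimensions at once.

panel = ["Northeast", "South", "South", "West", "Midwest", "South", "West", "Northeast"]

# Hypothetical benchmark shares for U.S. adults by census region.
benchmark_share = {"Northeast": 0.18, "Midwest": 0.21, "South": 0.38, "West": 0.23}

# Observed panel shares.
panel_share = {cat: panel.count(cat) / len(panel) for cat in benchmark_share}

# Base weight for each panel member: benchmark share / panel share of the member's cell.
weights = [benchmark_share[cat] / panel_share[cat] for cat in panel]

for cat in benchmark_share:
    weighted = sum(w for w, c in zip(weights, panel) if c == cat) / sum(weights)
    print(f"{cat}: weighted share = {weighted:.2f} (benchmark {benchmark_share[cat]:.2f})")
```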

Using the above weights as the measure of size (MOS) for each panel member, a probability proportional to size (PPS) procedure is then used to select study-specific samples. Because this survey includes a lower-income oversample, departures from an EPSEM design caused by the oversample are corrected by adjusting the corresponding design weights, with the Current Population Survey benchmarks serving as reference points.
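The specific PPS procedure is not described in this appendix. The sketch below shows one standard variant, systematic PPS selection, with each panel member's weight serving as the measure of size; the weights and sample size are illustrative only.

```python
# Sketch of systematic probability-proportional-to-size (PPS) selection,
# using each panel member's weight as the measure of size (MOS).
# This is one standard PPS variant; the appendix does not specify which is used.
import random

def pps_systematic(mos, n_draws, seed=None):
    """Select n_draws indices with probability proportional to mos via systematic sampling."""
    rng = random.Random(seed)
    total = sum(mos)
    step = total / n_draws                     # sampling interval
    start = rng.uniform(0, step)               # random start within the first interval
    targets = [start + k * step for k in range(n_draws)]

    selected, cumulative, idx = [], 0.0, 0
    for t in targets:
        # advance until the cumulative MOS interval covers the target point
        while cumulative + mos[idx] < t:
            cumulative += mos[idx]
            idx += 1
        selected.append(idx)
    return selected

# Example: 10 panel members with unequal weights; draw a sample of 3.
weights = [0.7, 1.3, 1.0, 0.5, 2.0, 0.9, 1.1, 0.6, 1.4, 0.5]
print(pps_systematic(weights, n_draws=3, seed=1))
```

In this scheme a unit whose measure of size exceeds the sampling interval would be selected with certainty; none does in the toy example above.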

Once the sample has been selected and fielded, and all the study data are collected and finalized, a post-stratification process is used to adjust for survey non-response as well as any non-coverage or under- and over-sampling resulting from the study-specific sample design. The following variables were used to adjust the weights for this study: gender, age, race, ethnicity, education, census region, residence in a metropolitan area, and household income. Demographic and geographic distributions for the noninstitutionalized civilian population ages 18 and over from the March 2014 Current Population Survey were used as benchmarks in this adjustment.
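The report does not name the post-stratification algorithm. One common way to match several margins at once is raking (iterative proportional fitting), sketched below with hypothetical categories and benchmark shares; the actual adjustment uses the CPS benchmarks and variables listed above.

```python
# Illustrative raking (iterative proportional fitting) over two dimensions.
# This is one common post-stratification approach, not necessarily the one
# used for this study. Categories and benchmark shares are hypothetical.

respondents = [
    {"gender": "female", "region": "South"},
    {"gender": "male",   "region": "West"},
    {"gender": "female", "region": "Northeast"},
    {"gender": "male",   "region": "South"},
    {"gender": "female", "region": "Midwest"},
    {"gender": "male",   "region": "South"},
]
benchmarks = {
    "gender": {"female": 0.51, "male": 0.49},
    "region": {"Northeast": 0.18, "Midwest": 0.21, "South": 0.38, "West": 0.23},
}

weights = [1.0] * len(respondents)   # start from the design weights (here: uniform)

for _ in range(50):                  # cycle over the margins until weights stabilize
    for dim, targets in benchmarks.items():
        total = sum(weights)
        for category, share in targets.items():
            idx = [i for i, r in enumerate(respondents) if r[dim] == category]
            current = sum(weights[i] for i in idx) / total
            if current > 0:
                factor = share / current
                for i in idx:
                    weights[i] *= factor

total = sum(weights)
for dim, targets in benchmarks.items():
    for category, share in targets.items():
        achieved = sum(w for w, r in zip(weights, respondents) if r[dim] == category) / total
        print(f"{dim}={category}: weighted share {achieved:.2f} (target {share:.2f})")
```

Each pass rescales the weights so that one margin matches its benchmark; cycling over the margins until the adjustments stabilize yields weights that approximately satisfy all of them.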

Although weighting allows the sample to match the U.S. population on observable characteristics, as with all survey methods, it remains possible that non-coverage or non-response produces differences between the sample and the U.S. population that the weights do not correct.

 

References

 

49. For further details on the KnowledgePanel sampling methodology and comparisons between KnowledgePanel and telephone surveys, see www.knowledgenetworks.com/accuracy/spring2010/disogra-spring10.html

50. David S. Yeager, Jon A. Krosnick, LinChiat Chang, Harold S. Javitz, Matthew S. Levendusky, Alberto Simpser, and Rui Wang, "Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples," Public Opinion Quarterly 75, no. 4 (2011): 709-47.

51. E-mail reminders were sent on days 3, 11, 14, and 18 of the field period.

52. The higher incentive for these re-interviewed respondents was provided to maintain the higher compensation rate that was initially offered to survey respondents in those years.
