Board of Governors of the Federal Reserve System

Consumers and Mobile Financial Services
March 2016

Appendix A: Technical Appendix on Survey Methodology

To create a nationally representative, probability-based sample, GfK's KnowledgePanel® has selected respondents using both random digit dialing and address-based sampling (ABS); since 2009, all new respondents have been recruited using ABS. To recruit respondents, GfK sends mailings to a random selection of residential postal addresses. Of every 100 mailings, approximately 14 households respond to GfK and express an interest in joining the panel, and around 64 percent of those complete the process and become members of the panel.21 If the person contacted is interested in participating but indicates he or she does not have a computer or Internet access, GfK provides him or her with a web-enabled device and basic Internet service. Because panel respondents are continuously lost to attrition and new members are added to replenish the panel, the recruitment and enrollment rates may vary over time.

For the 2015 Mobile Survey, a total of 5,461 KnowledgePanel® members received e-mail invitations to complete the survey, covering both the primary sample and an oversample of non-Hispanic black and Hispanic respondents. The primary sample included 1,364 of the 1,489 KnowledgePanel® respondents who participated in both the 2013 and 2014 Mobile Surveys, remained panel members, and could be assigned to the 2015 survey (see table 1 in the main text). Of this re-interviewed group, 1,071 people (excluding breakoffs) responded to the e-mail invitation and completed the survey, yielding a final-stage completion rate of 78.5 percent. The recruitment rate for the re-interviewed respondents, reported by GfK, was 15.9 percent, and the profile rate was 63.4 percent, for a cumulative response rate of 7.9 percent for this sample. The primary sample also included an additional 2,324 randomly selected KnowledgePanel® respondents who had not participated in the previous Mobile Survey. Of these fresh cases, 1,458 people (excluding breakoffs) responded to the e-mail invitation and completed the survey, yielding a final-stage completion rate of 62.7 percent. The recruitment rate for the fresh sample, reported by GfK, was 12.8 percent, and the profile rate was 64.3 percent, for a cumulative response rate of 5.2 percent. Of the 2,529 primary sample respondents who completed the survey, 9 were excluded under the study qualification criterion and 10 were excluded over data quality concerns.22 Answers from the remaining 2,510 respondents were used to compute the statistics presented in this report, including the tables in appendix C.
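Each cumulative response rate above is consistent with multiplying the three stage-level rates (recruitment rate × profile rate × final-stage completion rate). The short Python sketch below, using illustrative variable names rather than GfK's own terminology, reproduces the arithmetic for the two primary samples; the same calculation applies to the oversample described next.

    # Cumulative response rate as the product of the three stage-level
    # rates reported by GfK (a worked check of the figures in the text).
    def cumulative_response_rate(recruitment, profile, completion):
        """Return the product of the stage-level rates, as a fraction."""
        return recruitment * profile * completion

    # (recruitment rate, profile rate, final-stage completion rate)
    samples = {
        "re-interviewed": (0.159, 0.634, 0.785),  # -> 7.9 percent
        "fresh":          (0.128, 0.643, 0.627),  # -> 5.2 percent
    }

    for name, (rec, prof, comp) in samples.items():
        print(f"{name}: {cumulative_response_rate(rec, prof, comp):.1%}")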

The 2015 survey also included an oversample of non-Hispanic black and Hispanic respondents, randomly selected from KnowledgePanel® members in these racial and ethnic groups who had not participated as primary sample respondents in the previous Mobile Survey. Of the 1,773 KnowledgePanel® members who received invitations as part of this oversample, 773 people (excluding breakoffs) responded to the e-mail invitation and completed the survey, yielding a final-stage completion rate of 43.6 percent for the oversample. The recruitment rate for the non-Hispanic black and Hispanic oversample, reported by GfK, was 11.6 percent, and the profile rate was 64.8 percent, for a cumulative response rate of 3.3 percent. Of the 773 oversample respondents who completed the survey, 2 were excluded under the study qualification criterion and 2 were excluded over data quality concerns, yielding 769 responses for analysis. For comparability with the sample design from prior years of the Mobile Survey, answers from these respondents are not included in the statistics in this report.

After pretesting, data collection for the survey began on November 4, 2015. The survey was closed to responses from the re-interviewed sample on November 9, 2015, and from the fresh sample and oversample on November 23, 2015. To improve the completion rate, GfK sent e-mail reminders to non-responders in all samples on day 3 of the field period. Two additional e-mail reminders were sent to non-responders from the oversample on days 8 and 12 of the field period, and three additional reminders were sent to non-responders from the fresh general population sample on days 8, 12, and 16. GfK maintains an ongoing, modest incentive program to encourage KnowledgePanel® members to participate; incentives take the form of raffles and lotteries with cash and other prizes.

Significant resources and infrastructure are devoted to the recruitment process for KnowledgePanel® so that the resulting panel properly represents the adult population of the United States. Consequently, the raw distribution of KnowledgePanel® members mirrors that of U.S. adults fairly closely, apart from occasional disparities that may emerge for certain subgroups because of differential attrition rates among recruited panel members.

The selection methodology for general population samples from KnowledgePanel® ensures that the resulting samples behave as equal probability of selection method (EPSEM) samples. This methodology starts by weighting the entire panel along several dimensions to benchmarks obtained from the latest March supplement of the Current Population Survey (CPS), so that the weighted distribution of KnowledgePanel® matches that of U.S. adults. Typically, the geodemographic dimensions used for weighting the entire panel are gender, age, race/ethnicity, education, Census region, household income, homeownership status, metropolitan area status, and Internet access.

In the next step, these weights serve as the measure of size (MOS) for each panel member, and a probability proportional to size (PPS) procedure is used to select study-specific samples. For studies that include oversampling of particular subgroups, the departure from an EPSEM design caused by the oversample is corrected by adjusting the corresponding design weights, with the CPS benchmarks serving as reference points.
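The report does not specify how GfK implements the PPS draw; the following Python sketch illustrates the idea under the assumption of a simple weighted draw without replacement, where each member's selection probability is proportional to his or her panel weight. The weights shown are invented for illustration.

    # Hypothetical PPS selection sketch: selection probability is
    # proportional to each panel member's weight (the measure of size).
    import numpy as np

    rng = np.random.default_rng(seed=0)

    panel_weights = np.array([0.6, 1.2, 0.9, 1.8, 1.1, 0.4])  # illustrative MOS
    probabilities = panel_weights / panel_weights.sum()

    # Draw a study-specific sample of three members without replacement.
    selected = rng.choice(len(panel_weights), size=3,
                          replace=False, p=probabilities)
    print(selected)  # indices of the panel members assigned to the study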

Once the sample has been selected and fielded, and all the study data have been collected and finalized, a post-stratification process is used to adjust for survey non-response as well as any non-coverage or under- or oversampling resulting from the study-specific sample design. The following variables were used to adjust the weights for this study: gender, age, race/ethnicity, Census region, metropolitan area status, education, and access to the Internet. Demographic and geographic distributions for the general population of adults ages 18 and over from the March 2015 CPS, and from the July 2013 CPS for Internet access, were used as benchmarks in this adjustment.
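The report does not name the post-stratification algorithm; raking (iterative proportional fitting) is a common way to match weighted margins to benchmarks across several dimensions at once. The Python sketch below illustrates it with two invented dimensions and invented benchmark shares; it is a minimal illustration under that assumption, not GfK's implementation.

    # Hypothetical raking (iterative proportional fitting) sketch:
    # repeatedly rescale weights so the weighted margin on each
    # dimension matches its benchmark share, until margins stabilize.
    import numpy as np

    # Illustrative respondents: a binary gender cell and a binary age
    # cell per respondent, with design weights initialized to one.
    cells = {"gender": np.array([0, 0, 1, 1, 1, 0]),
             "age":    np.array([0, 1, 0, 1, 1, 0])}
    targets = {"gender": np.array([0.5, 0.5]),   # invented benchmark shares
               "age":    np.array([0.4, 0.6])}   # (e.g., from the CPS)
    w = np.ones(6)

    for _ in range(50):  # a fixed number of passes suffices here
        for dim, cell in cells.items():
            total = w.sum()
            for k, share in enumerate(targets[dim]):
                mask = cell == k
                w[mask] *= share * total / w[mask].sum()

    print(w / w.sum())  # final post-stratification weights, normalized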

Although weighting allows the sample to match the U.S. population on observable characteristics, as with all survey methods it remains possible that non-coverage or non-response produces differences between the sample population and the U.S. population that the weights do not correct.

A probability-based Internet panel was selected as the method for this survey, rather than an alternative survey method, for several reasons. First, these types of Internet surveys have been found to be representative of the population.23 Second, the ABS Internet panel allows the same respondents to be re-interviewed in subsequent surveys with relative ease, as they remain in the panel for several years. Third, Internet panel surveys carry numerous existing data points on respondents from previously administered surveys, including detailed demographic and economic information, which allows additional information to be included without increasing respondent burden. Finally, collecting data through an ABS Internet panel survey is cost-effective and can be done relatively quickly.

Questions may arise about the extent to which results from an online survey of technology use can be interpreted as representative of the technology use of the U.S. population. As with any survey method, Internet panels can be subject to biases resulting from undercoverage or non-response and, in this case, potential underrepresentation of adults who are physically or cognitively impaired or who prefer not to use some forms of technology.24 Not everyone in the United States has access to the Internet, and there are demographic (income, education, age) and geographic (urban and rural) differences between those who have access and those who do not. These sources of survey error are partially corrected by GfK providing Internet access to respondents who lack it, so that the portion of the population without Internet access is included in KnowledgePanel®. They are further corrected by post-stratification weights, which ensure that the Internet usage and key demographics of the weighted sample match those of the U.S. population.

While these steps have been taken to make the survey results generalizable to the adult U.S. population, some caveats apply to interpreting the results, particularly for subpopulations. The survey was conducted in English, and thus may not reflect the attitudes and behaviors of those in the U.S. population whose dominant language is not English. In addition, participating in this type of survey may require a certain level of skill and interest in responding online, which could limit coverage of some groups, particularly those less likely to use computers or the Internet. As a result, to the extent that these differences cannot be incorporated into the sample weights, technology usage among survey respondents may differ along key dimensions from that of the overall U.S. population.

References

21. For further details on the KnowledgePanel® sampling methodology and comparisons between KnowledgePanel® and telephone surveys, see www.knowledgenetworks.com/accuracy/spring2010/disogra-spring10.html.

22. Respondents who refused to answer whether they own or have regular access to a mobile phone (Q19) were not qualified for the study. Respondents who completed the survey in less than one-fourth of the median time for their respective survey paths, or who refused to answer more than one-half of the substantive survey questions, were excluded due to data quality concerns.

23. David S. Yeager, Jon A. Krosnick, LinChiat Chang, Harold S. Javitz, Matthew S. Levendusky, Alberto Simpser, and Rui Wang, "Comparing the Accuracy of RDD Telephone Surveys and Internet Surveys Conducted with Probability and Non-Probability Samples," Public Opinion Quarterly 75, no. 4 (2011): 709-47.

24. For a discussion of differences in measures from a web-only survey and a web-plus-mail survey design, including a comparison of technology and Internet measures, see Pew Research Center, September 2015, "Coverage Error in Internet Surveys," available at www.pewresearch.org/files/2015/09/2015-09-22_coverage-error-in-internet-surveys.pdf.


Last update: February 14, 2017