Finance and Economics Discussion Series: 2008-26

Starting Small and Ending Big - The Effect of Monetary Incentives on Response Rates in the 2003 Survey of Small Business Finances: An Observational Experiment


Traci L. Mach, Lieu N. Hazelwood, and John D. Wolken*



Keywords: Incentives, small business surveys, response rates, Survey of Small Business Finances, unit response

Abstract:

In 2003, the Survey of Small Business Finances (SSBF), conducted by the Federal Reserve Board, implemented the use of incentives to increase response rates. This study examines the effects of several characteristics of that implementation - such as level of effort, time in queue, and consecutively increasing incentive amounts - on unit response. Our estimates suggest that as the number of days between the initial screener and the main interview increases, the probability of completion decreases. Similarly, as the number of days between consecutive incentive offers increases, the probability of completion decreases. Additional effort, as measured by additional calls, increases the probability of completion. Finally, each consecutive offer after the initial offer decreases the probability of completion.

*Board of Governors of the Federal Reserve System, 20th and Constitution Avenue N.W., Washington, D.C. 20551. The views expressed in this paper are those of the authors. They do not necessarily reflect the opinions of the Federal Reserve Board or its staff. All remaining errors are the responsibility of the authors.

  1. Introduction
Obtaining cooperation and participation from potential survey respondents has become increasingly difficult in recent years (e.g., Dennis, 2003). Survey participants are bombarded with questionnaires, telephone surveys, telemarketers, and Internet surveys. Additionally, concerns over privacy and technological developments, such as call screening, have contributed to the difficulty of conducting scientific surveys. As a result, information about the efficacy of techniques to improve response, such as the use of incentives, is of growing import.

Considerable research has been conducted on the use of incentives in consumer and household surveys (see, e.g., Singer and Kulka, 2002). Generally, incentives - particularly prepaid monetary incentives - are likely to increase response rates, and this finding differs little by mode of administration: mail, telephone, or in-person. However, much less is known about the use of incentives in business surveys.

In 2003, the Survey of Small Business Finances (SSBF), conducted by the Federal Reserve Board, implemented the use of incentives to increase response rates. This study examines the effects of several characteristics of that implementation - such as level of effort, time in queue, and consecutively increasing incentive amounts - on unit response. Specifically, during the screening phase of the 2003 survey, all eligible firms were offered $50 for completing the main interview. As interviewing progressed, interim response rates were lower than anticipated, and an effort was made to increase response rates by raising the incentives offered to those respondents who had been contacted but had not yet completed the interview.

We model the relationship between unit response and the incentive-offer function, controlling for a variety of firm and calendar differences. Our estimates suggest that as the number of days between the initial screener and the main interview increases, the probability of completion decreases. Similarly, as the number of days between consecutive incentive offers increases, the probability of completion decreases. Additional effort, as measured by additional calls, increases the probability of completion. Finally, each consecutive offer after the initial offer decreases the probability of completion.

In the next section, some background information about the Survey of Small Business Finances is provided, followed by a short review of the literature on the effect of incentives on response rates. Section 4 describes the 2003 observational experiment and the data construction, followed by a description of the estimation model and results. The paper concludes with a summary and recommendations.

  2. Background
Since 1987, the Survey of Small Business Finances (SSBF) has been conducted approximately every five years. The survey is voluntary and the data are collected using a computer-assisted telephone interview, with an average interview length of about 45 minutes.1 The data collected are financial in nature and sometimes require respondents to consult records. Despite the length and the financial nature of the questions, the 1987 survey achieved a unit response rate of more than 62 percent. In subsequent SSBFs, unit response rates declined: the rate was about 52 percent in 1993 and about 33 percent in 1998. In each of these surveys, much effort was expended trying to improve response rates, including fine-tuning the questionnaire, hiring interviewers with greater expertise in financial concepts and more experience with business respondents, revising mail-out materials to make a more compelling case for participation, developing worksheets to aid respondents during the telephone interview, and adjusting interview schedules to permit multiple sessions and accommodate business owners' busy schedules. But the use of incentives was not attempted until the end of the 1998 survey's field period, when a nominal pre-interview monetary incentive of $20 was offered to some respondents.2

One of the planning goals of the 2003 SSBF was to improve response rates. The major innovation was to offer all respondents a monetary incentive upon completion of the main interview.3 The initial value of the incentive was $50. As the survey progressed, it became apparent that both precision and sample size goals were unlikely to be met at the realized unit response rate. In order to meet the original survey goals, it was decided to increase the incentive, first to $100, then to $200, and towards the end of the field period to as much as $500.4 At the conclusion of the survey, incentives and other efforts to increase response rates resulted in an overall response rate of just over 32 percent, about the same level as achieved in 1998. It is important to note that this was an observational experiment: the incentive plan was not designed to evaluate the effectiveness of incentives or of different incentive levels. The 2003 SSBF incentive program did not contain a zero-incentive group, and the incentive groups were not mutually exclusive.

  3. Literature Review
There are, of course, numerous reasons why individuals decide whether or not to participate as a survey respondent. Incentives (monetary or otherwise) are offered by surveyors to compensate for the relative absence of factors that might otherwise stimulate cooperation, or to offset factors that might otherwise reduce cooperation.

Incentives are increasingly used in surveys today. Much of the literature on the effects of incentives concerns mail surveys of consumers or households (Singer and Kulka, 2002). More recently, the literature has examined telephone and in-person surveys of consumers and households. Much less is known about the effect of incentives in business or establishment surveys.5 In general, incentives raise overall response rates. Kulka (1995) emphasized that they work as effectively for hard-to-reach respondents (examples include the homeless, young males, professional elites, and minorities); the business owner population could be considered hard to reach in a manner similar to professional elites. Church (1993), analyzing 38 mail surveys, concludes that prepaid incentives yield higher response rates than promised incentives, and that promised incentives - those contingent on completing the survey - do not significantly increase response rates. Also, prepaid monetary incentives yield higher response rates than non-monetary incentives, and response rates increase with increasing amounts of money. At some point, however, ever-increasing incentives fail to be effective in raising response rates (James and Bolstein, 1992).

In a study of 39 telephone and face-to-face surveys, Singer et al. (1999) find similar results. Specifically, incentives improve response rates in both telephone and face-to-face interviews, and the effects are similar across modes of survey administration. Response rates tend to increase with the size of the incentive. Prepaid incentives tend to generate higher response rates than promised incentives, and money is more effective than gifts. Incentives have greater effects on response rates in high-burden surveys than in low-burden surveys, although both benefit from incentives. And incentives have a greater effect in surveys where the response rate without an incentive is low.

Much less is known about the use of monetary incentives for businesses or establishments, although what is known is generally consistent with the results for consumers and households. Jobber and O'Reilly (1998) found that monetary incentives increased response rates in industrial mail surveys; small up-front payments were more effective than a large payment promised upon completion, and non-monetary incentives had a beneficial but smaller effect than monetary incentives. White and Luo (2005), in mail surveys of convenience stores, and Gunn and Rhodes (1981), in telephone interviews with physicians, also found that prepaid monetary incentives increased response rates. Bitler and Wolken (2001) found that among respondents completing the main interview, those receiving a $20 prepaid incentive in the 1998 SSBF required fewer calls to complete the interview than those not receiving the incentive.

  4. Data
As originally designed, respondents deemed eligible for the main study after completing the screening interview were told that they would be given the choice of $50 cash or the Dun and Bradstreet Small Business Solutions package (retailing for $199) for completing the main interview.6 The sample was intended to be fielded in three batches in order to minimize the time in queue for each case. In addition, after all cases in each batch had been attempted a small number of times, the batch was to be subsampled in order to concentrate efforts on a smaller number of cases. As the interview period progressed, response rates were lower than had been anticipated. The initial offer for cases screened late in the field period was increased to $200. In addition, a fourth batch was fielded, which was not subject to subsampling.

Incentives

In an attempt to increase cooperation and boost flagging response rates, the monetary incentive offered to respondents increased over time. For the most part, the increase in incentives adhered to the following pattern: (1) Upon completion of the screener, most respondents were offered $50 (or the D&B package); (2) after several attempts to complete all cases in the batch were made, batches 1, 2, and 3 were subsampled and cases that were "subsampled in" were offered $100; (3) batch 4 cases that were sent a refusal conversion letter were offered $200; (4) all cases from all batches still being fielded as of January 6, 2005 were offered $200; and (5) all batch 1 - 3 cases still being fielded as of January 19, 2005 were offered $500.

As a baseline, the "rules" outlined above were applied to all cases, using information from the sample control file to determine each case's subsampling status. However, because of the different times at which screening interviews were completed and special interventions by interviewers and supervisors, not all cases were offered all of the incentives, and the timing of the differing offers varied. To determine whether a given firm was offered a particular incentive, and the date on which that incentive was offered, data were taken from several different sources. NORC provided us with a list of the highest offer made to each firm and a list of cases that received refusal conversion letters. The call history file provides a complete list of all calls made to each case as well as the disposition of each call. The call text file provides the call notes written by interviewers to document relevant information obtained during the call. Note that not all calls in the call history file produced a companion call note, and not all notes in the text file had a companion call. For example, a call to a busy line would produce a call in the call history file with a disposition of "busy" but would generally not generate a call note. On the other hand, supervisory review of a case might produce a note in the call text file instructing the next interviewer to offer the respondent an increased incentive, which would not produce an entry in the call history file.

Taken together, the information from these sources allowed us to construct a number of variables for each firm with which a main interview was attempted. First, a series of dummy variables was constructed to indicate whether the firm had been offered a particular monetary incentive. Second, a companion dummy was created for all firms that had been offered a particular incentive, indicating whether the firm completed the interview while that offer was in play. Finally, the date that the offer was made to the firm was determined. From this information and the date that the screening interview was completed, we were able to construct the number of calls placed and the number of days elapsed while a given offer was in play. We could also determine the number of calls made and days that had elapsed before the (current) offer was made.
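To make the construction concrete, here is a minimal pandas sketch. Everything about it is hypothetical: the frames and column names (an offers frame with firm_id, offer_amount, screener_date, offer_date, window_end, and completed; a calls frame with firm_id and call_date) stand in for the actual NORC file layouts, which differ.

```python
# A hypothetical sketch of the variable construction described above; the
# frames and column names stand in for the actual NORC file layouts.
import pandas as pd

def build_offer_vars(offers: pd.DataFrame, calls: pd.DataFrame) -> pd.DataFrame:
    """Return one row per (firm, offer) with timing and effort measures.

    `offers`: one row per incentive offer, with datetime columns
    screener_date, offer_date, and window_end (next offer, completion,
    or end of field period), plus a 0/1 completed flag.
    `calls`: one row per call attempt, with firm_id and call_date.
    """
    out = offers.copy()
    # Days while the offer was in play, and days elapsed before it was made.
    out["days_offer_open"] = (out["window_end"] - out["offer_date"]).dt.days
    out["days_before_offer"] = (out["offer_date"] - out["screener_date"]).dt.days

    def n_calls(row: pd.Series, start: str, end: str) -> int:
        """Count this firm's calls with start <= call_date <= end."""
        c = calls.loc[calls["firm_id"] == row["firm_id"], "call_date"]
        return int(((c >= row[start]) & (c <= row[end])).sum())

    # Calls while the offer was in play, and calls made before the offer.
    out["calls_offer_open"] = out.apply(
        lambda r: n_calls(r, "offer_date", "window_end"), axis=1)
    out["calls_before_offer"] = out.apply(
        lambda r: int((calls.loc[calls["firm_id"] == r["firm_id"],
                                 "call_date"] < r["offer_date"]).sum()),
        axis=1)
    return out
```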

Other Data

In addition to the information on the size of the incentive offered and the timing of that offer, data from the D&B file and the screening interview were merged onto the final dataset. The screening interview collected information on the firm's size, primary industry, and organizational structure. Additionally, we have information on how difficult it was to complete the screener with the firm, which we believe controls for the firm's underlying propensity to complete any survey: the number of calls and days it took to complete the screener and an indicator of whether a proxy completed the screening interview because the owner could not be reached. From the D&B file, we added the firm's credit score. Additional dummy variables were constructed to capture calendar differences in when the screener was completed.

Table 1 provides summary statistics on the variables available for analysis. At first glance, higher incentives appear to be associated with lower response rates: 35 percent of firms completed the interview when offered $50, whereas only 11 percent of firms completed the interview when offered $500. However, it is important to keep in mind that these are not independent samples of firms. All firms that were offered $500 had previously been offered a smaller amount and failed to complete the interview at that amount, indicating that these firms may be inherently less prone to complete the survey. There is further evidence of this in the amount of effort required to complete the screening interview: firms that completed the main interview for the $50 incentive required the fewest calls and days to complete the screener. Differences in calling protocols are also evident across the incentives offered. The average firm offered $50 had 53 days and 15 call attempts in which to complete the main interview; the average firm offered $500 had just over five days and fewer than three calls.

There are few observable differences in the Census divisions and the urban or rural locations of firms across the different incentives offered. There also appear to be few differences across firm ownership and industry. There are slight differences in the offers by the size of the firm, with average firm size growing slightly as the incentives increase.

  5. Estimation
We model the likelihood of completing the interview for a given incentive as a function of three types of characteristics:
(1) p_i^\ast = \alpha E_i + \beta F_i + \delta G_i + u_i
where p_i^\ast is a value function correlated with the probability that firm i will complete the interview, E is a matrix of variables measuring effort put forth (days elapsed and calls made), F is a matrix of variables measuring the underlying propensity to respond (days and calls necessary to complete the screener and an indicator of a proxy completing the screener), and G is a matrix of variables capturing firm characteristics (location, ownership, and industry) and time and operational differences (batch, screener completion months, and an indicator of the Christmas/New Year holiday).

In practice, we do not observe the probability that a given firm will participate, but rather whether or not a given firm does participate in the survey.

(2) p_i = \begin{cases} 1 & \text{if } p_i^\ast > 0 \\ 0 & \text{otherwise} \end{cases}

(3) \Pr(p_i = 1) = \Pr\left[ u_i > -(\alpha E_i + \beta F_i + \delta G_i) \right] = 1 - F\left[ -(\alpha E_i + \beta F_i + \delta G_i) \right]
where F(\cdot ) is the cumulative distribution function of u. Assuming that u_{i} is normally distributed, we can estimate \alpha , \beta , and \delta using a probit model.
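To make equations (1) through (3) concrete, the sketch below simulates a latent index and fits the probit with statsmodels. The variable names and coefficient magnitudes are invented for illustration; this is not the SSBF data or specification.

```python
# Simulated illustration of equations (1)-(3); names and magnitudes invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "days_offer_open":  rng.integers(0, 120, n),   # E: days offer in play
    "calls_offer_open": rng.integers(0, 40, n),    # E: calls while in play
    "days_to_screener": rng.integers(1, 60, n),    # F: screener difficulty
    "proxy_screener":   rng.integers(0, 2, n),     # F: proxy did the screener
    "urban":            rng.integers(0, 2, n),     # G: firm characteristic
})
# Latent index p* = alpha*E + beta*F + delta*G + u, with standard normal u;
# the observed outcome is p = 1 when p* > 0 and 0 otherwise.
latent = (-0.02 * df["days_offer_open"] + 0.05 * df["calls_offer_open"]
          - 0.01 * df["days_to_screener"] - 0.30 * df["proxy_screener"]
          + rng.standard_normal(n))
df["completed"] = (latent > 0).astype(int)

X = sm.add_constant(df[["days_offer_open", "calls_offer_open",
                        "days_to_screener", "proxy_screener", "urban"]])
fit = sm.Probit(df["completed"], X).fit(disp=False)
print(fit.summary())
```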

As part of the survey sample design, after a period of attempting to field all cases in a batch, 40 percent of the incomplete cases were subsampled out and no further effort was put toward completing them.7 To account for this sample design feature, we weighted observations according to the following rules: all cases completed prior to subsampling were assigned a weight of one; all cases subsampled out were assigned a weight of zero; all cases subsampled in were assigned a weight of (1/.6); and all batch 4 cases were assigned a weight of one.8
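These rules reduce to a small function; a minimal sketch with hypothetical field names:

```python
# Hypothetical sketch of the subsampling weights described above.
def subsample_weight(batch: int, completed_before_subsampling: bool,
                     subsampled_in: bool) -> float:
    if batch == 4 or completed_before_subsampling:
        return 1.0          # batch 4 was never subsampled; early completes keep weight 1
    return 1 / 0.6 if subsampled_in else 0.0   # subsampled in: 1/.6; out: 0
```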

Equation (3) was estimated separately with a probit model four times, once for each of the incentive amounts offered. Because of the way the incentives were offered, these models have a few notable characteristics. First, each model is estimated independently of the others; it is not possible to estimate a nested system because the incentives were not offered sequentially to all firms. For example, many firms were offered $200 without first having been offered $50 or $100. Second, the sample of firms is not identical across models. For example, firms that completed the survey when the $50 incentive was offered were not subsequently offered the $100 incentive; such firms are included in the $50 probit, but not in the $100, $200, or $500 probits. Finally, while the control variables in each model are similar and aimed at capturing the same effects, they differ slightly across models. To see this more clearly, consider time in queue. When the first offer was made (most often $50), no time had elapsed. Since no firm was offered less than $50, all firms had zero days between the time the screening interview was completed and the first offer was made. The same is true of the number of calls: no call attempts were made prior to the first offer. Because there is no variation in these measures, their coefficients cannot be estimated in the $50 model. Similarly, because the incentives offered to a firm only increased over time (no firm was initially offered $500 and then subsequently offered $50), the coefficients for having been offered $100 or $200 cannot be estimated in the $50 model. Conversely, because not all respondents who received the $200 offer were initially offered $50, the coefficient for the $50 incentive can be estimated in the $200 model.
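The sketch below illustrates this structure, continuing the hypothetical (firm, offer)-level frame from the construction sketch in the Data section (assumed here to also carry offer_amount, completed, proxy_screener, and prior-offer dummy columns); the regressor lists mirror the restrictions just described.

```python
# One probit per offer amount; samples and regressor lists differ by model.
import statsmodels.api as sm

base = ["days_offer_open", "calls_offer_open", "proxy_screener"]
pre = ["days_before_offer", "calls_before_offer"]
specs = {
    50:  base,                                    # no pre-offer variation
    100: base + pre,
    200: base + pre + ["offered_50", "offered_100"],
    500: base + pre + ["offered_100"],            # offered_50/200 constant here
}
results = {}
for amount, cols in specs.items():
    sub = df[df["offer_amount"] == amount]        # separate sample per model
    X = sm.add_constant(sub[cols])
    results[amount] = sm.Probit(sub["completed"], X).fit(disp=False)
```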

The coefficients from these estimations are reported in Table 2. Across all models, we find that the more time that elapses after an incentive is offered ("Days between $xx offer and..."), the less likely the firm is to complete the interview. Similarly, time elapsed between screener completion and the current offer ("Days since screener before...") also decreases the likelihood of completion. We also find that the more calls made while a particular offer is in effect ("Calls made between $xx..."), the more likely the firm is to complete the interview; again, this result is constant across all incentive sizes. Calls made prior to the current offer ("Calls made prior to $xx offer") also enter positively into the probability of completion, although not always significantly. In the models for the $200 and $500 incentives, we also note that having previously offered the respondent a lower incentive ("R offered $xx incentive") decreases the likelihood that the firm will eventually complete the survey, perhaps indicating a loss of good will on the part of the respondent. Alternatively, it could indicate that these firms are simply not sensitive to incentives, regardless of size. Because there is no control group - every firm was offered some incentive - we cannot distinguish between these explanations.

Other variables enter into the likelihood much as predicted. Variables that indicate that the firm may be inherently more difficult to interview, such as having completed the screening interview with a proxy and the number of days and calls necessary to complete the screener, generally decrease the likelihood of completion.

Table 3 provides a measure of how much the likelihood of completion is affected by the effort expended on a case. The results indicate that each day that passes with an offer outstanding diminishes the likelihood of completion by around one percentage point, and each day that elapsed before the current offer was made decreases the probability by between 0.1 and 0.4 percentage points. They also show that each additional call made while the offer is in play increases the likelihood of completing the interview by between 0.5 and 1.2 percentage points, slightly less in magnitude than the reduction from each elapsed day. Each call made prior to the current offer increases the likelihood of completion by about 0.1 percentage points. Finally, the estimates predict a 5 to 7 percentage point decrease in the likelihood of completion when the firm had been offered a lower incentive prior to the current one.
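Marginal effects of this kind can be read off a fitted statsmodels probit directly; continuing the simulated sketch that follows equation (3):

```python
# Marginal effects at the sample means, analogous to Table 3; `fit` is the
# probit result from the simulated sketch above.
margeff = fit.get_margeff(at="mean")
print(margeff.summary())
```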

While Table 3 gives a sense of how effective the differing measures of effort are, its figures are benchmarks computed with all variables at their means and do not show how the probability changes as the number of calls and days increases. Figures 1 through 4 plot the predicted probability of completion by the number of days since the incentive was offered. Figure 1 indicates that once the $50 incentive has been available and not taken for more than 100 days, there is almost no likelihood that the firm will complete the interview, with a very steep drop-off in the likelihood between 50 and 100 days. The corresponding figures for the $100 and $200 incentives show a similar pattern, with the drop-off occurring more gradually but earlier. This difference can likely be attributed to the fact that these are the same respondents who did not complete the interview for smaller incentive offers: a significant amount of time had already passed since they completed the screener. Figure 4 does not resemble the previous three figures, showing little discernible pattern. Two factors likely account for this difference: the small number of observations and the limited number of days over which the incentive was available. The $500 offer was made only in the very last days of fielding, in a last-minute effort to increase response rates before coming out of the field. Had a similar number of days elapsed, a pattern similar to the earlier ones would likely have emerged.
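Curves like those in Figures 1 through 4 can be traced from a fitted probit by varying the days variable while holding the other regressors at their sample means; continuing the simulated sketch, with `fit` and `X` as defined there:

```python
# Predicted completion probability as days elapse, other regressors at means.
import numpy as np
import pandas as pd

days = np.arange(0, 150)
grid = pd.DataFrame([X.mean()] * len(days))   # every regressor at its mean...
grid["days_offer_open"] = days                # ...except days since the offer
p_hat = np.asarray(fit.predict(grid))         # P(complete) at each day count
print(p_hat[[0, 50, 100, 149]])
```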

Figures 5 through 8 plot the probability of completion by the number of calls made since the incentive was offered. These figures show less of a pattern than the previous four, but they do indicate that the effectiveness of additional calls to reluctant respondents falls; beyond roughly 20 calls, further attempts seem to affect the likelihood of completion only marginally. It is important to keep in mind that calls are a much less precise measure of effort than days: our measure counts a busy signal the same way as an actual contact with the respondent, although the two surely do not have the same impact on the respondent's likelihood of participation.

  6. Conclusions
This paper examines data from the incentive experience of the 2003 Survey of Small Business Finances. Because there was no control group of firms that did not receive an incentive offer, we can say little about the effectiveness of the size of the incentive. However, the results do provide some insight into the process of offering incentives. First, the data indicate that incentives are most effective shortly after they are offered: as the time since the initial offer increases, the likelihood of an eventual completion falls. Furthermore, the time between the initial contact - the completed screener in this case - and the incentive offer also reduces the effectiveness of the incentive. Second, we note that more contacts - calls in this case - increase the likelihood of completion. In other words, the offer in and of itself is not likely to induce the respondent to cooperate. There does, however, seem to be a limit beyond which more contacts will not induce higher cooperation. Finally, the 2003 SSBF experience indicates that the size of the incentive ought to be carefully considered before any offer is made. While offering progressively higher incentives may be cost minimizing, it may not be the most efficient way to gain cooperation and achieve high response rates.



Table 1: Characteristic Means by Incentive Offer
Variable Incentive Offered ($50) Incentive Offered ($100) Incentive Offered ($200) Incentive Offered ($500)
Incentives: Interview complete for $xx incentive 0.35 0.19 0.13 0.11
Incentives: Days between $xx offer and next offer/complete/end 53.45 48.89 12.99 5.56
Incentives: Calls made between $xx offer and next offer/complete/end 15.15 10.17 3.84 2.61
Incentives: Days since screener before R offered $xx   77.14 89.28 148.70
Incentives: Calls made prior to $xx offer   21.83 22.78 35.89
Incentives: R offered $50 incentive   1.00 0.95 1.00
Incentives: R offered $100 incentive     0.46 0.84
Incentives: R offered $200 incentive       1.00
Screener Difficulty Measures: Proxy completed screener 0.31 0.40 0.40 0.43
Screener Difficulty Measures: Calls made to complete screener 7.51 8.95 9.31 9.77
Screener Difficulty Measures: Days to complete screener 16.33 20.17 22.45 22.41
Screener Difficulty Measures: Screener completed Jun/Jul 0.32 0.43 0.17 0.35
Screener Difficulty Measures: Screener completed Aug/Sep 0.29 0.40 0.24 0.41
Screener Difficulty Measures: Screener completed Oct/Nov 0.30 0.17 0.38 0.25
Screener Difficulty Measures: Screener completed Dec/Jan 0.10 0.00 0.21 0.00
Screener Difficulty Measures: $xx incentive in effect b/t Nov 21 - Jan 1 0.44 0.78 0.15 0.00
Location Information: Urban 0.84 0.86 0.86 0.86
Location Information: New England 0.06 0.06 0.06 0.06
Location Information: Middle Atlantic 0.13 0.13 0.14 0.13
Location Information: East North Central 0.14 0.14 0.14 0.14
Location Information: West North Central 0.07 0.07 0.06 0.06
Location Information: South Atlantic 0.18 0.19 0.19 0.21
Location Information: East South Central 0.05 0.05 0.05 0.05
Location Information: West South Central 0.11 0.12 0.11 0.12
Location Information: Mountain 0.08 0.07 0.07 0.07
Location Information: Pacific 0.17 0.17 0.17 0.17



Table 1 (continued): Characteristic Means by Incentive Offer
Variable Incentive Offered ($50) Incentive Offered ($100) Incentive Offered ($200) Incentive Offered ($500)
Organizational Type: Proprietorship 0.33 0.30 0.31 0.28
Organizational Type: Partnership 0.07 0.07 0.07 0.06
Organizational Type: Corporation 0.60 0.64 0.63 0.65
Industry: Construction and mining 0.12 0.14 0.13 0.15
Industry: Manufacturing 0.11 0.12 0.12 0.11
Industry: Transportation 0.04 0.05 0.05 0.04
Industry: Wholesale trade 0.07 0.07 0.07 0.07
Industry: Retail trade 0.20 0.20 0.21 0.21
Industry: Insurance & real estate 0.06 0.05 0.05 0.05
Industry: Business services 0.20 0.20 0.20 0.19
Industry: Professional services 0.19 0.17 0.17 0.17
Batch: Batch 1 case 0.24 0.38 0.14 0.32
Batch: Batch 2 case 0.24 0.33 0.18 0.30
Batch: Batch 3 case 0.24 0.29 0.22 0.38
Batch: Batch 4 case 0.29 0.00 0.46 0.00
Firm Demographics: D&B credit score 56.34 55.00 55.01 54.49
Firm Demographics: Total employees 41.03 46.95 45.96 48.74
Observations 9,442 2,164 4,185 1,414

Notes: Observations are weighted to reflect subsampling. Firms that completed prior to subsampling were given a weight of 1; firms subsampled in were given a weight of (1/.6); and firms subsampled out were given a weight of 0.



Table 2: Probit Coefficients
  Incentive Offered ($50) Incentive Offered ($100) Incentive Offered ($200) Incentive Offered ($500)
Effort: Days between $xx offer and next offer/complete/end -0.060 -0.060 -0.065 -0.119
Effort: Days between $xx offer and next offer/complete/end (standard error) (0.002)*** (0.004)*** (0.004)*** (0.021)***
Effort: Calls made between $xx offer and next offer/complete/end 0.023 0.051 0.043 0.078
Effort: Calls made between $xx offer and next offer/complete/end (standard error) (0.003)*** (0.010)*** (0.006)*** (0.019)***
Effort: Days since screener before R offered $xx   -0.013 -0.032 -0.018
Effort: Days since screener before R offered $xx (standard error)   (0.004)*** (0.005)*** (0.008)**
Effort: Calls made prior to $xx offer   0.002 0.006 0.003
Effort: Calls made prior to $xx offer (standard error)   (0.004) (0.003)* (0.003)
Effort: R offered $50 incentive     -0.025  
Effort: R offered $50 incentive (standard error)     (0.148)  
Effort: R offered $100 incentive     -0.375 -0.387
Effort: R offered $100 incentive (standard error)     (0.113)*** (0.106)***
         
Participation Propensity: Proxy completed screener -0.461 -0.020 -0.336 -0.340
Participation Propensity: Proxy completed screener (standard error) (0.049)*** (0.093) (0.067)*** (0.106)***
Participation Propensity: Calls made to complete screener -0.017 0.005 -0.012 -0.006
Participation Propensity: Calls made to complete screener (standard error) (0.005)*** (0.009) (0.007) (0.010)
Participation Propensity: Days to complete screener -0.042 -0.010 -0.026 -0.021
Participation Propensity: Days to complete screener (standard error) (0.002)*** (0.005)* (0.005)*** (0.008)**
Firm Specific Information: Urban -0.049 -0.175 0.036 -0.128
Firm Specific Information: Urban (standard error) (0.054) (0.114) (0.079) (0.137)
Firm Specific Information: Partnership -0.176 -0.274 -0.019 -0.014
Firm Specific Information: Partnership (standard error) (0.095)* (0.205) (0.117) (0.202)
Firm Specific Information: Corporation -0.006 0.116 -0.092 -0.021
Firm Specific Information: Corporation (standard error) (0.047) (0.105) (0.069) (0.114)
Firm Specific Information: Manufacturing 0.149 0.093 0.144 0.342
Firm Specific Information: Manufacturing (standard error) (0.087)* (0.168) (0.123) (0.207)*
Firm Specific Information: Transportation 0.221 0.502 0.040 0.422
Firm Specific Information: Transportation (standard error) (0.106)** (0.230)** (0.158) (0.269)
Firm Specific Information: Wholesale trade 0.082 0.145 0.073 0.339
Firm Specific Information: Wholesale trade (standard error) (0.098) (0.200) (0.140) (0.233)



Table 2 (continued): Probit Coefficients
  Incentive Offered ($50) Incentive Offered ($100) Incentive Offered ($200) Incentive Offered ($500)
Retail trade 0.003 -0.110 0.136 0.233
Retail trade (standard error) (0.075) (0.157) (0.105) (0.183)
Insurance & real estate 0.071 0.119 0.089 0.485
Insurance & real estate (standard error) (0.102) (0.233) (0.147) (0.248)**
Business services 0.072 0.169 0.145 0.214
Business services (standard error) (0.075) (0.153) (0.106) (0.182)
Professional services 0.134 0.194 0.172 0.451
Professional services (standard error) (0.077)* (0.157) (0.111) (0.186)**
Screener completed Aug/Sep -0.026 -0.091 -0.095 0.168
Screener completed Aug/Sep (standard error) (0.104) (0.172) (0.202) (0.199)
Screener completed Oct/Nov -0.366 -0.108 0.141 0.421
Screener completed Oct/Nov (standard error) (0.151)** (0.261) (0.282) (0.302)
Screener completed Dec/Jan -0.143   0.198 -0.766
Screener completed Dec/Jan (standard error) (0.203)   (0.350) (1.137)
$xx incentive in effect b/t Nov 21 - Jan 1 -0.081 0.276 1.093  
$xx incentive in effect b/t Nov 21 - Jan 1 (standard error) (0.082) (0.208) (0.133)***  
Batch 2 case 0.217 -0.516 -1.252 -0.308
Batch 2 case (standard error) (0.092)** (0.179)*** (0.249)*** (0.327)
Batch 3 case -0.012 -2.181 -2.580 -1.516
Batch 3 case (standard error) (0.137) (0.294)*** (0.445)*** (0.735)**
Batch 4 case -1.628   -3.848 -1.304
Batch 4 case (standard error) (0.185)***   (0.598)*** (1.318)
D&B credit score 0.001 -0.001 0.001 -0.001
D&B credit score (standard error) (0.001)** (0.001) (0.001) (0.002)
Total employees -0.001 0.001 -0.000 -0.001
Total employees (standard error) (0.000)*** (0.001)* (0.000) (0.001)
Constant 3.693 2.497 5.141 3.095
Constant (standard error) (0.153)*** (0.483)*** (0.920)*** (1.778)*
Observations 8,271 2,129 4,154 1,387

Notes: Observations are weighted to reflect subsampling. Firms that completed prior to subsampling were given a weight of 1; firms subsampled in were given a weight of (1/.6); and firms subsampled out were given a weight of 0. Chi-square tests for joint significance of the Census divisions were insignificant, and the division indicators were dropped from the final models presented here. Standard errors in parentheses. * significant at 10%; ** significant at 5%; *** significant at 1%.



Table 3: Probit Marginal Effects for Selected Variables
  Incentive Offered ($50) Incentive Offered ($100) Incentive Offered ($200) Incentive Offered ($500)
Days between $xx offer and next offer/complete/end -0.019 -0.006 -0.009 -0.018
Days between $xx offer and next offer/complete/end (standard error) (0.001)*** (0.001)*** (0.001)*** (0.003)***
Calls made between $xx offer and next offer/complete/end 0.007 0.005 0.006 0.012
Calls made between $xx offer and next offer/complete/end (standard error) (0.001)*** (0.001)*** (0.001)*** (0.003)***
Days since screener before R offered $xx   -0.001 -0.004 -0.003
Days since screener before R offered $xx (standard error)   (0.000)*** (0.001)*** (0.001)**
Calls made prior to $xx offer   0.000 0.001 0.001
Calls made prior to $xx offer (standard error)   (0.000) (0.000)** (0.000)
R offered $50 incentive     -0.003  
R offered $50 incentive (standard error)     (0.021)  
R offered $100 incentive     -0.050 -0.069
R offered $100 incentive (standard error)     (0.015)*** (0.022)***
Observations 8,271 2,129 4,154 1,387

Notes: Observations are weighted to reflect subsampling. Firms that completed prior to subsampling were given a weight of 1; firms subsampled in were given a weight of (1/.6); and firms subsampled out were given a weight of 0. Chi-square tests for joint significance of the Census divisions were insignificant, and the division indicators were dropped from the final models presented here. Standard errors in parentheses. * significant at 10%; ** significant at 5%; *** significant at 1%. Marginal effects are calculated at sample means.


Figure 1: Predicted probability of completion for $50 by days since offer

Figure 1: Predicted probability of completion for $50 by days since offer was made. The X-axis displays the days between the $50 offer and the next offer, completion of the case, or the end of the survey. The Y-axis displays the probability of completion given that the $50 incentive was offered. As the days increase, the probability of completion at the $50 incentive level decreases, dropping sharply after about 50 days.

Figure 2: Predicted probability of completion for $100 by days since offer

Figure 2: Predicted probability of completion for $100 by days since offer was made. The X-axis displays the days between the $100 offer and the next offer, completion of the case, or the end of the survey. The Y-axis displays the probability of completion given that the $100 incentive was offered. As the days increase, the probability of completion at the $100 incentive level decreases, dropping sharply after about 50 days.


Figure 3: Predicted probability of completion for $200 by days since offer

Figure 3: Predicted probability of completion for $200 by days since offer was made. The X-axis displays the days between the $200 offer and the next offer, completion of the case, or the end of the survey. The Y-axis displays the probability of completion given that the $200 incentive was offered. As the days increase, the probability of completion at the $200 incentive level decreases, but the downward trend is not as sharp as in the results for the $50 and $100 offers.

Figure 4: Predicted probability of completion for $500 by days since offer

Figure 4: Predicted probability of completion for $500 by days since offer was made. The X-axis displays the days between the $500 offer and the next offer, completion of the case, or the end of the survey. The Y-axis displays the probability of completion given that the $500 incentive was offered. There is no discernible pattern between the number of days and the probability of completion.


Figure 5: Predicted probability of completion for $50 by calls made since offer

Figure 5: Predicted probability of completion for $50 by calls made since offer. The X-axis displays the calls made between the $50 offer and the next offer, completion of the case, or the end of the survey. The Y-axis displays the probability of completion given that the $50 incentive was offered. As the number of calls increases, the probability of completion decreases; between 20 and 40 calls, there is a marked drop-off in the likelihood of completion.

Figure 6: Predicted probability of completion for $100 by calls since offer

Figure 6: Predicted probability of completion for $100 by calls made since offer. The X-axis displays the calls made between the $100 offer and the next offer, completion of the case, or the end of the survey. The Y-axis displays the probability of completion given that the $100 incentive was offered. There is no discernible pattern between the number of calls and the probability of completion from 0 to 20 calls. After 20 calls, there is a noticeable decrease in the likelihood of completion.


Figure 7: Predicted probability of completion for $200 by calls since offer

Figure 7: Predicted probability of completion for $200 by calls made since offer. The X-axis displays the calls made between the $200 offer and the next offer, completion of the case, or the end of the survey. The Y-axis displays the probability of completion given that the $200 incentive was offered. There is no discernible pattern between the number of calls and the probability of completion from 0 to 20 calls. After 20 calls, there is only a slight increase in the likelihood of completion, but there are relatively few observations.

Figure 8: Predicted probability of completion for $500 by calls since offer

Figure 8: Predicted probability of completion for $500 by calls made since offer. The X-axis displays the calls made between the $500 offer and the next offer, completion of the case, or the end of the survey. The Y-axis displays the probability of completion given that the $500 incentive was offered. There is no discernible pattern between the number of calls and the probability of completion.


Bibliography

Bitler, Marianne P., Alicia M. Robb and John D. Wolken (2001) "Financial services used by small businesses: Evidence from the 1998 Survey of Small Business Finances," Federal Reserve Bulletin, 87: 193-205.

Bitler, Marianne P. and John D. Wolken (2001), "Effects of the level of interviewer effort on the characteristics of completed responses: An experiment using the 1998 Survey of Small Business Finances," Federal Committee on Statistical Methodology, Statistical Policy Working Paper 34, Part 5, pp. 90-99.

Church, Allan H. (1993) "Estimating the effect of incentives on mail survey response rates: A meta-analysis," Public Opinion Quarterly, 57: 62-79.

Dennis, William J. Jr. (2003) "Raising response rates in mail surveys of small business owners: Results of an experiment," Journal of Small Business Management, 41:278-???, July 2003.

Gunn, W. J., and I. N. Rhodes (1981) "Physician response rates to a telephone survey: Effects of monetary incentive level," Public Opinion Quarterly 45: 109-115.

James, Jeannine M. and Richard Bolstein (1992) "Large monetary incentives and their effect on mail survey response rates," Public Opinion Quarterly, 56: 442-453.

Jobber, D., and D. O'Reilly (1998) "Industrial mail surveys: A methodological update," Industrial Marketing Management 27: 95-107.

Kulka, Richard A. (1995) "The use of incentives to survey 'hard-to-reach' respondents: A brief review of empirical research and current research practices," Seminar on New Directions in Statistical Methodology, Statistical Policy Working Paper 23, pp. 256-299.

Mach, Traci L., and John D. Wolken (2006) "Financial services used by small businesses: Evidence from the 2003 Survey of Small Business Finances," Federal Reserve Bulletin, 92: 167-195.

National Opinion Research Center (2001), "The 1998 Survey of Small Business Finances Methodology Report," April, http://www.federalreserve.gov/pubs/oss/oss3/ssbf98/ssbf98home.html#ssbf98results.

National Opinion Research Center (2005), "The 2003 Survey of Small Business Finances Methodology Report," April, http://www.federalreserve.gov/pubs/oss/oss3/ssbf03/ssbf03home.html#ssbf03results.

Singer, Eleanor, Nancy Gebler, Trivellore Raghunathan, John Van Hoewyk and Katherine McGonagle (1999) "The effect of incentives in interviewer-mediated surveys," Journal of Official Statistics, 15:217-230.

Singer, Eleanor and Richard A. Kulka (2002) "Paying respondents for survey participation," in Studies of Welfare Populations: Data Collection and Research Issues, eds. Michele Ver Ploeg, Robert A. Moffitt and Constance F. Citro, National Academy Press, Washington, D.C., pp. 105-127.

White, Glenn D. and Amy Luo (2005) "Business survey response rates: Can they be improved?" The American Statistical Association Proceedings of the Section on Survey Research Methods, pp. 3666-3668.



Footnotes

1. More information on the Federal Reserve's SSBF project can be found on the project website: http://www.federalreserve.gov/pubs/oss/oss3/nssbftoc.htm. The field work for the 2003 survey was conducted by the National Opinion Research Center (NORC) of the University of Chicago between June 2004 and February 2005.

2. See National Opinion Research Center (2001) and Bitler and Wolken (2001).
3. See section 4 below for additional details on the incentive program.
4. Although the plan called for offering a $50 incentive to all respondents, some respondents who were not screened until late in the field period received initial incentive offers of $200.
5. See, e.g., Bitler and Wolken (2001), Gunn and Rhodes (1981), Jobber and O'Reilly (1998), and White and Luo (2005).

6. At every incentive level, respondents had the opportunity to choose either the cash or the D&B package. In practice, most respondents took the cash incentive, and some that initially opted for the non-cash incentive requested the monetary incentive once they had examined the non-cash incentive. For additional details, see National Opinion Research Center (2005).

7. Subsampling was done only in batches 1, 2, and 3. Batch 4 cases were all worked until the end of the field period.
8. The (1/.6) weights adjust for the 40 percent of firms that were subsampled out (a 60 percent subsampling rate). All estimations were also run using all observations and no weights; the qualitative and quantitative results from these unweighted models are very similar to the ones presented in this paper. For brevity, only the weighted estimates are presented here.
