
THE 2003 SURVEY OF SMALL BUSINESS FINANCES

METHODOLOGY REPORT

SUBMITTED TO:

THE BOARD OF GOVERNORS OF THE FEDERAL RESERVE

SUBMITTED BY:

THE NATIONAL OPINION RESEARCH CENTER

NANCY POTOK

ROBERT BAILEY

BILL SHERMAN

RACHEL HARTER

MICHAEL YANG

JANELLA CHAPLINE

JAKE BARTOLONE


ACKNOWLEDGMENTS

The authors wish to express their sincere thanks to all those who contributed to the production of this report. First and foremost, we acknowledge the substantive expertise of Dr. John D. Wolken and Dr. Traci Mach, whose insights guided this study throughout. Lieu Hazelwood, Courtney M. Carter, John A. Holmes and Mary K. Wilson at the Federal Reserve Board reviewed drafts of the report and offered useful suggestions.

The authors greatly appreciate the respondents who generously gave their time and information and the dedicated interviewers who collected these important data.

TABLE OF CONTENTS

1. INTRODUCTION 1

1.1 STUDY PURPOSE 1

1.2 STUDY BACKGROUND 2

1.3 FEDERAL RESERVE BOARD AND NORC PROJECT STAFF 3

1.3.1 Federal Reserve Board Staff 3

1.3.2 NORC Staff 3

1.4 PROJECT OVERVIEW 4

1.5 SUMMARY OF FINAL OUTCOMES 7

1.6 ORGANIZATION OF THE REPORT 7

2 QUESTIONNAIRE AND WORKSHEET DEVELOPMENT 9

2.1 INTRODUCTION 9

2.2 EXPERT CONSULTATION 10

2.3 WORKSHEET DEVELOPMENT 10

2.4 QUESTIONNAIRE DEVELOPMENT 10

2.4.1 Screener instrument 11

2.4.2 Questionnaire 12

2.5 CATI DEVELOPMENT 15

2.6 PRETESTS ONE AND TWO 16

2.6.1 Introduction 16

2.6.2 Testing Matching InfoUSA Data with D&B Data 17

2.6.3 Pretest Sample Selection 17

2.6.4 Pretest Data Collection 19

2.6.5 Pretest Findings 20

2.7 2003 SSBF SCREENER 23

2.8 2003 SSBF QUESTIONNAIRE 24

3 DATA COLLECTION PREPARATION AND INTERVIEWER TRAINING 29

3.1 INTRODUCTION 29

3.2 DEVELOP RESPONDENT MATERIALS 30

3.2.1 Logos 30

3.2.2 Endorsement Letters and Press Release 30

3.2.3 Other Promotional Materials 31

3.3 ADVANCE MAILING 33

3.4 WORKSHEET MAILING 33

3.5 2003 SSBF WEBSITE 34

3.6 TOLL-FREE PHONE NUMBERS AND EMAIL ADDRESSES 35

3.7 INTERVIEWER RECRUITING AND HIRING 35

3.7.1 Overview 35

3.7.2 Pretests 36

3.7.3 Main Study 37

3.7.4 Interviewer Attrition 37

3.8 INTERVIEWER TRAINING 38

3.8.1 Introduction 38

3.8.2 Job Aids 40

3.8.3 Screener Materials and Training 41

3.8.4 Screener Certification 43

3.8.5 Main Interview Materials and Training 44

3.8.6 Main Interview Certification 46

4 DATA COLLECTION 49

4.1 INTRODUCTION 49

4.2 TWO-PASS INTERVIEWING 50

4.3 MAILING ADVANCE RESPONDENT MATERIALS 51

4.4 SAMPLE RELEASE AND MANAGEMENT 52

4.4.1 Overview 52

4.4.2 Release of Sample Batch Four 55

4.5 TIMING AND SCHEDULE 55

4.6 SCREENER DATA COLLECTION 56

4.6.1 Introduction 56

4.6.2 Protocol for Working Screening Passes 57

4.6.3 Refusal Conversion 63

4.6.4 Locating 66

4.6.5 Receipt Control 66

4.6.6 Quality Control 67

4.6.7 Interviewer Misconduct 67

4.6.8 CATI Changes 68

4.6.9 Level of Effort 68

4.6.10 Unweighted Completion Rate 68

4.6.11 Eligibility Rate 70

4.6.12 Nonresponse 71

4.6.13 Nonresponse Follow-Up 71

4.7 MAIN INTERVIEW DATA COLLECTION 78

4.7.1 Introduction 78

4.7.2 Mailing Worksheet Materials 79

4.7.3 Protocol for Working Main Interview Passes 80

4.7.4 Protocol for Batch Four Main Interviewing 85

4.7.5 Special Efforts to Increase Production 85

4.7.6 Quality Control 89

4.7.7 Locating 90

4.7.8 Receipt Control and Worksheets 90

4.7.9 Data Retrieval 91

4.7.10 Drop/Add Forms 93

4.7.11 Weekly Production Reports 94

4.7.12 CATI Changes 94

4.7.13 Level of Effort 99

4.7.14 Unweighted Completion Rate 101

5 DATA REVIEW AND DELIVERY 103

5.1 INTRODUCTION 103

5.2 QUALITY CONTROL PROCESS 103

5.2.1 Data Review Process 104

5.2.2 Interviewer Variability Checks 105

5.2.3 Completeness Check Process 105

5.3 CLIENT DATA MEMOS 110

5.4 DATA EDITING 110

5.4.1 Identifying Cases for Editing 110

5.4.2 Editing Process 111

5.4.3 Transactions File 111

5.4.4 Data Changes Spreadsheet 112

5.4.5 Global and System Edits 112

5.5 DATA CLEANING 115

5.6 DATA CODING AND RECODING 115

5.6.1 Industry Coding 115

5.6.2 Race Coding 116

5.6.3 Business Problem Coding 117

5.7 INTERIM DATA DELIVERIES 119

5.8 FINAL DATA DELIVERY 119

6 SAMPLING AND WEIGHTING PROCEDURES 121

6.1 INTRODUCTION 121

6.2 TARGET POPULATION 122

6.3 FRAME CONSTRUCTION 124

6.3.1 Identification Variable 125

6.3.2 Stratification Variables 125

6.3.3 Other Variables 125

6.4 SAMPLE STRATIFICATION 126

6.5 SCREENING SAMPLE SIZE ESTIMATION 129

6.5.1 The First Approach 130

6.5.2 The Second Approach 137

6.5.3 Sample Size Revisions 147

6.6 ASSIGNING SAMPLE TO BATCHES AND REPLICATES 151

6.6.1 Formation of Batches 152

6.6.2 Formation of Replicates 152

6.6.3 Adjustment of Selection Probabilities for Batch Selection 154

6.7 NONRESPONSE SUBSAMPLING AND DESIGN EFFECTS 155

6.7.1 Subsampling Method 155

6.7.2 Estimated Design Effects 159

6.7.3 Realized Design Effects 162

6.8 THE FIVE PERCENT FOLLOW-UP SUBSAMPLE 163

6.8.1 Purpose of the Subsample 163

6.8.2 Selection of the Subsample 164

6.8.3 Estimates from the Subsample 165

6.9 WEIGHTING PROCEDURES 166

6.9.1 Initial Base Weight: w_{1ih} 166

6.9.2 Adjustment for Batch Selection: w_{2ih} 166

6.9.3 Adjustment for Sample Release: w_{3ih} 167

6.9.4 Adjustment for Screener Subsampling: w_{4ih} 167

6.9.5 Adjustment for Eligibility: w_{5ih} 168

6.9.6 Adjustment for Screener Nonresponse: w_{6ih} 172

6.9.7 Adjustment for Main Interview Nonresponse Subsampling: w_{7ih} 175

6.9.8 Adjustment for Main Interview Eligibility: w_{8ih} 177

6.9.9 Adjustment for Main Interview Nonresponse: w_{9ih} 177

6.9.10 Weight Trimming: w_{10ih} 179

6.10 RESPONSE RATES 186

6.10.1 Introduction to the Concept of Multiple Rates 186

6.10.2 Background 187

6.10.3 Screener Completion Rates 189

6.10.4 Screener Completion Rate Calculations 191

6.10.5 Main Interview Completion Rates 192

6.10.6 Main Interview Completion Rate Calculations 193

6.10.7 Overall Response Rates 195

6.11 DESIGN CHANGES 196

6.11.1 Summary of Design Changes 196

6.11.2 InfoUSA Database 197

6.11.3 Evaluation of InfoUSA's Out of Business Flag 207

6.11.4 Evaluation of D&B-InfoUSA Matching 207

7 2003 SSBF LESSONS LEARNED 208

7.1 INTERVIEWERS 208

7.1.1 Interviewer Skills 209

7.1.2 Using Employment Agencies 209

7.1.3 Mentoring Interviewers 210

7.1.4 Training in Gaining Cooperation 211

7.1.5 Timing of Training 211

7.2 SAMPLE DESIGN 212

7.2.1 Time between Screening and Main Interview 212

7.2.2 Sample Batches 212

7.2.3 Two-pass Interviewing 213

7.3 RESPONDENT MATERIALS 213

7.3.1 Letter Enclosures 213

7.3.2 Refusal Conversion Letters 214

7.4 CATI INNOVATIONS 214

7.4.1 Dollar Verification Screens 214

7.4.2 Institution Look-Up Process 215

7.5 RESPONDENT INCENTIVES 215

7.6 SCHEDULE 216

8 BIBLIOGRAPHY 218

LIST OF TABLES

Table 2.1 Key Screener Changes from 1998 11

Table 2.2 Key Questionnaire Changes from 1998 12

Table 2.3 Sample Strata for 750 Cases Used in Pretest One 18

Table 2.4 Sample Strata for 600 Cases Used in Pretest Two 19

Table 2.5 Pretest Eligibility and Completion Rates 19

Table 2.6 Screening Hours per Case During Pretests 20

Table 2.7 Questionnaire Hours per Case During Pretests 20

Table 3.1 Outline of Activities for Developing Respondent Materials and Interviewers 30

Table 3.2 Training Schedule for First Three Sessions 39

Table 3.3 Training Dates, Number of Trainees and Interviewer Attrition 39

Table 3.4 List of Job Aids Used in Screening and Main Interviewing 41

Table 3.5 Agenda for Screening Training (Short Form) 42

Table 3.6 Agenda Modules for Main Interview Training (Short Version) 45

Table 4.1 Outline of SSBF Data Collection Process 50

Table 4.2 Purposes of Pass One and Pass Two for Screening and Main Interviewing 51

Table 4.3 Items Sent in Advance Mailing 52

Table 4.4 Sample Batch Size, Initial Advance Mailing and Sample Release Dates (2004) 53

Table 4.5 Production Workflow 53

Table 4.6 Number of Weeks to Complete Screeners by Sample Batch and Pass 56

Table 4.7 Number of Weeks to Complete Main Interviews by Sample Batch and Pass 56

Table 4.8 Screener Refusal Conversion Rate1 by Sample Batch and Pass 64

Table 4.9 Description of Screener Refusal Mailing by Sample Batch 65

Table 4.10 Number of Returned Advance Letters by Sample Batch 67

Table 4.11 List of Screener CATI Changes During Production 69

Table 4.12 Level of Effort by Screener Completes and Non-Completes 69

Table 4.13 Screener Completion Rate1 by Batch and Pass (Unweighted) 70

Table 4.14 Screener Completion Rate by Refusal Letter Treatment 70

Table 4.15 Final Case Status by Sample Batch (Unweighted) 70

Table 4.16 Number of Screened Eligible Firms by Workforce Size1 71

Table 4.17 Cases Eligible for 5% Follow-up Subsample by Sample Batch and Pass 72

Table 4.18 Outcomes That Helped Establish Ineligibility in 5% Follow-Up 76

Table 4.19 Receipt Control from 5% Follow-Up Mailing to Nonrespondents 78

Table 4.20 Items Sent in Worksheet Mailing1 80

Table 4.21 Interviewer Hours per Week Before and After Incentive Program 87

Table 4.22 Completes1 by Amount and Type of Incentive 89

Table 4.23 Number of Returned Worksheets by Main Case Status 91

Table 4.24 Number of Returned Worksheets Among Completed Cases by Batch 91

Table 4.25 Five Most Frequent Data Retrieval Issues 93

Table 4.26 List of CATI Changes Made to Main Questionnaire During Production 94

Table 4.27 Level of Effort by Main Case Status 100

Table 4.28 Number of Calls by Sample Batch for All Main Cases 100

Table 4.29 Case Status and Completion Rate by Sample Batch 101

Table 4.30 Main Eligible Completion Rate1 by Sample Batch and Pass (Unweighted) 101

Table 5.1 Pass Rates by Batch 109

Table 5.2 Bubble Cases by Batch 110

Table 6.1 Businesses Excluded from the SSBF Target Population 124

Table 6.2 2003 SSBF Sample Stratification 127

Table 6.3 2003 SSBF Screening Sample Size Estimation 131

Table 6.4 Nonrespondents and Finalized Nonrespondents/Noncontacts Among Pass 1 Screener Incompletes: Estimated from Table 6.6 in 1998 SSBF Methodology Report 134

Table 6.5 Screener Eligibility Rate Among Pass 1 Finalized Nonrespondents/Noncontacts: Estimated From Table 8.18 in the 1998 SSBF Methodology Report 135

Table 6.6 Screener Eligibility Rate Among Pass 2 Nonrespondents: Established From Table 8.18 in the 1998 SSBF Methodology Report 135

Table 6.7 Marginal Totals for Each of the Dimensions of the Sample Stratification 139

Table 6.8 Allocation of 4,000 Completed Interviews to Strata 140

Table 6.9 Expected Screener Completion Rate by Group: Estimated from Table 8.21 in the 1998 SSBF Methodology Report 142

Table 6.10 Expected Main Interview Eligibility Rate by Group 143

Table 6.11 Expected 2003 Main Interview Completion Rates by Group: Estimated from Table 8.28 in the 1998 SSBF Methodology Report 143

Table 6.12 Allocation of Initial Screening Sample to Strata 144

Table 6.13 Assumed vs. Actual Outcome Rates 147

Table 6.14 Total Completes as a Proportion of Total Target by Sample Balancing Group for Batches 1 and 2 148

Table 6.15 Final Screening Sample Size by Stratum and Batch 149

Table 6.16 Pass 2 Screener Sample Size by Disposition Code and Batch 157

Table 6.17 Pass 2 Main Interview Sample Size by Disposition Code and Batch 158

Table 6.18 Design Effects per Stratum Due to Unequal Weighting 161

Table 6.19 Realized Design Effect per Frame Size Class Before Weight Trimming 162

Table 6.20 Distribution of Completes Over the Original and Updated Size Classes 162

Table 6.21 Realized Design Effect per Updated Size Class Before Weight Trimming 163

Table 6.22 Final Design Effect per Updated Size Class After Weight Trimming 163

Table 6.23 Sample Size for 5% Follow-Up Study 164

Table 6.24 Final Dispositions for Noncontacts in Five-Percent Follow-Up Study 165

Table 6.25 Final Dispositions for Nonrespondents in 5% Follow-Up Study 165

Table 6.26 Pass 2 Screener Sample Size by Batch and Subsampling Group 169

Table 6.27 Final Screener Disposition Codes and Eligibility Adjustments 171

Table 6.28 Screener Nonresponse Adjustment Cells 174

Table 6.29 Pass 2 Main Interview Sample Size by Batch and Subsampling Group 176

Table 6.30 Main Interview Final Disposition Codes 178

Table 6.31 Main Interview Nonresponse Adjustment Cells 180

Table 6.32 Summary Statistics of the Weights and Design Effects Prior to Weight Trimming By Updated Size Class 180

Table 6.33 Final Summary Statistics of the Weights and Design Effects By Updated Size Class 185

Table 6.34 Sum of Weights at Each Weighting Step 186

Table 6.35 Operating Status of Pretest 1 Cases 200

Table 6.36 Operating Status of Pretest 1 Cases by Final Disposition 200

Table 6.37 Operating Status and Match Outcome by Final Disposition 203

Table 6.38 Operating Status by Firm Size and Match Outcome 204

Table 6.39 Operating Status by Ownership Type and Match Outcome 204

Table 6.40 Operating Status by Firm Size, Ownership Type and Match Outcome 205

Table 6.41 Operating Status by Firm Size (Percent Calculation by Firm Size) 206

Table 6.42 Operating Status by Collapsed Firm Size (Percentage Calculation by Firm Size) 206

LIST OF FIGURES

Figure 6.1 2003 SSBF Sampling Flowchart 123

Figure 6.2 Distribution of Untrimmed Weights: Size Class One 181

Figure 6.3 Distribution of Untrimmed Weights: Size Class Two 182

Figure 6.4 Distribution of Untrimmed Weights: Size Class Three 182

Figure 6.5 Distribution of Untrimmed Weights: Size Class Four 183

Figure 6.6 DEFF by Trimming Level: Size Class One 183

Figure 6.7 DEFF by Trimming Level: Size Class Two 184

Figure 6.8 DEFF by Trimming Level: Size Class Three 184

Figure 6.9 DEFF by Trimming Level: Size Class Four 184

LIST OF APPENDICES

Appendix A. Pretest One Debriefing Memo

Appendix B. Pretest Two Debriefing Memo

Appendix C. Interviewer Job Aids

C.0 List of 2003 SSBF Job Aids
C.1 Tax forms used by different organization types
C.2 Eligibility criteria
C.3 Frequently asked questions and answers
C.4 Telephone Number Management System (TNMS) disposition codes
C.5 Instructions for logging in and out of TNMS
C.6 Answering machine scripts
C.7 CATI functions
C.8 SSBF important codes/telephone numbers
C.9 Institution look-up
C.10 Institution look-up quick reference
C.11 Top 10 reasons to participate in the SSBF
C.12 Entering institution names into the look-up table
C.13 Conventions for entering institution names into the look-up database
C.14 Conventions for coding responses to A10_2
C.15 Tools for working batch four main cases after incentive increases to $200
C.16 Encouraging respondents to report dollar amounts in balance sheet questions
C.17 Encouraging respondents to return worksheet and other materials

Appendix D. Screener Questionnaire

Appendix E. Average Timing Report

Appendix F. 2003 SSBF Logos

Appendix G. NBA Endorsement Letter

Appendix H. NFIB Endorsement Letter

Appendix I. SBA Endorsement Letter

Appendix J. Press Release for the 2003 SSBF

Appendix K. Advance Mailing: Project Director Letter

Appendix L. Advance Mailing: Envelope Return Address

Appendix M. Worksheet Mailing: Project Director Letter

Appendix N. Alan Greenspan Letter

Appendix O. Advance Mailing: Buckslip

Appendix P. Advance Mailing: General Information Brochure

Appendix Q. Worksheet Mailing: 1998 Results Brochure

Appendix R. Worksheet Mailing: FAQ Brochure

Appendix S. Worksheet WS1 (Sole Proprietors)

Appendix T. Worksheet WS2 (Partnerships)

Appendix U. Worksheet WS3 (S Corporations)

Appendix V. Worksheet WS4 (C Corporations)

Appendix W. Worksheet Mailing: D&B Small Business Reports Brochure

Appendix X. Worksheet Mailing: NORC Confidentiality Statement

Appendix Y. Worksheet Mailing: FRB Structure & Functions Brochure

Appendix Z. Worksheet Mailing: Folder

Appendix AA. Worksheet Mailing: Return Envelope

Appendix BB. Worksheet Mailing: Organizational-Type-Unknown Letter

Appendix CC. 2003 SSBF Homepage at NORC Website

Appendix DD. 2003 SSBF Homepage at the FRB Website

Appendix EE. Final Data Collection Debriefing Memo

Appendix FF. Agenda for Screener Training

Appendix GG. Agenda for Main Interview Training

Appendix HH. Number of Calls to Complete by Screener

Appendix II. Sample Level of Effort Report

Appendix JJ. Screener Conversion Letters Not Offering $2


JJ.0 List of 2003 SSBF Screener Conversion Letters Not Offering $2
JJ.1 Letter Type 1 - Concern About Confidentiality
JJ.2 Letter Type 2 - Firm Does Not Use Credit
JJ.3 Letter Type 3 - Too Busy/Concern About Time and Effort
JJ.4 Letter Type 4 - Concern About Study Legitimacy
JJ.5 Letter Type 5 - Generic/Non-Specific Refusal

Appendix KK. FRB In Plain English Brochure

Appendix LL. Screener Conversion Letters Offering $2


LL.0 List of 2003 SSBF Screener Conversion Letters Offering $2
LL.1 Letter Type 1 - Concern About Confidentiality ($2)
LL.2 Letter Type 2 - Firm Does Not Use Credit ($2)
LL.3 Letter Type 3 - Too Busy/Concern About Time and Effort ($2)
LL.4 Letter Type 4 - Concern About Legitimacy ($2)
LL.5 Letter Type 5 - Generic/Non-Specific Refusal ($2)

Appendix MM. Level of Effort by Screener Final Disposition Codes

Appendix NN. 5% Follow-Up: Self-Administered Questionnaire

Appendix OO. 5% Follow-Up: Project Director Letter to Noncontacts

Appendix PP. 5% Follow-Up: Facesheet for Noncontacts

Appendix QQ. 5% Follow-Up: Interviewer-Administered Questionnaire

Appendix RR. 5% Follow-Up: Project Director Letter to Nonrespondents

Appendix SS. Main Conversion Letters

SS.0 Main Conversion Letters List
SS.1 Letter Type 1 - Concern About Confidentiality
SS.2 Letter Type 2 - Too Busy/Concern About Time and Effort
SS.3 Letter Type 3 - Conversion Letter Sent to Firms That Have Not Explicitly Refused
SS.4 Letter Type 4 - General/Non-Specified Reason for Refusal
SS.5 Letter Type 5 - Letter Sent to Firms That Had Partially Completed the Main Interview

Appendix TT. Email Message Sent to Main Respondents

Appendix UU. Interviewer Letter for Sections P-S Conversions

Appendix VV. Interviewer Monitoring Form

Appendix WW. Sample PDR Form

Appendix XX. Drop/Add Form for Services and Institutions

Appendix YY. Final Production Report

Appendix ZZ. Number of Calls to Complete by Main Interview

Appendix AAA. Level of Effort by Main Final Disposition Codes

Appendix BBB. Interim Delivery Schedule

Appendix CCC. Logistic Regression Results


1. Introduction

While data on small businesses exist from a variety of sources, no single source provides detailed information on the finances of small businesses and their use of credit from all sources. In 1987, 1993, and 1998, small businesses were surveyed about their finances on behalf of the Federal Reserve Board (FRB) in order to provide nationally representative data on these topics. These surveys provided information on small businesses' income, expenses, assets, and liabilities; the characteristics of the firms and their owners; and the firms' financial relationships with suppliers of a broad set of financial products and services.

To measure the extent to which the financial environment of small businesses has changed since the last survey, the 2003 Survey of Small Business Finances (SSBF) was proposed. The 2003 SSBF was established to collect information from the owners of a nationally representative sample of up to 5,000 business enterprises. It was intended to gather data from small businesses on their financial relationships, credit experiences, lending terms and conditions, income and balance sheet information, the location and types of financial institutions used, and other firm characteristics.

In October 2003, the National Opinion Research Center (NORC) at the University of Chicago was awarded the contract to conduct the 2003 SSBF, which was to have similar content to the previous three surveys. This report documents the methods used to conduct the 2003 SSBF, from the planning stages in October 2003 through its completion in April 2005.


1.1 Study Purpose

The 2003 SSBF had several specific objectives that governed the basic content of the survey. The following list briefly describes each objective and the survey content associated with it.

Assess credit availability for small businesses. The SSBF will provide the only current, nationally representative data on the use of nondepository and nonfinancial sources of credit by small businesses. This includes trade credit; credit from finance companies, individuals, and nonfinancial firms; and "angel" and venture capital. Responses to questions about the firm's most recent credit application will provide information on credit terms and credit turn down experiences.

Study the effects of bank mergers, bank consolidation, and interstate banking on credit use by small businesses. The 2003 SSBF will provide a fourth cross section to study these issues, enhancing analysts' ability to study changes in bank lending behavior over time. The SSBF questions on the types of credit used and the amounts of each type of credit from various sources and credit turn downs are important for this objective. The 2003 SSBF will also provide a benchmark for statistics collected from large banks on small business lending under the Community Reinvestment Act.

Provide important financial statement data for small businesses that are not available systematically from any other source. The 2003 SSBF will be a current micro database containing information on small firms' financial statements. No private or publicly available data set provides such comprehensive, nationally representative data on the financial condition of individual small businesses. One use for the SSBF data will be to improve the quality of estimates of aggregate statistics in the Board's Flow of Funds Accounts. The Statistics of Income Division of the Internal Revenue Service, the primary data source for the noncorporate business sector in the Flow of Funds Accounts, provides an income statement but no balance sheet data for proprietorships. The three previous surveys' data for proprietors' income statements and balance sheets were used to construct a benchmark for estimating asset and liability amounts for this sector; the 2003 SSBF will be used similarly.

Review validity of operating definitions of geographic and product markets used in antitrust analysis of banking markets. Technological, competitive, and regulatory changes may reduce information or transaction costs that limit geographic or product markets for financial services. The link between the financial product inventory and the name and location of the financial institution supplying each product is critical for defining markets. This link enables the analyst to determine distances between the firm and financial institution for each financial service, and the types of financial services obtained from each institution.

Monitor technological and competitive changes in markets for financial services used by small businesses. Credit scoring, credit cards, electronic methods for delivery of financial services, and other efforts to standardize credit products or reduce transaction costs have the potential to change significantly the cost and the availability of financial services to small businesses. Over time, these changes may alter behavior and affect monetary, supervisory, and antitrust policies at the Board. For this objective, information on the frequency of use of certain products will be obtained from questions on credit cards, other financial services, and computer usage.


1.2 Study Background

Small businesses are extremely important to the U.S. economy. According to the Small Business Administration (SBA), in 2001, 99.7% of all firms had fewer than 500 employees. These firms employed more than half of the private sector workforce and accounted for 75% of the new jobs created. Small businesses accounted for approximately half of the gross domestic product (GDP). Starting and maintaining a small business is fraught with challenges. In 2001, almost 585,000 new firms with fewer than 500 employees were created, but almost 553,000 went out of business during that same period. The number of small business loans increased from 7.73 million in 1999 to 9.8 million in 2000, with almost 27% of the increase attributable to loans of under $100,000. In part, the SSBF is conducted to help researchers better understand how such changes in the financial marketplace affect the acquisition and use of financial services by small businesses.

Since the 1998 survey, consolidation has continued in the banking industry; current data are needed to understand the impact of this consolidation on small business lending practices. Data from the 2003 SSBF will help researchers understand the ongoing impact of banking mergers and consolidations, as well as the continued rise in interstate banking. This data set, together with data collected for 1987, 1993, and 1998, will permit researchers to identify trends in the use of local and non-local banks, and non-bank institutions, and to identify any changes in the types of financial services used by small businesses, such as in credit card and trade credit use. The survey contains an expanded set of information on small businesses' recent borrowing experiences, which can be used to identify segments of the small business sector that have the most difficulty obtaining credit. The data from four points in time, taken together, will be a significant contribution to the body of knowledge in this area.

The target population of the study was headquarter locations1 of nongovernmental, nonfinancial, nonagricultural for-profit businesses with fewer than 500 employees. These firms also had to be in business at the time of data collection as well as during December 2003 under one or more of their current owners.


1.3 Federal Reserve Board and NORC Project Staff


1.3.1 Federal Reserve Board Staff

Dr. John Wolken, a Senior Economist at the Federal Reserve Board, was the Contracting Officer's Technical Representative (COTR) for the SSBF project and in that role was responsible for all technical aspects of the project. Dr. Traci Mach, an economist, assisted Dr. Wolken with all aspects of the project. Other Board staff who assisted with the project included financial analysts Courtney Carter and Lieu Hazelwood and research assistants Katie Wilson and John Holmes. Lucy Lucas, a contracting specialist, assisted Dr. Wolken with contracting issues during the period of performance of the contract. Drs. Wolken and Mach and their staff were active participants in the project throughout the design, execution, and data delivery phases of the project.


1.3.2 NORC Staff

The FRB contracted with NORC to conduct the SSBF project. The project was led and supported by staff from various departments within NORC. It was initially led by Dr. Carol-Ann Emmons, the project director, and Robert Bailey, the associate project director. After data collection began, Nancy Potok joined the study as project director, Dr. Emmons replaced Robert Bailey as associate project director, and Mr. Bailey became the data collection production manager at the NORC Telephone Survey Operations Center. Mr. Bailey was assisted by Mireya Dominguez. Michael Weitzenfeld initially was responsible for materials and systems development. He was replaced on the project by Bill Sherman, who was responsible for data collection activities. Mr. Sherman was initially assisted by Kelly Gardner and later by Dan Loew. Jake Bartolone supervised the data delivery activities for the main study. Dr. Bartolone was assisted by Kate Dalton. Antonio Macias was responsible for tracking the project expenses.

Dr. Rachel Harter coordinated and oversaw the work of the team of sampling statisticians. Dr. Janella Chapline was the lead operational statistician, responsible for sample selection and weight calculation. Dr. Chapline was assisted by Lidan Luo and Candice Saulsberry, primarily for program quality control and the development of custom sample monitoring routines. Dr. Yonghe (Michael) Yang led the development of the sampling and weighting plans. Javier Porras contributed to the weighting plan and the pretest sample selection, and Dr. Fritz Scheuren contributed significant design ideas in the early stages of the project. Benjamin Skalland carried out quality control analyses and related activities.

Computing support was led by Phillip Panczuk. Valeri Cooke was the questionnaire and Telephone Number Management System (TNMS) programmer. David Pieper and Robert Montgomery wrote the SAS programs and formatted the data for data delivery.

Shirley Williams led the mailout and receipt effort with assistance from Nate Straughter and Walter Bonner. Sharnia Lashley, Chequita Moody and Nauman Mirza were responsible for SIC coding.

James Casey and David Adams provided assistance with contracting issues throughout the period of performance.

NORC vice presidents Michael Pergamit and Richard Rubin developed the proposed study along with Dr. Scheuren. They were assisted by Robert Bailey. In addition, Dr. Pergamit, assisted by Javier Porras, conducted a detailed analysis of how InfoUSA data might improve the accuracy of the sample frame data (see Section 6.11). The oversight for the project was provided by Executive Vice President John Thompson.

NORC engaged the services of a small business accountant, Charles Smith. As a Certified Public Accountant from Smith and Associates, an accounting firm that specializes in accounting for small businesses, Mr. Smith provided helpful technical guidance throughout the project.


1.4 Project Overview

The first activity undertaken after contract award was questionnaire development. Prior to the start of the contract, the questionnaire from 1998 was modified to incorporate changes made by the FRB as well as to improve question wording, to include additional questions on respondent incentives, to re-order questions within the loan type section, and to re-work numerous skips in the questionnaire. Testing for the study was conducted through two pretests, executed sequentially. In addition to informing questionnaire design, the pretests helped to test the sample drawn from Dun and Bradstreet, allowed a trial run of data collection processes and protocols, and assisted in testing the Computer Assisted Telephone Interview (CATI) instrument. The preparation for the initial pretest began in February 2004. Data collection took place over a five-week period during March and April 2004. The preparation for the second pretest began in early April 2004. The second pretest began in late April and was completed in early June 2004.

Prior to, during, and following each of the pretests, the questionnaire was reviewed. Mr. Smith from Smith and Associates assisted in these reviews. A number of changes were made as a result of the pretests.

Many activities were underway during the period prior to data collection. These activities included preparing training materials, preparing respondent materials, developing a website, obtaining letters of endorsement, specifying the receipt system and process, developing mail-out protocols, and recruiting, hiring and training interviewers.

Data collection had two phases: a screening phase and an interviewing phase. A stratified systematic sample of 37,600 businesses was selected from the Dun and Bradstreet master file according to specifications determined by NORC statisticians and FRB staff. This sample was designed to be large enough to accommodate all of the survey's needs, including the worst case response rate scenario. The stratification was designed to ensure that differences in the use of credit and financial services among firms of differing sizes could be measured. The sample specifications and sampling technique were tested with a sample of 2,000 businesses during the pretest phase of the study. During the main screening effort, 23,798 firms were drawn into batches and released. Sampling is described in detail in Chapter 6.
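
For readers unfamiliar with the technique, the following minimal Python sketch illustrates stratified systematic selection in general terms. It is not the production sampling code, and the field names (firm_id, stratum) and the allocation structure are hypothetical.

    import random

    def systematic_sample(records, n):
        """Select n records via systematic sampling with a random start."""
        if n <= 0 or not records:
            return []
        interval = len(records) / float(n)
        start = random.uniform(0, interval)
        positions = [int(start + i * interval) for i in range(n)]
        return [records[p] for p in positions if p < len(records)]

    def stratified_systematic_sample(frame, allocation, stratum_key="stratum"):
        """Draw a systematic sample within each stratum of the frame.

        frame      -- list of dicts describing firms (hypothetical layout)
        allocation -- dict mapping each stratum id to the number of firms to select
        """
        sample = []
        for stratum, n in allocation.items():
            members = sorted((r for r in frame if r[stratum_key] == stratum),
                             key=lambda r: r["firm_id"])  # fixed order before the systematic pass
            sample.extend(systematic_sample(members, n))
        return sample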

Screener data collection began on June 10, 2004 and ended January 21, 2005. The screening was designed to verify the name of the business owner and the physical address of the business, screen the business for eligibility to participate, identify the business's legal form of organization, and record the fiscal year-end date. In addition, the screener asked for an address to which Federal Express could deliver a package to the firm's owner and an email address for the firm. The screener included an open-ended question asking respondents to name the single most important problem currently facing their business.

The initial contact with respondents was a mailing containing two letters explaining the purpose of the survey and encouraging participation: one letter from Federal Reserve Board Chairman Alan Greenspan, and the other from the SSBF project director. The mailing also included a question-and-answer brochure and a buckslip. The green buckslip, the size of a No. 10 envelope, was meant to give the recipient the essential facts about the study even if none of the other materials were read.

Within a few days of receiving the mailing, businesses were called and asked to complete the screener. To ensure that interviewers reached the most knowledgeable screener respondent, the initial conversation with the person at the firm who answered the telephone call was fully scripted for the 2003 survey. A protocol was developed to ensure that at least three attempts were made to speak with an owner before accepting screener responses from a knowledgeable proxy.
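
As an illustration of the proxy rule only, a minimal sketch of the three-attempt logic might look like the following; the call-history fields are hypothetical, and the actual rule was enforced within the CATI screener and call-management protocols.

    OWNER_ATTEMPTS_REQUIRED = 3

    def proxy_allowed(call_history):
        """Permit a knowledgeable proxy only after at least three attempts to reach
        an owner. The call-history structure is hypothetical; the production rule
        was enforced directly in the CATI screener."""
        owner_attempts = sum(1 for call in call_history if call.get("asked_for_owner"))
        return owner_attempts >= OWNER_ATTEMPTS_REQUIRED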

The sample was ultimately fielded in four batches, with each batch subject first to screening interviews and then to interviewing eligible firms for the main study. In batches one through three, interviews were attempted during a first pass, and then nonrespondents were subsampled and recontacted during a second pass. This procedure was used to manage both screening and main interviewing. Batch four was added late in the project to compensate for lower than anticipated response rates. There was no second pass, nor any subsampling, for either screening or interviewing of batch four.
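
As a hedged illustration of the between-pass step only (the actual subsampling rates and groupings are documented in Chapter 6), pass-two selection of nonrespondents could be sketched as follows, with the rate retained for later weight adjustment:

    import random

    def select_pass_two(pass_one_nonrespondents, rate):
        """Randomly retain a fraction of pass-one nonrespondents for pass-two
        follow-up. The single rate shown is illustrative; the rates used in
        production varied by disposition group and batch (see Chapter 6)."""
        retained = []
        for case in pass_one_nonrespondents:
            if random.random() < rate:
                retained.append(dict(case, subsampling_rate=rate))  # kept for weighting
        return retained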

Main interviewing began June 29, 2004 and ended January 31, 2005. The main questionnaire interview collected information on the following:

• Eligibility determination2

• Organizational demographics

• Personal characteristics of owners

• Firm demographics

• Use of deposit services

• Use of credit and financing including credit cards, lines of credit, mortgages, motor vehicle loans, equipment loans, loans from partners or stockholders, and leases

• Most recent loan application that was approved and/or the most recent loan application that was denied, if either occurred within the last three years

• Use of other financial services including check clearing, credit card processing, brokerage services and trade credit

• Relationships with financial institutions

• Trade credit

• New equity investments in the firm

• Income and expenses

• Assets

• Liabilities and equity

• Credit history

• The primary owner's net worth and home value

• Respondent payment information

Prior to calling businesses to complete the main interview, NORC shipped a package via Federal Express to the business owner that included a financial worksheet to fill out before the interview to help expedite the interview and increase accurate responses. Within a few days of the worksheet mailing, businesses were called by telephone interviewers to answer any questions about the survey and attempt to complete the main interview.

Data preparation and delivery tasks began during the data collection period and were completed at the end of March 2005. Throughout data collection, periodic deliveries of the questionnaire data were sent to the FRB. Editing and coding activities spanned the data collection period and continued for about six weeks after data collection ended. The final data files, code books, and data documentation were sent to the FRB by March 31, 2005.

The procedures for producing weights were developed in collaboration with staff from the FRB. The final analysis weights included adjustments for eligibility and non-response to both the screening and main interview.
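
As a schematic illustration of this general approach (the actual adjustment factors and their derivations are given in Chapter 6, steps w_1 through w_10), a final analysis weight can be viewed as the product of a base weight and a sequence of adjustment factors:

    def cumulative_weight(base_weight, adjustment_factors):
        """Apply a sequence of multiplicative adjustment factors (batch selection,
        sample release, subsampling, eligibility, nonresponse, trimming) to a base
        weight, in the spirit of the w_1 through w_10 steps detailed in Chapter 6."""
        weight = base_weight
        for factor in adjustment_factors:
            weight *= factor
        return weight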


1.5 Summary of Final Outcomes

At the close of data collection, 13,864 of the 23,798 firms initially released for screening completed the screening interview. An additional 197 cases were determined to be ineligible even though the screening interview was not administered. Of the 14,061 firms for which eligibility was determined, 9,687 (69%) met the eligibility criteria for the study, a decrease from 1998, when 73% of screened firms were eligible. The final raw screener completion rate3 was 59%, compared with 69% in the 1998 survey. The final 2003 sample consisted of 4,268 firms with completed interviews and a weighted overall response rate of 32.4%4. Complete details on the preparation for and experiences of data collection, in addition to the survey outcomes, can be found in Chapters 3 and 4. More information on survey response can be found in Section 6.10.
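
As a quick arithmetic check of the eligibility figure quoted above (the derivation shown is an assumption based on the counts in this paragraph):

    # Assumed derivation of the 69% eligibility figure from the counts above.
    completed_screeners = 13864
    ineligible_without_screener = 197
    eligibility_determined = completed_screeners + ineligible_without_screener  # 14,061
    eligible_firms = 9687
    print(round(eligible_firms / eligibility_determined, 3))  # 0.689, reported as 69%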


1.6 Organization of the Report

To help the reader best understand the survey processes as they were carried out, the chapters are organized to present information in the order in which tasks were undertaken, with the exception of the sampling task, which is presented at the end of the report just prior to the conclusion. Chapter 2 gives information on the questionnaire and worksheet development. Data collection preparation and interviewer training are covered in Chapter 3. Chapter 4 describes the data collection activities and outcomes. Chapter 5 describes the data review and delivery process including data quality control, editing, checking for completeness, and coding. Sample specifications and characteristics, as well as detailed weighting specifications, procedures, and response rate calculations are found in Chapter 6. Finally, Chapter 7 provides conclusions about the processes used to conduct the study.


2 Questionnaire and Worksheet Development


2.1 Introduction

The SSBF was previously conducted for the 1987, 1993, and 1998 fiscal years5. The questionnaires from the previous surveys were used as the basis for the 2003 SSBF. Although much of the content of the 2003 questionnaire was identical to that of previous questionnaires, many changes were made from the 1998 instrument to reflect recent changes in the economy and in the delivery of financial services and to implement refinements in data collection procedures.

As in previous surveys, two instruments were prepared for 2003. The first was a short screening questionnaire (referred to as the "screener") that was used to establish contact with the firm, verify contact information (name and address), and establish firm eligibility. The screener was necessary because not all eligibility criteria were available from the sample frame, and because the frame data were sometimes incorrect. The second instrument was the main interview questionnaire, referred to as the "questionnaire." The questionnaire was administered to firms whose eligibility status had been confirmed in the screener, either by a business owner or by a proxy.

Questionnaire development for the 2003 SSBF screener and questionnaire began shortly after the contract award in October 2003. The FRB provided NORC with a modified version of the 1998 SSBF questionnaire, at which point NORC, the FRB, and an accounting consultant worked together to design the screener and questionnaire for both pretests and ultimately for the main data collection.

NORC and FRB staff engaged in ongoing questionnaire design over a nine-month period, between October 2003 and June 2004. The activities began with a review of the questionnaire in consultation with FRB and NORC staff and an outside small business accountant. Included in the design period was the programming of the two instruments into a computer assisted telephone interviewing (CATI) package. A significant amount of time was spent during the design period testing the CATI instrument. The questionnaires were reviewed for wording, format, flow, and skip logic. Following each pretest, a debriefing with the pretest interviewers helped inform additional changes.

Because of the technical nature of some of the questions asked in the questionnaire, eligible firms were mailed a worksheet and asked to complete it prior to the interview and to use it as an aid during the interview. At the end of the interview, respondents were encouraged to return their completed worksheet6 to NORC. Some changes to the 1998 worksheets were required. NORC and the FRB spent some time modifying the worksheet to reflect current tax laws and questionnaire changes.

This chapter first reviews the expert consultation and worksheet development, describes the two pretests, and then details the final phase of questionnaire development. The chapter ends with a brief description of the 2003 SSBF screener and questionnaire.


2.2 Expert Consultation

Charles Smith, of Smith and Associates, a CPA firm specializing in small business accounting, was retained to review the worksheet and questionnaire, and participated in numerous meetings with FRB and NORC staff to review the questionnaire and worksheets. These meetings were informed by basic questionnaire construction methods, our experiences during the two pretests, and the bookkeeping, tax and accounting practices of small businesses. The meetings took place over a period of several months. Most of the changes recommended by Mr. Smith were made prior to the pretests and main data collection. Mr. Smith revised the tax-form line-number references to match the appropriate tax forms and ensure that the worksheets and questionnaire reflected current tax laws and accounting practices.


2.3 Worksheet Development

As discussed previously, business owners were sent a worksheet to fill out prior to the questionnaire interview to aid the telephone interview. Each worksheet was a two-sided form that requested financial record data on one side and financial services and sources of financing on the other side. Each of four possible business types (sole proprietorship, partnership, S corporation and C corporation) had a unique worksheet. Although the worksheets were similar to those used in 1998, some changes were necessitated by changes in business tax law and tax forms. One difference from 1998 was that the worksheets no longer needed to be customized by fiscal year-end date since the tax forms for fiscal year end 2003 and fiscal year end 2002 were identical7. As a result, there were only four worksheet versions as compared to the ten required in 1998. In addition, the worksheets were somewhat redesigned so that the layout and flow of the worksheet questions were easier to follow, the instructions were more easily understood, and the explanations of how to use the worksheet as an aid in providing data were clearer.


2.4 Questionnaire Development

Owing to a number of changes in the 2003 study from the 1998 study, including changes in sample design, the implementation of new automated lookup procedures for firm and financial institution location, the offer of an incentive, and efforts to improve response rates, there were substantive changes required for both the screener and the questionnaire. The general organization and content of the screener and questionnaire are listed below in Sections 2.7 and 2.8, respectively.


2.4.1 Screener instrument

The major changes made to the 1998 screener are described in Table 2.1.

Table 2.1 Key Screener Changes from 1998
Key Change: Description
Proxy rule: Enforced, through explicit CATI programming, at least three attempts to contact an owner before seeking a knowledgeable proxy.
No minority or Hispanic ownership questions: No minority oversampling was done in 2003. In 1998 this caused a long delay between screening a firm and recontacting it to complete the main interview, which was thought to have exacerbated the difficulty of achieving the targeted response rate. The overall sample was increased in order to compensate for the loss of the oversample.
Collecting physical address of firm in screener and not in main: Decreased the time between screener and main; eliminated the need to verify a firm's physical location during the main interview.
Headquarters office questions refined: Only asked whether the sampled location was the headquarters location when the firm had multiple locations; not asked of single-location firms.
Firm name verified: Frame data on the firm name and phone number were noisy. Verifying the firm name allowed interviewers to confirm they had reached the sampled business.
Zip code lookup: CATI was programmed to match city, state, county and MSA to the zip code entered, to ensure the accuracy of the physical address data.
Most important problem facing firm: This question was moved from the questionnaire to the screener to engage the respondent more fully in the survey sooner.
FedEx address: The worksheet package sent to respondents prior to the questionnaire interview was sent via Federal Express to make the mailing more conspicuous and less likely to be lost. Once a firm was confirmed eligible, respondents were asked for the best address to send the package.
Email address: Email addresses were collected from eligible respondents to provide NORC with an additional method of contacting firms.

One substantive change in the screener resulted from a change in the sampling design. In 1998, minority businesses were oversampled. This requirement was relaxed in 2003 and hence questions on race and ethnicity were no longer needed in the screener8. Firm names and physical locations were verified to better deal with known problems with the D&B frame data. The question about the most significant problem facing the firm was moved from the main to the screener so that this information could be used to help convince firms to participate in the main questionnaire or later in the field period. In the end, it was not used in this manner. The introduction text was significantly enhanced and a protocol was implemented to ensure that interviewers made three attempts to speak with the owner before they were permitted to collect screener data from a knowledgeable proxy. These changes were made to maximize the number of screeners completed with the most knowledgeable screener respondent (an owner), reduce the number of screeners completed with someone at the firm other than the owner, and exercise greater control over the selection of proxy respondents to ensure data quality.
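
As an illustration of the zip code lookup described in Table 2.1, the following minimal sketch checks an entered city and state against a reference table keyed by zip code. The reference data and field names are hypothetical; the production check was implemented directly in CATI.

    # Hypothetical reference table: zip code -> (city, state, county, MSA).
    ZIP_REFERENCE = {
        "60603": ("CHICAGO", "IL", "COOK", "CHICAGO-NAPERVILLE-JOLIET"),
    }

    def check_zip(zip_code, city, state):
        """Return (ok, reference) so the interviewer can resolve any mismatch."""
        match = ZIP_REFERENCE.get(zip_code)
        if match is None:
            return False, None  # unknown zip code: interviewer re-asks
        ref_city, ref_state, county, msa = match
        ok = city.strip().upper() == ref_city and state.strip().upper() == ref_state
        return ok, {"city": ref_city, "state": ref_state, "county": county, "msa": msa}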


2.4.2 Questionnaire

A number of changes were made to the 2003 questionnaire from the one used in 1998. Table 2.2 provides an overview of the key changes. In addition to the changes in Table 2.2, NORC and the FRB made many improvements to the 2003 questionnaire including question wording, question-by-question (QxQ) help notes9, interviewer instructions, and question ordering.

Table 2.2 Key Questionnaire Changes from 1998
Questionnaire Section Key Changes from 1998 Survey
General Changes • Multiple reference periods were used. For balance sheet and income statement, the reference period was the most recent fiscal year, as it was in previous surveys.10 For most other questions (e.g., balances in accounts and loans, firm demographics) the reference period was the date of the interview. For the Most Recent Loan section, the reference period was within three years of the interview date.

• Dollar amounts were displayed in words and read back to respondents at every question that collected a dollar amount, rather than in a subset of these items.

A. Eligibility Determination • Added questions to compare current level of employment to level one year and three years ago

• No questions on firm's mailing address

• Revised introduction and initial READ statements

• Respondent incentive question added

C. Personal Characteristics of Owner • Separate questions asked about each owner, up to three owners for partnerships and corporations, two owners for sole proprietorships
D. Firm Demographics • Question about most important problem facing business moved to screener
N. Records • Question added about which tax form version the firm used in most recent fiscal year for corporations and proprietorships.

• Question added about whether firm's most recent fiscal year tax forms were audited by a professional accountant

F. Use of Credit and Financing • Questions added on rates paid on credit cards
MRL. Most Recent Loan • 2003 version included renewals of lines of credit if no other borrowing activity in last three years

• Reorganized to improve flow

G. Use of Other Financial Services • Credit card processing services split from transactions services

• Credit (and debit) card processing services added as separate service.

H. Relationship with Financial Institutions • Depository institution/branch lookup function added using Paradox database

• Questions about the length of relationship with financial institutions expanded to deal with recent merger activities of financial institutions

• Skip patterns changed to avoid duplication of questions for MRL-only institutions

I. Trade Credit • Skip patterns changed to be more efficient
M. New Equity Investments in the Firm • Skip patterns modified; each firm answered questions in only one of two subparts
P. Income and Expenses • Questions added on officer's compensation and salaries and wages

• Qualitative retrospective questions on sales and profits (1 and 3 years ago) added

U. Credit History • If no majority owner, questions on largest shareholder/partner/owner not asked.

• Skip patterns refined

T. Respondent Payment Information • Questions added on respondent incentive type and mailing address for incentive payment

Two general changes from 1998 were a change in the reference period for some questions and the automated read-back of dollar amounts in words. In 1998, the reference period for all questions was the latest fiscal year or fiscal year end. For some firms, this meant that data on account balances and loans outstanding, for example, would have to be obtained from records that were as much as 18 months old. In 1998, it appeared that many firms instead reported amounts and other data as of the interview date. Consequently, the reference period in 2003 was changed to the date of the interview (or the last statement date in the case of checking accounts and loans). Since the balance sheet and income data were tied closely to tax forms, the reference period for these items remained the latest fiscal year.

In 1998, NORC programmed CATI to translate numeric dollar amounts that had been entered by interviewers into words that interviewers were to read back to respondents to verify that the dollar amounts had been captured accurately. Large dollar amounts are often expressed using verbal "short-hand" such as "one million six." Such expressions are subject to misinterpretation by interviewers. This example could be understood to mean "$1,600,000" or it might be more literally interpreted to mean "$1,000,006" which is unlikely to be correct. Dollar amounts with many zeros are also subject to keystroke error. In 1998, the procedure for reading back amounts after they had been programmatically translated into words was applied to only a subset of the dollar amount items in the questionnaire. In 2003, this procedure was applied to all dollar amount items. Despite pressure from respondents to complete the interview as quickly as possible, interviewers were instructed that adherence to this procedure was critically important to ensure data quality, and this was strictly enforced through monitoring.
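
To illustrate the read-back idea, a minimal number-to-words sketch (not the SurveyCraft implementation) might look like the following; it handles whole-dollar amounts only.

    ONES = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
            "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
            "seventeen", "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

    def three_digits(n):
        """Spell out an integer from 0 to 999."""
        words = []
        if n >= 100:
            words.append(ONES[n // 100] + " hundred")
            n %= 100
        if n >= 20:
            words.append(TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else ""))
        elif n:
            words.append(ONES[n])
        return " ".join(words)

    def dollars_in_words(amount):
        """Spell out a whole-dollar amount (up to the billions) for read-back."""
        if amount == 0:
            return "zero dollars"
        parts, groups = [], ["", " thousand", " million", " billion"]
        chunks = []
        while amount:
            chunks.append(amount % 1000)
            amount //= 1000
        for i in reversed(range(len(chunks))):
            if chunks[i]:
                parts.append(three_digits(chunks[i]) + groups[i])
        return " ".join(parts) + " dollars"

    # A keyed value of 1600000 is read back as "one million six hundred thousand
    # dollars", so an erroneous keying of 1000006 ("one million six dollars")
    # would be caught during verification.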

In addition to these general changes, the most significant changes were a total revamping of the owner demographics section and the addition of an automated institution look-up procedure. The FRB redesigned the owner demographics and characteristics section (subsection C) to better match the U.S. Census Bureau's redesigned 2002 Survey of Business Owners. This was in part motivated by the Office of Management and Budget's guidelines regarding ethnicity and race information, which encourage agencies to collect information in a "check all that apply" format. Instead of collecting the ethnicity and racial information at the firm level, the redesign calls for collecting the information at the individual owner level. For the 2003 questionnaire, demographic data were collected on up to three owners/partners of corporations and partnerships compared to the majority owner and the firm as a whole in 1998. For sole proprietorships jointly owned by a husband and wife, the 2003 questionnaire collected demographics on both spouses. For any organizational type, if a single owner/partner/shareholder owned more than 50% of the firm, data were collected only for that person.

Another important change from 1998 was the addition of the automated look-up of depository institutions. The lookup function was added to improve the identification of financial institutions, minimize post-processing, and reduce the number of uncodable institutions. In previous rounds respondents often reported financial institution name and address data that were not sufficient to accurately determine geographic location, which is critical to market analysis. For 2003 the questionnaire included a link to a database of more than 100,000 U.S. depository institutions and branches. Working with respondents, interviewers searched this database to identify the exact location of institutions reported by respondents. Thanks in particular to the guidance of Dr. Mach during development and testing, NORC programmers delivered an effective look-up application that interviewers found easy to use.
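
As an illustration of the look-up idea only (the production application searched a Paradox database of depository institutions and branches from within CATI), a simple name-and-location search over such a table might be sketched as follows; the record layout is hypothetical.

    def search_institutions(branches, name_fragment, state=None, city=None):
        """Return candidate branches whose name contains the fragment, optionally
        filtered by state and city, ordered for interviewer review. The record
        layout (name, city, state) is hypothetical."""
        frag = name_fragment.strip().upper()
        hits = [
            b for b in branches
            if frag in b["name"].upper()
            and (state is None or b["state"].upper() == state.upper())
            and (city is None or b["city"].upper() == city.upper())
        ]
        return sorted(hits, key=lambda b: (b["name"], b["city"]))

    # Example call: search_institutions(branches, "First National", state="IL", city="Chicago")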

NORC suggested and the FRB agreed to re-word many questions to make them more conversational and therefore easier to read and comprehend. Some response frames that were read to respondents were reordered so that the most likely and logical responses were read first. The FRB requested that one of the loan types be moved from near the beginning of its section to the end. In addition, questions were added at the end of the survey to capture respondent incentive information.

Several changes were made to the questionnaire after the two pretests. These are described in more detail in the sections below. Although the design of the questionnaire was, for the most part, completed before data collection began for the main study, some changes were needed throughout data collection. Some of these changes were driven by changes in data collection activities, such as increasing the amount of the respondent incentive in order to increase the success rate of refusal conversion efforts. Because of how the questionnaire was written, this change required that interviewer prompts be rewritten to reflect the higher amounts and that a response category in several questions be renamed.


2.5 CATI Development

Programming for the 2003 SSBF screener and questionnaire began in early January 2004 and continued into September 2004. The CATI program manager attended meetings in which the questionnaire content was reviewed and changes discussed. The program manager asked questions about presentation, response categories, allowable ranges, and consistency and contingency checks. The desired specifications were noted in a hard-copy questionnaire and then documented electronically in the programmer's log. The specifications were then translated into SurveyCraft code (the programming language used for the CATI instrument). As changes were made to the questionnaire, this process was repeated until the questionnaire was deemed final.

The screener was the first instrument to be completed. Several changes were made to the screener between the first and second pretests, but since the screener was short, and the changes were few and straightforward, additional programming was minimal.

Programming for the questionnaire was much more complex and time-consuming than for the screener. This was largely a function of the inherent complexity of the questionnaire design as well as the inclusion of the zip code and bank branch look-up procedures (see Section 2.6.5 for more details). In particular, the complexity of the questionnaire required the programming of many consistency and contingency checks. One key aspect of this complexity was the way in which items such as firm zip code were collected in the screener, preloaded into the questionnaire, and then used to drive complex skips in the questionnaire. To effectively test the functionality of these skips, and to take best advantage of the time available for testing, the testing plan needed to include a carefully designed set of preloaded questionnaires at an early stage, or the questionnaire needed to allow data entry of the preloaded items, so that specific scenarios could be rigorously tested. NORC recommends that future surveys be planned to make time and budgetary provisions for effective and comprehensive testing, recognizing the complexity of this endeavor.
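
To illustrate how preloaded screener items can drive questionnaire skips and how targeted preloads support testing, the following toy sketch uses the legal form of organization collected in the screener to set the number of owners asked about in Section C (see Table 2.2). The field name and values are hypothetical, and the logic is far simpler than the production instrument.

    def owners_to_ask_about(preload):
        """Toy skip rule driven by a screener item preloaded into the questionnaire:
        sole proprietorships collect data on up to two owners, other organizational
        forms on up to three (see Table 2.2)."""
        return 2 if preload["legal_form"] == "SOLE_PROPRIETORSHIP" else 3

    # A small set of preloaded test cases, one per skip path.
    TEST_PRELOADS = [
        ({"legal_form": "SOLE_PROPRIETORSHIP"}, 2),
        ({"legal_form": "PARTNERSHIP"}, 3),
        ({"legal_form": "C_CORPORATION"}, 3),
    ]
    for preload, expected in TEST_PRELOADS:
        assert owners_to_ask_about(preload) == expected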

CATI testing began in March, before the first pretest, and continued into December for changes made during the main data collection. The testing protocol followed these general steps:

1) The programmer tested the CATI program to be sure that it was performing according to specifications delineated in the questionnaire.

2) Project and production staff were assigned specific sections of the instrument to test, and each potential path within a subsection was systematically reviewed and tested. Additionally, question text was reviewed to be sure that it matched the hard-copy questionnaire. One person was responsible for maintaining a log of errors. Errors were reported to the programmer on a flow basis.

3) The programmer made corrections to the text and code and released updated versions of the program for further testing.

4) After iterative testing cycles, culminating with written final approval by the FRB, the revised instrument was released for data collection.

FRB staff played a key role during the testing and the implementation phases of the questionnaire design process. NORC sent the FRB four laptop computers on which the CATI program had been loaded - to aid with testing and to provide a platform for remote interviewer monitoring. Originally, it was anticipated that the FRB would review a version of CATI that had already been tested and reviewed by NORC, but due to time constraints the FRB tested the program simultaneously with NORC's testers. The testing efforts of NORC and the FRB were coordinated and new versions were delivered for testing as changes were implemented by NORC.

Even with extensive testing, some errors in the logic and skip patterns were not detected until data collection started. In appropriate instances when these errors were detected after data had been collected from a respondent, NORC recontacted the respondent and retrieved corrected information. In addition, NORC made several updates to the CATI instruments during data collection; see Table 4.11 and Table 4.26 for a summary of the changes made to the screener and main questionnaire during production.


2.6 Pretests One and Two11


2.6.1 Introduction

Two pretests were conducted during questionnaire development. The pretests had several objectives: testing the matching of InfoUSA data to Dun & Bradstreet (D&B) data to improve sampling, informing the design of the screening and data collection questionnaires, evaluating the quality of the Dun & Bradstreet sample, testing the respondent materials to see if they encouraged participation and assisted interviewers in answering questions about the study, and testing our processes and protocols for the main data collection effort.

Data collection was conducted for the first pretest between March 10 and April 6, 2004, and for the second pretest between May 11 and May 27, 2004. Assessing completion rates was not an objective of either pretest. Rather, NORC wanted to complete 50 main interviews as quickly as possible with small business owners whose businesses were determined to be eligible for the survey based on the screening interview.


2.6.2 Testing Matching InfoUSA Data with D&B Data

Because NORC's experience with D&B data - the source of the sample frame - suggested that these data contained errors, one of which was the inclusion of firms no longer in business, NORC's original proposal called for matching D&B data to InfoUSA data. The idea was to use a second source of information to identify firms with a lower probability of being in business, so that these firms could be subsampled in an effort to reduce data collection costs. Firms in the D&B file for which no match could be found in the InfoUSA file would be considered suspect in terms of still being in business. The premise of such a match was that two sources showing a business was in operation increased the ex ante probability that the business was genuinely in operation. Thus, the original approach involved matching the D&B sample to InfoUSA, treating all matches as very likely in operation and including those in the fielded sample with certainty. Businesses in D&B with no match in InfoUSA would be assumed to be less likely to be in business and would therefore be subsampled at a lower rate. This method would reduce the number of attempts to contact firms which had gone out of business.
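The match-then-subsample idea can be sketched as follows; the record layout, single-field matching rule and 50 percent subsampling rate are hypothetical (the actual procedures compared combinations of telephone number, company name and street address, as described in the next paragraph):

import random

random.seed(2003)

def matches_infousa(db_record, infousa_phone_index):
    # Hypothetical rule: treat a D&B record as matched if its phone number
    # appears in the InfoUSA index.
    return db_record["phone"] in infousa_phone_index

def select_fielded_sample(db_sample, infousa_phone_index, unmatched_rate=0.5):
    # Matched records are retained with certainty; unmatched records, presumed
    # less likely to still be in business, are subsampled at unmatched_rate.
    fielded = []
    for rec in db_sample:
        if matches_infousa(rec, infousa_phone_index) or random.random() < unmatched_rate:
            fielded.append(rec)
    return fielded

db_sample = [
    {"duns": "1", "phone": "3125550101"},
    {"duns": "2", "phone": "2025550123"},
    {"duns": "3", "phone": "7735550199"},
]
infousa_phone_index = {"3125550101", "2025550123"}
print(len(select_fielded_sample(db_sample, infousa_phone_index)))  # matched firms are always retained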

NORC took advantage of pretest one to test the InfoUSA matching procedures. NORC and InfoUSA identified three matching procedures to test. The procedures varied the items that were matched (the number of telephone number digits, the characters in a company name and/or street address) and the minimal threshold for determining matches. NORC also tested the usefulness of a flag in the InfoUSA database indicating that a firm was out of business. For a more detailed discussion of these tests and the results, see Section 6.11.2.

Based on the results from pretest one, NORC concluded that the information generated by matching D&B data to InfoUSA data would not enhance the sample design or operational efficiency of data collection. Generally, the InfoUSA options proved unwieldy, unworkable or unreliable (see Section 6.11.2). Sometimes illogical results were obtained, such as a lower number of matches when applying "looser" matching criteria. InfoUSA's internal procedures were not transparent, so unexpected results could not be effectively analyzed and corrected. Accordingly, the FRB and NORC decided that no matching to InfoUSA data should be undertaken. In the absence of proof of the benefit of implementing this procedure, neither the costs nor the added complexity to the sample design could be justified.


2.6.3 Pretest Sample Selection

In order to complete 50 main interviews in the short data collection period established for each pretest, NORC initially selected a larger-than-necessary pretest sample of 1,000 businesses. As explained below, this initial selection was later doubled to 2,000. The pretest sample consisted of a stratified random sample selected from the Dun & Bradstreet master file (this file is described in detail in Chapter 6). There were two specific features of the sample file for the pretests:

• The listings were selected in equal numbers from "buckets" of two sizes: 1-19 employees, all sites (500 firms), and 20-499 employees, all sites (500 firms)

• The selections were systematically random, and listings were kept as a "deletion file" to avoid duplication with our main sample draw.

Sample selection for the pretests proceeded smoothly. From the frame of over eight million records, NORC initially selected a sample of 500 companies for pretest one (later increased to 750 companies; see Section 2.6.3.1). Six strata were created by crossing two employment-size categories (0-19 employees, including firms of unknown size, and 20-499 employees) with three business-type categories (sole proprietorship, partnership and corporation).

Table 2.3 Sample Strata for 750 Cases Used in Pretest One
Organization Type 0 - 19 employees1 20 - 499 employees Total
Sole proprietorship2 126 125 251
Partnership 123 125 248
C corporation/S corporation 127 124 251
Total 376 374 750

1Includes cases with unknown number of employees based on D&B data. Return to Table
2Includes cases with unknown organizational type based on D&B data. Return to Table


2.6.3.1 Pretest One

For pretest one, the frame was sorted by SIC code within strata, and selections were then made systematically within strata. The same procedure was followed for pretest two. Table 2.3 above shows the number of businesses in each stratum for the entire sample used for pretest one.
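A minimal sketch of this selection step, with a hypothetical stratum and hypothetical SIC codes, is shown below; in production the systematic selection would use a random start and operate on the full frame within each of the six strata:

def systematic_sample(stratum_records, n_select, start=0):
    # Sort the stratum by primary SIC code, then take every k-th record
    # beginning at the start index (a random start would be used in practice).
    ordered = sorted(stratum_records, key=lambda r: r["sic"])
    k = len(ordered) / n_select  # sampling interval
    return [ordered[int(start + i * k) % len(ordered)] for i in range(n_select)]

stratum = [{"duns": str(i), "sic": f"{5800 + (i % 40):04d}"} for i in range(1000)]
selected = systematic_sample(stratum, 125)
print(len(selected))  # 125 selections from this illustrative stratum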

During pretest one, interviewers initially attempted to screen 500 businesses over a four-day period. After four days of screening, NORC had identified only about 100 eligible firms compared with 200 expected. Although the eligibility status of most of the pretest sample was unresolved at this point, NORC decided to add an additional 250 cases to the sample to ensure that 50 main interviews were completed within the pretest one timeframe. These 250 cases were drawn from a sample of 500 cases originally intended for the second pretest.

To select the 250 cases to add to the pretest one sample, the 500 cases originally intended for pretest two were first randomly ordered (uniform distribution) and then split into replicates of size 50 each. Five of these replicates became the sample for part two of pretest one. The remainder was set aside for use in pretest two. Though each of the replicates was a valid random subsample of the original systematically sampled cases, due to time constraints there was no attempt to control the number of cases selected into each replicate by stratum or primary SIC distribution.
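The random ordering and replicate construction described above can be sketched as follows (case identifiers are hypothetical):

import random

random.seed(42)

reserve = [f"case_{i:03d}" for i in range(500)]  # the 500 cases originally intended for pretest two
random.shuffle(reserve)                          # uniform random ordering

replicates = [reserve[i:i + 50] for i in range(0, len(reserve), 50)]   # 10 replicates of 50
pretest_one_part_two = [c for rep in replicates[:5] for c in rep]      # 5 replicates = 250 cases
held_for_pretest_two = [c for rep in replicates[5:] for c in rep]      # remaining 250 cases

print(len(pretest_one_part_two), len(held_for_pretest_two))  # 250 250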


2.6.3.2 Pretest Two

Because of the additional sample used during pretest one, additional sample had to be selected for pretest two. Prior to selecting another 1,000 cases from the frame of 8,021,303 firms, the previously selected 1,000 pretest cases were removed from the frame. NORC used the same procedures to select the second group of 1,000 cases.

In total, 600 firms (the 250 that had been set aside from the first 1,000 pretest cases and 350 from the second 1,000 pretest cases) were selected for pretest two. Table 2.4 shows how the pretest two cases were stratified by organizational type and employee size. Though it became unnecessary, the remainder of the second 1,000 selected cases (650) was set aside for future pretest use.

Table 2.4 Sample Strata for 600 Cases Used in Pretest Two
Organization Type 0 - 19 employees1 20 - 499 employees Total
Sole proprietorship2 102 98 200
Partnership 99 101 200
C corporation/S corporation 100 100 200
Total 301 299 600

1Includes cases with unknown number of employees based on D&B data. Return to Table
2Includes cases with unknown organizational type based on D&B data. Return to Table


2.6.4 Pretest Data Collection

During pretest one, NORC interviewers completed 398 screeners from the sample of 750. Of those, 302 businesses were eligible, and 52 completed the main interview (Table 2.5). An additional 19 main interviews were partially completed. During pretest two, NORC interviewers completed 322 screeners; 253 of those businesses were eligible and 66 completed the main interview; an additional 9 cases were partially completed.

Table 2.5 Pretest Eligibility and Completion Rates
  Cases Released Screener Completed Screened Eligible % Eligible Completed Questionnaire % Complete
Pretest One 750 398 302 76% 52 17%
Pretest Two 600 322 253 79% 66 26%
Total 1350 720 555 77% 118 21%

Average hours per case (HPC) - defined as all interviewer time spent on a case, including making callbacks, leaving messages, recording call notes, and so forth - declined between the pretests. As Table 2.6 shows, the number of hours per completed screener for the second pretest was less than for the first pretest, 0.9 vs. 1.1. In addition, as shown in Table 2.7, the amount of time to complete the main interviews was significantly less in the second pretest than in the first pretest. The factors contributing to the lower hours per completed case in the second pretest were: 1) more experienced interviewers; 2) shorter intervals between callbacks due to the need to finish the pretest and begin the main study; and 3) changes made between pretests to improve readability, flow and respondent cooperation.

Table 2.6 Screening Hours per Case During Pretests
  Interviewer
hours
Total completed
cases
Hours per
completed case
Pretest One 451 398 1.1
Pretest Two 292 322 0.9
Total 743 720 1.0

Table 2.7 Questionnaire Hours per Case During Pretests
  Interviewer
hours
Total completed
cases
Hours per
completed case
Pretest One 263 71 3.7
Pretest Two 189 66 2.9
Total 452 137 3.3
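The hours-per-case figures in Tables 2.6 and 2.7 are simple ratios of total interviewer hours to total completed cases, rounded to one decimal place; the short check below reproduces the reported values:

def hours_per_case(interviewer_hours, completed_cases):
    return round(interviewer_hours / completed_cases, 1)

print(hours_per_case(451, 398), hours_per_case(292, 322))  # screener: 1.1, 0.9
print(hours_per_case(263, 71), hours_per_case(189, 66))    # questionnaire: 3.7, 2.9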


2.6.5 Pretest Findings

Details about the two pretest instruments, respondent materials, survey processes and protocols, and debriefings are included in The 2003 Survey of Small Business Finances Pretest I Report and The 2003 Survey of Small Business Finances Pretest II Report. These reports were delivered to the FRB in May and October 2004, respectively. Notes from the interviewer debriefing following pretest one can be found in Appendix A.

The interviewer debriefing for pretest two was much shorter and less formal than for pretest one and no formal minutes exist from that debriefing12. However, the action items that emerged from a meeting with interviewers to discuss pretest two training and related issues can be found in Appendix B.

Pretest interviewers were briefed both before and after each pretest; FRB staff attended the briefings and debriefings. The briefings consisted of a review of the screener, the questionnaire, and the materials sent to respondents and closed with a question-and-answer period. The debriefings entailed walking through each screener and questionnaire item, allowing interviewers to describe the ease or difficulty of administering each item, how respondents reacted, and how interviewers handled problems. The pretest interviewers, project staff, and FRB staff suggested many modifications during the debriefing meetings.

The screening questionnaire was tested extensively during the two pretests, with particular attention to the performance of the CATI program. Of primary interest were how well the instrument helped interviewers get past gatekeepers to reach an appropriate survey respondent, the wording and order of the questions, and the performance of the zip code look-up, particularly its accuracy and response time. A number of changes were made to the screening questionnaire as a result of the two pretests. In addition, several changes were made to materials to help interviewers gain cooperation. The most notable changes to the screener and materials are listed below:

• The introductory script was shortened and revised to be more effective at gaining cooperation. Interviewers were provided with optional responses on their CATI screen, to overcome objections and be better able to tailor the introductory script to meet the needs of specific cases.

• Interviewers did not have to identify themselves or NORC until they were speaking to a potential respondent; i.e., an owner or proxy owner. Interviewers were instructed to introduce themselves as few times as possible (ideally, just once) to the owner or proxy owner. Following standard business protocol, whenever the owner's name was known, interviewers would simply ask to speak with the owner by name initially and then identify themselves and the reason for the call as needed.

• Response options were expanded. In pretest two, interviewers had the option to speak to another owner who may have been present if the D&B-listed owner was unavailable. In addition, a response category was added to the first question of the screener and main questionnaire for situations in which an owner was never available during the data collection period; in those situations, an interviewer was given the option to immediately seek out a proxy.

• The request for the owner's email address was softened, and interviewers were allowed to explain the benign purpose13 of the request (in pretest two, 50% of respondents provided an email address in the screening interview, up from 31% in pretest one). Although the earlier version of the text did not result in any respondents refusing to complete the screener, interviewers said that the previous text was too blunt and might have made respondents less likely to cooperate for the main questionnaire.

• A job aid for interviewers was added that listed the top-ten reasons to participate in the SSBF (see Appendix C.11). These reasons were developed in a brainstorming session of supervisors and key members of the SSBF project team, and were based on overcoming the most frequently heard objections to participating in the survey.

• Changes were made to the project director letter mailed with the worksheet to emphasize that many owners, especially those of the smaller businesses, were unlikely to need to complete all the worksheet items. The purpose of this change was to mitigate the prospect of owners being intimidated by the worksheet.

The main interview questionnaire was also tested extensively during the pretests. Of particular interest were the introduction script, the institution look-up, question order, question wording, questionnaire logic, and the help text, known as QXQ instructions, which contained answers to potential respondent questions and additional background information. Many changes were made to the main interview questionnaire between the pretests and after the second pretest. The major pretest findings regarding the CATI instrument are highlighted below:

Zip Code Look-up. The zip code look-up feature was a Paradox application used to verify the accuracy of the zip codes reported by the respondent for the sampled firm and the financial institutions used by the firm in both the screener and the questionnaire. The application was first called in the CATI version of the screening questionnaire at question A11.1.1, when confirming and capturing the physical address of the firm. This sequence was repeated at question A3 of the main instrument, but only if the screener had been completed by a proxy owner. The zip code look-up was next called at question A3.3.1 of the main interview questionnaire, when the physical address differed from the mailing address. Finally, the zip code lookup was called during financial institution lookup in the main interview questionnaire.

The zip code look-up linked a reported zip code to the following information: 1) city; 2) state; 3) county (in cases where multiple counties existed in a zip code, it selected the largest county); 4) MSA14; 5) NECMA15; and 6) FIPS16 code. The zip code look-up worked well during both pretests, for screening and main interviews. Interviewers reported negligible response time, and respondents confirmed that the correct city and state were being returned by the program, establishing the accuracy and completeness of the zip code database.
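A minimal sketch of the look-up step follows; the production application was built in Paradox, not Python, and the zip codes and associated geographic codes below are illustrative only:

ZIP_DB = {
    # zip: (city, state, county, MSA, NECMA, FIPS county code) - hypothetical entries
    "60603": ("Chicago", "IL", "Cook", "1600", None, "17031"),
    "20551": ("Washington", "DC", "District of Columbia", "8840", None, "11001"),
}

def zip_lookup(zip_code):
    # Return the geographic attributes linked to a reported zip code, or None
    # so the interviewer can re-ask or correct the entry.
    return ZIP_DB.get(zip_code)

record = zip_lookup("60603")
if record:
    city, state, county, msa, necma, fips = record
    print(f"Confirm with respondent: {city}, {state}")  # interviewer read-back step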

Headquarters Location. Pretest one interviewers reported significant confusion among respondents at single-location firms when asked if the D&B-listed firm was the "headquarters or main office" of the firm. To eliminate this confusion, the screener was rewritten to first ask if the firm had more than one location (as did 82% in pretest two screening); if not, that location was assumed to be the headquarters or main office and the potentially confusing question was never asked. This helped to reduce the likelihood of categorizing an eligible firm as ineligible as a result of a respondent misunderstanding.

Physical Address of Firm. Also of interest in the screener during pretest two was verifying that we had the correct physical address of the firm's headquarters. A firm's physical address might have differed from the firm's mailing address, which could have been a post office box or rural route number. The wording of the questions in this section emphasized the need for a physical address, and we added interviewer prompts to read when a respondent indicated that the firm's physical address was a rural route or P.O. Box. Following the section on the firm's physical address, the screener asked the respondent to provide an address to which Federal Express could send a package to the owner.

Branch/Institution Look-up. One of the challenging parts of the main interview questionnaire was the branch/institution look-up and the subsequent logic skips in subsection H. The branch/institution look-up was a real-time-accessible database that assisted interviewers in identifying the correct branch or main office of depository institutions used by a firm. Initially, interviewers found it hard to identify the branch when the search brought up multiple main offices of financial institutions in different states. As a result of the pretests, the search functions were redesigned to be easier for interviewers to use.

SKIP57. More difficult to correct was the logic in the skip following the institution look-ups in subsection H. This skip, known as SKIP57, was intended to ensure that a distance question was always asked in the correct situation and not asked unnecessarily in other situations. The distance question was to be asked when the firm's main office and the most frequently used branch of the depository institution being discussed resided in the same MSA or county.

This skip did not work correctly in all instances until after data collection began for the main study. While the concept was fairly straightforward, getting the logic of the skip to work correctly proved challenging. The skip needed to accommodate a wide variety of situations, including unusual ones, such as when a respondent could not identify a financial institution's location because he or she dealt with the institution entirely by telephone or over the internet. When the zip codes of both the institution and the firm were known, another look-up function would determine if a match existed for either MSA or county. In addition, the skip was affected by another problem in the CATI instrument: when the zip code, MSA, county, city and state were updated in the screener for the firm's physical address, the updated information did not initially populate the fields that were preloaded from the screener into the main questionnaire. This affected the logic of SKIP57, which was location dependent. During the main study, NORC recontacted businesses whose data had been collected before this problem was fixed and obtained corrected data.
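The intended SKIP57 decision rule can be sketched as follows; the data structures and function name are hypothetical stand-ins for the CATI logic:

def ask_distance_question(firm_loc, branch_loc):
    # firm_loc / branch_loc are dicts with 'msa' and 'county' keys, or None when
    # the respondent dealt with the institution only by telephone or internet.
    if firm_loc is None or branch_loc is None:
        return False  # location unknown: skip the distance question
    same_msa = firm_loc["msa"] is not None and firm_loc["msa"] == branch_loc["msa"]
    same_county = firm_loc["county"] == branch_loc["county"]
    return same_msa or same_county

firm = {"msa": "1600", "county": "17031"}
branch = {"msa": "1600", "county": "17031"}
print(ask_distance_question(firm, branch))  # True: same MSA and county
print(ask_distance_question(firm, None))    # False: internet-only institution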


2.7 2003 SSBF Screener

The screener questionnaire can be found in Appendix D. The median administration time for the screener was 11.2 minutes and the mean administration time was 13.2 minutes17. (See Appendix E for an explanation of how these statistics were calculated.) The 2003 SSBF screener accomplished the following:

• Confirmed the firm name, the firm's physical address, and the name of at least one owner

• Screened for whether the business was currently in operation

• Screened for whether the business was in operation during December 2003 under one or more of its current owners

• Screened for whether the firm's headquarters had been contacted

• Screened for for-profit businesses that were not government entities

• Screened for firms that were not owned (at least 50%) by another firm

• Screened for the appropriate size of business

• Identified the business's fiscal year end date

• Identified the single most important problem facing the business currently

• Identified the email address of the owner

• Identified the firm's Federal Express address

Although most screener questions used in 2003 closely resembled those used in 1998, a major change was that the demographic information of the owner was not collected during screening. Minority oversampling was dropped for the 2003 survey, so there was no need to collect this information during screening. In addition, NORC reviewed the 1998 version of the screening questionnaire to update the eligibility questions for 2003, identified and removed questions that no longer applied, improved the wording on a few questions and interviewer scripts to make them clearer and more respondent-friendly, and mentioned the financial incentive in the closing statement to eligible firms.


2.8 2003 SSBF Questionnaire

The 2003 SSBF questionnaire had a median administration time of 57.6 minutes and a mean administration time of 59.1 minutes18. It would be impractical to attempt to describe all of the types of information asked for in the questionnaire; for that level of detail, one should consult the questionnaire itself. This section highlights the questions and purposes of each section of the questionnaire. The questionnaire comprised the following sections:

Section I: Characteristics of the Firm

• A: Eligibility Determination. Questions in this section were asked of those businesses that had been screened by a proxy or firms that indicated an ownership change had occurred since the initial screening; the questions were asked again to be certain the business was eligible to participate in the study. Regardless of who completed the screener (proxy or owner), this section asked questions about owners and others working in the firm, the use of temporary and contract labor, and retrospective questions on employment one year and three years ago. Firms were also asked which incentive they would prefer. (Incentive information was confirmed at the end of the interview as well.)

• B: Organization Demographics. This section confirmed or collected the principal activity of the firm, from which a standard industry classification could be made. This section also captured the fiscal year end date, which set the reference period for the balance sheet and income sections of the questionnaire. The organizational form of the business was determined so that firms could be classified into one of four major groups: sole proprietorship, partnership, S corporation, or C corporation.

• C: Personal Characteristics of the Owner(s). This section collected basic information - such as race, sex, age, highest level of education, and number of years managing/owning a business - about the majority owner, if one person owned more than 50% of the firm, or about up to three owners, if no majority owner existed. This section also collected information on the number of owners/partners/stockholders, whether a corporation was publicly traded, and how and when the firm was founded by its current owners.

• D: Firm Demographics. This section collected information about the number of sites, the geographic region served by the business and how the business used computers.

• N: Records. This section contained questions about what records the respondent would be using for the remainder of the interview, the specific tax forms used when filing taxes, and whether the financial statements were audited or not. Interviewers reported that respondents sometimes used more records than they indicated at this question19.

Section II: Sources of Financial Services

• E: Use of Deposit Services. The name of the financial institution for each savings and checking account held by the business was captured along with the dollar amount in each account. In this and the other subsections of Section II, the interviewer asked the respondent to give an estimate if the respondent could not state an exact dollar amount, and all dollar amounts were displayed in words and read back to respondents.

• F: Use of Credit and Financing. This section collected information about the use of personal credit cards for business purposes, the use of business credit cards, and the interest rates paid on business and personal credit. It also identified the institutions from which the firm obtained lines of credit, capital leases, mortgages, vehicle and equipment loans and other loans, along with the account characteristics associated with each source, such as outstanding balances, guarantees and collateral20.

• MRL: Most Recent Loan Application. Information was collected about the most recent loan application approved in the last three years and the most recent loan application denied in the last three years. This information included the name of the institution at which the firm applied for a loan, the loan amount and maturity, the need for collateral or a guarantor, the interest rate and the amount of any fees. Information about the length of the relationship with the institution at the time of the loan application, as well as how the firm conducted business with it (e.g., in person), was also collected. Respondents were asked about new loans; if there were no new loans, they were asked about the most recent renewal of an existing line of credit.

• G: Use of Other Financial Services. This section asked if the firm used transaction services, cash management, and services related to credit, trusts, and brokerages. A new subsection about the use of credit card processing services was also included. For each service used, the name of the associated financial institution(s) was captured.

• H: Relationships with Financial Institutions. The most important source of financial services for the business - as defined by the respondent - was determined and the characteristics of up to eight depository institutions were collected. The institution look-up, which attempted to identify the exact branch location of the depository institution used from a database of more than 100,000 branch locations, was conducted in this section.

• L: Trade Credit. This section captured general information about the firm's use of trade credit and, if trade credit was used, specific account terms with the firm's most important trade credit supplier.

• M: New Equity Investments in the Firm. This section captured information about additional equity capital invested in the firm in the past year and the primary use of the capital.

Section III: Income and Expenses

• P: Income and Expenses. Data were collected about the firm's income and expenses during the most recent fiscal year. Income, other income, total costs of conducting business, officers' compensation and salaries and wages were collected. Qualitative retrospective data on sales and profits one and three years ago were also collected. As was true throughout the interview, the dollar value was displayed on the computer screen, in narrative form, and the interviewer read the amount back to the respondent. When respondents could not report a precise amount, they were asked to give an estimate. If the respondent could not or refused to give an estimate, the interviewer read a series of dollar ranges and asked the respondent to select the range that most closely matched their answer.

Section IV: Balance Sheet, Credit History and Respondent Payment

• R: Assets. This section asked about the firm's assets.

• S: Liabilities and Equity. This section asked about the firm's liabilities and equity.

• U: Credit History. This section asked about the firm's recent credit history and the principal owner's recent credit history21.

• T: Respondent Payment Information. This section identified which token of appreciation the respondent selected, either the financial incentive or the package of D&B reports designed for small businesses. For respondents choosing the former, the interviewer confirmed the name to put on the check and the best mailing address for sending it. For respondents choosing the latter, the interviewer provided the D&B website address and the passcode needed to access the site and obtain the reports at no charge.


3 Data Collection Preparation and Interviewer Training


3.1 Introduction

This chapter describes the activities required to prepare for data collection. These activities included: 1) developing respondent materials; 2) obtaining endorsement letters and sending a press release; 3) creating 2003 SSBF websites; 4) establishing toll-free phone numbers and email addresses; 5) recruiting and hiring interviewers; 6) developing interviewer training materials; 7) training and certifying interviewers to conduct screeners; and 8) training and certifying interviewers to conduct main interviews.

Identifying the information most important to share with respondents, and presenting this information in a simple, attractive, and professional form was critical to the study's success. NORC and the FRB developed materials that explained the study and attempted to convince respondents to participate. This information was presented in letters, brochures and pamphlets, and posted on an exclusive SSBF website designed by NORC and the FRB. These postings included letters of endorsement obtained by the FRB from the Small Business Administration and the National Federation of Independent Businesses, as well as a letter from the National Business Association that could be used to gain the cooperation of respondents, a letter from Federal Reserve Board chairman Alan Greenspan, and a letter from the NORC project director. In addition, the FRB prepared and sent a press release announcing the survey.

Finding the right people to collect these complex data and providing them with good training was also critical to success. NORC spent a significant amount of time preparing for interviewer training. Staff from NORC's human resources department and survey operations center recruited and hired interviewers, developed training materials, and conducted interviewer training in preparation for screener and questionnaire data collection.

Also during this period, NORC planned the receipt and storage of returned worksheets and financial records as well as undeliverable mail.

In this chapter, we describe the activities in which staff engaged to support screener and main questionnaire data collection. These included developing materials for respondents, interviewer recruiting and hiring, and interviewer training. All of these activities fell under two broad areas: developing respondent materials and developing interviewers. Activities in these two areas occurred during the same period and are summarized in Table 3.1.

Table 3.1 Outline of Activities for Developing Respondent Materials and Interviewers
Develop Respondent Materials Develop Interviewers
• Review existing materials; changing, adding or substituting as appropriate • Hire and train interviewers for pretests
• Prepare advance mailing and worksheet mailing • Certify pretest interviewers
• Create 2003 SSBF website • Hire and train interviewers for main study
• Establish hotline and fax line • Certify main study interviewers


3.2 Develop Respondent Materials

Soon after contract award, NORC and the FRB discussed the importance of developing a strategy that would assist us in gaining the cooperation of business owners. One of the strategies we discussed was designing materials to send to respondents in advance of our telephone contacts that would persuade them to participate in the survey. We also discussed the need to have one set of materials for the screener and another set for the longer, more complex main interview. The information needed to state the goals of the survey and the uses of the data in an easily understandable format. The materials needed to be attractive, professional, and designed to appeal to business owners.


3.2.1 Logos

First we decided on standard logos that would be used on all materials, making them easily identifiable with the project. NORC prepared two project logos for the 2003 survey, both based on the 1998 project logo. One logo, with no reference to year, appeared only on worksheets. The reference to year was removed to allow respondents to focus on the definition of the fiscal year without distraction. The other version, in which 1998 was replaced by 2003, was used on other materials including the worksheet mailing folder, the letterhead for letters to respondents, and the General Information and Frequently Asked Questions (FAQ) brochures. Word graphics of the two SSBF logos can be found in Appendix F.


3.2.2 Endorsement Letters and Press Release

The next steps were to identify the information that we thought would best explain the study and convince respondents to participate, and then to select the appropriate formats in which to present this information. First, NORC and the FRB sought endorsement letters from national organizations whose names would resonate with small business owners. We hoped that many respondents would recognize one or more of the endorsing organizations as representing their best interests. The primary purpose of these letters was to help establish the legitimacy of the survey. The survey received endorsement letters from three organizations: the National Business Association (Appendix G); the National Federation of Independent Businesses (Appendix H); and the United States Small Business Administration (Appendix I). The NFIB and SBA endorsements were acquired by the FRB. The endorsement letters were posted to the NORC website and included in selected screener refusal-conversion mailings.

NORC and the FRB staff agreed that announcements of the survey in the media might persuade business owners to participate. The FRB prepared a press release that was sent to major news media organizations across the country (Appendix J).


3.2.3 Other Promotional Materials

In addition to the endorsement letters, NORC and the FRB prepared several other items for respondents. The following materials were prepared for the survey:

Letters from the project director. NORC prepared two letters from the project director for 2003. The first was for the advance (pre-screening) mailing (Appendix K). The 2003 advance letter was slightly shorter and used a larger font than in 1998. Each main paragraph began with a summary statement in bold text. The redesign was intended to make the letter more readable. With the summary statements, NORC hoped that respondents who only skimmed the letter would still recall the main points - and would be expecting a telephone call from NORC. The letter was personalized and included logos for NORC and the 2003 SSBF. In addition, the official Federal Reserve Board logo was used on the envelope to further promote the legitimacy of the study. (An image of the envelope with the logo is found in Appendix L.)

The second project director letter was mailed with the worksheets in preparation for the main interview (Appendix M). The letter expressed appreciation for the respondent's participation in the screener, and emphasized the importance of continued participation. It described the materials included in the package and the task we wanted the respondent to complete in filling out the worksheet. It provided an estimate of the amount of time for the phone interview. The letter discussed the respondent-fee options in more detail. It included a telephone number and email address for respondents to contact NORC with questions, and provided a toll-free number for respondents to return their worksheets or other financial records by fax. Changes for 2003 included: 1) adding boxed text to the top of the letter stating "Please read this letter first."; 2) adding summary statements in bold text to the beginning of each paragraph; and 3) emphasizing that respondents may not need to complete the entire worksheet, depending on their firm's circumstances.

Letter from Alan Greenspan. The FRB provided NORC with a persuasion letter from Chairman Greenspan, used in the 1998 study and updated for the 2003 SSBF (Appendix N). The letter explained the purpose of the survey and asked for respondents' cooperation. To allay concerns about confidentiality and privacy, the letter was not personalized. The letter was sent both in the advance mailing and in the worksheet mailing.

Buckslip. For the 2003 SSBF NORC added a buckslip to the advance mailing (Appendix O). The green buckslip was about the size of a No. 10 envelope; printed on it was an official FRB seal and three bulleted sentences. Its purpose was to increase the incidence of recipients who came away from the mailing with the major communication points, even if they looked at nothing else in the mailing. The buckslip was the first item respondents saw when they opened the envelope.

General information brochure. The brochure was included in the advance mailing and contained answers to frequently asked questions about the study (Appendix P). The brochure was updated from 1998, including the logo; the name, address and phone number of the project director; and the SSBF website addresses.

Results brochure. This brochure was included in the worksheet mailing (Appendix Q). It was revised from the 1998 "Important Facts" brochure to make a more powerful cooperation-gaining tool. NORC deleted bullet points describing the study's purpose; this information was available from other materials in the worksheet mailing. NORC added narrative text to provide context for the statistics presented. The text discussed changes in the business environment since 1998, reinforcing the need to update the 1998 data with another round of research. The brochure provided the following results from the 1998 SSBF: 1) selected characteristics of small businesses and their owners; 2) the most important problems facing small businesses; 3) credit and borrowing activities; and 4) where small businesses went for financial services. By presenting summary percentages from the 1998 survey in this manner, NORC hoped to show by example how the 2003 data would be used, and how these uses did not involve release of identifying information.

FAQ Brochure. A revised version of the General Information brochure, called the Frequently Asked Questions Brochure, was included in the worksheet mailing (Appendix R). It included a description of the tokens of appreciation under the heading "What will I get out of this?" This information was omitted from the pre-screening brochure to ensure that screener respondents did not form the incorrect impression that they would receive an incentive for completing the screening interview only. The brochure was not significantly changed from 1998.

Customized Worksheet. NORC revised the 1998 worksheets that were sent to respondents. Respondents were asked to complete the worksheet prior to the interview and to use the worksheet as an aid during the interview. The design retained the structure and overall appearance of the worksheets used in the 1998 survey. The design was a two-sided form, printed on 17x11-inch paper, requesting financial record data on one side and financial services and sources of financing on the other side. Each of four possible business types had a unique worksheet with the appropriate reference lines in the tax return to assist respondents in looking up the data: sole proprietor (Appendix S), partnership (Appendix T), S corporation (Appendix U), and C corporation (Appendix V). The worksheets were printed in red, white and blue for functionality - columns were color-coded - and to reinforce the patriotic element of participation.

In updating the 1998 worksheets, NORC's consulting accountant revised the tax-form line-number references to match the appropriate tax forms and ensure that each worksheet version reflected current tax laws and accounting practices. NORC's instrument design team redesigned the worksheet so that the layout was more attractive and logical to follow, the instructions were more easily understood, and the explanation of how to use the worksheet as an aid in providing data was clearer. In addition, because the federal tax forms did not change between 2002 and 2003, worksheets did not need to be customized by fiscal year-end date as they had in 1998. This change reduced the number of worksheet versions from ten in 1998 to four in 2003. In addition, NORC added a fax number to the bottom of side one of the worksheet.

D&B Small Business Solutions ® Brochure. NORC offered respondents access to Dun and Bradstreet's (D&B) Small Business Solutions ® package as an alternative to a monetary incentive. The package that SSBF respondents were offered provided the respondent with one comprehensive report on any company in the Dun and Bradstreet database, one credit evaluator, one industry research report including 25 leads, and two Duns demand letters for collections. The package, which retailed for $199, was described in the color brochure, and the information was also made available to interviewers. The incentive and the brochure (Appendix W) describing it were new for 2003.

NORC Confidentiality Statement. This is the standard statement that NORC uses for its surveys to assure respondents that their answers are kept confidential (Appendix X). The statement was not modified for the SSBF.

Federal Reserve Structure and Functions brochure. The multi-page color brochure was newly published at the time of data collection (Appendix Y). The brochure provided a wealth of information about the organization, purpose, history and functions of the Federal Reserve Board.

Folder. A folder was used to contain all the materials sent as part of the worksheet mailing. The materials were inserted into the inside pockets of the folder. Minor updates were made to reflect changes since the 1998 survey; otherwise, the folder was unchanged from the 1998 survey. It was made from glossy card stock with a red, white and blue patriotic design. The SSBF logo and the SSBF web site address were on the front cover and the toll-free telephone number was printed on the back cover (see Appendix Z).


3.3 Advance Mailing

Prior to being screened, all sampled firms were sent an advance mailing. The advance mailing was mailed 1st class in a standard business envelope. In summary, the materials described above that were included in the advance mailing were: 1) the first letter from the project director; 2) the letter from Alan Greenspan; 3) the general information brochure; and 4) the buckslip.


3.4 Worksheet Mailing

Worksheet mailings were sent to firms that screened eligible for the main survey. In preparation for data collection, NORC reviewed the contents of the 1998 worksheet mailing and made a number of changes, some of which are described in Section 3.2. Generally, the purpose of the changes was to make the worksheet mailing more appealing and user-friendly. Besides the materials themselves, NORC thought about which pieces should be on top to make the packet inviting when initially opened.

The worksheet mailing comprised the following items: 1) one worksheet, using the appropriate version as determined during the screening interview; 2) the second letter from the project director; 3) the letter from FRB Chairman Alan Greenspan; 4) the Results brochure; 5) a NORC confidentiality statement; 6) the FAQ brochure; 7) the D&B Small Business Solutions brochure; 8) an NORC-addressed postage-paid return envelope to mail back the worksheet (Appendix AA); 9) the Federal Reserve Structure and Functions brochure, all placed inside 10) the folder.

If a proxy completed the screener and did not know, or refused to provide, the firm's organizational type, NORC included all four worksheet versions in the mailing with a letter that helped the owner determine which worksheet to use (Appendix BB).

The materials were placed in the folder in the same order each time. The top items in each pocket were selected to have the greatest impact. The project director letter was the top item in the right-hand pocket and was intended to be the first item a respondent would see upon opening the folder. NORC included a second copy of the letter from Chairman Greenspan in the worksheet mailing to help legitimize the study and provide "brand" recognition. The Greenspan letter was the top item in the left-hand pocket and was designed to be the second item respondents would see after the project director letter. The NORC-addressed postage-paid return envelope was unchanged from 1998. It was an 8 1/2" x 11" brown envelope, folded in half and addressed to NORC's One North State Street, Chicago, IL, mailing center. The reply envelope was placed in the back of the left-hand pocket.


3.5 2003 SSBF Website

NORC created a website specifically for the 2003 SSBF, compliant with the accessibility requirements of Section 508 of the Rehabilitation Act. The address (www.norc.org/ssbf) was included in project director letters and other materials. The purpose of the website was to encourage respondent cooperation by reinforcing the legitimacy of the survey and by providing an inviting and helpful repository of information and documents for the study. Even if a respondent read only the home page (Appendix CC), he or she would understand the study's objectives and the benefits of participating.

The website had links to the NORC home page, the Small Business Administration home page, the Dun and Bradstreet website for its Small Business Solutions report package, and the FRB website for SSBF respondents. The NORC SSBF website contained a text-link directory to information organized as follows: 1) About NORC; 2) About the SSBF; 3) About the Federal Reserve Board; 4) About American small businesses; 5) Organizations endorsing the SSBF; 6) Worksheets; 7) Frequently Asked Questions; and 8) NORC's privacy policy.

The website also had internal links to PDF copies of NORC's pledge of confidentiality, letters of endorsements from small business organizations, and the Alan Greenspan letter. Visitors could download PDF versions of all four worksheets.

The FRB provided a page on its website for the 2003 SSBF (www.federalreserve.gov/ssbf/). The page explained the purpose of the study; had excerpts from speeches given by FRB officials referencing the SSBF; and had links to the Alan Greenspan letter, frequently asked questions, and references and abstracts of research that used data from the 1987, 1993, and 1998 SSBF surveys (Appendix DD).


3.6 Toll-Free Phone Numbers and Email Addresses

Toll-free telephone numbers were established for respondents to inquire about the study or to fax materials such as worksheets or tax forms. Additionally, the fax was used to send information to businesses, usually either a Federal Express tracking number to locate the worksheet package, or an additional copy of a letter or other information. The telephone number established for voice communication was mentioned in letters and brochures, and respondents were invited to call the toll-free number if they had any questions or wanted to set an appointment for the interview. This number terminated at NORC's production facility in Downers Grove and was staffed by supervisors during the hours when the production center was open, and answered by a study-specific voice mail message after hours inviting callers to leave a message. Calls received after hours were returned the next morning. NORC also had a dedicated SSBF fax machine, with a toll-free number. Both the fax machine and voice mail were checked throughout each workday.

NORC established two email addresses for SSBF - [email protected] and [email protected]22 - for respondents to use if email was the business's preferred method of communicating questions or information. The second address was based on the name of the SSBF project director at the start of the study, and appeared on all letters sent by the project director, including refusal conversion letters. The first address appeared at the top of worksheets. The accounts received very few inquiries until NORC began following delivery of main-interview conversion letters with email reminders. The responses to these email follow-up messages are discussed in Section 4.7.3.4.


3.7 Interviewer Recruiting and Hiring


3.7.1 Overview

Based on prior experience, NORC knew that recruiting sufficient numbers of qualified interviewers for the screening and interviewing phases of this project would be challenging. The interviewers needed to have the ability to work with complex financial information and terminology as well as demonstrate essential interviewing skills such as dealing professionally with respondents who hesitate or refuse, reading questions verbatim, probing responses as appropriate, using CATI effectively, and following all survey protocols. They also needed to be available during scheduled dialing times, establish an acceptable weekly schedule, and adhere to it.

NORC attempted to collect empirical data that might suggest the best sources of job candidates for SSBF interviewer positions. NORC recruited candidates from three sources: 1) employment agencies that placed individuals with accounting, bookkeeping and finance backgrounds; 2) internal job postings that targeted experienced NORC interviewers; and 3) referrals and newspaper ads.

NORC began recruiting SSBF telephone interviewers in February 2004. The first phase was to hire interviewers for pretests one and two. The second and larger phase was to hire interviewers for the main study. NORC's goals were for all SSBF interviewers to: 1) exhibit all of the qualities of good telephone interviewers; 2) gain respondent cooperation; 3) conduct the interview accurately; 4) adhere to all research protocols for the study; 5) demonstrate numerical literacy; 6) have an understanding of the financial and accounting terms used in the main interview; and 7) meet the productivity expectations of the telephone operations center.

Before attending SSBF-specific training, all candidates had to attend a group interview, during which they received more information about NORC and demonstrated some basic interviewing skills. Those who were still interested in the position and met NORC's requirements were invited to NORC's general interviewer training. Those who successfully completed NORC's general interviewer training were invited to project training.

At the end of data retrieval, interviewers attended a half-day debriefing session, along with FRB representatives and NORC senior project staff. The memo for the final interviewer debriefing is in Appendix EE.


3.7.2 Pretests

NORC recruited 12 trainees for pretest one. Six trainees were from employment agencies, three were experienced NORC telephone interviewers, and three were recruited through standard NORC methods, including referrals and newspaper ads.

Of the 12 trainees, 11 successfully completed the SSBF training. These 11 were certified to administer the screener and main instrument after being tested on mock interviews conducted with supervisors. Two interviewers dropped out between the pretests, so that for pretest one NORC used 11 interviewers and for pretest two NORC used 9 interviewers.

Interviewer performance in the pretests was evaluated using NORC's standard criteria for screening candidates for interviewing positions, and for evaluating interviewers' performance. The former criteria included exhibiting enthusiasm, being able to read questions fluently and with appropriate effect and being able to follow directions and communicate clearly. The latter criteria included number of completed interviews, hours per completed interview, and dials per hour.

 The results of the evaluation suggested that recruiting some interviewers from agencies would benefit SSBF. Of the three top-performing interviewers in the pretests (based on a combination of subjective and objective measures), all came from agencies. These individuals differed in their interviewing styles, but all had backgrounds in business, either through work, education or both.

 NORC concluded from the pretests that having a business background was a useful, though not sufficient, attribute of a successful SSBF interviewer. Individuals with business backgrounds were more likely to understand the financial concepts in the SSBF main questionnaire and be able to explain them to respondents more easily than interviewers without a business background. However, while a business background was helpful, exhibiting the qualities NORC demands of all its interviewers was always a necessary requirement.

Based on the pretests, NORC recommended that for the main study, the SSBF interviewer pool be a mix of experienced NORC interviewers from other studies, traditionally recruited new hires, and qualified trainees with business backgrounds from employment agencies.


3.7.3 Main Study

NORC conducted four SSBF training sessions. For the first three sessions, NORC recruited almost entirely from agencies. This was for two reasons. First, the agencies were able to provide relatively high-quality trainees with business or financial experience. Because the agencies served as the first screening point, they were able to filter out obviously unqualified applicants before those individuals could reach NORC. In this role the agencies were able to streamline the recruiting process. The second reason was that few NORC interviewers from other projects were available to work on SSBF during this time. Those who were available were included in these trainings.

Overall, NORC was satisfied with agency applicants. However, we believe that the quality of agency applicants declined over time. It may have been that for the pretests and first few training sessions, the agencies provided NORC with their most qualified and well-suited applicants. For the later sessions, the agencies may have had to dig more deeply into their pools of available employees to find applicants who met NORC's requirements. In addition, NORC later experienced very high attrition among the agency employees. NORC has theorized that this attrition might have occurred because many of the applicants were working through the agencies while looking for more permanent jobs. Because many SSBF applicants were highly skilled in finance or accounting, they were able to find other positions.

Because of this tendency, and because a number of experienced interviewers became available, NORC did not use agency applicants for the fourth group of interviewers. Instead, NORC recruited interviewers from other NORC projects, and new interviewers were hired as direct NORC employees. The fourth group of interviewers was trained solely on administering the screening interviews, because that was where the need was greatest, because the screeners did not require as much familiarity with financial concepts, and because this approach cut down on training time, limiting the impact of training on ongoing production activities.


3.7.4 Interviewer Attrition

Interviewer attrition is one measure of the success of NORC's approach to recruiting and training for SSBF. Evidence presented below and discussed further in Chapter 7 suggests that, while interviewers recruited from financial services employment agencies could add real value to SSBF, many were either not well-suited or disinclined to be long-term SSBF interviewers.

Attrition was higher than expected during group interviews with agency recruits and after screener training. Anecdotally, NORC expects one in 20 invitees to a group interview to realize that they are not well-suited to the job and to leave during the group interview, which they are invited to do, recognizing that this challenging and relatively low-paying job is not for everyone. On SSBF, the group interview attrition rate among agency recruits was four times higher than is typical for NORC23.

In addition, almost one in five (18%) agency recruits quit after the 1.5-day screener training and before main training. These individuals appeared to be willing to accept payment for attending training, with little intention of actually performing the work. While NORC does not have a comparable benchmark, this level of attrition after less than two days of SSBF-specific training and less than 3 days of screener production dialing suggests that the agencies needed to do a better job describing the nature of the work to potential candidates.

Finally, many agency interviewers who completed training left before the end of data collection. For the first three training sessions, fewer than 40% of the certified interviewers remained at the end of the study - the rest having quit or been terminated (Table 3.3). From exit interviews, NORC learned that many agency-recruited individuals viewed interviewing as a stop-gap way to earn money while they looked for a better job. This effect was exacerbated by the high skill level of many of the interviewers in finance-related fields: despite NORC offering a higher-than-normal rate of pay for SSBF, the pay was still significantly less than what an accountant could earn, for example.


3.8 Interviewer Training


3.8.1 Introduction

NORC conducted four interviewer training sessions beginning in June 2004 (in addition to the trainings conducted for the pretests). For the first three sessions, trainees were instructed on administering both screener and main interviews. Briefly, the focus of screener training was on familiarizing interviewers with the purposes of the study and instructing them on how to adhere to the research protocol and gain respondent cooperation. Topics specifically addressed included the survey eligibility criteria (e.g., whether the firm was in business during December 2003 with one or more of its current owners), the protocol for identifying the appropriate respondent, and the confidentiality of information provided to the interviewer. The training for the main interview delved more deeply into the substance of the survey to assure NORC that interviewers were familiar with the terms used and could elicit meaningful data from the respondents, particularly about complex financial activities.

Each of the first three training sessions required five days of training, spread over a week and a half. One and a half days of screener training was followed by three days during which interviewers were certified and gained some initial experience in screener production to reinforce the knowledge they had gained. The following week, two and a half days of main interview training were followed by certification and initial main interview production to reinforce main training. Table 3.2 illustrates the typical training schedule.

As mentioned, the fourth training session in September covered the screener only because NORC needed more interviewers to increase screener production. The fourth training session ran on a Thursday and Friday, with certification the following week. Table 3.3 shows basic information on training, including the share of each group of certified interviewers retained through the end of data collection.

Table 3.2 Training Schedule for First Three Sessions
Training Activity Approximate Number of Days Days of Week
Screener training, including an introduction to the project 1.5 Mon.-Tues.
Conduct screener certification 0.5 Tues.-Wed.1
Break in training; interviewer screening certification and live screening 2.5 Wed.-Fri.
Main interview training 2.5 Mon.-Wed. of following week
Conduct main interview certification 0.5 Wed.-Fri.1
Total 7.5 (including 2.5 days for break in training)

1For each trainee, certification took about half a day including preparation and debriefing. For NORC to conduct certification of all trainees typically took several days. Return to Table

Table 3.3 Training Dates, Number of Trainees and Interviewer Attrition
Training Session Training Dates1 Number of Interviewers Attending Training Number of Interviewers Certified Number of Interviewers Retained Through End of Data Collection Retention Rate (Number of Interviewers Retained Through End of Data Collection / Number Certified)
1 6/7 - 6/16 42 33 11 33%
2 6/21 - 6/30 28 23 2 9%
3 7/26 - 8/4 44 32 12 38%
42 9/2 - 9/3 19 19 103 53%
Overall 17 days 139 107 44 41%

1Includes three days between end of screener training and start of main training for first three sessions. Return to Table
2Screener training only; trainees were certified during the week following training. Return to Table
3These interviewers were recruited and trained to conduct screening interviews only. When screening ended on January 21, 2005, these interviewers were reassigned to other projects. Main interviewing continued until January 31, 2005. Return to Table
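
The retention figures in Table 3.3 can be reproduced directly from the certified and retained counts; their complement is the attrition discussed above. The following minimal sketch (in Python, with the counts copied from the table) is illustrative only.

# Sketch: recompute each session's retention rate from Table 3.3
# (retention = retained / certified); attrition is its complement.
sessions = {
    # session: (certified, retained through end of data collection)
    1: (33, 11),
    2: (23, 2),
    3: (32, 12),
    4: (19, 10),
}

for session, (certified, retained) in sessions.items():
    retention = retained / certified
    print(f"Session {session}: retention {retention:.0%}, attrition {1 - retention:.0%}")

# Overall figures as reported in the table's last row:
print(f"Overall: retention {44 / 107:.0%}")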

Training was led by the senior NORC central office team, including the project director, associate project director and production manager. Supervisors helped with the hands-on CATI instruction, including administering round-robin interviews. The two principal investigators from the FRB attended all interviewer training sessions. The FRB attendees were active participants, often providing additional information in response to interviewers' questions, especially questions on unusual or complicated financial instruments and situations.

All interviewer training was conducted at NORC's Telephone Survey Operations center in Downers Grove, IL.

Supervisor training. One week before the first interviewer training session, NORC held a two-day training session for SSBF supervisors. Supervisor training was led by the associate project director and conducted by the production manager and assistant production manager; three NORC supervisors attended. All three supervisors were familiar with SSBF prior to the training - two had tested the CATI instrument, and the third had been an interviewer during the pretests who was promoted to supervisor on the basis of his performance and the value of his business and accounting knowledge. Two members of the FRB project staff were also in attendance.

To train the supervisors, project staff used the materials that had been prepared for interviewer training; this was a dual opportunity to test the training materials and train the supervisors. The material was delivered at an accelerated pace.


3.8.2 Job Aids

NORC prepared job aids to assist interviewers during training and while interviewing. Some job aids were designed to sit at interviewers' work stations to provide quick access to key project information, such as the name of the sponsoring agency and answers to frequently asked questions. Other job aids, such as the glossary of terms and the question-by-question specifications, were available in both hardcopy (the interviewer manual) and electronic (CATI) form. Most job aids were distributed and discussed at screener or main interview training; a few later job aids were added after data collection had started and were discussed during interviewer meetings. Table 3.4 provides a list of all job aids, which are included in Appendix C.

Table 3.4 List of Job Aids Used in Screening and Main Interviewing
Number Title/Description
1 Tax forms used by different organization types
2 Eligibility criteria
3 Frequently asked questions and answers
4 Telephone Number Management System (TNMS) disposition codes
5 Instructions for logging in and out of TNMS
6 Answering machine scripts
7 CATI functions
8 SSBF important codes/telephone numbers
9 Institution look-up
9A Institution look-up quick reference
10 Top 10 reasons to participate in the SSBF
11 Entering institution names into the look-up table
12 Conventions for entering institution names into the look-up database
13 Conventions for coding responses to A10_2
14 Tools for working batch four main cases after incentive increases to $200
15 Encouraging respondents to report dollar amounts in balance sheet questions
16 Encouraging respondents to return worksheet and other materials


3.8.3 Screener Materials and Training

NORC developed materials for the training sessions used to prepare interviewers to screen businesses. The materials included an agenda, a trainer's guide, an interviewer's manual, and mock interview scenarios. Job aids were part of the interviewer's manual.

NORC prepared the agenda first since it was the basis for all other training documents. The agenda was organized so that each topic built on the information presented in previous topics. (The agenda also built on material presented in NORC's one day of general interviewer training.) The material in the interviewer manual and trainer's guide followed the agenda outline. The short version of the screener training agenda appears in Table 3.5; a more detailed version, including the length of each module, appears in Appendix FF.

Table 3.5 Agenda for Screening Training (Short Form)
Day:Module Number Module Description
1:1 Introductions and Objectives
1:2 Study Purpose and Design. Introduce trainees to the purpose of the SSBF, the survey sponsor, the sample, and the study design.
1:3 Screener review. Discuss reasons for screening, present the five parts of the screening process, define key terms, explain different company types and how to handle firms with zero employees.
1:4 Eligibility criteria. Provide trainees with more practice in applying the eligibility criteria. Help trainees understand the importance of interviewing an owner or an appropriate proxy. Discuss the qualifications of an appropriate proxy.
1:5 Respondent confidentiality. Review importance of safeguarding respondent confidentiality and the interviewer's role in doing so. Review the contents of the advance materials mailed to respondents, focusing on the confidentiality statements contained in each.
1:6 Using SurveyCraft Telephone Number Management System. Acquaint trainees with the TNMS: its purpose, how it works, and how to use it effectively.
1:7 Mock Screening Interviews #1
1:8 Gaining Cooperation - Part 1. Introduce trainees to the techniques used to gain respondent cooperation and familiarize them with the Answers-to-Frequently-Asked-Questions Job Aid.
1:9 Mock Screening Interviews #2
1:10 Wrap Up Day 1
2:11 Review of day one. Summarize the main points from Day 1. Answer Trainee questions on Day 1. Review agenda for Day 2
2:12 Gaining Cooperation, Part 2/Respondent Incentives. Provide trainees with additional practice in gaining respondent cooperation, and to acquaint them with the respondent incentive options and how to describe these to respondents.
2:13 Respondent Worksheets. Familiarize trainees with the purpose and content of respondent worksheets.
2:14 Duo Mock Screening Interviews 3 & 4
2:15 Q&A session.
2:16 Production goals. Inform trainees of the production goals for the study and how their performance will be evaluated.
2:17 In-class Certification Quiz/Training Evaluation
2:18 Wrap-up
2:19 Certification Mock. Certify that each interviewer has acquired the necessary knowledge and skills to effectively screen businesses for this survey.

The agenda, and the material presented following this agenda, ensured that trainees in all sessions received the same consistent information essential to SSBF interviewing.

The 2003 Survey of Small Business Finances Trainers' Guide for Main Screener (June 2004) consisted of modules describing the concepts and materials covered during training. The module instructions outlined the learning goals for that section, the presentation mode (e.g., lecture, round-robin mock interview), the materials needed, and an explanation of the importance of each topic.

At the close of the first day, trainees were given a homework assignment to test how well they had learned key concepts and financial terms. The test comprised fill-in-the-blank and matching exercises. The correct answers were reviewed and discussed as the first task of the second training day.

In addition to familiarizing trainees with the purpose and sponsor of the study, gaining cooperation, and the eligibility criteria, other screener training objectives were to: 1) familiarize interviewers with the FRB and its mission; 2) teach interviewers to address issues of confidentiality, privacy and identity fraud at a time of heightened national concern about these issues; 3) teach interviewers to identify qualified respondents, either owners or proxy owners24; 4) teach interviewers to navigate the initial questions of the CATI instrument, especially the first question, A1, which had a large response frame and several interviewer prompts; and 5) review the definitions of organization types.

To facilitate an understanding of the screener and the variety of screening outcomes, NORC prepared mock scenarios for group round-robins and duo-mock practice sessions. The initial scenarios used straightforward situations, while subsequent scenarios were more varied and complex.


3.8.4 Screener Certification

After screener training each interviewer was required to demonstrate that he or she understood the eligibility requirements and could successfully:

• Follow the protocol to reach an owner

• Gain cooperation from resistant respondents

• Record responses accurately

• Administer a complete screener interview using CATI

Certification was done through a mock CATI interview. A supervisor acted as the respondent and sat at one telephone interviewing station. The trainee sat at a nearby station and called the supervisor to initiate the interview. The certification mock was conducted over the telephone.

The CATI certification screener was a scripted mock interview that tested cooperation-gaining skills, probing skills, using the CATI system, and coding responses. Interviewers had to demonstrate the ability to address respondents' concerns, answer questions appropriately, overcome common objections in their own words, and persuade respondents to participate. Sometimes supervisors would embellish the basic script with additional challenges.

Trainees who failed the certification interview were invited to try again after additional study and practice. Most trainees, however, passed the first time. Of those who passed, some trainees got additional feedback from supervisors on specific areas of improvement. In this way, the certification interview functioned both as a test and an opportunity for constructive feedback.


3.8.5 Main Interview Materials and Training

The materials used for main interview training comprised an agenda, a trainer's guide, an interviewer's manual, and mock interview scenarios. At the start of main interview training, trainees were given pages to insert in the three-ring binder training manual that they had received at the start of screener training.

The agenda was organized so that each topic built on the information presented in the previous topics, as well as on the material covered during screener training. The material in the interviewer manual and trainer's guide followed the agenda outline. A short version of the main training agenda appears in Table 3.6; a longer version, including training goals and the length of each module, appears in Appendix GG.

The agenda, and the material presented following this agenda, ensured that trainees in all sessions received the same consistent information essential to SSBF interviewing.

The 2003 Survey of Small Business Finances Trainers' Guide for the Main Interview (June 2004) consisted of modules describing the concepts and materials covered during training. The module instructions outlined the learning goals for each section, the presentation mode (e.g., lecture, round-robin mock interview), the materials needed, and an explanation of the importance of each topic.

The emphasis of main interview training was on providing a much greater level of detail about business finance than had been presented in screener training. The additional detail was designed to help trainees understand the objectives and proper interviewing techniques for each section of the questionnaire. For example, an interviewer needed to learn the four types of depository institutions and be familiar with a dozen other types of institutions from which a small business might receive a loan. An interviewer had to have enough knowledge to ask the right questions to help ensure respondents correctly classified financial sources by type.

Trainees were also given a broad overview to help them better understand the context of the interview. They were told how the interview process worked, that is, how for each financial service the instrument first collects information about the use of the service and then collects information about the sources of the service. Interviewers also were shown a grid representing the array of financial services and institutions collected in the first part of the interview.

Table 3.6 Agenda Modules for Main Interview Training (Short Version)
Day:Module Number Module Title
1:1 Welcome and Introductions
1:2 Overview of questionnaire and worksheet
1:3 Questionnaire conventions
1:4 Section 1 - Characteristics of the firm
1:5 Duo mocks, section 1
1:6 Subsections E and F - Uses of deposit services, credit and financing
1:7 Duo mocks
1:8 Subsection MRL - most recent loan application
1:9 Wrap-Up/Agenda for day 2
2:10 Review of day 1
2:11 Duo mocks - Subsections MRL and G
2:12 Mock interview of subsection H for a C corporation
2:13 Duo mock - Subsection H for C corporation
2:14 Trade credit and new equity investments
2:15 Duo mock - Trade credit and new equity investments
2:16 Overview of income and expenses, and balance sheet
2:17 Mock interview on income and expenses, balance sheet, credit history and respondent incentive
2:18 Duo mocks - Income and expenses, balance sheet, credit history and respondent incentive
3:19 Review of day 2 activities
3:20 Duo mock - Part 1 for sole proprietorship
3:21 Gaining cooperation
3:22 Duo mock - Part 2 for partnerships
3:23 Confidentiality
3:24 Use of TNMS for main interviews
3:25 Handling missed financial institutions and services
3:26 Production goals and performance evaluation
3:27 Final exam and training evaluation
3:28 Wrap-up of training
3:29 Schedule certification and mock interview

A portion of main interview training focused on using the look-up function for branch names of depository institutions. The look-up function, used for the first time in the 2003 SSBF, could be used up to eight times during subsection H of the questionnaire to attempt to pinpoint the exact office or branch used most often25 by respondents, and to capture the bank ID. Mastering the look-up function required knowing what questions to ask respondents and what searches to use to find a match. In training, trainees watched a trainer demonstrate the look-up function and then practiced using it themselves during mock interviews.
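
The general idea of the look-up - searching a preloaded table of depository institutions by normalized name (and, if needed, location) to retrieve candidate branches and bank IDs - can be illustrated with a minimal sketch. The records, field names and matching rule below are illustrative assumptions, not the actual SSBF look-up database or algorithm.

# Illustrative sketch of a branch look-up: search a small, hypothetical table of
# institutions by normalized name, optionally narrowing by city, and return
# candidate branches with their (hypothetical) bank IDs.
from dataclasses import dataclass

@dataclass
class Branch:
    bank_id: str   # hypothetical identifier of the kind captured in subsection H
    name: str
    city: str
    state: str

BRANCHES = [
    Branch("0001", "First National Bank", "Downers Grove", "IL"),
    Branch("0002", "First National Bank & Trust", "Naperville", "IL"),
    Branch("0003", "Community Savings Bank", "Aurora", "IL"),
]

def normalize(text):
    """Lowercase and standardize so that 'Bank & Trust' matches 'bank and trust'."""
    cleaned = text.lower().replace("&", "and")
    return " ".join(cleaned.split())

def look_up(name, city=None):
    """Return branches whose normalized name contains the search term,
    optionally restricted to a city reported by the respondent."""
    term = normalize(name)
    hits = [b for b in BRANCHES if term in normalize(b.name)]
    if city:
        hits = [b for b in hits if normalize(b.city) == normalize(city)]
    return hits

# An interviewer might first search broadly, then narrow by city:
print(look_up("first national"))                     # two candidate branches
print(look_up("first national", city="Naperville"))  # narrowed to one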

Other training objectives were to instruct interviewers how to 1) record large dollar amounts and handle the different ways in which numbers and percentages are expressed conversationally; 2) break off an interview safely when necessary; 3) appropriately record comments and responses outside of soft-range checks; and 4) navigate the more common path variations - for example, how to follow the different paths in section C depending on the firm's organization type.


3.8.6 Main Interview Certification

After main interview training each trainee was required to demonstrate that he or she could successfully:

• Read and explain complicated financial terms;

• Read questions clearly and verbatim, and record responses accurately;

• Use the look-up database to attempt to locate branch locations and bank IDs of depository institutions; and,

• Administer an entire main interview using CATI.

The certification process used for screener interviewing was also used for main interviewing. Certification was done through a mock CATI interview. A supervisor acted as the respondent; the supervisor sat at one phone station and the trainee sat at a nearby station. The trainee called the supervisor to initiate the interview.

The mock main interview took about one hour to administer; the script was for a sole proprietorship. Interviewers had to demonstrate the ability to move seamlessly from section to section; explain financial terms and interview questions; demonstrate competency using the depository institution look-up function; use QxQ26 text as appropriate; accurately record numeric responses; and read all questions verbatim, clearly and at a steady pace. As with the screening certification, supervisors sometimes added their own challenges to the interview script.

Trainees who failed the certification interview were invited to try again after additional study and practice. Most trainees, however, passed the first time. Of those who passed, some trainees got additional feedback from supervisors on specific areas of improvement. In this way, the certification interview functioned both as a test and an opportunity for constructive feedback.


4 Data Collection


4.1 Introduction

Main survey data collection began on June 1, 2004 and ran through January 31, 2005. The 2003 SSBF required a two-stage interviewing process: a screening interview followed by a main interview. Before and after each stage, many steps were undertaken to facilitate the data collection and encourage respondent participation. After selecting the sample, advance letters were sent by U.S. mail to all sampled businesses to provide respondents with an introduction to the study and its intent. Telephone interviewers then attempted to screen sampled businesses for eligibility. Firms that were found to be eligible for the main study were then sent a worksheet package via Federal Express. After allowing time for business owners to review the worksheet package, gather their financial records, and, if they desired, complete the worksheet, telephone interviewers called them to complete the main interview. This process enabled NORC to: 1) quickly screen firms to establish that they met the study's eligibility criteria; 2) allow a time interval of up to two weeks for screened eligible firms to complete the worksheet and collect their financial materials, such as tax forms, that could shorten the time to complete the main interview and improve data quality; and 3) conduct main interviews with informed, prepared, cooperative owners of eligible firms.

NORC used two questionnaires, both administered through computer-assisted telephone interviewing (CATI). The first questionnaire was a short screening instrument, designed primarily to identify firms that met the survey's eligibility criteria. The second, or main, questionnaire collected the balance of the study data. The mean time to administer these questionnaires was 11.2 minutes for the screener questionnaire and 59.1 minutes for the main questionnaire27. At the conclusion of the study, 14,061 screening interviews and 4,268 main interviews had been completed. All interviews were conducted by telephone from NORC's Downers Grove, Illinois Telephone Survey Operations center.

The rest of this chapter describes NORC's approach to each step of the main study. Thumbnail descriptions are provided in Table 4.1.

NORC and the FRB developed the instruments, systems, materials and protocols for the main study and refined them during the two pretests. However, there were two key differences between pretest data collection and main study data collection:

Sample batches. For the main study, cases were released in discrete batches rather than all at once. The sampling plan originally called for releasing two batches of 5,666 cases and a third batch of approximately 3,800 cases. Because of a lower-than-expected completion rate, batch three was increased to be the same size as the previous two batches, and a fourth batch of 6,800 additional cases was released in November 2004. Sample batches are discussed in more detail in Chapter 6.

Two-pass interviewing. For the first three sample batches of the main study, NORC used a two-pass approach designed to work cases more efficiently and effectively than a standard single-pass design. The two pretest data collection periods were too short to allow a reasonable test of the two-pass approach. The approach is discussed in the next section.

Table 4.1 Outline of SSBF Data Collection Process
Stage:
Step
Purpose
Screening Stage:
Clean address data
Improved the likelihood of advance mailings reaching their intended recipients.
Screening Stage:
Send advance letters by USPS 1st-class mail
Informed firms of 2003 SSBF; established legitimacy; explained purpose and benefits of the study; increased cooperation; alerted owners to expect a telephone call from NORC.
Screening Stage:
Attempt to contact owners
Found firm owners or, after repeated attempts to find the owner, found a proxy owner qualified to answer basic questions about the firm; located firms in the sample.
Screening Stage:
Conduct screening interviews
Verified that firms in the sample frame met all eligibility criteria; verified the owner's name, firm name and firm's physical address; developed rapport with respondents.
Main Interview Stage:
Send worksheet packages to screened eligible firms by Federal Express
Provided respondents with worksheets to complete in advance of the main interview; provided information about an incentive, Dun & Bradstreet reports; provided information that addressed concerns about legitimacy, privacy and confidentiality; provided a simple, convenient way for respondents to return completed worksheets and other materials following the interview.
Main Interview Stage:
Recontact eligible firms to conduct main interviews
If proxy owner had completed screener, reconfirmed that firm met eligibility criteria; collected survey data
Main Interview Stage:
Deliver incentives
Sent respondents financial incentives or provided respondents with access code and other information to obtain D&B reports


4.2 Two-Pass Interviewing

NORC used a two-pass approach for screening and conducting main interviews. The approach was new for SSBF in 2003 and had not been used previously at NORC. Previous SSBF surveys used a single-pass approach that involved working all cases with equal intensity. NORC's proposal for the 2003 SSBF specified using a two-pass approach, introducing subsampling among cases not easily completed. The anticipated benefit of the two-pass approach was to reduce excessive effort on cases with low probabilities of completion by quickly completing the easiest cases and then attempting only a subsample of the more difficult cases. Table 4.2 highlights the purposes of passes one and two.

Table 4.2 Purposes of Pass One and Pass Two for Screening and Main Interviewing
  Screening Main Interviewing
Pass One Complete the "easiest" cases first; identify promising cases for pass two; clean up the sample with respect to erroneous data and bad records; capture/confirm contact information such as telephone number, Federal Express address and email address Complete the "easiest" cases first; identify promising cases for pass two
Pass Two Focus interviewing resources on a subsample of reluctant and/or difficult-to-reach owners Focus interviewing resources on a subsample of reluctant and/or difficult-to-reach owners

The two-pass procedure worked as follows for sample batches one through three.

Screening. During the first pass of screening, interviewers followed a protocol designed to work cases quickly and efficiently. At the end of pass one, NORC subsampled pending and promising28 screener cases at a rate of 50%. These subsampled cases were moved on to pass two. Cases identified as not promising were not eligible for pass two and no further effort was expended on trying to complete them. Cases for which hard appointments had been scheduled in pass one were selected for pass two with certainty. Cases subsampled into pass two received a more intense calling protocol. More details on subsampling can be found in Chapter 6.

Main Interviewing. During the first pass of main interviewing, interviewers followed a protocol designed to work cases quickly. At the end of pass one, NORC subsampled pending and promising main cases at a rate of 60%. Cases subsampled into pass two received a more intense calling protocol. Cases for which hard appointments had been scheduled in pass one, or cases partially completed in pass one, were selected for pass two with certainty. More details on subsampling can be found in Chapter 6.
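
The subsampling step between passes can be summarized in a short sketch. The rates (50% for screener cases, 60% for main cases) and the certainty selection of hard appointments and partially completed main cases come from the description above; the case representation and random-selection mechanics are illustrative assumptions, not NORC's actual system.

# Sketch of pass-two subsampling, per the rules described above:
# - hard appointments (and partially completed main cases) go to pass two with certainty;
# - other pending, promising cases are subsampled at 50% (screener) or 60% (main);
# - nonpromising and finalized cases receive no further effort.
import random

def select_pass_two(cases, rate, certainty_statuses=("hard_appointment", "partial_complete")):
    """Return the subset of pending cases to be worked in pass two."""
    selected = []
    for case in cases:
        if case["status"] in certainty_statuses:
            selected.append(case)                  # selected with certainty
        elif case["status"] == "pending_promising" and random.random() < rate:
            selected.append(case)                  # subsampled at the given rate
        # nonpromising or finalized cases are dropped from further calling
    return selected

screener_cases = [
    {"id": 1, "status": "pending_promising"},
    {"id": 2, "status": "hard_appointment"},
    {"id": 3, "status": "nonpromising"},
]
pass_two = select_pass_two(screener_cases, rate=0.50)   # use rate=0.60 for main interviewing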


4.3 Mailing Advance Respondent Materials

The primary purpose of the advance mailing was to notify business owners that NORC would soon be contacting them to conduct the SSBF. The advance letter introduced respondents to the SSBF, NORC and the FRB, explained the purpose and benefits of the study, addressed privacy and confidentiality concerns, and provided website addresses and a telephone number, so owners could get additional information if they desired. Because of the advance mailing, the first telephone call to a firm was not generally a cold call. By providing advance notice and addressing respondent questions and concerns, the advance mailing was designed to improve respondent cooperation and responsiveness. Table 4.3 shows the items included in the advance mailing.

Advance mailings for each batch were sent out over a seven-day period in sub-batches or replicates sized to match the capacity of the interviewing staff to contact firms within one week of mailing. Each replicate of advance mailings was sent approximately one week before that replicate was released to interviewers to begin conducting the screening interview.

Generally, advance mailings were on schedule and problem free. NORC deviated from the advance mailing schedule in two instances. First, the advance mailings for batch two were delayed at the direction of the FRB because, at the time, the CATI program and sampling plan required further work. As a result, the 5,666 advance letters for batch two were released in three replicates spread over 2.5 weeks rather than as a single replicate over one week. Second, the entire batch four advance mailing of 6,800 letters was sent over a two-day period in an attempt to accommodate the compressed batch four schedule. In hindsight, mailing all 6,800 letters in just two days was counterproductive, as it took interviewers more than one week to contact or make the first "touch" to every case in the batch.

Table 4.3 Items Sent in Advance Mailing
Item Purpose
Cover letter from project director (Appendix K) Introduced SSBF, NORC and the FRB; explained study intent and benefits; addressed privacy and confidentiality concerns; set tone of future communications; provided website addresses
Letter from Chairman Alan Greenspan (Appendix N) Established legitimacy of SSBF
General information brochure (Appendix P) Provided answers to frequently asked questions about the study
Buckslip (Appendix O) Provided need-to-know facts about the study, even if a respondent chose not to look at any of the other mailing contents


4.4 Sample Release and Management


4.4.1 Overview

NORC started with Dun & Bradstreet's Dun's Market IdentifiersTM (DMI) file and used Smartmailer software to provide a clean, standardized address format. The sample was then loaded into NORC's SurveyCraft Telephone Number Management System (TNMS) in replicates containing approximately 300 businesses.

As originally planned, the first two sample batches would each comprise about 40% of the sample with a third and final sample batch comprising 20%. However, due to a lower-than-expected completion rate, batch three was increased to be the same size as the first two batches, and a fourth sample batch of 6,800 cases was released in November 2004 (see next section). With the addition of the fourth batch and the increased size of the third batch, each of the first three batches comprised 23.8% of the total sample with the larger batch four comprising 28.6% (Table 4.4).

Table 4.4 Sample Batch Size, Initial Advance Mailing and Sample Release Dates (2004)
  Batch One Batch Two1 Batch Three Batch Four TOTAL
No. of Cases 5,666 5,666 5,666 6,800 23,798
% of all Cases 23.8% 23.8% 23.8% 28.6% 100.0%
Initial Advance Mailing Date 6/3 7/9, 7/15, 7/22 9/1 10/25 n/a
Initial Release Date for Interviewing 6/10 7/19, 7/22, 7/29 9/9 11/1 n/a

1Batch two cases were sent advance letters and released for interviewing in three waves over 2.5 weeks. Return to Table

NORC's guideline was to release sample as needed with the intent to have all cases metered - i.e., made available to interviewers for calling - within one week of being released.

In addition to overseeing the sample as a whole, the project statisticians were responsible for subsampling cases between the first and second passes of each screening batch, and between the first and second passes of each main-interview batch.

Telephone data collection began on June 10, 2004, when the initial sample replicates were released for interviewing. Table 4.5 shows the weekly release of cases and production data.

Table 4.5 Production Workflow
Week ending Number of Advance Mailings Number of Cases Released1 Completed screeners Completed eligible screeners Worksheet mailings sent2 Completed main interviews3
6/5/2004 900 0 0 0 0 0
6/12/2004 4,766 5,666 418 311 0 0
6/19/2004 0 0 560 389 557 0
6/26/2004 0 0 762 549 606 0
7/3/2004 0 0 412 282 282 84
7/10/2004 442 442 327 215 269 132
7/17/2004 3,137 3,137 321 229 203 121
7/24/2004 2,087 2,087 729 527 530 94
7/31/2004 0 0 757 509 489 112
8/7/2004 0 0 520 359 421 153
8/14/2004 0 0 374 262 343 162
8/21/2004 0 0 394 318 331 163
8/28/2004 0 0 281 221 310 106
9/4/2004 5,666 0 95 63 169 129
9/11/2004 0 5,666 96 58 40 129
9/18/2004 0 0 830 580 567 123
9/25/2004 0 0 569 411 508 123
10/2/2004 0 0 686 504 513 217
10/9/2004 0 0 536 351 351 159
10/16/2004 0 0 351 238 376 140
10/23/2004 0 0 204 136 200 198
10/30/2004 6,800 0 75 53 94 156
11/6/2004 0 6,800 899 612 43 121
11/13/2004 0 0 797 556 836 152
11/20/2004 0 0 644 442 785 182
11/27/2004 0 0 371 250 150 139
12/4/2004 0 0 432 291 496 213
12/11/2004 0 0 282 188 249 203
12/18/2004 0 0 411 280 297 110
12/25/2004 0 0 143 90 141 130
1/1/2005 0 0 177 117 58 153
1/8/2005 0 0 159 113 208 124
1/15/2005 0 0 360 131 185 167
1/22/2005 0 0 89 64 102 142
1/29/2005 0 0 0 0 5 211
2/5/2005 0 0 0 0 0 35
TOTAL 23,798 23,798 14,061 9,699 10,714 4,5833

1Cases were released to be screened approximately 3 days after advance mailings went out. Return to Table
2Includes multiple requests for worksheets by the same firm. Return to Table
3All completed cases, including those that did NOT pass the FRB's criteria for completeness. See Section 5.2.3 for a description of the completeness check process. Return to Table


4.4.2 Release of Sample Batch Four

This section discusses more fully the implementation of the fourth batch, which was not part of the original design and which was treated differently than other batches. Response rates were lower than originally anticipated. As a result, in October 2004, the FRB and NORC agreed that a fourth sample batch was necessary to ensure that NORC would complete 4,000 main interviews by January 31, 2005. Batch four differed from the previous batches in three ways: size, schedule and protocol.

A larger sample. Batch four had 6,800 cases compared with 5,666 cases in each of the first three batches. NORC used completion rates calculated from the earlier batches to estimate the number of batch four cases needed to complete 4,000 main interviews (a sketch of this calculation appears at the end of this section).

A shorter schedule. Batch four data collection lasted thirteen weeks29. By contrast, batches one through three were in the field from 24 to 31 weeks. Also, batch four spanned Thanksgiving, Christmas and New Year's Day, weeks that tend to be less productive than non-holiday weeks, further compressing the practical schedule.

A protocol without subsampling. Because of the shorter schedule, NORC did not have time to employ a two-pass approach for batch four screening or main interviewing. Batch four cases were not subsampled. They were, however, generally worked more aggressively than cases in the first three batches: NORC managed the sample so that all "virgin" cases got called as quickly as possible and did not, for example, sit in queue behind cases that had multiple ring-no answer call outcomes. NORC trained and added screening interviewers to ensure that batch four cases were worked promptly when released, and did not suffer from competition for resources with main interviewing.

In main interviewing, batch four cases were called by interviewers who tended to be among the strongest on the project. Main batch four cases were sent conversion letters offering a $200 incentive on a rolling basis - once per week during December 2004. Near the end of data collection, in January 2005, all eligible batch four cases completing the screener were offered $200 to complete the main interview, as opposed to the $50 offered to batch four respondents who completed the screener before then. Because of the impending end of the data collection period, NORC accelerated the process of offering the larger incentive amount rather than waiting to make this offer when respondents were contacted to complete the main interview.
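
The batch-four sizing logic referenced above amounts to dividing the number of main interviews still needed by the expected yield per released case, as observed in earlier batches. The report does not give the per-stage rates NORC used, so the values in this sketch are placeholders; only the structure of the calculation follows the description above.

# Sketch of the batch-four sizing calculation: cases needed = remaining completes
# divided by expected completes per released case. The rates below are made-up
# placeholders, not the figures NORC actually used.
def cases_needed(remaining_completes, eligibility_rate, screener_completion_rate, main_completion_rate):
    completes_per_case = eligibility_rate * screener_completion_rate * main_completion_rate
    return round(remaining_completes / completes_per_case)

# Example with hypothetical per-stage rates observed in earlier batches:
print(cases_needed(remaining_completes=1000,
                   eligibility_rate=0.70,
                   screener_completion_rate=0.55,
                   main_completion_rate=0.40))   # about 6,494 cases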


4.5 Timing and Schedule

NORC originally planned to start main study data collection in March 2004 and finish in October 2004. However, delays in developing the sampling plan and in developing, programming and testing the CATI questionnaires resulted in the start date being moved to June 10. The end date was subsequently moved to November 2004, but with the introduction of sample batch four, the final end date was January 31, 2005.

Table 4.6 and Table 4.7 show the length of passes one and two for screener and main data collection.

Table 4.6 Number of Weeks to Complete Screeners by Sample Batch and Pass
  Batch 1 Batch 2 Batch 3 Batch 41
Pass 1 6.4 6.0 6.5 n/a
Pass 2 4.0 4.8 4.5 n/a
Total 10.4 10.8 11.0 10.5

1Batch 4 had no subsampled pass; however, most screener refusal letters were sent four weeks after the start of screening, with an additional group of letters sent eight weeks after screening started. Return to Table

Table 4.7 Number of Weeks to Complete Main Interviews by Sample Batch and Pass
  Batch 1 Batch 2 Batch 3 Batch 41
Pass 1 11.0 14.5 12.2 n/a
Pass 2 18.4 11.4 5.8 n/a
Total 29.4 25.9 18.0 12.0

1Batch 4 had no subsampled pass; however, main conversion letters were sent on a rolling basis during December 2004; most main cases were worked four to six weeks before a letter was sent. Return to Table

The design of the study, particularly the three sample batches with two passes each of screening and main interviews and then the addition of a fourth batch, required collaborative, iterative review by the FRB and NORC throughout the data collection period.


4.6 Screener Data Collection


4.6.1 Introduction

The sample for the SSBF was drawn from a list frame provided by Dun and Bradstreet (D&B). Because not all of the eligibility criteria were available from the frame, and because the frame data were sometimes incorrect, NORC interviewers conducted screening interviews with owners or owner proxies from the sampled firms. The main purposes of the screener interview were to:

• Confirm and, if necessary, update D&B-provided information about the firm name and address;

• Identify firms as either eligible or ineligible for the main interview;

• Obtain the firm's organizational type (sole proprietorship, partnership, S corporation, C corporation) in order to send the appropriate worksheet to eligible firms;

• Get the owner's name and an address to which Federal Express would deliver the worksheet mailing to the owner; and,

• Establish credibility and build rapport with gatekeepers and eligible owners.

The mean time to complete the screener was 13.2 minutes; the median, 11.2 minutes. Overall, NORC spent an average of 1.1 hours per screener case, which represents the total number of interviewer hours spent working all screening cases divided by the number of completed cases. The average does not include time working on main interviews. See Appendix E for an explanation of how administrative timings were calculated.
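
The level-of-effort figure cited here is defined as total interviewer hours spent on screening divided by the number of completed screeners. A minimal sketch of that arithmetic follows; the completed-screener count comes from this chapter, while the total-hours figure is an assumed value chosen only to reproduce the reported 1.1 hours.

# Sketch of the screener level-of-effort metric described above.
def hours_per_completed_case(total_interviewer_hours, completed_screeners):
    return total_interviewer_hours / completed_screeners

# With 14,061 completed screeners (reported above) and a hypothetical
# 15,500 total interviewer hours, the metric is roughly the reported 1.1:
print(round(hours_per_completed_case(15_500, 14_061), 1))   # 1.1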

NORC had initially estimated that screening would require 0.5 hours per case, calculated by dividing total interviewer time spent screening by the number of completed screeners. The increase to 1.1 hours per case could be due to several factors. Although questions about the firm owner's race and ethnicity were removed from the screener because minority oversampling was dropped for the 2003 survey, other factors may have contributed to the increased administration time:

• As discussed in Chapter 2, the introduction text was significantly enhanced, and a protocol was programmed into the instrument to ensure that interviewers made three attempts to speak with the owner before they were permitted to collect screener data from a knowledgeable proxy. These changes were made to maximize the number of screeners completed with the most knowledgeable screener respondents (owners), minimize the number of screeners completed with someone at the firm other than the owner, and exercise greater control over the selection of proxy respondents, to ensure data quality.

• The number of calls required to complete a screener increased from 5.4 in 1998 to 6.7 in 2003 (see Appendix HH for 2003 number of calls). Median administration time increased from 4.5 minutes per case in 1998 to 11.2 minutes per case in 200330. Overall, the average number of calls made to firms during screening increased from 6.9 calls per case in 1998 to 9.5 calls per case in 2003.

• In 1998, interviewers completed screeners with 69% of sampled respondents (see The 1998 SSBF Methodology Report Table 6.4). In 2003, interviewers completed screeners with only 59% of sampled respondents (see Table 4.13 below). This lower-than-anticipated response rate worked to increase the average administration time.


4.6.2 Protocol for Working Screening Passes

4.6.2.1 Overview

At the start of data collection, NORC developed a set of calling rules and guidelines to serve as the protocol for working each case in a pass, as well as to systematically determine which cases would be eligible for pass two subsampling. In this section we discuss how cases were worked in each pass, how pass one was closed, and how cases were subsampled to pass two.

For the first three sample batches, screening was divided into pass one and pass two interviewing. All firms in a batch went through the pass one protocol. Only a portion of firms that did not complete screening in pass one were attempted in pass two. (Note: Batch 4 screening protocols are discussed separately in Section 4.6.2.5).

The purpose of pass one was to: 1) complete the "easiest" screener cases first; 2) identify promising cases for pass two; and 3) clean the sample with respect to erroneous data and bad records. The purpose of pass two was to focus interviewing resources on a subsample of reluctant and/or difficult-to-reach owners, with greater attention on finding the best gaining-cooperation strategy for each case.

In pass one NORC worked cases efficiently by controlling the number of calls made to unpromising cases - for example, to firms that were likely to be no longer in business. This enabled NORC to expend more effort attempting to complete the promising cases. Locating specialists performed additional steps to try to locate firms that were not reachable using the sampling frame data or to confirm that we were using the best available locating information, to ensure that all workable cases were being worked most effectively.

In pass two NORC worked a subset of promising cases, plus cases for which hard appointments had been scheduled in pass one. NORC worked these cases more intensively, with a more concentrated application of resources than in pass one.

Call notes, outcome codes and level of effort reports provided the information needed to execute the two-pass approach. Level of effort reports, discussed in more detail below, were reviewed by the FRB beginning one week before the date NORC believed a pass could be ended.


4.6.2.2 Protocol for Pass One Screening

For pass one screening, interviewers made up to seven calls, including at least one weekday evening call and one Saturday call. The calls were conducted over a minimum two-week period. Interviewers tried to conduct the interview or set up an appointment during the first seven calls.

If a case was deemed promising within the first seven calls, interviewers made up to 13 additional calls. Promising was defined as having at least one call that resulted in contact with a person at the sampled business, or an indication that the business was in operation, such as an answering machine with a greeting that identified the sampled business. Nonpromising was defined as no contact with a person at the sampled business and no indication during any call attempt that the business was in operation. After seven calls, interviewers sent nonpromising cases to supervisor review. A supervisor either sent the case to interviewing for more work, sent it to locating, or determined that it had completed pass one.
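
These pass one calling rules amount to a simple decision procedure: after the first seven calls, a case is either promising (worked with up to 13 more calls) or nonpromising (sent to supervisor review). A minimal sketch follows; the call-history representation is an illustrative assumption, not NORC's actual disposition coding.

# Sketch of the pass one screening call rules described above.
# A case is "promising" if any of the first seven calls reached a person at the
# business or indicated the business was operating (e.g., a business answering
# machine); otherwise it goes to supervisor review.
MAX_INITIAL_CALLS = 7
MAX_ADDITIONAL_CALLS = 13   # promising cases could receive up to 13 more calls

def classify_after_initial_calls(call_history):
    """Return the next action for a case once the initial calls are exhausted."""
    first_calls = call_history[:MAX_INITIAL_CALLS]
    promising = any(c.get("contact_with_person") or c.get("business_confirmed")
                    for c in first_calls)
    if promising:
        return "continue_calling"     # up to MAX_ADDITIONAL_CALLS further attempts
    return "supervisor_review"        # supervisor routes to more calling, locating, or end of pass one

calls = [{"contact_with_person": False, "business_confirmed": False}] * 7
print(classify_after_initial_calls(calls))   # 'supervisor_review'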

Hostile refusals - for example, "If you call me again, I will call my attorney," or refusals in which the respondent used harsh profanity - were coded as final hostile refusals and not retried.

Protocol for unconfirmed-firm-name cases in screener pass one. As part of the screener interview, respondents were asked to confirm the D&B-provided firm name. If a respondent provided a firm name that the interviewer felt was significantly different from the D&B-provided name, the interview was suspended and sent to supervisor review. If a supervisor determined that the respondent-provided name was sufficiently similar to the preloaded name, the case was sent back to interviewers to complete. If the supervisor determined that the names were sufficiently different to suggest that the wrong firm had been called, the case was sent to locating.

A description of NORC's locating procedures is provided later in this chapter, in Section 4.6.4.

The owner's name was listed as the firm name in 200 to 300 cases per batch in the sample frame data delivered to NORC by Dun and Bradstreet. If an interviewer could not confirm with a respondent that the owner's name was the actual firm name, the interviewer asked for the firm's correct name. The interview was then broken off and the case was sent to a supervisor for review. A supervisor would compare the preloaded SIC description of the firm with the firm name. If the comparison strongly indicated that NORC had called the correct firm, the case would be put back into circulation. The interviewer assigned to recontact the case would be told to enter a comment at the appropriate question (A2_2) indicating that the owner name had been incorrectly preloaded as the firm's name, and to continue the interview. If the comparison suggested that NORC had called the wrong number, the case was sent to locating. In these situations locators would attempt to find the right telephone number for the firm using the owner's name, SIC category, and other information available from the sample frame such as street address.

Cases sent to locating for which a previously unknown telephone number was found for the D&B-listed firm name were sent back to interviewers and worked according to established call rules, unless the new number was identified late in a pass. During the last two weeks of a pass, such a case was worked according to an accelerated (shortened) call schedule to end the pass on time and keep to the overall data collection schedule. The accelerated schedule called for working the case for up to 72 hours - about three days - from the time the case was put back into circulation. Sundays were not included in the 72-hour period. If during the 72 hours a hard appointment was arranged, the case remained active and the hard appointment was kept, even if the appointment was scheduled beyond the 72-hour period.
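
The accelerated end-of-pass window is a small scheduling rule: work the case for 72 hours from re-release, with Sunday hours not counted (hard appointments are kept even if they fall outside the window). The sketch below computes the deadline under one reasonable reading of that rule; how NORC actually tracked the clock is not specified in the report.

# Sketch of the accelerated 72-hour call window described above, with Sunday
# hours excluded from the count.
from datetime import datetime, timedelta

def accelerated_deadline(released_at, window_hours=72):
    deadline = released_at
    remaining = timedelta(hours=window_hours)
    while remaining > timedelta(0):
        step = min(remaining, timedelta(hours=1))
        deadline += step
        if deadline.weekday() != 6:        # weekday() == 6 is Sunday; those hours do not count
            remaining -= step
    return deadline

# A case put back into circulation on a Friday afternoon gets its Sunday hours back:
print(accelerated_deadline(datetime(2004, 11, 19, 15, 0)))   # 2004-11-23 15:00, Sunday excluded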

Answering machine protocol. NORC's protocol for leaving messages on answering machines was to leave a message on the third consecutive call that reached an answering machine at the same business. NORC developed a scripted message that provided a toll-free number so respondents could contact NORC. (See Appendix C.6)

4.6.2.3 Protocol for Subsampling

The actual time required to complete pass one screening for batches one through three varied by batch. The decision to end pass one screening was predicated on the FRB and NORC agreeing that all cases had been appropriately worked, and that there were no unexpected disposition codes - erroneous, false, or unintended codes - in any of the cases' call histories.

NORC developed a pass-based level of effort (LoE) report to help determine when cases had been sufficiently worked. The LoE report provided information on call attempts, completed calls and refusals by disposition code. For example, for the group of cases that had at least one refusal in their call history, the report showed the average number of refusals and the minimum and maximum number of refusals for the group. The LoE report provided similar averages, minimums and maximums for call attempts and for calls in which contact was made with the firm. The report also showed whether or not NORC had made at least seven call attempts for all noncontacted cases. The report provided a relatively easy way to demonstrate that all cases had been adequately worked. (See Appendix II for an example.)
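
A level-of-effort report of this kind - per-group minimum, average and maximum counts of attempts, contacts and refusals, plus a check that noncontacted cases received at least seven attempts - can be sketched as a simple aggregation. The case fields below are illustrative; the actual report layout appears in Appendix II.

# Sketch of the level-of-effort (LoE) aggregation described above.
from statistics import mean

cases = [
    {"attempts": 9, "contacts": 2, "refusals": 1},
    {"attempts": 7, "contacts": 0, "refusals": 0},
    {"attempts": 15, "contacts": 4, "refusals": 2},
]

def loe_summary(group, field):
    values = [c[field] for c in group]
    return {"min": min(values), "mean": round(mean(values), 1), "max": max(values)}

refused = [c for c in cases if c["refusals"] > 0]
print("Cases with at least one refusal:", loe_summary(refused, "refusals"))
print("All cases, call attempts:", loe_summary(cases, "attempts"))

noncontacted = [c for c in cases if c["contacts"] == 0]
print("All noncontacted cases had at least 7 attempts:",
      all(c["attempts"] >= 7 for c in noncontacted))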

Pass one screening continued until the FRB agreed that the calling protocol had been met. One week before the planned end of a pass one, the FRB and NORC reviewed the level of effort report, along with notes explaining the circumstances of problematic cases and outliers. The FRB and NORC discussed the LoE report and decided how to handle special-circumstance cases. The reports were reviewed again one day before a pass ended and on the final day of the pass. When NORC and the FRB were satisfied that a screener pass one had been sufficiently worked according to existing protocols and calling rules, NORC closed the pass and began the subsampling procedure.

At the completion of pass one screening, supervisors were able to classify cases as one of the following:

• Completed screeners, eligible for main interview

• Completed screeners, ineligible for main interview

• Final non-interviews (final eligibility unknown - final nonrespondent)

• Hard callback appointments (sent to pass two with 100% certainty)

• Promising cases not yet complete (subsampled for pass two at 50%)

At the completion of screening pass one, all cases for which there had been no contact with a person, and cases with other final outcomes such as a hostile refusal, language barrier, or incapacitated owner, were considered nonpromising. These nonpromising cases were not subject to pass two subsampling and became final non-interviews with their final eligibility unknown. Nonpromising cases included cases with the following outcomes:

• Language barrier

• Computer tone/fax

• Fast busy

• Combination of busy and no answer with no other dispositions

• All busy

• All no answer

• Unavailable during field period

• Incapacitated

• Hostile refusal

• Disconnected telephone number with no new number available

In the first three sample batches, incomplete cases with the following outcomes at the end of pass one were subsampled into pass two at a rate of 50%.

• Transferred to voicemail, no message left

• Transferred to voicemail, message left

• Owner/proxy to call 800 number

• Hung up during intro (HUDI)

• Proxy refusal

• Owner refusal

• Gatekeeper refusal

Cases with the last four dispositions in the above list received refusal letters and had a one-week cooling-off period between passes while the letters could be delivered to their recipients.
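
Putting the two preceding lists together, the end-of-pass-one handling of an incomplete screener case can be summarized as a small mapping from its final pass one disposition to its treatment. The disposition labels follow the lists above; the mapping structure and label strings are an illustrative sketch, and NORC's actual system used its own disposition codes.

# Sketch of end-of-pass-one case handling for screening batches one through three,
# following the disposition lists above.
NONPROMISING = {
    "language_barrier", "computer_tone_fax", "fast_busy", "busy_no_answer_only",
    "all_busy", "all_no_answer", "unavailable_during_field_period",
    "incapacitated", "hostile_refusal", "disconnected_no_new_number",
}
SUBSAMPLED_AT_50_PERCENT = {
    "voicemail_no_message", "voicemail_message_left", "owner_to_call_800_number",
    "hung_up_during_intro", "proxy_refusal", "owner_refusal", "gatekeeper_refusal",
}
REFUSAL_LETTER = {"hung_up_during_intro", "proxy_refusal", "owner_refusal", "gatekeeper_refusal"}

def end_of_pass_one_treatment(disposition):
    if disposition in NONPROMISING:
        return "final non-interview, eligibility unknown"
    if disposition in SUBSAMPLED_AT_50_PERCENT:
        letter = "; refusal letter and one-week cooling-off" if disposition in REFUSAL_LETTER else ""
        return "eligible for 50% pass-two subsampling" + letter
    return "handled outside these lists (e.g., completed case or hard appointment)"

print(end_of_pass_one_treatment("owner_refusal"))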

As previously stated, the two-pass protocol was not used for sample batch four. Instead, batch four screening cases that had been worked to the pass one protocol and had the last four dispositions in the above list were sent screener refusal letters after being worked for about four weeks. There were two mailings to pending batch four screener cases that had a refusal call outcome; both mailings were in December 2004.

4.6.2.4 Protocol for Pass Two Screening

Because the subsampling procedure removed some cases from circulation, NORC was able to concentrate its telephone shop resources on a smaller number of cases. As a result NORC was able to use a more intensive, focused protocol in pass two compared to pass one. NORC found that pass two respondents were, generally, more resistant than anticipated. To achieve the needed number of eligible screened cases, NORC continued to work pass two cases for one to two weeks more than planned31.

At the beginning of pass two of batch one, NORC selected several experienced interviewers to work pass two cases. As additional sample continued to go into pass two, beginning with batch two, NORC developed a larger pool of interviewers to work these cases. All selected pass two interviewers attended an in-house gaining-cooperation workshop, and for the remainder of the study NORC continued to identify exceptional interviewers for pass two based on performance in this special training session. Supervisors played a larger role in pass two than pass one, focusing their attention on the smaller number of subsampled cases, reviewing call notes and recommending calling and gaining cooperation strategies.

Pass two interviewers were given greater flexibility to seek a qualified proxy if the owner was unavailable. In pass one screening, interviewers were required to make three attempts to reach the owner before being permitted to find a qualified proxy32. In pass two, interviewers were still required to first attempt to locate an owner to take the interview, but were able to ask for a qualified proxy before making three attempts to find an owner33.

NORC developed individual calling plans for all cases in which no contact had been made after one week of calling. In addition, if an interviewer encountered a disconnected number, or appeared to have called the wrong firm, he or she immediately called directory assistance to attempt to find the correct telephone number for the firm rather than send the case to locating for searching. If the interviewer could not find a number for the firm, the case was sent to supervisor review.

Supervisors received daily reports showing the calling history of non-finalized (active) cases. By reviewing and discussing these data, supervisors were able to focus on the most promising cases and to prioritize cases that appeared to be languishing.

NORC made up to 20 call attempts to complete a screener case and, in some instances, many more than 20. NORC aggressively worked pass two cases to obtain the highest possible completion rate. The protocols specified minimum numbers of contacts, refusals, and call attempts; NORC determined on a case-by-case basis when additional attempts were likely to yield a completed interview. Pass two calling rules for final dispositions such as language barrier, computer/fax tone, physically or mentally handicapped owner, and other miscellaneous outcomes were identical to pass one calling rules. Calling continued on pass two until the response rate expectation had been met and, as with pass one, the Level of Effort report demonstrated that the calling protocol had been fulfilled.


4.6.2.5 Protocol for Batch Four Screening (Single Pass)

Batch four screening generally followed the calling rules for batches one through three. The major exception was that the two-pass approach was not employed and no subsampling was performed. In addition, because of the shorter timeframe for batch four, cases were worked somewhat more aggressively, e.g., with shorter intervals between call attempts. Batch four cases were determined to be promising if at least one contact was made with a person in the first seven call attempts. Promising cases received up to 20 calls - and in certain situations, more than 20 calls - to attempt to complete the screening interview.

At two intervention points, NORC reviewed pending batch-four screening cases to determine which ones were ready to receive a refusal conversion letter. The first point was four weeks after screening began; in the first week of December, nearly 1,000 batch four refusal cases were sent a conversion letter via Federal Express and recontacted three days later. Eight weeks after screening, an additional 334 refusal cases were sent conversion letters and recontacted several days later.

NORC continued batch four screening until it yielded a number of screened cases comparable to the yields of batches one through three. NORC had calculated the size of sample batch four, based on data from the previous batches, to yield enough completed screeners to reach the goal of 4,000 completed interviews. Batch four screening ended on January 21, after which time interviewers worked on main interviews for the duration of data collection.


4.6.2.6 Protocol Changes During Production

NORC made several protocol changes for the screener once the main study began. Interviewers were instructed to break off an interview and report the case to a supervisor if the firm's primary business appeared to be ranching or raising livestock. These firms were considered to be farms and therefore out-of-scope for SSBF 2003.

By the beginning of January 2005, newly screened eligible batch four cases had four weeks or less to complete the main interview - much less time than was given to respondents in the first three sample batches to complete the main interview. NORC was challenged to shorten the interval in which respondents were able to complete their worksheet, assemble tax forms and financial records for their firm's 2003 fiscal year34, and find time to complete a telephone interview that could exceed an hour. Accordingly, NORC made two changes to its screening protocol beginning in January:

• At the close of a screening interview with an eligible firm, an interviewer would try to schedule an appointment with the respondent to do the main interview35.

• Also at the close, the interviewer would inform the respondent that the incentive for completing the main interview was $200 (or the D&B package).


4.6.3 Refusal Conversion

Converting refusals was a critical part of data collection. Working with the FRB, NORC developed responses that interviewers could use to overcome the more common objections to participating. Throughout the two pretests and data collection, supervisors worked with interviewers to refine and expand these responses and to practice using them. NORC employed other techniques to help respondents understand the value of the study and the importance of their role in it. These techniques are discussed below.

Pass one. For pass one, interviewers used an FAQ job aid, skills learned in training, plus knowledge gained from on-the-job training, monitoring, and small-group meetings, to gain cooperation.

Interviewers with strong refusal-conversion skills were selected to call firms that repeatedly refused to be screened during pass one. The basic refusal conversion strategy was to call a firm again, determine why the respondent initially refused, address each specific objection raised by the respondent (and avoid issues not raised by the respondent), and then attempt again to administer the screener.

Requests to fax or remail the advance letter were fulfilled within 24 hours of the request. Firms that received a fax of the advance letter were called the same day, or the following day; firms that received a remailed advance letter were called within four days of the letter being remailed.

Pass two. Much greater emphasis was put on refusal conversion in pass two compared to pass one. Table 4.8 shows the conversion rate by batch and pass. Although pass two cases were subject to more rigorous refusal-conversion efforts, they were, for the most part, far more difficult to convert than pass one cases.

Prior to being called in pass two, respondents who had refused in pass one were sent a customized refusal-conversion letter. The rest of this section discusses refusal letters.

NORC sent refusal-conversion letters to respondents who refused or hung up during introduction (HUDI) during pass one and were subsampled into pass two. Other cases were identified in pass one as promising and subsampled into pass two, but were not sent a refusal letter. Whether or not they were sent a letter, pass two cases were subject to the same protocol.

Table 4.8 Screener Refusal Conversion Rate1 by Sample Batch and Pass
  Batch One Batch Two Batch Three Batch Four TOTAL
Pass One 17.2% 24.0% 22.0% n/a 21.1%1
N refusing and complete in P1 290 430 306 n/a 1,026
N refusing in P1 1,684 1,793 1,391 n/a 4,868
Pass Two 7.8% 11.4% 15.0% n/a 11.4%1
N refusing and complete in P2 28 41 57 n/a 126
N refusing in P2 361 359 381 n/a 1,101
Total2 28.4% 37.4% 33.1% 45.7% 36.3%

1Refusal conversion is defined within batch and pass. For example, a batch 1, pass 1 (B1P1) refusal conversion is a case that refused in B1P1 and was completed in B1P1, and a batch 1, pass 2 (B1P2) refusal conversion is a case that refused in B1P2 and was completed in B1P2. Return to Table
2Percentage of cases with any refusal outcome, in either pass, that completed the screener. This includes cases that refused in pass 1 but were completed in pass 2 - these cases would not be counted in the pass 1 or pass 2 refusal-conversion rates in this table. Return to Table
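
The within-batch-and-pass definition in footnote 1 can be verified directly from the counts in Table 4.8. The short Python sketch below simply re-computes the published rates; it introduces no data beyond the table.

    # Refusal-conversion rate = cases that refused and were later completed in the
    # same batch and pass, divided by all cases that refused in that batch and pass.
    refusals = {
        ("Batch One", "Pass One"): (290, 1_684),
        ("Batch Two", "Pass One"): (430, 1_793),
        ("Batch Three", "Pass One"): (306, 1_391),
        ("Batch One", "Pass Two"): (28, 361),
        ("Batch Two", "Pass Two"): (41, 359),
        ("Batch Three", "Pass Two"): (57, 381),
    }
    for (batch, pass_), (converted, refused) in refusals.items():
        print(f"{batch}, {pass_}: {converted / refused:.1%}")
    # Totals: pass one 1,026 / 4,868 = 21.1%; pass two 126 / 1,101 = 11.4%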

Refusal letters were personalized and signed by the project director. NORC used five versions of the basic screener conversion letter based on the expressed reason for refusal: concern about confidentiality, concern about study legitimacy, firm does not currently use credit or borrow, owner is too busy, and general/nonspecified reason for refusal. HUDI cases received the general/nonspecified letter. The version a respondent received was based on a supervisor's review of call notes for the case. Copies of the letters are in Appendix JJ. A plan to send a brochure about the FRB, called FRB In Plain English (Appendix KK), with refusal letters was abandoned after data collection started, once the team had gained a better sense of which materials would be most effective in converting reluctant respondents.

Over the course of data collection NORC varied the screener refusal mailing (in addition to using different letter versions) to try to boost response rates. For batch one, respondents received a project director letter, one or more SSBF-endorsement letters from national small-business organizations and, for some refusal types, the SSBF FAQ brochure. For batches two and three, NORC eliminated the FAQ brochure, and added two $1 bills to each mailing as a tool to aid respondents' recall of the mailing and encourage response (Table 4.9). See Appendix LL for screener letters that mentioned two dollars.

The screener refusal mailing changed for batch four. Batch four started in early November. Due to the holiday season, NORC was concerned that the previous configurations - a mix of letters and other materials sent by 1st-class mail - would get lost in the volume of holiday-period mail. The 1st-class No. 10 envelope and its contents were replaced by an overnight Federal Express envelope containing only a personalized refusal-conversion letter from the project director (PD).

Table 4.9 Description of Screener Refusal Mailing by Sample Batch
  Contents Delivery Method Cash enclosed
Batch 1 Letter from project director (PD), FAQ brochure and/or one or two endorsement letters1 1st-class mail None
Batch 2 Letter from PD and one or two endorsement letters2 1st-class mail $2
Batch 3 Letter from PD and one or two endorsement letters2 1st-class mail $2
Batch 4 Letter from PD Federal Express None

1For addressing concerns about confidentiality, and about not qualifying because the firm does not use credit, the mailing contained the PD letter and FAQ brochure. For addressing the concern of not enough time/too busy for the survey, the mailing contained the PD letter and endorsement letter from National Federation of Independent Businesses. For addressing the concern about the study's legitimacy, the mailing contained the PD letter and endorsement letter from Small Business Association. For general/non-specified refusals, the mailing contained the PD letter, the FAQ brochure and both endorsement letters. Return to Table

2Same as in batch 1 except the FAQ brochure was not included in any mailing. Return to Table

For batches one through three, NORC waited one week after mailing refusal letters before recontacting refusal cases by telephone, resulting in a one-week cooling-off period. During batch four, firms were recontacted one or two days after receiving their Federal Express mailing.

NORC and the FRB decided that beginning in mid-December, screener refusal letters (and other respondent correspondence) would be modified to include the end date of data collection, i.e., January 31, 2005, the last day on which respondents could complete the main interview. The purpose was two-fold: 1) to create a sense of timeliness, if not urgency - completing the survey could not be delayed or postponed indefinitely; and 2) to provide respondents with needed information - after a certain date they would no longer be able to take the survey or be eligible for an incentive.


4.6.4 Locating

Finding firms is a persistent challenge in business surveys. Firms quietly go out of business. They change their names. They move headquarters, they merge with other firms, and they get purchased outright. For smaller firms in particular, the churn of dynamic capitalism often leaves little tangible documentation in its wake. For these reasons, many firms were not easily or quickly located from their sample information. For the 2003 SSBF, screener cases were sent to locating if:

• The preloaded telephone number was disconnected or had a fast busy on every call attempt;

• The firm at the preloaded telephone number was confirmed as not the firm in the D&B sample, i.e., the telephone number was wrong;

• The firm had been called at least seven times over a two-week period and the outcome of every call attempt was ring-no answer; or

• The firm was screened out as having an unconfirmed firm name that was deemed sufficiently different from the D&B preloaded name.

Supervisors reviewed call notes and call histories to determine if a case should be sent to locating. The average time a case was in locating before being put back in circulation, or finalized, was five business days.

Locators performed up to six steps to find a firm's telephone number. First, they called the preloaded number to verify that the case had been correctly identified as a locating problem. If the number was correct, the case was sent back to interviewing. If the number was not correct, the locator called directory assistance and asked for the business name in the city and surrounding areas. If the business was found, then the case was sent back into interviewing. If the case was still not found, locators checked four online locating sites and ran searches. If any meaningful leads were found, the case was sent back to interviewing. If no leads were found, the case was sent to a supervisor for a final review of possible leads, to ensure that it had been worked thoroughly.


4.6.5 Receipt Control

Throughout screener data collection, advance letters that were returned to NORC by the post office were receipted as undeliverable (Table 4.10). These cases were worked by the telephone shop in the same way as cases that had their advance letters delivered. However, among cases selected for the 5% follow-up (see Section 4.6.13), the information was used to help determine whether or not a firm was in business.

Table 4.10 Number of Returned Advance Letters by Sample Batch
  Batch 1 Batch 2 Batch 3 Batch 4 TOTAL
Number of advance letters sent 5,666 5,666 5,666 6,800 23,798
Number of returned advance letters 484 508 519 487 1,998
% of returned letters 8.5% 9.0% 9.2% 7.2% 8.4%


4.6.6 Quality Control

Prior to data collection, NORC and the FRB knew that screener quality control would be critical to the survey's success. For larger firms, interviewers were challenged to get through to an owner. For smaller firms, it was often relatively easy to find the owner but difficult to keep him or her on the telephone. Interviewers had to be prepared to address myriad objections to participating. They had to emphasize the importance of participation without alienating respondents. Although the screening instrument was short and relatively straightforward, interviewers needed to use this limited opportunity to build rapport and establish a sense of commitment among eligible respondents that would carry through to the main interviews. And, of course, interviewers had to carefully administer the eligibility questions and accurately record responses.

NORC supervisors and the FRB monitored screening interviews throughout the data collection period. NORC monitored 1,140 screener interviews with a variety of outcomes including eligibles, ineligibles, and incompletes. Monitoring included observing interactions with gatekeepers, gaining cooperation with respondents, finding owners or proxies, and conducting locating procedures, as well as administering the screener instrument.

Observations made while monitoring were shared with interviewers during one-on-one sessions and group meetings. To be effective, feedback needed to be provided within a very short time after observations. Typically, supervisors would provide feedback to an interviewer within minutes after an observation.

Every SSBF interviewer was assigned a supervisor to provide one-on-one feedback. Interviewers met their assigned supervisors at least once a week to review new issues, hours per case (HPC), dials per hour (DPH) and to discuss performance-improvement strategies.
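
Hours per case (HPC) and dials per hour (DPH) are simple ratios. The sketch below uses made-up weekly figures and hypothetical function names; the report does not specify how NORC's systems computed these metrics.

    def hours_per_case(total_interviewer_hours, completed_cases):
        # HPC: interviewer hours charged divided by cases completed
        return total_interviewer_hours / completed_cases

    def dials_per_hour(total_dials, total_interviewer_hours):
        # DPH: call attempts (dials) divided by interviewer hours charged
        return total_dials / total_interviewer_hours

    # Example with illustrative weekly figures for one interviewer:
    print(round(hours_per_case(32.0, 12), 2))    # 2.67 hours per completed case
    print(round(dials_per_hour(310, 32.0), 1))   # 9.7 dials per hour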

NORC also held small-group meetings with interviewers. The meetings were initially held weekly, but as interviewers gained experience, the meetings were held less frequently. Led by supervisors, the production manager and the assistant production manager, the meetings gave the staff opportunities to discuss FRB feedback and issues identified through monitoring and data review.


4.6.7 Interviewer Misconduct

Most occurrences of misconduct were minor behavior infractions. These included interviewers taking extended breaks, not entering their time properly or accurately, and not staying focused on a task. A few interviewers were unable, or unwilling, to consistently read questions verbatim.

NORC believes the amount of interviewer misconduct was low. One reason was the high ratio of supervisors to interviewers. Interviewers were monitored frequently, both formally and informally, and they were aware of the high level of monitoring.

One interviewer was discovered to have falsified a disposition code. This interviewer was immediately dismissed after the incident. The case was handled by a supervisor, who corrected the disposition code.


4.6.8 CATI Changes

The vast majority of CATI changes were made prior to data collection. Inevitably, however, issues were discovered that required changes to the CATI after data collection started. On June 23, 2004, NORC put a new version of the screener CATI into production. The primary fix enabled interviewers to continue calling cases after three call attempts. Before the fix, on the fourth call to a case, CATI was programmed to go to the Suspend screen, terminating the call. On October 6, 2004, another version went into production that corrected the criteria for eligibility flags 4 and 6 to include don't know and refused responses to question A9_2, "Was [STREET ADDRESS] ever the firm's headquarters, or ever a branch location?"

Table 4.11 shows the versions of the CATI screener used in production and the changes implemented for each version.


4.6.9 Level of Effort

Level of effort is the number of calls made to a firm to achieve a given outcome - for example, the average number of calls per completed eligible screener. NORC obtained 23,798 cases for the screening effort. The total number of calls made to these cases was 226,178, for an average of 9.5 calls per case. Level of effort by individual final screener outcome is shown in Appendix MM. A comparison of level of effort for all completed screeners and all non-completed screeners is shown in Table 4.12.
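
The level-of-effort figures quoted above and in Table 4.12 follow from a single division, reproduced in the sketch below using only the counts reported in this section.

    # Calls per case = total calls made / total cases worked, overall and by final status.
    calls = {"complete": 94_390, "non-complete": 131_788}
    cases = {"complete": 14_061, "non-complete": 9_737}

    for group in ("complete", "non-complete"):
        print(f"{group}: {calls[group] / cases[group]:.1f} calls per case")
    print(f"all: {sum(calls.values()) / sum(cases.values()):.1f} calls per case")
    # complete: 6.7; non-complete: 13.5; all: 226,178 / 23,798 = 9.5 (Table 4.12)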


4.6.10 Unweighted Completion Rate

NORC attempted to complete as many screeners as possible. The number of screened eligible cases determined the number of cases that could become completed main interviews. For screening, the unweighted completion rate is defined as the percentage of cases worked that resulted in completed interviews. When screening ended on January 21, 2005, NORC interviewers had completed 14,061 cases for an unweighted completion rate of 59.1%. Completed screeners comprised:

• All cases that finished the interview and were ruled either eligible or ineligible. The vast majority of completed screeners fell into this category.

• Cases that did not finish the interview, but were classified as screened ineligible based on information provided by a respondent. These cases were reviewed by NORC supervisors and the FRB. A fictitious example would be an organization whose preloaded name was "Unity Church." An interviewer called the organization, and before getting to the question of whether the firm was nonprofit or for-profit, the respondent said that his organization was a church, not a business, and hung up.

• Cases confirmed by locators to be currently out of business or not in business as of December 2003.

By batch and pass, completion rates are shown in Table 4.13.

Table 4.11 List of Screener CATI Changes During Production
CATI Version Date List of Changes in Each Version of CATI (by Date)
10-Jun-04 • After the third call to a firm, CATI reverted to the Suspend screen, i.e., interviewers could not continue making calls to that firm. The program was fixed so that calling could continue beyond the third call attempt.
15-Jun-04 • Original Firm Name and Updated Firm Name variables were written back to TNMS so that they could be passed to the main questionnaire.
23-Jun-04 • This version fixed a problem where SCRNFLG was not reset to "1" (Owner) if an interviewer mistakenly took the proxy path, and then backed up in CATI and coded A1 as "1" (Owner available). It also added a flag of "3" to GETPROXY to indicate that A1_2 has been reached.
25-Aug-04 • The SCRQ63 variable records the day. CATI was fixed so that this variable no longer records values of "DO" and "RE."
6-Oct-04 • A11_1_6 fixed so that DK and RF responses now go to A11_1_7. For one case, an RF response at A11_1_6 had skipped to A12.
6-Oct-04 • A2_3 fixed so that when a proxy said DK, the information was captured and for the main interview, respondent was asked A5_1_1, A5_2 and A5_3. The fix was made to the screener, but it corrected how CATI operated in the main interview.
6-Oct-04 • The conditions in the hard-copy questionnaire for setting the eligibility flag to 3, 4, 5 and 6 all refer to question A9.2 (if A9.2 is DK and owner, set the flag to this value; if A9.2 is DK and proxy, set the flag to that value; and so forth). CATI was fixed to reflect the hard-copy questionnaire conditions; specifically, eligibility flags 4 and 6 were corrected.
1-Nov-04 • When zip code, MSA, county, city or state were updated in the screener for the firm's physical address, the data were not added to the TNMS file and did not cross the bridge to the main questionnaire. Both CATI and the "bridge" program that moved data from the screener to the main questionnaire were fixed.

Table 4.12 Level of Effort by Screener Completes and Non-Completes
Final Disposition Number of Calls Percent of Total Calls Number of Cases Percent of Total Cases Calls per Case
Non-complete 131,788 58.3% 9,737 40.9% 13.5
Complete 94,390 41.7% 14,061 59.1% 6.7
All 226,178   23,798   9.5

Table 4.13 Screener Completion Rate1 by Batch and Pass (Unweighted)
  Batch 1 Batch 2 Batch 3 Batch 4 TOTAL
Pass One 50.4% 51.5% 51.6% n/a 51.2%
Pass Two 27.1% 33.5% 33.0% n/a 31.1%
TOTAL 55.9% 58.1% 57.9% 63.6% 59.1%

1The unweighted completion rate is the number of completed screeners divided by the number of cases worked. Return to Table
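
The completion-rate formula in the footnote can be checked against the per-batch counts reported in Table 4.15; the Python sketch below reproduces the TOTAL row of Table 4.13 and uses no data beyond those tables.

    # Unweighted completion rate = completed screeners / cases worked (metered).
    worked   = {"Batch 1": 5_666, "Batch 2": 5_666, "Batch 3": 5_666, "Batch 4": 6_800}
    screened = {"Batch 1": 3_168, "Batch 2": 3_291, "Batch 3": 3_278, "Batch 4": 4_324}

    for batch in worked:
        print(f"{batch}: {screened[batch] / worked[batch]:.1%}")
    print(f"Overall: {sum(screened.values()) / sum(worked.values()):.1%}")
    # 55.9%, 58.1%, 57.9%, 63.6%; overall 14,061 / 23,798 = 59.1%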

To increase the completion rate, NORC tried several different combinations of incentives and methods of mailing refusal letters. For details see Table 4.9. Completion rates by treatment are shown in Table 4.14.

Table 4.14 Screener Completion Rate by Refusal Letter Treatment
  No Token Incentive - Sent by USPS $2 Token Incentive1 - Sent by USPS No Token Incentive - Sent by Federal Express
Sample Batch 1 2 and 3 4
Number of Cases 664 1,213 1,321
Completion Rate2 25.3% 34.0% 39.8%

1Token amount included in screener pass two refusal letters. Return to Table

2Completed screeners that received the letter treatment divided by the number of cases that received the letter treatment. Return to Table


4.6.11 Eligibility Rate

Of the 14,061 completed screened cases, 9,687 (68.9%) met the eligibility criteria and were included in the pool of cases for main-interview data collection (Table 4.15).

Table 4.15 Final Case Status by Sample Batch (Unweighted)
  Cases Metered1 Cases Screened % of Metered Cases Screened Eligible Cases Screened Percentage of Screened Cases Eligible Ineligible Cases Screened Percentage of Screened Cases Ineligible
Batch 1 5,666 3,168 55.9 2,222 70.1% 946 29.9%
Batch 2 5,666 3,291 58.1 2,276 69.2% 1,015 30.8%
Batch 3 5,666 3,278 57.9 2,232 68.1% 1,046 31.9%
Batch 4 6,800 4,324 63.6 2,957 68.4% 1,367 31.6%
Total 23,798 14,061 59.1 9,687 68.9% 4,374 31.1%

1Metered means the case was released into the system that makes cases available for interviewers to call. Return to Table

A breakdown of screened eligible firms by workforce size is shown in Table 4.16. The largest number of screened eligible firms was in the smallest size class (Unknown or less than or equal to 19 workers).

Table 4.16 Number of Screened Eligible Firms by Workforce Size1
  Unknown or ≤ 19 20 - 49 50 - 99 100 - 499
Number of firms 5,557 1,273 1,465 1,392
% of total screened eligible 57.4% 13.1% 15.1% 14.4%

1Workforce size is preloaded data from D&B list frame. Return to Table


4.6.12 Nonresponse

Despite substantial effort, many firms never completed the screener. In some instances owners would refuse, after multiple conversion attempts, to do the interview. Some firms were never locatable; others were seasonal firms not in business during the data collection period. Other firms never answered their telephone. There were a variety of reasons for nonresponse; see Appendix MM for the full list of final disposition codes for screener nonresponse.

Of the 23,798 cases worked in screening, NORC was unable to screen 9,737 cases, for an unweighted incomplete rate of 40.9% (Table 4.12). The average number of call attempts per nonrespondent was 13.5, compared to 6.7 call attempts per completed screener.


4.6.13 Nonresponse Follow-Up


4.6.13.1 Overview

To refine the eligibility estimates for noncontacts and nonrespondents that are used in weight construction, NORC attempted to contact and screen a five percent subsample of certain categories of firms that did not complete the screening interview. Those cases eligible for the 5% follow-up were:

• Noncontacts at the end of pass one of batches one through three

• Noncontacts after six weeks of batch four screening

• Nonrespondents from pass two of batches one and two.

NORC sampled noncontacts from all four batches. Because batch four did not use subsampling to create a pass two, it was decided that noncontacts would be selected from batch four six weeks after interviewing started. The six week period approximated the length of pass one in batches one through three. NORC sampled nonrespondents from batches one and two only. The base from which the 5 percent was calculated, however, came from an estimation of the total number of nonrespondents in all four sample batches.

Table 4.17 Cases Eligible for 5% Follow-up Subsample by Sample Batch and Pass
Sample Batch Pass One Pass Two
One Noncontacts Nonrespondents1
Two Noncontacts Nonrespondents1
Three Noncontacts Not sampled
Four Noncontacts Not sampled

1Taken from all cases eligible for pass two, including those subsampled into pass two and those not subsampled into pass two. Return to Table

NORC produced a 5% systematic sample of 113 noncontacted cases and 201 nonrespondents. A detailed explanation of how the 5% follow-up sample size was calculated and the sample drawn can be found in Section 6.8.
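
Section 6.8 documents how the follow-up sample was actually selected. For readers unfamiliar with the technique, the sketch below shows a generic systematic sample with a fixed interval and a random start; it is a textbook illustration, not NORC's selection code.

    import random

    def systematic_sample(frame, rate=0.05, seed=None):
        """Generic systematic sample: every k-th unit after a random start."""
        interval = int(round(1 / rate))              # e.g., every 20th case for a 5% rate
        start = random.Random(seed).randrange(interval)
        return [frame[i] for i in range(start, len(frame), interval)]

    # Example with a hypothetical frame of 2,260 case IDs: the draw yields 113 cases.
    frame = [f"CASE{i:05d}" for i in range(2_260)]
    print(len(systematic_sample(frame, seed=1)))     # 113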

Overall, 78.9% of the noncontacts and 17.4% of the nonrespondents were found to be ineligible for the study based on the 5 percent follow up. Section 6.8.3 provides the detailed outcome of the 5 percent follow-up - the percentages of noncontacts and nonrespondents who were confirmed eligible, confirmed ineligible/out of business or whose eligibility status remained indeterminate - and how these percentages were used to adjust the sample weighting to account for eligibility.

The 5 percent follow-up task was originally scheduled to start on September 16, 2004 and last three weeks. However, with the introduction of batch four in November and its inclusion in the follow-up, data collection actually started during the first week of December.

Noncontacts and nonrespondents were treated as separate types of cases for the 5% follow-up. The noncontact work was designed to determine whether a firm was no longer in business or otherwise ineligible. The nonrespondent work involved attempts to talk to an owner or proxy owner to determine a firm's eligibility status during pass one screening.

Locating and interviewing work for the 5% follow-up was done by three interviewers and a supervisor, all of whom had significant locating experience on SSBF, as well as being strong refusal converters. In addition, the nonrespondents were mailed a self-administered questionnaire (SAQ), which is described below in more detail.

Nonresponse questionnaire. NORC drafted a one-page questionnaire to be completed by nonrespondents. The self-administered questionnaire (SAQ) was an abbreviated version of the screener CATI that could be completed by a knowledgeable respondent in one minute or less. The purpose of the instrument was to collect or verify the following information:

• The firm's name

• The address of the firm's main office

• If the firm was in business in December 2003 under one or more of its current owners

• If the firm was in business during the last month in which the firm could have been called for a screening interview (e.g., August 2004 for batch one)

• If another company owned 50% or more of the firm

• If the firm was a nonprofit organization

• If the firm was owned by a local, state or federal government agency

• The firm size, including employees and working owners, during a typical pay period in 2003

• The firm's SIC description or principal activity of business

Each mailed copy of the questionnaire was personalized with the preloaded firm name, street address and SIC description, and printed on gray paper to distinguish it from the other package contents. A copy of the SAQ is in Appendix NN.


4.6.13.2 Noncontact Protocol

Noncontacts were cases that interviewers never established contact with during screening. Noncontacts included pass one wrong numbers, disconnected numbers, numbers with fast busy signals, and numbers that were never answered on at least seven call attempts. The last category included cases with any combination of busy signals and ring-no answer call outcomes. Noncontacts for the 5% follow-up had received some locating work during pass one - typically by calling directory assistance and entering the business name into online search directories.

NORC used a four-step approach for noncontacts in the 5% follow-up in which it:

1) Matched the sample with a more current D&B database of U.S. businesses;

2) Sent 1st-class mailings, including the SAQ, to the preloaded address or the most recent address identified by the end of pass one screening, to all cases with indeterminate eligibility after step 1;

3) Initiated advanced locating procedures for cases that did not return a completed SAQ, including sending certified letters. If a firm or owner was contacted by telephone, NORC used an interviewer-administered questionnaire (IAQ) to determine if the firm was in business during the last month in which the firm was in pass one screening;

4) For firms or owners who were unable or unwilling to complete the IAQ, NORC sent Federal Express packages similar to the ones used in the 5% follow-up for nonrespondents, and worked the cases to resolution as nonrespondents.

Each step is discussed below.

Match sample to more current D&B database. NORC sent DUNS numbers for all 113 cases to D&B. D&B matched the original sample, which had been drawn from D&B's database of businesses as of May 2004, to a more current database of businesses as of December 2004. From this step, NORC identified one firm (1%) as having gone out of business; additional locating steps were not performed for this firm.

Send 1st-class mailing. NORC sent 1st-class mailings to 112 noncontacts on December 20-21, 2004. The contents of the mailing were:

• A cover letter from the SSBF project director explaining the purpose of the study and the importance of completing the questionnaire (see Appendix OO).

• An SAQ personalized with the firm name, SIC description and street address

• A pre-addressed, postage-paid return envelope

Mailings were sent to firms' preloaded address, or to the most current address for the firm at the end of pass one locating. The letter explained the purpose of the study and asked the recipient to complete and return the SAQ by January 21, 2005. The letter mentioned that NORC would send $25 as a token of appreciation to anyone who returned the completed questionnaire. The SAQ was identical to the version developed for nonrespondents (see below).

The mailing's purpose was both to receive completed SAQs and to get additional clues about a firm's location. Many letters were returned by the USPS as undeliverable, and an undeliverable letter was an indication that a firm might no longer be in business. Other letters were forwarded to new addresses, and NORC requested that the USPS provide it with the new addresses, which were passed to the locating team.

Conduct advanced locating procedures. NORC used an extensive set of techniques to attempt to locate a firm or, as was more often the situation, to assemble enough evidence from multiple sources to conclude that the firm had gone out of business and was therefore unlocatable. NORC experimented with different locating techniques, dropping unproductive ones and adding new ones that offered good information about a firm's whereabouts. A list of the techniques used most often by SSBF locators during the 5% follow-up is shown below. See Appendix PP for the form locators used to conduct advanced searches.

• Calling the firm's preloaded telephone number or the most recent telephone number identified for the firm at the end of pass one screening

• Calling directory assistance for the firm name and the owner name

• Searching multiple online directories for the firm name and the owner name36

• Repeating online name searches across wider geographic areas, e.g., an entire state or the U.S.

• Searching for neighboring firms or residences

• Searching for local firms in the same line of business as the noncontact

• Calling trade associations

NORC also prepared a short interviewer-administered questionnaire (IAQ) specifically designed for collecting information about noncontacts (see Appendix QQ). The purpose of the IAQ was to:

• Identify someone who could verify basic information about a firm

• Confirm if a firm was in business or out of business

• If the latter, confirm the month and year that the firm went out of business

• Get the relationship of the respondent to the firm, e.g., former owner, former employee, spouse or relative of the owner

As part of locating, in January 2005, NORC sent certified letters to all noncontacts for which eligibility status had not been confirmed. The contents of the certified mailing were identical to those of the 1st-class mailing sent in December 2004. Undeliverable certified mail was an indication that a firm, while not necessarily out of business, was likely to no longer be at the D&B-preloaded address. A delivered certified letter was an indicator, though less reliable, that a firm existed at the address we had for the firm37.

In many instances NORC was unable to locate by telephone or mail a current or former owner of a firm, or a qualified proxy, who could confirm that the firm was in business and eligible at the time of pass one screening. Without such confirmation, NORC had to analyze all the records and outcomes for each case. By synthesizing this information, NORC was able to recommend a final status for each case: eligible, ineligible/out of business, or status indeterminate. The next table shows some of the outcomes NORC used to help determine a case's status.

Table 4.18 Outcomes That Helped Establish Ineligibility in 5% Follow-Up
Outcomes Indicator of Ineligibility1
Confirmed ineligible with questionnaire, by owner/proxy Confirms
Disconnected/wrong/fast busy telephone number Strong
No productive telephone leads Strong
No DA listings Strong
No Internet listings for firm (including neighboring states) or owner Strong
Some Internet listings, but none productive or different from preload Strong
Nondeliverable 1st-class mailing Supporting
Nondeliverable Certified letter Supporting
Evidence of ineligibility by neighboring business Supporting
Evidence of ineligibility by landlord/mgt. company Supporting
Evidence of ineligibility by other third party Supporting
Expanded owner search by geography is unproductive Supporting
Evidence of substantial call history during data collection Supporting
Unlocatable business has more than five employees Supporting
Advance letter returned to NORC as undeliverable Supporting

1These indicators of ineligibility reflect NORC's assessment and may not reflect the FRB's view. Return to Table

NORC sent the FRB all information it had amassed for each noncontact, including call records, call notes, outcomes of online searches and certified mailings, and completed or partially completed questionnaires, plus, for each case, NORC's recommended status. The FRB reviewed all of the materials, for some cases changed NORC's recommended status, and on February 3, 2005 provided NORC with a final, FRB-approved eligibility status for all noncontacts as well as all nonrespondents.


4.6.13.3 Nonrespondent Protocol

The objective of the nonrespondent protocol was to contact as many firms as possible and administer to their owners or proxy owners a modified version of the screener interview, either by telephone or mail. The intent was to confirm the eligibility status at the time of pass one screening for as many firms as possible.

Nonrespondents consisted largely of non-hostile refusals, and cases for which messages had been left on voice mail or with gatekeepers during pass one screening. Nonrespondent cases presumably were operating businesses with working telephone numbers at the time they were called for screening.

NORC used a three-step protocol for nonrespondents in the 5% follow-up:

1) Sent firms a Federal Express package that included the SAQ

2) After two and a half weeks, started calling firms that had not returned a SAQ, or had returned an incomplete SAQ. When an owner or proxy owner was reached, NORC administered an interviewer-based version of the SAQ.

3) For cases that had a wrong number/disconnected number/fast busy after repeated call attempts, or could otherwise not be contacted during two weeks of call attempts, NORC used the locating protocol for 5% follow-up noncontact cases to find a telephone number for the business. If a new number was found, the protocol resumed at step 2.

Each step is discussed below.

Send Federal Express packages. On December 13, 2004, all 201 nonrespondents were sent a package by standard overnight Federal Express containing the following materials:

• A cover letter from the NORC project director stating that this was the final opportunity for the business owner to participate (See Appendix RR)

• A $25 incentive check made out to the firm owner or firm

• A SAQ personalized with the firm name, SIC description and street address

• A pre-addressed, postage-paid return envelope

• A pen

The letter asked respondents to return the SAQ to NORC by December 30, 2004. This is consistent with professional standards that recommend allowing respondents two weeks to return an SAQ. Respondents were given the option of mailing or faxing the SAQ, or they could call the SSBF hotline and have the interview administered over the telephone by an SSBF supervisor. NORC trained the supervisors who maintained the SSBF hotline to administer the SAQ and to answer respondents' questions about the 5% follow-up.

NORC set up receipt control to track incoming SAQs38. When an SAQ arrived, NORC recorded how it arrived (by mail, fax, or telephone) and whether it was fully or partially completed. If Federal Express forwarded the package to another address, NORC was provided with the new address, and this information was passed to the 5% follow-up locating team.

Call firms that did not return SAQ. Starting in January 2005, NORC's team of locators and refusal-converter interviewers began calling the nonrespondents that had not yet returned an SAQ. The original protocol was for interviewers to make up to 14 call attempts - two calls a day for two weeks - and to stop working a case after the first owner refusal. In reality NORC worked some cases well beyond the protocol, particularly in converting refusals. NORC called nonrespondents for approximately three weeks, through the end of January 2005.

Table 4.19 Receipt Control from 5% Follow-Up Mailing to Nonrespondents
  Count Percentage
Fully completed questionnaires 85 42.3%
Partially completed questionnaires 1 0.5%
Refusals 5 2.5%
Returned Undeliverable by Federal Express 9 4.5%
Not returned 101 50.2%
Total 201 100%

Sending nonrespondents to locating. NORC was unable to contact some nonrespondents even after sending the Federal Express package and making multiple call attempts. These cases were issued facesheets and worked as noncontacts, going through the noncontact protocol described earlier in this section.


4.7 Main Interview Data Collection


4.7.1 Introduction

All firms that met the eligibility requirements in the screener were invited to take the main interview. The main interview was the second stage of the two-stage data collection process. Two times per week, on every Tuesday and Friday, worksheet packages were sent via Federal Express to eligible businesses that had been screened in since the previous worksheet package shipment. The timeliness of these shipments was intended to preserve the cooperation that had been established by telephone interviewers and keep the study fresh in the minds of business owners. After a brief waiting period, business owners were called to complete the main interview.

The data collected from the main interview will be used by the FRB and other investigators to fulfill the study objectives. (For a description of study objectives, see Chapter 1.) The main interview collected the following information:

• Eligibility confirmation39

• Organizational demographics

• Personal characteristics of owners

• Firm demographics

• Use of deposit services

• Use of credit and financing including credit cards, lines of credit, mortgages, motor vehicle loans, equipment loans, loans from partners or stockholders, and leases

• Most recent loan application that was approved and/or the most recent loan application that was denied, if either occurred within the last three years

• Use of other financial services including check clearing, credit card processing, brokerage services and trade credit

• Relationships with financial institutions

• Trade credit

• New equity investments in the firm

• Income and expenses

• Assets

• Liabilities and equity

• Credit history

• The primary owner's net worth and home value

• Respondent payment information

The average time to administer the main interview was 59.1 minutes; the median time was 57.6 minutes (see Appendix E for how these timings were calculated). Overall, NORC spent an average of 3.4 hours per main case, which represents the total number of interviewer hours spent working all main cases divided by the number of completed main cases; this calculation does not include time spent on screening interviews.

Despite higher average administration times than expected, hours per completed case (HPC) for main interviewing were lower than expected. Many factors may have contributed to the decline in HPC from 1998, including improvements since the 1998 survey in the caliber of interviewers, the content of interviewer training, and the quality of the respondent contact materials. The most significant contribution, however, may have come from the shorter interval between screening and interviewing. In 2003, interviewers were able to build on the rapport they had established during screening when recontacting respondents to complete the main interview approximately one week later.


4.7.2 Mailing Worksheet Materials

After being screened eligible and prior to being called for the main interview, respondents were sent a worksheet package by Federal Express. The worksheet allowed a respondent to prepare for the main interview, by collecting information from a variety of sources such as bank statements and income tax returns, and compiling all of it on one piece of paper. The payoff for NORC and the FRB of respondents completing the worksheet was two-fold: 1) shorter administration time, since respondents would have their responses to many questions in front of them during the interview, and 2) better data quality, since respondents had spent time beforehand preparing some of their responses. Table 4.20 shows the materials in the worksheet package.

Table 4.20 Items Sent in Worksheet Mailing1
Item Purpose
Worksheet for respondents to complete before the interview (Appendix S, Appendix T, Appendix U and Appendix V) Collected information about the firm's use of credit and its financial position prior to being called for the main interview. The worksheet had four versions: sole proprietorship, partnership, S corporation and C corporation
Cover letter from project director (Appendix M) Explained the contents of the mailing and encouraged respondents to continue to participate; mentioned that most firms did not need to complete the entire worksheet, provided more detailed information on incentives
Letter from FRB chairman Alan Greenspan (Appendix N) Established legitimacy of SSBF
FAQ brochure about the 2003 SSBF (Appendix R) Provided answers to frequently asked questions about the survey, and mentioned incentives for completing the main interview
Brochure highlighting findings from 1998 SSBF (Appendix Q) Provided interesting and relevant highlights from the 1998 SSBF; reinforced notion that only aggregated data were reported; reinforced need for a new round of up-to-date survey information
Sheet explaining NORC's privacy and confidentiality policies (Appendix X) Addressed concerns about confidentiality and privacy
FRB Structure & Functions brochure (Appendix Y) Established legitimacy of SSBF by providing extensive information about the structure, functions, history and importance of the FRB.
Sheet explaining D&B reports (Appendix W) Provided information respondents would need in order to choose between a financial incentive and the D&B package of small business reports
Self-addressed postage paid envelope (Appendix AA) Provided easy way to return the worksheet, and other materials used for the interview, back to NORC at the end of the interview
2003 SSBF folder (Appendix Z) Folder for respondent to easily keep all materials together

1If, when contacted, a respondent said that he or she had not received the worksheet package, NORC would confirm the respondent's Federal Express address, set a callback time, and send a new package within two business days. Return to Text


4.7.3 Protocol for Working Main Interview Passes

4.7.3.1 Overview

NORC attempted to conduct each main interview as soon as possible after a firm was screened. We believed that as the time between the screener interview and main interview lengthened, respondents' recall of and commitment to the study waned. When respondents were called for the main interview within a few days or one week of completing the screener interview, their level of commitment appeared to be at its highest.

Typically firms were called four to six business days after screening. Toward the end of data collection, when time was at a premium, firms were called as early as three business days after screening. During peak production times, NORC allowed no more than two weeks to elapse between a screening interview and the first call for the main interview.

The two-pass process for main interviewing was similar to that used in screening. The purpose of pass one was to complete the "easiest" cases first - those firms ready to be interviewed. The purpose of pass two was to focus resources on a subsample of reluctant and/or difficult to reach owners.

In pass one NORC worked cases efficiently by controlling the number of calls made to unpromising cases - to firms that after repeated attempts appeared very unlikely to complete the main interview.

In pass two NORC worked a smaller set of promising40 cases - these were subsampled pending cases including non-hostile refusals. NORC worked these cases more intensely, with a concentrated application of resources and strategies, than cases worked in pass one. Pass two also included cases for which hard appointments had been made in pass one, as well as partially completed cases from pass one.

In both main passes, some cases were identified as ineligible for the study. This outcome was possible when a proxy for the owner had completed the screener. In these situations the main interview respondent (who was supposed to be an owner) was re-asked all of the eligibility questions at the outset of the interview. When an answer to one of the eligibility questions rendered the firm ineligible, the main interview was terminated and the case was finalized.

Note: As with screening, the two-pass approach was used only in batches one through three. During batch four main interviewing, NORC used a more standard approach to working the sample, that is, a single pass without subsampling (see Section 4.7.4 below).

4.7.3.2 Protocol for Pass One Main Interviewing

The pass one main interviewing protocol was similar to that of pass one screening. First, interviewers made up to seven call attempts. Most call attempts were made during regular business hours, but protocol required each case to receive at least one Saturday call and one evening call over a two-week period.

If a case was deemed promising after seven calls, interviewers made up to 13 more calls to complete the case. If a case was not deemed promising after seven calls, it was sent to supervisor review. If a supervisor concurred that a case had been appropriately worked after seven calls - that there was no indication that trying a different time of day or assigning a different interviewer might yield a better outcome - then the case was taken out of circulation for the rest of pass one. If a case's call history indicated that it had not been worked according to the seven-call protocol, or if a supervisor saw from a case's call notes remaining potential for gaining cooperation, then the case received up to 13 more pass one calls.

Supervisors removed certain pass one refusal cases from circulation for the duration of the pass. These cases were removed because, based on their call history and call notes, they appeared likely to become hostile refusals with additional pass one work. After a cooling-off period, those cases subsampled41 into pass two were sent a refusal conversion letter and put back in circulation.

It should be noted that pass one of the main could not be closed until screening was considered completed for that batch. This helped to ensure that all cases received the appropriate calling protocol prior to becoming subject to subsampling.

4.7.3.3 Protocol for Subsampling into Pass Two Main Interviewing

Main interview subsampling for pass two was similar to screener subsampling. Subsampling occurred for the first three sample batches only. The actual duration of pass one depended on when the FRB and NORC agreed that NORC had fulfilled the pass one calling protocol, and that no unexpected disposition codes remained in any of the cases' call histories.

During pass one of main interviewing, supervisors finalized certain cases for the reasons below; these cases were not eligible for subsampling and were removed from circulation:

• Interview completed

• Final language barrier

• Final disconnected or no longer in business

• Final ineligible - screened out in main

• Final hostile refusal

• Final owner not available for field period

• Final owner physically or mentally handicapped

• Firm had gone out of business or otherwise unlocatable since being screened

Promising, pending cases at the end of pass one main interviewing were subsampled at 60% into pass two. Cases eligible for subsampling included:

• Non-hostile refusals from owners, proxies and gatekeepers

• Soft appointments

• Cases in which interviewers were transferred to voicemail or an answering machine

In addition to promising cases that were subsampled into pass two main interviewing, partially completed cases and cases with hard appointments were moved into pass two with certainty.
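
A minimal sketch of this selection rule is shown below, assuming simple per-case random selection at the 60% rate; the status labels and the helper function are hypothetical, and the production selection procedure is not reproduced here.

    import random

    def select_pass_two_main(cases, rate=0.60, seed=None):
        """Illustrative pass two selection: certainty cases plus a 60% subsample."""
        rng = random.Random(seed)
        certainty = [c for c in cases if c["status"] in
                     ("partial_complete", "hard_appointment")]
        eligible = [c for c in cases if c["status"] in
                    ("non_hostile_refusal", "soft_appointment", "voicemail_or_machine")]
        subsampled = [c for c in eligible if rng.random() < rate]
        return certainty + subsampled

    # Example with a handful of hypothetical pending cases:
    pending = [
        {"id": 101, "status": "partial_complete"},
        {"id": 102, "status": "non_hostile_refusal"},
        {"id": 103, "status": "soft_appointment"},
        {"id": 104, "status": "hard_appointment"},
    ]
    print([c["id"] for c in select_pass_two_main(pending, seed=7)])  # 101 and 104 always included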


4.7.3.4 Protocol for Pass Two Main Interviewing

After subsampling, all pending cases moved into pass two. Pass two cases were worked more intensively than pass one cases, by a subset of the interviewers who had worked pass one. In addition, respondents were offered a larger monetary incentive and other inducements. NORC's most skilled interviewers made up to 20 call attempts for each pass two case.

A subset of the general SSBF interviewer population - interviewers who demonstrated exceptional skill in converting refusals and keeping respondents engaged during a lengthy interview - were selected to work pass two main cases. Supervisors also played a larger role in pass two than pass one, focusing their attention on a smaller number of subsampled cases, reviewing call histories and recommending strategies for continued attempts.

NORC used the following protocols for pass two main interviewing.

An increased financial incentive. The incentive offered to respondents increased from $50 in pass one main to $100 in pass two main, among subsampled respondents42. Near the end of the study, NORC increased the pass two incentive to $200, then $500 for batches one through three. Changes in incentive amounts are discussed in detail later in Chapter 4. (For batch four, the financial incentive started at $50 and later increased to $200.)

Refusal letters. As in pass two screening, NORC mailed refusal conversion letters to subsampled non-hostile refusal cases from pass one. The number of identifiable reasons for refusing went from four in screening to two in main interviewing: concern about confidentiality, and concern about the amount of time to complete the survey. A version of a refusal letter was written to address each concern. A third version of the letter addressed an array of potential concerns. Regardless of the version, every refusal conversion letter mentioned the monetary incentive for completing the interview. As discussed further below, refusal letters sent in batch four were identical to those sent in the first three batches, except for reference to a higher financial incentive.

Non-refusal letters. Some cases went through pass one main interviewing without an explicit refusal call outcome. NORC knew from previous telephone studies that many of these cases were passive refusals. For pass two, these cases also received a letter. While not overtly acknowledging a refusal, the letter provided compelling reasons to participate, as well as helping to establish the study's legitimacy and relevance.

In short, all subsampled pass two main interview cases received a response conversion letter - either one of three versions of a refusal letter, or a letter encouraging cooperation but not specifically mentioning a refusal. All letter versions were printed in color on bond paper, personalized to the firm's owner, signed by the SSBF project director and sent on NORC stationery. A copy of each letter is in Appendix SS.

Although batch four did not have two passes, batch four main cases were still sent conversion letters after having been adequately worked. The letters used in batch four, including the non-refusal version, were identical to those used in the first three batches. Again, see Appendix SS.

Federal Express delivery. NORC sent pass two main interview conversion letters by Federal Express. Each package contained just the single letter. NORC believed that the presentation of a single letter sent by overnight Federal Express would be memorable and striking, emphasizing the study's importance and urgency.

Email follow-up. One day after sending conversion letters, NORC sent email messages to owners for whom we had email addresses43. The message was a shorter version of the conversion letter - a review of the reasons for participating. The email message asked recipients to look for a Federal Express package from NORC. The text referenced the SSBF hotline and links to NORC's and the FRB's websites for the 2003 SSBF. A copy of the email letter is in Appendix TT.

Some recipients responded electronically to the email message. These responses were useful for the telephone shop, which checked them regularly. A few owners emailed refusals; supervisors reviewed these messages and decided whether or not to code the cases as final hostile refusals and remove them from circulation. Other owners sent information such as the best time or telephone number to reach them.

Greater interviewer discretion to suggest that an accountant complete the interview. For pass two main, interviewers were trained to be more forthcoming in suggesting that a large part of the interview could be conducted with an accountant or CFO. When firms used outside accountants, interviewers were taught to explain that NORC would compensate them for an accountant's time preparing materials and doing the financial part of the interview. NORC also trained interviewers to emphasize that owners themselves needed to complete the first section of the questionnaire up to subsection N, Records.

Accountants who completed portions of the SSBF main interview for their clients were paid by NORC in a single batch. Checks were processed and mailed on February 23, 2005. Payments ranged from $75 to $262.50, with a mean payment of $162. The majority of the payments were between $100 and $200. Nine accountants were paid.

A key difference between pass two screening and pass two main interviewing was that pass two main interviewing, for all practical purposes, did not close for any batch until the survey data collection period ended. NORC continued to contact the diminishing number of pass two main interview cases that were not hostile refusals. With very little effort, interviewers could continue to keep appointments and administer the main interview, contact soft refusals and try to persuade owners to participate, or check whether owners who had rarely been available could now be reached.


4.7.4 Protocol for Batch Four Main Interviewing

Main interviewing for batch four started in the first week of November and continued through January 2005. To fully exploit the shorter, 13-week schedule, NORC conducted a rolling wave of more intensive interviewing concurrent with the first wave. Cases still pending four weeks after their worksheet mailing date moved into the rolling wave. Each week for five weeks, starting on December 1 and ending on December 29, pending main batch four cases were rolled into the more intensive interviewing, which started with the shipment, via Federal Express, of a conversion letter that mentioned a $200 financial incentive. The protocol for the more intensive working of batch four cases was similar to the protocol employed in pass two for the first three batches.
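
The rolling-wave timing can be expressed as a simple date rule. The sketch below is illustrative only; the helper name and the example mailing date are hypothetical, and the actual case-management system is not reproduced here.

    from datetime import date, timedelta

    # Weekly roll dates for batch four main interviewing: December 1 through December 29.
    ROLL_DATES = [date(2004, 12, 1) + timedelta(weeks=w) for w in range(5)]

    def rolls_on(worksheet_mailed, roll_date, still_pending=True):
        """A pending case rolls once four or more weeks have passed since its worksheet mailing."""
        return still_pending and (roll_date - worksheet_mailed) >= timedelta(weeks=4)

    # Example: a case whose worksheet was mailed November 2 qualifies on every roll date.
    print([rolls_on(date(2004, 11, 2), d) for d in ROLL_DATES])   # [True, True, True, True, True]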

The following practices were used in main batch four; these practices are described elsewhere in this report and so are simply noted here:

• Federal Express mailing

• Greater discretion to suggest that an accountant complete the interview

• Email update following Federal Express mailing


4.7.5 Special Efforts to Increase Production

4.7.5.1 Overview

In addition to weekly interviewer meetings on production issues, NORC conducted other special efforts to increase response rates and the number of completed cases. These initiatives, described in detail below, included:

• Conducting interview listening sessions

• Providing additional interviewer training on financial questions

• Initiating interviewer incentive programs

• Increasing respondent incentives


Interview listening sessions. NORC arranged for small groups of less experienced interviewers to listen (using a speaker phone from another room) to some of the best SSBF interviewers conduct interviews in real time. The sessions were moderated by supervisors, who could point out cooperation-gaining and other techniques perfected by experienced interviewers, as well as places where an interview might have been handled better or differently. Interviewers listened to both screening interviews and main interviews.

Providing additional training on financial questions. Many respondents were reluctant or unwilling to provide information about their firm's financial profile. These questions about a firm's income and expenses, assets, liabilities and equity were in sections P, R, and S. When the FRB began identifying cases with insufficient data (i.e., too many don't know (DK) or refused (RF) responses), it found that most DKs and RFs came in sections P, R, and S. In early October, NORC provided interviewers with a special training session. Interviewers were reminded of the importance of these items and of techniques they could use to persuade respondents to answer these potentially sensitive but critical financial questions. The techniques were:

• Remind respondent about our assurance of confidentiality

• Confidently ask for estimates and ranges as presented in the CATI questionnaire

• If respondent answers "Don't Know" to item after item in these sections, try to find out if there might be a better respondent for these questions at the firm

The memo given to interviewers, which provided more detail on using these techniques, is in Appendix UU; its content was reviewed and discussed at the special training session.

After training, the incidence of cases that passed the completeness test (see Section 5.2.3) for these questionnaire sections increased and, anecdotally, interviewers reported that the techniques were helpful.

NORC considered, but ultimately decided against, the following initiatives: 1) creating a shorter - and potentially less intimidating - worksheet to send to respondents in later batches, to possibly increase main-interview cooperation, and 2) conducting a non-interview survey of a small group of owners who refused to participate to get qualitative input on how to increase cooperation. NORC and the FRB reviewed these proposals, and concluded that their cost would likely outweigh any benefits. It was agreed that a non-interview survey might have been useful had it been conducted well before data collection, when there would have been time to fully apply the findings, but would be of very limited value once data collection had started and materials and protocols had been finalized.


4.7.5.2 Initiating Interviewer Incentive Programs

To keep interviewers working sufficient hours over the holiday season (Thanksgiving to New Year's), NORC initiated an incentive program that started in the first week of November and paid a $5-per-hour bonus to interviewers who worked more than 30 hours in a week. The program increased average hours per week per interviewer (see Table 4.21). After declining through October, total interviewer hours and average hours per interviewer jumped sharply in November and into December. Average hours per interviewer remained high, although total interviewer hours declined again over the Christmas and New Year's holiday period.

The program continued through January 15, 2005 to help ensure that production goals - completing a certain number of screeners and main interviews each week - were met to complete data collection by the end of January.

Table 4.21 Interviewer Hours per Week Before and After Incentive Program
Week Ending Total Interviewer Hours Number of Interviewers Average hours per Interviewer
10/2 1,407.7 68 20.7
10/9 1,214.9 64 19.0
10/16 934.1 48 19.5
10/23 992.4 45 22.1
10/30 689.9 44 15.7
Pre-Average 1,047.8 54 19.4
11/6 1,126.4 47 24.0
11/13 1,174.6 49 24.0
11/20 1,224.9 46 26.6
11/27 904.3 44 20.6
12/4 1,213 44 27.6
12/11 991.5 44 22.5
12/18 916.5 38 24.1
12/25 762.4 32 23.8
1/1 791.6 32 24.7
Post-Average 1,011.7 42 24.2

In October NORC implemented the Oktoberfest incentive plan, with the intent to encourage high energy and increase productivity by creating an atmosphere of friendly competition. Interviewers were assigned to one of two teams. The teams competed on daily, weekly and cumulative production projections, which were divided between the teams. Supervisors held team meetings to inform members of the standings and discuss strategies. At the end of the competition, if one team exceeded its cumulative projection, each member of the winning team received a prize valued at $10 to $15. If both teams exceeded their projections, all SSBF interviewers received a more expensive prize. In addition, Oktoberfest included other incentives such as pizza parties.

Oktoberfest increased weekly hours among a subset of interviewers who, prior to the program, had worked fewer than 35 to 40 hours per week. There was also a noticeable improvement in morale: teammates encouraged each other to come to work to increase their team's chances of winning in a given week.

NORC introduced two other bonus programs in January 2005. The programs' purpose was to keep interviewers working on SSBF into the new year. One program encouraged interviewers to work all their scheduled hours to the end of the data collection: interviewers received a $50 bonus for staying to the end of SSBF data collection and working 90% of their scheduled hours; $75 for working 91% to 99% of their scheduled hours, and $100 for working all their scheduled hours. The other program paid interviewers an extra $100 for working 40 hours in the last week of January.


4.7.5.3 Increasing Respondent Incentives

NORC believed that increasing the respondent incentive would have a positive effect on response rate: a larger token of appreciation would likely help convince some reluctant respondents to participate. NORC began increasing respondent fees with sample batch four. Batch four main cases that were sent conversion letters were offered $200. This was double the $100 fee offered in the previous batches. On January 6, 2005, all pending main cases from any batch became eligible for a $200 incentive, and interviewers were instructed to mention the increased incentive in their introductions.

On January 19, 2005, NORC increased the financial incentive from $200 to $500 for all pending main cases excluding those in batch four. Again, interviewers were instructed to mention the increased incentive in their introductions. The higher incentive caught the attention of some respondents from the earlier batches - respondents who had expressed their reluctance many times over the weeks or months since completing the screener, or who had completed part of the main interview but for some time had been unwilling or unable to complete it. Because sample batch four had only recently been released at this time, NORC decided that it was not necessary to raise the incentive for batch four; batch four cases had not aged to the degree that the increased incentive would be warranted.

Table 4.22 shows the number of cases offered each of the various incentives and the number completed at each level. All cases completed at $50 were pass one completes. Fees of $100 or more were offered to respondents in pass two only. Respondents could have been offered the incentive by telephone (at the start of interview), by mail (in a main pass two conversion letter), by email, or through a combination of the three.

As column H shows, the higher the dollar amount offered, the larger the proportion of completed cases that opted for the cash. Dollar amounts increased as cases aged, so very reluctant respondents were offered increasing respondent fee amounts over time. For those who eventually elected to participate, the high proportion in column H indicates that the amount of the respondent fee was important to them. At lower incentive levels, more respondents chose the package of D&B reports, chose no incentive, or were still undecided about which incentive to choose when asked at the end of the interview.

Regardless of the monetary incentive offered, all respondents were offered the option of receiving the Dun & Bradstreet Small Business Solutions© set of reports, which retailed for $199. Fourteen percent of respondents finishing the main interview initially chose the D&B reports44. The overall effectiveness of providing the D&B reports as an alternative incentive is unclear. At the final interviewer debriefing, interviewers said that being able to offer respondents two options was a plus, but they also mentioned negatives. According to interviewers, some respondents perceived D&B's involvement as adding a commercial, for-profit element to the SSBF, calling into question the study's legitimacy. Others were unclear about the relationship between D&B and NORC and were concerned that their firm's financial information might be shared with D&B.

Table 4.22 Completes 1 by Amount and Type of Incentive
A. Highest financial incentive offered
B. Number of cases offered incentive amount
C. Number of complete cases offered each amount
D. Number of complete cases opting for monetary incentive
E. Number of complete cases opting for D&B package
F. Number of complete cases opting for no incentive
G. Number of completed cases undecided at end of interview
H. Proportion of complete cases opting for monetary incentive 2
$50 4,825 3,096 2,482 525 75 14 80.2%
$100 692 417 358 38 16 5 85.9%
$200 2,766 594 560 26 6 2 94.3%
$500 1,404 161 159 2 0 0 98.8%


1Complete cases in columns C, D, E, F, G, and H include only those that passed the FRB completeness test. See Section 5.2.3 for description of completeness test. Return to Text
2Column H = Column D / Column C. Return to Text

Note: Respondents were typically told the amount of the fee by telephone and mail; respondents offered $500, however, were told of the amount by telephone only, at the start of an interview.


4.7.6 Quality Control

Prior to data collection, NORC and the FRB knew that quality control would be important. The main interview could be long and challenging for both respondents and interviewers. The questionnaire had many paths, and interviewers had to be prepared for sudden skips depending on which services respondents used. Interviewers had to juggle many tasks: they had to keep respondents engaged, work with them to identify branch locations of depository institutions, provide additional information when necessary to explain financial instruments, be prepared to respond to concerns about privacy and confidentiality and, of course, read questions with proper pacing, tone and pronunciation, and record answers accurately.

According to telephone center records, NORC supervisors and the FRB formally monitored 10-minute increments of 711 main interviews throughout the data collection period. A copy of the monitoring form is in Appendix VV. Monitoring included observations of how well interviewers read questions, probed responses, recorded responses and call notes, and used CATI to enter call outcomes. Monitoring was conducted more intensively at the beginning of the data collection period and following each of the three main interviewer trainings to make corrections as quickly as possible.

In addition to formal monitoring, supervisors did informal monitoring. A high ratio of supervisors to SSBF interviewers meant that supervisors could spend time near the interviewing stations, listening to interviews. NORC believes this informal feedback was effective. Supervisors reported that they were often able to identify and correct problems - or offer suggestions to interviewers - on the spot.

Monitoring observations were shared with interviewers during one-on-one sessions and group meetings. Feedback was typically provided within several days of monitoring. Through December 2004, the FRB sent electronic monitoring forms to NORC; the FRB's comments and ratings were communicated to interviewers.

All interviewers were assigned a supervisor to provide individual feedback. Interviewers met with their assigned supervisor at least once a week to review new issues, the interviewer's average hours per case and average dials per hour, and to discuss performance-improvement strategies.

NORC held small-group meetings with interviewers. The meetings were initially held weekly, but as interviewers gained experience the meetings were held less often. Led by supervisors, the production manager and the assistant production manager, the meetings gave the staff opportunities to discuss FRB feedback and problems identified through monitoring.

NORC also conducted quality control by reviewing the data output as described in Section 5.2. Any discrepancies in the data that may have been the result of interviewer activities were immediately cycled back to the production center for action. NORC did the same with any comments received from the FRB as a result of data review.


4.7.7 Locating

Main interview cases sent to locating followed largely the same procedures as screener cases. Locating consisted of calling directory assistance (DA), followed by an internet search if DA did not produce any leads. Locators used address information collected in the screening interview to try to locate a firm's telephone number. If the correct telephone number for a firm could not be found through DA or an online search, the case was reviewed and finalized by a supervisor.

Because cases eligible for main interviewing had already been screened, relatively few needed to be sent to locating. Few businesses became difficult to reach after screening.


4.7.8 Receipt Control and Worksheets

At the end of the telephone interview, respondents were asked to send in their completed worksheets, tax forms, and other records they used during the course of the interview to help with data editing. Receipt control tracked returned worksheets, copies of tax forms and other financial records. On a regular basis these materials were sent to the FRB for review. A total of 1,392 worksheets were returned by respondents. Ninety-six percent of returned worksheets came from respondents who completed the main interview. Of the 4,268 completed cases, 31.2% returned a partial or fully completed worksheet (Table 4.23).

Table 4.24 shows the incidence of returned worksheets by sample batch. Interviewers may have become more adept over the course of the study at asking respondents to return materials. In addition, in December NORC provided interviewers with an alternative to the CATI text asking respondents to return their materials; NORC believed that the new script was more forceful and persuasive (see Appendix C.17). Both factors may have contributed to the observed improvement in the percentage of returned worksheets among batch four respondents compared to those in the first three batches.

Table 4.23 Number of Returned Worksheets by Main Case Status
  Completed Cases Partially Completed1 Non-Interviews Total
Total number of cases 4,268 701 4,718 9,687
Number of cases that returned worksheets2 1,332 16 44 1,392
% of cases that returned worksheet 31.2% 2.3% 0.9% 14.4%

1Break-offs and cases that did not pass the completeness test Return to Table

2All completed and partially completed worksheets returned to NORC Return to Table

Table 4.24 Number of Returned Worksheets Among Completed Cases by Batch
  Batch 1 Batch 2 Batch 3 Batch 4 Total
Total number of completed cases 1,043 1,066 1,069 1,090 4,268
Number that returned a worksheet1 289 301 326 416 1,332
% of cases that returned worksheet 27.7% 28.2% 30.5% 38.2% 31.2%

1Completed and partially completed worksheets returned to NORC by respondents who completed the main interview Return to Table

To gain some insight into the quality of the information provided on worksheets, NORC analyzed the completion rate for side two of 100 returned worksheets. Worksheets were selected at random by an assistant from stacks of worksheets sorted by date of return. Side two asked respondents to provide the firm's income statement and balance sheet information, as well as the owner's net worth and home value. The completion rate for these questions was high: 62 of the selected worksheets had all requested items completed, 90 had 90% or more of the side-two items completed, and only eight had fewer than 75% of these items completed.

It would be difficult, if not impossible, to provide a similar analysis of the worksheet's side one, which asked respondents to indicate their use of financial services and sources of financing. Each firm had a unique set of financial services and sources - very small firms may have had just a checking account, for example, while a larger firm may have used every service on the worksheet. NORC did not have a way to independently verify the accuracy or completeness of the responses on side one of the worksheet.


4.7.9 Data Retrieval

While reviewing interim data files, NORC and the FRB encountered cases with erroneous or missing data. The errors arose from three sources:

CATI errors. Flawed skip patterns caused some cases to skip over questions that should have been asked. In some cases the CATI accepted answers that were out of range or out of the codeframe. In some situations, text fills to questions were incorrect. If information collected during the screening interview updated the preloaded information, then in some instances this updated information was not correctly passed into the main interview preloads.

Interviewer errors. Interviewers sometimes added the same institution to a roster multiple times, or gave the same name to two institutions. They may have chosen the wrong response to question A1. They may have incorrectly used the exception key, or entered "fake"45 institutions inappropriately.

Respondent errors. The financial institutions and services used by the firm were collected in a particular order in sections E, F, MRL, and G. Sometimes a respondent would mention an institution or service after the interview had proceeded beyond that section. In these instances, interviewers used the Add/Drop form (see below) or inserted a comment to record the information, but sometimes this information collected at the time of the interview was insufficient.

If NORC or the FRB thought that a data problem might require recontacting the respondent, the data retrieval coordinator created a Policy Decision Request (PDR) form and sent it to the FRB. After reviewing the PDR and the completeness test results for the case, the FRB determined whether or not a data problem warranted retrieval. (See Appendix WW for a sample PDR form.)

To conduct data retrieval, NORC prepared hardcopy facesheets and questionnaires for use by specially-trained data retrieval interviewers - typically, highly experienced SSBF supervisors.

When possible, these retrieval questionnaires were written in a general way, allowing interviewers to fill case-specific information from the facesheet into the text of the retrieval questions as they were asked; in this way, the same general questionnaire could be used in retrieval for multiple cases with the same data problem. However, some cases required complex retrieval, and for these cases specific questionnaires were drafted by NORC and approved by the FRB. NORC staff reviewed these questionnaires with interviewers prior to calling. The interviewers recorded answers directly on the questionnaire hard copy.

The FRB requested data retrieval, and NORC attempted data retrieval, for 281 cases. See Table 4.25 for the most frequent data retrieval issues. NORC successfully retrieved data in 269 of 281 cases (96%), although four of these cases were partially, not fully, retrieved. Ten cases could not be reached, and for just two retrieval cases - less than 1% - respondents refused to provide the additional data. Retrieved data were entered into the transaction file. The facesheets, call logs, and questionnaires for retrieval cases were delivered to the FRB in hard copy and electronically.

Table 4.25 Most Frequent Data Retrieval Issues
Description Number of Cases Requiring Retrieval for This Issue Percentage of All Retrieval Cases
Cases with institutions that were incorrectly treated by the CATI as MRL-and-no-other-services institutions 158 56%
Firm's physical street address might have been overwritten with the mailing address. 32 11%
Problems with the roster of institutions - missing institutions or duplicate institutions. 25 9%
Question H7, the distance from the firm to the institution, was incorrectly skipped 21 7%
Firm's physical street address was missing 14 5%
All other retrieval issues 31 12%
Total 281 100%


4.7.10 Drop/Add Forms

Interviewers were trained to follow a protocol under which, once they had completed a section of the CATI questionnaire, they were not allowed to back up into it. In some instances respondents remembered financial institutions or services too late for them to be recorded in the appropriate question sequence. Other times respondents wanted to correct an institution name or type of service - again, after it was too late to record the changes using the CATI instrument. In these situations NORC used the following protocol:

• An interviewer signaled for a supervisor

• The interviewer and supervisor completed a drop/add form for each service or institution added, dropped or changed (See Appendix XX for drop/add form)

• The drop/add forms were reviewed by supervisors, scanned into an electronic format and periodically sent to the FRB

• FRB-approved updated data were appended to a case record using the transaction file

• If additional data were required based on the change, the FRB would instruct NORC to recontact the firm for data retrieval

Not all add/drop situations were recorded using the drop/add form (some were recorded directly in comments, for example). Additionally, the drop/add form should have included a section collecting information on the institution in addition to the service being added or dropped. As a result, some additional retrieval was required because of these deficiencies in the design of the drop/add form and protocol.


4.7.11 Weekly Production Reports

NORC created a weekly production report showing the progress of data collection for screeners and main interviews by batch and in total. The report provided data such as hours per case; number of cases released, metered and screened; and number of completed and partially completed main interviews. Most data items were shown for the previous week and for the cumulative data collection period to date. An electronic version of the report was sent to the FRB weekly. The final weekly production report is shown in Appendix YY.


4.7.12 CATI Changes

The vast majority of CATI changes were made prior to data collection. Invariably, however, issues were discovered that required the CATI to be changed during the data collection period. NORC changed the CATI program several times after data collection started. These changes did not slow or stop production, and for the most part they were transparent to SSBF interviewers. Table 4.26 shows when each version of the main CATI was put into production and the changes made with each version.

Table 4.26 List of CATI Changes Made to Main Questionnaire During Production
CATI Version Date List of changes in Each Version of CATI (by Date)
25-Aug-04 • Fixed A9_2 so that CATI was not inappropriately populating the field with responses of 0, 1 and PR.
25-Aug-04 • SKIP40 fixed so that skip occurs when B3=9.
25-Aug-04 • G12 jump fill fixed so that new institution is not automatically added to roster.
25-Aug-04 • TFLAG fixed so that institutions where respondents use no financial services do not go through section H.
25-Aug-04 • SKIP57.5 added so that internet-only institutions, MRL-only institutions and other institutions have different - and appropriate - skips through section H.
25-Aug-04 • Service flags fixed so that, for each service, the number of flags that are not missing is the same for all respondents.
25-Aug-04 • SKIP59 fixed so that the only cases that skip L6_2 are those with L3=1 or L3.1=1.
25-Aug-04 • READ27 through M6 fixed to match hard copy questionnaire.
25-Aug-04 • MRL18 fixed so that text is identical to hard copy questionnaire: "Please specify:..."
25-Aug-04 • MRL5 and MRL23 fixed to hold institution names with a missing institution name coded as "0."
25-Aug-04 • MRL4_3 fixed so that a DK/RF response will not skip over service flags that are necessary for later sections.
25-Aug-04 • P1 fixed so that an RF response would skip to P1_1.
25-Aug-04 • P12 fixed to correctly reference worksheet line 6B.

Table 4.26 - continued
CATI Version Date List of changes in Each Version of CATI (by Date)
6-Oct-04 • A1 fixed to correct skip patterns in section A, and so that value of 5 was not inappropriately assigned in A1. Relatedly, A1.1.1 fixed so that the only way to get to the question is when A1=7.
6-Oct-04 • SKIP3 fixed so that A10.8 is asked only if all responses to A10.7 are "no."
6-Oct-04 • A1.2 fixed so that when A1=7 and A1.1.1=proxy name, A1.2 fills with proxy name and not owner name.
6-Oct-04 • SKIP11 and C10.1 fixed so that when C10.1=DK/RF, SKIP11 controls the number of loops.
6-Oct-04 • C12_2(1-3) fixed so that DK and RF responses to C12_3.
6-Oct-04 • C20 fixed to allow at least five digits, not just three digits.
6-Oct-04 • C22_1 and C22_2 fixed to control skip when C20=DK/RF and C20_1=DK/RF. Currently in this scenario CATI goes to SKIP15, which skips to C30.
6-Oct-04 • SKIP15 fixed so that when skip condition=5 (information collected on one stockholder, directing those who go to SKIP12A), an RF response at C22_2_1 sends CATI to SKIP12A.
6-Oct-04 • SKIP5 fixed so that "All others" (i.e. those without C1=1) go to C2_1.
6-Oct-04 • SKIP18 fixed so that if D1=1, GO TO D3; ELSE, GO TO D2.
6-Oct-04 • E6_1, E6_1_1, E6_2, E6_1_2, E6_3 and E6_1_3 fixed to not allow a response of 0. Responses to these questions can be positive numbers only.
6-Oct-04 • F3 and F3_1 fixed so that CATI will not accept a response of 0. Range for both questions changed to >0.
6-Oct-04 • B3 fixed so that when B3=9, F39 and subsequent related questions are skipped.
6-Oct-04 • SKIP31 fixed so that F15 is not inappropriately skipped in loop 3.
6-Oct-04 • H5_3 fixed so that if H53ST_X=DK/RF, then H5_3CITY_X is a corresponding DK/RF, not a missing value.
6-Oct-04 • H6_2 fixed so that exceptions are not allowed.
6-Oct-04 • H rostering fixed so that when an MRD institution is a new institution not previously reported by the respondent, it is entered to the roster at MRDNAME.
6-Oct-04 • L6_2 fixed to not allow 0 or EX entries.
6-Oct-04 • MRL4_3 fixed so that when MRL4_3=2, CATI skips MRL24 and goes to MRL25.

Table 4.26 - continued
CATI Version Date List of changes in Each Version of CATI (by Date)
6-Oct-04 • MRL section fixed so that cases are not inappropriately skipped over a series of questions. One case had a recently approved loan but was missing data for MRL6 through MRL22_1. Another case provided a new institution for a most recently denied loan but was missing data for MRL24 through MRL30V. A third case stated that the most recent loan was denied, but was missing data for MRL23 through MRL30V.
6-Oct-04 • MRL19 fixed so that when MRL19=-1, CATI skips to MRL20.
6-Oct-04 • MRL4_3 fixed so that when MRL4_3=1, CATI skips to MRL5; when MRL4_3=2, CATI skips to MRL23, rather than to READ18 and READ20, respectively.
6-Oct-04 • MRL4_3 fixed so that when MRL4_3=1, after MRL5 CATI skips to MRL7 instead of going to MRL6.
6-Oct-04 • P10_1 fixed so that when response <0, CATI skips to SKIP76 instead of going to P11.
6-Oct-04 • New logic added that skips CATI from SKIP71 to SKIP76 if B3=4 or 8 and P5_5_7(PROFIT) <=0. The logic for PROFIT is (P2+P2_2+P4+P4_1)-(P5+P5_1)
6-Oct-04 • P8 fixed so that "and >=0" removed from following conditional: GO TO SKIP74 if P2/P2_2, P4/P4_1 and P5/P5_1 are all answered and not DK/RF and >=0.
6-Oct-04 • SKIP71 fixed to work correctly for EX. CATI fixed so that if P2, P4 or P5=EX, interviewer asked respondent about firm's profit.
6-Oct-04 • SKIP71 fixed so that P8 is skipped when P2, P4 and P5 are complete. P8 was not skipped when P4 was a negative number and B3=2,3,5 or 7.
6-Oct-04 • P1 fixed so that when DK/RF, CATI skips to P1_2, not P1_1.
6-Oct-04 • R3 fixed so that when SKIP79 skips R3, CATI fills the field for R3.
6-Oct-04 • D&B access code fixed so that it is the correct passcode for respondents electing the D&B small business reports as their incentive.
1-Nov-04 • A10_8 fixed to allow range up to 9999 and disable EX key. Previously A10_8 upper bound was 99 and the EX key was allowed.
1-Nov-04 • A10_5 fixed to disable the EX key.
1-Nov-04 • Cases that updated the firm's physical address at A9_1 and were asked where to have their financial incentive mailed in T3 may have had their updated physical street address overwritten if a new street address was given at T3. NORC created a new variable in CATI called PHYSADDR to capture the physical street address; this new variable cannot be overwritten.
1-Nov-04 • A10_9 and T1 changed so that for both questions the following interviewer prompt was added to the CATI screen: "REMINDER: FOR PASS 2 CASES, CHANGE $50 TO $100." In addition, first response of frame for each question was changed from "$50" to "CASH." In A10_9 QxQ text, "$50" was replaced with "cash."

Table 4.26 - continued
CATI Version Date List of changes in Each Version of CATI (by Date)
1-Nov-04 • A range check is now performed if C6-C8 (C16-C18, C26-C28)<=15 (including 0 and negative values). Previously, the value of 0 was treated as DK/RF and a range check was not performed.
1-Nov-04 • H2_1_1 fixed to not allow a 0 response.
30-Nov-04 • F6_2 fixed so that soft range check functions like that of F6_2_1. A dash/minus sign is never an appropriate response, and 0 is within the acceptable range. Previously a dash/minus sign and a 0 elicited a soft range check.
30-Nov-04 • H6_1_(1-8) fixed to require at least two characters for the address field. The previous minimum was one character.
30-Nov-04 • L12_1 fixed so that code frame will no longer accept a 0 response.
30-Nov-04 • MRL19 fixed so that a dash/minus sign is no longer an acceptable entry.
30-Nov-04 • SKIP71 fixed so that when B3=6 and derived profit is zero [P2 (or P2_2) + P4 (or P4_1) - P5 (or P5_1) = 0], CATI skips to SKIP76. Currently CATI goes to P10 unless derived profit is exactly zero.
30-Nov-04 • REMARKS text that follows T4 changed to a shorter and potentially more persuasive argument for respondent to return completed worksheet and other financial documents used for the main interview to NORC.
30-Nov-04 • U7_1 fixed so that when B3=1 or 9 (sole prop), interviewer prompt reads: "PROMPT: WORKSHEET SIDE 2 LINE 20." When B3 is not equal to 1 or 9, the prompt reads: "PROMPT: WORKSHEET SIDE 2 LINE 21."
30-Nov-04 • U8_1 fixed so that when B3=1 or 9 (sole prop), interviewer prompt reads: "PROMPT: WORKSHEET SIDE 2 LINE 21." When B3 is not equal to 1 or 9, the prompt reads: "PROMPT: WORKSHEET SIDE 2 LINE 22."
30-Nov-04 • U8_1 fixed so that the parenthetical sentence and text fill match U8. Previously the parenthetical text for U8_1 read "Excluding OWNER'S primary home, what..." It now has the appropriate text fill and reads: "Excluding (your/[OWNER_1]'s) primary home and the value of [FIRM], what..."
30-Nov-04 • A10_9 and T1 interviewer prompts changed to read "REMINDER: FOR PASS 2 CASES IN BATCHES 1-3, CHANGE $50 TO $100. FOR PASS 2 CASES IN BATCH 4, CHANGE $50 TO $200."1
2-Dec-04 • F3_2, F3_2_1, F3_5 and F6_5_2 fixed to not allow a dash/minus sign as a legitimate response.
2-Dec-04 • F6_5 fixed so that neither a dash/minus sign nor 0 invokes the soft range check. The dash/minus sign is no longer an acceptable response, and 0 is acceptable and will no longer invoke the soft range check.
2-Dec-04 • MRL19_1 fixed so that it no longer accepts a dash/minus sign.

Table 4.26 - continued
CATI Version Date List of changes in Each Version of CATI (by Date)
3-Jan-05 • C30_1 response frame expanded to include 2005.
3-Jan-05 • C30_1 QxQ text changed to account for survey being conducted in 2005. Instructions changed for converting number of years ago to a specific year by subtracting number of years from 2005, instead of 2004.
3-Jan-05 • C32 QxQ text changed to account for survey being conducted in 2005. Instructions changed for converting number of years ago to a specific year by subtracting number of years from 2005, instead of 2004.
3-Jan-05 • C6_(1-2) changed to not allow + or - to follow a numeric entry, e.g., 70+ is no longer accepted.
3-Jan-05 • MRL25YR and MRL7YR ranges expanded to include 2005.
3-Jan-05 • MRL19, MRL19_1, MRL21, MRL21_1 fixed to not accept dash/minus sign without numeric characters.
3-Jan-05 • MRL11 fixed to not accept 0 response.
3-Jan-05 • MRL19_1 fixed so that an out-of-range response causes CATI to skip to MRL20 instead of MRL19_2.
3-Jan-05 • A1_2(2) fixed so that when A1=7 and A1_1_1 is filled with proxy name, A1_2 does not fill with owner name.

1Although sample batch four interviewing did not use passes, SSBF interviewers sometimes referred to the period after a B4 case received a conversion letter as "pass two," for ease of use. Accordingly, the prompt refers to pass 2 for B4.

The types of changes, including an illustrative example of each, are listed below.

Allowable response and range changes. Problems of this type were detected during interim data review and testing of training scenarios. Typically, the problem involved a response that the CATI accepted but that should have been out of range. Several such changes were required with early versions of the CATI program. One such change was necessitated when the data collection period extended into 2005. At that time, NORC changed the date ranges of some questions, such as when the firm had most recently applied for a loan and when the firm had become listed on a stock exchange, to allow entries of 2005.

Text changes. Toward the end of data collection NORC changed the close of the main interview to more forcefully ask respondents to return worksheets and other materials used for the interview.

Interviewer prompt and QxQ46 changes. When NORC began offering incentives larger than $100, it added an interviewer prompt for questions about incentives. The new prompt told interviewers which amount to read based on batch; the QxQs for these questions were likewise updated.

Skip logic. There were a few instances where skip patterns in the CATI questionnaire were programmed differently than the hard-copy questionnaire. For example, SKIP40 was intended to skip three questions about loans from partners or stockholders if the firm was a sole proprietorship. Initially the skip worked only if a respondent classified the firm in B3 as a sole proprietorship (B3=1) but not if a respondent said the firm was an LLC that filed its taxes as a sole proprietorship (B3=9). CATI was changed so that the questions were skipped when B3 = 1 or 9.
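As an illustration of this kind of fix, the corrected SKIP40 condition can be rendered as a short sketch in Python. This is a hypothetical rendering for clarity only; the report does not reproduce the actual CATI code, and the constant and function names below are illustrative.

    # Hypothetical sketch of the corrected SKIP40 condition (not the actual
    # CATI code). Questions about loans from partners or stockholders are
    # skipped for sole proprietorships, whether reported directly (B3=1) or
    # as an LLC filing taxes as a sole proprietorship (B3=9).
    SOLE_PROP_CODES = {1, 9}

    def skip40_applies(b3_response):
        """Return True if the partner/stockholder loan questions should be skipped."""
        return b3_response in SOLE_PROP_CODES

    # Before the fix, only B3=1 triggered the skip; the fix also covers B3=9.
    assert skip40_applies(1) and skip40_applies(9)
    assert not skip40_applies(2)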

Flag construction. In subsection H of the questionnaire, information was collected on up to eight institutions that appeared on a case's institution roster. However, if a respondent had identified a particular institution as the most recent loan (MRL) institution and if the respondent had not indicated that the firm used this institution for any other services, fewer and different questions were asked in section H about this institution than would be asked otherwise. To identify such an institution, special "MRLONLY" flags were constructed by the CATI and used to determine which questions were asked about the institution. However, in the earliest versions of the CATI, the MRLONLY flags were not being set properly; institutions were being improperly flagged as MRLONLY, causing some questions to be inappropriately skipped. The CATI was reprogrammed to fix this error, and data retrieval was attempted for all of the cases that were adversely affected.
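The MRLONLY determination can be summarized with a brief, hypothetical sketch; the variable names are illustrative, and the report does not reproduce the CATI's actual flag-construction code.

    # Hypothetical sketch of MRLONLY flag construction. An institution is
    # flagged MRL-only when it is the most-recent-loan (MRL) institution and
    # the respondent reported no other services used there; such institutions
    # receive a shorter sequence of section H questions.
    def is_mrl_only(institution_id, mrl_institution_id, services_by_institution):
        other_services = services_by_institution.get(institution_id, set())
        return institution_id == mrl_institution_id and len(other_services) == 0

    # Example: institution 3 is the MRL institution and has no other services.
    print(is_mrl_only(3, 3, {1: {"checking"}, 3: set()}))  # True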


4.7.13 Level of Effort

NORC attempted to complete main interviews with 9,687 eligible cases. The total number of calls made to these cases was 186,076 for an average of 19.2 calls per case. The level of effort, as measured by number of calls, for completed interviews, partially completed interviews, and non-interviews is shown in Table 4.27. Completed interviews includes only cases that passed the completeness check (see Section 5.2.3). Partially completed interviews includes breakoffs and cases that did not pass the completeness check.

NORC completed main interviews for 4,268 cases. Of the completed main interviews, 33.1% required five or fewer calls, and 54.1% required ten or fewer calls before the interview was completed. The distribution of the number of calls required to complete an interview is shown in Appendix ZZ. NORC accommodated respondents who preferred to complete the survey over multiple calls. Total calls include all call attempts, regardless of whether some or all of the instrument was administered, and regardless of whether a call resulted in a refusal, a ring/no answer, an answering machine, or any other outcome.

Table 4.27 Level of Effort by Main Case Status
Case Status Number of Cases Percent of Cases Number of Calls Percent of Calls Average Number of Calls
Completed interviews 4,268 44.06 57,907 31.1% 13.6
Partially completed interviews 701 7.24 19,143 10.3% 27.3
Non-interviews 4,718 48.70 109,026 58.6% 23.1
All 9,687 100.00 186,076 100.0% 19.2

NORC assigned different final dispositions to all non-interviews. These dispositions were determined in two ways. Throughout data collection, supervisors routinely reviewed cases and, as appropriate, assigned final dispositions that removed cases from circulation. For example, if on two call attempts interviewers could not administer the survey because of a language barrier, a supervisor would review the case and decide whether to assign a final disposition of language barrier or return the case with instructions for at least one additional call. At the end of data collection, NORC ran a program that assigned final dispositions to all pending refusal cases based on a hierarchy of call-attempt outcomes: an owner refusal trumped a proxy refusal, which trumped a gatekeeper refusal, which trumped a hang-up during the introduction (HUDI). NORC used this automated approach because the programming was relatively straightforward and because it freed supervisors' time to assign final dispositions to the more complicated non-refusal pending cases. As a quality control measure, supervisors reviewed 10% of the program's output. All non-refusal pending cases at the end of data collection were reviewed by supervisors and, based on each case's call history and call notes, assigned a final disposition.
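A minimal sketch of this refusal hierarchy is shown below. The disposition labels and function are illustrative only; they are not NORC's actual disposition codes or program.

    # Hypothetical sketch of the automated refusal hierarchy: an owner refusal
    # trumps a proxy refusal, which trumps a gatekeeper refusal, which trumps a
    # hang-up during the introduction (HUDI).
    REFUSAL_PRIORITY = ["owner refusal", "proxy refusal", "gatekeeper refusal", "HUDI"]

    def final_refusal_disposition(call_outcomes):
        """Return the highest-priority refusal outcome observed across all calls."""
        for outcome in REFUSAL_PRIORITY:
            if outcome in call_outcomes:
                return outcome
        return None  # not a pending refusal; reviewed by a supervisor instead

    # A case with both a gatekeeper refusal and an owner refusal is finalized
    # as an owner refusal.
    print(final_refusal_disposition({"gatekeeper refusal", "owner refusal"}))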

The level of effort - meaning the number of calls - for each final main interview disposition is shown in Appendix AAA.

The level of effort by sample batch - all cases including completed and partially completed cases, and non-interviews - is shown in Table 4.28. The shorter time period for batch four compared to the other batches, and the absence of passes, meant fewer calls were made to batch four cases relative to the other batches.

Table 4.28 Number of Calls by Sample Batch for All Main Cases
Sample Batch Number of Cases Percent of Cases Number of Calls Percent of Calls Average Number of Calls
Batch 1 2,222 22.94 46,628 25.06 21.0
Batch 2 2,276 23.50 55,122 29.62 24.2
Batch 3 2,232 23.04 47,443 25.50 21.3
Batch 4 2,957 30.53 36,883 19.82 12.5
All 9,687 100.00 186,076 100.00 19.2


4.7.14 Unweighted Completion Rate

Of the 9,687 eligible cases released to main interviewing, 4,268 were completed, for an unweighted completion rate of 44.1%47 (Table 4.29). The total number of cases assigned a final code of refusal in main interviewing was 2,047, for a refusal rate of 21.1%. Virtually every case (99%) that qualified for a main interview was contacted at least once during data collection, meaning that NORC interviewers were able to reach and speak to someone at the firm. The remaining 4,718 cases (48.7%) were non-interviews, meaning they were neither completed nor partially completed.
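As a check on the arithmetic, the rates cited above follow directly from the reported counts; the snippet below is a simple illustration and not part of NORC's processing.

    # Unweighted rates implied by the counts reported above.
    completed, refusals, released = 4268, 2047, 9687
    print(round(100 * completed / released, 1))  # 44.1 (completion rate, %)
    print(round(100 * refusals / released, 1))   # 21.1 (refusal rate, %)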

Table 4.29 Case Status and Completion Rate by Sample Batch
  Cases Released Completed Cases1 Completion Rate (Unweighted) Partially Completed Cases2 Non-Interviews
Batch 1 2,222 1,043 46.9% 155 1,024
Batch 2 2,276 1,066 46.8% 182 1,028
Batch 3 2,232 1,069 47.9% 149 1,014
Batch 4 2,957 1,090 36.9% 215 1,652
Total 9,687 4,268 44.1% 701 4,718

1Cases that passed the completeness check with a final disposition of 19/1 Return to Table

2Break-offs and cases that did not pass the completeness test Return to Table

Table 4.30 shows the completion rate for main interviewing by sample batch and pass (except for batch four, which did not use passes).

Table 4.30 Main Eligible Completion Rate1 by Sample Batch and Pass (Unweighted)
  Batch 1 Batch 2 Batch 3 Batch 4 TOTAL
Pass One 34.0% 37.1% 38.3% n/a 36.5%
Pass Two 30.8% 26.2% 25.1% n/a 27.4%
Total 46.9% 46.8% 47.9% 36.9% 44.1%

1Of all main cases in batch and pass, % that completed the main interview in that batch and pass. For example, if 500 cases were worked in (batch 1 pass 1) B1P1, and 100 of the cases were completed in B1P1, the completion rate is 20%.


5 Data Review And Delivery


5.1 Introduction

This chapter describes the process NORC used to review and prepare the screener and main questionnaire data for delivery. For both the screener and main questionnaire data, NORC delivered raw, unedited data files. These files reflected the data as originally captured, subject only to several automated, systematic recoding steps that brought the CATI questionnaire data into line with the questionnaire naming conventions. These data translations are discussed below. Other than the automated recoding steps, no modifications were made to the raw data.

In addition to the raw data, NORC delivered recommended data edits in a separate transactions file. This file reflected NORC's proposed edits and cleaning measures, as well as changes from data retrieval and CATI questionnaire version changes. The transactions file was intended to allow the FRB to review the proposed edits and determine which, if any, to implement.


5.2 Quality Control Process

NORC performed quality control checks on the screener and main interview data prior to delivering the data. The following steps comprised NORC's quality control process:

1) A set of SAS programs was run on the dataset to check the skip logic, ranges, and code frame. For each question, the programs checked that a respondent was asked that question if and only if the respondent should have been asked that question, and that the respondent's answer fell within the correct range and/or was in the code frame (an illustrative sketch of this kind of check appears after this list).

2) For the main interview data, the programs checked that the preloaded variables were loaded with the correct information (i.e., that if updated information was gathered during the screener interview, the updated information was preloaded for the main interview), and that the service flags and section H rank flags were correctly assigned.

3) To ensure that the SAS dataset matched the data dictionary, NORC compared the labels, names and delivery formats in the data dictionary against the PROC CONTENTS output from the SAS dataset. In addition, NORC checked to ensure that only the subset of the ASCII variables previously identified by NORC and the FRB were included on the SAS dataset.

4) The verbatim flags and verbatim files were reviewed to ensure that for every verbatim variable, there was a corresponding verbatim flag in the main dataset, and for every verbatim flag, there was a corresponding verbatim record in the verbatim file for that case.

5) The verbatim responses and interviewer comments entered by using the <F2> key (<F2> comments) were reviewed for appropriate content. This review included removing all punctuation, and modifying character strings such as "don't know," "refused," and "exception" to their single character codes, "D", "R" and "X" (left-justified in the character field). The comment file was produced directly from the CATI questionnaire, not the SAS dataset, so it was edited to use the SAS variable names, rather than the questionnaire variable names it contained when generated.

6) In the main interview data, the <F2> comments were reviewed to ensure that there was a comment for every use of the exception key, and that each comment contained a data value and an explanation.

7) A CATI programmer reviewed all issues raised during all QC steps. All issues were maintained in a central system accessible to all team members.

8) In a data memo sent with each delivery, NORC recommended a resolution for each issue to the FRB and, if necessary, also sent additional follow-up memos. NORC included new problems, as well as new cases affected by previously identified problems, in these data memos.
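The sketch below illustrates, in Python rather than SAS, the kind of per-item skip-logic, range, and code-frame check described in step 1. The record layout, rule structure, and variable names are hypothetical; NORC's actual checks were SAS programs and are not reproduced in this report.

    # Illustrative analogue of a skip-logic / range / code-frame check.
    def check_item(record, item, rule):
        """Return a list of QC issues for one questionnaire item on one record."""
        issues = []
        asked = rule["ask_if"](record)      # should this item have been asked?
        value = record.get(item)
        if asked and value is None:
            issues.append(f"{item}: missing although the skip logic requires it")
        if not asked and value is not None:
            issues.append(f"{item}: answered although it should have been skipped")
        if value is not None and value not in rule["valid_values"]:
            issues.append(f"{item}: value {value!r} outside range/code frame")
        return issues

    # Example rule (hypothetical): a follow-up asked only when IS_CORP equals 1.
    rule_C30 = {"ask_if": lambda r: r.get("IS_CORP") == 1, "valid_values": {1, 2}}
    print(check_item({"IS_CORP": 0, "C30": 1}, "C30", rule_C30))
    # -> ['C30: answered although it should have been skipped']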


5.2.1 Data Review Process

Prior to each delivery of main or screener interview data, NORC reviewed the data using a set of SAS programs, which were a critical component of our multi-step quality assurance and control process. The objectives of the review programs were:

1) To identify areas in the questionnaire that might require targeted editing such as incomplete addresses.

2) To identify invalid data that might require editing.

3) To identify problems in the intricate CATI program. These problems could include faulty skip patterns or other CATI questionnaire logic errors that were not due to interviewer error or respondent error.

The review program allowed us to identify unexpected or disallowed types of responses and to perform the range checks. The review program also checked that all updated information from the screener was properly preloaded in the corresponding main interview, and created its own roster, service flags, MRLONLY flags, and section-H roster using only variables containing actual respondent answers to questions asked of them (i.e., did not use any CATI-created variables). In this way, the program independently checked the CATI operation.

Findings from the data review were documented in the README file that was sent with each delivery. Items requiring editing were documented in the transactions file, and if modifications to the CATI program were indicated, NORC notified the FRB (via the README file or a follow-up memo) and requested approval to make fixes. Modifications to the CATI program were first implemented and tested in a testing environment prior to being implemented in the production environment, and then delivered to the FRB for testing. Following FRB approval, the new CATI went into production. For every CATI change made during production, NORC notified the client and provided the following information:

• The date that the new version was implemented in production

• The issue(s) that the modification to the CATI programming corrected along with a list of affected variables.


5.2.2 Interviewer Variability Checks

Interviewer variability effects on the data can often be ameliorated by retraining and/or coaching individual interviewers. NORC conducted a regular review of item non-response for all SAS-delivered variables at the individual-interviewer level. Specifically, NORC calculated the mean number of non-missing items for each interviewer as compared to the mean for all other interviewers, and then examined the mean number of "Don't Know" and "Refused" answers for each interviewer in the same manner. The data collection team reviewed these results to determine whether the pattern of response suggested that an individual interviewer might benefit from supervisor coaching. See Section 4.7.6 for how these results were used by the data collection team.
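A simplified sketch of this interviewer-level comparison is given below. The data structure and threshold are hypothetical, and the actual review compared means across all SAS-delivered variables rather than a single count.

    # Hypothetical sketch: flag interviewers whose mean count of "Don't Know"/
    # "Refused" answers per case is well above the mean of all other interviewers.
    def flag_high_dk_rf(counts_by_interviewer, ratio_threshold=1.5):
        flagged = []
        for ivwr, counts in counts_by_interviewer.items():
            own_mean = sum(counts) / len(counts)
            others = [c for k, v in counts_by_interviewer.items() if k != ivwr for c in v]
            others_mean = sum(others) / len(others)
            if others_mean > 0 and own_mean / others_mean > ratio_threshold:
                flagged.append(ivwr)
        return flagged

    # Interviewer "B" averages far more DK/RF answers per case than the others.
    print(flag_high_dk_rf({"A": [0, 1, 2], "B": [6, 8, 7], "C": [1, 0, 1]}))  # ['B']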


5.2.3 Completeness Check Process

All completed cases were evaluated through the completeness check process. The process comprised three separate but related tests: 1) the overall completeness test, 2) the key financial services completeness test, and 3) the income, assets, liabilities and equity completeness test. In order to be considered a complete/passed interview, a case must have passed all three tests.

In addition, NORC conducted a high level completeness check to identify cases that may have failed on an individual test, but might have sufficient data, upon inspection, to be considered a complete/passed interview. These cases were called bubble cases and are discussed later in the chapter.

There were three tests to pass in order for a case to be updated from a "Complete" disposition to a "Complete/Passed" disposition:

1. Overall Completeness Test

The overall completeness test evaluated the proportion of questions the respondent answered. Because not all questions were asked of all respondents and item non-response was possible, this measure is quite complex. It required the construction of two sub-indices: total eligible question groups and total complete (valid) response groups.

The completeness ratio is defined as the number of complete response groups divided by the number of eligible question groups. To pass the overall completeness test, the case needed to have a completeness ratio greater than or equal to 75%. For passing cases, the overall completeness test pass/fail flag was updated to "pass" (1, where 0 means "failed") in the output file, and the percentage of valid responses was noted in the output file.
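A minimal sketch of this calculation follows, assuming the eligible and complete group counts have already been tallied for a case; the function and cutoff names are illustrative.

    # Minimal sketch of the overall completeness test.
    def overall_completeness(eligible_groups, complete_groups, cutoff=0.75):
        """Return (completeness ratio, pass flag): 1 if the ratio is >= 75%, else 0."""
        ratio = complete_groups / eligible_groups
        return ratio, 1 if ratio >= cutoff else 0

    # Example: 96 complete response groups out of 120 eligible groups passes (80%).
    print(overall_completeness(120, 96))  # (0.8, 1)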

In order to be counted as complete (valid), responses were required to fall within the question-specified range. Throughout the questionnaire, the acceptable code frames or ranges are specified following each question. Item non-responses, that is "Don't know" or "Refused" (designated by D and R respectively in the data), were not counted as complete responses. However, exceptions (designated by X in the data) were counted as complete responses.

Based on firm characteristics and certain responses, each respondent took a different course through the survey instrument. For example, only firms organized as corporations were asked if the firm was publicly traded (question C30). Thus, the total number of eligible questions varied greatly across firms. In addition, there were many instances when the same (or very similar) information could be collected in more than one question. In such cases, the group of questions and responses only counted as a single item. Note that some questions, specifically the dollar verification questions following the collection of all dollar amounts, were not counted as part of any question group.

There were seven types of question groups in the questionnaire. The following provides a description of each group and how each group is evaluated for completeness.

Single question groups. Most questions are also single question groups. An example of a single question group is question D2 (see section D in the questionnaire). D2 is complete if it contains a value greater than 0. If the respondent answered "Don't know" or refused to answer question D2, it would be counted as an incomplete eligible group, adding one to the total eligible question total (denominator) and zero to the complete response group total (numerator).

Range pair groups. Often in the questionnaire, responses of "Don't know" and "Refused" for questions eliciting an exact number are followed up with another question asking for a range. Such pairs are considered a single group. Questions C18_1 and C18_1_1 (see section C in the questionnaire) are an example of a range pair group. If C18_1 and C18_1_1 are asked, then this pair is a single eligible group. If C18_1 contains a non-missing value or X, or if C18_1_1 contains a non-missing value (e.g., 1, 2, 3, or 4), then the C18_1 group is complete. If C18_1_1 is answered "Don't know" or refused, then the C18_1 group is not complete. Note that in order to be asked question C18_1_1, the respondent must have already given an answer of "Don't know" or refused to answer question C18_1. Therefore if the respondent gives a complete answer to either of the two questions in the group, the entire group is considered complete, adding one to the total eligible question total and one to the complete response group total.
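The scoring of a range pair group can be sketched as follows, using "D," "R," and "X" for "Don't know," "Refused," and exception, and None for a question that was not asked. This is an illustrative sketch, not the actual evaluation code.

    # Hypothetical sketch of range pair group scoring (e.g., C18_1 and C18_1_1).
    def range_pair_complete(exact_answer, range_answer):
        """Complete if either member holds a valid value (exceptions "X" count);
        DK/RF on the range follow-up leaves the group incomplete."""
        def valid(v):
            return v is not None and v not in ("D", "R")
        return valid(exact_answer) or valid(range_answer)

    print(range_pair_complete("D", 3))    # True: the range follow-up was answered
    print(range_pair_complete("R", "R"))  # False: both members are DK/RF

Estimate pair groups, described next, are scored in the same way, with the estimate follow-up taking the place of the range follow-up.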

Estimate pair groups. Many, but not all of the dollar amounts fall into this group, where "Don't know" or "Refused" responses are followed up by a request for the respondent to provide an estimate. An example of an estimate pair group is questions E6_1 and E6_1_1. If E6_1 or E6_1_1 is asked, then this pair is one eligible group. If E6_1 contains a non-missing value (or X), or if E6_1_1 contains a non-missing value, then the group is complete. If E6_1_1 contains a DK or RF, the E6_1 group is not complete. Note that, like range pair groups, the second question of estimate pair groups is only asked if the first question is not answered with a non-missing value.

Triples of question, estimate, and range. Most of these types of questions occur in sections P, R, S, and U of the questionnaire. All three questions are considered a single group. An example of this type of group is questions P2, P2_2, and P2_3. P2 requests an amount, P2_2 requests an estimate of the amount (if P2 is not adequately answered), and P2_3 asks for a range (if P2_2 is not adequately answered). If P2, P2_2, or P2_3 are asked, then this triple is one group. The group is complete if P2 contains a non-missing response or X, or if P2_2 contains a non-missing response or X. The group is incomplete if P2_3 is asked, regardless of the response to P2_3. (P2_3 is asked if P2 is refused, or if P2_2 is answered "Don't know" or refused.)

Five-question groups (estimate, positive/negative/zero, negative range, positive range). Most of these groups are asked in sections P, R, and S. In these groups, all five questions are considered a single eligible group. An example of this type of question group is questions P6, P6_1, P7, P7_1, and P7_2. These five questions jointly add a single eligible group to the denominator of the completeness ratio. For this eligible group to be complete, either P6 or P6_1 must contain a non-missing response. This group is incomplete if P6 is refused or if P6_1 is refused or answered "Don't know." Even if P7, P7_1, or P7_2 contain responses, those answers do not make the group complete.

Multiple (Check all that apply) categorical response questions. A multiple categorical response question is essentially a series of individual yes/no questions for each category. Each category is considered as its own question group and considered complete if it has a non-missing value (typically equivalent to yes or no). For example, question F26_1_(1-3) has 7 categories, and thus seven eligible groups (F26_1T1_1, F26_1T2_1, F26_1T3_1, F26_1T4_1, F26_1T5_1, F26_1T6_1, F26_1T7_1). The total number of eligible question groups would increase by 7 for any firm that is asked F26_1_1. Response groups would be evaluated separately for each of the seven eligible groups, with the total complete response groups increasing by one for each category with a non-missing (1 or 2 in this example) response and zero for each category that is refused or answered "Don't know."

Verification Groups. These groups consist of 1) a question that verifies information about the firm/owner, and 2) subsequent question(s) that allow the respondent to correct false information and gather new information. The majority of these groups are found in section A of the main questionnaire. An example of a verification group is questions A5_1_1 and A5_2. A5_1_1 verifies that the preloaded business name is still applicable. A5_2 collects the correct name, if the preloaded name was not accurate. An important distinction between these groups and the other groups is that, unlike for the other groups, subsequent questions (in this example, A5_2) are asked not only following a missing response (refused, or "Don't know"), but also following a (non-missing) "no" response from the previous question. These question pairs add a single eligible group to the completeness ratio denominator. To be considered complete, non-missing data is required on the lead-in question as well as the follow-up whenever it is asked.

2. Key Financial Services Completeness Test

The completeness requirements for the key financial services test vary depending on whether the firm is a sole proprietorship or another firm type. Proprietorship status is determined at question B3 in the main questionnaire. Values of <1> or <9> at B3 indicate that the firm is a sole proprietorship; these cases have a proprietorship flag set to "true." For sole proprietorships, the key financial service entrance questions are:

- E1

- E4

- F7

- F20

- F27

- F33

- F50

- F54

- MRL1 (or MRL1_1) and MRL2 (or MRL3)

If 7 or more of these 9 key financial service questions contained valid values, then the case passed the key financial services completeness check; the financial services pass/fail flag was updated to "pass" (1) in the output file, and the number of valid responses was noted in the output file.

Questions MRL1 (or MRL1_1) and MRL2 (or MRL3) were treated as a single group of entrance questions. For the MRL entrance question to be complete, a valid (non-missing) response was required at MRL1 or MRL1_1 and at MRL2 or MRL3. Questions E1, E4, F7, F20, F27, F33, F50, and F54 were considered complete if they were greater than zero.

Non-proprietorship is indicated by a value of <2>, <3>, <4>, <5>, <6>, <7>, or <8> at B3. These cases have a proprietorship flag of the value "false." For non-proprietorships, the key financial service entrance questions are:

- E1

- E4

- F7

- F20

- F27

- F33

- F39

- F50

- F54

- MRL1 (or MRL1_1) and MRL2 (or MRL3)

If 8 or more of these 10 key financial service questions contained valid values, then the case passed the key financial services completeness check; the financial services pass/fail flag was updated to "pass" (1) in the output file, and the number of valid responses was noted in the output file.

For non-proprietorships, questions MRL1 (or MRL1_1) and MRL2 (or MRL3) were treated as a single group of entrance questions in the same way as for sole proprietorships, as described above. Questions E1, E4, F7, F20, F27, F33, F39, F50, and F54 were considered complete if they were greater than zero.
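A compact sketch of this pass/fail rule is shown below, covering both the proprietorship (7 of 9) and non-proprietorship (8 of 10) variants. The Python code is illustrative only (the production check was implemented in SAS), and the argument and function names are hypothetical.

```python
# Illustrative sketch of the key financial services completeness check.

def _nonmissing(v):
    return v is not None and v not in ("D", "R")

def mrl_group_complete(a):
    """MRL1 (or MRL1_1) and MRL2 (or MRL3) count as a single entrance question."""
    return ((_nonmissing(a.get("MRL1")) or _nonmissing(a.get("MRL1_1"))) and
            (_nonmissing(a.get("MRL2")) or _nonmissing(a.get("MRL3"))))

def key_financial_services_pass(a, is_proprietorship):
    """`a` maps question names to responses for one case."""
    entrance = ["E1", "E4", "F7", "F20", "F27", "F33", "F50", "F54"]
    if not is_proprietorship:
        entrance.append("F39")                     # asked only of non-proprietorships
    # E and F entrance questions count as complete when greater than zero.
    valid = sum(1 for q in entrance
                if isinstance(a.get(q), (int, float)) and a[q] > 0)
    valid += 1 if mrl_group_complete(a) else 0
    threshold = 7 if is_proprietorship else 8      # 7 of 9, or 8 of 10
    return valid >= threshold, valid
```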

3. Income, Assets, Liabilities and Equity Completeness Test

To pass the income, assets, liabilities and equity completeness test, 75% or more of the eligible question groups in sections P, R and S must contain valid answers. The question groups and calculation of the completeness ratio are the same as for the Overall Completeness Test, except that only question groups from sections P, R and S are considered.

For cases that successfully passed the income, assets, liabilities and equity completeness test, the pass/fail flag was set to "pass" (1) in the output file, and the percentage of valid responses was noted in the output file.

Table 5.1 lists the passing rate for each of the three parts of the completeness check, and the entire completeness check, by batch.

Table 5.1 Pass Rates by Batch
Batch 1 Batch 2 Batch 3 Batch 4 Total
Overall Completeness Test 98.93% 99.32% 99.91% 99.56% 99.43%
Key Financial Services Test 99.20% 99.07% 99.74% 99.91% 99.48%
Income, Assets, Liabilities and Equity Completeness Test 90.83% 88.84% 92.96% 94.74% 91.82%
All Three Tests 90.83% 88.42% 92.79% 94.65% 91.64%


5.2.3.1 High Level Completeness Check - Bubble Cases

All cases, regardless of whether they passed the individual tests, were reviewed in the high-level completeness check. In addition to serving as a general review, this step identified cases that were "on the bubble," i.e., cases that came close to passing all three completeness tests and may have passed one or two, but not all three.

Bubble cases were defined as cases that failed one or more of the three tests but answered at least 65% of eligible question groups for all of the failed tests (the normal passing criterion was 75%). FRB reviewed these cases, as well as others that scored lower (as low as 55%) on the completeness criteria, to determine if they could be considered passing cases.
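A simplified sketch of the bubble-case rule is shown below. It treats each test result as the fraction of eligible question groups answered, which is a simplification (the key financial services test actually uses a 7-of-9 or 8-of-10 count), and the function name and threshold constants are illustrative rather than NORC's actual code.

```python
# Hypothetical sketch: 75% is the normal passing criterion, 65% the bubble floor.
PASS_THRESHOLD = 0.75
BUBBLE_THRESHOLD = 0.65

def classify_case(scores):
    """`scores` maps test name -> fraction of eligible question groups answered."""
    failed = {test: s for test, s in scores.items() if s < PASS_THRESHOLD}
    if not failed:
        return "passed"
    if all(s >= BUBBLE_THRESHOLD for s in failed.values()):
        return "bubble"                     # flagged for FRB review
    return "failed"

# Example: fails the income/assets/liabilities/equity test at 68%,
# but stays above the 65% bubble floor on every failed test.
print(classify_case({"overall": 0.91, "key_services": 0.89, "income_assets": 0.68}))
# -> "bubble"
```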

NORC sent bubble cases to the FRB for review on a flow basis, and the FRB responded on a flow basis. After data collection ended, NORC reset the dispositions of all accepted bubble cases to indicate that they passed the completeness check. Production reports did not include the outcomes of bubble case reviews until data collection ended.

Table 5.2 displays the number of cases that failed the completeness check, the number of cases identified by NORC as bubble cases, and the number of cases that initially failed the completeness check but were accepted by the FRB as passing after review, by batch and across all batches. Because the FRB reviewed other cases in addition to the bubble cases identified by NORC, the number of accepted failing cases for a batch could be higher than the number of bubble cases that NORC identified for that batch.

Table 5.2 Bubble Cases by Batch
Batch 1 Batch 2 Batch 3 Batch 4 Total
Failed Completeness Test 103 137 82 61 383
Bubble Cases 24 39 23 18 104
Failed Cases Accepted by FRB as Passing 27 20 17 12 76


5.3 Client Data Memos

NORC delivered screener and main questionnaire data to the FRB on a regular basis (based on calendar time, as described in Section 5.7). The FRB reviewed the data in each delivery. When questions arose from that review, the FRB sent them to the NORC data delivery task leader and the IT project manager in a memo responding to the README file from that delivery. The data delivery task leader compiled NORC's responses into a single memo and worked with the IT project manager to allocate resources to address the FRB's questions.

In addition to documenting responses to data questions from the FRB, the data delivery task leader maintained and regularly sent to the FRB:

• A log of all changes made to the production CATI, including the date that a version went into production

• A log of all changes made to the hardcopy version of the questionnaire (towards the end of the project the FRB took over this responsibility from NORC)

• Updates on cases that may have required data retrieval

• Any other delivery-related issues, as needed

NORC's SSBF project team discussed these issues at weekly staff meetings. NORC responded within one week of receipt of the FRB's questions, or alerted the FRB if a response would require more than one week's review.


5.4 Data Editing


5.4.1 Identifying Cases for Editing

Completed cases were eligible for editing. For editing purposes, a completed case is any case that reached the end of the CATI instrument and was filed on the data management system. This definition is separate from the designation of "Complete and Passed" as measured by the completeness check process (see Section 5.2.3). The completeness check process was intended to verify the quality of the data by requiring cases to contain sufficient substantive data to permit analysis, whereas this definition of complete identifies all cases that finished the interview, regardless of quality.


5.4.2 Editing Process

NORC did not directly edit any CATI data, but instead maintained a separate database of suggested edits, called the transactions file. NORC created a companion document called the data changes file to document the broader issues addressed by the edits in the transactions file. The editing process included reviewing verbatim responses and comments, implementing edits based on those responses, and adding or deleting services and institutions not captured during the original interview. The editing process also included updating the transactions file with corrections that arose as a result of CATI version changes, and data obtained through data retrieval.


5.4.3 Transactions File

All edits were made in a separate file, called the transactions file, and were not made to the raw data in the SAS dataset. Issues identified through the data review process or through the client's data memos were reviewed by the senior technical questionnaire analyst and other NORC project staff. When this review identified variables that required editing, SAS code was written to generate a list of all case IDs affected by the issue. Each affected case was entered into the transactions file. The following variable-level information was recorded in the transactions file:

• Status of the entry (added, changed, removed)

• Date added

• SUID (case ID)

• Policy Decision Report (PDR) number (if applicable)

• The issue that caused the error

• Variable name

• Old response value

• New response value

• Reason for the change

• Comments or further description of the problem

• The verbatim, if a recode based on the content of the verbatim

NORC maintained separate transactions files for screener and main data.
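For illustration, a single entry in the transactions file might be represented as follows. This is a hypothetical layout mirroring the variable-level fields listed above, sketched in Python; it is not the format of the delivered file.

```python
# Hypothetical record layout for one transactions-file entry.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransactionEntry:
    status: str                      # "added", "changed", or "removed"
    date_added: str
    suid: str                        # case ID
    pdr_number: Optional[str]        # Policy Decision Report number, if applicable
    issue: str                       # the issue that caused the error
    variable: str
    old_value: Optional[str]
    new_value: Optional[str]
    reason: str
    comments: str = ""               # further description of the problem
    verbatim: Optional[str] = None   # included when a recode was based on a verbatim
```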


5.4.4 Data Changes Spreadsheet

NORC maintained the data changes spreadsheet in tandem with the transactions file; while the transactions file was a variable-level listing of all suggested data edits, the data changes spreadsheet was a case-and-issue-level listing of all of the problems that were identified during the course of the project. For each case-issue combination, the following information was recorded:

• Policy Decision Report (PDR) number (if applicable)

• Status of the entry (added, changed, removed)

• Action taken (added to the transactions file, or not)

• Date added/changed

• Issue description

• SUID (case ID)

• Variables affected

• Candidate for retrieval or not

• Begun retrieval or not

• Date of retrieval

• Comments or further description of the problem

The purpose of the data changes spreadsheet was to track the status of particular issues without delving into the variable-level detail recorded in the transactions file.


5.4.5 Global and System Edits

Several data edits occurred within the data management systems. These edits were applied post-data collection and were incorporated into the SAS delivery files. Each global edit is described below.


5.4.5.1 Section H Ranking Flags

Financial institution names were collected in sections E, F, MRL, and G of the main interview and stored in rosters that permitted entry of up to twenty institutions. After completion of section G, these financial institutions were ranked based on the types of services the firm used at the institutions. Cases then went through up to eight loops of section H, one loop per institution, with the loop number for the institution corresponding to its ranking. Institution ranking flags (variable names INSTRNK1-INSTRNK20) were created post-data collection to indicate which loop of section H, if any, corresponded to the institutions at each of the twenty roster positions. Therefore INSTRNK1-INSTRNK20 mapped the twenty roster positions onto the eight loops of section H.
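This mapping can be pictured with the short sketch below. It is a hypothetical reconstruction in Python: `section_h_order` is assumed to list the roster positions in the order their institutions were ranked for section H, and the use of 0 for occupied-but-unranked positions and a missing value for unoccupied positions is an assumption about the delivered coding.

```python
def build_instrnk(section_h_order, occupied_positions):
    """Map each of the 20 roster positions to its section H loop (1-8), if any."""
    loop_of_position = {pos: loop
                        for loop, pos in enumerate(section_h_order[:8], start=1)}
    flags = {}
    for pos in range(1, 21):
        if pos not in occupied_positions:
            flags[f"INSTRNK{pos}"] = None                 # unoccupied roster slot
        else:
            flags[f"INSTRNK{pos}"] = loop_of_position.get(pos, 0)
    return flags

# Example: roster slots 1, 2, and 5 are occupied; the institutions in slots 2
# and 1 were ranked first and second, so they map to loops 1 and 2 of section H.
flags = build_instrnk(section_h_order=[2, 1], occupied_positions={1, 2, 5})
print(flags["INSTRNK1"], flags["INSTRNK2"], flags["INSTRNK5"])   # 2 1 0
```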


5.4.5.2 Service Flags

As mentioned in the preceding paragraph, institution names were collected in sections E, F, MRL, and G and stored on a roster of up to 20 institutions. In sections E and F, after the names of the institutions at which the firm used a particular service were collected, the respondent was asked service-specific questions about those institutions. Where more than three institutions were listed, the respondent was asked to identify the two with the largest balances; the respondent was then asked about those two institutions individually, in roster order, in the first two loops, and about all remaining institutions together in the third loop. When there were three or fewer institutions, the respondent was asked about each institution in the order it was entered into the roster.

Flags for each service offered by each institution were created in post-processing prior to data delivery. The purpose of these service flags was to indicate which services the firm used at the institution stored in each of the twenty roster positions. Flags were not calculated for non-occupied roster positions.

The service flags for sections E and F were constructed as follows:

• 0 = A particular service was not used by the respondent at the referent institution.

• 1 = A particular service was used by the respondent at the referent institution, and the respondent was asked service-specific questions about this particular institution first.

• 2 = A particular service was used by the respondent at the referent institution, and the respondent was asked service-specific questions about this particular institution second.

• 3 = A particular service was used by the respondent at the referent institution, and the respondent was asked service-specific questions about this particular institution third.

• 30 = The respondent used a particular service at the referent institution as well as at least 3 other institutions. The respondent was asked the service-specific questions about all of the `30' flag institutions combined.

• . = No financial institution stored in the roster position.

The TMRA and TMRD service flags indicated the most recent approved loan and most recent denied loan, respectively. Since there could only be one most recent approved loan institution and/or one most recent denied loan institution, at most one of the twenty TMRA flags could be non-missing/non-zero, and at most one of the twenty TMRD flags could be non-missing/non-zero. Thus:

• If a firm had a most recent approved loan, then TMRA=1 for that roster position and TMRA=0 for all other occupied roster positions. If a firm did not have a most recent approved loan, then for all occupied roster positions TMRA=0. For unoccupied roster positions, TMRA=.

• If a firm had a most recent denied loan, then TMRD=1 for that roster position and TMRD=0 for all other occupied roster positions. If a firm did not have a most recent denied loan, then for all occupied roster positions TMRD=0. For unoccupied roster positions, TMRD=.

In section G, a firm could have the service at more than one institution, but no follow-up questions were asked. Therefore the non-missing, non-zero flags contain no extra information, unlike in sections E and F. The meanings of the service flags in section G are as follows:

• 0 = A particular service was not used by the respondent at the referent institution.

• 1 = A particular service was used by the respondent at the referent institution, and this institution was in the lowest numbered roster position of all of the institutions at which this service was used.

• 2 = A particular service was used by the respondent at the referent institution, and this institution was in the second lowest numbered roster position of all of the institutions at which this service was used.

• 3 = A particular service was used by the respondent at the referent institution, and this institution was in the third lowest numbered roster position of all of the institutions at which this service was used. The respondent used this service at exactly 3 institutions.

• 30 = The respondent used a particular service at more than 3 institutions, and this institution was not in the two lowest numbered roster positions of all of the institutions at which this service was used.

• . = No financial source or institution.
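The section G codes above can be assigned mechanically once the roster positions at which a service was used are known. The sketch below is an illustrative reconstruction in Python (the actual flags were built in SAS post-processing), with hypothetical argument names.

```python
def section_g_flags(used_positions, occupied_positions):
    """Assign 0/1/2/3/30/missing for one service across the 20 roster positions.

    `used_positions`: roster positions (1-20) at which the service was used.
    """
    ordered = sorted(used_positions)
    flags = {}
    for pos in range(1, 21):
        if pos not in occupied_positions:
            flags[pos] = None            # "." : no institution in this roster slot
        elif pos not in used_positions:
            flags[pos] = 0               # service not used at this institution
        elif pos == ordered[0]:
            flags[pos] = 1               # lowest-numbered position with the service
        elif pos == ordered[1]:
            flags[pos] = 2               # second lowest
        elif len(ordered) == 3:
            flags[pos] = 3               # third lowest; service used at exactly 3
        else:
            flags[pos] = 30              # used at more than 3 institutions
    return flags

# Example: service used at roster positions 3, 7, 9, and 12 (all occupied).
print(section_g_flags({3, 7, 9, 12}, occupied_positions=set(range(1, 13))))
```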


5.4.5.3 Reserve Codes

NORC's standard set of reserve codes is:

• -5 Not Applicable

• -4 Multiple

• -3 Missing

• -2 Don't Know

• -1 Refused

On SSBF, the value -3 (Missing) was usually replaced with a dot (.) in the dataset, although there were instances in which -3 was a valid value. For example, MSA = -3 when the firm was not located in an MSA. The value -2 (Don't know) was replaced with the letter D, and -1 (Refused) was replaced with the letter R. The letter X was used to designate an "Exception" - a value that is out of range; interviewers entered a data value and an explanation for each instance of an X in the dataset. Some questions within the interview could have valid responses of -1 through -5; these were not changed to reflect the reserve codes above.
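The substitution can be summarized in the small sketch below. It is illustrative only (the conversion was part of SAS post-processing), and the `exempt` argument is a hypothetical way of representing questions whose valid responses legitimately include negative values.

```python
def recode_reserve(value, exempt=False):
    """Map NORC reserve codes to the delivered values described above.

    `exempt` marks questions whose valid response range legitimately includes
    negative values (such as MSA, where -3 was valid); these were left unchanged.
    """
    if exempt:
        return value
    if value == -3:
        return "."       # Missing
    if value == -2:
        return "D"       # Don't know
    if value == -1:
        return "R"       # Refused
    return value         # -5 (Not Applicable), -4 (Multiple), ordinary responses

print(recode_reserve(-2))                # D
print(recode_reserve(-3, exempt=True))   # -3 (e.g., MSA for a firm outside an MSA)
```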


5.4.5.4 Verbatim Flags

Verbatim variables included responses to open-ended questions and "other/specify" responses. Verbatim variables were delivered in a separate file. A flag was inserted in the main dataset when verbatim responses were captured. The following flags were used:

• . = A verbatim response was not present (legitimate skip)

• 1 = A verbatim response was present

• D = Interviewer indicated "don't know" either by using the <F8> key or by typing the words "don't know." Use of the <F8> key inserts ASCII character 209 into the data.

• R = Interviewer indicated "refused" either by using the <F7> key or by typing refused. The <F7> key inserts ASCII character 208 into the data.
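A small sketch of how such a flag might be derived from a raw verbatim field is shown below; the function is hypothetical, and the handling of typed-out responses is simplified.

```python
def verbatim_flag(raw):
    """Assign the verbatim flag described above from a raw verbatim field."""
    if raw is None or raw == "":
        return "."                                 # legitimate skip, no verbatim
    if chr(209) in raw or raw.strip().lower() == "don't know":
        return "D"                                 # <F8> key or typed "don't know"
    if chr(208) in raw or raw.strip().lower() == "refused":
        return "R"                                 # <F7> key or typed "refused"
    return 1                                       # a verbatim response was present

print(verbatim_flag("Owner retired; business sold"))   # 1
print(verbatim_flag(chr(208)))                         # R
```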


5.5 Data Cleaning

The CATI system automatically performed most of what would be considered data cleaning through programmed checks for valid responses and ranges as the interview was administered. Necessary data edits were identified by the project team through their quality control procedures or through normal CATI production support, or by the FRB in their review of the data.

However, certain data cleaning and reviewing had to be done post-CATI, in particular service flag construction and section H ranking flag construction (as described above).


5.6 Data Coding and Recoding

Three sets of variables were recoded by NORC. Two sets were from the main interview (the SIC industry type variable and the race variables), and one was from the screener interview (the single most important problem facing the business today). As with other edits, the results of all data coding and recoding were delivered in the transactions file format. At the FRB's request, NORC provided the coding output in three separate files, one for each type of coding.


5.6.1 Industry Coding

Question B1_1 of the main interview asked: "What is the principal activity of the business?" Usually (70 percent of the time), the description provided by the respondent matched the preloaded description provided by Dun and Bradstreet. In the event that the descriptions did not match, the interviewer recorded the response in a verbatim field and the case was referred for coding. The new description provided by the respondent, if significantly different from that provided by Dun and Bradstreet, was recoded using the 4-digit SIC code frame.

Two SIC-experienced coders independently coded the industry data. Discrepancies between the two coders were automatically flagged for adjudicator review. During adjudicator review, the adjudicator, NORC's coding department production manager, reviewed both coder recommendations and the original data, and could assign either of the two recommended codes or any other code from the frame. After adjudicating all items coded differently by the two coders, the same adjudicator performed a series of quality control checks on the coded data to identify inconsistencies in coding between items, as well as instances in which both coders agreed, but the adjudicator thought that there was a more appropriate code.

In total, NORC coded 1,572 verbatim responses into the 4-digit SIC code frame. Of these, 728 verbatim responses were found not to differ significantly from their preloaded values and were coded back to the original SIC code. Another 53 items were deemed uncodable (usually because they were too vague or otherwise incomprehensible). The remaining 791 items were assigned a new SIC code to match the new industry described by the respondent in the verbatim entry.

A batch of approximately the first half of the industry coding was sent to the FRB for review and comments on January 21, 2005. The second file, containing all of the coded data, was sent on February 23, 2005.


5.6.2 Race Coding

The "other/specify" question for race was possible in 8 different variables in the questionnaire:

• C4_1_1

• C4_1_2

• C14_1_1

• C14_1_2

• C14_1_3

• C24_1_1

• C24_1_2

• C24_1_3

For this question, the respondent was asked to classify the individual owners (themselves or another individual) into one or more of the following six race categories: White, Black/African American, Asian, Native Hawaiian or Other Pacific Islander, American Indian or Alaska Native, or Other. All responses of "Other" were accompanied by a verbatim response in which the respondent specified the other race. These questions were "check all that apply" questions, meaning that the respondent could select any combination of one or more races, including the "Other" category.

Often the "other/specify" verbatim response could clearly be back-coded into the existing code frame. However, during the coding process, it became clear that new response categories should be added to accommodate more frequently occurring answers in the "other/specify" response category. NORC recommended adding three new categories for these race questions: Hispanic, unspecified, and not applicable.

Hispanic was added because many respondents chose not to classify themselves as anything but Hispanic, and often cited a particular country of origin (e.g., "Cuban" or "Dominican"). Unspecified was a necessary addition because of the number of respondents who essentially refused to categorize themselves, or whose responses were too vague to be coded, such as "American" or "Other." The final new category, not applicable, was used for the handful of cases in which the responses were clearly not meaningful answers to the question.

As described above, because these questions are "check all that apply" questions, the respondent's "other/specify" answer may contain more than one value. (For further description of this type of question, see the discussion of check all that apply question groups in Section 5.2.3 above.) In the event that the "other/specify" answer contained more than one value that could be back-coded into the existing code frame, NORC made an entry in the race coding transactions file for each affected variable. For example, if a recoding of a race question required both the African American and the Asian categories to be set to yes, NORC added two separate entries to the transactions file, one for each category.

All race coding was performed by two members of NORC's project staff, one of whom had extensive experience with the standard governmental race classification system. The two coders independently coded all items and then discussed discrepancies until resolved.

A batch of approximately the first half of the race coding was sent to the FRB for review and comments on December 10, 2004, at which point NORC proposed the three additions to the code frame described above. The FRB approved of the code frame additions and provided feedback that was used in preparing the final (cumulative) batch of race coding, which was delivered on February 26, 2005. In total, NORC coded 283 verbatim responses of "Other" races.


5.6.3 Business Problem Coding

Question A10_2 of the screener interview asked: "What is the single most important problem facing your business today?" The "other/specify" responses to this question were reviewed to determine if each response could be back-coded into the existing code frame. In the event that more than one problem was noted, NORC recoded only the first problem listed, because the code frame did not allow for multiple answers. At the FRB's request, the following categories were added to the code frame during the recoding phase:

• Energy costs

• Health costs (later modified to health care costs or availability)

• Costs other than labor, insurance, energy or health

• Cash flow

• No problems

All business problem coding was performed by three members of NORC's project staff, to provide maximum understanding of the aim and content of the interview. Two staff members independently coded all items, and then discussed discrepancies until resolved, with the third staff member serving as the adjudicator.

NORC reviewed and coded approximately half of the business problem responses and sent the results to the FRB for review and comments on January 21, 2005. During this initial review, NORC identified additional categories and proposed that they be added to the code frame:

• Growth

• Foreign competition

• Competition, other (including from unspecified sources)

• Availability of materials/resources (including quality)

• Labor problems other than cost or quality

• Internal management/administrative problems

• Environmental constraints (including location)

• Advertising and public awareness

• Market, economic, or industry instability

• Owner's personal problems

During the final review of all responses, NORC identified three more categories of responses that had previously been grouped in with the "Other" responses:

• Technology

• Dealing with insurance companies (not costs or availability)

• War and September 11th.

Items that did not fit into an existing or a proposed category, or were otherwise uncodable, were left in the "Other" category. Although no data needed to be changed in these cases, NORC included them in the appropriate transactions file so the FRB could review them alongside the other recodes.

Nearly 60 percent of all respondents gave an "other/specify" response. NORC delivered the final file of all responses on March 2, 2005; it included 5,613 coded items in total, all but 234 of which were coded into existing or proposed categories.


5.7 Interim Data Deliveries

NORC delivered screener and main data on a staggered schedule, originally biweekly so that main data and screener data were delivered on alternating weeks. In early September 2004, the FRB agreed to move screener deliveries to a monthly schedule (every 4 weeks) rather than biweekly. The delivery schedule was altered to accommodate holidays and staff vacations when necessary. Please see Appendix BBB for the interim delivery schedule.

For each interim screener or main data delivery, NORC delivered:

• Processed raw, unedited data, in SAS format

• Completeness check results (merged into the main data only)

• Transaction file that documented the suggested edits for individual cases

• Separate ASCII data file (main only)

• List of bubble cases with completeness check results (main only)

• Data memo to document any problems found in that set of data, and recommend solutions where applicable

• Verbatim file containing all verbatim responses

• Comment file containing all marginal comments, F2 comments, and exception comments

• Frequency of responses file


5.8 Final Data Delivery

After completion of the data collection period (January 31, 2005), NORC began the final delivery process. The final delivery included all of the interim deliverables for all complete and partial interviews (both main and screener), plus the following items:

• Transaction files for the three types of coding/recoding

• Hard copies of all data that were collected outside of the interview (worksheets and any paper records of calls), along with a directory (in Excel) of all of the hard copies

• The final version of the data dictionary, including frequencies for categorical data and mean, median, standard deviation, minimum, and maximum responses for continuous variables

• Variables from weighting program output

• Case management data from the TNMS (call history file) along with a crosswalk that gives the meaning of each TNMS disposition

• Runtime version of the main and screener interview programs including institution lookup

• Separate spreadsheets for the main and screener that indicate the highest incentive amount each respondent was offered as well as which refusal letters were sent to that respondent

• Spreadsheet containing interviewer demographic information (interviewer IDs, race, gender, and status as either a supervisor or a converter), without names

With regard to the spreadsheet of interviewer demographic information, NORC required interviewers to sign waivers allowing NORC to provide this information. Because these waivers were not introduced until late in the project, the spreadsheet contained information only for those interviewers who worked on the project after the waivers were put in place.

In order to meet the contractual requirement that all items be delivered and approved by March 31, 2005, NORC sent individual deliverables as they were completed. This approach allowed time for the FRB to review each piece and provide comments to NORC, and for NORC to respond to these comments before the deadline.


6 Sampling and Weighting Procedures


6.1 Introduction

This chapter describes the sample design and weighting procedures for the 2003 Survey of Small Business Finances. The 2003 survey was based on a stratified systematic sample, where the 72 strata were defined by the cross-classification of business size, census division, and urban/rural status. The sample frame was constructed from the Dun's Market Identifiers™ (DMI) file, a business database maintained by the Dun & Bradstreet Corporation (D&B). The initial sample consisted of 37,600 businesses, a sample large enough to yield 4,000 completed interviews under the worst-case scenario. Before the screening interviews, the sample was assigned to batches to facilitate sample management at the call center. Releasing the sample by batches ensured that only enough businesses were screened to achieve the target sample size. By the end of the study, 23,798 businesses had been released for screening.

One of the important goals of the 2003 survey was to increase the overall response rate. To achieve that goal, the sample design included extensive nonresponse subsampling. Nonrespondents to the screening interview were subsampled for further screening attempts to improve the screener completion rate. The intent of the subsampling was to allow the interviewers to concentrate more intensive efforts on a subsample of the more difficult cases, ultimately leading to more completed cases. Nonrespondents to the main interview were also subsampled48.

In order to compensate for the imperfect frame, a five-percent follow-up sample was selected from the final screener incompletes to inform post-survey weighting adjustments.

In sum, the 2003 SSBF design involved four major components: the selection of the initial sample, the subsampling of screener nonrespondents, the subsampling of main interview nonrespondents, and an additional follow-up sample of screener incompletes. Figure 6.1 describes the entire sampling process.

The final SSBF analysis sample consisted of the businesses that completed the main interview (see Chapter 4 for a discussion of completeness requirements). An analysis weight was calculated for each complete case to support weighted estimation. The primary purpose of weighting was to correct for potential bias due to unequal selection probabilities and nonresponse. A secondary purpose of weighting was to adjust for ineligible businesses that were part of the original sample of 37,600 businesses. Informally, the analysis weight approximated the number of businesses in the target population that the responding business represented. The final analysis weight was calculated in multiple stages. The first stage was the calculation of the initial base weight to account for the sample design. The base weight for a sample business was the reciprocal of the probability of selection under the sample design. The subsequent weighting stages represented adjustments to the base weight for batch selection, sample release, screener and main eligibility, screener nonresponse subsampling, screener nonresponse, main interview nonresponse subsampling, and main interview nonresponse. Finally, outlier weights were trimmed as described in Section 6.9.10.
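The overall structure of the weight calculation can be sketched as a base weight multiplied by a chain of adjustment factors, with trimming applied at the end. The fragment below is a minimal illustration, not the SSBF weighting program; the factor list and the simple cap-style trim rule are placeholders for the stage-specific adjustments and the trimming procedure of Section 6.9.10.

```python
def analysis_weight(selection_prob, adjustment_factors, trim_cap=None):
    """Base weight (1 / selection probability) times a chain of adjustments."""
    weight = 1.0 / selection_prob            # base weight from the sample design
    for factor in adjustment_factors:        # batch selection, sample release,
        weight *= factor                     # eligibility, subsampling, nonresponse
    if trim_cap is not None:
        weight = min(weight, trim_cap)       # stand-in for outlier trimming
    return weight

# Example: a 1-in-250 selection probability with two nonresponse-type adjustments.
print(analysis_weight(1 / 250, [1.8, 1.25]))   # 562.5
```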

The remainder of this chapter describes the sampling methodology and weighting procedures in greater detail. Section 6.2 defines the target population of the 2003 survey. Section 6.3 discusses the construction of the sampling frame from the DMI file. Section 6.4 presents the sample stratification scheme. Section 6.5 illustrates how the screening sample size was determined based on assumptions regarding rates of eligibility, completion, and nonresponse subsampling. Section 6.6 describes the procedures for assigning the sample to batches and replicates. Section 6.7 presents an analysis of nonresponse subsampling and its impact on design effects. Section 6.8 discusses the selection of the five percent follow-up sample of screener incompletes. Section 6.9 describes the procedures of calculating the final analysis weights. Section 6.10 discusses the calculation of the response rates. Finally, Section 6.11 explains where actual implementation of the sampling methodology differed from the NORC sampling plan, including a detailed look at InfoUSA matching.


6.2 Target Population

The 2003 SSBF target population included U.S. businesses that met the following criteria:

• Businesses that were for-profit, nongovernmental, nonfinancial, and nonagricultural;

• Businesses that were at the enterprise level;

• Businesses with fewer than 500 employees; and

• Businesses that were in operation on December 31, 2003 under one or more of the current owners and were still in operation as of the date of the main interview.

The first criterion identified the industries of businesses that were covered in the target population. It explicitly excluded businesses that were governmental, financial, and agricultural. Table 6.1 lists the types of businesses, along with their Standard Industrial Classification (SIC) codes, that were specifically excluded from the target population.

The second criterion stated that only businesses that were not branches, divisions, or subsidiaries of a parent business were eligible for the survey. The third criterion included only businesses with fewer than 500 employees in the target population. An employee was an owner or any other worker in the business, whether or not he or she was paid. The last criterion, concerning operational status, was more stringent than its 1998 counterpart in that the business needed to be in operation at the time of the main interview; to be eligible in 1998, a business only needed to be in operation as of December 31, 1998.

Figure 6.1 2003 SSBF Sampling Flowchart
The figure depicts the steps involved in the survey, from the initial step of sample selection, to the screening interview, to the main interview.
Table 6.1 Businesses Excluded from the SSBF Target Population
SIC Types of Businesses
0000-0999 Agriculture, Fishing, and Forestry
4311 U.S. Postal Service
6000-6399 Non-Depository/Depository Institutions, Security/Commodity Brokers, Insurance Carriers
6700-6799 Holding and Other Investment Offices
8600-8699 Membership Organizations
9000-9721 Public Administration
821103 Public Elementary/Secondary Schools (the only 6-digit SIC code in the list)


6.3 Frame Construction

As in all prior rounds of the survey, the 2003 SSBF used the DMI file to construct the sampling frame to represent the target population. However, unlike in previous years, NORC used a complete listing of all population firms to draw the sample rather than having D&B draw the sample. The DMI is based on D&B's credit rating services and business telephone listings, and it is widely considered the best commercially available business database. NORC considered supplementing the DMI file using the InfoUSA business database. After extensive research, however, we decided not to use the InfoUSA database for this purpose. Section 6.11.2 summarizes this research.

For each business on file, D&B attempts to collect the telephone number, physical and mailing address, name of owner or chief executive/chief financial officer, classification as headquarters/branch/division or parent/subsidiary/sole location, the industry (which is coded to its Standard Industrial Classification (SIC) code), sales volume, and number of employees. These data are stored on the DMI file, which D&B attempts to regularly update, as noted by a variable designating the date of the most recent update to the record.

Upon request from NORC, D&B froze the DMI file prior to sample selection. The frozen file was preserved in its entirety by D&B throughout the duration of the study. A limited-content abstract (only variables necessary to establish strata were delivered) of the frozen file was delivered to NORC to serve as the sampling frame. Prior to the delivery of the abstract, D&B eliminated all firms that did not meet target population definitions according to the information contained on the DMI file49. First, businesses with ineligible SIC codes as identified in Table 6.1 were removed. Second, all branch, division and subsidiary businesses and businesses that were headquartered outside of the United States (i.e., the 50 states and the District of Columbia) were removed. Only businesses at the enterprise level, including single-location businesses and ultimate parent businesses that were not also subsidiaries, remained on the frame. Third, businesses with 500 or more employees were removed from the frame. In prior rounds of the survey, a small sample of businesses with 500 or more employees was included as a means to eliminate potential coverage bias. The idea was to give smaller businesses that were misclassified as large businesses a chance to be included in the sample. However, NORC's experience with the DMI file indicated that the potential coverage bias due to misclassified size was very small since businesses with 500 or more employees were expected to encompass less than two percent of the abstract file.
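As a rough illustration, the three exclusion steps could be expressed as a simple filter over frame records, as in the sketch below. The field names are assumptions, the SIC ranges come from Table 6.1, and the six-digit exclusion (821103, public elementary/secondary schools) is omitted from this four-digit sketch.

```python
# Illustrative filter only; D&B performed these exclusions before delivering
# the abstract file. Field names are hypothetical.
EXCLUDED_SIC_RANGES = [(0, 999), (4311, 4311), (6000, 6399),
                       (6700, 6799), (8600, 8699), (9000, 9721)]

def eligible_for_frame(record):
    sic = record["sic"]                                       # 4-digit SIC code
    if any(lo <= sic <= hi for lo, hi in EXCLUDED_SIC_RANGES):
        return False                                          # excluded industry (Table 6.1)
    if record["location_type"] in {"branch", "division", "subsidiary"}:
        return False                                          # not enterprise level
    if not record["us_headquartered"]:
        return False                                          # headquartered abroad
    if record["employees"] is not None and record["employees"] >= 500:
        return False                                          # 500 or more employees
    return True

print(eligible_for_frame({"sic": 5812, "location_type": "single location",
                          "us_headquartered": True, "employees": 12}))   # True
```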

Additional ineligible businesses that might still have existed in the final frame were identified through screening interviews. The ineligible businesses were removed from the sample and a post interview weight adjustment was applied to the remainder of the analysis sample to compensate for the ineligible businesses that were part of the original sample. The final frame constructed from the limited-content abstract file contained 9,701,023 businesses50. For each business on the frame, the following variables were constructed to support sample stratification, sample selection, and survey operations: identification variable, stratification variables, and other variables that were potentially useful. All of these variables were created from the existing variables in the DMI file, as described below.


6.3.1 Identification Variable

D&B assigns a unique DUNS number for each business in the DMI file through routine database maintenance. If a business is sold or otherwise changed, the DUNS number is retained only if the revised business is legally the same entity as the original business; otherwise a new DUNS number is assigned. The DUNS number was used as the identification variable.


6.3.2 Stratification Variables

Stratification variables included employment size, census division, and urban/rural status. As will be discussed in more detail later, employment size was a 4-category recode from the number of employees. Census division had 9 categories and urban/rural status had 2. Together they represented the geographic location of the business that was correlated with many survey variables. In addition, the SIC code was used to sort the frame before systematic selection within each stratum. This so-called implicit stratification helped improve the representativeness of the sample with respect to business types.


6.3.3 Other Variables

The sampling frame also included other variables that would be potentially useful for analytical purposes although the variables were not essential for sampling. These included credit score percentile51, sales volume, legal organization status, manufacturing/non-manufacturing indicator, and so on.


6.4 Sample Stratification

Prior to sampling, the DMI frame was divided into 72 strata based on the cross-classification of three stratification variables: total employment size, urban/rural status, and census division. The total employment size variable was coded from the number of employees, and it had the following four categories:

• 1-19 employees or unknown size;

• 20-49 employees;

• 50-99 employees; and

• 100-499 employees.

Note that businesses with missing information on the total number of employees in the D&B frame (unknown size) were classified into the first size class.

The census division variable was coded from the geographic location of the business. The nine census divisions are listed below, along with the abbreviated names of the states within each division.

• New England: ME, NH, VT, MA, RI, and CT;

• Middle Atlantic: NY, NJ, and PA;

• East North Central: OH, IN, IL, MI, and WI;

• West North Central: MN, IA, MO, ND, SD, NE, and KS;

• South Atlantic: DE, MD, DC, VA, WV, NC, SC, GA, and FL;

• East South Central: KY, TN, AL, and MS;

• West South Central: AR, LA, OK, and TX;

• Mountain: MT, ID, WY, CO, NM, AZ, UT, and NV; and

• Pacific: WA, OR, CA, AK, and HI.

The urban/rural status variable was coded from the geographic location variables on the DMI frame. It had two categories:

• Urban: businesses located within a Metropolitan Statistical Area (MSA); and

• Rural: other businesses.

The current MSAs were those defined by the Office of Management and Budget (OMB) based on application of 2000 standards to 2000 decennial census data and announced by OMB effective December 2003. According to OMB's definition, an MSA consists of one or more whole counties. An MSA is a core area containing a substantial population nucleus, together with adjacent communities having a high degree of economic and social integration with that core. The physical-location zip code in the abstract file was linked to state and county FIPS52 codes. These state and county FIPS codes could link counties and county aggregates to the MSAs, and thus, to urban areas.
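The three-digit stratum numbers shown in Table 6.2 can be read as a size-class digit, an urban/rural digit, and a census-division digit. The sketch below reconstructs that coding; it is an illustration inferred from the table rather than the sampling program itself.

```python
SIZE_CLASS = {"1-19 or unknown": 1, "20-49": 2, "50-99": 3, "100-499": 4}
URBAN_RURAL = {"urban": 1, "rural": 2}
CENSUS_DIVISION = {"New England": 1, "Middle Atlantic": 2, "East North Central": 3,
                   "West North Central": 4, "South Atlantic": 5,
                   "East South Central": 6, "West South Central": 7,
                   "Mountain": 8, "Pacific": 9}

def stratum_code(size_class, urban_rural, division):
    """First digit: size class; second: urban/rural; third: census division."""
    return (100 * SIZE_CLASS[size_class]
            + 10 * URBAN_RURAL[urban_rural]
            + CENSUS_DIVISION[division])

print(stratum_code("1-19 or unknown", "urban", "New England"))   # 111
print(stratum_code("100-499", "rural", "Pacific"))               # 429
```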

The 72 strata defined by the three stratification variables are presented in Table 6.2. The last column contains the total number of businesses in the frame per stratum. The 37,600 businesses were selected from this frame which contained 9,701,023 businesses. The original DMI frame contained 9,702,935 businesses, but the 1,912 overlapping businesses that had previously been selected for pretest use were removed from the original frame before selecting the main study sample.

Table 6.2 2003 SSBF Sample Stratification
Stratum Number Size Class Urban/Rural Census Division Frame Size
111 1-19 Urban New England 433,299
112 1-19 Urban Middle Atlantic 1,157,605
113 1-19 Urban East North Central 1,012,548
114 1-19 Urban West North Central 373,457
115 1-19 Urban South Atlantic 1,493,698
116 1-19 Urban East South Central 297,658
117 1-19 Urban West South Central 831,701
118 1-19 Urban Mountain 483,425
119 1-19 Urban Pacific 1,487,055
121 1-19 Rural New England 71,126
122 1-19 Rural Middle Atlantic 95,128
123 1-19 Rural East North Central 264,112
124 1-19 Rural West North Central 240,377
125 1-19 Rural South Atlantic 236,948
126 1-19 Rural East South Central 151,682
127 1-19 Rural West South Central 180,248
128 1-19 Rural Mountain 150,181
129 1-19 Rural Pacific 119,444
211 20-49 Urban New England 21,484
212 20-49 Urban Middle Atlantic 56,023
213 20-49 Urban East North Central 56,380
214 20-49 Urban West North Central 20,730
215 20-49 Urban South Atlantic 65,038
216 20-49 Urban East South Central 15,362
217 20-49 Urban West South Central 35,158
218 20-49 Urban Mountain 21,722
219 20-49 Urban Pacific 62,312
221 20-49 Rural New England 2,958
222 20-49 Rural Middle Atlantic 4,020
223 20-49 Rural East North Central 11,169
224 20-49 Rural West North Central 9,901
225 20-49 Rural South Atlantic 9,544
226 20-49 Rural East South Central 6,416
227 20-49 Rural West South Central 6,414
228 20-49 Rural Mountain 5,591
229 20-49 Rural Pacific 3,818
311 50-99 Urban New England 6,440
312 50-99 Urban Middle Atlantic 17,254
313 50-99 Urban East North Central 17,983
314 50-99 Urban West North Central 7,188
315 50-99 Urban South Atlantic 19,482
316 50-99 Urban East South Central 4,636
317 50-99 Urban West South Central 10,710
318 50-99 Urban Mountain 6,647
319 50-99 Urban Pacific 19,229
321 50-99 Rural New England 867
322 50-99 Rural Middle Atlantic 1,115
323 50-99 Rural East North Central 3,140
324 50-99 Rural West North Central 3,198
325 50-99 Rural South Atlantic 2,597
326 50-99 Rural East South Central 1,869
327 50-99 Rural West South Central 1,926
328 50-99 Rural Mountain 1,409
329 50-99 Rural Pacific 959
411 100-499 Urban New England 4,438
412 100-499 Urban Middle Atlantic 11,434
413 100-499 Urban East North Central 12,014
414 100-499 Urban West North Central 4,657
415 100-499 Urban South Atlantic 11,996
416 100-499 Urban East South Central 3,027
417 100-499 Urban West South Central 6,725
418 100-499 Urban Mountain 3,872
419 100-499 Urban Pacific 11,669
421 100-499 Rural New England 548
422 100-499 Rural Middle Atlantic 844
423 100-499 Rural East North Central 2,270
424 100-499 Rural West North Central 1,909
425 100-499 Rural South Atlantic 1,622
426 100-499 Rural East South Central 1,229
427 100-499 Rural West South Central 1,155
428 100-499 Rural Mountain 693
429 100-499 Rural Pacific 540
Total N/A N/A N/A 9,701,023


6.5 Screening Sample Size Estimation

NORC's original proposal to conduct the SSBF specified drawing a sample of 37,600 firms from the D&B frame. This figure reflected the number of cases estimated at the time of the proposal, with a 30% "cushion" to ensure there was sufficient sample to complete 4,000 interviews in case of unforeseen circumstances. Although subsequent estimates developed during the planning stage indicated that this was a larger sample than the survey would likely require, this sample was selected as an expedient measure intended to help ensure that the study stayed on schedule. The final screening sample size was a function of realized completion rates, eligibility rates, and subsampling rates at various stages of the survey.

This section discusses the two approaches we used to estimate the screening sample size. The first approach was based on overall assumptions of completion rates and eligibility rates and the second approach was based on stratum-specific assumptions of completion rates and eligibility rates. The first approach gave a quick approximation that was used for initial workload and staff planning. The sample size estimated from the second approach was used to select the sample. The second approach was adopted not only because it was based on refined stratum-specific assumptions, but also because it directly led to a sample allocation among the strata.


6.5.1 The First Approach


6.5.1.1 The Estimation Worksheet

The first approach estimated the screening sample size needed to obtain 4,000 interviews based on overall assumptions about completion rates, eligibility rates, and subsampling rates. We started by building a worksheet to represent the mathematical relationships among all the stages of the sampling and responding processes. When the various assumed rates were entered into the worksheet, the sample size that led to 4,000 completed interviews was used as the estimated screening sample size. Taking into account the intrinsic uncertainty about the assumptions, we considered three scenarios that differed in their assumed screener and main interview completion rates (but not in eligibility rates and nonresponse subsampling rates). Table 6.3 presents the worksheet with the three scenarios. The first column lists the sampling steps, and all of the numbered steps correspond to the numbered boxes in Figure 6.1.

The three scenarios represented different levels of expected completion rates. Under Scenario 1, which assumed the lowest completion rates, the screening sample needed to include 28,775 businesses. Scenario 2 was based on higher completion rates that were expected with improved operational procedures for the 2003 survey. Under Scenario 2, the screening sample would consist of 18,910 businesses. The most optimistic Scenario 3 was designed to achieve an overall response rate of 60% as required by the FRB. Under Scenario 3, the sample would include only 13,660 businesses. Since the goal was to achieve a 60% response rate, the estimated screening sample size under scenario 3 was taken as the final estimate under the first approach. The specific assumptions used in the scenarios are discussed in the following section.

Table 6.3 2003 SSBF Screening Sample Size Estimation: Sample Selection
Sampling Steps Scenario 1 Scenario 2 Scenario 3
1. SSBF target population Unknown Unknown Unknown
2. DMI frame 9,701,023 9,701,023 9,701,023
3. Initial sample selected from DMI 37,600 37,600 37,600

Table 6.3- continued: 2003 SSBF Screening Sample Size Estimation: Screening, Pass 1
Sampling Steps Scenario 1 Scenario 2 Scenario 3
4. Screening sample size 28,775 18,910 13,660
Expected screener completion rate, Pass 1 40% 50% 60%
5. Expected number of businesses with eligibility determined (eligibility known) 11,510 9,455 8,196
6. Expected number of businesses with eligibility not determined (eligibility unknown) 17,265 9,455 5,464
Expected main interview eligibility rate among determined, Pass 1 65% 65% 65%
7. Expected number of businesses ineligible for main interview among eligibility known 4,029 3,309 2,869
8. Expected number of businesses eligible for main interview among eligibility known 7,482 6,146 5,327
Expected rate of nonrespondents among those not determined, Pass 1 59% 59% 59%
9. Expected number of businesses with eligibility not determined in Pass 1 that are contacts and non-finalized nonrespondents, to continue into Pass 2 (nonresponse) 10,186 5,578 3,224
10. Expected number of businesses with eligibility not determined in pass 1 that are noncontacts and finalized nonrespondents, not to continue into pass 2 (noncontacts) 7,079 3,877 2,240
Expected screener eligibility rate among noncontacts that will not continue to pass 2 38% 38% 38%
11. Expected number of screener ineligible (not live) businesses among noncontacts (for estimating screener completion rate) 4,389 2,403 1,389
12. Expected number of screener eligible (live) businesses among noncontacts 2,690 1,473 851
Average subsampling rate for pass 1 nonrespondents 50% 50% 50%

Table 6.3- continued: 2003 SSBF Screening Sample Size Estimation: Screening, Pass 2
Sampling Steps Scenario 1 Scenario 2 Scenario 3
13. Sample size for pass 2 screening 5,093 2,789 1,612
Expected screener completion rate, pass 2 30% 35% 40%
14. Expected number of businesses with eligibility determined (eligibility known) 1,528 976 645
15. Expected number of businesses with eligibility not determined (eligibility unknown) 3,565 1,813 967
Expected main interview eligibility rate among determined, pass 2 65% 65% 65%
16. Expected number of businesses ineligible for main interview among eligibility known 535 342 226
17. Expected number of businesses eligible for main interview among eligibility known 993 635 419
Expected screener eligibility rate among not determined, pass 2 89% 89% 89%
18. Expected number of screener ineligible (not live) businesses among those not determined (for estimating screener completion rate) 392 199 106
19. Expected number of screener eligible (live) businesses among those not determined 3,173 1,614 861
20. Total number of businesses eligible for main interview after pass 2 8,475 6,780 5,746
21. Reserve sample  8,825 18,690 23,940
Weighted screener response rate 62% 71% 79%

Table 6.3- continued: 2003 SSBF Screening Sample Size Estimation: Main Interview, Pass 1
Sampling Steps Scenario 1 Scenario 2 Scenario 3
22. Main interview sample size, pass 1 8,475 6,780 5,746
Expected main interview completion rate, pass 1 40% 50% 60%
23. Expected number of completed interviews, pass 1 3,390 3,390 3,448
24. Expected number of incompletes 5,085 3,390 2,299
Average subsampling rate for pass 1 incompletes 60% 60% 60%

Table 6.3- continued: 2003 SSBF Screening Sample Size Estimation: Main Interview, Pass 2
Sampling Steps Scenario 1 Scenario 2 Scenario 3
25. Main interview sample size, pass 2 3,051 2,034 1,379
Expected main interview completion rate, pass 2 20% 30% 40%
26. Expected number of completed interviews, pass 2 610 610 552
27. Expected number of incompletes, pass 2 2,441 1,424 827
28. Total number of completed interviews 4,000 4,000 4,000
Weighted main interview response rate 52% 65% 76%
Overall Weighted Response Rate 32% 46% 60%
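As a check on the worksheet arithmetic, the Scenario 3 column of Table 6.3 can be reproduced from the assumed rates, as in the sketch below (intermediate figures are left unrounded, so they may differ from the published table by one).

```python
# Reproducing the Scenario 3 column of Table 6.3 from the assumed rates.
screening_sample = 13_660
pass1_determined = screening_sample * 0.60               # ~8,196 eligibility known
pass1_unknown = screening_sample - pass1_determined      # ~5,464 eligibility unknown
eligible_pass1 = pass1_determined * 0.65                 # ~5,327 eligible for main
pass1_nonrespondents = pass1_unknown * 0.59              # ~3,224 continue toward pass 2
pass2_sample = pass1_nonrespondents * 0.50               # ~1,612 subsampled for pass 2
pass2_determined = pass2_sample * 0.40                   # ~645 eligibility known
eligible_pass2 = pass2_determined * 0.65                 # ~419 eligible for main

main_sample = eligible_pass1 + eligible_pass2            # ~5,746 main interview sample
main_pass1_completes = main_sample * 0.60                # ~3,448 completes, pass 1
main_pass2_sample = (main_sample - main_pass1_completes) * 0.60   # ~1,379 subsampled
main_pass2_completes = main_pass2_sample * 0.40          # ~552 completes, pass 2

print(round(main_pass1_completes + main_pass2_completes))   # ~4,000 total completes
```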


6.5.1.2 Derivation of Assumptions

The assumptions of eligibility rates and completion rates were derived from the 1998 survey results, and assumptions about nonresponse subsampling rates represented our sampling decisions.

Pass 1 Screener Completion Rate. The first important assumption is the screener completion rate at pass 1. All live businesses in the sample were considered eligible for the screening interview53. The screener completion rate is defined as the number of businesses that completed the screener as a proportion of the screening sample size. The screener completion rate of the 1998 survey was about 70%. However, the 2003 survey design involved subsampling of screener nonrespondents for further attempts in pass 2. As a result, the 2003 pass 1 cases were to be in the field for a shorter period of time than the 1998 sample, and the 2003 pass 1 screener completion rate was likely to be lower than the 1998 rate. We assumed three rates for the three scenarios: Scenario 1 (40%) represented the worst case, Scenario 2 (50%) represented an improved rate, and Scenario 3 (60%) represented the most optimistic assumptions.

Main Interview Eligibility Rate. The second important assumption was the main interview eligibility rate among completed screeners. This eligibility rate is defined as the number of businesses that are eligible for the main interview as a proportion of all businesses that complete the screener. In 1998, 73% of the businesses that completed the screener were eligible for the main interview (see The 1998 SSBF Methodology Report, page 58). We assumed that the eligibility rate for this round might be as low as 65% due to stricter eligibility criteria. To be eligible for the 1998 SSBF, a business had to be in operation at the end of 1998. To be eligible for the 2003 SSBF, however, a business should have been in operation at the end of 2003 and at the time of the main interview. The expected decline in eligibility rate assumes that about 1% of small businesses go out of business each month. The same eligibility rate is assumed for businesses in both pass 1 and pass 2 of the screening interviews.

Proportion of Pass 1 Screener Incompletes Eligible for Subsampling. According to the sample design, sample businesses that failed to complete the screener at pass 1 would be divided into two groups for subsampling purposes: nonrespondents and finalized nonrespondents/noncontacts. The former category was eligible for subsampling while the latter was not. Nonrespondents were businesses that did not complete the screener but with which a human contact or other promising contact was established during the field period. All other incompletes were finalized nonrespondents or noncontacts, including cases with disconnected numbers, computer/fax numbers, fast busy signals, hostile refusals, language barriers, locating problems, respondents unavailable during the field period, or incapacitated respondents. Based on this definition, nonrespondents accounted for 59% of all screener incompletes in 1998. The remaining 41% of screener incompletes were finalized nonrespondents/noncontacts. Table 6.4 shows how this breakdown was estimated from the 1998 results.


Table 6.4 Nonrespondents and Finalized Nonrespondents/Noncontacts Among Pass 1 Screener Incompletes: Estimated from Table 6.6 in the 1998 SSBF Methodology Report
Final Screener Disposition code Outcome Description Number of Cases Subsampling Status Proportion
11 Final noncontact; unconfirmed phone number 680 Finalized Nonrespondents or Noncontacts 41%
12 Final locating problem 3,413 Finalized Nonrespondents or Noncontacts 41%
22 Final incapacitated respondent 7 Finalized Nonrespondents or Noncontacts 41%
23 Final non-contact; phone number confirmed 782 Finalized Nonrespondents or Noncontacts 41%
24 Final language barrier 53 Finalized Nonrespondents or Noncontacts 41%
32 Final hostile refusal 40 Finalized Nonrespondents or Noncontacts 41%
21 Final unavailable during field period 1,339 Nonrespondents 59%
31 Final refusal 5,913 Nonrespondents 59%

Screener Eligibility Rate Among Those Ineligible for Subsampling. We next estimated the screener eligibility rate (i.e., the proportion of live businesses) among finalized nonrespondents/noncontacts. This eligibility rate was needed to calculate the final screener completion rate, since businesses ineligible for the screener would be subtracted from the denominator in calculating the screener completion rate. In 1998, a 5% subsample was selected from screener incompletes for follow-up screening attempts. Based on the current definition, this sample of 621 screener incompletes included 247 noncontacts. The 1998 follow-up survey was able to contact 95 (38%) of these former noncontact cases. If a contact is considered evidence that the business was still in operation, then the contact rate provided a reasonable estimate of the proportion of live businesses among the initially noncontact cases. We used this contact rate to estimate the proportion of live businesses among finalized nonrespondents/noncontacts after pass 1 of the 2003 screening interview. Table 6.5 lists the types of cases in the 1998 5% sample that were considered finalized nonrespondents or noncontacts.


Table 6.5 Screener Eligibility Rate Among Pass 1 Finalized Nonrespondents/Noncontacts: Estimated From Table 8.18 in the 1998 SSBF Methodology Report
Type of Finalized Nonrespondents/Noncontacts Sample Size Number Contacted (as a proxy of number of live businesses)
Noncontact, confirmed number 39 21
Language barrier 3 2
Noncontact, unconfirmed number 34 11
Locating problem 171 61
Total noncontacts 247 95

Screener Subsampling Rate. We decided to subsample 50% of pass 1 nonrespondents for further screening attempts in pass 2. This subsampling rate was the expected average subsampling rate across all strata and it was applied to all three scenarios for estimating the screening sample size.

Pass 2 Screener Completion Rate. The next important parameter was the pass 2 screener completion rate, for which we had no guidance from the 1998 survey. We assumed that the pass 2 rate would be lower than the pass 1 rate since the pass 2 sample would include more difficult cases. As shown in Table 6.3, we assumed pass 2 screener completion rates of 30%, 35%, and 40% for the three scenarios, respectively.

Screener Eligibility Rate Among Pass 2 Incompletes. In order to compute the weighted screener completion rate, it was also necessary to estimate the screener eligibility rate among pass 2 incompletes, i.e., the proportion of live businesses among pass 2 screener incompletes. Again, we derived this estimate from the 1998 5% follow-up sample. Based on our definition, the 1998 5% follow-up sample included 374 screener nonrespondents. NORC was able to establish contact with 334 (89%) of these nonrespondents. If a contact is considered evidence that the business was still in operation, then this contact rate provides a reasonable estimate of the proportion of live businesses among the nonrespondents. We used this rate as the proportion of live businesses among pass 2 screener incompletes. Table 6.6 lists the types of cases in the 1998 5% sample that were considered nonrespondents.


Table 6.6 Screener Eligibility Rate Among Pass 2 Nonrespondents: Estimated From Table 8.18 in the 1998 SSBF Methodology Report
Type of Pass 2 Nonrespondents Sample Size Number Contacted (as a proxy of number of live businesses)
Unavailable 67 49
Refusal/DK 307 285
Total nonrespondents 374 334

Pass 1 Main Interview Completion Rate. The expected main interview completion rates at pass 1 were also unknown. Again, we considered three scenarios that represented the lowest rate (40%), the improved rate (50%), and the most optimistic rate (60%).

Main Interview Subsampling Rate. At the conclusion of pass 1 of the main interview, nonrespondents were to be subsampled at an average rate of 60%. Certain types of nonrespondents were not eligible for subsampling. These would include hostile refusals as well as those cases that were found to be ineligible at the main interview even though they were determined to be eligible by the screener54. For estimating the screening sample size, however, we assumed that all pass 1 main interview nonrespondents were eligible for subsampling.

Pass 2 Main Interview Completion Rate. The expected main interview completion rate at pass 2 was assumed to be lower than that of pass 1. The three different scenarios presented are 20%, 30%, and 40%, respectively.


6.5.1.3 Response Rate

As part of the estimation of the screening sample size, we computed the response rate under each scenario55. The screener completion rate takes screener nonresponse subsampling into account although it is not weighted by individual case weight. The screener completion rate is calculated as:

[6.1] R_{screener} = \frac{n_1 + \left( n_2 / r_s \right)}{n - n_3 - \left( n_4 / r_s \right)}

where n is the screening sample size, n_{1} is the number of businesses that completed the screener in Pass 1, n_2 is the number of businesses that completed the screener in pass 2, r_{s} is the subsampling rate for pass 1 nonrespondents, n_3 is the expected number of screener ineligible (not live) businesses among pass 1 noncontacts, and n_4 is the expected number of screener ineligible (not live) businesses among pass 2 incompletes. The estimated screener completion rate under each of the three scenarios is presented in Table 6.3. Scenario 1 would lead to a completion rate of 62%, Scenario 2 would reach a completion rate of 71%, and Scenario 3 would achieve a completion rate of 79%.

The weighted main interview completion rate is calculated as:

[6.2] R_{main} = \frac{m_1 + \left( m_2 / r_m \right)}{m_1 + \left( m_3 / r_m \right)}

where m_1 is the number of completed interviews at pass 1, m_2 is the number of completed interviews at pass 2, r_m is the subsampling rate for pass 1 incompletes, and m_3 is the sample size for pass 2. R_{main} takes into account the main interview nonresponse subsampling but it is not weighted by individual case weight. The estimated main interview completion rates under the three scenarios are 52%, 65%, and 76%, respectively.

The overall weighted response rate was computed as the product of the screener completion rate and the main interview completion rate, i.e.:

[6.3] R=R_{screener} \ast R_{main}

The last row of Table 6.3 shows the expected overall response rate under each scenario. Under Scenario 1, the overall response rate would be 32%, a rate close to the 1998 level; under scenario 2, the overall response rate would be 46%; and Scenario 3 would achieve a response rate of 60%, the rate required by the FRB.
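
To make the interplay of equations [6.1] through [6.3] concrete, the following Python sketch computes the screener rate, the main interview rate, and their product; the counts and subsampling rates supplied are illustrative placeholders, not actual 2003 SSBF figures.

# Sketch of equations [6.1]-[6.3]; the counts below are hypothetical placeholders.

def screener_rate(n, n1, n2, n3, n4, r_s):
    """Equation [6.1]: subsampling-adjusted (unweighted) screener completion rate."""
    return (n1 + n2 / r_s) / (n - n3 - n4 / r_s)

def main_rate(m1, m2, m3, r_m):
    """Equation [6.2]: subsampling-adjusted (unweighted) main interview completion rate."""
    return (m1 + m2 / r_m) / (m1 + m3 / r_m)

# Hypothetical inputs (not SSBF results):
r_scr = screener_rate(n=14000, n1=7000, n2=1200, n3=900, n4=400, r_s=0.5)
r_mn = main_rate(m1=3000, m2=700, m3=1800, r_m=0.6)
overall = r_scr * r_mn                      # Equation [6.3]
print(round(r_scr, 3), round(r_mn, 3), round(overall, 3))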


6.5.2 The Second Approach

The second approach to estimating the screening sample size was based on stratum-specific assumptions of completion rates and eligibility rates derived from the 1998 survey results. We started by allocating the 4,000 interviews to the strata through a raking program to meet the survey's precision requirement. We then inflated the stratum allocation by stratum-specific eligibility rates and completion rates at various stages to derive the screening sample size per stratum. Therefore, the second approach was the same as the first approach except that it was done separately for each stratum using stratum-specific assumptions. The total sample size was the sum of the stratum sample sizes. The advantage of this approach was that it led to an allocation of the screening sample across strata.


6.5.2.1 Allocation of Complete Interviews

The total number of 4,000 complete interviews was first allocated to each size class to meet the precision requirement. It was required that the 95% confidence interval of a proportion estimate \hat{p} for each size class be \left[ \hat{p} \pm 0.05 \right] or better, which means that the standard error of \hat{p} cannot be greater than 0.025. A proportion estimate \hat{p} has standard error:

[6.4] s_{\hat{p}} = \left( \hat{p} \ast \left( 1 - \hat{p} \right) / n \right)^{1/2}

which is maximized when \hat{p} is 0.5. Under simple random sampling, s_{\hat{p}} is less than or equal to 0.025 when n is roughly 400. The 2003 SSBF, however, was based on a complex design that involved extensive subsampling and unequal selection probabilities. Complex designs like this typically introduce a design effect that tends to reduce the effective sample size. The design effect (DEFF) is defined as the ratio of the sampling variance reflecting all complexities of the design to the sampling variance expected from a simple random sample of the same size (Kish, 1965). If the design effect is DEFF = d, the effective sample size is the nominal sample size divided by d. Thus, to achieve the required precision under the current design, the sample size per size class should be 400 \ast d_s, where s indexes the size class (s = 1, 2, 3, 4). For sample allocation purposes, we assumed a conservative within-class design effect of d_s = 1.25 for every size class56. Therefore, at least 400 \ast 1.25 = 500 completed interviews should be allocated to each size class.
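
The allocation arithmetic can be restated in a few lines; the sketch below simply reproduces the 400-case simple-random-sample requirement and the 1.25 design effect assumed in the text.

import math

# A minimal sketch of the allocation arithmetic above: the target standard error (0.025),
# the worst-case proportion (0.5), and the conservative within-class design effect (1.25)
# all come from the text.
def required_completes(deff, se_max=0.025, p=0.5):
    n_srs = p * (1 - p) / se_max ** 2    # simple random sample size: 0.25 / 0.025^2 = 400
    return math.ceil(n_srs * deff)       # inflate by the design effect

print(required_completes(deff=1.25))     # 400 * 1.25 = 500 completes per size class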

Table 6.7 shows the marginal distribution of the SSBF population over the stratification variables. Since the population distribution is highly skewed with respect to employment size, size classes 20-49, 50-99, and 100-499 would receive less than 500 interviews under a proportional allocation. Therefore, these classes would have to be substantially oversampled. After allocating 500 interviews to each of these three size classes, the remaining 2,500 were allocated to the 1-19 (and unknown) size class. The allocation of the 4,000 complete interviews over the four size classes was 2,500, 500, 500, and 500.

Within each size class, the number of complete interviews was further allocated to the 18 strata that make up each size class. As in the 1998 survey, the sample allocation to strata within each size class was accomplished through a raking routine. Raking, also known as iterative proportional fitting, is a technique for applying multiple marginal constraints through iteratively adjusting the size of individual cells that determine the marginal distributions. For the current application, the sample allocation to strata was constrained by the requirements that 1) each size class receives a fixed sample size and 2) the overall sample size is 4,000. The raking program started with an arbitrary allocation per stratum. These initial allocations were then adjusted iteratively, one dimension at a time, until all constraints were met.
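
The sketch below illustrates the general raking technique only; it is not the raking program used for the SSBF allocation, and the two-by-three starting table and margins it adjusts are purely hypothetical.

# A generic sketch of raking (iterative proportional fitting), not the SSBF raking program
# itself: cell values are adjusted one dimension at a time until the row and column totals
# match the target margins. The starting table and margins below are hypothetical.

def rake(cells, row_targets, col_targets, max_iter=100, tol=1e-8):
    for _ in range(max_iter):
        # Scale each row to its target total.
        for i, target in enumerate(row_targets):
            row_sum = sum(cells[i])
            cells[i] = [c * target / row_sum for c in cells[i]]
        # Scale each column to its target total.
        for j, target in enumerate(col_targets):
            col_sum = sum(row[j] for row in cells)
            for row in cells:
                row[j] *= target / col_sum
        # Stop once the row margins are again within tolerance.
        if all(abs(sum(row) - t) < tol for row, t in zip(cells, row_targets)):
            break
    return cells

start = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]      # arbitrary starting allocation
print(rake(start, row_targets=[2500, 1500], col_targets=[2000, 1200, 800]))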

The resulting allocation of complete interviews to strata, presented in Table 6.8, was equivalent to proportional allocation within each size class. Although no precision requirements were specified for analysis domains other than the size class, we expected that this allocation would yield sufficient sample for analyses by urban/rural status and by census division. The sample size numbers were not rounded here since they would be inflated later to derive the screening sample size per stratum.


6.5.2.2 Allocation of Screening Sample to Strata

This section discusses procedures to inflate the allocation of complete interviews to derive the size of the screening sample per stratum. Based on the 1998 SSBF report, the screener completion rate, main interview eligibility rate, and main interview completion rate varied significantly across strata. We assumed that such variation was likely to continue in the 2003 survey and should be taken into account in determining the screening sample size. On the other hand, the nonresponse subsampling rate at both screener and main interview stages would remain constant across all strata. We now discuss how the various stratum-specific rates were estimated from the 1998 results.

Screener Completion Rates. Table 8.21 in The 1998 SSBF Methodology Report contains the screener nonresponse adjustment factors for 352 subgroups defined by employment size, urban/rural status, and census division. From this table, we derived the 1998 screener completion rates for the subgroups corresponding to the 2003 sampling strata. Our analysis showed that urban businesses, smaller businesses, and businesses in the New England and Middle Atlantic Census divisions experienced lower completion rates than their counterparts. To capture this variation, we computed the 1998 screener completion rates for nine groups of businesses. We first combined size classes 20-49, 50-99, and 100-499 to form a single 20-499 size class. Next, we divided each of the two resulting size classes (1-19 and 20-499) between urban and rural. Finally, we divided each size and urban/rural combination into two geographic categories (region 1 and the other three regions combined), except that urban businesses of size 20-499 were divided into three categories (region 1, region 2, and the other two regions combined)57.

These 1998 screener completion rates were used to estimate the expected screener completion rate per group for the 2003 survey. However, the 1998 rates were not used directly as the expected completion rates in 2003. Instead, they were used to modify the overall completion rates assumed for pass 1 and pass 2 under each scenario. The estimated 2003 rates were derived through the following steps. First, the ratio of the 1998 group rate to the overall rate was computed for each group. For the 1-19, rural, region 1 group, for example, the ratio was .73/.70 = 1.0429. Second, this ratio was used to modify the assumed overall screener completion rate for 2003 to derive the group-specific rate. For example, under Scenario 1, the assumed screener completion rates were .40 and .30 for pass 1 and pass 2, respectively (see Table 6.3). The expected 2003 pass 1 screener completion rate for this group under Scenario 1 was therefore estimated as .40*1.0429 = .42, and the pass 2 rate was estimated as .30*1.0429 = .31. The expected 2003 screener completion rates for the other groups under each scenario were derived following the same procedures.
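
A minimal sketch of this two-step ratio adjustment, using only the figures quoted above (the .73 group rate, the .70 overall 1998 rate, and the Scenario 1 assumptions of .40 and .30), is shown below.

# Ratio adjustment of the assumed overall rates, using the figures quoted in the text.

def expected_group_rate(group_rate_1998, overall_rate_1998, assumed_overall_rate):
    ratio = group_rate_1998 / overall_rate_1998        # e.g., .73 / .70 = 1.0429
    return assumed_overall_rate * ratio

pass1 = expected_group_rate(0.73, 0.70, 0.40)          # about .42
pass2 = expected_group_rate(0.73, 0.70, 0.30)          # about .31
print(round(pass1, 2), round(pass2, 2))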


Table 6.7 Marginal Totals for Each of the Dimensions of the Sample Stratification
Stratification Variable: Category Count Percent
Total Population: N/A 9,701,023 100.0
Size Class: 1. 1-19 and Unknown 9,079,692 93.6
Size Class: 2. 20-49 414,040 4.3
Size Class: 3. 50-99 126,649 1.3
Size Class: 4. 100-499 80,642 0.8
Urban/Rural: 1. Urban 8,104,056 83.5
Urban/Rural: 2. Rural 1,596,967 16.5
Census Division: 1. New England 541,160 5.6
Census Division: 2. Middle Atlantic 1,343,423 13.8
Census Division: 3. East North Central 1,379,616 14.2
Census Division: 4. West North Central 661,417 6.8
Census Division: 5. South Atlantic 1,840,925 19.0
Census Division: 6. East South Central 481,879 5.0
Census Division: 7. West South Central 1,074,037 11.1
Census Division: 8. Mountain 673,540 6.9
Census Division: 9. Pacific 1,705,026 17.6


Table 6.8 Allocation of 4,000 Completed Interviews to Strata
Size Class 1-19: Stratum1, Sample Size; Size Class 20-49: Stratum, Sample Size; Size Class 50-99: Stratum, Sample Size; Size Class 100-499: Stratum, Sample Size
111 116.502 211 23.300 311 23.300 411 23.300
112 289.210 212 57.842 312 57.842 412 57.842
113 297.014 213 59.403 313 59.403 413 59.403
114 142.391 214 28.478 314 28.478 414 28.478
115 396.313 215 79.263 315 79.263 415 79.263
116 103.740 216 20.748 316 20.748 416 20.748
117 231.219 217 46.244 317 46.244 417 46.244
118 145.002 218 29.000 318 29.000 418 29.000
119 367.065 219 73.413 319 73.413 419 73.413
121 22.957 221 4.591 321 4.591 421 4.591
122 56.991 222 11.398 322 11.398 422 11.398
123 58.529 223 11.706 323 11.706 423 11.706
124 28.059 224 5.612 324 5.612 424 5.612
125 78.096 225 15.619 325 15.619 425 15.619
126 20.443 226 4.089 326 4.089 426 4.089
127 45.563 227 9.113 327 9.113 427 9.113
128 28.574 228 5.715 328 5.715 428 5.715
129 72.333 229 14.467 329 14.467 429 14.467
Total 2,500 Total 500 Total 500 Total 500

1 The stratum identifier is a 3-digit number in which the first digit indicates size class, the second digit indicates urban (1) or rural (2), and the third digit represents Census division.

Table 6.9 reports the results of this exercise, where R_{s1} and R_{s2} represent the screener completion rate for pass 1 and pass 2, respectively.

Main Interview Eligibility Rate. The main interview eligibility rate among screener completes also showed significant variation in 1998. The eligibility rate was lower among larger businesses and rural businesses, and the variation was quite substantial. To avoid relying too heavily on the 1998 experience, we computed the eligibility rate for eight groups formed by crossing size class with urban/rural status. Table 6.10 presents the 1998 group rate, the ratio of the group rate to the overall rate, and the expected eligibility rate for each group in 2003. The expected 2003 eligibility rate per group was estimated as .65 times the ratio. For example, the expected rate for group one (1-19, urban) was estimated as .65*1.0405 = .68.

Main Interview Completion Rate. Table 8.28 in The 1998 SSBF Methodology Report contains the main interview nonresponse adjustment factors for various subgroups defined by minority status, employment size, urban/rural status, type of industry, credit score range, and census division. Extensive cell collapsing was performed in 1998 within minority groups and across census divisions within non-minority groups, making it impossible to calculate the 1998 completion rate for subgroups corresponding to the 2003 strata. However, it was possible to examine the variation of main interview completion rates between urban and rural businesses and across size classes. Based on that examination, we computed the 1998 main interview completion rate for four groups of businesses. The 1998 group rates and the expected 2003 group rates are presented in Table 6.11, where R_{m1} and R_{m2} represent the main interview completion rate at pass 1 and pass 2, respectively. Again, the 1998 group rates were used to modify the overall main interview completion rates assumed for pass 1 and pass 2. For example, the 1-49, rural group achieved a main interview completion rate of .36 in 1998, which was 1.0909 times the overall 1998 completion rate of .33. This ratio of 1.0909 was used to modify the overall completion rates assumed for pass 1 and pass 2 under each scenario. Thus, under Scenario 1, the expected pass 1 completion rate was estimated as .40*1.0909 = .44, and the expected pass 2 completion rate was estimated as .20*1.0909 = .22. Expected main interview completion rates under the other two scenarios were derived in the same manner.

The estimated 2003 screener completion rate, main interview eligibility rate, and main interview completion rate were then used to replace the corresponding rates assumed for the overall sample. In doing so, the group rates were applied to all strata within that group. For example, the first group in Table 6.9 corresponds to strata 111 and 112. For these two strata, the expected screener completion rates were .36 and .27 for pass 1 and pass 2, respectively under Scenario 1. The first group in Table 6.10 corresponds to strata 111-119. For these 9 strata, the expected main interview eligibility rate was 0.68 under Scenario 1. Similarly, the first group in Table 6.11 corresponds to strata 111-119 and 211-219. For these 18 strata, the expected main interview completion rates were .39 and .19 for pass 1 and pass 2, respectively under Scenario 1.

Table 6.9 Expected Screener Completion Rate by Group: Estimated from Table 8.21 in the 1998 SSBF Methodology Report
Group 1998 Group Rate Ratio of Group Rate to Overall Rate Expected Group Rates: Scenario 1 R_{s1}, Scenario 1 R_{s2}, Scenario 2 R_{s1}, Scenario 2 R_{s2}, Scenario 3 R_{s1}, Scenario 3 R_{s2}
1-19 Urban, Region 1: .63 .9000 .36 .27 .45 .32 .54 .36
1-19 Rural, Region 1: .73 1.0429 .42 .31 .52 .37 .63 .42
1-19 Urban, Region 2, 3, 4: .69 .9857 .39 .30 .49 .35 .59 .39
1-19 Rural, Region 2, 3, 4: .76 1.0857 .43 .33 .54 .38 .65 .43
20-499 Urban, Region 1: .72 1.0286 .41 .31 .51 .36 .62 .41
20-499 Rural, Region 1: .79 1.1286 .45 .34 .56 .40 .68 .45
20-499 Urban, Region 2: .82 1.1714 .47 .35 .59 .41 .70 .47
20-499 Urban, Region 3, 4: .77 1.1000 .44 .33 .55 .39 .66 .44
20-499 Rural, Region 2, 3, 4: .80 1.1429 .46 .34 .57 .40 .69 .46
Overall: .70 1.0000 N/A N/A N/A N/A N/A N/A

Table 6.10 Expected Main Interview Eligibility Rate by Group
Size Class:Urban/Rural 1998 Group Rate (Group Rate)/(Overall Rate) Expected 2003 Rate
1-19:Urban 0.77 1.0405 0.68
1-19:Rural 0.78 1.0541 0.69
20-49:Urban 0.75 1.0135 0.66
20-49:Rural 0.65 0.8784 0.57
50-99:Urban 0.61 0.8243 0.54
50-99:Rural 0.53 0.7162 0.47
100-499:Urban 0.60 0.8108 0.53
100-499:Rural 0.41 0.5541 0.36
Overall:N/A 0.74 1.0000 0.65


Table 6.11 Expected 2003 Main Interview Completion Rates by Group: Estimated from Table 8.28 in the 1998 SSBF Methodology Report
Group 1998 Group Rate Ratio of Group Rate to Overall Rate Expected Group Rates: Scenario 1 R_{m1}, Scenario 1 R_{m2}, Scenario 2 R_{m1}, Scenario 2 R_{m2}, Scenario 3 R_{m1}, Scenario 3 R_{m2}
1-49 Urban: .32 .9697 .39 .19 .49 .29 .58 .39
1-49 Rural: .36 1.0909 .44 .22 .55 .33 .65 .44
50-499 Urban: .29 .8788 .35 .18 .44 .27 .53 .35
50-499 Rural: .43 1.3030 .52 .26 .65 .39 .78 .52
Overall: .33 1.0000 N/A N/A N/A N/A N/A N/A

For a particular stratum, let n_i denote the size of the screening sample, n_c denote the number of complete interviews required, and R_e denote the main interview eligibility rate. We can express the size of the screening sample as a function of the number of complete interviews and all the expected outcome rates.

[6.5] n_i = \frac{n_c}{\left( R_{s1} + \left( 1 - R_{s1} \right) \ast .59 \ast .5 \ast R_{s2} \right) \ast R_e \ast \left( R_{m1} + \left( 1 - R_{m1} \right) \ast .6 \ast R_{m2} \right)}

In this expression, .59 is the expected proportion of nonrespondents among screener incompletes, .5 is the subsampling rate for screener nonrespondents, and .6 is the subsampling rate for main interview nonrespondents. Table 6.12 presents the screening sample needed in each stratum under Scenario 3.
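
Equation [6.5] can be checked directly; the sketch below plugs in the Scenario 3 rates for stratum 111 from Table 6.12 (R_{s1} = .54, R_{s2} = .36, R_e = .68, R_{m1} = .58, R_{m2} = .39) and its allocation of 116.502 complete interviews, and reproduces the estimated screening sample of about 429.

# A small sketch of equation [6.5], evaluated with the stratum 111 rates from Table 6.12.

def screening_sample_size(n_c, r_s1, r_s2, r_e, r_m1, r_m2,
                          nonresp_share=0.59, scr_subsample=0.5, main_subsample=0.6):
    screener_yield = r_s1 + (1 - r_s1) * nonresp_share * scr_subsample * r_s2
    main_yield = r_m1 + (1 - r_m1) * main_subsample * r_m2
    return n_c / (screener_yield * r_e * main_yield)

print(round(screening_sample_size(116.502, 0.54, 0.36, 0.68, 0.58, 0.39)))   # about 429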

Overall, the estimated screening sample size was 14,165 across all strata under the second approach, or 505 more than the estimated sample size under the first approach.

We adopted the estimate from the second approach since it incorporated group-specific information and provided a sample allocation. Since we had decided to select an initial sample of 37,600 cases from the DMI frame, we inflated the estimated screening sample size per stratum by the factor 37,600/14,165 so that the initial sample summed to 37,600 across all strata. The resulting allocation of the initial sample is reported in the last column of Table 6.12, and this allocation was used to select the sample from the DMI frame. Therefore, the initial sample contained 37,600 businesses, the screening sample consisted of 14,165 businesses, and the rest of the sample (23,435 businesses) was kept as the reserve. In later sections, we will discuss how we divided the screening sample into batches and how we selected additional sample from the reserve based on actual survey results from the earlier batches.


Table 6.12 Allocation of Initial Screening Sample to Strata
Stratum R_{s1} R_{s2} R_{m1} R_{m2} R_e Complete Allocation Estimated Screening Sample Initial Sample
111 0.54 0.36 0.58 0.39 0.68 116.502 429 1139
112 0.54 0.36 0.58 0.39 0.68 289.210 1065 2827
113 0.59 0.39 0.58 0.39 0.68 297.014 1011 2684
114 0.59 0.39 0.58 0.39 0.68 142.391 485 1287
115 0.59 0.39 0.58 0.39 0.68 396.313 1349 3581
116 0.59 0.39 0.58 0.39 0.68 103.740 353 937
117 0.59 0.39 0.58 0.39 0.68 231.219 787 2089
118 0.59 0.39 0.58 0.39 0.68 145.002 493 1309
119 0.59 0.39 0.58 0.39 0.68 367.065 1249 3315
121 0.63 0.42 0.65 0.44 0.69 22.957 67 178
122 0.63 0.42 0.65 0.44 0.69 56.991 165 438
123 0.65 0.43 0.65 0.44 0.69 58.529 165 438
124 0.65 0.43 0.65 0.44 0.69 28.059 79 210
125 0.65 0.43 0.65 0.44 0.69 78.096 220 584
126 0.65 0.43 0.65 0.44 0.69 20.443 57 151
127 0.65 0.43 0.65 0.44 0.69 45.563 128 340
128 0.65 0.43 0.65 0.44 0.69 28.574 80 212
129 0.65 0.43 0.65 0.44 0.69 72.333 203 539
211 0.62 0.41 0.58 0.39 0.66 23.300 78 207
212 0.62 0.41 0.58 0.39 0.66 57.842 194 515
213 0.70 0.47 0.58 0.39 0.66 59.403 179 475
214 0.70 0.47 0.58 0.39 0.66 28.478 86 228
215 0.66 0.44 0.58 0.39 0.66 79.263 252 669
216 0.66 0.44 0.58 0.39 0.66 20.748 66 175
217 0.66 0.44 0.58 0.39 0.66 46.244 147 390
218 0.66 0.44 0.58 0.39 0.66 29.000 92 244
219 0.66 0.44 0.58 0.39 0.66 73.413 233 618
221 0.68 0.45 0.65 0.44 0.57 4.591 15 40
222 0.68 0.45 0.65 0.44 0.57 11.398 37 98
223 0.69 0.46 0.65 0.44 0.57 11.706 38 101
224 0.69 0.46 0.65 0.44 0.57 5.612 18 48
225 0.69 0.46 0.65 0.44 0.57 15.619 50 133
226 0.69 0.46 0.65 0.44 0.57 4.089 13 35
227 0.69 0.46 0.65 0.44 0.57 9.113 29 77
228 0.69 0.46 0.65 0.44 0.57 5.715 18 48
229 0.69 0.46 0.65 0.44 0.57 14.467 47 125
311 0.62 0.41 0.53 0.35 0.54 23.300 103 273
312 0.62 0.41 0.53 0.35 0.54 57.842 256 680
313 0.70 0.47 0.53 0.35 0.54 59.403 236 626
314 0.70 0.47 0.53 0.35 0.54 28.478 113 300
315 0.66 0.44 0.53 0.35 0.54 79.263 332 881
316 0.66 0.44 0.53 0.35 0.54 20.748 87 231
317 0.66 0.44 0.53 0.35 0.54 46.244 193 512
318 0.66 0.44 0.53 0.35 0.54 29.000 121 321
319 0.66 0.44 0.53 0.35 0.54 73.413 307 815
321 0.68 0.45 0.78 0.52 0.47 4.591 16 42
322 0.68 0.45 0.78 0.52 0.47 11.398 40 106
323 0.69 0.46 0.78 0.52 0.47 11.706 40 106
324 0.69 0.46 0.78 0.52 0.47 5.612 19 50
325 0.69 0.46 0.78 0.52 0.47 15.619 53 141
326 0.69 0.46 0.78 0.52 0.47 4.089 14 37
327 0.69 0.46 0.78 0.52 0.47 9.113 31 82
328 0.69 0.46 0.78 0.52 0.47 5.715 20 53
329 0.69 0.46 0.78 0.52 0.47 14.467 50 133
411 0.62 0.41 0.53 0.35 0.53 23.300 105 279
412 0.62 0.41 0.53 0.35 0.53 57.842 261 693
413 0.70 0.47 0.53 0.35 0.53 59.403 241 640
414 0.70 0.47 0.53 0.35 0.53 28.478 116 308
415 0.66 0.44 0.53 0.35 0.53 79.263 338 897
416 0.66 0.44 0.53 0.35 0.53 20.748 89 236
417 0.66 0.44 0.53 0.35 0.53 46.244 197 523
418 0.66 0.44 0.53 0.35 0.53 29.000 124 329
419 0.66 0.44 0.53 0.35 0.53 73.413 313 831
421 0.68 0.45 0.78 0.52 0.36 4.591 21 56
422 0.68 0.45 0.78 0.52 0.36 11.398 52 138
423 0.69 0.46 0.78 0.52 0.36 11.706 53 141
424 0.69 0.46 0.78 0.52 0.36 5.612 26 69
425 0.69 0.46 0.78 0.52 0.36 15.619 70 186
426 0.69 0.46 0.78 0.52 0.36 4.089 19 50
427 0.69 0.46 0.78 0.52 0.36 9.113 41 109
428 0.69 0.46 0.78 0.52 0.36 5.715 26 69
429 0.69 0.46 0.78 0.52 0.36 14.467 65 173
Total  N/A   N/A   N/A  N/A   N/A  4,000 14,165 37,600


6.5.3 Sample Size Revisions

The estimated screening sample size of 14,165 businesses was based on assumptions about various outcome rates. In particular, it was based on the most optimistic assumptions about screener and main interview completion rates. These assumptions had to be evaluated against actual survey results, and the estimated screening sample size might be revised based on such evaluation.

From an operational standpoint, it was deemed best to release the sample in three batches, with the first two batches each comprising 40% of the sample and the final 20% in the third batch. However, it became obvious soon after data collection started that the actual screener and main interview completion rates would be much lower than the optimistic assumptions. With that realization, we increased the sample size of batch 3 to 5,666 cases (the same as the first two batches) for a total sample size of 16,998 cases. As more information became available later, we decided to add a batch 4. The size of batch 4 was determined based on actual outcome rates from the first two batches. Table 6.13 compares the assumed rates against actual survey results from batch 1 and batch 2. Note that only the major outcome rates are compared here, and the Scenario 3 rates were used to estimate the screening sample size earlier. Other outcome rates, such as eligibility rate among screener incompletes, also affected the sample size estimation for batch 4.


Table 6.13 Assumed vs. Actual Outcome Rates
Rate Scenario 1 Scenario 2 Scenario 3 Batch 1 Batch 2
Pass 1 Screener Completion Rate 40% 50% 60% 49% 50%
Main Interview Eligibility Rate Among Pass 1 Screener Completes 65% 65% 65% 72% 71%
Proportion of Pass 1 Screener Incompletes Eligible for Subsampling 59% 59% 59% 79% 78%
Average subsampling rate for Pass 1 nonrespondents 50% 50% 50% 50% 50%
Pass 2 Screener Completion Rate 30% 35% 40% 29% 34%
Main Interview Eligibility Rate Among Pass 2 Screener Completes 65% 65% 65% 72% 72%
Pass 1 Main Interview Completion Rate 40% 50% 60% 37% 41%
Average subsampling rate for Pass 1 incompletes 60% 60% 60% 60% 60%
Pass 2 Main Interview Completion Rate 20% 30% 40% 28% 28%

Table 6.13 shows that the realized screener and main interview completion rates were much lower than expected. No rate was better than Scenario 2, and some of the rates actually approached Scenario 1, the worst-case scenario. By replacing the original assumptions with the actual rates, we estimated that batch 4 should include 6,800 cases in order to yield 4,000 completed main interviews by the end of the survey. Therefore, the final screening sample included 23,798 businesses.

To determine the composition of batch 4, we analyzed the relative productivity of the five sample balancing groups in early October of 2004. There were three batches in the field at the time; however, only results from the first two batches were used in this analysis since batch 3 was too new to be reliable. We measured group productivity by the number of complete interviews as a proportion of the total targeted number of interviews per group. The analysis showed that productivity was broadly similar across groups, with only small disparities. Table 6.14 shows the total number of complete interviews from the first two batches, the total targeted number of complete interviews, and the number of complete interviews as a proportion of the target per sample balancing group as of the date of the evaluation.


Table 6.14 Total Completes as a Proportion of Total Target by Sample Balancing Group for batches 1 and 2
Sample Balancing Group Total Complete Total Target Proportion Completed
1. Size Class 1-Urban 893 2,088 42.8%
2. Size Class 1-Rural 193 412 46.8%
3. Size Class 2 211 500 42.2%
4. Size Class 3 236 500 47.2%
5. Size Class 4 207 500 41.4%
Total 1,740 4,000 43.5%

By the time this evaluation took place, the survey had completed 1,740 interviews, or 43.5% of the 4,000 interviews targeted. Groups 1, 3, and 5 had somewhat lower productivity than the other two groups, but overall the differences were considered small. No group appeared to be lagging behind in a significant way. Based on this analysis, we decided that no sample balancing was necessary. Therefore, the batch 4 sample was allocated to the strata in the same way as the other batches.

Table 6.15 presents the final screening sample size by stratum and batch.

Table 6.15 Final Screening Sample Size by Stratum and Batch
Stratum Batch 1 Batch 2 Batch 3 Batch 4 All Batches
1 172 171 172 206 721
2 426 426 426 511 1,789
3 404 405 404 486 1,699
4 194 194 194 232 814
5 540 539 539 648 2,266
6 141 141 142 170 594
7 315 315 314 377 1,321
8 197 198 198 237 830
9 500 499 499 599 2,097
10 27 27 27 32 113
11 66 66 66 79 277
12 66 66 66 80 278
13 31 32 32 38 133
14 88 88 88 105 369
15 23 22 23 28 96
16 51 52 51 61 215
17 32 32 32 38 134
18 81 81 81 98 341
19 31 31 31 38 131
20 78 77 78 93 326
21 72 72 71 86 301
22 34 34 35 41 144
23 101 101 100 121 423
24 26 27 27 31 111
25 59 58 59 71 247
26 37 37 36 44 154
27 93 93 94 112 392
28 6 6 6 7 25
29 15 15 14 18 62
30 15 15 16 18 64
31 7 8 7 8 30
32 20 20 20 25 85
33 5 5 5 6 21
34 12 12 12 14 50
35 7 7 7 9 30
36 19 19 19 22 79
37 41 41 41 49 172
38 103 102 102 123 430
39 94 95 94 114 397
40 45 45 46 54 190
41 133 133 132 159 557
42 35 34 35 42 146
43 77 78 77 93 325
44 48 48 49 58 203
45 123 123 123 147 516
46 7 6 6 8 27
47 16 16 16 19 67
48 16 16 16 19 67
49 7 8 7 9 31
50 21 21 22 26 90
51 6 5 5 7 23
52 12 13 13 14 52
53 8 8 7 10 33
54 20 20 21 24 85
55 42 42 42 51 177
56 105 104 104 125 438
57 96 97 97 116 406
58 47 46 46 56 195
59 135 135 136 162 568
60 35 36 35 43 149
61 79 79 79 94 331
62 50 49 49 60 208
63 125 125 126 150 526
64 8 9 8 10 35
65 21 21 21 25 88
66 21 21 21 26 89
67 11 10 11 12 44
68 28 28 28 34 118
69 7 8 7 9 31
70 17 16 17 20 70
71 10 11 10 12 43
72 26 26 26 31 109
Total 5,666 5,666 5,666 6,800 23,798


6.6 Assigning Sample to Batches and Replicates

Based on the allocation of the initial sample, we selected a stratified systematic sample of 37,600 businesses from the DMI frame. The sample was selected independently from each of the 72 strata. Systematic sampling consists of selecting every k^{th} sampling unit after a random start. In order to obtain a proportional representation of different types of businesses, we sorted the businesses within each stratum by primary SIC code prior to sample selection. Sorting placed similar businesses next to each other on the frame, which helped ensure that the sample included a mix of businesses with respect to SIC code.
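
A minimal sketch of systematic selection after a random start is shown below; the frame it draws from is a hypothetical sorted stratum list, not the DMI frame.

import random

# Systematic selection after a random start, as described above. 'frame' stands for the
# businesses in one stratum, already sorted by primary SIC code; contents are hypothetical.

def systematic_sample(frame, n):
    k = len(frame) / n                        # sampling interval
    start = random.random() * k               # random start within the first interval
    return [frame[int(start + i * k)] for i in range(n)]

frame = [f"business_{i:04d}" for i in range(1000)]   # hypothetical sorted stratum frame
print(systematic_sample(frame, 10))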

Once the sample was selected, it was assigned to batches and replicates to facilitate sample management at the telephone center58. In forming batches and replicates, the original sample design must be taken into account. Ideally, batches and replicates should be constructed through stratified systematic sampling from the same 72 strata. This approach promises maximum flexibility in sample balancing as survey results may be evaluated at the stratum level to guide further sample release. However, managing subsamples for each and every stratum would be a substantial administrative burden for the telephone center. As a compromise, we formed batches and replicates within batches by the following five groups:

- Size class 1-19 and Urban;

- Size class 1-19 and Rural;

- Size class 20-49;

- Size class 50-99; and

- Size class 100-499.

The purpose of breaking up the first size class was to allow for more flexible sample balancing within that size class. With this approach, sample balancing would involve manipulating the composition of later batches with respect to the distribution of the sample over the five groups. For example, if businesses in the first group experienced a lower than expected completion rate, the later batches would include a higher proportion of such businesses than their share in the original sample.


6.6.1 Formation of Batches

Due to the uncertainty surrounding eligibility and completion rates, the batches were not all drawn at the same time. The first two batches, consisting of 5,666 businesses each, were selected from the initial sample of 37,600 businesses. Initially, 11,332 businesses were drawn simultaneously from the original sample of 37,600 businesses and assigned systematically to either batch 1 or batch 2. Before the batch 1 and batch 2 results could be analyzed fully, batch 3 was drawn. Batch 3 consisted of 5,666 businesses from the remaining sample of 26,268 businesses and had the same composition as the previous two batches. Following the analysis of the batch 1 and batch 2 results, the final batch was drawn. Batch 4 consisted of 6,800 businesses from the remaining 20,602 businesses and, similarly, had the same composition as the previous three batches. The remaining 13,802 cases were placed in the reserve sample, which was never needed in the 2003 SSBF.

Each batch had the same distribution over the five groups as that of the initial sample. The batches were formed as follows. Suppose that the proportional distribution of the initial sample over the five groups is \{p_1, p_2, p_3, p_4, p_5\}. If a particular batch contains n cases, it would include np_1 cases from group 1, np_2 cases from group 2, and so on. To select the np_1 cases for group 1 from the initial sample, we sorted all the businesses in group 1 by stratum and SIC code. We then selected a systematic sample of np_1 cases from the sorted list. The same systematic procedure was repeated with the four other groups. The five systematic samples selected from the five groups made up the batch.
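
The batch-formation step can be sketched as follows; the five group sizes and the batch size are hypothetical, and the routine simply draws a proportional systematic subsample from each group's list sorted by stratum and SIC code.

import random

# Sketch of batch formation: a systematic subsample proportional to each group's share of
# the initial sample is drawn from that group's sorted case list. Sizes are hypothetical.

def take_systematic(sorted_cases, n):
    k = len(sorted_cases) / n
    start = random.random() * k
    return [sorted_cases[int(start + i * k)] for i in range(n)]

def form_batch(groups, batch_size):
    """groups: dict mapping group label -> list of cases sorted by stratum and SIC code."""
    total = sum(len(cases) for cases in groups.values())
    batch = []
    for label, cases in groups.items():
        n_g = round(batch_size * len(cases) / total)   # roughly n * p_g cases from this group
        batch.extend(take_systematic(cases, n_g))
    return batch

groups = {g: [f"{g}_case_{i}" for i in range(size)]
          for g, size in [("1-19 urban", 600), ("1-19 rural", 150),
                          ("20-49", 100), ("50-99", 80), ("100-499", 70)]}
print(len(form_batch(groups, batch_size=150)))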

According to our initial plan, sample balancing, if necessary, would take place during the selection of batch 3. Therefore, the composition of the third batch could be different than that of the first two batches depending on the results of the first two batches. The sample for batch 3 would be selected after batch 1 was completed so that the results from batch 1 could be used to evaluate the initial assumptions. At that point, adjustments could be made to the size and composition of batch 3 to meet the sample goals. In reality, no sample balancing took place in the third batch or the fourth batch, so the composition of all four batches was the same and all four batches were formed in the same way.


6.6.2 Formation of Replicates

Since replicates were never used in sample release, we only provide a brief description of how they were formed. Replicates were defined within batches such that each replicate or group of replicates constituted a random subsample of the batch. Once a case was assigned to a replicate, it could never be reassigned to a different replicate. Replicates were not required to be of the same size from batch to batch, although the within-batch variation was small. Replicate assignment within each batch was achieved using the following steps.

The first step was to determine the number of replicates to be formed. It was decided that each replicate would include roughly 300 cases. So, if n_b was the size of batch b, the number of replicates to be formed within batch b was:

[6.6] R_b = n_b / 300

If n_b was not a multiple of 300, R_b would be rounded to the nearest integer. In that case, each replicate would have slightly more or fewer than 300 cases, and the replicates would not all be of equal size. For example, R_1 = 5,666/300 = 18.89, so we defined 19 replicates within batch 1. We denote the R_b replicates defined over batch b as r_b = \{r_{b1}, r_{b2}, ..., r_{b,R_b}\}.

The second step was to define sub-replicates within each of the five groups. Suppose that batch b contains n_{bg} sample businesses from group g. These businesses were assigned to R_b random subsamples, or sub-replicates, each consisting of r_{bg} = n_{bg}/R_b businesses. This was accomplished using systematic sampling within each group after the list was sorted by stratum and SIC code. The sampling interval was simply R_b, and the resulting R_b possible samples under systematic sampling were used as the R_b sub-replicates. If n_{bg} was not a multiple of R_b, the sub-replicates would not be of equal size. Let sr_{bg} denote the set of R_b sub-replicates defined over group g of batch b; then sr_{bg} = \{sr_{bg1}, sr_{bg2}, ..., sr_{bgR_b}\}.

The last step was to form replicates by combining the sub-replicates, one from each group. Within each group, we randomly ordered the R_b sub-replicates from 1 to R_b. The first replicate was formed by combining the first sub-replicate (in this random order) from each group; the second replicate was formed by combining the second sub-replicate from each group; and so on. The R_b replicates defined over batch b would be:

[6.7]
r_{b1} = \{sr_{b11}, sr_{b21}, sr_{b31}, sr_{b41}, sr_{b51}\}

r_{b2} = \{sr_{b12}, sr_{b22}, sr_{b32}, sr_{b42}, sr_{b52}\}

...

r_{bR_b} = \{sr_{b1R_b}, sr_{b2R_b}, sr_{b3R_b}, sr_{b4R_b}, sr_{b5R_b}\}

where sr_{b11} represents the first sub-replicate within group 1 of batch b, sr_{b21} represents the first sub-replicate within group 2 of batch b, and so on.
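
The three steps can be sketched as follows; the group sizes are hypothetical, and Python's random.shuffle stands in for the random ordering of sub-replicates within each group.

import random

# Sketch of replicate formation: each group is split into R_b systematic sub-replicates,
# the sub-replicates are randomly ordered within each group, and the j-th replicate
# combines the j-th sub-replicate from every group. Data are hypothetical.

def sub_replicates(sorted_cases, n_reps):
    # The i-th sub-replicate takes every n_reps-th case starting at offset i.
    return [sorted_cases[i::n_reps] for i in range(n_reps)]

def form_replicates(groups, n_reps):
    reps = [[] for _ in range(n_reps)]
    for cases in groups.values():
        subs = sub_replicates(cases, n_reps)
        random.shuffle(subs)                  # random ordering of sub-replicates within group
        for j in range(n_reps):
            reps[j].extend(subs[j])
    return reps

groups = {f"group_{g}": [f"group_{g}_case_{i}" for i in range(50 + 10 * g)] for g in range(1, 6)}
replicates = form_replicates(groups, n_reps=4)
print([len(r) for r in replicates])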


6.6.3 Adjustment of Selection Probabilities for Batch Selection

Since all replicates in the four batches were released, we only needed to adjust the selection probabilities for batch selection. For a business i in stratum h (h=1,...,72) and group g (g=1,...,5), the ultimate selection probability had two components: \pi _{ih} was the probability of selection into the initial sample; and \pi _{ig} was the probability of selection into any of the four batches given its selection into the initial sample. The ultimate selection probability for each business was the product of these two components.

The original selection probability \pi _{ih} was the initial sample size divided by the population size per stratum. That is,

[6.8] \pi _{ih} =\frac{n_h }{N_h }

Since each batch was formed by sampling independently from each group, the second probability \pi_{ig} was computed separately for each of the five groups. Let B_{1g}, B_{2g}, B_{3g}, and B_{4g} denote the events of selection into batch 1, batch 2, batch 3, and batch 4, respectively, given that a business in group g was selected into the initial sample. Then \pi_{ig} = P(B_{1g}) + P(B_{2g}) + P(B_{3g}) + P(B_{4g}), since these events are mutually exclusive. Suppose that the initial sample contains n_g businesses from group g, and that batch 1, batch 2, batch 3, and batch 4 include n_{1g}, n_{2g}, n_{3g}, and n_{4g} businesses from group g, respectively. We have:

[6.9] \pi _{ig} =\left( {n_{1g} +n_{2g} +n_{3g} +n_{4g} } \right)/n_g

The ultimate selection probability was:

[6.10] P=\pi _{ih} \pi _{ig}

which should be very close to the number of cases released divided by the total number of cases in the frame per stratum.
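
A minimal sketch of equations [6.8] through [6.10] is shown below; the stratum and group counts are hypothetical placeholders.

# Sketch of equations [6.8]-[6.10]; all counts below are hypothetical.

def ultimate_probability(n_h, N_h, batch_counts, n_g):
    pi_ih = n_h / N_h                          # [6.8] selection into the initial sample
    pi_ig = sum(batch_counts) / n_g            # [6.9] selection into any released batch
    return pi_ih * pi_ig                       # [6.10]

# Hypothetical inputs: 1,139 of 120,000 frame businesses sampled in the stratum, and
# 630 of the 1,000 group cases released across the four batches.
print(ultimate_probability(n_h=1139, N_h=120000, batch_counts=[150, 150, 150, 180], n_g=1000))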


6.7 Nonresponse Subsampling and Design Effects

This section discusses nonresponse subsampling and its impact on sampling errors. Subsampling of screener and main interview nonrespondents was a major design innovation introduced to the 2003 SSBF59. The purpose of subsampling was to improve the chances of achieving the challenging response rate goal while controlling for cost and nonresponse bias more effectively. The basic concept was to conduct the survey in two phases or passes. We first attempted an interview with all cases that were available for the screener or the main interview in pass 1. Then, in addition to businesses that made appointments with the interviewers to complete the interview, we selected a subsample of nonrespondents (screener and main interview) for further interview attempts in pass 2. Since the pass 2 sample, including all the hard appointment callbacks, represented a fraction of all nonrespondents, we were able to focus limited resources more effectively on a smaller sample in pass 2.

Nonresponse subsampling could increase the variance of sample estimates. In the planning stage of the survey, we estimated this increase in variance through the design effect, a concept introduced earlier. We discuss both the estimated and realized design effects in this section. While discussing increased variance due to subsampling, it is important to remember that a variance price would also be paid for the nonresponse itself and for the adjustments made to the weights as a result of nonresponse. The advantage of subsampling is that we could focus resources where nonresponse was the greatest and thereby mitigate its effects. The goal, despite all the additional complexity, was to minimize, for a fixed budget, the combined effect of subsampling and nonresponse on the final survey mean square error, that is, the combination of nonresponse bias and sampling error. Since biases are generally unknown, our discussion here focuses on variance, or design effects.


6.7.1 Subsampling Method

For subsampling purposes, we divided pass 1 screener incompletes between nonrespondents and finalized nonrespondents/noncontacts60. Finalized nonrespondents were those that received a final nonrespondent disposition code at the end of pass 1, including hostile refusals, language barriers, physically or mentally incapacitated respondents, and respondents unavailable or away for the entire field period. Finalized nonrespondents also included cases that were known to be ineligible for the main interview even though they did not explicitly complete the screener. Noncontact cases were businesses with which no human contact was ever established during pass 1. These included businesses associated with disconnected numbers, wrong numbers, computer/fax numbers, fast busy signals, and busy and/or no answers. Noncontacts and finalized nonrespondents were excluded from subsampling for pass 2. Once pass 1 was deemed complete, no further attempts were made to screen these cases.

The screener nonrespondents were subjected to subsampling as they were considered more likely to complete the screener with additional effort. The nonrespondents were businesses that did not complete the screener but with which a human contact or some other promising contact was established during pass 1. Similarly, pass 1 main interview incompletes were divided between those eligible for subsampling and those ineligible. All main interview incompletes were eligible for subsampling except the hostile refusals and those found to be ineligible during pass 1 of the main interview even though they were determined to be eligible by the screener.

We decided to subsample screener nonrespondents at a rate of 50% and main interview nonrespondents at a rate of 60%. These rates were somewhat arbitrary, but they were deemed reasonable for two major considerations. First, the rates were low enough to allow for more effective focus of resources on a smaller pass 2 sample. Obviously, the subsampling rates should not be too high, or the potential of increased response rate would diminish. Second, the rates were high enough to avoid substantial increase in design effects. The subsampling rates should not be too low either, or the increased design effects could overwhelm the benefit of subsampling. Under the proposed subsampling rates, we estimated that the design effect per size class would be about 1.2, while the design effect per stratum would be even smaller. The realized design effects were slightly greater than expected, and these are reported later in this section.

Subsampling was carried out within each batch. After the call design for pass 1 screener interview was completed for all cases in the batch (see Chapter 4), the screener nonrespondents eligible for subsampling were identified. Similarly, after the call design for pass 1 main interview was completed for all cases in the batch, the main interview nonrespondents eligible for subsampling were identified. In principle, subsampling of nonrespondents should take place independently within each stratum so that the distribution of the subsample would reflect that of the original sample. However, since many strata were small, especially after the sample was divided into batches, it was more practical to subsample nonrespondents by groups of strata. Therefore, we conducted nonresponse subsampling by the following eight groups:

• Size class 1-19 and Urban;

• Size class 1-19 and Rural;

• Size class 20-49 and Urban;

• Size class 20-49 and Rural;

• Size class 50-99 and Urban;

• Size class 50-99 and Rural;

• Size class 100-499 and Urban;

• Size class 100-499 and Rural.

Within each subsampling group, nonrespondents were sampled systematically. First, all pass 1 nonrespondents within each group were sorted by stratum and SIC code. Next, the sampling interval was determined for each group based on the specified subsampling rate. Finally, a systematic sample was selected from the sorted list. Since no sample balancing took place, all nonrespondents had the same probability of being selected for further interview attempts in pass 2. Table 6.16 lists the screener disposition codes that were eligible for subsampling and the number of cases selected into the pass 2 sample, and Table 6.17 contains the same information for the main interview subsampling.

Table 6.16 Pass 2 Screener Sample Size by Disposition Code and Batch
Screener Code:Screener Subcode Screener Disposition Batch 1 Batch 2 Batch 3 All Batches
16:35 Regular Busy 4 8 8 20
17:31 Ring No Answer 360 242 232 834
17:32 Answering Machine - No Message Left 69 240 157 466
17:34 Answering Machine - Message Left 55 68 103 226
17:36 Transferred to Voicemail - Message Left 11 6 5 22
17:37 Transferred to Voicemail - No Message Left 45 56 64 165
17:38 Owner/Proxy to Call 800 Number 80 71 66 217
17:39 Owner Unavailable - Message Left 12 11 13 36
17:51 Hung Up During Introduction 55 43 24 122
17:52 Proxy Refusal 2 2 1 5
17:53 Owner Refusal 12 11 8 31
17:54 Gatekeeper Refusal 11 10 5 26
17:59 Owner Unavailable - No Callback Established 46 48 29 123
17:60 Advance Letter Re-mail Request 10 10 8 28
17:61 Fax/Email Advance Letter Request 7 19 14 40
17:62 Proxy Refusal - Suspend 27 16 26 69
17:63 Owner Refusal - Suspend 133 82 117 332
17:64 Gatekeeper Refusal - Suspend 57 28 54 139
17:66 Privacy Manager 14 6 9 29
22:141 Callback Request - Soft 49 27 15 91
22:142 Callback Request - Soft (Suspend) 71 95 99 265
Total:N/A N/A 1,130 1,099 1,057 3,286


Table 6.17 Pass 2 Main Interview Sample Size by Disposition Code and Batch
Main Code:Main Subcode Main Disposition Batch 1 Batch 2 Batch 3 All Batches
7:1 Refusal Letter Request - Confidentiality 14 5 0 19
7:3 Refusal Letter Request - Too Busy/Not Enough Time 32 8 0 40
7:4 Refusal Letter Request - Legitimacy 0 1 0 1
7:5 Refusal Letter Request - General Letter 34 10 0 44
16:35 Regular Busy 6 1 3 10
17:31 Ring No Answer 129 138 121 388
17:32 Answering Machine - No Message Left 69 75 42 186
17:34 Answering Machine - Message Left 31 15 38 84
17:36 Transferred to Voicemail - Message Left 1 3 0 4
17:37 Transferred to Voicemail - No Message Left 12 20 9 41
17:38 Owner/Proxy to Call 800 Number 44 38 22 104
17:39 Owner Unavailable - Message Left 6 16 10 32
17:51 Hung Up During Introduction 7 11 3 21
17:52 Proxy Refusal 1 0 2 3
17:53 Owner Refusal 21 19 20 60
17:54 Gatekeeper Refusal 5 2 2 9
17:59 Owner Unavailable - No Callback Established 15 16 5 36
17:62 Proxy Refusal - Suspend 3 9 4 16
17:63 Owner Refusal - Suspend 74 72 49 195
17:64 Gatekeeper Refusal - Suspend 19 35 17 71
17:66 Privacy Manager 1 1 3 5
22:141 Callback Request - Soft 57 30 64 151
22:142 Callback Request - Soft (Suspend) 80 44 108 232
Total:N/A N/A 661 569 522 1752


6.7.2 Estimated Design Effects

Estimated design effects per size class were used to allocate the 4,000 interviews among the size classes in the planning stage of the survey. We now describe in greater detail how the design effects were estimated. In addition, we briefly present the realized design effects. Detailed weighting procedures are discussed in Section 6.9.

Design effects could arise from various sources: clustering, stratification, unequal weighting, post-stratification, and other complex design features. Unequal weighting was the main source of design effects for the 2003 SSBF. The design effects due to unequal weighting may be estimated by,

[6.11] DEFF=1+cv_w^2

where cv_w^2 is the squared coefficient of variation of the weights. To estimate design effects, we had to simulate the weighting process. Every business in the sample had a base weight determined by its initial selection probability. The base weight would be adjusted subsequently for screener subsampling, screener nonresponse, main interview subsampling, and main interview nonresponse61.

Suppose that all cases in stratum h have an initial sampling weight of w_h. The weight adjusted for screener subsampling would depend on whether a case completed the screener during pass 1 or pass 2. For cases that completed the screener during pass 1 (S1), the adjusted weight would be the same as the original weight w_h. For cases that completed the screener during pass 2 (S2), the adjusted weight would be w_h/R_s, where R_s denotes the screener subsampling rate. At this point, a nonresponse adjustment factor would be applied to the weight. However, this factor would not affect the design effects within stratum since it would be constant within stratum. Let us denote the screener nonresponse adjusted weight as w_h f_{h(s)} and (w_h/R_s) f_{h(s)} for S1 and S2 cases, respectively, where f_{h(s)} represents the screener nonresponse adjustment factor per stratum. The screener nonresponse adjusted weight would be the base weight for the main interview. For cases that completed both the screener and the main interview during pass 1 (S1M1), the weight adjusted for main interview subsampling would be w_h f_{h(s)}. For cases that completed the screener during pass 2 and the main interview during pass 1 (S2M1), the weight adjusted for main interview subsampling would be (w_h/R_s) f_{h(s)}. For cases that completed the screener during pass 1 and the main interview during pass 2 (S1M2), the weight adjusted for main interview subsampling would be (w_h f_{h(s)})/R_m, where R_m denotes the main interview subsampling rate. Finally, for cases that completed both the screener and the main interview during pass 2 (S2M2), the weight adjusted for main interview subsampling would be ((w_h/R_s) f_{h(s)})/R_m. Again, since we planned to conduct main interview nonresponse adjustment within stratum, that adjustment would not affect the within-stratum design effects. If the main interview nonresponse adjustment factor is f_{h(m)}, then the final weight for the four types of cases would be,

[6.12] S1M1: w_{1h} = w_h f_{h(s)} f_{h(m)}

[6.13] S2M1: w_{2h} = (w_h / R_s) f_{h(s)} f_{h(m)}

[6.14] S1M2: w_{3h} = (w_h f_{h(s)} f_{h(m)}) / R_m

[6.15] S2M2: w_{4h} = (w_h / (R_s R_m)) f_{h(s)} f_{h(m)}

In other words, there would be only four unique weights per stratum. In particular, since the initial sampling weight w_h and the nonresponse adjustment factors f_{h(s)} and f_{h(m)} were common to all four types of cases, all the variation in the weights within a stratum was due to subsampling. Note that adjustments for eligibility were not treated here due to the lack of information during the planning stage.

The distribution of the four types of cases would depend on several factors, including screener completion rates, the proportion of screener incompletes to be subsampled, and main interview completion rates, all of which could vary across strata. Suppose that, in stratum h, the number of complete interviews for the four types of cases is n_{1h}, n_{2h}, n_{3h}, and n_{4h}, respectively. Then the mean of the weight per stratum would be,

[6.16] \bar{w}_h = \frac{\sum_{k=1}^{4} n_{kh} w_{kh}}{\sum_{k=1}^{4} n_{kh}}

And the variance of the weight per stratum would be,

[6.17] V\left( w_h \right) = \frac{\sum_{k=1}^{4} n_{kh} \left( w_{kh} - \bar{w}_h \right)^2}{\sum_{k=1}^{4} n_{kh}}

where k indexes the four types of cases (S1M1, S2M1, S1M2, and S2M2). The estimated design effect per stratum due to unequal weighting would be,

[6.18] DEFF = 1 + V\left( w_h \right) / \bar{w}_h^2

Table 6.18 presents the design effect estimates for the 72 strata. Based on our simulation, the average design effect per stratum was 1.114, i.e., the variance of an estimate would be increased by 11.4% with subsampling. The variation of design effects across strata was very small. This was because the assumed completion rates did not vary much across strata and the assumed subsampling rates were the same in all strata. As a result, the distribution of complete interviews over the four types of cases was similar in all strata.
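
The simulation can be sketched in a few lines. The split of completes across the four case types below is illustrative (roughly the 80/19/1 pattern noted later in this section), the subsampling rates are the planned .5 and .6, and the nonresponse adjustment factors are omitted because they are constant within a stratum and cancel out of the within-stratum design effect.

# Sketch of the design-effect simulation in equations [6.12]-[6.18]; case counts are illustrative.

R_S, R_M = 0.5, 0.6
base_weight = 1.0                                  # w_h; any constant gives the same DEFF

weights = {                                        # final weight per case type, [6.12]-[6.15]
    "S1M1": base_weight,
    "S2M1": base_weight / R_S,
    "S1M2": base_weight / R_M,
    "S2M2": base_weight / (R_S * R_M),
}
counts = {"S1M1": 80, "S2M1": 10, "S1M2": 9, "S2M2": 1}   # illustrative completes

n_total = sum(counts.values())
w_bar = sum(counts[k] * weights[k] for k in counts) / n_total                    # [6.16]
var_w = sum(counts[k] * (weights[k] - w_bar) ** 2 for k in counts) / n_total     # [6.17]
deff = 1 + var_w / w_bar ** 2                                                    # [6.18]
print(round(deff, 3))                              # close to the 1.11 average reported above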


Table 6.18 Design Effects per Stratum Due to Unequal Weighting
Stratum DEFF
111 1.114
112 1.114
113 1.108
114 1.108
115 1.108
116 1.108
117 1.108
118 1.108
119 1.108
121 1.099
122 1.099
123 1.096
124 1.096
125 1.096
126 1.096
127 1.096
128 1.096
129 1.096
211 1.105
212 1.105
213 1.095
214 1.095
215 1.100
216 1.100
217 1.100
218 1.100
219 1.100
221 1.093
222 1.093
223 1.092
224 1.092
225 1.092
226 1.092
227 1.092
228 1.092
229 1.092
311 1.107
312 1.107
313 1.098
314 1.098
315 1.103
316 1.103
317 1.103
318 1.103
319 1.103
321 1.081
322 1.081
323 1.080
324 1.080
325 1.080
326 1.080
327 1.080
328 1.080
329 1.080
411 1.107
412 1.107
413 1.098
414 1.098
415 1.103
416 1.103
417 1.103
418 1.103
419 1.103
421 1.081
422 1.081
423 1.080
424 1.080
425 1.080
426 1.080
427 1.080
428 1.080
429 1.080

We also estimated the design effects per size class, which was an important factor in determining the sample allocation across the four size classes. The average design effect per size class was estimated at 1.17, and there was virtually no variation across size classes. Design effects per size class were estimated using similar procedures as described above except that the coefficient of variation of the final weights was estimated over all complete cases within the size class. The design effect was greater per size class because there were differential nonresponse adjustments per stratum within the same size class. Again, adjustment for eligibility was not considered here. To the extent that eligibility rates (screener and main interview) varied across strata within the same size class, the design effects per size class would be greater than estimated. For allocation purposes, we used a conservative design effect of 1.25 per size class, as described earlier.

The overall design effect estimate was 1.664, an increase of about 16% over the 1998 design effect of 1.44. The reason for the moderate increase was that only a small fraction of the complete cases would actually have extreme weights. Based on our analysis, about 80% of all complete cases would be S1M1 cases; about 19% of complete cases would either be S1M2 or S2M1 cases; and less than 1% of the complete cases would be S2M2 cases.


6.7.3 Realized Design Effects

The realized design effects before weight trimming (Section 6.9.10) were slightly higher than the estimated design effects. Based on the relative variance of the untrimmed weights, the overall design effect was about 1.796. The realized design effect by size class was also higher than estimated. Table 6.19 reports the number of completes, the relative variance of the weights, and the design effect by frame size class.


Table 6.19 Realized Design Effect per Frame Size Class Before Weight Trimming
Frame Size Class No. Completes CV of Weights Design Effect
1 2,719 0.540136 1.29
2 532 0.636669 1.41
3 537 0.587141 1.34
4 480 0.614358 1.38
Total 4,268 0.892262 1.796
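
The design effect entries in Table 6.19 follow directly from the squared coefficient of variation (relative variance) of the weights; for the full sample, for example,

DEFF=1+CV^2=1+\left( {0.892262} \right)^2\approx 1.796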

Based on the survey-updated information, however, a total of 632 businesses turned out to be in a different size class than the frame data indicated. This difference was due to a combination of frame errors, imputation, and actual changes in business size over time. Table 6.20 shows the joint distribution of the 4,268 businesses over the frame size class and the updated size class. For example, 69 businesses were sampled as class 1 but turned out to be class 2 businesses.


Table 6.20 Distribution of Completes Over the Original and Updated Size Classes
Frame Size Class Updated Size Class 1 Updated Size Class 2 Updated Size Class 3 Updated Size Class 4 Total
1 2,628 69 13 9 2,719
2 140 358 26 8 532
3 47 123 310 57 537
4 27 19 95 339 480
Total 2,842 569 444 413 4,268

These changes have important analytical implications. The analyst is likely to use the updated information to define size classes, and such a redefinition would drastically increase the design effect per size class. The impact on design effect is greater when businesses moved from a lower class to a higher class because such businesses were sampled with lower probability and they would carry extremely large weights in the new size class. As a result, the design effect was dramatically increased in the three larger size classes, as shown in Table 6.21. We implemented weight trimming procedures to reduce the impact of the extreme weights; these procedures are discussed in Section 6.9.10. The final design effect due to unequal weighting per size class is reported in Table 6.22.


Table 6.21 Realized Design Effect per Updated Size Class Before Weight Trimming
Updated Size Class No. Completes CV of Weights Design Effect
1 2,842 0.59743 1.36
2 569 1.32196 2.75
3 444 2.39293 6.73
4 413 1.79703 4.23
Total 4,268 0.892262 1.796


Table 6.22 Final Design Effect per Updated Size Class After Weight Trimming
Updated Size Class No. Completes CV of Weights Design Effect
1 2,842 0.59743 1.36
2 569 1.20302 2.45
3 444 1.14499 2.31
4 413 1.00535 2.01
Total 4,268 0.87965 1.77


6.8 The Five Percent Follow-Up Subsample


6.8.1 Purpose of the Subsample

As discussed in Section 6.9, the screening interview had three possible outcomes: eligible, ineligible, and unknown eligibility. In order to compute a response rate, some assumption regarding the eligibility of the unknown cases must be made. The unknown cases were unlikely to be eligible at the same rate as the cases for which eligibility was known, but some evidence was needed for alternative assumptions. A sample of the unknown cases was selected for intensive follow-up to try to determine their eligibility. There were two types of cases with unknown eligibility - noncontacts and nonrespondents - and we selected follow-up samples of both types.

Note that although the original plan was to select the 5% follow-up cases on an ongoing basis as each pass was completed (noncontacts at the end of each batch's pass 1 and nonrespondents at the end of each batch's pass 2), practical timing constraints meant that the entire subsample was selected within a three-week period toward the end of data collection.


6.8.2 Selection of the Subsample

Noncontacts suitable for follow-up subsampling included cases with final dispositions of computer/fax tone, fast busy, disconnected/temporarily disconnected, wrong number, noncontact with busy and no answers, or noncontact with all busy or all no answers. The subsample of noncontacts was selected from all cases in batches 1, 2, and 3 defined as such by the end of pass 1. Since batch 4 did not have a pass 2, noncontacts were subsampled from all batch 4 cases at week 6, which allowed enough calling attempts for noncontact cases to be finalized. The rate determined from this subsample was applied to all cases in these categories of noncontacts.

Subsampled nonrespondents included pending language barrier, pending physically/mentally incapacitated, pending non-hostile refusals, and other non-finalized nonrespondents not included as noncontacts. The follow-up sample of nonrespondents was selected after the completion of pass 2 screener interviewing. The rate determined from this subsample was applied to all nonrespondents in these categories, as well as to hostile and other nonrespondents finalized by the end of pass 2.

Noncontacts were subsampled from all four batches, while 5% of the nonrespondents, based on an estimate of the total number of nonrespondents in all four batches, were subsampled from only the first two batches. The estimated total number of nonrespondents was based on the average number (743.5) of nonrespondents suitable for subsampling in batches 1 and 2 applied to all four batches and adjusted for the fact that batch 4 was larger (6,800 vs. 5,666) than the other three batches and did not have a screener pass 2.
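
Written out, with 743.5 the average of the batch 1 and 2 counts, 6,800/5,666 the adjustment for batch 4's larger size, and the factor of 2 reflecting its lack of a screener pass 2, the estimate is

\frac{785+702}{2}\times \left( {3+2\times \frac{6,800}{5,666}} \right)=743.5\times 5.40\approx 4,015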


Table 6.23 Sample Size for 5% Follow-Up Study
Batch Noncontacts Suitable for Subsampling Noncontacts Subsampled Nonrespondents Suitable for Subsampling Nonrespondents Subsampled
1 489 25 785 106
2 498 25 702 95
3 603 31 N/A N/A
4 631 32 N/A N/A
Total 2,221 113 4,015 (estimated1) 201

1. {(785+702)/2}*{3+(6800/5666)*2}

Prior to subsampling, suitable noncontacts were sorted by final disposition code, original sampling stratum, SIC code, and DUNS number. For nonrespondents, each suitable case was first assigned hierarchically to one of five "refusal" categories according to whether the case's call history contained an owner, proxy, or gatekeeper refusal, a hung-up-during-interview call, or no indication of refusal. Nonrespondents were then sorted by "refusal" category, stratum, SIC code, and DUNS number prior to sample selection. Both noncontacts and nonrespondents were systematically subsampled within batch.
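
For illustration only, a minimal Python sketch of systematic selection after sorting on the implicit stratification keys; the dictionary keys are hypothetical stand-ins for the actual sort fields, and the selection interval is chosen by the caller (in batch 1, for example, an interval of about 20 would yield roughly the 25 noncontacts selected from the 489 suitable cases).

import random

def systematic_subsample(cases, interval, seed=0):
    # Sort by the keys used for implicit stratification, then take every
    # interval-th case starting from a random position in the first interval.
    ordered = sorted(cases, key=lambda c: (c["disposition"], c["stratum"],
                                           c["sic"], c["duns"]))
    start = random.Random(seed).randrange(interval)
    return ordered[start::interval]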


6.8.3 Estimates from the Subsample

A description of the methodology used to locate noncontacts and nonrespondents and determine their eligibility can be found in Section 4.6.4. Overall, based on cases with known eligibility status, 56 of 71 noncontacts (78.9%) and 25 of 144 nonrespondents (17.4%) were found to be ineligible. See the highlighted rows of Table 6.24 and Table 6.25. Excluding the indeterminate cases from the rate calculations is equivalent to assuming that they are eligible at the same rate as the determined follow-up cases.
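
In terms of the eligibility rates applied later in the weighting adjustments (Section 6.9.5), these counts imply

e_1 =\frac{15}{71}\approx 21.1\% (noncontacts) and e_2 =\frac{119}{144}\approx 82.6\% (nonrespondents)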


Table 6.24 Final Dispositions for Noncontacts in Five-Percent Follow-Up Study
Final Disposition from 5% Follow-Up Study Number of Cases % of Total Number of Cases % of Number of Cases with Known Eligibility
Ineligible 56 49.6% 78.9%
Ineligible, Confirmed Out-of-Business 46 40.7% 64.8%
Ineligible, In-Business 10 8.9% 14.1%
Eligible 15 13.3% 21.1%
Eligibility Status Indeterminate 42 37.2% N/A
Total 113 100% 100%


Table 6.25 Final Dispositions for Nonrespondents in 5% Follow-Up Study
Final Disposition from 5% Follow-Up Study Number of Cases % of Total Number of Cases % of Number of Cases with Known Eligibility
Ineligible 25 12.4% 17.4%
Ineligible, Confirmed Out-of-Business 9 4.5% 6.3%
Ineligible, In-Business 16 8.0% 11.1%
Eligible 119 59.2% 82.6%
Eligibility Status Indeterminate 57 28.4% N/A
Total 201 100% 100%

These estimates were used to adjust the sampling weights by assuming that 78.9% and 17.4% of the end-of-study noncontacts (2,400 cases) and nonrespondents (4,051 cases), respectively, were ineligible. Table 6.27 shows the final screener disposition codes and eligibility adjustments.


6.9 Weighting Procedures

The 2003 SSBF attempted screener interviews with 23,798 firms in four batches. The main interview eligibility status was determined for 14,061 firms, of which 9,687 were found to be eligible. The interviewers were able to complete the main interview with 4,268 firms, and these firms formed the final SSBF analysis sample. A final analysis weight was calculated for each complete case to support weighted estimation. The primary purpose of the weights was to correct for potential bias due to unequal selection probabilities and nonresponse. A secondary purpose was to compensate for ineligible businesses that were part of the original sample. Informally, the sum of the analysis weights of responding firms approximates the total number of firms in the target population.

The final analysis weight was calculated in multiple stages. The first stage was the calculation of the initial base weight to account for the sample design. Subsequent weighting stages adjusted for batch selection, screener subsampling, screener and main eligibility, screener nonresponse, main interview subsampling, and main interview nonresponse.


6.9.1 Initial Base Weight: w_{1ih}

The initial base weight for a sample firm was the reciprocal of the probability of selection under the sample design. Let n_h be the initial sample size of stratum h, N_h the frame size of stratum h, and \pi _{ih} the initial probability of selection for case i in stratum h. Then the initial base weight was defined by,

[6.19] w_{1ih} =\frac{1}{\pi _{ih} }=\frac{N_h }{n_h }

All 37,600 firms in the initial sample ordered from Dun & Bradstreet had an initial base weight. The summation of w_{1ih} over all sampled firms equals the total number of firms in the sampling frame. That is,

[6.20] \begin{displaymath} \sum_{h=1}^{72} \sum_{i=1}^{n_{h}} w_{1ih} = \sum_{h=1}^{72} N_{h} = 9,701,023 . \end{displaymath}


6.9.2 Adjustment for Batch Selection: w_{2ih}

The selection of firms from the initial sample into batches represented an additional sampling stage that should be accounted for in the weighting process. As described earlier, batch selection was conducted independently within each of the five groups. Suppose that the initial sample contains n_{g} businesses from group g. Suppose further that n_{1g} , n_{2g} , n_{3g} , and n_{4g} businesses from group g were selected into batch 1, batch 2, batch 3, and batch 4, respectively. Then, for a sample business in group g, the probability of selection into any of the four batches given its selection into the initial sample was,

[6.21] \pi _{ig} =\frac{n_{1g} +n_{2g} +n_{3g} +n_{4g} }{n_g}

Therefore, the weight adjusted for batch selection is:

[6.22] w_{2ih} =\delta _{2i} w_{1ih} \frac{1}{\pi _{ig} }

where \delta _{2i} =1 if case i was selected into one of the batches and 0 otherwise. Only the 23,798 cases that were selected into one of the batches had a positive w_{2ih} .


6.9.3 Adjustment for Sample Release: w_{3ih}

According to the original plan, the sample would be released by replicates within batches. For a sample business in group g, the probability of being released given its selection into one of the batches was:

[6.23] \frac{n_g^{'} }{n_{1g} +n_{2g} +n_{3g} +n_{4g} }

where n_g^{'} is the total number of cases released from group g, and the denominator is the total number of cases selected into the batches from group g. Thus, the weight adjusted for sample release would be:

[6.24] w_{3ih} =\delta _{3i} w_{1ih} \frac{n_g }{n_g^{'}}

where \delta _{3i} =1 if case i was released and 0 otherwise. In reality, all 23,798 cases were released ( \delta _{3i} =1 for all 23,798 cases assigned to batches 1 to 4). Therefore, w_{3ih} and w_{2ih} were identical for all cases.


6.9.4 Adjustment for Screener Subsampling: w_{4ih}

The pass 2 subsample was selected exclusively from pass 1 screener nonrespondents. The pass 2 subsample had two components. First, nonrespondents that made appointments with the interviewers to complete the screening interview were included in the pass 2 subsample with certainty. Second, a 50% subsample was selected from the rest of the pass 1 nonrespondents. Therefore, the size of the pass 2 subsample was typically greater than 50% of pass 1 nonrespondents. As described in Section 6.7, the subsampling of nonrespondents took place within each batch and within each of the eight subsampling groups. Table 6.26 presents the size of the screener pass 2 subsample by batch and by subsampling group.

To adjust the weights for screener subsampling, we distinguished among four types of cases: 1) those automatically selected into the pass 2 subsample with certainty; 2) those randomly selected into the 50% subsample; 3) those eligible for pass 2 subsampling but not selected into the subsample; and 4) those not eligible for subsampling. The type 1 and type 4 cases retained their weights from the previous stage. The type 2 cases, those selected into the pass 2 subsample, had their weights inflated by the inverse of the subsampling rate so that they represented the entire set of pass 1 nonrespondents that were eligible for subsampling. Finally, the type 3 cases received an adjusted weight of zero because they would be represented by the type 2 cases.

Let r_{bg} denote the effective subsampling rate in group g within batch b. The weight adjusted for the screener subsampling was calculated as,

w_{4ih} =w_{3ih} certainty selections and those not eligible for subsampling

[6.25] w_{4ih} =w_{3ih} /r_{bg} those randomly selected into the subsample

w_{4ih} =0 those eligible but not selected into the subsample
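
A minimal Python sketch of this three-way adjustment, using a hypothetical case_type label for the categories described above (an illustration of [6.25], not the production weighting code):

def adjust_for_screener_subsampling(w_prev, case_type, r_bg):
    # case_type: "certainty", "not_eligible_for_subsampling", "selected",
    # or "not_selected"; r_bg is the effective subsampling rate for the
    # case's batch and subsampling group.
    if case_type in ("certainty", "not_eligible_for_subsampling"):
        return w_prev                # weight carried forward unchanged
    if case_type == "selected":
        return w_prev / r_bg         # inflated to represent cases not selected
    return 0.0                       # eligible for subsampling but not selected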


6.9.5 Adjustment for Eligibility: w_{5ih}

The screening interview had three possible outcomes: eligible, ineligible, and unknown eligibility. Businesses that met all the eligibility criteria were eligible for the survey. Ineligible businesses included the following:

• Branches, divisions, and subsidiaries of a business;

• Not-for-Profit businesses;

• Government-owned businesses;

• Financial businesses;

• Agricultural businesses;

• Businesses with 500 or more employees; and

• Businesses not in operation on December 31, 2003, and/or on the date of the interview.

The eligibility status was unknown for businesses that were not contacted and for most businesses that did not complete a screening interview. Eligibility was determined for a small number of cases that did not actually complete a screening interview, for example when the firm was confirmed to be out of business.

The adjustment for eligibility was carried out as follows. The cases known to be eligible retained their previous weight w_{4ih} ; the cases known to be ineligible received a weight of zero; and the cases where eligibility was unknown received a weight that was the product of w_{4ih} and the eligibility rate estimated from the 5% follow-up sample.

The cases where eligibility was unknown included pass 1 noncontacts, some pass 1 finalized nonrespondents, and all pass 2 nonrespondents. The eligibility rates among pass 1 noncontacts and pass 2 nonrespondents were estimated from the 5% subsample of screener incompletes (see Section 6.8). For the pass 1 finalized nonrespondents, we had to make assumptions about their eligibility rate since they were not represented by the 5% subsample. We assumed that 1) pass 1 finalized nonrespondents had the same eligibility rate as pass 1 noncontacts if they had the following disposition codes: disconnected numbers, wrong numbers, computer/fax numbers, fast busy signal, and busy and/or no answer; and 2) pass 1 finalized nonrespondents had the same eligibility rate as pass 2 nonrespondents if they had the following disposition codes: language barrier, privacy manager, unavailable, answering machine, proxy refusal, away for the entire field period, respondent/owner refusal, physically/mentally incapacitated, gatekeeper refusal, and hostile (and other finalized) refusal. Putting all this together, the weight adjusted for eligibility was:

w_{5ih} =w_{4ih} eligible cases

[6.26] w_{5ih} =0 ineligible cases

w_{5ih} =w_{4ih} e_k unknown eligibility cases

where e_k (k=1,2) is the estimated or assumed eligibility rate for the particular category of unknown eligibility cases (i.e., noncontacts or nonrespondents). As reported in Section 6.8.3, the estimated eligibility rates were e_1 = 21.1% for the noncontacts and e_2 = 82.6% for the nonrespondents (equivalently, 78.9% and 17.4% of these cases, respectively, were estimated to be ineligible).

All released cases except known ineligibles and those subsampled out at pass 2 had a positive weight w_{5ih} . The sum of the adjusted weights w_{5ih} estimates the number of eligible businesses in the frame.

Table 6.26 Pass 2 Screener Sample Size by Batch and Subsampling Group
Batch:Subsampling Group Certainty Cases Noncertainty Cases Total Cases
1:1 14 610 624
1:2 0 88 88
1:3 7 112 119
1:4 1 10 11
1:5 2 140 142
1:6 1 16 17
1:7 3 134 137
1:8 0 20 20
1:All 28 1130 1158
2:1 5 586 591
2:2 1 86 87
2:3 0 96 96
2:4 0 17 17
2:5 0 132 132
2:6 1 18 19
2:7 3 140 143
2:8 1 24 25
2:All 11 1099 1110
3:1 13 554 567
3:2 1 72 73
3:3 1 101 102
3:4 0 16 16
3:5 3 137 140
3:6 0 15 15
3:7 7 142 149
3:8 0 20 20
3:All 25 1057 1082
Total:N/A 64 3,286 3,350

Table 6.27 shows the number of cases falling into each screener disposition code by batch and how the eligibility-adjusted weight was calculated for each disposition code.

Table 6.27 Screener Final Disposition Codes and Eligibility Adjustments by Batch
Disposition Code Disposition Description Batch1 Batch2 Batch3 Batch4 All Batches Eligibility Status w_{5ih}
19/0 Complete Eligible 2,222 2,276 2,232 2,957 9,687 Eligible w_{4ih}
11/1 Ineligible/Owner Screened 465 492 588 763 2,308 Ineligible 0
11/2 Ineligible/Proxy Screened 324 355 347 473 1,499 Ineligible 0
11/3 Ineligible/DK Response/ Owner Screened 37 61 63 62 223 Ineligible 0
11/5 Ineligible/RF Response/Owner Screened 20 13 13 13 59 Ineligible 0
11/11 Not Screened - Not in Operation in 2003 17 15 8 9 49 Ineligible 0
11/12 Not Screened - Not Currently in Operation 49 66 22 36 173 Ineligible 0
11/13 Not Screened - Majority Owned Subsidiary 5 1 0 4 10 Ineligible 0
11/14 Not Screened - Not for Profit 13 5 0 4 22 Ineligible 0
11/15 Not Screened - Not Privately Owned 6 1 0 0 7 Ineligible 0
11/16 Not Screened - 500 Employees or More 3 1 0 0 4 Ineligible 0
11/17 Not Screened - Not the Headquarters 2 0 1 0 3 Ineligible 0
11/18 Farm or Financial Institutions 5 5 4 3 17 Ineligible 0
25/1 Final Language Barrier 26 25 7 27 85 Unknown Nonrespondent w_{4ih} e_2
25/3 Final Computer/Fax Tone 28 35 46 36 145 Unknown Noncontact w_{4ih} e_1
25/6 Final Fast Busy 33 23 24 40 120 Unknown Noncontact w_{4ih} e_1
25/7 Final Disconnected/ Temporarily Disconnected 207 256 298 340 1,101 Unknown Noncontact w_{4ih} e_1
25/8 Final Wrong Number 160 128 177 181 646 Unknown Noncontact w_{4ih} e_1
33/66 Final Privacy Manager 13 9 19 24 65 Unknown Nonrespondent w_{4ih} e_2
33/87 Final Noncontact with Busy and No Answers 5 9 5 9 28 Unknown Noncontact w_{4ih} e_1
33/88 Final Noncontact Selected for 5% Follow-up 106 95 0 0 201 Unknown Noncontact w_{4ih} e_1
33/89 Final Nonrespondent Selected for 5% Follow-up 25 25 31 32 113 Unknown Nonrespondent w_{4ih} e_2
33/90 Final Noncontact with ALL Busy or ALL No Answers 44 28 35 52 159 Unknown Noncontact w_{4ih} e_1
33/91 Final Unavailable 161 153 191 563 1,068 Unknown Nonrespondent w_{4ih} e_2
33/92 Final Answering Machine 70 55 61 198 384 Unknown Nonrespondent w_{4ih} e_2
33/93 Final Proxy Refusal 39 29 31 79 178 Unknown Nonrespondent w_{4ih} e_2
33/94 Final Away for Entire Field Period 20 4 1 4 29 Unknown Nonrespondent w_{4ih} e_2
33/95 Final Respondent/Owner Refusal 263 222 236 533 1,254 Unknown Nonrespondent w_{4ih} e_2
33/96 Final Physically/Mentally Incapacitated 3 1 4 2 10 Unknown Nonrespondent w_{4ih} e_2
33/97 Final Gatekeeper Refusal 143 138 129 277 687 Unknown Nonrespondent w_{4ih} e_2
33/99 Final Hostile/Other Refusal 21 41 37 79 178 Unknown Nonrespondent w_{4ih} e_2


6.9.6 Adjustment for Screener Nonresponse: w_{6ih}

The purpose of the screener nonresponse adjustment was to reduce potential bias due to screener nonresponse. We fitted a logistic regression model to predict the response propensity and then used the predicted propensity score to form nonresponse adjustment cells. This adjustment was carried out within each adjustment cell as follows,

[6.27] w_{6ih} =w_{5ih} \delta _{6i} \left( {\frac{\sum\nolimits_c {w_{5ih} } }{\sum\nolimits_c {w_{5ih} \delta _{6i} } }} \right)

where \delta _{6i} =1 if the case is a completed screener, and 0 otherwise. The adjustment factor, the term in the parentheses, is the inverse of the weighted screener response rate per adjustment cell. The summations are over all cases within each adjustment cell c.
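
A minimal Python sketch of the cell-level adjustment in [6.27], assuming each case carries its weight w_{5ih}, a 0/1 response indicator, and an adjustment cell identifier (an illustration only; it presumes every cell contains at least one respondent):

from collections import defaultdict

def nonresponse_adjust(w5, responded, cell):
    # Within each adjustment cell, respondents absorb the weight of
    # nonrespondents; nonrespondents receive an adjusted weight of zero.
    total_w = defaultdict(float)
    resp_w = defaultdict(float)
    for w, d, c in zip(w5, responded, cell):
        total_w[c] += w
        resp_w[c] += w * d
    return [w * d * total_w[c] / resp_w[c] if d else 0.0
            for w, d, c in zip(w5, responded, cell)]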

Logistic regression weighting based on predicted response propensities has been successfully used in other surveys. The method was appropriate for SSBF since there was a good deal of information available for the respondents as well as nonrespondents. In addition, this approach could incorporate both categorical and continuous variables in the weighting process. The data used to fit the model included all 16,138 cases that had a positive w_{5ih} . Results from the logistic regression can be found in Appendix CCC.

For each case, a response propensity was obtained from the model. The sample was then stratified by the response propensity to form 40 adjustment cells. In order to preserve the total weight per size class, the cells were formed within size classes. Size class 0-19 was divided into 25 cells, and each of the remaining size classes was divided into 5 cells. To form the cells within each size class, we sorted the sample by the predicted response propensity so that cases within the same cell had similar response propensities according to the model. Table 6.28 presents the 40 cells and their adjustment factors.
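
A sketch of how such (nearly) equal-sized propensity cells can be formed within a size class, assuming scores holds the predicted response propensities for the cases in that class (a hypothetical helper, not the production procedure):

import math

def propensity_cells(scores, n_cells):
    # Rank cases by predicted propensity and cut the ranking into n_cells
    # groups of nearly equal size; returns a cell index for each case.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    size = math.ceil(len(scores) / n_cells)
    cells = [0] * len(scores)
    for rank, i in enumerate(order):
        cells[i] = min(rank // size, n_cells - 1)
    return cells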

The screener nonresponse-adjusted weight w_{6ih} was the main interview base weight. Only businesses that completed the screener and were eligible for the main interview had a positive w_{6ih} . Thus, the sum of w_{6ih} represented an estimate of the number of eligible businesses in the frame.

Table 6.28 Screener Nonresponse Adjustment Cells
Adjustment Cell Size Class Sample Size Number of Respondents Weighted Response Rate Adjustment Factor
1 0-19 403 318 0.80 1.243
2 0-19 403 304 0.71 1.400
3 0-19 403 308 0.75 1.337
4 0-19 403 281 0.75 1.338
5 0-19 403 279 0.72 1.390
6 0-19 403 271 0.71 1.410
7 0-19 403 245 0.64 1.560
8 0-19 403 248 0.64 1.556
9 0-19 403 239 0.62 1.607
10 0-19 403 227 0.61 1.644
11 0-19 403 234 0.59 1.688
12 0-19 402 226 0.61 1.636
13 0-19 402 218 0.61 1.636
14 0-19 402 213 0.58 1.726
15 0-19 402 200 0.57 1.743
16 0-19 402 208 0.59 1.698
17 0-19 402 188 0.53 1.894
18 0-19 402 184 0.56 1.777
19 0-19 402 190 0.55 1.820
20 0-19 402 191 0.55 1.809
21 0-19 402 170 0.51 1.946
22 0-19 402 156 0.48 2.072
23 0-19 402 150 0.49 2.021
24 0-19 402 158 0.49 2.037
25 0-19 402 151 0.46 2.190
26 20-49 368 301 0.81 1.231
27 20-49 368 274 0.73 1.368
28 20-49 368 255 0.68 1.466
29 20-49 368 240 0.70 1.436
30 20-49 369 203 0.61 1.627
31 50-99 430 347 0.79 1.263
32 50-99 430 321 0.73 1.369
33 50-99 431 296 0.67 1.494
34 50-99 431 257 0.61 1.633
35 50-99 431 244 0.63 1.576
36 100-499 417 326 0.77 1.295
37 100-499 417 278 0.67 1.485
38 100-499 417 292 0.69 1.458
39 100-499 416 256 0.63 1.583
40 100-499 416 240 0.60 1.675


6.9.7 Adjustment for Main Interview Nonresponse Subsampling: w_{7ih}

We subsampled 60% of the pass 1 main interview incompletes for further interviewing attempts in pass 2. About 5% of the cases that were determined to be eligible for the main interview by the screener turned out to be ineligible at the main interview. These cases were excluded from the pass 2 subsampling, as were hostile and other already-finalized refusals. The main interview pass 2 subsample also had two components. First, businesses that made appointments with the interviewers to complete the interview, those with partially completed interviews or pending worksheet mailouts, and those that had not yet been worked at all were included in the subsample with certainty. Second, a 60% subsample was selected from the rest of the pass 1 incompletes that were eligible for subsampling. Again, subsampling took place by batch within each group. Table 6.29 reports the size of the subsample by batch and by subsampling group.

Adjustment for main interview nonresponse subsampling was analogous to the adjustment for screener nonresponse subsampling. Pass 1 completes, certainty cases, and those not eligible for subsampling retained their weights from the previous stage. Pass 1 incompletes that were randomly selected into the subsample, the noncertainty cases, had their weights inflated by the inverse of the subsampling rate so that they represented the entire set of pass 1 incompletes that were eligible for subsampling. Finally, pass 1 incompletes that were eligible for subsampling but not selected into the subsample received an adjusted weight of zero.

Table 6.29 Pass 2 Main Interview Sample Size by Batch and Subsampling Group
Batch:Subsampling Group Certainty Cases Non-certainty Cases Total Cases
1:1 130 294 424
1:2 20 48 68
1:3 38 71 109
1:4 7 16 23
1:5 37 107 144
1:6 3 14 17
1:7 33 101 134
1:8 4 10 14
1:All 272 661 933
2:1 135 241 376
2:2 23 41 64
2:3 19 76 95
2:4 8 14 22
2:5 43 80 123
2:6 7 15 22
2:7 37 88 125
2:8 4 14 18
2:All 276 569 845
3:1 157 233 390
3:2 19 43 62
3:3 48 60 108
3:4 7 14 21
3:5 38 76 114
3:6 5 11 16
3:7 55 74 129
3:8 6 11 17
3:All 335 522 857
Total:N/A 883 1,752 2,635

Let r_{bg}^{'} denote the effective subsampling rate for incompletes in batch b and group g. The weight adjusted for subsampling was calculated as follows,

w_{7ih} =w_{6ih} certainty cases and those not eligible for subsampling

[6.28] w_{7ih} =w_{6ih} /r_{bg}^{'} pass 1 incompletes selected into subsample

w_{7ih} =0 pass 1 incompletes not selected into subsample

All businesses that completed the main interview in pass 1 and those selected into the subsample for pass 2 had a positive weight w_{7ih} . The sum of weight w_{7ih} represented an estimate of the number of businesses eligible for the main interview.


6.9.8 Adjustment for Main Interview Eligibility: w_{8ih}

In an earlier weighting adjustment for eligibility, we assigned a weight of zero to ineligible businesses identified at the screener stage. Whenever the screening interview was done with a proxy, the eligibility questions were asked again prior to administering the main interview. Due to the time lag from the screener to the main interview and more accurate information obtained at the time of the main interview, about 5% of the businesses that were determined to be eligible by the screener turned out to be ineligible at the time of the main interview. The weights for such confirmed ineligible cases were set to zero. The weights for all pass 2 incompletes were adjusted downward to reflect the empirical ineligibility rate of 5% between the screener and the main. There was no eligibility adjustment for eligible cases that completed the survey. Thus, the weight adjusted for main interview eligibility was,

w_{8ih} =w_{7ih} eligible

[6.29] w_{8ih} =0 ineligible

w_{8ih} =e_m w_{7ih} incompletes

where e_m =.95 is the estimated eligibility rate among the incompletes (see the next section for the definition of e_m ). The sum of weight w_{8ih} represented a revised estimate of the number of businesses eligible for the main interview.


6.9.9 Adjustment for Main Interview Nonresponse: w_{9ih}

Of the 9,687 businesses that were part of the main interview sample, a total of 4,268 businesses completed the main interview. Table 6.30 summarizes the final disposition codes and the number of cases by disposition codes and by batch.

Table 6.30 Main Interview Final Disposition Codes
Disposition Code Disposition Description Batch 1 Batch 2 Batch 3 Batch 4 All Batches
19/1 Complete per FRB Criteria 1,043 1,066 1,069 1,090 4,268
19/2 Incomplete per FRB criteria 80 118 67 50 315
25/1 Final Language Barrier 2 3 6 6 17
25/92 Subsampled Out Batch 1 384 0 0 0 384
25/94 Subsampled Out Batch 2 0 381 0 0 381
25/96 Subsampled Out Batch 3 0 0 348 0 348
25/99 Ineligible - Screened out During Main Interview 79 60 47 38 224
33/66 Final Privacy Manager 0 0 0 3 3
33/90 Final Non-Contact w/All No Answer or All Busy 0 0 0 16 16
33/91 Final Unavailable 129 139 155 779 1,202
33/92 Final Answering Machine 4 0 8 55 67
33/93 Final Proxy Refusal 27 42 31 55 155
33/94 Final Away For Entire Field Period 2 2 0 14 18
33/95 Final Respondent/Owner Refusal 276 222 250 530 1,278
33/96 Final Physically/Mentally Incapacitated 2 5 2 2 11
33/97 Final Gatekeeper Refusal 67 65 73 121 326
33/98 Final Partial Complete 75 64 82 165 386
33/99 Final Hostile/Other Refusal 52 109 94 33 288
Total N/A 2,222 2,276 2,232 2,957 9,687

Only the 4,268 businesses that completed the main interview according to FRB completeness criteria received a positive analysis weight. The weights carried by the eligible incomplete cases were transferred to the complete cases through nonresponse adjustments similar to those made following the screener. The main interview nonresponse adjustment was also based on predicted response propensities from a logistic regression model. The sample was then stratified by response propensities for nonresponse adjustments. The following formula was used to calculate the nonresponse adjusted weight,

[6.30] w_{9ih} =w_{8ih} \delta _{9i} \left( {\frac{\sum\nolimits_l {w_{8ih} } }{\sum\nolimits_l {w_{8ih} \delta _{9i} } }} \right)

where \delta _{9i} =1 for main interview completes, and 0 otherwise. The adjustment factor, the term in the parentheses, was the inverse of the weighted main interview response rate per adjustment cell. The summations were over all the cases within each nonresponse adjustment cell.

The data used to fit the logistic regression model included all cases that went to the main interview and were neither subsampled out nor determined to be ineligible, that is, the 8,350 cases that had a positive w_{8ih} . The results from the logistic regression can be found in Appendix CCC.

For each case, a response propensity was obtained from the fitted model. The sample was then stratified by the response propensity to form 20 adjustment cells. In order to preserve the total weight per size class, the cells were formed within size classes. Size class 0-19 was divided into 11 cells, and each of the remaining size classes was divided into 3 cells. To form the cells within each size class, we sorted the sample by the predicted response propensity so that cases within the same cell had similar response propensities according to the model. Table 6.31 presents the 20 cells and their adjustment factors.


6.9.10 Weight Trimming: w_{10ih}

The weights w_{9ih} were highly variable due to disproportional stratification, nonresponse subsampling, eligibility and nonresponse adjustments, as well as frame errors. Variability in the weights can be both beneficial and deleterious. Extreme variation in the weights, however, can result in excessively large variances and increase the mean squared error of the estimates. If a small number of cases have a disproportionate influence on the estimates or a significant effect on the variances, then one might choose to trim the outlier weights at the end of the weighting process. Trimming extreme weights may introduce a small bias, but the intent is to reduce the variance, thereby reducing the mean squared error.

As reported earlier, the overall design effect due to unequal weighting was 1.796. The overall effective sample size was (4,268/1.796)=2,376, which should be large enough for most analytical purposes. The design effect per size class as defined by frame data was only slightly larger than expected. However, the design effect per size class as defined by the size information collected during the interview (see Table 6.20) was much larger. Examining the design effect by the updated size class was more relevant because analysts are likely to use the survey-derived information to define size classes. Table 6.32 shows, by updated size class, the number of complete interviews, some summary descriptive statistics about the weights, and the design effect due to unequal weighting.

Table 6.31 Main Interview Nonresponse Adjustment Cells
Adjustment Cell Size Class Sample Size Number of Respondents Weighted Response Rate Adjustment Factor
1 0-19 443 308 0.669 1.495
2 0-19 443 297 0.657 1.522
3 0-19 442 295 0.666 1.501
4 0-19 442 290 0.623 1.606
5 0-19 442 277 0.598 1.672
6 0-19 442 250 0.530 1.887
7 0-19 442 256 0.546 1.831
8 0-19 442 222 0.474 2.112
9 0-19 442 229 0.500 2.000
10 0-19 442 161 0.350 2.855
11 0-19 442 134 0.286 3.498
12 20-49 365 236 0.623 1.605
13 20-49 365 191 0.496 2.018
14 20-49 365 105 0.263 3.802
15 50-99 417 228 0.512 1.953
16 50-99 417 192 0.440 2.273
17 50-99 416 117 0.274 3.644
18 100-499 381 204 0.500 2.001
19 100-499 380 163 0.418 2.395
20 100-499 380 113 0.285 3.510


Table 6.32 Summary Statistics of the Weights and Design Effects Prior to Weight Trimming By Updated Size Class
Class Count Mean Median Standard Deviation Minimum Maximum Range DEFF
1 2,842 2,035.34 1,847.65 1,215.97 12.78 13,263.88 13,251.10 1.357
2 569 666.70 383.58 881.34 28.11 10,094.63 10,066.52 2.748
3 444 240.01 119.82 574.34 14.65 8,330.56 8,315.91 6.726
4 413 153.59 91.96 276.01 12.78 3,280.04 3,267.26 4.229
Total 4,268 1,484.02 1,423.14 1,324.13 12.78 13,263.88 13,251.10 1.796

When a business was sampled in one size class but classified to a different size class based on survey-derived information, its weight tended to be an outlier in the updated size class. Small outliers have little impact on the variance of the weights, while large outliers can increase the variance drastically. The design effect of the three larger size classes was inflated primarily by cases that were initially sampled from a smaller size class. Not only did weight trimming seem necessary in these cases, it would also be more effective in terms of reducing the design effect and increasing the effective sample size. The variance of the weights should drop dramatically as the extremely large outliers are trimmed, enough to offset potential increases in bias.

The following four figures describe the distribution of the weights per size class after the weights were sorted in ascending order within each updated size class. Based on visual inspection, there were some extremely large weights in every size class, but the outliers were most obvious in the three larger size classes.

Figure 6.2 Distribution of Untrimmed Weights: Size Class One

Figure 6.2 Distribution of Untrimmed Weights: Size Class One.
This figure depicts the distribution of the untrimmed weights for all 2,828 observations in size class one.  The x-axis is the observation number and the y-axis is the value of the untrimmed weights.  The smallest weights are near zero and there is a gradual increase until observation number 2,500, where the untrimmed weight is approximately 3,000. Following observation number 2,500 there is a steady increase until the last observation. The final observation in size class one (observation number 2,828) has an untrimmed weight approximately equal to 14,000.

Figure 6.3 Distribution of Untrimmed Weights: Size Class Two

Figure 6.3 Distribution of Untrimmed Weights: Size Class Two. This figure depicts the distribution of the untrimmed weights for all 571 observations in size class two. The x-axis is the observation number and the y-axis is the value of the untrimmed weights.  The smallest weights are near zero and there is a gradual increase until observation number 499, where the untrimmed weight is approximately 1,000. Following observation number 499 there is a steady increase until the last observation. The final observation in size class two (observation number 571) has an untrimmed weight approximately equal to 10,000.

Figure 6.4 Distribution of Untrimmed Weights: Size Class Three

Figure 6.4 Distribution of Untrimmed Weights: Size Class Three. This figure depicts the distribution of the untrimmed weights for all 454 observations in size class three. The x-axis is the observation number and the y-axis is the value of the untrimmed weights.  The smallest weights are near zero and there is a gradual increase until observation number 422, where the untrimmed weight is approximately 400. Following observation number 422 there is a steady increase until the last observation. The final observation in size class three (observation number 454) has an untrimmed weight approximately equal to 8,200.

Figure 6.5 Distribution of Untrimmed Weights: Size Class Four

Figure 6.5 Distribution of Untrimmed Weights: Size Class Four. This figure depicts the distribution of the untrimmed weights for all 413 observations in size class four. The x-axis is the observation number and the y-axis is the value of the untrimmed weights.  The smallest weights are near zero and there is a gradual increase until observation number 380, where the untrimmed weight is approximately 300. Following observation number 380 there is a steady increase until the last observation. The final observation in size class four (observation number 413) has an untrimmed weight approximately equal to 3,200.

To identify the outliers more systematically and objectively, we examined the design effect per size class at various trimming levels. The purpose was to reduce the variance of weights and increase the effective sample size without introducing significant bias. It was assumed that dramatic decreases in design effects should more than offset increases in biases. Figure 6.6 to Figure 6.9 depict the resultant design effects when different numbers of cases are trimmed.

Figure 6.6 DEFF by Trimming Level: Size Class One

Figure 6.6 DEFF by Trimming Level: Size Class One. This figure depicts the resultant design effects when varying numbers of observations have their weights trimmed for size class one.  The x-axis is the number of observations trimmed and the y-axis is the resultant design effect.  The design effects range from approximately 1.34 to 1.36, with the design effect declining as more observations were trimmed.  After the eleventh weight was trimmed, the resultant design effect was 1.36.

Figure 6.7 DEFF by Trimming Level: Size Class Two

Figure 6.7 DEFF by Trimming Level: Size Class Two. This figure depicts the resultant design effects when varying numbers of observations have their weights trimmed for size class two. The x-axis is the number of observations trimmed and the y-axis is the resultant design effect. When only the largest weight was trimmed, the design effect was 2.75. After the four largest weights were trimmed, the resultant design effect fell consistently below 2.50, and it declined only very gradually thereafter. After nineteen weights were trimmed, the resultant design effect was 2.25.

Figure 6.8 DEFF by Trimming Level: Size Class Three

Figure 6.8 DEFF by Trimming Level: Size Class Three. This figure depicts the resultant design effects when varying numbers of observations have their weights trimmed for size class three. The x-axis is the number of weights trimmed and the y-axis is the resultant design effect. The figure depicts the number of weights trimmed from one to nineteen. When only the largest weight was trimmed, the design effect was 6.75. After the twelve largest weights were trimmed, the resultant design effect stabilized at about 2.5, declining only very gradually thereafter. After nineteen weights were trimmed, the resultant design effect was 2.0.

Figure 6.9 DEFF by Trimming Level: Size Class Four

Figure 6.9 DEFF by Trimming Level: Size Class Four. This figure depicts the resultant design effects when varying numbers of observations have their weights trimmed for size class four. The x-axis is the number of weights trimmed and the y-axis is the resultant design effect. The figure depicts the number of weights trimmed from one to nineteen. When only the largest weight was trimmed, the design effect was 4.2. After the ten largest weights were trimmed, the resultant design effect stabilized at about 2.5, declining only very gradually thereafter. After nineteen weights were trimmed, the resultant design effect was 1.75.

Trimming the largest weights would have little impact on the design effect for size class 1. We therefore decided not to trim any weights there. For size class 2, we trimmed the four largest weights to bring the design effect down to about 2.5. Size class 3 contained more outliers, but the design effect started to stabilize after the twelve largest weights were trimmed. Beyond that, only small reductions in design effect were possible. We decided that such small reductions were not worth the risk of introducing unnecessary bias. Similarly, the ten largest weights were trimmed in size class 4.

We trimmed the outlier weights to the largest non-outlier weight in the same size class. To preserve the total weight per size class, the total trimmed weight was redistributed proportionately to all cases within each updated size class. Suppose that in a particular size class the sum of the weights was W_0 before trimming and W_1 after trimming. We adjusted all the weights within the size class by the factor W_0 /W_1 to derive the final weight w_{10ih} . The final weight was,

[6.31] w_{10ih} =\frac{W_0 }{W_1 }w_{9ih}
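
A minimal Python sketch of this trim-and-rescale step for one size class, where cap stands for the largest non-outlier weight chosen for that class (an illustration of [6.31], not the production code):

def trim_and_rescale(weights, cap):
    # Trim weights above cap down to cap, then rescale every weight in the
    # size class by W0/W1 so that the class total is preserved.
    w0 = sum(weights)
    trimmed = [min(w, cap) for w in weights]
    w1 = sum(trimmed)
    return [w * w0 / w1 for w in trimmed]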

Table 6.33 shows, by updated size class, the number of complete interviews, the final summary descriptive statistics for the weights, and the final design effect due to unequal weighting.

Table 6.33 Summary Statistics of the Weights and Design Effects After Weight Trimming By Updated Size Class
Class Count Mean Median Standard Deviation Minimum Maximum Range DEFF
1 2,842 2,035.34 1,847.65 1,215.97 12.78 13,263.88 13,251.10 1.357
2 569 666.70 391.32 802.05 28.68 4576.39 4547.72 2.447
3 444 240.01 152.28 274.81 18.62 1456.09 1437.47 2.311
4 413 153.59 107.96 154.42 15.00 845.53 830.52 2.011
Total 4,268 1,484.02 1,422.01 1,305.41 12.78 13,263.88 13,251.10 1.774

The sum of the w_{10ih} weights is the final estimate of the total number of eligible businesses in the frame. Table 6.34 shows the number of cases with positive weight and the sum of the weight at each step of the weighting process.

Table 6.34 Sum of Weights at Each Weighting Step
Weight Description Number of Cases Sum of Weight
w_{1ih} Base weight 37,600 9,701,023
w_{2ih} Batch selection 23,798 9,701,171
w_{3ih} Replicate release w/in batch 23,798 9,701,171
w_{4ih} Screener subsampling 20,512 9,700,847
w_{5ih} Screener eligibility adjustment 16,138 6,647,602
w_{6ih} Nonresponse adjustment 9,687 6,647,602
w_{7ih} Main subsampling 8,574 6,649,840
w_{8ih} Main eligibility adjustment 8,350 6,333,780
w_{9ih} Main nonresponse adjustment 4,268 6,333,780
w_{10ih} Weight trimming 4,268 6,333,780


6.10 Response Rates


6.10.1 Introduction to the Concept of Multiple Rates

With a "perfect" frame of the target population for SSBF, all cases in the frame would be eligible to participate in the main interview. In reality, the frame does not represent the target population because there are errors in the frame and because certain eligibility criteria cannot be assessed with the frame data. When dealing with an imperfect frame, the weights and response rates are commonly adjusted to reflect the frame inaccuracies. Eligibility adjustments must be made from the information collected during interviewing.

For this survey, a number of completion rates for both the screener and main interview were considered to deal with the "imperfect" frame issue. The various rates make different assumptions about the frame, target population, screener eligibility, and main interview eligibility.

Three completion rates were considered for the screener. The first screener rate (SR_{1}) represents the completion rate assuming all cases sampled are eligible for the screener. In this calculation, all cases that complete the screener - regardless of their eligibility status for the main SSBF survey - are included in the numerator whereas all cases that were sampled are included in the denominator. The second screener rate (SR_{2}) assumes that those ineligible for the main survey are also ineligible for the screener regardless of whether they complete the screener. Therefore, it removes all cases that are known or estimated to be ineligible for the main survey from both the numerator and the denominator. SR_{2} thus represents a screener completion rate for eligible firms only. The third screener rate (SR_{3}) is somewhere in between the first and second. It assumes that only firms that are out of business (OOB) are not eligible for the screener. Therefore, it removes all cases that are known or are estimated to be OOB from both the numerator and the denominator, while retaining all other cases regardless of their eligibility for the main interview. SR_{3} represents the proportion of completed screening interviews (eligible or ineligible) with firms in business among all sampled firms known or estimated to be in business.

For the main interviews, two completion rates were considered initially. The first main interview completion rate, MR_{1}, assumes that only eligible firms go on to the main interview. MR_{1} is the ratio of completed main interviews to all screened firms advancing to the main interview. In practice, some of the firms thought to be eligible after screening turned out to be ineligible when different information was collected in the main interview. The second main interview completion rate, MR_{2}, adjusts for these late-determined ineligibles by removing them from the denominator. A third main interview rate, MR_{3}, was developed to account for likely ineligible cases among the main interview nonrespondents. The third rate MR_{3} can be interpreted as the proportion of eligible firms that completed the main interview.

To get an overall response rate, we multiply a screener completion rate and a main interview completion rate. For example, the response rate designated below as RR_{5} is calculated as the product of SR_{2} and MR_{3}. It represents the proportion of eligible firms completing the screener and main interview. In contrast, RR_{1}, which is the product of SR_{1} and MR_{1}, assumes all firms on the frame are eligible for the screener and all firms sent on to the main interview are eligible for the main. Five different response rates are presented in this chapter. If all firms on the frame were eligible for the main interview (i.e., if the frame were perfect), then all of the response rates would be equal.


6.10.2 Background

Professional organizations such as AAPOR (2004) and CASRO (1982) have defined standard procedures to calculate survey response rates. NORC (2001) has developed its own standard response rate procedures that are consistent with the AAPOR and CASRO guidelines. The methods considered for the 2003 SSBF follow the basic principles in these standard procedures.

The response rate is generally defined as the ratio of the completed cases to the number of eligible cases in the sample. Let C be the number of cases that completed the main interview, I the number of cases that completed the screening interview but were determined to be ineligible to participate in the main interview, E the number of eligible cases that completed the screening interview but did not complete the main interview, and U the number of cases that did not complete the screening interview thereby leaving their eligibility status for the main interview undetermined. Together C+I+E+U is the total initial sample size. The response rate is defined by,

[6.32] RR=\frac{C}{C+E+eU}

where e is the proportion of cases with unknown eligibility that are in fact eligible, i.e., the eligibility rate among the unknown cases. The true proportion e is generally unknown. In most cases, however, it is acceptable to use the value estimated from the confirmed cases,

[6.33] e=\frac{C+E}{C+E+I}

The SSBF involves two separate interviews with two separate instruments: the screener instrument and the main interview instrument. In addition, both the screener and the main interview involve the subsampling of nonrespondents for further interviewing attempts. For surveys that involve a screener interview separate from the main interview, usually the response rate can be expressed as the product of the screener completion rate and the main interview completion rate as shown below:

[6.34] \begin{array}{rl} RR &=\frac{C}{C+E+eU} \\ &=\frac{C}{\left( {C+E} \right)+\left( {\frac{C+E}{C+E+I}} \right)U} \\ &=\frac{C\left( {C+E+I} \right)}{\left( {C+E} \right)\left( {C+E+I} \right)+\left( {C+E} \right)U} \\ &=\frac{C\left( {C+E+I} \right)}{\left( {C+E} \right)\left( {C+E+I+U} \right)} \\ &=\frac{C}{\left( {C+E} \right)}\frac{\left( {C+E+I} \right)}{\left( {C+E+I+U} \right)} \\ &=R_2 \ast R_1 \end{array}

where R_2 denotes the main interview completion rate among the cases that are eligible for the main interview and R_1 denotes the screener completion rate among cases that are eligible for the screener.

Expression [6.34] is the CASRO and AAPOR response rate where the unknown eligibility cases are assumed to be eligible in the same proportion as among the confirmed cases.
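
As a compact illustration of [6.32] through [6.34], the following Python sketch computes the two component rates and the overall response rate from unweighted counts (the variable names mirror the notation above; the SSBF calculations themselves were weighted and used the 5% follow-up eligibility estimates rather than [6.33]):

def response_rates(C, I, E, U):
    # C: completed main interviews; I: screened but ineligible for the main;
    # E: eligible but without a completed main interview; U: unknown eligibility.
    e = (C + E) / (C + E + I)             # [6.33] eligibility among unknowns
    R1 = (C + E + I) / (C + E + I + U)    # screener completion rate
    R2 = C / (C + E)                      # main completion rate among eligibles
    RR = C / (C + E + e * U)              # [6.32]; equals R1 * R2
    return R1, R2, RR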

We adopted the same conceptual framework to compute the SSBF response rate. We computed the screener completion rate and the main interview completion rate separately, and used the product of the two as the overall response rate. In general, completion rates are used to measure how well the various components of a sample survey are accomplished. These component completion rates are then multiplied to form the overall response rate which is a summary measure of the result of all efforts, properly carried out, to execute a survey.

Calculating the SSBF response rate was complex given the complex sample design. First, we did not plan to use expression [6.33] to estimate the eligibility rate among the unknown cases. In fact, the purpose of the 5% follow-up subsample of screener incompletes (see Section 6.8.1) was to estimate a more reliable eligibility rate for the unknown cases than the eligibility rate given in expression [6.33]. Using the eligibility rate estimated from the 5% subsample for the unknown cases means that the product of the screener completion rate and the main interview completion rate is no longer the same as [6.34], although they are in agreement in principle. However, using improved eligibility assumptions, with evidence to support them, is consistent with the AAPOR standard.

Second, while the CASRO and AAPOR response rates are presented as unweighted in the standards, NORC usually computes weighted response rates for its sample surveys. To compute the response rate for the SSBF, the screener completion rate was weighted by the screener base weight, and the main interview completion rate was weighted by the main interview base weight. The weighted response rate measures the proportion of the eligible cases in the sample frame that is represented by the respondents in the survey (see Section 6.10.5 later for a more detailed discussion of the weighted response rates). To the extent that the sampling frame is perfect, the weighted response rate measures the proportion of the target population that is represented by the respondents of the survey.

Finally, multiple response rates can be calculated based on different eligibility assumptions for the screener and main interview. We computed a response rate based on the eligibility assumptions preferred by the FRB.


6.10.3 Screener Completion Rates


6.10.3.1 Screener Notation

This subsection introduces the notation that is used in presenting the formulas for three potential screener completion rates.

1) n: The total screening sample size consisting of all cases in the released (worked) replicates. Note that n is the size of the final sample worked, which may be smaller than the screening sample initially selected.

2) n_1 : Cases that could not complete the screener during pass 1 but made a hard appointment to complete the screener at a later time. These cases are referred to as the hard callbacks, and they were included in the subsample for pass 2 with certainty.

3) n_2 : Cases that completed the screener in pass 1 and were eligible for the main interview.

4) n_3: Cases that completed the screener in pass 1 and were ineligible for the main interview.

5) n_4: Cases that were known to be ineligible although they did not complete the screener in pass 1.

6) n_5 : Finalized pass 1 noncontacts. These are cases with which no contact was made during the pass 1 screener. Their eligibility status for the main interview was estimated from the 5% subsample of noncontacts.

7) n_6 : Finalized pass 1 nonrespondents. These cases were not subjected to subsampling for pass 2. Their eligibility status was estimated from the 5% subsample of nonrespondents.

8) n_7 : Pass 1 screener incompletes that were not finalized (as noncontacts or nonrespondents) and were subjected to subsampling for pass 2.

9) r_s : The screener nonresponse subsampling rate.

10) n_{8(1)} : Hard callback cases that completed the screener in pass 2 and were eligible for the main interview.

11) n_{8(7)} : Other cases (other than the callbacks) in the subsample that completed the screener in pass 2 and were eligible for the main interview.

12) n_{9(1)} : Hard callback cases that completed the screener in pass 2 and were ineligible for the main interview.

13) n_{9(7)} : Other cases (other than the callbacks) in the subsample that completed the screener in pass 2 and were ineligible for the main interview.

14) n_{10(1)} : Hard callback cases that did not complete the screener in pass 2. Their eligibility status was estimated from the 5% subsample of nonrespondents.

15) n_{10(7)} : Other cases (other than the callbacks) in the subsample that did not complete the screener in pass 2. Their eligibility status was estimated from the 5% subsample of nonrespondents.

16) n_{11} : Cases that were known to be out of business (OOB) among screened ineligible cases in pass 1. These cases represent a subset of n_3 .

17) n_{12} : OOB cases among those that were known to be ineligible although they did not complete the screener in pass 1. This is a subset of n_4.

18) n_{13(1)} : OOB cases among hard callback cases that were screened in pass 2 and were ineligible for the main interview. This is a subset of n_{9(1)} .

19) n_{13(7)} : OOB cases among other cases (other than the callbacks) in the subsample that were screened in pass 2 and were ineligible for the main interview. This is a subset of n_{9(7)} .

20) e_1 : Estimated main interview eligibility rate among pass 1 screener noncontacts (n_5 ) based on the 5% subsample of noncontacts.

21) e_2 : Estimated main interview eligibility rate among pass 2 screener nonrespondents (n_{10(1)} and n_{10(7)} ) based on the 5% subsample of nonrespondents.

22) e_3 : Estimated OOB rate among pass 1 screener noncontacts (n_5 ) based on the 5% subsample of noncontacts.

23) e_4 : Estimated OOB rate among pass 2 screener nonrespondents (n_{10(1)} and n_{10(7)} ) based on the 5% subsample of nonrespondents.


6.10.4 Screener Completion Rate Calculations

Based on different screener eligibility assumptions about the unknown cases, we defined three screener completion rates.

The first screener completion rate assumes that all sample businesses were eligible for the screener. Under this most conservative assumption, the denominator is simply the total screening sample size n. The numerator includes cases that completed the screener either in pass 1 or pass 2. Screener completes in pass 2, except for the hard callbacks that were included in the subsample with certainty, are weighted by the inverse of the screener nonresponse subsampling rate r_{s}. Cases that were known to be ineligible without completing the screener are not counted as screener completes. The first screener completion rate is defined as:

[6.35] SR_1 =\frac{n_2 +n_3 +n_{8(1)} +n_{9(1)} +\left( {n_{8(7)} +n_{9(7)} } \right)r_s^{-1} }{n}
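To illustrate [6.35], the following sketch simply transcribes the formula into a small Python function. The counts passed in are hypothetical placeholders, not SSBF results.

def sr1(n, n2, n3, n8_1, n9_1, n8_7, n9_7, r_s):
    """First screener completion rate [6.35]: all sampled businesses are
    assumed eligible for the screener, so the denominator is n. Pass 2
    completes other than the certainty hard callbacks are weighted by 1/r_s."""
    return (n2 + n3 + n8_1 + n9_1 + (n8_7 + n9_7) / r_s) / n

# Hypothetical counts for illustration only.
print(round(sr1(n=1000, n2=400, n3=50, n8_1=30, n9_1=5, n8_7=20, n9_7=5, r_s=0.5), 4))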

The second screener completion rate, as proposed by the FRB, assumes that cases that were ineligible for the main interview were also ineligible for the screener. This assumption requires the removal of all ineligible cases, known or estimated, from both the numerator and the denominator of the completion rate formula. In particular, cases that completed the screener but were ineligible for the main interview are not counted as screener completes. The second screener completion rate is defined as

[6.36] SR_2 =\frac{n_2 +n_{8(1)} +n_{8(7)} r_s^{-1} }{n-n_3 -n_4 -n_{9(1)} -n_{9(7)} r_s^{-1} -n_5 (1-e_1 )-(n_6 +n_{10(1)} +n_{10(7)} r_s^{-1} )(1-e_2 )}

This rate can be thought of as a screener completion rate among the subset of the sample that was eligible for the main interview. The eligibility rates e_1 and e_2 were estimated from the 5% subsample of the screener incompletes. The finalized pass 1 nonrespondents were not represented by the 5% subsample, so their eligibility rate could not be estimated directly. Expression [6.36] assumes that the eligibility rate among finalized pass 1 nonrespondents is e_2 , the same rate as estimated from batch 1 pass 2 nonrespondents.

The third screener completion rate, as used by NORC in its algorithm to determine the sample size in Section 6.5, assumes that OOB cases were ineligible for the screener. This is analogous to the tradition in random digit dial household surveys where nonresidential telephone numbers are considered ineligible for the screener. With this assumption, OOB cases, known or estimated, must be removed from both the numerator and the denominator. The third screener completion rate is defined as

[6.37] SR_3 =\frac{n_2 +n_3 -n_{11} +n_{8(1)} +n_{9(1)} -n_{13(1)} +(n_{8(7)} +n_{9(7)} -n_{13(7)} )r_s^{-1} }{n-n_{11} -n_{12} -n_{13(1)} -n_{13(7)} r_s^{-1} -n_5 e_3 -(n_6 +n_{10(1)} +n_{10(7)} r_s^{-1} )e_4 }

The OOB rates e_3 and e_4 could also be estimated from the 5% subsample of the screener incompletes. Again, the finalized pass 1 nonrespondents were not represented in the 5% subsample, so their OOB rate would have to be assumed. Expression [6.37] assumes that the OOB rate among finalized pass 1 nonrespondents is e_4, the same rate as among batch 1 pass 2 nonrespondents.
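The adjusted denominators in [6.36] and [6.37] can be transcribed in the same way. The sketch below mirrors the two formulas directly; all inputs are hypothetical counts and estimated rates, following the notation of Section 6.10.3.

def sr2(n, n2, n3, n4, n5, n6, n8_1, n8_7, n9_1, n9_7, n10_1, n10_7, r_s, e1, e2):
    """Second screener completion rate [6.36]: cases ineligible for the main
    interview (known or estimated) are removed from numerator and denominator."""
    num = n2 + n8_1 + n8_7 / r_s
    den = (n - n3 - n4 - n9_1 - n9_7 / r_s
           - n5 * (1 - e1)
           - (n6 + n10_1 + n10_7 / r_s) * (1 - e2))
    return num / den

def sr3(n, n2, n3, n5, n6, n8_1, n8_7, n9_1, n9_7, n10_1, n10_7,
        n11, n12, n13_1, n13_7, r_s, e3, e4):
    """Third screener completion rate [6.37]: only out-of-business cases
    (known or estimated) are treated as ineligible for the screener."""
    num = n2 + n3 - n11 + n8_1 + n9_1 - n13_1 + (n8_7 + n9_7 - n13_7) / r_s
    den = (n - n11 - n12 - n13_1 - n13_7 / r_s
           - n5 * e3
           - (n6 + n10_1 + n10_7 / r_s) * e4)
    return num / den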

The three screener completion rates represent three ratios of screener completes to all sample cases eligible for the screener. To compute a weighted screener completion rate, all cases are weighted by the base weight w_{3ih} , i.e., the inverse of the probability of selection (see Section 6.9 for weighting procedures). After weighting, each term in the completion rate formulas represents the weighted number of cases in that category.
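In the weighted versions of these rates, each count is replaced by the sum of the base weights w_{3ih} of the cases in that category. A minimal pandas sketch, assuming a hypothetical case-level file with a base-weight column and a column giving the notation category each case falls into:

import pandas as pd

# Hypothetical case-level records: base weight w3 and notation category.
cases = pd.DataFrame({
    "category": ["n2", "n2", "n3", "n8_7", "n5", "n6"],
    "w3":       [120.0, 95.0, 110.0, 80.0, 130.0, 105.0],
})

# Weighted total for each term in the completion rate formulas; these totals
# replace the unweighted counts when computing the weighted rates.
weighted_terms = cases.groupby("category")["w3"].sum()
print(weighted_terms)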

The second screener completion rate, SR_{2}, was selected for purposes of computing the final overall response rate. This choice is consistent with the way response rates were computed in prior rounds. This screener completion rate was computed to be 61.92%.


6.10.5 Main Interview Completion Rates


6.10.5.1 Main Interview Notation

We first introduce the notation used in presenting the formulas for the two main interview completion rates.

1) m: The pass 1 main interview sample size. This includes all cases that were determined by the screener to be eligible for the main interview.

2) m_1: Eligible cases that did not complete the main interview during pass 1 but made a hard appointment to complete the interview at a later time. These cases are referred to as the hard callbacks and were included in the subsample for pass 2 with certainty.

3) m_2 : Cases that completed the main interview in pass 1.

4) m_3 : Cases among pass 1 incompletes that were confirmed to be ineligible. These are the cases that were determined to be eligible by the screener but turned out to be ineligible at the main interview.

5) m_4 : Finalized nonrespondents at pass 1. These cases were not subsampled for pass 2.

6) m_5 : Other pass 1 nonrespondents. These cases made up the frame for the subsampling for pass 2.

7) m_{6(1)} : Hard callback cases in the subsample that completed the interview at pass 2.

8) m_{6(5)} : Other cases (other than hard callbacks) in the subsample that completed the interview at pass 2.

9) m_{7(1)} : Hard callback cases in the subsample that turned out to be ineligible.

10) m_{7(5)} : Other cases (other than hard callbacks) in the subsample that turned out to be ineligible.

11) m_{8(1)} : Hard callback cases in the subsample that were nonrespondents after pass 2.

12) m_{8(5)} : Other cases (other than hard callbacks) in the subsample that were nonrespondents after pass 2.

13) r_m : The main interview nonresponse subsampling rate.

14) e_m : The assumed eligibility rate for nonrespondents.


6.10.6 Main Interview Completion Rate Calculations

Based on different eligibility assumptions about the unknown cases, we defined three main interview completion rates.

The first completion rate assumes that all cases in the main interview sample at the start of pass 1 were eligible for the main interview. Under this assumption, the denominator is the total main interview sample size m. The numerator includes cases that completed the main interview either in pass 1 or pass 2. The main interview completes in pass 2, except for the hard callbacks that were included in the subsample with certainty, are weighted by the inverse of the main interview nonresponse subsampling rate r_m. The first main interview completion rate is defined as

[6.38] MR_1 =\frac{m_2 +m_{6(1)} +m_{6(5)} r_m^{-1} }{m}

The second main interview completion rate removes the known ineligible cases among m from the denominator, but it still assumes that all other nonrespondents were eligible. The known ineligible cases are those that were determined to be eligible by the screener but turned out to be ineligible at the time of the main interview. All interview nonrespondents are assumed to be eligible. The second main interview completion rate is defined as

[6.39] MR_2 =\frac{m_2 +m_{6(1)} +m_{6(5)} r_m^{-1} }{m-m_3 -m_{7(1)} -m_{7(5)} r_m^{-1} }
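As with the screener rates, [6.38] and [6.39] translate directly into code. The functions below are a sketch using the notation of Section 6.10.5.1; the inputs are hypothetical counts.

def mr1(m, m2, m6_1, m6_5, r_m):
    """First main interview completion rate [6.38]: all m cases are assumed
    eligible; pass 2 completes other than hard callbacks are weighted by 1/r_m."""
    return (m2 + m6_1 + m6_5 / r_m) / m

def mr2(m, m2, m3, m6_1, m6_5, m7_1, m7_5, r_m):
    """Second main interview completion rate [6.39]: known ineligible cases
    are removed from the denominator; nonrespondents are assumed eligible."""
    return (m2 + m6_1 + m6_5 / r_m) / (m - m3 - m7_1 - m7_5 / r_m)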

After data collection was completed, it was determined that the proportion of cases confirmed to be ineligible at the main interview stage was 4.99 percent, which is not negligible. The decision was made to assume that the nonrespondents were ineligible at the same rate. That is, we actually computed a third main interview completion rate defined as

[6.40] MR_3 =\frac{m_2 +m_{6(1)} +m_{6(5)} r_m^{-1} }{m-m_3 -m_{7(1)} -m_{7(5)} r_m^{-1} -(1-e_m )(m_4 +m_{8(1)} +m_{8(5)} r_m^{-1} )}

where the eligibility rate was estimated on an unweighted basis as

[6.41] e_m =1-\frac{m_3 +m_{7(1)} +m_{7(5)} }{m_3 +m_{7(1)} +m_{7(5)} +m_2 +m_{6(1)} +m_{6(5)} }
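A sketch of [6.40] and [6.41] under the same hypothetical notation: the eligibility rate e_m is estimated from cases whose status became known, and nonrespondents are then assumed ineligible at the rate 1 - e_m.

def e_m(m2, m3, m6_1, m6_5, m7_1, m7_5):
    """Unweighted eligibility rate [6.41] among cases with known status."""
    ineligible = m3 + m7_1 + m7_5
    known = ineligible + m2 + m6_1 + m6_5
    return 1 - ineligible / known

def mr3(m, m2, m3, m4, m6_1, m6_5, m7_1, m7_5, m8_1, m8_5, r_m, em):
    """Third main interview completion rate [6.40]: nonrespondents are assumed
    ineligible at rate (1 - em)."""
    num = m2 + m6_1 + m6_5 / r_m
    den = (m - m3 - m7_1 - m7_5 / r_m
           - (1 - em) * (m4 + m8_1 + m8_5 / r_m))
    return num / den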

The three main interview completion rates provide ratios of the complete cases to the total eligible cases based on different eligibility assumptions about the unknown cases. To compute the weighted completion rate MR_3 , all cases were weighted by the main interview base weight. Only the m businesses that completed the screener interview and were eligible for the main interview had a positive weight for this calculation. The main interview completion rate is estimated as 52.36%.


6.10.7 Overall Response Rates

Nine overall response rates may be computed based on the three screener completion rates and three main interview completion rates. However, we are mainly interested in the five response rates below.

[6.42] RR_1 =SR_1 \ast MR_1

[6.43] RR_2 =SR_1 \ast MR_2

[6.44] RR_3 =SR_2 \ast MR_2

[6.45] RR_4 =SR_3 \ast MR_2

[6.46] RR_5 =SR_2 \ast MR_3

The first response rate, RR_1 , is the most conservative as it assumes that all screener incompletes were eligible for the screener interview and all main interview incompletes were eligible for the main interview. RR_1 would be the true response rate if all sample cases were eligible for the screener and all cases that were determined to be eligible for the main interview were in fact eligible at the time of the main interview. The next three response rates involve the same main interview completion rate but three different screener completion rates. The fourth rate, RR_4 , reflects NORC's early understanding of eligibility for the screener, and early sampling plans were consistent with this approach. Later, after the FRB clarified its understanding of screener eligibility, emphasis shifted toward RR_3 . Only after reviewing the number of ineligibles determined at the main interview stage was the decision made to allow for the possibility that main interview nonrespondents might be ineligible. The final response rate, RR_5 , is the overall response rate ultimately selected for this study. RR_5 was estimated to be 32.42%.
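Because the overall rate is simply the product of its components, the reported figures can be checked directly; the short sketch below reproduces RR_5 from the selected screener and main interview rates.

# Selected component rates reported in Sections 6.10.4 and 6.10.6.
sr_2 = 0.6192   # second screener completion rate
mr_3 = 0.5236   # third main interview completion rate

rr_5 = sr_2 * mr_3
print(f"RR_5 = {rr_5:.4f}")   # 0.3242, i.e., 32.42%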

As noted earlier, all three screener completion rates would be calculated using the same batch-adjusted base weight w_{3ih} that is available to all n cases. The base weight w_{3ih} approximates the number of cases in the frame that the sample case represents. This base weight was applied to all cases that were part of the screener completion rate calculation. With the base weight applied, each term in the formulas represents the weighted number of cases or the total weight in that category. For example, the unweighted n_2 in expression [6.35] represents the number of cases in the sample that completed the screener in pass 1 and were eligible for the main interview. When the base weight is applied to all cases within n_2 , n_2 represents the number of eligible cases in the frame that completed the screener in pass 1. The weighted versions of the other terms should be interpreted in the same way. All three weighted screener completion rates measure the proportion of the screener-eligible cases in the frame that is represented by the screener respondents. The only difference among the various screener completion rates is the underlying assumption of what it means to be eligible for the screener.

Different sets of main interview base weights for the m cases would be needed to compute the weighted main interview completion rates. This is because the development of the main interview weights needs to be consistent with the screener eligibility assumptions that underlie the differences among the three screener completion rates. Because of the multiple sets of weights required, we did not compute all of the main interview completion rates and all of the overall response rates. Weight w_{6ih} in Section 6.9.6 is suitable for computing the selected main interview completion rate.


6.11 Design Changes

The design of the 2003 SSBF evolved somewhat from the initial proposal to the formal plan and to final execution. All changes were made with the active participation and consent of the FRB. Most of these changes are described in the prior sections. This section provides a summary of all the design changes. Then, it focuses on the decision not to match the sample to the InfoUSA database.


6.11.1 Summary of Design Changes

InfoUSA: The major change at the beginning of the survey was to drop the idea of matching the D&B sample to the InfoUSA database in an attempt to increase the efficiency of the sample design. The analytical results behind this important decision are described in detail later in this section.

500+ firms: To minimize the potential for frame error, the initial proposal was to subsample a small number of cases listed with more than 500 employees to determine what proportion of them would actually qualify for the study, as had been done in previous rounds of the SSBF. Unlike previous rounds, the 2003 survey excluded businesses with 500 or more employees from its target population.

Nonrespondent Subsampling: The original sampling plan did not address the subject of hard appointment callbacks and how they should be treated during subsampling. While these cases were nonrespondents, it was suggested that they represented a special class of nonrespondents who are more likely to complete if given more time. Therefore, when pass 1 screener incompletes were subsampled for pass 2, businesses with hard appointment callbacks were included in the pass 2 subsample with certainty. For the main interviews, the pass 2 subsample included the following types of cases with certainty: hard appointment callbacks, partially completed interviews, cases with re-mailed worksheets, and all unreleased cases.
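The subsampling rule described above amounts to carrying the certainty cases forward and drawing the remainder at the subsampling rate. A minimal sketch under assumed field names (the "hard_callback" flag is hypothetical):

import random

def select_pass2_subsample(cases, r_s, seed=2003):
    """Return the pass 2 subsample: certainty cases (e.g., hard appointment
    callbacks) plus a random subsample of the other incompletes at rate r_s."""
    rng = random.Random(seed)
    certainty = [c for c in cases if c["hard_callback"]]
    others = [c for c in cases if not c["hard_callback"]]
    return certainty + [c for c in others if rng.random() < r_s]

# Hypothetical pass 1 screener incompletes: every tenth case is a hard callback.
incompletes = [{"id": i, "hard_callback": (i % 10 == 0)} for i in range(100)]
print(len(select_pass2_subsample(incompletes, r_s=0.5)))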

Final Screening Sample: As discussed in Section 6.5, the final screening sample size was larger than the size initially estimated under the most optimistic assumptions. We first increased the size of batch 3 as soon as it became clear that the actual screener and main interview completion rates were much lower than expected. We then added a batch 4 based on various realized outcome rates in early October 2004. Analytical results indicated that sample balancing was not necessary for batch 3 or batch 4, since the productivity of the five sample balancing groups was similar at the time of the evaluation. Finally, the added batch 4 was not subjected to nonresponse subsampling due to timing constraints.

Out of Business Cases: Our initial sampling plan assumed that out-of-business (OOB) cases were ineligible for the screener. The original weighting and response rate procedures were consistent with this initial assumption. We later used the FRB definition of screener eligibility in weighting and response rate calculations. The FRB definition states that only businesses eligible for the main interview are considered eligible for the screener. See Section 6.10 for detailed discussions.

5% follow-up: The design for the selection of the 5% follow-up subsample of screener incompletes also deviated slightly from the initial plan. While the 5% subsample of screener noncontacts was selected from all four batches, the 5% subsample of screener nonrespondents was selected only from the first two batches. In addition, we revisited the size and composition of the follow-up subsample and altered its goal to be consistent with the clarified definition of eligibility. Finally, the original plan was to select the 5% follow-up cases on an ongoing basis as each pass finished; due to practical timing constraints, however, the entire subsample was selected within a three-week period toward the end of data collection. See Section 6.8 for details.

Sample Release: Operationally, the sample release was done by batches rather than replicates. Since all replicates were released and worked in all batches, the replicates were no longer relevant (Section 6.6). Another operational change was the increase in incentives for nonrespondents late in data collection to improve response rates. Thus, not all cases received the same treatment. For a detailed discussion of incentives see Section 4.7.5.3.

Weighting: Regarding estimation, the weighting procedures deviated from the original plan in several respects. First, we used predicted response propensities from logistic regression models to define the nonresponse adjustment cells for both the screener and main interview nonresponse adjustments. This differed from the traditional weighting class adjustment method proposed in our original sampling plan. Second, we applied an empirical 95% eligibility rate to the main interview nonrespondents, while the original sampling plan assumed all main interview nonrespondents were eligible. Finally, we implemented weight trimming procedures to control the variance of the weights within each updated size class (see Section 6.9 for details).
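The propensity-based adjustment can be illustrated with a rough sketch. The variable names, model specification, and trimming rule below are assumptions for illustration only (the sketch caps weights at a simple global percentile rather than within each size class); the procedures actually used are documented in Section 6.9.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical case-level file with frame covariates, a response indicator,
# and a base weight.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "employees": rng.integers(1, 500, 2000),
    "urban": rng.integers(0, 2, 2000),
    "responded": rng.integers(0, 2, 2000),
    "base_weight": rng.uniform(50, 200, 2000),
})

# 1) Predict response propensities from frame covariates.
X = df[["employees", "urban"]]
df["propensity"] = LogisticRegression(max_iter=1000).fit(X, df["responded"]).predict_proba(X)[:, 1]

# 2) Define nonresponse adjustment cells from quintiles of predicted propensity.
df["cell"] = pd.qcut(df["propensity"], 5, labels=False, duplicates="drop")

# 3) Inflate respondent weights so they carry the full weighted total of each cell.
cell_totals = df.groupby("cell")["base_weight"].sum()
resp_totals = df[df["responded"] == 1].groupby("cell")["base_weight"].sum()
factors = cell_totals / resp_totals
df["adj_weight"] = np.where(df["responded"] == 1,
                            df["base_weight"] * df["cell"].map(factors), 0.0)

# 4) Simplified weight trimming: cap adjusted weights at the 99th percentile.
cap = df.loc[df["responded"] == 1, "adj_weight"].quantile(0.99)
df["adj_weight"] = df["adj_weight"].clip(upper=cap)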


6.11.2 InfoUSA Database


6.11.2.1 InfoUSA Database

NORC's past experience with the D&B database suggested that these data contained errors, one of which was the inclusion of firms no longer in business and hence ineligible for the 2003 SSBF. Our original proposal called for matching the D&B file to data from InfoUSA, a compiler of business telephone directories. The idea was to use a second source of information to identify firms with a lower probability of being in business, so that these firms could be subsampled in an effort to reduce data collection costs. Firms in the D&B file for which no match could be found in the InfoUSA file would be considered suspect in terms of still being in business. The premise of such a match is that having two sources show a business in operation increases the ex ante probability that the business is genuinely in operation. Thus, our original approach involved matching our D&B sample to InfoUSA, treating all matches as very likely to be in operation and including them in the fielded sample with certainty. For businesses in D&B with no match in InfoUSA, we planned to assume a lower likelihood of being in operation and therefore to subsample from this group to reduce the number of non-existent businesses we would try to reach.

The strength of this plan was that if the non-matches turned out to be mostly non-operational businesses, considerable resources could be saved by not calling so many former businesses. However, any completed cases from the non-match group would receive a large weight, because subsampling reduces the probability of selection. The risk was that if the non-match group contained many operational businesses, we would end up with many cases carrying large weights, increasing the variance of the weights and in turn reducing the effective sample size. This result could occur, for instance, if the non-matches consisted mostly of businesses that had moved.

After the start of the project, an additional fact came to light: InfoUSA collects information to flag businesses on its list as non-operational, or out of business (OOB). For our purposes on the 2003 SSBF, this results in three categories of businesses: 1) D&B businesses that match to InfoUSA businesses as operational, 2) D&B businesses that match to InfoUSA businesses as non-operational, and 3) D&B businesses that do not match to InfoUSA businesses at all.


6.11.2.2 Three Approaches to Matching

In discussions between project staff and FRB staff, many uncertainties were identified with regard to the methods used by InfoUSA to conduct a match. We therefore determined that pretesting was necessary. After many conversations with InfoUSA, we identified three match procedures to test, using the 1000 sampled cases intended for the two pretests.

NORC sent InfoUSA the pretest samples and asked them to match these to their database using three matching methods. The matching was done independently for each method, and the three methods are described below66.

1) Standard Match (SM). This approach required that 80% of the characters in the company name and company address match one of InfoUSA's records. InfoUSA claims to match 60-65% of a typical file. They also claim that their standard match is designed to be very reliable, thus not producing false matches or multiple matches. (A rough character-overlap comparison of this kind is sketched after this list.)

2) Loose Match (LM). This approach was analogous to the SM but relaxed the character-match requirement for company name and address to somewhere below the 80% threshold (supposedly 60%). This match procedure was selected to account for different spellings, typos, or other minor variations that might have caused a non-match.

3) Six-digit Match (6M). This approach defined a match as a record that matched on company name (80%) and the first six digits of the telephone number (exactly). The intention of this match was to capture firms that had moved but retained their phone numbers.
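The descriptions above are all we know about InfoUSA's proprietary matching rules. As a rough, purely illustrative stand-in for the character-overlap idea behind the Standard Match, the following sketch compares name and address strings with Python's difflib; the records and the 0.80 threshold are assumptions, not InfoUSA's algorithm.

from difflib import SequenceMatcher

def char_similarity(a, b):
    """Proportion of matching characters between two normalized strings."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def standard_match(dnb_rec, info_rec, threshold=0.80):
    """Rough stand-in for an '80% of characters' rule on name and address."""
    return (char_similarity(dnb_rec["name"], info_rec["name"]) >= threshold and
            char_similarity(dnb_rec["address"], info_rec["address"]) >= threshold)

# Hypothetical records.
dnb = {"name": "Acme Widget Co", "address": "123 Main St"}
info = {"name": "Acme Widget Company", "address": "123 Main Street"}
print(standard_match(dnb, info))   # True under this approximation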

As described in Section 2.6, NORC ultimately used 750 of the 1,000 firms originally selected for Pretests 1 and 2 in Pretest 1. By calling these firms to conduct screening interviews, NORC was able to classify most businesses in the Pretest 1 sample as either operational or non-operational. In addition, NORC conducted various locating activities to resolve any unknown classifications.

These procedures were intended to answer the following questions:

1) What is the match rate? If InfoUSA cannot match a reasonable percentage of D&B businesses, matching is not worthwhile.

2) What percentage of matched in-business records are determined to be in-business? If this percentage is not high, matching is not worthwhile.

3) What percentage of non-matched records are in-business? If this percentage is high, then matching is not worthwhile.

4) What percentage of the matched out-of-business records are out of business? If this percentage is low, we cannot use the OOB flag. If it is quite high, we could totally eliminate these records from the sample frame, or at least subsample them at very low rates.


6.11.2.3 Evaluation of Match Reliability

NORC's analysis of the resulting matches raised questions about the Loose Match (LM) and the 6-digit Match (6M) that were never satisfactorily answered by InfoUSA. The Standard Match (SM), however, lived up to expectations. As InfoUSA predicted, the SM resulted in matches for approximately 60% of the D&B sample (n=592 of the 1000 firms matched as in business). There were also 19 matches flagged as out of business, which InfoUSA refers to as "nixies." Moreover, close inspection of the matched records revealed that these matches appeared to meet the specified matching criteria.

The LM proved to be completely unreliable. Based on the definition of the Loose Match, we anticipated the LM matches to be a superset of the SM matches, but this was not the case. For many of the LM matches (and non-matches), no rationale for the match status was evident. Although its matches appeared to be highly reliable, the 6M method was too conservative, as many potential matches were left as non-matches.

The following types of inconsistencies were identified with the Loose and 6-digit Matches:

1) Standard matches that do not appear as loose matches (n=238).

2) Standard matches for which the first 6 digits of the phone numbers match, but there is not a corresponding 6-digit match for that record (n=270).

3) Loose matches with company names that are so different that the looseness criteria described should not have picked them up as matches (n=47 of the first 100 records).

4) Loose matches that look good enough to be standard matches, but are not (n=37 of the first 100 records).

5) Cases for which there is both a standard match and a loose match; however, the standard match has the same firm name, but the loose match has a different firm name that does not match the D&B firm name (n=34 of the first 100 records).

Because of these inconsistencies, the matching pretest was unable to fully answer the first question above. However, because of its 60% match rate, the Standard Match still held promise for use in sample design. NORC next looked at the results of Pretest 1 screening to evaluate the operational status of matches and non-matches, and also to evaluate the accuracy of InfoUSA's OOB flag.


6.11.2.4 Evaluation of Operating Status by Match Outcome

The purpose of this analysis was to evaluate the usefulness of the matching outcome, using information obtained in the pretest about whether or not the firm was currently in operation, and whether this varied by firm size or ownership type.

After screening was completed, NORC classified the businesses into three categories, based on screening outcome. Table 6.35 shows the number of cases in each category. To show how operating status was determined, Table 6.36 shows a cross-tabulation of operating status and final case disposition.


Table 6.35 Operating Status of Pretest 1 Cases
Operating Status1 Description N Percent
B Operating 688 91.7
N Not Operating 57 7.6
U Unknown Operating Status 5 0.7
Total N/A 750 100

1The definition of operating status used to evaluate the usefulness of the InfoUSA database is different from that imposed elsewhere on this project.


Table 6.36 Operating Status of Pretest 1 Cases by Final Disposition
Final Disposition Operating Status N Percent
COMPLETE-Ineligible (Operating) B 93 12.4
COMPLETE-Ineligible (Not Operating)1 N 11 1.5
COMPLETE-Eligible B 302 40.3
LANGUAGE BARRIER B 3 0.4
COMPUTER TONE/FAX N 1 0.1
DISCONNECTED/WRONG NUMBER N 32 4.3
NO LONGER IN BUSINESS N 23 3.1
PRIVACY MANAGER B 6 0.8
LOCATED AFTER DATA COLL PD ENDED B 15 2.0
NON-CONTACT WITH BUSY AND NO ANSWER U 3 0.4
NON-CONTACT W/ALL NO ANSWER OR ALL BUSY U 2 0.3
FINAL UNAVAILABLE/AWAY FOR FIELD PERIOD B 82 10.9
NON-CONTACT W/ANSWERING MACHINE B 20 2.7
PROXY REFUSAL B 5 0.7
R/OWNER REFUSAL B 85 11.3
GATEKEEPER REFUSAL B 72 9.6
HOSTILE REFUSAL B 5 0.7

1 Screener data were used to determine that the case was ineligible because it was not in operation at the time of the screener interview.

As Table 6.35 shows, 92% of the firms sampled from D&B for Pretest 1 were operating as a business at the time of the pretest. The driving force behind the matching with InfoUSA data was the assumption that the in-operation rate of firms in the D&B sample would be lower than 92%. The five percent follow-up subsample of the 1998 SSBF suggests an estimate of 88 to 90%67. Nevertheless, 8% of the main survey sample of 37,600 firms translates into 3,008 out-of-business firms. Therefore, identifying out-of-business firms without having to go through the screening process still has value. It is noteworthy that, of the 62 pretest cases finalized as either N or U, the average number of calls made to reach this disposition was 5.4 (minimum=1, maximum=26, mode=4). Potential cost savings therefore exist if firm characteristics that make an enterprise likely to be out of business can be identified.

Table 6.37 compares SM matches with SM non-matches in terms of operating status. As this table shows, there is a difference in the operation rates for SM matches and non-matches (i.e., 95.6% in-operation for SM matches, and 85.8% in-operation for SM non-matches). However, the difference is not substantial. Furthermore, the percent of non-matches that are in-business is quite high. Therefore, the assumption that non-matches are less likely to be in business appears to be correct, but the SM non-matches cannot simply be assumed not to be viable businesses.

Based on these results, there appears to be little to gain from stratifying the 2003 SSBF sample by matches and non-matches. However, since the sample design involves a variety of stratifications, NORC might still consider subsampling within a set of cells if there were sufficient differences between the matches and non-matches. Two possibilities would be to partition the SM matches and SM non-matches by employment size (0-19, 20-49, 50-99, and 100-499) and/or business type (sole proprietor, partnership, and corporation). The results in Table 6.38 through Table 6.42 show a familiar pattern: most subgroups of the SM non-match group have lower in-operation rates than their SM match counterparts, but the non-match subgroups nevertheless have high in-operation rates. Thus, we may not assume that any subgroup of the non-matches is composed largely of enterprises that are not operating as businesses.

Table 6.37 Operating Status and Match Outcome by Final Disposition
Final Disposition Operating Status Non-Matches N Non-Matches Percent Matches N Matches Percent
COMPLETE-Ineligible B 46 15.5 47 10.4
COMPLETE-Eligible B 91 30.7 211 46.5
LANGUAGE BARRIER B 2 0.7 1 0.2
PRIVACY MANAGER B 5 1.7 1 0.2
LOCATED AFTER DATA COLL PD ENDED B 7 2.4 8 1.8
FINAL UNAVAILABLE/AWAY FOR FIELD PERIOD B 29 9.8 53 11.7
NON-CONTACT W/ANSWERING MACHINE B 7 2.4 13 2.9
PROXY REFUSAL B 2 0.7 3 0.7
R/OWNER REFUSAL B 36 12.2 49 10.8
GATEKEEPER REFUSAL B 28 9.5 44 9.7
HOSTILE REFUSAL B 1 0.3 4 0.9
TOTAL B 254 85.8 434 95.6
COMPLETE-Ineligible N 0 0.0 1 0.2
COMPUTER TONE/FAX N 1 0.3 0 0.0
DISCONNECTED/WRONG NUMBER N 25 8.4 7 1.5
NO LONGER IN BUSINESS N 14 4.7 9 2.0
TOTAL N 40 13.5 17 3.7
NON-CONTACT WITH BUSY AND NO ANSWER U 1 0.3 2 0.4
NON-CONTACT W/ALL NO ANSWER OR ALL BUSY U 1 0.3 1 0.2
TOTAL U 2 0.7 3 0.7

Table 6.38 Operating Status by Firm Size and Match Outcome
Firm Size: Operating Status Non-Matches N Non-Matches Percent Matches N Matches Percent
0-19:B 139 47.0 199 43.8
0-19:N 27 9.1 9 2.0
0-19:U 1 0.3 1 0.2
20-49:B 84 28.4 170 37.4
20-49:N 8 2.7 6 1.3
20-49:U 0 0.0 2 0.4
50-99:B 20 6.8 42 9.3
50-99:N 4 1.4 0 0.0
50-99:U 1 0.3 0 0.0
100-499:B 11 3.7 23 5.1
100-499:N 1 0.3 2 0.4
100-499:U 0 0.0 0 0.0


Table 6.39 Operating Status by Ownership Type and Match Outcome
Ownership Type:Operating Status Non-Matches N Non-Matches Percent Matches N Matches Percent
Sole Proprietorship:B 97 32.8 128 28.2
Sole Proprietorship:N 18 6.1 5 1.1
Sole Proprietorship:U 1 0.3 2 0.4
Partnership:B 75 25.3 151 33.3
Partnership:N 15 5.1 6 1.3
Partnership:U 0 0.0 1 0.2
Corporation:B 82 27.7 155 34.1
Corporation:N 7 2.4 6 1.3
Corporation:U 1 0.3 0 0.0


Table 6.40 Operating Status by Firm Size, Ownership Type and Match Outcome
Firm Size:Ownership Type:Operating Status Non-Matches N Non-Matches Percent Matches N Matches Percent
0-19:Sole Proprietorship:B 51 17.2 62 13.7
0-19:Sole Proprietorship:N 11 3.7 2 0.4
0-19:Partnership:B 36 12.2 72 15.9
0-19:Partnership:N 11 3.7 3 0.7
0-19:Partnership:U 0 0.0 1 0.2
0-19:Corporation:B 52 17.6 65 14.3
0-19:Corporation:N 5 1.7 4 0.9
0-19:Corporation:U 1 0.3 0 0.0
20-49:Sole Proprietorship:B 41 13.9 53 11.7
20-49:Sole Proprietorship:N 5 1.7 3 0.7
20-49:Sole Proprietorship:U 0 0.0 2 0.4
20-49:Partnership:B 24 8.1 59 13.0
20-49:Partnership:N 2 0.7 2 0.4
20-49:Corporation:B 19 6.4 58 12.8
20-49:Corporation:N 1 0.3 1 0.2
50-99:Sole Proprietorship:B 3 1.0 8 1.8
50-99:Sole Proprietorship:N 1 0.3 0 0.0
50-99:Sole Proprietorship:U 1 0.3 0 0.0
50-99:Partnership:B 8 2.7 13 2.9
50-99:Partnership:N 2 0.7 0 0.0
50-99:Corporation:B 9 3.0 21 4.6
50-99:Corporation:N 1 0.3 0 0.0
100-499:Sole Proprietorship:B 2 0.7 5 1.1
100-499:Sole Proprietorship:N 1 0.3 0 0.0
100-499:Partnership:B 7 2.4 7 1.5
100-499:Partnership:N 0 0.0 1 0.2
100-499:Corporation:B 2 0.7 11 2.4
100-499:Corporation:N 0 0.0 1 0.2


Table 6.41 Operating Status by Firm Size (Percent Calculation by Firm Size)
Firm Size:Operating Status Non-Matches N Non-Matches Percent Matches N Matches Percent
0-19:B 139 83.2 199 95.2
0-19:N 27 16.2 9 4.3
0-19:U 1 0.6 1 0.5
0-19:TOTAL 167 100.0 209 100.0
20-49:B 84 91.3 170 95.5
20-49:N 8 8.7 6 3.4
20-49:U 0 0.0 2 1.1
20-49:TOTAL 92 100.0 178 100.0
50-99:B 20 80.0 42 100.0
50-99:N 4 16.0 0 0.0
50-99:U 1 4.0 0 0.0
50-99:TOTAL 25 100.0 42 100.0
100-499:B 11 91.7 23 92.0
100-499:N 1 8.3 2 8.0
100-499:U 0 0.0 0 0.0
100-499:TOTAL 12 100.0 25 100.0


Table 6.42 Operating Status by Collapsed Firm Size (Percentage Calculation by Firm Size)
Firm Size:Operating Status Non-Matches N Non-Matches Percent Matches N Matches Percent
0-19:B 139 83.2 199 95.2
0-19:N 27 16.2 9 4.3
0-19:U 1 0.6 1 0.5
0-19:TOTAL 167 100.0 209 100.0
20-499:B 115 89.1 235 95.9
20-499:N 13 10.1 8 3.3
20-499:U 1 0.8 2 0.8
20-499:TOTAL 129 100.0 245 100.0


6.11.3 Evaluation of InfoUSA's Out of Business Flag

Finally, NORC used the information gained in the pretest about the operating status of the firms to evaluate InfoUSA's Out-of-Business flag. InfoUSA's term for those records flagged as not in operation is "nixie." In all, 19 of the 1000 firms in the pretest sample were flagged as nixies; 12 of these appeared among the 750 firms comprising the sample for Pretest 1. Of these 12, six were determined to be in business during the pretest and six were determined not to be in operation. Given that less than 8% of the overall sample was found not to be in operation, a 50% out-of-operation rate for nixies shows that the flag has some predictive power. However, given the level of effort InfoUSA claims to make before flagging a record as out of business, a 50% in-business rate among nixies is a dismal failure. Because of the small sample size, we hesitate to draw too firm a conclusion; it is possible that we encountered an unusually bad draw, and with a larger sample of nixies the in-business rate might have turned out much lower. Based on the information we have, however, we do not consider the nixie flag to be of value for the 2003 SSBF sample design.


6.11.4 Evaluation of D&B-InfoUSA Matching

Based on the results described above, NORC concluded that the information generated by matching the D&B file to InfoUSA's file would not enhance the sample design or the operational efficiency of data collection. First, only the Standard Match proved reliable, preventing any attempt to narrow the non-match stratum. Second, while in-business rates were lower for the non-match group, the in-business rate was still high for that group. Using the non-match group as intended would have required either a high subsampling rate, defeating the purpose of the exercise, or a low rate that could have produced substantial design effects. Third, the InfoUSA out-of-business flag (nixies) proved unreliable: half of the 12 nixies we examined were, in fact, operating businesses.

On the basis of these findings, FRB and NORC staff decided that no matching to InfoUSA should be undertaken for the purposes of sampling. The potential benefits did not warrant the costs of designing and implementing such a scheme, nor was it worth complicating the sample design and weighting procedures for such small potential savings.


7 2003 SSBF Lessons Learned

In this chapter, we examine some of the new methods and procedures used for the 2003 SSBF and offer some insights on what worked well and what did not work as well as expected. The Survey of Small Business Finances (SSBF) has always been a difficult survey to conduct due to many factors, including the technical nature of the subject matter, the heterogeneous nature of small businesses and small business owners, and the length of the telephone interview. Prior to 2003, the survey had been conducted three times. Response rates declined from about 65 percent in 1987 to 52 percent in 1993. Despite considerable efforts to maintain response rates, in 1998 the overall response rate declined further to 33 percent. Many of the changes in the 2003 design were intended to maintain or increase response rates while preserving data quality and the breadth of information collected.

Among the changes for the 2003 survey, the most prominent were the focus on interviewers and interviewer training, a revamped sampling strategy, redesigned respondent contact materials, enhancements to the Computer Assisted Telephone Interviewing (CATI) program, and the use of respondent incentives. This chapter analyzes each of these elements individually and makes some suggestions as to what might be done to improve the study the next time. The chapter concludes with a discussion of a realistic project schedule and budgeting of time and resources.


7.1 Interviewers

The SSBF collects complex, detailed financial information, as well as many other details about ownership and business practices. Small business owners represent a wide variety of backgrounds and have varying levels of familiarity with the concepts and terminology used during the interview. For these reasons, NORC planned to employ a well-informed and highly trained staff of interviewers who would be able to gain respondent cooperation, answer respondent questions, explain the intent of complicated questions and questions that require specialized knowledge to answer, and collect high quality data. To address shortcomings identified in previous iterations of this survey, NORC attempted in the 2003 survey to first determine the characteristics of effective SSBF interviewers, and then develop a recruiting plan to find them.

Once recruited, interviewers received a carefully developed training. As noted in Chapter 3, screener training consisted of 1.5 days of training on conducting the screener interview. After passing a screener certification mock, interviewers had approximately three days of production interviewing to reinforce the training. The following week, interviewers received an additional 2.5 days of training on the main interview. (Both sessions included modules on gaining respondent cooperation.) After passing a main interview certification mock, interviewers began dialing to reinforce the training they had just received. At the end of this process, trained and certified interviewers were allocated to screening or interviewing depending on need and on their proficiency with the two instruments.

In addition to their initial training, interviewers received weekly feedback based on monitoring, and attended special-purpose trainings as needed to refine their skills and supplement previous trainings. The following subsections discuss the approaches used and the levels of success experienced during recruiting and training.


7.1.1 Interviewer Skills

Because of the technical nature of the study, the ideal SSBF interviewer needs strong numerical facility, a good understanding of business and accounting terms, and the communication and persuasion skills that NORC has found essential to interviewing success. Many potential candidates possessed some but not all of these characteristics. For the 2003 SSBF, we looked first to find candidates with financial backgrounds, and second to find those that had the communication and persuasion skills required of interviewers.

What we found was that interviewers who were familiar with accounting, taxes, and bookkeeping were among the best when these attributes were combined with the skills essential to interviewing in general. Their backgrounds helped them understand the instruments quickly. They were able to answer questions from respondents and to raise any questions they could not answer so that policy decisions could be made. Having a background in one of these relevant fields was not sufficient for an interviewer to be successful, however. Other interviewers with similar financial backgrounds enjoyed less success. They had less experience calling strangers, effectively handling questions and concerns, and persuading owners to participate. Some, in fact, exhibited little desire to gain these skills. We concluded that general interviewing skills were essential. Interviewers must make many calls that do not result in a completed screener or questionnaire, and they must perform many activities in addition to administering the questionnaire, including gaining cooperation, navigating through businesses by telephone to reach owners, setting appointments, and recording accurate call notes, among many others. For the SSBF, having the skills and qualities of a good interviewer was essential; having a background in a relevant field was highly desirable. Finding a sufficient number of candidates with both qualities was challenging. NORC attempted, with mixed results, to teach interviewing skills to interviewers who understood the content of the questionnaire.

For future studies, it would be important to screen recruits for both interviewing skills and relevant business-related background. If a sufficient number of candidates with both skill sets are not available, the preference should be for candidates with good interviewing skills over candidates who have the desired background but not the interviewing skills.


7.1.2 Using Employment Agencies

Usually NORC places advertisements in newspapers, at job fairs, and on the internet. Those who are interested call a hotline and leave their contact information. They are called back to complete a 15-minute screening interview, which provides them with information about hours, pay rate, work schedules, and job sites, in addition to giving NORC an opportunity to do an initial assessment of their voice quality and communication skills. Candidates who are still interested and meet NORC's basic requirements are scheduled to attend a group interview, where more information about the job is provided and exercises are conducted to ensure basic compatibility with the task. Those who are still interested and pass the group interview are hired, attend general training, and are then deemed ready to attend project-specific training. NORC's usual approach to interviewer recruitment was heavily supplemented on SSBF by using employment agencies to recruit interviewers with financial backgrounds.

On SSBF, most of the candidates attending general training prior to project-specific training were acquired through employment agencies that specialized in placing people with accounting and bookkeeping skills. NORC sent the agencies a description of our requirements and preferences, and the agencies sent available candidates to our usual group interview. Those who were still interested and met NORC's requirements were invited to our standard general training. There were two basic categories of agency candidates: those with accounting experience, and those with bookkeeping experience who had usually also done reception and/or sales. The assumption was that these agencies would be better able to recruit candidates with the relevant business backgrounds we were seeking, since they were drawing from a large, existing pool of known candidates.

Using this approach had both advantages and disadvantages. The agencies were, as expected, able to provide a large number of candidates in a timely way. As discussed earlier in this chapter, those candidates who had both the relevant business background and interviewing skills were among the best and most productive interviewers on the project. Additionally, agency employees who performed poorly could be terminated more easily than NORC hires.

The disadvantage was that attrition among agency employees was higher than expected. In retrospect, higher-than-expected attrition occurred for the following reasons:

1) A small number of people were terminated for poor performance despite their best efforts. This was not out of line with NORC's experience on other surveys using staff recruited through more traditional methods.

2) A small number of people who had both the relevant background and the essential interviewing skills left because they got better jobs, despite the higher-than-usual rate of pay offered to SSBF interviewers.

3) A relatively large number of people who could have developed into effective interviewers quit or were terminated rather than try to develop these skills. An unusually large percentage of them left before main training. This decreased the study's return on training expenditures. These people may have felt that interviewing was a step down from the type of work they were accustomed to, or they may have realized that they preferred working with numbers to working with people. NORC suspects that many of them only realized how challenging the job was when they started screener production and chose to find another assignment through the agency rather than meet this challenge.

NORC suggests that the tendencies in item 3 above could be mitigated in the future by including a retention bonus in any agreements made with employment agencies. For example, a bonus paid to the agency for every candidate who worked 60 days after training might more effectively engage the agencies in evaluating candidates for this assignment, and encourage them to partner with NORC to identify candidates who are well suited to the task. Other conditions could also be considered, such as paying a lower rate for training hours, followed by increases to the rate after 30 days and 60 days.


7.1.3 Mentoring Interviewers

SSBF interviewers had a steep learning curve during the first few weeks on the job. For many SSBF interviewers, this was their first exposure to telephone interviewing. Too often new interviewers did not have the skills or experience to avert refusals; that is, to counter objections persuasively enough to leave the door open for the next interviewer to turn a reluctant respondent into a willing, cooperative respondent.

During interviewer debriefings, many interviewers mentioned that it was helpful to be assigned to specific supervisors who could acquaint them with the project and telephone interviewing. For future rounds, assigning every interviewer a mentor who could provide quick, on-the-job coaching (in addition to coaching provided by supervisors) would limit the extent to which new interviewers inadvertently closed the door on otherwise promising cases. It should be noted that this could be expensive in both time and money and should be carefully weighed against the potential gains.


7.1.4 Training in Gaining Cooperation

Gaining respondent cooperation is a key to the success of any survey. Gaining cooperation presented additional challenges on SSBF, due to the sensitive nature of the data collected and the target population of small business owners, most of whom have few, if any, employees and so are usually very busy. Among the screening interviews that NORC was unable to complete, 38% were incomplete because the firm refused to participate.

Many of the interviewers needed extra help and coaching in how to gain cooperation with respondents and avoid refusals. When this became apparent, NORC created an additional training on gaining cooperation based on the recent work of Groves and McGonagle68 to help interviewers develop these skills, as discussed in Chapter 3. Participation in this training accomplished many goals. It demonstrated the importance of success and acknowledged the key role interviewers play. Receiving this additional attention reinvigorated interviewers. It demonstrated to many of them that their peers had superior skills, and challenged them to improve. It provided them with additional tools and techniques for making these improvements. It demonstrated that NORC was dedicated to helping them become more successful. Without exception, voluntary feedback from interviewers was favorable. To be most useful in the future, gaining cooperation training should be conducted no more than three weeks after interviewers start production. At this point, they have sufficient experience in live dialing to build on the gaining cooperation sessions included in the initial project training.


7.1.5 Timing of Training

Training for the main interview generally took place less than a week after screener training was complete. Some interviewers indicated that having the training for screening and main questionnaire interviewing back-to-back was a lot of content for employees who were trying to learn interviewing techniques, instrument content, and data collection protocols. It was suggested that it would be helpful for interviewers to listen to experienced interviewers gain cooperation and administer the survey before returning to the classroom for additional training in gaining cooperation and working through difficult sections of the questionnaire. In that way, the content of the training would be more meaningful and less abstract.


7.2 Sample Design

The sampling design for the 2003 study differed from 1998 in two main ways. First, no minority oversample was required, which had a large impact on the amount of time necessary between the screening and the main interview. Second, the sample was drawn in batches and then systematically subsampled for both screening and main interviews, which introduced significant logistical problems in managing the batches and the subsampling. The next subsections discuss various aspects of the design from an operational standpoint.


7.2.1 Time between Screening and Main Interview

One major design change in 2003 was to considerably shorten the time between completion of the screener and the first call to complete the main interview69. The design called for calling a respondent within one week of completing the screener, shortly after he or she had received the worksheet mailing. The intent was to heighten the study's sense of immediacy, reduce the gaining-cooperation challenge for main interviewing, and reduce locating that could result when firms move or go out of business between the screening and the main interview.

Reducing this interval seems to have been effective. Interviewer labor associated with the main questionnaire was lower in 2003 than in 1998, suggesting that respondents were able to recall screening and that the rapport established during screening was still in effect. The shorter interval posed other operational and logistical challenges (e.g., the need to ship worksheets promptly to eligible businesses and to coordinate the close-down of screening interviews in order to subsample for pass two of main interviewing within batches), but NORC was prepared to handle these challenges. NORC has not conducted an in-depth cost-benefit analysis to determine whether data quality or other possible benchmarks improved compared with the 1998 survey. It is safe to conclude, though, that it is logistically possible to run the study under the new time frame.


7.2.2 Sample Batches

Sample batches were intended to allow NORC to adjust the size of batch 3 depending on the response rates achieved on batches 1 and 2 while at the same time enabling NORC to implement the two-pass approach discussed below70. However, operationally, the batches created a significant amount of work. The added complexity of managing batches and passes introduced the need for several different management control systems, ranging from level-of-calling-effort tracking and reporting to receipting and mailing advance letters and refusal conversion letters. Some of these systems did not exist prior to data collection. In addition, each batch and pass required that programming for pulling the screening and main interview subsamples be developed and tested. Moreover, the fourth batch did not use subsampling, and so required different procedures and a different weight calculation, adding to the strain of creating and managing multiple sample batches.

NORC exacerbated this complex situation by introducing new materials or procedures into different batches. For example, toward the end of the study, different respondent incentives were offered to respondents in different batches, to accelerate the completion of the study. This required changing the interviewer-read questionnaire text, retraining interviewers, adding new job aids, and implementing a tracking system to make sure that respondents were receiving the correct incentive amounts. In addition, NORC also varied the contents of the advance mailings by batch to try to improve response rates. Tracking these changes presented challenges to the management team.

For future rounds of the study, NORC recommends that the batch structure be reexamined to determine whether there are more efficient, less labor-intensive ways to manage the sample. These could include using only two batches, and releasing sample in smaller replicates within the batches.


7.2.3 Two-pass Interviewing

Conducting two passes during both screening and main interviewing for each batch added to the complexity of the sampling and data collection tasks and lengthened the data collection schedule. The schedule was such that in some 10-day periods, subsamples needed to be drawn from multiple batches for screening and main interviews. This created periods of intense activity both in sampling and in data collection management. The approach also tends to increase the variance of the weights and the design effect. Subsampling did allow NORC to focus the energy of its best interviewers on the cases with the most potential in pass two, and to develop a calling strategy for each case. The subsampling strategy should be reexamined for future rounds to determine whether it might be less labor-intensive to manage if there were fewer batches. In addition, it may be possible to manage the caseload to increase efficiency without subsampling. At this time, the net effect of the two-pass interviewing strategy remains unclear.


7.3 Respondent Materials

The study used many different respondent materials throughout, most of which were similar to materials used in 1998. For the most part, the materials performed as expected. NORC and the FRB experimented with the content and format of advance and refusal conversion mailings and the wording of refusal conversion letters. The next subsections discuss some of these changes.


7.3.1 Letter Enclosures

NORC varied the enclosures in the advance and refusal conversion mailings by batch. During batch two, NORC began putting two dollars in the refusal conversion mailings to make the mail more memorable, demonstrate serious purpose, and encourage respondents to participate. However, during the holiday season, NORC decided to send the refusal conversion mailings via Federal Express to gain additional attention amid the crush of holiday mail. Because the cost of the shipment would have exceeded the two dollars enclosed, NORC decided to leave out the two dollars. NORC was unable to measure how much the two dollars contributed compared with shipping via Federal Express, because the periods over which the two methods were used differed. However, production stayed steady during the holiday season and did not drop significantly as had been feared.

We recommend that for the next round of the study, an evaluation of mailing materials be built into the study design. This could include pretest studies or a controlled experiment built into the earliest mailings for the study. Although NORC made changes to the mailings throughout this round, the changes were based more on anecdotal evidence and management team experience than on empirical data.


7.3.2 Refusal Conversion Letters

Five different refusal conversion letters were created for the screening portion of the study. Different versions were developed to address specific objections, e.g., concerns about the study's legitimacy, not enough time to do the survey, and so forth. Between screening passes, supervisors reviewed call notes for every pass one refusal case to determine which of the five versions of the conversion letter a case should receive. Some screening batches had more than 1,000 refusal cases to review. This was a labor-intensive step that could only be consistently performed by a few capable and knowledgeable supervisors, and that had to be conducted in the middle of a time-sensitive process during which refusal cases were not being worked.

Reducing the number of versions of refusal letters, or using just one letter that addressed three or four of respondents' top concerns, would have been less labor intensive, although less targeted. Because determining the effectiveness of multiple versions of conversion letters was not a study objective, NORC does not have data on the added value of the additional versions. Fewer versions of the refusal conversion letter were used during main interviewing, based on anecdotal evidence that respondents generally had fewer reasons to refuse the main interview than the screener. The process of reviewing refusal cases and assigning specific letter types was not as labor intensive for the main interview as it was for screening.


7.4 CATI Innovations

Many enhancements to questionnaire ordering, wording, and skip patterns were implemented in the 2003 SSBF CATI. Of particular importance are the dollar amount read-back and the institution look-up, both of which were implemented to improve overall data quality.


7.4.1 Dollar Verification Screens

There are many questions in the SSBF questionnaire that request dollar amounts. The amounts reported by respondents vary widely across firms. Because it is possible to indicate the same amount using a variety of expressions (e.g., one million six, one million six hundred thousand, and one point six million), the 2003 SSBF displayed all dollar amounts in words after they had been entered numerically. Interviewers read back the dollar amounts in the words displayed on the screen, to verify that amounts had been interpreted and entered accurately. Errors were immediately corrected. To this point, no further analysis has been done on the reliability of the data collected. Anecdotally, interviewers indicated that errors were occasionally identified and corrected. Some respondents expressed irritation or annoyance with this procedure, but interviewers were trained to explain the need for it, and respondents quickly learned to accept their role in ensuring the accuracy of these critical data.
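The read-back relies on converting a numeric entry into words. The actual CATI implementation is not reproduced here; the following is a minimal sketch of the idea for whole-dollar amounts.

ONES = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
        "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
        "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]
SCALES = ["", " thousand", " million", " billion"]

def three_digits(n):
    """Spell out a number from 0 to 999."""
    words = []
    if n >= 100:
        words.append(ONES[n // 100] + " hundred")
        n %= 100
    if n >= 20:
        words.append(TENS[n // 10])
        n %= 10
    if n:
        words.append(ONES[n])
    return " ".join(words)

def dollars_in_words(amount):
    """Spell out a whole-dollar amount for interviewer read-back."""
    if amount == 0:
        return "zero dollars"
    groups = []
    while amount:
        groups.append(amount % 1000)
        amount //= 1000
    words = [three_digits(groups[i]) + SCALES[i]
             for i in range(len(groups) - 1, -1, -1) if groups[i]]
    return " ".join(words) + " dollars"

print(dollars_in_words(1600000))   # one million six hundred thousand dollars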


7.4.2 Institution Look-Up Process

Another innovation was the implementation of an automated institution look-up procedure within the CATI program. This procedure used a database of more than 109,000 branch records of depository institutions to obtain the exact physical location of the branch of the institution used by the responding business. This procedure was driven by the zip code (or city and state) and the institution name. By using this procedure, interviewers could select rather than enter the exact physical location of the institution. Although systematic analysis of the benefits of this procedure has not been conducted, initial analysis indicates that it was successfully employed in about 50 percent of the applicable cases. However, there are numerous cases where institutions were available on the database but were not identified by the respondent or interviewer. If this procedure is to be used in the future, additional training and closer monitoring of interviewers and the data collected should be conducted to maximize the effectiveness of the procedure.
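The look-up itself is essentially a filter over the branch file, keyed on institution name and either zip code or city and state. A schematic sketch with invented field names and records (not the actual branch database):

# Hypothetical branch records; the real file contained more than 109,000 entries.
BRANCHES = [
    {"name": "First Example Bank", "zip": "60601", "city": "Chicago", "state": "IL",
     "address": "100 Example Plaza"},
    {"name": "First Example Bank", "zip": "60602", "city": "Chicago", "state": "IL",
     "address": "200 Sample Ave"},
    {"name": "Community Sample Credit Union", "zip": "60601", "city": "Chicago",
     "state": "IL", "address": "300 Placeholder Blvd"},
]

def lookup_branches(name_fragment, zip_code=None, city=None, state=None):
    """Return candidate branches for the interviewer to select from."""
    frag = name_fragment.strip().lower()
    hits = [b for b in BRANCHES if frag in b["name"].lower()]
    if zip_code:
        hits = [b for b in hits if b["zip"] == zip_code]
    elif city and state:
        hits = [b for b in hits if b["city"].lower() == city.lower()
                and b["state"].upper() == state.upper()]
    return hits

print(lookup_branches("example bank", zip_code="60601"))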


7.5 Respondent Incentives

In response to the declining response rates observed in previous surveys, the 2003 SSBF design called for offering respondents a token incentive for completing the main interview. Initially, respondents were offered the choice of either $50 or a Dun & Bradstreet package of reports for small businesses that retailed for $199. As the study progressed, NORC increased the incentive amounts. Over the course of the study, the incentive for pass two refusal conversion was raised to $100, then $200, and finally $500.

NORC has not analyzed the data extensively, but anecdotally, respondents who were not initially cooperative appeared more willing to cooperate at the $200 level. NORC moved to $500 over the holiday season because the end of the study was approaching and interviewers had begun recontacting some of the oldest cases. For cases that had been cooling off for a significant amount of time, i.e., cases from batches one, two, and three, the $500 offer was a successful incentive. Between December 6, 2004 and January 1, 2005, NORC completed almost 300 more cases than it had planned, and it believes that the high incentives of $200 and $500 were a major contributor to this result. Whether $300 or $400 would have had the same effect as $500 is unknown. In any case, the next round of the study should provide sufficient funding to pay incentives higher than the equivalent of $50.

Again, while no formal analysis has been done to date, interviewers reported that respondents appeared to view the choice of incentives positively. However, the choice of nonmonetary incentives should be given further consideration. The association of D&B with the project, NORC, and the FRB was confusing for some respondents, and interviewers reported that it sent the wrong message in some situations. During the final interviewer debriefing, some interviewers suggested that a subscription to a business newspaper such as the Wall Street Journal might have left a more favorable impression.


7.6 Schedule

The SSBF is an information-intensive, heavily IT-driven, enormously complex study. It can be tremendously challenging to respondents, interviewers, programmers, statisticians, managers, and other staff. It is a sustained, large-scale effort with thousands of subtasks. To manage the study's complexity, and to be able to accommodate unforeseen opportunities and problems, it is essential to develop a realistic, detailed schedule of tasks and subtasks and their interrelationships at the outset of the project, and not as the project proceeds. Moreover, it is imperative that sufficient time be built into the schedule, and adequate staffing be maintained throughout the project.

In particular, sufficient time should be allowed for questionnaire design and testing and for designing the data delivery files. Because of the complexity of the instrument, many CATI changes and corrections, even those that seemed simple and straightforward, reverberated across multiple paths and sections. These activities must be planned and managed effectively to keep data collection on schedule. The questionnaire testing plan should be robust enough to ensure that skip paths and consistency checks driven by preloaded data operate correctly. A stable questionnaire also facilitates complete, timely deliveries of pretest and main-study data, which help confirm that the questionnaire is functioning as intended. In addition, if a complex sampling design is to be used, sufficient time must be allotted for extensive programming and testing of the sampling methods.


8 Bibliography

The American Association for Public Opinion Research. (2004). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. Third edition. Lenexa, Kansas: AAPOR.

Council of American Survey Research Organizations. (1982). "On the Definition of Response Rates: A Special Report of the CASRO Task Force on Completion Rates." Port Jefferson, NY: CASRO.

Haggerty, C., Grigorian, K., Harter, R., and Stewart, A. (2001). "The 1998 Survey of Small Business Finances: Methodology Report." Report submitted to The Board of Governors of the Federal Reserve by the National Opinion Research Center.

Groves, R. and McGonagle, K. (2001). "A Theory-Guided Interviewer Training Protocol Regarding Survey Participation." Journal of Official Statistics, Vol. 17, No. 2, 249-265.

Kish, L. (1965). Survey Sampling. New York: John Wiley & Sons.

National Opinion Research Center. (2001). "NORC Statistical Standard 15: Calculation of Response Rates." Chicago: NORC.

Small Business Survey Group. (1999). Codebook for 1993 National Survey of Small Business Finances (NSSBF). Washington, DC: Board of Governors of the Federal Reserve System.



Footnotes

1. For single-unit establishments, the office was considered the headquarters by definition. Return to Text
2. If a proxy owner completed the screening interview, the firm was requalified in this section; the respondent was asked the same eligibility questions that had been asked in the screener. Return to Text
3. The completion rate is defined as the number of completed screener cases divided by the number of cases released in the sample. Return to Text
4. The weighted response rate approximates the number of businesses that responded to the survey in the target population divided by the number of all eligible businesses. The final weight was calculated in multiple stages. The first stage was the calculation of the initial base weight to account for the sample design. A base weight for a sample business was the reciprocal of the probability of selection under the sample design. The subsequent weighting stages represented adjustments to the base weight for batch selection, sample release, eligibility, screener nonresponse subsampling, screener nonresponse, main interview nonresponse subsampling, and main interview nonresponse. Finally, outlier weights were trimmed as described in section 6.9.10.

Return to Text
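Schematically, and using illustrative notation rather than the symbols defined in chapter 6, the final weight described in footnote 4 is the base weight multiplied by the successive adjustment factors:

$$ w_i^{\text{final}} = \frac{1}{\pi_i}\, a_i^{\text{batch}}\, a_i^{\text{release}}\, a_i^{\text{elig}}\, a_i^{\text{scr-sub}}\, a_i^{\text{scr-nr}}\, a_i^{\text{main-sub}}\, a_i^{\text{main-nr}}, $$

where $\pi_i$ is the probability of selecting business $i$ under the sample design and each $a_i$ factor corresponds to one of the adjustment stages listed above, with outlier weights trimmed in a final step (section 6.9.10).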

5. In 1987 and 1993, the survey was called the National Survey of Small Business Finances. Return to Text
6. Respondents were also asked at the end of the interview to return any other materials they had prepared for the interview, such as income statements, balance sheets and copies of Federal tax returns. NORC provided respondents with a postage-paid, pre-addressed envelope to facilitate the return. Return to Text
7. Not all firms have the same fiscal year-end date. Because interviewing for the questionnaire was to commence in June 2004, not all firms would have completed their fiscal year 2003 taxes. Consequently, firms whose fiscal year ended between June 1 and December 30 were asked to report their income and balance sheet data as of fiscal year 2002, whereas firms whose fiscal year ended between December 31 and May 31 were asked to provide these data as of fiscal year 2003. Return to Text
8. Race and ethnicity data are generally unavailable from publicly available lists of firms. In 1998, in order to oversample minority and Hispanic owned firms, the sample first had to be screened for ethnic/minority status and then once that was determined, appropriate samples of these firms could be selected for questionnaire interviewing. This was not necessary in 2003 because the sample design did not call for oversampling of Hispanic and minority-owned businesses. Return to Text
9. Additional information that interviewers could call up from CATI to address respondents' questions about specific survey questions. Return to Text
10. See footnote in this chapter, above. Return to Text
11. Details about the two pretest instruments, respondent materials, survey processes and protocols, and debriefings are included in The 2003 Survey of Small Business Finances Pretest I Report and The 2003 Survey of Small Business Finances Pretest II Report. These reports were delivered to the FRB in May and October 2004, respectively. Return to Text
12. Pretest 2 was completed with too little time before the main field period to allow a complete and detailed analysis of the collected data. Most of the information collected from pretest 2 involved interviewer and supervisor experiences and observations. Return to Text
13. To be able to contact respondents if necessary. As explained to respondents, email contacts would be infrequent and the email address would not be given to anyone or any organization outside of those conducting the study. Return to Text
14. Metropolitan Statistical Area Return to Text
15. New England County Metropolitan Area Return to Text
16. Federal Information Processing Standards Return to Text
17. Among all screeners, eligible or ineligible, completed through one of four close statements. Return to Text
18. Timings are based on a sample of completed main interviews. See Appendix E for detailed timing results including average timings for each subsection of the main interview. Return to Text
19. Interviewers would learn about other records respondents were using later in the interview. Also, based on the pretest debriefings, interviewers were instructed to give respondents time, even several minutes if needed, to collect their materials, which might need to be retrieved from hardcopy files. Return to Text
20. In each loan and deposit section, up to 20 institutions could be identified. Individual account information was collected for up to three sources. If more than three sources, individual information was collected on the largest two sources and the amounts and other characteristics of all other sources were combined and asked about together in the third loop. Return to Text
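As a purely illustrative Python sketch (the names are hypothetical; this is not the CATI logic itself), the allocation rule in footnote 20 can be expressed as follows: when more than three sources are reported, the two largest are itemized individually and the rest are combined into the third loop.

def allocate_sources(sources):
    # sources: list of (institution_name, amount) pairs for one loan or
    # deposit section, up to 20 entries. Returns the up-to-three "loops"
    # about which individual account information is collected.
    ordered = sorted(sources, key=lambda s: s[1], reverse=True)
    if len(ordered) <= 3:
        return ordered
    combined = sum(amount for _, amount in ordered[2:])
    return ordered[:2] + [("all other sources combined", combined)]

# Example: four deposit sources; the two largest are itemized and the
# remaining two are combined in the third loop.
print(allocate_sources([("Bank A", 50000), ("Bank B", 20000),
                        ("Bank C", 5000), ("Bank D", 1000)]))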
21. The principal owner was the owner reported in Section C with the largest ownership share. If no owner owned at least 10% of the firm, the questions on the owner's recent credit history were not asked. Return to Text
22. Beginning in January 2005, NORC changed its website, and its employees' email addresses, from norc.net or norc.uchicago.edu extensions to norc.org. However, for projects still in the field at the time of the change, including the 2003 SSBF, email addresses and websites ending in norc.net or norc.uchicago.edu were not disabled, and respondents could use these addresses for the duration of the study. Return to Text
23. NORC employees from other studies who were invited to work on SSBF had gone through group interviews and general training before starting on their first study with NORC. Return to Text
24. A proxy is someone who is not an owner, but who has the knowledge to answer basic questions about the firm - questions that were asked in the screener, including number of employees and organization type. In larger firms a proxy was sometimes a CFO or accountant; in very small firms a proxy was sometimes a long-term employee or spouse. Rarely was an administrative assistant or secretary qualified by interviewers to be an eligible proxy. Return to Text
25. Or, for institutions that were used by a firm to apply for a loan only, the branch where the firm applied for the loan. Return to Text
26. QxQ refers to question-by-question instructions. In CATI, each question has a help screen containing additional information about the question such as its purpose, definitions of terms, or other clarifying information that might be needed by the respondent. Return to Text
27. The timings are estimated from a sample of completed interviews. For additional information, see the timing report in Appendix E. Return to Text
28. Promising is discussed in detail in section 4.6.2 below. For simplicity, promising means all cases where contact with a respondent had been made, but the case had not yet been completed or finalized for other reasons. Return to Text
29. From the start of screening to the end of main interviewing. Return to Text
30. Appendix HH contains details on how timings were calculated. Return to Text
31. NORC stopped working the cases, but kept them open, in case we needed additional screened eligible cases to help to achieve 4,000 completes. Screener pass two cases were not finalized until the end of data collection. Return to Text
32. A qualified proxy was not an owner, but someone with basic knowledge about the firm. In larger firms, qualified proxies were often comptrollers, CEOs, vice presidents of finance or accountants. In very small firms, qualified proxies were sometimes a full-time employee or relative who worked for the firm. Typically, administrative assistants, secretaries and receptionists did not qualify to be proxies. Return to Text
33. Note that for some pass two cases, interviewers did not have the opportunity to make three attempts to reach an owner in pass one. For example, a firm could have been contacted with answering machines and gatekeeper refusals only in pass one, with interviewers never having had the opportunity to ask to speak to a qualified proxy. Return to Text
34. This task may have been more difficult for participants screened in January compared to those screened earlier in the study, given that some firms may have just ended their 2004 fiscal year. Return to Text
35. Of respondents asked, approximately two-thirds scheduled an appointment. Return to Text
36. In metropolitan areas that comprised multiple states, such as New York and Washington, D.C., NORC searched for firm name and owner name in each state, e.g., for a New York-based firm, searches were done in New York, New Jersey and Connecticut. Return to Text
37. In at least one instance, a recipient accepted a certified letter without checking the accuracy of the addressee. As a test, NORC sent a certified letter addressed to a fictitious law firm, but to the actual street address of NORC's Chicago office. The letter was accepted. Return to Text
38. Most SAQs were returned by nonrespondents, but NORC did receive a handful of completed SAQs from noncontacts. Return to Text
39. If a proxy owner completed the screening interview, the firm was requalified in this section; the respondent was asked the same eligibility questions that had been asked in the screener. Return to Text
40. Generally, promising means all cases where contact with a respondent had been made, but the case had not yet been completed or finalized for other reasons. Return to Text
41. As non-hostile refusals in pass one, all of these cases were eligible to be subsampled into pass two. Return to Text
42. In batch one, the incentive stayed at $50 for pass one hard appointments and partial completed cases that were sampled into pass two with certainty. For batches two and three, these cases were sent conversion letters offering $100. Return to Text
43. About 25%-35% of respondents who completed screeners and were eligible for the main study provided email addresses at the end of screening. Return to Text
44. After looking further into the D&B reports, a small number of respondents changed their mind about their preferred token of appreciation, and asked NORC for a financial incentive instead of the D&B reports. NORC complied with these requests. Return to Text
45. Sometimes respondents did not want to reveal the actual name of a source of credit used by their firm. In that case interviewers were instructed to ask the respondent to provide a pseudonym, such as "My brother's bank," and enter "XXX_" as a prefix to the pseudonym. Return to Text
46. QxQ refers to question-by-question instructions. In CATI, each question has a help screen containing additional information about the question such as its purpose, definitions of terms, or other clarifying information that might be needed by the respondent. Return to Text
47. Final completion rates for the study are complicated by weighting and other adjustments. The completion rates reported in this chapter are less complex: they are simply the number of cases completed divided by the number of cases worked. For the main interview, a complete case is a case that went through the entire interview and passed the FRB's set of criteria for completeness of responses. Return to Text
48. Subsampling only occurred in the first three batches. Due to time limitations, no nonresponse subsampling was implemented at either the screener or the main interview in Batch 4. Return to Text
49. Firms with 500 or more employees were not removed from the abstract for the pretest file by D&B. However, prior to drawing the pretest sample, NORC removed them. This comprised less than 2% of the abstract. Return to Text
50. This was the frame size after the 1,912 overlapping pretest cases were removed from the original main study DMI frame. Originally, 2,000 cases were selected for the pretests from the December 2003 DMI frame; however, 88 of these cases had been removed from the May 2004 DMI abstract received by NORC. Return to Text
51. Credit scores were not obtained for all firms on the DMI frame. They were purchased after screening only for firms that had been selected into one of the batches and whose final screening disposition was noncontact, nonrespondent, or eligible for the main. Return to Text
52. Federal Information Processing Standards. Return to Text
53. This assumption is changed later in calculating the nonresponse weighting adjustments where all sample businesses, including those that are out of business at the time of the screener, are assumed to be eligible for the screener. Return to Text
54. When the screener was completed by a proxy, the case was asked all eligibility questions a second time prior to beginning the main interview. Return to Text
55. The response rates described in this section were estimated for sample planning purposes only. The final response rate calculations incorporated different eligibility assumptions, as discussed in Section 6.10. Return to Text
56. Based on the variance of the weights, we estimated that the average design effect per size class was about 1.17 (section 6.7.3). Return to Text
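For reference, an unequal-weighting design effect of this kind is commonly computed with the Kish (1965) formula; with weights $w_i$ for the $n$ cases in a size class,

$$ \mathit{deff}_w = \frac{n \sum_{i=1}^{n} w_i^2}{\left( \sum_{i=1}^{n} w_i \right)^2} = 1 + \mathrm{cv}^2(w), $$

where $\mathrm{cv}(w)$ is the coefficient of variation of the weights; a value of 1.17 corresponds to a weight coefficient of variation of about 0.41. The computation used in this report is described in section 6.7.3.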
57. As defined in 6.4, Region 1 is comprised of divisions 1 and 2; region 2 combines divisions 3 and 4; region 3 combines divisions 5, 6, and 7; and region 4 combines divisions 8 and 9. Return to Text
58. The sample was divided into replicates within each batch, but the replicates were not actually used in sample release. Return to Text
59. As noted earlier, subsampling was not implemented in batch 4 due to administrative considerations. Return to Text
60. As noted above, all hard appointment callbacks continued on to pass 2; i.e., they were subsampled with certainty. Return to Text
61. Eligibility adjustment was not part of the simulation. To the extent that eligibility rate varied among strata, the design effects due to unequal weighting would be underestimated. Return to Text
62. Some misclassified noncontact cases made it into pass 2. This error was recognized prior to drawing the 5% subsample of noncontacts so that these cases were included in the eligible universe from which the sample was drawn. Return to Text
63. These are hard appointments, where the respondent requested a callback at a specific time. Soft appointments, where the interviewer designated a callback time, were not selected into pass 2 with certainty; instead, they were subsampled at the rate of 50% along with other nonrespondents. Return to Text
64. The subsampling groups were different than the sample balancing groups described in Section 6.6. The subsampling groups are defined in Section 6.7. Return to Text

65. Note that when constructing the nonresponse adjustments, nonrespondents and noncontacts are treated identically; both are considered to be screener nonrespondents. Return to Text
66. These descriptions are based upon verbal understandings of the InfoUSA match procedures. However, InfoUSA reviewed and agreed to a set of specifications based on these descriptions. Even so, it is clear from the analysis below that the loose match and six-number match did not resemble these descriptions. Return to Text
67. The difference may reflect different compositions of the samples. The pretest was evenly divided among ownership types, generating a greater percentage of larger establishments than in the main study. Large establishments are more likely to be in business. Return to Text
68. Groves, R. and McGonagle, K. (2001). "A Theory-Guided Interviewer Training Protocol Regarding Survey Participation." Return to Text
69. Different objectives in 1998 dictated a longer period between screening and interviewing. The 1998 design called for oversampling minority-owned businesses. Unfortunately, there were (and are) no publicly available data on minority ownership. Consequently, in 1998 the entire sample of firms was first interviewed for eligibility and minority status. After screening was completed for the entire sample, the main sample was drawn and fielded. In some cases, this caused as much as a six-month delay between the first screening contact with the firm and the subsequent contact to conduct the main interview. Return to Text
70. To facilitate sample management and adjustment of sample goals such as response rates and target sample sizes, many surveys divide samples into randomly selected replicates and then release replicates as needed to the production center. The batches in the 2003 SSBF are simply "super-replicates" consisting of large portions of the overall sample (20-30% per batch) and were required to implement the pass 1 - pass 2 approach discussed in section 7.3.2 below. Return to Text
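A minimal, hypothetical Python sketch of the replicate approach described in footnote 70 (not NORC's sample-management system) is shown below: the selected cases are randomly partitioned into replicates, which can then be released to the production center as needed.

import random

def assign_replicates(case_ids, n_replicates, seed=2003):
    # Randomly partition the sampled cases into n_replicates groups of
    # roughly equal size. Replicates, or large "super-replicates" (batches),
    # are then released to the production center as needed.
    rng = random.Random(seed)
    shuffled = list(case_ids)
    rng.shuffle(shuffled)
    replicates = [[] for _ in range(n_replicates)]
    for i, case_id in enumerate(shuffled):
        replicates[i % n_replicates].append(case_id)
    return replicates

# Example: 20 cases divided into 4 replicates of 5 cases each.
for k, rep in enumerate(assign_replicates(range(1, 21), 4), start=1):
    print("replicate", k, sorted(rep))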

This version is optimized for use by screen readers. Descriptions for all mathematical expressions are provided in LaTeX format. A printable PDF version is available.