

CODEBOOK FOR 1995 SURVEY OF CONSUMER FINANCES

Arthur Kennickell
SCF Project Director



Table of Contents
1. Introduction
2. Question Text, Variable Names, & Responses
3. Interview Program
4. CAPI-Database Concordance
5. Net Worth Program
6. Public Data Set Variable List


INTRODUCTION




WARNING: This codebook contains over 51,000 lines of text, including this introduction, variable descriptions, the program used to collect the survey data, and other material. Most users will probably NOT want to print the entire document. Generally, we work with the file in electronic form.

This codebook serves as the principal guide to the variables included on the final public version (June 3, 1997 version) of the 1995 SCF dataset. However, not every variable included in this codebook is actually in the public use dataset. Among other things, the dataset does NOT include most variables related to the sample design, details of geography, or the 3-digit industry and occupation codes. Although we have attempted to mark the variables in the codebook that are not available to the public, there may be errors or omissions. The definitive list of the variables included is given in the section at the end of this file entitled Public Data Set Variable List. Please consult that list to determine whether a given variable is available to you.

For a general overview of the 1995 SCF, see Arthur B. Kennickell, Martha Starr-McCluer, and Annika E. Sunden, "Family Finances in the U.S.: Evidence from the Survey of Consumer Finances," Federal Reserve Bulletin, January 1997. Results you may obtain from using this release of the 1995 SCF may differ from those reported in this article for several reasons. First, the Bulletin article is based on an earlier version of the data. Second, the analysis weights used in that article were altered to provide robust estimates of the detailed categories shown: in brief, the data were examined for extreme outliers, and where a given case was overly influential in determining an outcome, the weight was trimmed and other weights were inflated to maintain a constant population. Finally, as noted below, the public version of the data has been systematically altered to minimize the likelihood that unusual individual cases could be identified. Our analysis of the public dataset suggests that these changes should not alter the conclusions of reasonable analyses of the data.

QUESTIONNAIRE
The 1995 SCF was collected using computer-assisted personal interviewing (CAPI). Thus, there is no questionnaire in the usual sense. This codebook serves as the authoritative guide to the definitions of variables included in the survey. At the end of this file, a copy of the Autoquest (Surveycraft) Interview Program that was used to collect the data is included. The AQ program serves as the authoritative reference for questions about question ordering and skip sequences. Because question ordering is important in understanding the meaning of many questions, users of the data are encouraged to consult the AQ program. In the survey dataset, many variables have been moved, recoded, or inferred; almost always such changes can be identified from the shadow variables associated with the variables. At the very end of this file, a translation of most AQ variables into SCF variables is provided; see the section entitled CAPI-Database Concordance. However, as noted there, this list is incomplete. Nevertheless, a diligent user should be able to deduce the relationships among all the variables.

FILES INCLUDED
The full public dataset consists of two pieces in addition to this codebook file. The main dataset, which contains most of the survey variables, is a 470 megabyte file (typically stored as a SAS transport file in zipped form: 8.2 megabytes in this form). A file of 49.1 megabytes (23.5 megabytes in zipped transport form) contains 999 replicate weights and multiplicity factors intended to be used for variance estimation.

VARIABLE NAMES
The main data values are stored in the SAS dataset using variable names prefixed by an "X." We have tried, insofar as it was possible, to retain the variable numbering system used in earlier SCFs. Where the content of a variable has changed in a substantive way, we have assigned a new variable number. A small number of questions were added, and a small number were deleted. Each of the variables in the main dataset has a "shadow" variable that describes--in almost all cases--the original state of the variable (i.e., whether it was missing for some reason, a range response was given, etc.). An exception is reported values which have been imputed or otherwise altered to protect the privacy of respondents (see below); such values are not flagged in any systematic way. Users who so desire may use the shadow variables to restore the data to something very close to their original condition. The shadow variables have the same numbers as the main variable, but have a prefix of "J." A list of the values taken by the shadow variables is given in the section below entitled "DISCUSSION OF RANGE DATA COLLECTION AND J-CODES."

UNIT OF ANALYSIS
Most of the data in the survey are for a subset of the household unit referred to as the "primary economic unit" (PEU). In brief, the PEU consists of an economically dominant single individual or couple (married or living as partners) in a household and all other individuals in the household who are financially dependent on that individual or couple. For example, in the case of a household composed of a married couple who own their home, a minor child, a dependent adult child, and a financially independent parent of one of the members of the couple, the PEU would be the couple and the two children. Summary information is collected at the end of the interview for all household members who are not included in the PEU. Throughout the codebook, we refer to the "head" of the household. The use of this term is euphemistic and merely reflects the systematic way in which the dataset is organized. The head is taken to be the single core individual in a PEU without a core couple. In a PEU with a central couple, the head is taken to be either the male in a mixed-sex couple or the older individual in the case of a same-sex couple. No judgment about the internal organization of the households is implied by this organization of the data. When the original respondent was someone other than the person determined to be the head in this sense, all data (including response codes) were systematically swapped with that person's spouse or partner. The variable X8000 indicates which cases have been subjected to such rearrangement.
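The head-assignment convention just described can be sketched as a simple decision rule. The following Python fragment is purely illustrative (the record layout with "sex" and "age" keys is hypothetical; the actual data encode this information in survey variables):

```python
def head_of_peu(core):
    """Pick the 'head' of a PEU under the codebook's convention.

    `core` is a list of one or two dicts with hypothetical keys
    "sex" ("M"/"F") and "age".
    """
    if len(core) == 1:
        # A single core individual is the head.
        return core[0]
    a, b = core
    if a["sex"] != b["sex"]:
        # Mixed-sex couple: the male is taken as the head.
        return a if a["sex"] == "M" else b
    # Same-sex couple: the older individual is taken as the head.
    return a if a["age"] >= b["age"] else b
```

As the text notes, no judgment about household organization is implied; the rule merely fixes a systematic way of organizing the records.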

IMPUTATION
The missing data in the survey have been imputed five times by drawing repeatedly from an estimate of the conditional distribution of the data. These imputations are stored as five successive replicates ("implicates") of each data record. Thus, the number of observations in the dataset (21,495) is five times the actual number of respondents (4299); see below in the weight section of the codebook for a discussion of the use of these implicates. The imputation procedure is described in detail in "Imputation of the 1989 Survey of Consumer Finances: Multiple Imputation and Stochastic Relaxation", by Arthur Kennickell. For a general discussion of multiple imputation and its uses, see Multiple Imputation for Nonresponse in Surveys by Donald B. Rubin, John Wiley and Sons, 1987. The multiple imputations allow users to estimate the amount of uncertainty in estimates that is due to imputation. For users who want to estimate only simple statistics such as means and medians ignoring imputation error, it will probably be sufficient to divide the weights by 5. Users who want to estimate regressions should be cautious in their treatment of the implicates. Many regression packages will treat each of the five implicates as an independent observation and correspondingly inflate the reported significance of results. Users who want to calculate regression estimates, but who have no immediate use for proper significance tests (perhaps for exploratory work), could either regress the average of the dependent and independent values across the implicates, or multiply the standard errors of the regression (on all observations) by the square root of five. For an easily understandable discussion of multiple imputation in the SCF from a user's point of view, see Catherine Montalto and Jaimie Sung, "Multiple Imputation in the 1992 Survey of Consumer Finances," Financial Counseling and Planning, Volume 7, 1996, pages 133-146 (or on the Internet at http://www.hec.ohio-state.edu/hanna/imput.htm).
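For example, a simple weighted mean can be combined across the implicates as follows. This is an illustrative Python sketch under assumed array names (the SCF distribution itself contains no such code); NumPy is used only for convenience:

```python
import numpy as np

def mi_mean(y, wgt, implic, n_implicates=5):
    """Weighted mean of y averaged across the implicates.

    y, wgt, and implic are arrays over all records (5 times the number
    of respondents); implic holds the implicate number (1-5).  Because
    the weights on each implicate sum to the same population total,
    this equals a single weighted mean computed with the weights
    divided by 5, as suggested in the text.
    """
    means = [np.average(y[implic == i], weights=wgt[implic == i])
             for i in range(1, n_implicates + 1)]
    return float(np.mean(means))
```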

"OTHER" CODES
In almost every case where a respondent could supply a response that did not fit in the codeframe offered to interviewers on their computer screens, the CAPI program was constructed to allow the entry of a verbatim response. There were a few open-ended questions that were set up to accept only a verbatim response. All of these responses were run through a standard coding process at NORC. Once the data were at the FRB, strenuous efforts were made to resolve all instances of responses that remained coded as "other." Because all verbatim responses were captured by the CAPI program, the resolution process was simpler than in the past. Such responses that remain are unusual but legitimate responses which do not fit within the existing codeframe, and because they appear unlikely to recur in future surveys, the codeframe was not augmented. Responses that were not informative were treated as missing values and were imputed. In the 1992 survey, scanned images of the paper questionnaires were stored on CD-ROM, and all "other" responses were looked up. However, because the collection of verbatim responses was not enforced to the degree that is possible with CAPI, a substantial fraction of cases contained no additional information. Thus, there is a larger fraction of unresolved "other" responses in 1992 than in 1995. In 1989, it appears that the SCF coders were more successful than in 1992 in resolving "other" responses. Because of the different treatment of verbatim responses over time, analysts should exercise caution in time-series comparisons of "other" responses.

ANALYSIS WEIGHTS
Because the SCF sample is not an equal-probability design, weights play a critical role in interpreting the survey data. The main dataset contains the final nonresponse-adjusted sampling weights. These weights are intended to compensate for unequal probabilities of selection in the original design and for unit nonresponse (failure to obtain an interview). The weight (X42001) is a partially design-based weight constructed at the Federal Reserve using original selection probabilities and frame information along with aggregate control totals estimated from the Current Population Survey. The population defined by the weights for *each implicate* (see above) is 99.0 million households. This weight is a relatively minor revision of the consistent weight series (X42000) maintained for the SCFs beginning with 1989 (For a detailed discussion of these weights, see "Consistent Weight Design for the 1989, 1992, and 1995 SCFs and the Distribution of Wealth," by Arthur B. Kennickell and R. Louise Woodburn, Review of Income and Wealth, Series 45, Number 2, June 1999, pp. 193-215 or the longer version given on the SCF web site at http://www.federalreserve.gov/pubs/oss/oss2/method.html). The nature of the revisions to the consistent weights is described in "Revisions to the SCF Weighting Methodology: Accounting for Race/Ethnicity and Homeownership," by Arthur Kennickell (see SCF web site). A version of the revised weight has been computed for all the surveys beginning with 1989, and this variable has been added to the public versions of the SCF datasets. Users should be aware that the sum of each of the weights over all sample cases and imputation replicates is equal to five times the number of households in the sample universe. 
Although the weights should produce reliable results at the level of broad aggregates (e.g., net worth and income), it is important to remember that many of the variables collected in the SCF are highly skewed in their distribution and that many such variables will apply to only a relatively small fraction of the sample. In the SCF group at the Federal Reserve, we routinely review our calculations for the presence of overly-influential outliers, and robust techniques are applied when appropriate. We encourage other users to exercise similar care in analyzing the data. Users who use the SAS procedure PROC UNIVARIATE are particularly warned to use the FREQ option (as opposed to the WEIGHT option) to obtain weighted medians.
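The weighted-median rule in question takes the first value at which the cumulative weight reaches half the total. A minimal Python illustration of that rule (not part of the SCF code) is:

```python
import numpy as np

def weighted_median(values, weights):
    # Sort by value, accumulate normalized weights, and take the first
    # value whose cumulative weight reaches 0.5.
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w) / w.sum()
    return v[np.searchsorted(cum, 0.5)]
```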

SAMPLING ERROR
Because we are unable to give users any sample design information about cases in the dataset, they will be unable on their own to compute reasonable estimates of the sampling variances of their estimates. To facilitate such estimation, we have included two files of replicate weights and multiplicity factors--one corresponding to X42000 and one to X42001. Using detailed information about the original sample design, we selected 999 sample replicates from the final set of completed cases in a way intended to capture the important dimensions of sample variation (see Arthur Kennickell, Douglas McManus, and R. Louise Woodburn, "Weighting Design for the 1992 Survey of Consumer Finances," for details). For each survey case and each replicate, the file contains a weight (WT1B1-WT1B999) and the number of times the case was selected in the replicate (MM1-MM999). We computed weights for each replicate using exactly the same procedures we used for the main weights. Replicate weights were computed only for the first implicate of each case. For most purposes, users will probably want to multiply the weight by the multiplicity: in all cases, the sum over cases of each weight times the corresponding multiplicity equals the total number of households. To estimate the sampling variance of the mean of family income, for example, a user would estimate the mean 999 times using the replicate weights and compute the standard deviation of those 999 estimates. An estimate of the total standard error is given by SQRT((6/5)*imputation variance + sampling variance).
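The final combining step can be expressed compactly. The following Python sketch is illustrative only, assuming the user has already computed the statistic of interest once per implicate and once per replicate weight:

```python
import numpy as np

def combined_se(implicate_ests, replicate_ests):
    """Total standard error from the formula in the text:
    SQRT((6/5)*imputation variance + sampling variance).

    implicate_ests: the statistic computed on each of the 5 implicates.
    replicate_ests: the statistic computed under each of the 999
    replicate weights.
    """
    imp_var = np.var(implicate_ests, ddof=1)
    samp_var = np.var(replicate_ests, ddof=1)
    return float(np.sqrt((6.0 / 5.0) * imp_var + samp_var))
```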

A simple SAS program to compute the standard error due to sampling and imputation for the mean and median of a given variable is provided below. This program may be adapted easily for other types of calculations. To conserve on necessary memory, the program computes sampling error using blocks of 100 replicate weights rather than the full set at once. Users with large amounts of RAM may wish to increase the size of these blocks, and those with smaller amounts may wish to decrease the size.

    * MACRO MEANIT;
    * AK: May 1, 1997 version;
    * DSN specifies the name of the dataset to be used (the dataset
      should contain the following: the main weight renamed as WGT0, a
      set of variables WGT1-WGT999 equal to the replicate weights
      multiplied by the corresponding multiplicity factors, a variable
      for which one wishes to compute the standard error due to
      imputation and sampling for the mean and median, and a variable
      IMPLIC equal to the implicate number of each case)
      VAR contains the name of the variable for which one desires
      standard errors
      PFLAG: blank prints interim statistics/any character string
      (e.g., NO) suppresses printing;
    %MACRO MEANIT(DSN=,VAR=,PFLAG=);

    * compute global mean/median;
      PROC UNIVARIATE DATA=&DSN;
        FREQ WGT0;
        VAR &VAR;
      RUN;
    
    * rank order the data for the median calculation;
      PROC SORT DATA=&DSN;
        BY &VAR;
      RUN;

      PROC IML WORKSPACE=10000000 SYMSIZE=5000;
        RESET LOG LINESIZE=78;

    *   first imputation variance;

        EDIT &DSN;
        TEMP={IMPLIC &VAR WGT0};
        READ ALL VAR TEMP INTO MDATA;

    *   total population;
        POP=SUM(MDATA[,3])/5;

    *   create matrix to hold values of means/medians by implicates;
        IM=SHAPE(0,1,5);
        ID=SHAPE(0,1,5);

    *   compute mean/median;
        DO I=1 TO 5;
          IMP=MDATA[LOC(MDATA[,1]=I),2:3];
    *     compute mean;
          MM=IMP[,1]#IMP[,2];
          IM[1,I]=MM[+,]/POP;
    *     compute median;
          DD=IMP[RANK(IMP[,1]),];
          DD[,2]=CUSUM(DD[,2])/POP;
          ID[1,I]=DD[MIN(LOC(DD[,2]>=.5)),1];
          FREE IMP MM DD;
        END;
        FREE MDATA;

        %IF (&PFLAG EQ ) %THEN %DO;
          PRINT IM ID;
        %END;

    *   next sampling variance;
    *   create matrix to hold values of means/medians by replicates;
        RM=SHAPE(0,1,999);
        RD=SHAPE(0,1,999);
    
        %DO I=1 %TO 10;

          %PUT CLUMP NUMBER &I;
          %IF (&I EQ 1) %THEN %DO;
            %LET TOP=99;
            %LET BOT=1;
            %LET LEN=100;
          %END;
          %ELSE %DO;
            %LET BOT=%EVAL(&TOP+1);
            %LET TOP=%EVAL(&TOP+100);
            %LET LEN=101;
          %END;
          %LET WSTR=%STR();
          %DO J=&BOT %TO ⊤
            %LET WSTR=&WSTR WGT&J;
          %END;

          EDIT &DSN;
          TEMP={&VAR &WSTR};
          READ ALL VAR TEMP WHERE (IMPLIC=1) INTO MDATA;
    
    *     compute means;
          MEAN=MDATA[,2:&LEN]#MDATA[,1];
          RM[,&BOT:&TOP]=MEAN[+,]/POP;

    *     compute medians;
          DO I=2 TO &LEN;
            MDATA[,I]=CUSUM(MDATA[,I])/POP;
            RD[&BOT+I-2]=MDATA[MIN(LOC(MDATA[,I]>=.5)),1];
          END;
          FREE MDATA;
        %END;

        %IF (&PFLAG EQ ) %THEN %DO;
          PRINT RM RD;
        %END;

    *   finally, compute standard error wrt imputation/sampling;
    *   (X-X-bar)**2/(n-1);
        IVM=(IM-IM[,+]/5)##2;
        IVM=IVM[,+]/4;
        IVD=(ID-ID[,+]/5)##2;
        IVD=IVD[,+]/4;

        RVM=(RM-RM[,+]/999)##2;
        RVM=RVM[,+]/998;
        RVD=(RD-RD[,+]/999)##2;
        RVD=RVD[,+]/998;

    *   SQRT(((ni+1)/ni)*SIGMAI**2 + SIGMAR**2);
        TVM=SQRT((6/5)*IVM+RVM);
        TVD=SQRT((6/5)*IVD+RVD);

        IVM=SQRT(IVM);
        IVD=SQRT(IVD);
        RVM=SQRT(RVM);
        RVD=SQRT(RVD);

        PRINT "STD DEV IMPUTATION: MEAN: " IVM "    MEDIAN: " IVD;
        PRINT "STD DEV SAMPLING: MEAN: " RVM "    MEDIAN: " RVD;
        PRINT "COMBINED STD DEV: MEAN: " TVM "    MEDIAN: " TVD;

      QUIT;



    %MEND MEANIT;


    * create dataset from main dataset and replicate weight file;
    DATA DAT(KEEP=NW IMPLIC WGT0-WGT999);
      MERGE xxx.main_ds(KEEP=Y1 X42001 ...) 
        xxx.rep_wgts(KEEP=Y1 MM1-MM999 WT1B1-WT1B999);
      BY Y1;

    * multiply replicate weights by the multiplicity;
      ARRAY MULT {*} MM1-MM999;
      ARRAY RWGT {*} WT1B1-WT1B999;
      ARRAY WGTS {*} WGT1-WGT999;
      DO I=1 TO DIM(MULT);
*       take max of multiplicity/weight: where cases not selected for
        a replicate, there are missing values in these variables;
        WGTS{I}=MAX(0,MULT{I})*MAX(0,RWGT{I});
      END;
      WGT0=X42001;

    * define implicate number of case;
      IMPLIC=Y1-10*YY1;

    * define net worth (for example);
      NW=.......;
    RUN;

    * run the macro;
    %MEANIT(DSN=DAT,VAR=NW);

Users who want to estimate more complex statistics, particularly regressions, should be cautious in their treatment of the implicates. Many regression packages will treat each of the five implicates as an independent observation and correspondingly inflate the reported significance of results. Users who want to calculate regression estimates, but who have no immediate use for proper significance tests, could either average the dependent and independent values across the implicates or multiply their standard errors by the square root of five. For an easily understandable discussion of multiple imputation in the SCF from a user's point of view, see Catherine Montalto and Jaimie Sung, "Multiple Imputation in the 1992 Survey of Consumer Finances," Financial Counseling and Planning, Volume 7, 1996, pages 133-146 (or on the Internet at http://www.hec.ohio-state.edu/hanna/imput.htm). That article also contains a set of simple SAS macros to use to compute correct standard errors from multiply imputed data. 
An alternative that is useful for handling the output of general modeling routines is the following set of SAS code:

    * MACRO MISECOMP computes standard errors corrected for multiple
      imputation;
    * The input may be regression results, or any other results (e.g.,
      probits) that include a point estimate and a standard error
      estimate for each implicate;
    * The datasets are named &DSN.1-&DSN&NIMP (where &DSN and &NIMP are
      defined below);
    * The form of the input dataset is described above;
    * Often, it is quite easy to copy output directly from a statistical
      procedure into the form of this program without deleting
      extraneous information;
    * The required input variables are VARN (a name of the statistic of
      interest in all NIMP datasets), B1-B&NIMP (a working name for the
      point estimate of interest for each implicate--where the terminal
      number corresponds to the terminal number of the input dataset),
      and S1-S&NIMP (a working name for the standard error of the point
      estimate in each implicate--where the terminal number corresponds
      to the terminal number of the input dataset);
    * The parameters of the MACRO are:
      NIMP: number of implicates (default is 5)
      DSN: first part of name of each of the NIMP input datasets
        (e.g., DSN11, DSN12,...,DSN15 could be results for implicates
        1-5 for model 1) (default is DSN1i, where "i" ranges from 1 to
        NIMP)
      PRNTPR: determines the number of digits of the output data
        (default is SAS format 10.6);
    * The output includes three lines for each unique VARN in the input
      datasets: the final point estimate, the final standard error, and
      the final t-statistic;
    *************************************************************;
    * Steps to compute standard errors;
    * (1) run each model (regressions, probits, etc.) for each of the
      five implicates separately;
    * (2) copy the model outputs into program code as described above;
      /* For example,
         DATA DSNij;
           INPUT VARN $ Bi Si;
           CARDS;
         data here
         ;
         RUN;
         where "i" ranges over the number of distinct models treated,
         and "j" ranges over the number of implicates.
         NOTE: any technique that reads VARN, Bi and Si into the
         datasets will work. */
    * (3) call MISECOMP (MACRO defaults will work correctly for the SCF
      if the dataset names are DSN11, DSN12, DSN13, DSN14, DSN15);
    *************************************************************;
    %MACRO MISECOMP(NIMP=5,DSN=DSN1,PRNTPR=10.6);
      DATA &DSN.1;
        SET &DSN.1;
        ORD=_N_;
      RUN;
      %DO I=1 %TO &NIMP;
        PROC SORT DATA=&DSN&I;
          BY VARN;
        RUN;
      %END;
      DATA ALL;
        MERGE %DO I=1 %TO &NIMP; &DSN&I %END; ;;
        BY VARN;
        ARRAY BMOD {*} %DO I=1 %TO &NIMP; B&I %END;;
        ARRAY SMOD {*} %DO I=1 %TO &NIMP; S&I %END;;
        BETA=0;
        SIGMA=0;
        ST=0;
    *   average the point estimates and the squared standard errors;
        DO J=1 TO &NIMP;
          BETA=BMOD{J}+BETA;
          SIGMA=SMOD{J}**2+SIGMA;
        END;
        BETA=BETA/&NIMP;
        SIGMA=SIGMA/&NIMP;
    *   between-implicate variation of the point estimates;
        DO I=1 TO &NIMP;
          ST=ST+(BETA-BMOD{I})**2;
        END;
        SIGMA=SQRT(SIGMA+(1+1/&NIMP)*ST/(&NIMP-1));
        TSTAT=BETA/SIGMA;
      RUN;
      PROC SORT DATA=ALL;
        BY ORD;
      RUN;
      DATA ALL;
        SET ALL;
        PUT VARN @15 BETA &PRNTPR / @15 SIGMA &PRNTPR
          / @15 TSTAT &PRNTPR;
      RUN;
    %MEND MISECOMP;
    %MISECOMP;
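The combining rule that the MISECOMP macro implements can also be rendered in a few lines of Python (illustrative only; array names are assumptions):

```python
import numpy as np

def mi_combine(betas, ses, nimp=5):
    """Combine point estimates and standard errors across implicates:
    average the point estimates, then add the within-implicate variance
    and (1 + 1/nimp) times the between-implicate variance."""
    betas = np.asarray(betas, dtype=float)
    ses = np.asarray(ses, dtype=float)
    beta = betas.mean()
    within = (ses ** 2).mean()
    between = ((betas - beta) ** 2).sum() / (nimp - 1)
    se = float(np.sqrt(within + (1 + 1 / nimp) * between))
    return float(beta), se, float(beta) / se
```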

SUMMARY VARIABLES
We have not made an effort to include summary variables (e.g., net worth) in the dataset. Although it is complicated to construct such variables, it is our belief that a substantial amount of judgment is involved in selecting which variables to include, and that other analysts should make their own decisions. However, as a guide to users, we have included at the end some SAS code to compute net worth according to our routine definitions.

DISCLOSURE REVIEW
To protect the privacy of individual respondents, the data in this release have been systematically altered by several means to minimize the possibility of identifying any survey respondent. For some discrete variables, small or unusual cells were collapsed as noted in the variable descriptions in the next section of the codebook. Continuous variables were rounded. Data were also blurred by other unspecified means. In addition, a number of other cases were identified for more extensive treatment. Some of these cases were selected on the basis of extreme or unusual data values. Other cases were selected at random. For each of these cases, a selection of critical variables was set to missing and statistically imputed subject to constraints designed to ensure that any distortions induced in key population statistics would be minimal. The geographic identifiers here have been systematically altered for a subset of respondents by swapping their locations with those of otherwise similar respondents. Where relevant, the codebook provides more detailed information on cell collapsing and other techniques.

It is important to note that aside from the cell collapsing, there is no key in this codebook or in the dataset that would allow users to identify directly either which data items have been smoothed or otherwise altered, or which cases were selected for imputation of critical values (that is, the shadow variables in this dataset may not always reflect the true original status of every variable). Although this blurring of the data will have some effect on analysis, that effect should be negligible in most cases. For further details on the procedures taken to protect the identity of respondents, see "Disclosure Review and Its Implications for the 1992 Survey of Consumer Finances" by Gerhard Fries, Barry Johnson, and R. Louise Woodburn (1997 working paper, SCF group, Federal Reserve Board). Users who feel that the restrictions imposed on the public dataset are too constricting are encouraged to submit written proposals for expanded data release, and those requests will be given serious consideration in the release of data from future surveys.

Dollar variables have been rounded according to the following scheme:

DO I = 1 TO DIM($VARs);
 IF (0 < $_VAR < 5) THEN $_VAR=1;
 ELSE IF (5 <= $_VAR < 1000) THEN $_VAR=MAX(1,ROUND($_VAR,10));
 ELSE IF (1000 <= $_VAR < 10000) THEN $_VAR=ROUND($_VAR,100);
 ELSE IF (10000 <= $_VAR < 1000000) THEN $_VAR=ROUND($_VAR,1000);
 ELSE IF (1000000 <= $_VAR) THEN $_VAR=ROUND($_VAR,10000);
 ELSE IF (-1000 <= $_VAR < -5) THEN $_VAR=ROUND($_VAR,10);
 ELSE IF (-10000 <= $_VAR < -1000) THEN $_VAR=ROUND($_VAR,100);
 ELSE IF (-1000000 < $_VAR < -10000) THEN $_VAR=ROUND($_VAR,1000);
 ELSE IF .Z < $_VAR <= -1000000 THEN $_VAR=-1000000;
END;

An important exception to this rounding rule is amounts that were
reported in an hourly frequency (e.g., X4112).  If the hourly amount
is greater than $25, then the above rounding rule applies.  Otherwise,
the amount is rounded to the nearest $.10.
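The scheme above, including the hourly-amount exception, can be translated into Python as follows. This is an illustrative rendering, not the production code; SAS ROUND breaks ties away from zero, which the helper reproduces:

```python
import math

def _round_to(x, unit):
    # Round to the nearest multiple of unit, ties away from zero
    # (matching the behavior of SAS ROUND).
    return int(math.copysign(math.floor(abs(x) / unit + 0.5), x)) * unit

def round_dollar(v, hourly=False):
    # Hourly amounts of $25 or less are rounded to the nearest $.10;
    # otherwise the general scheme applies.
    if hourly and v <= 25:
        return _round_to(v * 10, 1) / 10.0
    if 0 < v < 5:
        return 1
    elif 5 <= v < 1000:
        return max(1, _round_to(v, 10))
    elif 1000 <= v < 10000:
        return _round_to(v, 100)
    elif 10000 <= v < 1000000:
        return _round_to(v, 1000)
    elif v >= 1000000:
        return _round_to(v, 10000)
    elif -1000 <= v < -5:
        return _round_to(v, 10)
    elif -10000 <= v < -1000:
        return _round_to(v, 100)
    elif -1000000 < v < -10000:
        return _round_to(v, 1000)
    elif v <= -1000000:
        return -1000000
    return v  # values in [-5, 0] are not covered by the scheme
```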

CASE ID NUMBERS
Under the original numbering system (XX1), the sample design is apparent from the identification numbers. Thus, each case included in the public version of the dataset has been given a new identification number (YY1), which is intended to mask the knowledge of which cases were drawn from the SCF list sample. It is not possible to know with certainty from the information provided in the public version of this dataset which cases derive from the list sample. Because we routinely use the original numbers internally, users who direct questions to us about specific cases should make clear that they are using the external ID number, to avoid confusion.

DATA REVIEW
We have spent many hours searching for errors in the data. Many seeming inconsistencies are actually in the raw data and appear to have no obvious reconciliation. Other types of inconsistencies may have been induced as a byproduct of imputation, even though elaborate checks are built into the imputation routines. We ask our colleagues who use this dataset to help us find the remaining resolvable inconsistencies. Our presumption is always that the respondent understood each question and reported accurately, and that the process of transcription and coding did not distort that information. In the relatively small number of cases where other information convinced us beyond a reasonable doubt that the reported data were not valid, we have changed the data, either by altering values directly or by setting them to missing and imputing them; in all such cases, the shadow variables indicate that we have overridden reported data.

CONTACT INFORMATION
It is likely that some users will have trouble understanding the organization of the data at first. IF, AFTER HAVING FRAMED A FOCUSED QUESTION AND EXHAUSTED ALL OF YOUR LOCAL RESOURCES, YOUR PROBLEM PERSISTS, you may call Gerhard Fries at (202) 452-2578 (e-mail [email protected]) or me at (202) 452-2247 (e-mail [email protected]). ****We prefer correspondence via e-mail.**** While we would like to be helpful to you, please realize that we are not set up to provide extensive services to users. We hope that with persistence, you will almost always be able to figure out what you need by consulting the interview program and the codebook below. We should be your last resort.


DISCUSSION OF RANGE DATA COLLECTION AND J-CODES
Dollar values in the 1995 SCF were collected in a way that takes advantage of the power of CAPI (for a detailed description and analysis, see "Using Range Techniques with CAPI in the 1995 Survey of Consumer Finances" by Arthur B. Kennickell (1996 working paper, SCF group, Federal Reserve Board)). In the past, we had evidence that some respondents volunteered figures in ranges. Good interviewers have always tried to get respondents to settle on a single "best" figure, but sometimes there may be no firm figure (e.g., the value of a privately-held business may be known only at the point it is actually sold), and probing too far could cause the respondent to answer "don't know." The 1995 survey allowed for responses to be reported in ranges volunteered by respondents. There is another class of respondents who do not volunteer a range and do not know (or will not give) an exact figure, but who will give some information about the value. To obtain information from this second group of people, we have included two options in the CAPI program. First, a respondent who is uncomfortable actually saying an amount may report a letter from a card that specifies a number of ranges. The range card has been used very successfully in earlier waves of the SCF, but CAPI allows the option to be presented consistently. Second, a respondent who declines the use of the range card is asked a series of questions in a "decision tree" that are designed to specify a range. In earlier SCFs, the decision tree was used for people who did not know or refused to report a figure for their total income, and in the current Health and Retirement Survey such a procedure is used for many dollar figures. The evidence from both sources is very encouraging. In the 1995 SCF, the decision tree breaks vary by question (so that, for example, monthly rent is not subject to the same ranges as the value of corporate stock).
The computer sequences used for range followup for all dollar values in the 1995 survey (known as "DKDOL") are outlined schematically in a section below. It should be noted that interviewers were strongly instructed that a single dollar value is the best answer to each of these questions. Although there is the distinct possibility that respondents may become "trained" in the use of the range questions during the course of the interview (the effect of this training is unclear at present: respondents may tend to report "too many" ranges because they know that they are allowed; alternatively, respondents may learn that it is much quicker to give a single dollar figure), interviewers should be using all of the standard techniques to get respondents to give a single figure where possible.
Schematic diagram of sequence used for all dollar questions:

                 Qnn.  How much is your [******]?

level 1:  $________  $___RANGE              $______DK       $__Refuse
                                                |________________|
level 2:  Confirm    Range card                    Range card?
                     or dollar range?

                     RC      DR             YES       NO/DK    Refuse

level 3:  OUT        Letter  Upper bound    Letter    Decision
                             Lower bound              tree

level 4:             OUT     Confirm        OUT       Confirm  OUT

level 5:                     OUT                      OUT


(OUT=proceed to next question)

At the first level, the respondent has the option of providing a
dollar amount (as in the past, interviewers were strongly urged to
obtain a single dollar value where possible), volunteering a range,
answering "don't know," or refusing to answer.  Each of these
responses implies a different sequence of questions.  In the case of a
single dollar figure, the CAPI program displays in words the number
the interviewer has typed into the computer and proceeds to the next
question.  If the respondent volunteers a range, there is an option to
report either a range in dollars (and in some cases the upper or lower
bound of a range may be missing--e.g., as in the case where a
respondent answers "greater than a million dollars") or to give a
letter from a range card (the ranges are given below).  If the
respondent answers "don't know" or refuses to answer, the program will
present a request to use the range card.  If the respondent is unable
to use the range card (answers "no" or "don't know"), the program
presents a series of questions known as a "decision tree," which is
specified in greater detail below.  If the respondent refuses when
asked to use the range card, the program proceeds to the next
question.  The exact question text for this sequence is given below.
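
As a rough sketch, the level-1 branching just described might be expressed in Python as follows. This is purely illustrative: the function, prompt text, and return values here are invented for this example, and the actual CAPI logic appears in the Interview Program section of this codebook.

```python
# Hypothetical sketch of the DKDOL level-1 branching. Names and
# prompts are invented; they are not taken from the CAPI program.

def dkdol_sequence(response_type, ask):
    """Route a level-1 response to the appropriate follow-up.

    response_type: one of 'dollar', 'range', 'dk', 'refuse'
    ask: callback that poses a follow-up prompt and returns the answer
    """
    if response_type == 'dollar':
        # Level 2: program displays the typed number in words for
        # confirmation, then proceeds to the next question (OUT).
        return ('confirm', 'next question')
    if response_type == 'range':
        # Level 2: volunteered range -> range card letter or dollar bounds.
        choice = ask('range card or dollar range?')
        if choice == 'range card':
            return ('letter', ask('enter letter'))
        return ('bounds', ask('enter low end'), ask('enter high end'))
    # 'dk' or 'refuse': level 2 offers the range card.
    use_card = ask('can you give me a range from this card?')
    if use_card == 'yes':
        return ('letter', ask('enter letter'))
    if use_card == 'refuse':
        return ('no data', 'next question')   # proceed without a value
    return ('decision tree', None)            # NO/DK -> decision tree
```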

Because of software limitations, negative ranges presented a special
problem.  It was not feasible to build in negative ranges directly.
As a compromise, interviewers were instructed to collect the ranges in
absolute values and record in a comment box available in the program
the fact that the range was negative.

Text presented to interviewer at level 2 if R volunteers a range:

     CHOOSE: 

     ENTER LETTER FROM RANGE CARD
     ENTER LOW END AND HIGH END OF RANGE


Text presented to interviewer at level 3 if R volunteers a range and
chooses the range card at level 2:

     ENTER LETTER FROM RANGE CARD:

          Possible card responses shown on range card:

               A    ......    $1 - $100
               B    ......    $101 - $500
               C    ......    $501 - $750
               D    ......    $751 - $1,000
               E    ......    $1,001 - $2,500
               F    ......    $2,501 - $5,000
               G    ......    $5,001 - $7,500
               H    ......    $7,501 - $10,000
               I    ......    $10,001 - $25,000
               J    ......    $25,001 - $50,000
               K    ......    $50,001 - $75,000
               L    ......    $75,001 - $100,000
               M    ......    $100,001 - $250,000
               N    ......    $250,001 - $1 million
               O    ......    $1 million - $5 million
               P    ......    $5 million - $10 million
               Q    ......    $10 million - $25 million
               R    ......    $25 million - $50 million
               S    ......    $50 million - $100 million
               T    ......    More than $100 million
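
For reference, the card's letter-to-bounds mapping can be written as a simple lookup table. This is an illustrative transcription of the card above (using the exact dollar bounds given later in the J-code list, with `None` marking the open-ended top range); it is not part of the SCF programs.

```python
# Range-card letters and their dollar bounds; (low, None) = open-ended.
RANGE_CARD = {
    'A': (1, 100),            'B': (101, 500),
    'C': (501, 750),          'D': (751, 1_000),
    'E': (1_001, 2_500),      'F': (2_501, 5_000),
    'G': (5_001, 7_500),      'H': (7_501, 10_000),
    'I': (10_001, 25_000),    'J': (25_001, 50_000),
    'K': (50_001, 75_000),    'L': (75_001, 100_000),
    'M': (100_001, 250_000),  'N': (250_001, 1_000_000),
    'O': (1_000_001, 5_000_000),
    'P': (5_000_001, 10_000_000),
    'Q': (10_000_001, 25_000_000),
    'R': (25_000_001, 50_000_000),
    'S': (50_000_001, 100_000_000),
    'T': (100_000_001, None),   # more than $100 million
}

def card_bounds(letter):
    """Return the (low, high) dollar bounds for a range-card letter."""
    return RANGE_CARD[letter.upper()]
```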


Text presented to interviewer at level 3 if R volunteers a range and
gives a dollar range at level 2:

     ENTER LOW END OF RANGE :  $___,___,___.00

     ENTER HIGH END OF RANGE : $___,___,___.00


Text presented to interviewer at level 2 if R answers DK/Ref at level 1:

     Can you give me a range from this card?  HAND R RANGE CARD.

     YES
     NO

Text presented to interviewer at level 3 if R answers DK/Ref at
level 1 and answers YES at level 2:

     ENTER LETTER FROM RANGE CARD:

          Possible card responses shown on range card:
               See above

Decision tree sequence presented to interviewer at level 3 if R
answers DK/Ref at level 1 and NO/DK at level 2:

     CONSIDER THE FOLLOWING 7 NUMBERS, WHICH ARE STRICTLY INCREASING IN
     VALUE: V1, V2, V3, V4, V5, V6, AND V7.  RESPONDENTS ARE ASKED A
     SEQUENCE OF QUESTIONS TO FIND THE INTERVAL DEFINED BY THESE NUMBERS
     IN WHICH A GIVEN VARIABLE FALLS.

     Q1.  Was it V4 dollars or more?

               YES --> GO TO Q2
               NO, DK --> GO TO Q5
               Ref --> EXIT

     Q2.  Was it V5 dollars or more?

               YES --> GO TO Q3
               NO, DK, Ref --> EXIT

     Q3.  Was it V6 dollars or more?

               YES --> GO TO Q4
               NO, DK, Ref --> EXIT

     Q4.  Was it V7 dollars or more?

               YES, NO, DK, Ref --> EXIT
     
     Q5.  Was it V1 dollars or more?

               YES --> GO TO Q6
               NO, DK, Ref --> EXIT

     Q6.  Was it V2 dollars or more?

               YES --> GO TO Q7
               NO, DK, Ref --> EXIT

     Q7.  Was it V3 dollars or more?

               YES, NO, DK, Ref --> EXIT


To allow for appropriate ranges for all dollar questions, there are
eight different versions of the V1 to V7 variables given below.

Version     V1     V2       V3       V4         V5         V6          V7
   1     10,000  100,000  250,000  500,000    1,000,000  5,000,000   10,000,000
   2     50,000  100,000  500,000  1,000,000  5,000,000  10,000,000  25,000,000
   3     50,000  100,000  150,000  250,000    500,000    1,000,000   5,000,000 
   4     5,000   25,000   50,000   100,000    250,000    500,000     1,000,000 
   5     5,000   10,000   25,000   50,000     100,000    250,000     750,000  
   6     500     1,000    5,000    10,000     25,000     75,000      250,000  
   7     100     250      500      1,000      2,000      10,000      50,000   
   8     50      100      250      500        1,000      5,000       10,000   
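
The tree logic and the eight threshold sets can be combined in a short Python sketch. This is hypothetical code for illustration only: it shows the interval that a cooperative respondent's YES/NO answers would pin down for a given true amount, starting at V4 (Q1) and probing upward through V5-V7 or downward through V3-V1, as in the Q1-Q7 sequence above.

```python
# Threshold versions V1..V7, transcribed from the table above.
VERSIONS = {
    1: (10_000, 100_000, 250_000, 500_000, 1_000_000, 5_000_000, 10_000_000),
    2: (50_000, 100_000, 500_000, 1_000_000, 5_000_000, 10_000_000, 25_000_000),
    3: (50_000, 100_000, 150_000, 250_000, 500_000, 1_000_000, 5_000_000),
    4: (5_000, 25_000, 50_000, 100_000, 250_000, 500_000, 1_000_000),
    5: (5_000, 10_000, 25_000, 50_000, 100_000, 250_000, 750_000),
    6: (500, 1_000, 5_000, 10_000, 25_000, 75_000, 250_000),
    7: (100, 250, 500, 1_000, 2_000, 10_000, 50_000),
    8: (50, 100, 250, 500, 1_000, 5_000, 10_000),
}

def tree_bounds(version, value):
    """Interval a cooperative respondent's YES/NO answers would imply
    for a true amount `value`; returns (low, high), None = open-ended."""
    v1, v2, v3, v4, v5, v6, v7 = VERSIONS[version]
    if value >= v4:                        # Q1 YES: probe upward (Q2-Q4)
        for lo, hi in ((v7, None), (v6, v7), (v5, v6)):
            if value >= lo:
                return (lo, hi)
        return (v4, v5)
    # Q1 NO: probe downward (Q5-Q7)
    for lo, hi in ((v3, v4), (v2, v3), (v1, v2)):
        if value >= lo:
            return (lo, hi)
    return (None, v1)
```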

There are 31 possible unique outcomes for each of the 8 versions of
the decision tree:

1. Q1=NO, Q5=NO
2. Q1=NO, Q5=DK
3. Q1=NO, Q5=Ref
4. Q1=NO, Q5=YES, Q6=NO
5. Q1=NO, Q5=YES, Q6=DK
6. Q1=NO, Q5=YES, Q6=Ref
7. Q1=NO, Q5=YES, Q6=YES, Q7=NO
8. Q1=NO, Q5=YES, Q6=YES, Q7=DK
9. Q1=NO, Q5=YES, Q6=YES, Q7=Ref
10. Q1=NO, Q5=YES, Q6=YES, Q7=YES
11. Q1=DK, Q5=NO
12. Q1=DK, Q5=DK ---> NOTE: RESULTS IN NO BOUNDING INFORMATION
13. Q1=DK, Q5=Ref ---> NOTE: RESULTS IN NO BOUNDING INFORMATION
14. Q1=DK, Q5=YES, Q6=NO
15. Q1=DK, Q5=YES, Q6=DK
16. Q1=DK, Q5=YES, Q6=Ref
17. Q1=DK, Q5=YES, Q6=YES, Q7=NO
18. Q1=DK, Q5=YES, Q6=YES, Q7=DK
19. Q1=DK, Q5=YES, Q6=YES, Q7=Ref
20. Q1=DK, Q5=YES, Q6=YES, Q7=YES
21. Q1=Ref ---> NOTE: RESULTS IN NO BOUNDING INFORMATION
22. Q1=YES, Q2=NO
23. Q1=YES, Q2=DK
24. Q1=YES, Q2=Ref
25. Q1=YES, Q2=YES, Q3=NO
26. Q1=YES, Q2=YES, Q3=DK
27. Q1=YES, Q2=YES, Q3=Ref
28. Q1=YES, Q2=YES, Q3=YES, Q4=NO
29. Q1=YES, Q2=YES, Q3=YES, Q4=DK
30. Q1=YES, Q2=YES, Q3=YES, Q4=Ref
31. Q1=YES, Q2=YES, Q3=YES, Q4=YES
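
The count of 31 can be verified by enumerating the answer paths through the Q1-Q7 routing above. This is an illustrative check, not part of the survey programs.

```python
# Count terminal states of the decision tree: each YES continues the
# sequence, each NO/DK/Ref (and every Q4/Q7 answer) exits.

def count_outcomes():
    leaves = 0
    for q1 in ('YES', 'NO', 'DK', 'Ref'):
        if q1 == 'Ref':
            leaves += 1          # outcome 21: no bounding information
        elif q1 == 'YES':        # upward branch: Q2 -> Q3 -> Q4
            leaves += 3 + 3 + 4  # 3 exits at Q2, 3 at Q3, 4 at Q4
        else:                    # NO or DK route to Q5 -> Q6 -> Q7
            leaves += 3 + 3 + 4  # 3 exits at Q5, 3 at Q6, 4 at Q7
    return leaves
```

Running `count_outcomes()` confirms 10 outcomes each for the Q1=NO, Q1=DK, and Q1=YES branches, plus the single Q1=Ref exit.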


-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
Definitions of the "J" Variables (1995 version)


0  = value reported on original tape (possibly altered during NORC
     editing).

1  = question is inapplicable for R (e.g., R has no checking account
     so value of checking account is coded as zero -- NOTE: there are
     no zeros in the dataset other than such values).

2  = data moved from another location (not including re-arranging
     columns in a grid); data moved from another location and added to
     data already at new location (e.g., wage income from spouse
     reported in independent adult part of section Y added to data
     reported for R in Section T).

3  = data provided for a question with a branch structure, but not
     known which branch data should be in (e.g., AGI given, but filing
     status unknown, R has mutual funds but answers NO to all types).

4  = data change (to non-missing value) at FRB based on comments/verbatims.
     Also includes editing changes specified by NORC that were not
     possible to implement in their data handling system.

5  = indicates a value coded from a verbatim ("other/specify") response.
6  = data moved/changed as result of coded verbatim ("other/specify")
     response.

8  = recode of survey variables, no missing values in antecedents.
     For sequences of variables, such as the determination of whether
     a dollar value or a percent is reported at Xhead-var/X4206/X4207,
     one variable determines which type of variable is reported
     (dollar/percent) subsequently; in some cases, the CAPI program
     skips to the dollar value question and will pursue the DKDOL
     sequence if necessary; if a dollar value (or range) is reported
     in this way, the J-code of the initial choice variable is set to
     the value '8.'

9  = recode of survey variables, insufficient data collected to
     compute value, not imputed.

10 = part of reported value reported elsewhere and edited out here
     (e.g., wage income of NPEU member also reported at X5701 along
     with income of PEU resulting in J5702=10) or entire reported
     value reported elsewhere and edited out here (e.g., all of wage
     income of NPEU member reported at X5701 resulting in X5701=5,
     J5701=10, X5702=0 and J5702=14).

12 = in case of regular installment loans where term is DK, non-missing
     typical payment moved to monthly payment section.
 
13 = coded value overridden by another value after editing completed
14 = inap given hard-code decision (12, 13, 15, or 16)
15 = hard-coded imputation determined during cleaning.
16 = other reassignment resulting from cleaning that overrides
     reported data (e.g., the cleaning of the institutions grid in
     Section A).
17 = value of originally missing data item implied by other variable(s).

18 = value originally inap as consequence of CAPI logic, new value
     inferred from other values.

19 = value changed, but logical content not altered (e.g., institution
     reported for an account, but no link to institutions grid.
     Account variable changed to pointer to an added institution of
     type indicated by original account question).

25 = correction of NORC edit error or to a non-missing/non-inapplicable value.

ALL RESPONSES THAT FOLLOW HAVE AT LEAST SOME MISSING INFORMATION

90 = Bounding information available based on summary information
     provided by respondent (typically, if a R does not know
     information about items beyond a certain number in a set of
     detailed questions about a larger number of such items, the R is
     asked one or a number of summary questions about all remaining
     instances).
91 = Same as 90, but R gave range data for the summary information.


RANGE RESPONSES:

POSITIVE RANGES

DECISION TREE RESPONSES THAT RESULTED IN A BOUND FOR POSITIVE NUMBERS

(NOTE: for decision tree codes, responses that resulted in no usable
bounding information are collected separately below)
'*' indicates an open-ended interval

NOTE: for J-code outcomes from 101-878, 921-940, and 971-990, .5 is
added to the J-code if the original response was DK

101=Decision tree response, version 1: outcome 1 (*,<=V1)
102=Decision tree response, version 1: outcome 2 (*,<=V4)
103=Decision tree response, version 1: outcome 3 (*,<=V4)
104=Decision tree response, version 1: outcome 4 (>V1,<=V2)
105=Decision tree response, version 1: outcome 5 (>V1,<=V4)
106=Decision tree response, version 1: outcome 6 (>V1,<=V4)
107=Decision tree response, version 1: outcome 7 (>V2,<=V3)
108=Decision tree response, version 1: outcome 8 (>V2,<=V4)
109=Decision tree response, version 1: outcome 9 (>V2,<=V4)
110=Decision tree response, version 1: outcome 10 (>V3,<=V4)
111=Decision tree response, version 1: outcome 11 (*,<=V1)
112=Decision tree response, version 1: outcome 14 (V1,V2)
113=Decision tree response, version 1: outcome 15 (>V1,*)
114=Decision tree response, version 1: outcome 16 (>V1,*)
115=Decision tree response, version 1: outcome 17 (>V2,<=V3)
116=Decision tree response, version 1: outcome 18 (>V2,*)
117=Decision tree response, version 1: outcome 19 (>V2,*)
118=Decision tree response, version 1: outcome 20 (>V3,*)
119=Decision tree response, version 1: outcome 22 (>V4,<=V5)
120=Decision tree response, version 1: outcome 23 (>V4,*)
121=Decision tree response, version 1: outcome 24 (>V4,*)
122=Decision tree response, version 1: outcome 25 (>V5,<=V6)
123=Decision tree response, version 1: outcome 26 (>V5,*)
124=Decision tree response, version 1: outcome 27 (>V5,*)
125=Decision tree response, version 1: outcome 28 (>V6,<=V7)
126=Decision tree response, version 1: outcome 29 (>V6,*)
127=Decision tree response, version 1: outcome 30 (>V6,*)
128=Decision tree response, version 1: outcome 31 (>V7,*)     

201=Decision tree response, version 2: outcome 1  (*,<=V1)     
202=Decision tree response, version 2: outcome 2  (*,<=V4)     
203=Decision tree response, version 2: outcome 3  (*,<=V4)     
204=Decision tree response, version 2: outcome 4  (>V1,<=V2)
205=Decision tree response, version 2: outcome 5  (>V1,<=V4)   
206=Decision tree response, version 2: outcome 6  (>V1,<=V4)   
207=Decision tree response, version 2: outcome 7  (>V2,<=V3)   
208=Decision tree response, version 2: outcome 8  (>V2,<=V4)   
209=Decision tree response, version 2: outcome 9  (>V2,<=V4)
210=Decision tree response, version 2: outcome 10 (>V3,<=V4)
211=Decision tree response, version 2: outcome 11 (*,<=V1)  
212=Decision tree response, version 2: outcome 14 (V1,V2)   
213=Decision tree response, version 2: outcome 15 (>V1,*)   
214=Decision tree response, version 2: outcome 16 (>V1,*)   
215=Decision tree response, version 2: outcome 17 (>V2,<=V3)
216=Decision tree response, version 2: outcome 18 (>V2,*)   
217=Decision tree response, version 2: outcome 19 (>V2,*)   
218=Decision tree response, version 2: outcome 20 (>V3,*)   
219=Decision tree response, version 2: outcome 22 (>V4,<=V5)
220=Decision tree response, version 2: outcome 23 (>V4,*)   
221=Decision tree response, version 2: outcome 24 (>V4,*)   
222=Decision tree response, version 2: outcome 25 (>V5,<=V6)
223=Decision tree response, version 2: outcome 26 (>V5,*)   
224=Decision tree response, version 2: outcome 27 (>V5,*)   
225=Decision tree response, version 2: outcome 28 (>V6,<=V7)
226=Decision tree response, version 2: outcome 29 (>V6,*)   
227=Decision tree response, version 2: outcome 30 (>V6,*)   
228=Decision tree response, version 2: outcome 31 (>V7,*)   

301=Decision tree response, version 3: outcome 1  (*,<=V1)     
302=Decision tree response, version 3: outcome 2  (*,<=V4)     
303=Decision tree response, version 3: outcome 3  (*,<=V4)     
304=Decision tree response, version 3: outcome 4  (>V1,<=V2)
305=Decision tree response, version 3: outcome 5  (>V1,<=V4)   
306=Decision tree response, version 3: outcome 6  (>V1,<=V4)   
307=Decision tree response, version 3: outcome 7  (>V2,<=V3)   
308=Decision tree response, version 3: outcome 8  (>V2,<=V4)   
309=Decision tree response, version 3: outcome 9  (>V2,<=V4)   
310=Decision tree response, version 3: outcome 10 (>V3,<=V4)  
311=Decision tree response, version 3: outcome 11 (*,<=V1)    
312=Decision tree response, version 3: outcome 14 (V1,V2)     
313=Decision tree response, version 3: outcome 15 (>V1,*)     
314=Decision tree response, version 3: outcome 16 (>V1,*)     
315=Decision tree response, version 3: outcome 17 (>V2,<=V3)  
316=Decision tree response, version 3: outcome 18 (>V2,*)     
317=Decision tree response, version 3: outcome 19 (>V2,*)     
318=Decision tree response, version 3: outcome 20 (>V3,*)     
319=Decision tree response, version 3: outcome 22 (>V4,<=V5)  
320=Decision tree response, version 3: outcome 23 (>V4,*)     
321=Decision tree response, version 3: outcome 24 (>V4,*)     
322=Decision tree response, version 3: outcome 25 (>V5,<=V6)  
323=Decision tree response, version 3: outcome 26 (>V5,*)     
324=Decision tree response, version 3: outcome 27 (>V5,*)     
325=Decision tree response, version 3: outcome 28 (>V6,<=V7)  
326=Decision tree response, version 3: outcome 29 (>V6,*)     
327=Decision tree response, version 3: outcome 30 (>V6,*)     
328=Decision tree response, version 3: outcome 31 (>V7,*)     

401=Decision tree response, version 4: outcome 1  (*,<=V1)     
402=Decision tree response, version 4: outcome 2  (*,<=V4)     
403=Decision tree response, version 4: outcome 3  (*,<=V4)     
404=Decision tree response, version 4: outcome 4  (>V1,<=V2)
405=Decision tree response, version 4: outcome 5  (>V1,<=V4)   
406=Decision tree response, version 4: outcome 6  (>V1,<=V4)   
407=Decision tree response, version 4: outcome 7  (>V2,<=V3)   
408=Decision tree response, version 4: outcome 8  (>V2,<=V4)   
409=Decision tree response, version 4: outcome 9  (>V2,<=V4)   
410=Decision tree response, version 4: outcome 10 (>V3,<=V4)  
411=Decision tree response, version 4: outcome 11 (*,<=V1)    
412=Decision tree response, version 4: outcome 14 (V1,V2)     
413=Decision tree response, version 4: outcome 15 (>V1,*)     
414=Decision tree response, version 4: outcome 16 (>V1,*)     
415=Decision tree response, version 4: outcome 17 (>V2,<=V3)  
416=Decision tree response, version 4: outcome 18 (>V2,*)     
417=Decision tree response, version 4: outcome 19 (>V2,*)     
418=Decision tree response, version 4: outcome 20 (>V3,*)     
419=Decision tree response, version 4: outcome 22 (>V4,<=V5)  
420=Decision tree response, version 4: outcome 23 (>V4,*)     
421=Decision tree response, version 4: outcome 24 (>V4,*)     
422=Decision tree response, version 4: outcome 25 (>V5,<=V6)  
423=Decision tree response, version 4: outcome 26 (>V5,*)     
424=Decision tree response, version 4: outcome 27 (>V5,*)     
425=Decision tree response, version 4: outcome 28 (>V6,<=V7)  
426=Decision tree response, version 4: outcome 29 (>V6,*)     
427=Decision tree response, version 4: outcome 30 (>V6,*)     
428=Decision tree response, version 4: outcome 31 (>V7,*)     

501=Decision tree response, version 5: outcome 1  (*,<=V1)     
502=Decision tree response, version 5: outcome 2  (*,<=V4)     
503=Decision tree response, version 5: outcome 3  (*,<=V4)     
504=Decision tree response, version 5: outcome 4  (>V1,<=V2)
505=Decision tree response, version 5: outcome 5  (>V1,<=V4)   
506=Decision tree response, version 5: outcome 6  (>V1,<=V4)   
507=Decision tree response, version 5: outcome 7  (>V2,<=V3)   
508=Decision tree response, version 5: outcome 8  (>V2,<=V4)   
509=Decision tree response, version 5: outcome 9  (>V2,<=V4)   
510=Decision tree response, version 5: outcome 10 (>V3,<=V4)  
511=Decision tree response, version 5: outcome 11 (*,<=V1)    
512=Decision tree response, version 5: outcome 14 (V1,V2)     
513=Decision tree response, version 5: outcome 15 (>V1,*)     
514=Decision tree response, version 5: outcome 16 (>V1,*)     
515=Decision tree response, version 5: outcome 17 (>V2,<=V3)  
516=Decision tree response, version 5: outcome 18 (>V2,*)     
517=Decision tree response, version 5: outcome 19 (>V2,*)     
518=Decision tree response, version 5: outcome 20 (>V3,*)     
519=Decision tree response, version 5: outcome 22 (>V4,<=V5)  
520=Decision tree response, version 5: outcome 23 (>V4,*)     
521=Decision tree response, version 5: outcome 24 (>V4,*)     
522=Decision tree response, version 5: outcome 25 (>V5,<=V6)  
523=Decision tree response, version 5: outcome 26 (>V5,*)     
524=Decision tree response, version 5: outcome 27 (>V5,*)     
525=Decision tree response, version 5: outcome 28 (>V6,<=V7)  
526=Decision tree response, version 5: outcome 29 (>V6,*)     
527=Decision tree response, version 5: outcome 30 (>V6,*)     
528=Decision tree response, version 5: outcome 31 (>V7,*)     

601=Decision tree response, version 6: outcome 1  (*,<=V1)     
602=Decision tree response, version 6: outcome 2  (*,<=V4)     
603=Decision tree response, version 6: outcome 3  (*,<=V4)     
604=Decision tree response, version 6: outcome 4  (>V1,<=V2)
605=Decision tree response, version 6: outcome 5  (>V1,<=V4)   
606=Decision tree response, version 6: outcome 6  (>V1,<=V4)   
607=Decision tree response, version 6: outcome 7  (>V2,<=V3)   
608=Decision tree response, version 6: outcome 8  (>V2,<=V4)   
609=Decision tree response, version 6: outcome 9  (>V2,<=V4)   
610=Decision tree response, version 6: outcome 10 (>V3,<=V4)  
611=Decision tree response, version 6: outcome 11 (*,<=V1)    
612=Decision tree response, version 6: outcome 14 (V1,V2)     
613=Decision tree response, version 6: outcome 15 (>V1,*)     
614=Decision tree response, version 6: outcome 16 (>V1,*)     
615=Decision tree response, version 6: outcome 17 (>V2,<=V3)  
616=Decision tree response, version 6: outcome 18 (>V2,*)     
617=Decision tree response, version 6: outcome 19 (>V2,*)     
618=Decision tree response, version 6: outcome 20 (>V3,*)     
619=Decision tree response, version 6: outcome 22 (>V4,<=V5)  
620=Decision tree response, version 6: outcome 23 (>V4,*)     
621=Decision tree response, version 6: outcome 24 (>V4,*)     
622=Decision tree response, version 6: outcome 25 (>V5,<=V6)  
623=Decision tree response, version 6: outcome 26 (>V5,*)     
624=Decision tree response, version 6: outcome 27 (>V5,*)     
625=Decision tree response, version 6: outcome 28 (>V6,<=V7)  
626=Decision tree response, version 6: outcome 29 (>V6,*)     
627=Decision tree response, version 6: outcome 30 (>V6,*)     
628=Decision tree response, version 6: outcome 31 (>V7,*)     

701=Decision tree response, version 7: outcome 1  (*,<=V1)
702=Decision tree response, version 7: outcome 2  (*,<=V4)     
703=Decision tree response, version 7: outcome 3  (*,<=V4)     
704=Decision tree response, version 7: outcome 4  (>V1,<=V2)
705=Decision tree response, version 7: outcome 5  (>V1,<=V4)   
706=Decision tree response, version 7: outcome 6  (>V1,<=V4)   
707=Decision tree response, version 7: outcome 7  (>V2,<=V3)   
708=Decision tree response, version 7: outcome 8  (>V2,<=V4)   
709=Decision tree response, version 7: outcome 9  (>V2,<=V4)   
710=Decision tree response, version 7: outcome 10 (>V3,<=V4)  
711=Decision tree response, version 7: outcome 11 (*,<=V1)    
712=Decision tree response, version 7: outcome 14 (V1,V2)     
713=Decision tree response, version 7: outcome 15 (>V1,*)     
714=Decision tree response, version 7: outcome 16 (>V1,*)     
715=Decision tree response, version 7: outcome 17 (>V2,<=V3)  
716=Decision tree response, version 7: outcome 18 (>V2,*)     
717=Decision tree response, version 7: outcome 19 (>V2,*)     
718=Decision tree response, version 7: outcome 20 (>V3,*)     
719=Decision tree response, version 7: outcome 22 (>V4,<=V5)  
720=Decision tree response, version 7: outcome 23 (>V4,*)     
721=Decision tree response, version 7: outcome 24 (>V4,*)     
722=Decision tree response, version 7: outcome 25 (>V5,<=V6)  
723=Decision tree response, version 7: outcome 26 (>V5,*)     
724=Decision tree response, version 7: outcome 27 (>V5,*)     
725=Decision tree response, version 7: outcome 28 (>V6,<=V7)  
726=Decision tree response, version 7: outcome 29 (>V6,*)     
727=Decision tree response, version 7: outcome 30 (>V6,*)     
728=Decision tree response, version 7: outcome 31 (>V7,*)     

801=Decision tree response, version 8: outcome 1  (*,<=V1)     
802=Decision tree response, version 8: outcome 2  (*,<=V4)     
803=Decision tree response, version 8: outcome 3  (*,<=V4)     
804=Decision tree response, version 8: outcome 4  (>V1,<=V2)
805=Decision tree response, version 8: outcome 5  (>V1,<=V4)   
806=Decision tree response, version 8: outcome 6  (>V1,<=V4)   
807=Decision tree response, version 8: outcome 7  (>V2,<=V3)   
808=Decision tree response, version 8: outcome 8  (>V2,<=V4)   
809=Decision tree response, version 8: outcome 9  (>V2,<=V4)   
810=Decision tree response, version 8: outcome 10 (>V3,<=V4)  
811=Decision tree response, version 8: outcome 11 (*,<=V1)    
812=Decision tree response, version 8: outcome 14 (V1,V2)     
813=Decision tree response, version 8: outcome 15 (>V1,*)     
814=Decision tree response, version 8: outcome 16 (>V1,*)     
815=Decision tree response, version 8: outcome 17 (>V2,<=V3)  
816=Decision tree response, version 8: outcome 18 (>V2,*)     
817=Decision tree response, version 8: outcome 19 (>V2,*)     
818=Decision tree response, version 8: outcome 20 (>V3,*)     
819=Decision tree response, version 8: outcome 22 (>V4,<=V5)  
820=Decision tree response, version 8: outcome 23 (>V4,*)     
821=Decision tree response, version 8: outcome 24 (>V4,*)     
822=Decision tree response, version 8: outcome 25 (>V5,<=V6)  
823=Decision tree response, version 8: outcome 26 (>V5,*)     
824=Decision tree response, version 8: outcome 27 (>V5,*)     
825=Decision tree response, version 8: outcome 28 (>V6,<=V7)  
826=Decision tree response, version 8: outcome 29 (>V6,*)     
827=Decision tree response, version 8: outcome 30 (>V6,*)     
828=Decision tree response, version 8: outcome 31 (>V7,*)
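
A hypothetical decoder for the positive decision-tree J-codes (the n01-n28 pattern listed above) illustrates the scheme: the hundreds digit gives the tree version, the last two digits index the 28 outcomes that yield bounds (the no-bound outcomes 12, 13, and 21 are coded elsewhere), and a .5 fraction flags an original "don't know" response. The function name and return structure are invented for this example.

```python
# Outcomes that yield bounding information, in J-code order:
# 1-11, then 14-20, then 22-31 (12, 13, and 21 give no bounds).
BOUNDED_OUTCOMES = list(range(1, 12)) + list(range(14, 21)) + list(range(22, 32))

def decode_tree_jcode(jcode):
    """Return (version, outcome, was_dk) for a positive-range
    decision-tree J-code such as 119 or 212.5."""
    was_dk = (jcode % 1 == 0.5)       # .5 added when original answer was DK
    version, index = divmod(int(jcode), 100)
    return version, BOUNDED_OUTCOMES[index - 1], was_dk
```

For example, J-code 119 decodes to version 1, outcome 22, matching the listing above; 212.5 decodes to version 2, outcome 14, with the DK flag set.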

RANGE CARD RESPONSES FOR POSITIVE NUMBERS

901=Range card response via [F9]: range A.  $1 to $100                
902=Range card response via [F9]: range B.  $101 to $500              
903=Range card response via [F9]: range C.  $501 to $750              
904=Range card response via [F9]: range D.  $751 to $1,000            
905=Range card response via [F9]: range E.  $1,001 to $2,500          
906=Range card response via [F9]: range F.  $2,501 to $5,000          
907=Range card response via [F9]: range G.  $5,001 to $7,500          
908=Range card response via [F9]: range H.  $7,501 to $10,000         
909=Range card response via [F9]: range I.  $10,001 to $25,000        
910=Range card response via [F9]: range J.  $25,001 to $50,000        
911=Range card response via [F9]: range K.  $50,001 to $75,000        
912=Range card response via [F9]: range L.  $75,001 to $100,000       
913=Range card response via [F9]: range M.  $100,001 to $250,000      
914=Range card response via [F9]: range N.  $250,001 to $1,000,000    
915=Range card response via [F9]: range O.  $1,000,001 to $5,000,000  
916=Range card response via [F9]: range P.  $5,000,001 to $10,000,000 
917=Range card response via [F9]: range Q.  $10,000,001 to $25,000,000
918=Range card response via [F9]: range R.  $25,000,001 to $50,000,000
919=Range card response via [F9]: range S.  $50,000,001 to $100,000,000
920=Range card response via [F9]: range T.  More than $100,000,000

921=Range card response via DKDOL: range A.  $1 to $100
922=Range card response via DKDOL: range B.  $101 to $500
923=Range card response via DKDOL: range C.  $501 to $750
924=Range card response via DKDOL: range D.  $751 to $1,000
925=Range card response via DKDOL: range E.  $1,001 to $2,500
926=Range card response via DKDOL: range F.  $2,501 to $5,000
927=Range card response via DKDOL: range G.  $5,001 to $7,500
928=Range card response via DKDOL: range H.  $7,501 to $10,000
929=Range card response via DKDOL: range I.  $10,001 to $25,000
930=Range card response via DKDOL: range J.  $25,001 to $50,000
931=Range card response via DKDOL: range K.  $50,001 to $75,000
932=Range card response via DKDOL: range L.  $75,001 to $100,000
933=Range card response via DKDOL: range M.  $100,001 to $250,000
934=Range card response via DKDOL: range N.  $250,001 to $1,000,000
935=Range card response via DKDOL: range O.  $1,000,001 to $5,000,000
936=Range card response via DKDOL: range P.  $5,000,001 to $10,000,000
937=Range card response via DKDOL: range Q.  $10,000,001 to $25,000,000
938=Range card response via DKDOL: range R.  $25,000,001 to $50,000,000
939=Range card response via DKDOL: range S.  $50,000,001 to $100,000,000
940=Range card response via DKDOL: range T.  More than $100,000,000


RESPONDENT-PROVIDED DOLLAR RANGE FOR POSITIVE NUMBERS

941=Upper and lower bounds given
942=Upper bound given, lower bound missing
943=Lower bound given, upper bound missing


INTERVIEW COMMENT INDICATES THAT RANGES ARE NEGATIVE

DECISION TREE RESPONSES THAT RESULTED IN A BOUND FOR NEGATIVE NUMBERS

(NOTE: for decision tree codes, responses that resulted in no usable
bounding information are collected separately below)
151=Decision tree response, version 1: outcome 1 (negative value)
152=Decision tree response, version 1: outcome 2 (negative value)
153=Decision tree response, version 1: outcome 3 (negative value)
154=Decision tree response, version 1: outcome 4 (negative value)
155=Decision tree response, version 1: outcome 5 (negative value)
156=Decision tree response, version 1: outcome 6 (negative value)
157=Decision tree response, version 1: outcome 7 (negative value)
158=Decision tree response, version 1: outcome 8 (negative value)
159=Decision tree response, version 1: outcome 9 (negative value)
160=Decision tree response, version 1: outcome 10 (negative value)
161=Decision tree response, version 1: outcome 11 (negative value)
162=Decision tree response, version 1: outcome 14 (negative value)
163=Decision tree response, version 1: outcome 15 (negative value)
164=Decision tree response, version 1: outcome 16 (negative value)
165=Decision tree response, version 1: outcome 17 (negative value)
166=Decision tree response, version 1: outcome 18 (negative value)
167=Decision tree response, version 1: outcome 19 (negative value)
168=Decision tree response, version 1: outcome 20 (negative value)
169=Decision tree response, version 1: outcome 22 (negative value)
170=Decision tree response, version 1: outcome 23 (negative value)
171=Decision tree response, version 1: outcome 24 (negative value)
172=Decision tree response, version 1: outcome 25 (negative value)
173=Decision tree response, version 1: outcome 26 (negative value)
174=Decision tree response, version 1: outcome 27 (negative value)
175=Decision tree response, version 1: outcome 28 (negative value)
176=Decision tree response, version 1: outcome 29 (negative value)
177=Decision tree response, version 1: outcome 30 (negative value)
178=Decision tree response, version 1: outcome 31 (negative value)

251=Decision tree response, version 2: outcome 1 (negative value)
252=Decision tree response, version 2: outcome 2 (negative value)
253=Decision tree response, version 2: outcome 3 (negative value)
254=Decision tree response, version 2: outcome 4 (negative value)
255=Decision tree response, version 2: outcome 5 (negative value)
256=Decision tree response, version 2: outcome 6 (negative value)
257=Decision tree response, version 2: outcome 7 (negative value)
258=Decision tree response, version 2: outcome 8 (negative value)
259=Decision tree response, version 2: outcome 9 (negative value)
260=Decision tree response, version 2: outcome 10 (negative value)
261=Decision tree response, version 2: outcome 11 (negative value)
262=Decision tree response, version 2: outcome 14 (negative value)
263=Decision tree response, version 2: outcome 15 (negative value)
264=Decision tree response, version 2: outcome 16 (negative value)
265=Decision tree response, version 2: outcome 17 (negative value)
266=Decision tree response, version 2: outcome 18 (negative value)
267=Decision tree response, version 2: outcome 19 (negative value)
268=Decision tree response, version 2: outcome 20 (negative value)
269=Decision tree response, version 2: outcome 22 (negative value)
270=Decision tree response, version 2: outcome 23 (negative value)
271=Decision tree response, version 2: outcome 24 (negative value)
272=Decision tree response, version 2: outcome 25 (negative value)
273=Decision tree response, version 2: outcome 26 (negative value)
274=Decision tree response, version 2: outcome 27 (negative value)
275=Decision tree response, version 2: outcome 28 (negative value)
276=Decision tree response, version 2: outcome 29 (negative value)
277=Decision tree response, version 2: outcome 30 (negative value)
278=Decision tree response, version 2: outcome 31 (negative value)

351=Decision tree response, version 3: outcome 1 (negative value)
352=Decision tree response, version 3: outcome 2 (negative value)
353=Decision tree response, version 3: outcome 3 (negative value)
354=Decision tree response, version 3: outcome 4 (negative value)
355=Decision tree response, version 3: outcome 5 (negative value)
356=Decision tree response, version 3: outcome 6 (negative value)
357=Decision tree response, version 3: outcome 7 (negative value)
358=Decision tree response, version 3: outcome 8 (negative value)
359=Decision tree response, version 3: outcome 9 (negative value)
360=Decision tree response, version 3: outcome 10 (negative value)
361=Decision tree response, version 3: outcome 11 (negative value)
362=Decision tree response, version 3: outcome 14 (negative value)
363=Decision tree response, version 3: outcome 15 (negative value)
364=Decision tree response, version 3: outcome 16 (negative value)
365=Decision tree response, version 3: outcome 17 (negative value)
366=Decision tree response, version 3: outcome 18 (negative value)
367=Decision tree response, version 3: outcome 19 (negative value)
368=Decision tree response, version 3: outcome 20 (negative value)
369=Decision tree response, version 3: outcome 22 (negative value)
370=Decision tree response, version 3: outcome 23 (negative value)
371=Decision tree response, version 3: outcome 24 (negative value)
372=Decision tree response, version 3: outcome 25 (negative value)
373=Decision tree response, version 3: outcome 26 (negative value)
374=Decision tree response, version 3: outcome 27 (negative value)
375=Decision tree response, version 3: outcome 28 (negative value)
376=Decision tree response, version 3: outcome 29 (negative value)
377=Decision tree response, version 3: outcome 30 (negative value)
378=Decision tree response, version 3: outcome 31 (negative value)

451=Decision tree response, version 4: outcome 1 (negative value)
452=Decision tree response, version 4: outcome 2 (negative value)
453=Decision tree response, version 4: outcome 3 (negative value)
454=Decision tree response, version 4: outcome 4 (negative value)
455=Decision tree response, version 4: outcome 5 (negative value)
456=Decision tree response, version 4: outcome 6 (negative value)
457=Decision tree response, version 4: outcome 7 (negative value)
458=Decision tree response, version 4: outcome 8 (negative value)
459=Decision tree response, version 4: outcome 9 (negative value)
460=Decision tree response, version 4: outcome 10 (negative value)
461=Decision tree response, version 4: outcome 11 (negative value)
462=Decision tree response, version 4: outcome 14 (negative value)
463=Decision tree response, version 4: outcome 15 (negative value)
464=Decision tree response, version 4: outcome 16 (negative value)
465=Decision tree response, version 4: outcome 17 (negative value)
466=Decision tree response, version 4: outcome 18 (negative value)
467=Decision tree response, version 4: outcome 19 (negative value)
468=Decision tree response, version 4: outcome 20 (negative value)
469=Decision tree response, version 4: outcome 22 (negative value)
470=Decision tree response, version 4: outcome 23 (negative value)
471=Decision tree response, version 4: outcome 24 (negative value)
472=Decision tree response, version 4: outcome 25 (negative value)
473=Decision tree response, version 4: outcome 26 (negative value)
474=Decision tree response, version 4: outcome 27 (negative value)
475=Decision tree response, version 4: outcome 28 (negative value)
476=Decision tree response, version 4: outcome 29 (negative value)
477=Decision tree response, version 4: outcome 30 (negative value)
478=Decision tree response, version 4: outcome 31 (negative value)

551=Decision tree response, version 5: outcome 1 (negative value)
552=Decision tree response, version 5: outcome 2 (negative value)
553=Decision tree response, version 5: outcome 3 (negative value)
554=Decision tree response, version 5: outcome 4 (negative value)
555=Decision tree response, version 5: outcome 5 (negative value)
556=Decision tree response, version 5: outcome 6 (negative value)
557=Decision tree response, version 5: outcome 7 (negative value)
558=Decision tree response, version 5: outcome 8 (negative value)
559=Decision tree response, version 5: outcome 9 (negative value)
560=Decision tree response, version 5: outcome 10 (negative value)
561=Decision tree response, version 5: outcome 11 (negative value)
562=Decision tree response, version 5: outcome 14 (negative value)
563=Decision tree response, version 5: outcome 15 (negative value)
564=Decision tree response, version 5: outcome 16 (negative value)
565=Decision tree response, version 5: outcome 17 (negative value)
566=Decision tree response, version 5: outcome 18 (negative value)
567=Decision tree response, version 5: outcome 19 (negative value)
568=Decision tree response, version 5: outcome 20 (negative value)
569=Decision tree response, version 5: outcome 22 (negative value)
570=Decision tree response, version 5: outcome 23 (negative value)
571=Decision tree response, version 5: outcome 24 (negative value)
572=Decision tree response, version 5: outcome 25 (negative value)
573=Decision tree response, version 5: outcome 26 (negative value)
574=Decision tree response, version 5: outcome 27 (negative value)
575=Decision tree response, version 5: outcome 28 (negative value)
576=Decision tree response, version 5: outcome 29 (negative value)
577=Decision tree response, version 5: outcome 30 (negative value)
578=Decision tree response, version 5: outcome 31 (negative value)

651=Decision tree response, version 6: outcome 1 (negative value)
652=Decision tree response, version 6: outcome 2 (negative value)
653=Decision tree response, version 6: outcome 3 (negative value)
654=Decision tree response, version 6: outcome 4 (negative value)
655=Decision tree response, version 6: outcome 5 (negative value)
656=Decision tree response, version 6: outcome 6 (negative value)
657=Decision tree response, version 6: outcome 7 (negative value)
658=Decision tree response, version 6: outcome 8 (negative value)
659=Decision tree response, version 6: outcome 9 (negative value)
660=Decision tree response, version 6: outcome 10 (negative value)
661=Decision tree response, version 6: outcome 11 (negative value)
662=Decision tree response, version 6: outcome 14 (negative value)
663=Decision tree response, version 6: outcome 15 (negative value)
664=Decision tree response, version 6: outcome 16 (negative value)
665=Decision tree response, version 6: outcome 17 (negative value)
666=Decision tree response, version 6: outcome 18 (negative value)
667=Decision tree response, version 6: outcome 19 (negative value)
668=Decision tree response, version 6: outcome 20 (negative value)
669=Decision tree response, version 6: outcome 22 (negative value)
670=Decision tree response, version 6: outcome 23 (negative value)
671=Decision tree response, version 6: outcome 24 (negative value)
672=Decision tree response, version 6: outcome 25 (negative value)
673=Decision tree response, version 6: outcome 26 (negative value)
674=Decision tree response, version 6: outcome 27 (negative value)
675=Decision tree response, version 6: outcome 28 (negative value)
676=Decision tree response, version 6: outcome 29 (negative value)
677=Decision tree response, version 6: outcome 30 (negative value)
678=Decision tree response, version 6: outcome 31 (negative value)

751=Decision tree response, version 7: outcome 1 (negative value)
752=Decision tree response, version 7: outcome 2 (negative value)
753=Decision tree response, version 7: outcome 3 (negative value)
754=Decision tree response, version 7: outcome 4 (negative value)
755=Decision tree response, version 7: outcome 5 (negative value)
756=Decision tree response, version 7: outcome 6 (negative value)
757=Decision tree response, version 7: outcome 7 (negative value)
758=Decision tree response, version 7: outcome 8 (negative value)
759=Decision tree response, version 7: outcome 9 (negative value)
760=Decision tree response, version 7: outcome 10 (negative value)
761=Decision tree response, version 7: outcome 11 (negative value)
762=Decision tree response, version 7: outcome 14 (negative value)
763=Decision tree response, version 7: outcome 15 (negative value)
764=Decision tree response, version 7: outcome 16 (negative value)
765=Decision tree response, version 7: outcome 17 (negative value)
766=Decision tree response, version 7: outcome 18 (negative value)
767=Decision tree response, version 7: outcome 19 (negative value)
768=Decision tree response, version 7: outcome 20 (negative value)
769=Decision tree response, version 7: outcome 22 (negative value)
770=Decision tree response, version 7: outcome 23 (negative value)
771=Decision tree response, version 7: outcome 24 (negative value)
772=Decision tree response, version 7: outcome 25 (negative value)
773=Decision tree response, version 7: outcome 26 (negative value)
774=Decision tree response, version 7: outcome 27 (negative value)
775=Decision tree response, version 7: outcome 28 (negative value)
776=Decision tree response, version 7: outcome 29 (negative value)
777=Decision tree response, version 7: outcome 30 (negative value)
778=Decision tree response, version 7: outcome 31 (negative value)

851=Decision tree response, version 8: outcome 1 (negative value)
852=Decision tree response, version 8: outcome 2 (negative value)
853=Decision tree response, version 8: outcome 3 (negative value)
854=Decision tree response, version 8: outcome 4 (negative value)
855=Decision tree response, version 8: outcome 5 (negative value)
856=Decision tree response, version 8: outcome 6 (negative value)
857=Decision tree response, version 8: outcome 7 (negative value)
858=Decision tree response, version 8: outcome 8 (negative value)
859=Decision tree response, version 8: outcome 9 (negative value)
860=Decision tree response, version 8: outcome 10 (negative value)
861=Decision tree response, version 8: outcome 11 (negative value)
862=Decision tree response, version 8: outcome 14 (negative value)
863=Decision tree response, version 8: outcome 15 (negative value)
864=Decision tree response, version 8: outcome 16 (negative value)
865=Decision tree response, version 8: outcome 17 (negative value)
866=Decision tree response, version 8: outcome 18 (negative value)
867=Decision tree response, version 8: outcome 19 (negative value)
868=Decision tree response, version 8: outcome 20 (negative value)
869=Decision tree response, version 8: outcome 22 (negative value)
870=Decision tree response, version 8: outcome 23 (negative value)
871=Decision tree response, version 8: outcome 24 (negative value)
872=Decision tree response, version 8: outcome 25 (negative value)
873=Decision tree response, version 8: outcome 26 (negative value)
874=Decision tree response, version 8: outcome 27 (negative value)
875=Decision tree response, version 8: outcome 28 (negative value)
876=Decision tree response, version 8: outcome 29 (negative value)
877=Decision tree response, version 8: outcome 30 (negative value)
878=Decision tree response, version 8: outcome 31 (negative value)

RANGE CARD RESPONSES FOR NEGATIVE NUMBERS

951=Range card response via [F9]: range A.  -$1 to -$100
952=Range card response via [F9]: range B.  -$101 to -$500
953=Range card response via [F9]: range C.  -$501 to -$750              
954=Range card response via [F9]: range D.  -$751 to -$1,000            
955=Range card response via [F9]: range E.  -$1,001 to -$2,500          
956=Range card response via [F9]: range F.  -$2,501 to -$5,000          
957=Range card response via [F9]: range G.  -$5,001 to -$7,500          
958=Range card response via [F9]: range H.  -$7,501 to -$10,000         
959=Range card response via [F9]: range I.  -$10,001 to -$25,000        
960=Range card response via [F9]: range J.  -$25,001 to -$50,000        
961=Range card response via [F9]: range K.  -$50,001 to -$75,000        
962=Range card response via [F9]: range L.  -$75,001 to -$100,000       
963=Range card response via [F9]: range M.  -$100,001 to -$250,000      
964=Range card response via [F9]: range N.  -$250,001 to -$1,000,000    
965=Range card response via [F9]: range O.  -$1,000,001 to -$5,000,000  
966=Range card response via [F9]: range P.  -$5,000,001 to -$10,000,000 
967=Range card response via [F9]: range Q.  -$10,000,001 to -$25,000,000
968=Range card response via [F9]: range R.  -$25,000,001 to -$50,000,000
969=Range card response via [F9]: range S.  -$50,000,001 to -$100,000,000
970=Range card response via [F9]: range T.  Less than -$100,000,000

971=Range card response via DKDOL: range A.  -$1 to -$100
972=Range card response via DKDOL: range B.  -$101 to -$500
973=Range card response via DKDOL: range C.  -$501 to -$750
974=Range card response via DKDOL: range D.  -$751 to -$1,000
975=Range card response via DKDOL: range E.  -$1,001 to -$2,500
976=Range card response via DKDOL: range F.  -$2,501 to -$5,000
977=Range card response via DKDOL: range G.  -$5,001 to -$7,500
978=Range card response via DKDOL: range H.  -$7,501 to -$10,000
979=Range card response via DKDOL: range I.  -$10,001 to -$25,000
980=Range card response via DKDOL: range J.  -$25,001 to -$50,000
981=Range card response via DKDOL: range K.  -$50,001 to -$75,000
982=Range card response via DKDOL: range L.  -$75,001 to -$100,000
983=Range card response via DKDOL: range M.  -$100,001 to -$250,000
984=Range card response via DKDOL: range N.  -$250,001 to -$1,000,000
985=Range card response via DKDOL: range O.  -$1,000,001 to -$5,000,000
986=Range card response via DKDOL: range P.  -$5,000,001 to -$10,000,000
987=Range card response via DKDOL: range Q.  -$10,000,001 to -$25,000,000
988=Range card response via DKDOL: range R.  -$25,000,001 to -$50,000,000
989=Range card response via DKDOL: range S.  -$50,000,001 to -$100,000,000
990=Range card response via DKDOL: range T.  Less than -$100,000,000
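
The negative range card codes follow a fixed pattern: 951-970 and 971-990 index the same range card letters A-T, differing only in whether the range card was reached via [F9] or via DKDOL. The following Python sketch (a hypothetical helper, not part of any SCF distribution) illustrates how such a code could be decoded into its dollar bounds:

```python
# Range card letters A-T with (lower, upper) dollar bounds for negative amounts.
# Range T is open-ended ("less than -$100,000,000"), marked here with None.
RANGE_CARD = {
    "A": (-100, -1),              "B": (-500, -101),
    "C": (-750, -501),            "D": (-1_000, -751),
    "E": (-2_500, -1_001),        "F": (-5_000, -2_501),
    "G": (-7_500, -5_001),        "H": (-10_000, -7_501),
    "I": (-25_000, -10_001),      "J": (-50_000, -25_001),
    "K": (-75_000, -50_001),      "L": (-100_000, -75_001),
    "M": (-250_000, -100_001),    "N": (-1_000_000, -250_001),
    "O": (-5_000_000, -1_000_001),"P": (-10_000_000, -5_000_001),
    "Q": (-25_000_000, -10_000_001), "R": (-50_000_000, -25_000_001),
    "S": (-100_000_000, -50_000_001), "T": (None, -100_000_001),
}

LETTERS = "ABCDEFGHIJKLMNOPQRST"

def negative_range_bounds(jcode):
    """Return (letter, lower, upper) for J-codes 951-990, else None."""
    if 951 <= jcode <= 970:        # range card reached via [F9]
        letter = LETTERS[jcode - 951]
    elif 971 <= jcode <= 990:      # range card reached via DKDOL
        letter = LETTERS[jcode - 971]
    else:
        return None
    lo, hi = RANGE_CARD[letter]
    return letter, lo, hi
```

For example, code 953 and code 973 both decode to range C, -$501 to -$750.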


RESPONDENT-PROVIDED DOLLAR RANGE FOR NEGATIVE NUMBERS

991=Upper and lower bounds given (negative amount)
992=Upper bound given, lower bound missing (negative amount)
993=Lower bound given, upper bound missing (negative amount)


OTHER RANGE RESPONSES THAT YIELDED NO NUMERICAL BOUNDING INFORMATION:
ALL VARIABLES WITH J-CODE VALUES BELOW THIS POINT INITIALLY CONTAIN
MISSING VALUE CODES AND ALL VARIABLES WITH J-CODE VALUES ABOVE THIS
POINT INITIALLY CONTAIN A RANGE MID-POINT OR OTHER SUCH VALUE

INTERVIEWER COMMENT INDICATING NEGATIVE NUMBER

994=Decision tree response, any version: outcome 21 (negative amount)
995=Decision tree response, any version: outcome 12 (negative amount)
996=Decision tree response, any version: outcome 13 (negative amount)

997=R reached the range card field by agreeing to give a range at a
    dollar field, volunteered to give a letter from the range card, and
    subsequently responded DK/Refused when asked for the letter from
    the range card (negative amount)
998=R answered DK/Refused to a dollar question, volunteered to give a
    letter from the range card, and subsequently responded DK/Refused
    when asked for the letter from the range card (negative amount)

999=R reached a field allowing both an upper bound and a lower bound
    for a dollar amount by volunteering to give a range, but
    subsequently responded DK/Ref to both upper and lower bound
    (negative amount)

1000=R answered DK to main $ question, and refused following
     question requesting a range from the range card (negative amount)
1001=R answered Ref to main $ question, and refused following
     question requesting a range from the range card (negative amount)

NO INDICATION OF NEGATIVE NUMBER


1094=Decision tree response, any version: outcome 21
1095=Decision tree response, any version: outcome 12
1096=Decision tree response, any version: outcome 13

1097=R reached the range card field by agreeing to give a range at a
    dollar field, volunteered to give a letter from the range card, and
    subsequently responded DK/Refused when asked for the letter from
    the range card
1098=R answered DK/Refused to a dollar question, volunteered to give a
    letter from the range card, and subsequently responded DK/Refused
    when asked for the letter from the range card

1099=R reached a field allowing both an upper bound and a lower bound
    for a dollar amount by volunteering to give a range, but
    subsequently responded DK/Ref to both upper and lower bound


1100=R answered DK to main $ question, and refused following
     question requesting a range from the range card
1101=R answered Ref to main $ question, and refused following
     question requesting a range from the range card

OTHER CODES FOR MISSING DATA

2050 = original response was DK.
2051 = original response was NA (includes interviewer errors,
     and missing data resulting from editing decisions).  Does not
     include data missing as a result of missing higher-order questions.
2052 = original response missing as a result of missing information for
     a higher-order question (typically a YES/NO cut question).  In
     this case, the higher-order question has been imputed in such
     a way as to render the response appropriate.  Also includes some
     other miscellaneous cases: (1) if a dollar variable was missing
     and DKDOL returned a DK/REF, the corresponding frequency is given
     a missing value code equal to that of the dollar field; (2)
     similarly, for clusters of variables containing a dollar amount
     and percent options.
2053 = refused
2054 = some, DK how many (see B6).

2060 = unresolved data problem (none should remain in final dataset).

2079 = data missing because of questionnaire error, or data not collected.
2080 = recode variable, missing because data not collected for
       sub-group, data to be imputed.
2081 = recode variable, some, but not all components originally missing.
2082 = recode variable, all components originally missing.

2097 = override of reported information with (at least partially)
       imputed data
2098 = override of reported/inap./other information with a missing value.

2099 = used for absent spouse for J104 or J105 when X104 or X105 < 0.

3000 = data missing because R broke off the interview (each of these
       cases reviewed to be sure that sufficient information is
       reported that the case can count as a "partial accepted as
       complete")

3001 = program, reporting or recording error.

3002 = temporary value given to variables containing illegal values.
       These will all be resolved in editing and converted to other
       existing codes.  (includes "range U")

3003 = illegal zeroes

3004 = uninformative/irrelevant verbatim response

3005 = data not available (applies to data from HEF)
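
For programmatic work, the "other codes for missing data" listed above can be collected in a simple lookup table. This is an illustrative helper (the table names are my own shorthand, not official SCF labels):

```python
# Shorthand labels for the SCF "other codes for missing data" J-values.
MISSING_DATA_CODES = {
    2050: "original response was DK",
    2051: "original response was NA",
    2052: "missing because a higher-order question was missing",
    2053: "refused",
    2054: "some, DK how many",
    2060: "unresolved data problem",
    2079: "questionnaire error, or data not collected",
    2080: "recode: data not collected for sub-group, to be imputed",
    2081: "recode: some, but not all, components originally missing",
    2082: "recode: all components originally missing",
    2097: "override of reported information with imputed data",
    2098: "override with a missing value",
    2099: "absent spouse, J104/J105 when X104/X105 < 0",
    3000: "R broke off the interview",
    3001: "program, reporting or recording error",
    3002: "temporary value for illegal values (includes range U)",
    3003: "illegal zeroes",
    3004: "uninformative/irrelevant verbatim response",
    3005: "data not available (HEF)",
}

def describe_missing(jcode):
    """Return a short label for a missing-data J-code."""
    return MISSING_DATA_CODES.get(jcode, "not a missing-data code")
```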


General instructions for J variable coding for recoded variables:
  When a recoded variable is taken directly from another single
    X-variable, it should have the same J-variable code.
  When a recoded variable may come from a single variable in the
    original X-variables, or as the result of a calculation based on
    some number of X-variables, it is important to distinguish the
    information content in the J-variables.  As noted above, when the
    value is taken directly, the J-variable should have exactly the
    same value as that for the X-variable's shadow J-variable.
    However, when some calculation is involved, this should be
    reflected in the J-variable -- codes 8, 2081, and 2082.
  When a recode cannot be computed because some part of the underlying
    information was not collected for some subset of cases, the
    recode's J-variable should be coded 9 or 2080.
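
The rules above can be sketched as a small decision function. This is only an illustration of the stated logic, assuming that a shadow J-code of 2050 or above signals an originally missing component (the actual editing code is not part of the public release):

```python
def recode_jcode(source_jcodes, calculated, collected=True):
    """Choose the J-code for a recoded variable.

    source_jcodes -- shadow J-codes of the underlying X-variable(s)
    calculated    -- True if the recode is computed, not copied directly
    collected     -- False if some needed input was not collected
    """
    if not collected:
        return 2080                # (or 9) part of the input never collected
    if not calculated:
        return source_jcodes[0]    # taken directly: reuse the shadow J-code
    # Assumption for illustration: codes >= 2050 mark originally missing data.
    missing = [j for j in source_jcodes if j >= 2050]
    if not missing:
        return 8                   # calculation from fully reported data
    if len(missing) < len(source_jcodes):
        return 2081                # some components originally missing
    return 2082                    # all components originally missing
```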


GRIDS

Some sets of questions have a natural iterative pattern. For example, the survey asks for detailed information on up to the first six checking accounts owned by the PEU, and summary information is collected about all remaining accounts. The detailed questions are the same for each account. In past interviews done with paper and pencil, some respondents resisted answering all the detailed questions and were willing to provide only summary information. Typically, interviewers recorded the summary information in the margins of the questionnaire, and editors allocated the data to the skipped questions according to a set of fixed rules. To allow for a variety of respondent-interviewer interactions in the SCF CAPI program, the grid questions were organized to provide a systematic way of collecting summary information. We refer to the associated summary variables as "mop-up variables." Past surveys also indicated that some respondents recalled additional instances of items once they began answering questions in a grid, but interviewers often did not revise the originally reported number. The CAPI procedures were set up to allow for this possibility as well.

Consider first a respondent who gives a non-missing response to the question that asks for the number of items of the type to be queried in the grid. The interviewer would ask the respondent the first set of detailed questions on the item. Then, the interviewer would be confronted with a question (not to be read to the respondent):

INTERVIEWER: CAN R PROVIDE INFORMATION ABOUT ANOTHER xxxx?

The intention of this question was to allow the interviewer to deal with a potentially hostile respondent and branch immediately to the mop-up questions. If the respondent was cooperative, the interviewer entered a YES response and continued through an identical procedure for each iteration until either the reported number of items was exhausted or the maximum number of detailed iterations was reached, at which point the mop-up question was asked to obtain summary information on all remaining items. If the respondent reported a number of items less than the maximum number about which the detailed questions are asked, the following question was asked at the end of the final iteration:

Do you (or your family living here) have another xxxx?

A YES response here indicates that the respondent recalled an additional instance in the process of answering the detailed questions. A respondent could continue to "add" iterations until the maximum number of iterations is reached and the mop-up questions are asked.

Another possibility is that a respondent may either not know or be unwilling to tell the number of instances of an item. Because it is known that there is at least one such instance, the first set of detailed questions is asked. Then the respondent is asked:

Do you (or your family living here) have another xxxx?

The questioning then proceeds exactly as it would for a respondent who recalled additional instances.

In processing the data, several steps were taken. Sometimes, interviewers sensed an unwillingness to answer additional questions even though only one more instance remained. In such cases, the mop-up data were mapped into the grid. The fact of this movement is not directly recorded in the J-variables for such cases, though the movement can be deduced from the patterns of J-variables of other questions within an iteration that do not have mop-up equivalents. When respondents added instances, the originally reported number was updated and stored in the customary SCF variable number. The originally reported number of instances has been retained in the dataset since such information cannot be recovered in any other way from the data made available. When summary information was given by respondents who broke off their responses in a grid prematurely, that information was used to bound the imputations of the detailed data. Data items that have an associated J-variable with a value of 90 are ones where a complete response was given in the parallel mop-up variable, and those with a J-variable of 91 are ones where a range response was given in the parallel mop-up variable. There are some complicated mixed cases where a respondent did not give a non-missing value for the number of instances, but was willing to provide non-missing mop-up data. Though tedious, it is possible to deduce this information from the data provided.
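
The grid flow described above can be sketched schematically. This is a simplified reconstruction, not the actual CAPI program, and all function names here are hypothetical placeholders:

```python
MAX_ITER = 6  # e.g., up to six checking accounts get detailed questions

def run_grid(n_reported, can_continue, has_another, ask_detail, ask_mopup):
    """Schematic grid logic.

    n_reported   -- number of items initially reported (None if DK/Ref)
    can_continue -- callback: interviewer judges R will answer another iteration
    has_another  -- callback: ask R "Do you have another xxxx?"
    ask_detail   -- callback: one iteration of detailed questions
    ask_mopup    -- callback: summary (mop-up) questions
    """
    details, i = [], 0
    while True:
        details.append(ask_detail(i))
        i += 1
        if i >= MAX_ITER:
            ask_mopup()              # summary info on all remaining items
            break
        if n_reported is not None and i < n_reported:
            if not can_continue():   # reluctant R: branch straight to mop-up
                ask_mopup()
                break
        else:
            # reported count exhausted (or unknown): R may "add" an instance
            if not has_another():
                break
    return details
```

A respondent who reports two accounts and recalls no more produces two detail iterations and no mop-up call; a respondent reporting eight produces six detail iterations followed by the mop-up questions.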



ACKNOWLEDGMENTS

The SCF is a large project that involves intense commitment by many people. At the Federal Reserve, the main project staff involved with the creation of the data has included Gerhard Fries, Arthur Kennickell, Kevin Moore, Martha Starr-McCluer, Amy Stubbendick, Annika Sunden, Brian Surette, and Diane Whitmore. Robert Dietz, an unpaid FRB intern, made valuable contributions, including looking up the value of every automobile of every respondent in a used car guide. Important support has come from the FRB officer corps, particularly Edward Ettin and Myron Kwast, who have invested their credibility in making the project possible. In addition to funding the survey, the individual members of the Board of Governors have actively encouraged the use of the survey. Support from the Statistics of Income Division at the IRS has been essential. Louise Woodburn has been deeply involved in the statistical design, weighting and disclosure review of the survey. Barry Johnson has been tireless in his work to obtain the necessary data for the selection of the list sample, in his work on the disclosure review, and in sharing the insights he has gained in working with the IRS estate tax data. Dan Skelly, the director of SOI, and James Nunns at the Office of Tax Analysis at the Department of the Treasury have encouraged us through many difficult periods. At the National Opinion Research Center at the University of Chicago, very many people have touched the project in important ways. The project directors for the 1995 SCF at NORC were Nick Holt and Alisu Schoua-Guesberg. Geoff Walker and Val Cooke developed the software used for CAPI. At the time the program was written, it was the largest such program ever constructed, and it is only through their creativity and dedication that the ultimate product was successful.
Deep thanks are due to many others at the NORC central office, including Phil DePoy, Martin Frankel, Karen Grigorian, Thomas Harris, Nick Holt, Felice Levin, Judy Lindmark, Shawn Marsh, Jim Rogers, William Tillford, Suzanne Turner, and Robert Wagers. I apologize to the many other people whose names I cannot remember. One of the greatest strengths of NORC is its field staff. The managers in the field for the 1995 SCF included Lee Brandon, Idamae Downs, Shirley Flood, Lynn Gallagher, Rececca Harrass, Janice Hosier, Patricia Johnson, Alice Lavka, Myrna Luncy, Susan Miller, Nancy Mutz, Sandra Pitzer, Norma Smith, Barbara Watt, Linda Wiedmer, and Jacqueline Winchell. These are very creative and dedicated people. Pat Phillips deserves special recognition for her superb management of field activities. The interviewers, some of whom may prefer not to be named, were the people who did the hardest work. In 1995, SCF interviewers included many experienced interviewers--some on earlier SCFs--and others from a wide variety of backgrounds (there were business people, musicians, painters, scholars, writers, and many others). They deserve the deep gratitude of all users of the SCF data. The only people who gave more than the interviewers were the survey respondents, who are necessarily anonymous. May every user remember that some person gave his or her time to create the data that make their analysis possible. No set of acknowledgments would be complete without mentioning three people. Fritz Scheuren, formerly director of Statistics of Income at the IRS, has provided early and continuing encouragement, insights, and support for the SCF project. Bob Avery, my predecessor as director of the SCF, is a colleague who not only created the atmosphere that made the current development of the project possible, but who continues to contribute as a sounding board for our ideas. Finally, Dorothy S.
Projector, project director of the Federal Reserve's landmark 1962-63 Survey of Financial Characteristics of Consumers, set a very high standard for all future work on household wealth surveys.


Last update: February 29, 2000, 5:00pm