Finance and Economics Discussion Series: 2007-67

Continuous Time Extraction of a Nonstationary Signal with Illustrations in Continuous Low-pass and Band-pass Filtering

Keywords: Continuous Time Processes, Cycles, Hodrick-Prescott Filter, Linear Filtering, Signal Extraction, Turning points

Abstract:

This paper sets out the theoretical foundations for continuous-time signal extraction in econometrics. Continuous-time modeling gives an effective strategy for treating stock and flow data, irregularly spaced data, and changing frequency of observation. We rigorously derive the optimal continuous-lag filter when the signal component is nonstationary, and provide several illustrations, including a new class of continuous-lag Butterworth filters for trend and cycle estimation.

Disclaimer:

This report is released to inform interested parties of research and to encourage discussion. The views expressed on statistical issues are those of the authors and not necessarily those of the U.S. Census Bureau or of the Federal Reserve Board.

1 Introduction

This paper concentrates on signal extraction in continuous time. The goal is to set the stage for the development of coherent discrete filters in various applications. Thus, we start by setting up a fundamental signal extraction problem and by investigating the properties of the optimal continuous-lag filters. This part of the frequency and time domain analysis is done independently of sampling type and interval, and so may be seen as fundamental to the dynamics of the problem.

A key result of the paper is a proof of the signal extraction formula for nonstationary models; this is crucial for many applications in economics. In discrete-time, methods for nonstationary series rely on theoretical foundations, as set out in Bell (1984). In continuous-time, Whittle (1983) sketches an argument for stationary models; a satisfactory proof of the signal extraction formula in this case is provided by Kailath, Sayed, and Hassibi (2000). Whittle (1983) also provides results for the nonstationary case, but omits proof, and in particular, fails to consider the initial value assumptions that are central to the problem.

In this paper, we extend the proof to the case of a nonstationary signal, that is, a signal integrated of order m, where m is a positive integer. We also treat the case of a white noise irregular, which is frequently used in standard models in continuous-time econometrics. Since continuous-time white noise is essentially the first derivative of Brownian motion (which is nowhere differentiable), this requires a careful mathematical treatment to ensure that the signal extraction problem is well-defined.

A second result is the development of a class of continuous-lag Butterworth filters for economic data. In particular, we introduce low-pass and band-pass filters in continuous-time that are analogous to the filters derived by Harvey and Trimbur (2003) for the corresponding discrete-time models, and their properties are illustrated through plots of the continuous-time gain functions. One special case of interest is the derivation of a continuous-lag filter from the smooth trend model; this gives a continuous-time extension of the popular Hodrick-Prescott (HP) filter (Hodrick and Prescott, 1997). At the root of the model-based band-pass filter is a class of higher order cycles in continuous-time. This class generalizes the stochastic differential equation (SDE) model for a stochastic cycle developed in Harvey (1989) and Harvey and Stock (1993).

The study of business cycles has remained of interest to researchers and policymakers for some time. Some of the early work in continuous-time econometrics was geared toward this application; Kalecki (1935) and James and Belz (1936) used a model in the form of a differential-difference equation (DDE) to describe business cycle movements. The DDE form gives an alternative to the SDE form that is of some theoretical interest; see Chambers and McGarry (2002). Its usefulness for methodology seems, however, limited since the DDE admits no convenient representation in either the time or frequency domain. The SDE form, in contrast, has an intuitive structure. In introducing the class of higher order models, we derive analytical expressions for the spectral density; this gives a clear summary of the cyclical properties of the model.

Our formulation remains general so that, for instance, it includes the Continuous-Time Autoregressive Integrated Moving Average (CARIMA) processes of Brockwell and Marquardt (2005). These follow the SDE form and so can be handled analytically. Throughout the paper, examples are given to help explain the methodology.

In focusing on continuous-time foundations in this paper, we also note the clear practical motivations for this strategy. A continuous-time approach gives flexibility in a number of directions: in the treatment of stock and flow variables in economics, in working with missing data and with irregularly spaced data, and in handling a general frequency of observation. This last point is immediately practical; even when considering just a single economic variable, with different sampling frequencies (for instance, quarterly and annual), preserving consistency in discrete trend estimates requires a unified basis for filter design. See Bergstrom (1988, 1990) for further discussions of the advantages of continuous-time analysis.

The practical application of the continuous-time strategy is sketched at the end of this paper, and is set out in greater detail in a companion paper (McElroy and Trimbur, 2007); therein we examine how the optimal continuous-lag filter may be discretized to yield expressions for the discrete-time weights appropriate for data sampled under various conditions. The method is illustrated with a number of examples, including the continuous-time HP filter. In recent work, Ravn and Uhlig (2002), Maravall and del Rio (2001), and Harvey and Trimbur (2007) have investigated how to adapt the HP filter to monthly and annual series, given that the filter was originally designed for quarterly US GDP. We show how the continuous-time analogue of the HP filter is discretized to yield a set of consistent discrete filters, thereby solving the problem of adapting the filter.

Most previous methods rely on the discretization of the underlying continuous-time model, so that the analysis is done in discrete-time. Our approach instead uses the continuous-time formulation of filtering more directly. Thus, the method centers on the use of the underlying continuous-lag filters, on their properties, and on the transformations needed for discrete datasets.

The theoretical results derived here set the foundation for broader applications. For instance, a new class of low-pass and band-pass filters is presented in Section 4 of this paper. Further, in considering the signal extraction problem in continuous-time, we derive filters to measure the velocity and acceleration of a time series, which could be useful for the analysis of turning points.

The rest of the paper is organized as follows. Section 2 reviews continuous-time filtering, based on material from Priestley (1981), Hannan (1970), and Koopmans (1974). Section 3 sets out the signal extraction framework. In Section 4, examples are given for economic series; an extension of the standard HP filter is derived from the smooth trend model, and a general class of band-pass filters is presented. Extensions to the methodology are then described, specifically the conversion of continuous-lag filters to estimate growth rates and other characteristics of an underlying component. Section 5 discusses the application of the method to real series, and Section 6 concludes. Proofs are given in the Appendix.

2 Continuous-Time Processes and Filters

This section sets out the theoretical framework for the analysis of continuous-time signal processing and filtering. Much of the treatment follows Hannan (1970); also see Priestley (1981) and Koopmans (1974). Let X(t), for t ∈ ℝ (the set of real numbers), denote a real-valued time series that is measurable and square-integrable at each time. The process is weakly stationary by definition if it has constant mean (set to zero for simplicity) and autocovariance function given by

γ(h) = E[X(t) X(t + h)],  h ∈ ℝ.   (1)

Note that the autocovariances γ(h) are defined for the continuous range of lags h ∈ ℝ. Thus if X(t) is a Gaussian process, γ completely describes the dynamics of the stochastic process. A convenient model for stationary continuous-time processes that is analogous to moving averages in discrete time series is given by

X(t) = ∫ g(u) ξ(t − u) du,   (2)

where g is square integrable on ℝ, and ξ(t) is continuous-time white noise, written ξ(t) ~ WN(σ²). In this case, γ(h) = σ² ∫ g(u) g(u + h) du, where σ² denotes the white noise variance. If ξ(t) is Gaussian, then ξ(t) = dW(t)/dt, the derivative of a standard Wiener process. Though W(t) is nowhere differentiable, ξ(t) can be defined using the theory of Generalized Random Processes, as in Hannan (1970, p. 23). It is convenient to work with models expressed in terms of the disturbance ξ(t), because this makes it easy to see the connection with discrete models based on white noise disturbances.

As an example, Brockwell's (2001) Continuous-time Autoregressive Moving Average (CARMA) models can be written as

φ(D) X(t) = θ(D) ξ(t),

where φ is a polynomial of order p, θ is a polynomial of order q, and D is the derivative operator. The condition for stationarity is analogous to the one for a discrete autoregressive polynomial: the roots of the equation φ(z) = 0 must all have strictly negative real part. It can be shown (Brockwell and Marquardt, 2005) that X(t) following such a stationary model can be re-expressed in the form (2), for an appropriate kernel g.

Next, we define the continuous-time lag operator L via the equation

L^h X(t) = X(t − h)   (3)

for any h ∈ ℝ and for all times t. We denote the identity element L^0 by 1, just as in discrete time. Then a Continuous-Lag Filter is an operator Ψ(L) with associated weighting kernel g (an integrable function) such that

Ψ(L) = ∫ g(u) L^u du.   (4)

The effect of the filter on a process X(t) is

Ψ(L) X(t) = ∫ g(u) X(t − u) du.   (5)

The requirement of integrability for the function g is a mild condition that is sufficient for many problems. However, when the input process X(t) is nonintegrable over ℝ, an integrable g may become inadmissible as a kernel, i.e., it may fail to give a well-defined process as output. In such a case, we may need to assume that g is differentiable to a specified order, with integrable or square integrable derivatives.

This development parallels the discussion in Priestley (1981), where the filter is written as Γ(D), with Γ denoting the (two-sided) Laplace transform of the kernel g. As will be discussed below, we can make the identification L^u = e^(−uD), which effectively maps Priestley's formulation into (4).

2.1 Continuous-lag filters in the Frequency Domain

In analogy with the discrete-time case, the frequency response function is obtained by replacing L^u by the argument e^(−iλu):

ψ(λ) = ∫ g(u) e^(−iλu) du.   (6)

Denoting the continuous-time Fourier Transform by F, equation (6) can be written as ψ = F(g).

Example 1

Consider a Gaussian kernel g(u) = exp(−u²/(2σ²)) / (σ√(2π)). In this example, the inclusion of the normalizing constant means that the function integrates to one; since applying the filter tends to preserve the level of the process, it could be used as a simple trend estimator. The frequency response has the same form as the weighting kernel and is given by ψ(λ) = exp(−σ²λ²/2).
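As a quick numerical check of this example, the sketch below (in Python, with an assumed scale parameter sigma and grid settings of our choosing) integrates the Gaussian kernel against cos(λu) by the trapezoid rule and compares the result with the Gaussian-shaped response exp(−σ²λ²/2). The function names are ours, not the paper's.

```python
import math

def gauss_kernel(u, sigma=1.0):
    """Gaussian weighting kernel g(u); the normalizing constant makes it integrate to one."""
    return math.exp(-u * u / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def freq_response(lam, sigma=1.0, half_width=10.0, n=20000):
    """Trapezoid approximation of psi(lambda) = integral of g(u) exp(-i*lambda*u) du.

    g is symmetric, so the imaginary part vanishes and the integrand
    reduces to g(u) * cos(lambda * u)."""
    h = 2.0 * half_width / n
    total = 0.0
    for k in range(n + 1):
        u = -half_width + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * gauss_kernel(u, sigma) * math.cos(lam * u)
    return total * h

# The frequency response has the same Gaussian shape as the kernel:
# psi(lambda) = exp(-sigma^2 * lambda^2 / 2), so psi(0) = 1 (the level is preserved).
```

In particular, ψ(0) = 1 confirms that the filter passes the level of the process, consistent with its use as a simple trend estimator.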

The power spectrum of a continuous time process is the Fourier Transform of its autocovariance function γ:

f(λ) = ∫ γ(h) e^(−iλh) dh.   (7)

The gain function of a filter is the magnitude of the frequency response, namely

G(λ) = |ψ(λ)|.   (8)

As in discrete time series signal processing, passing a stationary input process through the filter results in an output process with spectrum multiplied by the squared gain; so the gain function gives information about how contributions to the variance at various frequencies are attenuated or accentuated by the filter. Note that, in contrast to the discrete case, where the domain is restricted to the interval [−π, π], the functions in (7) and (8) are defined over the entire real line. Given a candidate gain function, taking the inverse Fourier Transform in continuous time yields the associated weighting kernel:

g(u) = (1/2π) ∫ ψ(λ) e^(iλu) dλ.   (9)

This expression is well-defined for any integrable ψ. Integrability is a mild condition satisfied by nearly all filters of practical interest.

Example 2

Weighting kernels that decay exponentially on either side of the observation point have often been applied in smoothing trends; this pattern arises frequently in discrete model-based frameworks, e.g., Harvey and Trimbur (2003). Similarly, in the continuous time setting, a simple example of a trend estimator is the double exponential weighting pattern g(u) = (1/2) e^(−|u|). In this case, one can write Ψ(L) = (1/2) ∫ e^(−|u|) L^u du as a formal expression, and one can show using integral calculus that the Fourier transform has the same form as a Cauchy probability density function, namely ψ(λ) = 1/(1 + λ²). This means that the gain of the low-pass filter decays slowly, at the rate λ^(−2), as λ → ∞.
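The transform pair in this example can be verified numerically. The following sketch (illustrative grid settings of our choosing) approximates ψ(λ) = ∫ g(u) cos(λu) du for the double exponential kernel g(u) = (1/2) e^(−|u|) and compares it with the Cauchy-shaped response 1/(1 + λ²).

```python
import math

def double_exp_kernel(u):
    """Double exponential weighting kernel g(u) = 0.5 * exp(-|u|)."""
    return 0.5 * math.exp(-abs(u))

def freq_response(lam, half_width=40.0, n=80000):
    """Trapezoid approximation of psi(lambda) = integral of g(u) cos(lambda*u) du;
    the kernel is symmetric, so the sine part of the transform vanishes."""
    h = 2.0 * half_width / n
    total = 0.0
    for k in range(n + 1):
        u = -half_width + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * double_exp_kernel(u) * math.cos(lam * u)
    return total * h

# psi(lambda) = 1 / (1 + lambda^2): the Cauchy (Lorentzian) shape, which
# decays only at rate lambda^(-2), so the low-pass cutoff is gradual.
```

The slow quadratic decay of the response is visible in the comparison: even at λ = 3 the gain is still about 0.1, so high-frequency noise is damped only gradually.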

2.2 The Derivative Filter and Nonstationary Processes

In (3), the extension of the lag operator to the continuous-time framework is made explicit. In building models, we can treat L as an algebraic quantity, as in the discrete-time framework. The extension of the differencing operator used to define nonstationary models, namely the derivative operator D, is discussed in Hannan (1970, p. 55) and Koopmans (1974).

To define the mean-square differentiation operator D, consider the limit of measuring the displacement of a continuous-time process, per unit of time, over an arbitrarily small interval δ:

D X(t) = lim_(δ→0) [X(t) − X(t − δ)]/δ = lim_(δ→0) [(1 − L^δ)/δ] X(t).

The limits are interpreted to converge in mean square. Thus, we see that taking the derivative has the same effect as applying the continuous-lag filter (1 − L^δ)/δ in the limit as δ → 0. This holds for all mean-square differentiable processes X(t), implying D = lim_(δ→0) (1 − L^δ)/δ; note that Priestley (1981) derives the equivalent identification L^u = e^(−uD) via Taylor series arguments. This operator will be our main building block for nonstationary continuous-time processes. It will also be useful in thinking about rates of growth and rates of rates of growth, that is, the velocity and acceleration of a process, respectively. We refer to D as the derivative filter; taking powers yields higher order derivative filters. For instance, D² gives a measure of acceleration with respect to time. We note that the frequency response of D is iλ.

Standard discrete-time processes are written as difference equations, built on white noise disturbances. In analogy, continuous time processes can be written as differential equations, built on an extension of white noise to continuous-time. Thus a natural class of models is the Integrated Filtered Noise processes, which are given by

D^m X(t) = Ψ(L) ξ(t)   (10)

for some integrable kernel g and order of differentiation m. This class will be denoted IFN(m); it encompasses a wide variety of linear continuous-time models. As an example, Brockwell and Marquardt (2005) define the class of Continuous-time Autoregressive Integrated Moving Average (CARIMA) models as the solution to

φ(D) D^d X(t) = θ(D) ξ(t).   (11)

Thus, applying the derivative filter d times transforms X(t) into a stationary process. The autoregressive order p is the degree of the polynomial φ, and the moving average order q is the degree of the polynomial θ. The constraint q < p is necessary to ensure the process is well-defined; this ensures that the spectral density of D^d X(t) is an integrable function. This gives the CARMA(p, q) process. The original process is nonstationary and is said to be integrated of order d in the continuous-time sense. Now this can be put into IFN form: starting from (11), we can write (formally)

D^d X(t) = [θ(D)/φ(D)] ξ(t).   (12)

Using the definition of Ψ(L), it follows that D^d X(t) = Ψ(L) ξ(t) with frequency response ψ(λ) = θ(iλ)/φ(iλ). Deriving the kernel g requires an expression for the rational function θ(s)/φ(s) in terms of an integral over powers of e^(−s), namely ∫ g(u) e^(−us) du. Using the formulation of Priestley (1981), we see that CARIMA models can be equivalently expressed as IFN models where the kernel's Laplace transform is a rational function.

Example 3: Higher order stochastic cycles

We consider a general class of continuous-time stochastic cycles. These are indexed by a positive integer n that denotes the order of the model.

Denote the cyclical process by ψ_n(t), and let ψ_n*(t) represent an auxiliary process used in the construction of the model. Define Ψ_i(t) = (ψ_i(t), ψ_i*(t))'. An nth order stochastic cycle in continuous time is given by

dΨ_i(t) = A Ψ_i(t) dt + Ψ_(i−1)(t) dt,  i = 2, …, n,
dΨ_1(t) = A Ψ_1(t) dt + (dκ(t), dκ*(t))',   (13)

where ψ_i(t) and ψ_i*(t), for i = 1, …, n − 1, represent additional auxiliary processes. The coefficient matrix in (13) is

A = [ ln ρ    λ_c
     −λ_c    ln ρ ].

The parameter ρ is called the damping factor; it satisfies 0 < ρ ≤ 1. The stochastic variation in the cycle per unit time depends on the continuous-time variance parameter σ_κ². The parameter ρ controls the persistence of cyclical fluctuations, and its specific role depends on the order n. Generally, for higher orders, the model generates smoother dynamics for the cycle.

Since the parameter λ_c corresponds roughly to a peak in the spectrum, it indicates a central frequency of the cycle, and 2π/λ_c is an average period of oscillation. As λ_c is a frequency in continuous time, it can be any positive real number, though for macroeconomic data, business cycle theory will usually suggest a value in some intermediate range.

To construct a Gaussian cyclical process, the increment dκ(t) can be derived from Brownian motion, that is, dκ(t) = σ_κ dW(t) for a standard Wiener process W(t), and similarly for the auxiliary disturbance dκ*(t).

In analyzing cyclical behavior, it is natural to consider the frequency domain properties. To derive the spectra for various n, start by rewriting (13) as a recursive formula:

(D I₂ − A) Ψ_i(t) = Ψ_(i−1)(t),  i = 2, …, n,

with the initialization (D I₂ − A) Ψ_1(t) = (κ(t), κ*(t))', where A is the coefficient matrix in (13), I₂ is the identity, and κ, κ* denote the white noise disturbances. The solution is

Ψ_n(t) = (D I₂ − A)^(−n) (κ(t), κ*(t))',

so that the nth order cycle has a CARMA(2n, n) form. The power spectrum is

f_ψn(λ) = (σ_κ²/2) { [(λ − λ_c)² + (ln ρ)²]^(−n) + [(λ + λ_c)² + (ln ρ)²]^(−n) }.   (14)

One advantage of working with the cycle in continuous-time is the possibility of analyzing turning points instantaneously. The expected incremental change in the cycle, conditional on the current state, is

E[dψ_n(t)] = [ (ln ρ) ψ_n(t) + λ_c ψ_n*(t) + ψ_(n−1)(t) ] dt.   (15)

For n = 1, this reduces to

E[dψ_1(t)] = [ (ln ρ) ψ_1(t) + λ_c ψ_1*(t) ] dt.   (16)

Based on the smoothness of estimated cycles, the higher order models are likely to give a more reliable indication of turning points.

Expression (16) is similar to the one in Harvey (1989, p. 487). There is, however, a slight difference because the form in (13) is analogous to the 'Butterworth form' used in Harvey and Trimbur (2003). The alternative form in Harvey (1989) and in Harvey and Stock (1985) is analogous to the 'balanced form' also considered in Harvey and Trimbur (2003). The key advantage of the Butterworth form, as used here, is its convenience for the analysis of spectra and gain functions.

We have shown that the class of models in (13) is equivalent to CARMA processes whose parameters satisfy the conditions needed for periodic behavior. As an alternative, an openly specified CARMA model can be used to describe a general pattern of serial correlation; thus, a CAR(1) could be used to capture first-order autocorrelation in the noise component, for instance, due to temporary effects of weather. When there is clear indication of cyclical dynamics, however, the models in (13) give a more direct analysis that is easier to interpret.

The properties of a class of stochastic cycles are set out in Trimbur (2006) for the discrete-time case. The properties of the continuous-time models give a similar flexibility in describing periodic behavior.

Figure 1 shows the spectrum for selected parameter values. In particular, the spectrum normalized by the unconditional variance of the cycle is plotted; the variance is computed in Mathematica by numerical integration of the power spectrum for the given parameter values. The damping factor is set to a lower value for the second order cycle than for the first order cycle, as lower values of ρ are appropriate for the higher order models because of their resonance property. Thus, the cyclical shocks reinforce the periodicity in ψ_n(t), making the oscillations more persistent for given ρ. The difference in spectra in Figure 1 indicates that the periodicity is more clearly defined for the second order cycle.

The spectrum peaks at a period around 2π/λ_c for moderate values of the damping factor, in the standard business cycle range. The maximum does not occur exactly at λ_c, however, except as ρ tends to unity. The case ρ = 1 gives a nonstationary cycle, where one could, in theory, forecast out to unlimited horizons. Thus, in economic modeling, attention is usually restricted to stationary models where ρ < 1.

3 Signal Extraction in Continuous Time

This section develops the signal extraction problem in continuous time. A new result with proof is given for estimating a nonstationary signal from stationary noise. Whittle (1983) shows a similar result for nonstationary processes, but omits the proof and, in particular, fails to recognize the importance of initial conditions. Kailath, Sayed, and Hassibi (2000, pp. 221-227) prove the formula for the special case of a stationary signal. We extend the treatment of Whittle (1983) by providing proofs, at the same time illustrating the importance of initial value assumptions to the result. Further, the cases where the differentiated signal or noise process or both are white noise are treated rigorously.

3.1 Nonstationary signal and initial conditions

Consider the following model for a continuous time process Y(t):

Y(t) = S(t) + N(t),   (17)

where N(t) is stationary. The aim is to estimate the underlying signal S(t) in the presence of the noise, and it will be assumed that S(t) is I(m), or integrated of order m.

In general, m is any non-negative integer; the special case m = 0 reduces to stationary S(t). In many applications of interest, we have m ≥ 1, so that the mth derivative of S(t), denoted by W(t) = D^m S(t), is stationary. It is assumed that W(t) and N(t) are mean zero and uncorrelated with one another. In the standard case, both autocovariance functions, γ_W and γ_N, are integrable. An extension could also be considered where γ_W or γ_N or both are represented by a multiple of the Dirac delta function, which gives rise to tempered distributions (see Folland, 1995); the associated spectral densities are flat, indicating a corresponding white noise process.

The differentiated signal satisfies the stochastic differential equation

D^m S(t) = W(t).   (18)

From Section 2.2, the spectral density of D^m Y(t) is

f_DmY(λ) = f_W(λ) + λ^(2m) f_N(λ).   (19)

From Hannan (1970, p. 81), the nonstationary process can be written in terms of some initial values plus an m-fold integral of the stationary differenced process. For example, when m = 1,

Y(t) = Y(0) + ∫_0^t DY(s) ds

for some initial value random variable Y(0). Note that this remains valid both for t > 0 and for t < 0. When m = 2,

Y(t) = Y(0) + t DY(0) + ∫_0^t (t − s) D²Y(s) ds

for initial position Y(0) and velocity DY(0). In general, we can write

Y(t) = Σ_(j=0)^(m−1) (t^j/j!) D^j Y(0) + I_m[D^m Y](t),   (20)

with the operator I_m defined by I_m[X](t) = ∫_0^t [(t − s)^(m−1)/(m − 1)!] X(s) ds. Note that (20) holds for the signal S(t) as well.

For an I(m) signal, let S_* = (S(0), DS(0), …, D^(m−1)S(0))' denote the collection of values and higher order derivatives at time zero. It is assumed that S_* is uncorrelated with both W(t) and N(t) for all t. This assumption is analogous to Assumption A in Bell (1984), except that now higher order derivatives are involved.

3.2 Formula for the optimal filter

Consider the theoretical signal extraction problem for a bi-infinite series that follows (17). The optimal linear estimator of the signal gives the minimum mean square error. Thus, the goal is to minimize E[(S(t) − Ŝ(t))²] over estimators of the form Ŝ(t) = Ψ(L) Y(t) for some weighting kernel g. The notation for a continuous-lag filter was introduced earlier. The problem is to determine the optimal choice of g for general nonstationary models of the form (17). The following theorem shows the main result.

Theorem 1   For the process Y(t) in (17), suppose that S_* is uncorrelated with both W(t) and N(t) for all t. Also, assume that W(t) and N(t) are mean zero weakly stationary processes that are uncorrelated with one another, with autocovariance functions that are either integrable or given by constant multiples of the Dirac delta function, interpreted as tempered distributions. Let

ψ(λ) = f_W(λ) / [f_W(λ) + λ^(2m) f_N(λ)].

If ψ is integrable with 2m continuous derivatives (if m = 0, we only require that ψ be continuous), then the linear minimum mean square error estimate of S(t) is given by

Ŝ(t) = Ψ(L) Y(t) = ∫ g(u) Y(t − u) du,  g = F^(−1)(ψ).

The function g is the continuous weighting kernel of the optimal filter. The spectral density of the error process S(t) − Ŝ(t) is

f_E(λ) = f_W(λ) f_N(λ) / [f_W(λ) + λ^(2m) f_N(λ)],

hence the MSE is E[(S(t) − Ŝ(t))²] = (1/2π) ∫ f_E(λ) dλ.

If Y(t) is Gaussian, then Ŝ(t) is optimal among all estimators. The filter will be referred to as a continuous-lag Wiener-Kolmogorov (WK) filter. This distinguishes it from discrete-time model-based filters, which are defined only over a discrete set of lags. In contrast, here we focus on the model-based filters derived in continuous-time.

One of the important properties of the WK filters is that they pass, or preserve, polynomials, in analogy to discrete-lag filters constructed to have this property in discrete time. In particular,

Ψ(L) p(t) = p(t)

for a polynomial p(t) of sufficiently low degree. To make this explicit, the filter passes p(t) when

∫ u^j g(u) du = δ_0j

for any j up to the degree of p, with δ_0j denoting the Kronecker delta. It is shown in the proof of Theorem 1 that, provided that the associated moments exist, a WK filter passes polynomials of degree up to 2m − 1.

Note that the noise and differentiated signal can either be white noise or have integrable autocovariance functions. The signal extraction problem for the different cases determines different classes of weighting kernels. We can now define continuous-lag filters that reflect the nonstationary component of a time series.

In particular, the nonstationarity means that the signal includes a stochastic trend and so is represented by an integrated process. First, we show a simple example of the case m = 0; this reduces to stationarity, so the only requirement on ψ is continuity.

Example 4

For m = 0, let the signal have autocovariance function denoted by γ_S, and suppose further that the noise has autocovariance function γ_N; the associated spectral densities are f_S(λ) and f_N(λ).

The signal resembles a damped trend, whereas the noise is a pink noise process that incorporates pseudo-cyclical and irregular fluctuations. The ratio of spectra f_S/(f_S + f_N) is integrable and continuous, and from Example 2 the inverse Fourier Transform gives a simple filter with a double exponential kernel.

Example 5

Consider now the case m = 1, and suppose that the spectral density of the differentiated signal is f_W(λ) = φ(λ), where φ has the form of the standard normal density function, and that f_N(λ) = c for some constant c > 0. The signal extraction filter has a continuous-time frequency response given by

ψ(λ) = φ(λ) / [φ(λ) + c λ²],

which yields a double-exponential weighting kernel. This kernel passes lines and constants and could be used as a simple device for trend smoothing.

4 Illustrations of Continuous-Lag Filtering

In this section, examples of continuous-lag filters are given for economic time series. The filters are based on the class of IFN models; this class is particularly convenient for computing weighting kernels and offers flexibility for a range of applications. The spectral densities f_W and f_N that enter the formula for the gain are both rational functions of λ. Taking their ratio yields another rational function of λ for the frequency response. As these analytical expressions summarize the comprehensive effects of the filter, they can be studied and used in filter design in different contexts.

We focus on examples where models are set up within different signal extraction problems. The specifications are guided by applications of interest in economics; their solutions rely on the theorem given in the last Section for handling nonstationary series. In the first example, we start with the simplest case, the local level model. The second example considers an extension of the well-known HP filter, which has been widely used in macroeconomics as a detrending method. We show that the expression for the continuous-lag HP filter is relatively simple, and this gives a basis for detrending data with different sampling conditions.

The third example extends the treatment with a derivation of continuous-lag low-pass and band-pass Butterworth filters. This class of filters represents the analogue of the discrete-time filters introduced in Harvey and Trimbur (2003). The band-pass filters arise naturally as cycle estimators in a well-defined model that jointly describes trend, cyclical, and noisy movements. The general cyclical processes in continuous-time are defined in an analogous way to the discrete-time models studied in Trimbur (2006). Note that the cyclical components are equivalent to certain CARMA models. In the analysis of periodic behavior, it is more direct to work with the structural form; the frequency parameter λ_c, for instance, reflects the average, or central, periodicity.

The low-pass and band-pass filters we present have the property of mutual consistency. That is, they may be applied simultaneously. Other procedures, in contrast, do not preserve this property when the two filters are designed separately, or when they are based on different source models.

It is generally straightforward to investigate the weighting kernels of the filters. This involves calculating the residues of rational functions, a standard problem for which well-known procedures are available. Still, the computations can become burdensome in particular cases, so in presenting illustrations, we restrict attention to some standard filters. In these cases, the derivation of analytical results is feasible, with the expressions simple enough to provide a clear interpretation.

In the framework of filters, source models can be formulated to adapt to different situations. For instance, some series of Industrial Output are subject to weather effects that induce short-lived serial correlation. In estimating the trend, the base model can be set up with a low-order CAR or CARMA component.

The approach to filter design can also be adapted to focus on certain properties of the signal, such as its rate of change. Thus, in a more general framework, our target of estimation becomes a functional of the signal. This opens the door to a number of potential applications, such as turning point analysis, where the interest centers on some aspect of the signal's evolution over an interval. After describing the basic principle, in a fourth example, we examine an application to measuring the velocity and acceleration of the signal.

Illustration 1: Local Level Model

The trend plus noise model is written as Y(t) = μ(t) + ε(t), where μ(t) denotes the stochastic level, and ε(t) is continuous-time white noise with variance parameter σ_ε², denoted by ε(t) ~ WN(σ_ε²). See Harvey (1989) for discussion. An interpretation of the variance parameter is that Ψ(L)ε(t) has autocovariance function σ_ε² ∫ g(u) g(u + h) du for any (integrable) auxiliary weighting kernel g.

The local level model assumes Dμ(t) = η(t), where η(t) ~ WN(σ_η²). The signal-noise ratio in the continuous-time framework is defined as q = σ_η²/σ_ε². So the observed process requires one derivative for stationarity, and we write Y(t) ~ I(1). The spectral densities of the differentiated trend and observed process are

f_Dμ(λ) = σ_η²,  f_DY(λ) = σ_η² + λ² σ_ε².

Though the constant function is nonintegrable over the real line, the frequency response of the signal extraction filter is given by the ratio ψ(λ) = q/(q + λ²), which is integrable. As in the previous example, the weighting kernel has the double exponential shape:

g(u) = (√q/2) e^(−√q |u|).

The rate of decay in the tails now depends on the signal-noise ratio q of the underlying continuous-time model.
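The Fourier pair underlying this illustration can be verified directly. The sketch below (our function names; the value of q is chosen arbitrarily) transforms the double exponential kernel (√q/2) e^(−√q|u|), the standard Wiener-Kolmogorov kernel for a local level model, and numerically recovers the response q/(q + λ²).

```python
import math

def wk_kernel(u, q):
    """Wiener-Kolmogorov kernel for the local level model:
    g(u) = (sqrt(q)/2) * exp(-sqrt(q)|u|), with signal-noise ratio q."""
    r = math.sqrt(q)
    return 0.5 * r * math.exp(-r * abs(u))

def freq_response(lam, q, half_width=40.0, n=80000):
    """Trapezoid approximation of psi(lambda) = integral g(u) cos(lambda*u) du
    (the kernel is even, so the sine part of the transform vanishes)."""
    h = 2.0 * half_width / n
    total = 0.0
    for k in range(n + 1):
        u = -half_width + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * wk_kernel(u, q) * math.cos(lam * u)
    return total * h
```

Larger q makes √q larger, so the kernel's exponential tails decay faster and the filter concentrates weight on nearby observations, exactly the dependence on the signal-noise ratio described above.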

Illustration 2: Smooth Trend Model

The local linear trend model (Harvey 1989, p. 485) has the following specification:

Dμ(t) = β(t) + η(t),  Dβ(t) = ζ(t),

where η(t) ~ WN(σ_η²) and ζ(t) ~ WN(σ_ζ²) are uncorrelated. Setting σ_η² = 0 gives the smooth trend model, for which noisy fluctuations in the level are minimized and the movements occur due to changes in slope. The data generating process is Y(t) = μ(t) + ε(t), where ε(t) ~ WN(σ_ε²) is white noise uncorrelated with μ(t). Now the signal-noise ratio is q = σ_ζ²/σ_ε².

Recall that the discrete-time smooth trend model underpins the well-known HP filter for estimating trends in discrete time series; see Hodrick and Prescott (1997), as well as Harvey and Trimbur (2003). Here we develop an analogous filter for the continuous-time smooth trend model. We may write the model as

D²μ(t) = ζ(t),  Y(t) = μ(t) + ε(t).

The spectral densities of the appropriately differentiated trend and series are

f_D²μ(λ) = σ_ζ²,  f_D²Y(λ) = σ_ζ² + λ⁴ σ_ε².

Hence the ratio ψ(λ) = q/(q + λ⁴) gives the frequency response function of the filter; the error spectrum is f_E(λ) = σ_ε² q/(q + λ⁴). Taking the inverse Fourier transform of this function (see the appendix for details of the derivation) yields the weighting kernel

g(u) = (a/(2√2)) e^(−a|u|/√2) [ cos(au/√2) + sin(a|u|/√2) ],  a = q^(1/4).   (22)

This gives the continuous-time extension of the HP filter. From the discussion following Theorem 1, the kernel in (22) passes cubics.

Figure 2 shows the weighting function for three different values of q. As the signal-noise ratio increases, the trend becomes more variable relative to noise, so the resulting kernel places more emphasis on nearby observations. Similarly, as q decreases, the filter adapts by smoothing over a wider range. The negative side-lobes apparent in the figure enable the filter to pass quadratics.
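The continuous-lag HP pair can be checked numerically. The sketch below assumes the standard second-order Butterworth impulse response, g(u) = (a/(2√2)) e^(−a|u|/√2) [cos(au/√2) + sin(a|u|/√2)] with a = q^(1/4), which is consistent with the frequency response q/(q + λ⁴); this closed form is stated here as an assumption of the sketch rather than quoted from (22), and q is set arbitrarily.

```python
import math

def hp_kernel(u, q):
    """Assumed continuous-lag HP kernel (second-order Butterworth form):
    g(u) = (a/(2*sqrt(2))) * exp(-a|u|/sqrt(2)) * (cos(a*u/sqrt(2)) + sin(a|u|/sqrt(2)))."""
    a = q ** 0.25
    b = a / math.sqrt(2.0)
    return (a / (2.0 * math.sqrt(2.0))) * math.exp(-b * abs(u)) * (
        math.cos(b * u) + math.sin(b * abs(u)))

def freq_response(lam, q, half_width=40.0, n=80000):
    """Trapezoid approximation of psi(lambda) = integral g(u) cos(lambda*u) du
    (g is even, so only the cosine part contributes)."""
    h = 2.0 * half_width / n
    total = 0.0
    for k in range(n + 1):
        u = -half_width + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * hp_kernel(u, q) * math.cos(lam * u)
    return total * h
```

The check at λ = 0 confirms that the kernel integrates to one (constants are passed), and at other frequencies the transform recovers the model-based response q/(q + λ⁴), including the damped oscillation responsible for the negative side-lobes.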

Illustration 3: Continuous-Lag Band-Pass

Consider again the class of stochastic cycles in Example 3. A simple (nonseasonal) model for a continuous-time process in macroeconomics is given by

Y(t) = μ_m(t) + ψ_n(t) + ε(t),

where μ_m(t) is a trend component that accounts for long-term movements and the cyclical component ψ_n(t) follows (13) for index n. The irregular ε(t) is meant to absorb any random, or nonsystematic, variation, and in direct analogy with discrete-time, it is assumed that in continuous-time, ε(t) ~ WN(σ_ε²). The definition of the mth order trend is

D^m μ_m(t) = ζ(t),  ζ(t) ~ WN(σ_ζ²),

for integer m ≥ 1. For m = 1, this gives standard (scaled) Brownian motion. For m = 2, μ_2(t) is integrated Brownian motion, the continuous-time analogue of the smooth trend, as in the previous illustration.

In formulating the estimation as a signal extraction problem, we take the nonstationary trend as the `signal' and the remaining components as the `noise'. This is done just to map the estimation problem onto the framework developed in the last Section; it is not intended to suggest any special importance of the `signal' as a target of extraction. Actually, in this case the `noise' part will usually be of greatest interest in a business cycle analysis. Thus, the optimal filter is constructed for the trend, and the complement of this filter (one minus its frequency response) yields the band-pass. Similarly, to formulate the estimation of the trend plus cycle, take the sum $\mu_m(t) + \psi_n(t)$ as the signal and the irregular as the noise.

Thus, the class of continuous-lag Butterworth filters is given by

$W^{LP}(\omega) = \frac{f_\mu(\omega)}{f_\mu(\omega) + f_\psi(\omega) + \sigma^2_\epsilon}, \qquad W^{BP}(\omega) = \frac{f_\psi(\omega)}{f_\mu(\omega) + f_\psi(\omega) + \sigma^2_\epsilon},$

where $LP$ stands for low-pass filter and $BP$ stands for band-pass filter, both of order pair $(m, n)$. These expressions follow from combining the power spectrum $f_\psi(\omega)$ of the cycle with the pseudo-spectrum $f_\mu(\omega) = \sigma^2_\zeta/\omega^{2m}$ of $\mu_m(t)$. Defining $q_\zeta = \sigma^2_\zeta/\sigma^2_\epsilon$ as the signal-noise ratio for the trend and $q_\kappa = \sigma^2_\kappa/\sigma^2_\epsilon$ as the signal-noise ratio for the cycle, it follows that

$W^{LP}(\omega) = \frac{q_\zeta/\omega^{2m}}{q_\zeta/\omega^{2m} + q_\kappa\, g_n(\omega) + 1},$ (23)

$W^{BP}(\omega) = \frac{q_\kappa\, g_n(\omega)}{q_\zeta/\omega^{2m} + q_\kappa\, g_n(\omega) + 1},$ (24)

where $g_n(\omega)$ denotes the spectrum of the $n$th order cycle with unit disturbance variance. Here the order $m$ denotes the order of integration as determined by the stochastic trend model. The definitions in (23) and (24) parallel the development of Harvey and Trimbur (2003) for the discrete-time case. Note that in the continuous-time case, we must have integrability of the frequency response function over the entire real line rather than over a restricted interval.

Figure 3 illustrates the low-pass and band-pass gain functions for a given trend order $m$ and cycle order $n$. The low-pass gain dips at intermediate frequencies to accommodate the presence of the cycle in the model. Figure 4 shows a comparison of the band-pass filter for a lower cycle order and for $n = 4$. The other parameters are the same as in the previous figure. Note the increased sharpness produced by the higher order model.
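Gain functions of this kind are easy to tabulate from the component spectra. In the sketch below, the trend pseudo-spectrum $q_\zeta/\omega^{2m}$ follows the construction above, while `g_cycle` is a hypothetical damped-oscillator parametrization of the cycle spectrum: the functional form and the parameters `lam` and `rho` are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

m, n = 2, 2                     # trend order and cycle order
q_zeta, q_kappa = 0.1, 1.0      # illustrative signal-noise ratios
lam, rho = np.pi / 16, 0.5      # hypothetical cycle frequency and damping

w = np.linspace(1e-4, np.pi, 2000)        # frequency grid (avoid w = 0)
f_trend = q_zeta / w ** (2 * m)           # pseudo-spectrum of the m-th order trend
g_cycle = q_kappa / (((w - lam) ** 2 + rho ** 2) *
                     ((w + lam) ** 2 + rho ** 2)) ** n  # assumed cycle spectrum

denom = f_trend + g_cycle + 1.0           # trend + cycle + irregular (unit variance)
W_lp = f_trend / denom                    # low-pass gain
W_bp = g_cycle / denom                    # band-pass gain
```

Plotting `W_lp` and `W_bp` against `w` reproduces the qualitative shapes described for Figure 3: the low-pass gain is near one at very low frequencies and dips where the cycle spectrum is large.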

Illustration 4: Smooth Trend Velocity and Acceleration

In some applications, interest centers on some property of the signal, such as its growth rate, rather than on the value of the signal itself. In particular, consider the $k$th derivative operator $D^k$. The conditional expectation of $D^k s(t)$ is equal to $D^k$ applied to the conditional expectation of $s(t)$, since $D^k$ is linear. So for Gaussian processes, assuming that $\omega^k \Psi(\omega)$ is integrable - where $\Psi(\omega)$ is the frequency response of the original filter - the weighting kernel for estimating $D^k s(t)$ is given by the $k$th derivative of the kernel $\psi$, that is, $\psi^{(k)}$. The first derivative of the signal indicates a velocity, or growth rate. The second derivative indicates acceleration, or variation in growth rate.

More generally, we can consider target signals $\Lambda s(t)$ for a linear operator $\Lambda$. To compute the mean squared error of $\Lambda \hat{s}(t)$ as an estimate of the target $\Lambda s(t)$, multiply the error spectrum from Theorem 1 by the squared magnitude of the frequency response of $\Lambda$. This results in the spectrum of the new error, whose integral equals the mean squared error. If, for example, $\Lambda = D$, then the error spectrum is multiplied by $\omega^2$, and the result is then integrated over $(-\infty, \infty)$.

Velocity and acceleration estimates can be computed for the HP filtered signal. The filters are constructed directly from the Smooth Trend model. In Newtonian mechanics, a local maximum in a particle's trajectory is indicated by zero velocity together with a negative acceleration; similarly, velocity and acceleration indicators may be used to discern a downturn or recession in a macroeconomic series.

Since $\omega^2 \Psi(\omega)$ is integrable, both the first and second derivatives of the kernel are well-defined. Direct calculation yields

The velocity filter, or first derivative with respect to time, has the interpretation of a growth rate for the trend. The weighting kernel in Figure 5 shows how the growth in the signal is assessed by comparing forward-looking displacements with recent displacements. Likewise, the acceleration indicates the second derivative, or curvature. The weighting kernel in Figure 6 has a characteristic sharp decline around the origin, so that contemporaneous and nearby values are subtracted in estimating changes in growth.
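The velocity and acceleration kernels can also be obtained by numerically differentiating the trend kernel. The closed form below is the same assumed smooth-trend kernel as in the earlier sketch; the moment checks confirm that the velocity filter annihilates constants (zero total mass) and recovers a unit slope exactly (first moment equal to $-1$).

```python
import numpy as np

def hp_kernel(x, q=1.0):
    # assumed closed form for the continuous-lag smooth-trend kernel
    a = q ** 0.25
    b = a / np.sqrt(2.0)
    return (a / (2.0 * np.sqrt(2.0))) * np.exp(-b * np.abs(x)) * (
        np.cos(b * x) + np.sin(b * np.abs(x)))

x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]
psi = hp_kernel(x)
vel = np.gradient(psi, x)   # velocity kernel: first derivative of psi
acc = np.gradient(vel, x)   # acceleration kernel: second derivative of psi

# Moment checks: for y(t) = c the velocity estimate is 0; for y(t) = t the
# convolution of vel with y returns slope 1, since the first moment is -1.
mass_vel = vel.sum() * dx
first_moment_vel = (x * vel).sum() * dx
mass_acc = acc.sum() * dx    # the acceleration kernel also has zero mass
```

The antisymmetric shape of `vel` matches the description of Figure 5 (forward-looking displacements compared against recent ones), while `acc` shows the sharp dip around the origin described for Figure 6.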

5 Application: unequally sampled data

In this Section, we outline a procedure for the practical application of the method. The discussion of discretization is kept concise, as this material is set out in detail in McElroy and Trimbur (2007).

Starting with a base model in continuous-time, the classification into stock and flow is reflected in the measurement of observations. Denote the underlying process by $y(t)$. Given the sampling interval $\delta$, a stock observation at the $n$th time point is defined as

$y_n = y(n\delta).$ (25)

The times of observations correspond to $t = n\delta$ for integer $n$. The discrete stock time series is then the sequence of values $\{ y(n\delta) \}$.

A series of flow observations has the form

$y_n = \int_{(n-1)\delta}^{n\delta} y(t)\, dt,$ (26)

where $\delta$ is both the interval of cumulation and the interval separating successive observation points. Note that, more generally, the observation times need not be equally spaced, but for now we assume for simplicity that the spacing is constant at $\delta$.
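The distinction between stock and flow sampling can be illustrated on a simulated path. The Brownian-motion data process, the fine step `dt`, and the sampling interval `delta` are all illustrative choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                 # fine simulation step for the underlying path
delta = 1.0               # sampling interval between observations
t = np.arange(0.0, 20.0, dt)
y = np.cumsum(rng.normal(scale=np.sqrt(dt), size=t.size))  # Brownian path y(t)

step = int(round(delta / dt))
stocks = y[::step]        # stock sampling: point-in-time values y(n * delta)
starts = np.arange(0, t.size, step)
flows = np.add.reduceat(y, starts) * dt  # flow sampling: Riemann sum of the
                                         # integral of y over each interval
```

Twenty time units with `delta = 1` yield twenty stock and twenty flow observations; the flows smooth the path within each interval, while the stocks record its endpoint values.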

For a finite sample, let $\mathbf{y}$ be the column vector of observations. Then for a Gaussian process $y(t)$, the law of iterated conditional expectations yields

$E[\, s(t) \mid \mathbf{y} \,] = \int_{-\infty}^{\infty} \psi(t - u)\, E[\, y(u) \mid \mathbf{y} \,]\, du.$ (27)

That is, our best estimate of the signal at any time $t$, given the observations $\mathbf{y}$, is a convolution of the weighting kernel with the interpolated and extrapolated estimates $E[\, y(u) \mid \mathbf{y} \,]$. Thus the estimate of the signal is computed as follows:
1. Determine the weighting kernel $\psi$ from a fitted continuous-time model.
2. Compute $E[\, y(u) \mid \mathbf{y} \,]$ for a set of values $u$ on a fine mesh.
3. Compute a numerical approximation to the integral in (27) and in this way approximate the signal estimate.
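The three steps above reduce to a single quadrature. In the sketch below, `yhat` plays the role of the interpolated and extrapolated estimates on a fine mesh, and the kernel is the assumed smooth-trend form from the earlier illustration. With `yhat` a straight line, the estimate at $t = 0$ should recover the line's intercept, since the filter passes polynomials.

```python
import numpy as np

def hp_kernel(x, q=1.0):
    # assumed closed form for the continuous-lag smooth-trend kernel
    a = q ** 0.25
    b = a / np.sqrt(2.0)
    return (a / (2.0 * np.sqrt(2.0))) * np.exp(-b * np.abs(x)) * (
        np.cos(b * x) + np.sin(b * np.abs(x)))

def extract_signal(t, mesh, yhat, q=1.0):
    # Step 3: Riemann-sum approximation to the convolution integral (27)
    weights = hp_kernel(t - mesh, q)
    return (weights * yhat).sum() * (mesh[1] - mesh[0])

mesh = np.linspace(-40.0, 40.0, 200001)   # fine mesh of evaluation points (step 2)
yhat = 2.0 + 0.5 * mesh                   # hypothetical interpolated estimates
estimate = extract_signal(0.0, mesh, yhat)
```

In practice `yhat` would come from a smoother applied to the fitted continuous-time model (step 1); the linear `yhat` here simply exercises the quadrature.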

Explicit worked examples are beyond the scope of this paper, but the above suggests how the continuous-lag filter could be directly applied to a general problem. Another approach, which we prefer, is to first discretize and then apply the appropriate discrete filter to the observed data; this approach is set out in detail in McElroy and Trimbur (2007). For now, it should be emphasized that our approach, based on a continuous-lag kernel, can handle signal estimation in a rather general context, that is, with an unequally spaced series of stock or flow data, and with a signal time point lying in between observation times. This generality of the signal extraction problem cannot be achieved in a purely discrete-time setting and could only be achieved, with great difficulty, in an approach requiring full model discretization.

6 Conclusion

This paper has established a theoretical foundation for continuous-time signal extraction with nonstationary models. Economic series commonly exhibit some form of stochastic trend. The rigorous treatment of nonstationarity given in this paper thus paves the way for the design of filters appropriate in economics. As examples, we have presented a new class of continuous-lag low-pass and band-pass filters for time series.

In further work (see McElroy and Trimbur, 2007), we show how such continuous-lag filters may be discretized for application to real data. One special case of interest is how to adapt the HP filter to a changing observation interval and to different modes of measurement, such as stock and flow sampling. More generally, a broad range of filters may be used as the basis for analysis. In addition to model flexibility, there is also flexibility in how the filters may be adapted to the target of estimation. This has been illustrated with velocity and acceleration filters that could be used in applications where turning point indicators are of interest.

A key aspect of our approach is the rigor and generality of the treatment. A continuous-lag filter gives the basis for estimating a flexible target signal when the sample has various properties, such as unequal spacing, stock or flow sampling, missing data, and mixed frequency data.

Acknowledgements.

Appendix: Proofs

Proof of Theorem 1. Throughout, we shall assume that the signal is nonstationary, since the stationary case is essentially handled in Kailath et al. (2000). In order to prove the theorem, it suffices to show that the error process is orthogonal to the underlying process. By (20), it suffices to show that the error is orthogonal to the differenced process and the initial values. So we begin by analyzing the error process produced by the proposed weighting kernel $\psi$. We first note the following interesting property of $\psi$: its moments

exist by the smoothness assumptions on the frequency response, and the lower-order moments are easily shown to equal zero (for higher orders, the moments are zero so long as they exist - their existence is not guaranteed by the assumptions of the theorem). Moreover, the integral of $\psi$ is equal to 1. These properties ensure that the filter passes polynomials of the requisite degree. We first note that representation (20) also extends to the signal. Then the error process is
Since $\psi$ passes polynomials, the polynomial terms in the error vanish (here $\delta$ denotes the Dirac delta function). Note that any filter that does not pass polynomials cannot be MSE optimal, since the error process will grow unboundedly with time. So we have
which is orthogonal to the initial values by Assumption A. Due to the representation (20), it is sufficient to show that the error process is uncorrelated with the differenced process. For any real $t$,
which uses the fact that . Now we have
 (A.2)

If is integrable, we can write . If instead, then ; we can still use the above Fourier representation of in (A.2), because the various integrals will take care of the non-integrability of automatically. Since , we obtain that (A.2) is equal to
When integrated against , we use the moments property of to obtain

This uses , which is not integrable if ; yet will be integrable under the conditions of the theorem. As for the noise term in (A.1), we first note that exists for each since exists by assumption; this existence is interpreted in the sense of Generalized Random Processes (Hannan, 1970). In particular
This Fourier representation is valid even when , since is integrable by assumption. Similarly,
where the derivatives are interpreted in the sense of distributions - i.e., when this quantity is integrated against a suitably smooth test function, the derivatives are passed over via integration by parts:
Since for is integrable by assumption, we have , and the second term in (A.1) becomes
This cancels with the first term of (A.1), which shows that the proposed kernel is MSE optimal. Using similar techniques, the error spectral density is obtained as well.

Derivation of the Weighting Kernel in Illustration 2. We compute the Fourier Transform via the Cauchy Integral Formula (Ahlfors, 1979), letting $q = 1$ for simplicity:
We can replace $e^{i\omega x}$ by $\cos(\omega x)$ because the integrand is even. The standard approach is to compute the integral of the complex function

$f(z) = \frac{e^{izx}}{1 + z^4}$

along the real axis by computing the sum of the residues in the upper half plane, and multiplying by $2\pi i$ (since $f$ is bounded and integrable in the upper half plane). It has two simple poles there: $z = e^{i\pi/4}$ and $z = e^{3i\pi/4}$. The residues work out to be

respectively. Summing these and multiplying by $2\pi i$ gives the desired result, after some simplification. To extend beyond the case $q = 1$, simply let $x \mapsto q^{1/4} x$ and multiply by $q^{1/4}$, by change of variable.
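The residue computation can be cross-checked numerically in the $q = 1$ case at $x = 0$: the kernel value there is $(2\pi)^{-1}\int (1+\omega^4)^{-1}\,d\omega$, and the residue theorem gives the same number. The pole locations follow from $z^4 = -1$, and the residue of $1/(1+z^4)$ at a simple pole $z_0$ is $1/(4 z_0^3)$.

```python
import numpy as np

# Direct quadrature of the inversion integral at x = 0 (q = 1)
w = np.linspace(-200.0, 200.0, 2000001)
dw = w[1] - w[0]
numeric = (1.0 / (1.0 + w ** 4)).sum() * dw / (2.0 * np.pi)

# Residue theorem: simple poles of 1/(1 + z^4) in the upper half plane
poles = [np.exp(1j * np.pi / 4.0), np.exp(3j * np.pi / 4.0)]
residue_sum = sum(1.0 / (4.0 * z ** 3) for z in poles)
by_residues = (2j * np.pi * residue_sum).real / (2.0 * np.pi)
# Both agree with 1 / (2 * sqrt(2)), approximately 0.353553
```

The agreement of the quadrature with the residue calculation provides a concrete check on the contour argument sketched above.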

Bibliography

Ahlfors, L., (1979)
Complex Analysis. New York, New York: McGraw-Hill.
Bell, W., (1984)
Signal extraction for nonstationary time series. The Annals of Statistics 12, 646 - 664.
Bergstrom, A. R., (1988)
The History of Continuous-Time Econometric Models. Econometric Theory 4, 365-383.
Bergstrom, A. R., (1990)
Continuous Time Econometric Modelling. New York, New York: Oxford University Press.
Brockwell, P., (2001)
Lévy-Driven CARMA Processes. Annals of the Institute of Statistical Mathematics 53, 113-124.
Brockwell, P. and Marquardt, T., (2005)
Lévy-Driven and Fractionally Integrated ARMA Processes with Continuous Time Parameter. Statistica Sinica 15, 477-494.
Chambers, M. and McGarry, J., (2002)
Modeling Cyclical Behavior With Differential-Difference Equations in an Unobserved Components Framework. Econometric Theory 18, 387-419.
Folland, G., (1995)
Introduction to Partial Differential Equations. Princeton: Princeton University Press.
Gandolfo, G., (1993)
Continuous Time Econometrics. London, England: Chapman and Hall.
Hannan E., (1970)
Multiple Time Series. New York, New York: Wiley.
Harvey A. and Stock, J., (1985)
The Estimation of Higher-Order Continuous-Time Autoregressive Models. Econometric Theory 1, 97-117.
Harvey A. and Stock, J., (1993)
Estimation, Smoothing, Interpolation, and Distribution for Structural Time-Series Models in Continuous Time. In P.C.B. Phillips (ed.), Models, Methods and Applications of Econometrics, 55-70.
Harvey, A. and Trimbur, T., (2003)
General Model-Based Filters for Extracting Cycles and Trends in Economic Time Series. Review of Economics and Statistics 85, 244-255.
Harvey, A. and Trimbur, T., (2007)
Trend Estimation, Signal-Noise Ratios and the Frequency of Observations. In G. L. Mazzi and G. Savio (eds.), Growth and Cycle in the Eurozone, 60-75. Basingstoke: Palgrave Macmillan.
Hodrick, R., and Prescott, E., (1997)
Postwar U.S. Business Cycles: An Empirical Investigation. Journal of Money, Credit, and Banking 29, 1 - 16.
James, R., and Belz, M., (1936)
On a Mixed Difference and Differential Equation. Econometrica 4, 157-160.
Jones R., (1981)
Fitting a Continuous-Time Autoregression to Discrete Data. In D.F. Findley (ed.), Applied Time Series Analysis, 651-674. New York: Academic Press.
Kailath T., Sayed, A., and Hassibi, B., (2000)
Linear Estimation. Upper Saddle River, New Jersey: Prentice Hall.
Kalecki, M., (1935)
A Macrodynamic Theory of Business Cycles. Econometrica 3, 327-344.
Koopmans, L., (1974)
The Spectral Analysis of Time Series. New York, New York: Academic Press, Inc.
Maravall, A. and del Rio, A., (2001)
Time Aggregation and the Hodrick-Prescott Filter. Bank of Spain Working Paper 0108.
McElroy T., and Trimbur, T., (2007)
A coherent approach to filter design and interpolation for nonstationary stock and flow time series observed at a variable sampling frequency. mimeo
Priestley M., (1981)
Spectral Analysis and Time Series. London: Academic Press.
Ravn, M. and Uhlig, H., (2002)
On Adjusting the HP Filter for the Frequency of Observation. Review of Economics and Statistics 84, 371 - 380.
Stock J., (1987)
Measuring Business Cycle Time. The Journal of Political Economy 95, 1240-1261.
Stock J., (1988)
Estimating Continuous-Time Processes Subject to Time Deformation: An Application to Postwar U.S. GNP. Journal of the American Statistical Association 83, 77-85.
Trimbur T., (2006)
Properties of higher order stochastic cycles. Journal of Time Series Analysis 27, 1-17.
Whittle, P., (1983)
Prediction and Regulation. Oxford: Blackwell Publishers.

Footnotes

* Corresponding author. Address: Division of Research and Statistics; 20th and C Street, NW; Federal Reserve Board; Washington, DC 20551. Email: Thomas.M.Trimbur@frb.gov.
1. Note that these models have a different form from the models that would result from the exact discretization of (13). In general, starting with a continuous-time model with uncorrelated components leads to a discretized model that has either correlated components or MA disturbances; one expects, however, that the basic structure of the discrete-time models should remain unchanged.