Finance and Economics Discussion Series: 2010-13

A Reliable and Computationally Efficient Algorithm for Imposing the Saddle Point Property in Dynamic Models

Gary S. Anderson 1

Keywords: Saddle point property, linear rational expectations models


This paper describes a set of algorithms for quickly and reliably solving linear rational expectations models. The utility, reliability and speed of these algorithms are a consequence of 1) the algorithm for computing the minimal dimension state space transition matrix for models with arbitrary numbers of lags or leads, 2) the availability of a simple modeling language for characterizing a linear model and 3) the use of the QR Decomposition and Arnoldi type eigenspace calculations. The paper also presents new formulae for computing and manipulating solutions for arbitrary exogenous processes.

1 Introduction and Summary

Economists at the Board have an operational need for tools that are useful for building, estimating and simulating moderate to large scale rational expectations models. This context dictates a need for careful attention to computational efficiency and numerical stability of the algorithms.

These algorithms have proved very durable and useful for staff at the central bank. Many economists at the Federal Reserve Board have used the algorithms in their daily work and their research.2 With the exception of researchers at European central banks [Zagaglia, 2002], few economists outside the US central bank seem to know about the method. This paper attempts to make the method and approach more widely available by describing the underlying theory along with a number of improvements on the original algorithm.3

The most distinctive features of this approach are:

  • an algorithm for computing the minimal dimension state space transition matrix for models with arbitrary numbers of lags or leads,
  • the availability of a simple modeling language for characterizing a linear model and
  • the use of the QR Decomposition and Arnoldi type eigenspace calculations.

This unique combination of features makes the algorithm especially effective for large models. See [Anderson, 2006] for a systematic comparison of this algorithm with the alternative procedures.

The remainder of the paper is organized as follows. Section 2 states the saddle point problem. Section 3 describes the algorithms for solving the homogeneous and inhomogeneous versions of the problem and describes several implementations. Section 4 shows how to compute matrices often found useful for manipulating rational expectations model solutions: the observable structure (Section 4.1) and stochastic transition matrices (Section 4.2). The Appendices contain proofs for the linear algebra underlying the algorithm and the solution of a simple example model.

2 Saddle Point Problem Statement

Consider linear models of the form:

\displaystyle \sum_{i=-\tau}^\theta{H_i x_{t+i}}= \Psi z_{t}, \,\, t = 0,\ldots,\infty (1)

with initial conditions, if any, given by constraints of the form

\displaystyle x_i = x^{data}_i, i = - \tau, \ldots, -1 (2)

where both  \tau and  \theta are non-negative, and  x_t is an  L-dimensional vector of endogenous variables with

\displaystyle \lim_{ t \rightarrow\infty} \Vert x_t\Vert < \infty (3)

and  z_t is a  k dimensional vector of exogenous variables.

Section 3 describes computationally efficient algorithms for determining the existence and uniqueness of solutions to this problem.
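To fix ideas, consider a hypothetical univariate model with one lag and one lead (the coefficients below are chosen purely for illustration and are not from the paper):  x_{t-1} - 2.5 x_t + x_{t+1} = 0. A minimal Python sketch (NumPy assumed) casts it in the form of equation 1 and computes the characteristic roots; the saddle point configuration requires exactly as many explosive roots as leads.

```python
import numpy as np

# Hypothetical univariate model (L = 1, tau = theta = 1):
#   H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = 0
H = {-1: 1.0, 0: -2.5, 1: 1.0}

# Roots of the characteristic polynomial H_1 z^2 + H_0 z + H_{-1} = 0
roots = np.sort(np.roots([H[1], H[0], H[-1]]))
# one stable root (0.5) and one explosive root (2.0): the saddle point
# configuration, with exactly as many explosive roots as leads
```

This tiny example reappears below to illustrate the individual steps of the algorithm.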

3 The Algorithms

The uniqueness of solutions to system 1 requires that any transition matrix characterizing the dynamics of the linear system have an appropriate number of explosive and stable eigenvalues [Blanchard & Kahn, 1980], and that a certain set of asymptotic linear constraints be linearly independent of explicit and certain other auxiliary initial conditions [Anderson & Moore, 1985].

The solution methodology entails

  1. Manipulating the left hand side of equation 1 to obtain a state space transition matrix,  A, along with a set of auxiliary initial conditions,  Z for the homogeneous solution.
    \displaystyle Z \begin{bmatrix}x_{-\tau}\\ \vdots \\ x_{\theta} \end{bmatrix}=0 \quad \text{and} \quad \begin{bmatrix}x_{-\tau+1}\\ \vdots \\ x_{\theta} \end{bmatrix} =A \begin{bmatrix}x_{-\tau}\\ \vdots \\ x_{\theta-1} \end{bmatrix} (4)

    See Section 3.1.1.
  2. Computing the eigenvalues and vectors spanning the left invariant space associated with large eigenvalues. See Section 3.1.2.
    \displaystyle V A = \mathcal{M} V (5)

    with the eigenvalues of  \mathcal{M} all greater than one in absolute value.
  3. Assembling asymptotic constraints,  Q, (See Section 3.1.2.) by combining the:
    1. auxiliary initial conditions identified in the computation of the transition matrix and
    2. the invariant space vectors
    \displaystyle Q= \begin{bmatrix}Z\\ V \end{bmatrix} (6)

  4. Investigating the rank of the linear space spanned by these asymptotic constraints and, when a unique solution exists,
    1. Computing the auto-regressive representation,  B. See Section 3.1.3.
    2. Computing matrices,  \phi, F, \vartheta for characterizing the impact of the inhomogeneous right hand side term. See Section 3.2.1.

Figure 1 presents a flowchart of the algorithm.

Figure 1: Algorithm Components. The figure provides a flow chart of Algorithm 1 outlined in the appendix.

3.1 Homogeneous Solution

Suppose, for now, that  \Psi=0:4

\displaystyle \sum_{i= - \tau}^\theta{ H_i x_{ t + i } }= 0, t \geq0 (8)
\displaystyle \lim_{ t \rightarrow\infty} \Vert x_t\Vert < \infty (9)

The homogeneous specification 8 is not restrictive, since the procedure can handle inhomogeneous versions of equation 1 by recasting the problem in terms of deviations from a steady state value. However, Section 3.2 provides a more intuitive, flexible and computationally efficient alternative for computing inhomogeneous solutions.5

3.1.1 State Space Transition Matrix and Auxiliary Initial Conditions:  A, Z

This section describes how to determine a first-order state space representation of the equation system 8. The method is an extension of the shuffling algorithm developed in [Luenberger, 1977, Luenberger, 1978]. If  H_\theta is non-singular, we can immediately obtain  x_{t+\theta} in terms of  x_{t-\tau} \ldots x_{t+\theta-1}:

\displaystyle x_{t+\theta} = - H_{\theta}^{-1}\begin{bmatrix}H_{-\tau}&\hspace{0.25in}&\dots&\hspace{0.25in}&H_{\theta-1} \end{bmatrix} \begin{bmatrix}x_{t-\tau}\\ \vdots \\ x_{t+\theta-1} \end{bmatrix} (10)

However, the natural specification of many economic models has singular  H_\theta.

This first algorithm applies full rank linear transformations to equations from the original linear system in order to express  x_{t+\theta} in terms of  x_{t-\tau} \ldots x_{t+\theta-1}. It produces an unconstrained, typically explosive, autoregressive representation for the evolution of the components of the state space vectors and a set of vectors that provide important auxiliary initial conditions.

Section A.1 presents a proof that repeating this process of annihilating and regrouping rows ultimately produces an  H^{\sharp,k}=H^{\sharp,\ast} with  H^{\sharp,\ast}_\theta non-singular. The proof identifies a benign rank condition that guarantees that the algorithm will successfully compute the unconstrained autoregression and the auxiliary initial conditions.

Figure 2: Initial Tableau. The figure shows the distribution of potentially non zero coefficients in an array characterizing the linear constraints for t equal zero to t equal T. The grey areas show how each L rows of the matrix, designated by $H^{\sharp,0}$ are repeated but shifted right L columns for each increment in the time index t. Generally, the rightmost block of each of the L rows is singular.
Figure 3: Forward Row Annihilation. The figure shows the result of premultiplying each of the L rows representing the linear constraints at a given point in time by the matrix annihilating the first few rows of the rightmost column of each of the matrices t equal 0 to T. The figure shows how regrouping the rows of the matrix produces a group of rows labelled $F^{\sharp,1}$ ( equivalent to $Z^{\sharp,1}$ at the first stage ) at the top of the tableau. It also shows dotted lines around the new repeated group of rows $H^{\sharp,1}$ that has a potentially non-singular right hand block.
Figure 4: Full Rank Leading Block. The figure shows the result of repeating the process of annihilating the right hand blocks of regrouped rows. The figure shows the augmentation of $Z^{\sharp,k}$ by $F^{\sharp,k}$ and the dotted lines around the rows that comprise $H^{\sharp,k}$. The figure shows how this has led to a non-singular right hand block for $H^{\sharp,k}$ represented by the black square in the right of the regrouped set of rows.

Figures 2-4 provide a graphical characterization of the linear algebraic transformations characterizing the algorithm. Figure 2 presents a graphical characterization of the relevant set of linear constraints for  t=0 \ldots (\tau{}+\theta{}). The figure represents the regions where the coefficients are potentially non-zero as shaded gray. If  H^{\sharp,0}_\theta is singular, one can find a linear combination of its rows which preserves the rank of  H^{\sharp,0}_\theta but which annihilates one or more of its rows.

Consequently, one can pre-multiply the matrices presented in Figure 2 by a unitary matrix  U_1 to get the distribution of zeros displayed in Figure 3. Since the matrices repeat over time, one need only investigate the rank of the square matrix  H^{\sharp,0}_\theta. The transformation makes some of the rows of  U_1 H^{\sharp,0}_\theta all zeros, and the rows of  U_1 H^{\sharp,0} with an annihilated rightmost block constitute  F^{\sharp,1}. One can regroup the rows in the new tableau to get  H^{\sharp,1}. By construction, the rank of  H^{\sharp,1}_\theta can be no less than the rank of  H^{\sharp,0}_\theta.

Note that changing the order of the equations in  H^{\sharp,0}_\theta will not affect the rank of  H^{\sharp,0}_\theta or the space spanned by the nonzero rows that will enter  H^{\sharp,1}_\theta. Since  H^{\sharp,0}_\theta is not full rank, a subset of the rows of  H^{\sharp,1}_\theta will span the same space as  H^{\sharp,0}_\theta. The other rows in  H^{\sharp,1}_\theta will come from linear combinations of the original system of equations in effect "shifted forward" in time.

One can think of the "Full Rank Leading Block" matrix as the result of premultiplications of the "Initial Tableau" by a sequence of unitary matrices. Implementations of the algorithm can take advantage of the fact that the rows of the matrices repeat. The regrouping can be done by "shifting equations forward" in time in an  L \times L(\tau + 1 +\theta) version of the tableau.

Algorithm 1 presents pseudo code for computing the components of the state space transition matrix and the auxiliary initial conditions.

Algorithm 1  
Given  H=\begin{bmatrix}H_{-\tau}&\dots&H_{\theta}\end{bmatrix}, compute the unconstrained autoregression and the auxiliary initial conditions:
  1. Set  k=0,  H^{\sharp,0}=H and  Z^{\sharp,0} empty.
  2. While  H^{\sharp,k}_{\theta} is singular: pre-multiply  H^{\sharp,k} by a unitary matrix annihilating the linearly dependent rows of  H^{\sharp,k}_{\theta}; append the annihilated rows to  Z^{\sharp,k}; shift the annihilated rows forward one period to obtain  H^{\sharp,k+1}; increment  k.
  3. Return  \{ \begin{bmatrix}\mathcal{H}^k_{-\tau}&\dots&\mathcal{H}^k_{\theta}\end{bmatrix},A,\mathcal{Z}^k \}.

The algorithm terminates with:

\displaystyle \begin{bmatrix}H^{\sharp\ast}_{-\tau}&\hspace{0.25in}&\dots&\hspace{0.25in}&H^{\sharp\ast}_{\theta} \end{bmatrix} \begin{bmatrix}x_{-\tau}\\ \vdots \\ x_{\theta} \end{bmatrix} =0 (11)

with  H^{\sharp\ast}_{\theta} non singular. Let

\displaystyle \Gamma^\sharp=- (H^{\sharp\ast}_{\theta})^{-1}\begin{bmatrix}H^{\sharp\ast}_{-\tau}&\hspace{0.25in}&\dots&\hspace{0.25in}&H^{\sharp\ast}_{\theta-1} \end{bmatrix} (12)


\displaystyle x_{\theta} = \Gamma^\sharp \begin{bmatrix}x_{-\tau}\\ \vdots \\ x_{\theta-1} \end{bmatrix} (13)

This unconstrained auto-regression in  x_t provides exactly what one needs to construct the state space transition matrix.

\displaystyle A^\sharp= \begin{bmatrix}\begin{matrix}0&I \end{matrix}\\ \Gamma^\sharp \end{bmatrix} (14)

so that

\displaystyle \begin{bmatrix}x_{-\tau+1}\\ \vdots \\ x_{\theta} \end{bmatrix} = A \begin{bmatrix}x_{-\tau}\\ \vdots \\ x_{\theta-1} \end{bmatrix} (15)
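The annihilate-and-shift process can be sketched numerically. The following Python fragment (NumPy assumed) is a sketch, not the paper's implementation: an SVD stands in for the unitary annihilation step, and the helper name `shift_rows` is hypothetical. It builds  A and  Z for a hypothetical two-variable example with singular  H_\theta.

```python
import numpy as np

def shift_rows(H, L, tol=1e-9):
    """Sketch of Algorithm 1: given the L x L(tau+theta+1) coefficient matrix
    H = [H_{-tau} ... H_{theta}], repeatedly annihilate row combinations whose
    rightmost L x L block is zero and shift them forward in time, until the
    rightmost block is nonsingular.  Returns the transition matrix A (eq. 14)
    and the auxiliary initial conditions Z (eq. 4)."""
    H = np.asarray(H, dtype=float).copy()
    n = H.shape[1] - L                   # dimension of the state vector
    Z = []
    while True:
        # orthogonal transform whose trailing rows annihilate the rightmost block
        U, s, _ = np.linalg.svd(H[:, -L:])
        rank = int(np.sum(s > tol))
        if rank == L:
            break
        H = U.T @ H
        for i in range(rank, L):         # rows with numerically zero rightmost block
            Z.append(H[i, :-L].copy())   # auxiliary initial condition
            H[i, -L:] = 0.0
            H[i] = np.roll(H[i], L)      # "shift the equation forward" in time
    Gamma = -np.linalg.solve(H[:, -L:], H[:, :-L])      # eq. (12)
    A = np.vstack([np.hstack([np.zeros((n - L, L)), np.eye(n - L)]), Gamma])
    return A, np.array(Z)

# Hypothetical two-variable example, ordering (c, k):
#   k_{t+1} = 1.5 k_t   and   c_t = 2 k_t,
# so H_1 is singular and one forward shift is required.
H = np.array([[0.0, 0.0, 0.0, -1.5, 0.0, 1.0],
              [0.0, 0.0, 1.0, -2.0, 0.0, 0.0]])
A, Z = shift_rows(H, L=2)
# Z contains the shifted-out static equation c_0 = 2 k_0 as an auxiliary
# initial condition, and the largest eigenvalue of A is the root 1.5.
```

The benign rank condition of Section A.1 guarantees the loop terminates; a model violating it would leave an all-zero row that shifts forever.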

3.1.2 Asymptotic Linear Constraint Matrix:  Q

In order to compute solutions to equation 8 that converge, one must rule out explosive trajectories. Blanchard and Kahn [Blanchard & Kahn, 1980] used eigenvalue and eigenvector calculations to characterize the space in which the solutions must lie. In contrast, our approach uses an orthogonality constraint to characterize regions which the solutions must avoid.

Each left eigenvector associated with a given eigenvalue is orthogonal to each right eigenvector associated with a different eigenvalue. Since vectors in the left invariant space associated with roots outside the unit circle are orthogonal to right eigenvectors associated with roots inside the unit circle, a given state vector that is part of a convergent trajectory must be orthogonal to each of these left invariant space vectors. See theorem 4. Thus, the algorithm can exploit bi-orthogonality and a less burdensome computation of vectors spanning the left invariant space in order to rule out explosive trajectories.

Let the rows of  V span the left invariant space of  A associated with the explosive roots:

\displaystyle V A = \mathcal{M} V (16)

with  \mathcal{M} similar to the Jordan blocks of  A associated with eigenvalues greater than one in absolute value. Then trajectories satisfying equation 8 are non-explosive if and only if

\displaystyle V \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t+\theta-1} \end{bmatrix}=0 (17)

for some  t.6

Combining V and  Z^\sharp completely characterizes the space of stable solutions satisfying the linear system 8.

\displaystyle Q= \begin{bmatrix}Z^{\sharp}\\ V \end{bmatrix} (18)

The first set of equations comes from the equations in system 8 which do not appear in the transformed system 11 but must nevertheless be satisfied. The second set comes from the constraint that the solutions are non-explosive. Algorithm 2 provides pseudo code for computing  Q.

Algorithm 2  
Given  A and  Z^{\sharp,\ast}:
  1. Compute  V, the vectors spanning the left invariant space of  A associated with eigenvalues greater than one in magnitude.
  2. Set  Q:=\begin{bmatrix}Z^{\sharp,\ast}\\ V\end{bmatrix}.
  3. Return  Q.
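A minimal numerical sketch of this step follows, assuming the explosive roots are diagonalizable; a full eigendecomposition of  A^T stands in for the ordered Schur or Arnoldi computations an efficient implementation would use, and the function name is hypothetical.

```python
import numpy as np

def explosive_left_vectors(A, tol=1e-9):
    """Rows spanning the left invariant space of A for eigenvalues outside
    the unit circle (eq. 16).  Sketch only: assumes the explosive roots are
    diagonalizable; production code would use ordered Schur or Arnoldi
    (ARPACK) computations instead of a full eigendecomposition."""
    w, vl = np.linalg.eig(A.T)        # columns of vl are left eigenvectors of A
    return np.real_if_close(vl[:, np.abs(w) > 1 + tol].T)

# Companion matrix of the univariate example x_{t+1} = 2.5 x_t - x_{t-1}
# (roots 0.5 and 2), state (x_{t-1}, x_t):
A = np.array([[0.0, 1.0],
              [-1.0, 2.5]])
V = explosive_left_vectors(A)
# V A = M V with M holding the single explosive root 2
```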

3.1.3 Convergent Autoregressive Representation:  B

The first two algorithms together produce a matrix  Q characterizing constraints guaranteeing that trajectories are not explosive. See theorem 5 and corollary 2 for a proof. [Hallet & McAdam, 1999] describes how to use the matrix  Q from Section 3.1.2 to impose saddle point stability in non linear perfect foresight models. However, for linear models with unique saddle point solutions it is often useful to employ an autoregressive representation of the solution. Theorem 6 in Section A.3 provides a fully general characterization of the existence and uniqueness of a saddle point solution.

A summary for typical applications of the algorithm follows. Partition  Q= \begin{bmatrix} Q_L & Q_R \end{bmatrix} where  Q_L has  L\tau columns. When  \eta =L \theta,  Q_R is square. If  Q_R is non-singular, the system has a unique solution7

\displaystyle \begin{bmatrix}B\\ \underset{2}{B}\\ \vdots \\ \underset{\theta}{B} \end{bmatrix} = -Q_R^{-1} Q_L \quad \text{and solutions are of the form} \quad \begin{matrix}x_t=B \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t-1} \end{bmatrix},& x_{t+k}=\underset{k}{B} \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t-1} \end{bmatrix} \end{matrix} (19)

Algorithm 3 provides pseudo code for computing B.

Algorithm 3  
Given  Q, let  cnt denote the number of rows of  Q. Return

\displaystyle \begin{cases} \{Q,\infty\} & cnt < L\theta\\ \{Q,0\} & cnt > L\theta\\ \{Q,\infty\}& Q_R \text{ singular}\\ \{B=-Q_R^{-1} Q_L,1\} & \text{otherwise} \end{cases}
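The partition-and-solve step can be sketched as follows; the helper name `convergent_ar` is hypothetical, and the error cases mirror the counts in Algorithm 3 (NumPy assumed).

```python
import numpy as np

def convergent_ar(Q, L, tau, theta, tol=1e-9):
    """Sketch of Algorithm 3: partition Q = [Q_L Q_R], test existence and
    uniqueness, and solve for the convergent autoregression
    x_t = B [x_{t-tau}; ...; x_{t-1}]."""
    Q = np.atleast_2d(np.asarray(Q, dtype=float))
    QL, QR = Q[:, :L * tau], Q[:, L * tau:]
    if Q.shape[0] < L * theta:
        raise ValueError("too few constraints: infinity of convergent solutions")
    if Q.shape[0] > L * theta or np.linalg.matrix_rank(QR, tol=tol) < L * theta:
        raise ValueError("no unique convergent solution (Q_R rank deficient)")
    B_stack = -np.linalg.solve(QR, QL)   # stacks B, B_2, ..., B_theta
    return B_stack[:L]                   # the one-step matrix B

# Univariate example x_{t-1} - 2.5 x_t + x_{t+1} = 0 (roots 0.5 and 2):
# Z is empty and V = [1, -2] spans the explosive left space, so Q = [[1, -2]].
B = convergent_ar(np.array([[1.0, -2.0]]), L=1, tau=1, theta=1)
print(B)  # [[0.5]] -- the solution keeps only the stable root: x_t = 0.5 x_{t-1}
```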

3.2 Inhomogeneous Solution

Now, suppose

\displaystyle \sum_{i= - \tau}^\theta{ H_i x_{ t + i } }= \Psi{} z_{t}, t \geq0 (20)
\displaystyle \lim_{ t \rightarrow\infty} \Vert x_t\Vert < \infty (21)

3.2.1 Inhomogeneous Factor Matrices:  \phi , F

Theorem 1   Given structural model matrices,  H_i, i=-\tau,\ldots,\theta, and  \Psi, and the convergent autoregression matrix  B, there exist inhomogeneous factor matrices,  \phi and  F, such that with
\displaystyle \begin{bmatrix}B_{-\tau}&\dots&B_{-1} \end{bmatrix}=B (22)
\displaystyle \phi= \left ( H_0 + \begin{bmatrix}H_{1}\ldots H_\theta \end{bmatrix} \begin{bmatrix}B\\ \vdots\\ \mathcal{B}_{\theta} \end{bmatrix} \right )^{-1} (23)
\displaystyle x_t=\sum_{i=-\tau}^{-1} B_i x_{t+i} + \begin{bmatrix}0&\dots&0&I \end{bmatrix} \sum_{s=0}^\infty (F^{s} \begin{bmatrix}0\\ \phi\Psi{}z_{t+s} \end{bmatrix}) (24)

will satisfy the linear inhomogeneous system equation 20.

See Section A.4 for derivations and formulae. Algorithm 4 provides pseudo code for computing  \phi and  F.

Algorithm 4  
Given  H and  Q:
  1. Compute  \begin{bmatrix}B_L&B_R\\ \vdots&\vdots\\ B_L^\theta&B_R^\theta \end{bmatrix}= -Q_R^{-1} Q_L.
  2. Compute  \phi= (H_0 + H_+ \begin{bmatrix}B_R\\ \vdots\\ \mathcal{B}_{R\theta} \end{bmatrix})^{-1}, where  H_+=\begin{bmatrix}H_1&\dots&H_\theta\end{bmatrix}.
  3. Assemble the companion-form matrix  F, with identity blocks above a last block row built from the  -\phi H_+ terms given in Section A.4.
  4. Return  (\phi,F).

3.2.2 Exogenous VAR Impact Matrix,  \vartheta {}

Modelers can augment the homogeneous linear perfect foresight solutions with particular solutions characterizing the impact of exogenous vector autoregressive variables.

Theorem 2   Suppose the exogenous variables follow the vector autoregression

\displaystyle z_{t+1} = \Upsilon{} z_t (25)

Then one can show that

\displaystyle x_{t} = \begin{bmatrix}B_L& B_R \end{bmatrix} \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t-1} \end{bmatrix} + \vartheta{} z_t (26)


\displaystyle vec(\vartheta{}) = \begin{bmatrix}0&\dots&0&I \end{bmatrix} (I - \Upsilon^T\otimes{} F)^{-1} vec \begin{bmatrix}0\\ \vdots \\ 0 \\ \phi\Psi \end{bmatrix} (27)

See Section A.5 for derivations and formulae.
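For the univariate example used above (one lag, one lead, hypothetical coefficients),  \phi,  F and  \vartheta all collapse to scalars, and the claims of Theorems 1 and 2 can be checked by direct simulation. The sketch below assumes the one-lead scalar reduction  F = -\phi H_1, which can be verified by substituting the candidate solution into the structural equation.

```python
import numpy as np

# Hypothetical univariate model (L = tau = theta = 1):
#   H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = Psi z_t,   z_{t+1} = Upsilon z_t
Hm1, H0, H1, Psi, Upsilon = 1.0, -2.5, 1.0, 1.0, 0.8

B = 0.5                                  # convergent AR coefficient (stable root)
phi = 1.0 / (H0 + H1 * B)                # eq. (23), scalar case: -0.5
F = -phi * H1                            # scalar reduction of F: 0.5
vartheta = phi * Psi / (1.0 - Upsilon * F)   # scalar case of eq. (27): -5/6

# Verify Theorem 2 by simulation: x_t = B x_{t-1} + vartheta z_t (eq. 26)
# must satisfy the structural equation period by period.
T = 10
z = [Upsilon ** t for t in range(T)]
x = [0.3]                                # x_{-1}: arbitrary initial condition
for t in range(T):
    x.append(B * x[-1] + vartheta * z[t])
resid = [Hm1 * x[t] + H0 * x[t + 1] + H1 * x[t + 2] - Psi * z[t]
         for t in range(T - 2)]
# residuals are machine-precision zero
```

Note that the eigenvalues of  F are the reciprocals of the explosive roots (here  F = 0.5 = 1/2), so the forward sum in equation 24 converges.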

3.3 Implementations

This set of algorithms has been implemented in a wide variety of languages. Three implementations, a MATLAB, a "C", and a Mathematica implementation, are available from the author. Each implementation avoids using the large tableau of Figure 2. They each shift elements in the rows of a single copy of the matrix  H. Each implementation eliminates inessential lags from the autoregressive representation before constructing the state space transition matrix for invariant space calculations.

The most widely used version is written in MATLAB. The MATLAB version has a convenient modeling language front end for specifying the model equations and generating the  H_i matrices.

The "C" version, designed especially for solving large scale models is by far the fastest implementation and most frugal with memory. It uses sparse linear algebra routines from SPARSKIT[Saad, 1994] and HARWELL[Numerical Analysis Group, 1995] to economize on memory. It avoids costly eigenvector calculations by computing vectors spanning the left invariant space using ARPACK[Lehoucq et al., 1996].

For small models, one can employ a symbolic algebra version of the algorithms written in Mathematica. On present-day computers, this code can easily construct symbolic state space transition matrices and compute symbolic expressions for eigenvalues for models with 5 to 10 equations. The code can often obtain symbolic expressions for the invariant space vectors when the transition matrix is of dimension 10 or less.

4 Other Useful Rational Expectations Solution Calculations

Economists use linear rational expectations models in a wide array of applications. The following sections describe calculations which are useful for optimal policy design, model simulation and estimation exercises.

4.1 Observable Structure:  S

To compute the error term for estimation of the coefficients of these models, one must commit to a particular information set. Two typical choices are t and t-1 period expectations.

Given structural model matrices,  H_i, i=-\tau,\ldots,\theta, and convergent autoregression matrices  B_i, i=-\tau,\ldots,-1, there exists an observable structure matrix,  S

\displaystyle \epsilon_t= S \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t} \end{bmatrix} (28)

See Section A.6 for a derivation and formula for  S. Algorithm 5 provides pseudo code for computing  S for a given lag,  k^\ast, in the availability of information.

Algorithm 5  
Given  B and  k^\ast:
  1. Set  \tilde{B}= \begin{bmatrix}\begin{matrix}0&I \end{matrix}\\ B \end{bmatrix}.
  2. Set  S= \begin{bmatrix}0_{L\times L\max (0,k^\ast-1)}& H_{-\tau}&\ldots&H_0 \end{bmatrix} + \begin{bmatrix}\begin{bmatrix}H_{1}&\ldots&H_\theta \end{bmatrix} \begin{bmatrix}B\\ \vdots\\ \mathcal{B}_{\theta} \end{bmatrix} \tilde{B}^{k^\ast} & 0_{L\times L\max (0,k^\ast-1)} \end{bmatrix}.
  3. Return  S.

4.2 Stochastic Transition Matrices:  \mathcal {A}, \mathcal {B}

To compute covariances, practitioners will find it useful to construct the stochastic transition matrices  \mathcal{A} and  \mathcal{B}.

Given structural model matrices,  H_i, i=-\tau,\ldots,\theta, and convergent autoregression matrices  B_i, i=-\tau,\ldots,-1, there exist stochastic transition matrices  \mathcal{B},  \mathcal{A} such that

\displaystyle \begin{bmatrix}x_{t-\tau+1}\\ \vdots\\ x_{t} \end{bmatrix}= \mathcal{A}\begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t-1} \end{bmatrix} + \mathcal{B}\begin{bmatrix}\epsilon_{t} + \Psi (E[z_t\vert I_t]-E[z_t\vert I_{t-1}]) \end{bmatrix} (29)

See Section A.7 for derivations and formulae for  \mathcal{A} and  \mathcal{B}. Algorithm 6 provides pseudo code for computing  \mathcal{A} and  \mathcal{B}.

Algorithm 6  
Given  H,  \Psi and  S:
  1. Set  \mathcal{A}=\begin{bmatrix}0&I&&\\ \vdots&&\ddots&\\ 0&&&I\\ S_0^{-1} S_{-\tau-\max (k^\ast-1,0)+1} &\dots &\dots&S_0^{-1} S_{-1} \end{bmatrix}.
  2. Set  \mathcal{B}=\begin{bmatrix}0\\ \vdots\\ 0\\ S_0^{-1} \end{bmatrix}.
  3. Return  (\mathcal{A},\mathcal{B}).

5 Conclusions

This paper describes a set of algorithms that have proved very durable and useful for staff at the central bank. The most distinctive features of this approach are:

  • an algorithm for computing the minimal dimension state space transition matrix for models with arbitrary numbers of lags or leads,
  • the availability of a simple modeling language for characterizing a linear model and
  • the use of the QR Decomposition and Arnoldi type eigenspace calculations.

The unique combination of features makes the algorithm more efficient than the alternatives, especially for large models. Staff at the Federal Reserve have developed a large scale linear rational expectations model consisting of 421 equations with one lead and one lag. This model provides an extreme example of the speed advantage of the Anderson-Moore Algorithm (AMA). On an Intel(R) Xeon 2.80GHz CPU running Linux, the MATLAB version of AMA computes the rational expectations solution in 21 seconds while the MATLAB version of gensys, a popular alternative procedure, requires 16,953 seconds. See [Anderson, 2006] for a systematic comparison of this algorithm with the alternative procedures. The code is available for download.


Anderson, Gary., 2006.
Solving Linear Rational Expectations Models: A Horse Race.
Tech. rept. Federal Reserve System, Finance and Economics Discussion Series.
Anderson, Gary, & Moore, George., 1985.
A Linear Algebraic Procedure For Solving Linear Perfect Foresight Models.
Economics Letters, 17(3).
Blanchard, Olivier Jean, & Kahn, Charles M., 1980.
The Solution of Linear Difference Models under Rational Expectations.
Econometrica, 48(5), 1305-11.
Bomfim, Antulio., 1996.
Forecasting the Forecasts of Others: Expectational Heterogeneity and Aggregate Dynamics.
Tech. rept. 1996-41. Federal Reserve Board, Finance and Economics Discussion Series.
Edge, Rochelle M., Laubach, Thomas, & Williams, John C., 2003.
The responses of wages and prices to technology shocks.
Tech. rept. 2003-21. Federal Reserve Bank of San Francisco, Working Papers in Applied Economic Theory.
Fuhrer, Jeffrey C., 1994.
Optimal Monetary Policy and the Sacrifice Ratio.
Pages 43-69 of: Fuhrer, Jeffrey C. (ed), `Goals, Guidelines, and Constraints Facing Monetary Policymakers'.
Federal Reserve Bank of Boston Conference Series No. 38.
Fuhrer, Jeffrey C., 1996.
Monetary Policy Shifts and Long-term Interest Rates.
Quarterly Journal of Economics, 111(November), 1183-1209.
Fuhrer, Jeffrey C., 1997a.
Inflation/Output Variance Trade-offs and Optimal Monetary Policy.
Journal of Money Credit and Banking, 29, No. 2(May), 214-234.
Fuhrer, Jeffrey C., 1997b.
The (Un)Importance of Forward-Looking Behavior in Price Specifications.
Journal of Money Credit and Banking, 29, No. 3(August), 338-350.
Fuhrer, Jeffrey C., 1997c.
Towards a Compact, Empirically-Verified Rational Expectations Model for Monetary Policy Analysis.
Carnegie-Rochester Conference Series on Public Policy, forthcoming.
Fuhrer, Jeffrey C., & Hooker, Mark W., 1993.
Learning About Monetary Regime Shifts in an Overlapping Wage Contract Economy.
Journal of Economic Dynamics and Control, 17, 531-553.
Fuhrer, Jeffrey C., & Madigan, Brian., 1997.
Monetary Policy When Interest Rates are Bounded at Zero.
Review of Economics and Statistics, 79, No. 4(November), 573-585.
Fuhrer, Jeffrey C., & Moore, George R., 1995.
Forward-Looking Behavior and the Stability of a Conventional Monetary Policy Rule.
Journal of Money Credit and Banking, 27, No. 4(November), 1060-70.
Fuhrer, Jeffrey C., & Moore, George R., 1995a.
Inflation Persistence.
Quarterly Journal of Economics, 110(February), 127-159.
Fuhrer, Jeffrey C., & Moore, George R., 1995b.
Monetary Policy Trade-Offs and the Correlation Between Nominal Interest Rates and Real Output.
American Economic Review, 85(March), 219-239.
Fuhrer, Jeffrey C., Moore, George, & Schuh, Scott., 1995.
A Comparison of GMM and Maximum-Likelihood Estimates of Quadratic Cost Inventory Models.
Journal of Monetary Economics, 35(February), 115-157.
Hallet, Andrew Hughes, & McAdam, Peter (eds)., 1999.
Analyses in Macroeconomic Modelling.
Kluwer Academic Publishers.
Chap. Accelerating Non Linear Perfect Foresight Solution by Exploiting the Steady State Linearization.
Lehoucq, R.B., Sorensen, D.C., & Yang, C., 1996.
ARPACK USERS GUIDE: Solution of Large Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods.
Tech. rept. Department of Computational and Applied Mathematics Rice University.
Levin, Andrew, Williams, John, & Wieland, Volker., 1998.
Are Simple Rules Robust under Model Uncertainty?
Seminar Paper.
Luenberger, David G., 1977.
Dynamic Equations in Descriptor Form.
IEEE Transactions on Automatic Control, 312-321.
Luenberger, David G., 1978.
Time-Invariant Descriptor Systems.
Automatica, 14, 473-480.
Noble, Ben., 1969.
Applied Linear Algebra.
Prentice-Hall, Inc.
Numerical Analysis Group., 1995.
The Harwell Sparse Matrix Library Manual.
Tech. rept. The Numerical Analysis Group.
Orphanides, Athanasios., 1998.
Monetary Policy Evaluation with Noisy Information.
Tech. rept. 98-50. Finance and Economics Discussion Series.
Orphanides, Athanasios, & Wieland, Volker., 1998.
Price Stability and Monetary Policy Effectiveness when Nominal Interest Rates are Bounded at Zero.
Working Paper.
Orphanides, Athanasios, & Williams, John C., 2002.
Robust Monetary Policy Rules with Unknown Natural Rates.
Brookings Papers on Economic Activity.
Orphanides, Athanasios, Small, David, Wilcox, David, & Wieland, Volker., 1997.
A Quantitative Exploration of the Opportunistic Approach to Disinflation.
Tech. rept. 97-36. Federal Reserve Board, Finance and Economics Discussion Series.
Saad, Youcef., 1994.
SPARSKIT: A Basic Tool Kit for Sparse Matrix Computations.
Tech. rept. Computer Science Department, University of Minnesota.
Taylor, John., 1979.
Staggered Wage Setting in a Macro Model.
The American Economic Review, 69(2), 108-113.
Zagaglia, Paolo., 2002.
Matlab Implementation of the AIM Algorithm: A Beginner's Guide.
Tech. rept. 169. Universita' Politecnica delle Marche (I), Dipartimento di Economia.

A. Proofs

A.1 Unconstrained Autoregression

Theorem 3   Let
\displaystyle \mathcal{H}=\left . \begin{bmatrix}H_{-\tau}&\hspace{0.25in}&\dots&\hspace{0.25in}&H_{\theta}\\ &H_{-\tau}&\hspace{0.25in}&\dots&\hspace{0.25in}&H_{\theta}\\ &&\ddots\\ &&&H_{-\tau}&\hspace{0.25in}&\dots&\hspace{0.25in}&H_{\theta}\\ \end{bmatrix} \right \}      {\scriptstyle\tau+\theta+1} (A.1)

There are two cases:
  • When  \mathcal{H} is full rank the algorithm terminates with  Z^{\sharp\ast} (  Z^{\flat\ast}) and non-singular  H^{\sharp\ast}_{\theta} (  H^{\flat\ast}_{\tau})
  • When  \mathcal{H} is not full rank the algorithm terminates when some row of  \begin{bmatrix} \mathcal{H}^k_{-\tau}\ldots\mathcal{H}^k_\theta \end{bmatrix} is zero.
Proof Consider the case when  \mathcal{H} is full rank. Each step of the algorithm applies a rank preserving pre-multiplication by a non singular matrix. Each step of the algorithm where  H^{\sharp,k}_\theta is singular increases the row rank of  Z^{\sharp,k} by at least one. At each step the rows of  \mathcal{Z}^{\sharp,k} are of full rank. Since the rank of  Z^{\sharp,k} cannot exceed  L (\tau + \theta), the algorithm must terminate after finitely many steps with  H^{\sharp,\ast}_\theta non-singular.  \blacksquare

The following corollary indicates that models with unique steady-states always terminate with non singular  \mathcal{H}^{\sharp,\ast}_{\theta}.

Corollary 1   If  ( {\sum_{i= - \tau}^\theta { H_i}} ) is non singular then
  1.  \mathcal{H} is full rank.
  2. The origin is the unique steady state of equation 1.
  3. there exists a sequence of elementary row operations that transforms  \mathcal{H} into  \mathcal{H}^\ast
Proof Suppose  \mathcal{H} is not full rank. Then there is a non zero vector  V such that  V \mathcal{H} =0. Consequently,
\displaystyle \begin{bmatrix}V_{-\tau}\ldots V_{\theta} \end{bmatrix} \mathcal{H} \begin{bmatrix}I&\ldots&I\\ \vdots&\ldots&\vdots\\ I&\ldots&I&\\ \end{bmatrix}=0 (A.2)


\displaystyle V_i ({\sum_{j= - \tau}^\theta { H_j}})=0 \, \forall i (A.3)

So that  ( {\sum_{i= - \tau}^\theta { H_i}} ) must be singular.  \blacksquare
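A trivial numerical check of the corollary's hypothesis for the hypothetical univariate example used earlier:

```python
import numpy as np

# Corollary 1 hypothesis for x_{t-1} - 2.5 x_t + x_{t+1} = 0:
# the summed structural coefficient matrix must be nonsingular.
H_sum = np.array([[1.0]]) + np.array([[-2.5]]) + np.array([[1.0]])
nonsingular = abs(np.linalg.det(H_sum)) > 1e-12   # det = -0.5
# Hence the stacked tableau is full rank, the origin is the unique steady
# state, and the shuffle algorithm of Section 3.1.1 terminates.
```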

A.2 Asymptotic Constraints

Theorem 4   Consider a left invariant space and a right invariant space with no eigenvalues in common. Suppose  V_1 spans the left invariant space and  W_2 spans the right invariant space.
\displaystyle V_1 A = T_1 V_1 (A.4)
\displaystyle A W_2= W_2 T_2 (A.5)

with the eigenvalues of  T_1 distinct from those of  T_2. Then  V_1 W_2=0.
Proof A right eigenvector  x_i and a left eigenvector  y_j corresponding to distinct eigenvalues  \lambda_i and  \lambda_j are orthogonal [Noble, 1969]. Finite dimensional matrices have finite dimensional Jordan blocks. Raising a given matrix to a power produces a matrix with smaller Jordan blocks. Raising the matrix to a high enough power ultimately eliminates all nontrivial Jordan blocks. Consequently, the left invariant space vectors are linear combinations of the left eigenvectors and the right invariant space vectors are linear combinations of the right eigenvectors of the transition matrix raised to some finite power.
\displaystyle V_1 = \beta_{1} \begin{bmatrix}y_1\\ \vdots\\ y_J \end{bmatrix} (A.6)
\displaystyle W_2 = \begin{bmatrix}x_1& \dots &x_K \end{bmatrix} \alpha_{2} (A.7)
\displaystyle V_1 W_2 = \beta_{1} \begin{bmatrix}y_1\\ \vdots \\ y_J \end{bmatrix} \begin{bmatrix}x_1& \dots &x_K \end{bmatrix} \alpha_{2} = 0 (A.8)
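The bi-orthogonality the proof relies on is easy to verify numerically; a sketch with a hypothetical 2x2 matrix whose roots are 2 and 0.5:

```python
import numpy as np

# Left eigenvector for the explosive root is orthogonal to the right
# eigenvector for the stable root (Theorem 4), illustrated numerically.
A = np.array([[2.0, 1.0],
              [0.0, 0.5]])
w, right = np.linalg.eig(A)            # columns of `right`: right eigenvectors
wl, left = np.linalg.eig(A.T)          # columns of `left`: left eigenvectors of A
v1 = left[:, np.argmax(np.abs(wl))]    # left eigenvector for the root 2
w2 = right[:, np.argmin(np.abs(w))]    # right eigenvector for the root 0.5
# v1 @ w2 is zero up to rounding, as Theorem 4 requires
```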

Theorem 5   Let  \{x^{conv}_t\},  t= -\tau,\ldots,\infty be a non explosive solution satisfying equation 1. Let  A be the state space transition matrix for equation 1 and  V be a set of invariant space vectors spanning the invariant space associated with roots of  A of magnitude bigger than 1. Then for  t= 0,\ldots,\infty
\displaystyle V \begin{bmatrix}x^{conv}_{t-\tau}\\ \vdots\\ x^{conv}_{t+\theta-1} \end{bmatrix}=0 (A.9)

Proof Using  W, the left generalized eigenvectors of  A, one can employ the Jordan Canonical Form of A to write
\displaystyle W^H A = J W^H (A.10)

so that

\displaystyle A^t = (W^H)^{-1} J^t W^H (A.11)
\displaystyle y_t = A^t y_0 (A.12)
\displaystyle W^H y_t = J^t W^H y_0 (A.13)
\displaystyle \lim_{t\rightarrow \infty} y_t =0 \Rightarrow \lim_{t\rightarrow \infty} W^H y_t =0 (A.14)


\displaystyle W_i^H y_0 =0 \, \forall i    such that \displaystyle \vert J_{ii}\vert > 1. (A.15)

so that

\displaystyle V y_0 = \alpha W^H y_0 = 0 (A.16)

Corollary 2   Let  \{x_t\},  t= -\tau,\ldots,\infty be a solution satisfying equation 8. If  A has no roots with magnitude 1 then the path converges to the origin if and only if
\displaystyle V \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t+\theta-1} \end{bmatrix}=0 (A.17)

for some  t.
Proof Necessity follows from Theorem 5. For sufficiency, note that

\displaystyle W_i^H y_0 =0 \, \forall i    such that \displaystyle \vert J_{ii}\vert > 1 (A.18)

means that  y_t = A^t y_0 has no components associated with the explosive roots. Since  A has no roots with magnitude 1, every remaining component is associated with a root of magnitude less than 1, so  y_t converges to the origin.  \blacksquare
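A small numerical illustration of Corollary 2 (the matrix and initial condition below are made up; the transition matrix is a saddle with one explosive root and no unit roots):

```python
import numpy as np

# Sketch: with no roots of magnitude one, V y_0 = 0 is equivalent to
# convergence of y_t = A^t y_0.
A = np.diag([2.0, 0.5, 0.25])       # one explosive root, no unit roots

lam, X = np.linalg.eig(A)
Y = np.linalg.inv(X)
V = Y[np.abs(lam) > 1, :]           # left invariant space of explosive roots

y0 = np.array([0.0, 1.0, -1.0])     # chosen so that V y0 = 0
assert np.allclose(V @ y0, 0.0)

yt = np.linalg.matrix_power(A, 50) @ y0
print(np.max(np.abs(yt)))           # essentially zero: the path converges
```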


A.3 Existence and Uniqueness

Theorem 6   Identify  Q_L, Q_R from
\displaystyle Q= \begin{bmatrix}Z^{\sharp}\\ V \end{bmatrix} = \begin{bmatrix}Q_L&Q_R \end{bmatrix} (A.21)

with  Q_R an  (\eta \times L\theta) matrix,  Q_L an  (\eta \times L\tau) matrix, where  \eta represents the number of rows in the matrix  Q.

The existence of convergent solutions depends on the magnitudes of the ranks of the augmented matrix

\displaystyle r_1=rank \left (\begin{bmatrix}Q_R&-Q_L x_{data} \end{bmatrix} \right) (A.22)


\displaystyle r_2=rank(Q_R). (A.23)

By construction,  r_1 \ge r_2 and  r_2 \le L\theta. There are three cases.
  1. If  r_1 > r_2 there is no nontrivial convergent solution
  2. If  r_1 = r_2 = L \theta there is a unique convergent solution
  3. If  r_1 = r_2 < L \theta the system has an infinity of convergent solutions
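The three-way rank test above translates directly into code. This sketch uses hypothetical  Q_L,  Q_R and  x_{data} (not from the paper) to classify a system:

```python
import numpy as np

# Sketch of the rank test in Theorem 6 (hypothetical inputs).
def classify(Q_L, Q_R, x_data):
    """Classify existence/uniqueness of convergent solutions."""
    L_theta = Q_R.shape[1]
    r1 = np.linalg.matrix_rank(np.hstack([Q_R, -Q_L @ x_data]))
    r2 = np.linalg.matrix_rank(Q_R)
    if r1 > r2:
        return "no nontrivial convergent solution"
    if r2 == L_theta:
        return "unique convergent solution"
    return "infinity of convergent solutions"

Q_R = np.eye(2)                          # square and nonsingular
Q_L = np.array([[0.5, 0.0], [0.0, 0.3]])
x_data = np.array([[1.0], [2.0]])        # initial conditions (column)
print(classify(Q_L, Q_R, x_data))        # -> unique convergent solution
```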
Corollary 3   When  \eta =L \theta,  Q_R is square. If  Q_R is non-singular, the system has a unique solution

and solutions are of the form

\displaystyle \begin{bmatrix}B\\ \underset{2}{B}\\ \vdots \\ \underset{\theta}{B} \end{bmatrix} = Q_R^{-1} Q_L (A.24)
\displaystyle \begin{matrix}x_t=B \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t-1} \end{bmatrix},& x_{t+1}=\underset{2}{B} \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t-1} \end{bmatrix},&\,\,\dots\,\, x_{t+\theta}=\underset{\theta}{B} \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t-1} \end{bmatrix} \end{matrix} (A.25)

If  Q_R is singular, the system has an infinity of solutions.

When  \eta <L \theta, the system has an infinity of solutions.

When  Q has more than  L \theta rows, the system has a unique nontrivial solution only for specific values of  x_{data}.


Proof The theorem applies well known results on existence and uniqueness of solutions to linear equation systems [Noble, 1969]. If  \mathcal{M}_2=\begin{bmatrix} x_{data}\\ 0 \end{bmatrix} does not lie in the column space of  \mathcal{M}_1=\begin{bmatrix} I&0\\ Q_L&Q_R \end{bmatrix}, then there is no solution. If  \mathcal{M}_2 lies in the column space of  \mathcal{M}_1 and the latter matrix is full rank, then there is a unique solution. If  \mathcal{M}_2 lies in the column space of  \mathcal{M}_1 and the latter matrix is not full rank, then there are multiple solutions.  \blacksquare

A.4 Proof of Theorem 1

Proof Construct the  L\tau{} \times L\tau matrix:
\displaystyle \tilde{B}= \begin{bmatrix}\begin{matrix}0&I\\ \end{matrix}\\ B \end{bmatrix} (A.26)


\displaystyle \mathcal{B}_{k+1} = \mathcal{B}_{k} \tilde{B} (A.27)
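The companion matrix  \tilde{B} of equation A.26 and the recursion A.27 can be sketched as follows (the decision rule  B and the dimensions  L=1,  \tau=2 are hypothetical):

```python
import numpy as np

# Sketch of (A.26)-(A.27): build the companion matrix B~ from a
# hypothetical decision rule B (L x L*tau) and iterate the expectation
# blocks  B_{k+1} = B_k B~.
L, tau = 1, 2
B = np.array([[0.2, 0.5]])                  # x_t = 0.2 x_{t-2} + 0.5 x_{t-1}

B_tilde = np.vstack([np.hstack([np.zeros((L * (tau - 1), L)),
                                np.eye(L * (tau - 1))]),
                     B])                    # (A.26)

cal_B = [B]                                 # rules for x_{t}, x_{t+1}, ...
for _ in range(3):
    cal_B.append(cal_B[-1] @ B_tilde)       # (A.27)

print(cal_B[1])    # E[x_{t+1}|I_t] as a function of x_{t-2}, x_{t-1}
```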

Applying equation 8 to the unique convergent solution, it follows that

\displaystyle \begin{bmatrix}H_{-}&H_{0}&H_{+} \end{bmatrix} \begin{bmatrix}I\\ \begin{matrix}B\\ \vdots\\ \mathcal{B}_{\theta+1} \end{matrix} \end{bmatrix}=0 (A.28)

where

\displaystyle H_-= \begin{bmatrix}H_{-\tau}& \ldots &H_{-1} \end{bmatrix},\quad H_+= \begin{bmatrix}H_{1}& \ldots &H_{\theta} \end{bmatrix} (A.29)

Which can also be written as:
\displaystyle \begin{bmatrix}H_{-}&H_{0}&H_{+} \end{bmatrix} \begin{bmatrix}I\\ \begin{bmatrix}\begin{matrix}0&I \end{matrix}\\ B\\ \vdots\\ \mathcal{B}_{\theta} \end{bmatrix}\tilde{B} \end{bmatrix}=0 (A.30)

So that:
\displaystyle H_{-} + ( \begin{bmatrix}0&H_0 \end{bmatrix} + H_+ \begin{bmatrix}B\\ \vdots\\ \mathcal{B}_{\theta} \end{bmatrix})\tilde{B} = 0 (A.31)
\displaystyle H_{-} + ( \begin{bmatrix}0&H_0 \end{bmatrix} + H_+ \begin{bmatrix}B_L&B_R\\ \vdots&\vdots\\ \mathcal{B}_{L\theta}&\mathcal{B}_{R\theta} \end{bmatrix})\tilde{B} = 0 (A.32)

where the  B_R^k matrices are  L \times{} L.
Note that
\displaystyle \begin{bmatrix}B_L&B_R\\ \vdots&\vdots\\ \mathcal{B}_{L\theta}&\mathcal{B}_{R\theta} \end{bmatrix}\tilde{B} = \begin{bmatrix}0&B_L\\ \vdots&\vdots\\ 0&\mathcal{B}_{L\theta} \end{bmatrix}+ \begin{bmatrix}B_R\\ \vdots\\ \mathcal{B}_{R\theta} \end{bmatrix}B (A.33)

\displaystyle H_- + H_+ \begin{bmatrix}0&B_L\\ \vdots&\vdots\\ 0&\mathcal{B}_{L\theta} \end{bmatrix} + (H_0 + H_+ \begin{bmatrix}B_R\\ \vdots\\ \mathcal{B}_{R\theta} \end{bmatrix})B=0 (A.34)

\displaystyle \phi= (H_0 + H_+ \begin{bmatrix}B_R\\ \vdots\\ \mathcal{B}_{R\theta} \end{bmatrix})^{-1} (A.35)

So that

\displaystyle \phi H_- + \phi H_+ \begin{bmatrix}0&B_L\\ \vdots&\vdots\\ 0&\mathcal{B}_{L\theta} \end{bmatrix} +B = 0 (A.36)

Consider the impact that the time  t+s value  z_{t+s} has on the value of  x_{t+s}. We can write
\displaystyle \begin{bmatrix}H_{-}&H_{0}&H_{+} \end{bmatrix} \begin{bmatrix}I&&&0\\ &\ddots&&\vdots&\\ &&I&0\\ 0&\dots&0&I\\ 0&& \begin{matrix}B_L\\ \vdots\\ B_L^\theta \end{matrix}& \begin{matrix}B_R\\ \vdots\\ B_R^\theta \end{matrix} \end{bmatrix} \begin{bmatrix}x_{t+s-\tau}\\ \vdots\\ x_{t+s} \end{bmatrix}= \Psi z_{t+s} (A.37)

or equivalently,
\begin{gather*}\begin{split}\phi\Psi z_{t+s}=&\\ &\phi (\begin{bmatrix}H_-&0 \end{bmatrix} + \begin{bmatrix}0&H_0 \end{bmatrix} + \\ & \begin{bmatrix}H_+ \begin{bmatrix}0&B_L\\ \vdots&\vdots\\ 0&\mathcal{B}_{L\theta} \end{bmatrix}& H_+\begin{bmatrix}B_R\\ \vdots\\ \mathcal{B}_{R\theta} \end{bmatrix} \end{bmatrix}) \begin{bmatrix}x_{t+s-\tau}\\ \vdots\\ x_{t+s} \end{bmatrix} \end{split}\end{gather*} (A.38)

\begin{gather*}\begin{split}\phi\Psi z_{t+s}=&\\ &\phi (\begin{bmatrix}H_-&0 \end{bmatrix} + \\ & \begin{bmatrix}H_+ \begin{bmatrix}0&B_L\\ \vdots&\vdots\\ 0&\mathcal{B}_{L\theta} \end{bmatrix}& 0 \end{bmatrix}) \begin{bmatrix}x_{t+s-\tau}\\ \vdots\\ x_{t+s} \end{bmatrix}+\\ &\phi ( \begin{bmatrix}0&H_0 \end{bmatrix} + \\ & \begin{bmatrix}0& H_+\begin{bmatrix}B_R\\ \vdots\\ \mathcal{B}_{R\theta} \end{bmatrix} \end{bmatrix}) \begin{bmatrix}x_{t+s-\tau}\\ \vdots\\ x_{t+s} \end{bmatrix} \end{split}\end{gather*} (A.39)

Which by equations A.35 and A.36 can be written
\begin{gather*}\begin{split}\phi\Psi z_{t+s}=&\\ &\begin{bmatrix}-B&0\end{bmatrix} \begin{bmatrix}x_{t+s-\tau}\\ \vdots\\ x_{t+s} \end{bmatrix}+\\ & \begin{bmatrix}0&I \end{bmatrix} \begin{bmatrix}x_{t+s-\tau}\\ \vdots\\ x_{t+s} \end{bmatrix} \end{split}\end{gather*} (A.40)

So we have
\displaystyle \begin{bmatrix}-B&I \end{bmatrix} \begin{bmatrix}x_{t+s-\tau}\\ \vdots\\ x_{t+s} \end{bmatrix}= \phi\Psi z_{t+s} (A.41)
\displaystyle x_{t+s} = B \begin{bmatrix}x_{t+s-\tau}\\ \vdots\\ x_{t+s-1} \end{bmatrix}+\phi\Psi z_{t+s} (A.42)

Now consider the impact of  z_{t+s} on  x_{t+s-1}. We can write
\begin{multline*} \phi (\begin{bmatrix} H_-&0 \end{bmatrix} + \begin{bmatrix} 0&H_0 \end{bmatrix} + \begin{bmatrix} H_+ \begin{bmatrix} 0&B_L\\ \vdots&\vdots\\ 0&\mathcal{B}_{L\theta} \end{bmatrix}& H_+\begin{bmatrix} B_R\\ \vdots\\ \mathcal{B}_{R\theta} \end{bmatrix} \end{bmatrix}) \begin{bmatrix} x_{t+s-\tau-1}\\ \vdots\\ x_{t+s-1} \end{bmatrix}+ \\ \phi H_+\begin{bmatrix} I\\ B_R\\ \vdots\\ \mathcal{B}_{R\theta-1} \end{bmatrix} \phi \Psi z_{t+s}=0 \end{multline*}

where the last term captures the impact  z_{t+s} has on the values of  x at time  t+s and later. Using equations A.35 and A.36 we can write
\displaystyle \begin{bmatrix}-B&I \end{bmatrix} \begin{bmatrix}x_{t+s-\tau-1}\\ \vdots\\ x_{t+s-1} \end{bmatrix}+ \phi H_+\begin{bmatrix}I\\ B_R\\ \vdots\\ \mathcal{B}_{R\theta-1} \end{bmatrix} \phi\Psi z_{t+s}=0 (A.43)
\displaystyle x_{t+s-1} = B \begin{bmatrix}x_{t+s-\tau-1}\\ \vdots\\ x_{t+s-2} \end{bmatrix} -\phi H_+\begin{bmatrix}I\\ B_R\\ \vdots\\ \mathcal{B}_{R\theta-1} \end{bmatrix} \phi\Psi z_{t+s} (A.44)

and more generally

\displaystyle x_{t+s-i} = B \begin{bmatrix}x_{t+s-\tau-i}\\ \vdots\\ x_{t+s-i-1} \end{bmatrix} + (-\phi H_+\begin{bmatrix}I\\ B_R\\ \vdots\\ \mathcal{B}_{R\theta-1} \end{bmatrix})^{i} \phi\Psi z_{t+s} (A.45)

To accommodate lagged expectations, suppose that information on all the endogenous variables becomes available with the same lag in time:  \exists k^\ast    such that  x_{t-k} \in I_t, \forall k \ge k^\ast. Then set
\displaystyle \begin{bmatrix}E [x_{t+1} \vert I_t] \\ \vdots \\ E [x_{t+\theta}\vert I_t] \end{bmatrix} = \begin{bmatrix}B\\ \vdots\\ \mathcal{B}_{\theta} \end{bmatrix} \tilde{B}^{k^\ast} \begin{bmatrix}x^{data}_{t-\tau+1-k^\ast}\\ \vdots \\ x^{data}_{t-k^\ast} \end{bmatrix} + (A.46)
\displaystyle \begin{bmatrix}\sum_{s=0}^\infty ( (-\phi H_+\begin{bmatrix}I\\ B_R\\ \vdots\\ \mathcal{B}_{R\theta-1} \end{bmatrix})^{s} \phi\Psi z_{t+s+1})\\ \vdots\\ \sum_{s=0}^\infty ( (-\phi H_+\begin{bmatrix}I\\ B_R\\ \vdots\\ \mathcal{B}_{R\theta-1} \end{bmatrix})^{s} \phi\Psi z_{t+s+\theta}) \end{bmatrix} (A.47)

So that
\begin{displaymath}\begin{split}x_{t} = &\\ &\begin{bmatrix}B_L& B_R \end{bmatrix} \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t-1} \end{bmatrix} + \\ & \begin{bmatrix}0&\dots 0&I \end{bmatrix} \sum_{s=0}^\infty ( F^{s} \begin{bmatrix}0\\ \phi \Psi z_{t+s} \end{bmatrix}) \end{split}\end{displaymath}    

\displaystyle B= \begin{bmatrix}B_L&B_R\\ \vdots&\vdots\\ B_L^\theta&B_R^\theta \end{bmatrix} = Q_R^{-1} Q_L (A.48)

\displaystyle \phi= (H_0 + H_+ \begin{bmatrix}B_R\\ \vdots\\ \mathcal{B}_{R\theta} \end{bmatrix})^{-1} (A.49)
\displaystyle F= \begin{bmatrix}0&I\\ \vdots&&\ddots\\ 0&&&I\\ -\phi H_+\begin{bmatrix}0\\ \vdots \\ 0\\ I \end{bmatrix}& -\phi H_+\begin{bmatrix}0\\ \vdots\\ I\\ B_R \end{bmatrix}&\ldots& -\phi H_+\begin{bmatrix}I\\ B_R\\ \vdots\\ \mathcal{B}_{R\theta-1} \end{bmatrix} \end{bmatrix} (A.50)

\displaystyle x_{t} = B \begin{bmatrix}x_{t-\tau}\\ \vdots\\ x_{t-1} \end{bmatrix} +\sum_{s=0}^\infty ( (-\phi H_+\begin{bmatrix}I\\ B_R\\ \vdots\\ \mathcal{B}_{R\theta-1} \end{bmatrix})^{s} \phi\Psi z_{t+s}) (A.51)


A.5 Proof of Theorem 2

\displaystyle \sum_{s=0}^\infty (F^{s} \begin{bmatrix}0\\ \phi\Psi{}z_{t+s} \end{bmatrix}) =\sum_{s=0}^\infty ( F^s \begin{bmatrix}0\\ \phi \Psi \end{bmatrix} \Upsilon^{s} )z_{t}=\vartheta{} z_t (A.52)
\displaystyle vec(\vartheta{}) = \begin{bmatrix}0&\dots &0&I \end{bmatrix} (I - \Upsilon^T\otimes{} F)^{-1} vec \begin{bmatrix}0\\ \vdots \\ 0 \\ \phi\Psi \end{bmatrix} (A.53)
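The vec/Kronecker formula can be implemented directly. The sketch below uses made-up  F,  \Upsilon and a forcing block  C (standing in for the stacked  \phi\Psi term, without the selector matrix), solves the linear system, and checks the result against a truncated version of the infinite sum:

```python
import numpy as np

# Sketch: the discounted sum  M = sum_s F^s C Upsilon^s  satisfies
# M = C + F M Upsilon, so vec(M) = (I - Upsilon^T kron F)^{-1} vec(C),
# with vec the column-stacking operator.  Both spectral radii are below
# one, so the sum converges.
F = np.array([[0.5, 0.1], [0.0, 0.4]])
Ups = np.array([[0.3, 0.0], [0.2, 0.6]])
C = np.eye(2)

vecM = np.linalg.solve(np.eye(4) - np.kron(Ups.T, F),
                       C.flatten(order="F"))
M = vecM.reshape(2, 2, order="F")

# Check against a truncated infinite sum.
M_trunc = sum(np.linalg.matrix_power(F, s) @ C @ np.linalg.matrix_power(Ups, s)
              for s in range(200))
print(np.max(np.abs(M - M_trunc)))   # negligible truncation error
```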


A.6 Observable Structure

Since one can write

\displaystyle \epsilon_t = \begin{bmatrix}H_{-\tau}\ldots H_0 \end{bmatrix} \begin{bmatrix}x^{data}_{t-\tau}\\ \vdots \\ x^{data}_{t} \end{bmatrix} + \begin{bmatrix}H_{1}\ldots H_\theta \end{bmatrix} \begin{bmatrix}E [x_{t+1} \vert I_t] \\ \vdots \\ E [x_{t+\theta}\vert I_t] \end{bmatrix} - \Psi z_t (A.54)

We find that
\displaystyle \epsilon_t= S \begin{bmatrix}x^{data}_{t-\tau+1-\max (1,k^\ast)}\\ \vdots \\ x^{data}_{t} \end{bmatrix} - \Psi z_t (A.55)


\displaystyle S= \begin{bmatrix}0_{L\times L\max (0,k^\ast-1)}& H_{-\tau}&\ldots&H_0 \end{bmatrix} + (A.56)
\displaystyle \begin{bmatrix}\begin{bmatrix}H_{1}\ldots H_\theta \end{bmatrix} \begin{bmatrix}B\\ \vdots\\ \mathcal{B}_{\theta} \end{bmatrix} \tilde{B}^{k^\ast} & 0_{L\times L\max (0,k^\ast-1)} \end{bmatrix} (A.57)

Note that for  k^\ast \ge 1

\displaystyle \frac{\partial \epsilon_t}{\partial x^{data}_t} = H_0 (A.58)

and for  k^\ast = 0
\displaystyle \frac{\partial \epsilon_t}{\partial x^{data}_t} = H_0 + \begin{bmatrix}H_{1}\ldots H_\theta \end{bmatrix} \begin{bmatrix}B\\ \vdots\\ \mathcal{B}_{\theta} \end{bmatrix} = \phi^{-1} (A.59)

A.7 Stochastic Transition Matrices

One can write

\displaystyle \begin{bmatrix}x_{t-\tau-\max (k^\ast-1,0)+1}\\ \vdots \\ x_{t} \end{bmatrix}= \mathcal{A}\begin{bmatrix}x_{t-\tau-\max (k^\ast-1,0)}\\ \vdots \\ x_{t-1} \end{bmatrix} + \mathcal{B}\epsilon_t (A.60)
\displaystyle \mathcal{A}= \begin{bmatrix}0 &I&&\\ \vdots&&\ddots&\\ 0&&&I\\ S_0^{-1} S_{-\tau-\max (k^\ast-1,0)+1} &\dots &\dots&S_0^{-1} S_{-1} \end{bmatrix} (A.61)
\displaystyle \mathcal{B}= \begin{bmatrix}0\\ \vdots \\ 0 \\ S_0^{-1} \end{bmatrix} (A.62)
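Assembling the companion-form pair  (\mathcal{A}, \mathcal{B}) is mechanical. This sketch uses hypothetical scalar blocks ( L=1,  \tau=2,  k^\ast=0), with the  S_i entering  \mathcal{A} exactly as displayed above:

```python
import numpy as np

# Sketch of (A.60)-(A.62) with made-up scalar observable-structure blocks.
S0 = np.array([[1.0]])       # block multiplying x_t; must be invertible
S_m2 = np.array([[0.2]])     # lag block on x_{t-2} (sign per the text)
S_m1 = np.array([[0.5]])     # lag block on x_{t-1} (sign per the text)

S0_inv = np.linalg.inv(S0)
calA = np.vstack([np.hstack([np.zeros((1, 1)), np.eye(1)]),
                  np.hstack([S0_inv @ S_m2, S0_inv @ S_m1])])
calB = np.vstack([np.zeros((1, 1)), S0_inv])     # (A.62)

state = np.array([[1.0], [1.0]])                 # [x_{t-2}; x_{t-1}]
eps = np.array([[0.1]])
nxt = calA @ state + calB @ eps                  # [x_{t-1}; x_t]
print(nxt)
```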

B. An Example

This section applies algorithms 1-3 to the following model with  N=2 [Taylor, 1979].

\displaystyle w_t=\frac{1}{N}E_t[\sum_{i=0}^{N-1}W_{t+i}] - \alpha (u_t-u_n) + \nu_t (B.1)
\displaystyle W_t=\frac{1}{N}\sum_{i=0}^{N-1}w_{t-i} (B.2)
\displaystyle u_t=\vartheta u_{t-1}+\gamma W_t +\mu+\epsilon_t (B.3)
\displaystyle E_t[\nu_{t+i}]=E_t[\epsilon_{t+i}]=0 \,\forall i\ge0 (B.4)

The initial matrix  H^{\sharp,0}, for the example model with variable order  \{ \epsilon,\nu,u,w,W\} is

\displaystyle H^{\sharp,0}= \begin{bmatrix}0 &0 &0 &0 &0 &0 &-1 &\alpha & 1 &-{\frac{1}{2}} &0 &0 &0 &0 & -{\frac{1}{2}}\\ 0 &0 &0 &-{\frac{1}{2}} &0 &0 &0 &0 &-{\frac{1}{2}} &1 &0 &0 &0 &0 &0\\ 0 &0 &-\vartheta &0 &0 &-1 &0 &1 & 0 &-\gamma &0 &0 &0 &0 &0\\ 0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 &0\\ 0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &0 &0 &0 &0 &0 \end{bmatrix}    
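The matrix above can be assembled programmatically. The sketch below builds  H^{\sharp,0} row by row from equations B.1-B.3 (the parameter values for  \alpha,  \gamma,  \vartheta are illustrative; constants such as  u_n and  \mu are dropped, as in the display):

```python
import numpy as np

# Sketch: assemble H^{#,0} for the example model.  Variable order is
# {eps, nu, u, w, W}; tau = theta = 1, so there are 3 blocks of 5 columns
# (t-1, t, t+1).  Parameter values are made up.
alpha, gamma, vtheta = 0.5, 0.1, 0.9
H = np.zeros((5, 15))
# B.1: w_t = (1/2)E_t[W_t + W_{t+1}] - alpha*u_t + nu_t
H[0, [6, 7, 8, 9, 14]] = [-1.0, alpha, 1.0, -0.5, -0.5]
# B.2: W_t = (1/2)(w_t + w_{t-1})
H[1, [3, 8, 9]] = [-0.5, -0.5, 1.0]
# B.3: u_t = vtheta*u_{t-1} + gamma*W_t + eps_t
H[2, [2, 5, 7, 9]] = [-vtheta, -1.0, 1.0, -gamma]
# B.4: the time-t shocks are innovations
H[3, 5] = 1.0
H[4, 6] = 1.0
print(H.shape)   # (5, 15)
```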

We would like to express the time  t+1 variables in terms of the time  t and  t-1 variables. If the sub-matrix corresponding to the  t+1 variables were nonsingular, we could immediately write:
\displaystyle \begin{bmatrix}{{\epsilon}_{t + 1}} \\ {{\nu}_{t + 1}} \\ {u_{t + 1}} \\ {w_{t + 1}} \\ {W_{t + 1}} \end{bmatrix} = (H^{\sharp,0}_\theta)^{-1} \begin{bmatrix}H^{\sharp,0}_{-1} & H^{\sharp,0}_0 \end{bmatrix} \begin{bmatrix}{{\epsilon}_{t - 1}} \\ {{\nu}_{t - 1}} \\ {u_{t - 1}} \\ {w_{t - 1}} \\ {W_{t - 1}}\\ {{\epsilon}_t} \\ {{\nu}_t} \\ {u_t} \\ {w_t} \\ {W_ t} \end{bmatrix}    

Since  (H^{\sharp,0}_\theta) is singular, we use equations dated subsequent to the present time period to construct a set of linear constraints whose leading block is nonsingular.

\displaystyle H^{\sharp,\star}= \begin{bmatrix}0 &0 &0 &0 &0 &0 &0 &0 & -{\frac{1}{2}} &0 &0 &0 &0 &-{\frac{1}{2}} &1\\ 0 &0 &0 &0 &0 &0 &0 &-\vartheta & 0 &0 &-1 &0 &1 &0 &-\gamma\\ \nonumber 0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &0\\ 0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0\\ \nonumber 0 &0 &0 &0 &0 &0 &-1 &\alpha & 1 &-{\frac{1}{2}} &0 &0 &0 &0 & -{\frac{1}{2}} \nonumber \end{bmatrix}    

So that

A= \begin{bmatrix}0 &0 &0 &0 &0 &1 &0 &0 &0 &0\\ 0 &0 &0 &0 &0 &0 &1 &0 &0 &0\\ 0 &0 &0 &0 &0 &0 &0 &1 &0 &0\\ 0 &0 &0 &0 &0 &0 &0 &0 &1 &0\\ 0 &0 &0 &0 &0 &0 &0 &0 &0 &1\\ 0 &0 &0 &0 &0 &0 &0 &0 &0 &0\\ 0 &0 &0 &0 &0 &0 &0 &0 &0 &0\\ 0 &0 &0 &0 &0 &0 &-2\gamma &\rho &2\gamma &-\gamma\\ 0 &0 &0 &0 &0 &0 &-4 &4\alpha & 3 &-2\\ 0 &0 &0 &0 &0 &0 &-2 &2\alpha & 2 &-1 \end{bmatrix}
\rho= \vartheta + 2 \gamma \alpha

For the example model

\displaystyle Z^\sharp= \begin{bmatrix}0 &0 &0 &-{\frac{1}{2}} &0 &0 &0 &0 &-{\frac{1}{2}} &1\\ 0 &0 &-\vartheta &0 &0 &-1 &0 &1 & 0 &-\gamma\\ 0 &0 &0 &0 &0 &1 &0 &0 &0 &0\\ 0 &0 &0 &0 &0 &0 &1 &0 &0 &0 \end{bmatrix}    

and

\displaystyle Q= \begin{bmatrix}0 &0 &0 &-{\frac{1}{2}} &0 &0 &0 &0 &-{\frac{1}{2}} &1\\ 0 &0 &-\vartheta &0 &0 &-1 &0 &1 & 0 &-\gamma\\ 0 &0 &0 &0 &0 &1 &0 &0 &0 &0\\ 0 &0 &0 &0 &0 &0 &1 &0 &0 &0\\ 0 &0 &{{\phi} _4} &0 &0 &0 &2 & {{\phi} _5} &{{\phi} _6} &1 \end{bmatrix}    

where the  \phi's are

\displaystyle \phi_7=1 + \gamma\,{{\phi}_5} + 2\,{{\phi}_6}    
\displaystyle \phi_6={\frac{-2\,\left( 18\,\alpha\,\gamma + {{\left( -1 + \rho \right) }^2} + \left( 2 + \rho \right) \,{{{{\phi}_2}}^{{\frac{1}{3}}}} + {{{{\phi}_2}}^{{\frac{2}{3}}}} \right) }{{{\phi}_3}}}    
\displaystyle {\scriptstyle \phi_5=-{\frac{-1 - 3\,\alpha\,\gamma + 3\,\rho + 21\,\alpha\,\gamma\,\rho - 3\,{{\rho}^2} + {{\rho}^3} + {{\phi}_1} + {\frac{108\,{{\alpha}^2}\,{{\gamma}^2} + {{\left( -1 + \rho \right) }^4} + 6\,\alpha\,\gamma\,\left( -1 + \rho \right) \, \left( 1 + 5\,\rho \right) + 2\,\left( -1 + \rho \right) \,{{\phi}_1}}{{{{{\phi}_2}}^ {{\frac{1}{3}}}}}} + \left( 12\,\alpha\,\gamma + {{\left( -1 + \rho \right) }^2} \right) \, {{{{\phi}_2}}^{{\frac{1}{3}}}}}{\gamma\,{{\phi}_3}}}}    
\displaystyle \phi_4=-{\frac{3\,\alpha\,\gamma\,\left( 216\,{{\alpha}^2}\,{{\gamma}^2} - 4\,{{\left( -1 + \rho \right) }^3} + 9\,\alpha\,\gamma\,\left( 1 + \left( -14 + \rho \right) \,\rho \right) \right) + {{{{\phi}_1}}^2}}{\gamma\, {{{{\phi}_2}}^{{\frac{2}{3}}}}\,{{\phi}_3}}}    
\displaystyle \phi_3=18\,\alpha\,\gamma + {{\left( -1 + \rho \right) }^2} + \left( 5 + \rho \right) \,{{{{\phi}_2}}^{{\frac{1}{3}}}} + {{{{\phi}_2}}^{{\frac{2}{3}}}}    
\displaystyle \phi_2={{\left( -1 + \rho \right) }^3} + 27\,\alpha\,\gamma\,\left( 1 + \rho \right) + 3\,{{\phi}_1}    
\displaystyle \phi_1={\sqrt{-\left( \alpha\,\gamma\,\left( 216\,{{\alpha}^2}\,{{\gamma}^2} - 4\,{{\left( -1 + \rho \right) }^3} + 9\,\alpha\,\gamma\,\left( 1 - 14\,\rho + {{\rho}^2} \right) \right) \right) }}    

\displaystyle B= \frac{1}{\phi_7} \begin{bmatrix}0 &0 &0 &0 &0\\ 0 &0 &0 &0 &0\\ 0 &0 &\vartheta - \gamma{{\phi} _4} + 2\vartheta{{\phi} _6} & \gamma{{\phi} _6} &0\\ 0 &0 &-2\left( {{\phi} _4} + \vartheta{{\phi} _5} \right) &-1 - \gamma{{\phi} _5} &0\\ 0 &0 &-{{\phi} _4} - \vartheta{{\phi} _5} &{{\phi} _6} &0 \end{bmatrix}    


1. First, I would like to thank George Moore, my now deceased mentor, friend and coauthor of [Anderson & Moore, 1985]. I also wish to thank Brian Madigan, Robert Tetlow, Andrew Levin, Jeff Fuhrer and Hoyt Bleakley for helpful comments. I am responsible for any remaining errors. The views expressed herein are mine and do not necessarily represent the views of the Board of Governors of the Federal Reserve System. Return to Text
3. At the Board, economists commonly refer to this family of algorithms as the AIM algorithm. A metaphor relating our approach to the "shooting method" inspired the name. Return to Text
4. Note that there is no unique steady state requirement. Steady state solutions,  x^\ast satisfying
\displaystyle \sum_{i= - \tau}^\theta( H_i ) x^\ast= 0 (7)

lie in a linear subspace of  \mathcal{R}^L. We will develop conditions that guarantee solutions that evolve from a given set of initial conditions to a single point in this subspace. As a result, one can apply these routines to models with unit roots, seasonal factors, cointegrating vectors and error correction terms. Return to Text
5. The original algorithmic description and software implementation of these algorithms developed homogeneous solutions. Researchers obtained solutions for models with inhomogeneous systems by adding an equation of the form  \mathbf{i}_{t} = \mathbf{i}_{t-1} with initial condition  \mathbf{i}_{t-1}=1 to the system. Return to Text
6. If  A has roots with magnitude 1 then trajectories can converge to either a limit cycle or a non-zero fixed point. Otherwise, non-explosive trajectories will converge to the origin. Return to Text
7. If  Q_R is singular, the system has an infinity of solutions. When  \eta <L \theta, the system has an infinity of solutions.

When  Q has more than  L \theta rows, the system has a unique nontrivial solution only for specific values of  x_{data}. Return to Text
