
The Great Inflation of the 1970s

Fabrice Collard and Harris Dellas

International Finance Discussion Papers numbers 797-807 were presented on November 14-15, 2003 at the second conference of the International Research Forum on Monetary Policy, sponsored by the European Central Bank, the Federal Reserve Board, the Center for German and European Studies at Georgetown University, and the Center for Financial Studies at the Goethe University in Frankfurt.

NOTE: International Finance Discussion Papers are preliminary materials circulated to stimulate discussion and critical comment. The views in this paper are solely the responsibility of the authors and should not be interpreted as reflecting the views of the Board of Governors of the Federal Reserve System or any other person associated with the Federal Reserve System. References in publications to International Finance Discussion Papers (other than an acknowledgment that the writer has had access to unpublished material) should be cleared with the author or authors. Recent IFDPs are available on the Web. This paper can be downloaded without charge from the Social Science Research Network electronic library.


The two leading explanations for the poor inflation performance during the 1970s are policy opportunism (Barro and Gordon, 1982) and ''inadvertently'' bad monetary policy (Clarida, Gali and Gertler, 2000, Orphanides, 2003). In this paper we show that models of the latter category not only can account for high and persistent inflation but also have satisfactory overall performance. Moreover, both the Orphanides thesis (that loose monetary policy was the outcome of mis-perceptions about potential output rather than of inflation tolerance) and the Clarida, Gali and Gertler one (that weak policy reaction to expected inflation led to indeterminacies) are consistent with the data, as long as there was a very large decrease in productivity at the time. Our results suggest that the assumption of policy opportunism is not essential for understanding the inflation experience of the 70s.

Keywords: Inflation, imperfect information, learning, monetary policy rule, indeterminacy

JEL Classification: E32, E52


During the 1970s, the inflation rate in the US reached its 20th-century peak, with levels exceeding 10%. The causes of this ''great'' inflation remain the subject of considerable academic debate. Broadly speaking, the proposed explanations fall into two categories. The first claims that the high inflation was due to the lack of proper incentives on the part of policymakers, who chose to accept (or even induce) high inflation in order to prevent a recession (the inflation bias suggested by Barro and Gordon, 1982; see also Ireland, 1999). The second claims that it may have been the result of the honest mistakes of a well-meaning central bank. The latter category can be further subdivided into explanations that emphasize either bad luck under significant imperfect information, or bad luck together with technical, inadvertent errors in policy design.

According to the latter view, the FED inadvertently committed a ''technical'' error by implementing an interest rate rule in which nominal interest rates were moved less than one-for-one with expected inflation (Clarida, Gali and Gertler, 2000). The resulting decrease in real interest rates fuelled inflation, inducing instability (indeterminacy) in the economy and exaggerating inflation movements. The implication of this view is that adoption of the standard Henderson-McKibbin-Taylor (HMT) rule would have prevented the persistent surge in inflation.

The bad luck under imperfect information view claims that loose monetary policy and inflation reflected an unavoidable mistake on the part of a monetary authority whose tolerance of inflation did not differ significantly from that commonly attributed to the authorities in the 80s and 90s. Orphanides (2003) has argued that the large decrease in actual output following the persistent downward shift in potential output was interpreted as a decrease in the output gap. It led to expansionary monetary policy that exaggerated the inflationary impact of the decrease in potential output. Eventually, and after a long delay, the FED realized that potential output growth was lower and adjusted policy to bring inflation down. Imperfect information about the substantial productivity slowdown, rather than tolerance of inflation, played the critical role in the inflation process.

All these theories seem plausible. Identifying the most empirically relevant one has not been an easy task. A subset of the literature has tackled the issue of the contribution of policy to inflation directly, by examining whether monetary actions can be captured by a policy rule, and if so, what the properties of such a rule are. Relying on single equation estimation, Clarida, Gali and Gertler, 2000, claim that the FED indeed followed an interest rate rule during the 1970s, but that the rule contained a weak reaction to inflation that led to indeterminacies. Orphanides, 2001, disputes this claim. Using real time data, he too documents the existence of a rule, but he finds no significant difference between pre and post Volcker tolerance of inflation. Lubik and Schorfheide, 2003, estimate a small new Keynesian model (without learning, though, on the part of the monetary authorities) and arrive at results similar to those of Clarida, Gali and Gertler. According to their estimated model, post-1982 U.S. monetary policy is consistent with determinacy, whereas the pre-Volcker policy is not. Nelson and Nikolov, 2002, estimate a similar small scale model for the UK and find that both output gap mis-measurement and a weak policy response to inflation played an important role, although the weak reaction to inflation does not seem to have encouraged multiple equilibria.

A second subset of the literature again uses a small scale model but imposes --rather than estimates-- a policy rule. Lansing, 2001, finds that a specification with sufficiently large reaction to inflation is consistent with the patterns of inflation and output observed during the 1970s.

Finally, a third subset of the empirical literature has investigated the events of the 70s within the context of calibrated, stochastic general equilibrium models. Christiano and Gust, 1999, argue that the new Keynesian model cannot replicate that experience, while a limited participation model with indeterminacy can (they do not address the role of imperfect information, though). Cukierman and Lippi, 2002, demonstrate how, within a backward looking version of the Keynesian model, imperfect information leads to serially correlated forecast errors and loose monetary policy. Bullard and Eusepi, 2003, argue that a persistent increase in inflation can obtain in the new Keynesian model, even when policy responds strongly to inflation, when the policymakers learn gradually about changes in trend productivity. Finally, in related work which, however, looks at the disinflation of the 80s, Erceg and Levin, 2003, argue that the disinflation experience can be accounted for by a shift in the inflation target of the FED with the public only gradually learning about the policy regime switch.

Our objective in this paper is twofold. First, to examine whether explanations based on rules -as opposed to discretion- are consistent with the macroeconomic performance of the 70s. We emphasize overall macroeconomic performance because we find attempts to validate particular theories based solely on the behavior of inflation too narrow. And second, to undertake a direct comparison of the two leading explanations from this group (Orphanides vs Clarida, Gali and Gertler). This is an important task, as the two explanations carry dramatically different implications for inflation scenarios in the future. If the Orphanides view is correct, then strong reaction to expected inflation is not sufficient to prevent bad inflation outcomes. The experience of the 70s can be repeated. If the Clarida, Gali and Gertler view is correct, then inflation is likely to remain tame as long as the central bank reacts sufficiently strongly to expected inflation.

We address these questions within the New Keynesian (NK) model. We ask whether and under what conditions the NK model with policy commitment can replicate the evolution of inflation following a severe, persistent slowdown in the rate of productivity growth. And if yes, whether the model also meets additional fitness criteria.

We first examine whether the model can generate a ''great inflation'' under the assumption that the HMT policy rule pursued at the time did not differ from that commonly attributed to the ''Volcker-Greenspan'' FED (see Clarida, Gali and Gertler, 2000, Orphanides, 2001). We find that this is the case if the productivity slowdown is very large and there exists a high degree of imperfect information. Imperfect information introduces stickiness in inflation forecasts, making the expected inflation ''gap'' (the deviation of expected from target inflation) small. The underestimation of the inflation gap leads to weak policy reaction even when the inflation reaction coefficient is large. We also find that the overall macroeconomic performance of this model is good, with two exceptions: the predicted recession is too severe, and the required shock is very large.

We then examine the performance of the model under HMT rules that allow for indeterminacy (following Clarida, Gali and Gertler, CGG hereafter) due to a small reaction coefficient on inflation. Some of these rules have good properties: They generate inflation persistence and realistic overall macroeconomic volatility. Their main weakness, though, is that they also generate too severe a recession.

Our conclusion from these exercises is that the data support the view that the FED did not react to inflation developments in the 70s strongly enough, in the sense that it did not raise nominal interest rates sufficiently. Thus policy contributed to higher inflation. Nevertheless, this behavior may not have arisen from policy opportunism; an inappropriate policy rule would have sufficed. It is difficult, though, to identify the source of the weak reaction. Interestingly, our analysis also suggests that output stabilization motives may not have played as important a role in the great inflation as commonly assumed.

The remainder of the paper is organized as follows. Section 1 presents the model. Section 2 discusses the calibration. Section 3 presents the main results. An appendix describes the mechanics of the solution to the model under imperfect information and learning based on the Kalman filter.

1  The model

The set up is the standard New Keynesian model. The economy is populated by a large number of identical infinitely-lived households and consists of two sectors: one producing intermediate goods and the other a final good. The intermediate good is produced with capital and labor and the final good with intermediate goods. The final good is homogeneous and can be used for consumption (private and public) and investment purposes.

1.1  The household

Household preferences are characterized by the lifetime utility function:

$\displaystyle \sum_{\tau=0}^{\infty}E_{t}\beta^{\tau} U\left( C_{t+\tau},\frac{M_{t+\tau} }{P_{t+\tau}},\ell_{t+\tau}\right)$ (1)

where $ 0<\beta<1$ is a constant discount factor, $ C$ denotes the domestic consumption bundle, $ M/P$ is real balances and $ \ell$ is the quantity of leisure enjoyed by the representative household. The utility function, $ U\left( C,\frac{M}{P},\ell\right) :\mathbb{R}_{+}\times \mathbb{R}_{+}\times\lbrack0,1]\longrightarrow\mathbb{R}$ is increasing and concave in its arguments.

The household is subject to the following time constraint

$\displaystyle \ell_{t}+h_{t}=1$ (2)

where $ h$ denotes hours worked. The total time endowment is normalized to unity.

In each and every period, the representative household faces a budget constraint of the form

$\displaystyle B_{t+1}+M_{t}+P_{t}(C_{t}+I_{t}+T_{t})$ $\displaystyle \leq R_{t-1}B_{t}+M_{t-1}+N_{t} +\Pi_{t}+P_{t}W_{t}h_{t}+P_{t}z_{t}K_{t}$ (3)

where $ W_{t}$ is the real wage; $ P_{t}$ is the nominal price of the final good; $ C_{t}$ is consumption and $ I_{t}$ is investment expenditure; $ K_{t}$ is the amount of physical capital owned by the household and leased to the firms at the real rental rate $ z_{t}$. $ M_{t-1}$ is the amount of money that the household brings into period $ t$, and $ M_{t}$ is the end-of-period $ t$ money holdings. $ N_{t}$ is a nominal lump-sum transfer received from the monetary authority; $ T_{t}$ is the lump-sum tax paid to the government and used to finance government consumption.

Capital accumulates according to the law of motion

$\displaystyle K_{t+1}=I_{t}-\frac{\varphi}{2}\left( \frac{I_{t}}{K_{t}}-\delta\right) ^{2} K_{t}+(1-\delta)K_{t}$ (4)

where $ \delta\in[0,1]$ denotes the rate of depreciation. The second term captures the existence of capital adjustment costs. $ \varphi>0$ is the capital adjustment costs parameter.
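As a rough numerical sketch of the law of motion (4) (the function name is ours; parameter defaults are taken from the calibration section below):

```python
def next_capital(K, I, delta=0.025, phi=1.0):
    """Capital law of motion (4) with quadratic adjustment costs."""
    adjustment_cost = 0.5 * phi * (I / K - delta) ** 2 * K
    return I - adjustment_cost + (1 - delta) * K
```

Note that when investment just covers depreciation ($ I/K=\delta$), the adjustment cost vanishes and the capital stock is unchanged.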

The household determines her consumption/savings, money holdings and leisure plans by maximizing her utility (1) subject to the time constraint (2), the budget constraint (3) and taking the evolution of physical capital (4) into account.
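The text does not display the household's first-order conditions. For reference, the standard optimality conditions for bonds, leisure and money in this setup (omitting the capital margin, which involves the adjustment cost terms in (4)) take the form

$\displaystyle U_{C,t}=\beta E_{t}\left[ U_{C,t+1}\frac{R_{t}P_{t}}{P_{t+1}}\right] ,\qquad U_{\ell,t}=W_{t}U_{C,t},\qquad U_{M/P,t}=\frac{R_{t}-1}{R_{t}}U_{C,t} $

so that bonds are priced by the usual Euler equation, leisure is valued at the real wage, and the marginal utility of real balances equals the opportunity cost of holding money.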

1.2  Final goods sector

The final good is produced by combining intermediate goods. This process is described by the following CES function

$\displaystyle Y_{t}=\left( \int_{0}^{1}X_{t}(i)^{\theta}\mbox{d}i\right) ^{\frac{1} {\theta}}$ (5)

where $ \theta\in(-\infty,1)$. $ \theta$ determines the elasticity of substitution between the various inputs. The producers in this sector are assumed to behave competitively and to determine their demand for each good, $ X_{t}(i)$, $ i\in(0,1)$ by maximizing the static profit equation
$\displaystyle \max_{\{X_{t}(i)\}_{i\in(0,1)}}P_{t}Y_{t}-\int_{0}^{1}P_{t}(i)X_{t} (i)$ d$\displaystyle i$ (6)

subject to (5), where $ P_{t}(i)$ denotes the price of intermediate good $ i$. This yields demand functions of the form:
$\displaystyle X_{t}(i)=\left( \frac{P_{t}(i)}{P_{t}}\right) ^{\frac{1}{\theta-1}}Y_{t}$     for $\displaystyle i\in(0,1)$ (7)

and the following general price index
$\displaystyle P_{t}=\left( \int_{0}^{1}P_{t}(i)^{\frac{\theta}{\theta-1}}\mbox{d}i\right) ^{\frac{\theta-1}{\theta}}$ (8)

The final good may be used for consumption -- private or public -- and investment purposes.
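A quick numerical check of (5), (7) and (8), discretizing the continuum of goods (function names are ours): the demand functions and the price index together imply that final-good producers break even, $ \int_{0}^{1}P_{t}(i)X_{t}(i)\mbox{d}i=P_{t}Y_{t}$.

```python
def demand(P_i, P, Y, theta=0.85):
    """Demand for intermediate good i, eq. (7)."""
    return (P_i / P) ** (1.0 / (theta - 1.0)) * Y

def price_index(prices, theta=0.85):
    """Aggregate price index, eq. (8), with the integral discretized as a mean."""
    e = theta / (theta - 1.0)
    return (sum(p ** e for p in prices) / len(prices)) ** (1.0 / e)
```

Plugging the demands (7) back into the cost of the intermediate bundle reproduces $ P_{t}Y_{t}$ exactly, which is the zero-profit condition implicit in perfect competition in the final goods sector.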

1.3  Intermediate goods producers

Each firm $ i$, $ i\in(0,1)$, produces an intermediate good by means of capital and labor according to a constant returns-to-scale technology, represented by the Cobb-Douglas production function

$\displaystyle X_{t}(i)=A_{t}K_{t}(i)^{\alpha}h_{t}(i)^{1-\alpha}$ with $\displaystyle \alpha \in(0,1)$ (9)

where $ K_{t}(i)$ and $ h_{t}(i)$ respectively denote the physical capital and the labor input used by firm $ i$ in the production process. $ A_{t}$ is an exogenous stationary stochastic technology shock, whose properties will be defined later. Assuming that each firm $ i$ operates under perfect competition in the input markets, the firm determines its production plan so as to minimize its total cost
$\displaystyle \min_{\{K_{t}(i),h_{t}(i)\}}P_{t}W_{t}h_{t}(i)+P_{t}z_{t}K_{t}(i) $
subject to (9). This leads to the following expression for total costs:
$\displaystyle P_{t}S_{t}X_{t}(i) $
where the real marginal cost, $ S$, is given by $ \frac{W_{t}^{1-\alpha} z_{t}^{\alpha}}{\chi A_{t}}\mbox{ with }\chi=\alpha^{\alpha}(1-\alpha )^{1-\alpha}$
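The marginal cost expression can be sketched numerically (the function name is ours; $ \alpha$ defaults to the calibrated value reported below):

```python
def real_marginal_cost(W, z, A, alpha=0.281):
    """Real marginal cost S = W^(1-a) * z^a / (chi * A), chi = a^a * (1-a)^(1-a)."""
    chi = alpha ** alpha * (1.0 - alpha) ** (1.0 - alpha)
    return W ** (1.0 - alpha) * z ** alpha / (chi * A)
```

Marginal cost is homogeneous of degree one in factor prices and falls one-for-one with the level of technology $ A$, which is why a productivity slowdown puts direct upward pressure on firms' pricing.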

Intermediate goods producers are monopolistically competitive, and therefore set prices for the good they produce. We follow Calvo, 1983, in assuming that firms set their prices for a stochastic number of periods. In each and every period, a firm either gets the chance to adjust its price (an event occurring with probability $ \gamma$) or it does not. In order to maintain long term money neutrality (in the absence of monetary frictions) we also assume that the price set by the firm grows at the steady state rate of inflation. Hence, if a firm $ i$ does not reset its price, the latter is given by $ P_{t} (i)=\overline{\pi}P_{t-1}(i)$. A firm $ i$ sets its price, $ \widetilde{p} _{t}(i)$, in period $ t$ in order to maximize its discounted profit flow:

$\displaystyle \max_{\widetilde{p}_{t}(i)}\widetilde{\Pi}_{t}(i)+E_{t}\sum_{\tau=1}^{\infty}\Phi_{t+\tau}(1-\gamma)^{\tau-1}\left( \gamma\widetilde{\Pi}_{t+\tau }(i)+(1-\gamma)\Pi_{t+\tau}(i)\right) $
subject to the total demand it faces
$\displaystyle X_{t}(i)=\left( \frac{P_{t}(i)}{P_{t}}\right) ^{\frac{1}{\theta-1}}Y_{t}$
and where $ \widetilde{\Pi}_{t+\tau}(i)=(\widetilde{p}_{t+\tau}(i)-P_{t+\tau }S_{t+\tau})X_{t+\tau}(i)$ is the profit attained when the price is reset, while $ \Pi_{t+\tau}(i)=(\overline{\pi}^{\tau}\widetilde{p}_{t}(i)-P_{t+\tau} S_{t+\tau})X_{t+\tau}(i)$ is the profit attained when the price is maintained. $ \Phi_{t+\tau}$ is an appropriate discount factor related to the way the household values future as opposed to current consumption. This leads to the price setting equation

$\displaystyle \widetilde{p}_{t}(i)=\frac{1}{\theta}\frac{E_{t}\sum_{\tau=0}^{\infty}\left[ (1-\gamma)\overline{\pi}^{\frac{1}{\theta-1}}\right] ^{\tau}\Phi_{t+\tau}P_{t+\tau}^{\frac{\theta-2}{\theta-1}}S_{t+\tau}Y_{t+\tau}}{E_{t}\sum_{\tau=0}^{\infty}\left[ (1-\gamma)\overline{\pi}^{\frac{\theta}{\theta-1}}\right] ^{\tau}\Phi_{t+\tau}P_{t+\tau}^{-\frac{1}{\theta-1}}Y_{t+\tau}}$                                                                 (10)

Since the price setting scheme is independent of any firm specific characteristic, all firms that reset their prices will choose the same price.

In each period, a fraction $ \gamma$ of contracts ends, so there are $ \gamma(1-\gamma)$ contracts surviving from period $ t-1$, and therefore $ \gamma(1-\gamma)^{j}$ from period $ t-j$. Hence, from (8), the aggregate intermediate price index is given by

$\displaystyle P_{t}=\left( \sum_{i=0}^{\infty}\gamma(1-\gamma)^{i}\left( \overline{\pi}^{i}\widetilde{p}_{t-i}\right) ^{\frac{\theta}{\theta-1} }\right) ^{\frac{\theta-1}{\theta}}$ (11)
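Equation (11) admits a convenient recursive form, $ P_{t}^{\frac{\theta}{\theta-1}}=\gamma\widetilde{p}_{t}^{\frac{\theta}{\theta-1}}+(1-\gamma)\left( \overline{\pi}P_{t-1}\right) ^{\frac{\theta}{\theta-1}}$, which can be sketched as follows (function name and defaults ours, with parameter values from the calibration):

```python
def aggregate_price(p_reset, P_prev, gamma=0.25, theta=0.85, pi_bar=1.012):
    """Recursive Calvo price index: a fraction gamma resets to p_reset,
    the rest carry over last period's price indexed by steady-state inflation."""
    e = theta / (theta - 1.0)  # exponent from eq. (8); negative since theta < 1
    return (gamma * p_reset ** e + (1.0 - gamma) * (pi_bar * P_prev) ** e) ** (1.0 / e)
```

When resetting firms simply keep up with trend inflation ($ \widetilde{p}_{t}=\overline{\pi}P_{t-1}$), the index grows at exactly $ \overline{\pi}$, consistent with the long-run neutrality built into the indexation assumption.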

1.4  The monetary authorities

We assume that monetary policy is conducted according to a standard HMT rule. Namely,

$\displaystyle \widehat{R}_{t}=\rho\widehat{R}_{t-1}+ (1-\rho)[k_{\pi}E_{t}(\widehat{\pi }_{t+1} - \pi) + k_{y}( \widehat{y}_{t} -y^{\star}_{t})]$
where $ \widehat{\pi}_{t}$ and $ \widehat{y}_{t}$ are inflation and actual output respectively, and $ \pi$ and $ y_{t}^{\star}$ are the inflation and output targets respectively. The output target is set equal to potential output and the inflation target to the steady state rate of inflation. Potential output is defined to be the level of output that corresponds to the flexible price equilibrium of our model. It is assumed that it is not observable and that the monetary authorities must learn about changes in it gradually. The learning process is described in the appendix.

There exists disagreement in the literature regarding the empirically relevant values of $ k_{\pi}$ and $ k_{y}$ for the 1970s. Clarida, Gali and Gertler, 2000, claim that the pre-Volcker HMT monetary rule involved a policy response to inflation that was too weak. Namely, that $ k_{\pi}<1$, which led to real indeterminacies and excessive inflation. They estimate the triplet $ \{\rho,k_{\pi},k_{y}\}=\{0.75,0.8,0.4\}$. Orphanides, 2001, disputes this claim. He argues that the reaction to -- expected -- inflation was broadly similar in the pre and post-Volcker periods, but the reaction to output was stronger in the earlier period. In particular, using real time data, he estimates $ \{\rho,k_{\pi},k_{y}\}=\{0.75,1.6,0.6\}$.

We investigate the consequences of using alternative values for $ k_{\pi}$ and $ k_{y}$ in order to shed some light on the role of policy preferences relative to that of the degree of imperfect information for the behavior of inflation.
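In log-deviation form the rule above can be sketched as follows (a hypothetical helper, not the authors' code; the two dictionaries collect the parametrizations just discussed):

```python
def hmt_rule(R_prev, expected_infl_gap, output_gap, rho=0.75, k_pi=1.5, k_y=0.5):
    """Partial-adjustment HMT rule; all arguments are deviations from target."""
    return rho * R_prev + (1.0 - rho) * (k_pi * expected_infl_gap + k_y * output_gap)

# Alternative parametrizations from the literature:
CGG_1970S = dict(rho=0.75, k_pi=0.8, k_y=0.4)         # Clarida, Gali and Gertler, 2000
ORPHANIDES_1970S = dict(rho=0.75, k_pi=1.6, k_y=0.6)  # Orphanides, 2001, real-time data
```

For the same perceived inflation gap, the CGG parametrization moves the nominal rate by roughly half as much as the benchmark rule, which is the sense in which it is ''too weak''.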

1.5  The government

The government finances government expenditure on the domestic final good using lump sum taxes. The stationary component of government expenditures is assumed to follow an exogenous stochastic process, whose properties will be defined later.

1.6  The equilibrium

We now turn to the description of the equilibrium of the economy.

Definition 1   An equilibrium of this economy is a sequence of prices $ \{\mathcal{P} _{t}\}_{t=0}^{\infty}=\{W_{t}, z_{t}, P_{t},R_{t},$ $ P_{t}(i),i\in (0,1)\}_{t=0}^{\infty}$ and a sequence of quantities $ \{\mathcal{Q} _{t}\}_{t=0}^{\infty}=\{\{\mathcal{Q}^{H}_{t}\}_{t=0}^{\infty},\{\mathcal{Q} ^{F}_{t}\}_{t=0}^{\infty}\}$ with
$\displaystyle \{\mathcal{Q}^{H}_{t}\}_{t=0}^{\infty}$ $\displaystyle =\{C_{t}, I_{t}, B_{t},K_{t+1} ,h_{t},M_{t}\}_{t=0}^{\infty}$    
$\displaystyle \{\mathcal{Q}^{F}_{t}\}_{t=0}^{\infty}$ $\displaystyle =\{Y_{t},X_{t}(i), K_{t}(i),h_{t}(i); i\in(0,1)\}_{t=0}^{\infty}$    

such that:
given a sequence of prices $ \{\mathcal{P}_{t} \}_{t=0}^{\infty}$ and a sequence of shocks, $ \{\mathcal{Q}_{t}^{H} \}_{t=0}^{\infty}$ is a solution to the representative household's problem;
given a sequence of prices $ \{\mathcal{P}_{t} \}_{t=0}^{\infty}$ and a sequence of shocks, $ \{\mathcal{Q}_{t}^{F} \}_{t=0}^{\infty}$ is a solution to the representative firms' problem;
given a sequence of quantities $ \{\mathcal{Q} _{t}\}_{t=0}^{\infty}$ and a sequence of shocks, $ \{\mathcal{P}_{t} \}_{t=0}^{\infty}$ clears the markets
$\displaystyle Y_{t}$ $\displaystyle =C_{t}+I_{t}+G_{t}$ (12)
$\displaystyle h_{t}$ $\displaystyle =\int_{0}^{1} h_{t}(i)$d$\displaystyle i$ (13)
$\displaystyle K_{t}$ $\displaystyle =\int_{0}^{1} K_{t}(i)$d$\displaystyle i$ (14)
$\displaystyle G_{t}$ $\displaystyle =T_{t}$ (15)

and the money market.
Prices satisfy (10) and (11).

2  Parametrization

The model is parameterized on US quarterly data for the period 1960:1-1999:4. The data are taken from the Federal Reserve Database. The parameters are reported in table 1.

$ \beta$, the discount factor, is set such that households discount the future at a 4% annual rate, implying that $ \beta$ equals 0.988. The instantaneous utility function takes the form

$\displaystyle U\left( C_{t},\frac{M_{t}}{P_{t}},\ell_{t}\right) =\frac{1}{1-\sigma}\left[ \left( \left( C_{t}^{\eta}+\zeta\left( \frac{M_{t}}{P_{t}}\right) ^{\eta}\right) ^{\frac{\nu}{\eta}}\ell_{t}^{1-\nu}\right) ^{1-\sigma}-1\right] $
where $ \zeta$ captures the preference for money holdings of the household. $ \sigma$, the coefficient governing risk aversion, is set equal to 1.5. $ \nu$ is set such that the model generates a total fraction of time devoted to market activities of 31%. $ \eta$ is borrowed from Chari et al. (2000), who estimated it on postwar US data (-1.56). The value of $ \zeta$, 0.0649, is selected such that the model mimics the average ratio of M1 money to nominal consumption expenditures.

$ \gamma$, the probability of price resetting, is set in the benchmark case at 0.25, implying that the average length of price contracts is about 4 quarters. The nominal growth of the economy, $ \mu$, is set such that the average quarterly rate of inflation over the period is $ \overline{\pi}=1.2\%$ per quarter. The quarterly depreciation rate, $ \delta$, is set equal to 0.025. $ \theta$ in the benchmark case is set such that the level of the markup in the steady state is 15%. $ \alpha$, the elasticity of the production function with respect to physical capital, is set such that the model reproduces the US labor share -- defined as the ratio of labor compensation over GDP -- over the sample period (0.575).

The evolution of technology is assumed to contain two components: one capturing deterministic growth and the other stochastic fluctuations. The stochastic component, $ a_{t}=\log(A_{t}/A)$, is assumed to follow a stationary AR(1) process of the form

$\displaystyle a_{t} = \rho_{a} a_{t-1}+\varepsilon_{a,t} $
with $ \vert\rho_{a}\vert<1$ and $ \varepsilon_{a,t} \leadsto\mathcal{N}(0,\sigma _{a}^{2})$. We set $ \rho_{a} =0.95$ and $ \sigma_{a}=0.008$.
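A minimal simulation of this technology process (function name ours; seeded only for reproducibility):

```python
import random

def simulate_technology(T, rho_a=0.95, sigma_a=0.008, seed=1973):
    """Simulate a_t = rho_a * a_{t-1} + eps_t, with eps_t ~ N(0, sigma_a^2)."""
    rng = random.Random(seed)
    a = [0.0]
    for _ in range(T - 1):
        a.append(rho_a * a[-1] + rng.gauss(0.0, sigma_a))
    return a
```

With $ \rho_{a}=0.95$ a shock has a half-life of about 13 quarters, so even a one-off innovation keeps the level of technology away from trend for years.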

Alternative descriptions of the productivity process may be equally plausible. For instance, productivity growth may have followed a deterministic trend that permanently shifted downward in the late 60s to early 70s. In our model, this would mean that the FED learns about the trend in productivity rather than about the current level of the -- temporary -- shock to productivity. We are unsure about how our results would be affected by using an alternative process, but, given the state of the art in this area, we do not think that it is possible to identify the productivity process with any degree of confidence.

Table 1: Calibration, benchmark case

Preferences
Discount Factor $ \beta$ 0.988
Relative risk aversion $ \sigma$ 1.500
Parameter of CES in utility function $ \eta$ -1.560
Weight of money in the utility function $ \zeta$ 0.065
CES weight in utility function $ \nu$ 0.344
Technology
Capital elasticity of intermediate output $ \alpha$ 0.281
Capital adjustment costs parameter $ \varphi$ 1.000
Depreciation rate $ \delta$ 0.025
Parameter of markup $ \theta$ 0.850
Probability of price resetting $ \gamma$ 0.250
Shocks and policy parameters
Persistence of technology shock $ \rho_{a}$ 0.950
Standard deviation of technology shock $ \sigma_{a}$ 0.008
Persistence of government spending shock $ \rho_{g}$ 0.970
Volatility of government spending shock $ \sigma_{g}$ 0.020
Government share $ g/y$ 0.200
Nominal growth $ \mu$ 1.012

The government spending shock is assumed to follow an AR(1) process

$\displaystyle \log(g_{t})=\rho_{g}\log(g_{t-1})+(1-\rho_{g})\log(\overline{g})+\varepsilon _{g,t} $
with $ \vert\rho_{g}\vert<1$ and $ \varepsilon_{g,t}\sim\mathcal{N}(0,\sigma_{g}^{2})$. The persistence parameter, $ \rho_{g}$, is set to 0.97 and the standard deviation of innovations is $ \sigma_{g}=0.02$. The government spending to output ratio is set to 0.20.

An important feature of our analysis is that the policymakers have imperfect knowledge about the true state of the economy. In particular, we assume that both actual and potential output are observed with noise. The observed measure of potential output can be written as

$\displaystyle y_{t}^{\star}=y_{t}^{\textsc{p}}+\xi_{t} $
where $ y_{t}^{\textsc{p}}$ denotes true potential output and $ \xi_{t}$ is a noisy process that satisfies:
$ E(\xi_{t})=0$ for all $ t$;
$ E(\xi_{t}\varepsilon_{a,t})=E(\xi_{t}\varepsilon _{g,t})=0$;
\begin{displaymath} E(\xi_{t}\xi_{k})=\left\{ \begin{array}[c]{ll} \sigma_{\xi}^{2} & \mbox{ if } t=k\\ 0 & \mbox{ otherwise} \end{array}\right. \end{displaymath}

In order to facilitate the interpretation of $ \sigma_{\xi}$ we set its value in relation to the volatility of the technology shock. More precisely, we define $ \varsigma$ as $ \varsigma=\sigma_{\xi}/\sigma_{a}$. Different values were assigned to $ \varsigma$ in order to gauge the effects of imperfect information in the model.
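The full learning problem is described in the appendix. To see how $ \varsigma$ governs the speed of learning, consider a stripped-down scalar Kalman filter tracking an AR(1) state observed with noise of standard deviation $ \varsigma\sigma_{a}$ (a sketch under our own simplifying assumptions, not the appendix algorithm; names ours):

```python
def kalman_step(x_hat, P, y, rho=0.95, sigma_a=0.008, varsigma=8.0):
    """One predict/update step for the state x_t = rho * x_{t-1} + eps_t,
    observed as y_t = x_t + xi_t with std(xi) = varsigma * sigma_a."""
    R = (varsigma * sigma_a) ** 2           # observation noise variance
    x_pred = rho * x_hat                    # predicted state
    P_pred = rho ** 2 * P + sigma_a ** 2    # predicted state variance
    K = P_pred / (P_pred + R)               # Kalman gain: small when varsigma is large
    x_new = x_pred + K * (y - x_pred)       # belief update
    return x_new, (1.0 - K) * P_pred
```

With $ \varsigma=8$ the gain is tiny, so beliefs about the productivity shock are revised only sluggishly after a large negative realization; this sluggishness is the source of the sticky inflation forecasts discussed in the results section.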

3  The results

The model is first log-linearized around the deterministic steady state and then solved according to the method outlined in the appendix.

We start by assuming the standard specification for the HMT rule, namely, $ \rho=0.75$, $ k_{\pi}=1.5$ and $ k_{y}=0.5$ (hereafter we denote $ \Theta =\{\rho,k_{\pi},k_{y}\}$), and vary the degree of uncertainty -- the quality of the signal -- about potential output. The objective of this exercise is to determine i) whether a policy reaction function of the type commonly attributed to the FED during the 80s and 90s is consistent with high and persistent inflation of the type observed in the 70s; and ii) the role played by imperfect information. This exercise may then prove useful for determining whether the great inflation can be attributed mostly to bad luck and incomplete information (as Orphanides, 2001, 2003, has argued); or to an insufficiently aggressive reaction to inflation developments -- a low $ k_{\pi}$, as emphasized by Clarida, Gali and Gertler, 2000; or to an inherent inflation bias, as emphasized by Ireland, 1999.

We report two sets of statistics. The volatility of H-P filtered actual output, annualized inflation and investment. And the impulse response functions (IRF) of actual output and inflation following a negative technology shock for the perfect information model (Perf. Info.), the imperfect information model with $ \varsigma=1$ (Imp. Info. (I)) and $ \varsigma=8$ (Imp. Info. (II)). The IRF for the inflation rate is annualized and expressed in percentage points. The actual rate of inflation following a shock is simply found by adding the response reported in the IRF to the steady state value ( $ \overline{\pi}$=4.8%).

There exists considerable uncertainty about the (type and) size of the shock that triggered the productivity slowdown of the 70s. We do not take a position on this. We proceed by selecting a value for the supply shock that can generate a large and persistent increase in the inflation rate under at least one of the informational assumptions considered. By large, we mean an increase in the inflation rate of the order of 5-7 percentage points, implying that the maximum rate of inflation obtained during that period is about 10%-12%. We then feed a series of shocks that include this value for the first quarter of 1973 into our model and generate the other statistics described above.

Figure 1 reports the IRFs in the case of a standard HMT rule. The model can produce a large and persistent increase in the inflation rate if two conditions are met: The shock is very large (of the order of 33%) and the degree of imperfect information is very high (say, $ \varsigma=8$). Moreover, table 3 indicates that the model can generate a realistic degree of macroeconomic volatility in the case of a high degree of imperfect information. For instance, the volatility of output, investment and inflation in the case $ \gamma=0.25$ (4 quarters contracts) and $ \varsigma=8$ (Imp. Info (II)) are 1.820%, 6.736% and 0.619% respectively, to be compared to 1.639%, 7.271% and 0.778% in the data. The model fails, though, in its prediction of the maximal effect on output following such a shock. In particular, the maximal predicted effect is -19.812% which seems implausibly high (table 2). On the other hand, the performance of the model under perfect information is bad. The increase in inflation is quite small, output and investment volatility is too large and inflation volatility too low and the maximal effects are even higher.

Imperfect information is critical for the ability of the model to generate a persistent increase in inflation as well as sufficient volatility following a persistent supply shock. When the variance of the noise is large, much of the change in actual inflation is attributed to cyclical rather than ''core'' developments. This means that estimated future inflation -- and hence the inflation ''gap'' -- is sticky, i.e., it does not move much with the current shocks and actual inflation (see Figure 2). Imperfect information introduces a serially correlated error term in the Phillips curve, whose size and persistence depend on the size of $ k_{\pi}$ and the speed of learning. As a result, the policy reaction to a perceived small inflation gap proves too weak even if $ k_{\pi}$ is large, resulting in countercyclical policy. The real interest rate is decreased significantly (see Figure 3), fuelling inflation while smoothing output. As long as the inflation forecast error is persistent (as will be the case for a persistent shock and slow learning) the increase in actual inflation will be persistent too. This requirement does not seem to pose a problem for the model, as the magnitude of the predicted gap between actual and expected inflation seems to be in line with that observed in the 70s.

The choice of the inflation variable that enters the policy rule plays an important role. The argument above has suggested that the source of the persistence in inflation is the stickiness of expected inflation. Were the FED to react to current or past actual inflation relative to target then inflation would be contained more quickly. In this case, however, the model would behave less satisfactorily. Inflation volatility would be further away from that in the data, output volatility would be exaggerated and the maximal effect on output would be even higher. Thus, excessive policymaker optimism about the future inflation path plays an important role.

The strength of the stabilization motive (the coefficient $ k_{y}$) does not play an important role in the analysis. We have repeated the analysis under $ k_{y}$=0.2 and $ k_{y}$=0.7 with almost identical results (Figure 4 and Table 4). This is a comforting finding, because it is difficult to justify differences in stabilization motives between the pre- and post-1980 policymakers. Differences in luck and information are much less controversial.

The model does not perform as well with a lower $ k_{\pi}$ (lower panels of Figure 4 and Table 4). In this case it is difficult both to match volatility and to generate the appropriate inflation dynamics: if the model matches volatility well, it exaggerates the increase in inflation.

Increasing the degree of price flexibility (say, from $ \gamma=0.25$ to $ \gamma=1/3$) does not alter the basic picture but improves things somewhat. A smaller shock is now required, inflation volatility moves closer to that in the data and the maximal effect on output is reduced. At the same time, inflation persistence is somewhat reduced.

We have run a large number of experiments involving this HMT rule and alternative values of the other parameters of the model without changing overall model performance. To summarize our main results: the NK model under the standard HMT policy rule and imperfect information can generate plausible inflation dynamics and good overall fit in the face of a very substantial productivity slowdown and expected-inflation-gap targeting. Nonetheless, this specification has some weaknesses, namely the requirement of a very large shock and the prediction of a very severe recession.

We now turn to specifications in which policy is conducted in a way that destabilizes rather than constrains inflation (as suggested by Clarida, Gali and Gertler, 2000). We have investigated the properties of the model under the policy rule parametrization suggested by CGG, namely, $ \rho_{r}=0.75,\kappa_{\pi}=0.80,\kappa_{y}=0.40$. Such a rule leads to real indeterminacy. This specification can generate a large, persistent increase in inflation (see Figure 5), but the associated response of output is implausible and macroeconomic volatility is too low (Tables 5 and 6). An important feature of this specification is that real indeterminacy introduces an additional source of uncertainty related to a sunspot shock that affects beliefs. We assume that the sunspot shock is purely extrinsic and is therefore uncorrelated with any fundamental shock. Since we have no information that would allow us to calibrate this shock, we have explored several cases. In the first, the volatility of the sunspot shock is set to 0. In this case, the model overestimates output volatility but significantly underestimates the volatilities of investment, consumption and inflation. The same holds when the sunspot volatility is set at the same level as that of the technology shock. When the sunspot shock is calibrated so that the model matches inflation volatility, the implied standard deviation of output is overestimated by almost 40%. The same obtains when the sunspot is calibrated to match investment volatility, and the problem is greatly magnified when the sunspot is used to mimic the volatility of the nominal interest rate.15 Nonetheless, we have encountered more successful policy specifications within the range of indeterminate equilibria. Figure 6 and Tables 7 and 8 correspond to such a case, with $ \rho_{r}=0.75,\kappa_{\pi}=1.20,\kappa_{y}=0.80$. As can be seen, this specification performs fairly well.
The model has little difficulty producing high and persistent inflation and accounts for volatility fairly well (though it underestimates investment volatility). If it has an Achilles heel, it is the excessive reaction of output (Figure 6), a weakness that it shares with the imperfect-information version under the standard HMT rule. Hence, the main advantage of this specification may be that it works even with a much smaller shock.

How can we explain the similarity in the results under the two specifications of the policy rule? The reaction of the nominal interest rate to inflation is the product of the inflation reaction coefficient and the estimated inflation ''gap''. High and persistent inflation can occur following a productivity slowdown either because the reaction coefficient is low (the Clarida-Gali-Gertler scenario of bad policy) or because the estimated inflation gap to which policy is reacting is low (the Orphanides scenario of imperfect information). This reasoning indicates that there may be a serious difficulty in identifying the policy rule. The difference between the results of CGG and Orphanides, who rely on different informational assumptions (actual vs. real-time data), can be explained using this argument.

Before concluding, let us point out that there is a widespread belief that the great inflation did not actually start in the early 70s but rather in the mid-60s. In our model a series of unperceived negative supply shocks, culminating with an oil shock in 1973 --that was misperceived as temporary-- can reproduce the upward trend as well as the spike in the inflation series16.

4  Conclusions

Inflation in the US reached high levels during the 1970s, to a large extent due to what proved to be excessively loose monetary policy. There exist two conflicting views concerning the conduct of policy at that time. One sees it as reflecting opportunistic (discretionary) behavior on the part of the FED (Barro and Gordon, 1983). According to this view, the problem of inflation arises from poorly designed institutions, and the only way to prevent inflationary episodes in the future is to create institutions that provide the ''right'' incentives to the policymakers.

The other view attributes looseness to inadvertent policy mistakes committed by a central bank that follows a rule. Such mistakes can arise even when the central bank is sufficiently averse to inflation, due to imperfect information about the true state of the economy (Orphanides, 2003), or when the bank does not fully understand the properties of the rule it uses (Clarida, Gali and Gertler, 2000). The recommended solution in these cases is to improve the technical aspects of policymaking, that is, to adopt better rules, allow for imperfect information and so on.

Our analysis has established that policy opportunism is not necessary for obtaining persistently bad inflation outcomes; and that, conditional on accepting the occurrence of a very large supply shock, the two rule-based explanations represent empirically compelling scenarios. Nevertheless, the information contained in the data does not suffice to discriminate conclusively between them. Additional horse races are needed. Although Lubik and Schorfheide, 2003, argue that the data support a policy specification with indeterminacy over one with determinacy (for the 70s), their model does not include the key elements emphasized by Orphanides. We are currently investigating this issue using the Lubik and Schorfheide methodology while also incorporating learning on the part of the policymakers. Whether this approach will break the observational equivalence between the competing theories remains an open issue.


References

Barro, Robert and David Gordon, 1983, ''Rules, Discretion and Reputation in a Model of Monetary Policy'', Journal of Monetary Economics, 12(1), 101-121.

Bils, Mark and Peter Klenow, 2002, ''Some Evidence on the Importance of Sticky Prices,'' NBER wp #9069.

Bullard, James and Stefano Eusepi, 2003, ''Did the Great Inflation Occur Despite Policymaker Commitment to a Taylor Rule,'' Federal Reserve Bank of Atlanta, October, WP 2003-20.

Clarida, Richard, Jordi Gali, and Mark Gertler, 2000, ''Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory'', Quarterly Journal of Economics, 115(1), 147-180.

Christiano, Larry and Christopher Gust, 1999, ''The Great Inflation of the 1970s'', mimeo.

Cukierman, Alex and Francesco Lippi, 2002, '' Endogenous Monetary Policy with Unobserved Potential Output,'' manuscript.

DeLong, Bradford, 1997, ''America's Peacetime Inflation: The 1970s'', In Reducing Inflation: Motivation and Strategy, eds. C. Romer and D. Romer, 247-276. Chicago: Univ. of Chicago Press.

Erceg, Christopher and Andrew Levin, 2003, ''Imperfect Credibility and Inflation Persistence,'' Journal of Monetary Economics, 50(4), 915-944.

Ehrmann, Michael and Frank Smets, 2003, ''Uncertain Potential Output: Implications for Monetary Policy, ''Journal of Economic Dynamics and Control, 27, 1611--1638.

Ireland, Peter, 1999, ''Does the Time-Consistency Problem Explain the Behavior of Inflation in the United States?'' Journal of Monetary Economics, 44(2) 279-91.

Lansing, Kevin J, 2001, ''Learning about a Shift in Trend Output: Implications for Monetary Policy and Inflation.'' Unpublished manuscript. FRB San Francisco.

Nelson, Edward and Kalin Nikolov, 2002, ''Monetary Policy and Stagflation in the UK,'' CEPR Discussion Paper No. 3458, July.

Orphanides, Athanasios, 2001, ''Monetary Policy Rules, Macroeconomic Stability and Inflation: A View from the Trenches,'' BGFRS.

Orphanides, Athanasios and John C. Williams, 2002, ''Imperfect Knowledge, Inflation Expectations, and Monetary Policy,'' BGFRS.

Orphanides, Athanasios, 2003, ''The Quest for Prosperity without Inflation.'' Journal of Monetary Economics, 50(3) 633-63.

Sargent, Thomas J, 1999, ''The Conquest of American Inflation''. Princeton: Princeton Univ. Press.

Svensson, Lars and Michael Woodford, 2003, ''Indicator Variables for Optimal Policy,'' Journal of Monetary Economics, 50(3), 691-720.

5  Appendix

The solution of the model under imperfect information with a Kalman filter

Consider the following system

$\displaystyle M_{cc} Y_{t}= M_{cs} \left( \begin{array}[c]{c} X^{b}_{t}\\ X^{f}_{t} \end{array} \right) + M_{ce} \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right)$ (16)

$\displaystyle M_{ss0} \left( \begin{array}[c]{c} X^{b}_{t+1}\\ X^{f}_{t+1\vert t} \end{array} \right) + M_{ss1} \left( \begin{array}[c]{c} X^{b}_{t}\\ X^{f}_{t} \end{array} \right) + M_{se1} \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right) = M_{sc0} Y_{t+1\vert t} + M_{sc1} Y_{t} + \left( \begin{array}[c]{c} M_{e} u_{t+1}\\ 0 \end{array} \right)$ (17)

$\displaystyle S_{t}= C^{0} \left( \begin{array}[c]{c} X^{b}_{t}\\ X^{f}_{t} \end{array} \right) + C^{1} \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right) +v_{t}$ (18)

$ Y$ is a vector of $ n_{y}$ control variables, $ S$ is a vector of $ n_{s}$ signals used by the agents to form expectations, $ X^{b}$ is a vector of $ n_{b}$ predetermined (backward looking) state variables (including shocks to fundamentals), $ X^{f}$ is a vector of $ n_{f}$ forward looking state variables, and $ u$ and $ v$ are two Gaussian white noise processes with variance-covariance matrices $ \Sigma_{uu}$ and $ \Sigma_{vv}$ respectively and $ E(uv^{\prime})=0$. $ X_{t+i\vert t}=E(X_{t+i}\vert\mathcal{I}_{t})$ for $ i\geqslant0$, where $ \mathcal{I}_{t}$ denotes the information set available to the agents at the beginning of period $ t$.

Note that, from (16), we have

$\displaystyle Y_{t}=B^{0} \left( \begin{array}[c]{c} X^{b}_{t}\\ X^{f}_{t} \end{array} \right) + B^{1} \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right)$ (19)

where $ B^{0}=M_{cc}^{-1}M_{cs}$ and $ B^{1}=M_{cc}^{-1}M_{ce}$, such that
$\displaystyle Y_{t\vert t}=B\left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right)$ (20)

with $ B=B^{0}+B^{1}$.

5.1  Solving the system

Step 1:

We first solve equation (17) without the error term:

$\displaystyle M_{ss0} \left( \begin{array}[c]{c} X^{b}_{t+1\vert t}\\ X^{f}_{t+1\vert t} \end{array} \right) + \left( M_{ss1}+M_{se1}\right) \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right) = M_{sc0} Y_{t+1\vert t} + M_{sc1} Y_{t\vert t}$ (21)

Plugging (20) into (21), we have
$\displaystyle \left( \begin{array}[c]{c} X^{b}_{t+1\vert t}\\ X^{f}_{t+1\vert t} \end{array} \right) = W \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right)$ (22)

where
$\displaystyle W = -\left( M_{ss0}-M_{sc0}B\right) ^{-1} \left( M_{ss1}+M_{se1} -M_{sc1}B\right) $
Using the Jordan form associated with (22) and applying standard methods for eliminating bubbles we have
$\displaystyle X^{f}_{t\vert t}= G X^{b}_{t\vert t} $
From which it follows that
$\displaystyle X^{b}_{t+1\vert t}$ $\displaystyle =(W_{bb}+W_{bf}G)X^{b}_{t\vert t}=W^{b} X^{b}_{t\vert t}$ (23)
$\displaystyle X^{f}_{t+1\vert t}$ $\displaystyle =(W_{fb}+W_{ff}G)X^{b}_{t\vert t}=W^{f} X^{b}_{t\vert t}$ (24)
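The bubble-elimination step can be sketched as a generic saddle-path computation (a standard routine of this type, not the authors' code; the 2x2 matrix $ W$ below is purely illustrative). The left eigenvectors of $ W$ associated with the explosive roots must annihilate the expected state vector at all dates, which pins down $ G$:

```python
import numpy as np

def solve_saddle(W, n_b, n_f):
    """Given (X^b; X^f)_{t+1|t} = W (X^b; X^f)_{t|t} with n_f roots outside
    the unit circle, return G such that X^f_{t|t} = G X^b_{t|t}."""
    lam, V = np.linalg.eig(W.T)          # columns of V: left eigenvectors of W
    order = np.argsort(np.abs(lam))      # sort roots by modulus
    Q = V[:, order[n_b:]].T              # rows spanning the explosive subspace
    Qb, Qf = Q[:, :n_b], Q[:, n_b:]
    # Ruling out explosive paths requires Qb X^b_{t|t} + Qf X^f_{t|t} = 0:
    return -np.linalg.solve(Qf, Qb)

# One backward and one forward variable; the roots are about 0.78 and 2.12.
W = np.array([[0.9, 0.5],
              [0.3, 2.0]])
G = solve_saddle(W, n_b=1, n_f=1)
```

Determinacy requires exactly $ n_{f}$ explosive roots; with fewer, $ Q_{f}$ is not square and the equilibrium is indeterminate, which is the CGG case discussed in the text.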

Step 2:

We now use these results in the original system of equations. Substituting (19) and (20) into equation (17) gives

$\displaystyle M_{ss0} \left( \begin{array}[c]{c} X^{b}_{t+1}\\ X^{f}_{t+1\vert t} \end{array} \right) + M_{ss1} \left( \begin{array}[c]{c} X^{b}_{t}\\ X^{f}_{t} \end{array} \right) + M_{se1} \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right)$ $\displaystyle = M_{sc0} B \left( \begin{array}[c]{c} X^{b}_{t+1\vert t}\\ X^{f}_{t+1\vert t} \end{array} \right) + M_{sc1} B^{0} \left( \begin{array}[c]{c} X^{b}_{t}\\ X^{f}_{t} \end{array} \right)$    
  $\displaystyle + M_{sc1} B^{1} \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right) + \left( \begin{array}[c]{c} M_{e} u_{t+1}\\ 0 \end{array} \right)$    

Taking expectations, we have
$\displaystyle M_{ss0} \left( \begin{array}[c]{c} X^{b}_{t+1\vert t}\\ X^{f}_{t+1\vert t} \end{array} \right) + \left( M_{ss1}+M_{se1}\right) \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right)$ $\displaystyle = M_{sc0} B \left( \begin{array}[c]{c} X^{b}_{t+1\vert t}\\ X^{f}_{t+1\vert t} \end{array} \right) + M_{sc1} B^{0} \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right)$    
  $\displaystyle + M_{sc1} B^{1} \left( \begin{array}[c]{c} X^{b}_{t\vert t}\\ X^{f}_{t\vert t} \end{array} \right)$    

Subtracting, we get
$\displaystyle M_{ss0} \left( \begin{array}[c]{c} X^{b}_{t+1}-X^{b}_{t+1\vert t}\\ 0 \end{array} \right) + M_{ss1} \left( \begin{array}[c]{c} X^{b}_{t}-X^{b}_{t\vert t}\\ X^{f}_{t}-X^{f}_{t\vert t} \end{array} \right) = M_{sc1} B^{0} \left( \begin{array}[c]{c} X^{b}_{t}-X^{b}_{t\vert t}\\ X^{f}_{t}-X^{f}_{t\vert t} \end{array} \right) + \left( \begin{array}[c]{c} M_{e} u_{t+1}\\ 0 \end{array} \right)$ (25)

$\displaystyle \left( \begin{array}[c]{c} X^{b}_{t+1}-X^{b}_{t+1\vert t}\\ 0 \end{array} \right) = W^{c} \left( \begin{array}[c]{c} X^{b}_{t}-X^{b}_{t\vert t}\\ X^{f}_{t}-X^{f}_{t\vert t} \end{array} \right) + M_{ss0}^{-1}\left( \begin{array}[c]{c} M_{e} u_{t+1}\\ 0 \end{array} \right)$ (26)

where $ W^{c}=-M_{ss0}^{-1}(M_{ss1}-M_{sc1} B^{0})$. Hence, considering the second block of the above matrix equation, we get
$\displaystyle W^{c}_{fb}(X^{b}_{t}-X^{b}_{t\vert t})+W^{c}_{ff}(X^{f}_{t}-X^{f}_{t\vert t})=0 $
which gives
$\displaystyle X^{f}_{t}=F^{0}X^{b}_{t}+F^{1} X^{b}_{t\vert t} $
with $ F^{0}=-{W^{c}_{ff}}^{-1}W^{c}_{fb}$ and $ F^{1}=G-F^{0}$.

Now considering the first block we have

$\displaystyle X^{b}_{t+1}=X^{b}_{t+1\vert t}+W^{c}_{bb}(X^{b}_{t}-X^{b}_{t\vert t})+W^{c}_{bf} (X^{f}_{t}-X^{f}_{t\vert t})+M^{2} u_{t+1} $
from which, using (23), we get
$\displaystyle X^{b}_{t+1}=M^{0} X^{b}_{t}+M^{1} X^{b}_{t\vert t}+M^{2} u_{t+1} $
with $ M^{0}=W^{c}_{bb}+W^{c}_{bf}F^{0}$, $ M^{1}=W^{b}-M^{0}$ and $ M^{2}=M_{ss0}^{-1}M_{e}$.

We also have

$\displaystyle S_{t}=C^{0}_{b} X^{b}_{t}+C^{0}_{f} X^{f}_{t}+C^{1}_{b} X^{b}_{t\vert t} + C^{1}_{f} X^{f}_{t\vert t}+v_{t} $
from which we get
$\displaystyle S_{t}=S^{0} X^{b}_{t}+S^{1} X^{b}_{t\vert t}+v_{t} $
where $ S^{0}=C^{0}_{b}+C^{0}_{f} F^{0}$ and $ S^{1}=C^{1}_{b}+C^{0}_{f} F^{1}+C^{1}_{f} G$

Finally, we have

$\displaystyle Y_{t}=B^{0}_{b} X^{b}_{t}+B^{0}_{f} X^{f}_{t}+B^{1}_{b} X^{b}_{t\vert t} + B^{1}_{f} X^{f}_{t\vert t} $
which leads to
$\displaystyle Y_{t}=\Pi^{0} X^{b}_{t}+\Pi^{1} X^{b}_{t\vert t} $
where $ \Pi^{0}=B^{0}_{b}+B^{0}_{f} F^{0}$ and $ \Pi^{1}=B^{1}_{b}+B^{0}_{f} F^{1}+B^{1}_{f} G$

5.2  Filtering

Since our solution involves terms in $ X^{b}_{t\vert t}$, we need to compute this quantity. The only information that can be exploited is the signal $ S_{t}$ described previously. We therefore use a Kalman filter to compute the optimal prediction $ X^{b}_{t\vert t}$.

In order to recover the Kalman filter, it is convenient to work in terms of expectational errors. Therefore, let us define

$\displaystyle \widehat{X}^{b}_{t}=X^{b}_{t}-X^{b}_{t\vert t-1} $
$\displaystyle \widehat{S}_{t}=S_{t}-S_{t\vert t-1} $
Note that since $ S_{t}$ depends on $ X^{b}_{t\vert t}$, only the adjusted signal $ \widetilde{S}^{e}_{t}=S_{t}-S^{1}X^{b}_{t\vert t}$ can be used to infer anything about $ X^{b}_{t}$. The policymaker therefore revises its expectations using a linear rule based on $ \widetilde{S}^{e}_{t}$. The filtering equation then reads
$\displaystyle X^{b}_{t\vert t}=X^{b}_{t\vert t-1}+K (\widetilde{S}^{e}_{t}-\widetilde{S}^{e}_{t\vert t-1})= X^{b}_{t\vert t-1}+K (S^{0} \widehat{X}^{b}_{t}+v_{t}) $
where K is the filter gain matrix, that we would like to compute.

We first rewrite the system in state-space form. Since $ S_{t\vert t-1}=(S^{0}+S^{1}) X^{b}_{t\vert t-1}$, we have

$\displaystyle \widehat{S}_{t}$ $\displaystyle =S^{0}(X^{b}_{t}-X^{b}_{t\vert t})+S^{1}(X^{b}_{t\vert t} -X^{b}_{t\vert t-1})+v_{t}$    
  $\displaystyle =S^{0} \widehat{X}^{b}_{t}+S^{1} K(S^{0} \widehat{X}^{b}_{t}+v_{t})+v_{t}$    
  $\displaystyle =S^{\star}\widehat{X}^{b}_{t}+\nu_{t}$    

where $ S^{\star}=(I+S^{1} K)S^{0}$ and $ \nu_{t}=(I+S^{1} K) v_{t}$.

Now, considering the law of motion of the backward state variables, we get

$\displaystyle \widehat{X}^{b}_{t+1}$ $\displaystyle = M^{0} (X^{b}_{t}-X^{b}_{t\vert t})+M^{2} u_{t+1}$    
  $\displaystyle = M^{0} (X^{b}_{t}-X^{b}_{t\vert t-1}-X^{b}_{t\vert t}+X^{b}_{t\vert t-1})+M^{2} u_{t+1}$    
  $\displaystyle = M^{0} \widehat{X}^{b}_{t}-M^{0}(X^{b}_{t\vert t}-X^{b}_{t\vert t-1})+M^{2} u_{t+1}$    
  $\displaystyle = M^{0} \widehat{X}^{b}_{t}-M^{0}K(S^{0} \widehat{X}^{b}_{t}+v_{t}) +M^{2} u_{t+1}$    
  $\displaystyle = M^{\star}\widehat{X}^{b}_{t}+\omega_{t+1}$    

where $ M^{\star}=M^{0}(I-KS^{0})$ and $ \omega_{t+1}=M^{2} u_{t+1}-M^{0} K v_{t}$.

We therefore end up with the following state-space representation

$\displaystyle \widehat{X}^{b}_{t+1}$ $\displaystyle = M^{\star}\widehat{X}^{b}_{t}+\omega_{t+1}$ (27)
$\displaystyle \widehat{S}_{t}$ $\displaystyle =S^{\star}\widehat{X}^{b}_{t}+\nu_{t}$ (28)

For which the Kalman filter is given by
$\displaystyle \widehat{X}^{b}_{t\vert t}=\widehat{X}^{b}_{t\vert t-1}+P {S^{\star}}^{\prime}\left( S^{\star}P {S^{\star}}^{\prime}+ \Sigma_{\nu\nu}\right) ^{-1}(S^{\star}\widehat{X}^{b}_{t} +\nu_{t}) $
But since $ \widehat{X}^{b}_{t}$ is an expectational error, it is uncorrelated with the information set at $ t-1$, so that $ \widehat{X}^{b}_{t\vert t-1}=0$. The prediction formula for $ \widehat{X}^{b}_{t\vert t}$ therefore reduces to
$\displaystyle \widehat{X}^{b}_{t\vert t}=P {S^{\star}}^{\prime}\left( S^{\star}P {S^{\star}}^{\prime}+ \Sigma_{\nu\nu}\right) ^{-1}(S^{\star}\widehat{X}^{b}_{t}+\nu_{t})$ (29)

where $ P$ solves
$\displaystyle P=M^{\star}P {M^{\star}}^{\prime}+\Sigma_{\omega\omega} $
where $ \Sigma_{\nu\nu}=(I+S^{1} K)\Sigma_{vv}(I+S^{1} K)^{\prime}$ and $ \Sigma_{\omega\omega}= M^{0} K \Sigma_{vv} K^{\prime}{M^{0}}^{\prime}+ M^{2} \Sigma_{uu} {M^{2}}^{\prime}$

Note however that the above solution is obtained for a given $ K$ matrix that remains to be computed. We can do that by using the basic equation of the Kalman filter:

$\displaystyle X^{b}_{t\vert t}$ $\displaystyle =X^{b}_{t\vert t-1}+K (\widetilde{S}^{e}_{t}-\widetilde{S} ^{e}_{t\vert t-1})$    
  $\displaystyle =X^{b}_{t\vert t-1}+K (S_{t}-S^{1} X^{b}_{t\vert t}-(S_{t\vert t-1}-S^{1} X^{b}_{t\vert t-1}))$    
  $\displaystyle =X^{b}_{t\vert t-1}+K(S_{t}-S^{1} X^{b}_{t\vert t}-S^{0} X^{b}_{t\vert t-1})$    

Solving for $ X^{b}_{t\vert t}$, we get
$\displaystyle X^{b}_{t\vert t}$ $\displaystyle =(I+KS^{1})^{-1}(X^{b}_{t\vert t-1}+K (S_{t}-S^{0} X^{b}_{t\vert t-1}))$    
  $\displaystyle =(I+KS^{1})^{-1}(X^{b}_{t\vert t-1}+KS^{1}X^{b}_{t\vert t-1}-KS^{1}X^{b}_{t\vert t-1}+K (S_{t}-S^{0} X^{b}_{t\vert t-1}))$    
  $\displaystyle =(I+KS^{1})^{-1}(I+KS^{1})X^{b}_{t\vert t-1}+(I+KS^{1})^{-1}K (S_{t}-(S^{0} +S^{1}) X^{b}_{t\vert t-1}))$    
  $\displaystyle =X^{b}_{t\vert t-1}+(I+KS^{1})^{-1}K \widehat{S}_{t}$    
  $\displaystyle =X^{b}_{t\vert t-1}+K(I+S^{1}K)^{-1} \widehat{S}_{t}$    
  $\displaystyle =X^{b}_{t\vert t-1}+K(I+S^{1}K)^{-1} (S^{\star}\widehat{X}^{b}_{t}+\nu_{t})$    

where we made use of the identity $ (I+KS^{1})^{-1}K \equiv K(I+S^{1}K)^{-1}$. Hence, identifying with (29), we have
$\displaystyle K(I+S^{1}K)^{-1}= P {S^{\star}}^{\prime}\left( S^{\star}P {S^{\star}}^{\prime}+ \Sigma_{\nu\nu}\right) ^{-1} $
Remembering that $ S^{\star}=(I+S^{1} K)S^{0}$ and $ \Sigma_{\nu\nu}=(I+S^{1} K)\Sigma_{vv}(I+S^{1} K)^{\prime}$, we have
$\displaystyle K(I+S^{1}K)^{-1}= P S^{0^{\prime}}(I+S^{1} K)^{\prime}\left[ (I+S^{1} K)S^{0} P S^{0^{\prime}}(I+S^{1} K)^{\prime}+ (I+S^{1} K)\Sigma_{vv}(I+S^{1} K)^{\prime}\right] ^{-1} $
which rewrites as
$\displaystyle K(I+S^{1}K)^{-1}$ $\displaystyle = P S^{0^{\prime}}(I+S^{1} K)^{\prime}\left[ (I+S^{1} K)(S^{0} P S^{0^{\prime}} +\Sigma_{vv})(I+S^{1} K)^{\prime}\right] ^{-1}$    
$\displaystyle K(I+S^{1}K)^{-1}$ $\displaystyle = P S^{0^{\prime}}(I+S^{1} K)^{\prime}{(I+S^{1} K)^{\prime}}^{-1}(S^{0} P S^{0^{\prime}} +\Sigma_{vv})^{-1}(I+S^{1} K)^{-1}$    

Hence, we obtain
$\displaystyle K = P S^{0^{\prime}}(S^{0} P S^{0^{\prime}} +\Sigma_{vv})^{-1}$ (30)

Now, recall that

$\displaystyle P=M^{\star}P {M^{\star}}^{\prime}+\Sigma_{\omega\omega} $
Remembering that $ M^{\star}=M^{0}(I-KS^{0})$ and $ \Sigma_{\omega\omega}= M^{0} K \Sigma_{vv} K^{\prime}{M^{0}}^{\prime}+ M^{2} \Sigma_{uu} {M^{2}}^{\prime}$, we have
$\displaystyle P$ $\displaystyle = M^{0}(I-KS^{0})P \left[ M^{0}(I-KS^{0})\right] ^{\prime}+ M^{0} K \Sigma_{vv} K^{\prime}{M^{0}}^{\prime}+ M^{2}\Sigma_{uu} {M^{2}}^{\prime}$    
  $\displaystyle = M^{0}\left[ (I-KS^{0})P (I-{S^{0}}^{\prime}K^{\prime})+ K \Sigma_{vv} K^{\prime}\right] {M^{0}}^{\prime}+ M^{2}\Sigma_{uu} {M^{2}}^{\prime}$    

Plugging the definition of $ K$ in the latter equation, we obtain
$\displaystyle P=M^{0}\left[ P-P{S^{0}}^{\prime}\left( S^{0}P{S^{0}}^{\prime}+\Sigma_{vv}\right) ^{-1} S^{0}P\right] {M^{0}}^{\prime}+ M^{2}\Sigma_{uu} {M^{2}}^{\prime}$ (31)
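Equations (30) and (31) define a fixed point in $ (K,P)$ that can be computed by iterating (31) to convergence and then reading the gain off (30). A sketch of this computation (with an illustrative scalar calibration, not the paper's matrices):

```python
import numpy as np

def solve_P_K(M0, M2, S0, Sig_uu, Sig_vv, tol=1e-12, max_iter=10_000):
    """Iterate the Riccati-type equation (31) for P, then the gain (30)."""
    P = np.eye(M0.shape[0])
    for _ in range(max_iter):
        inner = P - P @ S0.T @ np.linalg.solve(S0 @ P @ S0.T + Sig_vv, S0 @ P)
        P_new = M0 @ inner @ M0.T + M2 @ Sig_uu @ M2.T
        if np.max(np.abs(P_new - P)) < tol:
            P = P_new
            break
        P = P_new
    K = P @ S0.T @ np.linalg.inv(S0 @ P @ S0.T + Sig_vv)
    return P, K

# Scalar illustration: M0 = 0.9, unit shock loadings and unit noise variances.
P, K = solve_P_K(np.array([[0.9]]), np.array([[1.0]]),
                 np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]]))
```

In this scalar case (31) reduces to $ p=0.81\,p/(p+1)+1$, whose positive root is $ p\approx1.484$, giving a gain $ K=p/(p+1)\approx0.597$.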

5.3  Summary

We finally end up with the system of equations:

$\displaystyle X^{b}_{t+1}$ $\displaystyle = M^{0} X^{b}_{t}+M^{1} X^{b}_{t\vert t}+M^{2} u_{t+1}$ (32)
$\displaystyle S_{t}$ $\displaystyle = S^{0} X^{b}_{t}+S^{1} X^{b}_{t\vert t}+v_{t}$ (33)
$\displaystyle Y_{t}$ $\displaystyle = \Pi^{0} X^{b}_{t}+\Pi^{1} X^{b}_{t\vert t}$ (34)
$\displaystyle X^{f}_{t}$ $\displaystyle = F^{0}X^{b}_{t}+F^{1} X^{b}_{t\vert t}$ (35)
$\displaystyle X^{b}_{t\vert t}$ $\displaystyle = X^{b}_{t\vert t-1}+K(S^{0}(X^{b}_{t}-X^{b}_{t\vert t-1})+v_{t})$ (36)
$\displaystyle X^{b}_{t+1\vert t}$ $\displaystyle = (M^{0}+M^{1}) X^{b}_{t\vert t}$ (37)

which describe the dynamics of our economy.
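A minimal simulation of (32)-(37) with an illustrative scalar calibration (the coefficients and the gain $ K$ below are assumptions made for the sketch, not estimates) shows the filtered state tracking, with delay, the true state:

```python
import numpy as np

# Illustrative scalars: one backward state, signal s_t = X^b_t + v_t.
M0, M1, M2 = 0.9, -0.2, 1.0       # law of motion, eq. (32)
S0 = 1.0                          # signal loading, eq. (33)
K = 0.6                           # Kalman gain, taken as given here

rng = np.random.default_rng(0)
T = 500
Xb = np.zeros(T)                  # true state X^b_t
Xb_tt = np.zeros(T)               # filtered estimate X^b_{t|t}
Xb_pred = 0.0                     # forecast X^b_{t|t-1}
for t in range(T - 1):
    v = 0.5 * rng.standard_normal()                         # signal noise
    Xb_tt[t] = Xb_pred + K * (S0 * (Xb[t] - Xb_pred) + v)   # eq. (36)
    Xb_pred = (M0 + M1) * Xb_tt[t]                          # eq. (37)
    u = rng.standard_normal()                               # fundamental shock
    Xb[t + 1] = M0 * Xb[t] + M1 * Xb_tt[t] + M2 * u         # eq. (32)

corr = np.corrcoef(Xb[:-1], Xb_tt[:-1])[0, 1]
```

Because the perceived state $ X^{b}_{t\vert t}$ feeds back into the law of motion (through $ M^{1}$), estimation errors propagate into the true dynamics, which is precisely the channel through which mis-perceived potential output translates into policy errors in the text.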

6  Determinate Equilibrium: The Volcker-Greenspan rule

Figure 1: IRF to a negative technology shock

Figure 1 has two panels for impulse response functions (IRFs) over the first 40 periods.  The left panel is for the inflation rate, the right panel is for output, and both are with respect to a negative technology shock.  Each panel plots three scenarios: perfect information, imperfect information (I), and imperfect information (II).  All IRFs for inflation are positive. The first creeps upward to about 1.5% at 20 quarters and then declines; the second declines from about 2.5% until reaching the first IRF after a few quarters and then follows the first IRF closely; and the third declines from about 6.5% until reaching the first IRF after several quarters and then follows the first IRF closely.  All IRFs for output are negative: the first increases smoothly from about -45% towards zero; the second starts at about -30%, becomes more negative until reaching the first IRF, and then follows the first IRF closely; and the third declines from near-zero to about -20% at about 10 quarters out, then tends towards zero.

Table 2: Impact and extreme effect of a technology shock ($ \Theta=\{0.75,1.50,0.50\}$, -33% shock)

            Perf. Info         Imp. Info (I)      Imp. Info (II)
            Impact      Max    Impact      Max    Impact      Max
Output      -45.074 -45.074    -29.977 -38.695    -3.163  -20.803
Inflation     0.335   1.543      2.597   2.597     6.569    6.569

Note: Perfect information, Imperfect information (I) and Imperfect information (II) correspond to $ \varsigma$=0,1,8 respectively, where $ \varsigma$ is the amount of noise.

Table 3: Standard Deviations: $ \Theta=\{0.75,1.50,0.50\},-33\%$ Shock

  $ \sigma_{y}$ $ \sigma_{i}$ $ \sigma_{\pi}$
Data 1.639 7.271 0.778
Perf. Info. 4.349 15.625 0.097
Imp. Info (I) 3.891 14.324 0.212
Imp. Info (II) 1.820 6.736 0.619
Note: The standard deviations are computed for HP-filtered series. $ y$, $ i$ and $ \pi$ are output, investment and inflation respectively. Perfect information, Imperfect information (I) and Imperfect information (II) correspond to $ \varsigma$=0,1,8 respectively, where $ \varsigma$ is the amount of noise. $ \Theta$ = $ \{\rho,k_{\pi},k_{y}\}$.

Figure 2: Expected versus realized inflation rate, $ \Theta$ = $ \{\rho,k_{\pi},k_{y}\}$ = {0.75, 1.50, 0.50}

Figure 2 has two panels for impulse response functions (IRFs) over the first 40 periods.  The left panel is for imperfect information (I), the right panel is for imperfect information (II), and each panel compares the IRF for the expected inflation rate with the IRF for the realized inflation rate.  In the left panel, the IRF for the expected inflation rate creeps upward to about 1.5% at 20 quarters and then declines, whereas the IRF for the realized inflation rate declines from about 5% initially to about 1.5% at 20 quarters out and then follows the first IRF closely.  In the right panel, the IRF for the expected inflation rate creeps upward from near zero to a few tenths of a percentage point, whereas the IRF for the realized inflation rate declines steadily from about 6.5% to around 1%.

Figure 3: Ex-ante versus ex-post real interest rate, $ \Theta$ = {0.75, 1.50, 0.50}

Figure 3 has two panels for impulse response functions (IRFs) over the first 40 periods.  The left panel is for imperfect information (I), the right panel is for imperfect information (II), and each panel compares the IRF for the ex ante real interest rate with the IRF for the ex post real interest rate.  In the left panel, the IRF for the ex ante real interest rate stays close to zero, whereas the IRF for the ex post real interest rate increases smoothly from about -6% initially to near zero at 20 quarters, staying close to zero thereafter.  In the right panel, the IRF for the ex ante real interest rate stays close to zero, whereas the IRF for the ex post real interest rate increases smoothly and gradually from about -7% initially to about -1% at 40 quarters.

7  Determinacy: Reactions to inflation and output

Table 4: Standard Deviations

  $ \sigma_{y}$ $ \sigma_{i}$ $ \sigma_{\pi}$
Data 1.639 7.271 0.778
  $ (\rho,\kappa_{\pi},\kappa_{y})=(0.75,1.50,0.20)$
Perf. Info. 3.509 12.774 0.108
Imp. Info. (I) 3.146 11.549 0.154
Imp. Info. (II) 1.598 5.865 0.483
  $ (\rho,\kappa_{\pi},\kappa_{y})=(0.75,1.50,0.70)$
Perf. Info. 3.255 11.612 0.093
Imp. Info. (I) 2.957 10.821 0.188
Imp. Info. (II) 1.509 5.521 0.478
  $ (\rho,\kappa_{\pi},\kappa_{y})=(0.75,1.20,0.50)$
Perf. Info. 3.103 10.810 0.278
Imp. Info. (I) 2.856 10.251 0.313
Imp. Info. (II) 1.468 5.269 0.492

Note: The standard deviations are computed for HP-filtered series. $ y$, $ i$ and $ \pi$ are output, investment and inflation respectively. $ \Theta$ = $ \{\rho,k_{\pi},k_{y}\}$.

Figure 4: IRF to a -33% technology shock

Panel A: $ \Theta$ = {0.75, 1.50, 0.20}

Figure 4 plots the IRFs in Figure 1 for three other parameter values: $\Theta=\{0.75,1.50,0.20\}$ (labeled Panel A), $\Theta=\{0.75,1.50,0.70\}$ (labeled Panel B), and $\Theta=\{0.75,1.20,0.50\}$ (labeled Panel C).  Each of these panels contains two graphs (as in Figure 1), one for the inflation rate and one for output.  Panel A is very similar to Figure 1, except that the IRFs generally are mildly attenuated relative to those in Figure 1.  The IRFs for output in Panel B are generally even more attenuated, whereas those for inflation are unchanged or somewhat larger.  The IRFs in Panel C are similar to or somewhat more pronounced than those in Figure 1.

8  Real Indeterminacy: The Clarida-Gali-Gertler rule

Figure 5: IRF to a -12% technology shock $ \Theta$ = {0.75,0.80,0.40}

Figure 5 has two panels for impulse response functions (IRFs) over the first 40 periods.  The left panel is for the inflation rate, the right panel is for output, and both are with respect to a negative technology shock.  The IRF for inflation declines sharply from about 5% to 3% over the first few quarters, and then declines gradually to zero.  The IRF for output declines sharply from about -2% to -13% over the first few quarters, and then increases gradually to about -3% at 40 quarters.

Table 5: Effects of a -12% technology shock $ \Theta$ = {0.75,0.80,0.40}

  Impact Max.
Output -1.773 -12.755
Inflation 5.000 5.000

Table 6: Standard Deviations $ \Theta$ = {0.75,0.80,0.40}

$ \sigma_{s}$   $ \sigma_{y}$ $ \sigma_{i}$ $ \sigma_{\pi}$
Data   1.639 7.271 0.778
  (q=0.25, -12% shock)
0   1.702 5.545 0.529
$ \sigma_{a}$   1.727 5.689 0.542
0.0400$ ^{(a)}$   2.272 8.463 0.777
0.0294$ ^{(b)}$   2.030 7.278 0.676
0.1294$ ^{(c)}$   5.065 21.029 1.861

Note: The standard deviations are computed for HP-filtered series. $ y$, $ i$ and $ \pi$ are output, investment and inflation respectively. $ \sigma_{s}$ is the standard deviation of the sunspot shock; $ \sigma_{a}$ sets it equal to that of the technology shock. (a), (b) and (c) match $ \sigma_{\pi}$, $ \sigma_{i}$ and $ \sigma_{R}$ respectively. $ \Theta$ = $ \{\rho,k_{\pi},k_{y}\}$.

9  Indeterminacy: Other cases

Figure 6: IRF to a -8% technology shock, $ \Theta$ = {0.75,1.20,0.80}

Figure 6 has two panels for impulse response functions (IRFs) over the first 40 periods.  The left panel is for the inflation rate, the right panel is for output, and both are with respect to a negative technology shock.  The IRF for inflation declines sharply from about 5% to 3.7% over the first few quarters, and then declines gradually to about 2.8% at 40 quarters.  The IRF for output declines sharply from about -2% to -10% over the first few quarters, and then increases gradually to about -3% at 40 quarters.

Table 7: Effects of a -8% technology shock, $ \Theta$ = {0.75,1.20,0.80}

  Impact Max.
Output -1.718 -9.972
Inflation 5.020 5.020

Table 8: Standard Deviations, $ \Theta$ = {0.75,1.20,0.80}

$ \sigma_{s}$   $ \sigma_{y}$ $ \sigma_{i}$ $ \sigma_{\pi}$
Data   1.639 7.271 0.778
0   1.625 5.274 0.689
$ \sigma_{a}$   1.650 5.394 0.714
0.006$ ^{(a)}$   1.639 5.340 0.704
0.035$ ^{(b)}$   2.072 7.271 1.042
0.016$ ^{(c)}$   1.724 5.736 0.778
0.058$ ^{(d)}$   2.681 9.827 1.461

Note: The standard deviations are computed for HP-filtered series. $ y$, $ i$ and $ \pi$ are output, investment and inflation respectively. $ \sigma_{s}$ is the standard deviation of the sunspot shock; $ \sigma_{a}$ sets it equal to that of the technology shock. (a), (b), (c) and (d) match $ \sigma_{y}$, $ \sigma_{i}$, $ \sigma_{\pi}$ and $ \sigma_{R}$ respectively. $ \Theta$ = $ \{\rho,k_{\pi},k_{y}\}$.


1.  We would like to thank Andy Levin, Mike Spagat and the participants at the International Research Forum on Monetary Policy in DC and at the European Monetary Forum in Bonn for valuable comments. Return to text

2.  CNRS-GREMAQ, Manufacture des Tabacs, bât. F, 21 allée de Brienne, 31000 Toulouse, France. Tel: (33-5) 61-12-85-60, Fax: (33- 5) 61-22-55-63, email: [email protected], Homepage: Return to text

3.  Department of Economics, University of Bern, CEPR, IMOP. Address: VWI, Gesellschaftsstrasse 49, CH 3012 Bern, Switzerland. Tel: (41) 31-6313989, Fax: (41) 31-631-3992, email: [email protected], Homepage: Return to text

4.  Related explanations are that the FED was the ''victim'' of conventional macroeconomic wisdom of the time that claimed the existence of a stable, permanent tradeoff between inflation and unemployment (De Long, 1997). Or, that the FED was the ''victim'' of econometrics. Sargent, 1999, for instance, has argued that the data periodically give the impression of the existence of a Phillips curve with a favorable trade-off between inflation and unemployment. High inflation then results as the central bank attempts to exploit this Return to text

5.  We follow Svensson and Woodford, 2003, in modelling imperfect information using the Kalman filter. Return to text
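A scalar sketch of the signal-extraction step that footnote 5 refers to: the central bank observes a noisy signal of (potential) output and updates its estimate with the Kalman filter. All parameter values below (persistence F, shock variance Q, measurement variance R) are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H=1.0, R=0.01):
    """Measurement update: combine prior (x_pred, P_pred) with noisy signal z."""
    S = H * P_pred * H + R            # innovation variance
    K = P_pred * H / S                # Kalman gain
    x_filt = x_pred + K * (z - H * x_pred)
    P_filt = (1.0 - K * H) * P_pred   # posterior variance shrinks
    return x_filt, P_filt

def kalman_predict(x_filt, P_filt, F=0.95, Q=0.005):
    """Time update under an AR(1) transition for the unobserved state."""
    return F * x_filt, F * P_filt * F + Q

# Track an unobserved, persistent state from noisy observations
rng = np.random.default_rng(1)
x, est, P = 0.0, 0.0, 1.0
for _ in range(100):
    x = 0.95 * x + rng.normal(scale=np.sqrt(0.005))  # true state
    z = x + rng.normal(scale=0.1)                    # noisy signal
    est, P = kalman_update(est, P, z)
    est, P = kalman_predict(est, P)
```

When the signal is very noisy (large R), the gain K is small and the estimate adjusts slowly, which is the mechanism behind the persistent output mis-perceptions discussed in the text.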

6.  $ E_{t}(.)$ denotes mathematical conditional expectations. Expectations are conditional on information available at the beginning of period $ t$. Return to text

7.  See Ehrmann and Smets, 2003, for a discussion of optimal monetary policy in a related model. Return to text

8.  URL: Return to text

9.  There is a non-negligible change in the volatility of the Solow residual between the pre- and post-Volcker periods. The volatility up to 1979:4 is 0.0084, while that after 1980:1 is 0.0062. For the evaluation of the model it is the former period that is relevant. Note that for the government spending shock the difference between the two periods is negligible. Return to text

10.  For instance, this is the assumption made by Bullard and Eusepi, 2003. Nonetheless, there is very little agreement regarding the type of change in the productivity process that took place around 1970. Other differences between our model and that of Bullard and Eusepi are to be found in the learning mechanism and the interest rate policy rule employed. Return to text

11.  The (logarithm of the) government expenditure series is first detrended using a linear trend. Return to text
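The detrending in footnote 11 is a standard OLS regression on a linear time trend; a minimal sketch (the simulated spending series is purely illustrative):

```python
import numpy as np

def linear_detrend(x):
    """Remove an OLS linear time trend; return the residual series."""
    t = np.arange(len(x), dtype=float)
    slope, intercept = np.polyfit(t, x, 1)  # coefficients: highest degree first
    return x - (intercept + slope * t)

# e.g. detrend the log of a trending "government spending" series
rng = np.random.default_rng(2)
g = np.exp(0.005 * np.arange(120) + 0.01 * rng.normal(size=120))
log_g_detrended = linear_detrend(np.log(g))
```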

12.  Making some variable other than actual output noisy (for instance, inflation) does not materially affect the results. Return to text

13.  The public too has imperfect information about actual and potential output. Return to text

14.  To be more precise, we vary the size of $ \varsigma$. Return to text

15.  We could not set the sunspot volatility so as to match consumption volatility as it is already overestimated when the standard deviation of the sunspot is set to 0. Return to text

16.  There is considerable evidence, based, for instance, on the behavior of the current account, that the increase in the oil price in 1973 was perceived as temporary. Return to text

Last update: October 24, 2006