
New Monetarist Economics*



Stephen Williamson
Washington University in St. Louis
Randall Wright
University of Wisconsin - Madison

Abstract:

This paper articulates the principles and models of New Monetarism, which is our label for a body of recent work on money, banking, payments systems, and asset markets. The approach has something in common with Old Monetarism, but also some key differences. It has little in common with New Keynesianism. A series of models is described that illustrates what has been done in recent monetary theory and leads us to a benchmark model that has a wide range of applications. We use the benchmark model to exposit and discuss some Old Monetarist and New Keynesian ideas. Then, through a series of examples, we show how the framework can be applied to issues in payments, banking, and asset pricing.


1 Introduction

The purpose of this essay is to articulate the principles and practice of a school of thought we call New Monetarist Economics. Although there is by now a large body of work in the area, our label is novel, and we feel we should say why we use it. First, New Monetarists find much that is appealing in Old Monetarist economics epitomized by the writings of Milton Friedman and some of his followers, although we also disagree with their ideas in several important ways. Second, New Monetarism has little in common with New Keynesianism, although this may have as much to do with the way New Keynesians approach monetary economics and the microfoundations of macroeconomics as with their assumptions about sticky prices. Moreover, we think it was a healthy state of affairs when, even in the halcyon days of Old Keynesianism, there was a dissenting view presented by Old Monetarists, at the very least as a voice of caution to those who thought macro and monetary economics were "solved" problems.1 We think it would be similarly healthy today if more people recognized that there is an alternative to New Keynesianism. We dub this alternative New Monetarism.

An impression has emerged recently that there is a consensus that New Keynesianism is the most useful approach to analyzing macroeconomic phenomena and guiding monetary policy. The view that there is consensus is surprising to us, as we encounter much sympathy for the position that there are fundamental flaws in the New Keynesian approach. It must then be the case that those of us who do not think New Keynesianism is the only game in town, or who think that approach has issues that need to be discussed, are not speaking with enough force and clarity. In part, this essay is an attempt to rectify this state of affairs and foster more healthy debate. The interaction we envision between New Monetarists and Keynesians is in some ways similar to the debates in the 1960s and 1970s, and in other ways different, of course, since much of the method and language has changed in economics since then. To bring the dialogue to the 21st century, we first need to describe what New Monetarists are doing.

New Monetarism encompasses a body of research on monetary theory and policy, and on banking, financial intermediation, and payments, that has taken place over the last few decades. In monetary economics, this includes the seminal work using overlapping generations models by Lucas (1972) and some of the contributors to the Models of Monetary Economies volume edited by Kareken and Wallace (1980), although antecedents exist, including Samuelson (1958), of course. More recently, much monetary theory has adopted the search and matching approach, an early example of which is Kiyotaki and Wright (1989), although there are also antecedents for this, including Jones (1976). In the economics of banking, intermediation, and payments, which builds on advances in information theory that occurred mainly in the 1970s, examples of what we have in mind include Diamond and Dybvig (1983), Diamond (1984), Williamson (1986,1987), Bernanke and Gertler (1989), and Freeman (1995). Much of this research is abstract and theoretical in nature, but the literature has turned more recently to empirical and policy issues.

In Section 2 we first explain what New Monetarism is not, by describing Keynesianism and Old Monetarism. Then we lay out a set of New Monetarist principles. As a preview, we think New Monetarists agree more or less with the following:

  1. Microfoundations matter: productive analysis of macro and monetary economics, including policy discussions, requires adherence to sound and internally consistent economic theory.
  2. In the quest to understand monetary phenomena and monetary policy, it is decidedly better to use models that are explicit about the frictions that give rise to a role for money in the first place.
  3. In modeling frictions, one has to have an eye for the appropriate level of abstraction and tractability - e.g. the fact that in some overlapping generations models people live two periods, or that in some search models people meet purely at random, may make them unrealistic but does not make them uninteresting.
  4. No single model should be an all-purpose vehicle in monetary economics, and the right approach may depend on the question, but at the same time it is desirable to have a class of models, making use of consistent assumptions and similar technical devices, that can be applied to a variety of issues.
  5. Financial intermediation is important: while bank liabilities and currency sometimes perform similar roles as media of exchange, for many issues treating them as identical leads one astray.

In Section 3 we review developments in monetary theory over the past two decades that are consistent with these basic principles. We try to say why the models are interesting, and why they were constructed as they were - what lies behind particular assumptions, abstractions and simplifications. In Section 4 we move to more recent models that are better suited to address certain empirical and policy issues, while at the same time being tractable enough to deliver sharp analytic results. We lay out a benchmark New Monetarist model, based on Lagos and Wright (2005), and show how it can be used to address a range of issues. Again, we try to explain what lies behind the assumptions, and we give some of its basic properties - e.g. money is neutral but not superneutral, the Friedman rule is optimal but may not give the first best, etc. We also show how this benchmark can be extended to address classic issues pertaining to money and capital accumulation and to inflation and unemployment. As one example, we generate a negatively-sloped Phillips curve that is stable in the long run. In our example, anticipated policy can exploit this trade-off, but it turns out it ought not (the Friedman rule is still optimal), illustrating the value of being explicit about micro details.

Much of Sections 3 and 4 is already in the literature; Section 5 presents novel applications. First, we show how the benchmark can be used to formalize Friedman's (1968) views about the short-run Phillips curve, using a signal extraction problem, as in Lucas (1972). This yields some conclusions that are similar to those of Friedman and Lucas, but also some that are different, again showing the importance of micro details. Having shown how the model can be used to think about Old Monetarist ideas, we then use it to illustrate New Keynesian ideas, by introducing sticky prices. This generates policy conclusions similar to those in Clarida et al. (1999) or Woodford (2003), but there are also differences, again illustrating how details matter. Although the examples in this Section rederive known results, in a different context, they also serve to make it clear that other approaches are not inconsistent with our formal model. One should not shy away from New Monetarism even if one believes sticky prices, imperfect information, and related ingredients are critical, as these are easily incorporated into micro-based theories of the exchange process.2

In Section 6, we discuss applications related to banking and payments. These extensions contain more novel modeling choices and results, although the substantive issues we address have of course been raised in earlier work. One example incorporates ideas from payments economics similar in spirit to Freeman (1995), but the analysis looks different through the lens of the New Monetarist model. Another example incorporates existing ideas in the theory of banking emanating from Diamond and Dybvig (1983), but again some details look different. In particular, we have genuinely monetary versions of these models, which seems relevant or at least realistic since money has a big role in actual banking and payments systems.3 In Section 7, we present another application, exploring a New Monetarist approach to asset pricing. This approach emphasizes liquidity, and focuses on markets where asset trade can be complicated by various frictions, including private information.

These examples and applications illustrate the power and flexibility of the New Monetarist approach. As we hope the reader will appreciate, although the various models differ with respect to details, they share many features, and build upon consistent principles. This is true for the simplest models of monetary exchange, and the extensions to integrate banking, credit arrangements, payments mechanisms, and asset markets. We think that this is not only interesting in terms of theory, but that there are also lessons to be learned for understanding the current economic situation and shaping future policy. To the extent that the recent crisis has at its roots problems related to banking, to mortgage and other credit arrangements, or to information problems in asset markets, one cannot hope to address the issues without theories that take seriously the exchange process. Although New Keynesians have had some admirable success, not all economic problems are caused by sticky prices. Despite the suggestions of Krugmaniacs, not every answer is hanging on the Old Keynesian cross. Given this, we present our brand of Monetarism as a relevant and viable alternative for both academics and policy makers. What follows is our attempt to elaborate this position.

2 Old and New Perspectives

To understand the basic principles behind our approach, we first need to summarize some popular alternative schools of thought. This will allow us to highlight what is different about New Monetarism, and how it is useful for understanding monetary phenomena and guiding monetary policy.

2.1 Keynesianism

Keynesian economics of course originated with the General Theory in 1936. Keynes's ideas were popularized in Hicks's (1937) IS-LM model, which became enshrined in the undergraduate curriculum, and was integrated into the so-called Neoclassical Synthesis of the 1960s. New Keynesian economics, as surveyed in Clarida et al. (1999) or Woodford (2003), makes use of more sophisticated tools than Old Keynesian economists had at their disposal, but much of the language and many of the ideas are essentially the same. New Keynesianism is typically marketed as a synthesis that can be boiled down to an IS relationship, a Phillips curve, and a policy rule determining the nominal interest rate, the output gap, and the inflation rate. It is possible to derive a model featuring these equations from slightly more primitive ingredients, including preferences, but often practitioners do not bother with these details. As a matter of principle, we find this problematic, since reduced-form relations from one model need not hold once one changes the environment, but we don't want to dwell here on such an obvious point.
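As a point of reference, a stripped-down version of that three-equation system, in the spirit of Clarida et al. (1999) and Woodford (2003), can be sketched as follows (the coefficients  a,  \kappa,  \phi_{\pi},  \phi_{x}>0 are reduced-form parameters, and the symbols here are local to this sketch rather than the notation used later in the paper):

\displaystyle x_{t}=E_{t}x_{t+1}-a\left( i_{t}-E_{t}\pi_{t+1}-r_{t}^{n}\right) \qquad \text{(IS relationship)}

\displaystyle \pi_{t}=\beta E_{t}\pi_{t+1}+\kappa x_{t} \qquad \text{(Phillips curve)}

\displaystyle i_{t}=\phi_{\pi}\pi_{t}+\phi_{x}x_{t} \qquad \text{(policy rule)}

where  x_{t} is the output gap,  \pi_{t} inflation,  i_{t} the nominal interest rate, and  r_{t}^{n} the natural real rate.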

All New Keynesian models have weak foundations for the assumption at the heart of the theory: prices must be set in nominal terms, and are sticky in the sense that they cannot be changed except at times specified rather arbitrarily, or at a cost. If nominal prices can be indexed to observables - if e.g. a seller can say "my price  p increases one-for-one with aggregate  P," which does not seem especially complicated or costly - the main implications of the theory would be overturned. An implication that we find unattractive is this: agents in the model are often not doing as well as they could, in the sense that gains from trade are left on the table when exchanges are forced at the wrong prices. This is in sharp contrast to some theory, the purest of which is mechanism design, where by construction agents do as well as they can subject to constraints imposed by the environment, including technology and also incentives. There can be frictions including private information, limited commitment, etc. that make doing as well as we can fairly bad, of course. It is silly to regard the outcome as a Panglossian "best of all possible worlds," since the world could be better with fewer constraints, but at least in those theories we are not acting suboptimally given the environment.4

Despite these issues, it is commonly argued that the New Keynesian paradigm is consistent with the major revolutionary ideas developed in macroeconomics over the past few decades, such as the Lucas Critique and Real Business Cycle Theory. If we take Woodford (2003) as representing the state of the art, the main tenets of the approach are the following:

  1. The key friction that gives rise to short-run nonneutralities of money, and the primary concern of monetary policy, is sticky prices. Because some prices are not fully flexible, inflation or deflation induces relative price distortions, and this has consequences for welfare.
  2. The frictions that we encounter in relatively deep monetary economics, or even not-so-deep monetary economics, like cash-in-advance models, are at best of second-order importance. In monetary theory these frictions include explicit descriptions of specialization that make direct exchange difficult, and information problems that make credit difficult, giving rise to a fundamental role for media of exchange and to different implications for policy.
  3. There is a short-run Phillips curve trade-off between inflation and output (if not inflation and unemployment, since these theories typically do not have detailed descriptions of the labor market, with a few exceptions, like Gertler and Trigari 2008). Monetary policy can induce a short-run increase in aggregate output with an increase in inflation.
  4. The central bank is viewed as being able to set a short-term nominal interest rate, and the policy problem is presented as the choice over alternative rules for how this should be done in response to economic conditions.

We also think it is fair to say that New Keynesians tend to be supportive of current practice by central banks. Elements of the modeling approach in Woodford (2003) are specifically designed to match standard operating procedures, and he appears to find little in the behavior of central banks that he does not like. And the feeling seems to be mutual, which may be what people have in mind when they suggest that there is a consensus. Interest in New Keynesianism has become intense recently, especially in policy circles, and some economists (e.g. Goodfriend 2007) profess that New Keynesianism is the default approach to analyzing and evaluating monetary policy.

2.2 Monetarism

Old Monetarist ideas are represented in the writing of Friedman (1960,1968,1969) and Friedman and Schwartz (1963). In the 1960s and 1970s, the approach was viewed as an alternative to Keynesianism with different implications for how policy should be conducted. Friedman put much weight on empirical analysis, and the approach was often grounded only informally in theory - even if some of his work, such as the theory of the consumption function in Friedman (1957), is about microfoundations. Although there are few professed monetarists in the profession these days, the school has left a lasting impression on macroeconomics and the practice of central banking.5

The central canons of Old Monetarism include the following:

  1. Sticky prices, while possibly important in generating short-run nonneutralities, are unimportant for monetary policy.
  2. Inflation, and inflation uncertainty, generate significant welfare losses.
  3. The quantity theory of money is an essential building block. There exists a demand function for money which is an empirically stable function of a few variables.
  4. There may exist a short-run Phillips curve trade-off, but the central bank should not attempt to exploit it. There is no long-run Phillips curve trade-off (although Friedman tempered this position between 1968 and 1977 when he discussed the possibility of an upward-sloping long-run Phillips curve).
  5. Monetary policy is viewed as a process of determining the supply of money in circulation, and an optimal monetary policy involves minimizing the variability in the growth rate of some monetary aggregate.
  6. Money is any object that is used as a medium of exchange, and whether these objects are private or government liabilities is irrelevant for the analysis of monetary theory and policy.

Friedman and his followers tended to be critical of contemporary central banking practice, and this tradition was carried on through such institutions as the Federal Reserve Bank of St. Louis and the Shadow Open Market Committee. A lasting influence of monetarism is the notion that low inflation should be a primary goal of policy, which is also a principle stressed by New Keynesian economists. However, Friedman's monetary policy prescription that central banks should adhere to strict targets for the growth of monetary aggregates is typically regarded as a practical failure. Old Monetarism tended to emphasize the long run over the short run: money can be nonneutral in the short run, but exploitation of this by the central bank only makes matters worse (in part due to Friedman's infamous "long and variable lags"), and policy should focus on long-run inflation. Monetarists also tended to favor relatively simple models, as compared to the Keynesian econometric tradition. Some but definitely not all of these ideas carry over to New Monetarism.

2.3 New Monetarism

The foundations for New Monetarism can be traced to a conference at the Federal Reserve Bank of Minneapolis in the late 1970s, with the proceedings and some post-conference contributions published in Kareken and Wallace (1980). Important antecedents are Samuelson (1958), which is a legitimate model of money in general equilibrium, and Lucas (1972), which sparked the rational expectations revolution and with it a move toward incorporating serious theory in macroeconomics. Kareken and Wallace (1980) contains a diverse body of work with a common goal of moving the profession toward a deeper understanding of the role of money and the proper conduct of monetary policy. This volume spurred much research using the overlapping generations model of money, much of which was conducted by Wallace and his collaborators during the 1980s. Some findings from that research are the following:

  1. Because traditional monetarists neglect key elements of economic theory, their prescriptions for policy can go dramatically wrong (Sargent and Wallace 1982).
  2. The fiscal policy regime is critical for the effects of monetary policy (Sargent and Wallace 1981, Wallace 1981).
  3. Monetary economics can make good use of received theory in other fields, like finance and public economics (Bryant and Wallace 1979,1984).

A key principle, laid out first in the introduction to Kareken and Wallace (1980), and elaborated in Wallace (1998), is that progress can be made in monetary theory and policy analysis only by modeling monetary arrangements explicitly. In line with the arguments of Lucas (1976), to conduct a policy experiment in an economic model, the model must be invariant to the experiment under consideration. One interpretation is the following: if we are considering experiments involving the operating characteristics of the economy under different monetary policy rules, we need a model in which economic agents hold money not because it enters utility or production functions, in a reduced-form fashion, but because money ameliorates some fundamental frictions. Of course the view that monetary theory should "look frictions in the face" goes back to Hicks (1935). Notice that here we are talking about explicit descriptions of frictions in the exchange process, as opposed to frictions in the price setting process, like the nominal rigidities in Keynesian theory, where money does not help, and indeed is really the cause of the problem.

We now know that there are various ways to explicitly model frictions. Just as Old Monetarists tended to favor models that are simple, so do New Monetarists. One reason is they still like to focus more on long-run issues, such as the cost of steady state inflation, instead of business cycles. This is mainly because they tend to think the long run is more important, from a welfare perspective, but as a by-product it allows them to adopt simpler models (at least compared to many New Keynesian models, e.g. Altig et al. 2007). Overlapping generations models can be simple, although one can also complicate them as one likes. Much research in monetary theory in the last 20 years has been conducted using matching models, rather than overlapping generations models, however. These build more on ideas in search and game theory rather than general equilibrium theory. Early work includes Kiyotaki and Wright (1989,1993), which build on ideas and tools in Jones (1976) and Diamond (1982,1984).6

Matching models prove to be very tractable for many questions in monetary economics, though a key insight that eventually arose from this literature is that spatial separation per se is not the critical friction making money essential. As emphasized by Kocherlakota (1998), with credit due to earlier work by Ostroy (19xx) and Townsend (1987,1989), money is essential because it overcomes a double coincidence of wants problem in the context of limited commitment and imperfect record-keeping. As is well known, by now, perfect record keeping would imply that efficient allocations could be supported through insurance and credit markets, or various other institutions, without monetary exchange. Random bilateral matching among a large number of agents is a convenient way to generate a double coincidence problem, and also to motivate incomplete record keeping, but it is not the only way to proceed, as we discuss below.

New Monetarism is not just about the role of currency in exchange; it attempts to study a host of related institutions. An important departure from Old Monetarism is to take seriously the role of financial intermediaries and their interactions with the central bank. Developments in intermediation and payment theories over the last 25 years are critical to our understanding of credit and banking arrangements. A difference between Old and New Monetarists regarding the role of intermediation is reflected in their respective evaluations of Friedman's (1960) proposal for 100% reserve requirements on transactions deposits. His argument was based on the premise that tight control of the money supply by the central bank was key to controlling the price level. However, since transactions deposits at banks are part of what he means by money, and the money multiplier is subject to randomness, even if we could perfectly control the stock of outside money, inside money would move around unless we impose 100% reserves. Old Monetarists thus viewed 100% reserves as desirable. What this ignores, however, is that banks perform a socially beneficial function in transforming illiquid assets into liquid liabilities (transactions deposits), and 100% reserve requirements inefficiently preclude this activity.

The 1980s saw important developments in the theory of banking and financial intermediation, spurred by earlier developments in information theory. One influential contribution was the model of Diamond and Dybvig (1983), which we now understand to be a useful approach to studying banking as liquidity transformation and insurance (it does however require some auxiliary assumptions to produce anything resembling a banking panic or run; see Ennis and Keister 2008). Other work involved well-diversified intermediaries economizing on monitoring costs, including Diamond (1984) and Williamson (1986). In these models, financial intermediation is an endogenous phenomenon. The resulting intermediaries are well-diversified, process information in some manner, and transform assets in terms of liquidity, maturity or other characteristics. The theory of financial intermediation has also been useful in helping us understand the potential for instability in banking and the financial system (again see Ennis and Keister 2008), and how the structure of intermediation and financial contracting can affect aggregate shocks (Williamson 1987, Bernanke and Gertler 1989).

A relatively new sub-branch of this theory studies the economics of payments. This involves the study of payments systems, particularly among financial institutions, such as Fedwire in the US, where central banks can play an important role. See Freeman (1995) for an early contribution, and Nosal and Rocheteau (2009) for a recent survey. The key insights from this literature are related to the role played by outside money and central bank credit in the clearing and settlement of debt, and the potential for systemic risk as a result of intraday credit. Even while payment systems are working well, this area is important, since the cost of failure is potentially so great given the amount of money processed through such systems each day. New Monetarist economics not only has something to say about these issues, it is almost by definition the only approach that does. How can one hope to understand payments and settlement without modeling the exchange process?

To reiterate some of what was said earlier, New Monetarists more or less agree to and try to abide by the following principles:

  1. Useful analysis in macro and monetary economics, including policy analysis, requires sound micro economic theory, which involves using what we know from general equilibrium, as well as game theory, search theory, etc.
  2. Especially important is a clear and internally consistent description of the exchange process, and the means by which money and related institutions facilitate that process, which means the theory must be built on environments with explicit frictions.
  3. While perhaps no one model can answer all questions, in monetary economics, there are important characteristics that good models share, including internal consistency, tractability, and the right amount of abstraction.
  4. Relatively simple models are preferred, in part due to the relative emphasis on the longer run.
  5. Rigorous models of financial intermediation are important for monetary theory and policy: credit, banking, and payment systems matter.

We now develop a series of models leading to a useful benchmark framework, after which we present several variations and put them to work in different applications.

3 Recent Monetary Theory

The simplest model in the spirit of the principles laid out above is a version of first-generation monetary search theory, along the lines of Kiyotaki and Wright (1993), which is a stripped-down version of Kiyotaki and Wright (1989), and uses methods from early search equilibrium models, especially Diamond (1982). Such a model makes strong assumptions, which will be relaxed later, but even with these assumptions in place the approach captures something of the essence of money as an institution that facilitates exchange. What makes exchange difficult in the first place is the presence of frictions, including a double coincidence problem generated by specialization and random matching, combined with limited commitment and imperfect memory. Frictions like this, or at least informal descriptions thereof, have been discussed in monetary economics since Smith, Jevons, Menger, Hicks, etc. The goal of recent theory is to formalize the ideas, to see which are valid under what assumptions, and to develop new insights.7

3.1 The Simplest Model

Time is discrete and continues forever. There is a  [0,1] continuum of infinite-lived agents. To make the exchange process interesting, these agents specialize in production and consumption of differentiated commodities and trade bilaterally. It is a venerable idea that specialization is intimately related to monetary exchange, so we want this in the environment. Although there are many ways to set this up, we simply assume the following: There is a set of goods that, for now, are indivisible and nonstorable. Each agent produces at cost  C\geq0 goods in some subset, and derives utility  U>C from consuming goods in a different subset. It is formally equivalent, but for some applications it helps the discussion, to consider a pure exchange scenario. Thus, if each agent is endowed with a good each period that he can consume for utility  C, but he may meet someone with another good that gives him utility  U, the analysis is basically the same, but  C is interpreted as an opportunity cost rather than a production cost.

Let  \alpha be the probability of meeting someone each period. There are different types of potential trade meetings. Let  \sigma be the probability that you like what your partner can produce but not vice versa - a single coincidence meeting - and  \delta the probability that you like what he can produce and vice versa - a double coincidence meeting.8 The environment is symmetric, and for the representative agent, the efficient allocation clearly involves producing whenever someone in a meeting likes what his partner can produce. Let  V^{C} be the payoff from this cooperative allocation, described recursively by

\displaystyle V^{C}=\alpha\sigma(U+\beta V^{C})+\alpha\sigma(-C+\beta V^{C})+\alpha\delta(U-C+\beta V^{C})+(1-2\alpha\sigma-\alpha\delta)\beta V^{C}=\beta V^{C}+\alpha(\sigma+\delta)(U-C).

If agents could commit, ex ante, they would all agree to execute the efficient allocation. If they cannot commit, we have to worry about ex post incentive conditions.

The binding condition is that to get agents to produce in single-coincidence meetings, as opposed to simply walking away, we require  -C+\beta V^{C}\geq V^{D}, where  V^{D} is the deviation payoff, depending on what punishments we have at our disposal. Suppose we can punish a deviator by allowing him in the future to only trade in double-coincidence meetings. It is interesting to consider other assumptions about feasible punishments, but this one has a nice interpretation in terms of what a mechanism designer can see and do. We might like e.g. to trigger to autarky - no trade at all - after a deviation, but it is not so obvious we can enforce this in double-coincidence meetings. Having trade only in double-coincidence meetings - a pure barter system - is self-enforcing (it is an equilibrium), with payoff  V^{B}=\alpha \delta(U-C)/(1-\beta). If we take  V^{D}=\beta V^{B}, so that a deviator produces nothing in the current meeting and trades only by barter thereafter, algebra reduces the relevant incentive condition to

\displaystyle \left[ 1-\beta(1-\alpha\sigma)\right] C\leq\beta\alpha\sigma U. \qquad (1)
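Spelling out the algebra behind (1): using  V^{C}=\alpha(\sigma+\delta)(U-C)/(1-\beta) and  V^{B}=\alpha\delta(U-C)/(1-\beta), the condition  -C+\beta V^{C}\geq\beta V^{B} becomes

\displaystyle -\left( 1-\beta\right) C+\beta\alpha(\sigma+\delta)(U-C)\geq\beta\alpha\delta(U-C)\quad\Longleftrightarrow\quad\beta\alpha\sigma(U-C)\geq\left( 1-\beta\right) C,

and collecting terms in  C delivers (1).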

If every potential trade meeting involves a double-coincidence, i.e. if  \sigma=0, then pure barter suffices to achieve efficiency and there is no incentive problem. But with  \sigma>0, given imperfect commitment, (1) tells us that we can achieve efficiency iff production is not too expensive ( C is small), search and specialization frictions are not too severe ( \alpha and  \sigma are big), etc. If (1) holds, one can interpret exchange as a credit system, as discussed in Williamson and Sanchez (2009), but there is no role for money. A fundamental result in Kocherlakota (1998) is that money is not essential - i.e. it does nothing to expand the set of incentive-feasible allocations - when we can use trigger strategies as described above. Obviously this requires that deviations can be observed and recalled. Lack of perfect monitoring or record keeping, often referred to as incomplete memory, is necessary for money to be essential. There are several ways to formalize this. Given a large number of agents that match randomly, suppose that they observe only what happens in their own meetings, not other meetings. Then, if an agent deviates, the probability someone he meets later will know it is zero.9 Hence, no one ever produces in single-coincidence meetings.

In this case, we are left with only direct barter, unless we introduce money. Although we soon generalize this, for now, to make the point starkly, assume that there are  M\in(0,1) units of some object that agents can store in units  m\in\{0,1\}. This object is worthless in consumption and does not aid in production; so if it is used as a medium of exchange, it is by definition fiat money (Wallace 1980). Let  V_{m} be the payoff to an agent with  m\in\{0,1\}. Then

\displaystyle V_{0}=\beta V_{0}+\alpha\delta(U-C)+\alpha\sigma M\max_{\pi}\pi\left\{ -C+\beta(V_{1}-V_{0})\right\} , \qquad (2)

since someone without money can still barter in double-coincidence meetings, and now has another option: if he meets someone with money who likes his good but cannot produce anything he likes, he could trade for cash, and  \pi is the probability he agrees to do so. Similarly,
\displaystyle V_{1}=\beta V_{1}+\alpha\delta(U-C)+\alpha\sigma(1-M)\Pi\left\{ U+\beta\left( V_{0}-V_{1}\right) \right\} , \qquad (3)

since an agent with money can still barter, and now he can also make a cash offer in single-coincidence meetings, which is accepted with probability  \Pi .10

The best response condition gives the maximizing choice of  \pi, taking  \Pi as given, and Nash equilibrium is a fixed point. More completely, equilibrium is a list  \{\pi,V_{0},V_{1}\} satisfying (2)-(3) and the best response condition. Obviously  \pi=0 is always an equilibrium, and  \pi=1 is an equilibrium iff

\displaystyle \left[ 1-\beta+\beta\alpha\sigma(1-M)\right] C\leq\beta\alpha\sigma(1-M)U
(there are mixed strategy equilibria but one can argue they are not robust, in several senses, as in Shevchenko and Wright 2004). It is easy to see that there is a monetary equilibrium  \pi=1 iff  C is below an upper bound. This bound is less than the one we had when we could use triggers. Moreover, even if we can support  \pi=1, payoffs are lower than when we had triggers. So when monitoring or memory is bad, monetary exchange may allow us to do better than barter, but not as well as perfect monitoring and memory.
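To see the comparison with the trigger case explicitly, write both conditions as an upper bound on  C:

\displaystyle C\leq\bar{C}(z)\equiv\frac{\beta zU}{1-\beta+\beta z},

with  z=\alpha\sigma under triggers (condition (1)) and  z=\alpha\sigma(1-M) in the monetary equilibrium. Since  \bar{C}(z) is increasing in  z and  \alpha\sigma(1-M)<\alpha\sigma, the monetary bound on  C is strictly tighter.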

The model is obviously rudimentary, but it captures the idea that money can be a socially beneficial institution that helps facilitate exchange. This contrasts with cash-in-advance models, where money is a hindrance to trade - or worse, sticky-price models, where money plays no role except that we are forced to quote prices in dollars and not allowed to change them easily. Contrary to standard asset-pricing theory, there are natural equilibria where an intrinsically worthless object can be valued as a medium of exchange, or for its liquidity. Such equilibria have good welfare properties, relative to pure barter, even if they may not achieve the first best. The fact that  \pi=0 is always an equilibrium points to the tenuousness of fiat money (Wallace 1980). Yet it is also robust, in that equilibrium with  \pi=1 survives even if we endow the fiat object with some bad characteristics, like a transaction or storage cost, or if we tax it. Many of these and other predictions of the model ring true.11

3.2 Prices

Prices were fixed up to now, since every trade was a one-for-one swap. Beginning the next generation of papers in this literature, Shi (1995) and Trejos-Wright (1995) endogenize prices by allowing divisible goods while maintaining the assumption  m\in\{0,1\}. Let  x denote the output given by the producer to the consumer in exchange for currency. Preferences are given by  U=u(x) and  C=c(x), where  u^{\prime}>0,  c^{\prime}>0,  u^{\prime \prime}<0,  c^{\prime\prime}\geq0, and  u(0)=c(0)=0. For future reference, let  x^{\ast} solve  u^{\prime}(x^{\ast})=c^{\prime}(x^{\ast}). It is easy to show the efficient outcome involves an agent producing  x^{\ast} in every meeting where his partner likes his good. To facilitate the presentation, for now we set  \delta=0, so that all trade meetings are single-coincidence meetings and there is no direct barter; we return to  \delta>0 below. Also, we focus on equilibria where money is accepted with probability  \pi=1.

To determine  x, consider the generalized Nash bargaining solution, with the bargaining power of the consumer given by  \theta and threat points given by continuation values.12 Thus,  x solves:

\displaystyle \max\left[ u(x)+\beta V_{0}-\beta V_{1}\right] ^{\theta}\left[ -c(x)+\beta V_{1}-\beta V_{0}\right] ^{1-\theta}. \qquad (4)

A stationary equilibrium is a list  \{x,V_{0},V_{1}\} such that: given  V_{0} and  V_{1},  x solves (4); and given  x,  V_{0} and  V_{1} solve (2) and (3). Consider the case  \theta =1, which means buyers make take-it-or-leave-it offers. Then  x solves
\displaystyle c(x)=\frac{\beta\alpha\sigma(1-M)u(x)}{1-\beta+\beta\alpha\sigma(1-M)}.
This holds at  x=0, which is a nonmonetary equilibrium, and at a unique monetary equilibrium  x>0. It is also easy to check  \partial x/\partial M<0 , so the price level  p=1/x increases with the number of buyers;  \partial x/\partial\alpha>0, so  p increases with search frictions; etc. Also, a straightforward generalization implies that when  \delta>0 there are generically either multiple monetary equilibria or no monetary equilibria.
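As a purely numerical illustration of the  \theta=1 case, here is a minimal sketch; the functional forms  u(x)=2\sqrt{x},  c(x)=x and all parameter values are illustrative, not calibrated:

import numpy as np
from scipy.optimize import brentq

# Illustrative (made-up) parameters: discount factor, meeting rate, single-coincidence probability
beta, alpha, sigma = 0.95, 0.5, 0.3

def u(x): return 2.0 * np.sqrt(x)   # utility of DM consumption
def c(x): return x                  # cost of DM production

def x_monetary(M):
    # With theta = 1 and delta = 0, x solves c(x) = A(M) u(x), where
    # A(M) = beta*alpha*sigma*(1-M) / [1 - beta + beta*alpha*sigma*(1-M)];
    # x = 0 is always the nonmonetary equilibrium, so we look for the positive root.
    A = beta * alpha * sigma * (1 - M) / (1 - beta + beta * alpha * sigma * (1 - M))
    return brentq(lambda x: c(x) - A * u(x), 1e-9, 100.0)

for M in (0.3, 0.5, 0.7):
    x = x_monetary(M)
    print(f"M = {M:.1f}: x = {x:.3f}, price level p = 1/x = {1/x:.3f}")
# Output: x falls and p = 1/x rises as M increases, consistent with dx/dM < 0 in the text.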

In the symmetric case  \theta=1/2 and  M=1/2, one can show that in any equilibrium  x<x^{\ast}, so that monetary exchange cannot achieve the efficient allocation. However,  x\rightarrow x^{\ast} as  \beta\rightarrow1. To understand this, consider an Arrow-Debreu version of the environment, which means the same preferences and technology but no frictions. In that economy, given agents can turn their production into instantaneous consumption through the market, it can easily be seen that they choose  x=x^{\ast}. But in our economy, with frictions, they must turn production into cash, which can only be used in future single-coincidence meetings with someone in need of money. Thus, as long as  \beta<1, our agents are willing to produce less than they would in a frictionless model. Now, one can get  x to increase, by raising  \theta, e.g., and for big enough  \theta we sometimes have  x>x^{\ast}. Still, the model illustrates clearly how frictions and discounting drive a wedge between the return on currency and the marginal rate of substitution, affecting the price level and allocation, as will come up again below.13

3.3 Distributions

We now relax the restriction  m\in\{0,1\}. There are various approaches, but here we use the one in Molico (2006), which allows  m\in\lbrack0,\infty). This means that we have to deal with the endogenous distribution of money across agents,  F(m), while previously this was trivial, since  M agents had  m=1 and  1-M had  m=0. Now, in a single-coincidence meeting where the consumer has  m and the producer has  \tilde{m}, let  x(m,\tilde{m}) be the amount of output and  d(m,\tilde{m}) money traded. Again setting  \delta=0, for expositional purposes, the generalization of (2)-(3) is

\displaystyle V(m)=\beta V(m)+\alpha\sigma\int\left\{ u[x(m,\tilde{m})]+\beta V\left[ m-d(m,\tilde{m})\right] -\beta V(m)\right\} dF(\tilde{m})+\alpha\sigma\int\left\{ -c[x(\tilde{m},m)]+\beta V\left[ m+d(\tilde{m},m)\right] -\beta V(m)\right\} dF(\tilde{m}). \qquad (5)

The first term is the expected value of buying from a producer with  \tilde{m} dollars, and the second the expected value of selling to a consumer with  \tilde{m} dollars (notice how the roles of  m and  \tilde{m} are reversed in the two integrals).14

In this model, we can easily add injections of new currency, say by lump sum or proportional transfers, which was not so easy with  m\in\{0,1\}. With lump sum transfers, e.g. we simply change  m on the RHS to  m+\mu M, where  M is the aggregate money supply, governed by  M_{t+1}=(1+\mu)M_{t}. This greatly extends the class of policies that can be analyzed, but for now we keep  M=\int mdF(m) fixed. Then a stationary equilibrium is a list of functions  \left\{ V(\cdot),x(\cdot),d(\cdot),F(\cdot)\right\} such that: given  x(m,\tilde{m}),  d(m,\tilde{m}) and  F(m),  V(m) solves (5); given  V(m),  x(m,\tilde{m}) and  d(m,\tilde{m}) are determined by some bargaining solution, such as

\displaystyle \max\left[ u(x)+\beta V(m-d)-\beta V(m)\right] ^{\theta}\left[ -c(x)+\beta V(\tilde{m}+d)-\beta V(\tilde{m})\right] ^{1-\theta}; \qquad (6)

where the maximization is s.t.  d\leq m since the consumer cannot feasibly turn over more money than he has; and given  x(m,\tilde{m}) and  d(m,\tilde{m}),  F(m) solves a stationary condition omitted in the interest of space. From this we can calculate many other interesting objects, such as the distribution of  p(m,\tilde{m})=d(m,\tilde{m})/x(m,\tilde{m}).

This model is unfortunately hard to handle. Not much can be said about equilibrium analytically, and it is even hard to solve numerically. Rather than go into computational details, we offer the following intuition. Typical heterogeneous-agent, incomplete-market, macro models of the sort analyzed by Huggett (1993) or Krusell and Smith (1998) also have an endogenous distribution as a state variable, but the agents in those models do not care about this distribution per se. They only care about market prices. Of course prices depend on the distribution, but one can typically characterize prices accurately as functions of a small number of moments. In a search model, agents care about  F(m) directly, since they are trading with each other and not just against their budget equations. Still, Molico computes the model, and uses it to discuss several interesting issues, including a welfare-enhancing effect of inflation achieved through lump sum transfers that serve as partial insurance, but we do not have space to get into these results.15

4 More Recent Monetary Theory

Some models use devices that allow one to avoid having to track the distribution of money. There are two main approaches. The first, dating back to Shi (1997), uses the assumption of large households to render the money distribution degenerate. Thus, each decision making unit consists of many members who search randomly, as in the above models, but at the end of each trading round they return to the homestead where they share the money they bring back (and sometimes also consumption). Loosely speaking, by the law of large numbers, each household starts the next trading round with the same  m. The large household is a natural extension for random-matching models of the "worker-shopper pair" discussed in the cash-in-advance literature (Lucas 1980). Several useful papers use this environment, many of which are cited in Shi (2006). We will, however, instead focus on the model in Lagos and Wright (2005), which uses markets instead of large families.

One reason to use the Lagos-Wright model is that it allows us to address a variety of issues, in addition to rendering the distribution of money tractable without extreme assumptions like  m\in\{0,1\}. In particular, it also serves to reduce the gap between monetary theory with some claim to microfoundations and standard macro. As Azariadis (1993) put it, "Capturing the transactions motive for holding money balances in a compact and logically appealing manner has turned out to be an enormously complicated task. Logically coherent models such as those proposed by Diamond (1982) and Kiyotaki and Wright (1989) tend to be so removed from neoclassical growth theory as to seriously hinder the job of integrating rigorous monetary theory with the rest of macroeconomics." And as Kiyotaki and Moore (2001) put it, "The matching models are without doubt ingenious and beautiful. But it is quite hard to integrate them with the rest of macroeconomic theory - not least because they jettison the basic tool of our trade, competitive markets."

The idea in Lagos and Wright (2005) is to bring the jettisoned markets back on board in a way that maintains an essential role for money and makes the model closer to mainstream macro. At the same time, rather than complicating matters, integrating some competitive markets with some search markets actually makes the analysis much easier. We also believe that this is a realistic way to think about economic activity. Clearly, in reality, there is some activity in our economic lives that is relatively centralized - it is fairly easy to trade, credit is available, we take prices as given, etc. - which can be well captured by the notion of a competitive market. But there is also much activity that is relatively decentralized - it is not easy to find trading partners, it can be hard to get credit, etc. - as captured by search theory. Of course, one might imagine that there are various ways to integrate search and competitive markets. Here we present one.

4.1 A Benchmark Model

Each period, suppose agents spend one subperiod in a frictionless centralized market CM, as in standard general equilibrium theory, and one in a decentralized market DM with frictions as in the search models discussed above. Sometimes the setup is described by saying the CM convenes during the day and the DM at night; this story is not important for the theory, and we only use it when it helps keep the timing straight, e.g. in modeling payments systems.16 There is one consumption good  X in the CM and another  x in the DM, but it is easy to have  x come in many varieties, or to interpret  X as a vector as in standard GE theory (Rocheteau et al. 2008). For now  X and  x are produced one-for-one using labor  H and  h, so the real wage is  w=1. Preferences are separable over time, and across a period encompassing one CM and DM, described by  \mathcal{U}(X,H,x,h). What is important for tractability, although not for the theory, in general, is quasi-linearity:  \mathcal{U} should be linear in either  X or  H. With general preferences, the model requires numerical methods, as in Chiu and Molico (2007); with quasi-linearity, we can derive interesting results analytically.17

For now we actually assume

\displaystyle \mathcal{U}=U(X)-H+u(x)-c(h),
but later we consider nonseparable  \mathcal{U}. If we shut down the CM, or otherwise fix  X and  H, these are the same preferences used in Molico, and the models become equivalent. Since the Molico model collapses to Shi-Trejos-Wright when we impose  m\in\{0,1\}, and to Kiyotaki-Wright when we further make  x indivisible, these ostensibly different setups can be interpreted as special cases of one framework. Faig (2006) also argues that the alternating market model and the large household model in Shi (1997) can be encompassed in a more general setup. We think this is good, but not because we want one all-purpose vehicle for every issue in monetary economics. Rather, we do not want people to get the impression there is a huge set of inconsistent monetary models out there - the ones reviewed so far, as well as the extensions below to incorporate Diamond-Dybvig (1983) banking, a Freeman (1995) payment system, etc. all use similar fundamental building blocks, even if some applications make certain special assumptions.18

In the DM, the value function  V(\cdot) would be described exactly by (5) in the last section, except for one thing: wherever  \beta V(\cdot) appears on the RHS, replace it with  W(\cdot), since before going to the next DM agents now get to visit the CM, where  W(\cdot) denotes the payoff. In particular,

\displaystyle W(m)=\max_{X,H,\hat{m}}\left\{ U(X)-H+\beta V(\hat{m})\right\} \quad\text{s.t.}\quad X=\phi(m-\hat{m})+H-T,

where  \phi is the value of money (the inverse of the nominal price level) in the CM and  T is a lump sum tax, both taken parametrically. Assuming an interior solution (see Lagos-Wright for details), we can eliminate  H and write
\displaystyle W(m)=\phi m-T+\max_{X}\left\{ U(X)-X\right\} +\max_{\hat{m}}\left\{ -\phi\hat{m}+\beta V(\hat{m})\right\} .
From this several results are immediate:  W(m) is linear with slope  \phi;  X=X^{\ast} where  U^{\prime}(X^{\ast})=1; and  \hat{m} is independent of wealth  \phi m-T.

Based on this last result, we should expect, and we would be right, a degenerate  F(\hat{m}) - i.e. everyone takes the same  \hat{m}=M out of the CM, regardless of the  m they brought in.19 Using this plus  W^{\prime}(m)=\phi, and replacing  \beta V(\cdot) with  W(\cdot), (5) simplifies rather dramatically to

\displaystyle V(m)=W(m)+\alpha\sigma\left\{ u[x(m,M)]-\phi d(m,M)\right\} +\alpha \sigma\left\{ -c[x(M,m)]+\phi d(M,m)\right\} . \qquad (7)

Effectively, with quasi-linearity, the CM is a settlement subperiod where agents reset their liquidity positions. Without this feature, the analysis is interesting but a lot more difficult, and we think it is nice to have a benchmark model that is tractable. By analogy, while models with heterogeneous agents and incomplete markets are obviously interesting, it is nice to have the basic neoclassical growth theory with complete markets and homogeneous agents as a benchmark. Since serious monetary theory with complete markets and homogeneous agents is a non-starter, we need to find another benchmark. The DM-CM model with quasi-linearity is a candidate.

But this is not all we get in terms of tractability. Replacing  \beta V(\cdot) with  W(\cdot) and using  W^{\prime}(m)=\phi, the bargaining solution (6) reduces to20

\displaystyle \max\left[ u(x)-\phi d\right] ^{\theta}\left[ -c(x)+\phi d\right] ^{1-\theta}\quad\text{s.t.}\quad d\leq m.
It is easy to show the constraint binds. Inserting  d=m, taking the FOC for  x, and rearranging, we get  \phi m=g(x), where
\displaystyle g(x)\equiv\frac{\theta c(x)u^{\prime}(x)+(1-\theta)u(x)c^{\prime}(x)}{\theta u^{\prime}(x)+(1-\theta)c^{\prime}(x)}. \qquad (8)

This expression may look complicated but it is very easy to use, and simplifies a lot in some special cases - e.g.  \theta =1 implies  g(x)=c(x) , and real balances paid to the producer  \phi m exactly compensate him for his cost. More generally, the producer gets some share of the gains from trade, depending on  \theta, which will be important below. Notice  \partial x/\partial m=\phi/g^{\prime}(x)>0, so bringing more money increases DM consumption, but in a nonlinear way unless  \theta =1 and  c(x)=x.

We have established  d(m,\tilde{m})=m and  x(m,\tilde{m}) depends on  m but not  \tilde{m}. Differentiating (7), we now get

\displaystyle V^{\prime}(m)=(1-\alpha\sigma)\phi+\alpha\sigma\phi u^{\prime}(x)/g^{\prime}(x). \qquad (9)

The marginal benefit of DM money is the value of carrying it into the next CM with probability  1-\alpha\sigma, plus the value of spending it on  x with probability  \alpha\sigma. Being careful with time, we update this one period and combine it with the FOC from the CM,  \phi=\beta V^{\prime}(\hat{m}), to arrive at
\displaystyle \phi_{t}=\beta\phi_{t+1}\left[ 1+\ell(x_{t+1})\right] ,
where  \ell(x)\equiv\alpha\sigma\left[ u^{\prime}(x)/g^{\prime}(x)-1\right] . Notice  \ell(x) is a liquidity premium, giving the marginal value of spending a dollar, as opposed to carrying it forward, times the probability  \alpha\sigma that one spends it. Using the bargaining solution  \phi m=g(x) plus market clearing  m=M, the previous condition can be written
\displaystyle \frac{g(x_{t})}{M_{t}}=\beta\frac{g(x_{t+1})}{M_{t+1}}\left[ 1+\ell(x_{t+1})\right] . \qquad (10)

An equilibrium can be defined as a list including  V(\cdot),  W(\cdot),  x(\cdot), and so on, satisfying the obvious conditions (see Lagos-Wright), but (10) reduces all this to a simple difference equation determining paths for  x, given a path for  M. Here we focus on steady states, where  x and  \phi M are constant.21 For this to make sense, we impose  M_{t+1}=(1+\mu)M_{t} with  \mu constant. Of course, one has to also consider the consolidated monetary-fiscal budget constraint  G=T+\mu\phi M, where  G is government consumption (in the CM). But notice that it does not matter for (10) whether changes in  M are offset by changing  T or  G. Individuals would of course prefer lower taxes, given  G does not enter utility, but this does not affect their decisions about real balances or consumption in our quasi-linear model. We actually do not have to specify how money transfers are accomplished for the purpose of describing equilibrium  x and  \phi.

In steady state, (10) simplifies to  1+\mu=\beta\left[ 1+\ell (x)\right] , which yields  x as a function of the money growth (equals inflation) rate  \mu. Or, if we price real and nominal bonds between two meetings of the CM, assuming these bonds cannot be traded in the DM, maybe because they are merely book entries that cannot be transferred, we can get the real and nominal interest rates  1+r=1/\beta and  1+i=(1+\mu)/\beta and rewrite the steady state condition as

\displaystyle \ell(x)=i. \qquad (11)

This equates the marginal benefit of liquidity to its cost, the nominal rate. It is equivalent for policy makers here to set either money growth, inflation, or the nominal rate. Obviously the initial stock  M_{0} is irrelevant for the real allocation (money is neutral), but the growth rate is not (it is not super neutral). These are standard properties shared by many monetary models, including standard overlapping-generations, cash-in-advance, and money-in-the-utility-function constructs.

In what follows we assume  i>0, although we do consider the limit as  i\rightarrow0 (it is not possible to have  i<0 in equilibrium). Existence of a monetary steady state, i.e. an  x>0 such that  \ell(x)=i, is straightforward given standard assumptions like  u^{\prime}(0)=\infty. Uniqueness is more complicated because  \ell(x) is not generally monotone, except under strong assumptions, like  \theta\approx1, or decreasing absolute risk aversion. But Wright (2009) establishes that there is a unique monetary steady state even if  \ell(x) is not monotone. Given this, DM output  x is unambiguously decreasing in  i, as is total output, since CM  X=X^{\ast} is independent of  i.22 For a given policy, one can also show  x is increasing in consumer bargaining power  \theta, the single-coincidence probability  \alpha\sigma, etc. One can also show  x<x^{\ast} for all  i>0, for any  \theta. In fact,  x=x^{\ast} only in the limit when  i=0 and we set  \theta =1. The former condition,  i=0, is the Friedman rule, and is standard. The latter,  \theta =1, is a version of the Hosios (1990) condition describing how to efficiently split the surplus, and this does not appear in theories that do not have bargaining.

To understand this, note that in general there is a holdup problem in money demand, analogous to the usual problem with ex ante investments and ex post negotiations. Agents make an investment here when they acquire cash, which pays off in single-coincidence meetings since it allows trade to occur. But if  \theta<1 producers capture some of the gains from trade, leading agents to under invest. The Hosios condition tells us that investment is efficient when the bargaining solution delivers a payoff to the investor commensurate with his contribution to the total surplus, which in this case means  \theta =1. This is not merely a theoretical detail. In calibrated versions of the model, the welfare cost of inflation is an order of magnitude bigger than found in the reduced-form models (e.g. Cooley and Hansen 1989 or Lucas 2000), leading New Monetarists to rethink some traditional policy conclusions. As there is not space to present these results in detail, we refer readers to Craig and Rocheteau (2008) for a survey.
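To illustrate these properties numerically, here is a minimal sketch of the steady state condition (11) under illustrative functional forms ( u(x)=2\sqrt{x},  c(x)=x, so  x^{\ast}=1) and made-up parameters:

import numpy as np
from scipy.optimize import brentq

alpha, sigma = 0.5, 0.5            # alpha*sigma is the single-coincidence probability (made up)

def u(x):  return 2.0 * np.sqrt(x) # DM utility
def up(x): return 1.0 / np.sqrt(x) # u'(x)
def c(x):  return x                # DM cost
def cp(x): return 1.0              # c'(x)

xstar = 1.0                        # u'(x*) = c'(x*) for these functional forms

def g(x, theta):
    # Nash bargaining transfer of real balances, equation (8)
    return (theta * c(x) * up(x) + (1 - theta) * u(x) * cp(x)) / (theta * up(x) + (1 - theta) * cp(x))

def gprime(x, theta, h=1e-6):
    return (g(x + h, theta) - g(x - h, theta)) / (2 * h)

def ell(x, theta):
    # Liquidity premium: ell(x) = alpha*sigma*[u'(x)/g'(x) - 1]
    return alpha * sigma * (up(x) / gprime(x, theta) - 1.0)

def x_steady(i, theta):
    # Solve ell(x) = i for DM output x in a monetary steady state
    return brentq(lambda x: ell(x, theta) - i, 1e-2, xstar - 1e-6)

for theta in (1.0, 0.8):
    for i in (0.10, 0.04, 0.001):
        print(f"theta = {theta:.1f}, i = {i:.3f}: x = {x_steady(i, theta):.3f} (x* = {xstar})")
# x falls as i rises; with theta = 1, x approaches x* as i -> 0, while with theta < 1
# x stays below x* even near i = 0 -- the holdup problem discussed above.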

4.2 Money and Capital

Because of worries mentioned above about this kind of theory being "so removed" from mainstream macro, we sketch an extension to include capital as in Aruoba et al. (2009). In this version, capital  K is used as a factor of production in both markets, but it does not compete with  M as a medium of exchange in the DM. To motivate this, it is easy enough to assume  K is not portable, making it hard to trade directly in the DM, but of course this does not explain why claims to capital cannot circulate. On the one hand, this is no different from the result that agents cannot pay in the DM using claims to future endowment or labor income: this can be precluded by imperfect commitment and monitoring. On the other hand, if there is trade in capital in the CM, one can imagine agents exchanging certified claims on it that might also circulate in the DM. One approach is to introduce informational frictions, so that claims to  K are difficult to recognize (perhaps they can be counterfeited) in the DM even if they can be verified in the CM; we defer a detailed analysis of this idea until later.23

The CM technology produces output  F(K,H) that can be allocated to consumption  X or investment; the DM technology is represented by a cost function  c(x,k) that gives an agent's disutility of producing  x when he has  k, where lower (upper) case denotes individual (aggregate) capital. The CM problem is

\displaystyle W(m,k) \displaystyle =\max\limits_{X,H,\hat{m},\hat{k}}\left\{ U(X)-H+\beta V(\hat {m},\hat{k})\right\} (12)
st \displaystyle X \displaystyle =\phi(m-\hat{m})+w\left( 1-t_{h}\right) H+\left[ 1+\left( q-\Delta\right) \left( 1-t_{k}\right) \right] k-\hat{k}-T,    

where  q is the rental rate,  \Delta the depreciation rate, and we add income taxes (important for quantitative work). Eliminating  H, FOC for  (X,\hat{m},\hat{k}) are
\displaystyle U^{\prime}(X) \displaystyle =\dfrac{1}{w\left( 1-t_{h}\right) }    
\displaystyle \dfrac{\phi}{w\left( 1-t_{h}\right) } \displaystyle =\beta V_{1}(\hat{m},\hat {k}) (13)
\displaystyle \dfrac{1}{w\left( 1-t_{h}\right) } \displaystyle =\beta V_{2}(\hat{m},\hat {k}).    

Generalizing what we found in the baseline model,  (\hat{m},\hat{k}) is independent of  (m,k), and  W is linear with  W_{1}(m,k)=\phi/w\left( 1-t_{h}\right) and  W_{2}(m,k)=\left[ 1+\left( q-\Delta\right) \left( 1-t_{k}\right) \right] /w\left( 1-t_{h}\right) .

In the DM, instead of assuming that agents may be consumers or producers depending on who they meet, Aruoba et al. proceed as follows. After the CM closes, agents draw preference and technology shocks determining whether they can consume or produce, with  \gamma denoting the probability of being a consumer, and also the probability of being a producer. Then the DM opens and consumers and producers are matched bilaterally. This story helps motivate why capital cannot be used for DM payments: one can say that it is fixed in place physically, and consumers have to travel without their capital to producers' locations to trade. Thus, producers can use their capital as an input in the DM but consumers cannot use their capital as payment. In any case, with preference and technology shocks, the equations actually look exactly the same as what we had with random matching and specialization, except  \gamma replaces  \alpha\sigma.

One can again show  d=m, so the Nash bargaining outcome depends on the consumer's  m but not the producer's  M, and on the producer's  K but not the consumer's  k. Abusing notation slightly,  x=x(m,K) solves  g(x,K)=\phi m/w\left( 1-t_{h}\right) , where

\displaystyle g(x,K)\equiv\frac{\theta c(x,K)u^{\prime}(x)+(1-\theta)u(x)c_{1}(x,K)}{\theta u^{\prime}(x)+(1-\theta)c_{1}(x,K)}%
generalizes (8). Then we have the following version of (7)
\displaystyle V(m,k) \displaystyle =W(m,k)+\gamma\left\{ u\left[ x(m,K)\right] -\frac{\phi m}{w\left( 1-t_{h}\right) }\right\}    
  \displaystyle +\gamma\left\{ \frac{\phi M}{w\left( 1-t_{h}\right) }-c\left[ x(M,k),k\right] \right\} .    

Inserting  V_{1} and  V_{2}, market clearing  k=K and  m=M, and equilibrium prices  \phi=w\left( 1-t_{h}\right) g(x,K)/M,  q=F_{1}(K,H), and  w=F_{2}(K,H), into (13), we get
\displaystyle U^{\prime}(X_{t}) \displaystyle =\frac{1}{\left( 1-t_{h}\right) F_{2}(K_{t},H_{t})} (14)
\displaystyle \frac{g(x_{t},K_{t})}{M_{t}} \displaystyle =\frac{\beta g(x_{t+1},K_{t+1})}{M_{t+1}}\left[ 1-\gamma+\gamma\frac{u^{\prime}(x_{t+1})}{g_{1}(x_{t+1},K_{t+1})}\right] (15)
\displaystyle U^{\prime}(X_{t}) \displaystyle =\beta U^{\prime}(X_{t+1})\left\{ 1+\left[ F_{1}(K_{t+1},H_{t+1})-\Delta\right] \left( 1-t_{k}\right) \right\} (16)
  \displaystyle -\beta\gamma\left[ c_{2}(x_{t+1},K_{t+1})-c_{1}\left( x_{t+1},K_{t+1}\right) \frac{g_{2}(x_{t+1},K_{t+1})}{g_{1}(x_{t+1},K_{t+1})}\right] .    

Finally, we have the resource constraint
\displaystyle X_{t}+G=F(K_{t},H_{t})+(1-\Delta)K_{t}-K_{t+1}. (17)

Equilibrium is defined as (positive, bounded) paths for  \{x,X,K,H\} satisfying (14)-(17), given monetary and fiscal policy, plus an initial condition  K_{0}. As a special case, in nonmonetary equilibrium we have  x=0 while  \{X,H,K\} solves the system ignoring (15) and setting the last term in (16) to 0. These are exactly the equilibrium conditions for  \{X,H,K\} in the standard (nonmonetary) growth model described in e.g. Hansen (1986).24 In monetary equilibria, we get something even more interesting. The last term in (16) generally captures the idea that if a producer buys an extra unit of capital in the CM, his marginal cost is lower in the DM for a given  x, but  x increases as an outcome of bargaining. This is a holdup problem on investment, parallel to the one on money demand discussed above. With a double holdup problem there is no value of  \theta that delivers efficiency, which has implications for the model's empirical performance and welfare predictions.25
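
As a check on how this nests the baseline, consider a steady state in which the money stock grows at a constant gross rate  \mu, real variables are constant, and, as in the baseline, the nominal rate satisfies  1+i=\mu/\beta. Then (15) collapses to

\displaystyle i=\gamma\left[ \frac{u^{\prime}(x)}{g_{1}(x,K)}-1\right] ,
which is the baseline money demand condition  \ell(x)=i with  \gamma in place of  \alpha\sigma and  g_{1}(x,K) in place of  g^{\prime}(x).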

Aruoba et al. (2009) compare calibrated versions of the model with versions that assume price taking instead of bargaining. Interestingly, the price-taking version generates a much bigger effect of monetary policy on investment, basically because in the bargaining version  K is relatively low and unresponsive to what happens in the DM due to the holdup problems. In some versions, the effect of inflation on investment is quite sizable compared to what has been found in earlier work using short cuts like cash-in-advance specifications (e.g. Cooley and Hansen 1989). One can also study the interaction between fiscal and monetary policy. We cannot get into the quantitative results in detail here, but we do want to emphasize that it is not so hard to integrate modern monetary theory, with explicit references to search, bargaining, information, commitment, etc., and mainstream macro, with capital, neoclassical production functions, fiscal policy, etc.

4.3 The Long-Run Phillips Curve

In the baseline model, we saw that DM output  x is decreasing in  i, and CM output  X^{\ast} is independent of  i. Hence total output, and therefore total employment, falls with inflation. This seems to be a reasonable prediction for long-run (steady state) effects, whatever may be the case in the short run. If we think of the Phillips curve broadly as the relation between inflation and output/employment, this theory predicts it is not vertical, and in fact inflation reduces output. But here we want to take the Phillips curve more literally and model more carefully the relation between inflation and unemployment. We do several things to make this rigorous. First, we explicitly introduce another friction to generate unemployment in the CM. Second, we re-cast the DM as a pure exchange market, so that employment and unemployment are determined exclusively in the CM. Third, we allow nonseparable utility, so that CM output and employment are not independent of monetary policy.

A principle explicated in Friedman (1968) is that, while there may exist a Phillips curve trade-off between inflation and unemployment in the short run, there is no trade-off in the long run. The natural rate of unemployment is defined as "the level that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labor and product markets" (although, as Lucas 1980 notes, Friedman was "not able to put such a system down on paper"). Friedman (1968) said monetary policy cannot engineer deviations from the natural rate in the long run. However, he tempered this view in Friedman (1977) where he said "There is a natural rate of unemployment at any time determined by real factors. This natural rate will tend to be attained when expectations are on average realized. The same real situation is consistent with any absolute level of prices or of price change, provided allowance is made for the effect of price change on the real cost of holding money balances." Here we take this real balance effect seriously.

Of the various ways to model unemployment, in this presentation we adopt the indivisible labor model of Rogerson (1988).26 This has a nice bonus feature: we do not need quasi-linearity, because in indivisible-labor models agents act as if utility were quasi-linear. For simplicity, we revert to the case where  X is produced one-for-one with  H, but now  H\in\{0,1\} for each individual. Also, as we said, to derive cleaner results we use a version where there is no production in the DM. Instead, agents have an endowment  \bar{x}, and gains from trade arise due to preference shocks. Thus, DM utility is  \upsilon^{j}(x,X,H) where  j is a shock realized after  (X,H) is chosen in the CM. Suppose  j=b or  s with equal probability, where  \partial \upsilon^{b}(\cdot)/\partial x>\partial\upsilon^{s}(\cdot)/\partial x, and then in the DM everyone that draws  b is matched with someone that draws  s. The indices  b and  s indicate which agents will be buyers and sellers in matches, for obvious reasons. We also assume that there is discounting between one DM and the next CM, but not between the CM and DM, although this is not important. What will be interesting is nonseparability in  \upsilon^{j}(x,X,H).

As in any indivisible labor model, agents choose a lottery  \left( \ell,X_{1},X_{0},\hat{m}_{1},\hat{m}_{0}\right) in the CM where  \ell is the probability of employment - i.e. the probability of working  H=1 - while  X_{H} and  \hat{m}_{H} are CM purchases of goods and cash conditional on  H. There is no direct utility generated in the CM; utility is generated by combining  (X,H) with  x in the DM. Hence, the CM problem is27

\displaystyle W(m) \displaystyle =\max_{\ell,X_{1},X_{0},\hat{m}_{1},\hat{m}_{0}}\left\{ \ell V\left( \hat{m}_{1},X_{1},1\right) +(1-\ell)V\left( \hat{m}_{0}% ,X_{0},0\right) \right\} (18)
st 0 \displaystyle \leq\phi m-\ell\phi\hat{m}_{1}-(1-\ell)\phi\hat{m}_{0}% +w\ell-T-\ell X_{1}-(1-\ell)X_{0}.    

As is well known,  X and  \hat{m} depend on  H, in general, but if  V is separable between  X and  H then  X_{0}=X_{1}, and if  V is separable between  \hat{m} and  H then  \hat{m}_{1}=\hat{m}_{0}. Of course, the function  V is an endogenous object, and whether it is separable depends on underlying preferences. This is, of course, one argument for making the role of money explicit, instead of simply sticking  m in the utility function: one cannot simply assume  V is separable (or homothetic or whatever), one has to derive this, and this imposes useful discipline.

Letting  \lambda be the Lagrangian multiplier for the budget constraint, FOC for an interior solution are

0 \displaystyle =V_{2}(\hat{m}_{H},X_{H},H)-\lambda, for \displaystyle H=0,1 (19)
0 \displaystyle =V_{1}(\hat{m}_{H},X_{H},H)-\lambda\phi, for \displaystyle H=0,1 (20)
0 \displaystyle =V(\hat{m}_{0},X_{0},0)-V(\hat{m}_{1},X_{1},1)+\lambda\left( X_{1}-X_{0}-1+\phi\hat{m}_{1}-\phi\hat{m}_{0}\right) (21)
0 \displaystyle =\ell-\ell X_{1}-(1-\ell)X_{0}+\phi\left[ m+\gamma M-\ell\hat{m}% _{1}-(1-\ell)\hat{m}_{0}\right] .% (22)

Rocheteau et al. (2007) provide assumptions to guarantee  \ell\in(0,1), and show the FOC characterize the unique solution, even though the objective function is not generally quasi-concave. Given  V(\cdot), (19)-(21) constitute 5 equations that can be solved under weak regularity conditions for  \left( X_{1},X_{0},\hat{m}_{1},\hat{m}_{0},\lambda\right) , independent of  \ell and  m. Then (22) can be solved for individual labor supply as a function of  m,  \ell=\ell(m). Extending the baseline model,  \hat{m}_{H} may depend on  H, but not  m, and hence we get at most a two-point distribution in the DM. Also,  W(m) is again linear, with  W^{\prime}(m)=\lambda\phi.

In DM meetings, for simplicity we assume take-it-or-leave-it offers by the buyer. Also, although it is important to allow buyers' preferences to be nonseparable, let sellers' preferences be separable. Then the DM terms of trade do not depend on anything in a meeting except the buyer's  m: in equilibrium, he pays  d=m, and chooses the  x that makes the seller just willing to accept, independent of the seller's  (X,H). In general, buyers in the DM who were employed or unemployed in the CM get a different  x since they have different  m. In any case, we can use the methods discussed above to describe  V(\cdot), differentiate it, and insert the results into (19)-(21) to get conditions determining  (x_{1},x_{0},X_{1},X_{0},\lambda). From this we can compute aggregate employment  \bar{\ell}=\ell(M). It is then routine to see how endogenous variables depend on policy.

It is easy to check  \partial x/\partial i<0, since as in any such model the first-order effect of inflation is to reduce DM trade. The effect on unemployment depends on the cross derivatives of the buyer's utility function as follows:

  1.  \upsilon^{b}(x,X,H) is separable between  (X,H) and  x\Rightarrow \partial\bar{\ell}/\partial i=0
  2.  \upsilon^{b}(x,X,H) is separable between  (x,X) and  H\Rightarrow \partial\bar{\ell}/\partial i>0 iff  \upsilon_{Xx}^{b}<0
  3.  \upsilon^{b}(x,X,H) is separable between  (x,H) and  X\Rightarrow \partial\bar{\ell}/\partial i>0 iff  \upsilon_{xH}^{b}<0

The economics here is simple and intuitive. Consider case 2. Since inflation decreases  x, if  x and  X are complements then it also reduces  X, and hence the  \bar{\ell} used to produce  X; but if  x and  X are substitutes then inflation increases  X and  \bar{\ell}. In other words, when  x and  X are substitutes, inflation causes agents to move from DM to CM goods, increasing CM production and reducing unemployment. A similar intuition applies in Case 3, depending on whether  x is a complement or substitute for leisure.
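
To illustrate the sign conditions with a simple parametric case (purely an example), suppose the buyer's DM utility is separable in  H, say

\displaystyle \upsilon^{b}(x,X,H)=U(X+\delta x)-AH,\qquad\delta>0,\text{ }A>0,
with  U strictly concave. Then  x and  X are substitutes, with  \upsilon_{Xx}^{b}=\delta U^{\prime\prime}(X+\delta x)<0, so case 2 applies and  \partial\bar{\ell}/\partial i>0: higher anticipated inflation shifts demand from DM to CM goods, raising CM employment and lowering unemployment.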

In either case, we can get a downward-sloping Phillips curve under simple and natural conditions, without any complications like imperfect information or nominal rigidities. And this relation is exploitable by policy makers in the long run: given the right cross derivatives, it is feasible to achieve permanently lower unemployment by running a higher anticipated inflation, as Keynesians used to think. But this is never optimal: it is easy to check that the efficient policy here is still Friedman's prescription,  i=0.

4.4 Benchmark Summary

We think this benchmark model, with alternating CM and DM trade, delivers interesting economic insights. A model with only CM trade could not capture as well the fundamental role of money, which is why one has to resort to short cuts like cash-in-advance or money-in-the-utility-function assumptions. The earlier work on microfoundations with only DM trade does capture the role of money, but requires harsh restrictions on money holdings, and hence cannot easily be used to discuss many policy and empirical issues, or becomes intractable in terms of analytic results. There are devices different from our alternating markets that achieve a similar generality plus tractability, including Shi (1997) and Menzio et al. (2009), which are also very useful. One reason to like alternating markets is that, in addition to imparting tractability, the structure integrates some search and some competitive trade, and thus reduces the gap between the literature on the microfoundations of money and mainstream macro. Alternating markets themselves do not yield tractability; we also need something like quasi-linearity or indivisibilities. This does not seem a huge price to pay for tractability, but one could also dispense with such assumptions, and rely on numerical methods, as much of macro does anyway.

Many other applications in the literature could be mentioned, but we want to get to new results.28 Before we do, however, we mention one variation of the baseline by Rocheteau and Wright (2005), since this is something we use below. This is usually presented as an environment with two permanently distinct types, called buyers and sellers, where the former are always consumers in the DM and the latter are always producers in the DM. This does not work in models with only DM trades, since no one would produce in one DM if he cannot spend the proceeds in a subsequent DM. Here sellers may want to produce in every DM, since they can spend the money in the CM; and buyers may want to work in every CM, since they need the money for the DM. Monetary equilibrium no longer entails a degenerate distribution, but all sellers choose  m=0, while all buyers choose the same  m>0. This raises a point that should be emphasized: the monetary equilibrium distribution is degenerate only conditional on agents' type, as we previously saw in the indivisible-labor model. This is all we need for tractability, however. Indeed, the key property of the model is that the choice of  \hat{m} is history independent, not that it is the same for all agents.

Having two types is interesting for several reasons, including the fact that one can introduce a generalized matching technology taking as inputs the measures of buyers and sellers, and one can incorporate a free entry or participation decision for either sellers or buyers in the DM.29 But notice for this we do not really need permanently distinct types: it would be equivalent to have types determined each period, in a deterministic or a random way. As long as the realization occurs before the CM closes, agents can still choose  \hat{m} conditional on type, and in any case we could still incorporate a generalized matching technology and a participation decision. This is another way in which the framework proves convenient: although the horizon is infinite, which is obviously good for thinking about money and many other applications, in a sense the analysis can be reduced to something almost like a sequence of two-period economies. The demographics in simple overlapping generations models perform a related function, of course. The point is not that the alternating-market structure is the only way to achieve tractability, but that it is one way that works in a variety of applications.

5 New Monetary Theory

In the previous sections we presented results already in the literature. Although we think it is useful to survey what has been done, we also want to present new material. Here we analyze in a novel way some ideas in the Old Monetarist tradition and in the New Keynesian tradition, showing how similar results can be derived in our framework, although sometimes with interesting differences. We first introduce additional informational frictions to show how a signal extraction problem can lead to a short-run Phillips curve, as in Old Monetarist theory, discussed by Friedman (1968), and later formalized by Lucas (1972). Then we analyze what happens when prices are sticky, for some exogenous reason, as in the standard New Keynesian model in Woodford (2003) or Clarida et al. (1999).30

5.1 The Short-Run Phillips Curve

Here we discuss some ideas about the correlations defining the short-run Phillips curve, and the justification for predictable monetary policy, in Old Monetarist economics. For simplicity, we take the Phillips curve to mean a positive relation between money growth or inflation on the one hand, and output or employment on the other hand, rather than a negative relation between inflation and unemployment, since we do not want to go into details on the labor market or the source of unemployment here (although we saw in the previous section that this is not so hard). Also, although it is not critical, we use the model where agents do not become consumers or producers in the DM based on who they meet, nor based on preference and technology shocks, but instead there are two distinct types called buyers and sellers. Also, we sometimes describe the CM and DM subperiods as the day and night markets when this helps keep track of the timing, and to yield clean results we use  u(q)=\log q.31

We modify the benchmark model by including both real and monetary shocks. First, some fraction of the population is inactive each period. In particular, suppose that a fraction  \omega_{t} of buyers participates in (both) markets in period  t, while the rest are inactive. As well, a fraction  \omega_{t} of sellers does not participate in period  t+1. Assume that  \omega_{t} is a random variable, and realizations are not publicly observable. Second, money growth  \mu_{t} is now random, and realizations are not publicly observable. So that agents have no direct information on the current money injection by the central bank, only indirect information coming from price signals, we add some new actors to the story that we call government agents. During the day in period  t, a new set of government agents appears. Each of them has linear utility  X-H, and can produce one unit of  X for each unit of  H. If  \mu_{t}>1, then the government gives money to these extra agents, and they collectively consume  \phi_{t}M_{t-1}(\mu_{t}-1); if  \mu_{t}<1, these agents collectively produce  -\phi_{t}M_{t-1}(\mu_{t}-1). We will assume  \mu is always above  1 here, so government agents never actually produce. In any case, their role is purely a technical one, designed to make signal extraction non-trivial.

During the day, agents learn last period's money stock  M_{t-1} and observe the price  \phi_{t}, but not the current aggregate shocks  \omega_{t} and  \mu_{t}. For an individual buyer acquiring money in the CM, the current value of money may be high (low), either because the demand for money is high (low), or because money growth is low (high). For simplicity, assume active buyers make take-it-or-leave-it offers to sellers in the DM, which implies

\displaystyle x_{t}=\beta m_{t}E[\phi_{t+1}\mid\phi_{t}].% (23)

Then an active buyer's FOC from the CM reduces by the usual manipulations to
\displaystyle -\phi_{t}+\beta E[\phi_{t+1}\mid\phi_{t}]u^{\prime}(x_{t})=0.% (24)

Assume the measure of active buyers is  1/2. Then market clearing implies
\displaystyle \frac{\omega_{t}m_{t}}{2}=\mu_{t}M_{t-1}.% (25)

If  \mu_{t} were a continuous random variable, in principle we could solve for equilibrium as in Lucas (1972). For illustrative purposes, however, we adopt the approach in Wallace (1992), using a finite state space (see also Wallace 1980). To make the point, it suffices to consider an example. Thus,  \mu_{t} and  \omega_{t} are independent i.i.d. processes, where  \mu_{t} equals  \mu_{1} or  \mu_{2}<\mu_{1}, each with probability  1/2; and  \omega_{t} equals  \omega_{1} or  \omega_{2}<\omega_{1}, each with probability  1/2. We assume that

\displaystyle \frac{\omega_{1}}{\mu_{1}}=\frac{\omega_{2}}{\mu_{2}},% (26)

so that agents cannot distinguish between high money demand and high money growth, on the one hand, and low money demand and low money growth on the other hand.
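
For instance (a purely numerical illustration), take  \mu_{1}=1.2,  \mu_{2}=1,  \omega_{1}=0.6 and  \omega_{2}=0.5, so that  \omega_{1}/\mu_{1}=\omega_{2}/\mu_{2}=1/2. Then high money growth together with high money demand generates the same price signal as low money growth together with low money demand, as can be verified from (27) below.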

Using (23)-(25) we can obtain closed-form solutions for prices and quantities. Let  \phi(i,j) and  q(i,j) denote the CM price and the DM quantity when  (\mu_{t},\omega_{t})=(\mu_{i},\omega_{j}). Then straightforward algebra yields

\displaystyle \phi(i,j)=\frac{\omega_{j}}{2\mu_{i}M_{t-1}}, for \displaystyle i,j=1,2 (27)

\displaystyle q(i,j)=\frac{\beta(\omega_{1}+\omega_{2})(\mu_{1}+\mu_{2})}{4\mu_{1}\mu _{2}\omega_{j}}, for \displaystyle (i,j)=(1,2), \displaystyle (2,1),% (28)

\displaystyle q(1,1)=q(2,2)=\frac{\beta(\omega_{1}+\omega_{2})^{2}(\mu_{1}+\mu_{2})}% {8\mu_{1}\mu_{2}\omega_{1}\omega_{2}}.% (29)

Let total output in the day and night be  Q^{d}(i,j) and  Q^{n}(i,j), in state  (\mu_{t},\omega_{t})=(\mu_{i},\omega_{j}). Assuming  \mu_{1}>\mu _{2}\geq1, we have
\displaystyle Q^{d}(i,j)=\phi_{t}M_{t}=\frac{\omega_{j}}{2},% (30)

for  i,j=1,2 from (27). Further, from (28), (29), and (26),
\displaystyle Q^{n}(1,2)=Q^{n}(2,1)=\frac{\beta(\omega_{1}+\omega_{2})(\mu_{1}+\mu_{2}% )}{8\mu_{1}\mu_{2}},% (31)

\displaystyle Q^{n}(1,1)=\frac{\beta(\omega_{1}+\omega_{2})^{2}(\mu_{1}+\mu_{2})}{16\mu _{1}\mu_{2}\omega_{2}},% (32)

\displaystyle Q^{n}(2,2)=\frac{\beta(\omega_{1}+\omega_{2})^{2}(\mu_{1}+\mu_{2})}{16\mu _{1}\mu_{2}\omega_{1}}% (33)

Total output in real terms is  Q(i,j)=Q^{d}(i,j)+Q^{n}(i,j). From (30),  Q^{d} depends only on the current real shock. That is, when the number of active buyers is high (low), money demand is high (low), and the price of money is high (low). Thus, active buyers must collectively produce more (less) in the day to acquire money when the number of active buyers is high (low). At night, from (31)-(33), it is straightforward to show that  \omega_{1}>\omega_{2} implies that  Q^{n}(2,2)<Q^{n}(1,2)=Q^{n}(2,1)<Q^{n}(1,1).
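
To verify the ranking, divide (32) and (33) by (31):

\displaystyle \frac{Q^{n}(1,1)}{Q^{n}(1,2)}=\frac{\omega_{1}+\omega_{2}}{2\omega_{2}}>1\text{ \ and \ }\frac{Q^{n}(2,2)}{Q^{n}(1,2)}=\frac{\omega_{1}+\omega_{2}}{2\omega_{1}}<1,
given  \omega_{1}>\omega_{2}.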

The scatter plot of aggregate output  Q against money growth  \mu, using time series observations generated by the model, is displayed in Figure 1, where the four dots represent money and output in each of the four states. There is a positive correlation between money growth and aggregate output. This results from agents' confusion, since if there were full information about aggregate shocks, we would have

\displaystyle Q^{n}(i,j)=\frac{\beta(\omega_{1}+\omega_{2})(\mu_{1}+\mu_{2})}{8\mu_{1}% \mu_{2}}\text{ for all }(i,j)
as in Figure 2, with no correlation between money and output. Confusion results from the fact that, if money growth and money demand are both high (low), then agents' subjective expectation of  \phi_{t+1} is greater (less) than the objective expectation, so more (less) output is produced in DM matches than under full information. Except for technical details, the nonneutrality of money here is essentially identical to that in Lucas (1972) and Wallace (1980,1992).

A standard narrative associated with the ideas of Friedman (1968) and Lucas (1972,1976) is that 1960s and 1970s macroeconomic policy erred because policy makers treated the dots in (their empirical version of) Figure 1 as capturing a structural relationship between money growth and output. Policy makers took for granted that more output is good and more inflation is bad, and they took the observed correlation as evidence that if the central bank permanently increased money growth this would achieve permanently higher output. Although we saw above that permanent trade-offs are not impossible, the important point emphasized by Friedman, Lucas, Wallace and others is that observed empirical relations may lead one far astray. What happens in this example if we permanently set money growth to  \mu_{1}? It is straightforward to show that the data points we would generate would be the two squares in Figure 1, with high (low) output when money demand is high (low). Rather than increasing output, higher inflation lowers output in all states of the world.
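
To see where the squares come from, note that with money growth permanently fixed at  \mu_{1} there is no signal extraction problem, and the same calculations as above, with full information, give

\displaystyle Q^{d}=\frac{\omega_{t}}{2}\text{ \ and \ }Q^{n}=\frac{\beta(\omega_{1}+\omega_{2})}{4\mu_{1}},
so total output varies only with the real shock, and night output is lower the higher is the (now constant) rate of money growth.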

What is an optimal policy? Efficient exchange in DM meetings requires  q=q^{\ast}. If we can find a monetary policy rule that achieves  q=q^{\ast} in equilibrium, this rule is optimal. From (24), an efficient equilibrium has the property that

\displaystyle \phi_{t}=\beta E\left[ \phi_{t+1}\right] .% (34)

Then, from (23) and (34),
\displaystyle \phi_{t}=\frac{\omega_{t}q^{\ast}}{2M_{t}}% (35)

Substituting and rearranging, we obtain
\displaystyle \mu_{t+1}=\beta\frac{\omega_{t+1}}{\omega_{t}}.% (36)

This is the Friedman rule, dictating that the money supply decrease on average at the rate of time preference, with a higher (lower) money growth rate when money demand is high (low) relative to the previous period.

It might appear that the monetary authority cannot implement such a rule, because it seems to require the observability of the aggregate shock  \omega_{t}. However, (35) and (36) imply  \phi_{t+1}=\phi _{t}/\beta, so that prices decrease at a constant rate in the efficient equilibrium. Therefore, the monetary authority need not observe the underlying aggregate shock, and can attain efficiency, simply by a constant rate of deflation. In equilibrium, the price level is predictable, and carries no information about the aggregate state. It is not necessary for the price level to reveal aggregate information, since efficiency requires that buyers acquire the same quantity of real balances in the CM and receive the same quantity in the DM independent of the aggregate shock.

In one sense, these results are consistent with the thrust of Friedman (1968) and Lucas (1972). Monetary policy can confuse price signals, and this can result in nonneutralities that generate a positive Phillips curve, provided that real shocks do not dominate. However, the policy prescription derived from the model is in line with Friedman (1969) rather than Friedman (1968): the optimal money growth rate is not constant, and should respond to aggregate real disturbances, to correct intertemporal distortions. This feature of the model appears consistent with some of the reasons that money growth targeting by central banks failed in practice in the 1970s and 1980s. Of course we do not intend the model in this section to be taken literally - it is meant as an example to illustrate once again, but here in the context of our benchmark framework, the pitfalls of naive policy making based on empirical correlations that are incorrectly assumed to be structural.

5.2 Sticky Prices

We now modify our benchmark model to incorporate sticky prices, capturing ideas in New Keynesian economics along the lines of e.g. Woodford (2003) and Clarida et al. (1999). We will first construct a cashless version, as does Woodford (2003), then modify it to include currency transactions. In our cashless version, all transactions are carried out using credit. New Keynesian models typically use monopolistic competition, where individual firms set prices, usually according to a Calvo (1983) mechanism. Here, to fit into our benchmark model, we assume that some prices are sticky in the DM in bilateral random matching between buyers and sellers. We use the version with permanently distinct buyer and seller types.

In the cashless model, in spite of the fact that money is not held or exchanged, prices are denominated in units of money. As in the benchmark model, the price of money in the CM  \phi_{t} is flexible. In the DM, each buyer-seller pair conducts a credit transaction where goods are received by the buyer in exchange for a promise to pay in the next CM. To support these credit transactions we assume that there is perfect memory or record keeping. That is, if a buyer defaults during the day, this is observable to everyone, and there is an exogenous legal system that can impose severe punishment on a defaulter. Thus, in equilibrium all borrowers pay off their debts.

During the day, suppose that in an individual match the terms of trade between a buyer and seller are either flexible, with probability 1/2, or fixed, with probability 1/2. In a flexible match, as in the benchmark model, the buyer makes a take-it-or-leave-it offer to the seller. Letting  \frac{1}{\psi_{t}} denote the number of units of money the buyer offers to pay in the following day for each unit of goods produced by the flexible-price seller during the night, and  s_{t}^{1} the quantity of goods produced by the seller, the take-it-or-leave it offer satisfies

\displaystyle s_{t}^{1}=\frac{\beta s_{t}^{1}\phi_{t+1}}{\psi_{t}},
so that
\displaystyle \psi_{t}=\beta\phi_{t+1}.
Now, assume that in each fixed-price exchange during the night the seller is constrained to offer a contract which permits the buyer to purchase as much output as he or she would like, in exchange for  \frac{1}{\psi_{t-1}} units of money in the next day per unit of goods received.

Then, in a flexible price contract, the buyer chooses  s_{t}^{1} to satisfy

\displaystyle \max_{s_{t}^{1}}\left[ u(s_{t}^{1})-s_{t}^{1}\right] ,% (37)

so that  s_{t}^{1}=q^{\ast}, the surplus-maximizing quantity of output. However, in a fixed-price contract, the buyer chooses the quantity  s_{t}^{2} to solve
\displaystyle \max_{s_{t}^{2}}\left[ u(s_{t}^{2})-\frac{s_{t}^{2}\phi_{t+1}}{\phi_{t}% }\right] ,
so  s_{t}^{2} satisfies
\displaystyle u^{\prime}(s_{t}^{2})=\frac{\phi_{t+1}}{\phi_{t}}.% (38)

Now, thus far there is nothing to determine the sequence  \{\phi_{t}% \}_{t=0}^{\infty}. In Woodford (2003), one solution approach is to first determine the price of a nominal bond. In our model, during the day in period  t, the price  z_{t} in units of money of a promise to pay one unit of money in the daytime during period  t+1 is given by

\displaystyle z_{t}=\beta\frac{\phi_{t+1}}{\phi_{t}}.% (39)

Then, following Woodford's approach, we could argue that  z_{t} can somehow be set by the central bank, perhaps in accordance with a Taylor rule. Then, given a determinate path for  z_{t}, we can solve for  \{\phi_{t}\}_{t=0}^{\infty} from (39).

Given the model, it seems consistent with New Keynesian logic to consider  \{\phi_{t}\}_{t=0}^{\infty} as an exogenous sequence of prices that can be set by the government. In terms of what matters for agents' decisions, it is equivalent to say that the government sets the path for the inflation rate (in the daytime Walrasian market), where the gross inflation rate is defined by  \pi_{t}=\frac{\phi_{t-1}}{\phi_{t}}. Then, from (37) the path for the inflation rate is irrelevant for  s_{t}^{1}, but from (38)  s_{t}^{2} is increasing in  \pi_{t+1}. In fixed-price transactions, buyers write a credit contract under which the nominal payment they make during the day to settle the previous night's credit transaction is determined by the flexible-price contract from the previous period. When inflation increases, therefore, the implicit real interest rate on a credit transaction in a fixed-price contract falls, and the buyer then purchases more goods during the night. Note that, when the buyer in a fixed-price meeting in the night of period  t repays the loan in period  t+1, the buyer produces  \frac{s_{t}^{2}}{\beta\pi_{t+1}}, so the effect of inflation on night production is determined by the elasticity of  s_{t}^{2} with respect to the inflation rate  \pi_{t+1}, which in turn depends on the curvature of  u(\cdot).
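
To spell out this last step: in a fixed-price credit transaction the buyer owes  s_{t}^{2}/\psi_{t-1}=s_{t}^{2}/\beta\phi_{t} units of money, and acquiring one unit of money in the day of period  t+1 requires producing  \phi_{t+1} units of goods, so repayment requires producing

\displaystyle \frac{\phi_{t+1}s_{t}^{2}}{\beta\phi_{t}}=\frac{s_{t}^{2}}{\beta\pi_{t+1}}.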

We again assume  u(q)=\log(q), which implies that daytime production is invariant to the path for the inflation rate. Then, the only component of aggregate output affected by inflation is output produced in fixed-price meetings during the night, and from (38) we have  s_{t}^{2}=\pi_{t+1} , so that there is a short-run and long-run Phillips curve relationship. A temporarily higher rate of anticipated inflation increases output temporarily, and a permanently higher rate of inflation permanently increases output. The model predicts that the Phillips curve will exist in the data, and that it is exploitable by the central bank.

Should the central bank exploit the Phillips curve? The answer is no. The equilibrium is in general inefficient due to the sticky-price friction, and the inefficiency is manifested in a suboptimal quantity of output exchanged in fixed-price contracts. For efficiency, we require that  s_{t}^{2}=q^{\ast}, which implies from (38) that  \phi_{t}=\phi, a constant, for all  t, so that the optimal inflation rate is zero. Further, from (39), the optimal nominal bond price consistent with price stability is  z_{t}=\beta. That is, the optimal nominal interest rate is Woodford's "Wicksellian natural rate."

Now, consider an environment where memory is imperfect, so that money plays a role. A fraction  \alpha of meetings between buyers and sellers during the night are non-monitored: the seller does not have access to the buyer's previous history of transactions, and anything that happens during the meeting remains private information to the individual buyer and seller. Further, assume that it is the same set of sellers that engage in these non-monitored meetings for all  t. A fraction  1-\alpha of matches during the night are monitored, just as in the cashless economy. In a monitored trade, the seller observes the buyer's entire history, and the interaction between the buyer and the seller is public information. The buyer and seller continue to be matched into the beginning of the next day, so that default is publicly observable. As before, we assume an exogenous legal system that can impose infinite punishment for default. The Walrasian market on which money and goods are traded opens in the latter part of the day, and on this market only the market price (and not individual actions) is observable.

Just as with monitored transactions involving credit, half of the nonmonitored transactions using money are flexible-price transactions, and half are fixed-price transactions. The type of meeting that a buyer and seller are engaged in (monitored or nonmonitored, flexible-price or fixed-price) is determined at random, but the buyer knows during the day what the type of transaction will be during the following night.

As in the cashless model, the quantities of goods traded in flexible-price and fixed-price credit transactions, respectively, are  s_{t}^{1} and  s_{t}% ^{2}, with  s_{t}^{1}=q^{\ast} and  s_{t}^{2} determined by (38). For flexible-price transactions where there is no monitoring, and money is exchanged for goods, the buyer will carry  m_{t}^{1} units of money from the day into the night and make a take-it-or-leave-it offer to the seller which involves an exchange of all this money for goods. The quantity of goods  q_{t}^{1} received by the buyer is then

\displaystyle q_{t}^{1}=\beta\phi_{t+1}m_{t}^{1},% (40)

so that the implicit flexible price of goods in terms of money is  \frac {1}{\beta\phi_{t+1}}. In a fixed-price transaction where money is exchanged for goods, we assume that the seller must charge a price equal to the flexible price in a money transaction in the previous period. Therefore, for a buyer engaged in a fixed-price transaction using money, he or she carries  m_{t}% ^{2} units of money forward from the day to the night, and spends it all on a quantity of goods  q_{t}^{2}, where
\displaystyle q_{t}^{2}=\beta\phi_{t}m_{t}^{2},% (41)

As buyers choose money balances optimally in the daytime, we then obtain the following first-order conditions for buyers in monetary flexible-price and fixed-price transactions, respectively.

\displaystyle -\phi_{t}+\beta\phi_{t+1}u^{\prime}(q_{t}^{1})=0,% (42)

\displaystyle -\phi_{t}+\beta\phi_{t}u^{\prime}(q_{t}^{2})=0.% (43)

Assume that money is injected by the government by way of lump-sum transfers to sellers during the day, and suppose that the aggregate money stock grows at the gross rate  \mu. In equilibrium, the entire money stock must be held by buyers at the end of the day who will be engaged in monetary transactions at night. Thus, we have the equilibrium condition

\displaystyle \frac{\alpha}{2}\left( m_{t}^{1}+m_{t}^{2}\right) =M_{t}% (44)

Now, consider the equilibrium where  \frac{1}{\phi_{t}} grows at the gross rate  \mu and all real quantities are constant for all  t. Then, from (38), and (40)-(44), equilibrium quantities  s_{t}^{i},  q_{t}^{i}, for  i=1,2, are the solution to

\displaystyle s_{t}^{1}=q^{\ast},
\displaystyle u^{\prime}(s_{t}^{2})=\frac{1}{\mu},
\displaystyle u^{\prime}(q_{t}^{1})=\frac{\mu}{\beta},
\displaystyle u^{\prime}(q_{t}^{2})=\frac{1}{\beta}.
In equilibrium the money growth rate is equal to the inflation rate, and higher money growth increases the quantity of goods exchanged in fixed-price transactions relative to what is exchanged in flexible-price transactions.

From a policy perspective, it is impossible to support an efficient allocation in equilibrium where  s_{t}^{i}=q_{t}^{i}=q^{\ast} for  i=1,2. However, we can find the money growth rate that maximizes welfare  W(\mu), defined here as the weighted average of total surplus across nighttime transactions, or

\displaystyle W(\mu)=\frac{\alpha}{2}\left[ u(q_{t}^{1})-q_{t}^{1}+u(q_{t}^{2})-q_{t}^{2}\right] +\frac{\left( 1-\alpha\right) }{2}\left[ u(s_{t}^{1})-s_{t}^{1}+u(s_{t}^{2})-s_{t}^{2}\right]
Then, we have
\displaystyle W^{\prime}(\mu)=\frac{\alpha}{2\beta u^{\prime\prime}(q_{t}^{1})}\left( \frac{\mu}{\beta}-1\right) -\frac{\left( 1-\alpha\right) }{2\mu ^{2}u^{\prime\prime}(s_{t}^{2})}\left( \frac{1}{\mu}-1\right) .% (45)

Now, for an equilibrium we require that  \mu\geq\beta. From (45) note that  W^{\prime}(\beta)>0 and  W^{\prime}(\mu)<0 for  \mu\geq1, so that the optimal money growth factor  \mu^{\ast} satisfies  \beta<\mu^{\ast}<1. This reflects a trade-off between two distortions. Inflation distorts the relative price between flexible-price and fixed-price goods, and this distortion is corrected if there is price stability, as in the cashless model, achieved when  \mu=1. Inflation also results in a typical intertemporal relative price distortion, in that too little of the flexible-price good purchased with cash is in general consumed. This distortion is corrected with a Friedman rule or  \mu=\beta here. At the optimum, since the monetary authority trades off the two distortions, the optimal money growth rate is larger than at the Friedman rule and smaller than what would be required for a constant price level.
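
As an illustration, suppose (as in the cashless discussion above) that  u(q)=\log q, so that from the conditions above  q_{t}^{1}=\beta/\mu and  s_{t}^{2}=\mu. Then (45) reduces to

\displaystyle W^{\prime}(\mu)=-\frac{\alpha\left( \mu-\beta\right) }{2\mu^{2}}+\frac{\left( 1-\alpha\right) \left( 1-\mu\right) }{2\mu},
and  \mu^{\ast} solves  \alpha(\mu^{\ast}-\beta)=(1-\alpha)\mu^{\ast}(1-\mu^{\ast}), which indeed lies strictly between  \beta and  1 for any  \alpha\in(0,1).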

What do we learn from this version of the New Keynesian model? One principle of New Monetarism is that it is important to be explicit about the frictions underlying the role for money in the economy, as well as other financial frictions. What do the explicit frictions in this model tell us that typical New Keynesian models do not? A line of argument in Woodford (2003) is that it is sufficient to use a cashless model, like the one constructed above, to analyze monetary policy. Woodford views typical intertemporal monetary distortions that can be corrected by a Friedman rule as secondary to sticky price distortions. Further, he argues that one can construct monetary economies that behave essentially identically to the cashless economy, so that it is sufficient to analyze the economy that we get in the cashless limit.

The cashless limit would be achieved in our cash/credit model if we let  \alpha\rightarrow0. In the cash/credit model, quantities traded in different types of transactions are independent of  \alpha. The only effects of changing  \alpha are on the price level and the fraction of exchange that is supported by credit. As well, the optimal money growth rate will tend to rise as  \alpha decreases, with  \mu^{\ast}=1 in the limit as  \alpha \rightarrow0. The key feature of the equilibrium we study in the cash/credit model that is different from the cashless economy is that the behavior of prices is tied to the behavior of the aggregate money stock, in line with the quantity theory of money.

Confining analysis to the cashless economy is not innocuous. First, it is important that we not assume at the outset which frictions are important for monetary policy. It is crucial that all potentially important frictions, including intertemporal distortions, play a role, and then quantitative work can sort out which ones are most important. In contrast to Woodford's assertion that intertemporal distortions are irrelevant, as we discussed above, some New Monetarist models find that quantitatively the welfare losses from intertemporal distortions are much larger than found in traditional monetary models.

Also, the cash/credit model gives the monetary authority control over a monetary quantity, not direct control over a market interest rate, the price level, or the inflation rate. In reality, the central bank intervenes mainly through exchanges of central bank liabilities for other assets and through lending to financial institutions. Though central banks may conduct this intervention so as to target some market interest rate, it is important to model the means by which this is done. How else could we evaluate whether, for example, it is preferable in the short run for the central bank to target a short-term nominal interest rate or the growth rate in the aggregate money stock?

Our cash/credit model is not intended to be taken seriously as a vehicle for monetary policy analysis. New Monetarists are generally uncomfortable with sticky-price models even when, as in Golosov and Lucas (2005) e.g., there are explicit costs to changing prices. The source of these menu costs is typically unexplained, and once they are introduced it seems that one should consider many other types of costs in a firm's profit maximization problem if we take menu costs seriously. The idea here, again, is simply to show that if one thinks it is critical to have nominal rigidities in a model, this is not inconsistent with theories that try to be explicit about the exchange process and the role of money or related institutions in that process.

6 More New Monetary Theory

In this section we analyze extensions of the benchmark New Monetarist model that incorporate payments arrangements, along the lines of Freeman (1995), and banks, along the lines of Diamond and Dybvig (1983). We construct environments where outside money is important not only for accomplishing the exchange of goods but for supporting credit arrangements.

6.1 A Payments Model

We modify the benchmark model by including two types of buyers and two types of sellers. A fraction  \alpha of buyers and a fraction  \alpha of sellers are type 1 buyers and sellers, respectively, and these buyers and sellers meet in the DM in non-monitored matches. Thus, when a type 1 buyer meets a type 1 seller, they can trade only if the former has money. As well, there are  1-\alpha type 2 buyers and  1-\alpha type 2 sellers, who are monitored at night, and hence can trade using credit, which we again assume is perfectly enforced.

In the day, type 1 sellers and all buyers participate in the first Walrasian market, where the price of money is  \phi_{t}^{1}. Then bilateral meetings occur between the type 2 buyers and the type 2 sellers with whom they were matched during the previous night. Finally, type 1 buyers meet type 2 sellers in the second Walrasian market, where the price of money is  \phi_{t}^{2}. During the day, buyers can only produce in the Walrasian markets where they are present. The government intervenes by making lump-sum money transfers in Walrasian markets during the day, so that there are two opportunities to intervene during any period. Lump-sum transfers are made in equal quantities to the sellers present in each Walrasian market.

Our interest is in studying an equilibrium where trade occurs as follows. First, in order to purchase goods during the night, type 1 buyers need money, which they can acquire either in the first Walrasian market or the second Walrasian market during the day. Arbitrage guarantees that  \phi_{t}^{1}\geq\phi_{t}^{2}, and we will be interested in the case where  \phi_{t}^{1}>\phi_{t}^{2}. Then, in the first Walrasian market during the day, type 2 buyers produce in exchange for the money held by type 1 sellers. Then, type 2 buyers meet type 2 sellers and repay the debts acquired in the previous night with money. Next, in the second Walrasian market during the day, type 2 sellers exchange money for the goods produced by type 1 buyers. Then, in the night, meetings between type 1 buyers and sellers involve the exchange of money for goods, while meetings between type 2 buyers and sellers are exchanges of IOU's for goods. The equilibrium interactions among sets of economic agents in the model are summarized in Figure 3.

All bilateral meetings in the night involve exchange subject to a take-it-or-leave-it offer by the buyer. In an equilibrium where  \phi_{t}^{1}>\phi_{t}^{2}, letting  q_{t} denote the quantity of goods received by a type 1 buyer in exchange for money during the night, optimal choice of money balances by the type 1 buyer yields the first-order condition

\displaystyle -\phi_{t}^{2}+\beta\phi_{t+1}^{1}u^{\prime}(q_{t})=0.% (46)

To repay his or her debt that supported the purchase of  s_{t} units of goods, the type 2 buyer must acquire money in Walrasian market 1 at price  \phi_{t+1}^{1}, and then give the money to the type 2 seller, who then exchanges the money for goods in Walrasian market 2 at the price  \phi _{t+1}^{2}. Therefore,  s_{t} satisfies the first-order condition
\displaystyle -\phi_{t+1}^{1}+\phi_{t+1}^{2}u^{\prime}(s_{t})=0.% (47)

Now, let  M_{t}^{i} denote the quantity of money (post transfer) supplied in the  i^{th} Walrasian market during the day, for  i=1,2. Then, market clearing in Walrasian markets 1 and 2, respectively, gives
\displaystyle (1-\alpha)s_{t-1}=\beta\phi_{t}^{2}M_{t}^{1},% (48)

\displaystyle \alpha q_{t}=\beta\phi_{t+1}^{1}M_{t}^{2}.% (49)

To solve for equilibrium quantities and prices, substitute for prices in (46) and (47) using (48) and (49) to obtain

\displaystyle -\frac{\alpha q_{t}}{M_{t}^{2}}+\frac{(1-\alpha)s_{t}u^{\prime}(s_{t}% )}{M_{t+1}^{1}}=0,% (50)

\displaystyle -\frac{(1-\alpha)s_{t-1}}{\beta M_{t}^{1}}+\frac{\alpha q_{t}u^{\prime}% (q_{t})}{M_{t}^{2}}=0.% (51)

Then, given  \{M_{t}^{1},M_{t}^{2}\}_{t=0}^{\infty}, we can determine  \{q_{t},s_{t}\}_{t=0}^{\infty} from (50) and (51), and then  \{\phi_{t}^{1},\phi_{t}^{2}\}_{t=0}^{\infty} can be determined from (48) and (49). Note that, in general, intervention in both Walrasian markets matters. For example, suppose that  \frac{M_{t}^{1}}% {M_{t}^{2}}=\gamma for all  t,  \frac{M_{t+1}^{i}}{M_{t}^{i}}=\mu, where  \gamma>0 and  \mu>\beta, so that the ratio of money stocks in the two markets is constant for all  t, and in individual Walrasian markets the money stock grows at a constant (and common) rate over time. Further, suppose that  u(c)=\ln c. Then, in an equilibrium where  s_{t}=s for all  t and  q_{t}=q for all  t, where  s and  q are constants, from (50) and (51) we obtain
\displaystyle q=\frac{(1-\alpha)}{\alpha\gamma\mu},
\displaystyle s=\frac{\alpha\beta\gamma}{(1-\alpha)}.
Here, note that a higher money growth rate  \mu decreases the quantity of goods traded in cash transactions during the night, as is standard. However, a higher  \gamma (relatively more cash in the first Walrasian market) will increase the quantity of goods exchanged in credit transactions and reduce goods exchanged in cash transactions during the night.

What is efficient? To maximize total surplus in the two types of trades, we need  q_{t}=s_{t}=q^{\ast} for all  t. So from (50) and (51), this gives  \mu=\beta and  \gamma=\left( 1-\alpha\right) /\alpha\beta. At the optimum, in line with the Friedman rule, money should shrink over time at the rate of time preference, but we also need the central bank to make a money injection in the first market that increases with the fraction of credit transactions relative to cash transactions, so as to support the optimal clearing and settlement of credit.
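
As a numerical illustration, with  \alpha=1/2 and  \beta=0.96 the optimum has  \mu=0.96 and  \gamma=1/\beta\approx1.04: the total money stock shrinks at the rate of time preference, but slightly more of it is channelled through the first Walrasian market, where the money used to settle credit transactions is acquired, than through the second.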

6.2 Banking

This example extends the benchmark model by including banking in the spirit of Diamond and Dybvig (1983). Currency and credit are both used in transactions, and a diversified bank essentially allows agents to avoid waste. While the role for banking is closely related to the one in Diamond-Dybvig banking models, it has nothing to do with risk sharing here, because of quasi-linear utility. As in the payments model, there are  \alpha type 1 sellers who engage in non-monitored DM exchange using currency and  1-\alpha type 2 sellers who engage in monitored exchange. During the night there will be  \alpha type 1 buyers (each one matched with a type 1 seller) and  1-\alpha type 2 buyers (each one matched with a type 2 seller), but a buyer's type is random, and learned at the end of the previous day, after production and portfolio decisions are made. There exists an intertemporal storage technology, which takes as input the output produced by buyers during the afternoon of the day, and yields  R units of the consumption good per unit input during the morning of the next day. Assume that  R>1/\beta. All buyers and type 1 sellers are together in the Walrasian market that opens during the afternoon of the day, while only type 2 buyers are present during the morning of the day.

First suppose that banking is prohibited. To trade with a type 2 seller, a buyer needs to store goods during the day before meeting the seller at night. Since the trade is monitored, the seller is able to verify that the claim to storage offered in exchange for goods by the buyer is valid. To trade with a type 1 seller, a buyer needs to have cash on hand. Thus, during the afternoon of the day, the buyer acquires nominal money balances  m_{t} and stores  x_{t} units of output and, given take-it-or-leave-it offers at night, solves

\displaystyle \max_{m_{t},x_{t}}-\phi_{t}m_{t}-x_{t}+\alpha u(\beta\phi_{t+1}m_{t}% )+(1-\alpha)\left[ u(\beta Rx_{t})+\beta\phi_{t+1}m_{t}\right] .
The FOC are
\displaystyle -\phi_{t}+\beta\phi_{t+1}\left[ \alpha u^{\prime}(q_{t})+1-\alpha\right] =0 % (52)

\displaystyle -1+(1-\alpha)\beta Ru^{\prime}(s_{t})=0.% (53)

Assume that the monetary authority makes lump sum transfers during the afternoon of the day to buyers. Then, a Friedman rule is optimal: the money supply grows at gross rate  \beta and  \frac{\phi_{t+1}}{\phi_{t}}=\frac {1}{\beta} in equilibrium. This implies from (52) that  q_{t}=q^{\ast} in monetary exchange. However, claims to storage have no use for a buyer, so if the buyer does not meet a type 2 seller, his storage is wasted, even if we run the Friedman rule.

There is an insurance role for banks here, but it differs from their role in Diamond-Dybvig (1983). In that model, there is a risk-sharing role for a diversified bank, which insures against the need for liquid assets. In our model, the role of a diversified bank is to prevent the wasteful storage. A diversified bank can be formed in the afternoon of the day, which takes as deposits the output of buyers, and issues Diamond-Dybvig deposit claims. For each unit deposited with the bank in period  t, the depositor can either withdraw  \hat{m}_{t} units of cash at the end of the day, or trade claims to  \hat{x}_{t} units of storage during the ensuing night. We assume a buyer's type is publicly observable at the end of the day.

Suppose the bank acquires  d_{t} from a depositor at the beginning of period  t. The bank then chooses a portfolio of  m_{t} units of money and  x_{t} units of storage satisfying the constraint

\displaystyle d_{t}=\phi_{t}m_{t}+x_{t}% (54)

The bank then maximizes the expected utility of the depositor given  d_{t}. If the bank is perfectly diversified (as it will be in equilibrium), then it offers agents who wish to withdraw  \hat{m}_{t}=m_{t}/\alpha units of currency, and permits those who do not withdraw to trade claims to  \hat {x}_{t}=x_{t}/\left( 1-\alpha\right) . The depositor's expected utility is
\displaystyle \psi(d_{t})=\max_{m_{t},x_{t}}\left\{ \alpha u\left( \frac{\beta\phi _{t+1}m_{t}}{\alpha}\right) +(1-\alpha)u\left( \frac{\beta x_{t}R}{1-\alpha }\right) \right\}% (55)

subject to (54). Letting  q_{t} be the quantity of output exchanged during the night in a monetary transaction, and  s_{t} the quantity of output exchanged in a credit transaction, the first-order condition from the bank's problem gives
\displaystyle u^{\prime}(q_{t})\frac{\phi_{t+1}}{\phi_{t}}=u^{\prime}(s_{t})R.% (56)

From (55) and the envelope theorem, the optimal choice of  d_{t} gives

\displaystyle u^{\prime}(s_{t})=\frac{1}{\beta R},% (57)

which determines  s_{t}. Then from (56) and (57) we get
\displaystyle u^{\prime}(q_{t})=\frac{\phi_{t}}{\beta\phi_{t+1}},% (58)

which determines  q_{t}. In equilibrium all buyers choose the same deposit quantity in the day, the bank is perfectly diversified, and it can thus fulfill the terms of the contract. Given this, the quantity  s_{t} traded in nighttime credit transactions is efficient. Without banking, not only is the quantity of goods traded in credit transactions inefficient, from (53), but some storage is also wasted every period. With banking, the quantity of goods  q_{t} exchanged in monetary transactions during the night is efficient under the Friedman rule  \mu=\beta, which by (58) gives  q_{t}=q^{\ast}.
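
To make this comparison explicit, without banking (53) gives

\displaystyle u^{\prime}(s_{t})=\frac{1}{\left( 1-\alpha\right) \beta R}>\frac{1}{\beta R},
whereas with banking (57) gives  u^{\prime}(s_{t})=1/\beta R; since  u^{\prime} is decreasing, too little is stored and  s_{t} is too low when banking is prohibited, in addition to the storage that is simply wasted in the fraction  \alpha of meetings where the buyer needs cash.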

A policy that we can analyze in this model is Friedman's 100% reserve requirement. This effectively shuts down financial intermediation and constrains buyers to holding outside money and storing independently, rather than holding deposits backed by money and storage. We then revert to our solution where banking is prohibited, and we know that the resulting equilibrium is inefficient. It would also be straightforward to consider random fluctuations in  \alpha or  R, which would produce endogenous fluctuations in the quantity of inside money. Optimal monetary policy would involve a response to these shocks, but at the optimum the monetary authority should not want to smooth fluctuations in a monetary aggregate.

7 Asset Markets

The class of models we have been studying has recently been used to study trading in asset markets. This area of research is potentially very productive, as it permits the examination of how frictions and policy affect the liquidity of assets, and also how they affect asset prices and the volume of trade in asset markets.32

Modify our benchmark New Monetarist model as follows. The population consists, as before, of buyers and sellers, with equal masses of each. For a buyer,  U^{i}(X)=U(X), where  U(\cdot) is strictly increasing and strictly concave, with  U(0)=0,  U^{\prime}(0)=\infty, and  X^{\ast} defined to be the solution to  U^{\prime}(X^{\ast})=1. For a seller,  U^{i}(X)=X. During the day, each buyer and seller has access to a technology that produces one unit of consumption good for each unit of labor input. Neither buyers nor sellers can produce during the night.

In the day market, output can be produced from labor, but agents also possess another technology that can produce output using capital. In particular, at the beginning of the day, before the Walrasian market opens, a buyer with  k_{t}^{b} units of capital can produce  f(k_{t}^{b}) units of the consumption good. Similarly, a seller with  k_{t}^{s} units of capital can produce  f(k_{t}^{s}). Assume that  f(\cdot) is strictly concave and twice continuously differentiable with  f^{\prime}>0,  f^{\prime}(0)=\infty,  f^{\prime}(\infty)=0, and  f(0)=0. Each seller has a technology to convert consumption goods into capital, one-for-one, at the end of the day after the Walrasian market closes. Capital produced in the daytime of period  t becomes productive at the beginning of the day in period  t+1, and it then immediately depreciates by 100%.

In addition to capital, there is a second asset, which we will call a share. To normalize, let there be 1/2 shares in existence, with each share being a claim to  y units of consumption goods during each day. Assume that each seller is endowed with one share at the beginning of period 0. A share trades in the daytime Walrasian market at the price  \phi_{t}. It is straightforward to reinterpret this second asset as money, if  y=0 and the quantity of "shares" in existence can be augmented or diminished by the government through lump-sum transfers and taxes. During the night, each buyer is matched with a seller with probability  \sigma, so that there are  \sigma buyers who are matched and  1-\sigma who are not, and similarly for sellers.

Recall that buyers and sellers do not produce or consume during the night, so that a random match between a buyer and seller during the night represents only an opportunity for asset trade. The available technology prohibits buyers from holding capital at the end of the day. Thus, a match during the night is an opportunity for a buyer to exchange shares for capital.

For now, confine attention to equilibria where a buyer will trade away all of his or her shares during the night, given the opportunity. Then, a buyer's problem during the day is

\displaystyle \max_{k_{t+1}^{b}}\left\{ -\frac{\phi_{t}z(k_{t+1}^{b},k_{t+1})}{\phi_{t+1}+y}+\beta\left[ \sigma f(k_{t+1}^{b})+(1-\sigma)z(k_{t+1}^{b},k_{t+1})\right] \right\}% (59)

That is,  \frac{z(k_{t+1}^{b},k_{t+1})}{\phi_{t+1}+y} denotes the number of shares required in a random match with a seller in the night to purchase  k_{t+1}^{b} units of capital when the seller has  k_{t+1} units of capital. Similarly, the seller's problem at the end of the night is
\displaystyle \max_{k_{t+1}}\left( -k_{t+1}+\beta\left\{ \sigma\left[ f(k_{t+1}-k_{t+1}^{b})+z(k_{t+1}^{b},k_{t+1})\right] +(1-\sigma)f(k_{t+1})\right\} \right)% (60)

Optimization implies that the following must hold:
\displaystyle \frac{\phi_{t+1}+y}{\phi_{t}}\leq\frac{1}{\beta}.% (61)

Now, consider a match between a buyer and seller during the night, where the buyer has  m shares and the seller has  k units of capital. With the exchange constrained by  m and  k, the buyer exchanges  m shares for  k^{b} units of capital. The buyer's surplus, in units of period  t+1 consumption goods, is  f(k^{b})-m(\phi_{t+1}+y), and the seller's surplus is  f(k-k^{b})+m(\phi_{t+1}+y)-f(k). With generalized Nash bargaining between the buyer and seller, it is straightforward to show that the quantity of capital held by the seller will not constrain trading. However, due to a holdup problem, in equilibrium  m will always constrain the Nash bargaining solution, as long as the seller has some bargaining power. Then, Nash bargaining allows us to solve for  k^{b} according to

\displaystyle \max_{k^{b}}\left[ f(k^{b})-m(\phi_{t+1}+y)\right] ^{\theta}\left[ f(k-k^{b})+m(\phi_{t+1}+y)-f(k)\right] ^{1-\theta},
where  \theta is the bargaining weight associated with the buyer. This tells us that the quantity of shares  m required by the buyer to purchase  k^{b} units of capital when the seller has  k units of capital is given by
\displaystyle m=\frac{z(k^{b},k)}{\phi_{t+1}+y},% (62)

where
\displaystyle z(k^{b},k)=\frac{\theta f^{\prime}(k^{b})[f(k)-f(k-k^{b})]+(1-\theta )f^{\prime}(k-k^{b})f(k^{b})}{\theta f^{\prime}(k^{b})+(1-\theta)f^{\prime }(k-k^{b})}% (63)
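As a check on (63), the following sketch computes  z(k^{b},k) for a particular functional form and verifies that it satisfies the first-order condition of the bargaining problem above,  \theta f^{\prime}(k^{b})\left[ f(k-k^{b})+z-f(k)\right] =(1-\theta)f^{\prime}(k-k^{b})\left[ f(k^{b})-z\right] , with  z=m(\phi_{t+1}+y). The technology  f(k)=k^{a} and the parameter values are illustrative assumptions, not taken from the text.

a, theta = 0.5, 0.6                            # hypothetical technology and bargaining weight
f  = lambda k: k ** a
fp = lambda k: a * k ** (a - 1.0)

def z(kb, k):
    """Transfer of period t+1 consumption goods implied by eq. (63)."""
    num = theta * fp(kb) * (f(k) - f(k - kb)) + (1.0 - theta) * fp(k - kb) * f(kb)
    den = theta * fp(kb) + (1.0 - theta) * fp(k - kb)
    return num / den

k, kb = 1.0, 0.3
T = z(kb, k)
buyer_surplus  = f(kb) - T                     # f(k^b) - m(phi_{t+1} + y)
seller_surplus = f(k - kb) + T - f(k)
foc_gap = theta * fp(kb) * seller_surplus - (1.0 - theta) * fp(k - kb) * buyer_surplus
print(f"z = {T:.4f}, surpluses = ({buyer_surplus:.4f}, {seller_surplus:.4f}), Nash FOC gap = {foc_gap:.2e}")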

Then, the first-order conditions from the buyer's and seller's optimization problems give us, respectively,

\displaystyle \left[ \frac{\phi_{t+1}+y}{\phi_{t}}\right] \left[ \frac{\sigma f^{\prime}(k_{t+1}^{b})+(1-\sigma)z_{1}(k_{t+1}^{b},k_{t+1})}{z_{1}(k_{t+1}^{b},k_{t+1})}\right] =\frac{1}{\beta}% (64)

\displaystyle \sigma\left[ f^{\prime}(k_{t+1}-k_{t+1}^{b})+z_{2}(k_{t+1}^{b},k_{t+1})\right] +(1-\sigma)f^{\prime}(k_{t+1})=\frac{1}{\beta}% (65)

Suppose first that  \sigma=0, which implies that there is no asset trading at night. Then, from (64) and (65), the solution is

\displaystyle \phi_{t}=\hat{\phi}=\frac{\beta y}{1-\beta},
\displaystyle k_{t}=\hat{k}, where \displaystyle f^{\prime}(\hat{k})=\frac{1}{\beta}.
Thus, in this case, the rates of return on shares and capital are equal to the rate of time preference, and the share price is determined by fundamentals, i.e. the share price is just the present value of dividends.
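To see where the share price comes from, note that with  \sigma=0 shares provide no liquidity services, so they are willingly held only if (61) holds with equality; in a steady state this gives
\displaystyle \hat{\phi}=\beta(\hat{\phi}+y), or \displaystyle \hat{\phi}=\frac{\beta y}{1-\beta}=\sum_{t=1}^{\infty}\beta^{t}y.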

In the case where  \sigma>0, so that there are some meetings between buyers and sellers at night, there may exist an equilibrium where shares trade at their fundamental price, and shares are held at the end of the day by both buyers and sellers. Restrict attention to steady states, where  k_{t}^{b}=k^{b},  k_{t}=k, and  \phi_{t}=\phi for all  t, where  k^{b},  k, and  \phi are positive constants. To construct this equilibrium, first note from (64) that  \phi_{t}=\hat{\phi} for all  t implies that

\displaystyle z_{1}(k^{b},k)=f^{\prime}(k^{b}),% (66)

and (65) gives
\displaystyle \sigma\left[ f^{\prime}(k-k^{b})+z_{2}(k^{b},k)\right] +(1-\sigma)f^{\prime }(k)=\frac{1}{\beta}.% (67)

Then equations (66) and (67) solve for  k^{b} and  k. Let  \bar{k}^{b} and  \bar{k} denote the values of  k^{b} and  k, respectively, that solve (66) and (67). For this to be an equilibrium, we must have  m\leq1 in a meeting between a buyer and seller or, from (62),
\displaystyle y\geq(1-\beta)z(\bar{k}^{b},\bar{k})% (68)

Thus, in this equilibrium, the dividend on shares is high enough that the price of shares, and hence the total value of the stock of shares, is larger than what buyers wish to hold to trade with sellers. As a result, given the holdup problem in trade between buyers and sellers at night, sellers hold some of the stock of shares at the end of the day, and buyers hold the rest. Shares trade at their fundamental price, and it is easy to show that there is inefficient trade between buyers and sellers (due to the holdup problem), with  k^{b}<\frac{k}{2} and  f^{\prime}(k^{b})>f^{\prime}(k-k^{b}). In other words, capital is misallocated between buyers and sellers who trade.

Alternatively, there may exist an equilibrium in which only buyers hold shares, so that  \frac{\phi_{t+1}+y}{\phi_{t}}\leq\frac{1}{\beta} and, from (64)

\displaystyle \frac{f^{\prime}(k^{b})}{z_{1}(k^{b},k)}\geq1,
so in this equilibrium shares trade at a price greater than their fundamental, reflecting a liquidity premium, which can be measured by the ratio  \frac{f^{\prime}(k^{b})}{z_{1}(k^{b},k)}. For this equilibrium, we must have
\displaystyle y\leq(1-\beta)z(\bar{k}^{b},\bar{k})
Since  m=1 in equilibrium, from (62), (64), and (65), we obtain
\displaystyle \left[ \frac{z(k^{b},k)}{z(k^{b},k)-y}\right] \left[ \frac{\sigma f^{\prime}(k^{b})+(1-\sigma)z_{1}(k^{b},k)}{z_{1}(k^{b},k)}\right] =\frac {1}{\beta},% (69)

\displaystyle \sigma\left[ f^{\prime}(k-k^{b})+z_{2}(k^{b},k)\right] +(1-\sigma)f^{\prime }(k)=\frac{1}{\beta},% (70)

which then solve for  k and  k^{b}.
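The two regimes can be illustrated numerically. The following is a minimal sketch that solves (66)-(67) for  (\bar{k}^{b},\bar{k}), computes the dividend threshold in (68), and then, for a dividend below that threshold, solves (69)-(70) with  m=1. The functional form  f(k)=k^{a}, the parameter values, and the use of numerical derivatives of  z are illustrative assumptions, not taken from the text.

import numpy as np
from scipy.optimize import fsolve

a, theta, sigma, beta = 0.5, 0.6, 0.5, 0.9      # hypothetical parameter values
f  = lambda k: k ** a
fp = lambda k: a * k ** (a - 1.0)

def z(kb, k):                                   # bargaining transfer, eq. (63)
    num = theta * fp(kb) * (f(k) - f(k - kb)) + (1 - theta) * fp(k - kb) * f(kb)
    den = theta * fp(kb) + (1 - theta) * fp(k - kb)
    return num / den

h = 1e-6
z1 = lambda kb, k: (z(kb + h, k) - z(kb - h, k)) / (2 * h)   # dz/dk^b
z2 = lambda kb, k: (z(kb, k + h) - z(kb, k - h)) / (2 * h)   # dz/dk

def unpack(v):                                  # map R^2 into 0 < k^b < k, k > 0
    u, w = v
    k = np.exp(w)
    return k / (1.0 + np.exp(-u)), k

def fundamentals(v):                            # eqs. (66)-(67)
    kb, k = unpack(v)
    return [z1(kb, k) - fp(kb),
            sigma * (fp(k - kb) + z2(kb, k)) + (1 - sigma) * fp(k) - 1.0 / beta]

kb_bar, k_bar = unpack(fsolve(fundamentals, [0.0, np.log(0.2)]))
y_max = (1 - beta) * z(kb_bar, k_bar)           # threshold (68)
print(f"fundamentals regime: k^b = {kb_bar:.4f}, k = {k_bar:.4f}, requires y >= {y_max:.4f}")

y = 0.5 * y_max                                 # a dividend below the threshold

def premium_regime(v):                          # eqs. (69)-(70), with m = 1
    kb, k = unpack(v)
    zz = z(kb, k)
    return [(zz / (zz - y)) * (sigma * fp(kb) + (1 - sigma) * z1(kb, k)) / z1(kb, k) - 1.0 / beta,
            sigma * (fp(k - kb) + z2(kb, k)) + (1 - sigma) * fp(k) - 1.0 / beta]

x0 = [np.log(kb_bar / (k_bar - kb_bar)), np.log(k_bar)]
kb_c, k_c = unpack(fsolve(premium_regime, x0))
phi = z(kb_c, k_c) - y                          # share price when m = 1 binds
print(f"liquidity-premium regime: k^b = {kb_c:.4f}, k = {k_c:.4f}, "
      f"phi = {phi:.4f} vs fundamental price {beta * y / (1 - beta):.4f}")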

7.1 Example 1:  \theta =1

If  \theta=1, so that the take-it-or-leave-it offer is made by the buyer in a nighttime random match, then our problem is somewhat different from the general case, due to the absence of a holdup problem for the buyer. Here, if in a random match between a buyer and seller the buyer is holding a quantity of shares  m satisfying

\displaystyle m\geq\frac{f(k)-f\left( \frac{k}{2}\right) }{\phi_{t+1}+y},% (71)

then the buyer will trade the quantity
\displaystyle m-\left[ \frac{f(k)-f\left( \frac{k}{2}\right) }{\phi_{t+1}+y}\right]
of his or her shares in exchange for capital, and  k^{b}=\frac{k}{2}. However, if
\displaystyle m\leq\frac{f(k)-f\left( \frac{k}{2}\right) }{\phi_{t+1}+y},
then the buyer trades all of his or her shares in exchange for capital, and the take-it-or-leave-it offer by the buyer gives
\displaystyle f(k)-f(k-k^{b})=m(\phi_{t+1}+y)
For the seller, who will face a take-it-or-leave-it offer from the buyer in the night in a random match, (65) becomes
\displaystyle f^{\prime}(k_{t+1})=\frac{1}{\beta}.
Therefore, since the seller gets no surplus from trading, his or her capital accumulation decision is unaffected by trading in the night. Let  k^{\ast} denote the solution to  f^{\prime}(k^{\ast})=\frac{1}{\beta}.
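Note, for reference below, that setting  \theta=1 in (63) gives
\displaystyle z(k^{b},k)=f(k)-f(k-k^{b}),
which simply says that a take-it-or-leave-it offer by the buyer compensates the seller exactly for the output forgone by giving up  k^{b}.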

In a steady state fundamentals equilibrium we have  \phi=\hat{\phi}, and it will be the case that sellers hold some of the stock of shares at the end of the day, or buyers hold some shares from one day until the next day. Trading in random matches will be unconstrained by the quantity of shares held by the buyer, so (71) holds, or

\displaystyle y\geq(1-\beta)\left[ f(k^{\ast})-f\left( \frac{k^{\ast}}{2}\right) \right] ,
and  k^{b}=\frac{k^{\ast}}{2}. In this equilibrium, capital is efficiently allocated between buyers and sellers in exchange in the night market.

In the case where

\displaystyle y\leq(1-\beta)\left[ f(k^{\ast})-f\left( \frac{k^{\ast}}{2}\right) \right] ,
buyers will be constrained in trading during the night, with
\displaystyle z(k^{b},k)=f(k^{\ast})-f(k^{\ast}-k^{b}),
and
\displaystyle z_{1}(k^{b},k)=f^{\prime}(k^{\ast}-k^{b}).
From (69) the following equation then solves for steady state  k^{b}:
\displaystyle \left[ \frac{f(k^{\ast})-f(k^{\ast}-k^{b})}{f(k^{\ast})-f(k^{\ast}-k^{b})-y}\right] \left[ \frac{\sigma f^{\prime}(k^{b})+(1-\sigma)f^{\prime}(k^{\ast}-k^{b})}{f^{\prime}(k^{\ast}-k^{b})}\right] =\frac{1}{\beta}% (72)

In this constrained equilibrium, the rate of return on shares is lower than the rate of time preference and  f^{\prime}(k^{b})\geq f^{\prime}(k^{\ast}-k^{b}), reflecting a liquidity premium on shares, with a misallocation of capital (relative to the social optimum) between buyers and sellers. The steady state price of shares is given by
\displaystyle \phi=f(k^{\ast})-f(k^{\ast}-k^{b})-y.
From (72), it is straightforward to show that  k^{b} is increasing in  y, and that  \phi is increasing in  y. Therefore, the liquidity premium on shares decreases as  y increases - a larger fundamental value for shares reduces the liquidity premium.
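As an illustration of these comparative statics, the following sketch inverts (72): rearranging, the dividend consistent with (72) at a given  k^{b} is  y=z(1-\beta A), where  z=f(k^{\ast})-f(k^{\ast}-k^{b}) and  A=\left[ \sigma f^{\prime}(k^{b})+(1-\sigma)f^{\prime}(k^{\ast}-k^{b})\right] /f^{\prime}(k^{\ast}-k^{b}). The functional form  f(k)=k^{a} and the parameter values are illustrative assumptions, not taken from the text.

a, sigma, beta = 0.5, 0.5, 0.8                 # hypothetical parameter values
f  = lambda k: k ** a
fp = lambda k: a * k ** (a - 1.0)

k_star = (a * beta) ** (1.0 / (1.0 - a))       # f'(k*) = 1/beta for f(k) = k**a
steps = 5
for i in range(steps):
    kb = k_star * (0.40 + 0.10 * i / (steps - 1))      # sweep k^b from 0.40 k* to 0.50 k*
    z = f(k_star) - f(k_star - kb)                     # transfer with theta = 1
    A = (sigma * fp(kb) + (1.0 - sigma) * fp(k_star - kb)) / fp(k_star - kb)
    y = z * (1.0 - beta * A)                           # dividend consistent with (72)
    phi = z - y                                        # share price
    premium = fp(kb) / fp(k_star - kb)                 # liquidity premium measure
    print(f"y = {y:.4f}  k^b = {kb:.4f}  phi = {phi:.4f}  premium = {premium:.3f}")

For these parameter values the implied  y rises with  k^{b} while the premium  f^{\prime}(k^{b})/f^{\prime}(k^{\ast}-k^{b}) falls toward one, consistent with the comparative statics just described.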

7.2 Example 2:  \theta =0

Now, consider the case where  \theta=0, so that the seller makes a take-it-or-leave-it offer. As the buyer receives no surplus, in this case shares must trade at their fundamental value, with  \phi=\hat{\phi}. From the seller's problem, there then exists a continuum of equilibria with  k^{b}\in[0,\frac{k^{\ast}}{2}], with  k satisfying

\displaystyle \sigma f^{\prime}(k-k^{b})+(1-\sigma)f^{\prime}(k)=\frac{1}{\beta}.

7.3 Monetary Equilibrium

It is easy to modify the model to reinterpret shares as money. That is, let  y=0, and allow the quantity of shares in existence to be augmented by the government through lump-sum transfers during the day. The fundamentals equilibrium is then the non-monetary equilibrium where  \phi_{t}=0 for all  t, and this equilibrium always exists. There is also a steady state monetary equilibrium where  \phi_{t}>0 for all  t and  \frac{\phi_{t}}{\phi_{t+1}}=\mu for all  t, where  \mu denotes the gross money growth rate. We then obtain two equations, analogous to (69) and (70),

\displaystyle \frac{\sigma f^{\prime}(k^{b})+(1-\sigma)z_{1}(k^{b},k)}{z_{1}(k^{b},k)}=\frac{\mu}{\beta},
\displaystyle \sigma\left[ f^{\prime}(k-k^{b})+z_{2}(k^{b},k)\right] +(1-\sigma)f^{\prime }(k)=\frac{1}{\beta},
that solve for  k and  k^{b} in the steady state. In general, a Friedman rule with  \mu=\beta will be optimal, as this gives the most efficient allocation of capital across buyers and sellers.
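As a simple illustration, specialize to  \theta=1 as in Example 1 (the discussion above is for general  \theta). Then  z_{1}(k^{b},k)=f^{\prime}(k-k^{b}) and  z_{2}(k^{b},k)=f^{\prime}(k)-f^{\prime}(k-k^{b}), so the second equation gives  k=k^{\ast} and the first reduces to  f^{\prime}(k^{b})/f^{\prime}(k^{\ast}-k^{b})=\left[ \mu/\beta-(1-\sigma)\right] /\sigma. The sketch below solves this in closed form for  f(k)=k^{a}; the functional form and parameter values are illustrative assumptions, not taken from the text.

a, sigma, beta = 0.5, 0.5, 0.96                # hypothetical parameter values

k_star = (a * beta) ** (1.0 / (1.0 - a))       # f'(k*) = 1/beta with f(k) = k**a
for mu in (beta, 1.00, 1.05, 1.10):            # Friedman rule first, then faster money growth
    ratio = (mu / beta - (1.0 - sigma)) / sigma        # implied f'(k^b)/f'(k* - k^b)
    kb = k_star / (1.0 + ratio ** (1.0 / (1.0 - a)))   # invert the ratio for f(k) = k**a
    print(f"mu = {mu:.2f}: k^b = {kb:.4f}  (k*/2 = {k_star / 2:.4f}), "
          f"f'(k^b)/f'(k*-k^b) = {ratio:.3f}")

At  \mu=\beta the ratio equals one and  k^{b}=k^{\ast}/2, so capital is allocated efficiently in nighttime matches; higher money growth lowers  k^{b} and widens the wedge.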

This is a simple model that captures exchange on asset markets where asset returns can include liquidity premia. Such liquidity premia seem potentially important in practice, since it is clear that money is not the only asset in existence whose value depends on its use in facilitating transactions. For example, U.S. Treasury bills play an important role in facilitating overnight lending in financial markets, as T bills are commonly used as collateral in overnight lending. Potentially, models such as this one, which allow us to examine the determinants of liquidity premia, can help to explain the apparently anomalous behavior of relative asset returns and asset prices.

8 Conclusion

New Monetarists are committed to modeling approaches that are explicit about the frictions that make monetary exchange socially useful, and that capture the relationship among credit arrangements, banking, and currency transactions. Ideally, economic models that are designed for analyzing and evaluating monetary policy should be able to answer basic questions concerning the necessity and role of central banking, the superiority of one type of central bank operating procedure over another, and the differences in the effects of central bank lending and open market operations.

New Monetarist economists have made progress in understanding the key frictions that make monetary exchange socially useful, and the basic mechanisms by which monetary policy can correct intertemporal distortions. However, much remains to be learned about the sources of short-run nonneutralities of money and their quantitative significance, and about the role of central banking. This paper takes stock of how a New Monetarist approach can build on advances in monetary theory and the theory of financial intermediation and payments, constructing a basis for progress in the theory and practice of monetary policy.

We conclude by borrowing from Hahn (1973), one of the editors of the previous Handbook of Monetary Economics. He begins his analysis by suggesting "The natural place to start is by taking the claim that money has something to do with the activity of exchange, seriously." He concludes as follows: "I should like to end on a defensive note. To many who would call themselves monetary economists the problems which I have been discussing must seem excessively abstract and unnecessary. ... Will this preoccupation with foundations, they may argue, help one iota in formulating monetary policy or in predicting the consequences of parameter changes? Are not IS and LM sufficient unto the day? ... It may well be that the approaches here utilized will not in the event improve our advice to the Bank of England; I am rather convinced that it will make a fundamental difference to the way in which we view a decentralized economy."

9 References - Incomplete

Berentsen, A., Menzio, G., and Wright, R. 2008.
" Inflation and Unemployment in the Long Run," working paper, University of Pennsylvania.
Bernanke, B. and Gertler, M. 1989.
"Agency Costs, New Worth, and Business Fluctuations," American Economic Review 79, 14-31.
Bryant, J. and Wallace, N. 1979.
"The Inefficiency of Interest-Bearing Government Debt," Journal of Political Economy 86, 365-381.
Bryant, J. and Wallace, N. 1984.
"A Price Discrimination Analysis of Monetary Policy," Review of Economic Studies 51, 279-288.
Calvo, G. 1983.
"Staggered Prices in a Utility Maximizing Framework," Journal of Monetary Economics 12, 383-398.
Clarida, R., Gali, J., and Gertler, M. 1999.
"The Science of Monetary Policy: A New Keynesian Perspective," Journal of Economic Literature 37, 1661-1707.
Cooley, T. and Hansen, G. 1989.
"The Inflation Tax in a Real Business Cycle Model," American Economic Review 79, 733-748.
Diamond, D. 1984.
"Financial Intermediation and Delegated Monitoring," Review of Economic Studies 51, 393-414.
Diamond, D., and Dybvig, P. 1983.
"Bank Runs, Deposit Insurance, and Liquidity," Journal of Political Economy 91, 401-419.
Ennis, H. and Keister, T. 2008.
"Run Equilibria in a Model of Financial Intermediation," working paper, Federal Reserve Bank of New York, Richmond Federal Reserve Bank.
Friedman, M. 1960.
A Program for Monetary Stability, Fordham University Press, New York.
Friedman, M. 1968.
"The Role of Monetary Policy," American Economic Review 58, 1-17.
Friedman, M. 1969.
The Optimum Quantity of Money and Other Essays, Aldine Publishing Company, New York.
Friedman, M. and Schwartz, A. 1963.
A Monetary History of the United States, 1867-1960, National Bureau of Economic Research, Cambridge, MA.
Golosov, M. and Lucas, R. 2005.
"Menu Costs and Phillips Curves," Journal of Political Economy 115, 171-199.
Goodfriend, M. 2007.
"How the World Achieved Consensus on Monetary Policy," Journal of Economic Perspectives 21, 47-68.
Hahn, F.H. (1973)
"On the Foundations of Monetary Theory," in Essays in Modern Economics, ed. Michael Parkin with A. R. Nobay, New York, Barnes & Noble.
Hansen, G. 1985.
"Indivisible Labor and the Business Cycle," Journal of Monetary Economics 16, 309-337.
Hicks, J.R. 1937.
"Mr. Keynes and the `Classics:' A Suggested Interpretation," Econometrica 5, 147-159.
Jones, R. 1976.
"The Origin and Development of Media of Exchange," Journal of Political Economy 84, 757-775.
Kareken, J. and Wallace, N. 1980.
Models of Monetary Economies, Federal Reserve Bank of Minneapolis, Minneapolis, MN.
Kiyotaki, N. and Wright, R. 1989.
"On Money as a Medium of Exchange," Journal of Political Economy 97, 927-954.
Kocherlakota, N. 1998.
"Money is Memory," Journal of Economic Theory 81, 232-251.
Lagos, R. 2008.
"Asset Prices and Liquidity in an Exchange Economy," working paper, New York University.
Lagos, R. and Wright, R. 2005.
"A Unified Framework for Monetary Theory and Policy Analysis," Journal of Political Economy 113, 463-484.
Lester, B., Postlewaite, A., and Wright, R. 2009.
" Information and Liquidity," working paper, University of Pennsylvania.
Lucas, R. 1972.
"Expectations and the Neutrality of Money," Journal of Economic Theory 4, 103-124.
Lucas, R. 1976.
"Econometric Policy Evaluation: A Critique," Carnegie-Rochester Conference Series on Public Policy 1, 19-46.
Lucas, R. and Stokey, N. 1987.
"Money and Interest in a Cash-in-Advance Economy," Econometrica 55, 491-515.
Mankiw, N. 1985.
"Small Menu Costs and Large Business Cycles: A Macroeconomic Model of Monopoly," Quarterly Journal of Economics 100, 529-537.
Mayer, T., Duesenberry, J., and Aliber, R. 1981.
Money, Banking, and the Economy.
Mortensen, D. and Pissarides, C. 1994.
"Job Creation and Job Destruction in the Theory of Unemployment," Review of Economic Studies 61, 397-416.
Nosal, E. and Rocheteau, G. 2006.
"The Economics of Payments," working paper, Federal Reserve Bank of Cleveland.
Ravikumar, B. and Shao, E. 2006.
"Search Frictions and Asset Price Volatility," working paper, University of Iowa.
Rocheteau, G. 2009.
"A Monetary Approach to Asset Liquidity," working paper, University of California-Irvine.
Rocheteau, G. and Wright, R. 2005.
"Money in Search Equilibrium, in Competitive Equilibrium, and in Competitive Search Equilibrium," Econometrica 73, 175-202.
Rogerson, R. 1988.
"Indivisible Labor, Lotteries, and Equilibrium," Journal of Monetary Economics 21, 3-16.
Samuelson, P. 1958.
"An Exact Consumption-Loan Model with or without the Social Contrivance of Money," Journal of Political Economy 66, 467-482.
Sargent, T. and Wallace, N. 1981.
"Some Unpleasant Monetarist Arithmetic," Minneapolis Federal Reserve Bank Quarterly Review, Fall.
Sargent, T. and Wallace, N. 1982.
"The Real Bills Doctrine versus the Quantity Theory: A Reconsideration," Journal of Political Economy 90, 1212-1236.
Shi, S. 1995.
"Money and Prices: A Model of Search and Bargaining," Journal of Economic Theory 67, 467-496.
Townsend, R. 1987.
"Economic Organization with Limited Communication," American Economic Review 77, 954-970.
Townsend, R. 1989.
"Currency and Credit in a Private Information Economy," Journal of Political Economy 97, 1323-1345.
Trejos, A. and Wright, R. 1995.
"Search, Bargaining, Money, and Prices," Journal of Political Economy 103, 118-141.
Wallace, N. 1981.
"A Modigliani-Miller Theorem for Open Market Operations," American Economic Review 71, 267-274.
Wallace, N. 1992.
"Lucas's Signal Extraction Model: A Finite-State Exposition with Aggregate Real Shocks, Journal of Monetary Economics 30, 433-447.
Wallace, N. 1998.
"A Dictum for Monetary Theory," Federal Reserve Bank of Minneapolis Quarterly Review, Winter, 20-26.
Williamson, S. 1986.
"Costly Monitoring, Financial Intermediation and Equilibrium Credit Rationing," Journal of Monetary Economics 18, 159-179.
Williamson, S. 1987.
"Financial Intermediation, Business Failures, and Real Business Cycles," Journal of Political Economy 95, 1196-1216.
Woodford, M. 2003.
Interest and Prices, Princeton University Press, Princeton NJ.


Footnotes

* We thank many friends and colleagues for useful discussions and comments, as well as the NSF for financial support. Return to Text
1. Consider Solow (1965): "I think that most economists would feel that short-run macroeconomic theory is pretty well in hand ... The basic outlines of the dominant theory have not changed in years. All that is left is the trivial job of filling in the empty boxes, and that will not take more than 50 years of concentrated effort at a maximum." Return to Text
2. To be clear, we do not want our New Keynesian example to be read as condoning the practice of assuming nominal rigidities in an ad hoc fashion. It is rather meant to show that even if one cannot live without such assumptions, this does not mean one cannot think seriously about money, banking, and so on. Also, the examples here are meant to be simple, to make the points starkly, but one can elaborate as one wishes. For example, Craig and Rocheteau (2007) have a version of our benchmark model with sticky prices as in Benabou (1988) and Diamond (1993), while Aruoba and Schorfheide (2009) have a version on par with a typical New Keynesian model that they estimate using modern econometric methods. Similarly, Faig and Li (2008) have a more involved version with signal extraction that they also take to data. Our goal is simply to illustrate basic qualitative effects. Return to Text
3. There are previous attempts to study monetary versions of Diamond-Dybvig, including Champ et al. (19xx), Freeman (19xx) and Huangfu and Sun (2008). Return to Text
4. Since Wallace (2009) is all about mechanism design as applied to monetary economics we will not say much more on that here. Return to Text
5. In the early 1980s, standard textbooks put it this way: "As a result of all of this work quantity theorists and monetarists are no longer a despised sect among economists. While they are probably a minority, they are a powerful minority. Moreover, many of the points made by monetarists have been accepted, at least in attenuated form, into the mainstream Keynesian model. But even so, as will become apparent as we proceed, the quantity theory and the Keynesian theory have quite different policy implications" (Mayer, Duesenberry and Aliber 1981). Return to Text
6. Many other contributions to this literature will be discussed below. See Ostroy and Starr (1990) for a survey of earlier attempts at building microfoundations for money using mainly general equilibrium theory. Overlapping generations models are discussed and surveyed in various places, including Wallace (1980) and Brock (1990). Return to Text
7. Random matching is an extreme assumption, but it captures the notion that people trade with each other, not only against budget constraints. Still it is easy to criticize. As Howitt (2000) puts it: "In contrast to what happens in search models, exchanges in actual market economies are organized by specialist traders, who mitigate search costs by providing facilities that are easy to locate. Thus when people wish to buy shoes they go to a shoe store; when hungry they go to a grocer; when desiring to sell their labor services they go to firms known to offer employment. Few people would think of planning their economic lives on the basis of random encounters." Based in part on such criticism, much early monetary theory has been redone using directed rather than random search - see e.g. Corbae et al. (2003) and Julien et al. (2008). While some results change, the basic theory is quite similar. Given this, for ease of presentation, we usually use random matching with the hope that readers understand the theory also works with directed search. Below we also describe explicitly versions where search is replaced entirely by preference and technology shocks. Return to Text
8. Many extensions and variations are possible. In Kiyotaki and Wright (1991) e.g. agents derive utility from all goods, but prefer some over others, and the set they accept is determined endogenously. In Kiyotaki and Wright (1989) or Aiyagari and Wallace (1991) there are  N goods and  N types of agents, where type  n consumes good  n and produces good  n+1 (\operatorname{mod} N). In this case,  N=2 implies  \sigma=0 and  \delta=1/2, while  N\geq3 implies  \sigma=1/N and  \delta=0. The case  N=3 has been used to good effect in the classic literature - e.g. by Wicksell (1912) and Jevons (1875). Return to Text
9. This is often described by saying agents are anonymous. In addition to Kocherlakota (1998), see Wallace (2001), Araujo (2004), and Aliprantis et al. (2007,2008) for more discussion. Also note that we only need some meetings to be anonymous; in applications below we assume that with a given probability meetings are monitored and credit can potentially be used. Return to Text
10. This is not quite how the original search models worked, as they usually assumed agents with money could not produce, but the version here is arguably more reasonable and simpler; see Rupert et al. (2001) for a discussion and references. Return to Text
11. Other applications of these first-generation models include the following: Kiyotaki and Wright (1989) allow goods to be storable and discuss commodity money. Kiyotaki and Wright (1991,1993) endogenize specialization in production and consumption, analyze welfare in detail, and consider versions with multiple currencies. Kiyotaki, Matsuyama and Matsui (1994) pursue issues in international monetary economics. Williamson and Wright (1994) introduce private information to show how money can ameliorate certain lemons problems. Li (1994,1995) introduces endogenous search intensity and discusses the optimal taxation of money in the presence of search externalities. Ritter (1995) asks which agents can introduce fiat currency (e.g. government). Green and Weber (1996) discuss counterfeiting. He et al. (2005) and Lester (2009) study banking and payments issues. Return to Text
12. It is well known that the Nash solution has strategic foundations in terms of non-cooperative games (see e.g. Osborne and Rubinstein 1990). Shi (1995) and Trejos-Wright (1995) actually use the symmetric Nash solution, but the analysis can be extended as in Rupert et al. (2001). Other solution concepts can also be used - e.g. Curtis and Wright (2004) use price posting; Julien et al. (2008) use auctions in a version with some multilateral meetings; and Wallace and Zhou (2007,2008) use mechanism design. Return to Text
13. Other applications of this model include the following: Shi (1996) introduces bilateral borrowing and lending to study the relation between money and credit. Coles and Wright (1998) and Ennis (1999) study nonstationary equilibria. Williamson (1999) considers private money. Cavalcanti and Wallace (1999a,1999b) introduce banks. Trejos (1999) considers private information. Li (1999), Johri and Leach (2002), and Shevchenko (2004) study middlemen. Nosal and Wallace (2007) analyze counterfeiting. Return to Text
14. Other approaches to relaxing  m\in\{0,1\} include Camera and Corbae (1998), Deviatov and Wallace (1998), Zhu (2003,2004), and a series of papers following up on Green and Zhou (1997) that are cited in Jean et al. (2009). Some of these models assume  m\in\{0,1,...,\overline{m}\}, where the upper bound  \overline{m} may or may not be finite; (5) still holds in such cases, including the special case  \overline{m}=1 studied above. Return to Text
15. One can make this model easier to compute by assuming competitive markets, rather than bilateral bargaining, as in Dressler (2008,2009). As discussed in Rocheteau and Wright (2005), search-based models of money can be adapted to accommodate competitive price taking. It might help to think about labor search models, like Mortensen-Pissarides (1994), which uses bargaining, and Lucas-Prescott (1979), which uses price taking. One interpretation is that in the former agents meet bilaterally, while in the latter they meet on islands representing local labor markets, but on each island there are enough workers and firms that it makes sense to take wages parametrically. The same is true in monetary search models. Specialization and anonymity can lead to an essential role for money despite agents meeting in large groups. So one can proceed as in either Molico or Dressler, recognizing that while the latter is easier it is also less rich, since it loses e.g. the endogenous price distribution. Return to Text
16. One can also proceed differently without changing basic results. Williamson (2007) e.g. assumes both markets are always open and agents randomly transit between them. For some issues, it is also interesting to have more than one round of trade in the DM between meetings of the CM, as in Camera et al. (2005) and Ennis (2008), or more than one period of CM trade between meetings of the DM, as in Telyukova and Wright (2008). Chiu and Molico (2008) actually allow agents to transit between markets whenever they like, at a cost, embedding something that looks like the model of Baumol (1952) and Tobin (1956) into general equilibrium where money is essential, but that requires numerical methods. Return to Text
17. As discussed below, we can use general utility if we assume indivisible labor, but we take divisible labor and quasi-linearity as a benchmark. Return to Text
18. An assumption not made explicit in early presentations of the model, but clarified by the work of Aliprantis et al. (2006,2007), is that in the CM agents observe only prices, and not other agents' actions. If they did observe others' actions there would be a potential to use triggers, rendering money inessential. Aliprantis et al. (2007) also describe variations on the environment where triggers cannot be used, and hence money is essential, even if agents' actions can be observed in the CM. This was perhaps less of an issue in models with no CM - or perhaps not, since multilateral trade is neither necessary nor sufficient for public observability or communication. Return to Text
19. The fact that  \hat{m} is independent of  m does not quite imply that all agents choose the same  \hat{m}. In a version of the model with some multilateral meetings, and auctions instead of bargaining, Galenianos and Kircher (2008) show that agents are indifferent over  \hat{m} in some set, and equilibrium entails a nondegenerate distribution  F(\hat{m}), similar to the way Burdett-Judd (1980) entails a nondegenerate distribution of prices. We can rule that out in our baseline model. Return to Text
20. As in earlier models, one can use different mechanisms. Rocheteau and Wright (2005) and many others since use price taking and price posting. Aruoba et al. (2007) use several alternative bargaining solutions. Galenianos and Kircher (2007) and Duttu et al. (2009) use auctions. Ennis (2008) and Dong and Jiang (2009) use posting in versions with private information. Hu et al. (2009) use pure mechanism design. Return to Text
21. Nonstationary equilibria, where endogenous variables change over time even for fixed fundamentals and policy, including sunspot, cyclic and chaotic equilibria, are studied in Lagos and Wright (2003), paralleling closely the analysis in other monetary models (see e.g. Azariadis 1993). Return to Text
22. If we allow nonseparable utility, while maintaining quasi-linearity, for tractability, we can get  X to depend on  i. Heuristically, suppose e.g. that  x and  X are substitutes: then an increase in  i, by reducing  x, increases  X. This is made precise below in the discussion of the long-run Phillips curve. Return to Text
23. Lagos and Rocheteau (2008) do allow  K and  M to compete as media of exchange, and show that  M can still be essential. Intuitively, if  K is not sufficiently productive, or the need for liquidity is sufficiently great, without  M agents overinvest, and then fiat currency improves welfare. See also Geromichalos et al. (2007) and Jacquet and Tan (2009). See Wallace (1980) for similar results in an overlapping generations model. However, in these papers  K and  M are equally liquid, and hence must pay the same return in equilibrium. More on this later. Return to Text
24. That is, the nonstochastic version of Hansen (1985). But at this stage it is routine to write down versions with stochastic shocks. See e.g. Aruoba (2008), Aruoba and Schorfheide (2008), and Telyukova and Visschers (2009). It is also possible to add long-run technological change and study balanced growth, under the right assumptions, as in Waller (2009). Return to Text
25. Notice that if  K does not enter the DM technology the last term in (16) vanishes and  K drops out of (15). In this case the system dichotomizes: we can independently solve (15) for the DM allocation  x and the other three equations for the CM allocation  (X,K,H), and monetary policy affects the former but not the latter. This is why it is interesting to include  K in  c(x,K). A special case of this dichotomy obtains in the baseline model, where  i affected  x but not  X, but as we will soon see, this can be overturned if we allow nonseparable preferences. Return to Text
26. The approach follows Rocheteau et al. (2007) and Dong (2009). Alternatively, Berentsen et al. (2009) and Liu (2009) use the unemployment theory in Mortensen and Pissarides (1994), which is quite different. Return to Text
27. As is standard, in lottery equilibrium, agents will get paid for their probability of working. If one does not like lotteries, the same allocations can be supported using only Arrow-Debreu contingent commodity markets with a little extrinsic uncertainty, as in Shell-Wright (1993). Return to Text
28. Existing work includes the following: Aruoba and Chugh (2007) and Gomis-Porqueras and Peralta-Alva (2007) study optimal monetary and fiscal policy problems with commitment, deriving some results that differ from conventional wisdom. Martin (2009) studies similar problems without commitment. Banks are introduced by Berentsen et al. (2007) as follows: after the CM closes but before the DM opens agents realize shocks determining who will be buyers and sellers, generating gains from transferring liquidity from the latter to the former that banks help to realize. See also Chiu and Meh (2009), Li (2007), He et al. (2007), and Camera and Bencivenga (2008) (we introduce banking in a different way below). Several papers, including Boel and Camera (2006) and Berentsen and Waller (2009), study the interaction between money and bonds. Berentsen and Monnet (2008) use the model to discuss details of monetary policy implementation. Guerrieri and Lorenzoni (2009) analyze the effects of liquidity on business cycles and use the model to interpret the post-1984 moderation in terms of monetary policy. Several people are using versions of the framework to discuss asset markets, including Lagos (2007) and Rocheteau (2009). Return to Text
29. By way of analogy, e.g. Pissarides (2000) has two types (workers and firms), while Diamond (1982) has only one (traders), which allows the former to introduce components like general matching functions and free entry. Also, one can say that having two types here makes the model similar to earlier monetary models, with  m\in\{0,1\}. Return to Text
30. Faig and Li (2008) also analyze signal extraction, while Aruoba and Schorfheide (2009) analyze nominal rigidities, in related models. They also provide serious quantitative analyses, while the emphasis here is on illustrating the basic ideas as simply as possible. Return to Text
31. Many applications of the general framework assume  u(0)=0, for technical reasons, but we do not need this here. Return to Text
32. Papers that are similar to what we present here include Lagos (2008), Lester et al. (2009), Rocheteau (2009), and Shao and Ravikumar (2006). Some contributions that are closely related but not quite the same include Duffie et al. (200x), Lagos and Rocheteau (2009), and Weill (200x). Return to Text
