Remarks by Chairman Alan Greenspan
Price measurement
At the Center for Financial Studies
Frankfurt, Germany
November 7, 1997

The remarkable progress that has been made by virtually all of the major industrial countries in achieving low rates of inflation in recent years has brought into sharper focus the issue of price measurement. As we move closer to price stability, accurate price measurement becomes an especially pressing challenge. Biases of a few tenths in annual inflation rates do not matter when inflation is high. They do matter when, as now, a debate has emerged over whether our economies are moving toward price deflation.

In today's advanced economies, allocative decisions are primarily made not by governments but by markets, and the central guide to the efficient allocation of resources in a market economy is prices. Prices are the signals through which tastes and technology affect the decisions of consumers and producers, directing resources toward their highest valued use. Of course, this signaling process would work with or without government statistical agencies that measure individual and aggregate price levels, and in this sense, price measurement probably is not fundamental for the overall efficiency of the market economy. Indeed, vibrant market economies existed long before government agencies were established to measure prices.

Nonetheless, in a modern monetary economy, accurate price measurement is of considerable importance, increasingly so for central banks whose mandate is to maintain financial stability. Accurate price measures are necessary for understanding economic developments, not only involving inflation but also involving real output and productivity. If the general price level is estimated to be rising more rapidly than is in fact the case, then we are simultaneously understating growth in real output and productivity. Real incomes and living standards are rising faster than our published data suggest. Under these circumstances, policymakers must be cognizant of the shortcomings of our published price indexes to avoid misguided actions that will provoke unintended consequences. Clearly, central bankers need to be conscious of the problems of price measurement as we gauge policies designed to promote price stability and maximum sustainable economic growth. Moreover, many economic transactions, both private and public, are explicitly tied to movements in some published price index, most commonly a consumer price index; and some transactions that are not explicitly tied to a published price index may nevertheless take such an index into account less formally. If the price index is not accurately measuring what the participants in such transactions believe it is measuring, then economic transactions will be skewed.

The measured price indexes have played an especially prominent role in Germany, both in terms of public perceptions of inflation performance and as a guide for policymakers. The Bundesbank's long-standing commitment to price stability and the public's support for that commitment derive at least to some extent from Germany's experiences with hyperinflation earlier this century. Given this experience with the devastation that such inflation can bring to the economy and to people's lives, it comes as no surprise that your public and your policymakers give such careful scrutiny to the available measures of inflation. Germany has a reputation for special vigilance in guarding the stability of the price level and has achieved an admirable record of success in maintaining low inflation over the postwar period. From the standpoint of monetary policy, this very success makes accurate price measurement all the more important. When measured inflation is high, we can be confident that the proper direction of monetary policy is to bring inflation lower. But when measured inflation is low, the proper direction of monetary policy, as I indicated, could depend crucially on the accuracy of those measurements.

The importance of accurate price measurement was particularly apparent during unification, when it became necessary to gauge productivity in East and West Germany on a comparable basis. Initial estimates of East German productivity relative to that of the West were considerably higher than later, more accurate estimates showed to be the case. These differences, we are told, owed largely to the difficulties in adjusting the prices of East German products to take into account that they were, on average, of lower quality than the equivalent items produced in the West.

In thinking about the problems of price measurement, a distinction must be made between the measurement of individual prices, on the one hand, and the aggregation of those prices into indexes of the overall price level, on the other. The notion of what we mean by a general price level--or more relevantly, its change--is never unambiguously defined. Moreover, in practice, aggregation can be complicated because standard price indexes frequently assume that individuals and businesses purchase the same basket of goods and services over time--whereas, in fact, people substitute some goods for others when relative prices change and as new goods are introduced. How one aggregates individual prices, of course, depends on the purpose of the measure. Still, the problems of aggregation are well understood by economists, and workable solutions are within reach. Many countries have made progress in utilizing aggregation formulas that do take into account product substitutions, and further progress in this area seems likely in the years ahead.
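To make the substitution problem concrete, here is a stylized two-good sketch in Python; the prices, quantities, and the shift toward good B are invented for illustration. A fixed-basket (Laspeyres) index prices the base-period basket at current prices, while a Fisher index, the geometric mean of the fixed-base and current-base measures, allows for substitution:

```python
# Stylized two-good example of substitution bias (all numbers invented).
# When good A becomes dearer, purchases shift toward good B; a fixed-basket
# (Laspeyres) index ignores that shift, while a Fisher index averages the
# fixed-base (Laspeyres) and current-base (Paasche) measures.
import math

p0 = {"A": 1.00, "B": 1.00}   # base-period prices
q0 = {"A": 100, "B": 100}     # base-period quantities
p1 = {"A": 1.20, "B": 1.00}   # good A rises 20 percent...
q1 = {"A": 80, "B": 120}      # ...and purchases shift toward good B

def value(p, q):
    """Total expenditure on basket q at prices p."""
    return sum(p[g] * q[g] for g in p)

laspeyres = value(p1, q0) / value(p0, q0)   # base-period basket: 1.100
paasche   = value(p1, q1) / value(p0, q1)   # current-period basket: 1.080
fisher    = math.sqrt(laspeyres * paasche)  # geometric mean: about 1.090

print(f"Laspeyres: {laspeyres:.3f}  Paasche: {paasche:.3f}  Fisher: {fisher:.3f}")
```

In this example the fixed basket overstates the substitution-consistent Fisher measure by about a percentage point, which is the substitution bias just described.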

It is the measurement of individual prices, not the aggregation of those prices, that is so difficult conceptually. At first glance, observing and measuring prices might not appear especially daunting. After all, prices are at the center of virtually all economic transactions. But, in fact, the problem is extraordinarily complex. To be sure, the nominal value--in dollars or deutsche marks, for example--of most transactions is unambiguously exact and, at least in principle, is amenable to highly accurate estimation by our statistical agencies. But dividing that nominal value change into components representing changes in real quantity versus price requires that one define a unit of output that is to remain constant over time. Defining such a constant-quality unit of output is the central conceptual difficulty in price measurement.
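Schematically, and with notation of my own choosing rather than anything official: if V denotes the nominal value of transactions, p the price, and q the quantity of a constant-quality unit, then

```latex
\[
V = p \, q
\qquad\Longrightarrow\qquad
\frac{\Delta V}{V} \;\approx\; \frac{\Delta p}{p} + \frac{\Delta q}{q} .
\]
```

The left-hand side is observed with precision; apportioning it between the two right-hand terms requires the constant-quality unit q, and defining that unit is precisely the difficulty at issue.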

Such a definition may be clear for unalloyed aluminum ingot of 99.7 percent purity for the vast proportion of transactions; consequently, its price can be compared over time with a degree of precision adequate for virtually all producers and consumers of aluminum ingot. Similarly, the prices of a ton of cold-rolled steel sheet, or of a linear meter of cotton broad woven fabric, can be reasonably compared over a period of years.

But when the characteristics of products and services are changing rapidly, defining the unit of output, and thereby adjusting an item's price for improvements in quality, can be exceptionally difficult. These problems are becoming pervasive in modern economies as service prices, which are generally more difficult to measure, become more prominent in aggregate price measures. One does not have to look to the most advanced technology to recognize the difficulties that are faced. To take just a few examples, automobile tires, refrigerators, winter jackets, and tennis rackets have all changed in ways that make them surprisingly hard to compare to their counterparts of twenty or thirty years ago.

The continual introduction of new goods and services into the market creates special challenges for price measurement. In some cases, a new good may best be viewed as an improved version of an old good. But, in many cases, new products may deliver services that simply were not available before. When personal computers were first introduced, the benefits they brought households in terms of word processing services, financial calculations, organizational assistance, and the like, were truly unique. The introduction of heart bypass operations literally prolonged many lives by decades. And, further in the past, think of the revolutionary changes that automobile ownership, or jet travel, brought to people's lives. In theory, economists understand how to value such innovations; in practice, it is an enormous challenge to construct such an estimate with any precision.

The area of medical care, where technology is changing in ways that make techniques of only a decade ago seem archaic, provides some particularly striking illustrations of the difficulties involved in measuring quality-adjusted prices. Cures and preventive treatments have become available for previously untreatable diseases. Medical advances have led to new treatments that are more effective and that have increased the speed and comfort of recovery. In an area with such rapid technological change, what is the appropriate unit of output? Is it a procedure, a treatment, or a cure? How does one value the benefit to the patient when a condition that once required a complicated operation and a lengthy stay in the hospital now can be easily treated on an outpatient basis?

Although there is considerable uncertainty, the pace of change and the shift toward output that is difficult to measure are more likely to quicken than to slow down. How, then, will we measure inflation in the future if our measurement techniques become increasingly obsolete? We must keep in mind that, difficult as the problem seems, consistently measured prices do exist in principle. Embodied in all products is some unit of output, and hence of price, that is recognizable to those who buy and sell the product if not to the outside observer. A company that pays a sum of money for computer software knows what it is buying, and at least has an idea about its value relative to software it has purchased in the past, and relative to other possible uses for that sum of money in the present.

Furthermore, so long as people continue to exchange nominal interest rate debt instruments and contract for future payments in terms of dollars or other currencies, there must be a presumption about the future purchasing power of money no matter how complex individual products become. Market participants do have a sense of the aggregate price level and how they expect it to change over time, and these views must be embedded in the value of financial assets.

The emergence of inflation-indexed bonds, while providing us with useful information, does not solve the problem of ascertaining an economically meaningful measure of the general price level. By necessity, the total return on indexed bonds must be tied to forecasts of specific published price indexes, which may or may not reflect the market's judgment of the future purchasing power of money. To the extent they do not, of course, the implicit real interest rate is biased in the opposite direction. Moreover, we are, as yet, unable to separate compensation for inflation risk from compensation for expected inflation.

Eventually, financial markets may develop the instruments and associated analytical techniques for unearthing these implicit changes in the price level with some precision. In those circumstances, then--at least for purposes of monetary policy--these measures could obviate the more traditional approaches to aggregate price measurement now employed. They may help us understand, for example, whether markets perceive the true change in aggregate prices to reflect fixed or variable weight indexes of the components or whether arithmetic or logarithmic weighting of the components is more appropriate.
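The distinction just mentioned can be stated compactly; the notation is mine, not part of any official index definition. With expenditure weights w_i that sum to one, an arithmetic (fixed-weight) aggregation and a logarithmic (geometric) aggregation combine the same price relatives differently:

```latex
\[
\frac{P_t}{P_0} \;=\; \sum_i w_i \,\frac{p_{i,t}}{p_{i,0}}
\qquad\text{versus}\qquad
\frac{P_t}{P_0} \;=\; \prod_i \left(\frac{p_{i,t}}{p_{i,0}}\right)^{\!w_i} .
\]
```

The geometric form is never larger than the arithmetic one and implicitly allows a degree of substitution toward items whose relative prices fall.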

But, for the foreseeable future, we shall have to rely on our statistical agencies to produce the price data necessary to assess economic performance and to make economic policy. In that regard, assuming further advances in economic science and provided that our statistical agencies receive adequate resources, procedures should continue to improve. To be sure, progress will not be easy, for estimating the value of quality improvements is a painstaking process. It must be done methodically, item by item. But progress can be made.

One improvement that has been made in recent years is a better ability to capture quality differences by pricing the underlying characteristics of complex products. With an increasingly wide range of product variants available to the public, product characteristics are now bundled together in an enormous variety of combinations. A "personal computer" is, in actuality, an amalgamation of computing speed, memory, networking capability, graphics capability, and so on. Computer manufacturers are moving toward build-to-order systems, in which any combination of these specifications and peripheral equipment is available to each individual buyer. Other examples abound. Advancements in computer-assisted design have reduced the costs of producing multiple varieties of small machine tools. The variety of commercial aircraft is much larger now than it was twenty years ago. And in services, witness the plethora of products now available from financial institutions, which have allowed a more complete disentangling and exchange of economic risks across participants around the world. Although hard data are scarce, there can be little doubt that products are tailor-made for the buyer to a larger extent than ever. Gone are the days when Henry Ford could say he would sell a car of any color "so long as it's black."

In such an environment, when product characteristics are bundled together in so many different combinations, defining the unit of output means unbundling these characteristics and pricing each of them separately. The so-called hedonic technique is designed to do precisely that. This technique associates changes in a product's price with changes in product characteristics. It therefore allows a quality comparison when new products with improved characteristics are introduced.
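As an illustration only, the following minimal sketch applies the time-dummy variant of the hedonic method to invented computer-price data; the figures, characteristics, and variable names are all hypothetical. Log price is regressed on log characteristics plus period dummies, and the exponentiated dummy coefficients trace out a constant-quality price index:

```python
# Time-dummy hedonic regression on invented data (illustration only).
# Each observation is a computer model sold in a given year; regressing
# log(price) on log(characteristics) plus year dummies yields year-dummy
# coefficients whose exponentials form a quality-adjusted price index.
import numpy as np

# Hypothetical observations: (year, speed in MHz, memory in MB, price in $)
data = [
    (0, 100, 16, 2500),
    (0, 133, 32, 3200),
    (1, 166, 32, 2600),
    (1, 200, 64, 3100),
    (2, 233, 64, 2200),
    (2, 266, 96, 2500),
]

years = np.array([d[0] for d in data])
chars = np.log(np.array([[d[1], d[2]] for d in data], dtype=float))
log_p = np.log(np.array([d[3] for d in data], dtype=float))

# Design matrix: intercept, log(speed), log(memory), dummies for years 1 and 2.
X = np.column_stack([
    np.ones(len(data)),
    chars,
    (years == 1).astype(float),
    (years == 2).astype(float),
])

beta, *_ = np.linalg.lstsq(X, log_p, rcond=None)

# exp(dummy coefficient) is the price of a constant-quality machine
# relative to year 0; a falling index signals quality-adjusted deflation.
index = np.exp([0.0, beta[3], beta[4]])
print("Quality-adjusted price index (year 0 = 1):", np.round(index, 3))
```

With rapidly improving characteristics and roughly flat list prices, such an index typically falls even though observed prices do not, which is exactly the adjustment the hedonic technique is meant to capture.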

Not surprisingly, one area in which this approach has been especially useful is in computer technology. In the United States, prior to the mid-1980s, computer prices simply were held constant in the national accounts. Now, with the introduction of hedonic techniques, the accounts show computer prices declining at double-digit rates, surely a more accurate estimate of the true quality-adjusted price change. The few other countries that have introduced these techniques--France being the most recent--show computer prices declining much more rapidly than in the majority of countries that have not yet done so.

But hedonics are by no means a panacea. First of all, this technique obviously will be of no use in valuing the quality of an entirely new product that has fundamentally different characteristics from its predecessors. The benefits of cellular telephones, and the value they provide in terms of making calls from any location, cannot be measured from an examination of the attributes of standard telephones.

In addition, the measured characteristics may only be proxies for the overall performance that consumers ultimately value. In the case of computers, the buyer ultimately cares about the quality of services that computer will provide--word processing capabilities, database services, high-speed calculations, and so on. But, in many cases, the number of instructions processed per second and the other easily measured characteristics may not be a wholly adequate proxy for the computer services that the buyer values. In these circumstances, the right approach, ultimately, may be to move toward directly pricing the services we obtain from our computers--that is, word processing services, database management services, and so on--rather than pricing separately the hardware and software.

The issues surrounding the appropriate measurement of computer prices also illustrate some of the difficulties of valuing goods and services when there are significant interactions among users of the products. New generations of computers sometimes require software that is incompatible with previous generations, and some users who have no need for the improved computing power nevertheless may feel compelled to purchase the new technology because they need to remain compatible with the bulk of users who are at the frontier. Even if our techniques allow us to accurately measure consumers' valuation of the increased speed and power of the new generation of computer, we may miss the negative influence on some consumers of this incompatibility. Therefore, even in the case of personal computers, where we have made such great strides in measuring quality changes, I suspect that important phenomena still may not be adequately captured by our published price indexes.

Despite the advances in price measurement that have been made over the years, there remains considerable room for improvement. In the United States, a group of experts empaneled by the Senate Finance Committee--the Boskin commission--concluded that the consumer price index has overstated changes in the cost of living by roughly one percentage point per annum in recent years. About half of this bias owed to inadequate adjustment for quality improvement and the introduction of new goods, and about half reflected the manner in which the individual prices were aggregated. Researchers at the Federal Reserve and elsewhere have come up with similar figures. Although the estimates of bias owing to inadequate adjustment for quality improvements surely are the most uncertain aspect of this calculation, the preponderance of evidence is that, on average, such a bias in quality adjustment does exist.

The Boskin commission, like most other studies of bias in the U.S. CPI, took a microstatistical approach, estimating separately the magnitude of each category of potential bias. Recent work by staff economists at the Federal Reserve Board has added corroborating evidence of price mismeasurement, using a macroeconomic approach that is essentially independent of the microstatistical exercises. Specifically, employing disaggregated data from the national income and product accounts, this research finds that the measured growth of real output and productivity in the service sector is implausibly weak, given that the return to owners of businesses in that sector apparently has been well-maintained. Indeed, the published data indicate that the level of output per hour in a number of service-producing industries has been falling for more than two decades. It is simply not credible that firms in these industries have been becoming less and less efficient for more than twenty years. Much more reasonable is the view that prices have been mismeasured and that the true quality-adjusted prices have been rising more slowly than the published price indexes. Properly measured, output and productivity trends in these service industries might be considerably stronger than suggested by the published data. Assuming, for example, no change in productivity for these industries would imply a price bias consistent with the Boskin commission findings.
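The logic of that calculation can be put in a single line, with symbols of my own choosing. Because measured real output is nominal output deflated by a price index, any overstatement of inflation, call it b, passes one-for-one into an understatement of productivity growth:

```latex
\[
g^{\text{meas}}_{\text{prod}} \;=\; g^{\text{true}}_{\text{prod}} \;-\; b .
\]
```

If measured output per hour in a service industry declines by, say, one percent per year while true productivity is assumed to be flat, the implied price bias b is a full percentage point for that industry.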

Of course, the United States is not the only country that faces challenges in constructing an accurate measure of inflation. Other countries--Germany among them--confront similar issues. In a recent survey of consumer price indexes in its member countries, the OECD found that most countries felt that measurement bias was smaller in magnitude in their own countries than in the United States. Certainly regarding quality adjustment, however, I doubt that this is generally the case. Many countries' responses were prepared by the countries' statistical agencies, which tend to take a somewhat more sanguine view of the adequacy of the existing price statistics than do outside economists. But, in any case, the OECD survey did indicate that many countries reported that measurement bias was a concern and that most countries do not adequately adjust their statistics for quality improvements. Indeed, as I noted previously, most European countries still have yet to adopt the most up-to-date techniques for measuring computer prices in their national accounts. As the OECD survey recognized, the challenges presented by rapid technological advances have affected all of us--not just the United States. Thus, potential sources of measurement bias should be seriously examined in all countries.

Indeed, issues of price measurement may be especially important for the European countries entering into monetary union. For a region with a single monetary policy, a single, consistently estimated measure of inflation is necessary to gauge the region's economic performance. Toward that end, as you know, Eurostat publishes harmonized indexes of consumer prices that are constructed using a common basket of goods and services for each EU member state and using similar statistical methodology. These measures should go a long way toward providing a conceptually sound basis for judging convergence of EU member states in the selection of countries to participate in monetary union. Subsequent to monetary union, harmonized consumer prices can be used as the best available measure of inflation in the Euro area.

However, as it now stands, the harmonized measures do not contain a broad coverage of consumer services. Most notably, the costs of owner-occupied housing--a sizable share of consumer expenditures--are excluded from the harmonized indexes. In the United States, for example, the CPI calculated on this harmonized basis would have increased three or four tenths of a percentage point more slowly than the published CPI, on average, over the past few years, largely because prices of owner-occupied housing have been rising more rapidly than the other components. Arguably, the published index, with broader coverage, is more relevant to assessing inflation trends in the United States than would be the harmonized index. As long as relative prices can and do diverge across countries, the harmonized indexes need to contain as broad a range of items as is practical.
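The arithmetic behind a gap of that size is straightforward; the weight and the inflation differential below are illustrative assumptions, not official CPI figures. If housing carries weight w in the full index and its inflation rate exceeds that of the other items by d, then

```latex
\[
\pi_{\text{full}} - \pi_{\text{ex-housing}}
\;=\; \bigl[\, w\,\pi_h + (1-w)\,\pi_o \,\bigr] - \pi_o
\;=\; w\,(\pi_h - \pi_o)
\;=\; w\,d .
\]
```

A weight of roughly one-fifth combined with a differential of one and a half to two percentage points would, for example, produce a gap of three to four tenths of a percentage point, consistent with the figures just cited.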

As monetary union proceeds, then, it would be to the advantage of monetary authorities in the Euro area to have a consistent measure of inflation defined over a broad basket of goods and services that is measured according to established statistical methods. Most useful would be for the member countries to continue the harmonization process until the national statistical agencies are truly working on a consistent basis. Indeed, measuring prices consistently across countries could be an important step toward making price measurement more accurate everywhere, if harmonization results in each country's best practices being adopted throughout the monetary union. Moreover, different prices of the same tradable good across the community might signal inefficiencies of distribution which were not evident from other sources.

Harmonization of CPIs in Europe is just one of many examples demonstrating why price measurement techniques cannot be static. With innovation constantly leading to new products, greater variety, and higher quality, the statistical agencies must work ever harder just to stay in place. A government official in the United States once compared a nation's statistical system to a tailor, measuring the economy much as a tailor measures a person for a suit of clothes--with the difference that, unlike the tailor, the person we are measuring is running while we try to measure him. The only way the system can succeed, he said, is to be just as fast and twice as agile. That is the challenge that lies ahead, and it is, indeed, a large one.

There are, however, reasons for optimism. The information revolution, which lies behind so much of the rapid technological change that makes prices difficult to measure, may also play an important role in helping our statistical agencies acquire the necessary speed and agility to better capture the changes taking place in our economies. For example, computers might some day allow our statistical agencies to tap into a great many economic transactions on a nearly real-time basis. Utilizing data from store checkout scanners, which the United States is now investigating, may be an important first step in that direction. But the possibilities offered by information technology for the improvement of price measurement may turn out to be much broader in scope. Just as it is difficult to predict the ways in which technology will change our consumption over time, so is it difficult to predict how economic and statistical science will make creative use of the improved technology.

Such advances must be made to ensure that our economic statistics remain adequate to support the public policy decisions that must be made. If the challenge for our statistical agencies is not to lose in their race against technology, the challenge for policymakers is to make our best judgments about the limitations of the existing statistics, as we design policies to promote the economic well-being of our nations.
