Emi Nakamura

Econ Focus
Third Quarter 2015
Interview

Photo by Olivia Chong

A key question in macroeconomics is the extent to which demand shocks — ranging from changes in monetary and fiscal policies to private-sector events such as consumer deleveraging — affect "real" variables in the economy such as output and employment.

Empirical research strongly suggests that these phenomena can, in fact, have a large effect on the real economy. While perhaps not surprising to most economists, it does require some explaining. In simple models in which markets work perfectly, prices and wages respond quickly to shocks. In such a world, output and employment would not respond much to demand shocks — and monetary policy in particular would have no effect on real variables, an outcome known as "monetary neutrality."

A favored explanation for why this doesn't occur in the real world is the idea that prices are "sticky": They do not adjust quickly or completely to shocks. If prices are sticky, not only can resources fail to flow to where they are most highly valued, but economy-wide problems like recessions and unemployment can result.

Columbia University economist Emi Nakamura has spent much of her research career measuring price stickiness. She, along with frequent co-author and spouse Jón Steinsson, was one of the first researchers to analyze the micro data underlying the U.S. consumer price index (CPI), a dataset that provides the most broad-based measures of price rigidity for the U.S. economy. They showed that previous measures from these data, which suggested a great deal of price flexibility, did not account for important nuances of retail prices, such as temporary sales.

Such findings have important implications for macroeconomic policy, another focus of Nakamura’s research. Her work measuring the effectiveness of fiscal and monetary policies has exploited unique datasets to argue, for example, that state-level variation in military spending can be used as a source of "natural experiments" to estimate the size of the aggregate fiscal multiplier, and that official Chinese statistics on inflation are not quite what they seem.

Nakamura is currently a visiting professor at the Massachusetts Institute of Technology. Renee Haltom interviewed her in her office in Cambridge in October 2015.


EF: You and Jón Steinsson were among the first researchers to exploit large micro datasets — that is, pricing at the level of individual goods and services — to measure price stickiness. What new information does the micro data provide?

Nakamura: Before the work on micro data, most of the monetary economics papers used an assumption like, "prices change once a year." That was based on very limited evidence from individual industries. For example, Anil Kashyap's study of catalogue prices and Alan Blinder's survey of firms were very influential. But there was always the worry that we didn't have enough information from the microeconomic side to justify the assumptions we were making in macro models.

In 2004, Mark Bils and Peter Klenow came out with a landmark study that used data that were much more broad-based than what people had used before. They were looking at the unpublished data underlying the consumer price index, and they showed that there were lots of price changes in the data, many more than monetary economists had traditionally assumed in their models; they found that prices changed roughly every four months on average. And so economists had to ask themselves whether these differences were important for macroeconomics. Were these the types of price changes that monetary economists had in mind?

That fit in well with my interest in microeconomic approaches to understanding price setting. In my early papers with Jón, we showed that a big fraction of the price changes in the Bureau of Labor Statistics data are temporary sales, and that these sales look totally different from the price changes that people were thinking about in stylized macro models: They are much less persistent, with prices often returning to the original price after a short period.

And in more recent work with another macroeconomist, Ben Malin, and two marketing professors, Eric Anderson and Duncan Simester, we show that there are a lot of institutional frictions that imply sales aren't optimally timed in response to things like recessions. In many cases, for example, a retailer's whole plan for sales is decided in advance at the beginning of the year. Finally, there's a lot of heterogeneity in the economy, and the stickier sectors can hold back price responses in the more flexible ones.

All this means that even if we were to see a huge number of price changes in the micro data, the aggregate inflation rate may still be pretty sticky. And if one abstracts from the huge number of sales in retail price data, then prices look a lot less flexible than they first appear.
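
To see how much filtering out sales can matter, here is a minimal sketch in Python with made-up monthly prices; the series and the crude sale filter are purely illustrative, not the BLS data or the filter used in the research:

```python
# Toy illustration: a price series with temporary sales looks very flexible if
# every observed change is counted, but the underlying "regular" price is sticky.
# All numbers are hypothetical.
prices = [2.00, 2.00, 1.50, 2.00, 2.00, 2.00, 1.50, 2.00, 2.20, 2.20, 2.20, 1.80, 2.20]

all_changes = sum(a != b for a, b in zip(prices, prices[1:]))

# Crude sale filter: drop observations that dip below the previous price and
# immediately return to it. (Real sale filters are considerably more careful.)
regular = [p for i, p in enumerate(prices)
           if not (0 < i < len(prices) - 1
                   and p < prices[i - 1]
                   and prices[i + 1] == prices[i - 1])]
regular_changes = sum(a != b for a, b in zip(regular, regular[1:]))

print(f"all price changes:          {all_changes} out of {len(prices) - 1} months")
print(f"regular-price changes only: {regular_changes} out of {len(regular) - 1} months")
```

Counting every observed change makes this price look like it changes more than half the time; once the temporary sales are set aside, the regular price changes only once.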


WEB EXCLUSIVE

EF: What are the challenges of incorporating micro level evidence on price stickiness — for example, the unique behavior of sales and heterogeneity across sectors — in macro models?

Nakamura: A key challenge is to try to figure out what features of the micro data matter for macroeconomic phenomena such as inflation. There is a huge amount of nuance in the behavior of prices at the microeconomic level — so much that I think it is often overwhelming to macroeconomists.

Of course, price setting is at the core of monetary economics, so taking a hard look at pricing behavior makes sense. But an important question is how to distill what we know about pricing from marketing and industrial organization into things that matter for the macroeconomy. An example is how to interpret all the temporary sales in retail price data. Does this mean that prices are very flexible? It turns out sales have quite special characteristics that suggest that they do not contribute much to aggregate price flexibility — for example, they are very transient; they often return to the original price after a sale.


EF: What is the most important takeaway for macroeconomists and policymakers from the evidence on price stickiness?

Nakamura: To me, the key consequence of sticky prices is that demand shocks matter. Demand shocks can come from many places: house prices, fiscal stimulus, animal spirits, and so on. But the key prediction is that prices don't adjust rapidly enough to eliminate the impact of demand shocks.

For example, Atif Mian and Amir Sufi have emphasized that the decline in housing wealth was a very important part of the Great Recession. And if you think about a situation where interest rates have basically been stuck at zero, meaning nominal rates are fixed, what has to happen in efficient models of the economy, like a real business cycle model, is that the real interest rate has to fall to maintain full employment. But that requires this extremely flexible adjustment of prices: Prices would need to jump down and then slowly rise. This would lower real rates by creating inflation. But with sticky prices, prices do not "jump." Instead, prices slowly fall — leading to deflation and an increase in real rates, exactly the opposite of what is supposed to happen.
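
In terms of the textbook Fisher relation (standard notation, not a formula from the interview), the real rate is

$$ r_t = i_t - \mathbb{E}_t[\pi_{t+1}], $$

so with the nominal rate stuck near $i_t = 0$, pushing $r_t$ down requires expected inflation to rise. The flexible-price adjustment described here is a downward jump in the price level followed by rising prices (positive expected inflation); with sticky prices the level instead drifts down slowly, expected inflation turns negative, and $r_t$ rises.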

EF: Yet, after a decade of research on micro price datasets, there is still no consensus on whether the price stickiness we observe at the microeconomic level implies the kind of substantial monetary non-neutralities suggested by macroeconomic evidence. Can further micro research on price rigidities still help us better establish the nature and extent of that link?

Nakamura: I think we have a pretty good sense by now of how often prices change. But there's a lot of evidence from the aggregate data suggesting that prices don't respond fully even when they do change. If the pricing decisions of one firm depend on what other firms do, then even when one firm changes its prices, it might adjust only partway. And then the next firm adjusts only partway, and so on. This goes under the heading of real rigidities, and there are many sources of them. One example is intermediate inputs; if you buy a lot of stuff from other firms, then if they haven't yet raised their prices to you, then you don't want to raise your prices, and so on. Another source is basic competition: If your competitors haven't raised their prices, you might not want to raise your prices. The same thing occurs if some price changes are on autopilot, or if the people changing prices aren't fully responding to macro news — this is the core of the sticky information literature. These knock-on effects mean that inflation can still be "sticky" long after all the prices in the economy have adjusted.
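
One common way to write this kind of real rigidity (generic notation, not a formula from the interview) is a desired price that puts weight on other firms' prices:

$$ p_i^{*} = (1-\zeta)\, mc_i + \zeta\, \bar{p}, \qquad 0 < \zeta < 1, $$

so even a firm that does adjust moves only part of the way toward its own marginal cost $mc_i$ as long as the average price $\bar{p}$ of other firms has not yet moved, and the aggregate price level converges to its new level only gradually.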

Real rigidities are much more complicated to study empirically. You have to ask not only whether the price changed, but whether it responded fully; so you need to have not only the price data, but also to see the shock to form an idea of what the efficient response would be. For that, the difficulty is that you don't often have good cost data. One part of my Ph.D. thesis was on the coffee market. In that case, you see commodity costs of coffee, so you can see both how frequently, say, Folgers changes its prices and how much it responds to commodity costs when it changes its price. The other type of evidence that speaks to this question comes from exchange rate movements. When you have changes in the exchange rate, you have a situation where there's an observable shock to firms' marginal costs, and you can use that to figure out how much prices respond conditional on having adjusted at all. But fundamentally, this is a much more challenging empirical problem.

EF: Much of the "reconsideration of macroeconomics" in the wake of the Great Recession has taken the view that financial markets and financial frictions should be an integral part of any applied macroeconomic model. Does this view necessarily downgrade the importance of price stickiness as an explanation for economic fluctuations and the importance of monetary policy? To what extent do you think price and wage rigidities played a role in the severity of the Great Recession?

Nakamura: I think the Great Recession has actually increased the emphasis in macroeconomics on traditional Keynesian frictions. The shock that led to the Great Recession was probably some combination of financial shocks and housing shocks — but what happened afterward looked very Keynesian. Output and employment fell, as did inflation. And for demand shocks to have a big impact, there have to be some frictions in the adjustment of prices. The models that have been successful in explaining the Great Recession have typically been the ones that have combined nominal frictions with a financial shock of some kind to households or firms.

One can also see the effects of traditional Keynesian factors in other countries. Jón is from Iceland, which experienced a massive exchange rate devaluation during its crisis. Other countries that were part of the euro, such as Spain, did not. I think this probably mattered a lot; if prices and wages were flexible, the distinction between a fixed and flexible exchange rate wouldn't matter. Another example is Detroit. If Detroit had had a flexible exchange rate with the rest of the United States, a devaluation could have lowered the relative wages of autoworkers, which might have been very helpful. Much of what happened during the Great Recession felt like a textbook example of the consequences of Keynesian frictions.

EF: Is the idea that you have to combine financial frictions with price rigidities to get a prolonged macroeconomic effect starting to become the dominant way of thinking about modeling financial frictions?

Nakamura: Yes, I definitely think so. I think it's something that probably has become more salient in the recent period. In response to the large shocks that occurred in the financial crisis, in an efficient model of the world, there would've been much bigger price and wage adjustments and we would have avoided the big and protracted increase in unemployment. It's been a time when even some people within the profession who had a very hardcore skepticism of price and wage adjustment frictions have started to wonder whether they might be important after all.

I didn't come at it with such a strong perspective myself. I was always more of an empiricist. Clearly it's a topic on which macroeconomists in general have very strong views, but I think the recession has caused a lot of people to update their priors a little bit.

EF: Generally speaking, your research has focused on trying to empirically understand the effects of monetary policy and fiscal policy. Can you describe why that's such a hard question and some of the approaches economists have taken?

Nakamura: Sometimes it feels a little scary that we don't know the answers to these basic questions. I think a major reason there's still so much debate about them is that we don't have many experiments in macroeconomics. Fiscal policy and monetary policy don't happen randomly. In principle, you can run a regression of output on government spending to try to figure out the magnitude of the multiplier, the increase in output that would result from an extra dollar of government spending. But you might conclude that the government spending caused the recession even if the causation ran the opposite direction. The reason is that the government typically embarks on stimulus spending when something else is having a negative effect on growth. What you would measure using a simple-minded approach would be the combined effect of the stimulus and the other factors that are causing the recession. That's the basic endogeneity problem, and a similar issue arises with measuring the effects of monetary policy.
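
A toy simulation makes the endogeneity problem concrete; every number below is invented for illustration and is not an estimate of anything:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_multiplier = 1.5

# Latent demand shock: negative draws stand in for recessions.
shock = rng.normal(size=n)

# Counter-cyclical policy: the government spends more when the shock is bad.
g = -0.8 * shock + rng.normal(scale=0.5, size=n)

# Output depends on the shock and on spending, with a true multiplier of 1.5.
y = true_multiplier * g + 2.0 * shock + rng.normal(scale=0.5, size=n)

# A naive regression of output on spending conflates the two channels and can
# even flip the sign, "showing" that spending causes recessions.
naive_slope = np.polyfit(g, y, 1)[0]
print(f"true multiplier: {true_multiplier}, naive OLS estimate: {naive_slope:.2f}")
```

Because spending rises exactly when the unobserved shock is dragging output down, the naive estimate comes out negative even though the true multiplier is 1.5.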

In economics, we have both structural approaches, where we build models using plausible assumptions from microeconomic models, and nonstructural approaches that use various types of natural experiments to try to learn about the effects of policy. My work on price rigidity is mostly an input into the structural approach: You walk into a store, you see that a lot of the prices just aren't changing all the time, and as a consequence, price rigidity seems like a reasonable way to build a structural model of why we see inflation as a whole not responding as it might in frictionless models.

The second approach is to use non-structural methods. In this case, one tries to use natural experiments. In my paper with Jón on fiscal stimulus, we look at aggregate variation in military spending to see how it affects states differently. The basic idea is that there are these long-run fluctuations in aggregate military spending — for example, the Carter-Reagan military buildup. But they affect states very differently; every time the United States goes into a big military buildup, it has a much bigger effect on California than it does on Illinois because California has a lot more military activity.

EF: That study found unusually high multipliers. Is that representative of what might happen at the aggregate level, for example, following a federal fiscal stimulus effort intended to bring the economy out of recession?

Nakamura: We find a multiplier of about 1.5. But that's a relative multiplier; in other words, if California receives $1 more in military spending than Illinois due to an aggregate military buildup, state-level output in California rises by $1.50 more than in Illinois.
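
Schematically, the relative multiplier comes out of a state-panel regression of roughly this form (a stylized rendering in generic notation, not the paper's exact specification):

$$ \frac{Y_{it} - Y_{i,t-2}}{Y_{i,t-2}} \;=\; \alpha_i + \gamma_t + \beta\, \frac{G_{it} - G_{i,t-2}}{Y_{i,t-2}} + \varepsilon_{it}, $$

where the state fixed effects $\alpha_i$ absorb state-specific trends, the time fixed effects $\gamma_t$ absorb everything common to all states in a given period (including monetary policy), and national military buildups interacted with states' differing exposure provide the identifying variation in $G_{it}$; $\beta$ is the relative multiplier of about 1.5.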

You want to think about these estimates as what the multiplier would be if monetary policy were relatively unresponsive. The intuition is that the Fed can't raise interest rates in California relative to Illinois. So our paper doesn't say that multipliers are always high; it says that multipliers can be high when monetary policy is constrained, like at the zero bound.

It's a good estimate for thinking about which kinds of models fit the facts. In models with price rigidities, it's possible under certain circumstances like the zero lower bound to have a big government spending multiplier. On the other hand, in models that don't have these frictions, multipliers are always close to zero.

EF: Another approach would look directly at monetary shocks, meaning changes to the Federal Open Market Committee's monetary policy. How did you try to overcome the question of causation there?

Nakamura: Here we try to use the fact that if there's something going on in the economy, say a big recession, that will already have been priced into financial markets even before the FOMC meeting. So the change you see in interest rate futures in the 30 minutes after an FOMC announcement is a true monetary shock, not a response to macroeconomic events.
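
As a minimal sketch of that calculation in code (the futures quotes below are hypothetical, and actual applications average over several contracts and handle timing within the month more carefully):

```python
def implied_rate(futures_price: float) -> float:
    """Fed funds futures are quoted as 100 minus the expected average funds rate (%)."""
    return 100.0 - futures_price

# Hypothetical quotes shortly before and shortly after an FOMC announcement.
price_before = 99.540
price_after = 99.465

# Anything already known (say, an ongoing recession) is priced in before the
# announcement, so the within-window change isolates the policy surprise.
shock_bp = (implied_rate(price_after) - implied_rate(price_before)) * 100
print(f"monetary policy surprise: {shock_bp:+.1f} basis points")
```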

The intuition is that in a model where monetary policy has no impact, like a real business cycle model, then monetary policy affects nominal interest rates, but all of the impact comes through inflation. There's no impact on real interest rates. But what we find in this paper is that the monetary policy shocks actually have a pretty large and pretty long-lasting impact on not only the nominal interest rate, but also the real interest rate.

So we find quite a bit of evidence for monetary non-neutrality. And to explain that kind of evidence, you need a framework that has price rigidity.


WEB EXCLUSIVE

EF: Are there ways in which you think the measurement of price stickiness could help explain why inflation didn't fall in the Great Recession as much as the traditional Phillips curve would have predicted?

Nakamura: My sense is that some of the debate on "the missing disinflation" is a bit misguided. A lot of the evidence about the Phillips curve being less flat actually comes from data from the late 1970s and early 1980s, the Great Inflation and the Volcker disinflation. But you have to keep in mind that that was the time period when people's expectations about inflation were changing dramatically. There are two parts to the Phillips curve: the slope of it, which has to do with how much inflation responds to output gaps, and a term that has to do with inflation expectations. A lot of what we saw in the late 1970s and early 1980s may have had more to do with changing expectations about long-run inflation. I don't think we can extrapolate from these findings to how we would expect inflation to respond today.
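
In the textbook New Keynesian formulation (standard notation, not from the interview), the two parts are visible directly:

$$ \pi_t = \beta\, \mathbb{E}_t[\pi_{t+1}] + \kappa\, x_t, $$

where $\kappa$ is the slope governing how strongly inflation responds to the output gap $x_t$ and the expectations term $\mathbb{E}_t[\pi_{t+1}]$ shifts with beliefs about future inflation. In a period when that expectations term was moving a lot, as in the late 1970s and early 1980s, inflation can move sharply even if $\kappa$ is small.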

Estimates of the Phillips curve using recent data actually suggest the Phillips curve is very flat. For example, in the research on market reactions to FOMC meetings with Jón that I mentioned earlier, we find that the response of expected inflation is very small even to fairly large movements in expected real rates. There are estimates of DSGE models using recent data that also find very flat Phillips curves, and there are nonstructural approaches as well. From that perspective, I wouldn't have expected a big inflation response to this recent recession.


EF: Do you think there really are such things as menu costs — meaning a direct cost to changing prices — given innovations such as bar codes? Or are "pure" fixed costs of price changes in models always really a stand-in for something else?

Nakamura: My sense is that literal menu costs are not very important. If managers wanted to have supermarkets where all the prices were digital, for example, it would be possible. Coca-Cola at one point tried to have a vending machine that had prices rise in hot weather and people got very irritated. So I think the right theory has to somehow take this into consideration. It's interesting to think about why Uber has been able to have surge pricing and whether other sectors of the economy might be able to do that too. But when we look at long-term data on price rigidity, one of the things we just don't see is prices getting more flexible over time. It actually looks like prices are getting stickier, because the inflation rate is falling.

So I think the Calvo and menu cost models are simple empirical models for complicated processes that we don't fully understand. The question is, why does price rigidity arise? In surveys of managers that ask why they don't change their prices, they almost always say something about not wanting to upset their customers, this idea of implicit or explicit contracts with them.

I have another paper with Jón on customer markets that tries to provide a model of this. Say you go to Starbucks every day, then in a sense you become "addicted." So Starbucks has an opportunity to price gouge. But if you know that Starbucks is going to try to exploit you once you become addicted, then you may try to avoid going there in the first place. So it can be in the interest of both the firm and the customer for the firm to "commit to a sticky price." This theory can help explain some of the patterns we see in the data — the fact that you see regular prices and downward deviations (sales) but basically never upward deviations (reverse sales).

A similar theory applies to wages. You hire a cleaning person, and in principle, you could index their wages to the CPI. But it's not a simple thing for everybody in the world to pay attention to the CPI, so offering your cleaning person a wage indexed to the CPI probably wouldn't be practical. A fixed salary is just a lot easier to understand. So maybe the right way of thinking about price rigidity, at a deep level, is some combination of customer markets and information frictions. But I think this is an area where measurement is ahead of theory, and the ideal model has yet to be written.

EF: Many researchers have noted that China's official statistics on inflation suggest lower inflation rates than might have been expected given the country's very rapid growth. You found something very surprising in a paper with Jón and Miao Liu. Can you describe that work?

Nakamura: There's a lot of skepticism about Chinese official statistics, and we wanted to think about alternative ways of estimating Chinese inflation. We use Chinese consumption data to estimate Engel curves, which give you a relationship between people's income and the fraction of their income that they spend on luxuries versus necessities. All else equal, if Chinese people are spending a lot more of their total food budget on luxuries such as fish, that could tell us that their consumption is growing very rapidly. Holding nominal quantities fixed, higher growth is associated with lower inflation, so we can invert estimates of consumption growth to get the bias in the inflation rate.
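
A stylized version of the Engel-curve calculation (generic notation, not the paper's exact specification) regresses the luxury budget share on deflated spending:

$$ w_{it} = \phi + \beta\,\bigl(\ln x_{it} - \ln P_t\bigr) + \varepsilon_{it}, $$

where $w_{it}$ is household $i$'s budget share on luxuries and $\ln x_{it} - \ln P_t$ is log nominal spending deflated by the true price index. If households' luxury shares are systematically higher than officially deflated spending would predict, true real consumption must be higher and official inflation overstated; if the shares are lower, the official figures understate inflation. The wedge between the implied and official deflators is the estimated bias.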

This approach has been applied to many countries, including the United States, and the usual finding is that the inflation estimate you get is lower than official statistics. This is usually attributed to the idea that official statistics don't accurately account for the role of new goods and therefore overstate inflation.

But for China we found an interesting pattern. We did find lower estimates of inflation for the late 1990s. But for the last five or 10 years, we find the opposite: Official inflation was understating true inflation, and official estimates of consumption growth were overstating consumption growth. Our estimates suggest that the official statistics are a smoothed version of reality.

There are a couple of reasons why this could be. One possibility is, of course, tampering. Whenever we present this work to an audience of Chinese economists, they are far more skeptical of the Chinese data than we are. But a second possible interpretation is that it's just very difficult to measure inflation in a country like China where things are changing so quickly.

One possible explanation actually comes from another of our papers on a phenomenon called "product replacement bias." This arises from the fact that when the BLS constructs official inflation statistics, the approach is to find a product, look at its price, and come back the next month and look at the same product. But what if a lot of the price changes happen at the time when new goods are introduced? Then inflation can look too smooth. This may be part of what is going on in China.
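
A deliberately simple numeric example (all prices hypothetical) of how a matched-model index can miss price increases that happen at product turnover:

```python
# An old model sells at a constant price and is then replaced by a new model
# that is essentially equivalent but costs more. All prices are hypothetical.
old_model = [10.00, 10.00, 10.00]   # months 1-3, then discontinued
new_model = [11.00, 11.00]          # months 4-5

# Matched-model inflation: compare each product only with itself month over
# month, so the jump at the replacement never enters the index.
matched = sum(b / a - 1.0
              for series in (old_model, new_model)
              for a, b in zip(series, series[1:]))

# If the new model is a close substitute, the "true" increase is the jump
# from the old price to the new price.
true_increase = new_model[0] / old_model[-1] - 1.0

print(f"matched-model cumulative inflation: {matched:.1%}")
print(f"true cumulative increase:           {true_increase:.1%}")
```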

EF: Most economists just consume statistics, but you've really focused on these novel measurement methods. Why has measurement been the driving focus of your research?

Nakamura: I think it goes back a lot to my parents, both empirical economists. I always thought I wanted to work with data in some form, so that gave me somewhat of a unique perspective on macro, where a big part of the field is theoretical. Beyond that, a friend of the family growing up was Erwin Diewert, who is a towering giant in the field of measurement. Because of that connection, and the fact that I grew up in Vancouver and he's at the University of British Columbia, I was able to take classes on national accounts measurement when I was in high school and as an undergraduate. I was lucky to be exposed to those ideas because they are not taught much in graduate programs in economics anymore. Even though as macroeconomists we use these statistics, we don't always know very much about how they're constructed.


WEB EXCLUSIVE

EF: Has measurement been crowded out by theory in graduate training?

Nakamura: It's a good question. I've had this conversation with several people about why measurement is not as mainstream as it used to be. There used to be more people at top U.S. universities like Zvi Griliches and Dale Jorgenson, whose work is central in this literature. But for whatever reason, it has received less emphasis over time. Maybe there was an idea that we had figured it out, but I think there is still a lot of work to be done on measurement.

Beyond academia, there are a lot of potential benefits to greater connections between the community of academic macroeconomists and the measurement community. In Bils and Klenow's paper, for example, one of the things they were doing was just leveraging the micro dataset that was being collected by a national statistical agency, realizing that it could be used to answer this big question in macroeconomics.

In the current political climate there's been a major attack, unfortunately, on the funding for statistical agencies. The budgets of the BLS and the Bureau of Economic Analysis are incredibly tiny for what they do, and there's so much that relies on the statistics they produce. There's this perception that there are lots of private sector firms that can do this, but that's not the case. The truth is that private sector organizations are usually collecting data on more selective samples and benchmarking them to the BLS data. So they can't do what they do without the government statistics.


EF: Do you have additional work planned in the field of measurement?

Nakamura: One of the things I've been doing since grad school is working on recovering data underlying the CPI from the late 1970s and early 1980s. This is an exciting period for analyzing price dynamics since it incorporates the U.S. Great Inflation and the Volcker disinflation — the only period in recent U.S. history when inflation was really high. In the course of our other research, Jón and I figured out that there were ancient microfilm cartridges at the BLS from the 1970s in old filing cabinets. The last microfilm readers that could read them had literally broken, and they couldn't be read by any modern readers. Moreover, they couldn't be taken out of the BLS because they're confidential.

So we decided to try to recover these microfilm cartridges. We had an excellent grad student, who became our co-author, who learned a lot about microfilm cartridge readers and found some that could be retrofitted to read these old cartridges. After we scanned in the data, we had to use an optical character recognition program to convert it into machine-readable form. That was very tricky. The first quote we got to do this was over a million dollars, but our grad student ultimately found a company that would do it for one-hundredth of the cost. This has been quite an odyssey of a project, and there were many times when I thought we might never pull it off.

We are now finally getting to analyze the data. We are trying to get a sense of the costs of inflation and also how price flexibility has changed over time. Most central banks think about the costs of inflation in terms of price dispersion. The idea is that inflation causes relative prices to get messed up, so they don't give the right price signals in the economy. But we actually have very little empirical evidence for this mechanism.
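
In the canonical sticky-price models she is referring to, that mechanism shows up as a price-dispersion term (standard New Keynesian notation, not from the interview):

$$ \Delta_t \;\equiv\; \int_0^1 \left(\frac{P_t(i)}{P_t}\right)^{-\epsilon} di \;\ge\; 1, $$

which acts like a wedge lowering how much output a given amount of labor produces. With staggered price setting, higher inflation raises $\Delta_t$, which is why these models imply large welfare costs of inflation and argue for a low inflation target.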

What we find in our data is that despite the high inflation of the late 1970s and early 1980s, there's really very little evidence that price dispersion increased. This feeds into the recent debate about the optimal inflation rate. People such as Olivier Blanchard have argued that central banks should target higher inflation rates so as to avoid hitting the zero lower bound on nominal interest rates. One argument for low inflation rates is that in the canonical models used by central banks, the costs of inflation associated with price dispersion are huge. But our analysis suggests that the models don't do very well empirically along this dimension. Of course, price dispersion probably isn't the only cost of inflation, even though it plays a central role in monetary models. But our results do push in the direction of suggesting we should have a higher inflation target.

EF: You've mentioned several economists who have influenced you, including your parents. Who else would you list as your primary influences?

Nakamura: My professors at Harvard in grad school had a big influence on me. One great thing about Harvard was the focus on empirical methods. Two people with very different perspectives on this who influenced me were Robert Barro and Ariel Pakes. I always saw it as an achievement that I managed to have them both on my thesis committee because they come from such different intellectual backgrounds — so I think they rarely found themselves in the same seminar, let alone on a thesis committee. Both were very interested in empirical methods but in very different ways: Robert has collected many large datasets over his career, and Ariel has mainly been interested in estimating structural models of industry structure and pricing. Seeing these different perspectives was an amazing thing that I got out of my experience in grad school.


Emi Nakamura

Present Positions

Associate Professor of Business and Economics at Columbia University and Visiting Professor at the Massachusetts Institute of Technology

Education

Ph.D. (2007), Harvard University; A.M. (2004), Harvard University; A.B. (2001), Princeton University

Selected Publications

"Are Chinese Growth and Inflation Too Smooth? Evidence from Engel Curves," American Economic Journal: Macroeconomics, forthcoming (with Jón Steinsson and Miao Liu); "Fiscal Stimulus in a Monetary Union: Evidence from U.S. Regions," American Economic Review, 2014 (with Jón Steinsson); "Price Setting in Forward-Looking Customer Markets," Journal of Monetary Economics, 2011 (with Jón Steinsson); "Monetary Non-Neutrality in a Multi-Sector Menu Cost Model," Quarterly Journal of Economics, 2010 (with Jón Steinsson); "Five Facts about Prices: A Reevaluation of Menu Cost Models," Quarterly Journal of Economics, 2008 (with Jón Steinsson).
