Last week I attended a conference at Hoover, "Frameworks for Central Banking in the Next Century." It was very interesting for its mix of academics, Fed people, and media. The Wall Street Journal had an interesting article Monday morning, "BOE's Carney may need to play a fourth card" on BOE governor Mark Carney's struggles with rules. I am left with more questions than answers, which is good.
Rules
What do we really mean by "rules?" The clearest version would be mechanical: the Federal Funds rate shall be \[ i_t = 2\% + 1.5 \times (\pi_t - 2\%) + 0.5 \times (y_t-y^*_t ) \] say, with \(i\) = interest rate, \(\pi\) = inflation, and \(y - y^*\) = output gap. The numbers come in, and the Fed mechanically borrows and lends at that rate. This is something like an idealized gold standard.
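Read literally, the mechanical version is just arithmetic. A toy sketch (nothing the Fed actually runs; the function name and decimal units are my own):

```python
def taylor_rate(inflation, output_gap):
    """Mechanical Taylor rule: i = 2% + 1.5*(pi - 2%) + 0.5*(y - y*).
    Inputs and output are decimal fractions (0.02 = 2%)."""
    return 0.02 + 1.5 * (inflation - 0.02) + 0.5 * output_gap

# At 3% inflation and a 1% output gap: 2% + 1.5*1% + 0.5*1% = 4%.
rate = taylor_rate(0.03, 0.01)
```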
That is not what anybody has in mind, obviously. So what do we really mean by "rules?"
One of the biggest problems is what goes into the output gap part. If the Fed is going to respond to economic conditions, how do we measure those conditions? Unemployment? The Fed got in a bit of a mess first saying 6.5%, then rethinking whether maybe employment vs. unemployment matters, and then worrying about the long-term unemployed. Once you get to "labor market conditions," the line between rule, judgment, and discretion gets muddy. Output gap? Then relative to whose "potential?" Just how much of current slow growth is "supply" vs. "demand" (and thus possibly fixable by monetary policy) is at the center of the current policy debate. It's easy to say "we're really following a rule, we just think the output gap is bigger than you think." Athanasios Orphanides famously pointed out that contemporary views of the output gap in the 1970s justified a lot of loose policy that to later eyes looked like violations of a rule. I don't mean to say it's impossible, or that many people haven't thought long and hard about it, just to point out that this is a tough question.
Moreover, I think even the ardent rules supporters have in mind some flexibility to deal with temporary exigencies. The rule is sort of a long-run commitment, not something mechanical. After all, much of the point is to "anchor long run expectations." But my diet also seems to have a daily temporary exigency, and once again rule vs. discretion gets muddy.
David Papell's presentation and Monika Piazzesi's comments were very thought-provoking in this regard. David set out to measure the extent of rules-based vs. discretionary policy. This is deep. Fundamentally, if we can't measure something, it becomes a much muddier concept. I don't think David succeeded, but he did the obvious first step and leaves me with a much clearer view of the problem.
David estimated rules with OLS regressions, roughly \[i_t = r^* + \phi_{\pi} (\pi_t - \pi^*) + \phi_y (y_t - y^*_t) + \varepsilon_t\] He sensibly measured the amount of rule-following vs. discretion by the volatility of the error term, and correlated that volatility with economic performance to try to measure the contribution of rules-based policy to economic stability.
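A minimal sketch of this measurement strategy, with invented coefficients and simulated data rather than David's actual series: a Fed that follows the rule up to a "discretion" shock, an OLS regression, and residual volatility as the rules-vs.-discretion measure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
r_star, phi_pi, phi_y = 0.02, 1.5, 0.5

pi_gap = 0.01 * rng.standard_normal(n)   # inflation minus target
y_gap = 0.02 * rng.standard_normal(n)    # output gap
eps = 0.002 * rng.standard_normal(n)     # "discretion": deviations from the rule
i = r_star + phi_pi * pi_gap + phi_y * y_gap + eps

# OLS regression of the funds rate on the gaps
X = np.column_stack([np.ones(n), pi_gap, y_gap])
beta, *_ = np.linalg.lstsq(X, i, rcond=None)
resid = i - X @ beta
rule_following = resid.std()  # small residual volatility = mostly rules-based
```

When the simple rule really generates the data, the regression recovers the coefficients and the residual volatility measures discretion. The objections that follow are about what happens when it doesn't.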
But the Fed can surely answer, "We're following a rule, but you're using the wrong measure of the output gap. Our measure becomes your error term." The Fed can also answer, "That's a ridiculously simplified textbook rule. We follow a rule, but it includes a lot of other right-hand variables like financial stability, long-term unemployment, housing bubbles, and 10 different measures of output gaps. Variation in those omitted right-hand variables is showing up in your error term, not deviations from a rule."
Those replies would also answer the economic performance correlation. The Fed could go on and say "in times of high economic instability, the other components of our rule move around a lot, so there is more omitted-variable volatility. Economic volatility causes estimated Taylor Rule residuals, not the other way around."
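The omitted-variable reply is easy to demonstrate in the same toy setting. Here the simulated Fed follows its (richer) rule exactly, with zero discretion, yet an econometrician who omits the financial-stress term finds sizable "discretionary" residuals. The stress variable and its coefficient are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
pi_gap = 0.01 * rng.standard_normal(n)
y_gap = 0.02 * rng.standard_normal(n)
stress = 0.01 * rng.standard_normal(n)   # financial-stability term the Fed responds to

# The Fed follows this rule exactly -- zero true discretion.
i = 0.02 + 1.5 * pi_gap + 0.5 * y_gap + 1.0 * stress

# The econometrician omits the stress term.
X = np.column_stack([np.ones(n), pi_gap, y_gap])
beta, *_ = np.linalg.lstsq(X, i, rcond=None)
resid = i - X @ beta
# resid.std() is about 0.01: the omitted term shows up as apparent "discretion".
```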
More deeply, Mike Woodford's book recommends that the Fed respond directly to shocks to other parts of the economy, or shocks to the "natural rate," and then add Taylor rule responses, \[i_t = r_t^* + \phi_{\pi} (\pi_t - \pi^*) + \phi_y (y_t - y^*) \] (There is now a t subscript on \(r^*\)). So optimal rule-based policy has this character of apparent "discretionary" residuals from regressions.
So really where is the line between rule, a guideline (Captain Barbossa), a general indication of intent, "forward guidance," communication, principled discretion and willy-nilly discretion? Where is the line between law, commitment, promise, pie-crust promise (Mary Poppins, made to be broken), and the golden-retriever approach to life? Is the issue about rules vs. discretion, or is it just about simple and transparent rules vs. complex and obscure rules; about communication rather than commitment?
At a deep level, we social scientists think of the Fed, like every other actor, as always following "rules," some function from environment to action that describes behavior. Optimization always results in such a rule. Genuine randomness (the quantum mechanics of behavior?) isn't really part of the framework; unpredictable behavior is the result of simplified models and agents' better information, not genuine randomness. (There is an exception for mixed strategies of course, but I don't think that's relevant here.) So is there anything but rules-based policy? This question has long bugged me in interpreting impulse-response functions. The Fed never says "and we added 25 basis points for the fun of it." They always describe all actions as reactions to the environment -- a rule.
Framed that way, I think one answer is before us. If we go back to Kydland and Prescott rules vs. discretion, or Odysseus, the key to a "rule" is precommitment. You're following a rule (and a rule is beneficial) when you commit to an action ex-ante that you would prefer not to take ex-post, and that commitment has benefits to your overall objective.
"Forward guidance" or "communication" say "here is what we think we will feel like doing in the future." (But we retain the right to change our mind.) A rule says "here is what we will do in the future," maybe describing a state-contingent set of actions, "even if we will not feel like it at the time." ("And here is a set of costs we impose on ourselves so that we will choose to follow through" helps a lot to make it credible.)
It's pretty clear that the Fed has been doing the former, not the latter. The WSJ article on the BOE makes a similar point. Three rules in a year is not a lot of commitment.
This difference is where my scepticism of stimulative promises came from. If the Fed promised to keep rates low in the future, in order to stimulate today, that promise can only have effect if people imagine the Fed chair going to Congress when inflation has hit 5% and saying "no, I promised to keep rates low in order to boost the economy in the recession, and now I have to do that though we all know it's time to raise rates." Nobody believes the Fed chair will do such a thing. The "guidance" is a "forecast of how the Fed will feel," not a commitment, not a promise with a self-imposed cost, some way of binding itself to the mast.
The intricate legal structure surrounding the Fed, and many of its traditions, do constitute a lot of "rules," by the way. The Fed might dearly like to drop money from helicopters, buy Treasury debt directly, or lend directly to under-"stimulated" businesses. Legal restrictions against such actions are regretted ex-post, and admired as producing overall better outcomes. At best, forward guidance amounts to a set of promises that the Fed will feel it somewhat costly to renege on.
Now, I think we are ready to start thinking about measurement. I don't think that can be a purely empirical exercise. We need to write down some sort of objective, and find promised behavior ex ante that is regretted ex post, but nonetheless beneficial overall. I'm not sure how to do it, but at least the concept has some potentially measurable content.
Models
My second thought prompted by the conference overall, and made concrete by thinking about David's paper is: What is the model of the economy in which the rule is supposed to work?
In David's regression, we can ask the question: Embed the rule in a model. Suppose that the Fed follows the rule perfectly, and we generate artificial time series from the model, and run the regression. Does the regression reveal the Taylor rule that the Fed is following?
In the new-Keynesian model, the answer is no. Bob King pointed out long ago that we can write the Taylor rule in such models as \[i_t = i_t^* + \phi_{\pi} (\pi_t - \pi_t^*) + \phi_y (y_t - y^*_t ) \] where we now interpret the * variables as equilibrium values, and the non-starred values as deviations from equilibrium. When the Fed follows such a rule, in that model, we observe \( i_t = i_t^* \), \( \pi_t = \pi^*_t \), and \( y_t = y_t^* \). There is no variation in the right-hand variables on which to estimate the Taylor rule. The Taylor rule is not identified when placed in a new-Keynesian model. In a new-Keynesian model, the "Taylor rule" becomes the "Taylor Principle," a set of off-equilibrium threats not seen in equilibrium. The Fed introduces instability to gain determinacy, rather than introducing stability as it does in old-Keynesian models. (This is a not so subtle plug for "Determinacy and Identification With Taylor Rules.")
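The identification failure is stark enough to show in a few lines. This is a schematic sketch, not a full new-Keynesian model: if the rule holds exactly and the economy sits at equilibrium, the observed deviations are identically zero, so the regressor matrix has no variation beyond the constant and the response coefficients cannot be estimated.

```python
import numpy as np

n = 200
# In the new-Keynesian equilibrium we observe only the starred paths:
# i_t = i_t^*, pi_t = pi_t^*, y_t = y_t^*, so the deviations are all zero.
pi_dev = np.zeros(n)   # pi_t - pi_t^*
y_dev = np.zeros(n)    # y_t - y_t^*

X = np.column_stack([np.ones(n), pi_dev, y_dev])
# Only the constant column has any variation: the matrix is rank 1,
# so the off-equilibrium responses phi_pi and phi_y are not identified.
rank = np.linalg.matrix_rank(X)
```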
More generally, the point of monetary policy is to stabilize output and inflation, so simple regressions of interest rates on output and inflation no more measure the policy rule than simple regressions of inflation and output on interest rates measure the effect of monetary policy. This is a point James Tobin made about 50 years ago (post hoc ergo propter hoc). Chris Sims got a Nobel Prize for developing VARs to address the problem.
Most simply, monetary policy shocks affect output and inflation, so the right hand variable is correlated with the error term. The new-Keynesian model is an extreme case of this behavior, in which the right hand variable and error terms are perfectly correlated.
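A toy version of this simultaneity problem, with made-up parameters: suppose the Fed sets \(i_t = \phi \pi_t + \varepsilon_t\) with \(\phi = 1.5\), but a contractionary shock \(\varepsilon_t\) also lowers inflation. The regressor is then correlated with the error, and the OLS slope converges to 1.0, not 1.5.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
phi = 1.5
eps = rng.standard_normal(n)   # monetary policy shock
u = rng.standard_normal(n)     # other inflation shocks
pi = -1.0 * eps + u            # a contractionary shock lowers inflation
i = phi * pi + eps             # the Fed's rule plus the shock

# OLS slope of i on pi: cov(i, pi) / var(pi)
phi_hat = np.cov(i, pi)[0, 1] / np.var(pi, ddof=1)
# Population value: phi - var(eps)/var(pi) = 1.5 - 1/2 = 1.0, not 1.5.
```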
These questions were not really on anyone's mind. They are hard questions, they are old questions, and they don't have easy answers.
The larger question is, what model of the economy do policy people use to think about how monetary policy affects the economy? The clear answer at this conference is, some unwritten mixture of old Keynesianism and old Monetarism. Old Keynesianism: higher rates reduce "demand," which reduces output, which through a Phillips curve reduces inflation. Old Monetarism: higher interest rates reduce some quantity of money, which works its way through to prices. Neither can be written down or spoken aloud without provoking chuckles. But we had a whole conference on "rules" without an explicit mention of "transmission mechanism" (i.e. "model"), and surely the verbal reasoning conformed more to those 40-year-old stories than anything written since.
That wide gulf is worth pondering from both sides.
(There were a lot of really interesting papers and discussions. I especially recommend Marvin Goodfriend's paper, which I'll try to blog at some point in the future.)