After my unit roots redux post, a few people have asked for a nontechnical explanation of what this is all about.
Suppose there is an unexpected movement in any of the data we look at -- inflation, unemployment, GDP, prices, etc. Now, how does this "shock" affect our best estimate of where this variable will be in the future? The graph shows three possibilities.
First, green, or "stationary." There may be some short-lived dynamics, the little hump shape I drew here. Then, given enough time, the variable will return to where we thought it was going all along. For unemployment, suppose your best guess of unemployment in 2050 was 5%. Then you see an unexpected 1% upward spike in today's unemployment. Ouch, that means we're going back to a recession. But perhaps this news does not change your view of 2050 unemployment at all.
Second, blue or "pure random walk." That's more plausible (though no longer thought to be true) of stock prices. If the price goes up unexpectedly, your expectation of where the (log) price will be in the future goes up one-for-one, for all time.
Third, black, "unit root." This option recognizes the possibility that a shock may give rise to transitory dynamics, and may come back towards, but not all the way to, your previous estimate. As you can see, the "unit root" is the same as a combination of a stationary component and a bit of a random walk. Perhaps seeing unemployment rise 1%, you think most of it will work itself out, but that even in the long run labor markets will be sticky and we'll never quite get back.
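A minimal sketch of the three cases in code (pure Python; the persistence and permanent-share numbers are invented for illustration, not taken from the graph): the response of each process, k periods out, to a unit shock today. The stationary series forgets the shock, the random walk remembers all of it, and the unit-root series remembers a fraction.

```python
# Impulse responses to a unit shock, k periods out.
# rho and lam are illustrative, made-up parameters.
rho = 0.8   # persistence of the stationary (transitory) part
lam = 0.3   # fraction of the shock that is permanent

def irf_stationary(k):
    return rho ** k              # green: decays back to zero

def irf_random_walk(k):
    return 1.0                   # blue: one-for-one, forever

def irf_unit_root(k):
    # black: permanent piece plus decaying transitory piece
    return lam + (1 - lam) * irf_stationary(k)

for k in (0, 1, 5, 20, 100):
    print(k, irf_stationary(k), irf_random_walk(k), irf_unit_root(k))
```

The unit-root response is literally the sum of a bit of random walk (lam) and a stationary part, which is the decomposition described above.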
The "unit root" is most plausible and verified in the data for log GDP. Recessions and expansions have a lot of transitory component that will come back. But there are permanent movements too. Unemployment, being a ratio, strikes me as one that eventually must come back. But it can take a longer time than we usually think, which is interesting.
This is very simplified. A few of the issues:
For GDP the question is whether it will come back to a linear trend extrapolated from past data, not back to a level as I have shown.
Most of the issue is how standard statistical procedures work in these circumstances.
As you can see from the graph, the pure question of whether the series will come back over an infinite time period is not really knowable. It could be that the series will come back eventually, but take a very long time. It could be stationary, plus a second very slow-moving stationary component. This is a statistical problem but not really an economic problem. The appearance of unit roots is economically interesting, as it shows a lot of "low frequency" movement: series that are coming back slowly, even if they do come back eventually. The economics of "slumps" and (we hope, someday) "booms" is hot on the agenda, and this is one indication of that fact.
This is all much more interesting if you look at multiple series together. For the canonical example, if you just look at stock prices, they are very, very close to a random walk. A price rise or decline is permanent. However, if you see stock prices rise relative to dividends, that's almost entirely stationary. GDP and consumption have a similar relationship. As in the latest recession, if GDP declines with a big consumption decline, that looks pretty darn permanent. GDP declining while people are still consuming is much more likely to go away.
I hope this helps.
Monday, April 27, 2015
Friday, April 24, 2015
Unit roots, redux
Arnold Kling's askblog and Roger Farmer have a little exchange on GDP and unit roots. My two cents here.
I did a lot of work on this topic a long time ago, in "How Big is the Random Walk in GNP?" (the first one), "Permanent and Transitory Components of GNP and Stock Prices" (the last, and I think best, one), "Multivariate estimates" with Argia Sbordone, and "A critique of the application of unit root tests," particularly appropriate to Roger's battery of tests.
The conclusions, which I still think hold up today:
Log GDP has both random walk and stationary components. Consumption is a pretty good indicator of the random walk component. This is also what the standard stochastic growth model predicts: a random walk technology shock induces a random walk component in output but there are transitory dynamics around that value.
A linear trend in GDP is only visible ex-post, like a "bull" or "bear" market. It's not "wrong" to detrend GDP, but it is wrong to forecast that GDP will return to the linear trend or to take too seriously correlations of linearly detrended series, as Arnold mentions. Treating macro series as cointegrated with one common trend is a better idea.
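A toy simulation of the cointegration idea (pure Python; all parameters are made up): give "GDP" and "consumption" one common random-walk trend. Each level series wanders without bound, but their difference stays bounded, which is what cointegration with one common trend delivers.

```python
import random

random.seed(0)

rho = 0.9          # illustrative persistence of the transitory part
trend, ar = 0.0, 0.0
gdp, cons = [], []
for _ in range(5000):
    trend += random.gauss(0, 1.0)               # common permanent shock
    ar = rho * ar + random.gauss(0, 1.0)        # transitory GDP shock
    gdp.append(trend + ar)                      # GDP = trend + transitory
    cons.append(trend + random.gauss(0, 0.2))   # consumption tracks the trend

def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

gap = [g - c for g, c in zip(gdp, cons)]
# GDP's level wanders (large sample variance); the gap is stationary (small).
print(variance(gdp), variance(gap))
```

Detrending GDP alone gets the trend wrong ex ante; the GDP-consumption gap is stationary by construction here, which is the sense in which consumption reveals the random-walk component.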
Log stock prices have random walk and stationary components. Dividends are a pretty good indicator of the random walk component. (Most recently, here.)
Arnold asks "In stock market returns, econometricians have been able to identify long-term mean reversion even though the short run is a random walk. Can something similar be done with GDP data?" Answer: Yes, and Permanent and Transitory Components is it.
Both Arnold and Roger claim that unemployment has a unit root. Guys, you must be kidding. Actually, this makes a great test case for my point in "A critique": that it is a bad idea to blindly run unit root tests and then impose that structure.
A unit root means a random walk component. A random walk will eventually pass any upper and lower limit. Look at it. That's as stationary a series as you're going to find in economics. ("Look at the plot" and "think about the units" are the Cochrane unit root tests.)
Yes, unemployment, like other stationary ratios in macro (consumption/GDP, hours/day, etc.), has important and frequently overlooked low-frequency movements. But such series are far from random walks, and unemployment in particular has a very large transitory component at business cycle frequencies. When unemployment is above 8%, it is a good bet that it will decline over the next 5 years.
If you apply unit root tests to an hour of second-by-second temperature data from 9 to 10 AM, you will think it has both a linear trend and a unit root. Millisecond data will not help you to detect climate change. That's why unit root tests are a problem. You have to think, and consider the span of data you have and the frequency of mean reversion that makes economic sense in your data.
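A sketch of that trap (pure Python, made-up numbers): simulate a series that is stationary but mean-reverts far more slowly than the sample is long, an AR(1) with coefficient 0.999 over 3,600 "seconds," and estimate its persistence by OLS. The estimate sits essentially at one, so a mechanical test will see a unit root even though the process is stationary over a long enough span.

```python
import random

random.seed(1)

rho_true = 0.999   # stationary, but mean reversion is invisible at this span
x = [0.0]
for _ in range(3600):                    # "an hour of second-by-second data"
    x.append(rho_true * x[-1] + random.gauss(0, 1.0))

# OLS estimate of the AR(1) coefficient on the demeaned series
m = sum(x) / len(x)
d = [v - m for v in x]
num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
den = sum(d[t - 1] ** 2 for t in range(1, len(d)))
rho_hat = num / den
print(rho_hat)   # indistinguishable from 1 at this sample span
```

More data points at a higher frequency do not help; only a longer span, long relative to the speed of mean reversion, can tell the cases apart.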
The tests are about infinite-horizon behavior, which you can never determine from a finite span of data. However, they can alert you to low-frequency movement in your data, which can make ordinary distribution theory a bad guide. So can looking at a plot.
As far as I can tell, "Potential GDP" is equivalent to a two-sided filter. It looks great ex-post. None of this is inconsistent with Arnold's view that standard calculations of potential GDP gaps do little to forecast GDP growth, especially in real time.
Wednesday, April 22, 2015
The right to herd
Just when you thought financial regulation couldn't get more expansive and incoherent, our Justice Department comes in to defend morons' right to herd.
As explained in the Wall Street Journal at least, Mr. Navinder Singh Sarao is now under arrest, fighting extradition to the US, and his business ruined, for "spoofing" during the flash crash.
What is that? The Journal's beautiful graph at left explains.
The obvious question: Who are these traders who respond to spoofing orders by placing their own orders? Why is it a crucial goal of law and public policy to prevent Mr. Sarao from plucking their pockets? Is "herding trader" or "momentum trader" or "badly programmed high-speed trading program" or just simple "moron in the market" now a protected minority?
Why is Mr. Sarao being prosecuted and not all the people who wrote badly programmed algorithms that were so easily spoofed? If this caused the flash crash (how, not explained in the article) are they not equally at fault?
I don't mean by this a defense of the crazy stuff going on in high speed trading. As explained here, I think one-second batch auctions are a much better market structure. But the whole high speed trading thing is largely a response to SEC regulations in the first place: the order routing regulation, discrete tick size regulation, and strict time precedence regulation. A fact which will probably not enter into Mr. Sarao's trial (he doesn't seem to have billions for a settlement) and will give him little comfort in jail.
And maybe, just maybe, there is something more coherent here than the Journal lets on. I'll keep reading, hoping to find it, and welcome comments from anyone who can.
A larger thought: do we still really want to rely on regulators to spot all the problems of finance and keep us safe from more crashes?
Update: Craig Pirrong's excellent commentary here, via a good FT Alphaville post. Great quote:
The complaint alleges that Sarao employed the layering strategy about 250 days, meaning that he caused 250 out of the last one flash crashes. [my emphasis] I can see the defense strategy. When the government expert is on the stand, the defense will go through every day. "You claim Sarao used layering on this day, correct?" "Yes." "There was no Flash Crash on that day, was there?" "No." Repeating this 250 times will make the causal connection between his trading and the Flash Crash seem very problematic, at best.

Update 2: Reading various commentaries that I can't find to cite any more, I realize that "front running," more than "herding," is the protected class. You "spoof" by putting in a bunch of orders just outside the current spread. The algorithms that respond to that think this behavior means some big orders are coming, so they try to front-run those by buying. They cross the spread to take the small order you put on the other side. Or so the story goes. In any case, viewed as spoofers vs. front-runners, it's harder still to have sympathy for the latter.
Update 3: Good Bloomberg View coverage from Matt Levine and John Arnold, the source of the above front-running observation.
Monday, April 20, 2015
Consumption-based model and value premium
The consumption based model is not as bad as you think. (This is a problem set for my online PhD class, and I thought the result would be interesting to blog readers.)
I use 4th quarter to 4th quarter nondurable + services consumption, and corresponding annual returns on 10 portfolios sorted on book to market and the three Fama-French factors. (Ken French's website)
The graph is average excess returns plotted against the covariance of excess returns with consumption growth. (The graph is a distillation of Jagannathan and Wang's paper; they deserve the credit for this observation. The lines are OLS cross-sectional regressions with and without a free intercept.)
By comparison, the CAPM is the usual disaster. If we plot average returns against the covariance of returns with the market (rmrf), or against market betas, there is very little pattern. In particular, the hml portfolio, which by itself captures almost all the pricing information in the ten b/m portfolios (that's the point of the Fama-French model), has a 5% average return and a slightly negative market beta. The fact that the hml portfolio is right on the line in the previous graph is the main point of that graph.
There is an essentially correct story in the consumption-based model: value stocks and small stocks have higher average returns. And they have correspondingly higher covariance with consumption growth. Value and small stocks tend to do poorly in years of bad consumption growth, though they have little systematic correlation with the market.
Is this perfect? No. The model is \(E(R^e) = cov(R^e, \Delta c) \times \gamma\), where \(R^e\) is the excess return, \(\Delta c\) is consumption growth, and \( \gamma\) is the risk aversion coefficient. The mean returns are so large -- and the volatility of consumption growth so small -- that the slope coefficient = risk aversion coefficient is 80, a bit hard for most people to swallow.
Also, this is the linearized model. The true nonlinear model is \(E(R^e) = -cov(R^e_{t+1}, (c_{t+1}/c_t)^{-\gamma})\), and raising things to the 80th power is a lot different than multiplying by 80. On the other hand, perhaps this is the key to good performance. If you think the underlying correct model works in continuous time, which is linear, \( E_t(dR^e) = cov_t(dR^e, dc)\, \gamma \), then perhaps the linearized model is a better approximation to annual time-averaged data than is the discrete-time model that pretends all consumption happens in one big lump every December 31. Furthermore, if you raise consumption growth to the 80th power, all the covariance of returns with marginal utility comes in one or two big spikes. The model becomes a model of rare disasters in marginal utility, not one of repeated events. Perhaps, but life would be so much easier if markets were about repeated risks, not once-per-century disaster covariances.
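A back-of-the-envelope sketch of those two paragraphs (pure Python; the moments are round numbers I picked to land near the post's figure of 80, not estimates from the data): the linearized model pins down risk aversion as \( \gamma = E(R^e)/cov(R^e,\Delta c) \), and raising consumption growth to the \(-\gamma\) power shows how violent an exponent of 80 is.

```python
# Made-up illustrative moments, chosen so gamma comes out near 80:
mean_excess = 0.05       # 5% average excess return
cov_rc = 0.000625        # covariance of excess return with consumption growth

# Linearized model: E(R^e) = cov(R^e, dc) * gamma  =>  gamma = mean / cov
gamma = mean_excess / cov_rc
print(gamma)             # ~80

# Nonlinear model: marginal utility growth (c1/c0)^(-gamma).
# A modest +/-2% consumption move becomes a huge spike in marginal utility:
mu_bad = 0.98 ** -gamma   # consumption falls 2%
mu_good = 1.02 ** -gamma  # consumption rises 2%
print(mu_bad, mu_good)
```

Multiplying a 2% move by 80 gives 1.6; raising 0.98 to the -80 gives about 5. The nonlinear model thus concentrates the covariance in a few extreme observations, as the text says.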
The larger point: Very few researchers have really given the consumption model a good go to see just how full the glass might be. Hansen and Singleton famously rejected the model, but they used monthly seasonally adjusted consumption data, a bunch of low-power instruments, and no treatment of time aggregation (consumption is the sum for the month; returns are 30th to 30th) or the durability of most "nondurable" goods. (Shirts are "nondurable." I get all mine at Christmas, hence 4th quarter to 4th quarter works pretty well for me!) Their point was mostly an illustrative example of GMM methodology, not a serious Fama-French style empirical investigation of just how far a model can go. (The Fama-French model is also rejected!) It took 25 years before Jagannathan and Wang produced this simple graph. Can we do even better?
Sure, the consumption-based model won't work at a 5-minute interval. But is there some essence of truth in it, that stocks which fall more in business cycles, as measured by consumption, must pay a higher rate of return? Just how far does that truth go? I think one could do far better by thinking hard about time aggregation, data construction, durability, seasonal adjustment, and the appropriate frequency at which to evaluate such a model. And by trying to see just how far the model can go, rather than statistically rejecting its perfection.
In the end, "why are people afraid of value stocks and leave attractive returns on the table?" must come down to: 1) they're morons and haven't figured it out, 2) the value premium isn't really there, or 3) value stocks do badly in bad times, so they make a portfolio riskier. That consumption is also low in these bad times seems pretty natural.
Update
From "Cross-Sectional Consumption-Based Asset Pricing: A Reappraisal" by Tom Engsted and Stig Vinther Møller at University of Aarhus. Thanks to Stig for the link. BOP and EOP are beginning of period and end of period consumption. In a discrete time model, do you treat the sum of consumption over the year as happening at the beginning of the year, or the end of the year? Treating it at the beginning produces the dramatic graph on the left.
This is a small instance of the many explorations one can do to see if there is some power to the consumption-based model, rather than just take it literally and reject it.
A bigger point. Means are pretty insensitive to timing. But covariances and correlations of white-noise series are exquisitely sensitive to timing, measurement error, and so forth. \(cov(a_t,b_t)\) may be large, and \( cov(a_{t-1}, b_t)=0\). Another approach is to create time-averaged returns. I did this a long time ago here. Average January-to-January, February-to-February, March-to-March, etc. returns and compare them to the growth of annual macro data. The right thing to do is to explicitly model time aggregation -- the fact that consumption is reported as an annual average -- along with seasonal adjustment.
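A quick illustration of that sensitivity (pure Python, simulated white noise, parameters invented): two perfectly aligned white-noise series have covariance equal to their full variance, but shifting one of them by a single period wipes the covariance out.

```python
import random

random.seed(2)

n = 10000
a = [random.gauss(0, 1.0) for _ in range(n)]
b = a[:]   # b is a, perfectly aligned in time

def cov(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((u - mx) * (v - my) for u, v in zip(x, y)) / len(x)

aligned = cov(a, b)            # cov(a_t, b_t): the full variance, ~1
shifted = cov(a[:-1], b[1:])   # cov(a_{t-1}, b_t): ~0 for white noise
print(aligned, shifted)
```

A one-period mismeasurement of when consumption happens relative to when returns are earned can therefore destroy exactly the covariance the model needs.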
Friday, April 17, 2015
Macro Handbook 2
Last week I attended the first half of the conference on the Handbook of Macroeconomics Volume 2, organized by John Taylor and Harald Uhlig, held at Hoover. The conference program and most of the papers are here. The second half will be in Chicago April 23-25; program here.
Overall, this Handbook is shaping up as a very useful resource. Really good summary and review papers are a natural way into long literatures. Bad summary and review papers are long and boring. The conference produced the first kind. Most of the papers are rough first drafts, so make a note to come back when they're finished. A few highlights (with apologies to authors I've left out; I can't review them all here).
Chad Jones' "The Facts of Economic Growth" is a tremendous introduction to a complex field, nicely mixing facts and ideas. If you last left growth theory with a view that ratios are stable and we just climb up with TFP, this paper will change that view. Equipment is getting cheaper. Factor shares are moving. Human capital is trending up, along with its price. R&D spending and employment share trend up. Misallocation has first-order negative effects on productivity, a finding from growth theory that macro should pay more attention to. Agriculture is declining, health care expanding. Fertility is declining. Inequality... yes, that too. Sometimes countries converge, sometimes they diverge.
Monica Piazzesi Martin Schneider's "Housing and Macroeconomics" -- no paper yet, alas, but look for it -- is a very nice introduction to the kind of explicit modeling interacted with data that they've been doing.
Valerie Ramey's "Macroeconomic Shocks and Their Propagation" took up the current state of vector autoregressions, shock identification, and so forth. A great integration of a long literature. Where are we? Both Valerie and Arvind Krishnamurthy, discussing, showed lots of graphs with varying signs of the effects of monetary policy. Despite sixty years (since Milton Friedman regressed output on money, with high points from Tobin and Solow in the early 1960s; the St. Louis Fed in the late 1960s; Sims and Granger in the late 1970s; Christiano-Eichenbaum-Evans 1999; Romer and Romer more recently), we're still at it. Most of the discussion pointed out how much uncertainty there still is. I opined that most of the "uncertainty" was about how much you have to torture estimates to avoid the conclusion that interest rate rises raise output and inflation.
Gary Hansen and Lee Ohanian cover "Neoclassical Theories" by example, integrating three recent models they have worked on, showing how "neoclassical" theories can account for the Great Depression, the WWII boom, and the surprisingly large postwar fluctuations at frequencies lower than standard business cycles. My discussion complained about the habit of different models for different facts, and about exogenous TFP shocks. I suggested that it's time to view business cycle TFP movements as more than just scientific innovation identified by residual, and instead to include and independently measure all the "wedges" that policy induces between invention and adoption. Among other points.
Jim Stock and Mark Watson, "Factor Models for Macroeconomics" (No paper, alas, but look for it) is shaping up to be one of those very useful "how to" papers that handbooks can provide. Lots of insight in how to use the Stock-Watson methodology in many contexts, all in one place, and (to judge by Mark's presentation) ultra-clear and accessible.
Bob Hall closed strong with "Macroeconomics of Persistent Slumps," the latest in Bob's thinking on this subject (other highlights are his AEA Presidential Speech and Macro Annual paper.) An interesting sidelight, Bob also innovated a new solution methodology. Write the shocks as multinomials, then just solve first order conditions on the following tree. It's amazingly fast. And not in the Fernández-Villaverde, Rubio-Ramirez, Schorfheide cookbook.
Again, the other papers were great too. And the second round promises to equal if not better the first.
Overall, this Handbook is shaping up as a very useful resource. Really good summary and review papers are a natural way in to long literatures. Bad summary and review papers are long and boring. The conference produced the first kind. Most of the papers are rough first drafts, so make a note to come back when they're finished. A few highlights (with apologies to authors I've left out; I can't review them all here.)
Chad Jones' "The Facts of Economic Growth" is a tremendous introduction to a complex field, nicely mixing facts and ideas. If you last left growth theory with a view that ratios are stable and we just climb up with TFP, this paper will change that view. Equipment is getting cheaper. Factor shares are moving. Human capital is trending up, along with its price. R&D spending and employment share trend up. Misallocation has first-order negative effects on productivity, a finding from growth theory that macro should pay more attention to. Agriculture is declining, health care expanding. Fertility is declining. Inequality, .. yes, that too. Sometimes countries converge, sometimes they diverge.
"... once countries get on the “growth escalator,” good things tend to happen and they grow rapidly to move closer to the frontier. Where they end up depends, as we will discuss, on the extent to which their institutions improve."Jesús Fernández-Villaverde, Juan Rubio-Ramirez and Frank Schorfheide's "Solution and Estimation Methods for DSGE Models" is encyclopedic, approaching a book in itself. The technique of solving models, curiously banished from papers these days, is a dark art. There are lots of techniques. Which do you use when? think this will be a very useful "cookbook" for modelers, which is just the sort of thing handbooks are good for. We had a lively discussion on which techniques are best for which kinds of models. How many shocks, how many state variables, how important are nonlinearities all matter. I made the usual complaints about identification, and that perhaps models we know are false (one shock, many series) might not be right for formal black box estimation methods. Intuitive connection to robust facts in the data may be more important than statistical efficiency when the model is a quantitative parable.
Monica Piazzesi and Martin Schneider's "Housing and Macroeconomics" -- no paper yet, alas, but look for it -- is a very nice introduction to the kind of explicit modeling interacted with data that they've been doing.
Valerie Ramey's "Macroeconomic Shocks and Their Propagation" took up the current state of vector autoregressions, shock identification and so forth. A great integration of a long literature. Where are we? Both Valerie and Arvind Krishnamurthy, as discussant, showed lots of graphs with varying signs of the effects of monetary policy. Despite sixty years of effort (since Milton Friedman regressed output on money, with high points from Tobin and Solow in the early 1960s; the St. Louis Fed in the late 1960s; Sims and Granger in the late 1970s; Christiano-Eichenbaum-Evans in 1999; Romer and Romer more recently), we're still at it. Most of the discussion pointed out how much uncertainty there still is. I opined that most of the "uncertainty" was about how much you have to torture estimates to avoid the conclusion that interest rate rises raise output and inflation.
Gary Hansen and Lee Ohanian cover "Neoclassical Theories" by example, integrating three recent models they have worked on, covering how "neoclassical" theories can account for the Great Depression, the WWII boom, and the surprisingly large postwar fluctuations at frequencies lower than standard business cycles. My discussion complained about the habit of different models for different facts, and about exogenous TFP shocks. I suggested that it's time to view business cycle TFP movements as more than just scientific innovation identified by residual, and to include and independently measure all the "wedges" that policy induces between invention and adoption. Among other points.
Jim Stock and Mark Watson, "Factor Models for Macroeconomics" (No paper, alas, but look for it) is shaping up to be one of those very useful "how to" papers that handbooks can provide. Lots of insight in how to use the Stock-Watson methodology in many contexts, all in one place, and (to judge by Mark's presentation) ultra-clear and accessible.
Bob Hall closed strong with "Macroeconomics of Persistent Slumps," the latest in Bob's thinking on this subject (other highlights are his AEA Presidential Speech and Macro Annual paper.) An interesting sidelight, Bob also innovated a new solution methodology. Write the shocks as multinomials, then just solve first order conditions on the following tree. It's amazingly fast. And not in the Fernández-Villaverde, Rubio-Ramirez, Schorfheide cookbook.
Again, the other papers were great too. And the second round promises to equal if not better the first.
Thursday, April 16, 2015
Banking at the IRS
A while ago in two blog posts here and here I suggested many ways other than currency to get a zero interest rate if the government tries to lower rates below zero. Buy gift cards, subway cards, stamps; prepay bills, rent, mortgage and especially taxes -- the IRS will happily take your money now and you can credit it against future tax payments; have your bank make out a big certified check in your name, and sit on it, don't cash incoming checks. Start a company that takes money and invests in all these things (as well as currency).
Chris and Miles Kimball have an interesting essay exploring these ideas "However low interest rates might go, the IRS will never act like a bank." Their central point: sure that's how things work now. But with substantial negative interest rates, all of these contracts can change. It's technically possible in each case for people and businesses to charge pre-payment penalties amounting to a negative nominal rate.
Reply: Sure, in principle. Nominal claims can all be dated, and positive or negative interest charged between all dates.
But this did not happen in the US and does not happen in other countries for positive inflation and high nominal rates, despite symmetric incentives, and at rates much higher than the contemplated 3-5% or so negative rates. Yes, with large nominal rates there is pressure to pay faster, inventory cash-management to reduce people's holdings of depreciating nominal claims, but this pervasive indexation of nominal payments did not break out. The IRS did not offer interest for early payment.
More deeply, what they're describing is a tiny step away from perfect price indexing. If all nominal payments are perfectly indexed to the nominal interest rate, accrued daily, then it's a tiny change to index all prices themselves to the CPI, accrued daily. If "how much you owe me," say to rent a house, is legally, contractually, and mechanically determined as a value times e^rt, and changes day by day, then e^(pi t) is just as easy.
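To see how mechanical this is, here is a toy calculation (the rate, inflation number, and payment size are all hypothetical, chosen only for illustration): the same one-line formula handles a negative nominal rate and CPI indexation alike.

```python
import math

def indexed_payment(base, annual_rate, days):
    """Value of a nominal claim of `base`, accrued continuously at `annual_rate`, after `days` days."""
    return base * math.exp(annual_rate * days / 365.0)

# $1,000 of rent due in 90 days, indexed to a hypothetical -4% nominal rate:
# the payment shrinks slightly, by exactly the same mechanics as it would
# grow under a hypothetical +2% CPI indexation.
rent_negative_rate = indexed_payment(1000.0, -0.04, 90)  # a bit under $1,000
rent_cpi_indexed = indexed_payment(1000.0, 0.02, 90)     # a bit over $1,000
```

The point of the sketch is only that once contracts accrue value as e^rt, swapping the interest rate r for the inflation rate pi is a trivial change.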
So, price stickiness itself would (should!) disappear under this scenario.
Price stickiness has always been a bit of a puzzle for economists. As the Kimballs speculate how easy it is to index payments to negative interest rates, so economists speculate how easy it is to index payments to inflation. Yet it seems not to happen.
So this point of view strikes me as a bit of a catch-22 for its advocates, who generally are of the frame of mind that prices and nominal contracts are sticky and that’s why negative nominal rates are a good idea to "stimulate demand" in the first place. If we can have negative nominal rates and change all these legal and contractual zero-rate promises to allow it, then prices won't be sticky any more! Conversely, I should be cheering, as it amounts to a broad push to unstick prices. That has long seemed to me the natural policy response to the view that sticky prices are the root of all our troubles. It would allow negative rates, but eliminate their need as well.
Alas, the world seems remarkably resistant to time-indexing all payments.
Wednesday, April 15, 2015
Gdefault needs not Grexit
The little grumpy cartoon usually represents me pounding my coffee down in agreement as the WSJ exposes some idiocy. Last week, alas, I spilled my grumpy coffee in disagreement with a little part of its otherwise excellent "The case for letting Greece go."
Thursday marks another deadline in Greece’s struggle to avoid default, as a €450 million payment to the International Monetary Fund comes due. Athens says it will meet this obligation, but sooner or later Prime Minister Alexis Tsipras and his government will miss a payment to someone if it doesn’t agree with creditors on a new bailout. An exit from the euro would then be a real possibility.

Please can we stop passing along this canard -- that Greece defaulting on some of its bonds means that Greece must change currencies. Greece no more needs to leave the euro zone than it needs to leave the meter zone and recalibrate all its rulers, or than it needs to leave the UTC+2 zone and reset all its clocks to Athens time. When large companies default, they do not need to leave the dollar zone. When cities and even US states default they do not need to leave the dollar zone. A common currency means that sovereigns default just like large financial companies. (Yes, a bit of humor in the last one.)
Sure we can have an argument about whether it would be a good idea. The first 147 devaluations and currency confiscations didn't produce Singapore on the Mediterranean, but maybe the 148th will do the trick. The canard is the logical necessity of Grexit.
This is a particularly dangerous canard too. Greece is undergoing a slow motion bank run. Greeks are wisely taking their euros out of Greek banks and either holding cash or taking it abroad. So, how do Greek banks give them euros without selling all their assets -- loans and Greek government bonds? Answer, they get the money from the Greek central bank, which gets the euros from the ECB. The ECB is getting antsy about funding not just Greek government debt, but the whole Greek banking system.
Sooner or later Greeks will translate all this central banker speak about "capital controls" "liquidity management" and so forth to "there is a good chance that tomorrow morning your bank account will be frozen or converted to Drachmas." Then the run of all time starts and the whole thing unravels.
How do you stop that from happening? By shouting from the rooftops that the currency remains the euro, no matter if the government defaults on its loans to the IMF. At least we can shout from the rooftops that changing currencies is a separate decision, and that stiffing the IMF does not imply the logical necessity of grabbing Greek bank accounts.
To be sure, the article gets much right. Its main thesis: letting Greece default might be the right thing to do.
But if Athens won’t implement reforms that would return Greece to growth and sustainable finances, allowing the country to leave would be the least bad outcome.

And if the WSJ understood that "allowing the country to default" is not the same thing as "allowing the country to leave," the case is even stronger. (Though who does this "allowing" is a bit muddy. One more subject-less sentence infects the forlorn English of policy-speak.)
No one should cheer a Greek exit, which would be a disaster for the Greeks.

Yes. Yet another reason to separate sovereign default from a change of monetary units.
Greece’s main contagion threat now would be if it is bailed out again without reform.

This is the article's central point, and a good one. In financial as in foreign policy, people take important lessons from discovering that threats are empty.
The strongest argument against allowing Greece to leave the euro is that it would dent the bloc’s appearance of permanence, making the euro more like a currency peg that members could leave at will.

Exactly. And if we would all go back to the original Instruction Manual for the Euro, which says that sovereign default can happen, just like corporate default, and does not require a change of currency, that permanence would be all the more assured.
Tuesday, April 14, 2015
Blanchard on Contours of Policy
Olivier Blanchard (the IMF's research director) has a thoughtful blog post, "Contours of Macroeconomic Policy in the Future." In part it's background for the IMF's upcoming conference with the charming title “Rethinking Macro Policy III: Progress or Confusion?” (You can guess my choice.)
Olivier cleanly poses some questions which in his view are likely to be the focus of policy-world debate for the next few years. Looking for policy-oriented thesis topics? It's a one-stop shop.
Whether these should be the questions is another matter. (Mostly no, in my view.)
As a blogger, I can't resist a few pithy answers. But please note, I'm mostly having fun, and the questions and essay are much more serious.
Financial regulation
... Where do we stand? Are some dimensions of systemic risk easier to measure (e.g., leverage in the banking sector vs. interconnectedness of banks and non-banks or risks outside the banking sector)? How should we assess the experience with stress-tests? And have we made enough progress in reducing systemic risk since the crisis, e.g., with Dodd-Frank, the Vickers commission, the Financial Stability Board, etc?
Answer: "Systemic risk" is barely defined. The idea that regulators will, this time, really really, understand risks taken by the big banks, see trouble ahead, and stop the banks from failing, is a triumph of hope over repeated experience.
The only progress -- and it's big -- is the slow realization that banks can and should issue lots more equity.
Macro Prudential Policies
... Do we have or can we develop tools to deal with the different types of risk, from high housing prices, to insufficient capital in some financial institutions, to sudden drops in liquidity in some financial markets?
Using these tools ...raises political economy issues. In a housing boom, increasing the loan to value ratio may be politically difficult. Questions: Given these issues, when should we use macro prudential tools, or should we use tougher, non contingent financial regulation? To be concrete, should we aim for variable capital ratios and decide when to adjust them, or just give up on the variable part, and aim for high but constant capital ratios?
Answer: The hubris that the Davos set will be able to figure out just the right amount of capital, and then fine-tune that month-to-month and bank-to-bank is astounding. "Political economy concerns" is putting it mildly. The IMF's "bubble" or "imbalance" is the local Congressman's boom, and he or she will be hopping mad if the Fed restricts credit to his district or pet industry in favor of another.
The fact that our regulators are still talking about liquidity betrays a fundamental confusion of individual vs. systemic risks. Liquidity is the plan, "if we lose money we'll sell assets." To whom? Regulators demanding liquidity to plan for a financial crisis is like the FAA making sure everyone on the plane has enough money to buy a parachute in case of engine failure.
Finally, it is clear that both financial regulation and macro prudential tools are likely to lead financial actors to adjust and explore ways of getting around them. Questions: In this game of cat and mouse, can the macro prudential regulators hope to win? Or will regulation and tools become increasingly complex and possibly counterproductive?
That's easy. No and Yes. Actually I'm being too pessimistic. Regulatory capture works both ways. An easy forecast: Stress-testers at the Fed will be getting lucrative salary offers to move to the private sector and help pass stress tests. Which they will increasingly do.
Monetary Policy
... Questions: Under the highly realistic assumption that financial regulation and macroprudential tools do not fully take care of financial stability, [Highly realistic indeed! You just answered the first set of questions as I did!] should monetary policy take financial stability into account? And if so, how? Can the interest rate or other monetary policy tools reduce financial risk? How should macro prudential tools and monetary policy be coordinated? Should they both be under the responsibility of the central bank?
Let's remember that the crash of 1929 was, at least in the standard history, sparked by the Fed trying to restrain what they saw as the bubble in the stock market.
If this is the case, and central banks have tools which can have effects on very specific sectors of the economy, can they retain full independence?
No. In a democracy, independence comes with limited authority. The financial central planner cannot and will not long stay independent.
The zero ... lower bound on the interest rate set by central banks was thought to be a theoretical curiosum, unlikely to happen, and, in any case, easy to combat if reached. If reached, central banks could, through announcements of future monetary policy, increase expected inflation and achieve large negative interest rates. We have learned that this was simply wishful thinking. The zero lower bound could be reached, inflation expectations are not easy to manipulate, and it may take a very long time to exit.
Three cheers. Wow, Olivier, who wrote one of the most influential calls for announcements of higher inflation targets, looks at the data and calls it "wishful thinking." Bravo.
.. Quantitative Easing,... Questions: ...should central banks eventually return to the traditional mode of intervening at the short end of the market, or should they continue to buy and sell longer maturity sovereign or corporate bonds? Should the balance sheets of central banks return to their pre-crisis size, or remain permanently larger? If the central bank intervenes along the yield curve, how should monetary policy and debt management by the Treasury be combined?
Large balance sheet, interest-paying reserves, open to everyone. Some crisis interventions reveal very desirable permanent states of affairs. Stop fooling around with direct intervention in long-term debt, mortgage-backed security markets, and don't follow other central banks to buying and selling stocks, foreign exchange, etc.
Fiscal Policy
... Questions: What is a dangerous level of debt? That which markets doubt you can repay. Seriously, if you're growing fast with a good long run plan for containing expenditures and raising revenue without ruinous taxation, a lot. If not, a lot less. ... What do we know about confidence effects? You mean statements by officials that "engender confidence?" Go back to the Romans, burn incense at the Temple of Jupiter. More seriously, we've learned that speaking loudly with no stick doesn't work. ...Should the old idea of the fiscal golden rule, the separation of a current and of a capital account, be resurrected? Separating two sides of an accounting identity sounds like an interesting golden rule. I think it would be golden to separate the current account and capital account I run down at the apple store -- they give me stuff, I don't have to give them money. Olivier surely has something more sophisticated in mind, and I'm revealing I'm a rube at this policy-speak coded language.
Most observers agree that the fiscal stimulus early in the crisis was instrumental in limiting the decrease in output. I'm glad he said "most" not "all"....
Capital inflows, exchange rate management and capital controls
The crisis has reinforced the notion that international capital flows can be very volatile, with emerging markets being particularly vulnerable. Back to previous comment. Capital can try to flow, but unless goods flow in the other direction, all it does is to lower prices. Unless you can pass a rule to get rid of accounting identities. See above. Policy makers have responded with a panoply of tools, from capital controls A polite word for expropriation to macro prudential measures aimed at shaping flows, What a lovely little policy-ese phrase and FX intervention. .... And what does the experience since the crisis say about the optimal opening of the capital account, even in the long run? Translated to English, back to the de-globalized protectionist world. If capital can't flow, neither can goods.
The International Monetary and Financial System
.... Questions: ... Should we reexamine the rules of the game for exchange rates? How can we improve on the process of sovereign debt restructuring?
As Olivier's essay moves on, and gradually reverts to the obfuscatory Orwellian prose of the international policy world, I get more and more animated. I mean, just who is this "we?" Who is going to tell you you're not allowed to buy euros for your vacation this summer ("capital controls"), tell your bank not to give you a loan ("macro-prudential policy"), decide how many billions to siphon from your pocket to the owners of large banks ("recapitalization," "process of sovereign debt restructuring"), or tell you you're not allowed to expand your business in a new country ("macro prudential measures aimed at shaping flows"), and so forth? When there even is a "we," that is, unlike most of these subject-less phrases, like "the optimal opening of the capital account."
What should be the role of international forums such as the G20?
Aha, now I get it.
Thursday, April 2, 2015
The sources of stock market fluctuations
How much of the variation in stock prices corresponds to dividend-growth shocks, and how much to discount-rate shocks?
An under-appreciated point occurred to me while preparing for my Coursera class, and while preparing to comment on Daniel Greenwald, Martin Lettau and Sydney Ludvigson's nice paper "Origins of Stock Market Fluctuations" at the last NBER EFG meeting.
The answer is, it depends on the horizon and the measure. 100% of the variance of price-dividend ratios corresponds to expected-return (discount-rate) shocks, and none to dividend-growth (cashflow) shocks. 50% of the variance of one-year returns corresponds to cashflow shocks. And 100% of long-run price variation corresponds to cashflow shocks, not expected-return shocks. These facts all coexist.
I think there is some confusion on the point. If nothing else, this makes for a good problem set question.
The last point is easiest to see just with a plot. Prices and dividends are cointegrated. Prices correspond to dividends and expected returns. Dividends have a unit root, but expected returns are stationary. Over the long run prices will not deviate far from dividends. So 100% of long-enough run price variation must come from dividend variation, not expected returns.
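A quick simulation makes the same point. This is a sketch with illustrative round-number parameters (a 0.94 persistence and 15% shock volatilities are stand-ins, not estimates): log dividends follow a random walk, the log dividend yield is a stationary AR(1), and the fraction of price variance coming from dividends rises with the horizon.

```python
import numpy as np

rng = np.random.default_rng(0)
T, phi = 200_000, 0.94
eps_dp = 0.15 * rng.standard_normal(T)  # expected-return (dividend-yield) shocks
eps_d = 0.15 * rng.standard_normal(T)   # dividend-growth (cashflow) shocks

dp = np.zeros(T)                        # stationary AR(1) log dividend yield
for t in range(1, T):
    dp[t] = phi * dp[t - 1] + eps_dp[t]
d = np.cumsum(eps_d)                    # random-walk log dividends
p = d - dp                              # log price, cointegrated with dividends

def dividend_share(k):
    """Fraction of var(p_{t+k} - p_t) accounted for by dividend growth
    (the shocks are independent, so the two variances add up)."""
    return np.var(d[k:] - d[:-k]) / np.var(p[k:] - p[:-k])

# The dividend share rises toward one as the horizon k lengthens.
print(dividend_share(1), dividend_share(50))
```

Because the dividend-yield component is stationary, its contribution to var(p_{t+k} - p_t) is bounded, while the random-walk dividend contribution grows linearly in k.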
Ok, a little more carefully, with equations.
A quick review:
The most basic VAR for asset returns is \[ \Delta d_{t+1} = b_d \times dp_{t}+\varepsilon_{t+1}^{d} \] \[ dp_{t+1} = \phi \times dp_{t} +\varepsilon_{t+1}^{dp} \] Using only dividend yields dp, dividend growth is basically unforecastable \( b_d \approx 0\) and \( \phi\approx0.94 \) and the shocks are conveniently uncorrelated. The behavior of returns follows from the identity, that you need more dividends or a higher price to get a return, \[ r_{t+1}\approx-\rho dp_{t+1}+dp_{t}+\Delta d_{t+1}% \] (This is the Campbell-Shiller return approximation, with \(\rho \approx 0.96\).) Thus, the implied regression of returns on dividend yields, \[ r_{t+1} = b_r \times dp_{t}+\varepsilon_{t+1}^{r} \] has \(b_r = (1-\rho\phi)+0 = 1-0.96\times0.94 = 0.1\) and a shock negatively correlated with dividend yield shocks and positively correlated with dividend growth shocks.
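As a check on the algebra, a short simulation of this two-equation VAR recovers the implied return-forecasting coefficient \(b_r = 1-\rho\phi \approx 0.1\). The shock volatilities below are illustrative stand-ins, not estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
T, phi, rho = 500_000, 0.94, 0.96

eps_dp = 0.15 * rng.standard_normal(T)  # dividend-yield shocks (assumed volatility)
dd = 0.14 * rng.standard_normal(T)      # dividend growth: unforecastable, b_d = 0

dp = np.zeros(T)                        # AR(1) log dividend yield
for t in range(1, T):
    dp[t] = phi * dp[t - 1] + eps_dp[t]

# Returns built from the Campbell-Shiller identity, not simulated separately:
# r_{t+1} = -rho * dp_{t+1} + dp_t + (dividend growth)_{t+1}
r = -rho * dp[1:] + dp[:-1] + dd[1:]

# OLS slope of returns on the lagged dividend yield
b_r = np.polyfit(dp[:-1], r, 1)[0]
print(round(b_r, 3))  # close to 1 - rho*phi = 0.0976
```

Substituting the AR(1) for \(dp_{t+1}\) in the identity gives \(r_{t+1} = (1-\rho\phi)dp_t - \rho\varepsilon^{dp}_{t+1} + \varepsilon^d_{t+1}\), so the regression slope is exactly \(1-\rho\phi\) in population.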
The impulse response function for this VAR naturally suggests "cashflow" (dividend) and "expected return" shocks, (d/p). (Sorry for recycling old points, but not everyone may know this.)
Three propositions:

1. 100% of the variance of price-dividend ratios corresponds to expected-return shocks, and none to dividend-growth shocks.
2. About 50% of the variance of one-year returns corresponds to cashflow shocks.
3. 100% of long-run price variation corresponds to cashflow shocks, not expected-return shocks.

But these facts are not contradictory.
Why are returns and p/d so different? Current cash flow shocks affect returns. But a shock to dividends, when prices rise at the same time, does not affect the dividend price ratio. (This is the essence of the Campbell-Ammer return decomposition.)
The third proposition is less familiar:
This is related to a point made by Fama and French in their Equity Premium paper. Long run average returns are driven by long run dividend growth plus the average value of the dividend yield. A difference in valuation -- higher prices for a given set of dividends -- can boost returns in a sample, but that mechanism can't last. (Avdis and Wachter have a nice recent paper formalizing this point.) It's related to a similar point made often by Bob Shiller: long run investors should buy stocks for the dividends.
A little more generality as this is the new bit.
Since \( dp_t = d_t - p_t \), \[ p_{t+k}-p_t = -(dp_{t+k}-dp_t) + \sum_{j=1}^{k}\Delta d_{t+j} \] \[ p_{t+k}-p_t = (1-\phi^{k})dp_t - \sum_{j=1}^{k}\phi^{k-j} \varepsilon^{dp}_{t+j} + \sum_{j=1}^{k} \varepsilon^d_{t+j} \] \[ var(p_{t+k}-p_t) = \frac{(1-\phi^{k})^2}{1-\phi^2} \sigma^2(\varepsilon^{dp}) + \frac{(1-\phi^{2k})}{1-\phi^2} \sigma^2(\varepsilon^{dp}) + k\sigma^2(\varepsilon^d) \] \[ var(p_{t+k}-p_t) = 2\frac{(1-\phi^{k})}{1-\phi^2} \sigma^2(\varepsilon^{dp}) + k\sigma^2(\varepsilon^d) \] So you can see the last term takes over. It doesn't take over as fast as you might think. Here's a graph using sample values.
At a one year horizon, it's just about 50/50. The dividend shocks eventually take over, at rate 1/k. But at 50 years, it's still about 80/20.
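The graph's numbers follow directly from the last variance formula. A minimal sketch, assuming for illustration equal shock variances of (15%)^2 each (the exact split depends on these assumed inputs):

```python
phi = 0.94
sig2_dp = 0.15 ** 2  # variance of dividend-yield shocks (assumed)
sig2_d = 0.15 ** 2   # variance of dividend-growth shocks (assumed)

def expected_return_share(k):
    """Share of var(p_{t+k} - p_t) due to expected-return (dp) shocks,
    using var = 2(1-phi^k)/(1-phi^2) * sig2_dp + k * sig2_d."""
    dp_term = 2 * (1 - phi ** k) / (1 - phi ** 2) * sig2_dp
    d_term = k * sig2_d
    return dp_term / (dp_term + d_term)

for k in (1, 10, 50):
    print(k, round(expected_return_share(k), 2))
# 1 -> 0.51, 10 -> 0.44, 50 -> 0.25:
# about 50/50 at one year, still roughly 80/20 for dividends at fifty years.
```

The dp-shock term is bounded by \(2/(1-\phi^2)\) times its shock variance, while the dividend term grows like k, so the expected-return share decays only slowly, roughly at rate 1/k.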
Exercise for the interested reader/finance professor looking for problem set questions: do the same thing for long-horizon returns, \( r_{t+1}+r_{t+2}+...+r_{t+k} \), using \( r_{t+1} = -\rho dp_{t+1} + dp_t + \Delta d_{t+1} \). It's not so pretty, but you can get a closed-form expression here too, and again dividend shocks take over in the long run.
Be forewarned, the long run return has all sorts of pathological properties. But nobody holds assets forever, without eating some of the dividends.
Disclaimer: Notice I have tried to say "associated with" or "correspond to" and not "caused by" here! This is just about facts. The facts have just as easy a "behavioral" interpretation about fads and bubbles in prices as they do a "rationalist" interpretation. Exercise 2: Write the "behavioralist" and then "rationalist" introduction / interpretation of these facts. Hint: they reverse cause and effect about prices and expected returns, and whether people in the market have rational expectations about expected returns.
An under-appreciated point occurred to me while preparing for my Coursera class and to comment on Daniel Greenwald, Martin Lettau and Sydney Ludvigson's nice paper "Origins of Stock Market Fluctuations" at the last NBER EFG meeting: how much of stock price variation corresponds to cashflow (dividend) shocks, and how much to expected return (discount rate) shocks?
The answer is, it depends on the horizon and the measure. 100% of the variance of price-dividend ratios corresponds to expected return (discount rate) shocks, and none to dividend growth (cash flow) shocks. 50% of the variance of one-year returns corresponds to cashflow shocks. And 100% of long-run price variation corresponds to cashflow shocks, not expected return shocks. These facts all coexist.
I think there is some confusion on this point. If nothing else, it makes for a good problem set question.
The last point is easiest to see just with a plot. Prices and dividends are cointegrated. Prices correspond to dividends and expected returns. Dividends have a unit root, but expected returns are stationary. Over the long run prices will not deviate far from dividends. So 100% of long-enough run price variation must come from dividend variation, not expected returns.
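The cointegration logic can be checked in a quick simulation (a sketch with placeholder parameter values, not estimates): let log dividends follow a random walk, the log dividend yield a stationary AR(1), and log prices \( p = d - dp \). Long-horizon price changes wander arbitrarily far, while \( p - d \) stays bounded, so at long enough horizons essentially all price variation is dividend variation.

```python
import numpy as np

# Sketch of the cointegration point. Parameter values (phi = 0.94,
# shock volatilities 0.15) are placeholder assumptions for illustration.
rng = np.random.default_rng(0)
T, phi = 100_000, 0.94
eps_d = 0.15 * rng.standard_normal(T)
eps_dp = 0.15 * rng.standard_normal(T)

d = np.cumsum(eps_d)                     # unit root in log dividends
dp = np.zeros(T)
for t in range(1, T):
    dp[t] = phi * dp[t - 1] + eps_dp[t]  # stationary log dividend yield
p = d - dp                               # log price = dividends - dividend yield

# p - d = -dp is stationary: its k-period changes stay bounded,
# while k-period price changes grow like the random walk in dividends.
for k in [1, 50, 2500]:
    print(k, np.std(p[k:] - p[:-k]), np.std((p - d)[k:] - (p - d)[:-k]))
```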
Ok, a little more carefully, with equations.
A quick review:
The most basic VAR for asset returns is \[ \Delta d_{t+1} = b_d \times dp_{t}+\varepsilon_{t+1}^{d} \] \[ dp_{t+1} = \phi \times dp_{t} +\varepsilon_{t+1}^{dp} \] Using only dividend yields dp, dividend growth is basically unforecastable \( b_d \approx 0\) and \( \phi\approx0.94 \), and the shocks are conveniently uncorrelated. The behavior of returns follows from the identity that you need more dividends or a higher price to get a return, \[ r_{t+1}\approx-\rho dp_{t+1}+dp_{t}+\Delta d_{t+1} \] (This is the Campbell-Shiller return approximation, with \(\rho \approx 0.96\).) Thus, the implied regression of returns on dividend yields, \[ r_{t+1} = b_r \times dp_{t}+\varepsilon_{t+1}^{r} \] has \(b_r = (1-\rho\phi)+0 = 1-0.96\times0.94 \approx 0.1\) and a shock negatively correlated with dividend yield shocks and positively correlated with dividend growth shocks.
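As a quick check of the implied return regression, one can simulate the VAR and verify that regressing returns on dividend yields recovers \( b_r = 1-\rho\phi \). The shock volatilities (0.15 each) are assumptions for the sketch; \( \phi \) and \( \rho \) follow the text.

```python
import numpy as np

# Simulate the basic VAR with b_d = 0 (dividend growth unforecastable)
# and verify the implied return-forecasting coefficient b_r = 1 - rho*phi.
rng = np.random.default_rng(1)
T, phi, rho = 200_000, 0.94, 0.96

dp = np.zeros(T)
eps_dp = 0.15 * rng.standard_normal(T)   # assumed shock volatility
eps_d = 0.15 * rng.standard_normal(T)    # assumed shock volatility
for t in range(1, T):
    dp[t] = phi * dp[t - 1] + eps_dp[t]
dd = eps_d                               # Delta d_{t+1} = eps_d with b_d = 0

# Campbell-Shiller identity: r_{t+1} = -rho*dp_{t+1} + dp_t + Delta d_{t+1}
r = -rho * dp[1:] + dp[:-1] + dd[1:]

# OLS slope of r_{t+1} on dp_t
x = dp[:-1]
b_r = np.cov(r, x)[0, 1] / np.var(x)
print(b_r, 1 - rho * phi)   # both close to 0.0976
```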
The impulse response function for this VAR naturally suggests "cashflow" (dividend) shocks and "expected return" (dividend yield, dp) shocks. (Sorry for recycling old points, but not everyone may know this.)
Three propositions:
- The variance of p/d is 100% risk premiums, 0% cashflow shocks.
But
- The variance of returns is 50% due to risk premiums, 50% due to cashflows.
Why are returns and p/d so different? Current cash flow shocks affect returns. But a shock to dividends, when prices rise at the same time, does not move the dividend-price ratio. (This is the essence of the Campbell-Ammer return decomposition.)
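With uncorrelated shocks, the return identity gives the one-period return shock \( \varepsilon^r = -\rho\,\varepsilon^{dp} + \varepsilon^d \), so the roughly 50/50 split in return variance can be computed directly. Equal shock volatilities are an assumption for this sketch.

```python
# Variance split of the return shock eps_r = -rho*eps_dp + eps_d,
# assuming uncorrelated shocks with equal volatility (0.15), placeholder
# values chosen to deliver an approximately even split.
rho = 0.96
sig2_dp, sig2_d = 0.15**2, 0.15**2

var_r = rho**2 * sig2_dp + sig2_d
share_dr = rho**2 * sig2_dp / var_r   # expected-return (discount rate) share
share_cf = sig2_d / var_r             # cashflow share
print(share_dr, share_cf)             # roughly 0.48 and 0.52
```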
The third proposition is less familiar:
- The long-run variance of stock market values (and returns) is 100% due to cash flow shocks and none to expected return or discount rate shocks.
This is related to a point made by Fama and French in their Equity Premium paper. Long-run average returns are driven by long-run dividend growth plus the average value of the dividend yield. A difference in valuation -- higher prices for a given set of dividends -- can boost returns in a sample, but that mechanism can't last. (Avdis and Wachter have a nice recent paper formalizing this point.) It's related to a similar point made often by Bob Shiller: long-run investors should buy stocks for the dividends.
A little more generality, as this is the new bit. Since \( dp_t = d_t - p_t \),
\[ p_{t+k}-p_t = -(dp_{t+k}-dp_t) + \sum_{j=1}^{k}\Delta d_{t+j} \] \[ p_{t+k}-p_t = (1-\phi^{k})dp_t - \sum_{j=1}^{k}\phi^{k-j} \varepsilon^{dp}_{t+j} + \sum_{j=1}^{k} \varepsilon^d_{t+j} \] \[ var(p_{t+k}-p_t) = \frac{(1-\phi^{k})^2}{1-\phi^2} \sigma^2(\varepsilon^{dp}) + \frac{1-\phi^{2k}}{1-\phi^2} \sigma^2(\varepsilon^{dp}) + k\sigma^2(\varepsilon^d) \] \[ var(p_{t+k}-p_t) = 2\frac{1-\phi^{k}}{1-\phi^2} \sigma^2(\varepsilon^{dp}) + k\sigma^2(\varepsilon^{d}) \] So you can see that the last term, from dividend shocks, takes over. It doesn't take over as fast as you might think. Here's a graph using sample values.
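The variance shares implied by the last formula are easy to tabulate. The expected-return term is bounded in \( k \) while the dividend term grows like \( k \). Equal shock volatilities (0.15 each) are placeholder assumptions chosen so the one-year split comes out near 50/50.

```python
# Share of var(p_{t+k} - p_t) due to expected-return (dp) shocks vs.
# dividend shocks, from the closed form above. Parameter values are
# illustrative assumptions, not estimates.
phi = 0.94
sig2_dp, sig2_d = 0.15**2, 0.15**2

def shares(k):
    v_dp = 2 * (1 - phi**k) / (1 - phi**2) * sig2_dp  # expected-return term (bounded)
    v_d = k * sig2_d                                  # dividend term (grows with k)
    total = v_dp + v_d
    return v_dp / total, v_d / total

for k in [1, 10, 50]:
    print(k, shares(k))   # dividend share rises with horizon
```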
At a one-year horizon, it's just about 50/50. The dividend shocks eventually take over, as the expected-return share shrinks at rate 1/k. But at 50 years, it's still about 80/20.
Exercise for the interested reader (or finance professor looking for problem set questions): do the same thing for long-horizon returns, \( r_{t+1}+r_{t+2}+...+r_{t+k} \), using \( r_{t+1} = -\rho dp_{t+1} + dp_t + \Delta d_{t+1} \). It's not so pretty, but you can get a closed-form expression here too, and again dividend shocks take over in the long run.
Be forewarned: the long-run return has all sorts of pathological properties. But nobody holds assets forever without eating some of the dividends.
Disclaimer: Notice I have tried to say "associated with" or "correspond to" and not "caused by" here! This is just about facts. The facts have just as easy a "behavioral" interpretation about fads and bubbles in prices as they do a "rationalist" interpretation. Exercise 2: Write the "behavioralist" and then "rationalist" introduction / interpretation of these facts. Hint: they reverse cause and effect about prices and expected returns, and whether people in the market have rational expectations about expected returns.