Monday, March 30, 2015

Adam Davidson on Immigration

Illustration by Andrew Rae, source New York Times

Adam Davidson has a very nice New York Times Magazine article, "Debunking the Myth of the Job-Stealing Immigrant", in favor of "radically open borders."

Here's how a top professional journalist and writer puts together the central argument, so much more cleanly than I can do it:
So why don’t we open up?
The chief logical mistake we make is something called the Lump of Labor Fallacy: the erroneous notion that there is only so much work to be done and that no one can get a job without taking one from someone else. It’s an understandable assumption. After all, with other types of market transactions, when the supply goes up, the price falls. If there were suddenly a whole lot more oranges, we’d expect the price of oranges to fall or the number of oranges that went uneaten to surge.

But immigrants aren’t oranges. It might seem intuitive that when there is an increase in the supply of workers, the ones who were here already will make less money or lose their jobs. Immigrants don’t just increase the supply of labor, though; they simultaneously increase demand for it, using the wages they earn to rent apartments, eat food, get haircuts, buy cellphones. That means there are more jobs building apartments, selling food, giving haircuts and dispatching the trucks that move those phones. Immigrants increase the size of the overall population, which means they increase the size of the economy. Logically, if immigrants were “stealing” jobs, so would every young person leaving school and entering the job market; countries should become poorer as they get larger. In reality, of course, the opposite happens.

Most anti-immigration arguments I hear are variations on the Lump of Labor Fallacy. That immigrant has a job. If he didn’t have that job, somebody else, somebody born here, would have it. This argument is wrong, or at least wildly oversimplified. But it feels so correct, so logical. And it’s not just people like my grandfather making that argument. Our government policy is rooted in it.

The single greatest bit of evidence disproving the Lump of Labor idea comes from research about the Mariel boatlift, a mass migration in 1980 that brought more than 125,000 Cubans to the United States. According to David Card, an economist at the University of California, Berkeley, roughly 45,000 of them were of working age and moved to Miami; in four months, the city’s labor supply increased by 7 percent. Card found that for people already working in Miami, this sudden influx had no measurable impact on wages or employment. His paper was the most important of a series of revolutionary studies that transformed how economists think about immigration. Before, standard economic models held that immigrants cause long-term benefits, but at the cost of short-term pain in the form of lower wages and greater unemployment for natives. But most economists now believe that Card’s findings were correct: Immigrants bring long-term benefits at no measurable short-term cost.
Needless to say, the "lump of labor" fallacy pervades politics, policy, and popular discussion well beyond immigration. But Adam doesn't bother with the 100 other fallacies.

A beautiful stylistic choice: Adam's antagonist is ... his grandfather. That lets Adam have an anonymous, sympathetic antagonist, who is slowly changing his mind in Adam's favor. Adam doesn't have to pick on a particular individual or set of individuals with complex opinions; he doesn't resort to the horrible vague antagonist, "some think;" and he avoids the usual partisan politics and vilification of so much political blogging and editorial writing.

Students: notice concrete, not abstract, words. "Using the wages they earn to rent apartments, eat food, get haircuts, buy cellphones." Not "Using earned income to demand goods and services."

Thursday, March 26, 2015

A New Structure for U. S. Federal Debt

A new paper by that title, here.

I propose a new structure for U. S. Federal debt. All debt should be perpetual, paying coupons forever with no principal payment. The debt should be composed of the following:
  1. Fixed-value, floating-rate debt: Short-term debt has a fixed value of $1.00, and pays a floating rate. It is electronically transferable, and sold in arbitrary denominations. Such debt looks to an investor like a money-market fund, or reserves at the Fed. 
  2. Nominal perpetuities: This debt pays a coupon of $1 per bond, forever. 
  3. Indexed perpetuities: This debt pays a coupon of $1 times the current consumer price index (CPI).
  4. Tax free: Debt should be sold in a version that is free of all income, estate, capital gains, and other taxes. Ideally, all debt should be tax free. 
  5. Variable coupon: Some if not all long-term debt should allow the government to vary the coupon rate without triggering legal default. 
  6. Swaps: The Treasury should manage the maturity structure of the debt, and the interest rate and inflation exposure of the Federal budget, by transacting in simple swaps among these securities.
Of these, I think the first is the most important. Think of it as Treasury Electronic Money, or reserves for all. Why?

Economists have long dreamed of interest-paying money. It fulfills Milton Friedman’s (1969) optimal quantity of money without deflation. Paper money is free to produce, so the economy should be satiated in liquidity...

Our economy invented inside interest-paying electronic money in the form of money market funds, overnight repurchase agreements, and short-term commercial paper, and found it useful. But that money failed, suffering a run in the 2008 financial crisis. Treasury-provided interest-paying electronic money is immune from conventional runs. Money market funds 100% backed by fixed-value Treasury debt cannot suffer a run...

By analogy, in the 19th century, the Treasury provided coins. Banks issued notes. Notes were convenient, being a lot lighter than coins. But there were repeated runs and crises involving bank notes. The U.S. government issued paper money, which might inflate, but cannot suffer conventional default or a run. That money eventually drove out private banknotes, and that source of financial crises was permanently ended. (Crises involving demand deposits did not end, but here the U.S. tried a different policy response, deposit insurance and risk regulation. It has not worked as well.)

In the 21st century, the Treasury has exactly the same natural monopoly in providing default-free and run-free electronically-transferable interest-paying money to private parties. It should do so.
If the Treasury offers what are essentially interest-paying reserves, then we don't have to argue about the size of the Fed's balance sheet, the overnight reverse repo (ON RRP) facility, and so forth.

Nominal perpetuities are a nice way to condense the hundreds of outstanding issues into one, which should increase their liquidity a good deal.

Indexed perpetuities are a cleaner way to implement today's TIPS (Treasury Inflation-Protected Securities).
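As a back-of-the-envelope illustration (my hypothetical numbers and rates, not the paper's), the pricing of these securities is simple: the fixed-value floating-rate debt always trades at $1, and a perpetuity paying coupon \(c\) at discount rate \(r\) is worth \(c/r\):

```python
# Toy pricing of the proposed securities under constant discount rates.
# All numbers are hypothetical illustrations, not from the paper.

def perpetuity_price(coupon, rate):
    """Present value of a coupon paid forever: c/r."""
    return coupon / rate

# Fixed-value floating-rate debt always trades at its $1 fixed value,
# because its coupon resets each period to the market rate.
fixed_value_price = 1.00

# Nominal perpetuity: $1 coupon discounted at a (hypothetical) 4% nominal rate.
nominal = perpetuity_price(1.0, 0.04)   # about 25

# Indexed perpetuity: real $1 coupon discounted at a (hypothetical) 2% real rate.
indexed = perpetuity_price(1.0, 0.02)   # about 50

print(fixed_value_price, nominal, indexed)
```

One virtue visible even in this toy calculation: a single perpetual issue has one price, where today's Treasury market spreads liquidity across hundreds of separate maturities.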

The tax-free analysis is maybe the most interesting. I put together a little tax-clientele model with some interesting results. No, issuing tax-free debt is not a present to rich people. By attracting the high-tax clientele back to Treasury debt, we should see lower net (after-tax) interest costs to the Treasury.

I have a nice implementation of Treasury swaps too, that might open them up a lot.

Comments welcome. It's a bit long because it responds to a previous round of comments, so if you're bubbling over with what's wrong with the proposals, do check that I haven't already answered your comment.

Tuesday, March 24, 2015

Jumps and diffusions

I learned an interesting continuous time trick recently. The context is a note, "The fragile benefits of endowment destruction" that I wrote with John Campbell, about how to extend our habit model to jumps in consumption. The point here is more interesting than that particular context.

Suppose one time series \(x\), which follows a diffusion, drives another \(y\). In the simplest example, \[dx_t = \sigma dz_t \] \[ dy_t = y_t dx_t. \] In our example, the second equation describes how habits \(y\) respond to consumption \(x\). The same kind of structure might describe how invested wealth \(y\) responds to asset prices \(x\), or how option prices \(y\) respond to stock prices \(x\).

Now, suppose we want to extend the model to handle jumps in \(x\), \[dx_t = \sigma dz_t + dJ_t.\] What do we do about the second equation? \(y_t\) now can jump too. On the right hand side of the second equation, should we use the left limit, the right limit, or something in between?

The usual answer is to use the left limit. We generalize the model to jumps this way: \[dx_t = \sigma dz_t+ dJ_t \] \[ dy_t = y_{t_-} dx_t = y_{t_-} \sigma dz_t + y_{t_-}dJ_t \] where \(y_{t_{-}}\) denotes the left limit.

That approach has some weird properties, however. Suppose \(y_{t_-}=1\), and \(dJ_t=1\). Then \(y_t\) jumps to \(y_t=2\). But suppose there are two jumps of size 1/2, one at time \(t\) and one at time \(t+\varepsilon\). Now \(y\) jumps up to 1.5 after the first jump, and then jumps another \(1.5 \times 0.5 = 0.75\), ending up at \(y_{t+\varepsilon} =2.25\). Two half jumps produce a different response than one full jump.
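A two-line numerical check of the arithmetic above (a minimal sketch of the left-limit rule, in Python):

```python
# Left-limit rule: each jump is applied to the pre-jump value, dy = y_{t-} dJ.

def left_limit_jump(y, dJ):
    """Post-jump value of y when x jumps by dJ under the left-limit rule."""
    return y + y * dJ

one_full = left_limit_jump(1.0, 1.0)                        # 2.0
two_half = left_limit_jump(left_limit_jump(1.0, 0.5), 0.5)  # 1.5, then 2.25

print(one_full, two_half)  # two half jumps end up higher than one full jump
```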

Suppose instead we extend the original model to jumps by taking the jump limit of a continuous process. Imagine that we observe realizations of \(\{dz_t\}\) that get closer and closer to a jump in \(dx_t\), and let's find what happens to \(y_t\). The general solution to the first set of equations is \[ y_{t+\Delta} = y_t e^{(x_{t+\Delta}-x_t - \frac{1}{2}\sigma^2\Delta)}\] so, in the limit \(\Delta \rightarrow 0\) in which \(x_t\) takes a jump of size \(dJ_t\), the jump-limit of a continuous movement is \[ dy_{t} \equiv y_t -y_{t_-} = y_{t_-}(e^{dx_{t}}-1) = y_{t_-}\sigma dz_t + y_{t_-}(e^{dJ_t}-1)\] rather than \[ dy_t = y_{t_-} dx_t = y_{t_-} \sigma dz_t + y_{t_-}dJ_t. \] So, the left-limit method produces a response to a jump that is different from the response to a continuous process arbitrarily close to a jump. For example, the left-limit approach can produce a negative \(y_t\), but this method, like the diffusion process, cannot fall below zero. This method also produces a response to two half jumps that is the same as the response to a full jump.
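The continuous-limit rule can be checked the same way: a jump \(dJ\) scales \(y\) by \(e^{dJ}\), and chopping the jump into many tiny left-limit steps converges to exactly that answer (a minimal sketch):

```python
import math

# Continuous-limit rule: a jump dJ scales y by e^{dJ}, the limit of letting
# y update "during" the jump.

def continuous_limit_jump(y, dJ):
    return y * math.exp(dJ)

# Two half jumps now give the same answer as one full jump:
one_full = continuous_limit_jump(1.0, 1.0)
two_half = continuous_limit_jump(continuous_limit_jump(1.0, 0.5), 0.5)

# Approximating one jump of size 1 by n tiny left-limit steps dy = y dx:
n, y = 10_000, 1.0
for _ in range(n):
    y += y * (1.0 / n)

print(one_full, two_half, y)  # all approach e = 2.718...
```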

As you can see, the difference is whether the state variable \(y_t\) gets to change during the jump. In the left-limit approach, the same \(y_{t_-}\) gets applied to the whole jump. In the continuous-limit version, \(y_t\) implicitly gets to move while the jump in \(x_t\) is moving.

A nonlinear function of a jump is a little novel, but there's nothing wrong with it, and it exists in the continuous-time literature. We don't see it that often, because when you're only studying one series it's easier just to change the distribution of the jump process instead. The question arises when you can see both series, \(x\) and \(y\), and you want to model the relationship between them.

Which is right?

Which extension to jumps is correct? Both are mathematically correct. There is nothing wrong with writing down a model in which the response to a jump is different from the response to continuous movements arbitrarily close to jumps. The answer depends on the economic situation.

For example, consider models with bankruptcy constraints. Agents who can continuously adjust their investments may always avoid bankruptcy in a diffusion setting. If we extend such a model to jumps with the continuous-limit approach, implicitly preserving the investor's ability to trade as fast as asset prices change even in the jump limit, we preserve bankruptcy avoidance in the face of a jump in prices. However, if we model portfolio adjustment to jumps with the left-limit generalization, agents may be forced into bankruptcy by price jumps.

Sometimes, one introduces jumps precisely to model a situation in which prices can move faster than agents can adjust their portfolios, so agents may be forced into bankruptcy. Then the left-limit generalization is correct. But if one wants to extend a model to jumps for other reasons, while avoiding bankruptcy, negative consumption, negative marginal utility (consumption below zero or below habits), violations of budget constraints, feasibility conditions, borrowing constraints, and so forth, then one should choose a generalization in which the jump gives the same result as the continuous limit.

Similarly, when extending option pricing models to jumps, one may want to model the jump in such a way that investors cannot adjust portfolios fast enough. Then the left-limit extension is appropriate, and investors must hold the jump risk. But one may wish to accommodate jumps in asset prices to better fit asset price dynamics while maintaining investors' ability to dynamically hedge. Then the nonlinear extension is appropriate, maintaining the equivalence between jumps and the limiting diffusion.

A little more general treatment

A little more generally, suppose \[ dx_t = g dt + \sigma dz_t \] \[dy_t = \mu(y_t) dt + \lambda(y_t)dx_t.\] We want to add \(dJ_t\) to the first equation. The left-limit approach is \[dy_t = \mu(y_{t_-}) dt + \lambda(y_{t_-})dx_t \] If there is a jump \(dJ_t\), \(y\) moves by an amount \[\frac{1}{\lambda(y_{t_-})}dy_t \equiv \frac{1}{\lambda(y_{t_-})}(y_t - y_{t_-}) = dx_t .\] The limit of a continuous movement solves the differential equation \[\int_{y_{t_-}}^{y_t} \frac{1}{\lambda(\xi)}d\xi = dx_t\] Again, you see the crucial difference, whether the state variable gets to move "during" the jump. We can write this as a differential, by writing the solution to this last differential equation as \[y_t-y_{t_-}=f(x_t-x_{t_-};y_{t_-})\] and then \[dy_t = \mu(y_{t_-}) dt + f(dx_t;y_{t_-})=\mu(y_{t_-}) dt + \lambda(y_{t_-})\sigma dz_t+f(dJ_t;y_{t_-})\]
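For a general \(\lambda(\cdot)\) the integral above rarely has a closed form, but it is easy to evaluate numerically: integrating \(dy/dx=\lambda(y)\) across the jump is the same as breaking the jump into many tiny left-limit steps. A minimal sketch (my illustration, not from the note):

```python
import math

# Continuous-limit jump response for dy = λ(y) dx: integrate dy/dx = λ(y)
# over the jump. A crude Euler scheme: break the jump into n tiny
# left-limit steps, letting y update along the way.

def jump_response(y0, dx, lam, n=100_000):
    y, h = y0, dx / n
    for _ in range(n):
        y += lam(y) * h
    return y

# Sanity check against the closed form for λ(y) = y, where y_t = y_{t-} e^{dx}:
continuous = jump_response(1.0, 1.0, lambda y: y)
left_limit = 1.0 + 1.0 * 1.0   # y_{t-} + λ(y_{t-}) dx

print(continuous, math.e, left_limit)  # ≈ 2.718 vs 2.718... vs 2.0
```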

So, you don't have to extend the model to jumps with the left-limit approach, and you don't have to swallow the idea that a jump has a different response than an arbitrarily close continuous-sample-path movement. The last equation shows you how to modify the model to include jumps in a way that preserves the property that the jump has the same effect as its continuous limit.

The point

Why a blog post on this? I asked a few continuous-time gurus, and none of them had seen this issue before. If someone knows where this has all been worked out, with the proper i's dotted and t's crossed, I would like to know and cite it properly. (I would think the literature on option pricing with jumps had done it, but I couldn't find a reference.) Or perhaps it hasn't been done and someone wants to do it. I'm not good enough at the technical aspects of continuous time to write this with the right precision and generality.

And it's a cool trick that may be useful to someone outside of the narrow context that we had for it.

Update: 

Perhaps the right application is stock prices and option prices. When stock prices jump, someone must have studied the case in which option prices move by the amount the Black-Scholes formula gives for the same-size stock price movement. Does anyone have a citation to that case?

Monday, March 23, 2015

Hospital Supply

In my view, health care supply restrictions are more important than the insurance or demand features that dominate public discussion. If you are spending your own money, yes, you shop for a good deal. But spending your own money in the face of restricted supply is like hailing a cab to LaGuardia at 5 o'clock on a rainy pre-Uber Friday afternoon. We need to free up innovative, disruptive health-care supply. Let the Southwest Airlines, Walmarts, Amazons and Apples in.

But where are the supply restrictions? Alas, it's not as simple as the NY taxi commission. Supply restrictions are spread all over Federal, state and local law and regulation, and usually hidden.

So, I was interested to discover an interesting supply restriction in this editorial in the Wall Street Journal last week.
Last year the Daughters of Charity Health System sought to sell its six insolvent hospitals in California to Prime for $843 million including debt and pension liabilities. State law requires the AG [California Attorney General Kamala Harris] to approve nonprofit hospital acquisitions. Ms. Harris attached several poison pills at the urging of the SEIU [Service Employees International Union], which forced Prime last week to withdraw its offer.
State law requires the AG [Attorney General] to approve nonprofit hospital acquisitions. How could this go wrong?

Since 2010 operating losses at Daughters hospitals have tripled to $146 million. High pay scales, inflexible work rules and rich pension benefits have swelled labor costs to 74% of revenues compared to the nationwide average of 58% at nonprofit systems.... 
... Of six bidders, only Prime agreed to assume the $300 million liability for worker pensions. Prime also scored high on 10 of Daughters’s 11 bidding criteria including financial wherewithal and historical service quality. 
Prime’s problem was the SEIU’s  opposition owing to the company’s rejection of a so-called neutrality agreement, which would facilitate unionization at all of its hospitals. Only four of Prime’s 15 hospitals in California are unionized. Since 2009 the SEIU has run a public campaign against Prime, leveling accusations of Medicare fraud and unchecked sepsis. 
Ms. Harris has taken up the union cause. In 2011 the AG vetoed Prime’s acquisition of the bankrupt Victor Valley Community Hospital as “not in the public interest” though a report produced for her own office concluded that Prime’s “capital investment over the next five years should lead to substantial improvement to facilities, infrastructure, and certain services at the Hospital.”
Now you may say, how nice. The Attorney General is stalwartly backing the union cause, trying to raise wages and employment for struggling "middle class" Americans. Except, it's the same people who pay the higher health-care costs and suffer worse service.

Regulation from the top is supposed to "bend down the cost curve" in medicine. But true cost reductions, efficiency improvements, and quality improvements are painful. Ask United's pilots union, Walmart's competitors, or Kodak's employees.

The tone of the Journal's editorial suggests a morality play. I think not. I can't imagine any regulator, attorney general, HHS secretary, or politician, given the power to approve or disapprove hospital acquisitions, doing so in a way that truly lowers costs and improves quality following the only path we know that actually works, allowing disruptive competition. You only cut costs by, well, cutting costs. And disruptive competitors only enter if they have the right to do so, not the discretionary approval of any politician or political appointee.

Update in response to comments: "After the ACA" has a long list of supply impediments. I'm trying to learn about additional ones.

Friday, March 20, 2015

Borio, Erdem, Filardo and Hofmann on the Costs of Deflation

Claudio Borio, Magdalena Erdem, Andrew Filardo and Boris Hofmann have a nice paper, "The costs of deflations: a historical perspective."

Deflation remains the looming zombie apocalypse of international monetary commentary.  Before we argue too much about cause and effect, it's nice to get the correlations straight. And the correlation between deflation and poor growth is much weaker than most people think:


The authors:
...Price deflations have coincided with both positive and clear negative growth rates (Graph 2). And a comparison of all inflation and deflation years suggests that, on balance, inflation years have seen only somewhat higher growth (Table 2). The difference in average growth rates is highest and statistically significant only during the interwar years, particularly in the period 1929-38 that includes the Great Depression (some 4 percentage points), and much smaller at other times.... Indeed, in the postwar era, in which transitory deflations dominate, the growth rate has actually been higher during deflation years, at 3.2% versus 2.7%.
Really, the concern at the moment is not a sharp, large deflation, such as occurred in the 1930s, which is felt by many to epitomize the demand or debt-deflation story. Rather, the concern is over a moderate but persistent deflation, such as Japan has experienced. (One in which each individual is likely never to experience a wage decline; more here.)

To summarize the historical record surrounding persistent deflations, the authors organize the data around the peak in CPI before a deflation episode, and show average CPI and GDP around that peak:


The authors:
While mean growth rates are mostly lower in the five years post-peak, the difference is large, 3.6 percentage points, and clearly statistically significant (i.e. cannot be attributed purely to chance) only in the interwar years, when the Great Depression took place...The difference during the classical gold standard period is 0.6 percentage points but it is not statistically significant. In fact, in the postwar era, average growth was even 0.3 percentage points higher in the five years after a price peak, although the difference is not statistically significant. Moreover, only in the interwar years did output actually fall post-peak.
In a multiple regression sense, does variation in output correlate better with falls in the overall price level, or with falls in house prices or equity prices that accompany deflation?



The graph presents regression coefficients. Read each one as a partial correlation: if (blue) property prices go down but consumer and equity prices do not, how much output gain or loss does that event signal? House price or stock price "deflation," not overall price deflation, is what matters. Of course, stock prices and property prices are strong symptoms of economic trouble, so don't be quick to read causality into the correlation and ask the Fed to punch up stock and house prices.

The introduction offers a corrective that every financial journalist should take with morning cappuccino. Any price change can come from supply or demand, and is as likely a symptom as a cause:
Concerns about deflation - falling prices of goods and services - have loomed large in recent policy discussions. The debate is shaped by the deep-seated view that deflation, regardless of context, is an economic pathology that stands in the way of any sustainable and strong expansion. 
The almost reflexive association of deflation with economic weakness is easily explained. It is rooted in the view that deflation signals an aggregate demand shortfall, which simultaneously pushes down prices, incomes and output. But deflation may also result from increased supply. Examples include improvements in productivity, greater competition in the goods market, or cheaper and more abundant inputs, such as labour or intermediate goods like oil. Supply-driven deflations depress prices while raising incomes and output.
Conversely, note the simultaneous worry in the US about "wage inflation" and that wages have stagnated. Wage inflation with stable prices is a good thing!

A minor quibble: Asset price "inflation" and "deflation."
Moreover, while the impact of goods and services price deflations is ambiguous a priori, that of asset price deflations is not. As is widely recognised, asset price deflations erode wealth and collateral values and so undercut demand and output.
First, "asset price inflation" sounds sexy, but our first duty as economists should be to help readers understand that relative price changes are not inflation. All relative price changes, including asset prices, are relative price changes, not inflations and deflations. Health cost "inflation," wine "inflation" and chewing gum "inflation" are not inflation. Don't encourage misuse of the word, misunderstanding of relative prices vs. price level, and consequent policy mistakes like using anti-inflation tools to manipulate relative prices.

Second, asset price "deflations" are in large part a transfer of wealth, not a loss of wealth. House prices go down. The houses are still there. This is a qualitatively different fact than if houses wash into the ocean. If you are young, live in an apartment, and have a job, a house price decline is a great thing. If you plan to buy the same size house as you want to sell, a house price decline is a wash. If you are young, a bond price decline is a great thing. You get the same future payments at a lower price. To some extent the same is true of many stock price movements.

Thursday, March 19, 2015

Levine on the Keynesian Illusion

David Levine has a very nice post on the Keynesian Illusion.

David Levine's analogy for Stimulus
Some big themes: Standard Keynesian economics violates budget constraints. He explains it well, but it is sure to occasion the usual venom from the "Say's law fallacy" brigade, which has a lot of trouble understanding the difference between budget constraints and equilibrium conditions.

David does a lot without equations. That broadens the appeal, but equations can be useful. For example, equations clarify that crucial difference between budget constraints and equilibrium conditions. Equations can put to rest silly controversies. We might not still be writing papers, books, and blog posts about what "Keynes really meant," 80 years after the fact, or using "Say's law" as rotten tomatoes, if Keynes had written some equations. Cynically, maybe the lesson is that a lack of equations -- or even an equation appendix or citation -- keeps the debate going and your name in the papers.

I also fear that his lovely anecdote about people each of whom wants what the others produce will lead readers a bit astray. Keynesian economics is about a lack of "demand" -- sticky prices, not absent prices. It's not about the absence of money, double coincidence of wants, and so forth.

David goes beyond the usual IS/LM formalism, to explain some of the "coordination failure" interpretations of Keynes. He also references Axel Leijonhufvud's "great and famous work" describing a mismatch between saving, a desire for generic future consumption, and the demand for specific goods that firms need to invest.

He has a nice personal story of his Keynesian upbringing, which reminds me of my own. And
Knowledge of Keynesianism and Keynesian models is even deeper for the great Nobel Prize winners who pioneered modern macroeconomics - a macroeconomics with people who buy and sell things, who save and invest - Robert Lucas, Edward Prescott, and Thomas Sargent among others. They also grew up with Keynesian theory as orthodoxy - more so than I. And we rejected Keynesianism because it doesn't work not because of some aesthetic sense that the theory is insufficiently elegant.
The constant refrain that critics "don't know" Keynesian economics is an ignorant (I mean that not as an insult, but in its literal meaning: ignoring the facts) calumny. Sargent's first book, "Macroeconomic Theory," is a great example of a modern economist wrestling hard with Keynes.

The last paragraph is a gem:
Keynes own work consists of amusing anecdotes and misleading stories. Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational, a theory of graphs pulled largely out of thin air, a series of predictions that are hopelessly wrong - together with the vain hope that they can be put right if only the curves in the graphs can be twisted in the right direction. As it happens we have developed much better theories - theories that do explain many facts, theories that provide sensible policy guidance, theories that work reasonably well, theories that are not an illusion. The current versions of these theories are very unlike caricature theories of hopelessly rational people who are all identical. Current theories are not perfect - but unlike the Keynesian theory of perpetual motion machines they explain a great deal and have a great deal of truth to them. A working macroeconomist reading Krugman and DeLong feels as a doctor would if the Surgeon General got up and said that the way to cure cancer was to draw blood using leeches. 

Wednesday, March 18, 2015

Arezki, Ramey, and Sheng on news shocks

I attended the NBER EFG (economic fluctuations and growth) meeting a few weeks ago, and saw a very nice paper by Rabah Arezki, Valerie Ramey, and Liugang Sheng, "News Shocks in Open Economies: Evidence from Giant Oil Discoveries" (There were a lot of nice papers, but this one is more bloggable.)

They look at what happens to economies that discover they have a lot of oil.

An oil discovery is a well identified "news shock."

Standard productivity shocks are a bit nebulous, and alter two things at once: they raise productivity, and hence the incentive to work, today, and they also deliver news of more income in the future.

An oil discovery is well publicized. It incentivizes a small investment in oil drilling, but mostly is pure news of an income flow in the future. It does not affect overall labor productivity or other changes to preferences or technology.
Rabah, Valerie, and Liugang then construct a straightforward macro model of such an event.

Utility comes from consumption and work. The production function has an oil sector and a non-oil sector. There are adjustment costs to investment and to reallocation of capital between the oil and non-oil sectors. The consumption good is tradeable; the economy can produce it at home or buy it internationally by selling oil.


They compute impulse-response functions to big oil discoveries, and compare the model dynamics to the response functions. It's a nice fit and an intuitive story. After the shock hits, during the period of investment, the current account declines -- the economy borrows to buy oil investment goods and also to finance higher consumption now. GDP is basically flat, as oil investment is a small fraction of the economy. Savings also decline. Consumption goes up right away, and then stays up in permanent-income fashion. (You have to look closely at the green line, because the vertical scale is too small.) Investment rises, to build those oil wells.
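The consumption response is just permanent-income arithmetic. A stylized sketch with made-up numbers (not the paper's calibration): consumption jumps on the announcement by roughly the annuity value of the discovered oil income.

```python
# Stylized permanent-income response to an oil-discovery news shock.
# All numbers are hypothetical, not the paper's calibration.

r = 0.05            # interest rate
oil_income = 10.0   # annual oil revenue once production starts
start, end = 5, 30  # production runs from year 5 through year 30

# Present value, at the announcement, of the future oil income:
pv = sum(oil_income / (1 + r) ** t for t in range(start, end + 1))

# Permanent-income consumers raise consumption immediately by the
# annuity value of that windfall, and hold it there:
dc = r * pv

print(round(pv, 2), round(dc, 2))  # consumption jumps by r * PV each year
```

During the early years that extra consumption, plus the oil investment, is financed by borrowing, which is the current-account decline in the figures.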

Employment declines, as there is a wealth effect encouraging leisure but no higher productivity of labor to encourage work. This is why productivity shocks, emphasizing a temporarily higher marginal product of labor, are important in real business cycle models.

Once oil comes on line, the current account changes sign, as the economy exports oil and pays back debt. GDP, including the oil, rises. Consumption stays where it was, by permanent-income logic. And investment returns to zero.

Valerie, presenting the paper, was a bit discouraged. This "news shock" doesn't generate a pattern that looks like standard recessions, because GDP and employment go in the opposite direction.

I am much more encouraged. Here are macroeconomies behaving exactly as they should, in response to a shock where for once we really know what the shock is. And in response to a shock with a nice dynamic pattern, which we also really understand.

My comment was something to the effect of "this paper is much more important than you think. You match the dynamic response of economies to this large and very well identified shock with a standard, transparent and intuitive neoclassical model. Here's a list of some of the ingredients you didn't need: Sticky prices, sticky wages, money, monetary policy (i.e. interest rates that respond via a policy rule to output and inflation or zero bounds that stop them from doing so), home bias, segmented financial markets, credit constraints, liquidity constraints, hand-to-mouth consumers, financial intermediation, liquidity spirals, fire sales, leverage, sudden stops, hot money, collateral constraints, incomplete markets, idiosyncratic risks, strange preferences including habits, nonexpected utility, ambiguity aversion, and so forth, behavioral biases, or rare disasters. If those ingredients are really there, they ought to matter for explaining the response to your shocks too. After all, there is only one economic structure, which is hit by many shocks. So your paper calls into question just how many of those ingredients are really there at all."

Thomas Philippon, whose previous paper had a pretty masterful collection of a lot of those ingredients, quickly pointed out my overstatement. One need not invoke every ingredient to understand every shock. Constraint variables are inequalities. A positive news shock may not cause credit constraints etc. to bind, while a negative shock may reveal them.

Good point. And really, the proof is in the pudding. If those ingredients are not necessary, then I should produce a model without them that produces events like 2008. But we've been debating the ingredients and shock necessary to explain 1932 for 82 years, so that approach, though correct, might take a while.

In the meantime, we can still cheer successful simple models and well identified shocks on the few occasions that they appear and fit data so nicely. Note to graduate students, this paper is a really nice example to follow for its integration of clear theory and excellent empirical work.

Monday, March 16, 2015

Duffie and Stein on Libor

Darrell Duffie and Jeremy Stein have a nice paper, "Reforming LIBOR and Other Financial-Market Benchmarks." I learned some important lessons from the paper and discussion.

Libor is the "London Interbank Offered Rate." If you have a floating rate mortgage, it is likely based on Libor plus a percentage.
In its current form, LIBOR is determined each day (or “fixed”), not based on actual transactions between banks but rather on a poll of a group of panel banks, each of which is asked to make a judgmental estimate of the rate at which it could borrow.
As soon as money changes hands, there is an incentive to, er, shade reports in the direction that benefits the trading desk.
Revelations of widespread manipulation of LIBOR and other benchmarks, including those for foreign exchange rates and some commodity prices, have threatened the integrity of these benchmarks. 
or report a rate that makes your bank look better (lower rate) than it really is:
During the financial crisis of 2007-2009...Some banks did not wish to appear to be less creditworthy than others... The rates reported by each of the panel of banks polled to produce LIBOR were quickly published, alongside the name of the reporting bank, for all to see. As a result, there arose at some banks a practice of... understating true borrowing costs when submitting to a LIBOR poll. 

An important point as we get in to security design mode:
many of the documented cases of LIBOR manipulation...involved only very small rate distortions, with the guilty parties often misstating their borrowing costs by just one or two basis points. 
OK, what to do? Rather obviously, publishing the individual bank quotes and not just the average is not a good idea, and I gather will end.

In Darrell and Jeremy's view, we really need two indices for the two separate purposes of Libor.

Libor is used as an index by banks that issue adjustable rate mortgages. For that use, an index of bank borrowing costs is appropriate. But banks' borrowing from each other has dried up considerably. Interbank borrowing is really no longer a marginal source of funds. And the market is so small these days that a transactions-based index would be unreliable -- and also open to manipulation.

They suggest an index based on a larger set of securities more representative of actual borrowing costs,
LIBOR ... fixing must be broadened so as to be based on unsecured bank borrowings from all wholesale sources—not just other banks, but non-bank investors in bank commercial paper and large-denomination CDs.
There is an important (very stylized, and likely inaccurate) story here. Why do we have indices anyway?  In the old days, you went to the bank to borrow money. It was like going to a car dealer in the 1950s. Each bank might quote you a price, but you don't really have a good idea if you're getting a good deal without a lot of shopping. In this environment, you can't really have variable rate loans where the bank just announces a new rate.

A better system: The bank quotes you "prime" rate plus some percentage points. But what's "prime?" Well, at least you know it's the basis for the bank's lending to all its other customers. If they say "prime went up you have to pay a higher rate on the loan" you know they're doing the same to all their customers, not just you. That makes variable rate loans more possible and reduces haggling and shopping.

Better yet: The dealer shows you his invoice (the real one, not the phoney one at car dealers!) That's the Libor idea. It's an index of the rates banks pay for funding at the margin. So if Libor goes up, it's much more transparent that the bank is just passing costs on to you.

Don't banks like the obscure system to charge higher profits? Well, not necessarily, which is another important lesson. Haggling over each item and dealing with customers who feel like they're in the 1950s Chevy showroom from "Tin Men" turns out to be less profitable than running a large volume transparent Car-Max operation.

So far so good, but now a second lesson comes to the fore. Libor, as constructed, was a lot better than "prime" announced by each bank. But once markets and contracts settle on Libor, it's awfully hard to move to something better yet.

This gives a role for policy, as Jeremy and Darrell point out, in setting standards, or moving markets to another focal point. We can all use feet or meters, miles or kilometers, dollars or euros.

Being a popular interest rate index, Libor was the natural choice for interest rate derivatives. For example, a swap is a contract in which I promise to pay you $x dollars per year, and you pay me a floating rate. What's a good floating rate? Well, the banks are all using Libor, let's use that!
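To make the swap example concrete, here are the net cash flows of a hypothetical plain-vanilla swap, with made-up numbers and the day-count and payment conventions of real swaps omitted:

```python
# Net cash flows of a plain-vanilla interest rate swap on $100 notional:
# one side pays a 3% fixed rate, the other pays the floating index.
notional, fixed_rate = 100.0, 0.03
index_fixings = [0.025, 0.031, 0.034]   # hypothetical Libor fixings

# Positive = the fixed-rate payer receives money that period.
net_to_fixed_payer = [(f - fixed_rate) * notional
                      for f in index_fixings]
```

Each period, only the difference between the fixed rate and the index changes hands, which is why the choice of index matters so much.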

So now we are at the puzzling point where a huge amount of money changes hands based on a tiny market.
...Unfortunately, there are surprisingly few actual loan transactions between banks that could be used to fix most of the IBORs...
At the commonly-used three-month tenor, transactions in the underlying market for unsecured bank funding are roughly on the order of a billion dollars on a typical day, while the volume of gross notional outstanding in the swap market that references LIBOR at this tenor is on the order of $100 trillion, or 100,000 times larger. [See Table 1 and Table 2.]
And Libor really isn't the right index here. Most derivatives traders are interested in hedging the overall level of rates. They don't mostly care about the bank credit spreads. If treasury rates go down but bank rates go up, because people get scared about banks as in 2008, these traders want an index that goes down.
...IBORs have been heavily used in contracts whose purpose is to transfer risk related to general market-wide interest rates. These “rates trading” applications are not specifically tied to the borrowing costs of banks. It is a self-reinforcing choice by market participants, however, to trade in more liquid high-volume markets, all else equal. In part through an accident of history, this desire to belong to the high liquidity club has led to a massive agglomeration of trade based on the IBOR benchmarks.
So, Darrell and Jeremy propose a second, transactions-based index to be used for derivatives contracts. They have a brilliant idea. Currently, most derivatives are based on three-month rates. So, on January 1, we look at the rate for borrowing and lending from January 1 to March 31, and settle derivatives. But there is very little volume in three-month rates. Instead, watch the general collateral overnight rate, which has tremendous volume, and pay off contracts on March 31, based on the average of the one-day rates in the quarter.
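Under such a contract, the three-month setting is computed from the realized overnight rates after the fact. Here is a minimal sketch of the compounding arithmetic, with hypothetical rates and a simplified day count (real conventions handle weekends, holidays, and day-count bases):

```python
def compounded_rate(daily_rates, days_per_year=360):
    """Compound daily overnight rates over a period into a single
    annualized rate, paid in arrears at the end of the period."""
    growth = 1.0
    for r in daily_rates:
        growth *= 1 + r / days_per_year
    return (growth - 1) * days_per_year / len(daily_rates)

# Hypothetical overnight general-collateral rates over a 90-day quarter
rates = [0.0025 + 0.0001 * (d % 5) for d in range(90)]
quarterly_setting = compounded_rate(rates)
```

Because the setting is an average of many actual transactions in a deep market, any one day's rate (and any one manipulator) has very little effect on the payoff.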

They have a lot of useful thought on implementation and transition, of course.

Thursday, March 5, 2015

Marginal Revolution on Kleptocracy

I don't often just post links, but sometimes a post is so good, and so complete, it just needs reading without comment.

Marginal Revolution on Kleptocracy

Ok, one comment. The mainstream media are focused on the racial element. This problem is much deeper than race.

Tuesday, March 3, 2015

Mankiw on dynamic scoring

Greg Mankiw has a nice op-ed on dynamic scoring

The issue: When the Congressional Budget Office "scores" legislation, figuring out how much it will raise or lower tax revenue and spending, it has been using "static" scoring. For example, it assumes that a tax cut has no effect on GDP, even if the whole point of the tax cut is to raise GDP.

This is obviously inaccurate. But, as Greg points out, there is a lot of uncertainty in dynamic scoring.

How much will a tax cut raise GDP, and thus potentially not cost as much in tax revenue? (Tax revenue = tax rate x income, so if income rises a given reduction in tax rate costs less in tax revenue.)

By what mechanism? Keynesians will analyze the issue through a multiplier. The tax rate cut puts money in people's pockets, they spend the money, that raises income, and so forth. Other economists focus on the incentives of a tax cut rather than the income transfer. A tax rate cut can induce people to work, save, invest, go to school, etc.  They will come to different answers, especially for policies that emphasize transfers (often with bad incentives) or that emphasize incentives.
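To see how much the behavioral assumption matters, here is a back-of-the-envelope score using the revenue = tax rate x income identity. The elasticity value is hypothetical, and a real score would model the mechanism in far more detail:

```python
def score(old_rate, new_rate, income=100.0, elasticity=0.0):
    """Revenue change from a tax rate change.
    elasticity = % change in income per 1% change in the
    net-of-tax rate; elasticity = 0 is static scoring."""
    pct_change_net_of_tax = (1 - new_rate) / (1 - old_rate) - 1
    new_income = income * (1 + elasticity * pct_change_net_of_tax)
    return new_rate * new_income - old_rate * income

static = score(0.40, 0.35)                    # assumes GDP unchanged
dynamic = score(0.40, 0.35, elasticity=0.5)   # income responds to incentives
```

With these illustrative numbers the static score says the cut loses 5.0 in revenue, while the behavioral response claws back a meaningful chunk; sweeping the elasticity across a plausible range is exactly the kind of scenario band suggested below.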

How much will policy change growth rates? Long run growth really swamps everything. And the connection between policy and growth is especially hard to nail down.

Greg doesn't really come down on how to solve this issue. I have two suggestions:

1) Embrace uncertainty. It's a fact: we don't know the elasticities, multipliers, and mechanisms that well. So stop pretending. Don't produce only a single number, accurate to three decimals. Instead, present a range of scenarios spanning the range of reasonable uncertainty about responses. The CBO presents a range of fiscal scenarios already.

2) Transparency. Calculations should be utterly transparent and reproducible. If you don't like the labor supply elasticity assumption, you should be able to change the number and produce a new forecast. Scoring should capture "if you think x, then the answer will be y."

Good policy will not result from the illusion of certainty.

Greg also opined on the second round effects, how policy might change economic outcomes which might change future policy. Here I'll side with the old fashioned approach -- let's not go there! The science of forecasting future congressional reactions to events is, let us say, a bit less certain (even) than that of assessing private-sector behavioral responses.

Update:

Greg responds:
Dynamic scoring requires the solution of a general equilibrium model. To solve a dynamic GE model, you need to specify how the government is going to satisfy its present-value budget constraint. You might be tempted to ask the model what happens if the government cuts taxes and never does anything else. But you won't get very far. The model will tell you that the government has to do something else eventually, and it won't tell you what will happen if the government tries to do something impossible.
Greg is right. Though this hasn't bothered CBO scoring yet. Year after year the CBO releases budget forecasts in which debt to GDP ratios climb inexorably; the CBO proclaims this "unsustainable," and life goes on.

Let's try to compromise. A rule that "dynamic scoring models must satisfy a long run restriction in which debt/GDP is no greater than 100%" might work. And one does not have to make huge changes to many models to satisfy that restriction. It would be good to have a common benchmark assumption about long run policy so different short run policies can be compared. For example, score all policies in the first 20 years with a common assumption about how debt / GDP at the end of 20 years is resolved.
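A sketch of what such a common closing assumption could look like, with purely illustrative numbers: simulate debt/GDP for 20 years under standard debt dynamics, then compute the constant primary surplus that would stabilize the ratio thereafter.

```python
# Debt/GDP evolves as d' = (1+r)/(1+g) * d + primary_deficit,
# where r is the interest rate and g is GDP growth.
# All numbers below are hypothetical.
r, g = 0.03, 0.02
debt_gdp = 0.75            # initial debt/GDP
primary_deficit = 0.02     # primary deficit, as a share of GDP

for year in range(20):
    debt_gdp = (1 + r) / (1 + g) * debt_gdp + primary_deficit

# Constant primary surplus that holds debt/GDP at its year-20 level:
stabilizing_surplus = (r - g) / (1 + g) * debt_gdp
```

With r above g and a persistent primary deficit, the ratio drifts past 100% (the "unsustainable" path the CBO flags), and a common scoring rule would then have to specify how that gap gets closed at the end of the window.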

Where I would rather not go is more detailed political modeling of future congressional actions, especially ones with large distortions.

And for many policies this will not be a huge issue. For example, if we get rid of energy tax boondoggles, one can calculate many interesting behavioral responses, but it is a drop in the bucket of the big social security/medicare/pensions/slow growth debt nexus.