The chart below plots the poverty rate in selected emerging markets between 1981 and 2011. Here, the poverty rate is defined as the proportion of the population living on less than $1.25 per day, adjusted for inflation and PPP. (Data are from the World Bank.) There has been a spectacular reduction in extreme poverty, of which the Chinese experience is the most remarkable. In the early 1980s, when Deng Xiaoping began farm privatisation, more than 80% of the population were living in extreme poverty. As of 2011, just over 10% of the population were living in extreme poverty. The Chinese population averaged about 1.2 billion over this time period, which implies that roughly 800 million people (more than twice the U.S. population) have been lifted out of extreme poverty in China alone.
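That back-of-the-envelope figure can be checked in a couple of lines; the inputs are the approximate rates and average population quoted above.

```python
# Rough arithmetic behind the "800 million lifted out of poverty" figure.
# All inputs are the approximate values quoted in the text.
poverty_rate_1981 = 0.80   # share of China's population below $1.25/day, early 1980s
poverty_rate_2011 = 0.10   # share below $1.25/day, 2011 (just over 10%)
avg_population = 1.2e9     # average Chinese population over the period

lifted_out = (poverty_rate_1981 - poverty_rate_2011) * avg_population
print(f"{lifted_out / 1e6:.0f} million")  # → 840 million
```

With these rounded inputs the figure comes out at about 840 million, slightly above the "roughly 800 million" quoted; the difference simply reflects rounding in the rates.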
Saturday, 14 December 2013
Tuesday, 10 December 2013
In a generally clear-headed article entitled The West is Losing Faith in its Own Future in the FT today, Gideon Rachman writes, "Joe Average – once the epitome of the American dream – has fallen back, even as gains for the top 5 per cent of incomes have soared". Indeed, this chart from The Economist shows that median household income has barely risen in the United States since the late 1970s.
Is this a fair picture of how living standards have changed in the United States over the last few decades? In a paper published last year entitled A "Second Opinion" on the Economic Health of the American Middle Class, Burkhauser et al. argue that it is not. They analyse income data from the Current Population Survey between 1979 and 2007. Specifically, they estimate income growth for the median household under different sets of assumptions about what counts as income and what constitutes a household. Their main results are shown below (the table is taken from the original paper).
When a household is defined as a tax unit and only pre-tax, pre-transfer income is considered, income growth for the median household was only 3.2%. However, when a household is defined as a size-adjusted household, rather than a tax unit, and post-tax, post-transfer income plus health insurance is considered, income growth for the median household was 36.7%.
Burkhauser et al.'s conclusion is as follows. "For researchers interested in how middle class Americans are compensated for their time in the labour market, for example, it is more appropriate to use pre-tax, pre-transfer (market) income, although even here researchers who ignore the dramatic increase in the ex-ante value of employer health insurance will understate the returns to work in the United States [...] However, for those interested in the overall economic resources available to individuals, it is more appropriate to consider income defined as broadly as possible [...] As we have demonstrated, doing so provides a markedly different picture of how middle class Americans have fared over the past several decades."
One caveat is that Burkhauser et al. only analyse data up to 2007, i.e., prior to the loss of income that occurred during the financial crisis. If they had analysed data up to 2012, income growth for the median household would probably have been slightly lower in most or all of their categories.
Friday, 22 November 2013
Dear Professor Sutherland,
I would first like to compliment you on an excellent article. I would next like to identify an important omission: "Just because a scientific finding happens to be ideologically inconvenient, this does not mean it is false." Note that this is a separate category from both "Bias is rife" and "Scientists are human", because it applies to the policy maker's interpretation of a finding, rather than to the methodology that was used to unearth the finding.
Very many thanks,
Nuffield College, Oxford
Thursday, 21 November 2013
The most widely cited measure of unemployment is the unemployment rate. This is defined as the number of people who are not employed but are actively searching for work relative to the total labour force. The labour force, in turn, is defined as the total number of people who are either employed or unemployed. During the financial crisis that began in 2008, unemployment rates in most Western countries rose considerably. In many countries, they are still high. One major exception is Germany, where the unemployment rate today is lower than it was in 2007. The graph below plots the unemployment rate for selected Western countries between 2000 and 2011. (All data are from the World Bank.)
The unemployment rate went up everywhere except Germany. Most countries experienced a moderate increase. Greece, on the other hand, experienced a substantial increase. Although the unemployment rate rose in France and Italy, it was not unusually high in these countries by 2011.
Another measure of unemployment is the employment-to-population ratio. This is defined as the number of people who are employed relative to the total population older than 15 (or 16). Alternatively, it is sometimes defined as the number of people who are employed relative to the total population aged between 15 and 64. The graph below plots the employment-to-population ratio (defined the first way) for selected Western countries between 2000 and 2011.
Here a very different picture emerges. The employment-to-population ratio has long been much higher in the United States, the United Kingdom and Sweden than in the other European countries. This is due to a number of factors: demographic, cultural and regulatory. (Part of the difference may also be due to discrepancies in exactly how 'employment' is defined in each country.) One caveat is that many of the people not working in a country like Italy (namely women) are engaged in home production, which is an economically useful activity. Nevertheless, it is quite remarkable that in the early 2000s, for example, the employment-to-population ratio was about 20 percentage-points higher in the United States than in Italy.
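To make the contrast between the two measures concrete, here is a minimal sketch with invented numbers for a hypothetical country (none of these figures come from the World Bank data):

```python
# Illustrative comparison of the two measures, using made-up numbers.
employed = 40_000_000    # people in work
unemployed = 4_000_000   # not working but actively searching
inactive = 16_000_000    # aged 15+, neither working nor searching

labour_force = employed + unemployed
working_age_population = employed + unemployed + inactive

unemployment_rate = unemployed / labour_force
employment_to_population = employed / working_age_population

print(f"Unemployment rate: {unemployment_rate:.1%}")                # → 9.1%
print(f"Employment-to-population: {employment_to_population:.1%}")  # → 66.7%
```

Note that if discouraged workers stop searching, they leave the labour force: the unemployment rate falls even though no one has found a job, while the employment-to-population ratio is unchanged. This is one reason the two measures can paint such different pictures.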
Wednesday, 13 November 2013
Here I present two charts showing how much public spending has changed over the last few years. Data are from HM Treasury; they can be downloaded here. (I also used data on population size from Trading Economics). I consider three measures of public spending: total real spending, total spending as a percentage of GDP, and total real spending per capita. Figures do not include money spent on the bank bailouts.
The first graph shows how public spending has changed since the Coalition government came to power. In 2010, the first year of the Coalition government, spending rose. (This may have been due to spending increases already planned by the previous Labour government.) In 2011, spending remained flat. In 2012, it began to fall. And in 2013, it continued to fall. Total real spending is now 2.8% lower than it was in 2009. Total spending as a percentage of GDP is now 2% lower. And total real spending per capita is now 5.9% lower.
Another way to examine how spending has changed is to compare current spending to pre-crisis spending. After all, spending increased considerably during the economic crisis. As the graph below shows, two of the three measures of spending are still above their pre-crisis levels. Total real spending is 2.3% higher than it was in 2008. Total spending as a percentage of GDP is 6.4% higher. And total real spending per capita is 1.7% lower.
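One way to reconcile the measures is to note that total real spending and real spending per capita differ only by population growth. Taking the two figures just quoted (+2.3% and -1.7% relative to 2008), the implied population growth can be backed out as follows:

```python
# Why real spending can be up while real spending per head is down:
# the two measures differ only by population growth.
# The growth figures are the ones quoted in the text.
real_spending_growth = 0.023     # total real spending, +2.3% vs 2008
real_per_capita_growth = -0.017  # real spending per capita, -1.7% vs 2008

implied_population_growth = (1 + real_spending_growth) / (1 + real_per_capita_growth) - 1
print(f"Implied population growth since 2008: {implied_population_growth:.1%}")  # → 4.1%
```

An implied rise of about 4% in the population between 2008 and 2013 is roughly in line with official UK estimates, which is a useful consistency check on the two series.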
Tuesday, 5 November 2013
A couple of days ago, I wrote a post about Rawls' difference principle. My main conclusion was that Rawls' argument for the difference principle relies on the mistaken assumption that there is such a thing as a rational preference; it assumes that minimaxing behind a veil of ignorance is more rational than following any other strategy, such as maximising expected utility. (As a friend of mine points out, someone who believes there is such a thing as a rational preference will not necessarily accept my conclusion.)
However, I made one mistake in my analysis. This mistake does not affect my conclusion, but I think it is worth discussing anyway. In particular, I stated that Premise 2 of Rawls' argument for the difference principle is tantamount to the claim that it is rational to play a minimax strategy when faced with a veil of ignorance. This is wrong. What I should have said is that Premise 2 is tantamount to the claim that it is rational to play a minimax strategy and to be inequity-averse when faced with a veil of ignorance.
Premise 2 (again, please correct me if I've got it wrong) is as follows. An individual faced with a veil of ignorance would only want to deviate from perfect equality if doing so would improve the position of the least well-off people in society. More generally, an individual faced with a veil of ignorance would only want some amount of inequality, rather than no inequality, if introducing that inequality would improve the position of the least well-off people in society. In other words, she would want to maximise the position of the least well-off, whilst keeping inequality as low as possible.
By way of illustration, consider the diagram below. An individual who simply had a preference for minimaxing would be indifferent between Scenarios 2 and 3. Conditional on maximising the position of the least well-off people in society, she would be indifferent to the amount of inequality in the rest of the distribution. However, an individual who not only had a preference for minimaxing, but was also inequity-averse, would prefer Scenario 2 to Scenario 3. The least well-off do just as well in Scenario 2 as they do in Scenario 3, but there is less inequality overall in Scenario 2.
If my interpretation of Premise 2 is correct (please tell me if it isn't), Rawls is arguing that a rational individual behind a veil of ignorance would have a preference against scenarios where she could be--on average--better off, and--at worst--no worse off. He is arguing that, conditional on maximising her minimum utility, she would prefer to have lower expected utility. Even if (unlike me) you believe that preferences can be rational, it seems rather irrational to want less expected utility rather than more.
One response I can think of (which is not totally unreasonable) is as follows. The different scenarios behind a veil of ignorance do not correspond to different distributions of utility, but to different distributions of the social and economic determinants of utility (i.e., status, income etc.). And although it might be irrational to want less expected utility, it is not irrational to want less expected income. This is because if you happened to be one of the least well-off people in society, you would want the most well-off people to have as little income as possible.
A counter to this argument is as follows. Even if it is not irrational to be concerned about your relative income, and accepting that there are diminishing marginal returns to income, it is not true that, for all combinations of degree of concern over relative income and rate of decrease of marginal utility, you would strictly prefer less inequality to more inequality. In other words, holding minimum income constant, it is possible to imagine some increase in inequality that you would prefer. At this point, the debate would presumably come down to what degrees of concern over relative income and rates of decrease of marginal utility were reasonable to postulate.
Sunday, 3 November 2013
(I cannot rule out that all of what I'm about to say has already been said, and already been refuted, somewhere in the literature.)
Rawls' difference principle states (someone correct me if I've got it wrong) that a deviation from perfect equality is just if, and only if, it would improve the position of the least well-off people in society.
Rawls' argument for the difference principle is as follows (again, please correct me if I've got it wrong). A deviation from perfect equality is just if, and only if, it would be chosen by a rational individual faced with a veil of ignorance about her own qualities (Premise 1). Such an individual would only want to deviate from perfect equality if doing so would improve the position of the least well-off people in society (Premise 2). Therefore, she would only choose to deviate from perfect equality if doing so would improve the position of the least well-off people in society (Conclusion).
The argument is clearly valid, so any criticisms must be directed toward its premises. Premise 1 says that a social institution is just if and only if it would be chosen by a rational individual behind a veil of ignorance. And Premise 2 says that a rational individual behind a veil of ignorance would maximise the position of the least well-off people in society. Conditional on accepting that justice is the right criterion for deciding what sort of institutions we should have, Premise 1 is fairly unobjectionable. Premise 2, on the other hand, is problematic.
Premise 2 is tantamount to the claim that it is rational to play a minimax strategy when faced with a veil of ignorance. It therefore implies that there is such a thing as a rational preference. Rationality is defined as doing whatever best satisfies one's preferences, given those preferences; it is a property of decision-making or behaviour, not a property of preferences. Arguably then, the difference principle is predicated on a category error.
The difficulty of maintaining that it is rational to minimax behind a veil of ignorance becomes clearer when we consider other possible strategies. Indeed, an obvious alternative strategy is to maximise expected utility. (I explain the difference between minimaxing and maximising expected utility at the end of the post.) Now, I am not arguing that maximising expected utility is more rational than minimaxing. Rather, I am pointing out that the difference principle rests on the claim that minimaxing is somehow more rational than maximising expected utility (or playing any other conceivable strategy).
A different interpretation of Rawls' difference principle is as follows. In referring to what a rational individual would do, Rawls did not mean to claim that there is such a thing as a rational preference. Rather, he was simply describing what a typical person would do. In other words, he was making a testable claim about what we should expect people to do when faced with a veil of ignorance.
Again, this interpretation is problematic. First, the population is almost certainly heterogeneous with respect to preference for equality. There are probably some people who would minimax; some who would maximise expected utility; and some who would play a different strategy altogether. Second, the difference principle no longer necessarily implies that deviations from perfect equality are only just if they improve the position of the least well-off people in society. Rather, it implies that deviations from perfect equality are just so long as the typical person tends to choose them. Indeed, if empirical enquiry reveals that people tend to maximise expected utility, say, then very large deviations from perfect equality may turn out to be just.
The difference between minimaxing and maximising expected utility
The diagram above depicts a veil of ignorance thought experiment (the numbers are completely hypothetical, and were chosen purely for the sake of exposition). Scenario 1 corresponds to perfect equality; the least well-off experience just as much utility as the most well-off. Scenario 2 corresponds to a situation where the least well-off experience 2 units of utility, the middle experience 3 units of utility, and the most well-off experience 4 units of utility. And Scenario 3 corresponds to a situation where the least well-off experience 1 unit of utility, the middle experience 4 units of utility, and the most well-off experience 7 units of utility.
According to Rawls' difference principle, a rational individual behind a veil of ignorance should play the minimax strategy: she should maximise her minimum gain. The minimum gain from Scenario 1 is 1; the minimum gain from Scenario 2 is 2; and the minimum gain from Scenario 3 is 1. Therefore, she should choose Scenario 2.
However, an alternative strategy would be for an individual to maximise expected utility: he could maximise his average gain. The average gain from Scenario 1 is 1; the average gain from Scenario 2 is 3; and the average gain from Scenario 3 is 4. Therefore, if an individual were maximising his expected utility, he should choose Scenario 3.
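The two strategies in the worked example above can be written out as a short sketch. The utility numbers are the hypothetical ones from the diagram, and behind the veil each position is assumed to be equally likely:

```python
# Minimax vs expected-utility maximisation over the three scenarios above.
# Each scenario lists the utility of the least well-off, the middle, and the
# most well-off thirds of society (assumed equally sized and equally likely).
scenarios = {
    1: [1, 1, 1],   # perfect equality
    2: [2, 3, 4],
    3: [1, 4, 7],
}

# Minimax: choose the scenario whose worst outcome is best.
minimax_choice = max(scenarios, key=lambda s: min(scenarios[s]))

# Expected utility: choose the scenario whose average outcome is best.
expected_choice = max(scenarios, key=lambda s: sum(scenarios[s]) / len(scenarios[s]))

print(minimax_choice, expected_choice)  # → 2 3
```

The two rules pick different scenarios, which is exactly the divergence the post turns on: nothing in the definition of rationality itself privileges one rule over the other.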
Thursday, 24 October 2013
In discussing the United States’ long-term budget outlook ('The reality of America’s fiscal future', October 22), Martin Wolf notes that, according to the CBO, “a rise in federal revenue to 22 percent of GDP may be needed.” He describes this target as “surely achievable”.
A glance over the historical data suggests otherwise. The accompanying chart plots federal tax receipts as a percentage of GDP between 1930 and 2010. (Data are from the White House Office of Management and Budget, Historical Tables, Table 1.2.) Since the Second World War, revenues have remained relatively stable at around 18% of GDP. They have never amounted to more than 20.9% of GDP. This stability is all the more remarkable given substantial variation in tax rates over the 20th century. During the 1950s, for example, the top marginal tax rate on income was over 90%.
In 2012, tax revenues amounted to only 15.8% of GDP, which suggests that, as the recovery continues, there is some scope for reducing the deficit through increases in revenue. However, a jump to 22% of GDP would be without historical precedent.
Nuffield College, Oxford
Thank you for your letter.
I am aware of this fact. But the simple truth is that this stability is not compatible with the survival of the programmes the US has legislated, in an ageing society. So Americans have to choose. Raising the revenue ratio to 22 per cent or so seems to be the most effective way to meet its commitments. The alternative is to push granny under a bus. That is not going to happen. Letting the rest of government disappear is fairly crazy. So one is left with higher taxation.
Friday, 18 October 2013
Continuing with my recent theme of satisfaction with government in the United States, here I look at Americans' beliefs about how effectively their tax money is being spent, using figures from Gallup.
The first chart shows the change in local, state, federal and total government spending (as a percentage of GDP) in the US, between 1981 and 2011. Both local and state spending rose slowly but steadily throughout the period, increasing overall by about 2 and 3 percentage-points (of GDP), respectively. In contrast, federal government spending fell during the late 1980s and 1990s, and then rose again during the 2000s. It then increased rapidly at the onset of the financial crisis (due to a combination of lower GDP, the bank- and auto-bailouts, and the stimulus bill), so that--by 2011--it was about 2 percentage-points (of GDP) higher than it had been in 1981. Total government spending fluctuated up and down during the 1980s and early 1990s, declined during the late 1990s, and then rose during the 2000s. It was just under 8 percentage-points (of GDP) higher in 2011 than it had been in 1981.
In 1981, 2001 and 2011, Gallup asked Americans how many cents out of each tax dollar they thought the government wasted. The next three graphs plot the distributions of responses to this question for 1981 and 2011. The first graph shows the results with respect to federal taxes; the second with respect to state taxes; and the third with respect to local taxes. (The results for 2001 lie in-between those for 1981 and 2011, but are closer to those for 1981. There were two surveys carried out in 1981. I chose to display the result of the survey that minimised the difference between 1981 and 2011.)
In all three cases, the distribution of answers is shifted to the right for 2011, indicating that Americans believed the government was wasting more cents out of each dollar in 2011 than in 1981. The discrepancy is largest with respect to federal taxes, and is smallest with respect to local taxes--a result that is shown more clearly in the final graph (below).
These figures show that, with higher government spending, Americans believe the average tax dollar is spent less efficiently. There are a number of possible interpretations of this result. First, if Americans' beliefs are unbiased on average, then as government spending increases, each tax dollar really is spent less efficiently. Second, as government spending increases, media efforts by those opposed to higher government spending (e.g., The Tea Party) systematically distort Americans' beliefs about government spending, even though spending does not actually become any less efficient. Third, the apparent trend may simply be an effect of the bailouts and stimulus bill. In particular, while an increase in government spending on, say, education or roads does not make spending any less efficient, the bailouts and stimulus bill were highly inefficient--or at least, were perceived to have been so by many Americans.
Monday, 14 October 2013
Only 26% of Americans believe that "the Republican and Democratic parties do an adequate job of representing the American people". 60% now believe that "they do such a poor job that a third major party is needed." 71% of Independents, 52% of Republicans and 49% of Democrats believe that a third party is needed. Read the article at Gallup.
Sunday, 6 October 2013
As this article at Open Europe reports, the EU budget-deal will cut funding for the Common Agricultural Policy (CAP) considerably. Between 2007 and 2013, the EU spent 56 billion euros per year on the subsidy component of the CAP, whereas between 2014 and 2020, it will spend 46 billion euros per year on it. (56 billion euros is approximately equal to the GDP of Luxembourg.) Not only will the absolute amount of funding for the CAP go down, but so too will the share of the total budget allocated to the CAP. Between 2007 and 2013, the subsidy component of the CAP comprised about 32% of the EU budget, whereas between 2014 and 2020, it will comprise 28% of the budget. At the present rate of change, it will only take about 120 years to get the subsidy component of the CAP below 1 billion euros per year.
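The closing quip can be checked with a quick calculation. The assumption here (mine, not the Open Europe article's) is that the subsidy keeps shrinking at the same proportional rate per seven-year budget period:

```python
import math

# Rough check: how long until the CAP subsidy falls below 1 billion euros/year,
# assuming it keeps shrinking at the same *proportional* rate per seven-year
# budget period (an assumption for illustration, not a claim from the article)?
start = 46.0                  # billion euros/year, 2014-2020
shrink_factor = 46.0 / 56.0   # decline per seven-year budget period
target = 1.0                  # billion euros/year

periods = math.log(target / start) / math.log(shrink_factor)
years = 7 * periods
print(f"About {years:.0f} years")  # → About 136 years
```

On this reading the answer is on the order of 135 years, in the same ballpark as the roughly 120 years quoted above; either way, the point stands that at this pace the subsidy will outlive everyone reading this.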
Saturday, 5 October 2013
To quote Wikipedia, classical liberalism "advocates civil liberties, with a limited government under the rule of law, private property, and a belief in laissez-faire economic policy." Earlier this year, The Economist ran an article pointing to evidence that young people in Britain have become more classically liberal over the last few decades. In this post, I gauge the extent to which Americans' political beliefs have shifted in the direction of classical liberalism since the early 2000s. All data are from Gallup.
The first chart indicates that the proportion of Americans who consider themselves independents has increased by 5-10 percentage-points since 2001.
Charts two and three relate to the overall scope of government. The second chart indicates that the proportion who think the federal government has too much power has increased by 5-10 percentage-points, while the proportion who think it does not has decreased by 5-10 percentage-points. The third chart indicates that the proportion who think the government is doing too much has increased by 1-5 percentage-points, while the proportion who think it is doing too little has decreased by 1-5 percentage-points.
Charts four, five, six and seven relate to economic and fiscal issues. The fourth chart indicates that the proportion who think there is too much government regulation has increased by about 5 percentage-points, while the proportion who think there is too little government regulation has decreased by 1-5 percentage-points. The fifth chart indicates that the proportion who disapprove of labour unions has increased by about 5 percentage-points, while the proportion who approve of them has decreased by about 5 percentage-points; both of these changes happened quite abruptly in 2008, around the time Obama was elected. The sixth chart indicates that the proportion who think the government is spending too much on national defence has increased by about 15 percentage-points, while the proportion who think it is spending too little on national defence has decreased by about 15 percentage-points. The seventh chart indicates that the proportion who think the federal income tax is too high has not changed since 2002, when the Bush tax cuts came in; the proportion who think it is about right has not changed either.
Finally, charts eight, nine and ten relate to social issues. The eighth chart indicates that the proportion who think immigration should be increased has risen by about 5 percentage-points, while the proportion who say it should be decreased has declined by 5-10 percentage-points. The ninth chart indicates that the proportion who think the government should promote traditional values has decreased by 5-10 percentage-points, while the proportion who think it should not promote them has increased by about 5 percentage-points. The tenth chart indicates that the proportion who think homosexual relations should be legal has increased by 10-15 percentage-points, while the proportion who think they should be illegal has decreased by 10-15 percentage-points.
Overall, these charts provide strong evidence that Americans have become more classically liberal since the early 2000s. Americans are now more likely to: consider themselves independents, think the government has too much power, think the government is doing too much, think there is too much regulation, disapprove of labour unions, think the government is spending too much on defence, favour increased immigration, and think homosexual relations should be legal. And they are less likely to think the government should promote traditional values. However, they are no more likely to think the federal income tax is too high.
Thursday, 3 October 2013
Tuesday, 1 October 2013
I just came across a study entitled Menu Labelling and Calories Purchased at Chain Restaurants by Krieger et al. The authors investigated whether displaying caloric information on menus altered customers' food and beverage choices. They took advantage of a new regulation that requires certain food outlets to display caloric information on their menus. In particular, they interviewed a large number of customers before the regulation was implemented, and a large number afterward, and then looked to see whether there was a change in the mean number of calories purchased per customer. They found that, after 18 months, mean calories purchased per customer had decreased by about 4% in chain restaurants and about 14% in coffee shops. From the perspective of getting customers to buy less calorically dense foods, these results are quite encouraging.
However, there is one obvious limitation to the study. The authors had absolutely no way of knowing (as they acknowledge at the end of the paper) whether customers compensated for their choices by eating more at other meals. For example, the typical customer might have looked at the menu in the coffee shop and chosen the lower calorie option, only to then eat an extra portion of food at dinner. For this reason, I find this kind of study largely uninformative for evaluating whether such interventions can alter consumers' caloric consumption.
Thursday, 5 September 2013
I currently own one pair of glasses, having accidentally stepped on my second pair a few days ago. I therefore went into town to purchase a new second pair. Before doing so, I obtained the details of my prescription from the company that carried out my last eye test. This was so I would not have to pay for another eye test. (The pair of glasses I currently own has the prescription from my last eye test, and I am able to see through them perfectly well.)
However, after selecting a pair of frames, and handing over the details of my prescription to the salesperson, I was told that my prescription is no longer valid. In particular, I was told that it would be illegal for the company to sell me any glasses because my last eye test was carried out too long ago (namely, >1 year ago). I explained that the glasses I currently wear have the prescription from my last eye test, and that I can see through these without any trouble. But I was told--regardless of my judgement--that the company could not sell me a pair of glasses without first ascertaining my up-to-date prescription through another eye test. Apparently, the law prohibits retail stores from selling glasses based on prescriptions from eye tests carried out >1 year ago.
I immediately joked that this law must have been written by the administrators of eye tests, and the salesperson agreed. Of course, I'm sure the law's advocates argued that the law is in the interests of the poor, helpless consumer--someone who couldn't possibly work out whether or not his old prescription is good enough. One thing I don't understand is how internet retailers get around the law. Indeed, there seem to be many websites offering glasses based on prescriptions provided by the customer--some with '.uk' addresses. An alternative explanation is simply that I was hustled by a crafty salesperson!
Tuesday, 3 September 2013
Sadly, the economist Ronald Coase died last night at the age of 102. I therefore thought I'd share an amusing quote of his: "An Economist who, by his efforts, is able to postpone by a week a government program which wastes $100 million a year has, by his action, earned his salary for the whole of his life."
Monday, 26 August 2013
In this post, I present a very crude analysis of the relationship between the top marginal tax rate on income and income tax revenues in the United States. I do not consider the level of income at which the top marginal tax rate applies; nor do I consider the lower income tax rates, or the levels of income at which they apply. Data are from the White House and the Tax Foundation.
The chart below depicts income tax revenues as a percentage of GDP and the top marginal tax rate since 1934--the earliest year for which I could find data. After rising from a very low level prior to 1944, income tax revenues have been remarkably stable ever since. Prima facie, revenues from income tax do not appear to bear much relationship to the top marginal tax rate.
The chart below shows the same information as the one above, but with income tax revenues on a separate axis. It confirms that--superficially at least--the two variables do not bear much relation to one another. For example, as the top marginal tax rate decreased from 91% in the early 1960s to 28% in the late 1980s, the medium-term trend in income tax revenues was more-or-less flat. In fact, there was a very slight upward trend from the early 1950s to the mid 1980s.
Under plausible assumptions about human behaviour, one should not expect to collect any income tax revenues (over and above those provided by lower income tax rates) when the top marginal rate is 0% or when it is 100%. At a top marginal rate of 0%, any additional tax revenue would have to come from people willing to pay income tax voluntarily. And at a top marginal rate of 100%, any additional tax revenue would have to come from people willing to work voluntarily. Therefore, one should expect to observe what is known as the Laffer curve: an inverse-U-shaped relationship between income tax revenues and the top marginal rate, with revenues at a minimum at top marginal rates of 0% and 100%.
The chart below shows the bivariate relationship between income tax revenues and the top marginal rate, with a quadratic function fit to the data. Contrary to theoretical expectation, the fitted curve is U-shaped rather than inverse-U-shaped: its peaks are at the lowest and highest top marginal rates! But this is almost certainly attributable to the pre-1944 outliers: years in which very little income tax revenue was collected, despite high top marginal rates.
Incidentally, I do not understand why so little income tax revenue was collected prior to 1944. There must be a good legal or economic reason. Nevertheless, as the chart below indicates, once the pre-1944 values are excluded, the quadratic function takes the expected form.
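To make the shape question concrete, here is a minimal sketch of the fitting step. The (rate, revenue) pairs below are made up for illustration, not the actual historical data plotted in the charts:

```python
import numpy as np

# Hypothetical (top marginal rate %, income tax revenue as % of GDP) pairs,
# NOT the actual historical data.
rates    = np.array([ 28,  35,  50,  70,  91,  94])
revenues = np.array([8.0, 8.3, 8.6, 8.1, 7.6, 7.2])

# Fit a quadratic: revenue = a*rate^2 + b*rate + c
# (np.polyfit returns coefficients highest-degree first).
a, b, c = np.polyfit(rates, revenues, 2)

if a < 0:
    # Parabola opens downwards: the expected inverse-U (Laffer) shape,
    # with its peak at the vertex, rate = -b / (2a).
    peak = -b / (2 * a)
    print(f"Inverse-U shape; revenue-maximising top rate ≈ {peak:.0f}%")
else:
    # Parabola opens upwards: fitted revenues are highest at the extremes,
    # which is what the outlier-contaminated fit above produced.
    print("U shape: fitted revenues are highest at the extreme rates")
```

With these illustrative points the fit is concave (a < 0), so the vertex formula recovers a revenue-maximising rate in the interior of the range; a few low-revenue, high-rate outliers can flip the sign of a and turn the parabola upside down.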
Tuesday, 20 August 2013
The Economist recently ran a story entitled The Curious Case of the Fall in Crime, which discusses some of the reasons why crime has fallen in the West over the last couple of decades. The BBC featured a similar article specifically about the British case. But by how much has crime actually decreased? Here, I present three charts which depict, respectively, the change in the homicide rate, the theft rate, and the robbery rate in a number of Western countries since 1995. Data are from the United Nations Office on Drugs and Crime.
Looking just at these three metrics, the drop in crime is quite impressive. Homicide has fallen by 20-50%, theft by 10-50%, and robbery by a similar amount. The only outlier here is Italy, which appears to have experienced a considerable increase in robbery, beginning in the early 2000s. (Part of this increase may be due to some kind of change in the definition of 'robbery' under Italian law.)
Thursday, 1 August 2013
Many commentators have argued that one of the key factors contributing to the financial crisis of 2008 was "too big to fail"--policy-makers' belief that certain institutions could not be allowed to fail because the consequences of their failure would be too disastrous. Here, the argument is simply that there is less incentive to avoid excessive risk-taking when someone else is picking up the bill. Ex post, of course, many financial institutions did get bailed out; not just in the US but in other countries as well. However, although the foregoing argument seems highly plausible (to me at least), one cannot be sure that financial institutions believed ex ante that they would in fact be bailed out. Here I want to review some of the evidence that they did.
The first kind of evidence is that people favourable to the banks were in Washington during the lead-up to the crisis. The two most prominent examples are Hank Paulson, who was Treasury Secretary between 2006 and 2009, and Tim Geithner, who was president of the New York Fed between 2003 and 2009. Paulson had been the CEO of Goldman Sachs between 1999 and 2006. Of course, by itself this fact is very weak evidence that he put his old institution's interests before the tax-payer's. However, there are a couple of additional reasons to suspect he might have done. First, Lehman Brothers--Goldman's biggest rival--was allowed to fail, yet AIG, which owed money to Goldman at the time, was bailed out. Second, on September 18th, 2008 (the day of the run on US money market funds) Paulson corresponded with Lloyd Blankfein, who succeeded Paulson as CEO of Goldman Sachs, more than with any other person in Washington except Ben Bernanke. Geithner has been accused by Sheila Bair, the ex-head of the FDIC, of putting the interests of bailed-out institutions' creditors before those of the US tax-payer. Similar accusations have been made against him by Neil Barofsky, the special inspector general of TARP and author of Bailout. Indeed, Geithner was reportedly described by one Wall Street banker as "our man in Washington."
The second kind of evidence is that the largest Wall Street banks seem to enjoy a "too big to fail" subsidy. In particular, they face lower borrowing costs than their rivals because of the implicit government guarantee on their debts. According to a report by the Independent Community Bankers of America, of 15 studies that investigated the existence of the "too big to fail" subsidy, 14 found evidence that it existed, whereas only 1 did not, and that study was carried out by JP Morgan Chase. More recently, a study by Goldman Sachs found that the "too big to fail" subsidy has generally been small in magnitude, was largest during the financial crisis, but has since become negative. However, as a Bloomberg article reports, the economist Simon Johnson "said that Goldman Sachs' report proves the value of the too-big-to-fail subsidy because it shows the biggest banks enjoyed a large advantage during the financial crisis".
The third kind of evidence is simply the long list of bailouts that preceded those of 2008. Franklin National Bank was bailed out in 1974; Continental Illinois was bailed out in 1984; a large number of financial institutions were bailed out in 1989 following the S&L crisis; and several Wall Street banks were indirectly bailed out in 1994 when the Mexican government was loaned $50 billion to pay off its creditors after the peso crisis. In addition, LTCM's bailout in 1998 was organised by the New York Fed and was helped along by easy monetary policy on the part of the Federal Reserve (though it was not taxpayer funded). Finally, a number of large non-financial institutions have been bailed out by the US government over the years. These include: the Penn Central Railroad in 1970, Lockheed in 1971, New York City in 1975, Chrysler in 1980, and the airline industry in 2001.
Friday, 26 July 2013
Tuesday, 23 July 2013
In a previous post, I presented a chart showing that real GDP/capita in Japan has continued to increase since 1990, contrary to the idea that the country "lost a decade". Japan has been in the news again recently; this time, in the context of Shinzo Abe's economic reforms, which are designed to spur economic growth. However, in the absence of large changes in fertility or migration, Japan's population is projected to decrease over the next few decades. Therefore, by how much can we actually expect its total economy to grow? Here, I attempt to answer this question using population projections from the UN. (The GDP data up to 2010 are from the World Bank.)
The first chart shows the projected increase in real GDP required to achieve different rates of increase in real GDP/capita under two UN fertility scenarios: the high fertility scenario, and the low fertility scenario. The second chart shows--for the next two decades--the projected real GDP growth rate corresponding to each of these trajectories. Under the high fertility scenario, total population levels off, meaning that GDP growth of around 2.5% is needed to achieve per capita growth at the same rate. Under the low fertility scenario, total population falls, meaning that GDP growth of around 2% is needed to achieve per capita growth of 2.5%. The GDP growth rates required for 1.5% per capita growth under the two fertility scenarios are around 1.5% and 1%, respectively.
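The arithmetic behind these figures is just compounding: total growth is (approximately) per-capita growth plus population growth. A minimal sketch, with illustrative rates rather than the UN's actual projections:

```python
# Required total real GDP growth for a target per-capita growth rate,
# given projected population growth. Rates are illustrative.

def required_gdp_growth(per_capita_growth, population_growth):
    """Exact compounding: (1 + g_total) = (1 + g_pc) * (1 + g_pop)."""
    return (1 + per_capita_growth) * (1 + population_growth) - 1

# Per-capita target of 2.5% with population flat vs falling 0.5%/year:
flat    = required_gdp_growth(0.025,  0.000)   # ≈ 2.5%
falling = required_gdp_growth(0.025, -0.005)   # ≈ 2.0%
print(f"{flat:.3%}, {falling:.3%}")
```

With a shrinking population, total GDP can grow more slowly than GDP per capita, which is why the low fertility scenario demands less headline growth for the same living-standard gains.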
Tuesday, 9 July 2013
I found out an interesting fact whilst watching an episode of Stossel this evening. The first official US cent coin, the Fugio Cent, had the words "Mind Your Business" written on it. The coin, designed by Benjamin Franklin, was first issued in 1787. It was not until 1864 that the words "In God We Trust" first appeared on US coins.
Friday, 5 July 2013
The US Military has been carrying out drone strikes in North West Pakistan since 2004. As the chart below indicates, these strikes--which function as an alternative to ground-based intervention--have increased in number dramatically under the Obama administration.
According to Obama administration officials, total civilian casualties from drone strikes are in the single digits. What do independent sources have to say? There are at least four comprehensive databases on drone strikes in Pakistan: one compiled by The Long War Journal, one compiled by the New America Foundation, one compiled by The Bureau of Investigative Journalism, and one--known as Pakistan Body Count--compiled by Zeeshan Usmani. The total number of civilian deaths estimated by each of these sources is shown in the chart below.
Not one of the estimates is in line with the Obama administration's claim of single digit civilian casualties. Which one is most reliable? Although it is very difficult to be sure, there is reason to think The Bureau of Investigative Journalism's higher estimate may be closest to the truth. Scholars at the Columbia Law School published a report entitled Counting Drone Strike Deaths, in which they attempted to gauge the number of civilian casualties from drone strikes in 2011 as precisely as possible. They "counted 2300 percent more “civilian” casualties than the New America Foundation, and 140 percent more “civilian” casualties than New America’s “civilian” and “unknown” casualty counts combined". Similarly, the authors of a Stanford/NYU report entitled Living Under Drones concluded that The Bureau of Investigative Journalism's data-sets are "more thorough and comprehensive than both New America Foundation and The Long War Journal." Finally, the government of Pakistan has acknowledged the deaths of over 400 civilians since 2004.
The mismatch between journalistic findings on the one hand, and assertions coming out of the Obama administration on the other, is quite staggering. The Appendix to Living Under Drones documents this discrepancy in meticulous detail. Below is an example of a table from the Appendix, which shows an official government statement on the left together with contrary statements from various news outlets on the right.
One of the principal reasons given by the Obama administration for the use of drones is their alleged accuracy. Ostensibly, targets can be pinpointed and then zeroed-in on with a high degree of precision. If this claim were true, then--given the considerable number of civilians estimated to have been killed--drone strikes should have killed a very large number of putative terrorists. I.e., the ratio of militant deaths to civilian deaths should be very high. What do independent sources have to say? As the table below reveals, the ratio may be as low as 1:4, and is probably no higher than 4:1. (Incidentally, I calculated these figures myself, based on numbers given in the various sources.) And indeed, a recent study argues that drones may be more hazardous to civilians than conventional manned aircraft.
Finally, it is worth mentioning that the Obama administration has been making use of drones not only in Pakistan, but also in Yemen, Somalia, Iraq, Afghanistan and Mali. For several reasons, including the extensive collateral damage they evidently inflict, I do not believe the US should be conducting drone strikes in the Middle East. I may elaborate on my position in a future post.
Thursday, 4 July 2013
In this post, I depict the trajectories taken by the US-led wars in Iraq and Afghanistan between 2001 and 2013. In particular, I plot--over time, for each war--US troop deployments, US Military casualties, and civilian casualties. Both wars were started by the Bush administration--the Afghanistan war in October 2001, and the Iraq war in March 2003. During the first term of Obama's presidency, his administration brought the Iraq war to a close and simultaneously expanded the war in Afghanistan. Huge numbers of civilians have been killed in both wars: over 100,000 in Iraq, and over 15,000 in Afghanistan. While the case for Afghanistan was arguably stronger than the case for Iraq, in my opinion the US should not have gotten involved in either war.
Data on US troop deployments in Iraq are from the US Department of Defense. Data on US troop deployments in Afghanistan are from the Brookings Institution's Afghanistan Index. Data on US Military casualties are from iCasualties.org. Data on civilian casualties in Iraq are from Iraq Body Count.org. Data on civilian casualties in Afghanistan are from Brown University's Cost of War project. The first three graphs correspond to the Iraq war; the latter three to the Afghanistan war. (Note that the final graph shows combat-related civilian deaths in Afghanistan; I could not find any figures for total civilian deaths.)
Tuesday, 2 July 2013
The Austrian economist Bob Murphy has challenged Paul Krugman to a public debate on business cycle theory. So far, Krugman has not taken up the invitation. The interesting twist is that Murphy has gotten people to pledge over $100,000, which--in the event of the debate's taking place--would be donated to a food bank in New York City. Therefore, unless Krugman decides to donate $100,000 of his own money in lieu of the debate, he is effectively preventing a charity from receiving a large sum of money (donated by other people). Furthermore, after Paul Krugman debated Ron Paul on television in 2012, he stated quite unambiguously on his blog that the reason he did so was to publicise his new book. One can only conclude that while Krugman is willing to sacrifice an hour of his time for the sake of advancing his career, he is not willing to do so for the sake of helping the needy. More details can be found here.
Here I provide three graphs depicting how the value of the pound has changed over the last two-and-a-half centuries. The first two graphs are taken directly from the paper Inflation: The Value of the Pound 1750-2011, which can be found at the House of Commons Library. The third graph uses data from that paper.
As the first two graphs show, prices remained at a more-or-less constant level until the early 20th century, at which time they assumed an upward trend. This upward trend remained relatively gentle until the late 1960s. It then became extremely steep during the 1970s, before flattening out again in the 1980s and 1990s. The magnitudes of these changes are illustrated most clearly in the second graph. Prices in 1900 were neither lower nor higher than they had been in 1800. But between 1900 and 2000 they increased approximately a hundredfold. This implies that if someone had put a pound under her mattress in 1900, it would have lost close to 99% of its original value by 2000.
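The mattress arithmetic is a one-liner: a hundredfold rise in prices leaves a stashed pound with 1/100th of its purchasing power. A quick sketch, with illustrative index values:

```python
# Purchasing power implied by a price-index change: if prices rise
# a hundredfold, a pound keeps only 1/100 of its original value.
def purchasing_power_loss(price_index_start, price_index_end):
    """Fraction of purchasing power lost between two dates."""
    return 1 - price_index_start / price_index_end

# Prices roughly 100x higher in 2000 than in 1900 (index values illustrative):
loss = purchasing_power_loss(1.0, 100.0)
print(f"Purchasing power lost 1900-2000: {loss:.0%}")
# prints: Purchasing power lost 1900-2000: 99%
```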
As is well-known (and as noted above), inflation was particularly rapid during the 1970s. This can be seen most clearly in the third graph. By the end of the decade, the purchasing power of the pound had fallen by an astounding 72%. Had inflation continued at that decade's average pace (a fall in purchasing power of roughly 12% per year), the pound would have lost 99% of its 1970 value within roughly another 26 years.
It is no secret that the changes described above are primarily the result of monetary policy on the part of the Bank of England. Indeed, the picture is much the same in the United States. Obviously, however, just because the value of the pound has diminished over the course of the 20th century, it does not mean monetary policy has done more harm than good. Indeed, most non-Austrian economists would argue that having some form of monetary policy is sensible--if not essential. My purpose in this post is not to evaluate the merits of monetary policy; just to lay out the historical evidence of price changes in the British economy.
Sunday, 23 June 2013
Earlier this month, an article appeared in the New York Times entitled Don't Take Your Vitamins. The article described a number of recent studies which have found that vitamin supplementation may be harmful, rather than helpful. For example, as the article reports, a 2005 Cochrane review of randomised controlled trials documented that regular supplementation with vitamins A, C and E may increase the risk of mortality. This finding should obviously be qualified by saying that the direction of the effect of supplementation on mortality will depend on dosage, frequency of use, and overall nutritional status. For example, provision of vitamin A supplements can be invaluable for preventing child deaths in developing countries. And, as I've argued before, there's really no such thing as an unhealthy food (or vitamin).
Furthermore, my impression is that most people who supplement with vitamins in developed countries do not take vitamin A or vitamin C individually, but instead take a multivitamin. Compared to individual-vitamin supplements, these contain much smaller quantities of a much larger number of micro-nutrients. So what does the evidence say as far as multivitamins are concerned? Might taking them regularly also be dangerous for people in developed countries?
In a paper published this year in The American Journal of Clinical Nutrition, Macpherson et al. set out to answer the above question. They carried out a meta-analysis of 21 randomised controlled trials, each of which had looked at the impact of multivitamin supplementation on one or more measures of mortality. Their total sample comprised >91,000 people. Overall, they found no effect; the risk of mortality among those taking multivitamins was 98% of the risk among those taking placebos, and the 95% confidence interval around the estimate included 1. They also considered mortality from cancer and mortality from heart disease separately, and--again--found no effects. Their conclusion is that "the level of alarm" generated by "highly publicised reports from several recent epidemiological studies" may be unwarranted.
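For readers curious how such an estimate is read, here is a sketch of a risk ratio with a standard log-normal 95% confidence interval. The event counts are hypothetical, chosen only to reproduce a ratio of 0.98; they are not the meta-analysis's actual data:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs group B, with a 95% CI computed
    on the log scale (the usual Katz approximation)."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 882 deaths among 45,000 multivitamin users
# vs 920 deaths among 46,000 on placebo.
rr, lo, hi = risk_ratio_ci(882, 45000, 920, 46000)
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# prints: RR = 0.98, 95% CI [0.89, 1.07]
```

Because the interval straddles 1, these (made-up) data would be consistent with no effect of supplementation on mortality, which is the structure of the finding described above.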
Tuesday, 4 June 2013
It is widely (though not unanimously) asserted that, following the bursting of its asset bubble in 1991, Japan "lost a decade", i.e., underwent 10 years of economic stagnation. (According to some versions of the argument, Japan actually lost two decades.) I am certainly not qualified to provide a comprehensive analysis of the recent Japanese experience. Nevertheless, it is instructive to look at how the standard of living (as measured roughly by per-capita production) has changed in Japan over the relevant time-period. (Data are from the World Bank, Statistics Japan and OECD.)
The first graph (below) plots nominal per-capita production in Japan between 1980 and 2010. And it supports the conventional account; in nominal terms, per-capita production has simply drifted up and down since the 1991 crash.
The second graph (below) plots the relative price level in Japan between 1980 and 2010. It indicates that, since the early 1990s, prices have stagnated or even declined--an abrupt change from their pre-1990s trend.
The third graph (below) plots real per-capita production in Japan between 1980 and 2010. It shows that, once adjustments are made for changes in the price level, the Japanese standard of living has continued to rise over the last two decades, albeit at a somewhat slower rate since 1990.
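The adjustment behind the third graph is just deflation by a price index. A minimal sketch with illustrative figures (not Japan's actual data):

```python
# Converting nominal per-capita production to real terms using a
# price index (GDP deflator). All figures are illustrative.
def real_per_capita(nominal, deflator, base_deflator):
    """Nominal value re-expressed in base-year prices."""
    return nominal * base_deflator / deflator

# Suppose nominal GDP/capita is flat while the price level falls 10%:
y_base  = real_per_capita(4_000_000, 100, 100)  # base year
y_later = real_per_capita(4_000_000,  90, 100)
print(y_base, y_later)
# Flat nominal income plus deflation implies rising real income.
```

This is why a "drifting" nominal series and a falling price level (the first two graphs) can combine into a rising real series (the third graph).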