Wednesday, September 30, 2015

How to analyse housing markets

Housing costs are typically 30% of household income, while about 43% of household savings are tied up in the value of owner-occupied dwellings. There is really no more important market for the general public when it comes to their cost of living and their ability to save for the future.

But simply talking about the housing market as if it is some monolithic beast will lead you to the error of conflating three distinct markets that must be considered independently if you want to really understand what is happening. These markets are

1: The land asset market
2: The housing service market (annual occupancy from rent or ownership)
3: The residential construction market

When you buy a home on the second hand market (rather than a new home), you are actually buying a bundled good which includes a land asset along with a durable housing product which lasts the life of the building. A close analogy would be buying a car bundled with an equity share in the vehicle manufacturer - you get the vehicle for its useful life, and the equity asset in perpetuity.

So when we talk of high demand for housing, home prices increasing, and housing bubbles, we must be clear about whether we are talking about the market for the land asset component of the bundled housing good, or the market for occupying the homes themselves. Conflating these is the most common error in housing market analysis, and it leads to conclusions that make little sense in reality.

For example, take the frequent commentary about the effect of population growth on home prices. To me it is utterly confusing. If we are talking about the land asset market, the question then arises as to why we don’t talk about the population effects on equity and debt markets, derivatives markets, and other asset classes that could equally see effects. The reason is that more people means more buyers AND sellers of the same assets.

You can see from the graph below that population effects don’t seem to be driving the growth in land asset prices, or at least can’t be a major contributor if areas with a 10-15% population decline can still see 70% growth in home prices.

Of course, like other asset markets, the reason for the land price increases has a lot to do with the systematic reduction of interest rates over the past 20 years. Asset prices are just the capitalised value of future claims on incomes, and a lower interest rate increases that asset value compared to the value of the future incomes. This means that comparing the price of the bundled house-and-land asset to incomes makes no sense at all. It would make just as much sense to compare the price of an equity share in Woolworths bundled with a kilo of bananas as a way to measure food inflation. Why not measure the food itself?
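The capitalisation logic can be made concrete with a back-of-the-envelope sketch. All the numbers below are illustrative assumptions, not estimates of any actual market:

```python
# A perpetual claim on an income stream, capitalised at rate r, is worth
# income / r. Numbers are purely illustrative.

def capitalised_value(annual_income, r):
    """Present value of a perpetuity paying annual_income at discount rate r."""
    return annual_income / r

annual_land_rent = 20_000  # assumed annual income attributable to the land

price_at_7pc = capitalised_value(annual_land_rent, 0.07)     # ~286,000
price_at_3_5pc = capitalised_value(annual_land_rent, 0.035)  # ~571,000

# Halving the capitalisation rate roughly doubles the asset price even
# though the income stream is unchanged, so price-to-income ratios rise
# with no change at all in the market for housing services.
print(price_at_3_5pc / price_at_7pc)
```

This is why a rising price-to-income ratio can tell you about asset market conditions while telling you nothing about the cost of occupying a home.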

Luckily, we do have a market for housing as a produced good that we consume on an annual basis quite apart from the land asset: the rental market. If we measure how much of our incomes we spend on rent, and the quality of the homes we reside in (in terms of sqm per person), we can apply the supply and demand model to the market. If there really is something going on with population and housing production, it must be observable in the rental market. Looking at the chart below we can see that in fact the rent to income ratio declined all the way through the land price boom of the early 2000s, as did the occupancy rate (fewer people per home), indicating that we were in fact building new homes faster than we were adding new people.

So sure, use your supply and demand analysis on the market for produced durable housing goods, but remember that home prices aren’t the price in that market. Rents are the price in the housing market, while home prices mostly reflect prices in the land market.

Lastly, we can look at the construction market, which is driven by trends in other markets, including speculation on land markets. Here the idea of supply and demand also works fine, as periods of high demand for new construction result in increasing construction prices (as demand shift to the right against a resource-constrained upward sloping supply curve for construction services). But again, the construction market and construction prices are not the contributor to growth in home prices. In fact, higher construction costs will decrease the value of the land asset, as they provide an additional cost to capturing future income flows.
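The inverse relationship between construction costs and land value follows from a standard residual land valuation, sketched here with made-up figures:

```python
# Residual land valuation: what a developer can pay for a site is the
# expected sale price of the finished dwelling minus construction costs
# and the developer's required margin. Figures are made up for illustration.

def residual_land_value(sale_price, construction_cost, developer_margin):
    return sale_price - construction_cost - developer_margin

base = residual_land_value(600_000, 300_000, 60_000)    # 240,000
dearer = residual_land_value(600_000, 350_000, 60_000)  # 190,000

# A $50k rise in construction costs comes straight off the land value;
# it does not stack on top of the home price.
print(base - dearer)
```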

The situation now in Australia is that asset market dynamics, including lower interest rates, international buying, and simple cyclical timing of investments, are driving up land prices in some capital cities. In some areas, when this asset buying occurs in new homes it also increases demand for construction, pushing up prices in that market as well. And in the housing service (i.e. rental) market, the additional supply is suppressing rents.

This is the way to analyse housing markets. Don’t be drawn into the monolithic view by conflating behaviour in these distinct markets.

Thursday, September 10, 2015

Doing the housing supply maths

Laurence Murphy is a top property economist at the University of Auckland. I met him last night after a presentation in Sydney where he took on the myth that planning constraints are a major determinant of current home prices in Australia and New Zealand.

He said it is very easy to demonstrate mathematically how little impact even a large increase in the rate of supply would have on prices. But when he shows this analysis to government officials, planners, and engineers who have bought into the supply-side narrative their response is often

“I see your calculations. I follow the logic. But I don’t believe it!”

So I wanted to try the ‘basic supply-side maths’ for myself on the blog to see what sort of effects radical changes to the rate of new housing supply could have, and see if I generate some of the same responses.

Here’s how the maths work. I take the number of new dwelling completions from the ABS for the past 20 years, which is shown in quarterly figures in the blue line of the chart below. Since 1995 new housing supply has been 146,546 dwellings per year on average, which is about a 2% increase in the stock annually, though this moves with the business cycle.

I then add 10% to this number every year to generate a counterfactual world where supply has been much higher over a sustained two-decade period (green line). Then I add 20% just to take an extreme scenario (yellow line). Note that in this exercise I don’t ‘elastify’ supply, which would mean higher construction in boom periods and lower construction in slumps. When I run the numbers with more elastic supply that responds more to both booms and slumps, I get fewer homes built compared to what actually happened! This is because when the completion rate falls, it falls faster, offsetting all of the gain from the previous boom. I show a twice-as-elastic scenario in the next graph in red, which actually results in 8,000 fewer dwellings built over the past 20 years. ‘Elastifying’ supply can’t really be what is desired by those advocating for supply-side reforms.
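The elasticity point can be shown with a stylised series (made-up quarterly completions, not the ABS data used here): doubling the deviations around the mean leaves the cumulative total unchanged, and it only takes the asymmetry of deeper slumps, as in my run on the real data, to push it negative.

```python
# Stylised quarterly completions over one boom-slump cycle
# (made-up figures, thousands of dwellings).
actual = [30, 40, 50, 40, 30, 25, 30, 45]
mean = sum(actual) / len(actual)

# 'Twice as elastic' supply: deviations from the mean are doubled,
# so booms build more and slumps build less.
elastic = [mean + 2 * (q - mean) for q in actual]

# Symmetric amplification cancels over a full cycle: the extra boom-time
# completions are exactly offset by deeper slumps, so elasticity alone
# delivers no additional homes.
print(sum(actual), sum(elastic))
```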

Any supply-side housing initiative should simply aim to get more homes built, year in, year out. This is what I capture in my counterfactual scenarios of 10% and 20% higher construction over two decades.

So here is the question. How many more houses would there be now in these counterfactual worlds? And what would the price impact be?

Well, if we had built 10% more new homes each year for the past 20 years Australia would have around 300,000 more homes. At a 20% higher rate of completions that's 600,000 more. Sounds terrific! That must have a massive impact on prices.

Well. No.

You see, Australia’s current housing stock is somewhere above 9 million homes: around 8.8 million occupied, plus the second homes, holiday homes, and so forth that are traditionally about 8% of the housing stock. Let’s say that there are 9.3 million dwellings in the country right now. The additional homes in my 20-year supercharged supply scenarios represent just a 3.2% and 6.4% increase in total stock respectively.

The price impact of a 3% increase in supply is a 3% reduction if the price elasticity of demand is unity. That’s it. The price reduction could be less if there are countervailing income effects that lead to outbidding for superior locations. So twenty years of supercharged supply provides somewhere between 0% and 3% lower prices, which suggests to me that focusing on the supply side is close to a waste of time. In the 20% higher housing completions scenario the effect is somewhere between zero and 6%. About the same as two and a half years of rental price growth.
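The arithmetic above can be written out directly using the post's own figures (the stock figure is the assumed 9.3 million):

```python
# The supply-side maths, using the figures from the post.
avg_completions = 146_546     # average annual completions since 1995 (ABS)
years = 20
housing_stock = 9_300_000     # assumed current total dwelling stock

extra_10pc = 0.10 * avg_completions * years  # ~293,000 extra homes
extra_20pc = 0.20 * avg_completions * years  # ~586,000 extra homes

stock_increase_10pc = extra_10pc / housing_stock  # ~3.2% larger stock
stock_increase_20pc = extra_20pc / housing_stock  # ~6.3% larger stock

# With unit price elasticity of demand, a ~3% larger stock maps to at
# most a ~3% reduction in rents -- the upper bound on two decades of
# supercharged supply.
print(round(stock_increase_10pc * 100, 1), round(stock_increase_20pc * 100, 1))
```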

To put it another way, after 20 years of a 10% higher rate of new supply, rents today would be the same as they were in early 2014.

We can alternatively look at a raw measure of the gains in floor space per person. Take the average floor size of homes, which is about 180sqm, add 3%, and spread it across the average 2.6 occupants, to get an additional 2sqm of floor space per person.

Or we can think of it in terms of occupancy rates, which would be 2.51 instead of 2.6 with the same size homes under the 10% higher supply scenario.
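Both per-person measures fall out of the same 3% stock gain. A rough sketch (rounding explains any small difference from the figures in the text):

```python
# Per-person gains from a ~3% larger housing stock, using the post's figures.
avg_floor_area = 180  # sqm per dwelling
avg_occupancy = 2.6   # people per dwelling
stock_gain = 0.03     # ~3% more dwellings after 20 years of 10% higher supply

# Spread the extra floor area across the same population:
extra_sqm_per_person = avg_floor_area * stock_gain / avg_occupancy  # just over 2 sqm

# Or keep dwelling sizes fixed and spread the same people over more homes:
new_occupancy = avg_occupancy / (1 + stock_gain)  # ~2.52 people per home

print(round(extra_sqm_per_person, 1), round(new_occupancy, 2))
```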

That’s all you get for 20 years’ worth of sustained housing supply stimulus. And you get none of that from more elastic supply alone.

The point being that current massive price increases, in the order of 17% per year in Sydney and Melbourne, simply cannot be explained by anything like unresponsive supply. Not only that, any supply-side effect on prices takes many decades to have any effect, and only enters the price equation via effects on rents.

If we want cheaper housing we need to reform legal structures to shift bargaining power to tenants from landlords, curb speculation through financial controls (and keep stamp duties!), and stop rewarding political parties who promise housing supply as any sort of solution to current prices.

Unfortunately, very few people actually want housing to become cheaper. Around 70% of households are homeowners, around 30% are property investors who come from the wealthier part of society, and most politicians also have a huge share of their wealth tied up in residential property. It suits all of these interests to point the finger at supply because they know it sounds attractive in a naive economics way, but won’t actually reduce the value of their housing portfolios.

As Professor Murphy explained, the consensus around new housing supply as a solution to housing affordability problems is a political construct. This unfortunate political reality is best summarised in this tweet. 
Dear reader I hope you see my calculations, follow the logic and believe it!

Monday, August 31, 2015

So, about the inefficiency of stamp duties…

Australia’s economic commentariat is now almost unanimously on board with the idea that stamp duties on property transactions are immensely inefficient and should be abolished in favour of land taxes.

I’ve long held the view that land taxes are the best form of taxation, so I certainly agree with that part. But the idea that stamp duties are exceptionally bad is not clear cut. The key reference point for this belief is the various modelling exercises of economists looking to estimate the welfare losses from these taxes.

The main problem, however, is that there are no transactions in equilibrium economic models, so there is no way to model a transaction tax. Equilibrium models are ‘pre-solved’ by the Walrasian auctioneer to determine the distribution of goods to a single representative agent.

Here’s what the Australian Treasury had to say when it tried to model the welfare effects of stamp duties.
It is inherently difficult to capture this type of capital transaction tax in a model with a single representative agent. The approach adopted here treats real estate services as an investment good which improves the productivity of the firms, including the housing sector. One way of thinking about this is that real estate agents play a valuable role in finding producers that value the capital the most. Therefore a potential owner will be willing to pay a real estate fee equal to the profit they will enjoy over the previous owner. Within this setting the conveyance duty is treated as a tax on the value of investment and subsequent productivity gains facilitated by the transfer of land and structures.
Translated it reads “our model can’t capture transaction taxes so we’ll just assume the tax is something else to fit it into the model we do have.”

The best micro-level analysis comes from Davidoff and Leigh, who find that the main impact of higher stamp duties is to reduce the frequency of home sales, some of which would have been people relocating.

Yet it is not clear that the welfare effect of reduced home sales is negative if some of those sales are merely fuelling speculation in the housing market. The basic result of transaction taxes in asset markets holds - if some of the transactions are simply speculative churn, then there can be positive welfare effects from reducing turnover through transaction taxes.

So I urge caution about calls to cut stamp duties, even if those calls are accompanied by the proviso that such a change must be accompanied by higher land taxes, and especially if those provisos are likely to be ignored.

Tuesday, August 18, 2015

Nanny state submission

Australia's new libertarian Senator David Leyonhjelm has called for a Senate Inquiry into Australia's creeping 'nanny state' regulation of individual behaviour. The conflicted Senator, whose main claim to fame so far is agitating for increased regulation of wind farms despite his apparent principles of freedom, is one of those characters who at least shakes up the dreary world of politics. By coincidence I do often agree with him on personal freedoms, though on economic freedoms and issues about the distribution of wealth and social support, we disagree quite starkly. The Inquiry is, however, a useful catalyst for considering the evidence on individual-level harm-minimising regulations.

My submission is reproduced below, and available in full here.

This “nanny state” inquiry is a timely chance to reconsider the relationship between personal choice and legislated responsibilities, and to consider the evidence that exists of the effectiveness of nanny state policies in terms of their intended social impacts.

The Terms of Reference for the Inquiry are:
  • the sale and use of tobacco, tobacco products, nicotine products, and e-cigarettes, including any impact on the health, enjoyment and finances of users and non-users; 
  • the sale and service of alcohol, including any impact on crime and the health, enjoyment and finances of drinkers and non-drinkers; 
  • the sale and use of marijuana and associated products, including any impact on the health, enjoyment and finances of users and non-users; 
  • bicycle helmet laws, including any impact on the health, enjoyment and finances of cyclists and non-cyclists; 
  • the classification of publications, films and computer games; and 
  • any other measures introduced to restrict personal choice ‘for the individual’s own good’. 
I respond to each in turn by taking a practical approach informed by research in these areas. An overarching message is this. It should not be okay to ‘do something’ about a social issue without a rigorous assessment of whether that ‘something’ will even address the issue at hand. Many nanny state regulations are a knee-jerk political response and not policy made with clear assessable objectives.

A second message is this. Healthier citizens need not lead to lower health care costs in general, as preventing one disease or injury simply allows another disease to cause that person’s death, with its own associated health care and ‘end of life’ care costs.

The research is quite clear that this is the case, particularly for smokers. The following academic results are typical (my emphasis).
Health care costs for smokers at a given age are as much as 40 percent higher than those for nonsmokers, but in a population in which no one smoked the costs would be 7 percent higher among men and 4 percent higher among women than the costs in the current mixed population of smokers and nonsmokers. If all smokers quit, health care costs would be lower at first, but after 15 years they would become higher than at present. In the long term, complete smoking cessation would produce a net increase in health care costs, but it could still be seen as economically favorable under reasonable assumptions of discount rate and evaluation period.
And from here
Until age 56, annual health expenditure was highest for obese people. At older ages, smokers incurred higher costs. Because of differences in life expectancy, however, lifetime health expenditure was highest among healthy-living people and lowest for smokers. Obese individuals held an intermediate position.
Therefore when making policy decisions in the interests of improving individual health, an informed government should not naively justify such decisions on the grounds of reducing the resource burden of public health care, as this argument rarely holds. Decisions must be made on other grounds, of which there are many legitimate ones, such as externalities (in the case of passive smoking in some public areas), information failures, and the market or political power of interest groups.

Underlying this inquiry is also a question as to the current Australian legal situation in terms of duty of care. Take as an example children’s playgrounds in public parks. Surely a part of the trend towards excessive padding and safety is the result of legal pressures and past legal cases against “negligent” councils. The same happens with cracked footpaths (see this example, and there are many others), and other personal injuries that seem to overstep the bounds of a common sense duty of care even on private property (see this example). I believe a key part of the process of decreasing ineffective and costly nanny state regulation requires looking abroad, perhaps to Europe, at how legal interpretations of duty of care differ in ways that allow governments the legal comfort to go without nanny-state regulations.

Tobacco choice
Legislation restricting tobacco sales, purchasing, and smoking locations has had a large impact on smoking in the past two decades. As the below graph from the Australian Institute of Health and Welfare shows, smoking is declining in the general population in response to a combination of policy changes intended to have this effect. Now the rise of vaping as an alternative nicotine indulgence has attracted some attention, as its growth in recent years is at odds with the continued long run decline in tobacco smoking in traditional forms.

Questions about tobacco choice must centre on externalities of consumption, and information failures. Is smoking impinging on the freedom of others to enjoy a smoke free environment, and are smokers fully informed about the products they are consuming?

On the first question it seems clear that previously introduced limitations on smoking locations have addressed the majority of externalities associated with tobacco consumption. On the second, one could argue that the public awareness campaigns of the past decades have addressed this issue as well, and that plain packaging rules and other changes have little claim to be further addressing information failures, though there is a very light argument that they reduce the power of tobacco brands through lower community awareness.

The rise of vaping must also be considered. Vaping is specifically designed to minimise externalities from consuming tobacco in public and enclosed spaces, and hence any regulation of vaping should focus on ensuring consumers are fully informed of the product being consumed and its personal health effects.

Alcohol choice
There is no doubt Australia, like many countries, has high levels of alcohol related violence and a binge drinking culture. Australia has some shocking ‘alcohol related violence’ statistics:
  • 1 in 4 Australians were a victim of alcohol-related verbal abuse 
  • 13 percent were made to feel fearful by someone under the influence of alcohol 
  • 4.5 percent of Australians aged 14 years or older had been physically abused by someone under the influence of alcohol 
But all the scientific research says that alcohol has absolutely no effect on aggression, and in fact impairs coordination.

One must be clear about what social problem taxes on alcohol and other regulations limiting sale are targeting: the binge drinking that arguably creates externalities on others, or drinking alcohol in general? Clearly it is the rowdy culture and late night violence in cities and suburbs that is the problem.

Yet it is not clear that “sin taxes” on alcohol are an effective way to change the binge drinking culture, and in fact might have the opposite effect. Those who choose to drink alcohol may change their patterns of consumption to only drink to get drunk. Why pay so much for alcohol unless you are going to get drunk?

Anecdotal evidence across countries suggests that countries with binge drinking cultures, such as the Scandinavian countries and the UK, also have more expensive alcohol. While in Mediterranean countries where wine is part of the dining culture, cheap enough to consume with most meals, the binge drinking culture is less prevalent. In fact the weight of evidence now points to regular small quantities of alcohol being beneficial to lifetime health.

So what sort of policies would reduce our violent binge drinking culture? I have a radical proposal.
  • Remove taxes on alcohol (revenues can be made up with land taxes) 
  • Reframe the public alcohol messages. 
  • Reduce the drinking age to 16 
  • Allow alcohol to be sold in supermarkets in States where it is not 
  • Remove liquor licensing rules and simply retain responsible serving of alcohol requirements. 
Essentially such changes would make alcohol boring and integrate it into everyday life.

Public health messages might have a grandma drinking Bundy Rum diluted with cold water after dinner, who then falls asleep on the couch. Or we could do a complete reversal and really drill home the point that rowdy drunks are puppets of their social environment and that they can’t blame alcohol. If you are a tool when you are drunk, you are a tool. Embarrass them into less binge drinking.

As anthropologist Kate Fox explains
I would like to see a complete change of focus, with all alcohol-education and awareness campaigns designed specifically to challenge these beliefs – to get across the message that a) alcohol does not cause disinhibition (aggressive, sexual or otherwise) and that b) even when you are drunk, you are in control of and have total responsibility for your actions and behaviour.
Yet at the moment we have alcohol messages that seem to reinforce the idea that alcohol is an excuse for disruptive behaviour, with phrases such as “alcohol is responsible for…”. Actually, no. Would you seriously say “tea is responsible for…”?

As I have discussed before, culture is often a good explanation of social and economic phenomena. The more we understand culture, and get over our simplistic ‘Pigouvian taxes can fix everything’ mentality, the more we can strategically intervene in highly effective ways to change behaviours that are having negative effects on others.

Marijuana choice
The same arguments discussed above in relation to tobacco smoking and alcohol apply to marijuana. It is mostly through historical happenstance that marijuana consumption is fully prohibited while tobacco and alcohol are not, and certainly the prohibition of various types of drugs has a complex social history.

The main comment is that modern experiments with the legalisation of marijuana have shown that there is little social disruption from such changes, and that the legal and police resources devoted to the current illicit marijuana industry can be much better employed elsewhere.

Bicycle helmet choice
Australia is globally unique in its laws making bicycle helmets compulsory for all riders. As discussed in the background section of this submission, the argument that injured cyclists will be cared for in public hospitals, and as such create externalities on others through the costs of public health care, is rubbish.

Moreover, even if one believed this argument it would also justify helmet wearing for drivers and pedestrians, who on average account for the overwhelming majority of head injury hospitalisations.

As a general observation, the helmet laws have been a knee-jerk policy without a clear assessable objective, and have for nearly 30 years been an excuse to ignore investing in urban cycling infrastructure because ‘something’ has already been done for cyclists to keep them safe.

Again, the overwhelming research finding is that helmet laws reduce cycling, make cycling less safe for those who do cycle, and decrease the health of those who opt out of cycling. Being a world outlier in this area should be enough of a signal that this law is not achieving any particular goal of reducing externalities or improving information failures for cyclists, and if anything does the opposite by making cycling appear more dangerous than the statistics show.

Media classifications
Unlike most of the items in the ToR, media classifications do serve to address an information failure, in media and games, where viewers are unable to judge the content until after they have experienced it. The simplest way to view media classification is as a type of labelling, similar to that on food and groceries, which allows customers to easily access additional information about the product.

In an ideal world media classifications would be simple and their design would imply self-evident features of the media content in terms of violence, sexual content, language and themes. The main users of these classifications are parents taking responsibility for their child’s exposure to particular types of media, and for these parents some form of classification tool appears to address a possible information failure.

Sunday, August 2, 2015

The confused economic orthodoxy

Last year I presented the idea that perhaps a firm objective function of maximising the rate of return on all costs is more consistent with the stylised facts about firm cost curves.

I want to document here two things. First, the two mutually exclusive responses from editors and referees during the reviewing process, which to me reveals the general ignorance of what the core concepts in economics really are (opportunity cost anyone?).

Second I want to spend a moment showing the incoherent ways profit-maximising is used in economics, and reiterate Joan Robinson’s critique of profit-maximisation as it is still highly relevant.

Part 1: Challenging the scriptures

The basic idea of my alternative objective function is that maximising the absolute value of something is universally a stupid thing to do. We need a denominator in a world where what matters at an individual or firm level is relative performance.

I’ve had both the following responses. First is the more common response that the paper is wrong because it doesn’t look at profit maximising firms. Basically, this response involves re-explaining the standard result of profit-maximisation. To borrow Steve Keen’s favourite analogy, we are like Copernicus explaining what a model of the Earth revolving around the Sun predicts, and the response is to explain the predictions of the Ptolemaic orthodox model where the Sun revolves around the Earth. The comments on my first blog post about the paper were mostly along this line.

The second response from editors and reviewers is the opposite. We’ve also been told that return-seeking is natural and implied in the standard model of profit-maximisation.
Your paper argues that firms do not maximize instantaneous profit but instead choose to allocate resources in a way that maximizes return on investment. I don't think that this assumption would surprise or bother anybody.
Actually, yes, it surprises and bothers all your economics colleagues. Maybe you should sit down together and interrogate your own models with some objective clarity and see what they really say.

Even if you dismiss this bizarre series of responses as the outcome of time-poor editors looking for excuses to reject papers they don’t like the look of, it still reveals an acceptance of the non-scientific nature of economics and a lack of openness to anything outside the accepted scriptures (and yes, this is a general social science problem).

Part 2: Sticking with inconsistent beliefs

This is my main problem with economics. Despite a long history of critiques of the core models from inside the discipline, including the impossibility of a representative agent (and its full information), the conflation of uncertainty with risk, the Walrasian auctioneer, the impossibility of aggregating capital quantities, and many others, somehow the core survives.

So let me add to this long history of critiques with another of my own.

Consider the short-run profit maximising model, where profits are revenues minus costs. By definition the short-run has a fixed factor of production, usually called capital, which can be any arbitrary set of inputs. Because that factor is fixed, maximising short-run profit is equivalent to maximising profit per unit of the fixed factor, which is more generally represented as

return = (revenue - variable costs) / fixed capital amount

Magically we have an implied denominator, which we might consider sunk costs. But then we have another different set of costs in the numerator, the variable costs. Exactly how is this distinction between types of costs made in practice? More importantly, where do the funds come from to pay these variable costs?

Consider the standard short-run price-taking model in equilibrium, and suppose demand then increases. Since increasing output requires the imposition of greater costs for each additional unit (being on the upward-sloping part of the ATC curve), the firm must conjure these funds from somewhere. If it requires a new investor (or the same investors to reinvest earnings), it dilutes the rate of return for all the other investors.

As I have explained before, no current investor would allow the rate of return on their share of the firm to be diminished by adding additional investors. Essentially the core short-run profit-maximising model is one of maximising profits per capital owner.
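A toy illustration of this dilution, with numbers assumed purely for the sketch: an expansion can raise absolute profit while lowering the return earned by every investor, so existing investors would veto it.

```python
# Toy numbers: a firm with fixed capital and a given profit, considering
# an expansion that requires new capital.
capital = 100
profit = 40            # current rate of return: 40%

extra_capital = 50     # new funds needed to expand output
extra_profit = 15      # extra profit from the expansion (30% on new funds)

old_return = profit / capital                                      # 0.40
new_return = (profit + extra_profit) / (capital + extra_capital)   # ~0.367

# Absolute profit rises (55 > 40), but the rate of return per unit of
# capital falls, so incumbent investors lose by allowing the expansion.
print(old_return, new_return)
```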

But then we have a long-run profit maximising model which typically looks like this

profit = quantity * price - (labour units * labour unit cost + capital units * capital unit cost)

The denominator has disappeared. All of a sudden firms don’t care how much it costs to make a profit. If there is a choice between spending $100 to make $40 in profit, and spending $200 to make $41 profit, you choose the latter as a profit maximiser.

But as a return-seeker you first take the $100 investment. You don’t ignore a 40% return because a 20.5% return is available elsewhere. Never.
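The choice between the two objectives can be written out directly using the figures above:

```python
# Two candidate projects from the example: (cost, profit).
projects = [(100, 40), (200, 41)]

# A profit maximiser ranks by absolute profit; a return seeker ranks by
# profit per dollar of cost.
profit_maximiser = max(projects, key=lambda p: p[1])      # picks (200, 41)
return_seeker = max(projects, key=lambda p: p[1] / p[0])  # picks (100, 40)

print(profit_maximiser, return_seeker)
```

The two objective functions disagree whenever the higher-profit option earns a lower rate of return, which is exactly the case the standard long-run model ignores.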

There is much more to this story, particularly around the ability to leverage. But the biggest story is about how value is gained from high return investments. If I can get my 40% return on costs, I can later sell that firm based on the discounted value of net cash flows. If that discount rate is, say, 10%, then my $100 investment gains $27 in value immediately. I can then sell my firm for $127.

In any case, the point here is that profit-maximisation is, in the words of Joan Robinson, a metaphysical doctrine. The empirical record is against it, yet it persists as a signal of membership of the economics tribe. And what is worse, it seems that very few economists at the top of the discipline are clear about the crucial and often hidden underlying assumptions of their models, and they continue to teach a fairy-tale view of the core models.

Tuesday, July 7, 2015

ACE 2015: Day 1

This week I'm attending the Australian Conference of Economists.

The main event today was a debate on the topic that economics education needs saving. To me the debate revealed that the desire for change is widespread amongst the old and young alike within the discipline. What is not clear is agreement on an alternative - everyone wants to do 'something else', but agreeing on what that something else might be is a task never quite tackled systematically by potential reformers.

In my mind Rethinking Economics and Post-Crash have most clearly articulated an alternative pluralist, dare I say it, scientific, core. But few seasoned economists are willing to make such radical change. Opening the doors to cross-disciplinary research is scary. Alternative methods might just prove superior to the equilibrium representative agent models that dominate economics.

As a small example of just how hesitant even the relatively open minds in economics are, when discussing that direct surveys of firm managers and anthropological-style observational studies of firms are a valid method in micro-economics (à la Alan Kirman) I was faced with the following response:
But how do we know that respondents would tell the truth? That's the power of models and various regression tools. We know the assumptions being made
But of course, most of the data that gets into these estimations is the result of surveys asking people to self-report their views, their income, their expenditure, and so forth. This response (from someone whom I respect a great deal and who is an excellent experimenter) simply reveals the narrowness of economics training.

To ram home the point, when Alan Blinder did actually send researchers to ask questions of firm managers and observe their decisions, his results, summarised in his fantastic book Asking About Prices, have had relatively little impact on the profession. As Steve Keen writes in his review on Amazon:
The chief author of this book is Alan Blinder, once a Vice-President of the American Economic Association, a Vice-Governor of the Federal Reserve, and currently President of the Eastern Economic Association. He is, in other words, no maverick, but firmly within the mainstream of economic thought. And yet the research he reports in this book challenges many of the accepted tenets of both micro and macro economics. 
The publication should therefore be taken seriously by the economics profession, and raked over carefully to find out whether what Blinder reveals is really the case, or simply a product of poor research. 
It speaks volumes for the way that economics handles contrary evidence to accepted beliefs that this has not happened. Blinder's book has instead simply been ignored. The book languishes around the 750,000 mark in Amazon's "best sellers" list, and this review will be the first ever given of it. Meanwhile Mas-Colell's Microeconomic Theory, published three years before Blinder's book, which states the accepted neoclassical microeconomic canon in excruciating mathematical detail, ranks in the mid 100,000s, and has over 80 reviews--most of them from economics PhD students and highly laudatory.
We'll see what Wendy Carlin, author of INET's CORE Economics project, has to say about it all on Friday.

Another productive chat was about whether the core economics program could do away with supply and demand diagrams and market equilibrium altogether. Thinking this far outside the current norms is what is really required for change. So you know, two of us believed an economics course could be even more valuable than the current standard courses by doing away with supply and demand as currently formulated altogether.

In another session on how to improve your academic writing some quality advice was offered
Focus on the problem of interest, not the method.
This resonated with me. I've long since come to the realisation that economists have a small, simple toolkit comprising equilibrium models of representative agents, and that they take that method to new problems, adapting it across its infinite possible contortions to fit any problem - from reproductive choice, to education, and more - even when vastly superior alternatives are available. The simplicity itself, despite its irrelevance, seems to be quite attractive.

I had expected to be exposed more to new methods and new ideas but came away very much with the impression of the continued dominance of the core. One of the respected elders of the profession suggested we work together on a model of housing markets, but as soon as he talked about equilibrating forces of supply and demand I realised he hadn't actually thought about the housing market (crucially, the timing problems due to the real option nature of housing investment for landowners). Instead he was taking the method to a new area (for him at least).

More updates to come tomorrow.

Wednesday, July 1, 2015

Gay marriage: an institutional perspective

Since I last wrote about the issue I think we have all decided to just call it marriage now.

Four years ago I hadn’t put much thought into one of the big social questions of our time, so I wrote an intentionally controversial list of questions about the debate.

Since that time I’ve put some effort into learning about the topic, and the topic of institutions in general. I’m not one to leave my questions unanswered or unexplored for long.

One big theme in my original questions about gay marriage was how confusing it seems that modern gay couples are attracted to what is basically a stifling old religious institution. Why do they even want to be in this club? Why not start a better ‘modern marriage’ club?

But I ignored the social and economic realities of institutions. Having now studied in quite some depth the formation and cooperative effects of in- and out-groups I must reconsider my views. Expanding the institution of marriage to allow for gay marriage, rather than superseding traditional marriage with some alternative ‘new marriage’ institution, is likely to result in a far more harmonious outcome.

Consider the diagram below showing on the left a society with two sub-groups; those who choose ‘old marriage’ for straight couples only, and those who choose ‘new marriage’ that allows for gay couples. Immediately we create a group division within a country or region. On the right is the alternative reinvention of a more inclusive institution of marriage. 

The importance of this divide becomes clear when we consider that group divisions lead to group loyalties in matters unrelated to the group itself. Such a divide will create competition between institutions of marriage that will see other social issues become divided along the new / old marriage lines.

To be clear, when people have little knowledge of an issue they default to a view that reflects that of their groups. Don’t know whether a free trade agreement is good policy? What does your political party say about it? Our groups and institutions allow us to put aside reason and default to the standard expected response without having to think every issue through.

If you don’t believe me, take a look at this post explaining recent research into how alignment of loyalty in politics has captured alignment of loyalties on race issues.

By introducing a new institution to compete with the old institution we are asking for continued disharmony and conflict as more social issues become divided on the lines of new and old marriage.

“I’m an ‘old marriage’ person, I couldn’t possibly believe that internet censorship is a bad thing”

By reinventing the same institution instead, we gather together with a little dose of self-delusion, rewriting history and recreating marriage as a more inclusive institution. ‘Love is love’ we repeat to ourselves as we entrench the new normal into the collective consciousness.

The objective observer realises this is a big lie. Love is not love. Expanding marriage to include gay couples still excludes lots of love that we currently find socially unacceptable. We never thought that blacks could marry whites. Then we never thought gay couples could marry. Maybe one day the institution of marriage will allow for polygamy and sibling marriage (as it does in many places). It may sound ridiculous now, just as our modern views sounded ridiculous in the not-too-distant past. But the objective observer must see the institutional patterns this way, and for marriage it is one of expanding dominance across broader social spheres.

My big lesson from the past years of study is that in terms of internal harmony, reinventing our social institutions is often far better than introducing new institutions and the accompanying competition and conflict amongst them.

[Or maybe I am just defaulting to the view of my groups and have put aside reason]

Thursday, June 25, 2015

Dodgy rezoning, a summary

If you click the news link above you will see that my research on land rezoning decisions and the relationship networks of land owners has had a fair bit of coverage.

One thing I have learnt is that this type of quite technical research requires some effort to translate into bite-sized pieces for the broad media-consuming audience.

This post will be a reference point for the media and interested people that summarises the key findings and provides a couple of simple graphs and visuals that are not specifically included in the original research paper, but that can communicate the basic findings well.

What I did

I took a sample of landowners inside and outside rezoned areas in 6 locations in Queensland, where the statutory body, the Urban Land Development Authority (ULDA), took planning controls away from councils with the intent of increasing density, land values, and the speed of housing development.

In the maps below the blue disks are the landowners in my sample inside the rezoned area (black outline), sized by their land area, while the red disks are outside landowners in my sample. 

The logic of doing this is that these outside landowners could have been rezoned had the boundary of the areas been decided differently. From interviews with former public officials, and many others involved with planning decisions, it came to my attention that there is quite a bit of discretion about boundary decisions in zoning and that well-connected landowners often use their political clout to make sure the boundaries encompass their properties. The odd shapes of these areas are quite suggestive of this type of favouritism, as they have no apparent economic justification.

The sample is filtered to ensure I capture only undeveloped parcels that are at least as large as the minimum size rezoned parcel. There are 1,137 landowners owning 1,192 parcels in the sample.

Land characteristics or owner characteristics?

To see whether the characteristics of landowners likely to reflect political influence were a key determinant of these boundary decisions I collected data from both inside and outside landowners on the following:
  • Political donation activity 
  • Professional lobbying activity 
  • Membership of property industry lobby groups 
  • Corporate relationships through cross-directorships and ownership 
I also created a network of relationships that included the landowners using lobbyist-to-client connections, industry group connections, corporate connections, and
  • Connections from ULDA staff to their former employers 
  • Connections to politicians from their former employers 
The network had 13,740 entities and 272,810 edges.

I then modelled the effect of these characteristics on rezoning success, finding that being connected in the network increased the chances of rezoning by around 19%, and that getting into the most favourable part of the network gained an additional 25%. Employing a lobbyist improves your chances by 37%, even after controlling for all other factors. Political donations show no significant predictive power for land rezoning when controlling for these other factors.

Connected landowners owned 75% of the rezoned area, and only 12% of the land outside. 

In the graph above I have the proportion of connected and not-connected landowners that were rezoned, the proportion of landowners who don’t employ lobbyists rezoned versus the proportion of those who do, and the same with political donors. 

The size of rezoning value gains

Using 822 historical sales of development sites inside and outside the areas in the study I estimate the price deviation attributable to rezoning. Essentially the rezoning increased prices across all areas by 81% relative to the neighbouring sites outside the rezoned areas. The below graph shows the price deviation through time, and the big change that comes in the year of rezoning.

In all, the value to rezoned landowners was $710 million, of which connected landowners gained $410 million. In terms of the marginal gains to becoming connected in our sample, this was $190 million. While on a per hectare basis the mean gains are just $56,000, which isn't much, the sheer scale of the gains to a narrow group of people represents a problematic political transfer from the unconnected to the connected. It also suggests that many billions in value are regularly transferred to connected landowners through routine rezoning decisions.
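As a back-of-envelope check, the headline figures above fit together like this (a sketch using only the numbers reported in the text):

```python
# Back-of-envelope arithmetic for the headline rezoning figures.
total_gain = 710e6       # total value uplift to rezoned landowners ($)
connected_gain = 410e6   # share captured by connected landowners ($)
marginal_gain = 190e6    # marginal gain attributable to being connected ($)
mean_gain_per_ha = 56_000

# Connected landowners captured most of the uplift...
connected_share = connected_gain / total_gain
print(round(connected_share, 2))  # 0.58

# ...and the modest per-hectare figure implies a very large total area.
implied_area_ha = total_gain / mean_gain_per_ha
print(round(implied_area_ha))  # roughly 12,679 hectares
```

The point of the small per-hectare mean alongside the large totals is that the transfer is spread thinly over a big area but concentrated on a narrow group of owners.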

What to do

There are two main ways to stop this political favouritism - disrupt the favour exchange, and remove the honeypot.

Disrupting the favour exchange means requiring cooling-off periods for former politicians and bureaucrats, but also a policy of appointing independent people to boards and decision-making positions. Why does the ULDA need to have its board stacked with local developers? Why not have a planning expert from Europe? There is a myth that local expertise is somehow required in these regulatory positions, but most of the time you don't get expertise, just loyalty to mates.

Greater transparency from all our public registers would also go a long way. Why aren't land titles a freely available database? Why isn't ASIC's register of companies available for free? Why can trusts conceal who owns what so easily?

In terms of removing the honeypot, or reducing the value of discretionary political decisions, the obvious point here is to enact a process of selling additional development rights rather than giving them to selected landowners for free. This requires a pretty radical change in the way planning is viewed, but it makes perfect sense. Planning rules, including zoning, are part of the definition of property rights. You wouldn't give away a land area for free, so why give away land rights of a different kind for free?

Alternatives to selling development rights are betterment taxes, and land taxes, both of which are effective administrative alternatives.

We can also think outside the box and look at development timing fees. If the rezoning is intended to increase density and increase housing supply, why not provide incentives to build sooner rather than delay? A per-dwelling fee that increases every year in the rezoned area would encourage developers to bring forward construction and sales.
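A minimal sketch of how such a fee changes the delay incentive, with entirely hypothetical numbers (the $1,000 base fee, $1,000 annual step, and 200-dwelling project are my assumptions, not figures from the research):

```python
# Hypothetical escalating per-dwelling fee: the later construction starts
# in the rezoned area, the higher the fee per dwelling.

def fee_per_dwelling(year, base=1_000, step=1_000):
    """Fee payable per dwelling if construction starts in the given year."""
    return base + step * year

dwellings = 200  # hypothetical project size
for year in (0, 3, 6):
    total_fee = fee_per_dwelling(year) * dwellings
    print(year, total_fee)
# Starting immediately costs 200,000 in fees; waiting six years costs
# 1,400,000 - the fee schedule directly penalises land banking.
```

The design choice here is that the fee converts the option value of delay into an explicit, growing cost, pushing developers to bring forward construction and sales.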

Lastly, we can have local referenda on town planning changes to allow for representation of the interests of the politically unconnected. 

Saturday, June 20, 2015

Division of labour is the outcome, not cause

I’ve written twice now on the spurious ideas in the division of labour story as an underlying cause of productivity growth in economics.

First I questioned whether Adam Smith’s observation of a division of labour in 18th century pin factories made much sense, given that the 18 tasks required to make a pin were undertaken by 18 men in some factories, but only 10 men or fewer in others. Clearly even in this iconic case it was not the division of labour tasks into more specialist roles that was the cause of the rapid gains to productivity in pin factories.

Second, I made the comment that the division of labour story has become an endless repository of ad hoc explanations for productive gains. The story has even captured the imagination of archeologists and anthropologists who saw the invention of tool-making as “gearing up for a clearly defined division of labor.” Yet the simple logic of increasing the number of possible production tasks following the invention of certain tools would mean that each person could do more, rather than fewer, tasks, suggesting a rather strange interpretation of the division of labour. I also noted that the division of labour story rests on labelling conventions of roles in society rather than actual units of labour being devoted to fewer clearly defined tasks.

Here I want to be even more clear on this final point by presenting a minimal example of how roles and tasks can lead to confusion, and emphasise that productivity is the result of doing more tasks with fewer people - the opposite of specialisation.

Inspired by the ancient tool-making tribes of Jordan, my example is a 6-person tribe that undertakes 6 defined tasks, of which the two named roles undertake 3 each. Thinking in terms of roles there are 3 hunters and 3 gatherers, yet each person undertakes just one task within those roles.

Task     Role      Person in role
Track    Hunter    1, 2, 3
Collect  Gatherer  4, 5, 6

You might want to argue that the way I define tasks offers limitless ad hoc classification. Tracking an animal could be further divided into a team pursuit with specific sub-tasks for each member. Same with cleaning an animal. But this is kind of the point. Any defined task will be a bundle of sub-tasks. But in order to understand the division of labour we need to keep track of tasks at any one particular level of aggregation and not fall into the trap of calling something specialisation when it is just a different bundling of more tasks into one job.

One of the tribe members now invents the spear and woomera. Regular production of these tools requires 3 additional tasks to be undertaken by the new toolmaker role in the tribe.

Task     Role        Person in role
Track    Hunter      1, 2
Collect  Gatherer    3, 4
Make     Tool maker  5, 6

Now we have more roles and fewer people in each of them! Exactly as predicted by the division of labour story.

But if we instead look at the tasks, we have more tasks per person. Each hunter, instead of being able to specialise in one task, like tracking, now must undertake more than one task on average as there are only two hunters available for three tasks. The same for our gatherers. What we see as specialisation in roles is the automatic result, not cause, of increasing productive capacities.
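The tasks-per-person arithmetic behind the two tables can be made explicit (a minimal sketch of the example above):

```python
# Tasks per person in the two tribe configurations described above.
# Before the invention: 6 tasks, 6 people, two roles of three people each.
# After: 9 tasks (3 new toolmaking tasks), still 6 people, three roles of two.

people = 6
before = {"hunter": 3, "gatherer": 3}                     # tasks per role
after = {"hunter": 3, "gatherer": 3, "toolmaker": 3}

tasks_per_person_before = sum(before.values()) / people
tasks_per_person_after = sum(after.values()) / people
print(tasks_per_person_before, tasks_per_person_after)  # 1.0 then 1.5
```

More roles, fewer people per role, yet each person is now responsible for more tasks on average - which is the reversal of the usual specialisation story.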

What has happened is that the invention of new production techniques has allowed more tasks to be undertaken by each person leading to fewer people in each role. It is not a case of dividing labour in a way so that each person completes fewer tasks, each requiring less training, in order to increase aggregate output, as is often argued. The most productive countries are not full of people doing repetitive narrowly defined non-skilled tasks, but highly educated people doing ‘specialist’ roles involving a hierarchy of complex and interrelated tasks that require years of training to master.

You might still be thinking that the joint production function from specialising on the task that each person has a relative advantage in rescues the division of labour story. Crusoe catches fish, Friday gathers coconuts, and their combined output can be greater than if they individually produced what they consumed. But the simplification in this story ignores the possibility that if Crusoe and Friday gather coconuts together, then fish together, that the complementarities in joint production for each task might increase their combined output by more than if they specialised and worked alone.

Maybe catching fish involves a number of distinct tasks that, when shared between them, would increase their output beyond twice that of the most productive of the two men. Would that also count as division of labour? If so, then it appears that any joint production can be labelled as a division of labour without offering any insight as to why one division is better than another. What Crusoe and Friday actually need to grow their economy is for each to achieve more tasks in a given amount of time.
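A sketch with invented numbers makes the complementarity point concrete (all output figures here are hypothetical, not from any source):

```python
# Hypothetical daily outputs for Crusoe and Friday, illustrating that
# complementarities in joint production can beat specialisation.

# Specialised and working alone:
crusoe_fish_alone = 10
friday_coconuts_alone = 10
specialised_total = crusoe_fish_alone + friday_coconuts_alone  # 20 units

# Working together on each task in turn, with an assumed complementarity
# bonus (e.g. one spots fish while the other nets them):
joint_fish = 12       # half a day fishing together
joint_coconuts = 12   # half a day gathering together
joint_total = joint_fish + joint_coconuts  # 24 units

print(joint_total > specialised_total)  # True under these assumed numbers
```

Whether specialisation or joint work wins depends entirely on the size of the assumed complementarities, which is exactly why the Crusoe story cannot rescue the division of labour as a general causal mechanism.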

To cap off, the division of labour story to me is just an observation about the human roles in large scale production. It is not a causal story for increasing productivity. Productivity growth instead comes from labour bundling - giving more tasks to each person - which is the way to increase output.

Thursday, June 18, 2015

Endless repositories of ad hoc explanations

Division of labour. It’s a thing. A big thing in economics.

But like many core economic concepts[1] it is mostly an endless repository of ad hoc explanations.

In my last post I showed how Adam Smith’s observations of production techniques in 18th century pin factories led him to make many contradictory remarks about the division of labour. Yet the implicit argument of a single causal mechanism leading from more division of labour to greater productivity has become gospel, filling pages of economics texts for a century.

Let’s take a recent example to show exactly why the division of labour is such a slippery concept.

The title, and indeed much of the text reporting the discovery of new stone tools from Jordan, imply that these tools are somehow evidence of the dawn of division of labour.
These toolmakers appear to have achieved a division of labor that may have been part of an emerging pattern of more organized social structures

They were gearing up for a clearly defined division of labor, including firewood gathering, plant gathering, hunting and food foraging.
But the invention of these tools probably led to less division of labour rather than more!

Let me digress for a moment. Being clear about a division of labour story requires being clear about the counterfactual world of undivided labour. It should not involve counting up the labelled roles within organisations, as larger organisations will have more uniquely named roles for each employee purely due to size.

To really narrow down we need to think of the tasks that need doing for basic production of life’s necessities, then make everyone do all of them AT THE SAME TIME IN THE SAME ORDER.

That’s right. That is the counterfactual of undivided labour. A tribe of people who all wake up at the same time, collect berries from the same location at the same time, start a fire at the same time, each hunt the same animal alone at the same time, cook the animal at the same time, and build a hut at the same time. Also procreate at the same time.

The reason all of this has to occur at the same time is because if it doesn’t then we have naturally a division of labour already. I hunt animals while you collect fruit. Though tomorrow you may hunt and I may collect berries. Turn-taking is quite clearly not the fundamental idea behind the division of labour, yet I often get the feeling the term is used to capture this idea.

More important is the question of complementarities in joint production, like hunting in groups. I track, you kill. Division of labour.

But if we invent the spear I am able to more effectively hunt AND kill myself than in a team of specialists. Two tasks by one person becomes more efficient. The hunter becomes an anti-specialist, accomplishing more tasks than before.

Even in the iconic pin factory, the story of specialisation can be seen in reverse. Modern pin factories require very few people to do all the 18 tasks required in Smith’s time and in far greater quantities. The specialist maker of a single pin production stage has been replaced by a generalist who controls the whole process, including, no doubt, more advanced packaging.

The division of labour story about productivity is mostly a story about naming conventions for roles in society rather than the tasks achieved in those roles. I am an engineer, you are an accountant. Mostly, though, we both use spreadsheets to add numbers, make phone calls, type sentences, maybe drive a car. In fact from a task perspective rather than a role-in-group perspective, there is almost no specialisation. The majority of tasks are the same. But with more people we define each other’s roles more precisely. The ‘division of labour’ story unravels into a ‘large groups can accommodate more named roles’ story.

In modern lives as a whole there are far more tasks we each undertake. Rather than each doing fewer tasks better, we are all doing more than ever thanks to technologically superior capital. The diversity of our own life experience is far higher than ever. More so, those countries with more diversity, rather than specialty, are routinely the most economically advanced.

Returning to the ancient stone tools of Jordan, the invention of those tools would allow the tribe as a whole, and each member within it, to achieve a greater variety of tasks than ever and expand their production possibilities. But if the number of tribe members stayed the same, no additional specialisation of tasks could take place. Each unit of labour would accomplish a more diverse variety of tasks than ever, which is what expands the production frontier.

To really make the point, imagine a tribe of 50 people can undertake 100 productive tasks. Then with the invention of new tools, the number of possible tasks the tribe can undertake expands to 125. Clearly, this means the average tribe member is doing 2.5 instead of 2 tasks each - the opposite of what you'd expect from a division of labour story.
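The arithmetic of the 50-person tribe is simply:

```python
# Tasks per person before and after the invention of new tools,
# using the figures from the paragraph above.
people = 50
tasks_before, tasks_after = 100, 125
print(tasks_before / people, tasks_after / people)  # 2.0 then 2.5
```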

So what exactly is this phenomenon of divided labour about anyway? To me it is mostly a sign of our level of ignorance about human coordination in groups. When looking at why one company, one country, or one historical period was different from the others, the idea of a division of labour is a catch-all term that means ‘something to do with cooperation in groups’, and without further refinement, is an endless repository of ad hoc explanations.

fn[1]. I’m thinking here of concepts like capital (including human), technology, utility/preferences. The phrase 'endless repository of ad hoc explanations' comes from this paper.