Monday, June 25, 2012

Land boom ruins productivity measure

Article first appeared at MacroBusiness

Even though more words have been written about Australia’s productivity performance than most other economic issues, I have learnt very little about what our productivity trends really mean.

Recently, the RBA tried to unravel the mystery. My wise colleagues at MacroBusiness have often penned their interpretation of events.

To throw a little more confusion into the mix, the RBA’s D’Arcy and Gustafsson note that
...there is considerable measurement error in the estimates of productivity growth making it difficult to be precise about the timing of changes in the underlying trend; and productivity growth is the result of the interaction of many fundamental and proximate factors.
Technological, structural and regulatory changes, as well as cyclical variation in factor utilisation, can all affect measured productivity, making it very difficult to identify and disentangle the various effects.
But we are given a hint at the important conceptual basis of the thing we refer to as productivity. 
Conceptually, economists often view technology as determining the productivity ‘frontier’; that is, the maximum amount that could be produced with given inputs.

Factors affecting how production is organised, including policies affecting how efficiently labour, capital and fixed resources are allocated and employed within the economy, determine how close the economy is to the frontier. Trend productivity growth is then determined by the rate at which new technologies become available—how fast the frontier is expanding and the rate of improvement in efficiency—as well as how fast we are moving to the frontier.
It all sounds very theoretical, but the reality is simple. The chart below shows the two key measures of productivity since the 1970s, and the declining multifactor productivity (MFP) that has attracted so much attention. Labour productivity growth has remained positive, if a little lower than the historical average.




There are two questions I will answer in this article: 
  1. Why is labour productivity growth historically low? 
  2. Why has MFP growth been negative for the past decade? 
To answer the first question we need some perspective about whether Australia’s performance is abnormal compared to other nations. If not, then I suggest there is little that can, or should, be done.

The Productivity Commission has some good figures on our performance against other comparable nations. It seems that our productivity performance is... wait for it... actually pretty good, and fairly stable relative to the US and EU. Comparing GDP per hour worked, the fundamental measure of labour productivity, Australia made gains on the EU15 during the 2000s, and lost just a little ground against the US up to 2007. The chart from the RBA below clearly shows that we were middle of the road for productivity growth in global terms.


So why then would labour productivity be historically low across the world? Mostly, it has to do with significant structural declines in unemployment. Typically the least productive people, those with few skills to utilise capital effectively (to ‘leverage’ their work with the help of machines, computers, tools and so on), are the last to be employed during periods of strong growth, and the first to lose work during economic contractions. Thus the expected outcome is that during boom periods of declining unemployment, labour productivity will be biased down by these new workers, compared with a situation of flat unemployment. We should also expect that during periods of increasing unemployment labour productivity surges again. When the least productive one percent of the workforce is laid off, production usually declines by just a fraction of that one percent.
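The layoff arithmetic above can be sketched with some made-up numbers (the 30% relative output of the marginal worker is purely an assumption for illustration):

```python
# Illustration of the compositional bias: laying off the least
# productive 1% of workers raises measured labour productivity.
workers = 100
avg_output = 1.0
marginal_output = 0.3  # assumption: marginal worker produces 30% of the average

total_output = (workers - 1) * avg_output + marginal_output
prod_before = total_output / workers  # 99.3 / 100

# Lay off the marginal worker: output falls by only 0.3 units,
# while labour input falls by a full 1%.
prod_after = (workers - 1) * avg_output / (workers - 1)

print(round(prod_before, 3))  # 0.993
print(round(prod_after, 3))   # 1.0
```

Measured productivity rises even though nothing about the remaining workers or the capital stock has changed.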

In addition, much of the mainstream productivity discussion is dominated by the influence of mining and infrastructure, the two industries with the largest declines in productivity. The basic arguments are as follows.
  1. As widely noted, we have a ‘wall of wire’ problem in much of our basic infrastructure. This simply means that the honeymoon period of relatively new electrical, phone, water, and waste infrastructure is over, and major maintenance and capital expenditure is becoming more frequently required to deliver the same service. 
  2. In mining, a sector showing substantial productivity declines in recent years, we have the situation where “rising minerals prices meant low-productive mines were profitable, and thus the extraction of minerals from those mines actually assisted in lowering the sector's and the economy's productivity” 
These arguments are both true, and apply to those sectors in terms of both labour productivity and MFP. Another major factor is unpredictable seasonal variation in agricultural output. 

So what of our MFP performance?

If we have been tracking fine in terms of labour productivity, the only truly meaningful and useful productivity measure, the one that reflects the benefits from economic growth, why the dismal pattern of MFP, the measure most economists prefer to fuss over? And why do they prefer it anyway, when labour productivity is the one that matters?

As noted in the RBA report, economists believe that Total Factor Productivity (TFP), or Multi-factor Productivity (MFP), measures changes in technology and market structure that enable the ‘production frontier’ to shift outwards. But when the idea of MFP was originally put forward, it was known as the Solow Residual, because it is “the part of growth that cannot be explained through capital accumulation or the accumulation of other traditional factors, such as land or labor”. Essentially, it is the bit left over after we measure all the inputs and outputs of the economy. Economists thought they might call it ‘technology’ or ‘productivity’ because it appears to measure our ability to get something for nothing.

But in reality, it is capital accumulation that almost exclusively improves labour productivity and the scale of our per capita productive capacity. Having more, and better, machinery, buildings, infrastructure networks and other capital equipment is what enables each person to be more productive. Using better machines, for example, can improve how many metres of road can be laid by a small team of workers in one day, and the quality and durability of the resulting surface. As the economy accumulates capital, all parts of production require less labour per output. It is one of the main reasons the agricultural sector requires such a small workforce. If I haven’t repeated myself enough already, it is capital accumulation that explains almost all the improvement in labour productivity (for example, see here).

To recap, labour productivity is simply a measure of output, usually GDP, divided by labour input, whether in employed persons, working hours, or population. MFP is a measure of output divided by the combined inputs of labour and capital, including land. I use the term productivity to mean MFP, and will explicitly say labour productivity when referring to that measure.
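A rough sketch of the two definitions, with entirely hypothetical figures and a Törnqvist-style share-weighted input aggregate standing in for the statistical agency’s methodology:

```python
# Labour productivity: output per unit of labour input.
gdp = 1500.0   # output, e.g. $bn (hypothetical)
hours = 20.0   # labour input, bn hours (hypothetical)
labour_productivity = gdp / hours  # 75.0 per hour

# MFP: an output index divided by a share-weighted aggregate of
# labour and capital input indices (Tornqvist-style weighting).
output_index = 1.08     # hypothetical growth over the period
labour_index = 1.05
capital_index = 1.10
labour_share, capital_share = 0.6, 0.4  # assumed factor income shares

combined_inputs = labour_index ** labour_share * capital_index ** capital_share
mfp_index = output_index / combined_inputs  # roughly 1% MFP growth
```

Note that anything inflating the capital input index, including revalued land, mechanically drags the MFP index down.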

To answer the question of why we have experienced declining MFP, we have to think about what can cause a divergence between the two productivity measures. MFP is the result of dividing output, measured by GDP, by the sum of labour and capital inputs. So either we are using our capital less efficiently, requiring more new capital for each improvement in output (diminishing returns to capital), or we have some kind of measurement anomaly in the estimation of the balance of capital assets. Indeed there may be some diminishing returns to capital effect, but after investigating this anomaly I found that falling MFP is substantially the result of the estimates of land prices in the measure of the capital stock.

The culprit is hidden deep in the ABS release 5204.0 System of National Accounts. Back in 1999 the methodology for estimating MFP changed. One critical change was the inclusion of non-agricultural land in the capital stock.
...the scope of capital inputs has been changed to include the capital services of livestock, intangibles and non-agricultural land and to exclude ownership transfer costs.
The ABS believes that the exclusion of non-agricultural land biased the measure of MFP downwards in the past. But this only applies where the value of land assets grows with inflation. When land value growth significantly exceeds inflation, which has especially been the case since 2001, the capital stock component in the denominator of the MFP calculation increases without any corresponding increase in productive capacity. Theoretically, the inclusion of land is very odd, since the quantity of land is fixed in any case.

The ABS explains that it takes the balance sheet value of land from the national accounts as the land component of the capital stock. We can observe in the chart below the rise in the balance sheet value of land against the estimate of MFP, and against an estimate of the land balance if land values had simply tracked inflation. Quite clearly, from about 2002 onwards the abnormal increase in the value of land led to a flattening and falling estimate of MFP. More telling is that the fall in land asset values in 2009 led immediately to an increase in the MFP measure, only for the next wave of land price escalation, especially FHOB-stimulated residential land, to cause a deterioration in MFP during 2011.
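The mechanical effect of a land revaluation on measured MFP can be shown with a toy calculation (the additive input aggregate and the 60% price rise are simplifying assumptions for illustration, not the ABS method, though the direction of the bias is the same):

```python
# Toy MFP calculation: hold output, labour and produced capital fixed
# and let only the land component of the capital stock revalue.
output_index = 1.0
labour_input = 1.0
produced_capital = 1.0

def mfp(land_value):
    # crude additive input aggregate for illustration only; the ABS
    # uses share-weighted capital services
    return output_index / (labour_input + produced_capital + land_value)

land_cpi = 1.0    # land balance had prices merely tracked inflation
land_boom = 1.6   # land balance after a 60% real price rise (hypothetical)

print(mfp(land_cpi))   # baseline measured MFP
print(mfp(land_boom))  # lower measured MFP, with no change in output
```

Nothing about actual production changed between the two cases; only the valuation of a fixed asset did.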




We can dig a little deeper into the ‘land balance sheet’ in the system of national accounts, and look closely at the type of increases in land value estimated. The chart below shows in blue the neutral holding gains - that is, the change in the value of land expected if prices tracked inflation. In red we see the real holding gains, which are market-based increases in land values. As the ABS notes, “Holding gains and losses accrue to the owners of assets and liabilities purely as a result of holding the assets or liabilities over time, without transforming them in any way”. In economic terms, they are pure rents.

When red is greater than blue, we find a significant downward bias in the MFP estimate. It is really that simple. And we are not alone in this either. Spain experienced a similar pattern of declining MFP during its land price boom in the early 2000s. 
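The neutral/real decomposition works like this, using hypothetical balance sheet figures:

```python
# Decompose the annual change in the land balance into neutral and
# real holding gains, following the ABS definitions.
land_open = 1000.0   # opening balance, $bn (hypothetical)
land_close = 1180.0  # closing balance, $bn (hypothetical)
inflation = 0.03

neutral_gain = land_open * inflation  # gain had prices tracked CPI: 30
total_gain = land_close - land_open   # 180
real_gain = total_gain - neutral_gain # market revaluation: 150

# real_gain > 0 is the "red above blue" case: the capital-stock
# denominator grows faster than inflation and measured MFP is biased down.
print(neutral_gain, real_gain)  # 30.0 150.0
```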




Let us wrap up by summarising the key points from this analysis. 
  1. Australia is not performing abnormally low by international standards in productivity growth. 
  2. Labour productivity is the most important productivity measure and improves almost entirely through capital accumulation. 
  3. Labour productivity is usually biased by changes in unemployment. Reduction in unemployment results in a downwards bias as new labour is employed before capital can be produced to help the expanded workforce produce more effectively. 
  4. Multifactor productivity is the bit left over after adding up all the economy’s outputs and subtracting all the inputs. It typically captures compositional changes in goods produced. 
  5. Multifactor productivity has fallen mostly because the denominator of the productivity equation has been so heavily influenced by inflated land prices across all sectors since 2002. I expect if the slow melt in land prices continues we will see a 'surprising' recovery of the multifactor productivity measure in the coming years.

Tuesday, February 21, 2012

Ridiculous debates on funding health care in Australia


There is a detailed academic debate surrounding health care funding and provision in Australia (as there is globally). 

But the debate is clouded by observer bias – by the relatively wealthy senior academics, government officials, and consultants, who provide the analysis in the policy-making environment.  If everyone involved feels burdened by their own choice to send their children to private school, or irrationally chooses private health insurance because of a culture of social class bias, what type of policy recommendations can you expect?

Consider the King-Gans argument, based on a theoretical micro-economic model of health insurance developed in this paper.  The model assumes individuals with perfect knowledge of future health care needs, which leads to a massive adverse selection problem for the private health insurance market.  After a few pages of intellectual mathematical masturbation, the conclusion follows that –
…those in society who are most likely to be ill will ‘opt out’ of public insurance and purchase private insurance. The public health insurance will only be used by those in society who are healthiest (i.e. least likely to become ill). The high-risk individuals are made worse off by the public insurance because they are required to cross-subsidise the public insurance of the low-risk individuals through the tax system (my emphasis).
This conclusion completely contradicts the current reality in Australian health care, whereby the public system typically deals with the most serious cases of trauma and chronic illness.  While the model conclusions are laughable, the policy implications that follow from this model outcome are being taken seriously. It's a worry.

As far as the details of the model are concerned, we all know the best measure of a model’s usefulness is how well it predicts outcomes.  The King-Gans model requires the following assumptions to come to the conclusion that a mix of private and public insurance will result in higher cost to the most ill people, because only the most ill will choose private health insurance –

1.     Adverse selection of private health insurance by a set of people who happen to know in advance their future health care needs and probability of illness
2.     People value actuarially fair insurance.  That is to say, they value insurance only as a tool for spreading health costs over their lifetime.
3.     They assume a perfectly competitive private insurance market.  While the market is good, I would be hesitant to go that far, but it is probably okay.
4.     They assume there are not ‘too few’ people with a high risk of being ill who know their risk in advance.
5.     They assume identical income for all individuals at a level that justifies PHI for those individuals who know their risk of illness, and subsequently their expected future health costs, in advance.
6.     They ignore all externalities associated with health care.

For me, apart from the above list of possibly unrealistic assumptions, the analysis ignores the critical fact that risk averse wealthy people will both choose private health insurance (PHI) AND preventative health care through healthy living choices.  PHI is a complement to better voluntary health choices, so we should expect that private patients are typically the healthier members of society, and the publicly funded public hospitals will be treating the most ill patients.

This paper, by Buchmueller et al., finds empirical evidence that contradicts the result of the King-Gans theoretical model.  They find that there is beneficial selection of PHI in Australia, meaning that the healthiest people tend to be privately insured.  May I suggest this conflict between empirical ‘reality’ and theoretical abstraction is the result of ignoring the above concept of complementarity of PHI and healthy lifestyle choices.

The conflicting result is also a product of ignoring the relationship between income, health, and PHI.  Higher income people typically both value health more highly (since they can, even though it may not be as high a share of their income as for many not-so-wealthy individuals), and are incentivised to be covered by PHI through tax rebates.

The second major shortcoming of the King-Gans model, and the analysis of health care funding and insurance more generally, is that it ignores the fact that end of life health costs are unavoidable.  Death is usually a costly process.  Thus, there is no way to adversely select health insurance for these ‘death costs’ in advance.

Third, many PHI covers are incomplete.  That is, when a health service is claimed against the insurance cover, the reimbursement is a fraction of the total cost, and there are still out-of-pocket costs for the patient.  Thus, for anyone expecting to require a lot of non-elective medical care, public health provision may be the rational choice, given the risk-adjusted premiums for the most comprehensive PHI cover.
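A toy expected-cost comparison shows why, under incomplete cover, the public system can be the rational choice for the very ill (all figures hypothetical):

```python
# Toy annual expected-cost comparison for a patient anticipating
# heavy non-elective care (all figures hypothetical).
premium = 2500.0           # risk-adjusted PHI premium
expected_claims = 20000.0  # anticipated treatment costs
reimbursement_rate = 0.75  # PHI reimburses a fraction of each claim

out_of_pocket = expected_claims * (1 - reimbursement_rate)
phi_cost = premium + out_of_pocket  # 2500 + 5000
public_cost = 0.0  # same non-elective treatment delivered publicly

print(phi_cost)  # 7500.0
```

The sicker the patient expects to be, the larger the out-of-pocket gap grows, pushing exactly the high-risk individuals toward the public system, the opposite of the model’s adverse selection story.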

Fourth, PHI is incomplete in terms of scope of medical coverage.  Emergency departments, for example, are typically the domain of public hospitals, treating public and privately insured patients at public cost. 

In all, these overlooked considerations render the theoretical outcomes of the King-Gans model inaccurate, and the policy implications that follow from it to be counterproductive.  The reality of beneficial selection of PHI means that wealthy sick individuals are provided a premium health insurance service, because their insurance funds are pooled with a selection of healthy people, to offer an attractive way of making elective health services more affordable.

While King and Gans acknowledge some of these shortcomings, they stand by the conclusions of their model, which feed into their policy recommendations.  Given their academic reputations in the policy-making community, this is dangerous territory.

Finally, my personal gripe with the paper, and the policy agenda being pushed as a result of this (and similar) analysis, is that it ignores the social agreement that has evolved into the current provision of a pooled national health insurance program.  When PHI coverage was over 70% in the 1970s, reforms were aimed at generating a more equitable and broader system of public health coverage.  The very nature of a mostly private health care system is that the unhealthy poor have both the highest need, and the least ability to pay for health services.

The basic philosophy of public health care is that national insurance, funded in a progressive manner through the tax system rather than through actuarial risk-reflective premiums, is provided by publicly run enterprises to fulfil equity considerations (for example, geographically equitable access to health care) and provide broad external community benefits from having a healthier population.

It is also easier to evolve a publicly run system of hospitals and health services to become a platform for implementing auxiliary health policy goals, such as vaccination programs, education programs, doctor training programs, and research.  The current system provides, on the whole, quite a reasonable combination of the benefits of both a private and public system, even if it does perpetuate notions of social class in Australia.

All things considered, the legislation currently being proposed to remove the income tax rebate for individuals covered by PHI may actually promote a more effective market for PHI to offer simple top-up coverage at reduced cost, and generate a welfare shift from the wealthy insured to the poor uninsured. Back in 2004, Dawkins et al. noted that the implementation of the 30% rebate for PHI in 1999 (following more subtle incentives in 1997 and the implementation of ‘lifetime’ health cover) made those on high incomes better off –
There is strong evidence that not only a larger number of households of higher income and socio-economic standings responded to the policy changes, but also they were more likely to have PHI even without the policy changes. These latter households enjoyed “deadweight benefits,” in the sense they needed no such benefits to purchase PHI to begin with. Given that households who took up PHI ought, by their revealed preference, to be better off, we can reasonably conclude that households with high income and socio-economic standings are the main beneficiaries of the policy changes.
As we have seen above, the idea that the ill rich are paying twice for health care is surely not at all representative of the present situation in the Australian health care system, and furthermore, policy advice derived from a bizarre theoretical model whose results oppose the empirical evidence should be carefully scrutinized with a dose of old-fashioned commonsense.

Sunday, February 12, 2012

US gas glut may dampen energy markets


Article first appeared at MacroBusiness

The US economy has shown some signs of stabilisation over the past few months.  For example, retail spending appears to be revisiting a growth path.  According to some commentators, the US economic green shoots appear robust and healthy, while I remain cautious about projecting anything more than muddling through, with a drawn-out grinding improvement in employment as the consumer debt burden is reduced by inflation, repayment and default.

The US ‘recovery’ is the result of many factors, including the relatively cheap US dollar (the US TWI is back at levels last seen in the mid 1990s), but importantly, and often overlooked in financial discussions, relatively cheap domestic energy fuel prices.  WTI crude is hovering around $100/barrel, still 25% down on the pre-crisis boom period.

But the key energy consideration for the US recovery, and future US political–economic trade policy, is the current domestic natural gas boom which has meant the US has shifted from gas net importer to net exporter.

Below we can see the recent rise in natural gas production, mostly as a result of the emergence of economical shale-gas extraction, for the past three years or so.  With natural gas comprising a quarter of domestic US energy needs, the scale of this boost in energy supply is significant.  Some have gone so far as to suggest that the natural gas boom, as a result of fracking and coal seam technology, is leading to a ‘new world energy order’.

We can also see that the US domestic spot price for natural gas has remained quite subdued in this period due to a combination of demand destruction, particularly during the financial crisis period between 2008 and 2010, and increased domestic supply.

The US government closely regulates natural gas exports, and any import or export of natural gas requires approval of the Department of Energy, as per Section 3 of the Natural Gas Act 1938 (approvals in progress are here).

Debate is now brewing over the direction of US energy policy in the treatment of export approvals for the natural gas glut, given the significant profits to be made from liquefaction and export to Asian markets.
As the Wall Street Journal notes:
The U.S. already exports some natural gas, mostly via pipeline to Canada and Mexico. A recent wave of export proposals by energy firms seeks to liquefy gas and ship it overseas in tankers.
U.S. natural-gas prices have fallen below $3 per million British thermal units, pushed down by swelling production that became possible with the advent of new drilling technologies. With prices so low, U.S. producers are eager to reach customers in other parts of the world, such as Japan, that pay three to four times as much as U.S. users.
Collectively, they want to ship out about 14 billion cubic feet of natural gas a day, roughly 20% of current U.S. production.
But some lawmakers on Capitol Hill are opposed to increased exports and are urging the Department of Energy not to issue the required permits. The department will use the findings released Thursday in making its decision.
The administration is reviewing the export proposals “to ensure they are in the best interests of American taxpayers,” Energy Department spokeswoman Jen Stutsman said.
One argument is that maintaining a tight limit on exports will keep the domestic price low, and domestic energy intensive industry more globally competitive:
 The estimate by the Energy Information Administration, which compared gas-price projections in given years with and without higher exports, appeared to bolster assertions by U.S. manufacturers that they could face stiffer prices for natural gas and lose a competitive edge over companies abroad.
“Higher levels of exports would certainly impact the manufacturing recovery that has been revitalized in the U.S.,” said George Biltz, Dow Chemical Co.’s vice president for energy and climate change. “Exporting too much natural gas simply exports well-paying U.S. jobs.”
US Energy Information Administration (EIA) analysis of price impacts from increased exports shows that, under various scenarios of high and low growth paths for shale-gas production and rapid or slow scale-up of exports, domestic price increases could be in the order of 15-35%, depending on the speed of development of export capacity.  See their chart of model results below.

A secondary concern has been whether exposing US domestic natural gas production to the global market will import price volatility due to global events.  If export capacity is high enough, this may be the case.  But if export capacity is limited by physical capital – the number and size of pipelines, and finite capacity of LNG conversion and loading facilities which would require a decade or so lead time to expand – then domestic prices will not capture global volatility, as high prices cannot be passed through to the domestic market.

The big question for US energy policy is how best to share the benefits of this new energy supply.  The way I see it, maintaining tight control on export markets keeps prices for domestic gas users at a globally low level, making a diverse range of US industries more globally competitive.  However, this benefit comes at a cost to the gas industry, which is being denied access and profits from global markets, such as Japan, Korea, China and India, where gas prices are significantly higher.

Indeed, allowing exports would in a way provide economic benefits to the destination countries, as the global price of LNG will be reduced as the increased supply comes on board.  For Australia, where the infant Coal Seam Gas (CSG) industry is being relied upon as a driver of economic growth, increased global competition in the gas market from US exporters may be cause to pull back some of their price forecasts in the short term. Last week, Fitch placed the entire sector on negative watch for this reason among a growing list of other negatives.
The current US debate is interesting through the lens of an Australian resource State, where almost all energy resources are developed for export markets.  It is surely inconceivable that Australian authorities would consider such regulation to ensure competitiveness among our own industry. (Although, as mentioned earlier, the ability of export markets to compete with domestic energy markets depends on the interconnection of the supply chain.)
It is worth keeping a close eye on how US energy policy plays out in the treatment of the shale-gas boom, and the potential implications for LNG markets globally.

Tips, suggestions, comments and requests to rumplestatskin@gmail.com + follow me on Twitter @rumplestatskin

Monday, January 30, 2012

Why not adopt NZ’s no-fault national insurance



New Zealand is a beautiful country with great people and a relaxed attitude. One factor that supports a relaxed attitude is New Zealand’s enduring national no-fault accident insurance scheme, which eliminates some of the perceived legal risks that can stifle innovation and new business, increase medical costs, and reduce private land utilisation.

I want to take the time to examine this important issue, by looking at the history and performance of the New Zealand scheme, in its various iterations, and compare this to the scope of State and national accident and health insurance provided in Australia. I will preface the discussion by noting that I am not a legal scholar, but an economist looking at the behavioural incentives provided by the legal framework.

So what is this no-fault insurance scheme I speak of?

As one of my Kiwi friends put it, imagine workers compensation for all accidents for all residents and guests in New Zealand. Yes, that includes Rugby World Cup players. In its latest incarnation, the scheme is governed by the Accident Compensation Act 2001, which created the Crown organisation the Accident Compensation Corporation (ACC) (although the legal structure is a little more complex than that).

Funding is provided through various payments, with accounts kept for different types of accidents, such as work injuries, non-work injuries, treatment injury (medical malpractice), and motor vehicle injuries – in all, a comprehensive national accident insurance scheme.

One key feature of the New Zealand scheme is that the legislation, by providing national insurance, removes the ability of accident victims to sue for damages even in cases where fault could be determined (apart from strictly defined exemplary damages). The origins of this important provision include concerns over the wasted legal effort, and sometimes impossibility, of proving fault in accident claims, and the lack of support for victims of no-fault accidents.

This is quite different from Australian tort law, where damages for negligence can be sought from those at fault, provided they meet a few criteria -
  • A duty of care must be found to exist. 
  • The duty of care must be breached. 
  • Damage or injury, not too remote, must result from the breach. 
The legal profession uses the term the ‘public liability and professional indemnity insurance crisis’ to describe the way insurance markets responded to ever increasing claims for damages with higher prices and less availability in Australia around the turn of the millennium. This has serious implications for social clubs and sporting organisations, medical professionals, and private landholders who allow third parties to conduct activities on their land (an angle I will revisit later).

The Americans have been eyeing off New Zealand’s national no-fault insurance scheme because it appears so remarkably affordable. Regarding medical ‘misadventure’, not only are the damages paid lower in New Zealand than under America’s tort system for claiming damages, but administrative costs make up only around 10% of expenditure, compared to 50-60% under the US tort system. While some criticise a no-fault system as a no accountability system, the reforms to New Zealand’s scheme have included accountability for doctors and disciplinary hearings.

There is even regard for New Zealand’s scheme because it automatically provides accident insurance in the case of terrorist events.

Of course, some have criticised the scheme as inadequate. One argument often presented, that doesn’t seem to hold water, is that:
...the losses from the elimination of the tort system go further than just removal of incentives to minimise loss. They also remove the information base from which monitoring activities can be designed and upon which education to prevent future loss relies. Both of these are vitally important factors in a system that is heavily dependent on overt monitoring to achieve a socially optimal outcome. (here)
However, having a central agency, if structured well, would greatly enhance the information base about risks and risk education. A haphazard arrangement of private insurers, and unpredictable court decisions would be a maze to navigate if one wanted to get an appropriate estimation of risk levels.

The critics of such a scheme are those you might expect to benefit from its abolition – the business community who would take over insurance provision, and the legal profession.
While certainly a minority, there is a vocal constituency, comprised largely of individuals from the business community, who seek outright abolition of the no-fault system and a return to the common law tort system. Chief among these critics is the New Zealand Business Roundtable, which considers the accident compensation scheme to be an “unjustifiable intrusion by the state upon individual freedom and decision making” and would like to see it disappear altogether. In 1998, the Roundtable proved successful, if only temporarily, in its goal of dismantling the accident compensation system with the enactment of the Accident Insurance Act. Bolstered by substantial support from the Roundtable, the Act signaled a significant policy shift toward contraction of accident compensation coverage. By suspending the ACC’s statutory monopoly on the administration of benefits and beginning the process of privatizing the ACC, the Act ended most mandatory insurance coverage until the statutory re-institution of the ACC’s monopoly in 2000.
This temporary setback was the product of a clash of political ideals and a shift in political power.

Don’t get me wrong. New Zealand’s scheme is not perfect, and neither are any other national or State regulations for the provision of accident insurance, whether at work, in motor vehicles, or during domestic and recreation activities. New Zealand’s scheme has a checkered history of changing scope of coverage, cost blow-outs, and changes of government and direction.

Cost blow-outs were quite dramatic in the early decades of the scheme, with expenditure growing eightfold in real per capita terms between 1974 and 2000. What this means in wage-adjusted terms I am not sure, since even with an identical set of claims and outcomes, the cost of the scheme will always grow at a rate closer to wage growth than to consumer price growth (income compensation is paid as a proportion of previous incomes, and medical costs are closely linked to labour costs – see here for more explanation of this principle).
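To illustrate the indexing point, here is a minimal sketch with purely hypothetical growth rates (not actual New Zealand figures): even with an unchanged set of claims, deflating a wage-indexed cost by the CPI makes the scheme look increasingly expensive in ‘real’ terms.

```python
# Illustrative sketch only: why a wage-indexed scheme outpaces CPI deflation.
# All growth rates are hypothetical assumptions, not actual NZ figures.

wage_growth = 0.04   # assumed nominal wage growth per year
cpi_growth = 0.025   # assumed consumer price inflation per year
years = 26           # roughly 1974 to 2000

# Cost of an identical set of claims, indexed to wages
nominal_cost = (1 + wage_growth) ** years

# Deflating that wage-indexed cost by the CPI makes the "real" cost
# appear to grow, even though claims and outcomes are unchanged
real_cost_cpi_deflated = nominal_cost / (1 + cpi_growth) ** years

print(round(real_cost_cpi_deflated, 2))  # → 1.46
```

So on these assumed rates, an identical scheme appears nearly 50% more expensive after 26 years of CPI deflation – before any genuine expansion in claims or coverage.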

Other criticisms question the moral hazard problem. Won’t everyone in New Zealand simply stop caring? Won’t businesses allow their patrons to take extravagant risks? Will compensating behaviour lead to outcomes far from optimal?

The answers to these questions are all simply no. Studies have shown that the risk of medical misadventure in New Zealand sits very close to the average of its peers (halfway between the UK and Australia, by the way). In terms of serious injuries to workers, New Zealand outperforms all Australian States bar Victoria on population-adjusted workers’ compensation cases.

In fact, the ACC has a role in preventing injury and has adopted the approach of being a provider of safety information and regulatory advice.

New Zealand’s system, while it appears quite radical, is inherently similar to Australia’s public health care and welfare system from the point of view of an injured person, in terms of health care, rehabilitation, and lifetime income support (although income support in Australia is at subsistence welfare levels rather than a portion of previous income as in the New Zealand scheme).

In the realm of motor vehicle and workplace accidents, Australia has similar national or State insurance/compensation schemes in place (or compulsory private insurance), and it wouldn’t be such a great leap to expand our insurance systems to cover all injuries. In fact, only about 25% of serious injuries are not covered by statutory insurance requirements.
Data has shown that 61% of catastrophic injuries in Australia come under the motor vehicle scheme, 13% are part of workers’ compensation, 11% are due to medical negligence, and 15% fall under public liability. (here)
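As a quick check, the two sets of figures are consistent: the categories falling outside the statutory motor vehicle and workers’ compensation schemes sum to roughly the quoted quarter.

```python
# Back-of-envelope check on the catastrophic injury figures quoted above
shares = {
    "motor vehicle": 0.61,          # covered by statutory CTP schemes
    "workers' compensation": 0.13,  # covered by statutory workers' comp
    "medical negligence": 0.11,     # generally pursued through tort
    "public liability": 0.15,       # generally pursued through tort
}

uncovered = shares["medical negligence"] + shares["public liability"]
print(f"Not covered by statutory insurance: {uncovered:.0%}")  # → 26%, close to the quoted 25%
```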
Some Australian States have also reformed tort law to cap damages claims and narrow the scope for damages. This has gone some way to reducing the civil matters that clog our court systems to the advantage of the legal profession.

After all this rambling, will you now tell us why you are so interested in this scheme?

Yes, thanks for interrupting me.

In everyday life I see liability insurance as a great hindrance to social and economic activities. Local sporting competitions are often burdened by the costs of insurance, which are often directly attributable to legal costs under the tort system, and expectations of damages claims. Adventure sport and tourism, New Zealand’s great exports, would have a much more difficult time if they could not pool their liability insurance across all the diverse activities occurring in New Zealand.

Even if our current system does restrict liability, the uncertainty and difficulty of interpreting legal accountability is itself a hindrance to emerging social and economic activities.

The NSW parliament has shown some interest in no-fault State accident insurance after the decision of the High Court to reinstate the award of $3.75 million in damages to Guy Swain, a man rendered quadriplegic after diving into a wave and hitting a sandbank at Bondi Beach in 1997 (here). Waves of ‘insurance crises’ have led to continual tort reform, but in 2005 Bob Carr announced that the government was working on a no-fault insurance scheme.

It is these types of legal and damages costs that Councils, community groups and landowners fear.
Doctors face hefty costs to set up their own practice, and the risk of medical malpractice suits often leads to over-medication and excessive treatment. Since medical costs in the public system are already paid for by the public at large, wouldn’t any measure to reduce costs be beneficial? In the US, these so-called ‘defensive’ medical treatments are estimated to cost around $45 billion per year.

Lastly, the liability of property owners for injuries from activities on their land does stifle economic activity. For example, would you allow an adventure sport race on your farm if you knew that there was a slight chance of being held liable for injuries to participants? Or would you allow campers on your land? What about paragliders launching and landing? I know from speaking with the paragliding community that liability concerns have made many landholders reluctant to continue to allow launching from sites on their property, and these same concerns extend to local Councils.

Removing the fear of legal action in the case of injury (whether real or perceived), by making all accident insurance compulsory and national, will help lubricate the economy by enabling those very small businesses and community groups to conduct activities they would otherwise fear to undertake.

Australia, and many other countries, are incrementally heading towards the basic principles of New Zealand’s national scheme, as eliminating the right to damages, except in exemplary cases, provides society-wide benefits.

I don’t see a sudden change in this direction, but with ongoing parliamentary interest, and recurring insurance crises, we may yet reach a point where the net effect of our complementary insurance schemes is much the same as New Zealand’s scheme, simply by default.

Any input from people who have been injured in New Zealand, or worked in this area of law would be greatly appreciated.

Wednesday, January 4, 2012

Living away from the tax man


This post originally published at MacroBusiness

The Commonwealth Treasury released a consultation paper last month on proposed reforms to Fringe Benefits Tax (FBT), in particular the tax treatment of Living Away From Home Allowances (LAFHA).
Apparently there are two major concerns leading to this reform. The first, though unstated, reason is to increase tax revenue, which is articulated more clearly in the Treasurer’s statement:
Tax-free LAFHA claims increased from $162 million in 2004-05 to $740 million in 2010-11. The reform will raise taxes by $683.3 million over the forward estimates.
The second reason is described as follows:
A particular concern is the growing use of the concession by employers (including through labour hire and contract management companies) to allow temporary resident workers coming to Australia to convert their taxable salary into a tax‐free allowance. This provides them with an unfair advantage over local Australian workers.
… An area of growing concern is the use of LAFHA by employers (including through labour hire and contract management companies) to attract temporary resident workers to Australia by including tax‐free LAFHA payments as part of their remuneration. These payments are effectively a re‐characterisation of taxable salary or wage income
…The changes will ensure a level playing field exists between hiring an Australian worker or a temporary resident worker living at home in Australia, in the same place, doing the same job.
Indeed, as the paper suggests,
The living‐away‐from‐home allowance is one of the most popular searched terms for workers on the 457 Visa.
So what are the changes?

Put simply, to claim accommodation and food expenses as a fringe benefits tax exempt LAFHA, the employee must maintain a home in Australia that they are living away from for work. This is a change from the current treatment, under which temporary visa holders (who, by definition, are living away from home, though not required to actually maintain one) need provide no evidence of living away from an Australian home. They, and their employers, are simply enjoying a tax advantage over local workers.

The second change tightens the qualifying criteria to ensure that any FBT-exempt LAFHA reflects the actual costs incurred. The table below summarises the proposed new tax treatment.


We know that this will close a $683 million tax loophole. But just how many people could have their working conditions affected by the change? And what flow-on effects can be expected?

First, temporary residents on 457 visas are the key group able to access LAFHA tax advantages, and will be the main group affected by the reforms. We can see in the table below that there has been a significant boom, and subsequent tapering, in 457 visa numbers since the mid 2000s.

Currently, the approximately 110,000 457 visa holders make up around 1% of the workforce, but this is expected to grow over the coming years, as reforms implemented in November allow for a more streamlined visa approval process and longer stays of six years.

I imagine the likely flow-on effects to be most acute in the real estate markets of mining-boom towns.  The new incentive means that domestic and foreign workers may prefer fly-in fly-out arrangements, as relocating becomes less attractive in after-tax terms, even if the cost of employment to the company is the same.

In terms of rental markets nationally, this will take a tiny slice off demand at the margin, as this foreign workforce, occupying around 3% of rental homes, will be spending its now-taxed income on accommodation less enthusiastically. The top end of the city markets will see much of this change.

The big question politically is whether this will have much of an impact on the demand for overseas skilled workers, and whether it will encourage the education and training of local workers by industry groups for their own future benefit. I can’t see there being a major impact, but the uproar from the employment agent industry seems to suggest that the impact on the ease of attracting foreign workers may be significant. At any rate, these reforms will provide a better set of incentives for local skill development.

Tips, suggestions, comments and requests to rumplestatskin@gmail.com + follow me on Twitter @rumplestatskin

Monday, December 12, 2011

BRICs can't hold the wall


This post first appeared at Macrobusiness

Not a day goes by without some economic commentator noting the ‘decoupling’ of economic growth between the developed and emerging economies. Europe looks almost certain to have a 2012 recession, and the US continues to have doggedly high unemployment with very soft conditions all round. Brazil, Russia, India and China – the BRICs – are meant to be taking up the slack, especially through domestic demand in China, to keep global growth on track.

But just as my suburb has a hard time decoupling from the rest of the city with which we trade our goods and services, the BRICs are nestled solidly in the layered courses of the integrated global economic masonry.

In the least reported economic news of the month, Brazil’s GDP was flat in the September quarter. During 2010, Brazil’s economy grew 7.5%, while the year to September 2011 saw a mere 2.1% growth, with an obvious trend – down. I am not sure what is on the horizon to turn this around for Brazil, apart from a surprise boost in economic activity in Europe and the US (chart from here):
India was expected, much like China, to continue to grow at double-digit rates, but has just recorded its slowest GDP growth in two years, at 6.9%. In particular:
The manufacturing sector, which contributes nearly 16 percent of GDP, grew at a measly 2.7 percent, down from 7.8 percent a year ago. Agricultural output rose an annual 3.2 percent for the same period, down from 5.4 percent a year ago.
The worst hit was mining, which showed a decline of 2.9 percent after growing by 8 percent in the same period last year.
Here is how Indian GDP growth has looked since 1990, with the September annual measure marked in addition to the annual measures:
The trend path for growth in India looks to be heavily influenced by external factors. Of course, exports make up about 20% of the Indian economy, so one must take external conditions into consideration. Further, around 32% of the Indian economy is involved with capital investment, which can easily be scuttled should prevailing global market conditions render the financial returns unattractive.

Russia is bucking the trend a little, with GDP growth up from 3.4% annual in the second quarter, to 4.8% over the year to September 2011.  Currently this appears stable, but relatively slow.

Which brings us to China. Well, I’m not sure what more I can add on China, but the image below shows that growth remains high, though it doesn’t appear to be taking off in a hurry. Annual growth has been steadily falling for the past 18 months, from 12% in early 2010 to 9.4% in the latest data:
I am not one to criticize a bit of economic stability, and the higher rates of growth on a lower base do mean that the BRICs’ share of global economic expansion is relatively high. But with these levels of growth I can’t see how the BRICs are meant to support the waning demand of the much larger western economies. In terms of shares of the global economy, Japan is 8.75%, the US 26%, and the EU15 26%. For the BRICs, Brazil is 2.4%, Russia 1.9%, India 2.3% and China 7.5% – or 14.1% in total.

For some more perspective, the US military budget is about $1 trillion p.a., or half the size of the Brazilian economy, or two-thirds the size of the Russian economy.

Clearly, very strong growth rates from such a small economic base would be required to counteract even very moderate contractions in the developed world, and this is not visible in the trends we are seeing.  You can’t simply decouple the globally integrated economy while capital and goods are traded freely.
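A back-of-envelope calculation using the shares above makes the point. Assuming, purely for illustration, that the rest of the world is flat:

```python
# Rough arithmetic: how fast must the BRICs grow to offset a modest
# contraction in the big developed economies? Shares from the text above.
bric_share = 0.024 + 0.019 + 0.023 + 0.075   # Brazil, Russia, India, China = 14.1%
developed_share = 0.0875 + 0.26 + 0.26       # Japan, US, EU15 = 60.75%

contraction = 0.01  # an illustrative 1% contraction in the developed world

# Growth the BRICs would need just to hold global output flat,
# with the rest of the world assumed unchanged
required_bric_growth = contraction * developed_share / bric_share
print(f"{required_bric_growth:.1%}")  # → 4.3%
```

That is, every 1% contraction in the developed world eats roughly 4.3 percentage points of BRIC growth before the BRICs contribute anything to net global expansion.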

Tips, suggestions, comments and requests to rumplestatskin@gmail.com + follow me on Twitter @rumplestatskin

Tuesday, December 6, 2011

Economics of piracy



This post first appeared at Macrobusiness

Public debate over Internet piracy is riddled with contradictions and the fingerprints of vested interests. In the US, Congress is considering the Stop Online Piracy Act (SOPA), while in Australia an alliance of internet service providers had their proposal to crack down on piracy rejected by the Australian Content Industry Group (a group of music, software and games content owners). The proposal:
…would see an ISP provide one education notice, three warning notices and one discovery notice to customers alleged to have infringed on copyright by the copyright holder, and, if a customer continued to infringe after this, the ISPs would tell the rights holder, which may then decide to apply for a subpoena to get access to the customer’s details for legal action.
Given that the application of knowledge is the engine of economic growth, one must intuitively consider copyright and patent laws as a significant burden on growth (China?).  Moves to curb piracy, as many economists are keen to point out, generally reduce consumer welfare, and in many cases reduce the profits of content owners who benefit from platform effects.

A key misapplied economic idea in the piracy debate is that greater incentives ‘bring about’ greater supply – that somehow, without a massive payoff, people would stop inventing and creating. Imagine: people following their passions, even in something as obscure as blogging about economics, for their own rewards rather than monetary payment. History shows that creative contributions are independent of copyright and even patent protection.

Or, as The Economist magazine put it recently, “copyright theft robs artists and businesses of their livelihoods”. Given that regulations create markets, and market actors play by the rules of the game, this point is partly true. But it begs the question of whether markets provide better outcomes for both producers and consumers with or without copyright laws (or with evasion of those laws).

Consider the music industry. Recent research suggests that music piracy can be beneficial to the music industry as a whole, but not to those who are already superstars.
The effect of this is that piracy increases the diversity of music in the short run, and increases the supply of superstars in the longer run. In this sense, piracy is efficient, as it corrects a market imperfection.

This raises the possibility that opposition to file-sharing is strongest amongst those performers whose success depends upon their fame more than their ability.
In software markets, the ‘victims’ of piracy gain substantial benefits through platform effects (the result of the strangely named two-sided markets). The battle between mobile operating systems demonstrates how platform effects work. If Apple’s iOS is a platform for selling apps and music through online stores, Apple should be happy with piracy of its operating system, because the more ubiquitous its system, the more valuable its platform-dependent online retailing. Microsoft, especially with its Office suite, shows that being ubiquitous means cornering the market so that competitors cannot crack it. Everyone insists on perfect compatibility with Office because it is so common. And it is partly so common because of the degree of piracy, rather than of sales. Without piracy, these effects are greatly diminished.
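A toy model makes the platform-effect logic concrete. All the numbers and the revenue split below are hypothetical, and the sketch deliberately ignores any substitution of pirated copies for sales:

```python
# Toy model of platform effects: pirated copies enlarge the installed base,
# which feeds back into platform revenue (app store cuts, complementary sales).
# All figures are hypothetical, for illustration only.

def platform_revenue(paid_users, pirate_users, licence_price=100.0,
                     platform_margin_per_user=20.0):
    """Licence revenue comes only from paying users, but platform revenue
    scales with the whole installed base, pirates included."""
    installed_base = paid_users + pirate_users
    return paid_users * licence_price + installed_base * platform_margin_per_user

# With no piracy: 1,000 paying users
no_piracy = platform_revenue(1000, 0)

# With piracy: the same paying users, plus 4,000 pirate copies growing the base
with_piracy = platform_revenue(1000, 4000)

print(no_piracy, with_piracy)  # → 120000.0 200000.0
```

The point of the sketch is only that once complementary revenue scales with the installed base, pirate copies are not pure losses; whether they are a net gain depends on how many pirates would otherwise have bought a licence.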

Microsoft has even suggested that if you are going to pirate software, make sure it is theirs.
For some time, big software companies have tried to make the argument that a copy of pirated software is equivalent to a lost sale. This is pretty ridiculous for a couple of reasons. For one thing, there’s no reason to think that a given user of pirated software would have actually purchased a legitimate copy.
Furthermore, the argument ignores the fact that companies actually benefit in some ways from piracy, because a user of pirated software is likely to purchase software from the same maker at some point down the road. This latter point is something that even Bill Gates has admitted, even while Microsoft continues to talk tough about cracking down on piracy.
Now the company is stating more clearly that it knows there are some benefits to piracy. Jeff Raikes, head of the company’s business group, said at a recent investor conference that while the company is against piracy, if you are going to pirate software, it hopes you pirate Microsoft software. He cited the above reasoning, noting that users of pirated Microsoft software are likely to purchase from the company later on. He said the company wants to push for legal licensing, but doesn’t want to push so hard so as to destroy a valuable part of its user base.
The company recently got a stark reminder of this lesson when a school in Russia said it would switch to Linux to avoid future hassles with the pirate police. Of course, this moderate stance seems at odds with the company’s recent hyper-aggressive anti-piracy push, which resulted in many mistaken piracy accusations. Either way, Raikes’ comments completely destroy the line about pirated software being equivalent to lost sales; if it actually were, Raikes would be telling people to pirate the software of Microsoft’s competitors.
Google is probably the best example of a platform business.  Their Chief Economist Hal Varian has been known to comment that whatever new software Google produces, if it increases the attraction of the Internet, then it is good for Google.

Content piracy has improved our lives, with some very minor costs to a small group of content owners whose lobbying efforts have already resulted in the rules being changed in their favour (thank Walt Disney). Whether this trend persists in light of rapidly evolving content distribution methods, I am not sure.

Tips, suggestions, comments and requests to rumplestatskin@gmail.com + follow me on Twitter @rumplestatskin