30 December 2007

Two and a half millennia after Parmenides

Raymond Tallis has written an interesting article about the Pre-Socratic philosopher Parmenides in the latest Prospect, as a taster for his forthcoming book on the same topic.

The Pre-Socratics suffer from the fact that we have relatively little information about them, and that little of their work survives. They were also in the invidious position of having to invent philosophy from scratch, and of speaking to an audience unfamiliar with the flavour and purpose of philosophical thought. We should therefore not be surprised if their statements, while at times profound, are on the simplistic side.

There is a temptation to read more into Pre-Socratic utterances than is justified. It isn't possible to conclude very much from the fragments we have, beyond the fact that there were intellectuals in that period of history having interesting philosophical thoughts. Interpreting an individual's complete philosophical outlook from what survives has a tendency to move from speculation into fantasy. Whole academic papers are written on the question of whether some fragment of Heraclitus should be interpreted one way or the other. (The interpretation sometimes hinges on how a single word in a particular fragment should be translated.)

While Tallis makes some thought-provoking points, he falls into the same trap.

[In Parmenides' thought] human consciousness had a crucial encounter with itself. This was, I believe, a decisive moment in the long awakening of the human species to its own nature. From this self-encounter resulted the cognitive self-criticism, the profound critical sense that gave birth to the unfolding intellectual dramas of metaphysics and science that have in the last century or so approached an impasse.

Well, perhaps, and Tallis is not alone in wishing to make grandiose claims on Parmenides' behalf — Nietzsche did it, ditto a number of more recent professional philosophers. But it seems highly speculative. While Plato said nice things about Parmenides, this in itself is not conclusive proof that Parmenides was the essential foundation for Plato's thought, let alone the basis of his metaphysics, as some have suggested.

Parmenides' key idea is usually thought to have been that change is an illusion: ultimately, everything that exists continues to exist. This is an insight comparable to Heraclitus's dictum that you cannot step into the same river twice: a useful reminder of the 'boggliness' of reality, and of the fact that common sense concepts don't work very well when you try to analyse the fundamental nature of things. Notably, it also appears on the face of it to say the opposite of the point about stepping into a river. Conclusion? The Pre-Socratics had some interesting ideas, but it's questionable whether they amounted to full-blown philosophical theories, let alone scientific ones. Still, it is good to be reminded that the metaphysical debate about change versus continuity goes back to the 5th century BC.

Tallis is on more questionable ground when he starts to make claims about Parmenides' contribution to epistemology. While it was useful to state that knowledge comes from analysis rather than perception, and to draw a distinction between appearance and reality, as Parmenides did, Tallis surely goes too far when he says that

In his short poem, thought and knowledge encounter themselves head on for the first time. This is such a huge advance in self-consciousness that it is no exaggeration to call it an "awakening." …The pre-Socratic revolution in thought that Parmenides brought to its climax is, I believe, a more compelling epistemological break than any that Foucault claimed to discover in post-Renaissance humanism.

Tallis asks why this development happened when it did. His answers involve giving political developments priority over intellectual ones — a common move, but a speculative one, and based on the ideological assumption that individuals cannot make innovations without being prompted to do so by their social environment.

Why, hundreds of thousands of years after human beings woke to the outside world as an object of knowledge separate from themselves, did they awaken to knowledge itself? What was it that fostered this collision of human consciousness with itself, such that thought came to think about itself and knowledge inquired into its own basis? …
The pre-Socratic awakening was the result of a unique concatenation of circumstances in place by the 7th century BC. In his classic investigation The Origin of Greek Thought, published half a century ago, J P Vernant connects the pre-Socratic awakening with the rise of the polis, or city state. …
Another driver to the explicitness of thought that made the Parmenidean self-encounter of human consciousness more likely was the rise of cities. … There is one more important driver: writing. This is an extraordinary technology: it stores human consciousness outside of the human body.

Such assertions may seem axiomatic in a post-structuralist culture, but they are highly theory-laden. There is no less evidence for the opposite claim, i.e. that socio-political developments were inspired by philosophical ones. In following the standard line, Tallis comes close to the reductionist thesis that consciousness, or at least thinking about consciousness, is a product of social forces.

Tallis's article becomes more interesting at the end when it turns away from its ostensible subject, and talks about our present intellectual situation.

Over the last century, there has been a growing feeling that in crucial areas of knowledge, we have reached an impasse. For instance, the endeavour to turn the scientific gaze on our own consciousness has run into a brick wall. Although you wouldn't know it from the excitement surrounding brain science, we have made no progress in understanding how it is that we are conscious and are aware of being located in a world that we in part construct and in part encounter as a given. ...
Dismissing the importance of subjective experiences, or "qualia"—a common ploy among the champions of neurophilosophy such as Daniel Dennett—keeps the impression of progress alive, but this is cheating. Biological science—evolutionary theory and so on—is increasingly assimilating itself to physics, chemistry and mathematics. Gene-eyed evolutionary theory and the rise of molecular biology forge closer connections between the biosphere and what Richard Dawkins has called "the blind forces of physics." Not only does this deepen the tension between an objective understanding of ourselves as organisms and our sense of being conscious agents, it exposes the biological sciences to the difficulties our understanding of the physical world is encountering. At the apex of contemporary physics, we have two mighty theories—quantum mechanics and the general theory of relativity—which are incompatible. The attempt to unite the two theories in "superstring theory" has produced a sterile landscape of largely untestable theories … Quantum mechanics, as Richard Feynman repeatedly pointed out, is incomprehensible, for all its extraordinary effectiveness.

He concludes by making the curious suggestion that a solution to these problems can somehow be found in Parmenides’ original insights.

We need to return to the Parmenidean moment to see whether, without losing all the gains that post-Parmenidean thought has brought us, there might be another cognitive journey from that which western thought has taken.

Tallis is right to point to the impasse and sterility which many fundamental areas of science and philosophy have reached. Hanging this observation on the utterances of a minor philosophical figure of some two and a half millennia ago, however, seems a little contrived. A more persuasive connection between the two periods might have been made as follows, but would probably have been too ideologically incorrect for Prospect Magazine.

'We need to recognise that the revolutionary insights of the Pre-Socratics were made by independent thinkers operating outside an institutional environment, supported by private capital. If we wish to continue to make significant progress, we need to consider whether a different political route from the one which we have taken might be required to restore to us the cultural advantages of classical Greece.'

30 November 2007

The theory of the second-best

One of the few provable and useful results in economics is the one about markets producing an optimal outcome.

If all goods and services in an economy are traded via perfectly competitive free markets, the resulting outcome is efficient in the Pareto sense: there is no other possible arrangement of the available resources in which some would be better off and no one would be worse off.

By contrast, an “inefficient” outcome is one where the position of some can be improved without making anyone else’s position worse, e.g. where the benefits of exchange have not been fully exploited.

This is potentially a very useful result, because if we want to ensure that things are as good as they could be (ignoring redistribution issues) we need not first calculate everyone’s happiness under various different conditions. All we need do is set up perfectly competitive (PC) markets and let people trade amongst themselves. This is just as well, since it is impossible in practice to know what people’s happiness level is under different conditions, or to find out all possible preferences between different outcomes for every individual. We may not even need to do anything as active as “setting up markets” since they tend to develop spontaneously.
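The point can be illustrated with a toy exchange economy (all utility functions and endowments here are hypothetical, chosen purely for illustration): two agents simply trading at the market-clearing price arrive at a Pareto-efficient allocation, with no central calculation of anyone's happiness required.

```python
# A minimal exchange-economy sketch (hypothetical endowments and
# Cobb-Douglas utilities): two agents trade at the market-clearing
# price, and the resulting allocation is Pareto-efficient -- both
# are better off, and no further mutually beneficial trade exists
# because their marginal rates of substitution are equal.

def utility(x, y):
    return (x * y) ** 0.5          # symmetric Cobb-Douglas

# Endowments of (good x, good y) -- purely illustrative numbers.
endow_A, endow_B = (10.0, 2.0), (2.0, 10.0)

# With equal Cobb-Douglas shares, each agent spends half of his
# wealth on each good.  Normalise the price of y to 1 and solve
# the market-clearing condition for the price p of x:
#   0.5*(10p + 2)/p + 0.5*(2p + 10)/p = 12   =>   p = 1
p = 1.0
wealth_A = endow_A[0] * p + endow_A[1]
wealth_B = endow_B[0] * p + endow_B[1]
alloc_A = (0.5 * wealth_A / p, 0.5 * wealth_A)
alloc_B = (0.5 * wealth_B / p, 0.5 * wealth_B)

# Markets clear...
assert abs(alloc_A[0] + alloc_B[0] - 12) < 1e-9
# ...both agents gain from trade...
assert utility(*alloc_A) > utility(*endow_A)
assert utility(*alloc_B) > utility(*endow_B)
# ...and MRS (= y/x for this utility) is equalised, so no further
# mutually beneficial exchange is possible.
assert abs(alloc_A[1] / alloc_A[0] - alloc_B[1] / alloc_B[0]) < 1e-9
```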

If we currently don’t have conditions of PC markets, the way to get to efficiency is simple, in theory: do whatever it takes to get to precisely those conditions.

The problem is that, in practice — for various reasons, e.g. political — we may not be able to get to PC conditions. We may therefore have to choose between other, suboptimal alternatives, and try to decide which of those is preferable from the point of view of efficiency.

What does economics have to tell us about how to optimise efficiency, if we cannot achieve perfect competition? There are two ways of dealing with the "problem of the second best" for policy purposes. The first favours government intervention; the second does not.

1) If we had information about the preferences of every individual in the economy, we could calculate the range of possible optimal states, given the constraints we have to work with. (Call these states “second-best solutions”.) In that case, it might turn out that, if the economy departs from PC in one specific area but is PC elsewhere, we will only be able to get to a second-best solution by departing from PC in other areas as well. In fact, it can be shown that for very simple scenarios this is the case, i.e. it is better to deviate from perfect competition in all areas rather than just in some.
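One very simple scenario of this kind can be worked through numerically (the functional forms below are hypothetical, chosen only to make the point): a single consumer, two goods produced at constant unit cost, and a tax on one good which, by assumption, cannot be removed. Welfare is then higher if the other good is taxed at the same rate as well, i.e. deviating from PC in both markets beats deviating in only one.

```python
from math import log

# Toy second-best illustration (hypothetical functional forms):
# one consumer with utility ln(x) + ln(y), both goods produced at
# constant unit cost, tax revenue rebated lump-sum.  If a tax on x
# cannot be removed, taxing y at the same rate restores the
# undistorted relative price and raises welfare.

def welfare(tx, ty, income=100.0):
    """Consumer's equilibrium utility with taxes tx, ty rebated lump-sum."""
    rebate = 0.0
    for _ in range(200):                    # fixed-point iteration for the rebate
        m = income + rebate
        x = 0.5 * m / (1 + tx)              # Cobb-Douglas demands
        y = 0.5 * m / (1 + ty)
        rebate = tx * x + ty * y
    return log(x) + log(y)

first_best  = welfare(0.0, 0.0)   # no distortions
one_tax     = welfare(0.5, 0.0)   # distortion in one market only
uniform_tax = welfare(0.5, 0.5)   # "distortions" in both markets

assert one_tax < uniform_tax                  # the second distortion helps
assert abs(uniform_tax - first_best) < 1e-6   # and here restores the first best
```

Note that reaching this conclusion required knowing the consumer's utility function exactly, which is precisely the information a real-world policymaker lacks.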

This is sometimes taken to prove that in certain cases government intervention is better than laissez-faire as a way of generating the best possible outcome, given the constraints. But note that this conclusion depends on knowing everybody’s preferences, which in practice is impossible. The great benefit of the strict-PC model — of being certain that the outcome will be efficient, without having to know anything about people’s preferences — does not apply here.

2) The other way of treating the problem of the second best is to advocate agnosticism. If we do not have perfect PC conditions and cannot get to them, and we do not know everyone’s preferences, then we can’t know whether any particular policy change will move things in the direction of greater efficiency. Even if a policy change appears to be moving things in the direction of PC conditions, it might easily result in less overall efficiency.

Now there are two ways to interpret treatment (2), either of which might be appropriate depending on the circumstances.

(2a) One is to be conservative, in the sense of being cautious about doing anything, especially about making major changes, which might do harm on balance rather than good. This generates the opposite conclusion to that of (1): you should avoid tinkering further with an already imperfect system in case you make it worse.

(2b) The other way to react is to adopt a muddle-through approach, for which there is no strict justification, but which might be the best one can do, on a sort of hopeful common-sense basis. This could be taken to mean that we should aim at the nearest thing to PC in all markets, being careful to ensure that no major areas are omitted.

The one thing second-best theory can definitely tell us is the following: one should be wary of policy changes which involve partial marketisation of a given area. E.g. if the intergenerational market for private capital (= inheritance) is heavily distorted by estate duties, it is not necessarily a good idea to marketise (i.e. remove subsidies from) cultural institutions such as universities or opera houses.

Also — though one does not really need second-best theory for this — it may well be misguided to impose artificial marketisation, e.g. by making academics or medical professionals try to prove they are generating “value for money”. There is no hard support from economic theory for the idea that anything other than a genuine market (where the genuine end users are able to vote with their wallets) will generate any benefit whatsoever.

The standard textbook interpretation of the point about second-best (originally made by Richard Lipsey and Kelvin Lancaster *) is (1) above, i.e. the version which appears to favour government intervention. This interpretation is at best biased, and at worst simply false, but is very common. Dani Rodrik, for example, uses it when he says that

the First Fundamental Theorem of Welfare Economics is proof, in view of its long list of prerequisites, that market outcome can be improved by well-designed interventions.

This is not exactly false, but does seem to exaggerate the case in favour of intervention. The best that could be said is:

The First Fundamental Theorem of Welfare Economics is proof, in view of its long list of prerequisites, that interventions may not necessarily make things worse.

In March I made this point on the Talk page of the Wikipedia article, where the same misinterpretation was being used.

This entry is incomplete as it stands, and in a way which generates a political bias i.e. in favour of state intervention.

Another way of looking at the Lipsey/Lancaster point is as follows. If you are not at a Pareto-optimal point for the economy, you don't know whether any change that doesn't actually take you onto an optimal point is going to improve efficiency (i.e. make everyone better off). Even when you move in what appears to be the direction of greater efficiency, e.g. by changing all controllable parameters to an average of where you are now and where a Pareto optimum is, you might be making things less efficient i.e. making everyone worse off.

So another moral (apart from the one given) is that, when you have a market that is already regulated or otherwise distorted, it is not necessarily a good idea to move in the direction of less distortion. You can be sure that if you can get to a Pareto optimum, that is a good thing (at least in terms of efficiency); apart from that, you can't be certain of the effects of different policy changes. This is a consequence of the severely restricted conclusions of Pareto theory.

In fact, the moral given in the entry is questionable as stated. It isn't really the case that intervention to move things in a direction different from the pro-market one is sometimes a good idea. It's just that such a move might be a good thing, only neither the government nor anyone else can ever know if it would or not.

In a paper published in June, Professor Lipsey himself came out in favour of the second interpretation.

The upshot is that in practical situations, as opposed to theoretical models, we do not know the necessary and sufficient conditions for achieving an economy-wide, first-best allocation of resources. Achieving an economy-wide second best optimum allocation looks even more difficult than achieving the first best. Without a model of the economy’s general equilibrium that contains most let alone all of the above sources, we cannot specify the existing situation formally and so cannot calculate the second best optimum setting for any one source that is subject to policy change. This is an important point since much of the literature that is critical of second best theory assumes that economists know a distortion when they see one and know that the ideal policy is to remove the distortion directly, something that is necessarily welfare improving only in the imaginary one-distortion world.
* R. Lipsey and K. Lancaster (1956), 'The General Theory of Second Best', Review of Economic Studies 24, 11-32.

(originally published on the mediocracy blog)

29 October 2007

Credible threats, moral hazard and Northern Rock

Continuing last week's post on game theory

Entry deterrence

'Entry deterrence' is an example of trying to manipulate a rival player's moves. In this case, it involves an incumbent firm trying to prevent the entry of potential rivals into a market.

Successful entry deterrence depends on avoiding the non-credible threat problem. If you want to make things too difficult for a potential entrant to bother entering, you have to do so in a way which binds you, i.e. you have to commit to a particular strategy. This has to involve ex ante (i.e. prior to the other player’s move) and irreversible action which prima facie is suboptimal for the incumbent (and which is therefore said to be ‘strategic’, i.e. undertaken only for the purpose of affecting the other player’s behaviour) but which ultimately pays because it succeeds in deterring entry.

Excess (in the sense of surplus) capacity is not an effective way of deterring entry; in fact it represents a non-credible threat. An incumbent would never expand capacity in response to entry; he would always contract. (Unless there is imperfect information, in which case he may try to convince the other player that he is irrational.) However, over-investment in capacity may succeed in deterring entry. This is the Dixit* model, in which the incumbent invests irreversibly to expand the capacity at which he can produce at low marginal cost, beyond what he would do if left to himself. The point is that this results in a post-entry equilibrium in which his output is higher than it would otherwise have been, and the entrant’s lower — indeed, so low that the latter cannot cover its fixed cost.
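The flavour of the argument can be conveyed with a stylised limit-output sketch in the spirit of the Dixit model, though much simpler than the model itself (all parameter values hypothetical): with linear demand and an entrant facing a fixed cost, the incumbent can commit to an output above his stand-alone monopoly level at which the entrant's best post-entry profit fails to cover the fixed cost.

```python
from math import sqrt

# Stylised limit-output sketch in the spirit of the Dixit model
# (hypothetical numbers): linear demand P = a - Q, common marginal
# cost c, entrant fixed cost F.  The incumbent irreversibly commits
# to an output above his stand-alone optimum, making post-entry
# profit for the entrant negative, and still earns more than he
# would by accommodating entry.

a, c, F = 10.0, 1.0, 1.0

def entrant_profit(q1):
    """Entrant's profit at its Cournot best response to the committed q1."""
    q2 = max(0.0, (a - c - q1) / 2)
    return q2 * (a - q1 - q2 - c) - F

monopoly_q = (a - c) / 2              # what the incumbent would do left to himself
deter_q = a - c - 2 * sqrt(F)         # smallest q1 making entry unprofitable

assert deter_q > monopoly_q           # deterrence means committing to over-produce
assert entrant_profit(deter_q) <= 1e-9

# Incumbent's profit when deterring vs accommodating (Stackelberg leader):
profit_deter = deter_q * (a - deter_q - c)
q1_acc = (a - c) / 2
q2_acc = (a - c - q1_acc) / 2
profit_acc = q1_acc * (a - q1_acc - q2_acc - c)

assert profit_deter > profit_acc      # here, commitment pays
```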

Moral Hazard

'Moral hazard' arises when player A wishes to contract with player B for the performance of a variable task by B, the outcome of which will depend partly on (i) B's effort and partly on (ii) random factors, and where it is impossible to ascertain how much the outcome is due to (i) versus (ii). The problem is that B does not have as much incentive to perform as would be optimal. In the case of theft insurance, for example, the insured does not have the ideal level of incentive to protect his property because the insurer cannot monitor what he does, and he will therefore tend to under-protect it.
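A small numerical version of the theft-insurance example makes the incentive problem explicit (all figures hypothetical): a risk-averse owner who would take precautions when uninsured rationally stops taking them once fully insured, because the loss no longer falls on him.

```python
from math import sqrt

# Numerical sketch of the theft-insurance example (hypothetical
# figures).  A risk-averse owner (utility = sqrt of wealth) chooses
# whether to spend on protecting his property.  Uninsured, the
# precaution pays; fully insured, care is a pure cost to him and he
# under-protects -- the moral hazard.

wealth, loss, care_cost = 100.0, 50.0, 2.0
p_theft = {"care": 0.2, "no care": 0.5}

def eu_uninsured(action):
    w = wealth - (care_cost if action == "care" else 0.0)
    p = p_theft[action]
    return (1 - p) * sqrt(w) + p * sqrt(w - loss)

def eu_insured(action, premium=12.0):     # full indemnity at a flat premium
    w = wealth - premium - (care_cost if action == "care" else 0.0)
    return sqrt(w)                        # loss fully reimbursed: no risk remains

# Without insurance, taking care is worth the cost...
assert eu_uninsured("care") > eu_uninsured("no care")
# ...but with full insurance the insured rationally skips the precaution.
assert eu_insured("no care") > eu_insured("care")
```

Since the insurer cannot observe the owner's choice, it cannot price the premium on the level of care actually taken, which is what sustains the distortion.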

There is a connection between credible threats and moral hazard. To avoid moral hazard, A wants B to believe there will be penalties for indulging in 'immoral' behaviour. However, the threat to penalise errant behaviour has to be credible. Either the penalty has to be unavoidable, e.g. criminal legal sanctions, or it has to be somehow in A's interests to apply it. The problem is that the application of a punishment is not usually intrinsically beneficial for the punisher. One possible way out is through reputation: if A's reputation for truth-telling and toughness is valuable to A, then A announcing publicly that a penalty will be imposed could lead to a cost for A if he then fails to implement it. In this way, the threat to punish would become credible.

Applying this to the Bank of England, a threat not to bail out a bank in trouble except in very limited circumstances is at risk of not being credible and therefore of not being effectual, unless reneging on the threat can be regarded as somehow costly for the Bank. However, it is not clear how the Bank, or any of its agents, could suffer from the failure to penalise an errant lender. Possibly when the Bank was still relatively controlled by the government (pre-1997), the desire of the ruling party to be re-elected could have provided such an incentive.

When there is imperfect information about whether the failure to carry out a threat is costly for the threatener, it is possible for the threat to be credible by exploiting uncertainty. However, once a player has reneged on his threat without obvious negative repercussions, the possibility of future credible threats is more or less eliminated.

* Dixit, A. (1980) 'The Role of Investment in Entry Deterrence', Economic Journal 90, 95-106.

22 October 2007

A short introduction to game theory

In game theory, we are usually looking for one or more equilibria (ideally only one), which we regard as representing the likely outcome of a particular situation. The principal criteria which an equilibrium is expected to satisfy are the Nash equilibrium condition and 'subgame perfection'.

Dominant strategies

We can write the strategy of player i as si. By strategy we mean a particular move or policy, e.g. “produce low output” (collude), or “cooperate if and only if the other player did so the previous round” (trigger).

We can write the payoff to player 1
if player 1 plays s1 and player 2 plays s2
as u1(s1,s2).

If there is some strategy s1B such that u1(s1A,s2) < u1(s1B,s2) for all possible s2
(i.e. whatever player 2 does, player 1 does better playing s1B than s1A)
then s1A is a dominated strategy.

If a strategy remains after iterative removal of all dominated strategies, it is a rationalisable strategy.

If there is only one rationalisable strategy, it is the dominant strategy (e.g. “defect” in the Prisoner's Dilemma game).
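The iterative removal procedure can be sketched in a few lines of code, here applied to the Prisoner's Dilemma with standard illustrative payoffs (the numbers are the conventional textbook ones, not from any particular source): only "defect" survives for each player, so it is the dominant strategy.

```python
# Iterated elimination of strictly dominated pure strategies in a
# two-player bimatrix game, applied to the Prisoner's Dilemma
# (standard illustrative payoffs).

def eliminate(u1, u2, rows, cols):
    """Repeatedly delete strictly dominated strategies until none remain."""
    changed = True
    while changed:
        changed = False
        for r in list(rows):    # row r is dominated if some r2 beats it everywhere
            if any(all(u1[r2][c] > u1[r][c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in list(cols):    # likewise for columns, using player 2's payoffs
            if any(all(u2[r][c2] > u2[r][c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# Index 0 = cooperate, 1 = defect; u1 is the row player's payoff matrix.
u1 = [[3, 0],
      [5, 1]]
u2 = [[3, 5],
      [0, 1]]

rows, cols = eliminate(u1, u2, [0, 1], [0, 1])
assert rows == [1] and cols == [1]    # only "defect" is rationalisable
```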

Nash Equilibrium

For n players, (s1*,s2*,…,sn*) is a Nash equilibrium (NE) if and only if:
ui(s1*,s2*,…,si*,…,sn*) ≥ ui(s1*,s2*,…,si,…,sn*) for all si and all i.

All NEs consist of rationalisable strategies, but not all combinations of rationalisable strategies are NEs.

All combinations of dominant strategies are NEs, but not all NEs consist of dominant strategies.
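A brute-force check for pure-strategy NEs illustrates the second point. In the "battle of the sexes" (standard illustrative payoffs, used here purely as an example), every strategy is rationalisable, yet only two of the four strategy combinations are Nash equilibria.

```python
# Brute-force search for pure-strategy Nash equilibria in a
# bimatrix game, sketched for the battle of the sexes (standard
# illustrative payoffs): every strategy is rationalisable, but not
# every combination of them is an NE.

def pure_nash(u1, u2):
    """Return all (row, col) pairs where each strategy is a best response."""
    n, m = len(u1), len(u1[0])
    eqs = []
    for i in range(n):
        for j in range(m):
            best_row = all(u1[i][j] >= u1[k][j] for k in range(n))
            best_col = all(u2[i][j] >= u2[i][k] for k in range(m))
            if best_row and best_col:
                eqs.append((i, j))
    return eqs

# Strategy 0 = opera, 1 = football.
u1 = [[2, 0],
      [0, 1]]
u2 = [[1, 0],
      [0, 2]]

# Only the coordinated outcomes are equilibria.
assert pure_nash(u1, u2) == [(0, 0), (1, 1)]
```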

Mixed strategy equilibrium

A mixed strategy is a probability distribution over strategies: a given player plays each of his possible strategies with some probability. For example, firm A produces high output with 40% probability and low output with 60%, while firm B mixes the same strategies in the ratio 70:30. The game may be one-shot, so the point isn’t necessarily that the players alternate between strategies.

In this case the NE consists of optimal probability choices by each player given the probability choices of other players.

Once you allow for mixed strategies, every finite game has at least one NE.
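Matching pennies (a game with no pure-strategy equilibrium, used here as a standard illustration) shows how a mixed equilibrium is found: in equilibrium each player must be indifferent between his pure strategies, and solving the indifference condition pins down the probabilities.

```python
# Mixed-strategy NE for matching pennies via the indifference
# condition (standard zero-sum payoffs).

# Row player's payoffs; the column player gets the negative.
u1 = [[ 1, -1],
      [-1,  1]]

# If the column player mixes (q, 1-q), the row player is indifferent when
#   q*u1[0][0] + (1-q)*u1[0][1] = q*u1[1][0] + (1-q)*u1[1][1]
# Rearranging gives the general 2x2 formula below; here q = 1/2.
q = (u1[1][1] - u1[0][1]) / (u1[0][0] - u1[0][1] - u1[1][0] + u1[1][1])
assert q == 0.5

# Check the indifference numerically:
row_payoff = lambda i: q * u1[i][0] + (1 - q) * u1[i][1]
assert row_payoff(0) == row_payoff(1)
```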

Non-cooperative game

In a cooperative game, the rules permit binding agreements prior to play. (Hence collusion would be possible in a one-shot cooperative game.) In practice, we are usually concerned with games in which this is not possible, i.e. with non-cooperative games.

Games of complete information

• Players’ payoffs as functions of other players' moves are common knowledge.
• Each player knows that other players are 'rational' (i.e. payoff-maximising), and knows that they know that he is rational.

Static and dynamic games

In a static (or ‘one-shot’) game, players move simultaneously and only once. In a dynamic game, players either move alternately, or more than once, or both. Bertrand and Cournot models of oligopoly competition are both static games, while the Stackelberg model (firm B moves after firm A) is a dynamic game.

An equilibrium for a dynamic game must satisfy subgame perfection.

Normal and extensive forms

A game expressed in ‘normal form’ is set out as a payoff matrix; the Prisoner's Dilemma game is the standard example.

A game expressed in ‘extensive form’ shows the ‘tree’ of the possible move paths depending on what each player does at each stage. A dynamic game can only be shown in extensive form.

Repeated games

The same agents repeatedly playing a given one-shot game (the ‘stage game’) in sequence is called a supergame. A supergame can consist of either a finite or an infinite repetition of the stage game; or we could have a game with a certain probability p of being repeated after each round, i.e. a probability 1 – p of breakdown.

A strategy in this context can be contingent on what other players have done in previous moves.

Subgame perfection

For a dynamic game, some Nash equilibria are not acceptable as solutions because one or more players will want to, and be able to, avoid those outcomes. The subgame perfection criterion demands that, at each stage of the game, the strategy followed is still optimal from that point on.

Non-credible threat

A 'non-credible threat' is a strategy by which one player tries to manipulate the behaviour of another (usually via the second move in a sequential game). It forms part of a Nash equilibrium, but not one that is subgame perfect.

The strategy is one which the threatening player claims in some way (e.g. by signalling that he is capable of using it) but which is not credible: although the threatened player’s optimal response to the strategy, if it were carried out, would be to do what the threatening player wants, the threatened player knows that if he moves first in a different way, the threatening player will switch to another strategy, generating a different Nash equilibrium.

In a sense, there is no ‘credible threat’. The term ‘threat’ implies that player 1 will do something specifically designed to harm player 2 if player 2 doesn’t comply; but in a finite game with perfect information such a threat would never be carried out, because carrying it out would not be optimal for player 1. Player 1 will always ‘accommodate’ when it comes to it.
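The point is usually illustrated with the classic entry game (standard textbook payoffs, used here purely as an example), solved by backward induction: "fight entry" is part of a Nash equilibrium that deters entry, but once entry has actually occurred, fighting is no longer optimal, so the threat fails.

```python
# Backward induction in the classic entry game (standard
# illustrative payoffs).  (stay out, fight) is a Nash equilibrium,
# but the threat to fight is non-credible; the subgame-perfect
# outcome is (enter, accommodate).

# Payoffs as (entrant, incumbent).
payoffs = {
    ("stay out", None):       (0, 2),
    ("enter", "fight"):       (-1, -1),
    ("enter", "accommodate"): (1, 1),
}

# Step 1 -- solve the last mover first: after entry, the incumbent
# picks whichever reply maximises his own payoff.
incumbent_reply = max(["fight", "accommodate"],
                      key=lambda r: payoffs[("enter", r)][1])
assert incumbent_reply == "accommodate"   # fighting is never optimal ex post

# Step 2 -- the entrant, anticipating that reply, compares entering
# with staying out.
entrant_move = max(["stay out", "enter"],
                   key=lambda e: payoffs[(e, incumbent_reply if e == "enter" else None)][0])
assert entrant_move == "enter"            # the threat fails to deter
```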

[Next week: applying game theory to Northern Rock & the Bank of England]

8 October 2007

Depreciation and price regulation

Price regulation, for example in the case of a monopoly supplier, often involves determining an acceptable rate of profit. Profit is normally calculated after ‘depreciation’ i.e. taking account of the wearing out of capital assets. It is sometimes suggested that, since depreciation is not a real cost, actual capital expenditure should be used instead to determine the real profit level under different output prices, and hence the acceptable output price. In this article I argue against this approach.

1. The purpose of depreciation

According to Financial Reporting Standard 15 issued by the UK’s Accounting Standards Board, the objective of depreciation is “to reflect in operating profit the cost of the use of the tangible fixed assets (i.e. the amount of economic benefits consumed by the entity)”. The Standard adds that depreciation should be allocated to accounting periods in a way that reflects “as fairly as possible the pattern in which the asset’s economic benefits are consumed by the entity.”

The purpose of depreciation is therefore not to accumulate reserves to finance the future replacement of the assets which are being depreciated. The purchase of an asset is an expense for a company. The choice of accounting treatment is between a full write-off against profits at the time of purchase, and a gradual write-off over the useful life of the asset. In neither case is a corresponding fund set up for eventual replacement.

This view of depreciation, that it represents past rather than future expenditure, is reflected in taxation law. Some form of gradual write-off of past capital expenditure is usually allowed as a deduction in calculating taxable profit. Transfers of profit to fixed asset reserves, on the other hand, are not permitted as deductions. In the UK, this is true both for corporation tax and for petroleum revenue tax. Most other countries’ tax regimes share the same view of depreciation.

2. Return on capital expenditure

The return on a company’s investment in capital expenditure is two-fold. First, the company expects to recoup the capital expenditure over the life of the relevant assets. If it does no more than that, it is simply breaking even. Second, it expects to earn revenue over and above this break-even level during the life of the assets. This additional revenue represents its profit, and is the return on the capital employed.

The first, merely neutral, element in the return on investment is taken into account by deducting depreciation in the calculation of real profit. Hence if depreciation is excluded as a cost in calculating profit, a misleading figure for return on capital is obtained.
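A small worked example (all figures hypothetical) shows how large the distortion can be: an asset costing 1,000 with a ten-year life and net revenue of 150 a year yields a true return on capital of 5%, but omitting depreciation makes it look like 15%.

```python
# Worked example (hypothetical figures): return on capital with and
# without the depreciation deduction.

asset_cost, life_years = 1000.0, 10
annual_net_revenue = 150.0            # revenue net of operating costs

depreciation = asset_cost / life_years          # 100 a year, straight line
profit_after_dep = annual_net_revenue - depreciation

return_with_dep = profit_after_dep / asset_cost       # 5%: the real return
return_without_dep = annual_net_revenue / asset_cost  # 15%: overstated

assert return_with_dep == 0.05
assert return_without_dep == 0.15

# Over the asset's life, 1,000 of the 1,500 total net revenue is
# merely the return *of* capital; only 500 is profit.
total_profit = annual_net_revenue * life_years - asset_cost
assert total_profit == 500.0
```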

3. Financing of asset replacement

There are two principal sources of finance for companies. The more important of the two is internal finance, i.e. a company using its own reserves, represented by cash or short-term investments. Alternatively, a company may obtain external finance, either equity (typically by means of rights issues) or borrowing. Most finance, particularly for fixed asset replacement, is internal.

Typically companies replace fixed assets gradually each year as they wear out, and the asset replacement profile is relatively smooth over time. Since real cashflow exceeds accounting profit by the amount of the depreciation charge, this annual undistributable cashflow excess can be used to finance annual asset replacement. The match between depreciation and fixed asset replacement expenditure is likely to be reasonably close, especially if the current cost accounting form of depreciation is used.

Where the asset replacement profile is not smooth, e.g. where a large asset base is expected to wear out during a relatively narrow time-window (as may happen e.g. with oil or gas pipelines), there are two choices for what to do with the undistributable cashflow excess represented by depreciation.

First, it can be accumulated over a period of years in the form of cash or liquid investments for the purpose of eventual investment in fixed assets. Second, the accumulating funds can be invested in long-term projects or business operations in such a way that the funds are potentially ‘tied up’. In that case, external finance may have to be raised when the time comes for the programme of asset replacement. However, subject to capital market imperfections, this should be as efficient a way of financing the programme as the internal accumulation of funds.

A criticism of the first of these two possible approaches is that the purpose of the company from the point of view of its shareholders is to invest available shareholders’ funds in business activities which will earn a better return on those funds than shareholders could do for themselves.

Accumulating large cash or short-term investment reserves is not usually considered appropriate for a company. Companies with large cash reserves are often under pressure from shareholders and analysts to eliminate the reserves by using the funds for expansion, so as not to dilute return on capital employed.

4. Fixed asset reserves

The use of fixed asset reserves to accumulate funds specifically for the purpose of future fixed asset expenditure is not a common business practice in the UK, nor indeed in the rest of Europe or in the US. On the other hand, a company with good financial management will inevitably plan for future cash requirements by appropriate build-up of cash levels or by arranging borrowing facilities in advance.

The closest analogy in UK commercial practice is the use of so-called ‘captive insurance companies’, which are effectively ring-fenced funds designed to provide financial cover for the kinds of eventuality normally insured against, but without the owner of the captive insurance company losing ultimate control over the insurance monies.

5. The effect of price regulation

Where price regulation is based on a target rate of return on capital employed, it is sometimes questioned whether depreciation should be taken into account as an expense in calculating permitted revenue levels. For example, it was argued in relation to TransCo* that, to the extent that depreciation in a period is not matched by expenditure on fixed assets in the same period, allowing depreciation as a cost results in an excessive permitted revenue level.

To the extent that allowed revenue under the depreciation-based approach to setting revenue would exceed allowed revenue under a pay-as-you-go approach [i.e. including current capital expenditure among costs which revenue has to cover], revenue could be considered to be provided to TransCo in advance of its cash requirements. **

However, this argument fails to take into account the point about depreciation made above, namely that it represents a return of initial capital which must be covered by revenue, in addition to any ‘return’ on capital in the sense of profits, for the business to meet its objectives.

Saying that any excess of depreciation over fixed asset expenditure represents a kind of distortion can be used to argue that a justification must be found if the distortion is to be permitted. In the case of TransCo, one justification which has been proposed is that a build-up of such excesses is required to fund an eventual reversal of the situation, i.e. that in due course capital needs for fixed asset replacement will exceed depreciation charges. The authors cited above argue against this by claiming that, in view of the difficulties in predicting future required fixed asset expenditure,

the uncertainty associated with the level of future capital spending may well mean that the case for any revenue advancement is weak.

This is an unsatisfactory argument since what is here called ‘revenue advancement’ is not an active process requiring judgments about the appropriate levels of advancement, but rather a case of passively allowing the existing excesses of depreciation charges over fixed asset expenditure to build up in anticipation of a future investment programme. Uncertainty over the precise amount of expenditure involved in this programme is not sufficient to undermine the validity of such ‘revenue advancement’.

One argument in favour of allowing depreciation, rather than expected capital expenditure, as a cost in calculating permitted revenue is that the resulting price cap is likely to be much more stable: annual depreciation typically has a much smoother profile over time than capital expenditure.
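This stability point can be made concrete with a minimal sketch. The figures below are purely hypothetical (they are not TransCo's), and the sketch isolates only the capital-cost element of allowed revenue, ignoring operating costs and any return on capital:

```python
# Hypothetical example: one asset costing 100 with a 10-year life,
# replaced at the end of year 10. Compare the capital-cost element of
# allowed revenue under depreciation-based and pay-as-you-go regulation.

ASSET_COST = 100
LIFE = 10

# Depreciation-based: straight-line depreciation spreads the cost evenly.
depreciation_allowance = [ASSET_COST / LIFE] * LIFE   # 10 per year

# Pay-as-you-go: the cost enters allowed revenue only when cash is spent.
payg_allowance = [0] * (LIFE - 1) + [ASSET_COST]      # 0, ..., 0, 100

# Both approaches recover the same total over the asset's life...
assert sum(depreciation_allowance) == sum(payg_allowance) == ASSET_COST

# ...but the depreciation-based profile is far smoother, hence a more
# stable price cap.
def spread(allowances):
    return max(allowances) - min(allowances)

print(spread(depreciation_allowance))  # 0.0
print(spread(payg_allowance))          # 100
```

The totals recovered are identical; only the timing differs, and that timing difference is precisely the 'revenue advancement' at issue above.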

* now part of National Grid plc

** Arthur Andersen, TransCo 1997 price review, 3.6.

1 October 2007

Charts of the month

These are two charts from Marc Faber's recent lecture 'Gloom, Doom, or Boom?'. The first (source: Ned Davis Research) shows that the last time US debt grew to such stratospheric levels as now was in the 1930s.

The second chart (source: Stifel Nicolaus) suggests that we are due for an upturn in consumer inflation.

24 September 2007

Will the internet increase monopolisation?

The internet has been enthusiastically proclaimed as a harbinger of greater competition. The work of Thomas Malone, among others, predicts an 'electronic market' effect, which is alleged to generate a highly competitive, rivalrous business world.

Prima facie, a reduction in transaction costs might be expected to facilitate marketisation of business-to-business (B2B) commerce and to reduce lock-in. The internet allows information to be both posted and accessed cheaply and easily, making it possible for companies to 'shop around'. In theory, this will make real attributes such as price and quality more important than advertising or reputation, giving small firms a better chance and increasing competition. Companies will find it easier to outsource, but will be less likely to form close integrated relationships with their suppliers, preferring to form casual, temporary relationships via the Web. This summarises the orthodox view of the effects of the internet on commerce.

It is often forgotten, however, that competition in Marshall's sense — i.e. rivalry between homogeneous firms — is an incidental by-product of capitalism. It depends crucially on it being efficient to have a particular activity carried on by a large number of firms rather than by a few. In the long run, this key condition may fail to hold. To consider the likely effects of the internet on market structure, we need to consider how improvements in information exchange will affect efficient firm scale and scope. This topic has not been explored in detail in either the economics or management literatures.

The issue of firm size is one of the key questions of industrial organisation. It is also one which has been linked since the early days of industrial economics with the issue of competition — Knight (1921), for example, argued that the force arising from the powerful desire to benefit from a monopoly position 'must be offset by an equally powerful one making for decreased efficiency.' The issue of efficient size more recently came back into attention, with some (e.g. Arthur 1996) arguing that decreasing returns to scale have ceased to be a relevant feature for many industries, and that in some we are seeing increasing returns.

The question of efficient scale determines concentration, and hence competition and static efficiency. The question of efficient scope affects how focused a firm is — i.e. how many ancillary activities it internalises — and therefore determines the boundaries of firms. The restructuring of business over the last fifteen years — outsourcing, globalisation, mergers — suggests that a change in conditions may have led to a change in what is efficient. A number of researchers (e.g. Malone, Yates and Benjamin 1987, Hammer 1999) have already observed and documented the link between outsourcing and B2B.

Each of the issues of scale, scope and overall firm size needs to be considered. With regard to scale, when different companies operate in the same area of business, some duplication of costs is inevitable. Prima facie, therefore, there are potential efficiency gains (economies of scale) from expanding market share, or simply from integrating with rivals. In theory, one should always be able to do at least as well by having two similar businesses under common ownership. Since firms tend not to expand without limit, however, there are clearly countervailing forces at work. These have usually been supposed to operate at the managerial level.

The issue of scope has received considerable attention through the work of Coase, Williamson and the property rights theorists (e.g. Grossman and Hart 1986). The key reason for internalising processes secondary to a business's core activity is generally agreed to be that of 'transaction costs' — the incidental costs of contracting with independent parties. Most recently, these costs have come to be identified principally with inefficiencies that arise ex post because contracts are incomplete and therefore not capable of providing ex ante protection against opportunism. As the incomplete contracts literature has emphasised, even a watertight contract will not always be honoured, and legal measures after the fact are expensive and can be ineffective. These theories would explain the current increase in outsourcing as due to improvement in contracts and a lessening of opportunistic behaviour. However, as Brynjolfsson (1993) has observed, empirical studies (e.g. Kanter 1989, Piore 1989) do not support this interpretation. I propose that other types of transaction cost are more important in answering this question, most notably losses arising from imperfect information exchange. Subject to some exceptions (e.g. Radner 1996, Casson and Wadeson 1998), these have received relatively little attention to date as a source of transaction cost.

Finally, on the question of the overall size of a firm, it has generally been agreed that the management function typically yields decreasing returns to scale above a certain size, and that this is one of the main reasons why diseconomies of scale set in as firms expand market share. This may be due either to the fact that expanding firms require increasing spans of management, or to the information costs of communicating between different divisions in different locations. One way of interpreting these effects is by reference to bounded rationality — Williamson (1985), for example, summarises this view by arguing that effective expansion ceases when "bounds on cognitive competence" are reached.

Putting these three factors together yields a theory of optimum firm size and scope, which is a function of the trade-off between (a) production economies of scale which favour 'wide' firms, (b) savings in transaction costs from internalising ancillary functions which favour 'deep' firms, and (c) managerial diseconomies of size which favour smaller firms. Effects (a) and (b) each make firms want to expand — horizontally and vertically, respectively — while effect (c) restrains them from doing so. Thus, the need to keep managerial costs down can be achieved either by (i) reducing scale, which means economies of scale are lost and average production costs rise, or (ii) vertical disintegration, which increases transaction costs.
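The trade-off between (a), (b) and (c) can be illustrated with a toy numerical model. The functional forms and parameters below are my own illustrative choices, not taken from any of the authors cited: production cost falls with width, transaction cost falls with depth of integration, and managerial cost rises with overall size.

```python
# Toy model: a firm chooses horizontal scale ('width') and vertical
# integration ('depth', out of 10 possible stages). Illustrative only.

def unit_cost(width, depth, tc_rate):
    production = 100 / width                   # (a) economies of scale
    transactions = tc_rate * (10 - depth)      # (b) costs of outsourced stages
    management = 0.005 * (width * depth) ** 2  # (c) diseconomies of overall size
    return production + transactions + management

def best(tc_rate):
    # Grid search for the cost-minimising width/depth combination.
    return min(((unit_cost(w, d, tc_rate), w, d)
                for w in range(1, 26) for d in range(1, 11)),
               key=lambda t: t[0])

_, w_hi, d_hi = best(tc_rate=5.0)   # high transaction costs
_, w_lo, d_lo = best(tc_rate=0.5)   # low transaction costs

# Lower transaction costs shift the optimum towards wider, less
# vertically integrated firms: outsourcing facilitates horizontal expansion.
assert w_lo > w_hi and d_lo < d_hi
print((w_hi, d_hi), (w_lo, d_lo))
```

Under these assumed parameters, cheap transacting makes it efficient to shed stages of the value chain and spend the freed-up managerial capacity on horizontal scale — the mechanism argued for in the surrounding text.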

The motive for outsourcing is often said to be that it enables a business to concentrate on its key activity, or 'core competence'. Provided that outsourcing can be achieved with a relatively low resulting increase in transaction cost, it enables businesses to exploit economies of scale that would previously have been offset by managerial diseconomies. I argue that electronic information exchange (EIE) has had, and will continue to have, the effect of facilitating outsourcing in order to exploit economies of scale.

One consequence of this observation is that if transaction costs generally fall, there will be a tendency to switch away from (i) in favour of (ii). In other words, horizontal integration will take place at the expense of vertical integration: outsourcing facilitates mergers.

Little work has been done on explaining the recent trend for increased concentration as a result of horizontal mergers, although Chandler's work suggested that decentralisation is more feasible for the firm with greater lateral scope. Chandler (e.g. 1962) argued that functional scale economies are not great enough to make up for the diseconomies caused by the co-ordination problems inherent in functional structure, and that large firms can combine the most advantageous features of both if they undergo a structural shift into divisional management. General Motors and DuPont are pioneering examples of this. It would seem to be easier for horizontally integrated firms to undergo this shift without incurring inefficiency caused by communication errors than for vertically integrated firms.

One indirect consequence of the tendency of sectors to become monopolised as a result of increasing exploitation of scale economies is that bilateral trade (i.e. exclusive trade between two specific parties) becomes a more attractive option, simply because there is relatively little competition. This helps to explain why buyer-supplier 'partnership' and related concepts increasingly feature as key aspects of supply chain management. Corporations such as Bose practise hole-in-the-wall manufacturing: manufacturers make their products on site in Bose's factories and participate in the company's management meetings. Bose also pioneered 'JIT2', involving suppliers permanently on site, a practice that has spread widely since it was first introduced in the early nineties. Companies, rather than keeping their options open with a large supplier base, are cutting the numbers of their suppliers. Companies seem happy to be closely tied together, and are using e-business to build closer links with their suppliers.

Malone, Yates and Benjamin (1987, 1989) deny this effect in favour of the theory of increased marketisation. They predict that short-term buyer-supplier contracts, not partnership, are the principal effect of EIE — or at least of the form of it with most potential flexibility, the internet. It is true that in areas where homogeneous, irregular inputs are required, the use of web features like electronic auctions is likely to lead to more static rivalry. By reducing search costs and rendering information accessible, the internet may indeed make a firm's search for a new supplier more market-like. However, it is important not to mistake a number of special cases for the whole picture. Overall, the evidence suggests increasing concentration and increased use of partnership. Companies appear to prefer partnership relationships where possible because they "improve quality and performance and [...] reduce costs" (Dussauge and Garrette 1999). Steinfield et al. (1998) find that electronic communications networks are being used more often than not as part of a strategy of tighter buyer-supplier relationships.

If e-business does not on balance increase marketisation, but rather encourages monopolisation and lock-in, we need to consider the effects on competition and static efficiency. I suggest that Malone et al.'s vision of the principal impact of e-business — internet-engendered markets giving easy access to a wide range of facts on different suppliers, benefiting industry and ultimately the consumer by keeping prices low and quality high — is flawed. Horizontal merger activity is likely to continue to feature at every level of business, further increasing concentration.

However, the conventional model of competition — several suppliers of the same product battling for the same customers — is likely to be increasingly replaced by more dynamic forms of competition in which one firm seeks to displace rivals by more efficiently matching customers' needs. Furthermore, potential competition, of the kind proposed by contestable market theorists (e.g. Baumol 1982), in which the behaviour of incumbent firms is constrained by the threat of entry, is likely to replace static rivalry. This will happen as entry barriers decline because firms increasingly focus on core competences and carry out smaller and smaller ranges of activities. A horizontally integrated, vertically disintegrated industry of the kind we have outlined maintains efficiency through fiercer potential competition. It is far easier for a rival to set up in competition with a flat, specialised market niche player than it is to set up a whole value chain in competition with a vertically integrated firm.

As Baumol, Panzar and Willig (1982) have argued, concentration of output need not be harmful so long as barriers to entry are low: the mere threat of competition can make a firm behave competitively. Microsoft, for example, has a near-monopoly in many areas of PC software but arguably remains innovative because its markets are contestable. Electronic information exchange lowers communication costs and thereby also helps to lower barriers to entry.

This is a summary of a paper I wrote in 2000. I have not had an opportunity to update it for any recent developments. A version of the full paper, including a simple numerical model, empirical data, and bibliography, is available here.

3 September 2007

Such a thing as society

Is there such a thing as 'society', over and above a collection of individuals? Curiously, the question is now associated in many people’s minds with Margaret Thatcher, who said:

I think we have gone through a period when too many children and people have been given to understand "I have a problem, it is the Government's job to cope with it!" or "I have a problem, I will go and get a grant to cope with it!", "I am homeless, the Government must house me!", and so they are casting their problems on society and who is society? There is no such thing! There are individual men and women and there are families and no government can do anything except through people and people look to themselves first.

The argument may of course originally have been mentioned to her by one or more of her intellectual supporters. It has also been pointed out that it was made in a fairly informal context, i.e. an interview with Woman’s Own (in 1987).

There are several ways of interpreting the question. Unfortunately, attention tends to be focused on the least interesting one. The trivial answer is, “yes, of course there is”. The word is useful in the same way that the concept of ‘global warming’ is useful — it is a shorthand for describing complex phenomena by means of aggregating and averaging.

Probably one can go further, and say that there are social phenomena which, while ultimately reducible to individual behaviour, only become observable at the aggregate level. They are, in other words, a kind of (weakly) emergent behaviour. For example, the statement that "Italy advanced technologically in the sixteenth century" is theoretically reducible to statements about the behaviour of individual Italians, but carrying out the reduction would be difficult, and the aggregate phenomenon is not readily predictable simply from information about the individuals.

However, to say that patterns emerge at an aggregate level which are not conveniently reducible to descriptions of individual behaviour is not the same as saying that a different kind of entity emerges at that level. There is clearly a sense in which methodological individualism — the claim that the whole is ultimately nothing but the sum of its parts — is trivially true, and anything more ambitious than weak emergence — e.g. social holism — is 'obviously' false.

The arguments that individuals are influenced by their social environment (situationism), or even that they are largely a function of that environment (constructionism), are really concerned with a different issue from the one raised by Thatcher's assertion. Even if the actions of individuals are always traceable to societal influences, those actions are still the basic components of social behaviour, and the latter must ultimately admit explanation in terms of the former.

However, all the above points are somewhat incidental to the issue typically at stake when the Thatcher quote is discussed. Reactions to it are more often concerned with a different matter altogether: that of whether ontological or moral priority should be given to the individual over the social group. In other words, the argument is really about the conflict between political individualism and communitarianism.

30 July 2007

The purposes of religion

In a secular society such as ours, religion can seem an odd phenomenon. Consider the beliefs that are involved:
- belief in the existence of an invisible deity;
- belief that the wishes of the deity can be ascertained by studying scripture;
- belief that people ought to comply with the deity's wishes.
These things can seem peculiar from the point of view of someone brought up without religion. A number of writers have recently tried to emphasise these peculiarities of religion in order to discredit it.

However, assume for the sake of argument that these beliefs happen to be correct. The question arises, what is the purpose of religion — should it aim to generate conclusions consistent with prevailing secular attitudes? Some who reject religious belief as irrational also criticise it for producing implications about behaviour that lack cogency from a secular perspective. It is not clear that this is a sound line of reasoning.

Consider Catholic objections to the use of condoms and to homosexuality, often linked to arguments about what is 'natural'. Many critics argue that Catholicism is morally wrong to generate advice which, they say, causes suffering, or stops it being prevented. However, the purposes of religion are not the same as those of secular morality.

Philosopher Stephen Law, for example, has recently argued that Catholic doctrine in this area is flawed:
- 'Even if there is a God, the claim that those purposes that we find in nature indicate what God desires is questionable.'
- 'Why not say, "We'd prefer you not to have sex, but if you are going to, please use a condom"? Can I suggest that saying anything else puts you on the side of the devil?'
- [responding to commenter] 'You say that the Church's view is: "condoms aren't going to stop you infecting or being infected". I believe medical opinion is that condoms can indeed prevent infection, isn't it? Doesn't this mean the Church's position is wrong?'

It may be appropriate to put pressure on the Church to change its policies, for pragmatic reasons. Church doctrine has changed over the centuries to accommodate changes in public thinking. One can dispute the basic beliefs of a religion, but to say that church policy is irrational because it goes against common sense seems like bad logic.

Other articles
Stephen Law on Thomas Aquinas
Stephen Law on condoms and Catholics
The Guardian on the use of condoms in Africa
Jonathan Tweet on condoms for Catholics
Nourishing Obscurity compares three major religions

23 July 2007


Recently there has been a good deal of debate about the existence, or otherwise, of God. Christians face the problem that the God of the New Testament is normally presumed to be both omnipotent and supremely good, which seems hard to square with the existence of suffering. It should be noted that moral "goodness" is notoriously difficult to define with regard to human behaviour, and is presumably even more problematic applied to a deity.

If we consider the more general idea of a creator or other supreme entity, and leave the idea of "goodness" aside, then we are faced with two key questions:

(A) What characteristics would such an entity have?

(B) What would count as "evidence" of the existence of such an entity?

Much of the debate seems to have blurred the distinction between the Christian God and the idea of God in general, and also failed adequately to consider questions A and B above. By not properly defining terms or considering the fundamental issues, the debate has effectively had the character of political argument rather than intellectual analysis. If a Christian claims that God has particular characteristics definable in ordinary language, or that God has performed particular actions comprehensible to human minds, then it is relatively easy to shed doubt on this. The effect is to undermine the concept altogether.

In a discussion with Alister McGrath, Richard Dawkins says "we have to talk about probabilities", and McGrath agrees. McGrath seems to think there is an inherent improbability in the complexity of living things, which makes the existence of a creator god plausible. Dawkins responds "I want to say exactly the same thing of a designer … Any being capable of designing a universe, or an eye, or a knee would have to be the kind of entity which would be statistically improbable in the same kind of way that the eye is". But both seem to be misusing statistics: we cannot really assign probabilities to hypotheses as abstract as that.

The strongest argument against the existence of God is probably that of superfluity (a version of Ockham's Razor): the concept is irrelevant because it is neither useful nor needed in explaining any facts we wish to account for. However, this argument is not as strong as it may appear.

Within a closed and self-contained system of explanation, e.g. physics, a concept at the biological level such as 'sexual reproduction' is not required to explain all the observed phenomena: we merely need to know the initial conditions and the laws of particle interaction. Yet when we move to a more macroscopic level, we find that it is useful and necessary to invoke certain additional higher-level concepts to provide satisfactory explanations. (This is sometimes expressed using the idea of 'emergent properties'.) The fact that we find it perfectly possible to explain phenomena without recourse to additional theoretical entities is not strong evidence that no other concepts would be required at a 'larger' level of significance.

Other articles
Wikipedia on emergent properties
Discussion between McGrath and Dawkins
Stephen Law on the probability of God

9 July 2007

Consensus and dissent

Ipsos MORI reports that 56% of British adults agree with the statement "Many leading experts still question if human activity is contributing to climate change", and suggests that people have been "influenced by counter-arguments". Al Gore, who helped to organise last week's Live Earth event, remarked that "those people think, wrongly, that the scientific debate is still raging." The Royal Society's vice-president commented: "People should not be misled by those that exploit the complexity of the issue, seeking to distort the science and deny the seriousness of the potential consequences of climate change."

The global warming controversy raises issues about (a) what is meant by "scientific consensus" and (b) the extent to which politicians and voters ought to accept what scientists agree is fact or highly probable. There is also the question of whether views which depart from the consensus should be suppressed because they are likely to be false, or promoted because they maintain a healthy debate and prevent thinking from becoming stale and dogmatic.

It seems clear that a majority of scientists support the idea that it is "very likely" that the gradual increase in temperature over the last century is "predominantly" due to human activity. But should we be interested in the views of those working in areas other than climate change? For example, the views of the Australian Medical Association in effect form part of the "consensus", but are they relevant? It is interesting that the most recent statement by the American Meteorological Society sounds more guarded than similar statements by other groups of scientists. Climatology is a highly complex subject: modelling conditions even a few weeks into the future generates results that are notoriously unreliable.

Many believe there is a high risk that CO2 emissions over the next few decades, unless actively restricted, will create significant and damaging climate change, and think additional environmental policies are therefore urgently needed. And for many of these people, persuading the voting public to agree to such policies is seen as a crucial obstacle. For them, climate change sceptics are simply holding up the necessary shift in public perceptions.

While there does seem to exist a consensus of sorts, we need to bear two things in mind. The first is that what is relevant is not whether there exists a consensus among the highly educated, or among scientists or academics in general, but whether those who are expert in the field in question are in agreement, and whether that agreement is unanimous or merely in the majority. Second, we know from past experience that consensus in itself does not conclusively demonstrate correctness, especially if the issue in question has political implications.

Assuming for the sake of argument that there is a consensus among climate scientists, and that the consensus should be given due weight, there is still an argument, from both a utilitarian and a libertarian viewpoint, that opposing viewpoints should actively be given publicity. Consequentialism in science does not seem like a good idea.

Other articles
Dizzy on the politicisation of science
Acton Institute blog on Live Earth
Stumbling & Mumbling on climate moralising
BBC's The Investigation on the Stern Review
Crooked Timber endorses the Stern Review
Crooked Timber blames Exxon for scepticism
Carl Wunsch on "The Great Global Warming Swindle"
Nigel Lawson on climate change
Bjorn Lomborg on being vilified
Tim Worstall on IPCC forecasts

2 July 2007

No smoking

Smoking is now illegal in the UK in all indoor locations that are open to the public. Controversially, this includes private establishments such as pubs and clubs, where it can be argued that a non-smoker who objected to the smoke could simply take his custom elsewhere.

The logic for tolerating the smoking ban seems to proceed as follows.

1) Most people now believe that smoking cigarettes is equivalent to taking a regular dose of poison: it is basically extremely harmful to health, probably more so (addiction aside) than any of the major illegal drugs.

2) Therefore the right to behave how you like, provided it does not harm others, is not important in this case, and hence does not need to be defended.

3) Many smokers are themselves in favour of the ban, because they believe it will help them to give up.

There has therefore been relatively little objection to the ban encroaching on areas where no non-smoker is being harmed. But is it possible that an important principle is being lost sight of? J.S. Mill argued that “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others. His own good, either physical or moral, is not sufficient warrant.”

Once a major piece of legislation breaks clearly with this moral restriction, and without significant opposition, are we likely to find other areas in which it will be applied? “Slippery slope” arguments of this kind are regarded as dubious by many philosophers. Bernard Williams, for example, considered them illogical because they appear to assume “that there is no point at which one can non-arbitrarily get off the slope once one has got onto it”.

Nevertheless, there does seem to be something in the idea of slippery slopes, in politics at least. The argument “we allow A, therefore it is inconsistent not to allow B", or “we already ban X, so why do we allow Y?”, is frequently used. Even if not sound, it is a line of argument which the general public seems prepared to regard as reasonable. On that basis, it may not be long before other legislation is proposed to prevent unhealthy individual behaviour. It has already been suggested that it should be a crime for a parent to allow their child to overeat. On that basis it is not impossible that it will, in the not too distant future, become illegal to sell or consume certain foods, e.g. snacks with more than a permitted maximum level of sugar or fat.

Other articles
David Hockney against the smoking ban
Daily Referendum on the smoking ban
Reuters on the smoking ban
Pub tries to avoid ban by becoming an embassy
Stumbling & Mumbling on the smoking ban
Nourishing Obscurity on the smoking ban
Julian Baggini on slippery slopes
Fallacy Files on slippery slopes

25 June 2007

Inheritance (2)

Is it fair for a person’s chances in life to be affected by the receipt of a capital sum from another individual? We could ask the same question about innate ability: is it fair for a person to have a better chance of a high income because of genetic characteristics they did nothing to deserve? It has been argued that it is not. Of course, it is harder to stop people benefiting from their abilities than to stop them benefiting from inheritance.

This raises the question of what exactly is fair, and what is deserved. The philosopher John Rawls thought that, given a choice of different possible reward systems behind a ‘veil of ignorance’, people would opt for a system which permitted rewards to ability and effort only to the extent that the position of the worst off would be maximised. Rawls proposed that this hypothetical choice should be used to define what is ‘fair’. This is usually taken to mean, permitting a certain amount of competition, with higher ability and/or greater effort tending to yield higher rewards, but also a good deal of redistribution back down towards the poorest.

It has been pointed out that Rawls’s solution depends on assuming a particular attitude to risk: that of not being willing to gamble to any extent on a high variance of possible outcomes. If correct, what does it tell us about inheritance? Assuming a Rawlsian system allows for savings, it is not clear that an individual behind the veil of ignorance would necessarily insist on capital gifts being taxed, since such gifts do not themselves result in greater divergence of outcomes between individuals in the way that free market competition does.
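The point about risk attitudes can be made with a toy example. The payoffs below are hypothetical numbers of my own, chosen only to show that a maximin chooser and a risk-neutral, expected-value chooser can prefer different reward systems from behind the veil:

```python
# Incomes of four social positions under two hypothetical reward systems;
# behind the veil of ignorance you are equally likely to occupy any position.

egalitarian = [40, 45, 50, 55]    # modest rewards, little dispersion
competitive = [10, 40, 80, 150]   # high rewards, high variance

def maximin(*systems):
    # Rawls's rule: pick the system whose worst-off position is best off.
    return max(systems, key=min)

def expected_value(*systems):
    # A risk-neutral chooser maximises the average income instead.
    return max(systems, key=lambda s: sum(s) / len(s))

print(maximin(egalitarian, competitive))         # [40, 45, 50, 55]
print(expected_value(egalitarian, competitive))  # [10, 40, 80, 150]
```

The competitive system has the higher average income but the worse bottom position, so everything turns on how much risk the chooser behind the veil is prepared to accept — which is exactly the objection to Rawls noted above.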

If we say inheritance or ability are not morally deserved, that leaves the question of what is. Although we might want to say that effort is more deserving of reward than ability, because it is more under the individual's control, similar objections apply as with ability. If A makes more effort than B, is it not because A benefits from conditions which make it easier for him to try harder, e.g. a more supportive upbringing, a higher level of self-confidence, or a greater innate capacity for making efforts?

Other articles
Stanford Encyclopedia of Philosophy on egalitarianism
Wikipedia on John Rawls

18 June 2007

Inheritance (1)

Wealth acquired from a relative appears to have few sympathisers. Even supporters of free markets often have little time for it. Socialists tend to be scathing about it. Anthony Giddens, for example, argues that inherited capital “violates reciprocity”. The recent e-petition against inheritance tax argued it was “double taxation”, but offered no economic argument in favour of transfers per se.

The logic behind such disapproval, relative to other market phenomena, is based on the idea that entrepreneurial capital is “earned” and therefore more “deserved”, while capital one is simply given is not earned at all. However, it is not clear that this distinction is sound.

Take the case of a self-made millionaire giving capital to an individual who is not a relative, e.g. an artist or composer. This is no less a “market” phenomenon than any other. The millionaire buyer wants something (e.g. a cultural product, whether or not he gets a proprietorial stake in it) in return for his money. The distinguishing feature is that the number of buyers responsible for generating the capital accumulation is one, rather than thousands or millions.

Where the recipient is a family member, it is possible to extend this logic. The millionaire expects to get some benefit, e.g. some kind of quasi-immortality. Rather than a million people paying for (say) astronomical bodies to be named after them and making the supplier of this service wealthy, we have one person “paying” his or her son or daughter to do something with the money which will promote the family name or the memory of the donor. Why is the latter thought to be less tolerable than the former?

Other articles
Anthony Giddens on inheritance tax
Stumbling & Mumbling on hating the rich
Our Word is our Weapon on inequality
Stumbling & Mumbling on slavery and inheritance
e-petition against inheritance tax

11 June 2007

Abortion

There is some concern over the level of abortions in this country. Three years ago a report showed that over 20 per cent of pregnancies were being terminated. Recently there have been plans to lower the current time limit from 24 weeks, possibly to 20.

Abortion is an extremely emotive topic. A person's attitudes to it are often said to depend on his or her "value system". As with most emotive debates, attitudes may depend more on emotions - and particularly on strong feelings of horror or rejection - than on logic or religious doctrine. However, one should not assume that an attitude based on emotion is necessarily less justified than one based on analysis.

Those who are strongly "pro-choice" may perceive the horror of a woman having a very strong need or desire not to bear a child, while being unable to do so because of some illegality or social taboo. Those who are strongly "pro-life" may perceive the horror of a foetus being aborted without sufficient recognition that this may be different from killing an individual human only by a matter of degree.

One problem with abortion debates is that they tend to focus on hard legalities (pro-lifers want more restrictions, pro-choicers want fewer) while ignoring a whole range of factors which determine how much and how readily abortion is practised. One factor often ignored is that medicine is largely controlled by professional bodies and the government. This leads to the following points.

1) Whether a woman has an abortion will often depend to some extent on the views of the practitioner advising her, which in turn will reflect the attitudes of government and the medical profession.

2) The "privacy" issue (a woman's right to decide about her own body) is more complicated if "right" means claim on state medicine as opposed to liberty not to be legally prevented.

Is it possible that some women are being advised to terminate - and do so - when this is not an option they would ultimately have chosen if they had been freer to decide for themselves?

Other articles
Telegraph on Catholic Cardinals putting pressure on MPs
Telegraph on doctors refusing to carry out abortions
Nadine Dorries MP in support of Cardinals (see 4 and 5 June)
Not Saussure on Nadine Dorries
Devil's Kitchen on Nadine Dorries
Ministry of Truth on Nadine Dorries
The Times on abortion and Hollywood

4 June 2007

Participation

An issue closely related to transparency is that of democratic participation. It has often been argued that voters need to become more involved in deciding issues which matter to them. This philosophy is behind the recent trend for more consultations.

There is also a movement to make democracy more devolved, allowing for more voting about local issues at a local level, which is currently receiving exposure in the Daily Telegraph. As with transparency, however, is more participation necessarily helpful to the effectiveness of democracy? Or is there a risk that, by trying to make people more involved, we merely favour the interests of the more politically-minded voters?

This comment from US citizen Ron Craig may give food for thought:

"The idea of public referenda doesn’t actually work in practice. Take the experience here in California. With each and every election we are bombarded with countless ‘propositions’ to vote on. What on the face of it looks like the ultimate democratic action ends up being taken over by vested interests and their apologists, who fill the airwaves with, dare I say it, lies to support their ‘for’ and ‘against’ cases. The end result is that the average voter probably makes their decision based on which commercials they watch."

Other articles
Ben Fenton on freedom of information
Adam Smith Institute on direct democracy
Zac Goldsmith on local referenda
Tim Worstall on Zac Goldsmith
Antony Jay on localisation
Bruno Kaufmann on direct democracy in Switzerland
Stumbling & Mumbling on demand-revealing referenda

28 May 2007

Transparency

Gordon Brown will shortly become Britain's unelected premier. He has promised to introduce more "transparency" into government. Concepts such as transparency and openness were also popular with the current incumbent, Tony Blair. The present administration introduced the Freedom of Information Act, intended to make information about government activities more accessible to ordinary people. The results of the Act have had mixed reviews, with some claiming they represent a genuine contribution to transparency, and others arguing they are a sham.

But is "transparency" an important goal? Do voters need to know about what goes on behind the scenes? Or is there a risk that transparency becomes a substitute for more important issues, such as whether new legislation is a good thing? Which is better: a good deal of unconsidered legislation, with voters being able to see everything about the processes involved; or legislation of higher quality and lower quantity, shielded by a certain degree of secrecy?

Can the concept of "transparency" be used against voters, by encouraging legislation which allows information about private citizens to be made more accessible, or by encouraging whistleblowing? Does a culture of information accessibility make the sharing of information about individuals between different government agencies more likely, and is this desirable?

Other articles
Stumbling & Mumbling on freedom of information
Ben Fenton on freedom of information
Bel is Thinking on MPs opting out of FOIA
UK Liberty on MPs opting out of FOIA
ARCH Blog on information sharing
The Times on freedom of the press
Not Saussure on whistleblowers
Onora O'Neill on trust and accountability
Shuggy on grassing

21 May 2007

Genes and opportunity

Most people would agree that one of the key political issues is this:
How can talented children from poor backgrounds have the same opportunity as those from wealthier backgrounds?

To assess where we are in terms of this goal, we can look at data on social mobility. The problem is that such data can only be interpreted in conjunction with data about the heritability of talent.

Say, for the sake of argument, that “ability” is 100% inherited and has no environmental component. Then, in a society that was perfect in terms of the above goal, so that social position was perfectly correlated with ability, would we not find that people never moved from the class they were born into?

Therefore, does it not follow that you cannot assess whether we have the “correct” level of mobility without taking heritability into account? In which case, are not the arguments of both left [1] and right [2], to the effect that there is “clearly a problem” because so few from the lower classes rise up the hierarchy, entirely specious?

1. Alan Johnson: "There is still a long way to go. A child who is not on free school meals is twice as likely to get five good GCSEs as one who is."
2. David Willetts: "Just 2% of children at grammar schools are on free school meals when those low income children make up 12% of the school population in their areas."
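The argument can be sketched as a toy simulation. The model below (each child's ability as a weighted mix of parental ability and random noise, and social class assigned purely by ability rank, i.e. a "perfect meritocracy") is an illustrative assumption, not an empirical claim about real heritability:

```python
import random

def mobility(heritability, n=10000, seed=0):
    """Fraction of children ending up in a different class quartile
    from their parent, under perfect sorting of class by ability."""
    rng = random.Random(seed)
    parent_ability = [rng.random() for _ in range(n)]
    child_ability = [heritability * p + (1 - heritability) * rng.random()
                     for p in parent_ability]

    def quartiles(xs):
        # Assign each individual to a class quartile by ability rank.
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        q = [0] * len(xs)
        for rank, i in enumerate(order):
            q[i] = rank * 4 // len(xs)
        return q

    pq, cq = quartiles(parent_ability), quartiles(child_ability)
    return sum(p != c for p, c in zip(pq, cq)) / n

print(mobility(1.0))  # 0.0 -- fully inherited ability: no mobility at all
print(mobility(0.0))  # ~0.75 -- no heritability: class reshuffles almost at random
```

With full heritability and perfect sorting, measured mobility is zero even though the society satisfies the stated goal; with zero heritability it is high. Observed mobility figures alone therefore cannot tell us whether the system is “fair” without an estimate of heritability.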

Other articles
Not Saussure on mobility
Stumbling & Mumbling on talent
Conservative Party Reptile on grammar schools
Daily Telegraph on the distribution of ability
The Spectator on David Willetts
Stumbling & Mumbling on nature vs nurture
Peter Hitchens on grammar schools