The Beginning of Infinity


by David Deutsch


  The number of seats involved is usually small, but that does not make it unimportant. Politicians worry about this because votes in the House of Representatives are often very close. Bills quite often pass or fail by one vote, and political deals often depend on whether individual representatives join one faction or another. So, whenever apportionment paradoxes have caused political discord, people have tried to invent an apportionment rule that is mathematically incapable of causing that particular paradox. Particular paradoxes always make it look as though everything would be fine if only ‘they’ made some simple change or other. Yet the paradoxes as a whole have the infuriating property that, no matter how firmly they are kicked out of the front door, they instantly come in again at the back.

  After Hamilton’s rule was adopted, in 1851, Webster’s still enjoyed substantial support. So Congress tried, on at least two occasions, a trick that seemed to provide a judicious compromise: adjust the number of seats in the House until the two rules agree. Surely that would please everyone! Yet the upshot was that in 1871 some states considered the result to be so unfair, and the ensuing compromise legislation was so chaotic, that it was unclear what allocation rule, if any, had been decided upon. The apportionment that was implemented – which included the last-minute creation of several additional seats for no apparent reason – satisfied neither Hamilton’s rule nor Webster’s. Many considered it unconstitutional.

  For the next few decades after 1871, every census saw either the adoption of a new apportionment rule or a change in the number of seats, designed to compromise between different rules. In 1921 no apportionment was made at all: they kept the old one (a course of action that, again, may well have been unconstitutional), because Congress could not agree on a rule.

  The apportionment issue has been referred several times to eminent mathematicians, including twice to the National Academy of Sciences, and on each occasion these authorities have made different recommendations. Yet none of them ever accused their predecessors of making errors in mathematics. This ought to have warned everyone that this problem is not really about mathematics. And on each occasion, when the experts’ recommendations were implemented, paradoxes and disputes kept on happening.

  In 1901 the Census Bureau published a table showing what the apportionments would be for every number of seats between 350 and 400 using Hamilton’s rule. By a quirk of arithmetic of a kind that is common in apportionment, Colorado would get three seats for each of these numbers except 357, when it would get only two seats. The chairman of the House Committee on Apportionment (who was from Illinois: I do not know whether he had anything against Colorado) proposed that the number of seats be changed to 357 and that Hamilton’s rule be used. This proposal was regarded with suspicion, and Congress eventually rejected it, adopting a 386-member apportionment and Webster’s rule, which also gave Colorado its ‘rightful’ three seats. But was that apportionment really any more rightful than Hamilton’s rule with 357 seats? By what criterion? Majority voting among apportionment rules?
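  The Colorado quirk is easy to reproduce. Hamilton's rule is the 'largest-remainder' method: give each state the whole-number part of its exact proportional share of seats, then hand the leftover seats to the states with the largest fractional remainders. The following sketch uses invented populations, not the 1901 census figures, but it shows the same arithmetic at work: a state's allocation can go down when the size of the House goes up.

```python
from math import floor

def hamilton(populations, house_size):
    """Hamilton's rule (largest-remainder method): each state gets the
    floor of its exact quota, then the leftover seats go to the states
    with the largest fractional remainders."""
    total = sum(populations)
    quotas = [p * house_size / total for p in populations]
    seats = [floor(q) for q in quotas]
    # rank states by fractional remainder, largest first
    order = sorted(range(len(populations)),
                   key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in order[:house_size - sum(seats)]:
        seats[i] += 1
    return seats

# Invented populations for three states:
pops = [6000, 6000, 2000]
print(hamilton(pops, 10))  # [4, 4, 2]
print(hamilton(pops, 11))  # [5, 5, 1] <- the smallest state LOSES a seat
                           #    when the house gets BIGGER
```

With ten seats the small state's remainder (0.43) is the largest, so it collects the leftover seat; with eleven, the two big states' remainders (0.71 each) overtake it. Nothing changed except the house size.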

  What exactly would be wrong with working out what a large number of rival apportionment rules would do, and then allocating to each state the number of representatives that the majority of the schemes would allocate? The main thing is that that is itself an apportionment rule. Similarly, combining Hamilton’s and Webster’s schemes as they tried to do in 1871 just constituted adopting a third scheme. And what does such a scheme have going for it? Each of its constituent schemes was presumably designed to have some desirable properties. A combined scheme that was not designed to have those properties will not have them, except by coincidence. So it will not necessarily inherit the good features of its constituents. It will inherit some good ones and some bad ones, and have additional good and bad features of its own – but if it was not designed to be good, why should it be?

  A devil’s advocate might now ask: if majority voting among apportionment rules is such a bad idea, why is majority voting among voters a good idea? It would be disastrous to use it in, say, science. There are more astrologers than astronomers, and believers in ‘paranormal’ phenomena often point out that purported witnesses of such phenomena outnumber the witnesses of most scientific experiments by a large factor. So they demand proportionate credence. Yet science refuses to judge evidence in that way: it sticks with the criterion of good explanation. So if it would be wrong for science to adopt that ‘democratic’ principle, why is it right for politics? Is it just because, as Churchill put it, ‘Many forms of Government have been tried and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.’ That would indeed be a sufficient reason. But there are cogent positive reasons as well, and they too are about explanation, as I shall explain.

  Sometimes politicians have been so perplexed by the sheer perverseness of apportionment paradoxes that they have been reduced to denouncing mathematics itself. Representative Roger Q. Mills of Texas complained in 1882, ‘I thought . . . that mathematics was a divine science. I thought that mathematics was the only science that spoke to inspiration and was infallible in its utterances [but] here is a new system of mathematics that demonstrates the truth to be false.’ In 1901 Representative John E. Littlefield, whose own seat in Maine was under threat from the Alabama paradox, said, ‘God help the State of Maine when mathematics reach for her and undertake to strike her down.’

  As a matter of fact, there is no such thing as mathematical ‘inspiration’ (mathematical knowledge coming from an infallible source, traditionally God): as I explained in Chapter 8, our knowledge of mathematics is not infallible. But if Representative Mills meant that mathematicians are, or somehow ought to be, society’s best judges of fairness, then he was simply mistaken.* The National Academy of Sciences panel that reported to Congress in 1948 included the mathematician and physicist John von Neumann. It decided that a rule invented by the statistician Joseph Adna Hill (which is the one in use today) is the most impartial between states. But the mathematicians Michel Balinski and Peyton Young have since concluded that it favours smaller states. This illustrates again that different criteria of ‘impartiality’ favour different apportionment rules, and which of them is the right criterion cannot be determined by mathematics. Indeed, if Representative Mills intended his complaint ironically – if he really meant that mathematics alone could not possibly be causing injustice and that mathematics alone could not cure it – then he was right.

  However, there is a mathematical discovery that has changed for ever the nature of the apportionment debate: we now know that the quest for an apportionment rule that is both proportional and free from paradoxes can never succeed. Balinski and Young proved this in 1975.

  Balinski and Young’s Theorem

  Every apportionment rule that stays within the quota suffers from the population paradox.

  This powerful ‘no-go’ theorem explains the long string of historical failures to solve the apportionment problem. Never mind the various other conditions that may seem essential for an apportionment to be fair: no apportionment rule can meet even the bare-bones requirements of proportionality and the avoidance of the population paradox. Balinski and Young also proved no-go theorems involving other classic paradoxes.
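  Hamilton's rule stays within the quota (every state receives its exact proportional share rounded either down or up), so the theorem says it must suffer from the population paradox. A sketch with invented census figures confirms it: the third state grows faster than the second, yet loses a seat to it.

```python
from math import floor

def hamilton(populations, house_size):
    """Hamilton's rule: floor of each exact quota, then leftover seats
    to the states with the largest fractional remainders."""
    total = sum(populations)
    quotas = [p * house_size / total for p in populations]
    seats = [floor(q) for q in quotas]
    order = sorted(range(len(populations)),
                   key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in order[:house_size - sum(seats)]:
        seats[i] += 1
    return seats

# Invented figures for states A, B, C (house of 10 seats):
before = [510, 344, 146]   # quotas 5.10, 3.44, 1.46
after  = [510, 378, 162]   # A unchanged; B grows 9.9%, C grows 11.0%

print(hamilton(before, 10))  # [5, 3, 2]
print(hamilton(after, 10))   # [5, 4, 1] <- C grew FASTER than B, yet
                             #    C loses a seat and B gains one
```

In the first census C's remainder (0.46) wins the leftover seat; after the growth, the dilution of everyone's quota by the larger total leaves C's remainder (0.54) ranked below A's (0.86) and B's (0.60), so the two leftover seats go elsewhere.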

  This work had a much broader context than the apportionment problem. During the twentieth century, and especially following the Second World War, a consensus had emerged among most major political movements that the future welfare of humankind would depend on an increase in society-wide (preferably worldwide) planning and decision-making. The Western consensus differed from its totalitarian counterparts in that it expected the object of the exercise to be the satisfaction of individual citizens’ preferences. So Western advocates of society-wide planning were forced to address a fundamental question that totalitarians do not encounter: when society as a whole faces a choice, and citizens differ in their preferences among the options, which option is it best for society to choose? If people are unanimous, there is no problem – but no need for a planner either. If they are not, which option can be rationally defended as being ‘the will of the people’ – the option that society ‘wants’? And that raises a second question: how should society organize its decision-making so that it does indeed choose the options that it ‘wants’? These two questions had been present, at least implicitly, from the beginning of modern democracy. For instance, the US Declaration of Independence and the US Constitution both speak of the right of ‘the people’ to do certain things such as remove governments. Now they became the central questions of a branch of mathematical game theory known as social-choice theory.

  Thus game theory – formerly an obscure and somewhat whimsical branch of mathematics – was suddenly thrust to the centre of human affairs, just as rocketry and nuclear physics had been. Many of the world’s finest mathematical minds, including von Neumann, rose to the challenge of developing the theory to support the needs of the countless institutions of collective decision-making that were being set up. They would create new mathematical tools which, given what all the individuals in a society want or need, or prefer, would distil what that society ‘wants’ to do, thus implementing the aspiration of ‘the will of the people’. They would also determine what systems of voting and legislating would give society what it wants.

  Some interesting mathematics was discovered. But little, if any, of it ever met those aspirations. On the contrary, time and again the assumptions behind social-choice theory were proved to be incoherent or inconsistent by ‘no-go’ theorems like that of Balinski and Young.

  Thus it turned out that the apportionment problem, which had absorbed so much legislative time, effort and passion, was the tip of an iceberg. The problem is much less parochial than it looks. For instance, rounding errors are proportionately smaller with a larger legislature. So why don’t they just make the legislature very big – say, ten thousand members – so that all the rounding errors would be trivial? One reason is that such a legislature would have to organize itself internally to make any decisions. The factions within the legislature would themselves have to choose leaders, policies, strategies, and so on. Consequently, all the problems of social choice would arise within the little ‘society’ of a party’s contingent in the legislature. So it is not really about rounding errors. Also, it is not only about people’s top preferences: once we are considering the details of decision-making in large groups – how legislatures and parties and factions within parties organize themselves to contribute their wishes to ‘society’s wishes’ – we have to take into account their second and third choices, because people still have the right to contribute to decision-making if they cannot persuade a majority to agree to their first choice. Yet electoral systems designed to take such factors into account invariably introduce more paradoxes and no-go theorems.

  One of the first of the no-go theorems was proved in 1951 by the economist Kenneth Arrow, and it contributed to his winning the Nobel prize for economics in 1972. Arrow’s theorem appears to deny the very existence of social choice – and to strike at the principle of representative government, and apportionment, and democracy itself, and a lot more besides.

  This is what Arrow did. He first laid down five elementary axioms that any rule defining the ‘will of the people’ – the preferences of a group – should satisfy, and these axioms seem, at first sight, so reasonable as to be hardly worth stating. One of them is that the rule should define a group’s preferences only in terms of the preferences of that group’s members. Another is that the rule must not simply designate the views of one particular person to be ‘the preferences of the group’ regardless of what the others want. That is called the ‘no-dictator’ axiom. A third is that if the members of the group are unanimous about something – in the sense that they all have identical preferences about it – then the rule must deem the group to have those preferences too. Those three axioms are all expressions, in this situation, of the principle of representative government.

  Arrow’s fourth axiom is this. Suppose that, under a given definition of ‘the preferences of the group’, the rule deems the group to have a particular preference – say, for pizza over hamburger. Then it must still deem that to be the group’s preference if some members who previously disagreed with the group (i.e. they preferred hamburger) change their minds and now prefer pizza. This constraint is similar to ruling out a population paradox. A group would be irrational if it changed its ‘mind’ in the opposite direction to its members.

  The last axiom is that if the group has some preference, and then some members change their minds about something else, then the rule must continue to assign the group that original preference. For instance, if some members have changed their minds about the relative merits of strawberries and raspberries, but none of their preferences about the relative merits of pizza and hamburger have changed, then the group’s preference between pizza and hamburger must not be deemed to have changed either. This constraint can again be regarded as a matter of rationality: if no members of the group change any of their opinions about a particular comparison, nor can the group.

  Arrow proved that the axioms that I have just listed are, despite their reasonable appearance, logically inconsistent with each other. No way of conceiving of ‘the will of the people’ can satisfy all five of them. This strikes at the assumptions behind social-choice theory at an arguably even deeper level than the theorems of Balinski and Young. First, Arrow’s axioms are not about the apparently parochial issue of apportionment, but about any situation in which we want to conceive of a group having preferences. Second, all five of these axioms are intuitively not just desirable to make a system fair, but essential for it to be rational. Yet they are inconsistent.
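  Arrow's proof itself is abstract, but the flavour of the inconsistency can be seen in the classic 'Condorcet cycle' – not part of the argument as described here, just the standard illustration of how group preferences misbehave. With only three voters and three options (the pizza and hamburger of the axioms above, plus an invented third option), pairwise majority voting already fails to yield a self-consistent group preference:

```python
# Three voters with the classic cyclic ('Condorcet') preference profile.
# Ballots are invented for illustration; each ranks the options best-first.
ballots = [
    ["pizza", "hamburger", "salad"],
    ["hamburger", "salad", "pizza"],
    ["salad", "pizza", "hamburger"],
]

def majority_prefers(x, y):
    """True if a strict majority of ballots rank option x above option y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Every pairwise contest is won 2-1, yet the results form a cycle:
for x, y in [("pizza", "hamburger"),
             ("hamburger", "salad"),
             ("salad", "pizza")]:
    print(f"majority prefers {x} to {y}: {majority_prefers(x, y)}")
# -> True in all three cases: the 'group' prefers pizza to hamburger,
#    hamburger to salad, and salad to pizza.
```

A group whose ‘preferences’ run in a circle has no coherent first choice at all, which is the sort of irrationality Arrow's theorem shows cannot be engineered away.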

  It seems to follow that a group of people jointly making decisions is necessarily irrational in one way or another. It may be a dictatorship, or under some sort of arbitrary rule; or, if it meets all three representativeness conditions, then it must sometimes change its ‘mind’ in a direction opposite to that in which criticism and persuasion have been effective. So it will make perverse choices, no matter how wise and benevolent the people who interpret and enforce its preferences may be – unless, possibly, one of them is a dictator (see below). So there is no such thing as ‘the will of the people’. There is no way to regard ‘society’ as a decision-maker with self-consistent preferences. This is hardly the conclusion that social-choice theory was supposed to report back to the world.

  As with the apportionment problem, there were attempts to fix the implications of Arrow’s theorem with ‘why don’t they just . . . ?’ ideas. For instance, why not take into account how intense people’s preferences are? For, if slightly over half the electorate barely prefers X to Y, but the rest consider it a matter of life and death that Y should be done, then most intuitive conceptions of representative government would designate Y as ‘the will of the people’. But intensities of preferences, and especially the differences in intensities among different people, or between the same person at different times, are notoriously difficult to define, let alone measure – like happiness. And, in any case, including such things makes no difference: there are still no-go theorems.

  As with the apportionment problem, it seems that whenever one patches up a decision-making system in one way, it becomes paradoxical in another. A further serious problem that has been identified in many decision-making institutions is that they create incentives for participants to lie about their preferences. For instance, if there are two options of which you mildly prefer one, you have an incentive to register your preference as ‘strong’ instead. Perhaps you are prevented from doing that by a sense of civic responsibility. But a decision-making system moderated by civic responsibility has the defect that it gives disproportionate weight to the opinions of people who lack civic responsibility and are willing to lie. On the other hand, a society in which everyone knows everyone sufficiently well to make such lying difficult cannot have an effectively secret ballot, and the system will then give disproportionate weight to the faction most able to intimidate waverers.

  One perennially controversial social-choice problem is that of devising an electoral system. Such a system is mathematically similar to an apportionment scheme, but, instead of allocating seats to states on the basis of population, it allocates them to candidates (or parties) on the basis of votes. However, it is more paradoxical than apportionment and has more serious consequences, because in the case of elections the element of persuasion is central to the whole exercise: an election is supposed to determine what the voters have become persuaded of. (In contrast, apportionment is not about states trying to persuade people to migrate from other states.) Consequently an electoral system can contribute to, or can inhibit, traditions of criticism in the society concerned.

  For example, an electoral system in which seats are allocated wholly or partly in proportion to the number of votes received by each party is called a ‘proportional-representation’ system. We know from Balinski and Young that, if an electoral system is too proportional, it will be subject to the analogue of the population paradox and other paradoxes. And indeed the political scientist Peter Kurrild-Klitgaard, in a study of the most recent eight general elections in Denmark (under its proportional-representation system), showed that every one of them manifested paradoxes. These included the ‘More-Preferred-Less-Seats paradox’, in which a majority of voters prefer party X to party Y, but party Y receives more seats than party X.

  But that is really the least of the irrational attributes of proportional representation. A more important one – which is shared by even the mildest of proportional systems – is that they assign disproportionate power in the legislature to the third-largest party, and often to even smaller parties. It works like this. It is rare (in any system) for a single party to receive an overall majority of votes. Hence, if votes are reflected proportionately in the legislature, no legislation can be passed unless some of the parties cooperate to pass it, and no government can be formed unless some of them form a coalition. Sometimes the two largest parties manage to do this, but the most common outcome is that the leader of the third-largest party holds the ‘balance of power’ and decides which of the two largest parties shall join it in government, and which shall be sidelined, and for how long. That means that it is correspondingly harder for the electorate to decide which party, and which policies, will be removed from power.
