
The Atlas of Reality


by Robert C. Koons and Timothy Pickavance


  The Nomist could always posit that it is simply a basic, inexplicable principle of rationality that we should always prefer the theory with the simplest law, but this would be a still further cost of the theory, in light of the first corollary of Ockham's Razor:

  PMeth 1.1 First Corollary of Ockham's Razor. Other things being equal, prefer the theory that posits the fewest primitive, underivable postulates of reason.

  There is still another potential drawback to Nomism, one concerning functional laws that relate two or more quantities. An example would be Newton's force law, F = ma. When a force of quantity x Newtons is applied to a mass of q kilograms, the mass accelerates at a rate of x/q meters per second per second. The values of F and m can be any real number from 0 to infinity. Thus, there is an infinite number of property-triples of force, mass, and acceleration, one for each pair of force and mass quantities. On Nomism, the force law F = ma is in reality an infinite collection of laws, each with a fundamental nomic-necessitation connection among a different triple of force, mass, and acceleration quantities. This involves a huge inflation of the Nomist ontology.
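  To make the point concrete, each determinate choice of force and mass fixes its own determinate acceleration, and on the Nomist reading each such triple needs its own primitive necessitation connection. The sketch below is purely illustrative (ours, not the authors'); it simply tabulates a few of the infinitely many (F, m, a) triples covered by the single equation F = ma.

```python
# Illustrative only: the single functional law a = F/m covers a distinct
# (force, mass, acceleration) triple for every choice of force and mass.

def acceleration(force_newtons: float, mass_kg: float) -> float:
    """Newton's second law solved for acceleration: a = F/m."""
    return force_newtons / mass_kg

# A few of the infinitely many determinate triples the one law covers;
# the Nomist needs a separate nomic-necessitation connection for each.
for force, mass in [(1.0, 1.0), (10.0, 2.0), (9.8, 70.0)]:
    print(f"F = {force} N, m = {mass} kg  ->  a = {acceleration(force, mass):.2f} m/s^2")
```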

  In addition, many Nomists (like David Armstrong) want to deny the existence of uninstantiated universals. However, it seems likely that there are many specific quantities of mass, force, and acceleration that are never instantiated in the actual world (a mass greater than the total mass-energy of the entire universe, or a mass smaller than that of any mass-bearing fundamental particle). These possible quantities would seem to be things that would necessarily obey the relevant functional laws, but the Nomist cannot explain this, since the “missing” universals cannot bear the nomic-necessitation relation, because they simply do not exist.

  Strong Powerists can dodge objection 7 if they treat generic determinables (like mass and acceleration) as real universals, in addition to the various determinate quantities of mass (e.g., 1 kg, 10 kg) and of acceleration (1 m/sec², 5 m/sec², etc.). This might seem to involve some truthmaking redundancy: why should each massive object have both some specific mass and the general determinable of massiveness? However, this can be worked out in a plausible and elegant way (see Chapter 10).

  Finally, we could describe the debate between Strong Nomists and Strong Powerists in this way: Nomists believe in the relation of nomic necessitation, which consists in the possession of certain powers on the part of universals. When universal U1 is tied by nomic necessitation to U2, the relation of nomic necessitation confers on universal U1 the active power of making its instances be instances of U2, while conferring the complementary passive power on U2. In contrast, Powerists believe that the fundamental powers are possessed by ordinary particulars (like particles or people), not universals. The Powerist account seems a simpler and more natural way to think about the matter.

  5.2 Neo-Humeism: Reduction of Conditionals, Laws, and Powers

  The Scottish philosopher David Hume famously argued that we have no good reason to believe that there are any “necessary connections” in the world. The only kind of necessity or impossibility that Hume was willing to accept was that generated by the connections among our ideas or concepts. In the twentieth century, this Humean perspective was revived by such philosophers as Frank Ramsey (Ramsey 1978/1928) and David K. Lewis (Lewis 1980b, 1986a, 1994). For the Neo-Humeist, all the truths or supposed truths about powers, counterfactual conditionals, and laws of nature are grounded in and reducible to truths about the actual distribution of ordinary, qualitative properties in space and time.3

  The Neo-Humeist program proceeds in the following way. First, give a reductionist account of the laws of nature, and then use the laws to ground the truths of attributions of power and of counterfactual conditionals. We'll focus in this section on the Neo-Humeist or Ramsey/Lewis Theory of the laws of nature.

  The key problem is to account for the difference between lawful generalizations and mere accidental generalizations. The Ramsey/Lewis Theory proposes that the difference between the two depends on the way in which a generalization does or does not fit into our best scientific theory of the world. By ‘best’ scientific theory, Ramsey and Lewis do not mean the one that is fundamentally true, but rather the theory that combines the most ‘virtues’, where the standards of theoretical virtue are fixed by the conventions and customs of our actual scientific practice. In particular, we seem to value two things in our theories: (1) good fit between the theory's predictions and observed experimental results (the theory predicts all and only the observed results), and (2) overall simplicity of the theory, in terms of its basic vocabulary, fundamental postulates, and mathematical form. According to Lewis (1986a), we should be willing to accept some discrepancy between the theory and the data if the theory is much simpler than its more accurate competitors, and we should be willing to accept a relatively complex theory if its predictions are much better than those of any simpler competitor. The “best” theory is the one that achieves the best trade-off between these two values.
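  As a purely illustrative gloss on this trade-off (the toy model and its numbers are ours, not Ramsey's or Lewis's), one can picture the “best system” as whichever candidate theory maximizes some weighted balance of fit and simplicity, with the weights standing in for the much vaguer standards of actual scientific practice.

```python
# Toy illustration of "best system" selection; the scoring rule and weights
# are hypothetical stand-ins for the standards of actual scientific practice.

candidate_theories = [
    # (name, fit with observed data in [0, 1], simplicity in [0, 1])
    ("very simple but sloppy", 0.80, 0.95),
    ("very accurate but baroque", 0.99, 0.30),
    ("balanced", 0.95, 0.80),
]

FIT_WEIGHT, SIMPLICITY_WEIGHT = 0.6, 0.4  # made-up weights

def score(fit: float, simplicity: float) -> float:
    """Overall 'theoretical virtue' as a weighted balance of fit and simplicity."""
    return FIT_WEIGHT * fit + SIMPLICITY_WEIGHT * simplicity

best = max(candidate_theories, key=lambda t: score(t[1], t[2]))
print("Best system:", best[0])  # -> Best system: balanced
```

  On the Ramsey/Lewis view, the generalizations belonging to whichever theory comes out best in this sense are, by definition, the laws of nature.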

  This Neo-Humeist account has two principal advantages. First, it is ontologically very simple. It posits no fundamental truths involving powers, laws, or counterfactual conditionals. Second, it has a simple explanation for our preference for simple scientific theories. A simple account of the laws is more likely to be true, since to be a law is nothing more than to be a generalization that belongs to the simplest account of nature. In other words, a simpler theory is more likely to give the “true” account of the laws, because simplicity is one of the two factors that determine which theory counts as best.

  There are three major objections to the Neo-Humeist account of laws. First, it makes the laws of nature dependent on us, on our practices and preferences, in a way incompatible with scientific realism. Second, it makes the powers of things extrinsic to those things and faces counterexamples involving hypothetical “small worlds.” Third, it has difficulty explaining the rationality of induction, that is, of our confidence that unobserved cases (such as those in the future) will be relevantly similar to observed ones.

  5.2.1 Argument from scientific realism

  On the Neo-Humeist account, what makes something a law of nature is the fact that it would fit into a theory that would best satisfy our preferences, as fixed by our actual scientific practices. This seems to make scientific reality relative to the contingencies of our conventional perspective. Scientific truth would no longer be something objective and mind-independent. This would contradict what seems obvious about the progress of science, namely, that in science we discover truths about the world as it is, independent of ourselves. The reality of the law of gravity, for instance, does not depend in any way on us or on our preferences.

  There is a Neo-Humeist response to this objection, developed by David Lewis (1973b, Section 3.3). One can “rigidify” the reference to our practices and preferences, so that it is our practices and preferences in this, the actual world, that fix the meaning of ‘law’. This “rigidification” makes the laws of nature independent of us, in the sense that the laws would have been the same even if our practices and preferences had been different, since it is our practices in this world that determine what the laws are in all worlds, regardless of what our practices and preferences might have been in those other worlds.

  A similar rigidification could take place with respect to time. Suppose that our standards and preferences have changed over the centuries and will continue to change. The Lewisians suppose that what we mean by ‘law of nature’ changes as the standards change. The phrase ‘law of nature’ meant something different in the year 1500 than it does now, and different from what it would mean in 2500. The laws of nature don't change. What we mean now by ‘law of nature’ incorporates in a “rigid” way today's standards, and so doesn't vary as the standards change. What is a law of nature now (by our standards) will still be a law (by those same standards) in 2500, whatever people then may mean by ‘law of nature’. This is a subtle point, involving the distinction between using the phrase ‘law of nature’ and merely mentioning that phrase.

  This rigidification comes at a steep cost, however, since it would follow from this account that we may not be at all reliable in identifying the laws of nature. Our conventions and preferences could have been different, and had they been different, we would have systematically misidentified the laws of nature. That we here and now in fact get the laws right (applying the “right” standards) is just a lucky accident, resulting from our occupying this one, special world, in which our actual practices fix the meaning of ‘law’.

  In order to know what the laws are, we must be reliable at detecting the true laws. This reliability is a matter of our “tracking” the correct laws with our beliefs across a span of possible variation. Let's suppose that we do in fact believe in the true laws of nature (or something close to them) in the actual world. There are three kinds of variation to consider:

  1. Would we still believe in the true laws if the laws were the same, and we had the same preferences for theories, but we observed slightly different parts of the actual world?

  2. Would we still believe in the true laws if we had the same theory preferences, and we observed the same parts of the world, but the laws of nature were slightly different?

  3. Would we still believe in the true laws if the laws were the same and we observed the same parts of the world, but our theory preferences were slightly different?

  It's variation 3 that potentially creates difficulties for the Neo-Humeist. It seems that slight variations in our theory preferences in nearby worlds could lead us away from the true laws of nature, since what the laws are is fixed (rigidly) by our preferences in this world.

  We may be very lucky—it may be that the very same laws would come out as “best” under a variety of theory preferences near to our actual ones, but there seems to be little reason to think that we are so lucky. We can take the actual variations in our theory preferences in different eras of history to be a good sample of the sort of variations of type 3 that we need to consider. Thomas Kuhn, in his classic work on scientific revolutions (Kuhn 1970), provided evidence that our standards and preferences for “good” theories have changed dramatically over the last 300 years, so much so that our current theories would not have counted as good theories in the relatively recent past. For example, in the time of Copernicus, it was considered critical that a good theory use only circular orbits in explaining the motions of the planets. In the nineteenth century, only deterministic theories were considered acceptable, and Albert Einstein resisted quantum mechanics for most of his life for that reason.

  If these worries are justified, then the Neo-Humeist position entails that one of the following two methodological principles involving scientific truth must be overridden:

  PMeth 2.1 Scientific Realism: Objectivity. Other things being equal, adopt the theory that implies that our best scientific theories are objectively true: true independently of our scientific preferences and practices.

  PMeth 2.2 Scientific Realism: Reliability. Other things being equal, adopt the theory that implies that we are reasonably reliable in finding scientific truth.

  If the laws of nature are not determined by the rigidification of our actual standards, then we must violate objectivity, since what the laws of nature are would then vary from world to world with our varying preferences. This would make all of science a branch of sociology. Alternatively, if we rigidify our actual standards, we salvage objectivity but put reliability into jeopardy. However, it might turn out that nature is “kind” (as Lewis put it), in that the very same scientific lawbooks come out as best under a wide range of variation surrounding our actual standards of theory choice. If so, then the Neo-Humeists could claim that their theory of laws is consistent with objectivity and reliability. It's hard to tell who's right here. We would have to be able to anticipate which system of laws turns out to be best by our standards, and then we would have to do a great deal of historical and sociological research to determine how much variation in the standards of theory choice was really possible.

  In addition, isn't reliability a problem for every account of laws? Can the Nomist or Powerist do a better job of securing our reliability at detecting the true laws? At the very least we can say that given rigidification, the Neo-Humeist can do no better than advocates of other theories at explaining the reliability of using our contingent standards of simplicity as a guide to the true laws. The Neo-Humeists lose a potential advantage. In addition, the Strong Powerist might have a better shot at securing our reliability, in two ways. First, the Powerist might attribute to the human mind an inherent power to recognize the true laws of nature, given enough actual experience, by a kind of rational intuition (what Aristotle called ‘noûs’). Second, Powerists could argue that Neo-Humeism exaggerates the importance of global theory choice. Instead, we discover the laws of nature by means of local interactions under carefully controlled conditions (see the Powerist response to the problem of scientific knowledge in Section 6.1.2).

  5.2.2 The extrinsicality objection and small worlds

  According to Neo-Humeism, whether something has a particular power (active, passive, or immanent) depends on the laws of nature, and whether something is a law of nature depends on the overall pattern of particular facts across the entire history of the cosmos. Thus, having a power is an extrinsic feature of a thing, dependent on the pattern of events in remote parts of the cosmos in remote times, both past and future. In contrast, it seems obvious that having a power is an intrinsic feature of a thing.

  PMeta 2 The Intrinsicality of Powers. Having a power is an intrinsic property.

  This fact is confirmed by thought-experiments involving hypothetical “small worlds” (Tooley 1987). Suppose that the world consisted only of a single electron, an electron intrinsically identical to all actual electrons. It seems that such a solitary electron would still be negatively charged, and that, being negatively charged, it would still have the power of repelling other negatively charged things and attracting positively charged things. However, in such a small world, the “best” scientific theory (in the Ramsey/Lewis sense) would include no laws of nature involving charge at all, since adding such a law would make the theory more complicated without enabling any new or more accurate predictions to be made. On the Neo-Humeist account, then, it must be impossible for anything inhabiting such a small world to have any power to move other objects. We seem to be able to coherently imagine small-world possibilities in which things have powers, however, and the epistemological principle of Imagination as a Guide to Possibility provides support for the conclusion that such a world really is possible, contrary to Neo-Humeism.

  Principle of Epistemology (PEpist) 1 Imagination as Guide to Possibility. If a scenario is imaginable in great detail without evident absurdity, then we have good reason to think that it represents a metaphysical possibility.

  Here is another similar example, also from Tooley. Suppose that we discovered that the world consists of ten kinds of fundamental particle, and suppose that we have observed 54 of the 55 possible kind-to-kind interactions. However, suppose that particles of type #1 have never interacted with particles of type #10, and never will (perhaps type #1 disappeared a few seconds after the Big Bang, and type #10 didn't appear until millions of years later). Since we know the laws that govern the other 54 possible interactions, it seems reasonable for us to believe that there must be some similar but unknown law governing the type #1-to-type #10 interaction. However, the Neo-Humeist theory entails that there can be no such law, since the simplest theory of the actual interactions in the world would not include any general statement about what would happen in such a case. Adding any conjectured law would only make the total theory more cumbersome without improving its fit with the actual mosaic of facts.
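  For the record, the count of 55 is just elementary combinatorics: with ten kinds of particle there are 45 pairings of two distinct kinds plus 10 same-kind pairings. A quick sanity check of that arithmetic:

```python
from math import comb

kinds = 10
distinct_pairs = comb(kinds, 2)    # interactions between two different kinds: 45
same_kind = kinds                  # interactions of a kind with itself: 10
print(distinct_pairs + same_kind)  # -> 55 possible kind-to-kind interactions
```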

  5.2.3 Objection based on induction

  The final objection to Neo-Humeism concerns the rationality of induction. Once we have encountered a large body of varied data that conforms to a simple scientific theory, it seems reasonable for us to believe that the theory accurately describes much, if not all, of the world. In particular, we take it for granted that the theory will continue to fit the data in cases that have not yet been observed. We assume, for example, that the future will be in this respect very similar to the past. To use a familiar example, we rationally believe that the earth will continue to rotate on its axis, bringing about many future sunrises and sunsets.

  On the Neo-Humeist account, the fact that the particular cases conform to the laws of nature is not explained by those laws (at least, not in the sense that the first is metaphysically grounded in the second). The ontological explanation goes the other way: the laws are laws in part because they conform to the pattern of particular events, not vice versa. For the Neo-Humeist, there are possible worlds corresponding to every imaginable pattern of events. In most of those worlds, the “best” theory is horrendously complicated. We are just lucky to inhabit a possible world where the “best” theory is relatively simple.

  But how do we know that we do inhabit such a “simple” world? All we know for sure is that the observed part of our world conforms to a simple system of laws. Let's call the observed part of our world O. There are many possible worlds that agree with our world with respect to all of the events in O. It is fairly easy to show that, of the worlds that agree about O, the overwhelming majority (by astronomically large margins) fail to conform to any simple system of laws in the complement of O, Comp(O), which encompasses all the events other than those in O. Thus, it would seem that we have very good reason to believe that it is unlikely that our world is one of the few simple ones. Neo-Humeism, therefore, gives us strong reasons to doubt the reliability of induction. It gives what epistemologists call an ‘undercutting defeater’ of our inductive inferences.
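  The “astronomically large margins” here reflect a familiar counting point: simple descriptions are scarce, so almost every way of filling in Comp(O) fits no simple system of laws. The toy calculation below is our own illustration, not the authors'; it models the unobserved remainder of the world as a string of n yes/no facts and counts how few of its possible completions could be generated by any short description.

```python
# Toy counting argument (illustrative assumptions: the unobserved remainder is
# modeled as n binary facts; "simple" means generable by a description of at
# most k bits).
n = 1000  # hypothetical number of unobserved yes/no facts in Comp(O)
k = 100   # hypothetical bound on the length of a "simple" description

simple_completions_at_most = 2 ** (k + 1) - 1  # number of descriptions of <= k bits
all_completions = 2 ** n                       # number of possible completions of O
fraction_simple = simple_completions_at_most / all_completions
print(f"At most {fraction_simple:.1e} of the completions of O can be simple.")
```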

 
