
The Atlas of Reality


by Robert C. Koons and Timothy Pickavance


  So, Aristotelian Temporal Finitists show that it is possible both to be an Intervalist and to embrace Infinite Temporal Divisibility. What about those, like Whitehead (1919), who embrace both Intervalism and Temporal Infinitism? Such a position, Infinitary Intervalism, faces a serious problem in identifying what is metaphysically fundamental in the case of time. Each fundamental interval of time would have to be wholly composed of an infinite number of equally fundamental sub-intervals. This would generate a very high degree of metaphysical redundancy at the fundamental level. Each interval could be thought of both as fundamental in its own right and as a merely derivative entity, wholly grounded in the lengths and arrangements of some of its fundamental sub-intervals.

  Such metaphysical redundancy would involve a large number of brute metaphysical necessities. For example, each interval I would be wholly composed of a finite set of disjoint, contiguous sub-intervals S. (In fact, for each interval I, there would be an infinite number of such sets S.) It is necessarily the case that the temporal length (or duration) of I is equal to the sum of the lengths of the members of S. Since both the length of I and the length of each of the members of S are, for the Infinitary Intervalists, fundamental truths, we have a necessary connection between equally fundamental, distinct sets of truths. Therefore, by the standard of the Second Corollary of Ockham's Razor (PMeth 1.2), Infinitary Intervalism is quite unattractive, relative to the three competing theories of Aristotelian Temporal Finitism, Temporal Discretism, and Instantism.
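
  The necessary connection can be written out in a single formula. Let |J| stand for the duration of an interval J (notation introduced here only for illustration), and let S be any finite set of disjoint, contiguous sub-intervals that jointly compose I. The brute necessity is then:

```latex
% Brute necessity linking equally fundamental durations (illustrative notation):
\[
  \Box \Bigl( \,|I| \;=\; \sum_{J \in S} |J| \,\Bigr)
\]
% For the Infinitary Intervalist, |I| and each |J| are equally fundamental,
% yet the identity above holds of necessity.
```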

  Consequently, we will assume that all Intervalists are either Discretists or Aristotelian Finitists, and that all Instantists are Temporal Infinitists. Thus, the issue of Finitism is crucial.

  19.1.1 Argument for Instantism: Possible super-tasks

  A super-task (as defined by James Thomson 1954) is a process consisting of infinitely many sub-tasks, completable in a finite period of time. The original super-task was proposed by the ancient philosopher Zeno of Elea in the paradox of Achilles and the tortoise. Achilles is much faster than the tortoise, one hundred times faster, we'll say, but the tortoise starts out with a head start of, say, 1000 meters. In order to catch up with the tortoise, Achilles must first cover the 1000 meters between his starting point and the tortoise's. By the time Achilles has done that, the tortoise will have moved 10 meters forward. To catch the tortoise, Achilles must first cover these 10 meters. By the time he has done that, the tortoise will have moved 0.1 meters ahead. Achilles can never catch the tortoise because whenever Achilles reaches the tortoise's starting point at the beginning of the most recent period, the tortoise will always have moved forward. To catch the tortoise, Achilles must complete an infinite number of tasks: running first 1000 meters, then 10 meters, then 0.1 meters, and so on ad infinitum.
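
  For concreteness, and using the text's figures (a 100-to-1 ratio of speeds and a 1000-meter head start), the distances Achilles must cover form a geometric series with ratio 1/100, so the point at which he draws level lies only a finite distance ahead; the puzzle is not that the sum diverges, but that reaching it requires completing infinitely many sub-tasks:

```latex
\[
  1000 + 10 + 0.1 + \cdots
  \;=\; \sum_{n=0}^{\infty} 1000 \left(\tfrac{1}{100}\right)^{n}
  \;=\; \frac{1000}{1 - \tfrac{1}{100}}
  \;=\; \frac{100000}{99}
  \;\approx\; 1010.1 \text{ meters.}
\]
```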

  There is a simpler version of Zeno's puzzle, that of Homer and the stadium. Suppose that Homer wants to walk across the stadium. To get to the other side, he must first walk halfway, then walk half the remaining distance, and so on ad infinitum. The single process of walking across the stadium can be described as consisting of an infinite number of smaller sub-tasks: walking half of the way, then a fourth of the way, then an eighth, and so on. If it takes one-half hour to walk halfway, one-quarter hour to walk a quarter of the way, and so on, Homer can complete these infinitely many tasks in just one hour.
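
  The arithmetic behind the one-hour figure is the same convergent series, applied to the times rather than the distances:

```latex
\[
  \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots
  \;=\; \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n}
  \;=\; 1 \text{ hour.}
\]
```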

  According to Aristotle and Simplicius, Zeno offered the paradox as an argument against the reality of motion and in defense of Parmenides's theory that reality is unchanging. Zeno took it as obvious that such an infinitary task cannot be completed. However, we have a great deal of evidence, from sense perception and memory, for the reality of motion and, hence, for the possibility of such infinitary super-tasks (PEpist 4). Given Time-Process Correspondence, we have reason to believe that there exist finite intervals of time with infinitely many actual parts, one corresponding to each sub-task of the infinitary process. This provides a strong argument for the fundamentality of instants, since if it were intervals that were fundamental, we would seem to have in such infinitary processes a case of an infinite regress of metaphysical priority, with each interval being dependent on the many proper sub-intervals it is composed of, never reaching a level of absolutely fundamental instants.

  However, Intervalists could respond by claiming, with Aristotle, that Homer's walk across the stadium is only potentially, and not actually, divisible into an infinite number of parts. It seems plausible to think that Homer's walk in fact consists of a finite number of metaphysically fundamental, temporally extended sub-processes, each of which could have occupied a shorter duration. On this picture, the intervals of time involved have potential, but not actual, parts.

  Adolf Grünbaum (1967) argued that we can conceive of a super-task with infinitely many actual parts. Imagine that Homer walks across the stadium in the following staccato fashion. He walks halfway across, stops for a short period of time, and then walks a quarter of the way across, stops again for an even shorter period of time, and so on. Let's suppose that each period of rest is equal in length to the preceding period of motion. Homer could then complete the walk across the stadium in exactly two hours. In this case, the Aristotelian cannot argue that the walk consists in reality of only a finite number of parts. It is clear that Homer's staccato walk is divided actually into an infinity of smaller parts, each separated from the next by a period of rest.
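
  Given the stipulation that each rest equals the preceding period of motion, the total time doubles but still converges, which is why the staccato walk finishes in exactly two hours:

```latex
\[
  \left(\tfrac{1}{2} + \tfrac{1}{2}\right)
  + \left(\tfrac{1}{4} + \tfrac{1}{4}\right)
  + \left(\tfrac{1}{8} + \tfrac{1}{8}\right) + \cdots
  \;=\; 2 \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n}
  \;=\; 2 \text{ hours.}
\]
```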

  There is a key difference between Homer's ordinary walk and his staccato walk. We have experience demonstrating that walks across stadiums are in fact possible. We do not have comparable evidence that staccato walks, of the kind described, ever happen in reality. We can, however, imagine such a walk, and we can, apparently, do so without contradicting ourselves or falling into any obvious absurdity. By Imagination as Guide to Possibility (PEpist 1), therefore, we have grounds for holding that Homer's staccato walk is possible, and so we have reason to believe that it is at least possible for an interval to have infinitely many actual parts. In such a case, the instants within the interval would be the metaphysically fundamental entities. It seems reasonable to assume that if instants (or points) are possibly fundamental, then they are necessarily fundamental, since metaphysical fundamentality would seem to be an essential feature of things.

  19.1.2 Arguments for Intervalism: Impossible super-tasks

  The main argument for Intervalism is that it follows from the metaphysical impossibility of certain infinitary scenarios involving super-tasks.

  THOMSON'S SUPER-LAMP Thomson's (1954) original super-task involved a lamp with a power-switch having two settings: On and Off. Suppose that the lamp begins in the On position, is switched to Off after one-half hour, and then is switched to On after one-quarter hour, and so on. At the end of the hour, the lamp's switch has been moved from On to Off and back to On infinitely many times. There is no last state of the switch. For any switch movement in the series, there is always a still later one within the hour period. The problem is this: what is the setting of the lamp after the hour is over? On or Off? Either outcome seems possible, without obviously contradicting any laws of motion.
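
  Here is a minimal computational sketch (ours, not Thomson's) of why the switching history settles nothing: the state after any finite number of switches is perfectly well defined, but the sequence of states simply alternates and so has no limit as the hour runs out. The function names are illustrative only.

```python
# Sketch of the Thomson lamp: switch k occurs at time 1 - (1/2)**k hours,
# and the lamp's state after n switches alternates without converging.

def lamp_state_after(n_switches: int, start_on: bool = True) -> bool:
    """State of the lamp (True = On) after n_switches flips."""
    state = start_on
    for _ in range(n_switches):
        state = not state
    return state

def switch_time(k: int) -> float:
    """Time in hours of the k-th switch: 1/2, 3/4, 7/8, ..., approaching 1."""
    return 1 - 0.5 ** k

for n in (1, 2, 3, 10, 11):
    print(f"after switch {n} at t = {switch_time(n):.4f} h: "
          f"{'On' if lamp_state_after(n) else 'Off'}")
# The printed states alternate On/Off forever, so the history up to the end of
# the hour leaves the final state undetermined.
```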

  So far, we have an unsolved problem, but no reason yet for thinking that the Thomson lamp involves any impossibility or absurdity. To get to a contradiction, we have to make some assumptions about the causal or explanatory structure of the situation.

  First, we have to assume that the final setting of the lamp has some causal explanation, in terms of the lamp's prior history. It seems plausible to assume that any event or process that has a beginning in time must have a causal explanation.

  Def D19.1 Immediate Causation. x is an immediate cause of y if and only if x is a cause of y, and there is no z such that x is a cause of z and z is a cause of y.

  Causation of the Initiation of Processes. Any event or process with a beginning in time has an immediate cause.

  Next, we have to assume that the convoluted history of the lamp cannot provide a causal explanation of the final state of the lamp at the end of the hour. As we shall see in Chapter 28, causal explanations come in two varieties, namely, continuous and discrete. A continuous explanation provides a cause for an event in terms of a process that reaches the occurrence of the event through an infinity of prior states, smoothly or continuously varying over time. A discrete explanation refers to a single event as the cause.

  Causation: Continuous or Discrete. If x is a cause of y, then either (a) x is a part of a simple (undivided) process including y as a part, which begins before the beginning of y, or (b) x is a process or event that is separate from y, in the sense that x and y do not belong to any single, simple process.

  Partial Simultaneity of Discrete Causes. If x is a cause of y, and x and y do not belong to any one simple (undivided) process, then x's time of occurrence includes the time at which y begins.

  Why must immediate continuous causes and their effects belong to a single simple or temporally undivided process? If the process isn't simple, if it contains many temporal parts that intervene between the cause and the effect, then the cause cannot be an immediate cause of the effect. This is because there can be no immediate action at a temporal distance: immediate causation cannot jump over a gap in time.

  Impossibility of Action at a Temporal Distance. If x is an immediate cause of y, then either x and y overlap in time or x and y belong to a single, temporally undivided process (with no intervening temporal parts).

  Given these three principles, we can conclude that the final setting of the lamp, whether Off or On, must have a cause, and that this cause must be part of a simple process that persists until the end of the hour of switching, a process containing no temporal parts intervening between the cause and the final state. However, the switching process clearly has infinitely many temporal parts, corresponding to the infinite number of distinct switch movements, from Off to On and from On to Off. Any of the earlier temporal parts is cut off from the final state of the lamp by infinitely many intervening temporal parts. Hence, there can be no immediate cause of the final state of the lamp, and so the Thomson lamp scenario is metaphysically impossible.

  Thus, we have a strong argument, based on causality, for thinking the Thomson lamp scenario is impossible. However, there is also a plausible argument for its possibility, based on Infinite Patchwork (PMeta 5.2). Appealing again to Imagination as a Guide to Possibility (PEpist 1) as well as to everyday experience with electric lamps, we know that it is possible for a lamp to be switched on and off again. It also seems possible that this switching process could be scaled downward to an arbitrary degree, making each switching event smaller and faster than its predecessor. If we assume that time is infinitely dense, that is, that there is an infinite series of ever-smaller intervals, all of which are completed before noon, then we can use patchwork principles to prove that the Thomson lamp scenario is possible, after all.

  (1) It is possible to switch the lamp on and off again.

  (2) The switching process described in premise (1) can be compressed without limit in time and space. That is, if it is possible to switch the lamp on and off again by moving the switch x millimeters in y seconds, then it is possible to switch the lamp on and off again by moving the switch x/2 millimeters in y/2 seconds.

  (3) There exists an infinite series of intervals, each ½ the length of its predecessor, all of which are located before noon (with noon as the earliest moment not contained in any of the intervals).

  (4) By Infinite Patchwork (PMeta 5.2) and premises (1–3), it is possible for the lamp to be turned on and off again infinitely often before noon.
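
  Premise (3) can be made concrete, on the assumption that time forms a continuum. Measuring time in hours before noon (so that noon is time 0), one such series of intervals is:

```latex
\[
  I_{n} \;=\; \Bigl[\, -\tfrac{1}{2^{\,n-1}},\; -\tfrac{1}{2^{\,n}} \,\Bigr],
  \qquad n = 1, 2, 3, \ldots
\]
% Each I_n has length 1/2^n, half that of its predecessor; every I_n lies
% before noon; and noon is the earliest moment that comes after all of them.
```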

  If we accept the causal argument for the impossibility of the Thomson lamp, then we must reject either one of the premises (1–3) or Infinite Patchwork. Premise (1) is known to be true, and the patchwork principles seem to be indispensable guides to possibility, so we shall focus on premises (2) and (3).

  Temporal Finitism offers a simple solution to the paradox by denying premise (3). If time is discrete, with indivisible temporal “atoms”, then premise (3) is clearly false. In addition, even if time is not discrete, if it is impossible for any finite interval of time to be actually divided into an infinite number of sub-intervals, then premise (3) will still be false, even if every finite interval is potentially divisible into smaller sub-intervals.

  We might try to avoid Temporal Finitism by concentrating instead on premise (2). One worry about the Thomson lamp scenario is that it seems to require the switch to move faster and faster as the switching hour progresses. At some point, the switch will have to move faster than the speed of light. There would be no upper bound to the velocity of the switch, with the consequence that the path of the switch through spacetime is discontinuous at the final boundary of the hour. It is not implausible to suppose that the structure of spacetime guarantees (as the theory of relativity says that it does) that nothing can move faster than some fixed velocity (such as the velocity of light).
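
  The worry can be made quantitative. Suppose, for illustration, that the switch must travel some fixed distance d on every flip, and that the n-th flip must be completed within its sub-interval of length T/2^n (with T the switching hour). The required average speed then grows without bound, so it eventually exceeds any fixed maximum, including the speed of light:

```latex
\[
  v_{n} \;\geq\; \frac{d}{T/2^{\,n}} \;=\; \frac{d}{T}\,2^{\,n}
  \;\longrightarrow\; \infty .
\]
```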

  We can, however, re-describe the lamp scenario in order to avoid such super-luminal velocities. For example, imagine that the switch consists of a single particle. The lamp is On if the particle lies on a certain plane (or with its center on the plane, if the particle has finite volume), and it is Off if the particle is located off the plane (no matter how close it might be). Further, imagine that each successive switching Off involves the particle's moving a smaller distance from the On plane. Since the particle moves a shorter distance in each succeeding sub-interval, its velocity can remain constant throughout.
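
  To see that no increasing speeds are needed here, suppose, again for illustration, that on the n-th excursion the particle travels a total distance of d/2^n (out from the plane and back) within a sub-interval of length T/2^n. Its average speed is then the same at every stage:

```latex
\[
  v_{n} \;=\; \frac{d/2^{\,n}}{T/2^{\,n}} \;=\; \frac{d}{T}
  \qquad \text{for every } n .
\]
```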

  However, if we follow this version of the scenario, there is a definite answer as to the final state of the lamp's switch: it must end up on the plane, which means that the lamp will be On at the end of the switching hour. We can see that this must be the case by supposing that the particle ends up off the plane at the end. If it lies off the plane, then it must lie some finite distance from the plane, say k. However, the particle will stay closer to the plane than k for some final segment of the switching process. Supposing that the particle ends up k units away from the plane requires us to assume that the particle jumps instantaneously through space at the end of the process, which we could plausibly suppose to be impossible.
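
  The reasoning here is, in effect, a limit argument. Let d(t) be the particle's distance from the plane at time t (with the switching hour running from t = 0 to t = 1), and let d_n be its greatest distance from the plane during the n-th stage. Then:

```latex
\[
  d_{n} \longrightarrow 0
  \quad\Longrightarrow\quad
  \lim_{t \to 1^{-}} d(t) \;=\; 0 ,
\]
% so, given Continuity of Motion, d(1) = 0: the particle ends on the plane,
% not at any distance k > 0 from it.
```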

  PNatPhil 3 Continuity of Motion. It is impossible for any material thing to move discontinuously through spacetime.

  Maximum Velocity. There is some maximum velocity above which it is impossible for anything to move.

  Continuity of Motion and Maximum Velocity offer an alternative explanation of the impossibility of the original Thomson lamp scenario, and they guarantee that the second scenario has a unique solution. However, we are still left with an argument from causation for the impossibility of the second scenario. We can conclude that the lamp must end up in the On position, but we cannot find a non-redundant cause of this fact that does not involve some action at a temporal distance. So, we still have some reason for preferring Temporal Finitism's solution to the puzzle.

  FORREST'S SUPER-URN Peter Forrest (1999) has offered another paradox, similar in structure to Thomson's lamp scenario, that supports Temporal Finitism: the Super-Urn task. We are to imagine an infinite sequence of actions, taking shorter and shorter periods of time, all completed within a finite interval, say, an hour. We start with an urn containing a single particle. In the first half hour, we remove a particle from the urn and then move a particle into the urn. We do the same thing in the next quarter hour, and so on. After an hour has passed, we have completed infinitely many particle-replacements. The question is, at the end of the hour, is there a particle in the urn or is the urn empty?

  The odd thing about the urn case is the fact that the answer to this question seems to depend on whether we move the same particle out of and then back into the urn or replace one particle with a new one in each sub-period. If we move the same particle out and in, then it seems clear that the particle must still be in the urn at the end of the period (assuming that the particle cannot jump discontinuously through space). Alternatively, if we move a new particle in during each period, and then out in the next period, then it will follow that the urn will be empty at the end, since all of the particles that were ever in the urn at any time will now be in some discard pile at the end of the hour. Therefore, it seems that the result of the super-task depends entirely on the identities of the particles moved around. However, it seems clear that the result of a physical process can only depend on the kinds of particles and material bodies involved, not on bare facts about the identities of those particles and bodies. Suppose that all of the particles involved in the task are perfectly indistinguishable from one another. Let's suppose that it is possible for many particles of this kind to occupy exactly the same place at the same time (as, for example, photons can). We could then describe two versions of the Super-Urn task that are qualitatively identical, differing only in the identity or distinctness of the particles involved. This, however, requires that mere differences in identity with no accompanying qualitative difference can make a difference to the result of a physical process. But this is implausible; Impotence of Identity seems true:
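
  A small sketch (ours, not Forrest's) makes the dependence on identity vivid. We can only simulate finitely many stages, but the pattern is clear: in the same-particle version the urn always contains particle 0, while in the new-particle version the particle added at one stage is discarded at the next, so every particle that was ever in the urn ends up in the discard pile. The function names are illustrative.

```python
# Two versions of the Super-Urn task, tracked for finitely many stages.
# Stage n occupies an interval of length (1/2)**n hours; the comments note
# the intended limiting behaviour after infinitely many stages.

def same_particle_version(n_stages: int) -> set:
    """Particle 0 is removed and then put back at every stage."""
    urn = {0}
    for _ in range(n_stages):
        urn.discard(0)   # take particle 0 out ...
        urn.add(0)       # ... and put the very same particle back in
    return urn           # always {0}; in the limit, particle 0 is still there

def new_particle_version(n_stages: int):
    """At each stage the resident particle is discarded and a new one is put in."""
    urn, discard_pile = {0}, set()
    for n in range(1, n_stages + 1):
        discard_pile |= urn   # remove whatever is currently in the urn ...
        urn = {n}             # ... and insert brand-new particle n
    return urn, discard_pile  # every particle ever in the urn is eventually discarded

print(same_particle_version(10))    # {0}
print(new_particle_version(10)[0])  # {10}: the latest arrival, itself removed next
```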

  Impotence of Identity. Two qualitatively identical processes (processes indistinguishable except with respect to facts about the respective identity or distinctness of the particles involved) must have exactly the same result.

  If we suppose Impotence of Identity, then we must conclude that the Super-Urn task is impossible.

  Here's a more precise description of the Forrest super-task, using an infinite collection of qualitatively indistinguishable particles, any two of which can occupy exactly the same place at the same time, like photons or other bosons. The urn is now a plane in space. To begin with, the plane is occupied by particle 0. The other particles are arranged in space in the following way: particle 1 is one meter away, particle 2 is one-half meter away, and so on. In the first version of the super-task, particle 0 moves off the plane, coincides in position with particle 1 for a moment, and then returns to the plane. In the next stage, it moves off the plane and coincides with particle 2 for a moment, and then returns to the plane. At the end of this version of the task, particle 0 ends its journey on the plane.
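
  In this first version the excursion distances shrink just as in the rescaled lamp case: particle n sits 1/2^(n-1) meters from the plane, so particle 0's n-th excursion reaches only that far, and the distances tend to zero:

```latex
\[
  \text{distance of particle } n \text{ from the plane} \;=\; \frac{1}{2^{\,n-1}} \text{ m},
  \qquad \frac{1}{2^{\,n-1}} \longrightarrow 0 ,
\]
% so, given Continuity of Motion, particle 0's limiting position, and hence its
% final position, is on the plane.
```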

 
