The Atlas of Reality


by Robert C. Koons and Timothy Pickavance


  Thus, we seem to have two ways of thinking about causation. We might think of it as a real relation, the relation of causal connection, between things or events, or we might think of it as a logical relation, the relation of causal explanation, among truths. For metaphysicians, the crucial question is whether causal connection or causal explanation is more fundamental. We thus have the difference between Causal Connectionism and Causal Explanationism:

  27.1T Causal Connectionism. Causation is fundamentally a relation (causal connection) between things (and not between truths).

  27.1A Causal Explanationism. Causation is fundamentally a relation (causal explanation) between truths.

  To clarify, Causal Explanationists need not deny the existence of events, even as fundamental entities. For example, Causal Explanationists might take the fundamental relation to be an explanation relation between propositions of the form ‘event c occurred’ and ‘event e occurred’. In contrast, Connectionists must suppose that there is a fundamental relation between the events themselves. When that relation holds between events c and e, then it will follow that the fact that c occurred provides a veridical causal explanation of the fact that e occurred.

  27.1 Causal Explanationism

  Causal Explanationists take causal explanation to be metaphysically fundamental. For Explanationists, statements that seem to attribute causation as a relation between things, like events, are merely a different way of expressing the existence of a causal explanation between two truths. Consider again (1) and (2):

  (1) Mary's kicking the ball caused the ball to enter the goal.

  (2) The ball entered the goal because Mary kicked the ball.

  For Explanationists, verbal noun phrases like ‘Mary's kicking the ball’ or ‘the ball's entering the goal’ do not really refer to things. They are merely ways of expressing the truths that are most properly expressed by the complete sentences ‘Mary kicked the ball’ and ‘the ball entered the goal’. Hence, Explanationists have a prima facie advantage in respect of Ockham's Razor (PMeth 1.4): Explanationists' ontology is leaner, both quantitatively and qualitatively, than Connectionists'. Connectionists believe in many instances of a fundamentally new kind of entity, the sort of entity that can stand in a primitive causal relation to other entities.

  There are three popular versions of Causal Explanationism, distinguished by their respective ways of explicating the idea that one truth can causally explain another:

  27.1A.1 Nomological-Deductive Theory of Causal Explanation. The fact that truth p causally explains truth q consists in the fact that q can be deduced from p, together with truths about the laws of nature and relevant background conditions.

  27.1A.2 Counterfactual Conditional Theory of Causal Explanation. If p and q are truths of the appropriate kind (truths about the occurrence of changes or becomings of some kind), and the truth that p causally explains the truth that q, then that causal-explanatory fact consists in the fact that q would not have been true if p had not been true (i.e., that (Not-p []-> Not-q), using the Stalnaker-Lewis subjunctive conditional, ‘[]->’).

  27.1A.3 Probability-Raising Theory of Causal Explanation. The fact that truth p causally explains truth q consists in the fact that there is an appropriate body of background truths K(p) such that the probability of q conditional on the conjunction of p and K(p) is greater than the conditional probability of q on K(p) alone.

  If the fundamental laws of causation were probabilistic (at least in some cases) rather than deterministic, then the Nomological-Deductive Theory would be inadequate, since probabilistic laws would never entail the truth of any specific truth about the consequences of any initial set-up. They would only entail that certain consequences are more likely than others. Many philosophers and physicists (but not all)1 believe that the fundamental laws of quantum mechanics are probabilistic: the laws make it likely that a certain kind of atom will decay within a certain period, but they never entail that it certainly will decay. Other philosophers, including Anscombe (1975) and Cartwright (1983), think that even if the fundamental laws of physics are not probabilistic, it is still never the case that we can use them to deduce with certainty what would happen in any actual or hypothetical case, because the application of the laws to concrete cases involves the introduction of exceptions, hedges, and qualifications.

  Phil Dowe (2000) argued that a certain “decay case” is prima facie problematic for the Probability-Raising and Counterfactual Theories. The decay case involves five events (see Figure 27.1). Event A represents the existence of an atom of type 1, which must decay either into an atom of type 3 (event C) or an atom of type 2 (event B), with probabilities 1/4 and 3/4, respectively. An atom of type 2 decays immediately with probability one into an atom of type 5 (event E), while the atom of type 3 may decay either into an atom of type 5 (with probability 3/4) or into one of type 4 (with probability 1/4). The actual sequence of events is A-C-E. Intuitively, we want to say that C is a cause of E, since C consists of the production of an atom of type 3 whose actual decay into an atom of type 5 is (in actuality) E. However, the occurrence of C actually lowers the probability of E from 1 to 3/4. The Counterfactual Theory also seems to fail in this case, since it does not seem to be true that if C had not occurred, E would not have occurred. If C hadn't occurred, a type 2 atom would have been created instead, which would have decayed into a type 5 atom, and E is simply the production of an atom of type 5.

  Figure 27.1 The Dowe Decay Case
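
  The arithmetic behind the decay case can be made explicit. The following is a minimal sketch in Python, using only the transition probabilities given above; the variable names are illustrative.

```python
# Transition probabilities in Dowe's decay case:
#   the type-1 atom (A) decays into a type-3 atom (C) with probability 1/4,
#   or into a type-2 atom (B) with probability 3/4;
#   B decays into a type-5 atom (E) with probability 1;
#   C decays into a type-5 atom (E) with probability 3/4.

P_C = 1 / 4              # probability of C, given A
P_B = 3 / 4              # probability of B, given A
P_E_given_B = 1.0
P_E_given_C = 3 / 4

# Probability of E once A has occurred, marginalizing over B and C:
P_E = P_B * P_E_given_B + P_C * P_E_given_C
print(P_E)                               # 0.9375 (i.e., 15/16)

# Naive probability-raising test: compare P(E | C) with P(E | not-C).
# Given A, not-C just amounts to B, so P(E | not-C) = P(E | B) = 1.
print(P_E_given_C > P_E_given_B)         # False: C lowers the probability of E
                                         # from 1 to 3/4, yet C actually caused E.
```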

  There is room, however, for Causal Explanationists to revise the two theories in such a way that they deliver the right answer in this case. In the case of the Counterfactual Theory, we could say (as David Lewis (1973a) recommends) that if C had not occurred, then neither would have B, and so E would not have resulted. The chain of events would simply have ended with A. Lewis recommends that when we evaluate a counterfactual conditional, we introduce a local “miracle” into the course of events, the smallest miracle needed to make the antecedent true. We should ignore what the laws of nature might say about what would have happened in place of the antecedent. The laws in Dowe's decay case are supposed to entail that either C or B succeeds event A. However, when evaluating the conditional ‘If C had not occurred, E would not have occurred’, we should ignore this law and consider what would have happened had neither C nor B occurred, a world in which the atom of type 1 decays and leaves (miraculously) no decay product at all.

  Similarly, when applying the Probability-Raising Theory, we could ask what was the conditional probability of E given the absence of C, while holding the absence of B fixed. We should include the non-occurrence of B in the set of background truths relative to which we evaluate the posterior probability of E given the non-occurrence of C. Presumably, the probability of E, given the non-occurrence of both B and C, is zero or close to zero. Hence, the occurrence of C does raise the probability of E, after all.
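
  Continuing the sketch above, the revised test conditions on the non-occurrence of B as well, and this reverses the verdict; again, the probabilities are just those of the case as described.

```python
# Revised probability-raising test: hold the non-occurrence of B fixed.
P_E_given_C_and_notB = 3 / 4     # with B excluded, C is the only route to E
P_E_given_notC_and_notB = 0.0    # with neither B nor C, no type-5 atom can result

print(P_E_given_C_and_notB > P_E_given_notC_and_notB)
# True: relative to this background, C does raise the probability of E.
```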

  Both the Counterfactual Theory and the Probability-Raising theory thus must provide some principled way of identifying those facts that should be held fixed when evaluating a counterfactual or the relevant conditional probabilities. One popular answer (Eells 1991, Kvart 2004) is to hold fixed all the propositions that are made true by something earlier in time than whatever makes true the proposed cause. However, such an answer ensures that it is metaphysically impossible for causes to be later in time than their effects, and it forces us to assume that causal directionality derives from temporal directionality, and not vice versa.

  27.1.1 Objections to Causal Explanationism

  There are two major objections to all three versions of Causal Explanationism: the problem of causal linkage and the problem of the direction or asymmetry of causation.

  27.1.1.1 Objection 1: The Problem of Causal Linkage.

  The problem of causal linkage charges that Causal Explanationism provides an inadequate account of the linkage between causes and their effects. This lack of connection is most evident in cases of redundant causation. Redundant causation comes in two forms: symmetric overdetermination and preemption.

  1A. THE CASE OF SYMMETRICAL CAUSAL OVERDETERMINATION. Causation is redundant when several potential causes exist, each of which could be sufficient by itself and independently of the other causes to produce some specific effect. This kind of redundancy can occur in both deterministic and probabilistic settings.

  Since we have already been discussing deterministic laws and causation, a definition of determinism is overdue.

  Def D27.1 Actually Deterministic Laws. A set S of causal laws is actually deterministic if and only if every member of S is true, and every true proposition describing the occurrence of a localized, instantaneous event E is logically entailed by the set of true propositions which includes S as a subset and which includes each true proposition specifying a condition that is causally prior to E.

  Def D27.2 Essentially Deterministic Laws. A set S of causal laws is essentially deterministic if and only if S is actually deterministic in every world w at which all of the members of S are true.

  Def D27.3 Deterministic Worlds. A world w is deterministic if and only if the set S of causal laws true in w is essentially deterministic.

  Using these definitions, we can distinguish deterministic redundancy from probabilistic redundancy:

  Def D27.4 Deterministic Redundancy. A truth p has deterministically redundant causes if and only if there are two disjoint sets of truths C1 and C2; the truths in C1 and C2 are causally prior to p; no truth in C1 is causally prior to any truth in C2 or vice versa; and there is a set of actual causal laws S such that the union of S and C1 entails p, and the union of S and C2 does so as well.

  A case of causal redundancy is symmetric if and only if there is no relevant difference between the two independent causes of the effect.

  Def D27.5 Probabilistic Symmetry. A truth p has probabilistically symmetrical causes if and only if (i) there are two disjoint sets of truths C1 and C2, (ii) the truths in C1 and C2 are causally prior to p, (iii) no truth in C1 is causally prior to any truth in C2 or vice versa, (iv) there is a set of actual causal laws S such that the union of S and C1 raises the objective probability of p (relative to background conditions), (v) the union of S and C2 raises the probability of p to exactly the same degree, and (vi) C1 and C2 are probabilistically independent of one another.

  A classic example of symmetric deterministic redundancy is a symmetric firing squad, consisting of two infallible marksmen. Each firing is sufficient, by itself and independently of the other, to cause the victim's death. We are inclined to say either that both of the true propositions about the shootings are causal explanations of the death or that the conjunction of the two truths is a single, partially redundant explanation.

  There are also cases of probabilistic symmetry. For example, take the case of two neutrons striking the same nucleus at the same time, each with a chancy propensity to initiate the nucleus's decay. If an event of decay actually occurs, the right answer seems to be that one or the other or both may be the true causal explanation. Suppose that each potential cause had a 50% chance of causing the decay. Intuitively, there is a 75% chance that the nucleus will decay when struck by both causes simultaneously: a 25% chance that it will decay as a result of the action of the first neutron alone, a 25% chance that it will decay as a result of the action of the second neutron alone, and a 25% chance that it will decay as a result of the simultaneous action of both neutrons (multiple, simultaneous causation of the effect). However, Causal Explanationists must say that there is a 75% chance that it will decay as a result of the action of both causes and a 0% chance that it will decay as a result of one but not the other. This is a counter-intuitive result.
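
  The figures in the two-neutron case follow directly from the independence of the two propensities. A minimal sketch, assuming (as in the example) that each neutron has an independent 50% chance of initiating the decay:

```python
p1 = 0.5   # chance the first neutron initiates the decay
p2 = 0.5   # chance the second neutron initiates the decay, independently of the first

only_first  = p1 * (1 - p2)        # 0.25: decay due to the first neutron alone
only_second = (1 - p1) * p2        # 0.25: decay due to the second neutron alone
both        = p1 * p2              # 0.25: simultaneous causation by both neutrons
neither     = (1 - p1) * (1 - p2)  # 0.25: no decay at all

print(only_first + only_second + both)   # 0.75: total chance that the nucleus decays
```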

  1B. PREEMPTION, ESPECIALLY LATE PREEMPTION AND TRUMPING.

  Preemption involves asymmetric redundancy. In preemption cases, there are two potential causes, but it is clear intuitively that one of the potential causes has preempted the other, making it the actual cause and the other a merely potential cause. To return to the firing squad case, we can turn a case of symmetric redundancy into preemption simply by having one marksman shoot a split second before the other, in such a way that the first marksman's bullet has already killed the victim before the second marksman's bullet can arrive. A simple example involves two bowling balls rolling toward the pins, with the first ball knocking down the pins before the arrival of the second. In such cases, it seems clear that one potential cause has preempted the other. Even though the preempted potential cause still entails (in the case of deterministic causation) or raises the probability of the occurrence of the effect, it in fact plays no causal role.

  Preemption cases pose two dangers for any account. First, there is a danger of attributing causality to the merely potential, preempted cause. Second, there is a danger of denying causality to the actual, preempting cause. Any Explanationist theory is liable to one or both of these dangers, since such theories provide no way of tying some particular effect to one rather than the other potential cause. Causal Connectionists, in contrast, can appeal to a richer account of the cases, one in which some kind of fundamental causal tie or connection breaks the logical or probabilistic symmetry.

  Cases of preemption provide ready counterexamples to the Nomological-Deductive Theory. In the case of the staggered firing squad, the shooting of the delayed marksman satisfies the Nomological-Deductive Theory's conditions: the truth of the proposition that the second marksman fired is, together with the laws of nature and other background conditions, sufficient to entail the proposition that the victim is subsequently dead, despite the fact that this shooting has clearly been preempted from causing the death.

  The same case poses a problem for the Counterfactual Conditional Theory as well. Here the problem is not that the second marksman's shooting counts as a cause, but that the first marksman's shooting does not. It is not the case that if the first marksman had not shot, the victim would not have died. The victim would have died whether or not the first marksman had shot.

  It is also relatively easy to create problem cases for the Probability-Raising Theory (like Eells's 1991 account). To do so, we must find a preempting cause that lowers the probability of the effect given the presence of the preempted cause. Douglas Ehring (1997) uses the example of a device that can deliver one of two drugs. It is very likely that B will be delivered, and the chances of survival conditional on B's delivery are very high. It is possible, but unlikely, that A will be delivered instead, preempting the delivery of B. The chances of survival conditional on A's delivery are good, but significantly worse than on B's delivery. In the actual world, A is delivered and the patient survives. As we track the probability of survival, we find that it goes down when drug A is delivered, despite the fact that A is clearly the cause of survival.
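
  To see how the probability of survival can drop at the very moment the actual cause occurs, here is a minimal sketch with purely illustrative numbers; Ehring's case fixes only their qualitative ordering (B likely and very effective, A unlikely and somewhat less effective).

```python
# Illustrative numbers only; the case fixes only their qualitative ordering.
P_B = 0.9                  # drug B is very likely to be delivered
P_A = 0.1                  # drug A preempts B only rarely
P_survive_given_B = 0.95   # survival on B is very likely
P_survive_given_A = 0.70   # survival on A is good, but worse than on B

# Probability of survival before it is settled which drug is delivered:
P_survive_prior = P_B * P_survive_given_B + P_A * P_survive_given_A
print(P_survive_prior)                        # 0.925

# Once A is actually delivered, the probability of survival drops to 0.70,
# even though A is what actually causes the patient to survive.
print(P_survive_given_A < P_survive_prior)    # True
```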

  DAVID LEWIS'S SOLUTION: CAUSATION AS A CHAIN OF CAUSAL DEPENDENCIES David Lewis (1973a) defines causation as the transitive closure, the ancestral, of the causal dependency relation. In the context of defending Causal Explanationism, Lewis's idea could be adapted in this way. First, we define direct causal explanation in terms of the Nomological-Deductive, Counterfactual Conditional, or Probability-Raising Theory. Next, we define causal explanation as the ancestral of direct causal explanation. In other words:

  Def D27.6 Causal Explanation as the Ancestral of Direct Causal Explanation. Truth p causally explains truth q if and only if either: (i) p directly causally explains q, or (ii) there are truths r1, r2, …, rn such that p directly causally explains r1, each ri directly causally explains ri+1, and rn directly causally explains q.

  This Ludovician account has the advantage of providing the right answer in many cases of preemption, namely, in all those cases involving early preemption, in which the merely potential cause is preempted at a stage earlier than the effect. For example, suppose there are two potential causes, c and c′, of some event e. If not interrupted, c′ would cause intermediate event i′, which would in turn cause e. Similarly, there is a potential c-i-e chain. Suppose both c and c′ occur, but c prevents the occurrence of i′. In this case, e is causally dependent on i (since i′ never occurred), and i is causally dependent on c, so c counts as a cause of e, even though if c hadn't occurred, c′ would have caused e.
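
  Definition D27.6 is, in effect, the transitive closure of the direct-explanation relation, and the verdict in the early preemption case can be read off a simple reachability check. Here is a minimal sketch in Python; the edge set is an assumption standing in for whatever verdicts one's preferred theory of direct explanation delivers about the actual chain c-i-e.

```python
def ancestral(direct):
    """Return the transitive closure of a 'directly explains' relation,
    given as a dict mapping each truth to the truths it directly explains."""
    def explains(p, q, seen=frozenset()):
        if p in seen:
            return False                      # guard against cycles
        for r in direct.get(p, []):
            if r == q or explains(r, q, seen | {p}):
                return True
        return False
    return explains

# Early preemption: c actually produces the intermediate event i, which produces e.
# The backup chain via c' is cut off, since i' never occurs, so c' contributes
# no links among the actual truths.
direct = {'c': ['i'], 'i': ['e'], "c'": []}

explains = ancestral(direct)
print(explains('c', 'e'))     # True: c counts as a cause of e via the chain c-i-e
print(explains("c'", 'e'))    # False: the preempted cause is not linked to e
```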

  A similar story can be told in terms of probabilistic causation. Let's suppose that the definition of ‘A causes B’ requires a chain of intermediate events I1, I2, …, In, such that A raises the probability of I1, each Ii raises the probability of Ii+1, and In raises the probability of B.

  Obviously, this solution depends on the interposition of at least one pair of intermediate events (i.e., events i or i′). However, we can imagine cases of late preemption, cases where it is the effect e itself that preempts the alternate causal chain, without the failure of the realization of an intermediate event (like i′ above). We could also imagine cases involving discrete time, in which the effect e occurs in the very next moment following the occurrence of the preempting cause, with no possibility of the occurrence of any intermediate event.

  EARLY VS. LATE PREEMPTION Let's look at a pair of examples that illustrate the difference between early and late preemption:

  Early preemption: Ball A knocks down the pins before ball B begins to be rolled. Because A has caused the effect, the other causal process is cut short (i.e., the second bowler never completes his bowling of ball B).

  Late preemption: In this case, there are no intervening states. Ball A preempts the causal efficacy of ball B simply by knocking down the pins first, not by changing any of the preconditions of B's action (Hall 2004).

  When dealing with probabilistic causation, we should consider cases in which the preempting cause is simply the negation of the preempted cause. Suppose, in a variation on Dowe's decay case, atom 1 decays into atom 2 (event B), which in turn decays into atom 3 (event C). If atom 1 had not decayed into atom 2, the probability that it would have decayed directly into atom 3 would have been 75%. Let's suppose that the probability that atom 2 would decay into atom 3 was only 50%. Thus, event B actually lowers the probability of event C in the circumstances, and yet B could be a cause of C. Let's suppose that event B is intimately or intrinsically involved in the occurrence of C. The occurrence of B is not simply some partial disabling of the original C-producing mechanism but by itself constitutes a C-producing mechanism (let's say, of a significantly different kind from the direct A-to-C mechanism, where A, as in the original case, is the presence of atom 1). C comes about in a different way, perhaps in an entirely different way, depending on whether it is due to the A-involving or B-involving mechanism. B had its own propensity to produce C, which was in fact exercised in the actual world. It seems clear that in such a case B is indeed a genuine cause of C, despite lowering C's chances of occurring.
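
  The probability-lowering in this variation can be checked directly; a minimal sketch using the figures just given:

```python
P_C_given_notB = 0.75   # had atom 1 not decayed into atom 2, it would have
                        # decayed directly into atom 3 with probability 3/4
P_C_given_B = 0.50      # via atom 2, the chance of reaching atom 3 is only 1/2

print(P_C_given_B < P_C_given_notB)   # True: B lowers the probability of C,
                                      # yet B's own C-producing propensity is
                                      # what is actually exercised.
```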

 
