Essays on Deleuze


by Daniel Smith


  In contrast to Lautréamont's song that rises up around the paranoiac–Oedipal–narcissistic pole [of mathematics]—“O severe mathematics … Arithmetic! Algebra! Geometry! Imposing Trinity! Luminous triangle!”—there is another song: O schizophrenic mathematics, uncontrollable and mad ….16

  It is this other mathematics—problematics, as opposed to axiomatics as a “specifically scientific Oedipus”—that Deleuze attempts to uncover and formalize in his work. The obstacles to such a project, however, are evident. The theory of extensional multiplicities (Cantor's set theory) and its rigorous axiomatization (Zermelo-Fraenkel et al.) are among the great achievements of modern mathematics, and in Being and Event Badiou was able to appropriate this work for his philosophical purposes. For Deleuze, the task was quite different, since he himself had to construct a hitherto non-existent (philosophical) formalization of differential or virtual multiplicities which are, by his own account, selected against by “royal” mathematics itself. In this regard, Deleuze's relation to the history of mathematics is similar to his relation to the history of philosophy: even in canonical figures there is something that “escapes” the official histories of mathematics. At one point, he even provides a list of “problematic” figures from the history of science and mathematics:

  Democritus, Menaechmus, Archimedes, Vauban, Desargues, Bernoulli, Monge, Carnot, Poncelet, Perronet, etc.: in each case a monograph would be necessary to take into account the special situation of these scientists whom State science used only after restraining or disciplining them, after repressing their social or political conceptions.17

  Since Badiou has largely neglected Deleuze's writings on mathematics, in what follows I would first like to outline the nature of the general contrast Deleuze establishes between problematics and axiomatics, and then briefly identify the mathematical origins of Deleuze's notion of “multiplicities.” With these resources in hand, we will then return to Badiou's specific critiques of Deleuze, partly to show their inherent limitations, but also to identify what I take to be the more relevant points of contrast between their respective philosophical positions.

  PROBLEMATICS AND AXIOMATICS

  Let me turn first to the problematic–axiomatic distinction. Although Deleuze formulates this distinction in his own manner, it in fact reflects a fairly familiar tension within the history of mathematics, which we must be content to illustrate hastily by means of three historical examples.

  1. The first example comes from the Greeks. Proclus, in his Commentary on the First Book of Euclid's Elements, had already formulated a distinction, within Greek geometry, between problems and theorems.18 Whereas theorems concern the demonstration, from axioms or postulates, of the inherent properties belonging to a figure, problems concern the construction of figures using a straightedge and compass.19 In turn, theorematics and problematics each involve two different conceptions of “deduction”: in theorematics, a deduction moves from axioms to the theorems that are derived from them, whereas in problematics a deduction moves from the problem to the ideal accidents and events that condition the problem and form the cases that resolve it. “The event by itself,” writes Deleuze, “is problematic and problematizing” (LS 54). For example, in the theory of conic sections (Apollonius), the ellipse, hyperbola, parabola, straight lines, and point are all “cases” of the projection of a circle on to secant planes in relation to the apex of a cone. Whereas in theorematics a figure is defined statically, in Platonic fashion, in terms of its essence and its derived properties, in problematics a figure is defined dynamically by its capacity to be affected—that is, by the ideal events that befall it: sectioning, cutting, projecting, folding, bending, stretching, reflecting, rotating. As a theorematic figure, a circle is an organic and fixed essence, but the morphological variations of the circle (figures that are “lens-shaped,” “umbelliform,” “indented,” etc.) form problematic figures that are, in Husserl's words, “vague yet rigorous,” “essentially and not accidentally inexact.”20 In Greece, problematics found its classical expression in Archimedean geometry (especially the Archimedes of “On the Method”), an “operative” geometry in which the line was defined less as an essence than as a continuous process of “alignment,” the circle as a continuous process of “rounding,” the square as the process of “quadrature,” and so on.
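  The Apollonian point can be restated in modern coordinates (a sketch added for illustration, not part of Smith's text): the type of conic is determined entirely by the tilt of the cutting plane, so that every conic is a “case” of one and the same event, the sectioning of the cone.

```latex
% A double cone with half-angle 45 degrees (a simplifying assumption),
% cut by a plane tilted at angle \theta:
x^2 + y^2 = z^2, \qquad z = x\tan\theta + c
% Substituting gives  x^2(1 - \tan^2\theta) - 2cx\tan\theta + y^2 = c^2 :
%   \theta = 0                         : a circle (the "original" figure)
%   0 < \theta < 45^{\circ}            : an ellipse
%   \theta = 45^{\circ}                : a parabola (the x^2 term vanishes)
%   45^{\circ} < \theta \le 90^{\circ} : a hyperbola
```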

  Proclus, however, had already pointed to (and defended) the relative triumph, in Greek geometry, of the theorematic over the problematic. The reason: to the Greeks, “problems concern only events and affects which show evidence of a deterioration or a projection of essences in the imagination,” and theorematics thus could present itself as a necessary “rectification” of thought.21 This rectification must be understood, in a literal sense, as a triumph of the rectilinear over the curvilinear.22 The definition of the straight line as “the shortest distance between two points,” for example, is understood dynamically in Archimedean geometry as a way of defining the length of a curve in pre-differential calculus, such that the straight line is seen as a “case” of the curve; in Euclidean geometry, by contrast, the essence of the line is understood statically, in terms that eliminate any reference to the curvilinear (“a line which lies evenly with the points on itself”).23 In the “minor” geometry of problematics, figures are inseparable from their inherent variations, affections, and events; the aim of “major” theorematics, by contrast, is “to uproot variables from their state of continuous variation in order to extract from them fixed points and constant relations” (TP 408–9), and thereby to set geometry on the “royal” road of theorematic deduction and proof. Badiou, for his part, explicitly aligns his ontology with the position of theorematics: “the pure multiple, the generic form of being, never welcomes the event in itself as its component.”24
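  The “pre-differential” definition of curve length mentioned above can be written explicitly (modern notation, added for illustration): the length of a curve is the limit of the lengths of inscribed polygonal chords, and the straight line is then the degenerate “case” of the curve in which the approximation is already exact.

```latex
% Arc length of a curve \gamma : [a,b] \to \mathbb{R}^2 as the supremum of
% chord sums over partitions a = t_0 < t_1 < \dots < t_n = b:
L(\gamma) \;=\; \sup_{\text{partitions}} \sum_{i=0}^{n-1}
  \bigl\| \gamma(t_{i+1}) - \gamma(t_i) \bigr\|
% For a straight segment every chord sum already equals
% \|\gamma(b) - \gamma(a)\| , so the line appears as the case of the
% curve that realizes the shortest distance between its endpoints.
```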

  2. By the seventeenth century, the tension between problems and theorems, which was internal to geometry, had shifted to a more general tension between geometry itself, on the one hand, and algebra and arithmetic on the other. Desargues's projective geometry, for instance, which was a qualitative and “minor” geometry centered on problems-events (as developed, most famously, in the Draft Project of an Attempt to Treat the Events of the Encounters of a Cone and a Plane, which Boyer aptly describes as “one of the most unsuccessful great books ever produced”), was quickly opposed (and temporarily forgotten) in favor of the analytic geometry of Fermat and Descartes—a quantitative and “major” geometry that translated geometric relations into arithmetic relations that could be expressed in algebraic equations (Cartesian coordinates).25 “Royal” science, in other words, now entailed an arithmetization of geometry itself. “There is a correlation,” Deleuze writes, “between geometry and arithmetic, geometry and algebra which is constitutive of major science.”26 Descartes was dismayed when he heard that Desargues's Draft Project treated conic sections without the use of algebra, since to him “it did not seem possible to say anything about conics that could not more easily be expressed with algebra than without.”27 As a result, Desargues's methods were repudiated as dangerous and unsound, and his practices of perspective banned.

  It would be two centuries before projective geometry was revived in the work of Monge, the inventor of descriptive geometry, and Poncelet, who formulated the “principle of continuity,” which led to developments in analysis situs and topology. Topology (so-called “rubber-sheet geometry”) concerns the properties of geometric figures that remain invariant under transformations such as bending or stretching; under such transformations, figures that are theorematically distinct in Euclidean geometry, such as a triangle, a square, and a circle, are seen as one and the same “homeomorphic” figure, since they can be continuously transformed into one another. This entailed an extension of geometric “intuitions” far beyond the limits of empirical or sensible perception (à la Kant). “With Monge, and especially Poncelet,” writes Deleuze, commenting on Léon Brunschvicg's work, “the limits of sensible, or even spatial, representation (striated space) are indeed surpassed, but less in the direction of a symbolic power of abstraction [i.e., theorematics] than toward a trans-spatial imagination, or a trans-intuition (continuity).”28 In the twentieth century, computers have extended the reach of this “trans-intuition” even further, provoking renewed interest in qualitative geometry, and allowing mathematicians to “see” hitherto unimagined objects such as the Mandelbrot set and the Lorenz attractor, which have become the poster children of the new sciences of chaos and complexity. “Seeing, seeing what happens,” continues Deleuze, “has always had an essential importance, greater than demonstrations, even in pure mathematics, which can be called visual, figural, independently of its applications: many mathematicians nowadays think that a computer is more precious than an axiomatic” (WP 128). But already in the early nineteenth century, there was a renewed attempt to turn projective geometry into a mere practical dependency on analysis, or so-called higher geometry (the debate between Poncelet and Cauchy).29 The development of the theory of functions would eventually eliminate the appeal to the principle of continuity, substituting for the geometrical idea of smoothness of variation the arithmetic idea of “mapping” or a one-to-one correspondence of points (point-set topology).
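  The homeomorphism between a square and a circle can be made concrete (a standard textbook example, added for illustration): radial projection continuously deforms the one boundary into the other.

```latex
% Radial projection h from the square boundary
% S = \{ x \in \mathbb{R}^2 : \|x\|_\infty = 1 \}
% onto the unit circle C = \{ x : \|x\|_2 = 1 \}:
h(x) = \frac{x}{\|x\|_2}, \qquad h^{-1}(y) = \frac{y}{\|y\|_\infty}
% Both maps are continuous and mutually inverse, so S and C are
% homeomorphic: "one and the same" figure under continuous
% transformation, though theorematically distinct in Euclidean geometry.
```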

  3. This double movement of major science toward theorematization and arithmetization would reach its full flowering, finally, in the late nineteenth century, primarily in response to problems posed by the invention of the calculus. In its origins, the calculus was tied to problematics in a double sense. The first refers to the ontological problems that the calculus confronted: the differential calculus addressed the problematic of tangents (how to determine the tangent lines to a given curve), while the integral calculus addressed the problematic of quadrature (how to determine the area within a given curve). The greatness of Leibniz and Newton was to have recognized the intimate connection between these two problems (the problem of finding areas is the inverse of determining tangents to curves), and to have developed a symbolism to link them together and resolve them. The calculus quickly became the primary mathematical engine of what we call the “scientific revolution.” Yet for two centuries, the calculus, not unlike Archimedean geometry, itself maintained a problematic status in a second sense: it was allotted a para-scientific status, and labeled a “barbaric” or “Gothic” hypothesis, or at best a convenient convention or well-grounded fiction. In its early formulations, the calculus was shot through with dynamic notions such as infinitesimals, fluxions and fluents, thresholds, passages to the limit, continuous variation—all of which presumed a geometrical conception of the continuum: in other words, the idea of a process. For most mathematicians, these were considered to be “metaphysical” ideas that lay beyond the realm of mathematical definition. 
Berkeley famously ridiculed infinitesimals as “the ghosts of departed quantities”; D'Alembert responded by telling his students, Allez en avant, et la foi vous viendra (“Go forward, and faith will come to you”).30 The calculus would not have been invented without these notions, yet they remained problematic, lacking an adequate mathematical ground.
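  The “intimate connection” between the two problems is what is now called the fundamental theorem of calculus (stated here in modern notation, not in Leibniz's or Newton's symbolism): quadrature and tangents are inverse operations.

```latex
% Differentiating the area function F recovers the original curve f:
F(x) = \int_a^x f(t)\, dt
\quad \Longrightarrow \quad
\frac{dF}{dx} = f(x)
% Finding areas (integration) thus inverts the determination of
% tangents (differentiation), and a single symbolism links the two.
```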

  For a long period of time, the enormous success of the calculus in solving physical problems delayed research into its logical foundations. It was not until the end of the nineteenth century that the calculus would receive a “rigorous” foundation through the development of the “limit-concept.” “Rigor” meant that the calculus had to be separated from its problematic origins in geometrical conceptions or “intuitions,” and reconceptualized in purely arithmetic terms (the loaded term “intuition” here having little to do with “empirical” perception, but rather the “ideal” geometrical notion of continuous movement and space).31 This “arithmetization of analysis,” as Félix Klein called it,32 was achieved by Karl Weierstrass, one of Husserl's teachers, in the wake of work done by Cauchy (leading Giulio Giorello to dub Weierstrass and his followers the “ghostbusters”).33 Analysis (the study of infinite processes) was concerned with continuous magnitudes, whereas arithmetic had as its domain the discrete set of numbers. The aim of Weierstrass's “discretization” program was to separate the calculus from the geometry of continuity and base it on the concept of number alone. Geometrical notions were thus reconceptualized in terms of sets of discrete points, which in turn were conceptualized in terms of number: points on a line as individual numbers, points on a plane as ordered pairs of numbers, points in n-dimensional space as n-tuples of numbers. As a result, the concept of the variable was given a static interpretation. Early interpreters had tended to appeal to the geometrical intuition of continuous motion when they said that a variable x “approaches” a limit (for instance, the circle defined as the limit of a polygon). 
Weierstrass's innovation was to reinterpret this variable x arithmetically as simply designating any one of a collection of numerical values (the theory of functions), thereby eliminating any dynamism or “continuous variation” from the notion of continuity, and any interpretation of the operation of differentiation as a process. Weierstrass, writes Deleuze,

  provided what he himself called a “static” interpretation of the differential and infinitesimal calculus, in which there is no longer any fluxion toward a limit, no longer any idea of a threshold, but rather the idea of a system of choice, from the viewpoint of an ordinal interpretation.34

  In Weierstrass's limit-concept, in short, the geometric idea of “approaching a limit” was arithmetized, and replaced by static constraints on discrete numbers alone (the epsilon-delta method). Dedekind took this arithmetization a step further by rigorously defining the continuity of the real numbers in terms of a “cut”: “it is the cut which constitutes … the ideal cause of continuity or the pure element of quantitability” (DR 172). Cantor's set theory, finally, gave a discrete interpretation of the notion of infinity itself, treating infinite sets like finite sets (the power set axiom)—or rather, treating all sets, whether finite or infinite, as mathematical objects (the axiom of infinity).35
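  Both arithmetizations can be stated explicitly (standard modern definitions, added here for reference): Weierstrass's limit eliminates all “approach” in favor of a static condition on numbers, and Dedekind's cut defines a real number as nothing but a partition of the rationals.

```latex
% Weierstrass: \lim_{x \to a} f(x) = L means
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :
\quad 0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon
% -- no motion, no "approaching": only quantified constraints on
% static numerical values.
%
% Dedekind: a real number is a cut (A, B) of \mathbb{Q}, where
% A \cup B = \mathbb{Q}, every element of A is less than every element
% of B, and A has no greatest element. For example, \sqrt{2} is the cut
A = \{\, q \in \mathbb{Q} : q < 0 \ \text{or}\ q^2 < 2 \,\},
\qquad B = \mathbb{Q} \setminus A
```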

  Weierstrass, Dedekind, and Cantor thus form the great triumvirate of the program of discretization and the development of the “arithmetic” continuum (the redefinition of continuity as a function of sets over discrete numbers). In their wake, the basic concepts of the calculus—function, continuity, limit, convergence, infinity, and so on—were progressively “clarified” and “refined,” and ultimately given a set-theoretical foundation.36 The assumptions of Weierstrass's discretization program—that only arithmetic is rigorous, and that geometric notions are unsuitable for secure foundations—are now largely identified with the “orthodox” or “major” view of the history of mathematics as a progression toward ever more “well-founded” positions.37 The program would pass through two further developments. The contradictions generated by set theory brought on a sense of “crisis” in the foundations, which Hilbert's formalist (or formalization) program attempted to repair through axiomatization: that is, by attempting to show that set theory could be derived from a finite set of axioms, which were later codified by Zermelo and Fraenkel (given his theological leanings, even Cantor needed a dose of axiomatic rigor). Gödel and Cohen, finally, in their famous theorems, would eventually expose the internal limits of axiomatization (incompleteness, undecidability), demonstrating, in Badiou's language, that there is a variety of mathematical forms in “infinite excess” over our ability to formalize them consistently.

  This historical sketch, though necessarily brief, none the less provides a basis from which we can pinpoint the differences between the respective projects of Badiou and Deleuze. In identifying ontology with axiomatic set theory, Badiou is adopting the position of “major” mathematics, with its dual programs of “discretization” and “axiomatization.” This contemporary orthodoxy has often been characterized as an “ontological reductionism.” On this view, as Penelope Maddy describes it, “mathematical objects and structures are identified with or instantiated by set theoretic surrogates, and the classical theorems about them proved from the axioms of set theory.”38 Reuben Hersh gives it a more idiomatic and constructivist characterization:

  Starting from the empty set, perform a few operations, like forming the set of all subsets. Before long you have a magnificent structure in which you can embed the real numbers, complex numbers, quaternions, Hilbert spaces, infinite-dimensional differentiable manifolds, and anything else you like.39
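  Hersh's “starting from the empty set” can be made literal with a short sketch (illustrative Python, not part of the essay): the standard von Neumann construction builds surrogates for the natural numbers out of nothing but the empty set and the successor operation n + 1 = n ∪ {n}.

```python
# Von Neumann ordinals: 0 = {}, 1 = {0}, 2 = {0, 1}, n+1 = n U {n}.
# Every natural number is encoded as a pure set -- a set whose only
# ultimate constituent is the empty set.

def von_neumann(n):
    """Return the von Neumann ordinal for n as a nested frozenset."""
    ordinal = frozenset()              # 0 is the empty set
    for _ in range(n):
        ordinal = ordinal | {ordinal}  # successor: n + 1 = n U {n}
    return ordinal

# Each ordinal contains exactly its predecessors, so its size is n,
# and the order relation m < n becomes set membership m in n.
three = von_neumann(3)
print(len(three))                      # the set encoding 3 has 3 members
print(von_neumann(2) in three)         # 2 < 3 as membership
```

From here the “magnificent structure” grows in the same spirit: integers as pairs of naturals, rationals as pairs of integers, reals as Dedekind cuts of rationals, all ultimately sets built over the empty set.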

  Badiou tells us that he made a similar appeal to Deleuze, insisting that “every figure of the type ‘fold,’ ‘interval,’ ‘enlacement,’ ‘serration,’ ‘fractal,’ or even ‘chaos’ has a corresponding schema in a certain family of sets ….”40 Deleuze, for his part, fully recognizes this orthodox position: “Modern mathematics is regarded as based upon the theory of groups or set theory rather than on the differential calculus” (DR 180). None the less, he insists that the fundamental difference in kind between problematics and axiomatics remains, even in contemporary mathematics:

 
