
Enlarging One's Stall or How Did All of These Sets Get in Here?†

Mark Wilson*

*Department of Philosophy, University of Pittsburgh, Pittsburgh, Penn. 15260, U.S.A. [email protected]

Abstract. Following historical developments, this article traces two basic motives for employing sets within a physical setting and discusses whether they truly pose a problem for ‘mathematical naturalism’.

I

Confusing ‘philosophy of mathematics’ with ‘foundations of mathematics’ can occasion significant misunderstanding if it is not recognized that the motives of the two enterprises do not entirely coincide. ‘Foundations of mathematics’ is usually concerned to amalgamate into formal unity various forms of technical construction that appear scattered across mathematical practice, so that their comparative postulational strengths can be accurately gauged. By its very nature, this useful conglomeration may neglect motivational differences that should remain salient when more determinedly ‘philosophical’ concerns are addressed.

This note will suggest (without claiming to prove the thesis) that such counterproductive assimilation partially underlies current concern with the so-called ‘problem of mathematical naturalism’: the alleged problem of explaining why it is proper to ‘posit’ the ‘abstract quantities’ typical of mathematics. As Gideon Rosen reminds us in a useful Stanford Encyclopedia article on ‘Abstract objects’ [Rosen, 2012], worries of this shape were rarely discussed before the twentieth century but became prominent after Quine's focus upon the limited reasons that a stout-hearted ‘naturalist’ might cite in ‘positing’ ontologies of various kinds. Certainly, if one ponders only the raw manner in which the ‘postulation of sets’ is approached within a typical ‘foundations of mathematics’ context (within ZFC, for example), the motives for such maneuvers can appear rather distant from any readily identifiable ‘naturalistic’ concern. This worry has inspired a large range of quixotic attempts to reconstruct orthodox mathematics along acceptable ‘naturalistic’ lines.

Recently a number of philosophers — Penelope Maddy's Second Philosophy [2007] might serve as a distinguished paradigm of this trend — have argued that something has gone dreadfully astray in these reconstructive projects and that the entire problem of ‘mathematical naturalism’ merits radical re-examination. I fully concur but also believe that doing so requires the careful survey of a large number of faulty assumptions about ‘scientific method’ that collectively compose the portrait of ‘postulation within science’ with which Quine and his followers work. The present article attempts to take a modest step in these corrective directions by considering some typical set-theoretic constructions within their original motivational contexts, in which circumstances it appears far less obvious that any clash with ‘naturalism’ lies at hand. In particular, I will distinguish two employments of the notion of ‘set’ that look rather similar from a formal point of view but whose underlying motivations appear notably different in character. The easiest strategy for excavating these differences is to follow history and examine the separate pathways along which set constructions emerged as important during the nineteenth century. Retracing these old trade routes may recover a desirable lost innocence with respect to the varying strategic purposes that sets continue to serve within modern working mathematics. These two themes will emerge as crucial:

(A) the desirability of understanding our computational relationships to the sorts of physical behaviors captured in differential equations (and allied forms of dynamical recipe)

(B) the desirability of abstracting deeper forms of explanatory structure based upon the surface appearances of more familiar mathematical behaviors.

Considered in their own terms, (A) and (B) strike me as concerns that a ‘naturalist’ should straightforwardly embrace. But their individual flavors of naturalistic purpose trace to different sources and can become easily lost if they are wantonly blended together in a standard ‘foundations of mathematics’ stew.

To be sure, these two tasks share a common methodological orientation in that they both demonstrate that a structural investigation can often make better progress if it operates over wider dominions than might be originally apparent. For often the most tractable path connecting points A and B carries one out of the country where one began, through intermediate locations C. This shared theme strongly characterizes much of the progress that mathematics has made since the early nineteenth century. A classic early example can be witnessed in Gauss's employment of certain complex numbers (‘the cyclotomic integers’) to make sense of factorization patterns amongst the regular numbers. In a striking phrase [Gauss, 1910, p. 208], he claimed that such utility showed that the complex numbers should be granted ‘full equality of citizenship’ with the regular numbers.

In an allied spirit, Cauchy discovered that understanding the differential equations of mathematical physics greatly benefits from attention to the complex plane, even if, at first blush, such equations carry no physical import there. For example, the central early technique employed to graph the solutions associated with such equations extracted series expansions from the equations through various forms of formal division. But such series display a notorious capacity to supply really rotten answers at unpredictable moments (e.g., they blow up to infinity or approach correct values very, very slowly). And these ‘bad spots’ often make it very hard, in dealing with an unfamiliar equation, to know when one's calculations are on track and when they have been shunted off into the realms of utter fancy by plowing through a ‘bad spot’ unwisely.

Sometimes the reasons for these miserable behaviors are fairly evident. If we extract a series expansion by formally dividing the formula 1/(1−x²), it is not surprising that the result will become problematic at the points −1 and +1 along the real line because the original formula is infinite there (and hence should not possess a proper summation in any case). But why does the analogous series emerging from the formula 1/(1+x²) likewise break down at the same points, when its parent formula appears to be well-behaved everywhere along the real line? Cauchy recommends that we look out to the complex plane for an explanation. Given that conventional series expansions essentially consist in additions and multiplications, our computations still make sense over the complex field, even though our original differential equations no longer carry obvious physical significance there. Looked at from this point of view, we espy two obvious ‘bad spots’ at i and −i, completely analogous to the −1 and +1 ‘bad spots’ for 1/(1−x²). Accordingly, we should worry whether our expansion for 1/(1+x²) can be trusted anywhere along the ‘circle of convergence’ that runs through +i and −i. But the real-valued points +1 and −1 lie on this bounding circle; hence we have located a useful ‘early warning’ signal of their potentially problematic series behavior by noting the obvious ‘bad points’ on the complex plane.
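To see this ‘early warning’ at work numerically, here is a small illustrative sketch (mine, not the article's; the sample points are arbitrary choices). The expansion of 1/(1+x²) is the alternating series Σₙ (−1)ⁿx²ⁿ, and its partial sums misbehave just outside |x| = 1 even though the parent function remains tame along the entire real line:

```python
# Illustrative sketch: partial sums of 1/(1+x^2) = sum_n (-1)^n x^(2n).
# The hidden poles at +i and -i put the circle of convergence at |x| = 1,
# so the series fails near x = 1 although the function itself is finite.

def partial_sum(x, n_terms):
    """n_terms-term partial sum of the expansion of 1/(1+x^2)."""
    return sum((-1) ** n * x ** (2 * n) for n in range(n_terms))

for x in (0.5, 0.9, 1.1):
    exact = 1.0 / (1.0 + x * x)
    print(f"x={x}: exact={exact:.6f}, 60-term series={partial_sum(x, 60):.3e}")

# Roughly: at x=0.5 the series nails the value, at x=0.9 it still converges
# (though more slowly), and at x=1.1 the partial sums grow without bound.
```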

Moral: the ‘special functions’ that naturally spring from the equations of mathematical physics should be examined over a wider territory than the real line [Batterman, 2006]. Specifically, the topography of where the function's singularities lie is the key to appreciating the algorithmic adjustments we must make as we struggle to compute accurately how, e.g., a Bessel function extends itself along the real line. Accordingly, mapping a function's singularities on the complex plane helps us understand the otherwise mysterious computational failures we encounter when we attempt to compute its values using standard series-expansion techniques. So our earlier presumption that the singular behavior of a differential equation upon the complex plane ‘carries no physical significance’ was too hasty.

Indeed, the descriptive value of such off-the-real-axis singularities (and allied forms of extended domain location) runs much deeper than this, for such unanticipated positions enjoy a capacity to condense salient information about our target physical system, in very pungent (though sometimes mysterious) ways. Perhaps an old analogy from Felix Klein [1893] might help: the main determinants of fluid flow within a bathtub are intuitively fixed by its inlets and outlets, which represent singularities of the water flow within a mathematical treatment. From an informational point of view, the location and strength of these singularities encapsulates the main determinants of the sloshing we witness elsewhere within the tub. And, just as Cauchy realized with his series, this same centrality of informational representation often applies to singularities even when some of the functional ‘flow’ traces to inlets and outlets that lie out on the complex plane (or, more generally, upon a Riemann surface). For example, modern approaches to the diffraction of light utilize the information that lies codified within various ‘critical points’ out on the complex plane. Just because a mathematical expression appears to talk about ‘imaginary points’ does not indicate that it is not encoding physical information of the most salient kind.

Here is the vital moral implicit in such examples, one that appears to be overlooked by many contemporary philosophers of science: the informational contents of mathematical expressions, insofar as they bear upon the physical world, should not be approached as static or a priori discernible semantic qualities. Instead, syntactic items extracted from the most far-flung corners of ‘pure mathematics’ often reveal surprising capacities to store vital physical information in remarkably effective ways, in the general manner of the useful singularities that sit outside the domain of original interest. Often a very probing mathematical analysis is required to understand why these unexpected modes of informational storage work in the way that they do.

In any case, such is the general phenomenon I have in mind under the heading of ‘enlarging our stall’ (think of an unhappy mule trying to increase its living quarters by kicking against the walls). And it often further happens that the extension elements useful to understanding some original behavior D may lie in a different domain D′ connectable to the first only through an infinite sequence of intermediaries (we shall examine some concrete examples soon). Set theory provides the natural language for capturing this sort of situation.

II

An important early example of this infinitary theme emerged in connection with the often frustrating computational behaviors of physical processes modeled by differential equations. In the olden days of Leibniz and Newton, such equations were regarded as meaningful only if established functions were known to which the differential equations would serve, in a conceptually subsidiary manner, as infinitesimal generators of the finite behaviors. From this point of view, a physicist could expect to fit physical nature to mathematical formula only in comparatively rare and favorable circumstances (such as when very symmetrical boundary conditions permit an ‘exact solution’ to her equations). Elsewhere, I have dubbed such an attitude mathematical opportunism:¹ mathematically inclined physicists must scour Nature for those special opportunities to which their models can neatly apply. It was within this framework that d'Alembert famously opined that his own one-dimensional wave equation did not suit most strings encountered in nature.

As historians of these changes have often observed, somewhere in the interval between Euler and Cauchy it began to be assumed that differential equations are (usually) capable of growing functions on their own, even if the results do not correspond to any previously familiar formula [Ferraro, 2008, pp. 263–265]. And this altered conviction stemmed in large part from the faith that the differential equations capture the operative physics of real-world processes correctly, even in cases of arbitrarily chosen boundary and initial conditions. In contrast to ‘opportunist’ appraisals of the restricted scope of applied mathematics, this reformed attitude can be called mathematical optimism.

But with these enlarged pastures, a new problematic emerges, related to the problems with series expansions just canvassed. It is a brute fact that extracting reliable information from differential equations is often very difficult and our attempts to do so can resemble struggles with a truculent child. We can ask our differential equations lots of questions but those formulas ‘won't say nuthin' ’ in reply (or, when they do answer, they prevaricate). Thus there is a wide range of reasoning methods that can be formally applied to any differential equation but that will often produce very poor results (series-expansion techniques are frequently horribly misleading due to their miserably slow convergence). In Descartes' and d'Alembert's day, the collected reasoning techniques of mathematics were usually regarded as reliable, albeit limited in applicational scope. But the new Eulerian tolerance of the infinitary conditions laid down by differential equations forces practical mathematics to work with formal reasoning methods whose soundness is often uncertain. Such widened circumstances forced nineteenth-century mathematicians to begin investigating how their formal reasoning techniques are likely to correlate with the differential-equation behaviors they hope to augur.²

Here is a simple illustration of the kind of study required. Suppose that a bead sliding along a wire obeys a simple position-dependent equation of the form dx/dt = f(x), starting from some initial position p. Our Eulerian faith in the equation presumes that it will carve out a proper trajectory C whose contours may not match any previously known finite formula. But what reasoning techniques should we employ to discover where path C wends? (Eulerian optimism only tells us that it goes somewhere.) Series-expansion methods can help, but the simplest automatic method for extracting approximate data from an equation of this ilk is called Euler's method of finite differences. In this venerable technique, we decompose the passing time into small steps Δt and estimate the new location xᵢ₊₁ that a bead originally located at position xᵢ would reach if it traveled at a constant velocity f(xᵢ) over the intervening interval Δt. By iterating this computational process repeatedly, we draw broken-line trajectory plots as illustrated, where the space between data points is controlled by the size of the time step Δt employed. Of course, this assumption of ‘constant velocity over the interval Δt’ is undeniably unrealistic, but we can hope that, if the time step Δt is chosen small enough, the trajectory plotted will lie within the close vicinity of the true solution C. Well, such is our hope, but how reasonable is it and how small must Δt be chosen before passable results can be obtained? In other words, how closely will our computed paths correlate with the true path C, given that we know very little at the outset about where C will actually wander? This is the kind of correlational question with respect to computation that researchers like Picard and Lipschitz investigated. And the most general conclusion they reached is: without suitable restrictions, Euler-method computations can prove utterly worthless, for the swarm of calculations may never coalesce around the correct answer no matter how small Δt is chosen. Or — to describe a more common form of rotten situation — the computational swarm may hover deceptively around an apparent limiting trajectory C* for all practical choices of Δt and will begin focusing upon the correct trajectory C only at minute choices of Δt scale far beyond the capacities of our most powerful computers. To be sure, Lipschitz et al. were able to propose restrictions on f(x) that can guarantee more trustworthy results, but to this day we are not fully certain in many cases of practical importance that the data we extract from differential equations through numerical methods like Euler's are truly reliable. In short, the sometimes unhappy lesson of our correlational studies is: ‘You Can't Always Compute What You Want’ (even when C exists).
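Since the recipe just described is easy to state in code, here is a minimal sketch of Euler's method (my own illustration; the particular equations and step counts are arbitrary choices, not the article's). The first equation shows the hoped-for ‘squeezing in’; the second, which violates the Lipschitz restriction, shows the method quietly missing a legitimate solution no matter how fine the steps:

```python
import math

# Euler's method of finite differences for dx/dt = f(x): pretend the
# velocity stays constant at f(x_i) across each small step dt, and chain
# the resulting straight segments into a broken-line trajectory.

def euler(f, x0, t_end, n_steps):
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += f(x) * dt          # constant-velocity step
    return x

# A well-behaved case: dx/dt = x with x(0) = 1; the true value at t = 1 is e.
for n in (10, 100, 1000):
    approx = euler(lambda x: x, 1.0, 1.0, n)
    print(f"{n} steps: {approx:.5f} (error {abs(approx - math.e):.5f})")
# The broken-line answers squeeze in on e as the steps shrink.

# A non-Lipschitz troublemaker: dx/dt = sqrt(|x|) with x(0) = 0 admits both
# x(t) = 0 and x(t) = t^2/4 as solutions; Euler reports 0.0 for every step
# size, so refining dt never reveals the second trajectory.
print(euler(lambda x: math.sqrt(abs(x)), 0.0, 1.0, 10**6))  # -> 0.0
```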

Nonetheless, it should not be forgotten that possessing a reliable picture of what can potentially go wrong in a computation in itself provides valuable information even if the correlational portrait does not help us correct the problem³ (we still want our maps to register the unnavigable swamps in a terrain, even if they cannot direct us safely through them). Through such considerations, mathematicians have found themselves in a position where they not only need to devise direct forms of ‘reasoning method’ such as d'Alembert and Descartes expected; they must also develop correlational pictures of when such methods are likely to succeed and fail (if they can; such studies are often very difficult). In particular, they must try to frame an overriding conception of how closely the numbers obtained through concrete numerical algorithms such as Euler's method parallel the values that Nature's own unfolding histories run through (when her processes are governed by differential equations). Such correlational studies, favorable and unfavorable, provide us with a portrait of our computational place in nature: they mark the bounds of how capably we should expect to augur the world's behaviors through available computational methods. And in bleak circumstances there simply will not be algorithms that can perform the tasks we would like.⁴

The upshot is a portrait of endeavor within mathematical physics that is not quite as rosy as giddy pioneers like Galileo anticipated,⁵ but it is not an entirely forlorn vision either. Our computational place in nature appears largely to be mildly transcendental,⁶ in the sense that, with adequate precautions, we can often gain true information about natural processes asymptotically by squeezing in on them through ever-improving computations in the manner of a successful application of Euler's method. And even when we cannot, we have at least managed to frame a decent picture of why our available algorithms fail to meet their mark.

The relevance of these musings to our central concerns is simply that the natural vocabulary for capturing our ‘mildly transcendental circumstances’ is that of set theory, which we use to capture the exact manner in which the webbings of ever-improving broken-line computations surround their target curves C. In many modern applications, the exact nature of this improving correspondence is subtle and needs to be explicated in quite technical, set-based, terms. We also need sets to capture the underpinnings of those ‘look outside your native country’ techniques with which our discussion began.

In these descriptive endeavors, sets do not enter the scene as surrogates for concepts in any clear sense (as they will in the next section). Instead, they are cited in order to capture the ‘mildly transcendental’ relationships in which common mathematical and physical objects stand to human algorithmic capacities (viz., our practicable calculations surround the target trajectories only as an infinite set of nested broken-line plots, arranged according to some appropriate notion of ‘limit’).

In fact, it is common in science (although many philosophers seem unaware of such procedures) to employ set-theoretic reasoning as a means of assuring ourselves of the existence of hidden physical properties for which we lack pre-existent ‘concepts’ in the usual sense of the term. A basic paradigm for this sort of argument can be found in the beautiful ‘hidden property’ story that C.F. Sturm articulated in the 1830s with respect to various non-obvious qualities hidden within the behaviors of suitably symmetric objects such as rectangular and circular plates. We are all familiar with the way in which the vibrations of a violin string are controlled by the properties of its tonal spectrum (how much energy is lodged within its fundamental tone, how much is accorded to its octave, etc.). Sturm was concerned to argue that many forms of symmetrical vibratory systems (e.g., rectangular or circular plates or drum heads) embody similar hidden controlling properties analogous to, but different from, the storage ‘modes’ of violin strings. In other words, he showed that there are certain hidden properties within a symmetric drumhead that can store energy in the same manner as the overtones of a string, but which control its vibratory movements in a somewhat different way (which is why drum heads and garbage-can lids rarely sound ‘musical’).

It should not be taken as obvious that hidden properties of this ilk exist. To this day, we are not entirely certain that less symmetrical plates such as guitar faces possess analogous traits (although instrument designers hope that they do). Sturm's argumentation does not work for such systems. His basic idea was to trap the desired quantities in a web of converging approximations not unlike the ones we examined in the bead case. In rough scheme, here is how he did it.

1. Appealing to the special symmetries of the plate, find various one-dimensional ordinary differential equations (DEs) that factor its movements into repeating one-dimensional patterns.

2. Establish the patterns in which the zeros of these DEs appear (this is Sturm's famous ‘comparison theorem’).

3. Trap the desired quantities by selecting appropriate zero positions and homing in on them using broken-line approximations in a ‘shooting method’ variation upon Euler's method.

More specifically, start at a lefthand zero P and guess a ‘velocity’ (= slope) S that might carry a point obeying the DE to pass through a number of target zeros lying to the right (think of this as shooting an arrow from P with slope S and seeing where it lands according to Euler's method). Most likely, one's initial guess of S will prove too low or too high, but if the DE is of the requisite kind, Sturm showed that, by gradually improving our broken-line guesses, we can eventually home in, in a ‘mildly transcendental’ fashion, upon unique solutions (called the problem's ‘eigenfunctions’). For each such solution, our drumhead must contain a hidden-energy property analogous to the overtone spectrum of our violin string (the curves mapped out by our shooting-method routine tell us the rules whereby the hidden quantities contribute to the drumhead's eventual position). Although many formerly unknown traits of this sort have now been assigned established mathematical names (‘Bessel functions’), such quantities are usually novel in the sense that they (provably) cannot be constructed in finite terms from previously established traits such as polynomials or trigonometric functions.
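The entrapment pattern is easy to exhibit in miniature. The sketch below (my own toy illustration, not Sturm's calculation) treats the string-like equation u″ + λu = 0 with zeros pinned at 0 and π; rather than adjusting the launch slope S described above, this common textbook variant of the shooting method adjusts the parameter λ, but the web of ever-improving broken-line shots that closes in on the eigenfunction is the same:

```python
import math

# Toy shooting method: Euler-integrate u'' = -lam*u from the lefthand zero
# u(0) = 0 (launch slope fixed at 1) and record where the shot lands at pi.

def shoot(lam, n_steps=20000):
    dt = math.pi / n_steps
    u, v = 0.0, 1.0                      # position and slope at the left zero
    for _ in range(n_steps):
        u, v = u + v * dt, v - lam * u * dt
    return u                             # the landing value u(pi)

# shoot(0.5) overshoots (positive), shoot(2.0) undershoots (negative), so an
# eigenvalue is trapped between them; bisection squeezes the web shut.
lo, hi = 0.5, 2.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0:
        lo = mid
    else:
        hi = mid

print(f"trapped eigenvalue: {0.5 * (lo + hi):.4f}")  # ~1.0: eigenfunction sin(x)
```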

In the usual jargon of functional analysis, the eigenfunctions appear as the ‘fixed points’ of the coercive mapping defined by the web of gradually improving approximations we construct as we lessen the step size Δx and improve upon our initial slope guess S.⁷ This Sturmian pattern supplies the basic prototype for the very subtle contractive mappings studied in modern functional analysis. In all cases, we are attempting to entrap a wild beast (= the hidden eigen-qualities) within a set-theoretic mesh. Such considerations underscore our earlier observations that the exact manner in which ‘physical content’ gets registered within a mathematically inflected language can prove quite complicated and unexpected.

Unfortunately, many contemporary philosophers continue to presume blithely that the intuitive notion of ‘physical structure’ can be neatly captured in the framework of the logician's language-aligned notion of ‘structure’ in the format 〈D, ϕ₁, ϕ₂, …, ϕₙ〉, where every ‘kind term of physics’ is expected to prove definable in the logician's sense from the primitive predicates on the ϕᵢ-list. But this picture is quite misleading, because many of physics's most important descriptive traits — such as Sturm's hidden spectral properties — lie far outside the orbit of the ‘kind terms’ so delineated. Indeed, there is little reason to expect that we should possess ‘concepts’ to suit such quantities beforehand, for Sturm's basic argument traps a wide range of hidden properties of our drumhead within a set-theoretic webbing without regard to whether we knew about these quantities beforehand or not. But these newly uncovered quantities are physically salient, for they represent important conserved quantities that remain invariant within the shifting patterns we witness in nature. The fact that their existence had eluded our attention previously should not seem a great surprise: very strange creatures often appear in fishermen's nets.

Although the fact is often overlooked today, the doctrines about ‘concepts’ that have come down to us (at the hands of Carnap, Quine, and others) in the misleading guise of ‘the thesis of extensionality’ historically trace to ‘properties as independent of concepts’ themes such as this.⁸ As such, they represent the natural concomitant of Euler's ‘mildly transcendental’ optimism.

Prima facie, there is nothing in these procedures that should alarm a reasonable ‘naturalist’. But let us now turn to a second strain of employing sets that proved genuinely ‘non-naturalist’ within its original motivational framework.

III

The distinct employment of sets I now have in mind takes its origins from a gambit that I shall call ‘relative logicism’. Its historical traces are most familiar today in the writings of Gottlob Frege and Bertrand Russell, but the basic policy once assumed a much wider variety of articulations.

The original purpose of these ‘relative logicisms’ was to shore up the so-called ‘free creativity of the mathematician’: the presumed liberty of practitioners to enrich traditional dominions with supplementary entities if those additions can facilitate a deeper understanding of the behaviors in question (in the manner of our ‘shortest route between A and B’ example earlier). One of the first — and still, in many ways, one of the most striking — examples of this newly claimed ‘creative liberty’ can be found in the projective-geometry enrichment of Euclid with complex-valued points and positions at infinity. At the price of possibly unnecessary detail (the reader may skim the next several paragraphs if desired), let me first explain how concepts came to the fore in such extensions and how ‘relative logicism’ proposed treating them.

Let us start with the motivations that first suggested to mathematicians that they might wish to add imaginary points (i.e., locations with complex number coordinates) to regular Euclidean geometry.

Mathematicians circa 1830 were interested in understanding how magic-lantern images alter as they are successively projected from one surface to another, in particular, when screens shaped like spheres and other curved surfaces are employed. Suppose that a cat's face is painted on the bottom side of a globe and we employ a lantern A to project a distorted copy of the face onto the top of the sphere. We bring in a second lamp B to project both images onto the flat plane running through the two tangent points where the light from A just grazes the sphere's surface in a plane section (in such contexts we pretend that the ‘light’ we employ can transmit images backwards towards their lamp sources as well). The end result of these two projections will be two cat images on the plane (one of them stretching over the far horizon) whose parts will match up with one another in a nested harmonic map centered around the two fixed points just mentioned. Such constructions have been familiar since antiquity, and the two fixed points are traditionally regarded as the ‘controlling points’ of the involution map we have constructed. But now suppose that we gradually move lamp A inside the sphere without any other change in our arrangements. We now wind up with two projected cats on the plane whose corresponding parts now match up in an overlapping manner. Although the two situations appear rather similar, the two controlling points have disappeared from the scene and, with them, the traditional means of understanding such mappings as ‘harmonic divisions’ around them.

And it is here that the ‘free creativity of the mathematician’ enters, through the vehicle of suggesting that traditional Euclidean space should be enriched with a plethora of imaginary points, two of which can still serve as the invisible ‘controlling centers’ of our overlapping cat maps. Historically, the first justifications of this policy appealed to considerations of ‘persistence of form’, which we can intuitively understand as follows. Consider how the parts of the construction in the four-diagram figure adjust themselves as we gradually move lamp A inward in the manner of the top-to-bottom ‘animation’ (think of these shifts as if they represent a little mechanism cycling through its various configurations). Many of the figure's structural characteristics — its basic relational traits — plainly remain constant over these adjustments, except for those that alter as the two tangent points first coalesce and then disappear (the phrase ‘persistence of form’ alludes to the conserved relational qualities). If so, why not add new ‘controlling points’ to geometry that can restore the full set of ‘persisting properties’ nicely in all cases?

Such ploys facilitated astonishing advances within nineteenth-century geometry that cannot be surveyed in brief compass. Despite these rich rewards, mathematicians plainly cannot be allowed to postulate anything they would like anytime they would like it (they cannot be granted an unfettered ‘license to create’ in the mode of James Bond). It is in this context that ‘relative logicism’ quickly appears on the scene, as a collection of strategies for taming the wild ‘inductivism’ implicit within the older ‘persistence of form’ appeals, in hopes of restoring mathematical methodology to its proper a priori status. The general idea is this. It is maintained that our canons of scientific rationality allow us to frame logical constructions based upon concepts that can play the role of the supplementary entities wanted. In particular, a relative logicist maintains that logical considerations allow geometers to puff up the traditional Euclidean domain with auxiliary imaginary points.

The first ‘relative logicist’ strategy of this stripe (which is the only one which remains familiar today) maintains that logic can engender new objects over a domain D through abstraction (in a traditional empiricist vein) upon the regular objects found within D. In particular, we can form equivalence classes to collect together the range of objects over which the abstraction is made (this is where sets enter the picture). For example, there is a wide range of overlapping maps m₁, m₂, m₃, … that should share the same imaginary ‘controlling point’ centers. The abstractionist ploy suggests that we can simply equate the missing points with the two sets {0, m₁, m₂, m₃, …} and {1, m₁, m₂, m₃, …},⁹ where the two sets are collected together by the rather complex relational property R shared by all overlapping maps that ought to be controlled by the same imaginary fixed points.

Or, to be more nuanced in our procedures and follow both Dedekind and common traditional doctrine on abstraction, we need not identify the imaginary points with such sets, but merely regard the latter as providing the abstractive ladder that logic must climb when it shifts from a ground domain D to a supervening realm D* of ‘abstracted objects’. After all, D* can probably be reached via many ladders of this general sort; so any identification of the ‘abstract objects’ within D* with particular sets of D-domain objects will seem unnatural (there are quite a few ground sets that can adequately inspire the abstract idea of motherhood). The key requirement is that the members of D* must inherit their essential behaviors from their antecedents back in D.¹⁰ This demand requires that a clearly delineated equivalence relation R be cited so that logic can safely certify that D* represents a well-defined domain in its own right.
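A toy rendering may make the abstractionist ploy vivid. Following the more familiar exemplar mentioned in footnote 9 (points at infinity as equivalence classes of parallel lines), the sketch below (my own illustration, not the article's) builds the new ‘objects’ of D* by sorting the lines of the ground domain D under the equivalence relation R of ‘same direction’:

```python
from fractions import Fraction

# Ground domain D: lines a*x + b*y = c, stored as coefficient triples.
lines = [(1, -1, 0),    # y = x
         (2, -2, 5),    # parallel to y = x
         (1, 0, 3),     # vertical line x = 3
         (3, 0, -1)]    # another vertical line

def direction(line):
    """The R-invariant that collects parallel lines into one class."""
    a, b, _ = line
    return ('vertical',) if b == 0 else ('slope', Fraction(-a, b))

# The supervening realm D*: one new 'point at infinity' per R-class.
ideal_points = {}
for ln in lines:
    ideal_points.setdefault(direction(ln), []).append(ln)

for point, members in ideal_points.items():
    print(f"ideal point {point}: equivalence class {members}")
# Two ideal points emerge: one for the slope-1 family, one for the verticals.
```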

Although Dedekind represents the great pioneer practitioner of this kind of algebraic construction, I have never found conclusive textual evidence that he regarded such equivalence-class constructions as an implementation of traditional views of abstraction. However, many of his near contemporaries explicitly did, such as Hermann Weyl and the celebrated Italian geometer Federigo Enriques:

For it can be admitted that entities connected by such a relation [of equivalence-class type] possess a certain property in common, giving rise to a concept which is a logical function of the entities in question and which is in this way defined by abstraction. [Enriques, 1930, p. 132]

Incidentally, regarding these set-theoretic techniques as part of ‘logic’ was fully in harmony with the standard portrait of the subject as it appears within most nineteenth-century ‘logic’ primers (which usually devote many pages to sundry policies for ‘abstraction’). Such books plainly do not view logic as ‘non-creative’ in the familiar mode of modern first-order logic.

However, the ingredient most central to ‘relative logicism’ is not the employment of sets per se, but the isolation of the persisting properties or concepts whereby they have become collected together (our R above). In fact, there were a number of programs (now largely forgotten or unrecognizably transmogrified) that employed concepts as a vehicle for introducing desirable extension elements in manners that eschewed abstractionist sets on the grounds that:

i. They often invoke infinitary notions in contexts that do not really require such appeals (Kronecker).

ii. Appeals to ‘abstraction’ pretend that the evaluative concepts implicit in the equivalence relations employed are mysteriously engendered from the objects themselves through some nebulous ‘abstractive act’, when, in truth, we must grasp an array of suitable concepts first, before we will be able to sort anything into set-like piles (many nineteenth-century logicians made this observation, including Frege).

Many of these alternative logicisms exploit salient dualities between concepts and objects. Perhaps the idea can be most efficiently explained through a familiar twentieth-century reconfiguring of the calculus due to Élie Cartan (based upon ideas deeply rooted in Frege's time). Suppose we are investigating the sideways gravitational potential f acting upon a hillside hiker, which becomes more or less intense according to how rapidly the mountain slopes away from her path. Considering her situation locally, the contour force can be represented as a linear function(al) df that evaluates our hiker's current velocity vector v by supplying a measure of the instantaneous work being applied in the form df(v) (functions of this type are called ‘forms’). Graphically, we can represent df by a collection of equally spaced lines and the hiker's velocity v by an arrow. The ‘applied work’ evaluation simply consists in seeing how many contour lines v cuts when the two gizmos are superimposed. Normally, we regard the ‘form’ as a functional concept that evaluates a collection of objects (the individual vectors inside the tangent space of velocity vectors V). However, the notion of duality suggests that it can be useful sometimes to view this evaluative operation as acting in reverse, so that the vectors become treated as the functional concepts that act upon the forms considered as ‘objects’. Indeed, the forms can be profitably regarded as ‘objects’ that live inside a ‘space’ V* of their own, upon which the members of V act as evaluative concepts (in other words, we are allowed to deposit the ‘unsaturated’ characteristics of an evaluative concept onto either df or v as we choose). Furthermore, when a metric is present, each form df will possess a natural vector representative within the dual space V (that is, a vector w in V such that w·v always supplies the same evaluation as df(v) for any v within V). It is common to call w the ‘push forward’ of df. This operation makes it easy to combine the spaces V and V* into a convenient unified domain through simply converting the evaluative forms into their natural vector representatives and vice versa (in fact, even lacking a metric, it is still useful to convert erstwhile evaluative concepts into new ‘objects’ according to roughly the same pattern). Our new variety of relative logicism claims that ‘conversions’ of this ilk also qualify as moves that are directly sanctioned by logic without bringing in the notion of ‘set’ at all.
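For concreteness, here is a small numerical sketch (my own, with an arbitrarily chosen metric) of the ‘natural vector representative’: given a metric g on V, the vector w dual to the form df is obtained by solving g·w = df, after which w reproduces every evaluation df(v):

```python
import numpy as np

g = np.array([[2.0, 0.0],     # an arbitrarily assumed metric on a
              [0.0, 1.0]])    # two-dimensional tangent space V
df = np.array([4.0, 3.0])     # the form, acting as df(v) = df . v

# The 'pushed forward' representative: the vector w with g(w, v) = df(v).
w = np.linalg.solve(g, df)    # here w = [2.0, 3.0]

v = np.array([1.0, -2.0])     # any test velocity vector
print(df @ v)                 # evaluation by the form df:    -2.0
print(w @ g @ v)              # same evaluation via w and g:  -2.0
```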

In many circumstances, it will happen that natural representatives will be missing for certain functional evaluators, but we can still claim that ‘logic’ allows us to install ‘pushed forward’ object representatives for these concepts within a unified domain nonetheless. In fact, such ‘pushed forward’ concepts were often employed to introduce the absent imaginary points into the Euclidean geometrical domain: the missing points were introduced as the ‘pushed forward’ representatives of the very same conceptual relationships R that we employed to collect our equivalence classes together. But according to the present dualized methodology, the concepts R are employed to this purpose directly, rather than harnessing set-theoretic intermediaries as their surrogates.

I happen to believe that Frege's original intent in the Grundlagen was to introduce his numbers through a variant upon this ‘push concepts forward’ ploy, rather than using equivalence classes (as he eventually did). But I have discussed these issues more adequately elsewhere.¹¹

In any event, the salient feature of all ‘relative logicisms’ is that various humanly accessible concepts must be clearly grasped at the outset, for they serve as the firm foundations upon which innovative mathematical minds must rest their ‘free creations’. In contrast, the infinitary nets cast within Eulerian portraits of our computational place in nature sometimes catch unfamiliar properties in their meshes, where we seem to have reached these newly discovered traits without the assistance of any conceptual intermediaries whatsoever.

IV

Observe that, as originally conceived, the ambitions of these ‘relative logicisms’ were determinately not ‘naturalistic’ in character; they traced to a contrary desire to maintain the traditional a priori certitude of mathematics even as it ventured into new realms beyond our immediate epistemological grasp. As we saw, various manipulations upon ‘concepts’ usually served as the required bridges to these faraway structures. Such appeals allowed the mathematical traditionalist to proclaim: ‘It is the a priori certitudes enshrined within “the logic of concepts” that allow us to accept these exotic structures as coherent; we do not require a posteriori or “naturalistic” justifications for their invocation.’

In my assessment (although I do not claim to have proved it), many contemporary philosophers of mathematics still view the ‘postulation of sets’ as following along rather similar ‘logic of concepts’ lines, albeit usually complicated by strange Quinean themes such as ‘the modern theory of sets represents the “scientific precisification” of older concerns about concepts’.¹²

As noted earlier, the use of equivalence-class constructions has become so prevalent within modern mathematical practice that it is rare to find any explanation given for methodological policies that, on the face of it, seem rather odd. In most cases, it is hard to determine how a given mathematical author regards the operative ‘philosophy’ behind the constructions she sets forth (I shall supply an evocative exemplar shortly). So is it obvious that modern mathematicians still view the surviving constructions of the relative logicists as ‘operations upon concepts’?

I do not believe this is certain at all. As a clarifying exercise, let us pluck a familiar equivalence-class construction from the literature and ask, ‘What is the underlying philosophical rationale that makes this procedure acceptable?’ Specifically, let us look at the modern method of introducing differentiable manifolds through a surrounding swarm (or ‘atlas’) of Euclidean charts (differentiable manifolds represent a nice test case in that they are resistant to conventional axiomatic specification, in the manner, say, of the axioms for a topological manifold). The basic idea behind such constructions is this. Suppose that we need to convey an adequate understanding of our Uncle Fred's qualities to a friend who has never met him. Perhaps we can provide her with a large atlas of 2-dimensional photographs.¹³ In real life, Uncle Fred is 3-dimensional; so all of our atlas photos will prove slightly distorted, but our friend will be able to correct for the extraneous representations of such portraits by shifting to other photos in the atlas that lack the unwanted feature in question. Thus if our friend needs to know how long Uncle Fred's nose really is, we can supply her with a set of computational rules (i.e., a ‘metric’) that allows her to compute nose length based upon apparent nose length within his photographs in a consistent way (that is, the rules calculate the same objective nose length no matter which photo our friend decides to employ). Through such means, our friend can grasp ‘what Uncle Fred is really like’ by learning how to compute properly over an underlying collection of numerical representations, none of which are directly accurate with respect to Fred's intrinsic features. Operating within such a frame of computational correction, even a two-dimensional denizen of Flatland can come to ‘understand Uncle Fred’ properly, simply by knowing how to calculate correctly with respect to his true features.

The stock definition of a differentiable manifold assumes this same exact form, except that our photo album is replaced by an infinite collection of overlapping Euclidean charts (even when the manifold itself lacks Euclidean structure).
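A minimal computational analogue (my own construction, not from the article) of the Uncle Fred procedure: two distorted ‘charts’ of the upper unit circle, each carrying its own correction rule, computing one and the same objective arc length:

```python
import math

# Chart 1 records points of the upper unit circle by their x-coordinate;
# the correction rule (metric) there is ds = dx / sqrt(1 - x^2).
def length_in_x_chart(x0, x1, n=10000):
    dx = (x1 - x0) / n
    return abs(sum(dx / math.sqrt(1.0 - (x0 + (i + 0.5) * dx) ** 2)
                   for i in range(n)))

# Chart 2 records the same points by their angle; there the metric is flat
# and lengths are plain coordinate differences.
def length_in_angle_chart(x0, x1):
    return abs(math.acos(x1) - math.acos(x0))

# Neither chart depicts the circle without distortion, yet both compute the
# same objective 'nose length' between the points above x = 0 and x = 0.5.
print(length_in_x_chart(0.0, 0.5))      # ~0.5236
print(length_in_angle_chart(0.0, 0.5))  # ~0.5236  (= pi/6)
```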

But what philosophy underwrites this familiar methodological ploy?

Under a Section III-like approach, we view the concept of ‘differentiable manifold’ as resulting by abstraction from the overlapping commonalities shared within the collection of Euclidean charts that we link together through the relational concepts that collect charts into suitable equivalence-class atlases. The standard modern definition of ‘differentiable manifold’ dates to a 1932 pamphlet by Veblen and Whitehead, who, in turn, followed the manner in which Hermann Weyl had earlier introduced ‘analytic manifolds’ in his celebrated Concept of a Riemann Surface. As noted earlier, Weyl explicitly identified such constructions with ‘conceptual abstraction’ and it is likely that he understood his own policies for introducing ‘manifolds’ in that same light. Be this as it may, must we truly approach the rationale for these atlas-like constructions in this abstractionist manner? The formative antecedents of the modern ‘manifold’ construction are quite complicated; so let us consider the views of another important author in the historical narrative: Hermann Helmholtz. In his justly celebrated ruminations upon non-Euclidean geometry, he approached our basic worry ‘Are such constructions genuinely coherent?’ in a manner that strikes me as more closely allied to the manner in which we entrapped previously unknown properties within a Sturmian netting without the benefit of any intervening ‘conceptual’ thread whatsoever. For Helmholtz, the chief task we confront in ‘understanding a non-Euclidean geometry’ (with respect to the physical universe) is that of understanding what it would be like to live in such a world with respect to both measurement and calculation (we need to be told how our measuring rods will behave, how our backwards and forwards movements will behave as a group, how the trajectories of objects from differential equations should be calculated, and so forth). If we particularly attend to this last problem (which Helmholtz himself did not discuss fully), we face the same problem of constructing a background ‘world’ from the data we can easily obtain computationally, through Euler's method, say. The computational charts so obtained (think of them as broken-line charts drawn on graph paper inscribed with x and t grids) should surround — if all goes well — the charts within an orthodox atlas as a ‘mildly transcendental’ swarm. The problem we now confront is that we lack any a priori understanding of what the ‘independent variables’ (the x and t axes) represent in terms of the ‘manifold’ we seek.¹⁴ When we plot such differential-equation trajectories within distinct charts that are distorted with respect to one another, we need to have supplied standards for adjudicating when two such plots should be considered as consistent or conflicting (this task requires the adjudication procedures typical of the tensor calculus). And we require further instructions for determining when a given chart should be abandoned in favor of some overlapping replacement (i.e., whether a computational oddity signals a mere coordinate breakdown or a genuine singularity of the target manifold). These problems force us to link our computational charts together in the modern ‘atlas and chart’ manner as a necessary sophistication of the general concerns with differential equations that we discussed in connection with Sturm's entrapment techniques in §II. From this point of view, the relationship between a typical non-Euclidean space and our concrete computational capacities proves to be of a ramified transcendental nature in view of the fact that we can no longer enclose the target behavior within a simple nesting of tractable calculations; we must further collate these results atlas-fashion to figure out their significance for the unfamiliar ‘manifold’ we hope to render precise.

From this point of view, the set-theoretic character of our chart-and-atlas constructions does not represent — pace their relative logicist origins — an attempt to render the processes of ‘conceptual abstraction’ precise; the object is rather to trace the ‘ramified transcendental’ connections required to link humanly feasible calculations to the natural processes that transpire within unfamiliar domains. Viewed in this way, such constructions do not strike me as alien to a reasonable ‘naturalism’; it is only their historical trappings that have made them seem thus.

Before concluding, let me add a few additional comments on Helmholtz's true philosophical position (uninterested readers may skip to the end of the section). Although I described his procedures as attempts to ‘understand what it is like to live in an unfamiliar world’, Helmholtz actually believed (if I understand him correctly¹⁵) that we never manage to ‘understand fully’ non-Euclidean realms through such constructive means, but only obtain a distanced (or ‘structuralist’) handle upon their behaviors adequate to the limited purposes of science. The policies of numerical entrapment captured within our atlas constructions never supply us with the richly understood concepts of the same quality that we enjoy when we directly apprehend the familiar realms of Euclidean geometry. We have entrapped the structural shadows cast upon our numerical charts by the desired non-Euclidean world, but we can no more ‘understand’ the qualities of such a universe than a blind person can understand what redness is really like, even if she might be able to calculate all of its varying valences quite accurately (e.g., she can correctly compute the probable trajectories of bulls near toreador capes, without grasping the true nature of the attractive quality in question). In Helmholtz's often quoted words, our non-Euclidean representations merely serve as structural ‘signs’:

[A] sign need not have any kind of similarity at all with what it is the sign of. The relation between the two of them is restricted to the fact that like objects exerting an influence under like circumstances evoke like signs, and that therefore unlike signs always correspond to unlike influences. To popular opinion, which accepts in good faith that the images which our senses give us of things are wholly true [of them], this residue of similarity acknowledged by us may seem very trivial. In fact it is not trivial. For with it one can still achieve something of the very greatest importance, namely forming an image of lawfulness in the processes of the actual world. [Helmholtz, 1977, p. 122]

Fortunately, science, in its limited purposes, only requires ‘signs’ of this limited quality.

In my own assessment, such conceptual themes (which continue to plague the most modern ‘structuralisms’) trace to a traditional but inadequate philosophy of ‘concepts’ from whose constrictive coils Helmholtz is endeavoring to escape without properly identifying their core distortions. Such issues cannot be pursued further here (although they are intimately entangled with the present considerations).

V

What, then, about our opening worries that any ‘entities built from sets’ comprise ‘abstract objects’ that ill suit a proper ‘naturalist’ account of the world? Surely, correctly capturing what was dubbed in §II as ‘our computational place in nature’ ought to represent a straightforward naturalistic task of the same general nature as explaining why certain biological strategies work well in some environments but not others. As often happens, clear answers to such questions only emerge when the strategies are examined over wider dominions than originally anticipated.¹⁶ It seems to be an inconvenient, yet undeniable, ‘natural fact’ that Nature commonly allows our effective computational techniques to map onto her own unfolding processes only in a ‘mildly transcendental’ manner. Set theory provides the natural vocabulary for capturing computational relationships of this somewhat estranged character in clear terms. But if such are the genuine limits that Nature places upon our algorithmic capacities, why should a sensible ‘naturalist’ wish to eschew the proper language for capturing such relationships accurately?

If one looks closely, one finds that the background misapprehensions typical of Quinean thinking trace to faulty presumptions that ‘writing down a “complete physics” ’ is an easier task than it actually is (he is reminiscent of Galileo in his methodological naivete). The mere fact that someone has written down a concrete set of differential equations does not guarantee that she automatically has an adequate account of how the local processes enshrined within such equations interact with their surrounding environment (factors that are reflected in the ‘side conditions’ to be attached to such equations, to the singularities and interfaces that must be tolerated, etc.). Typical investigations into the ‘existence and uniqueness of solutions’ (of which Sturm's work qualifies as a great early paradigm) play an important role in addressing these significant applicational concerns. Somehow Quine's assumptions about ‘postulation in science’ tacitly assume that everyday physics starts in such a pristine semantic condition that practitioners need not face such worries at all. To assume this is to fail to appreciate the vital partnership that unites mathematics and physics in a common ‘naturalistic’ enterprise.

In §IV, we observed that various constructions whose traditional motivations were ‘non-naturalist’ in character need not be construed in that manner and might better suit the ‘capture mildly transcendental relationships’ tasks we surveyed in §II. Accordingly, it is not fully evident that our §III constructions should appear alien to a proper ‘naturalism’ either. Plainly, it is impossible to pursue such grand issues in adequate depth in such a short essay. But perhaps our historical ruminations have recaptured enough ‘lost innocence’ that we might begin looking upon the hoary problems of ‘mathematical naturalism’ with rejuvenated eyes.

Footnotes

† Thanks to Thomas Forster for indicating that a survey of this sort might possess some pedagogical utility and to Sébastien Gandon for suggesting that differential forms provide a brisk means for conveying the essential spirit of concept/object duality. Allied acknowledgments are due Anil Gupta, Penelope Maddy, Brice Halimi, and the other participants in the 2011 Besse colloquium.

1. See my [2000]. To be sure, Galileo had earlier written in The Assayer: ‘Th[e] book [of Nature] is written in mathematical language, and the symbols are triangles, circles, and other geometrical figures, without whose help it is impossible to comprehend a single word of it; without which one wanders in vain through a dark labyrinth.’ But Descartes' more careful weighing of the capacities of algebraic formulas forced a substantial scaling back of this unsupported optimism with respect to mathematical physics.

2. I specifically have in mind the studies that logicians call ‘soundness proofs’ and workers in numerical methods call ‘correctness proofs’. For greater elaboration on their common nature, see my [2006, pp. 623–629].

3. This insight is intimately linked to Poincaré's celebrated ‘qualitative approach to differential equations’.

4. In which case appeals to probability and allied tools commonly make their fearsome entrance.

5. Cf. footnote 1.

6. In the sense of ‘transcendental function’, not ‘transcendental philosophy’.

7. For more details on the criticism brooked here, see my [forthcoming].

8. Cf. my [2006, pp. 242–258].

9. We also require arbitrary ‘0’ and ‘1’ markers within our sets because we need two imaginary points for every set of equivalent involution maps (in the jargon, ‘0’ and ‘1’ supply a sense to the maps). In a more familiar exemplar of this kind of construction, geometry's real points at infinity are identified with equivalence classes of all lines that run parallel to one another.

10. This revised picture allows us to handle the already existing fixed points of nested maps in the same way as the missing centers of the overlapping maps.

11. Cf. [Wilson, 2010]. Many of the themes broached in the present section are developed more fully there.

12. Quine notoriously believes that the ‘postulation of sets’ is a scientifically improved approach to the hierarchies of ‘concepts’ favored in Russell's original type theory and similar ‘logistical’ conceptions (this point of view is articulated at some length in his [1990] but can be found in many other Quinean loci as well).

13. In my diagram, I utilize the many perspectives of the Wizard kindly provided by John R. Neill, the Imperial Illustrator of Oz.

14. For further explication, see my [1993].

15. Because of his complex views on unconscious inferential processes, I am not certain how, in the final analysis, Helmholtz regards ‘our intuitive concepts of Euclidean geometry’ (in contrast to ‘our intuitive concepts of color’).

16. Striking examples can be found in [Batterman, 1997] (which we might supplement with mention of J.B. Keller's celebrated later proposals [1962] with respect to ‘imaginary rays’).

© The Author [2012]. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: [email protected]

References

Batterman, Robert. ‘“Into a mist”: Asymptotic theories on a caustic’. Studies in History and Philosophy of Modern Physics 28 (1997): 395–413.

Batterman, Robert. ‘On the specialness of special functions’. British Journal for the Philosophy of Science 58 (2006): 263–286.

Enriques, Federigo. The Historical Development of Logic. New York: Holt, 1930.

Ferraro, Giovanni. The Rise and Development of the Theory of Series up to the Early 1820's. New York: Springer, 2008.

Gauss, C.F. ‘Theoria residuorum biquadraticorum, Commentatio secunda’. In Reid, L.W., ed., The Elements of the Theory of Algebraic Numbers, Vol. 2. New York: Macmillan, 1910.

Helmholtz, Hermann von. ‘The facts in perception’. In Cohen, R.S., and Elkana, Y., eds., Hermann von Helmholtz: Epistemological Writings. M.F. Lowe, trans. Dordrecht: Reidel, 1977.

Keller, J.B. ‘Geometrical theory of diffraction’. Journal of the Optical Society of America 52 (1962): 116–130.

Klein, Felix. On Riemann's Theory of Algebraic Functions and their Integrals. Cambridge: Macmillan and Bowes, 1893.

Maddy, Penelope. Second Philosophy. Oxford: Oxford University Press, 2007.

Quine, W.V.O. The Roots of Reference. LaSalle, Illinois: Open Court, 1990.

Rosen, Gideon. ‘Abstract objects’. Stanford Encyclopedia of Philosophy, 2012. http://plato.stanford.edu/entries/abstract-objects/.

Wilson, Mark. ‘There's a hole and a bucket, dear Leibniz’. Midwest Studies in Philosophy 18 (1993): 202–241.

Wilson, Mark. ‘The unreasonable uncooperativeness of mathematics in the natural sciences’. The Monist 83 (2000): 296–314.

Wilson, Mark. Wandering Significance. Oxford: Oxford University Press, 2006.

Wilson, Mark. ‘Frege's mathematical setting’. In Potter, Michael, and Ricketts, Tom, eds., The Cambridge Companion to Frege. Cambridge: Cambridge University Press, 2010, pp. 379–412.

Wilson, Mark. ‘Some comments on naturalism as we now have it’. In Ross, D., Ladyman, J., and Kincaid, H., eds., Scientific Metaphysics. Oxford: Oxford University Press, forthcoming.

