Volume Two: Symposia and Invited Papers || Synopsis

Source: PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1982, Volume Two: Symposia and Invited Papers (1982), pp. xiii-xxiii. Published by The University of Chicago Press on behalf of the Philosophy of Science Association. Stable URL: http://www.jstor.org/stable/192408

SYNOPSIS

The following brief summaries provide an introduction to each of the papers in this volume.

1. Values In Science. Ernan McMullin. This paper argues that the appraisal of theory is in important respects closer in structure to value-judgement than it is to the rule-governed inference that the classical tradition in philosophy of science took for granted.

2. Models, High-Energy Theoretical Physics and Realism. James T. Cushing. Examples of theory development in quantum field theory and in S-matrix theory are related to three questions of interest to the philosophy of science. The first is the central role of highly abstract, mathematical models in the creation of theories. Second, the processes of creation and justification actually used make it plausible that a successful theory is equally well characterized as being stable against attack rather than as being objectively correct. Lastly, the issue of the reality of theoretical entities is discussed in light of this representation of theory generation and selection.

3. Quantum Field Theory for Philosophers. Michael L.G. Redhead. The metaphysical commitments of quantum field theory are examined. A thesis of underdetermination as between field and particle approaches to the "elementary particles" is argued for, but only if a disputed notion of transcendental individuality is admitted. The superiority of the field approach is further emphasized in the context of heuristics.

4. Comments on the Papers of Cushing and Redhead: "Models, High-Energy Theoretical Physics and Realism" and "Quantum Field Theory for Philosophers." Paul Teller. In response to Cushing it is urged that the vicissitudes of quantum field theory do not press towards a nonrealist attitude towards the theory as strongly as he suggests. A variety of issues which Redhead raises are taken up, including photon localizability, the wave-particle distinction in the classical limit, and the interpretation of quantum statistics, vacuum fluctuations, virtual particles, and creation and annihilation operators. It is urged that quantum field theory harbors an unacknowledged inconsistency connected with the fact that the zero point energy has observable consequences, while to avoid infinities it must be "thrown away". Finally, Redhead's conception of ephemerals is pressed and the paper concludes with the suggestion that the particle concept largely drops out of quantum field theory.

5. A Response to Paul Teller. James T. Cushing.

6. The Incredibility of Rejecting Belief-Desire-Action Explanations. Alfred F. MacKay. If the conceptions of belief, desire, and action resemble phlogiston in their scientific standing, how is it that so many true, singular, causal claims about human behavior are made using these concepts? Alexander Rosenberg appeals to the distinction between attributive and referential uses of language to handle this objection. It is argued that this does not work, and that the truth of our singular, causal explanations of human behavior is little short of miraculous given his account of the nomological situation.

7. Mechanizing the Search for Explanatory Hypotheses. Bruce G. Buchanan. Recent work in artificial intelligence, or AI, has produced programs capable of serious intellectual work in science. Results from AI are used to show that there exist mechanized procedures for discovering hypotheses and that these methods often lead to plausible hypotheses. The question to be answered is: By what methods can a computer program find plausible explanations of empirical data? To answer this, the common metaphor of search for describing several AI programs is examined, followed by a look at examples of programs actually constructed. Five types of search considered are: random, exhaustive, selective, heuristic, and opportunistic search. From there, the paper generalizes to some conclusions about mechanizing hypothesis formation.
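
As a rough illustration of the contrast among these search styles, consider a minimal best-first (heuristic) search over a space of candidate hypotheses. The toy data, plausibility score, and refinement operator below are invented for illustration and are not from Buchanan's paper:

import heapq

# Toy data: (feature, label) pairs. A candidate hypothesis is an integer
# threshold t asserting "the label is True iff the feature exceeds t".
DATA = [(1, False), (3, False), (5, True), (7, True)]

def score(t):
    # Plausibility heuristic: fraction of observations the hypothesis explains.
    return sum((x > t) == y for x, y in DATA) / len(DATA)

def refine(t):
    # Invented refinement operator: perturb the threshold one step either way.
    return [t - 1, t + 1]

def heuristic_search(t0, max_steps=20):
    # Best-first search: always expand the most plausible candidate next,
    # in contrast with random or exhaustive enumeration of thresholds.
    frontier = [(-score(t0), t0)]  # max-heap via negated scores
    seen, best = {t0}, t0
    for _ in range(max_steps):
        if not frontier:
            break
        _, t = heapq.heappop(frontier)
        if score(t) > score(best):
            best = t
        for t2 in refine(t):
            if t2 not in seen:
                seen.add(t2)
                heapq.heappush(frontier, (-score(t2), t2))
    return best

print(heuristic_search(0))  # prints 3, a threshold that accounts for all four observations

The heuristic version reaches a fully adequate hypothesis after expanding only a handful of candidates; an exhaustive search would score every threshold, and a random search offers no such guarantee.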

8. Artificial Intelligence and Philosophy of Science: Reasoning by Analogy in Theory Construction. Lindley Darden. This paper examines the hypothesis that analogies may play a role in the generation of new ideas that are built into new explanatory theories. Methods of theory construction by analogy, by failed analogy, and by modular components from several analogies are discussed. Two different analyses of analogy are contrasted: direct mapping (Mary Hesse) and shared abstraction (Michael Genesereth). The structure of Charles Darwin's theory of natural selection shows various analogical relations. Finally, an "abstraction for selection theories" is shown to be the structure of a number of theories.

9. Artificial Intelligence, Psychology, and the Philosophy of Discovery. Paul Thagard. Buchanan and Darden have provided compelling reasons why philosophers of science concerned with the nature of scientific discovery should be aware of current work in artificial intelligence. This paper contends that artificial intelligence is even more than a source of useful analogies for the philosophy of discovery: the two fields are linked by interfield connections between philosophy of science and cognitive psychology and between cognitive psychology and artificial intelligence. Because the philosophy of discovery must pay attention to the psychology of practicing scientists, and because current cognitive psychology adopts a computational view of mind with AI providing the richest models of how the mind works, the philosophy of discovery must also concern itself with AI models of mental operations. The relevance of the artificial intelligence notion of a frame to the philosophy of discovery is briefly discussed.

10. Probabilistic Explanation: Introduction. Wesley C. Salmon.

11. Probabilistic Explanation and Probabilistic Causality. Joseph F. Hanna. This paper argues that if the world is irreducibly stochastic, then both Salmon's S-R model of explanation and Fetzer's C-R model of explanation have the following undesirable consequence: the objective probability (associated with the model's relevance condition) of any actual macro-event is either undefined or else, if defined, it equals one--so that the event is not even a candidate for a probabilistic explanation. This result follows from the temporal ambiguity of ontic probability in an irreducibly stochastic world. It is argued further that an analogous difficulty faces those theories of probabilistic causality which depend upon the notions of contributing and counteracting causes. Because of the problem of temporal ambiguity, it is not possible to objectively label a particular event as a contributing (or a counteracting) cause of some subsequent event. The argument is carried through in detail for a recent theory of probabilistic causality proposed by Paul Humphreys.
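
For orientation, the relevance condition at issue in Salmon's statistical-relevance (S-R) model is standardly stated as follows (a textbook formulation, not quoted from Hanna's paper): a factor C is explanatorily relevant to an attribute B within a reference class A just in case conditioning on C changes the probability of B,

\[
P(B \mid A \cap C) \neq P(B \mid A).
\]

Hanna's worry, in this notation, is that once every temporally prior fact about an actual macro-event is admitted into the reference class, the probability on the left is either undefined or trivially equal to one.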

12. Probabilistic Explanations. James H. Fetzer. The purpose of this paper is (a) to provide a systematic defense of the single-case propensity account of probabilistic explanation from the criticisms advanced by Hanna and by Humphreys and (b) to offer a critical appraisal of the aleatory conception advanced by Humphreys and of the deductive-nomological-probabilistic approach Railton has proposed. The principal conclusion supported by this analysis is that the Requirements of Maximal Specificity and of Strict Maximal Specificity afford the foundation for completely objective explanations of probabilistic explananda, so long as they are employed on the basis of propensity criteria of explanatory relevance.

13. Aleatory Explanations Expanded. Paul Humphreys. Existing definitions of relevance relations are essentially ambiguous outside the binary case. Hence definitions of probabilistic causality based on relevance relations, as well as probability values based on maximal specificity conditions and homogeneous reference classes, are also not uniquely specified. A 'neutral state' account of explanations is provided to avoid the problem, based on an earlier account of aleatory explanations by the author. Further reasons in support of this model are given, focusing on the dynamics of explanation. It is shown that truth in explanation need not entail maximal specificity and that probabilistic explanations should not contain a specification of probability values.
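
A standard textbook illustration of the ambiguity (not Humphreys' own example) is the Simpson-style reversal: a factor C can be positively relevant to E overall while being negatively relevant within every cell of a finer partition \{B, \neg B\},

\[
P(E \mid C) > P(E), \qquad
P(E \mid C \cap B) < P(E \mid B), \qquad
P(E \mid C \cap \neg B) < P(E \mid \neg B),
\]

so which relevance relation obtains depends on the partition chosen.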

14. The Philosophical Basis of Cost-Risk-Benefit Analyses. David A. Bantz. Analytical techniques for determining the net worth of the consequences of alternative choices are increasingly used to motivate decisions about public projects and policies, especially where risks are prevalent. What information do these techniques provide, and what are the grounds for using them in decision making? It is argued that the apparent similarities of cost-risk-benefit analyses to decision theory are in some ways misleading, and that the true basis of such analyses in welfare economics suggests some inherent limitations to their use. The limitations described arise from the practice of aggregating individual preferences as determined by economic valuations; they restrict (but do not invalidate) the application of such analyses.

15. Quantification, Regulation, and Risk Assessment. Douglas MacLean. The basic question for risk assessment is not "What are the risks?" but "How safe is safe enough?" Its ambitious goal is to make risk management a scientific enterprise. In order to succeed, not only must risks be quantified, but the many kinds of costs and benefits associated with technology and its control must also be quantified, and we must find a common metric for comparing these different factors. The risks of risk assessment include the possibility of distorting values in the quest for general comparability and also the problem of responding rationally to different kinds of uncertainty. The problems discussed include comparing novel events to statistically frequent events, comparing different distributions of risk, and quantifying the value of saving human lives.

16. Costs and Benefits of Cost-Benefit Analysis: A Response to Bantz and MacLean. Peter Railton. Although the standard theory and actual practice of cost-benefit analysis are seriously defective, the general idea of making social policy in accord with an aggregative, maximizing, consequentialist criterion is a sensible one. Therefore it is argued, against Bantz, that interpersonal utility comparisons can be meaningful, and, against both Bantz and MacLean, that quantitative overall assessments of expected value (using corrected prices) provide a presumptively rational basis for social choice. However, it does not follow that introducing cost-benefit tests into the political or legal process would always be optimal: recognizing some quite stringent legal rights against involuntary exposure to pollution or risk may actually promote cost-beneficial results more reliably than cost-benefit tests employed in very imperfect circumstances.

17. Beyond Darwinism? The Challenge of Macroevolution to the Synthetic Theory of Evolution. Francisco J. Ayala. The theory of punctuated equilibrium has been proposed as a challenge to the modern synthesis of evolutionary theory. Two important issues are raised. The first is scientific: whether morphological change as observed in the paleontological record is essentially always associated with speciation events. This paper argues that there is at present no empirical support for this claim: the alleged evidence is based on a definitional fallacy. The second issue is epistemological: whether macroevolution is an autonomous field of study, independent from microevolutionary knowledge. It is herein argued that macroevolution and microevolution are not decoupled in two senses: identity at the level of events and compatibility of theories. But macroevolution is autonomous in the epistemologically important sense: macroevolutionary theories are not reducible to microevolutionary principles. It is finally pointed out that the discipline of macroevolution is notoriously lacking in theoretical constructs of great import and generality.

18. Filling Some Epistemological Gaps: New Patterns of Inference in Evolutionary Theory. Stuart A. Kauffman. Contemporary evolutionary theory, derived from the intellectual marriage of Darwin's and Mendel's discoveries, leads us to view organisms as successful, but essentially ad hoc, responses to chance and necessity. Biological universals, the code, the pentadactyl limb, are frozen accidents shared by descent. The source of biological order has come to be seen as selection itself. This paper argues that this view is fundamentally inadequate. It ignores those underlying sources of biological order which derive from the generic self-organizing properties of the biological building blocks. Among these generic properties are almost universal aspects of phase resetting responses in biological rhythmic systems; fascinating properties of continuity, symmetry and handedness seen in pattern regeneration across distant phyla; and statistically robust properties expected of eukaryotic gene regulatory systems persistently subject to mutations which "scramble" regulatory interactions. These examples suggest that many properties in organisms reflect a balance between selection and the rich generic properties which would occur in the absence of selection. Where the balance is "close" to generic, a new pattern of evolutionary inference, and an ahistorical source of biological universals, may be found: those properties reflect, not selection, but the self-organizing features of the building blocks.

19. The Modern Synthesis: Its Scope and Limits. Elliott R. Sober. This paper locates the contributions of Kauffman and Ayala to this symposium in the context of recent discussions of the adequacy of the Modern Synthesis. The neglect of morphology and development described by Kauffman is understandable in view of the belief that selection is the most powerful evolutionary force. His idea that properties of order may be explained by nonselective mechanisms is also examined. The paper subsequently takes up Ayala's criticism of S.J. Gould's view that macroevolution is a process "decoupled" from microevolution. It is argued that the idea of species selection makes Gould's antireductionism ontological in character; this contrasts with Ayala's contention that the decoupling is merely epistemological.

20. Is the Neo-Darwinian Synthesis Robust Enough to Withstand the Challenge of Recent Discoveries in Molecular Biology and Molecular Evolution? John R. Jungck. The impact of recent developments in molecular biology and molecular evolution on the concepts of "units of selection", the "levels of selection", and the "limits of selection" is discussed.

21. What is the Difference that Makes a Difference? Gadamer, Habermas, and Rorty. Richard J. Bernstein. Against the background of disputes about modernity and post-modernity in philosophy, this paper probes the differences among Gadamer, Habermas, and Rorty. Focusing on the themes of praxis, phronesis, and practical discourse, it is argued that what initially appear to be hard and fast cleavages and irreconcilable differences turn out to be differences of emphasis. The common ground that emerges is adumbrated as "non-foundational pragmatic humanism". Although there are important differences among these three thinkers, each of their voices contributes to a coherent conversation in developing "moral-political vision".

22. Saving the Differences: Gadamer and Rorty. Charles B. Guignon. Bernstein's attempt to identify a convergence in the ethical and political implications of the writings of Gadamer and Rorty is found to be inadequate on two counts. First, by accepting the extreme antifoundationalism in Rorty, Bernstein tends to undermine the humanistic ideals he wishes to defend. And, second, the important differences in the conception of history in Gadamer and Rorty are concealed. It is argued that Gadamer's view of 'effective-history' offers a basis for enduring values which would not be possible given Rorty's conception of history.

23. The Differences That Make a Difference: A Comment on Richard Bernstein. Thomas McCarthy. In contrast to Bernstein's emphasis on the common ground shared by Rorty and Habermas, this paper stresses the basic differences between them, particularly their diverse assessments of rationalism, universalism, foundationalism and developmentalism, as well as their opposed evaluations of systematic thought and critical social theory. Several difficulties with Rorty's views on reason, truth and objectivity, as well as with his historicism and physicalism, are suggested. It is concluded that Bernstein's emphasis on the common elements in their "moral-political vision", in the face of these striking theoretical differences, overestimates the extent to which normative ideals can survive being cut off from larger contexts of ideas.

24. On Einstein's Invention of Special Relativity. Arthur I. Miller. A scenario is conjectured for Einstein's invention of the special theory of relativity that receives support from the widest possible range of archival, primary and secondary sources. This scenario takes into account the philosophical-physical-technological currents of 1905.

25. On Writing the History of Special Relativity. John Earman, Clark Glymour, and Robert Rynasiewicz. Nearly all accounts of the genesis of special relativity unhesitatingly assume that the theory was worked out in a roughly five week period following the discovery of the relativity of simultaneity. Not only is there no direct evidence for this common presupposition, there are numerous considerations which militate against it. The evidence suggests it is far more reasonable that Einstein was already in possession of the Lorentz and field transformations, that he had applied these to the dynamics of the electron, and that portions of the 1905 paper had actually been drafted well before the epistemological analysis of time.

26. The Historiography of Special Relativity: Comments on the Papers by John Earman, Clark Glymour, and Robert Rynasiewicz and by Arthur Miller. Kenneth F. Schaffner. Two problems in the paper by EGR are considered. One is the lack of any direct confirmatory evidence for the elegant rational reconstruction. The second is a significant gap in the historical account, just at the critical point in Einstein's discovery process -- namely, the reanalysis of simultaneity. In addition, the EGR account appears in danger of being overly focused on the electrodynamical aspect of special relativity to the exclusion of optical null experiments, and in particular to the exclusion of the role of the 1887 Michelson-Morley interferometer experiment.

The author disagrees with Miller on the Kantian aspects of Poincare, and on the role of Anschauungen, preferring to sketch instead a more important role for a Mach-like analysis of fundamental scientific concepts. It is argued that Miller has misconstrued the difference between an axiomatic theory and a "theory of principle" in Einstein's approach. Finally, some suggestions are made as to how the gap in our analysis of Einstein's revolution in simultaneity might be reexamined.

27. The Role of Randomization in Inference. Dennis V. Lindley. It is argued that randomization has no role to play in the design or analysis of an experiment. If a Bayesian approach is adopted this conclusion is easily demonstrated. Outside that approach two principles, of conditionality and similarity, lead, via the likelihood principle, to the same conclusion. In the case of design, however, it is important to avoid confounding the effect of interest with an unexpected factor and this consideration leads to a principle of haphazardness that is clearly related to, but not identical with, randomization. The role of exchangeability is discussed.
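
For readers unfamiliar with the likelihood principle invoked here, its standard statement (not Lindley's wording) is that two observed outcomes carry the same evidential import about a parameter \theta whenever they determine proportional likelihood functions, L_1(\theta) \propto L_2(\theta). The Bayesian case is then immediate, since the posterior depends on the data only through the likelihood,

\[
p(\theta \mid y) \propto p(\theta)\, L(\theta; y),
\]

so the physical randomization used to assign treatments drops out of the analysis.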

28. Direct Inference and Randomization. Isaac Levi. There are two uses of randomization in efforts to control systematic bias in experimental design: (a) Alchemical uses seek to convert unavoidable systematic errors into random errors. (b) Hygienic uses seek to reduce the prospect of the experimenter's involvement with the implementation of the experiment contributing to bias. A few remarks are made at the end of the paper about the hygienic use of randomization as a preventative against sticky fingers. The bulk of the discussion addresses the alchemical applications. The thesis is that attitudes towards the cogency of Fisher's alchemical use of randomization ought to depend on views concerning statistical deduction or direct inference.

29. Arguments for Randomizing. Patrick Suppes. Three main lines of arguments are presented as a defense of randomization in experimental design. The first concerns the computational advantages of randomizing when a well-defined underlying theoretical model is not available, as is often the case in much experimentation in the medical and social sciences. The high desirability, even for the most dedicated Bayesians, of physical randomization in some special cases is stressed. The second line of argument concerns communication of methodology and results, especially in terms of concerns about bias. The third line of argument concerns the use of randomization to guarantee causal inferences, whether the inference consists of the identification of a prima facie or a genuine cause. In addition, the relation of randomization to measures of complexity and the possibility of accepting only random procedures that produce complex results are discussed.
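
The causal-inference point can be made concrete with a small simulation (an invented data-generating process, not an example from Suppes's paper): randomized assignment recovers the true treatment effect, while assignment correlated with a confounder biases the naive estimate.

import random

random.seed(0)
TRUE_EFFECT = 2.0

def outcome(treated, frailty):
    # Invented data-generating process: frailty lowers the outcome.
    return TRUE_EFFECT * treated - frailty + random.gauss(0, 0.5)

def estimated_effect(randomized, n=20000):
    # Naive estimate: mean(treated outcomes) - mean(control outcomes).
    treated_group, control_group = [], []
    for _ in range(n):
        frailty = random.random()
        if randomized:
            treated = random.random() < 0.5   # coin-flip assignment
        else:
            treated = frailty > 0.5           # frailer subjects get treatment
        (treated_group if treated else control_group).append(outcome(treated, frailty))
    return sum(treated_group) / len(treated_group) - sum(control_group) / len(control_group)

print(estimated_effect(randomized=True))   # ~2.0: recovers the true effect
print(estimated_effect(randomized=False))  # ~1.5: biased by the confounder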

30. Exemplars and Scientific Change. David L. Hull. Philosophers have distinguished a metaphysical category which they term "historical entities" or "continuants". Such particulars are spatiotemporally localized and develop continuously through time while retaining internal cohesiveness. Species, social groups and conceptual systems can be profitably treated as historical entities. No damage is done to preanalytic intuitions in treating social groups as historical entities; both biological species and conceptual systems can be construed as historical entities only by modifying the ordinary way of viewing both. However, if species and conceptual systems are to "evolve", then they must be treated as historical entities. The type specimen method, which is used by systematists to individuate and name biological taxa, is set out and then extended to apply to scientific communities as social groups and conceptual systems.

31. Comments on David Hull's Paper on Exemplars and Type Specimens. Ernst Mayr. The type in taxonomy is not meant to be a particularly typical specimen, but simply a reference specimen suited to serve as a 'name bearer' whenever doubt arises concerning the identity of a species. The minimum requirement is that the specimen reflects some differentiating characteristics of the species. In analogy, only such individuals should be made the type of an ideological system as adhere to the principal ideologies of that system. Only such an evolutionist could serve as type for Darwinism who on the whole accepts gradual evolution and, as the major moving force in evolution, natural selection. It is very questionable whether the type-method would be of any use where highly heterogeneous, open, or rapidly evolving systems are involved. When the meaning of a system is changing it is less confusing to redefine it than to coin a new term for each change.

32. Taxonomy and Theory. A. Aaron Snyder. Biological evolution allegedly requires a genealogical conception of species (i.e., that species are descent-based "historical entities" rather than similarity-based "natural kinds"). After considering David Hull's arguments for this view, this paper opts instead for individuating species primarily via genetic similarities, but in a way which avoids charges of "Essentialism". It is suggested that a genealogical conception of species actually derives from a biological version of Behaviorism plus an interrelated pair of confusions regarding evolution and identity. Current taxonomic method may favor the genealogical conception, but evolutionary theory--as well as genetics and molecular biology--counts against it.

33. Inference in Perception. Irvin Rock. It is argued that perception is based on the same kinds of operations that characterize thought. However, perception differs from thought in being rooted in the stimulus and predicated on a narrower range of (nonconscious) "knowledge". Other theories fail to do justice to the ambiguity inherent in the stimulus and the organization and enrichment inherent in the percept. Examples of perception are given that suggest determination by cognitive processing such as description, inference, and problem solving. One percept is often based upon another percept. The final percept is a preferred solution and adequately accounts for and is supported by the proximal stimulus or an initial percept to which the stimulus gives rise.

34. Is The Visual System As Smart As It Looks? Patricia Smith Churchland. Irvin Rock's hypothesis that certain stages of perceptual processing resemble problem solving in cognition is contrasted to some recent work in computer vision (Marr, Ullman) which tries to reduce intelligence in perception to computational organization. The focal example is subjective contours which Marr thought could be handled by computational modules without descending control, and which Rock thinks are the outcome of intelligent processing.

35. Beyond Inference In Perception. Stephen P. Stich. The controversy over inference in perception turns on the nature of the processes that intervene between the stimulus and the perceptual experience or percept. Should the processes be viewed as something like inference and computation, or should they be viewed as psychologically primitive mechanisms whose workings are best accounted for at a neurological or physiological level? It is argued that the view that computational and inference-like processes play a role in perceptual processes should be adopted.

36. On the Present State of the Philosophy of Quantum Mechanics. Howard Stein. It is suggested that the true physical significance of the Hilbert space structure in quantum mechanics remains (despite the undoubted significance of the elucidation given early by von Neumann, and further clarified by later discussions) less well understood than is usually supposed. Reasons are given for this view from considerations internal to the theory; a (remote) analogy is considered to the role, and presumed physical significance, of the notion of "ether" in nineteenth-century physics; the issues of measurement (or, more generally, application of the theory) are touched on, as well as the significance of Bell's results concerning locality, and the difficulties confronting the application of quantum mechanics to the whole cosmos.

37. How Technology Aids and Impedes the Growth of Science. Joseph Agassi. The vision of Horace, combining the sweet and the useful, is an expression of a sense of abundance. It came first and was then supported by Bacon's vision of a science-based technology. Later this was further backed by classical liberalism and by metaphysical progressivism. That technology may impede and even destroy science is obvious. Yet the danger is overlooked--with the aid of the vision of Horace and of neo-conservative (Popperian) politics and of neo-reactionary (Kuhnian) politics of science. The science of science policy needs boosting in order to study means of democratic control of technology.

38. Stochastic Locality and the Bell Theorems. Geoffrey Hellman. After some introductory remarks on "experimental metaphysics", a brief survey of the current situation concerning the major types of hidden-variable theories and the inexistence proofs is presented. The category of stochastic, contextual, local theories remains open. Then the main features of a logical analysis of "locality" are sketched. In the deterministic case, a natural "light-cone determination" condition helps bridge the gap that has existed between the physical requirements of the special theory of relativity and formal conditions used in proving the Bell-Wigner theorem. Natural generalization to the stochastic type, taking account of the distinction between epistemic and physical probabilities, leads to a series of independence claims constituting some (possibly) significant limitations on generalized Bell theorems. In particular, the conditional stochastic independence requirement is seen both to go beyond the demand of compliance with the STR and to be a genuine necessity (up to equivalence in this kind of strength) in deriving any Bell theorem for the stochastic case. The conclusion is also supported that, if determinism is given up, the Bell theorems and experiments do not pose an additional obstacle to unifying relativity theory and quantum mechanics beyond what is already posed by the "instantaneous" collapse of the wave function.
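
The conditional stochastic independence requirement mentioned here is standardly written as a factorizability condition on stochastic hidden-variable models (the notation below is the usual one in the Bell literature, not necessarily Hellman's): given the hidden state \lambda and local measurement settings x and y, joint outcome probabilities factorize,

\[
P(a, b \mid x, y, \lambda) = P(a \mid x, \lambda)\, P(b \mid y, \lambda).
\]

Some such factorization, or a condition of equivalent strength, is what the derivation of a Bell theorem for the stochastic case requires, and it demands more than bare compliance with special relativity.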

39. Where the Theory of Probability Fails. Itamar Pitowsky. A local "resolution" of the Einstein-Podolsky-Rosen Paradox by way of a mechanical analogue (roulette) is presented together with some notes regarding the consequences of such models for the foundations of math- ematics and the theory of probability.

40. Must Beliefs Be Sentences? Brian Loar. Two naturalistic explications of propositional attitudes and their contents are distinguished: the language of thought based theory, on which beliefs are relations to sentences in the language of thought; and the propositional attitude based theory, on which beliefs are functional states of a functional system that does not imply a language of thought, although consistent with it. The latter theory depends on interpersonally ascribable conceptual roles; if these are not available, the language of thought theory has the advantage. But the propositional attitude based theory explains intentionality and conceptual structure as well as the language of thought based theory, and it has two further advantages. First, it does not make the existence of beliefs and desires depend on the language of thought hypothesis. Secondly, its employment of interpersonally ascribable conceptual roles permits a theory of truth conditions to meet certain desiderata, such as a social basis for truth conditions, and a realist conception of truth.

41. A Reply to Brian Loar's "Must Beliefs Be Sentences?" Jerry Fodor. It is argued that Loar's paper overestimates the importance of the distinction between 'functionalist' and 'representationalist' theories of the propositional attitudes; specifically, that the only version of functionalism which appears likely to provide an adequate account of the attitudes is one which treats them as relations to mental representations. The paper ends with a brief discussion of some of Loar's objections to 'ideal indicator' theories of the relation between beliefs and their truth conditions. It is argued that these objections are not decisive.

42. Beliefs and Concepts: Comments on Brian Loar, "Must Beliefs Be Sentences?" Gilbert Harman. Concepts, not the beliefs employing them, have uses or roles in thought. Most conceptual roles cannot be specified solipsistically, and do not have inner aspects that can be specified solipsistically. (To think otherwise is to confuse function with misfunction.) A theory of truth conditions plays no useful part in any adequate account of conceptual role. Ordinary views about beliefs assign them conceptual structures which figure in explanations of functional relations. Which conceptual structures beliefs have may be relative to an arbitrary choice of "analytical hypothesis" but that does not mean that there is an adequate nonrelative account that dispenses with a system of concepts or language of thought.

43. Reply to Fodor and Harman. Brian Loar.

44. Commensurability, Comparability, Communicability. Thomas S. Kuhn. The author's concept of incommensurability is explicated by elaborating the claim that some terms essential to the formulation of older theories defy translation into the language of more recent ones. Defense of this claim rests on the distinction between interpreting a theory in a later language and translating the theory into it. The former is both possible and essential, the latter neither. The interpretation/translation distinction is then applied to Kitcher's critique of incommensurability and Quine's conception of a translation manual, both of which take reference-preservation as the sole semantic criterion of translational adequacy. The paper concludes by enquiring about the additional criteria a successful translation must satisfy.

45. Implications of Incommensurability. Philip Kitcher. It is argued that if Kuhn's current attempt to characterize conceptual incommensurability is correct, then the phenomenon of conceptual incommensurability is epistemologically innocuous. The first part of the paper explains why available techniques of reference specification provide rival scientists with sufficient access to one another's languages to compare their views. The second half of the paper attempts to elaborate an account of conceptual incommensurability that will develop (what the author takes to be) Kuhn's fundamental insight.

46. Comment on Kuhn's "Commensurability, Comparability, Communicability". Mary Hesse. Kuhn's incommensurability thesis of 1962 still implies a very radical critique of standard theories of meaning. It is argued that incommensurability is sufficiently pervasive throughout the development of theories as to call in question standard linguistic palliatives, and that Kuhn's critique of extensionalist translation must be carried further into a theory of interpretation which not only depends on holistic meanings, but also explicitly addresses the ostensive and analogical processes of language learning. Such a theory is required for the pervasively metaphoric character of natural language, as well as for the understanding of theoretical terms in science.

47. Response to Commentaries. Thomas S. Kuhn.

48. The Sure Thing Principle. Richard Jeffrey. The Sure Thing Principle (1), Dominance Principle (2), and Strong Independence Axiom (3) have been attacked and defended in various ways over the past 30 years. In the course of a survey of some of that literature, it is argued that these principles are acceptable iff suitably qualified.
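
For orientation, Savage's standard formulation of the Sure Thing Principle (the numerals (1)-(3) above refer to the paper's own statements) is:

\[
\bigl(f \succeq g \text{ given } E\bigr)
\ \text{and}\
\bigl(f \succeq g \text{ given } \neg E\bigr)
\ \Longrightarrow\ f \succeq g,
\]

for any acts f and g and any event E.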
