Knowledge of Other Minds L2 15.11.13

Lecturer: Lucy Campbell [email protected]

Lecture Two: The Analogy Theory [‘AT’]

A. RE-CAP: THE EP-PROBLEM OF OTHER MINDS

1. I know my own mind ‘directly’, but all I have to go on when it comes to other minds is their behaviour. But that behaviour is consistent with a) other minds being different from how I take them to be, or b) their not being there at all! The problem has specific and hyperbolic forms. The challenge is to spell out our entitlement to OM-judgments.

B. ANALOGICAL INFERENCE AND [AA]

1. Analogical inference (see [ i ]): I form a judgment about b’s props by inferring from i) knowledge of a’s props and ii) knowledge that b is like a. Instances will be more or less complicated depending on no. of props involved. E.g. A) I judge that Trev’s car can do 0-60 in 3.1s on the basis of i) Rog’s car can do 0-60 in 3.1s, and ii) Trev’s and Rog’s cars are similar; B) I judge that I will die on the basis of i) Julie has ingested cyanide and has died, ii) I have ingested cyanide, iii) Julie and I are relevantly similar.

2. [AT] claims: OM-judgments justified by an argument from analogy [AA]:

P1) S is in C-type circs. and/or is displaying B-type behaviour. [Non-psychological other-knowledge]
P2) C-type circs. tend to cause me to be in M-type state and/or my B-type behaviours tend to be caused by M-type mental states in me. [Psychological self-knowledge]
P3) S and I are relevantly similar. [Relevant Similarity Assumption]
C) So, S is (probably) in M-type mental state.

C. SOME REMARKS ON [AA] AND [AT]

1. ‘[AT]’ names the theory under consideration; [AA] the reasoning justifying OM-judgments.

2. (P1)-(C) is a reconstruction of various versions of [AA]. (For particular presentations see [ ii ].)

3. (P2) treats the C→M and M→B relations as causal, but (P2) could also be read correlatively.

4. [AA] needn’t occur in consciousness (or even in mind) – it need only reconstruct one’s justification.

5. What is the relevant similarity between me and others? Two suggestions: a) similarity of outward behaviour (maybe relativised to circs.); b) similarity of biological make-up (similar nervous systems, etc.).

C. TWO STANDARD OBJECTIONS (see [ iii ] for discussion)

1. [AA] represents a bad inductive inference; one from one known to many (indefinite) unknowns. NB: The force of this might depend on how general an OM-judgment our conclusion is (recall distinctions of generality from L1).

2. [AA] has logically uncheckable conclusions; so how do we know we’re using a reliable method of inference?

D. (P1) AND THICK BEHAVIOURAL CONCEPTS (see also [ iv ])

1. Premises of [AA] must – on pain of circularity – include only psychologically neutral concepts. But it’s implausible that we pick out others’ behaviour in a psychologically neutral way. Think about how you would most naturally describe these expressions:


2. [AT] can distinguish thick and thin behavioural concepts, and say: (P1) includes only thin concepts.

A smile is a facial expression formed by flexing the muscles near both ends of the mouth and by flexing muscles throughout the mouth. Some smiles include contraction of the muscles at the corner of the eyes. [ v ]

3. But it’s very implausible that we use such thin concepts very often, if at all. Indeed, a lot of detailed empirical work (e.g. [ vi ]) goes into being able to describe facial expressions ‘thinly’. (Note also the discrepancy between the quote and the accompanying picture.) [AA] loses force if it requires us to pick out others’ behaviour in this way.

4. But [AA] is supposed to reconstruct one’s justification for OM-judgments – not our actual methodology/thought processes. As long as a justification is available, then (says [AT]) our OM-judgments are justified. However, it’s not clear that we even have these thin concepts (psychologists have to discover the correct thin descriptions of facial expressions). And minimally, if [AA] is to justify my OM-judgments, I need to possess the concepts it requires.

E. (P2) AND THIN BEHAVIOURAL CONCEPTS

1. If (P1) uses thin concepts, then so must (P2) if [AA] is not to equivocate. But it’s hugely implausible that I pick out my own behaviour using thin behavioural concepts: my knowledge that my smile is a happy one doesn’t require any investigation into how it looks/which muscles are contracted/relaxed. Knowledge of my smile as happy seems as immediate as knowledge of my happiness.

2. Together with the problem in (D), this constitutes a dilemma for [AT]: [AA] either begs the question or equivocates.

3. Can [AT] drop all talk of behaviour and concentrate on circumstances? Yes, but this hugely weakens [AT]. Certain OM-judgments can’t be justified without ref. to behaviour, e.g. a) judgments that S is in M-given-C (when I wouldn’t be), b) judgments that S is in C-independent M, c) judgments that S is in M-given-unknown-C. But these are super-common OM-judgments.

F. (P3) AND DEFINING ‘RELEVANCE’

1. Not all similarities are relevant to drawing analogical inferences. Car–acceleration inference: similar models are relevant; similar colours are not. Why? Because we assume that same model (but not same colour) entails same underlying props, and these underlying props determine acceleration props. Why is similarity of behaviour-given-circs and bio-make-up relevant to [AA]? Because this similarity is assumed to entail some underlying similarity which determines mindedness. But this means [AA] is question-begging.

2. Is this unfair? ([AA] isn’t trying to render OM-judgments certain; just to make them probable/more probable than their negations.) No! How certain an analogical conclusion is depends on how similar things are. The current objection is that we can’t state which similarity is relevant to [AA] without begging the question; that my knowledge of the relevance of a given similarity presupposes knowledge of minds other than mine:

Any argument to other minds is an attempt to justify the belief that I am not unique. [… T]here must not be any relevant difference between the case or cases which form the evidential base and the case or cases inferred to in the conclusion, but with every version of the analogical inference to other minds there is a difference: the evidential base is about my case, my instances of pain accompanying pain behaviour, etc., while the conclusion is about cases which are not mine. How, as is required for a satisfactory analogical inference, can I rule that such a difference is not a relevant one, that is, that it is reasonable to believe that I am not unique, without begging that question. For the argument in question is being used to justify this very belief, namely that I am not unique. [ vii ]


G. (P1), (P2) AND THE SCOPE OF [AA]

1. (Modulo previous considerations) It’s not clear that [AA] can justify OM-judgments of propositional attitudes – these seem to need attributing holistically, or, if attributed singly, then only by presupposing certain other PAs. If [AA] is supposed to be the only route to justified OM-judgments, it looks like we can’t get started attributing PAs.

READING FOR NEXT WEEK (THEORY THEORY)
Crane, T. 2003. The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation. New York: Routledge. Ch. 2: 54-80
Pargetter, R. 1984. ‘The Scientific Inference to Other Minds’ in Australasian Journal of Philosophy Vol. 62(2): 158-163
Lewis, D. 1972. ‘Psychophysical and Theoretical Identifications’ in Australasian Journal of Philosophy Vol. 50(3): 249-258

i Paul Bartha, “Analogy and Analogical Reasoning,” Stanford Encyclopedia of Philosophy, June 25, 2013, http://plato.stanford.edu/entries/reasoning-analogy/.
ii John Stuart Mill, An Examination of Sir William Hamilton’s Philosophy, 6th ed. (London: Longmans, 1889), 243–4; A. J. Ayer, “One’s Knowledge of Other Minds,” in Philosophical Essays (London: The Macmillan Press, 1954), 191–214; Frank Jackson and Alec Hyslop, “The Analogical Inference to Other Minds,” American Philosophical Quarterly 9, no. 2 (1972): 168–176; Anita Avramides, “Other Minds,” in Oxford Handbook of Philosophy of Mind, ed. B. McLaughlin, A. Beckerman, and S. Walter (New York: Oxford University Press, 2009), 731; Alec Hyslop, “Other Minds,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta, Fall 2010, http://plato.stanford.edu/archives/fall2010/entries/other-minds/.
iii Gilbert Ryle, The Concept of Mind (London: Penguin Books, 2000), 15–17; Jackson and Hyslop, “The Analogical Inference to Other Minds,” 168–174; Robert Pargetter, “The Scientific Inference to Other Minds,” Australasian Journal of Philosophy 62, no. 2 (1984): 160; Ayer, “One’s Knowledge of Other Minds,” 201–214; Stuart Hampshire, “The Analogy of Feeling,” Mind LXI, no. 241 (1952): 1–12; Norman Malcolm, “Knowledge of Other Minds,” in The Philosophy of Mind, ed. V. C. Chappell (New York: Dover Publications Ltd., 1981); A. Melnyk, “Inference to the Best Explanation and Other Minds,” Australasian Journal of Philosophy 72 (1994): 482–91; Hyslop, “Other Minds”; Alec Hyslop, “Other Minds as Theoretical Entities,” Australasian Journal of Philosophy 54, no. 2 (1976).
iv Norman Malcolm, Problems of Mind: Descartes to Wittgenstein, Essays in Philosophy (UK: George Allen & Unwin, 1972), 19–23.
v “Smile,” Wikipedia, n.d.
vi Charles Darwin, The Expression of the Emotions in Man and Animals (200th Anniversary Edition) (London: Harper Perennial, 2009); see also the work of Paul Ekman for more contemporary work on this topic, e.g. his Emotions Revealed: Understanding Faces and Feelings (2003).
vii Pargetter, “The Scientific Inference to Other Minds,” 160.

