
    Epistemic Paradoxes

    First published Wed Jun 21, 2006; substantive revision Fri Dec 30, 2011

    Epistemic paradoxes are riddles that turn on the concept of knowledge (episteme is Greek for

    knowledge). Typically, there are conflicting, well-credentialed answers to these questions (or

    pseudo-questions). Thus the riddle immediately informs us of an inconsistency. In the long run,the riddle goads and guides us into correcting at least one deep error if not directly about

    knowledge, then about its kindred concepts such as justification, rational belief, and evidence.

    Such corrections are of interest to epistemologists. Historians can date the origin of

    epistemology by the appearance of skeptics. As manifest in Plato's dialogues featuring

    Socrates, epistemic paradoxes have been discussed for twenty five hundred years. Given their

    hardiness, some of these riddles about knowledge will be discussed for the next twenty five

    hundred years.

    1. The Surprise Test Paradox

    1.1 Self-defeating prophecies and pragmatic paradoxes

    1.2 Predictive determinism

    1.3 The Problem of Foreknowledge

    2. Intellectual suicide

    3. Lotteries and the Lottery Paradox

    4. Preface Paradox

    5. Anti-expertise

    5.1 The Knower Paradox

    5.2 The Knowability Paradox

    5.3 Moore's problem

    5.4 Blindspots

    6. Dynamic Epistemic Paradoxes

    6.1 Meno's Paradox of Inquiry: A puzzle about gaining knowledge

    6.2 Dogmatism paradox: A puzzle about losing knowledge

    6.3 The Future of Epistemic Paradoxes

    Bibliography

    Academic Tools

    Other Internet Resources

    Related Entries

    --------------------------------------------------------------------------------

    1. The Surprise Test Paradox

    A teacher announces that there will be a surprise test next week. A student objects that this is

    impossible: "The class meets on Monday, Wednesday, and Friday. If the test is given on Friday,

    then on Thursday I would be able to predict that the test is on Friday. It would not be a

    surprise. Can the test be given on Wednesday? No, because on Tuesday I would know that the

    test will not be on Friday (thanks to the previous reasoning) and know that the test was not on

    Monday (thanks to memory). Therefore, on Tuesday I could foresee that the test will be on

    Wednesday. A test on Wednesday would not be a surprise. Could the surprise test be on

    Monday? On Sunday, the previous two eliminations would be available to me. Consequently, I

    would know that the test must be on Monday. So a Monday test would also fail to be a

    surprise. Therefore, it is impossible for there to be a surprise test."
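
    The student's objection is a backward elimination argument. The Python sketch below (names hypothetical; the three meeting days are taken as the only candidates, as in the announcement) simply traces how that elimination proceeds, without taking a stand on whether the reasoning is sound:

    ```python
    # A sketch of the student's backward-elimination argument.
    candidates = ["Monday", "Wednesday", "Friday"]

    # Working backwards from the last candidate: were the test left for that day,
    # every earlier day would already have passed test-free, so the student could
    # foresee the test and it would not be a surprise.
    while candidates:
        day = candidates.pop()  # latest surviving candidate
        print(f"{day} eliminated: a test on {day} could be foreseen the night before")

    print("Student's conclusion: no day can host a surprise test")
    ```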

    The riddle is: Can the teacher fulfill his announcement? We have an embarrassment of riches.

    On the one hand, we have the student's elimination argument. On the other hand, common

    sense says that surprise tests are possible even when we have had advance warning that one

    will occur at some point. Either of the answers would be decisive were it not for the

    credentials of the rival answer. Thus we have a paradox. But a paradox of what kind? Surprise

    test is being defined in terms of what can be known. Specifically, a test is a surprise if and only

    if the student cannot know beforehand which day the test will occur. Therefore the riddle of

    the surprise test qualifies as an epistemic paradox.

    Paradoxes are more than edifying surprises. Professor Statistics announces she will give

    random quizzes: "Class meets every day of the week. Each day I will open by rolling a die.

    When the roll yields a six, I will immediately give a quiz. Today, Monday, a six came up. So you

    are taking a quiz." The last question of her quiz is: "Which of the subsequent days is most likely

    to be the day of the next random test?" Most people answer that each of the subsequent days

    has the same probability of being the next quiz. But the correct answer is: Tomorrow

    (Tuesday).

    Uncontroversial facts about probability reveal the mistake and establish the correct answer.

    For the next test to be on Wednesday, there would have to be a conjunction of two events: no

    test on Tuesday (a 5/6 chance of that) and a test on Wednesday (a 1/6 chance). The probability

    for each subsequent day becomes less and less. (It would be very surprising if the next quiz day

    were a hundred days from now!) The question is not whether a six will be rolled on any given

    day, but when the next six will be rolled. Which day is the next one depends partly on what

    happens meanwhile, as well as depending partly on the roll of the die on that day.
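
    The arithmetic behind this answer is the geometric distribution. The sketch below (Python with exact fractions; a fair die and one roll per class day are assumed, as in the announcement) shows the strictly declining probabilities:

    ```python
    from fractions import Fraction

    # Probability that the NEXT quiz falls exactly k class days from today:
    # no six on each of the k - 1 intervening days, then a six on day k.
    def prob_next_quiz(k: int) -> Fraction:
        return Fraction(5, 6) ** (k - 1) * Fraction(1, 6)

    for k in range(1, 6):
        print(f"day +{k}: {prob_next_quiz(k)} (about {float(prob_next_quiz(k)):.3f})")
    # day +1: 1/6 (about 0.167); day +2: 5/36 (about 0.139); strictly decreasing,
    # so tomorrow is the single most likely day for the next quiz.
    ```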

    This riddle is instructive. But the existence of a quick, decisive solution shows that only a mild

    revision of our prior beliefs was needed. In contrast, when our deep beliefs conflict, proposed

    amendments reverberate unpredictably. "Problems worthy of attack prove their worth by

    fighting back" (Paul Erdős).

    The solution to a complex epistemic paradox relies on solutions (or partial solutions) to more

    fundamental epistemic paradoxes. For instance, many approach the surprise test as a nested

    sequence of puzzles: Inside the enigma of the surprise test is the preface paradox; inside the

    preface paradox is Moore's paradox. In addition to this depth-wise connection, there are

    lateral connections to other epistemic paradoxes such as the knower paradox and the problem

    of foreknowledge.

    There are also ties to issues that are not clearly paradoxes or to issues whose status as

    paradoxes is at least contested. Some philosophers find only irony in pragmatic paradoxes,

    only cognitive illusion in the lottery paradox, only an embarrassment in the knowability

    paradox. Calling a problem a paradox tends to quarantine it from the rest of our inquiries.

    Those who wish to dis-inhibit us will therefore deny that there is any paradox and admonish us

    for not making use of all our evidence.

    The surprise test paradox has yet more oblique connections to some paradoxes that are not

    epistemic, such as the liar paradox and Pseudo-Scotus' paradoxes of validity. They will be

    mentioned in passing, chiefly to set boundaries.

    We can look forward to future philosophers drawing surprising historical connections. The

    backward elimination argument underlying the surprise test paradox can be discerned in

    German folktales dating back to 1756 (Sorensen 2003a, 267). Perhaps medieval scholars

    explored these slippery slopes. But let me turn to commentary to which we presently have

    access.

    1.1 Self-defeating prophecies and pragmatic paradoxes

    In the twentieth century, the first published reaction to the surprise test paradox was to

    endorse the student's elimination argument. D. J. O'Connor (1948) regarded the teacher's

    announcement as self-defeating. If the teacher had not announced that there would be a

    surprise test, the teacher would have been able to give the surprise test. The pedagogical

    moral of the paradox would then be that if you want to give a surprise test do not announce

    your intention to your students!

    More precisely, O'Connor compared the teacher's announcement to sentences such as "I

    remember nothing at all" and "I am not speaking now". Although these sentences are

    consistent, they "could not conceivably be true in any circumstances" (O'Connor 1948, 358). L.

    Jonathan Cohen (1950) agreed and classified the announcement as a pragmatic paradox. He

    defined a pragmatic paradox to be a statement that is falsified by its own utterance. The

    teacher overlooked how the manner in which a statement is disseminated can doom it to

    falsehood.

    Cohen's classification is too monolithic. True, the teacher's announcement does compromise

    one aspect of the surprise: Students now know that there will be a test. But this compromise is

    not itself enough to make the announcement self-falsifying. The existence of a surprise test

    has been revealed but there is surviving uncertainty as to which day the test will occur. The

    announcement of a forthcoming surprise aims at changing uninformed ignorance into action-

    guiding awareness of ignorance. A student who misses the announcement does not realize

    that there is a test. If no one passes on the intelligence about the surprise test, the student

    with simple ignorance will be less prepared than classmates who know they do not know the

    day of the test.

    Announcements are made to serve different goals simultaneously. Competition between

    accuracy and helpfulness makes it possible for an announcement to be self-fulfilling by being

    self-defeating. Consider a weatherman who warns "The midnight tsunami will cause fatalities

    along the shore." Because of the warning, spectacle-seekers make a special trip to witness the

    wave. Some drown. The weatherman's announcement succeeds as a prediction by backfiring

    as a warning.

    1.2 Predictive determinism

    Instead of viewing self-defeating predictions as showing how the teacher is refuted, some

    philosophers construe self-defeating predictions as showing how the student is refuted. The

    student's elimination argument embodies hypothetical predictions about which day the

    teacher will give a test. Isn't the student overlooking the teacher's ability and desire to thwart

    those expectations? Some game theorists suggest that the teacher could defeat this strategy

    by choosing the test date at random.

    As Professor Statistics taught us, students can be kept uncertain if the teacher is willing to be

    faithfully random. She will need to prepare a quiz each day. She will need to brace for the

    possibility that she will give too many quizzes or too few or have an unrepresentative

    distribution of quizzes.

    If the instructor finds these costs onerous, then she may be tempted by an alternative: at the

    beginning of the week, randomly select a single day. Keep the identity of that day secret. Since

    the student will only know that the quiz is on some day or other, pupils will not be able to

    predict the day of the quiz.

    Unfortunately, this plan is risky. If, through the chance process, the last day happens to be

    selected, then abiding by the outcome means giving an unsurprising test. For as in the original

    scenario, the student has knowledge of the teacher's announcement and awareness of past

    testless days. So the teacher must exclude random selection of the last day. The student is

    astute. He will replicate this reasoning that excludes a test on the last day. Can the teacher

    abide by the random selection of the next to last day? Now the reasoning becomes all too

    familiar.

    Another critique of the student's replication of the teacher's reasoning adapts a thought

    experiment from Michael Scriven (1964). To refute predictive determinism (the thesis that all

    events are foreseeable), Scriven conjures an agent Predictor who has all the data, laws, and

    calculating capacity needed to predict the choices of others. Scriven goes on to imagine Avoider, whose dominant motivation is to avoid prediction. Therefore, Predictor must

    conceal his prediction. The catch is that Avoider has access to the same data, laws, and

    calculating capacity as Predictor. Thus he can duplicate Predictor's reasoning. Consequently,

    the optimal predictor cannot predict Avoider. Let the teacher be Avoider and the student be

    Predictor. Avoider must win. Therefore, it is possible to give a surprise test.

    Scriven's original argument assumes that Predictor and Avoider can simultaneously have all

    the needed data, laws, and calculating capacity. David Lewis and Jane Richardson object:

    the amount of calculation required to let the predictor finish his prediction depends on the

    amount of calculation done by the avoider, and the amount required to let the avoider finish

    duplicating the predictor's calculation depends on the amount done by the predictor. Scriven

    takes for granted that the requirement-functions are compatible: i.e., that there is some pair

    of amounts of calculation available to the predictor and the avoider such that each has enough

    to finish, given the amount the other has. (Lewis and Richardson 1966, 70–71)

    According to Lewis and Richardson, Scriven equivocates on "Both Predictor and Avoider have

    enough time to finish their calculations". Reading the sentence one way yields a truth: against

    any given avoider, Predictor can finish and against any given predictor, Avoider can finish.

    However, the compatibility premise requires the false reading in which Predictor and Avoider

    can finish against each other.

    Idealizing the teacher and student along the lines of Avoider and Predictor would fail to defeat

    the student's elimination argument. We would have merely formulated a riddle that falsely

    presupposes that the two types of agent are co-possible. It would be like asking "If Bill is

    smarter than anyone else and Hillary is smarter than anyone else, which of the two is the

    smartest?"

    Predictive determinism states that everything is foreseeable. Metaphysical determinism states

    that there is only one way the future could be, given the way the past is. Pierre-Simon Laplace used

    metaphysical determinism as a premise for predictive determinism. He reasoned that since

    every event has a cause, a complete description of any stage of history combined with the laws

    of nature implies what happens at any other stage of the universe. Scriven was only

    challenging predictive determinism in his thought experiment. The next approach challenges

    metaphysical determinism.

    1.3 The Problem of Foreknowledge

    Prior knowledge of an action seems incompatible with it being a free action. If I know that you

    will finish reading this article tomorrow, then you will finish tomorrow (because knowledge implies truth). But that means you will finish the article even if you resolve not to. After all,

    given that you will finish, nothing can stop you from finishing. So if I know that you will finish

    reading this article tomorrow, you are not free to do otherwise.

    Maybe all of your reading is compulsory. If God exists, then he knows everything. So the threat

    to freedom becomes total for the theist. The problem of divine foreknowledge insinuates that

    theism precludes morality.

    In response to the apparent conflict between freedom and foreknowledge, medieval

    philosophers denied that future contingent propositions have a truth-value. They took

    themselves to be extending a solution Aristotle discusses in De Interpretatione to the problem

    of logical fatalism. According to this truth-value gap approach, "You will finish this article

    tomorrow" is not true now. The prediction will become true tomorrow. God's omniscience only

    requires that He knows every true proposition. God will know "You will finish this article

    tomorrow" as soon as it becomes true but not before.

    The teacher has free will. Therefore, predictions about what he will do are not true (prior to the

    examination). Accordingly, Paul Weiss (1952) concludes that the student's argument falsely

    assumes he knows that the announcement is true. The student can know that the

    announcement is true after it becomes true but not before.

    W. V. Quine (1953) agrees with Weiss' conclusion that the teacher's announcement of a

    surprise test fails to give the student knowledge that there will be a surprise test. Yet Quine

    abominates Weiss' reasoning. Weiss breaches the law of bivalence (which states that every

    proposition has a truth-value, true or false). Quine believes that the riddle of the surprise test

    should not be answered by surrendering classical logic.

    2. Intellectual suicide

    W. V. Quine insists that the student's elimination argument is only a reductio ad absurdum of

    the supposition that the student knows that the announcement is true (rather than a reductio

    of the announcement itself). He accepts this reductio. Given the student's ignorance of the

    announcement, Quine concludes that a test on any day would be unforeseen.

    Common sense suggests that the students are informed by the announcement. The teacher is

    assuming that the announcement will enlighten the students. He seems right to assume that

    the announcement of this intention produces the same sort of knowledge as his other

    declarations of intentions (about which topics will be selected for lecture, the grading scale, how long he will be absent for minor surgery, and so on).

    There are extreme, philosophical premises that could yield Quine's conclusion that the

    students do not know the announcement is true. If no one can know anything about the

    future, as suggested by David Hume's problem of induction, then the student cannot know

    that the teacher's announcement is true. But denying all knowledge of the future in order to

    deny the student's knowledge is like using a cannon to kill a fly.

    In later writings, Quine evinces general reservations about the concept of knowledge. One of

    his pet objections is that 'know' is vague. If knowledge entails absolute certainty, then too little

    will count as known. Quine infers that we must equate knowledge with firmly held true belief.

    Asking just how firm the belief must be is like asking just how big something has to be to count

    as being big. There is no answer to the question because 'big' lacks the sort of boundary

    enjoyed by precise words.

    There is no place in science for bigness, because of this lack of boundary; but there is a place

    for the relation of biggerness. Here we see the familiar and widely applicable rectification of

    vagueness: disclaim the vague positive and cleave to the precise comparative. But it is

    inapplicable to the verb 'know', even grammatically. Verbs have no comparative and

    superlative inflections…. I think that for scientific or philosophical purposes the best we can

    do is give up the notion of knowledge as a bad job and make do rather with its separate

    ingredients. We can still speak of a belief as true, and of one belief as firmer or more certain,

    to the believer's mind, than another (1987, 109).

    Quine is alluding to Rudolf Carnap's (1950) generalization that scientists replace qualitative

    terms (tall) with comparatives (taller than) and then replace the comparatives with

    quantitative terms (being n millimeters in height).

    It is true that some borderline cases of a qualitative term are not borderline cases for the

    corresponding comparative. But the reverse holds as well. A big man who stoops may stand

    less high than another big man who is not as lengthy. Both men are clearly big. It is unclear that the lengthier man is bigger. Qualitative terms can be applied when a vague quota is

    satisfied without the need to sort out the details. Only comparative terms are bedeviled by tie-

    breaking issues.

    Science is about what is the case rather than what ought to be the case. This seems to imply that

    science does not tell us what we ought to believe. The traditional way to fill the normative gap

    is to delegate issues of justification to epistemologists. However, Quine is uncomfortable with

    delegating such authority to philosophers. He prefers the thesis that psychology is enough to

    handle the issues traditionally addressed by epistemologists (or at least the issues still worth addressing in an Age of Science). This naturalistic epistemology seems to imply that 'know'

    and 'justified' are antiquated terms as empty as 'phlogiston' or 'soul'.

    Those willing to abandon the concept of knowledge can dissolve the surprise test paradox. But

    to epistemologists, this is like using a suicide bomb to kill a fly.

    Our suicide bomber may protest that the flies have been undercounted. Epistemic

    eliminativism dissolves all epistemic paradoxes. According to the eliminativist, epistemic paradoxes are symptoms of a problem with the very concept of knowledge.

    Notice that the eliminativist is more radical than the skeptic. The skeptic thinks the concept of

    knowledge is fine. We just fall short of being knowers. The skeptic treats "No man is a knower"

    like "No man is an immortal". There is nothing wrong with the concept of immortality. Biology

    just winds up guaranteeing that every man falls short of being immortal.

    Unlike the believer in "No man is an immortal", the skeptic has trouble asserting "There is no

    knowledge". For assertion expresses the belief that one knows. That is why Sextus Empiricus

    (Outlines of Pyrrhonism, I, 3, 226) condemns the assertion "There is no knowledge" as

    dogmatic skepticism. Sextus often seems to prefer agnosticism about knowledge rather than

    skepticism (considered as atheism about knowledge). Yet it also seems inconsistent to assert

    "No one can know whether anything is known". For that conveys the belief that one knows that

    no one can know whether anything is known.

    Agnostics overestimate how easy it is to identify what cannot be known. To know, one need

    only find a single proof. To know that there is no way to know, one must prove the negative

    generalization that there is no proof. After all, inability to imagine a proof is commonly due to

    a failure of ingenuity rather than the non-existence of a proof. In addition to being a more

    general proposition, a proof of unknowability requires epistemological premises about what

    constitutes proof. Consequently, meta-proof is even more demanding than proof.

    The agnostic might be tempted to avoid presumptuousness by converting to meta-

    agnosticism. But this retreats in the wrong direction. Meta-meta-proof is even more

    demanding than meta-proof. Meta-meta-proof needs both the epistemological premises about

    what constitutes proof that meta-proof needs and, in addition, meta-meta-proof needs

    epistemological premises about what constitutes meta-proof.

    The eliminativist has even more severe difficulties in stating his position than the skeptic. Some eliminativists dismiss the threat of self-defeat by drawing an analogy. Those who denied

    the existence of souls used to be accused of undermining a necessary condition for asserting

    anything. However, the soul theorist's account of what is needed to make an assertion begs

    the question against those who believe that a healthy brain is enough for mental states.

    If the eliminativist thinks that assertion only imposes the aim of expressing a truth, then he can

    consistently assert that 'know' is a defective term. However, an epistemologist can revive the

    charge of self-defeat by showing that assertion does indeed require the speaker to attribute

    knowledge to himself. This knowledge-based account of assertion has recently been supported by a paradox that originated among philosophers of science rather than philosophers of

    language.

    3. Lotteries and the Lottery Paradox

    Lotteries pose a problem for the theory that we can assert whatever we think is true. Given

    that there are a million tickets and only one winner, the probability of "This ticket is a losing

    ticket" is very high. If our aim were merely to utter truths, we should be willing to assert the

    proposition. Yet we are reluctant.

    What is missing? Speakers will assert the proposition after seeing the result of the lottery

    drawing or hearing about the winning ticket from a newscaster or remembering what the

    winning ticket was. This suggests that asserters represent themselves as knowing. This in turn

    suggests that there is a rule, or norm, governing the practice of making assertions that requires

    us to assert only something we know. This knowledge norm explains why the hearer can

    appropriately ask "How do you know?" (Williamson 2000, 249–255). Perception, testimony,

    and memory are reliable processes that furnish answers to this challenge.

    Do these processes furnish certainty? When pressed, we admit there is a small chance that we

    misperceived the drawing or that the newscaster misread the winning number or that we are

    misremembering. While in this conciliatory mood, we are apt to relinquish our claim to know.

    The skeptic generalizes from this surrender (Hawthorne 2004). For any contingent proposition,

    there is a lottery statement that is more probable and which is unknown. A known proposition

    cannot be less probable than an unknown proposition. So no contingent proposition is known.

    Notice that the probability skeptic's mild suggestions about how we might be mistaken are not

    the extraordinary possibilities invoked by René Descartes' skeptic. The Cartesian skeptic tries

    to undermine vast swaths of knowledge with a single counter-explanation of the evidence

    (such as the hypothesis that you are dreaming or the hypothesis that an evil demon is

    deceiving you). These comprehensive alternatives are designed to evade any empirical

    refutation. The probabilistic skeptic, in contrast, points to pedestrian counter-explanations

    that are easy to verify: maybe you transposed the digits of a phone number, maybe the ticket

    agent thought you wanted to fly to Moscow, Russia rather than Moscow, Idaho, etc. You can

    check for errors, but any check itself has a small chance of being wrong. So there is always

    something to check, given that the issues cannot be ignored on grounds of improbability.

    You can check any of these possible errors but you cannot check them all. You cannot discount

    these pedestrian possibilities as science fiction. These are exactly the sorts of possibilities we

    check when something goes wrong. For instance, you think you know that you have an

    appointment to meet a prospective employer for lunch at noon. When he fails to show at the

    expected time, you begin a forced march backwards through your premises: Is your watch

    slow? Are you remembering the right restaurant? Could there be another restaurant in the city

    with same name? Is he just detained? Could he have just forgotten? Could there have been a

    miscommunication?

    Probabilistic skepticism dates back to Arcesilaus, who took over the Academy two generations

    after Plato's death. This moderate kind of skepticism, recounted by Cicero (Academica 2.74,

    1.46) from his days as a student at the Academy, allows for justified belief. Many scientists are

    attracted to probabilism and dismiss the epistemologist's preoccupation with knowledge as

    old-fashioned.

    Despite the early start of the qualitative theory of probability, the quantitative theory did not

    develop until Blaise Pascal's study of gambling in the seventeenth century (Hacking 1975). Only

    in the eighteenth century did it penetrate the insurance industry (despite the fortune to be

    made by accurately calculating risk that should have been obvious to those in the business of

    insuring against risk). Only in the nineteenth century did probability make a mark in physics.

    And only in the twentieth century did probabilists make important advances over Arcesilaus.

    Most of these philosophical advances are reactions to the use of probability by scientists. In

    the twentieth century, editors of science journals began to demand that the author's

    hypothesis should be accepted only when it was sufficiently probable as measured by

    statistical tests. The threshold for acceptance was acknowledged to be somewhat arbitrary.

    And it was also acknowledged that the acceptance rule might vary with one's purposes. For

    instance, we demand a higher probability when the cost of accepting a false hypothesis is high.

    In 1961 Henry Kyburg pointed out that this policy conflicted with a principle of doxastic logic

    (the logic of belief). Logicians thought that rational belief agglomerates: if you rationally believe p and rationally believe q, then you rationally believe both p and q. Little pictures

    should sum to a big picture. These logicians also required that rational belief be consistent. But

    if rational belief can be based on an acceptance rule that only requires a high probability, there

    will be rational belief in a contradiction! Suppose the acceptance rule permits belief in any

    proposition that has a probability of at least .99. Given a lottery with 100 tickets and exactly

    one winner, the probability of "Ticket n is a loser" licenses belief. Symbolize propositions about

    ticket n being a loser as pn. Symbolize "I rationally believe" as B. Belief in a contradiction

    follows:

    1. B~(p1 & p2 & … & p100), by the probabilistic acceptance rule.

    2. Bp1 & Bp2 & … & Bp100, by the probabilistic acceptance rule.

    3. B(p1 & p2 & … & p100), from (2) and the principle that rational belief agglomerates.

    4. B[(p1 & p2 & … & p100) & ~(p1 & p2 & … & p100)], from (1) and (3) by the principle that rational belief agglomerates.
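
    A quick numerical check makes the conflict vivid. The sketch below (Python; it assumes the 100-ticket lottery and the .99 threshold just described) shows that each individual belief clears the acceptance rule while their conjunction has probability zero:

    ```python
    from fractions import Fraction

    tickets = 100
    threshold = Fraction(99, 100)          # cutoff of the probabilistic acceptance rule

    # For each n, P("ticket n is a loser") = 99/100, which meets the threshold,
    # so the rule licenses believing every one of p1, ..., p100.
    p_single_loser = Fraction(tickets - 1, tickets)
    print(p_single_loser >= threshold)     # True

    # P(p1 & p2 & ... & p100) = 0, since exactly one ticket wins; its negation has
    # probability 1, so the rule licenses premise (1) as well.  Agglomerating the
    # hundred individually acceptable beliefs therefore yields belief in a
    # proposition of probability 0 alongside belief in its negation.
    p_all_lose = Fraction(0, 1)
    print(p_all_lose, 1 - p_all_lose)      # 0 1
    ```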

    Since belief in an obvious contradiction is a paradigm example of irrationality, Kyburg poses a

    dilemma: either reject agglomeration or reject probabilistic acceptance rules. Kyburg chooses

    to reject agglomeration. He promotes toleration of joint inconsistency (having beliefs that

    cannot all be true together) to avoid belief in contradictions. Reason forbids us from believing

    a proposition that is necessarily false but permits us to have a set of beliefs that necessarily

    contains a falsehood. Henry Kyburg's choice was soon supported by the discovery of a

    companion paradox.

    4. Preface Paradox

    In D. C. Makinson's (1965) preface paradox, an author rationally believes each of the assertions in his book. But since the author regards himself as fallible, he rationally believes the

    conjunction of all his assertions is false. If the agglomeration principle holds, (Bp & Bq) → B(p

    & q), the author must both rationally believe and disbelieve the conjunction of all the

    assertions in his book!

    The preface paradox does not rely on a probabilistic acceptance rule. The preface belief is

    generated in a qualitative fashion. The author is merely reflecting on his similarity to other

    authors who are fallible, his own past failings that he subsequently discovered, his imperfection

    in fact checking, and so on.

    At this point many philosophers join Kyburg in rejecting agglomeration and conclude that it

    can be rational to have jointly inconsistent beliefs. Kyburg's solution to the preface paradox

    raises a methodological question about the nature of paradox. How can paradoxes change our

    minds if joint inconsistency is permitted?

    A paradox is commonly defined as a set of propositions that are individually plausible but

    jointly inconsistent. Paradoxes force us to change our minds in a highly structured way. For instance, much epistemology responds to a riddle posed by the regress of justification, namely,

    which of the following is false?

    1. A belief can only be justified by another justified belief.

    2. There are no circular chains of justification.

    3. All justificatory chains have a finite length.

    4. Some beliefs are justified.

    Foundationalists reject (1). They take some propositions to be self-evident. Coherentists reject

    (2). They tolerate some forms of circular reasoning. For instance, Nelson Goodman (1965) has

    characterized the method of reflective equilibrium as virtuously circular. Charles Peirce (1933–35, 5.250) rejected (3), an approach later refined by Peter Klein (2007) and most recently

    defended at book-length by Scott F. Aikin (2011). Infinitists believe that infinitely long chains of

    justification are no more impossible than infinitely long chains of causation. Finally, the

    epistemological anarchist rejects (4). As Paul Feyerabend refrains in Against Method,

    "Anything goes" (1988, vii, 5, 14, 19, 159).

    Very elegant! But if joint inconsistency is rationally tolerable, why do these philosophers

    bother to offer solutions? Why is it not rational to believe each of (1)–(4), in spite of their joint

    inconsistency?

    Kyburg might answer that there is a scale effect. Although the dull pressure of joint

    inconsistency is tolerable when diffusely distributed over a large set of propositions, the pain

    of contradiction becomes unbearable as the set gets smaller (Knight 2002). And indeed,

    paradoxes are always represented as a small set of propositions.

    If you know that your beliefs are jointly inconsistent, then you should reject R. M. Sainsbury's

    definition of a paradox as "an apparently unacceptable conclusion derived by apparently acceptable reasoning from apparently acceptable premises" (1995, 1). Take the negation of

    any of your beliefs as a conclusion and your remaining beliefs as the premises. You should

    judge this jumble argument as valid, and as having premises that you accept, and yet as having

    a conclusion you reject (Sorensen 2003b, 104–110). If the conclusion of this argument counts

    as a paradox, then by a similar argument the negation of any of your beliefs counts as a

    paradox.

    The resemblance between the preface paradox and the surprise test paradox becomes more

    visible through an intermediate case. The preface of Siddhartha Mukherjee's The Emperor of

    All Maladies: A Biography of Cancer contains a warning: "In cases where there was no prior

    public knowledge, or when interviewees requested privacy, I have used a false name, and

    deliberately confounded identities to make it difficult to track." Those who refuse consent to

    be lied to are free to close Doctor Mukherjee's chronicle. But nearly all readers think the

    physician's trade-off between lies and new information is acceptable. They rationally

    anticipate being rationally misled. Nevertheless, these readers learn much about the history of

    cancer. Similarly, students who are warned that they will receive a surprise test rationally

    expect to be rationally misled about the day of the test. The prospect of being misled does not

    lead them to drop the course.

    The preface paradox pressures Kyburg to extend his tolerance of joint inconsistency to the

    acceptance of contradictions (Sorensen 2001, 156–158). Consider a logic student who is required to pick one hundred truths from a mixed list of tautologies and contradictions.

    Although the modest student believes each of his answers, A1, A2, …, A100, he also believes

    that at least one of these answers is false. This ensures he believes a contradiction. If any of his

    answers is false, then the student believes a contradiction (because the only falsehoods on the

    question list are contradictions). If all of his test answers are true, then the student believes

    the following contradiction: ~(A1 & A2 & … & A100). After all, a conjunction of tautologies is

    itself a tautology and the negation of any tautology is a contradiction.

    If paradoxes were always sets of propositions or arguments or conclusions, then they would

    always be meaningful. But some paradoxes are semantically flawed (Sorensen 2003b, 352) and

    some have answers that are backed by a pseudo-argument employing a defective lemma

    that lacks a truth-value. Kurt Grelling's paradox, for instance, opens with a distinction between

    autological and heterological words. An autological word describes itself, e.g., 'polysyllabic' is

    polysyllabic, 'English' is English, 'noun' is a noun, etc. A heterological word does not describe

    itself, e.g., 'monosyllabic' is not monosyllabic, 'Chinese' is not Chinese, 'verb' is not a verb, etc.

    Now for the riddle: Is 'heterological' heterological or autological? If 'heterological' is

    heterological, then since it describes itself, it is autological. But if 'heterological' is autological,

    then since it is a word that does not describe itself, it is heterological. The common solution to

    this puzzle is that 'heterological', as defined by Grelling, is not a genuine predicate (Thomson

    1962). In other words, 'Is heterological heterological?' is without meaning. There can be no

    predicate that applies to all and only those predicates it does not apply to for the same reason

    that there can be no barber who shaves all and only those people who do not shave

    themselves.
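
    The diagnosis can be loosely illustrated by computation. In the toy Python sketch below (an analogy only, not a formal model of Grelling's semantics), a 'predicate' is modeled as a function from predicates to booleans, and asking whether heterological applies to itself never settles on an answer:

    ```python
    def heterological(pred) -> bool:
        """A predicate is heterological iff it does not apply to itself."""
        return not pred(pred)

    # Asking "is 'heterological' heterological?" amounts to evaluating
    # heterological(heterological), which only ever re-poses the same question.
    try:
        heterological(heterological)
    except RecursionError:
        print("no stable answer: the self-application never terminates")
    ```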

    The eliminativist, who thinks that 'know' or 'justified' is meaningless, will diagnose the

    epistemic paradoxes as questions that only appear to be well-formed. For instance, the

    eliminativist about justification would not accept proposition (4) in the regress paradox: Some

    beliefs are justified. His point would not be the anarchist theme that ostensible authorities fail

    to meet a minimal standard of legitimacy. The eliminativist unromantically diagnoses 'justified' as a pathological term; like 'heterological', declarative sentences that apply the word fail to

    express a proposition. Just as the astronomer ignores "Are there a zillion stars?" on the grounds

    that 'zillion' is not a genuine numeral, the eliminativist ignores "Are some beliefs justified?" on

    the grounds that 'justified' is not a genuine adjective.

    In the twentieth century, suspicions about conceptual pathology were strongest for the liar

    paradox: Is "This sentence is false" true? Philosophers who thought that there was something

    deeply defective with the surprise test paradox assimilated it to the liar paradox. Let us review

    the assimilation process.

    5. Anti-expertise

    In the surprise test paradox, the student's premises are self-defeating. Any reason the student

    has for predicting a test date or a non-test date is available to the teacher. Thus the teacher

    can simulate the student's forecast and know what the student is expecting.

    The student's overall conclusion, that the test is impossible, is also self-defeating. If the

    student believes his conclusion then he will not expect the test. So if he receives a test, it will

    be a surprise. The event will be all the more unexpected because the student has deluded

    himself into thinking the test is impossible.

    Just as someone's awareness of a prediction can affect the likelihood of it being true,

    awareness of that sensitivity to his awareness can also affect its truth. If each cycle of

    awareness is self-defeating, then there is no stable resting place for a conclusion.

    Suppose a psychologist offers you a red box and a blue box (Skyrms 1982). The psychologist

    can predict which box you will choose with 90% accuracy. He has put one dollar in the box he

    predicts you will choose and ten dollars in the other box. Should you choose the red box or the

    blue box? You cannot decide. For any choice becomes a reason to reverse your decision.
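
    A rough expected-value calculation shows why no choice is stable. The sketch below (Python; it assumes the stated 90% accuracy applies to whichever box I finally settle on, plus the $1/$10 payoffs) makes the flip-flop explicit:

    ```python
    from fractions import Fraction

    ACCURACY = Fraction(9, 10)  # the psychologist's stated predictive accuracy

    def expected_values(settled_choice: str) -> dict:
        """Expected payoff of each box, given that I settle on settled_choice."""
        other = "blue" if settled_choice == "red" else "red"
        # Most likely the psychologist foresaw my settled choice and put $1 there,
        # leaving $10 in the other box.
        return {
            settled_choice: ACCURACY * 1 + (1 - ACCURACY) * 10,  # 19/10 dollars
            other: ACCURACY * 10 + (1 - ACCURACY) * 1,           # 91/10 dollars
        }

    print(expected_values("red"))   # red: 19/10, blue: 91/10
    print(expected_values("blue"))  # blue: 19/10, red: 91/10
    # Whichever box I lean toward, the other box looks better, so the leaning reverses.
    ```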

    Epistemic paradoxes affect decision theory because rational choices are based on beliefs and

    desires. If the agent cannot form a rational belief, it is difficult to interpret his behavior as a

    choice. You cannot rationally choose an option that you believe to be inferior. So if you make a

    choice, then you cannot really believe that you were doing so as an anti-expert, that is,

    someone whose opinions on a topic are reliably wrong (Egan and Elga 2005).

    The medieval philosopher John Buridan (Sophismata, Sophism 13) gave a starkly minimal

    example of such instability:

    (B) You do not believe this sentence.

    If you believe (B) it is false. If you do not believe (B) it is true. You are an anti-expert about (B);

    your opinion is reliably wrong. An outsider who monitors your opinion can reckon whether (B)

    is true. But you are not able to exploit your anti-expertise.

    5.1 The Knower Paradox

    David Kaplan and Richard Montague (1960) think the announcement by the teacher in our

    surprise exam example is equivalent to the self-referential

    (K-3) Either the test is on Monday but you do not know it before Monday, or the test is on

    Wednesday but you do not know it before Wednesday, or the test is on Friday but you do not

    know it before Friday, or this announcement is known to be false.

    Kaplan and Montague note that the number of alternative tests can be increased indefinitely.

    Shockingly, they claim the number of alternatives can be reduced to zero! The announcement is then equivalent to

    (K-0) This sentence is known to be false.

    If (K-0) is true then it is known to be false. Whatever is known to be false, is false. Since no

    proposition can be both true and false, we have proven that (K-0) is false. Given that proof

    produces knowledge, (K-0) is known to be false. But wait! That is exactly what (K-0) says so

    (K-0) must be true.

    The (K-0) argument stinks of the liar paradox. Subsequent commentators sloppily switch the

    negation sign in the formal presentations of the reasoning from K~p to ~Kp (that is, from 'It is

    known that not-p' to 'It is not the case that it is known that p'). Ironically, this garbled

    transmission results in a cleaner variation of the knower:

    (K) No one knows this very sentence.

    Is (K) true? On the one hand, if (K) is true, then what it says is true, so no one knows it. On the

    other hand, that very reasoning seems to be a proof of (K). Proving a proposition is sufficient for knowledge of it, so someone must know (K). But then (K) is false! Since no one can know a

    proposition that is false, (K) is not known.

    The skeptic could hope to solve (K-0) by denying that anything is known. This remedy does not

    cure (K). If nothing is known then (K) is true. Can the skeptic instead challenge the premise that

    proving a proposition is sufficient for knowing it? This solution would be particularly

    embarrassing to the skeptic. The skeptic presents himself as a stickler for proof. If it turns out

    that even proof will not sway him, he looks more like the dogmatist he so frequently chides.

    But the skeptic should not lose his nerve. A student taking a logic examination can be surprised

    that he soundly deduced a theorem. The student did not know the conclusion because it seemed implausible and he was only guessing that a key inference rule was valid. His instructor

    might have trouble getting the student to understand why his answer constitutes a valid proof

    (rather than merely a desperate bid for partial credit).

    The logical myth that "You cannot prove a universal negative" is itself a universal negative. So

    it implies its own unprovability. This implication of unprovability is correct but only because

    the principle is false. For instance, exhaustive inspection proves the universal negative "No

    adverbs appear in this sentence". Reductio ad absurdum proves the universal negative "There

    is no largest prime number".

    Trivially, false propositions cannot be proved true. Are there any true propositions that cannot

    be proved true?

    Yes, there are infinitely many. Kurt Gödel's incompleteness theorem demonstrated that any

    system that is strong enough to express arithmetic is also strong enough to express a formal

    counterpart of the self-referential proposition in the surprise test example: "This statement

    cannot be proved in this system." If the system cannot prove its Gödel sentence, then this sentence is true. If the system can prove its Gödel sentence, the system is inconsistent. So

    either the system is incomplete or inconsistent. (See the entry on Kurt Gödel.)

    Of course, this result concerns provability relative to a system. One system can prove another

    system's Gödel sentence. Kurt Gödel (1983, 271) thought that mathematical intuition gave him

    knowledge that arithmetic is consistent. Human knowledge is not restricted to what human

    beings can prove.

    J. R. Lucas (1964) claims that this reveals human beings are not machines. A computer is a

    concrete instantiation of a formal system. Hence, its knowledge is restricted to what it can

    prove. By Gödel's theorem, the computer will be either inconsistent or incomplete. However,

    Lucas draws an invidious comparison: a human being with a full command of arithmetic can be

    consistent (even if he is actually inconsistent due to inattention or wishful thinking).

    Other philosophers defend the parity between people and computers. They think we have our

    own Gödel sentences (Lewis 1999, 166–173). In this egalitarian spirit, G. C. Nerlich (1961)

    models the student's beliefs in the surprise test example as a logical system. The teacher's

    announcement is then a Gödel sentence about the student: "There will be a test next week but

    you will not be able to prove which day it will occur on the basis of this announcement and

    memory of what has happened on previous exam days." When the number of exam days equals

    zero the announcement is equivalent to sentence K.

    Several commentators on the surprise test paradox object that interpreting surprise as

    unprovability changes the topic. Instead of posing the surprise test paradox, it poses a

    variation of the liar paradox. Other concepts can be blended with the liar. For instance, mixing

    in alethic notions generates the possible liar: Is "This statement is possibly false" true? (Post

    1970) (If it is false, then it is false that it is possibly false. What cannot possibly be false is

    necessarily true. But if it is necessarily true, then it cannot be possibly false.) Since the

    semantic concept of validity involves the notion of possibility, one can also derive validity liars

    such as Pseudo-Scotus' paradox: "Squares are squares, therefore, this argument is invalid"

    (Read 1979). If Pseudo-Scotus' argument is valid then, since its premise is true, its conclusion is

    true which means it is invalid. If Pseudo-Scotus' argument is invalid, it is possible for the

    premise to be true and conclusion false. But if an argument is invalid, it is necessarily invalid. A

    similar predicament follows from "The test is on Friday but this prediction cannot be soundly

    deduced from this announcement."

    One can mock up a complicated liar paradox that resembles the surprise test paradox. But this

    complex variant of the liar is not an epistemic paradox. For the paradoxes turn on the semantic

    concept of truth rather than an epistemic concept.

    5.2 The Knowability Paradox

    Frederic Fitch (1963) reports that in 1945 he first learned of this proof of unknowable truths

    from a referee report on a manuscript he never published. Thanks to Joe Salerno's (2009)

    archival research, we now know that the referee was Alonzo Church.

    Assume there is a true sentence of the form "p but p is not known". Although this sentence is consistent, modest principles of epistemic logic imply that sentences of this form are

    unknowable.

    1. K(p & ~Kp) (Assumption)

    2. Kp & K~Kp 1, Knowledge distributes over conjunction

    3. ~Kp 2, Knowledge implies truth (from the second conjunct)

    4. Kp & ~Kp 2, 3 by conjunction elimination of the first conjunct and then conjunction

    introduction

    5. ~K(p & ~Kp) 1, 4 Reductio ad absurdum

    Since all the assumptions are discharged, the conclusion is a necessary truth. So it is a

    necessary truth that p & ~Kp is not known. In other words, p & ~Kp is unknowable.
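
    The derivation can be restated compactly in standard modal notation (a summary of the steps above; K abbreviates 'it is known that', and the final line records the point that a proof from no undischarged assumptions yields a necessary truth):

    ```latex
    \begin{align*}
    1.\ & K(p \land \neg Kp)            && \text{assumption, for reductio} \\
    2.\ & Kp \land K\neg Kp             && \text{from 1: } K \text{ distributes over } \land \\
    3.\ & \neg Kp                       && \text{from 2: knowledge implies truth} \\
    4.\ & Kp \land \neg Kp              && \text{from 2 and 3} \\
    5.\ & \neg K(p \land \neg Kp)       && \text{from 1--4 by reductio} \\
    6.\ & \Box \neg K(p \land \neg Kp)  && \text{from 5: no undischarged assumptions, so necessitation applies}
    \end{align*}
    ```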

    The cautious will draw a conditional moral: If there are actual unknown truths, there are

    unknowable truths. After all, some philosophers will reject the antecedent because they

    believe there is an omniscient being.

    But many idealists and virtually all logical positivists and other secular verificationists concede

    that there are some actual unknown truths while also maintaining that all truths are knowable.

    Astonishingly, they seem refuted by this pinch of epistemic logic.

    Timothy Williamson doubts such astonishment is enough for the result to qualify as a paradox:

    The conclusion that there are unknowable truths is an affront to various philosophical

    theories, but not to common sense. If proponents (and opponents) of those theories long

    overlooked a simple counterexample, that is an embarrassment, not a paradox. (2000, 271)

    The polemical intent of denying that the result is a paradox is to remove an inhibition.

    Williamson does not want us to quarantine the theorem with such suspicious characters as the

    liar paradox.

    Those who believe that the Church-Fitch result is a paradox can respond to Williamson with

    examples of paradoxes that accord with common sense. For instance, common sense heartily

    agrees with the conclusion that something exists. But it is surprising that this can be proved without empirical premises. Since the quantifiers of standard logic (first order predicate logic

    with identity) have existential import, the logician can deduce that something exists from the

    principle that everything is identical to itself. Most philosophers balk at this simple proof

    because they feel that the existence of something cannot be proved by sheer logic. Likewise,

    many philosophers balk at the proof of unknowables because they feel that such a profound

    result cannot be obtained from such limited means.

    5.3 Moore's problem

    Church's referee report was composed in 1945. The timing and structure of his argument for

    unknowables suggests that Church may have been inspired by G. E. Moore's (1942, 543)

    sentence:

    (M) I went to the pictures last Tuesday, but I don't believe that I did.

    Moore's problem is to explain what is odd about declarative utterances such as (M). This

    explanation needs to encompass both readings of (M): p & B~p and p & ~Bp. (This scope

    ambiguity is behind my favorite joke about René Descartes: Descartes is sitting in a bar, having

    a drink. The bartender asks him if he would like another. "I think not," he says, and

    disappears.)

    The common explanation of Moore's absurdity is that the speaker has managed to contradict himself without uttering a contradiction. So the sentence is odd because it is a counterexample

    to the generalization that anyone who contradicts himself utters a contradiction.

    There is no problem in third person counterparts of (M). Anyone else can say about me, with

    no paradox, "Camels have three eyelids but Roy Sorensen does not believe it." (M) can also be

    embedded unparadoxically in conditionals: "If those membranes are eyelids, then camels have

    three eyelids but I do not believe it." The past tense is fine: "Camels have three eyelids but I

    did not believe it." The future tense, "Camels have three eyelids but I will not believe it", is a bit

    more of a stretch (Bovens 1995). We tend to picture our future selves as better informed. Later selves are, as it were, experts to whom earlier selves should defer. When an earlier self

    foresees that his later self believes p, then the prediction is a reason to believe p. Bas van

    Fraassen (1984, 244) dubs this the principle of reflection: I ought to believe a proposition

    given that I will believe it at some future time.

    Robert Binkley (1968) anticipates van Fraassen by applying the reflection principle to the

    surprise test paradox. The student can foresee that he will not believe the announcement if no

    test is given by Thursday. The conjunction of the history of testless days and the

    announcement will imply the Moorean sentence:

    (A) The test is on Friday but you do not believe it.

    Since the weaker element of the conjunction is the announcement, the student will not believe

    the announcement. At the beginning of the week, the student foresees that his future self may

    not believe the announcement. So the student on Sunday will not believe the announcement

    when it is first uttered.

    Binkley fortifies this reasoning with doxastic logic. The principle of this logic of belief can be

    understood as idealizing the student into an ideal reasoner. In general terms, an ideal reasoner

    is someone who infers what he ought and refrains from inferring any more than he ought.

    Since there is no constraint on his premises, we may disagree with the ideal reasoner. But if we

    agree with the ideal reasoner's premises, we appear bound to agree with his conclusion.

    Binkley specifies some requirements to give teeth to the student's status as an ideal reasoner:

    the student is perfectly consistent, believes all the logical consequences of his beliefs, and does

    not forget. Binkley further assumes that the ideal reasoner is aware that he is an ideal

    reasoner. According to Binkley, this ensures that if the ideal reasoner believes p, then he

    believes that he will believe p thereafter.

    Binkley's account of the student's hypothetical epistemic state on Thursday is compelling. But

    his argument for spreading the incredulity from the future to the past is open to three

    challenges.

    The first objection is that it delivers the wrong result. The student is informed by the teacher's

    announcement, so Binkley ought not to use a model in which the announcement is as absurd

    as "Canada extends to the North Pole but I do not believe it."

    Second, the future mental state envisaged by Binkley is only hypothetical: If no test is given by

    Thursday, the student will find the announcement incredible. At the beginning of the week,

    the student does not know (or believe) that the teacher will wait that long. A principle that tells me to defer to the opinions of my future self does not imply that I should defer to the

    opinions of my hypothetical future self. For my hypothetical future self is responding to

    propositions that need not actually be true.

    Third, the principle of reflection may need more qualifications than Binkley anticipates. Binkley

    realizes that an ordinary agent foresees that he will forget details. That is why we write

    reminders for our own benefit. An ordinary agent foresees periods of impaired judgment. That

    is why we limit how much money we bring to the bar.

    Binkley stipulates that the students do not forget. He needs to add that the students know that

    they will not forget. For the mere threat of a memory lapse sometimes suffices to undermine

    knowledge. Consider Professor Anesthesiology's scheme for surprise tests: A surprise test will

    be given either Wednesday or Friday with the help of an amnesia drug. If the test occurs on

    Wednesday, then the drug will be administered five minutes after Wednesday's class. The drug

    will instantly erase memory of the test and the students will fill in the gap by confabulation.


    You have just completed Wednesday's class and so temporarily know that the test will be on

    Friday. Ten minutes after the class, you lose this knowledge. No drug was administered and

    there is nothing wrong with your memory. You are correctly remembering that no test was

    given on Wednesday. However, you do not know your memory is accurate because you also

    know that if the test was given Wednesday then you would have a pseudo-memory

    indistinguishable from your present memory. Despite not gaining any new evidence, you

    change your mind about the test occurring on Wednesday and lose your knowledge that the

    test is on Friday. (The change of belief is not crucial; you would still lack foreknowledge of the

    test even if you dogmatically persisted in believing that the test will be on Friday.)

    If the students know that they will not forget and know there will be no undermining by

    outside evidence, then we may be inclined to agree with Binkley's summary that his idealized

    student never loses the knowledge he accumulates. As we shall see, however, this overlooks

    other ways in which rational agents may lose knowledge.

    5.4 Blindspots

    A blindspot is a consistent but inaccessible proposition. Blindspots are relative to the means of

reaching the proposition, the person making the attempt, and the time he makes the attempt.

Although I cannot know the blindspot "There is intelligent extra-terrestrial life but no one knows it", I can suspect it. Although I cannot rationally believe "Polar bears have black skin but I do not believe it", you can. This means there can be disagreement between ideal reasoners

    (even under strong idealizations such as Binkley's). The anthropologist Gontran de Poncins

    begins his chapter on the arctic missionary, Father Henry, with a prediction:

    I am going to say to you that a human being can live without complaint in an ice-house built

    for seals at a temperature of fifty-five degrees below zero, and you are going to doubt my

word. Yet what I say is true, for this was how Father Henry lived; … (Poncins 1988, 240)

    Gontran de Poncins' subsequent testimony might lead the reader to believe someone can

    indeed be content to live in an ice-house. The same testimony might lead another reader to

doubt that Poncins is telling the truth. But no reader ought to believe "Someone can be content to live in an ice house and I doubt it."

    If Gontran believes a proposition that is a blindspot to his reader, then he cannot furnish good

    grounds for his reader to share his belief. This holds even if they are ideal reasoners. So one

    implication of blindspots is that there can be disagreement among ideal reasoners because

    they differ in their blindspots.
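Schematically, most of the blindspots discussed here have the first-person form

\[ p \land \neg B_a p \qquad \text{or} \qquad p \land \neg K_a p, \]

where \(B_a\) and \(K_a\) stand for the belief and knowledge of the agent \(a\) for whom the proposition is a blindspot (notation introduced only for this gloss). Each such proposition is consistent, yet a consistent, introspective believer whose beliefs are closed under consequence cannot believe the first form about himself, and no one can know the second form about himself (by factivity and distribution over conjunction). Blindspots need not all take this simple form, but the form shows why they are relative to a person and an attitude: the very same proposition remains accessible to other agents.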


    This is relevant to the surprise test paradox. The students are the surprisees. Since the date of

    the surprise test is a blindspot for them, non-surprisees cannot persuade them.

The same point holds for intra-personal disagreement over time. Evidence that persuaded me on Sunday that "My new locker combination is 183614 but on Friday I will not believe it" should no longer persuade me on Friday (given my belief that the day is Friday). For that proposition is a blindspot to my Friday self.

    Although each blindspot is inaccessible, a disjunction of blindspots is normally not a blindspot.

I can rationally believe that "Either the number of stars is even and I do not believe it, or the number of stars is odd and I do not believe it." The author's preface statement that there is

    some mistake in his book is equivalent to a very long disjunction of blindspots. The author is

saying he either falsely believes his first statement or falsely believes his second statement or … or falsely believes his last statement.

    The teacher's announcement that there will be a surprise test is equivalent to a disjunction of

    future mistakes: Either there will be a test on Monday and the student will not believe it

    beforehand or there will be a test Wednesday and the student will not believe it beforehand or

    the test is on Friday and the student will not believe it beforehand.
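The contrast between a single blindspot and a disjunction of blindspots turns on the fact that belief does not distribute over disjunction. In the notation of the earlier sketch (again only illustrative), with E for "the number of stars is even":

\[ B\big((E \land \neg B E) \lor (\neg E \land \neg B \neg E)\big) \;\not\Rightarrow\; B(E \land \neg B E) \lor B(\neg E \land \neg B \neg E). \]

Believing the disjunction commits the believer to neither unbelievable disjunct. That is why the preface author's confession of error and the teacher's announcement can both be believed, even though each disjunct is off-limits to the person it is about.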

The points made so far suggest a solution to the surprise test paradox (Sorensen 1988, 328–343). As Binkley (1968) asserts, the test would be a surprise even if the teacher waited until the last day. Yet it can still be true that the teacher's announcement is informative. At the

    beginning of the week, the students are justified in believing the teacher's announcement that

    there will be a surprise test. This announcement is equivalent to:

    (A) Either

i. the test is on Monday and the student does not know it before Monday, or
ii. the test is on Wednesday and the student does not know it before Wednesday, or
iii. the test is on Friday and the student does not know it before Friday.

    Consider the student's predicament on Thursday (given that the test has not been on Monday

    or Wednesday). If he knows that no test has been given, he cannot also know that (A) is true.

    Because that would imply

    (iii) The test is on Friday and the student does not know it before Friday.


    Although (iii) is consistent and might be knowable by others, (iii) cannot be known by the

    student before Friday. (iii) is a blindspot for the students but not for, say, the teacher's

    colleagues. Hence, the teacher can give a surprise test on Friday because that would force the

    students to lose their knowledge of the original announcement (A). Knowledge can be lost

    without forgetting anything.
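The unknowability of (iii) for the student follows from two standard principles, factivity (\(Kp \rightarrow p\)) and distribution over conjunction (\(K(p \land q) \rightarrow Kp \land Kq\)). Writing K for what the student knows before Friday (a rendering of the reasoning just given, not an extra assumption):

\[ K(F \land \neg K F) \;\rightarrow\; (K F \land K \neg K F) \;\rightarrow\; (K F \land \neg K F). \]

The conclusion is a contradiction, so the student cannot know (iii) before Friday. Nothing blocks the teacher's colleagues from knowing it, since their knowledge is not what (iii) denies.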

    This solution makes who you are relevant to what you can know. In addition to compromising

    the impersonality of knowledge, there will be compromise on its temporal neutrality.

    Since the surprise test paradox can also be formulated in terms of rational belief, there will be

    parallel adjustments for what we ought to believe. We are criticized for failures to believe the

    logical consequences of what we believe and criticized for believing propositions that conflict

    with each other. Anyone who meets these ideals of completeness and consistency will be

    unable to believe a range of consistent propositions that are accessible to other complete and

    consistent thinkers. In particular, they will not be able to believe propositions attributing

    specific errors to them, and propositions that entail these off-limit propositions.

Some people wear T-shirts with "Question Authority!" written on them. Questioning authority is

    generally regarded as a matter of individual discretion. The surprise test paradox shows that it

    is sometimes mandatory. The student is rationally required to doubt the teacher's

    announcement even though the teacher has not given any evidence of being unreliable.

Indeed, the student can foresee that his change of mind opens a new opportunity for

    surprise.

    Another consequence is that there can be disagreement amongst ideal reasoners who agree

on the same impersonal data. Consider the colleagues of the teacher. They are not amongst those that the teacher targets for surprise. Since "surprise" here means surprise to the students,

    the teacher's colleagues can consistently infer that the test will be on the last day from the

    premise that it has not been given on any previous day.

    6. Dynamic Epistemic Paradoxes

    The above anomalies (losing knowledge without forgetting, disagreement amongst equally

    well-informed ideal reasoners, rationally changing your mind without the acquisition of

    counter-evidence) would be more tolerable if reinforced by separate lines of reasoning. The

    most fertile source of this collateral support is in puzzles about updating beliefs.


    The natural strategy is to focus on the knower when he is stationary. However, just as it is

    easier for an Eskimo to observe an arctic fox when it moves, we often get a better

    understanding of the knower dynamically, when he is in the process of gaining or losing

    knowledge.

    6.1 Meno's Paradox of Inquiry: A puzzle about gaining knowledge

    When on trial for impiety, Socrates traced his inquisitiveness to the Oracle at Delphi (Apology

    21d in Cooper 1997). Prior to beginning his mission of inquiry, Chaerephon asked the Oracle:

"Who is the wisest of men?" The Oracle answered "No one is wiser than Socrates." This

    astounded Socrates because he believed he knew nothing. Whereas a less pious philosopher

    might have questioned the reliability of the Delphic Oracle, Socrates followed the general

    practice of treating the Oracle as infallible. The only cogitation appropriate to an infallible

    answer is interpretation. Accordingly, Socrates resolved his puzzlement by inferring that his

    wisdom lay in recognizing his own ignorance. While others may know nothing, Socrates knows

    that he knows nothing.

    Socrates continues to be praised for his insight. But his discovery is a contradiction. If

    Socrates knows that he knows nothing, then he knows something (the proposition that he

    knows nothing) and yet does not know anything (because knowledge implies truth).

    Socrates could regain consistency by downgrading his meta-knowledge to the status of a

    belief. If he believes he knows nothing, then he naturally wishes to remedy his ignorance by

asking about everything. This rationale is accepted throughout the early dialogues. But when we reach the Meno, one of his interlocutors has an epiphany. After Meno receives the standard

    treatment from Socrates about the nature of virtue, Meno discerns a conflict between Socratic

    ignorance and Socratic inquiry (Meno 80d, in Cooper 1997). How would Socrates recognize the

    correct answer even if Meno gave it?

    The general structure of Meno's paradox is a dilemma: If you know the answer to the question

    you are asking, then nothing can be learned by asking. If you do not know the answer, then

    you cannot recognize a correct answer even if it is given to you. Therefore, one cannot learn

    anything by asking questions.
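The dilemma can be rendered as a constructive dilemma (a schematic paraphrase, with K for "you already know the answer" and L for "you can learn by asking"):

\[ K \lor \neg K, \qquad K \rightarrow \neg L, \qquad \neg K \rightarrow \neg L, \qquad \text{therefore } \neg L. \]

The solution sketched next resists the second horn: not fully knowing the answer is compatible with knowing enough to recognize it.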

    The natural solution to Meno's paradox is to characterize the inquirer as only partially

    ignorant. He knows enough to recognize a correct answer but not enough to answer on his

    own. For instance, spelling dictionaries are useless to six year old children because they

    seldom know more than the first letter of the word in question. Ten year old children have

    enough partial knowledge of the word's spelling to narrow the field of candidates. Spelling


    dictionaries are also useless to those with full knowledge of spelling and those with total

    ignorance of spelling. But most of us have an intermediate amount of knowledge.

    It is natural to analyze partial knowledge as knowledge of conditionals. The ten year old child

knows the spoken version of "If the spelling dictionary spells the month after January as F-e-b-r-u-a-r-y, then that spelling is correct." Consulting the spelling dictionary gives him knowledge

    of the antecedent of the conditional.

    Much of our learning from conditionals runs as smoothly as this example suggests. Knowledge

    of the conditional is conditional knowledge (that is, conditional upon learning the antecedent

    and applying the inference rule modus ponens: If P then Q, P, therefore Q). But the next

    section is devoted to some known conditionals that are repudiated when we learn their

    antecedents.
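The smooth cases can be summed up in a closure schema of roughly this form (a simplification whose needed qualifications are exactly the topic of the next section):

\[ \big(K(P \rightarrow Q) \land K P\big) \rightarrow K Q. \]

Learning the antecedent while retaining knowledge of the conditional yields knowledge of the consequent; the trouble comes when acquiring \(KP\) disturbs \(K(P \rightarrow Q)\).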

    6.2 Dogmatism paradox: A puzzle about losing knowledge

    Saul Kripke's ruminations on the surprise test paradox led him to a paradox about dogmatism.

    He lectured on both paradoxes at Cambridge University to the Moral Sciences Club in 1972. (A

descendant of this lecture now appears as Kripke 2011). Gilbert Harman transmitted Kripke's

    new paradox as follows:

    If I know that h is true, I know that any evidence against h is evidence against something that is

    true; I know that such evidence is misleading. But I should disregard evidence that I know is

    misleading. So, once I know that h is true, I am in a position to disregard any future evidence

    that seems to tell against h. (1973, 148)

    Dogmatists accept this reasoning. For them, knowledge closes inquiry. Any evidence that

    conflicts with what is known can be dismissed as misleading evidence. Forewarned is

    forearmed.

    This conservativeness crosses the line from confidence to intransigence. To illustrate the

excessive inflexibility, here is a chain argument for the dogmatic conclusion that my reliable colleague Doug has given me a misleading report (corrected from Sorensen 1988b):

    (C1) My car is in the parking lot.

    (C2) If my car is in the parking lot and Doug provides evidence that my car is not in the parking

    lot, then Doug's evidence is misleading.


    (C3) If Doug reports he saw a car just like mine towed from the parking lot, then his report is

    misleading evidence.

    (C4) Doug reports that a car just like mine was towed from the parking lot.

    (C5) Doug's report is misleading evidence.

    By hypothesis, I am justified in believing (C1). Premise (C2) is a certainty because it is

    analytically true. The argument from (C1) and (C2) to (C3) is valid. Therefore, my degree of

    confidence in (C3) must equal my degree of confidence in (C1). Since we are also assuming that

    I gain sufficient justification for (C4), it seems to follow that I am justified in believing (C5) by

    modus ponens. Similar arguments will lead me to dismiss further evidence such as a phone call

    from the towing service and my failure to see my car when I confidently stride over to the

    parking lot.

    Gilbert Harman diagnoses the paradox as follows:

    The argument for paradox overlooks the way actually having evidence can make a difference.

    Since I now know [my car is in the parking lot], I now know that any evidence that appears to

    indicate something else is misleading. That does not warrant me in simply disregarding any

    further evidence, since getting that further evidence can change what I know. In particular,

    after I get such further evidence I may no longer know that it is misleading. For having the new

    evidence can make it true that I no longer know that new evidence is misleading. (1973, 149)

    In effect, Harman denies the hardiness of knowledge. The hardiness principle states that one

    knows only if there is no evidence such that if one knew about the evidence one would not be

    justified in believing one's conclusion. New knowledge cannot undermine old knowledge.

    Harman disagrees.
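Stated schematically, the hardiness principle says that \(Kp\) holds only if there is no evidence \(e\) such that, were one to come to know \(e\), one would no longer be justified in believing \(p\). In a rough counterfactual notation (illustrative only, with \(J\) for justified belief and \(\Box\!\!\rightarrow\) for the subjunctive conditional):

\[ K p \;\rightarrow\; \neg \exists e \, \big(K e \;\Box\!\!\rightarrow\; \neg J p\big). \]

Harman rejects the principle: acquiring the new evidence can destroy the old knowledge, and with it the knowledge that the new evidence is misleading.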

    Harman's belief that new knowledge can undermine old knowledge may be relevant to the

    surprise test paradox. Perhaps the students lose knowledge of the test announcement even

    though they do not forget the announcement or do anything else incompatible with their

    credentials as ideal reasoners. A student on Thursday is better informed about the outcomes

    of test days than he was on Sunday. He knows the test was not on Monday and not on

    Wednesday. But he can only predict that the test is on Friday if he continues to know the


    announcement. Perhaps the extra knowledge of the testless days undermines knowledge of

    the announcement.

    6.3 The Future of Epistemic Paradoxes

    We cannot coherently predict that any specific new epistemic paradox awaits discovery. To see

why, consider the prediction Jon Wynne-Tyson attributes to Leonardo Da Vinci: "I have learned from an early age to abjure the use of meat, and the time will come when men such as I will look upon the murder of animals as they now look upon the murder of men." (1985, 65) By

    predicting this progress, Leonardo is showing that he already believes that the murder of

    animals is the same as the murder of men.

    There would be no problem if Leonardo thinks the moral progress lies in the moral

    preferability of the vegetarian belief rather than the truth of the matter. One might admire

    vegetarianism without accepting the correctness of vegetarianism. But Leonardo is endorsing

the correctness of the belief. Leonardo's prediction embodies a Moorean absurdity. It is like saying "Leonardo took twenty-five years to complete The Virgin of the Rocks but I will first believe so tomorrow." (This absurdity will prompt some to object that I have uncharitably interpreted

    Leonardo; he must have intended to make an exception for himself and only be referring to

    men of his kind.)

    I cannot specifically anticipate the first acquisition of the true belief that p. For that prediction

    would show that I already have the true belief that p. The truth cannot wait. The impatience of

    the truth imposes a limit on the prediction of discoveries.

    Bibliography

Aikin, K. Scott, 2011, Epistemology and the Regress Problem, London: Routledge.
Anderson, C. Anthony, 1983, "The Paradox of the Knower", The Journal of Philosophy, 80: 338–355.
Binkley, Robert, 1968, "The Surprise Examination in Modal Logic", Journal of Philosophy, 65/2: 127–136.
Bommarito, Nicolas, 2010, "Rationally Self-Ascribed Anti-Expertise", Philosophical Studies, 151: 413–419.
Bovens, Luc, 1995, "'P and I will believe that not-P': diachronic constraints on rational belief", Mind, 104/416: 737–760.
Burge, Tyler, 1984, "Epistemic Paradox", Journal of Philosophy, 81/1: 5–29.
–––, 1978a, "Buridan and Epistemic Paradox", Philosophical Studies, 34: 21–35.

Buridan, John, 1982, John Buridan on Self-Reference: Chapter Eight of Buridan's Sophismata, G. E. Hughes (ed. & tr.), Cambridge: Cambridge University Press.
Carnap, Rudolph, 1950, The Logical Foundations of Probability, Chicago: University of Chicago Press.
Christensen, David, 2010, "Higher Order Evidence", Philosophy and Phenomenological Research, 81: 185–215.
Cicero, On the Nature of the Gods, Academica, H. Rackham (trans.), Cambridge, Massachusetts: Loeb, 1933.
Collins, Arthur, 1979, "Could our beliefs be representations in our brains?", Journal of Philosophy, 74/5: 225–43.
Cooper, John (ed.), 1997, Plato: The Complete Works, Indianapolis: Hackett.
Egan, Andy and Adam Elga, 2005, "I can't believe I'm stupid", Philosophical Perspectives, 19/1: 77–93.
Feyerabend, Paul, 1988, Against Method, London: Verso.
Fitch, Frederic, 1963, "A Logical Analysis of Some Value Concepts", Journal of Symbolic Logic, 28/2: 135–142.
Gödel, Kurt, 1983, "What is Cantor's Continuum Problem?", in Philosophy of Mathematics, Paul Benacerraf and Hilary Putnam (eds.), Cambridge: Cambridge University Press, 258–273.
Hacking, Ian, 1975, The Emergence of Probability, Cambridge: Cambridge University Press.
Hájek, Alan, 2005, "The Cable Guy paradox", Analysis, 65/2: 112–119.
Harman, Gilbert, 1973, Thought, Princeton: Princeton University Press.
Hawthorne, John, 2004, Knowledge and Lotteries, Oxford: Clarendon Press.
Hintikka, Jaakko, 1962, Knowledge and Belief, Ithaca: Cornell University Press.
Hughes, G. E., 1982, John Buridan on Self-Reference, Cambridge: Cambridge University Press.
Kaplan, David and Richard Montague, 1960, "A Paradox Regained", Notre Dame Journal of Formal Logic, 1: 79–90.
Klein, Peter, 2007, "How to be an Infinitist about Doxastic Justification", Philosophical Studies, 134: 772529.
Knight, Kevin, 2002, "Measuring Inconsistency", Journal of Philosophical Logic, 31/1: 77–98.
Kripke, Saul, 2011, "Two Paradoxes of Knowledge", in S. Kripke, Philosophical Troubles: Collected Papers, Volume 1, New York: Oxford University Press, pp. 27–51.


Kvanvig, Jonathan L., 1998, "The Epistemic Paradoxes", in Routledge Encyclopedia of Philosophy, Boston: Routledge.
Kyburg, Henry, 1961, Probability and the Logic of Rational Belief, Middletown: Wesleyan University Press.
Lewis, David, 1998, "Lucas against Mechanism", in Papers in Philosophical Logic, Cambridge: Cambridge University Press, pp. 166–9.
Lewis, David and Jane Richardson, 1966, "Scriven on Human Unpredictability", Philosophical Studies, 17/5: 69–74.
Lucas, J. R., 1964, "Minds, Machines and Gödel", in Minds and Machines, Alan Ross Anderson (ed.), Englewood Cliffs, N.J.: Prentice Hall.
Makinson, D. C., 1965, "The Paradox of the Preface", Analysis, 25: 205–207.
Malcolm, Norman, 1963, Knowledge and Certainty, Englewood Cliffs, New Jersey: Prentice Hall.
Moore, G. E., 1942, "A reply to my critics", in The Philosophy of G. E. Moore, P. A. Schilpp (ed.), Evanston, IL: Northwestern University.
Nerlich, G. C., 1961, "Unexpected Examinations and Unprovable Statements", Mind, 70/280: 503–514.
Peirce, Charles Sanders, 1931–1935, The Collected Works of Charles Sanders Peirce, Charles Hartshorne and Paul Weiss (eds.), Cambridge, MA: Harvard University Press.
Plato, Plato: The Complete Works, John M. Cooper (ed.), Indianapolis: Hackett, 1997.
Poncins, Gontran de, 1988, Kabloona, in collaboration with Lewis Galantière, New York: Carroll & Graf Publishers; originally published 1941.
Post, John F., 1970, "The Possible Liar", Noûs, 4: 405–409.
Quine, W. V., 1953, "On a so-called Paradox", Mind, 62/245: 65–7.
–––, 1969, "Epistemology Naturalized", in Ontological Relativity and Other Essays, New York: Columbia University Press.
–––, 1987, Quiddities, Cambridge, MA: Harvard University Press.
Read, Stephen, 1979, "Self-Reference and Validity", Synthese, 42/2: 265–74.
Sainsbury, R. M., 1995, Paradoxes, Cambridge: Cambridge University Press.
Salerno, Joseph, 2009, New Essays on the Knowability Paradox, New York: Oxford University Press.


Scriven, Michael, 1964, "An Essential Unpredictability in Human Behavior", in Scientific Psychology: Principles and Approaches, Benjamin B. Wolman and Ernest Nagel (eds.), New York: Basic Books.
Sextus Empiricus, Outlines of Pyrrhonism, R. G. Bury (trans.), Cambridge, Massachusetts: Harvard University Press, 1933.
Skyrms, Brian, 1982, "Causal Decision Theory", Journal of Philosophy, 79/11: 695–711.
Sorensen, Roy, 1988a, Blindspots, Oxford: Clarendon Press.
–––, 1988b, "Dogmatism, Junk Knowledge, and Conditionals", Philosophical Quarterly, 38 (October): 433–454.
–––, 2001, Vagueness and Contradiction, Oxford: Clarendon Press.
–––, 2002, "Formal Problems in Epistemology", in The Handbook of Epistemology, Paul Moser (ed.), Oxford: Oxford University Press, pp. 539–568.
–––, 2003a, "Paradoxes of Rationality", in The Handbook of Rationality, Al Mele (ed.), Oxford: Oxford University Press, pp. 257–275.
–––, 2003b, A Brief History of the Paradox, New York: Oxford University Press.
Thomson, J. F., 1962, "On Some Paradoxes", in Analytical Philosophy, R. J. Butler (ed.), New York: Barnes & Noble, pp. 104–119.
Tymoczko, Thomas, 1984, "An Unsolved Puzzle about Knowledge", The Philosophical Quarterly, 34: 437–458.
van Fraassen, Bas, 1984, "Belief and the Will", Journal of Philosophy, 81: 235–256.
–––, 1995, "Belief and the Problem of Ulysses and the Sirens", Philosophical Studies, 77: 7–37.
Veber, Michael, 2004, "What Do You Do with Misleading Evidence?", The Philosophical Quarterly, 54/217: 557–569.
Weiss, Paul, 1952, "The Prediction Paradox", Mind, 61/242: 265–9.
Williamson, Timothy, 2000, Knowledge and its Limits, Oxford: Oxford University Press.
Wynne-Tyson, Jon, 1985, The Extended Circle, Fontwell, Sussex: Centaur Press.

    Academic Tools

    How to cite this entry.

    Preview the PDF version of this entry at the Friends of the SEP Society.

    Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO).

    Enhanced bibliography for this entry at PhilPapers, with links to its database.


    Other Internet Resources

    Epistemology Page, maintained by Keith De Rose (Yale University).

    The Epistemology Research Guide, maintained by Keith Korcz (University of

Louisiana/Lafayette).

    Paradox or Fallacy, maintained by Andrew McMillan.

    The Sleeping Beauty Problem, maintained by Barry R. Clarke.

    Related Entries

    fatalism | Fitch's paradox of knowability | logic: epistemic | logic: of belief revision |

    probability, interpretations of | prophecy | skepticism | suspense, paradox of | vagueness

