THESES SIS/LIBRARY TELEPHONE: +61 2 6125 4631 R.G. MENZIES LIBRARY BUILDING NO:2 FACSIMILE: +61 2 6125 4063 THE AUSTRALIAN NATIONAL UNIVERSITY EMAIL: [email protected] CANBERRA ACT 0200 AUSTRALIA
USE OF THESES
This copy is supplied for purposes of private study and research only.
Passages from the thesis may not be copied or closely paraphrased without the
written consent of the author.
GENTZEN SYSTEMS AND DECISION PROCEDURES
FOR RELEVANT LOGICS
by
Steve Giambrone
B.A., University of Southwestern Louisiana, 1971 M.A., Louisiana State University, 1975
A dissertation submitted for the degree of
Doctor of Philosophy, in the Australian
National University, February, 1983.
The research contained herein was carried
out either by me independently or in
conjunction with Dr. R.K. Meyer. In
particular, the semantic results of §1.7
and §1.8 are based on collaborative work
performed by Dr. Meyer and myself.
~~ Steve Giambrone
iii
ABSTRACT
This dissertation is primarily a proof theoretic
investigation of the positive fragments and boolean
extensions of two of the principal relevant logics
T and R, with and without contraction, and of th.e
corresponding positive semilattice relevant logics. In
addition to motivational and syntactic preliminaries,
Chapter 1 contains some new semantic results which are Oj _0-,
useful in the later chapters. In particular, T , TW and
RWo 1 are proved to be complete with respect to their
boolean semantics, and are then shown to be conservative
extensions of T0
, TW and RW, respectively. In Chapter 2
we develop subscripted Gentzen systems for four
positive semilattice logics. Appropriate Cut Theorems
are proved, and one system is shown to be equivalent to
uR+' Decision procedures are then given for the two
contractionless systems. In Chapter 3 Gentzen systems
are given for TW+, T+, RW+ and R+, Cut Theorems and
equivalences are proved, and TW+ and RW+ are shown to
be decidable. The sequent calculi that are used are
multiply structured as required for relevant logics.
Chapter 4 begins by collecting decision procedures for
fragments of TW+ and RW+. We then discuss and make
some progress toward solving some open problems, viz.,
the decision questions for EW+, TW and RW, and the
question of equivalence between RW+ and its semilattice
counterpart uRW+.
TABLE OF CONTENTS
Abstract iii
Acknowledgements vi
Preface ix
CHAPTER 1. INTRODUCTION AND PRELIMINARIES
§1. Introduction
2. Relevant Gentzen Systems
3. Decision Questions for Relevant Logics
1
3
9
4. Contractionless Relevant Logics 14
5. Syntactic Preliminaries 19
6. Axiomatics 21
7. Semantic Completeness for 24 Boolean Relevant Logics
8. Conservative Extension 37
9. Semilattice Semantics 52 CHAPTER 2. SUBSCRIPTED GENTZEN SYSTEMS
§1. Introduction
2. Preliminaries
3. Critique of Kron 78
4. Critique of Kron 80
5. G-Systems
6. Vanishing-t
7. Cut and Modus Ponens
8 u c u . R+ - G R+ u u
9. R+ is G R+
10. Decidability
64
66
69
79
85
95
100
113
123
125
CHAPTER 3. DUNN-STYLE GENTZEN SYSTEMS
§l. Introduction 137
2. Formulation l, Definitions and 139 Facts
3. Cut Theorem 148
4. Equivalence and Represent- 155 ational Adequacy
5. Formulations 2 and 3: 165 Vanishing-t
6. Denesting 175
7. Reduction 187
8. Degree and Decidability 197
CHAPTER 4. CONCLUDING RESULTS AND OPEN QUESTIONS
§l. Introduction 204
2. Decidable Fragments 205
3. E+ and EW+ 208
4. Extensions and Decidability 211
5. RW+ = uRW+? 226
BIBLIOGRAPHY 229
vi
ACKNOWLEDGEMENTS
It has been an enormous privilege and pleasure
to have worked with my principal supervisors Dr. R.K. Meyer
and Dr. R. Routley. The influence of their writings and
of both logical and philosophical conversations with them
permeates this work. It has been with Dr. Meyer that I
have worked most closely, and to whom credit is deserved
for much of what is good to be found herein.
I am also indebted to several other scholars
who have been at one time or another members of the
Logic Group of the Philosophy Department, RSSS during my
course of study at the Australian National University,
namely, Dr. C. Mortensen, Dr. M.A. McRobbie, Dr. E.P. Martin,
Dr. J. Slaney, Mr. P. Thistlewaite, and Mr. Adrian Abraham.
I am particularly indebted to Dr. M.A. McRobbie as a proof
theoretic island in an ocean of algebraist, and to
Dr. E.P. Martin for so generously sharing his knowledge of
and insights into Ticket Entailment. I would also like to
single out Mr. P. Thistlewaite who read most of this
manuscript and suggested numerous (needed) corrections.
Above and beyond the debts owed to individual members of
the Logic Group, the Group as a whole deserves mention
for the enthusiastic atmosphere that it provides. It
seems inconceivable that an isolated researcher could
find comparable stimulation for his work, nor enjoyment
therein.
~o~uaw puB pua~~d
~alaw ·x ~~aqo~ or
One cannot write a dissertation without torturing
friends and family (largely with boredom) if, that is,
one is fortunate enough to have such who will endure it.
vii
In this respect I have been abundantly blessed. Those
individuals already mentioned must be thanked for their
tolerance and immeasurable support. In addition, special
thanks also go to Dr. C. Fahlander, Dr M. Dronjak-Fahlander,
Mr. P. Filmer-Sanke, Ms. V. Sieveking, Mr. J. Larocque,
and Ms. L. Sachs.
Ms. A. Duncanson deserves both gratitude and praise
for transforming an illegible manuscript filled with
technical notation into a fine typescript in a very short
space of time.
Finally, I want to thank Mr. Bruce Toohey, an
unsung hero of logic and the source of A-grade inspiration.
1
SECTION 1. Introduction
This thesis is primarily a contribution to the
proof theoretic investigation of sentential or zero-order
relevant logics, although some new semantic results are
contained in this chapter. Chapter II is devoted to
subscripted Gentzen systems for positive semi-lattice
relevant logics based on previous work of Aleksandar Kron.
The systems G~W+, G~RW+' GuT+ and GuR+ are formulated,
and suitable Cut Theorems are proved. Moreover, the
contractionless systems are shown to be decidable.
Although it is likely that all of the systems are equivalent
to their axiomatic namesakes, we have a proof of
equivalence only for GuR+ and u· R+.
In Chapter 3 we build primarily on the work of
J. Michael Dunn to formulate Gentzen systems which are • 0 0 0 0
proved to be equ1valent to TW+' RW+' T+ and R+' respectively.
We then build on insights from the previous Chapter to show
that these contractionless systems are likewise decidable.
Chapter 4 begins by gathering some easy results for
fragments of the logics treated in Chapters 2 and 3. The
latter sections are devoted to discussing some interesting
open questions which arose from this research, and to
contributing as much as we now can to their solution.
These problems are the decidability of EW+' TW and RW, and
the equivalence of RW+ with uRW · +
Sections 2~ of this chapter are devoted to
historical and motivational remarks, while §5 and §6 give
2
the necessary syntactic and axiomatic preliminaries. In
§ 7 semantics are presented for T01, TW
01 and RW
0
1
and completeness is proved. These systems are then shown 0
to be conservative extensions of T , TW and RW,
respectively, in § 8. In § 9, the known semantic
completeness results for semi-lattice relevant logics
are recorded, and an alternative semantics is given for
uRW+. These semantic results are useful for the work to
be done in the following chapters.
3
SECTION 2. Relevant Gentzen Systems
Sequent calculi (consecution calculi, Gentzen
system~ have been a powerful tool of formal research since
their introduction in Gentzen 35. It was recognised early
on that "normal" sequents or consecutions utilizing sequences
of formulae could be used to provide Gentzen systems for
pure implicational fragments of relevant logics. The first
of these was the system LI announced in Belnap 59, which
is equivalent to the pure implicational fragment of E.
(It is presented in detail in Belnap 60.) R was fitted +
with a similar sequent calculus in Kripke 59, in which R+
is also shown to be decidable. A similar style of
formulation was developed in Belnap and Wallace 65 for E , - +-
which is also shown to be decidable there. And Meyer 66
extends the use of simple sequences of formulae in relevant
Gentzen systems all the way to the system R-Distribution.
But to this point no one knew how to fully
accommodate conjunction and disjunction (even just along
with implication) in a relevant sequent calculus. It was
J. Michael Dunn who made the breakthrough (announced in
Dunn 73).
In a sequent calculus for, say, classical logic,
a sequence of formulae is "implicitly representing", if
you like, the conjunction of those formulae when it occurs
in the antecedent of a sequent, and the disjunction of
those formulae when it occurs in the consequent. In the
relevant Gentzen systems which had previously been
formulated, sequences of formulae were being used alternately
4
to implicitly represent intensional conjunction (fusion)
and disjunction (fission). From this point of view, what
Dunn discovered was that two different kinds of sequences
would be required to formulate Gentzen systems for the full
positive relevant logics, in particular for R:. Moreover,
such sequences must be allowed to be arbitrarily nested
within one another.
Such a simplified overview (made with a great
deal of hindsight), misrepresents these ideas as being
simple. They are far from it. We should also note
that equal credit is due to Mine 72 which develops a 0 1 sequent calculus for R .
In any event, Dunn's system LR+ was presented in
Dunn 75. A Cut Theorem is proved there and the system
is shown to be equivalent to R:t. Sequents are singular
on the right, but the antecedents in the general case are
structures, nested within one another to arbitrary depths.
The next advance came from Meyer 76a. Although
that paper deals explicitly only with systems of pure
implication, the moral is obviously more general: Since
sequences are standing in for "generalized" conjunction
of one form or another, why not explicitly introduce
structural connectives corresponding to the formulae
connectives in question. Each such structural connective
is to be governed by structural rules expressing its
particular "combinatorial" properties.
These ideas lead naturally to Gentzen systems
for other positive relevant logics via Routley and
Meyer 72.
5
This technique gives definite notational
simplification, and, we think, conceptual clarity and unity
to the complex Gentzen systems of positive relevant logics.
Indeed, we view the ideas of Meyer 76a as a bridge between
Dunn 75 and Belnap 8+.
Display Logic is presented in Belnap 8+. It is a
very general and powerful Gentzen system which can
simultaneously accommodate an indefinite number of logics -
many well-known and others yet to be "discovered". The
central idea is to conceive of connectives as coming in
"families". Each family has a (formula) "conjunction", a
"disjunction", a "negation", an "implication", and at least
one sentential constant (0-ary connective), "the true".
Some families may have other formula connectives, such as
necessity in modal cases. But all of the formula
connectives can be defined in terms of "kernel" connectives
of conjunction, disjunction, negation and truth.
Further each family has three structural connectives.
One is a negative structural connective, obviously standing
in for negation. The ''interpretations" of the other two
are context dependent, just as with Gentzen's commas and
the empty symbol. So one of these is alternately
interpreted as the conjunction (of the family) or the
disjunction, depending on whether or not it occurs as an
6
2 antecedent or consequent part of a sequent. And the final
structural connective alternately stands in for the true
or the negation of the true (the false, when such is
postulated as a primitive formula connective}. 3
Now for each family of connectives, three sets of
postulates are given. The first is a group of display
equivalences. The second is a set of structural rules.
These two together determine the character of the structural
connectives of a given family. Finally, there are logical
or (formula) connective rules, one for introducing a
given connective on the left, and another for introduction
on the right.
Different logics can now be associated with one or
more families of connectives. Classical logic, for instance,
is ·associated with th.e boolean connectives, whereas modal
and relevant logics are individually associated with the
boolean family of connectives and a family of connectives
distinctive to the particular logic in question. So it
is often convenient to think of not one Display Logic,
but of many Display Logics, each having only a definite,
given family or families of connectives governed by
particular postulates. 4
Another central feature of Display Logic is
Display Theorem. (Theorem 3-2, section 3.2 of Belnap 8+).
Each antecedent part X of a consecution S can be
displayed as the antecedent (itself) of a display-
equivalent consecution X l- W; and the consequent W is
7
determined only by the position of X in S, not by what X
looks like. Similarly for consequent parts of S.
This feature of Display Logic allows a very general (and
very pretty) Cut Theorem to be proved in Belnap 8+. It
covers an enormous range of "Display Logics". (See note 4
of this section.)
Prior to Display Logic, there were no known Gentzen
systems for relevant logics with negation and a full
complement of positive principles. The logics representable
in Display Logic include the major boolean or classical
relevant logics. The conservative extension results of 0 0
§1 and Meyer and Routley 74 show that TW, RW, T and R
are also exactly represented. But the significance of
Display Logic ·goes beyond that of providing sequent
calculi for various relevant logics.
Before closing this section, we should note that
this historical sketch is in no way intended to be complete.
Most particularly, we have not discussed the ''one-sided''
Gentzen systems, as in Belnap and Wallace 65, McRobbie
and Belnap 79 and McRobbie 79, nor the merge systems of
AB75. However, what has been presented here was intended
as stage-setting for the results of succeeding chapters, in
which such systems play no part.
8
FOOTNOTES
1 .m A similar sequent calculus for R+ · was developed
independently in BGD80.
2Antecedent and consequent parts are more or less what
one would expect. For an explanation, see sections 2.3
and 2.4 of Belnap 8+, or §4.4 of this work.
3The question of how many families are to be postulated
is left open, but some interesting examples are given.
4Although this move somewhat mars the beauty and possibly
the ultimate conception of Display Logic, it is very
convenient for particular purposes. So we take that
point of view in Chapter 4.
9
SECTION 3. Decision Questions for Relevant Logics
Decision Questions for sentential relevant logics
have been notoriously difficult to answer. Until very
+ recently only one system, Q of Meyer and Routley 73c, has
been shown to be undecidable. (But see below.) The
question for the full systems T, E and R remains open in
spite of significant efforts (by many stalwart logical
champions) to arrive at a solution, especially for R.
Indeed, since Martin slew the dragon of Belnap's conjecture
in Martin 78 (also published in Martin and Meyer 8+), the
decision problem for R has come to be known as "the big
enchillada". (Terminology is due to R. K. Meyer.)
The system T of ticket entailment has proved to be
particularly recalcitrant. Aside from PW (and the zero
degree and first degree fragments of T, which it has in
common withE and R), decision questions for even interesting
subsystems ofT have remained open (until this work);
whereas, these questions for R , E , R and E have all ~ + +~ +-
been answered in the affirmative. (See Kripke 59, Belnap
and Wallace 65 and Meyer 66.)
A decidability claim for TW+ and RW+ was made in
Kron 78 on the basis of a claimed decision procedure for
the subscripted Gentzen systems GT+-W and GR+-W. However,
we show in the next Chapter that those systems are not
equivalent to TW+ and RW+. Further, the argument for
decidability is unsound.
We then build on the work of Kron 80 to produce
10
subscripted Gentzen systems for positive semi-lattice logics,
and appropriate Cut Theorems are shown. However, at this
point, only one, GuR+' can be proved equivalent to its
namesake. We then show decidability for the two
contractionless systems, GuTW+ and GRW+.
0 0
In Chapter 3 we show that TW+ and RW+ are indeed 0 0
decidable. Dunn-style Gentzen systems LTW+ and LRW+ are
developed for this purpose, and insights gained in
Chapter 2 are used to show decidability.
The subscripted systems and the Dunn-style systems
are treated in a broadly similar fashion. We take a
rather common approach of giving different formulations
best suited for different purposes. (See Curry 63 and
Kleene62, for instance.) The initial formulations utilize
t (or its structural analogue) to remain non-empty on the
left, which facilitates a proof of Cut. The systems are
then given formulations without t which are more suitable
for showing them decidable.
The arguments for decidability are obviously proof
theoretic. First, a complete and effective proof search
tree is defined for any given formula. Konig's Lemma
(Konig 27) is then used to show that all such proof search
trees are finite. As usual the Finite Fork Property
presents little problem. But as we have said before,
Gentzen systems for full positive relevant logics and their
supersystems are more complex than Gentzen systems which
use simple sequences of formulae as structures for building
sequents, so a new technique had to be developed for showing
the Finite Branch Property.
Obviously, it suffices for the Finite Branch
Property to get control over the length or "complexity''
11
of a sequent that can occur on any branch of the proof
search tree - given Irredundancy and an appropriate
Subformula Property, that is. The technique developed in
this thesis involves first ~eparating out" the intensional
and extensional structural components of sequents, and
getting control over each separately. In the Dunn-style
systems, the different structural components are explicitly
distinguished (except at the formula level) as intensional
and extensional sequences. In the subscripted systems,
intensional and extensional components of a structure must
be unpacked from the interrelations of the subscripts
occurring in it.
Control over the extensional complexity of a sequent
is gained by showing a Reduction Lemma in the spirit of
Gentzen 35. Then we develop an appropriate notion of
degree as a measure of intensional complexity. Of course,
a Degree Lemma is proved which gives control over intensional
complexity. And then it is shown that the combined effect
yields the required control on the overall complexity of
the sequent.
It is hoped that this technique can be generalized
to handle all of TW and RW. A proposed method of doing so
is sketched in §4 of Chapter 4. However, the method does
not appear to generalise to even positive systems with
contraction. And a recent result by Urquhart would lead
12
one to believe that no argument for decidability will do in
these cases.
We mentioned before that the system Q+ has been
shown to be undecidable. However, that system was motivated
simply as an undecidable relevant logic. The system KR
(see 5. 4 of RLRI), on the other hand, was independently
motivated and of interest in its own right. In Urquhart 82
the word problem for semi-groups is encoded into KR - via
a theory of projective geometry - thus showing that the
system is undecidable.
If one can, as many suspect, encode at least an
appropriate portion of KR into R, the big enchillada will
be made a meal of. As essentially recorded in Meyer and
Giambrone 80, if such encoding can be done in R+' then the
decidability questions for (R+,) T+, E+, T and E will
likewise be answered in the negative.
ADDENDUM
We received a copy of Urquhart 8+ while we were
in the process of making final corrections to this
manuscript. That work ends an era of research into
decision questions for relevant logics by proving that
T, E and R - and many other relevant logics - are
undecidable. To this author's knowledge, T, E and R
are now the first philosophically well-motivated
sentential logics to be shown undecidable.
Although we have not yet had time to study the
paper in detail, we suspect that it dashes our hope of
showing TW and RW decidable £[ the method proposed in
§ 4. 4 .
13
14
SECTION 4. Contractionless Relevant Logics
The study of contractionless logics goes back at
least as far as the development of Zukasiewicz's three
valued logic Z3 first published in Zukasiewicz 20. (See
§1.6 for a statement of the Contraction Axiom.) And
interest in contractionless relevant logics was present
very early in the study of relevant logics per se. §8.11
of AB75 gives Belnap's Conjecture, namely, that if A+B and
B+A are both theorems of PW, then A and B are the same
formula. The original date of the conjecture is unknown to
this author, but progress toward its solution had been
reported by Powers as early as 1968. (The results were
eventually published in Powers 76.) We have already noted
that the conjecture was proved in Martin 78.
RW+ appears in Smiley 591 and is studied in Meredith
and Prior 63. Both it and PW were given formal semantics in
Urquhart 73, as well as subscripted Gentzen systems. But
interest in full contractionless systems was first
stimulated by Meyer, Routley and Dunn 79, where it is shown
(with due acknowledgement to Curry and Feys 58) that a
non-trivial naive set theory can not be founded on R, T
nor E. The problem, of course, is that the contraction
axiom in the presence of other very minimal logical
principles will collapse any theory containing the full
Abstraction Principle.
The ability to be used in formally investigating
non-trivial but inconsistent theories, i.e. being weakly
paraconsisten~ has always been a motivating feature
15
of relevant logics. And naive set theory has always been
near the top of the list of interesting such theories -
both within and outside of the relevant program. So
contractionless relevant logics have found favor with those
who want a logic suitable for such purposes, but want to
maintain as much as possible of the full motivation of the
traditional relevant logics. 2
Another point on which the contractionless relevant
systems commend themselves is that of being more Catholic
than the Pope with respect to a central feature of relevant
logics. AB75 begins with the claim that ''the heart of
logic [lies] in the notion of 'if ... then -· •; .... ''We
take the point to be that the central task of a logic is to
separate out valid inferences from invalid ones. Contraction
less relevant logics can be seen as taking this point further.
Distinguishing valid inferences from invalid inferences is
not simply the central task of logic, it is the task of
logic.
To be sure, logic must say something "about" truth
functions, since they are needed in the vocabulary to
express certain truths about implication, such as, if A
implies B and A implies C, then A implies B and C. But
whether or not Excluded Middle, for instance, is to be
accepted is not a matter to be determined by logic.
Accordingly, the contractionless relevant logics (we
have TW, RW and EW particularly in mind) do not record any
purely truth-functional tautologies as a logical truth.
More precisely, no formula in which an+ does not occur
is a theorem of these logics.3
16
It is not that such logics deny any of the putative
truths about truth functions - or for that matter about
quantifiers, or alethic modal operators, or what have you.
Rather, it is that such mattersare to be decided on
non-logical grounds, and recorded in theories appropriate
to those subject matters.
And with respect to taking valid inference to be
its proper subject matter, TW outshines its cousins. For
Slaney 83 establishes the following fact.
TW Implicational Fact. 4 Every theorem of TW is equivalent
to a conjunction of theorems each of which is a disjunction
one of whose disjuncts is a valid implication.
As Slaney puts it, this fact ''establishes a good sense in
which TW is fundamentally implicational 1'. Which is, we
think, as a logic ought to be.
Now one point needs clarification here. It is an
historical accident that RW, TW and EW are called
lcontractionless logics!. There are (in many ways good)
historical reasons for the name, these reasons having to
do with the usual axiomatisations of R, T and E. But
there is nothing sacrosanct about these axiomatisations.
One could add Excluded Middle, for instance, to any of
these contractionless logics and still have subsystems of
the original logics in which some instances of Contraction
were not valid- i.e., still have contractionless systems.
It is not the simple lack of Contraction as a theorem which
17
gives these systems the characteristic which we have been
commending above.
Finally, we should note that the contractionless
relevant logics are prime; that is, a disjunction is a
theorem of one of these systems just in case one of the
disjuncts is. This is a trait which is put to good use in
§1.7 and 1.8. And of course it is a point on which to
recommend these systems to those who cherish constructivism.
FOOTNOTES
1As the pure implicational fragment of a system with
negation also primitive.
18
2However, serious doubts are cast on the usefulness of RW
in this respect in Slaney So and Slaney 83.
3we have avoided the normal reference to zero degree
formulae, since a slightly different notion of degree is
employed in this work.
4The name is our own.
19
SECTION 5. Syntactic Preliminaries
A number of logics are treated in this work. For
each, a formal language£, a set of formulae, is assumed,
being built up in the usual way from (a denumerably infinite
stock of) atomic formulae - and a sentential constant t,
where appropriate - via the suitable connective(s) from
among the unary connectives ~and 1, and the binary
connectives &, v, o and~. In order of occurrence, these
connectives are De Morgan negation, boolean negation,
(extensional) conjunction, (extensional) disjunction,
fusion or intensional conjunction, and implication.
Other connectives and/or sentential constants are defined
when useful.
We take formulae to be obs, more or less in the
sense of Curry 63. We use
p,q,r,pl,ql,rl,···
as sentential parameters taking atomic formulae as values,
and
A,B,C,D,A 1 ,B 1 ,C 1 ,D 1 , •••
as formula variables. Representations of formulae are
disambiguated according to the following conventions:
the connectives bind less tightly in the order presented
above (so- binds most tightly and+ least tightly);
dots and parentheses are used in the standard way;
otherwise, disambiguate by associating to the left.
However, where there is no chance of confusion, we use
20
simple juxtaposition for o, in which case it binds more
tightly than any other connective.
Notation for various Gentzen systems is introduced
along with those systems. Local variables, for sets for
instance, are introduced when needed. c We let u, n, -,
and E serve their usual set theoretic functions; and let
c be proper subset, while u is generalised set union
(see p.lll of Kuratowski and Mostowski 68, for instance.)
21
SECTION 6. Axiomatics
Hilbert style axiomatizations of the logics
treated in this work can be formulated from the following
axiom schemata and rules. For each logic we assume the
appropriate base language£; and for the boolean systems,
it is convenient to have r~l according to
Definition. A~B =df IA¥B
Axl. A+A
Ax2. A+B+.B+C+.A+C
Ax3. B+C+.A+B+.A+C
Ax4. (A+.A+B)+.A+B
Ax5. A+.A+B+B
Ax6. A+B+.A+B+C+C
A!-7. AB+C+.A+.B+C
Ax8. (A+.B+C)+.AB+C
Ax9. A&B+A, A&B+B
AxlO. (A+B)&(A+C)+.A+B&C
Axll. (A+A+A)&(B+B+B)+.A&B+A&B+A&B
-Axl2. A+AvB, B+AvB
Axl3. (A+C)&(B+C)+.AvB+C
-Axl4. A&(BvC)+.(A&B)vC
Axl5. -A+A
Axl6. A+-B+.B+-A
Identity
Suffixing
Prefixing
Contraction
Assertion
Restricted Assertion Exportation
Importation
Simplification
Composition
Addition
V+
Distribution
-Double negation
-contraposition
Axl7. A-+~A-+~A
Axl8. t
Axl9. t-+.A-+A
Ax20. A&IA-+B
Ax21. A-+.B-+CVIC
Ax22. (A-+B)~.A~B
Rl
R2
R3
A A-+B B
A B A&B
A-+.B-+C AB+C
22
~Reductio
Ex Falsum Quod
Libet
Modus Ponens
Adjunction
Rule Importation
R4 A&(A 1&ql-+· ... +.A &a+C)+.B 1&q1+ .... -+.B &q +B Fine's Rule n n n n
A& CA1.&.q1-+ .... -+.An &<JDD )+ .B1 &q1-+ •... -+. Bn &<1n +B
A&(Al-+· ... -+. An -+CvD)-+.Bl-+ ..... -r.Bn -+B
with 0 ~ n, provided ~occurs only where indicated.
FORMULATIONS
PW. Axl-3; Rl
T. Axl-4,9,10,12-17; Rl,R2
E. T + Ax6,11
R. T + Ax5
urr. T + R4
uR. R + R4
23
Fragments and extensionscan be formulated by dropping
and adding, respectively, the appropriate axioms and rules.
Contractionless (W-) systems are obtained by dropping Ax4.
Not all of the axioms are independent in the various
sytems. Ax22, in particular, is redundant in R01 , T01
RW 01. However, it is independent in TW 01 , as the following
matrices (due to Meyer) show.
Matrices for&, v, and I can be read from the
following Hasse diagram
3
1\ 2 1
\I 0
with- identified with !,and 3 the only designated value.
The +-matrix can be specified by
1. a+b = 3, if a< b, and
2. a+b = 2, otherwise.
We now leave it to the reader to verify the claim.
In the next section we will help ourselves to
well-known and/or easily proved theorems of all of the
boolean logics, such as truth-functional tautologies
in & , v, and -l, DeMorgan Laws and Double-negation for I,
as well as (boolean) Disjunctive Syllogism, i.e.,
~L A&(IAvB)+B.
Throughout this work we use I f-1 subscripted
with the name of a logic or with L to indicate theoremhood
in a logic or logics - as we have just done above.
24 SECTION 7. Semantic Completeness for Boolean Relevant Logics
In Meyer & Routley 74a, R"~ is shown to be complete
c respect to its boolean semantics, and semantic argument 0
is given to show that it is a conservative extension of R .
In this sectjon we show that T"~, TW"~ and RW"~ are
similarly complete with respect to their expected semantics,
as a prelude to showin~ analogous conservative extension
results. So let L range over those three systems.
Now let a L model structure (L m.s.) be a
quadruple <K,O,R,*> with K a set, OEK, *a unary operation
on K, and R a ternary relation on K satisfying postulates
from the following as given below.
pl. ROab iff a = b
p2. Raaa
p3. Rabc => R2abbc
p4. R2abcd => R2a(bc)d and R2b(ac)d
p5. R2abcd => R2acbd (Pasch)
p6. a** = a
p7. Rabc => Rac*b*
p8. Raa*a
with the following definitions
Dfl. R2 abcd = df for some x (Rabx and Rxcd)
Df2. R2 a(bc)d = df for some x (Rbcx and Raxd)
Df3. a < b = ROab df
I. For TW"~: pl, p4, p6, p7.
II. 01 6 For RW : pl, p5, p, p7.
III. For T0
1: pl, p3, p4, p6, p7, p8.
IV. 01 6 For R : pl, p2, p5, p , p7.
For any (L) model structure~= <K,O,R,*> a
valuation V on M is a function from {atomic formulae
of£} x K into {True, False}. And for any such
25
valuation V, we define the interpretation I associated with
Vas a function from{£ x K} into {True, False}, satisfying
the following conditions:
Ip. I(A,a) = True iff V(A,a) = True, for any atomic
formula A.
I&. I(B&C,a) = True iff I(B,a) = True and I(C,a) = True.
Iv. I(BvC,a) = True iff I(B,a) = True or I(C,a) =True.
I+. I(B+C,a) =True iff for all b,c EK, if Rabc and
I(B,b) = True, then I(C,c) = True.
I-. I(-B,a) =True iff I(B,a*) =False.
II. I(IB,a) = True iff I(B,a) = False.
Then let us say that a formula A is true on an
interpretation I (on a model structure ~ = <K,O,R,*>) at
a point x (i.e., XEK) just in case I(A,x) =True. A is
verified by I on ~ just in case I(A,O) = True. Finally, a
formula A is L valid just in case for every L m.s. ~
and every interpretation I thereon, A is verified by I
on M -· We leave it to the reader to show that
Theorem 1.7.1. Lis consistent with respect to the L
semantics, i.e., for any formula A, A is provable in L
only if A is valid in L.
For the sake of notational convenience let us
write lA I for II(A,x) = Truel. We will then write X
lA = Fl for II(A,x) = Fl. Now we will proceed to show X
completeness along the standard lines, i.e., using L-
theories.
An L-theory is a non-null set of formulae
26
closed under adjunction and L-implication, i.e., if A is
an element of an L-theory and ~LA+B, then B is an element
of that L-theory. Taking L to be its set of theorems, a
theory S is regular iff L c S. S is prime iff AVBES
only if AES or BES. It is !-complete just in case for all
A, either AES or IAES; and it is !-consistent iff for no A
is it the case that AES and IAES. (Note that for any
L-theory S, Sis !-consistent iff Sis non-trivial.) And
S is !-maximal iff it is both !-consistent and !-complete.
Since AviA is a theorem of L, regularity and primeness
imply !-completeness.
Let us now say that an L-theory S is faithful
just in case it is closed under modus ponens, i.e., if
AES and (A+B)ES, then BES. Not all theories of the
contractionless systems are faithful. But
Fact 1.6.1. For any regular L-theory x, x is faithful.
27
Proof. Let x be an arbitrary regular L-theory, and
assume A+BEx and Adr.. Since 1-1
A+B+II(A+B) and x is
closed under L-implication, II(A+B)Ex. But ~L A+B~.A~B;
whence II(A+B)&(A+B~.A~B)Ex, since x is regular and
closed under adjunction. So by (boolean) Disjunctive •
Syllogism and the fact that x is closed under L-implication,
A~BEx. But on assumption AEx. So by a similar argument,
BEx as required.
The reader should take particular notice of this fact.
We will feel free to call upon it without reference in
what follows.
Now for any theorioox andy, let x0y = {CI there
is a B such that B+CEX and BEy}. Where there is no
danger of confusion, we let xy be x®y. And for any
L-theory S, an S-theory will be a non-null set of formulae
closed under adjunction and S-implication.
Now for any regular, maximal L-theory S, let
ST = {S-theories}. Note that since S is faithful, it is
itself an S-theory. As a prelude to completeness, we will
want some facts about ST. So in what follows, let S be
an arbitrary regular, maximal L-theory.
Fact 1.7.2. For all x,yEST, xyEST.
Proof. Let x andy be elements of ST. For any A,B and C,
28
(A+.B+CviC)EL. So since x is non-empty
and closed under 3-implication, whence under L-implication
since S is regular, (B+CviC)Ex, for all B. But y is
non-empty; so AviAExy by the definition®. Thus xy is non-empty
The by now standard argumenwutilizing Composition
and Prefixing, respectively, will show that xy is closed
under adjunction and S-implicatio~ to finish the proof.
~~~~--~~----
Fact 1.7.3. For all XEST, Sx = x; and in the case of
R01 and RW 01 , xS = x.
Proof. ~L A+A and ~ RWo A+.A+B+B, from which facts +
the reader can easily construct a proof.
For any XEST and any formula A, let [x,AJ =
{CI for some BEx, (B&A+C)ES}. Then
Fact 1.7 .4. For any x and A, [x,A]EST; and if Aix,
then xc[x,AJ.
Then let us define a ternary relation R on ST
as follows: Rabc iff ab £c. R2 is defined in the obvious
way. The reader can easily verify
Fact 1.7.5. R2abcd iff abc~ d, and R2a(bc)d iff
a(bc) ~ d. Further, R satisfies the
postulates from p2-p5 appropriate to L.
Of course, we are well on the way to doing the
typical canonical modelling. To accommodate * and Iv,
we will move to !-maximal S-theories. A few more facts
will put the needed machinery in good working order.
Fact 1.7.6. All maximal S-theories are prime.
Proof. Use De Morgan Laws for I.
Fact 1.7.7. Fov any !-maximal XEST, y = {AI-Aix} is a
!-maximal S-theory.
29
Proof. Choose an arbitrary !-maximal S-theory, say x, and
let y = {CI-cix}. We must show:
(1) y is non-empty. But since x is a !-maximal,
and thus !-consistent, S-theory, at least one of --p and
--lp fails to be in x. Whence y is non-empty on definition.
(2) y is closed under adjunction. Use Fact 1.7.6.
(3) y is closed under S-implication. Use
-Contraposition.
~
Next we prove a few lemmas to help us verify the
postulates when the time comes.
30
Lemma 1. 7 .1. For any x,y,zEST such that z is !-maximal,
if Rxyz, then there is a !-maximal WEST such that
Rwyz and x ~ w.
Proof. Choose arbitrary x, y and z satisfying the
conditions of the Lemma. Let X be the set of all w E ST
such that x c w and wy c z. X is non-empty (x is in it)
and partially ordered by c. Further, each chain of X is
bounded from above by the (possibly infinite) union of its
members. So by Zorn's Lemma, there is a maximal element of
X (maximal in terms of c, that is.) Let x ' be one such
maximal element. It will now suffice to show that x' is
!-maximal. Since z is !-maximal and hence !-consistent,
x' is !-consistent, since x'y c z. So it will suffice to ,
show that x is !-complete.
So assume for reductio that Aix' and IAix'.
By Fact 1. 7 .4 [x' ,AJ and [x' ,lA] are members of ST,
x'C[x',AJ and x'C[x',IAJ. So by the maximality of X~
[x',A]y ~ z and [x',IA]y ~ z. So let B,B',C, D,E,F be
such that BEx~ B'Ex: EEy, FEy, Ciz, Diz, (B&A+.E+C)ES
and (B'&IA+,F+D)ES. Then note that since S is regular,
((B&B')&(AviA)+.E&F+CvD)ES. Further AviAEx', whence so
is (B&B')&(AviA). But then E&F+CVDEx' and E&FEy. So
CvDEZ. But z is maximal, and hence prime by Fact 1. 7 .6.
So either CEZ or DEz which contradicts our assumptions .
• So x is !-maximal, which suffices for the proof.
31
In a similar fashion, one can prove
Lemma l. 7 . 2 . For all x,y,zEST such that y is !-consistent,
z is !-maximal and Rxyz, there is a !-maximal wEST such that
RXWZ and y C W.
Lemma l. 7 . 3 . For all w,x,y,zEST such that z is !-maximal
and R2w(xy)z, xy is !-consistent.
Proof. Let w,X,Y,Z be S-theories in accordance with the
lemma. Then w(xy) c z, by Fact 1.7.5. w is not empty,
so let AEW. z is !-maximal, so assume Biz. Then assume
for reductio that C&ICExy.
Now note that I-LA+.-B+CviC1 and 1-LCviC+-(C&IC). So
1-LA+.C&IC+B, whence (A+.C&IC+B)ES. So (C&IC+B)EW and BEZ,
which is absurd. So the proof is finished.
~
We are now ready for the first crucial lemma for
completeness. So let S be a regular, !-maximal L-theory.
Then let K = {xlx is !-maximal and XEST}. Let R be c c
the restriction of R(on ST) to K , and let a* = {AI-Aia} c
for each aEKc. Then
Lemma 1.7.4. Canonical Model Structure Lemma
<K ,S,R ,*> is an L.m.s. c c
32
Proof. SeK , R is a ternary relation on K by definition, c c c and Kc is closed under* by Fact 1.7.7. So it will suffice
to show that the appropriate postulates are satisfied.
Ad.pl. Right to left is immediate by Fact 1.7.3.
For left to right, use Fact 1.7.3, the !-completeness of a
and !-consistency of b.
Ad.p2,p6,p7 and p8, where appropriate,
The arguments are straightforward using Contraction for the
first and Double Negation, Contraposition and Reductio,
respectively, for the last three.
Ad.p3,p4,p5, where appropriate.
The arguments are similar in each case. We show only that
R~abcd ~ R~b(ac)d for TW0
'.
Choose a,b,c,deK and assume that R2abcd, i.e., c c
abc c d. We must show that for some xeK , R acx and - c c
Rcbxd, i.e., ac c x and bx c d.
On assumption, R;abcd and hence R 2 abcd by definition.
So by Fact 1.7.5, R 2b(ac)d, i.e., b(ac) c d. Then
by Lemma 1.7.3, ac is !-consistent. Recalling that d
is !-maximal, we see that bw c d and ac c w for some weKc
by Lemma l. 7. 2. But that is precisely what was to be shown,
so we are finished.
The other cases can now be left to the reader. (Use
Lemma 1. 7 .1 in place of Lemma 1. 7. 2 for the. other half of
p4, and Contraction for p3.) So the proof is completed.
33
Again let S be a regular, maximal L-theory, and
let M = <K ,S,R ,#> be as in the previous lemma. Now define c c
a canonical valuation V on M by: c -
Vc(A,x) =True iff AEX, for all atomic formulae A and all
XEK c.
Obviously, V is well-defined. So let I be the associated c c
interpretation. Then
Lemma 1.7.5. Canonical Interpretation Lemma
A iff AEx, for all formulae A and all XEK . X C
Proof. The standard argument, as in Meyer and Routley
73b, for instance, will do. (N presents no difficulties.)
The avid reader who actually wants to check the proof
will find Fact 1.7.6 useful for Iv; and Lemmas 1.7.1
and 1.7.2 will be handy for I•.
~
Given these two lemmas, the following will clinch
the completeness argument.
Lemma 1.7.6. Refutation Lemma
For any A, a non-theorem of L,there is a regular,
!-maximal L-theory of which A is not a member.
34
Proof. Let A be an arbitrary non-theorem of L, and let
X = {wlw is a regular, !-consistent L-theory such that
Aiw}. LeX, so X is not empty. By the standard argument,
X has a maximal member. Let x be one such. Then it will
suffice to show that x is !-complete.
First we show
Refutation Fact. IA•x
Proof. Assume for reductio that IAix. Then x c [x,IAJ,
whence [x,IAJ is regular and thus faithful. Further, by Fact
1.7.4 we see that [~,IAJ is an L-theory. So by the
maximality of x, either AE[x,IAJ or [x,IAJ is !-inconsistent.
But since IAE[x,IA], it is !-inconsistent in either case.
So assume ~ 1B&IA+C&IC, with BEx. But then it is
easy to show that l-1
B+.Av(C&IC), whence ~ 1B+A. But
since x is an L-theory and B•x, AEx. But this contradicts
x•X, thus proving the fact.
Then we show x is !-complete by reductio.
So assume Dix and IDix. Then let y = [x,DJ.
Again xcy, so by the Refutation Fact, IAEy. Now the argument
used for the Refutation Fact will show IDex, whis is absurd.
So the proof is finished.
~
That lemma puts us in the home stretch.
35
Theorem 1.7.2. L is complete with respect to the L
semantics, i.e., for any formula A, A is valid in the
L semantics only if ~LA.
Proof. By contraposition. So let A be a non-theorem of
L. By the Refutation Lemma, there is a regular, 1-maximal
L-theory of which A is not a member. Let S be such.
Then let M = <K ,S,R ,#> be as before. By the Canonical - c c
Model Structure Lemma, M is an L.m.s. Then let Vc and Ic
be as in the Canonical Interpetation Lemma. By that
lemma Ic(A,S) =False, since AiS on assumption. So A is
not L valid, and we are finished.
~
We sum up this section with
Theorem 1.7.3. Boolean Completeness Lemma
R0
1, Tol, RW 0 1, TW 0 1 are consistent and complete with
respect to their boolean semantics.
36
FOOTNOTES
1This is of course an instance of Ax21, which is stronger
than is needed with fusion in L, since A~BviB would do.
But without fusion, the extra strength is needed in
precisely this spot for the boolean T-systems.
37
SECTION 8. Conservative Extension
~~ 0 That R is a conservative extension of R · was
shown in Meyer and Routley 74. We can now extend this
result to T0
~, TW 0~ and RW 01 . Recent results in the study
of contractionless systems will ease the way in the latter
two cases. The technique used for the first case is an
adaptation of one used by Adrian Abraham in an independent
proof of the result of Meyer and Routley 74. So let us
first turn to T0 1.
The argument is semantic in nature. The full
semantics of T0 has yet to be published, but it turns
out to be much as one would expect from Routley and
Meyer 72. That is, a T 0
m. s. is Oj like aT m.s., except
for having
, pl . ROaa, and
pl". R2 0abc ~ Rabc
in place of pl. And a valuation must satisfy
Hereditary Condition. (HC). For any points x, y and
atomic formula p, if V(p,x) = True and x ~ y, then
V(p,y) =True. (x ~ y iff ROxy.)
All other relevant definitions are as usual. Notice that
adding - to T+ does not necessitate adding a set of
''regular worlds'' with special postulates to T+ m.s. as
was the case with E. (See Routley and Meyer 8+.)
38
The completeness proof of Routley and Meyer 1973a
can be straightforwardly adapted to show
Theorem 1.8.1. To Completeness Theorem
0
~ToA iff A is valid in the T semantics.
And let us say that a T0
m.s. is normal just in
case it satisfies the additional
pl'n. (Normality Postulate) 0 = o*.
Further let us say that A is normally valid just in case
it is valid on the normal semantics. Then we have
Lemma 1. 8 . 1. 0
T Normality Lemma
~ToA iff A is normally valid.
Proof. As in the Normality Lemma of Routley and Meyer
72b. Left to right is obvious by Theorem 1.7.1, since
every normal m.s. is an m.s. For right to left, assume 0
A is a non-theorem ofT . By Theorem 1.7.1, let
* 0 M = <K,O,R, >be a T m.s. with V a valuation thereon
such that I(A,O) = False, with I the interpretation induced , # , # * by V. Then let K = Ku{O }, let 0' = 0 and a = a ,
for all aEK, and let R' be defined by:
For all a,b,cEK,
' 1. R abc iff Rabc
2. RO'o'a iff ROOa
39
3 . RO'ab iff ROab
4. RaO 'b iff RaOb
5. R'O'aO' iff ROaO*
6. R'aO'O' iff Raoo*
7. R'abO' iff Rabo*
8. RO'O'O'
+ Now for all aEK, let a be a and a be a. And
,+ ,- * let 0 be 0 and 0 be 0 . The reader can confirm that , . + + - , R abc lff either Ra b c or a=b=c=O . Then note that
ROa-a+, and that Ro*oo* by p8. Further, one can assume
without loss of generality that SN1-SN3, below, hold for
M. (It is simple to show that they hold for all T"
canonical model structures.) With these facts in hand,
we leave it to the reader to show that~· = <K' ,0' ,R1 ,#>is a T"r
Now define a valuation v' on ~K such that for any
A, V'(A,a) = V(A,a), for all aEK; and V'(A,O') = V(A,O).
Let I' be the interpretation on~ associated with V'.
- + The reader will have already noticed in verifying ROa a
that ROo*o. The argument for Theorem 4 of section VIII of
Routley and Meyer 72b can then be adapted to show that for
all aEK and for any formula B, I'(B,a) = I(B,a) and if
I(B,O) =False, then I'(B,O') =False. (Use SNl below for
fusion.)
So! 'is a normal T"m.s. and I' (A,O') =False,
which suffices for the Lemma.
~
40
The Normality Lemma is strong enough for the
conservative extension argument for R01 . But T01 requires
something stronger. So let us say that a T0
m.s. is
super-normal if it is normal and additionally satisfies
SNl. c ~ d and Rabc * Rabd;
SN2. a~ b and Rcbd * Read; and
SN3. RaOb and Rbcd * Racd.
And let us say that A is super-normally valid iff it is
valid on the super-normal semantics. Then
Lemma 1.8.2. Super-Normality Lemma
~ToA iff A is super-normally valid.
Proof. Simply verify that the canonical model structures
satisfy SNl - SN3, and that this property is preserved
under the normalization construction.
The strategy for showing T01 is a conservative 0 0
extension of T will be to show that a normal T model
can be ''changed into" a T01 model that refutes the same
formulae in the original vocabulary.
The essence of the procedure is to add a new base
world and give it the properties that it must have.
Thinking in terms of canonical models, this amounts to
making a "copy" of the base theory, renaming it and fixing
R for it in the desired way. c
* 0 So let M = <K,O,R, > be a super-normal T m.s.
and let V be a valuation thereon with I the associated
41
interpretation. Then let !:'!--, = <K1 ,o1 ,~,#> where
(1) K1 = Ku {01 }
(2) # * a = a , for all aEK; # 01 = o1 , and
(3) (i) R1 abc iff Rabc, for all a,b,cEK;
(ii) ~b01c iff RbOc, for all a,b,EK;
(iii) R1 bc01 iff RbcO, for all b,cEK;
(iv) R1 o1 bc iff b = c, for all b,cEK1 ; and
(v) Rao1 o1 iff RaOO, for all aEK.
It is at (3)(1) and (3)(iii) that this technique
differs from the usual one. Pasch and p7 guarantee that
RabO iff RaOb* iff ROab* in normal canonical models, whence
a= b*. And of course ROab iff RaOb. But this is not the
case in general without Pasch. So we give R1 01 bc the
necessary property, and otherwise let 01 behave like the
original 0. Now to show that it works.
Lemma 1.8.3. Let ~and~~ be as above. ~~is a T 01
model structure.
Proof. As these things usually go, only p4 requires any
work to check. It obviously holds when a,b,c and d are
members of K. To finish the proof, one alternatively sets
a = 01 , b = o1 , c = 01 and d = 01 and shows that the
postulate holds in each case. We leave it to the reader
to check that supernormality gets you through.
* 0 Now let m = <K,O,R, >beaT .m.s., and let V be
a valuation thereon with I the associated interpretation.
Then let ~~ = <K1 ,o1 ,R1 ,#> be the corresponding T01 m.s.
as above. And define a valuation v1 on~~ as follows:
42
V1 (A,x) = True iff V(A 1x) = True, for all atomic formulae
A and all xEK; and
V1 (A,01 ) = True iff V(A,O) = True, for all atomic formula A.
And let r1 be the associated boolean interpretation. Then
Lemma l. 8 • 4 • For any A in the language of T0
, I1 (A,01 ) =
True iff I(A,O) = True, and for an XEK, I1 (A,x) = True
iff I(A,x) = True.
Proof. The two claims are proved simultaneously by
induction on the complexity of A. The argument follows
the familiar pattern and is left to the reader.
These two lemmas make easy work of
01 0 Theorem 1.8.1. T is a conservative extension of T ,
i.e.' f. ToA iff 0
f.ToiA, for any A in the language of T .
Proof. Left to right is immediate. Right to left is
by contraposition. So let A be a non-theorem of T0
• Then
by the Supernormality Lemma there is a super-normal T0
m.s.
~ and a valuation V and associated interpretation I on M
which falsifies A at O(of ~). Whence by Lemmas 1.8.3
and 1.8.4, A is not valid in the T01 semantics. So by
the Boolean Completeness Theorem of the previous section,
A is a non-theorem of T01 , which completes the proof.
~-
43
This strategy for showing conservative extension
will have to be amended for the contractionless systems.
The Normality Theorem for T depends upon the
fact that prime, regular T-theories are
-- complete since ~T Av-A. But of course (De Morgan)
Excluded Middle is not a thorem of RW and TW. Hence,
we do not have the liberty to set 01 = o1#, and will be
forced to introduce both a new 0 and 0*.
However, this task can be simplified if we take
into account some recent results from Slaney 8+a. There
the techniques of Meyer 76 are extended to contractionless
relevant logics showing them to be metacomplete. Slaney
utilizes a double metavaluation and notes that a more
''standard'' three-valued metavaluation could be used. 1
We shall not recapitulate the argument here, but merely
restate a useful corollary noted in that paper.
Slaney Lemma 1. RW and TW are prime.
As Slaney points out, his results easily extend
to RW0
, since o can be defined therein in the standard
way. Of course, this is not the case for TW, and all 0
of our attempts to extend these results to TW have
been barren to date. 2 So we abandon oat this point. 3
The RW and TW semantics can be specified from
the RW 01 and TW 01 in a similar fashion to that of To
from T01. That is, pl is exchanged for pl' and plN, the
Hereditary Condition is put in force, and, of course, Io
and II are dropped from the specification of an interpretation.
44
But given Slaney Lemma 1, RW and TW can be shown complete
with respect to their semantics by a straightforward
adaptation of the argument of Routley and Meyer 72, simply
defining * on theories as was done earlier in this section.
The reader can easily check this claim, so we move on to
reap the benefits.
Let K be the set of prime TW(RW) theories. Define c
a ternary relation Rc on Kc by Rabc iff a~ b ~ c, for
all a,b,cEKc. And for all aEKc' let a*= {AI~Aia}. '!'hen
define I : £xK ~{True,False} by I (A,x) = True iff AEx, c c c for all AE£ and XEK . Then c
Fact 1.8.1. <K ,TW,R ,*>is a TW model structure, and Ic c c
is an interpretation (associated with the obvious valuation)
thereon. Similarly for RW.
And of course
Lemma 1. 8 . 5 . Single Canonical Model Lemma
~TWA (~RWA) iff Ic(A,TW) =True (Ic(A,RW) =True).
We call <K ,TW,R ,*>the TW canonical model structure, and c c
Ic the canonical interpretation. Similarly, for RW.
We can now build a TW 01 (RW 01 ) model which will
refute all non-theorems of TW(RW). So let K1 * = Kcu{o1 ,o1 },
# * # '* and let a = a , for aEKc and set o1 = o1 and *# o, = o,. Then let R1 be the result of adding the following triples
to R : c
45
(i) <o1 ,a,a>, for all aEK1 ;
and in the case of RW 01,
( ii) <a,o1 ,a>, for all aEK1 ; and
(iii) # # <a,a ,o1 >, for all aEK,·
# And let ~~ = <K1 ,o1 ,R1 , >. Then
Lemma 1.8.6. M is a TW 01 model structure or a RW01
model structure, as the case may be.
Proof. Left to the reader. (As in Lemma, 1.7.3, but easier)
Now we will want two more results from Slaney 8+a.
Slaney Lemma 2. For all A ,B, ~ RW-( A+B) iff ~ RWA and
~ RW-B; and
Slaney Lemma 3. For all A,B, -(A+B) is not a theorem of TW.
For given these, we can define an interpretation on~~
in the expected way which will refute all non~heorems of
TW or RW, as the case may be.
So define a valuation v1 on ~~ as follows, for all
atomic formulae A (with 0 being TW(RW)):
l. v1 (A,x) = IC(A,x), for all XEKC;
2. v1 (A,D1 ) = Ic(A,O); and
3. V1 (A,o;) = Ic(A,O*).
And let I 1 be the associated interpretation on~~· We
now show
46
Lemma 1.8.7. For any formula A in the language of TW(RW),
(i) T (A,x) = I (A,x), for all XEK ; -, c c
(ii) I 1 (A,o1 ) = Ic(A,O); and
(iii) I1 (A,O~) = Ic(A,O*),
with I1 as above and Ic the canonical interpretation,
Proof. By induction on the complexity of A. The lemma
is guaranteed by definition of I 1 when A is an atomic
formula, and the cases for -,& and v are straightforward.
So assume A= B~c. We now proceed to show (i), (ii) and
(iii) by cases.
Case 1. Assume for (i) that XEK • It is then c
straightforward to show that I 1 (A,x) = Ic(A,x).
Case 2. It is straightforward on inductive hypothesis
that if Ic(B+C,O) = True, then I1 (B~c,o1 ) = True.
So to prove (ii), it will suffice to show the converse.
So assume that I (B~c,o) = False, which justifies the c
assumption that R Oxy and I (B,x) = True and I (C,y) = c c c
False, for specific x and y in K . Then by the Hereditary c
Condition, Ic(C,x) = False. So on inductive hypothesis,
I1
(B,x) = True and r1 (C,x) =False, which suffices to show
that I1
(B+C,0 1 ) = False, as required.
Case 3. It is again straightforward that if
Ic(B+C,O*) = True, then I9 (B+C,O~) = True. So to prove
(iii), we again show the converse.
Assume I (B+C,O~) = True. For TW, it is always
* the case that Ic(B+C,O ) = True, by Slaney Lemma 3 and
47
the Single Canonical Model Lemma. For RW, assume for
reductio that I (B+C,O*) =False. Then by I-, I (-(B+C),O) = c c
True. And by Slaney Lemma 2 and the Single Canonical Model
Lemma, I (B,O) = True and I (-G,O) = True. But then c c
Ic(c,o*) =False. So on inductive hypothesis I1
(B,01 ) = # # # # True and ~(c,o1 ) = False. But R0
1o
1o1 . So I
1(B+c,o1 ) =
False, contradicting the initial case assumption and
finishing the proof of the case and lemma.
------------------------
Given Lemmas 1.8.6 and 1.8.7 and the completeness
proofs of the previous section, we present without further prooJ
Theorem 1.8.2. TW0
1 and RW01 are conservative extensions
of TW and RW, respectively.
Notice that to this point neither t nor T have
been in the language. We shall actually want them, but
only as "placeholders'' in Gentzen systems for the Classical
or Boolean Relevant Logics. So we do want to know that
they can be added conservatively to these systems.
That question has already been answered in the
affirmative for R01 in Meyer 1979a. T is no problem in
48
any of the systems we are considering, since it can simply
be defined as pvlp. We now use the results of the previous
section to take care oft.
In the boolean semantics, t is added by the
interpretation clause:
It. t iff X = 0. X
And it is a straightforward matter to show that the semantic
extension is conservative, i.e.,
Lemma 1.8.8. For
L semantics iff it
Equally simple is
any t-free formula
is valid in the Lt
A, A is valid in the
4 semantics.
Lemma 1.8. 9. Lt is consistent with respect to the Lt
semantics.
Finally, we have
Theorem 1.8.3. Lt is a conservative extension of its
non-boolean counterpart without t.
Proof. t Let A be a non-boolean, t-free theorem of L .
By Lemma 1.8.9, A is valid in Lt, whence it is valid in
L by Lemma 1.8.8. So by Theorems 1.8.1 and 1.8.2,
it is a theorem of the non-boolean counterpart of L,
49
which suffices.
~
For ease of future reference, we sum up this
section with
Theorem l. 8. 4. Tolt, Ro1 t, TWolt and RWolt are
conservative extensions of To, R0
, TW and RW.
Note, however, that we have not claimed that the
boolean extensions with t are conservative extensions of
corresponding non-boolean systems. The following non-theorem
of R0
is valid in the TW 01 semantics: -tv(-t&t+A). So
Theorem l. 8 . 5 . TWo It •
o1t o1t o1t RW , T and R are not
conservative extensions of TWt, RWt, Tt and Rt, respectively.
Obviously, they are not conservative extensions of those
systems with fusion either.
It is also worth noting that the rule y
!-A 1--AvB 1-B
is not admissible in TW1 and RW1 . For -(p+p) v 1-(p+p)
is a theorem as an instance of boolean Excluded Middle.
And, of course, p+p is an axiom. But the following matrices
can be used to show that 1-(p+p) is a non-theorem of RW1
.
-+
3 2
1
0
3 J
3
3
3
3
2 - 1 -0 1
2 1
1 3
3 3
3
/'""' 1 2
"""/ 0
0 -0
0
1
3
3 2
1
0
50
The &, v and I matrices can be read from the Hasse
diagram above in the usual way. 3 and 2 are designated.
We leave it to the reader to verify that the axioms of RW1
are valid and that modus ponens and adjunction are
admissible rules. And 1-(p-+p) can be falsified by assigning
p the value of two, which suffices to substantiate the claim.
51
FOOTNOTES
l The three-valued metacompleteness argument for TW was
independently discovered also by Meyer and Martin.
2 Slaney does not discuss adding o to TW. We assume that
his attempts met a similar fate.
3of course this is no loss for RW, since o can be
reinstated by definition. Although o has turned out to
be technically important in the study of all relevant
logics, we do not think that this connective has a very
plausible, intuitive interpretation in systems of Ticket
Entailment. So we feel no great loss in not dealing 0
with TW .
4 of of of L still ranges over T , TW and RW .
52
SECTION 9. Semilattice Semantics
As a final preliminary, we present the semilattice
semantics for uR+ which will be used in the next chapter. u Formally, a R+ model structure is a triple <K,¢,v>
where K is a set, ¢EK (not necessarily the null set) and
v is a binary operation on K satisfying the following
postulates for all x,~ZEK:
pl. ¢vx = x
p2. XYX = X
p3. xv(yvz) = (xvy)vz
p4. xvy = yvx.
A valuation is defined on such a model structure
in the usual way. Given a valuation V on a model
structure, the associated interpretation I is a function
from £xK into {True,False} satisfying
Ip. I(A,x) = True iff V(A,x) = True, for all atomic A;
I&. I(A&B,x) = True iff I(A,x) = I(B,x) = True;
Iv. I(AvB,x) =True iff either I(A,x) =True or
I(B,x) = True; and
I+. I(A+B,x) = True iff for all y, if I(A,y) = True,
then I(B,xvy) = True.
Note that we assume £ to be without o and t, although
they can be added straightforwardly. We shall not be
53
concerned with them here. The interested reader is
referred particularly to Urquhart 73 where they are
dealt with in some detail.
Other semantic terminology can be brought forward
from the previous sections. Then we record from
Charlwood 81:
Charlwood Theorem 1. Semilattice Completeness
For any formula A, ~ A iff A is valid in the uR+ UR+
semantics.
Some variations on the uR+ semantics are discussed
in Urquhart 72, but Urquhart 73 contains a more in~depth
exploration. In particular, semantics are offered for a
positive semi-lattice system of Ticket Entailment, and
for corresponding contractionless systems. We name and
record them as follows.
uT+ semantics. A uT+ model structure is a
quadruple <K,¢,v,<;;>with K,¢ and vas before . .;; is a
binary relation on K satisfying
p6. .;; is transitive;
p7 . .;; is montane, i.e., if x.;; y, then xvz.;; yvz; and
p8. for all XEK, ¢ .;; x.
Valuations, interpetations, etc. are defined as before
except that I+ is replaced by
54
I +, I(A+B,x) = True iff for all y ~ x, if I(A,y) = True
then I(B,xvy) = True.
The semantics for uTW+ and uRW+ are specified by simply
dropping the idempotence condition p2 from the definition of
a model structure in the corresponding semantics with
contraction. And the semantics for pure implication and
implication-conjunction fragments are obviously specified
by dropping the irrelevant condition(s) on an interpretation.
With respect to these, Urquhart 73 gives the following
results:
Urquhart Theorem 1. The pure implication and conjunction
implication fragments of TW, T, RW and R are complete
with respect to their corresponding semi-lattice semantics.
Of course, since R4 is not a rule of the corresponding
u u fragments of R+ and T+' the above result applies to
them as well.
u u u Completeness results for TW+, RW+ and T+ are
unfortunately still lacking. However, it is interesting
to note that semi-lattices can be regained as model
structures for uRW+ by putting a disjointness condition
on I+, 1 as will now be shown.
u u Let d RW+ m.s. be as R+ m.s., i.e., they are
triples whose base set K is a semilattide with an identity
¢. Valuations will be as before. To define an associated
interpretation, we need to define a notion of disjointness:
for all x,yEK: XAy = ¢ (x is disjoint from y) iff
either (1) x = ¢ or y = ¢, or otherwise (2) there is no
ZEK such that z ~ ¢ and either zvx = y or zvy = x. We
should also note that xAy = ¢ iff yAx = ¢, and
XA(yvw) = ¢ iff XAy = ¢ and XAW = ¢.
55
Given a valuation V, the associated interpretation
I is a function from £xK into {True,False} satisfying the
conditions for atoms, & and v as before, and also
di~. I(A+B,x) = True iff for all yEK, if yAx = ¢ and
I(A,y) = True, then I(B,xvy) = True.
Other relevant definitions are assumed as before. And
we write rAx1 for ri(A,x) = Truel and r, Ai for I(A,x) = False.
We now use techniques similar to those of Meyer 77
to show
Theorem 1. 9 .1. For any formula A, A is valid in the
d-uRW+ semantics iff it is valid in the uRW+ semantics.
Proof. The remainder of this section is devoted to proving
the theorem. Naturally, it will be done in two stages,
left to right and right to left. The strategy will be
similar in both cases, using contraposition.
STAGE I. Left to right.
The proof proceeds by contraposition. So choose an
arbitrary formula, A, an arbitrary uRW+ m.s. <K,0,0>, and
an arbitrarary valuation V thereon with associated
interpretation I. Then assume as
Stage I Hypothesis (SIH) IA0
, i.e. I(A,O) = False.
It will suffice to show that there is a countermodel to
u A in the d RW+ semantics. Let O,a 1 ,a2 , ••• be the
(distinct) elements of K. 2 We now proceed to build a u
56
d- RW+ m.s. So let 1 2 n 1 2 n ai,al, ... ,al, ... ,az,az, ... ,az,···,···
be distinct entities, and let Kd be the closure under
binary set union of {¢}U{a~/l~i and l~j}. J
Then define a function f:Kd+K as follows:
l. f(¢) = 0;
2. f({am}) =a , 1 ~ m and 1 ~ n; n n
3. f({x 1 u ... uxn}) = f(x 1 )® ... ®f(xn)' for x1' ... ,xn
distinct singleton elements of Kd.
To see that f is well-defined, simply note that
each element of Kd is either ¢, a singleton or the
union of a finite number of such singletons, and that
® is a commutative and associative operation on K.
Then note that f is an onto function (a surjection),
and further
Fact 1. 9 .1. For all x and y elements of Kd such that
xny = ¢, f(xuy) = f(x)®f(y).
57
The point of the construction is as follows. In
the commutative monoid semantics, a (multiple) fusion of
a point x need not be x; and further, it may be doing
some work, specifically in refuting some A+B. So in
converting • into a semilattice join, we will want as
many ''copies'' of x as will be needed to give us disjoint
points whose union can do the work of the multiple fusion
of x. That the construction works as we want is
essentially recorded in
Fact 1.9.2. For all x,yEK and all zdEKd such that
f(zd) = x, there is a WdEKd such that f(wd) = y,
zdnwd = ¢ and f(zduwd) = x®y.
The proof of the fact is straightforward and left to
the reader.
So let ~d = <Kd,¢,v>. Obviously, ~dis a
d-uRW+ m.s. To complete the proof of Stage I, it will
now suffice to define an interpretation on ~d that
"mimics" I, so that A is false at ¢. But this is easy
to do. Simply define Vd on ~d thus, for all atomic
formulae Band all XEKd:
Vd(B,x) = V(B,f(x)).
Then let Id be the associated interpretation on ~d' And
let us agree to write ldBxl for 1 Id(B,x) = True1 and
rdiBxl for Id(B,x) =False. Then note
58
Fact 1. 9. 3. For all w,zEKd and all formulae B, if
f(w) = f(z), then dB iff dB . w z
Proof. By induction on complexity of B. The base step
is guaranteed by definition of Vd. Only the case for+
in the inductive step is shown since the others are
straightforward. So choose arbitrary D and C, and assume
as
Inductive hypothesis (IH). The fact holds for all formula
D' of complexity less than that of D+C.
Next choose arbitrary w and ZEKd such that f(w) =
f(z). It will now suffice to show·d~(D+C) iff d~(D+C) . w z
The argument for left to right is ana,logous to
that for right to left. So assume
Case Assumption (C). d~(D+C) , whence by I~+ assume z
yEKd' dDY, d~C and ynz = 0. yuz
And recalling that f(w) = f(z), by Fact 1. 9.2, let
y'EKd be such that f(yF) = f(y), wny' =¢and f(wuy') =
f(w)®f(y), i.e., f(wuy') = f(z)®f(y). But by C and
Fact 1. 9.1, f(z)®f(y) = f(zuy). But by C and IH, for
all u,vEKd,
dD and d~C u v
such that f(u) = f(y) and f(v) = f(zuy),
So dD , and d~C ,, whence ~(D+C) -which y wuy w
finishes the proo~
Now we can prove
Lemma 1. 9 • 1. For all XEK and all formulae B, (Bx
iff for all yEKd such that f(y) = x, dBx).
Proof. The proof of the lemma proceeds by induction
on the complexity of B. The base step is immediate on
59
the definition of Vd. As usual, only the case for+ in
the inductive step requires serious checking. So choose
arbitrary D and C, and assume as
Inductive Hypothesis (IH). The lemma holds for all E of
complexity less than that of D+C.
Then choose arbitrary XEK. For left to right, we
proceed by contraposition. So choose yEKd and assume
Case assumption 1 (Cl). f(y) = x and di(D+C) . y
By Cl, let WEKd' wny = ¢, dD and diG w yuw Then Fact
1.9.3 allows us to use IH to conclude that Df(w) and
ICf(yuw)" But by Fact 1.9.1, f(yuw) = f(y)0f(w).
So Df(w) and ICf(y)®f(w)' i.e., ICx®f(w)" Whence by
I+, I(D+C)x, as required.
Right to left also proceeds by contraposition.
So assume
Case assumption 2 (C2). I(D+C)x
Then by C2 and I+, let ZEK, D and IC = . Then choose Z X'"'Z
arbitrary yEKd such that f(y) = x. By Fact 1. 9 .2, let
WEKd,f(w) = z, xnw =¢and f(xuw) = f(x)0f(w). It is
then straightforward by IH that I(D+C) • Whence by y
Fact 1. 9 .3, for all yEKd such that f(y) = x, I(D+C)Y,
which completes the proof
Now by SIH, lA . 0
By definition off, f(¢) = 0.
60
So by Lemma 1. 9 .1, diA¢. u So A is invalid in the d- RW+
semantics. Which completes Stage I of the proof of
Theorem 1. 9 . 1 .
STAGE II. Right to left.
We again proceed by contraposition. So choose an
arbitrary A, an arbitrary d-uRW+ m.s., ~=<Kd,¢,v>, and an
arbitrary valuation Vd thereon with associated
interpretation rd. Then assume
Stage II hypothesis (SIIH). diA¢.
The construction this time is the dual of the previous
one. Two non-disjoint points in the semi-lattice are
doing no work in determining the values assigned by Id
to any formula at either of those points. So in defining
®on ''copies" of such points, say x and y, we want to be
sure that x®y likewise does no work. If one thinks of a
point of a model as the theory determined by the
interpretation and thinks of ® as the theory fusion of
the semantics of traditional relevant logics, the question
becomes that of what theory we can assign to the fusion
61
of theories which were non-disjoint points that will not
change any of the theories we already have (since they
already do what we want) - preferably one which can be
added singly to the set of theories and leave it closed
under ®. In these terms, the answer is obvious. The
trivial or absurd theory exactly fits the bill.
So let ¢,a 1 ,a2 , ••• be the distinct elements of Kd.
Then let 1, ¢', a 1 ',a2 ', ... be distinct entities, and
let K = {1, ¢', a 1 ', a 2 ', ... }. Then define® on K by: 3
1. 1®z = z6¢1 = 1, for all ZEK;
2. for x,yEKd'
(a) x'®y' = (xvy)', if XAY = ¢;
(b) otherwise, x'®y' = 1.
It is clear that ® is a well-defined binary operation on
K, and that M = <K,¢' ,®> is a uR+m.s.
So define a valuation V on Mas follows, for all
atomic formulae Band all XEKd:
1. V(B,l) = True;
2. V(B,x') = Vd(A,x).
and let I be the associated interpretation.
It is then easy to check that 1 behaves as desired,
i.e., for all formulae B,
Fact 1. 9. 4. I(B,1) = True.
And by a straightforward induction on the complexity of
A, one can then show
Lemma 1. 9 .2. For any formula B and any XEKd,
I(B,x') = Id(B,x).
From this lemma and SIIH, we get immediately that
I(A,~') =False, which completes the proof of Stage II,
and thus of Theorem 1. 9 .1.
~
62
63
FOOTNOTES
1This idea, originally due to Meyer, arises quite
naturally from consideration of the subscripted Gentzen
systems of Chapter 2.
2The notation builds in an assumption that K is
denumerable, but this is mere notational convenience.
The construction to come would work as well on the
contrary assumption. The same applies for the
construction at Stage II.
3where convenient, as in this definition, we treat '
as a function from Kd into K.
SWITlSXS NITZlNIT9 GITldiHJSHOS . Z HITldVHJ
64
SECTION 1. Introduction
This Chapter is devoted to the study of subscripted
Gentzen systems for the positive semi-lattice logics
~H+' ~T+' ~RW+ and ~TW+. This work is based on that of
Aleksandar Kron. Kron 78 and 80 present six (cut-free)
subscripted Gentzen systems GT+-W, GR+-W' G2T+' G2R+'
G1T+ and G1R+. Unlike the systems of Urquhart 73, they
are proof theoretic in character being based on earlier
work by Kron (Kron 73 and 76) on deduction theorems for
relevant logics.
The first four of these systems were claimed to
be equivalent to their axiomatic counterparts. An argument
for the decidability of the first two is presented in
Kron 78, and TW+ and RW+ are claimed to be decidable on
this basis. Kron's work was broadly well-conceived, we
think. However, it is seriously flawed.
These systems are set up in §2. Section 3 and 4
are then devoted to a critique of Kron 78 and 80,
"respectively. Most significantly we show there that the
above cited equivalence claims are false. In the process
we show that the proffered proofs that Cut and/or
modus ponens is admissible (for all six of the systems)
are unsound. And we show that the decidability argument
of Kron 78 is likewise faulty.
So in §5 we make a fresh start, presenting the
systems GuT+' GuR+' GuTW+ and GuRW+ and gathering
preliminary facts about them. The systems are fitted
65
with a ''place-holder'' to facilitate the proofs of Cut and
modus ponens in §7. We then show in §6 that the place
holders can be suitably eliminated, and that GuT+ and
GuR+ are in fact equivalent in an appropriate sense to
G1T+ and G1R+ of Kron SO.
developed independently.)
(However, our G-systems were
In §S and 9, we show that GuR+ is equivalent to
uR+. To do so, we utilize results from Charlwood So.
Although we believe that the other G-systems are equivalent
to the corresponding axiom systems, proofs (at least in
the style of §S and 9) must await proofs of semantic
completeness and of equivalence of the matching natural
deduction systems.
Finally, in §10 we show that GuTW+ and GuRW+
are decidable. As a matter of history, it was Kron 7S
which stimulated our interest in the decision questions
for contractionless relevant logics. And it was in
reflecting on the argument for decidability given in §10
of this chapter that we discovered the decidability
argument for LTW+ and LRW+ as presented in Chapter 3.
The problem of developing "natural" cut-free,
subscripted Gentzen systems for T+' R+' TW+ and RW+
remains open. This is a shame. For the subscripted
Gentzen systems provide, we think, a simpler proof theory
than do the Dunn-style systems of Chapter 3. Although the
complexity of nested extensional and intensional sequences
(or structural connectives) is simply traded in for the
complication of using subscripted formulae instead of
formulae, subscripts are just finite sets of integers.
And what could be more simple?
SECTION 2. Preliminaries.
A subscripted formula (sf) is an ordered pair,
the first member of which is a formula, and the second
member (the subscript) of which is a finite subset of
the natural numbers ( {1, 2, 3, ... } ) . (The Language L is
assumed without any propositional constants - for the
66
time being.) We use a,b,c,d,e,w,x,y,z with or without
subscripts (in the ordinary syntactic sense) and/or super
scripts as variables ranging over subscripts. In practice
we write rAal for <A,a>. Max(a) is the numerically largest
member of a, if a is not empty; otherwisei it is 0. A
structure is a possibly empty sequence of sfs, and W,X,Y and Z
(with or without scripts) are used as variables ranging thereove
Then a sequent is anything of the form X 1- A a, provided that ¢
does not occur as a subscript on an sf in X. Let us call X the
<ultecedent and Aa the consequent of such a sequent. (We speak of
an occurrence of a sequent, a structure, an sf, a formula
or of a subscript in the obvious way.) And we often
write rJ-AI instead of r~A~I. In context we often use x
as the union of all subscripts occurring in X. And we
use E with or without scripting as a variable ranging
over sequents.
Kron's systems can then be formulated from the
following set of axioms and rules. (Standard set theoretic
notation is assumed throughout.)
AXIOMS
Aa f-Aa (a f. ¢) •
STRUCTURAL RULES
c f-
K 1 1-
X,Aa,Bb,Y }- Cc
X,Bb,Aa,Y }- Cc
Xf-C c
X, A }- C a c
X,A ,Y }-C Kz 1- a c
X,Aa,Y,Ba f-Cc
LOGICAL RULES
& f-X,A f- C a c
W}-X,A ,A ,Y }-C a a c
X,Aa,Y f-Cc
provided (1) a f. ¢; and
(2)aS.c.
67
provided (1) a is a singleton
X,Ba f-Cc I-&
Xf-Aa XI-Ba
X,(A&B) }-C a c X,(A&B)ai-Cc XI-(A&B)a
v 1-X,A ,Y,Z f-C a c X,B ,Y,Z f-C a ~ c provided
X,AvB ,Y ,z f- C a c (1) xny = x.nz = ynz = ¢;
(2) If Y is non-empty, a is the only prefix occurring
therein;
(3) Z is idemdis
(4) max(x) ~ max(a) ~ max(z).
1- v
+ 1-
X 1- A
X I- A vB a
XI-B a X I-AvBa
XI-Aa Y,Baubi-Cc
X, Y, (A+B)b t- Cc provided ( 1) b 1' ¢;
(2) anb = ¢;
68
(3) max(b) ~ max(a).
1-+ X,Aa 1- 8aub
X I- (A+B)b provided (1) anx 1' ¢;
(2) max(b) ~ max(a).
C 1- anq W 1- are rules of permutation and contraction,
respectively. K1 l-and K2 1- are weakening or thinning
rules. So in applications of such rules we refer to the
permuted sfs, the contracted sf and to the sf weakened in
the obvious ways.
Formulations of various systems are given in
different sections to come. Provisos that certain
subscripts not be empty are not given by Kron, but it
is apparent that they do no harm. Where provisos for
disjointness and max restrictions are assumed together,
~ is effectively < . Idemdis is defined in the next
section. Also, provisos (2) and (3) of v 1- differ from
those of Kron, but they suffice. Indeed, Fact 2.3.2,
below, shows that (1), (2) and (3) are unnecessary in
Kron 78. Finally, where neither (1), (2), (3) nor (4)
is in effect on v 1- , it can be taken as
X,A 1-C X,B 1-C a c a c
X, (AvB)a 1- Cc
69
SECTION 3. Critique of Kron 78
The systems GT+-W and GR+-W presented in Kron 781
can be specified as follows. GT+-W has the axioms given
in the previous section and the following rules:
W f-, K2 f- and all of the logical rules as stated.
c f- '
GR -W + comes from the former simply by dropping the provisos on
max from the rules that have such.
A derivation of a sequent E in GT+-W (GR+-W)
is a finite tree, branching upward such that
(1) each node of the tree is a sequent (occurrence);
(2) the bottom node is (an occurrence of) E; and
(3) each node is either an axiom or follows from the
node(s) immediately above it by one of the rules
of GT+-W(GR+-W).
The notion of immediately above (below) is taken as
primitive. The notion of above (below) is its transitive
closure. And we say that A is provable iff f-A is
derivable.
Where Der is a derivation and o is a particular
occurrence of some sequent therein, the subderivation
determined by o is the derivation that one would get by
deleting from Der all sequent occurrences except o and
those above it. A sequent occurrence o (immediately)
precedes a sequent occurrence o' in a derivation just in
case o is (immediately) above o'; similarly for (immediately)
succeeds. And predecessor and successor are used in the
70
obvious way. Then a branch of a derivation is a sequence
ol, ... ,on of sequent occurrences such that o1 has no pre
decessors and on has no successors, and for all 1 < i < n, xi
immediately precedes oi+l" A branch segment is a subsequence
of a branch.
The weight of a derivation, say Der, is the length
of a longest branch, and the weight of a sequent occurrence
o in Der is the weight of the subderivation determined by
o. The conclusion (bottom node) of a derivation that has
weight n is said to be derivable with weight n.
Finally, the height of a sequent occurrence, say
o, in a derivation Der is the length of the branch segment
consisting of o and all sequent occurrences below it.
Kron makes the following major claims for the
systems given above.
Claim 1. Cut is admissible in an appropriate form.
(Theorems 3.1 and 4.1) 2
Claim 2. Modus ponens is admissible, i.e., if A and A+B
are provable, so is B.
Claim 3. GT+-W (GR+-W) is equivalent (in terms of provable
formulae) to TW+(RW+).
Claim 4. The systems are decidable. (Theorem 6.10)
We will show that his proofs of all of these claims
are unsound, and indeed that at least half of 1 is false,
and 2 and 3 are entirely false. To do so, we begin by
collecting a few facts about these systems.
In the first place
Fact 2.3.1. If X f-C is derivable, then c = x, i.e., c
c is the union of subscripts occurring in X.
Proof. On inspection, the axioms are such and the rules
clearly preserve the property.
The fact is elementary and will be taken for granted
hereafter.
Now let us say that a sequent l: is idemdis
(identical or disjoint) just in case for any subscripts
x and y having distinct occurrences in the antecedent
of E, either x = y or xny = ¢. A derivation is idemdis
iff all sequents occurring therein are. And a system
is idemdis just in case all of its derivations are.
Then let us call a sequent l: singular iff for
any subscripts x and y with distinct occurrences in the
antecedent of l:, if x = y, then xis a singleton. And
extend this terminology to derivations and systems as
before. Finally, a sequent (derivation, system) is
singularly idemdis just in case it is both singular and
idemdis.
Fact 2.3.2. GT+-W and GR+-W are singularly idemdis.
Proof. The axioms are singularly idemdis, and the rules
7l
72
preserve this property. The only rule that really requires
checking is+~. So choose an arbitrary_instance thereof,
say
E1 = X~A X Y,Bbux~Cc = E2
E = X,Y,(A+B)b!-Cc
and assume
Fact Assumption. E1 and E2 are singularly idemdis.
It will suffice to show that E is singularly idemdis.
If X is empty, b = bux and we are finished on the
fact assumption. If X is not empty, then bux is not a
singleton by provisos (1} and (2) of + !-. So by the fact
assumption, yn(bux) = ¢. But then ynb = ¢ and ynx = ¢.
Further, bnx = ¢ by proviso (2) of + ~. So we are finished
on the fact assumption.
~~
Note that provisos (1), (2) and (3) on vj-are
not needed to prove the above fact. So one can easily show
that they are not needed simply using the fact itself and
c ~ .
Then note that Kron's Theorem 2.1 is correct 3,
which is recorded formally as
Fact 2.3.3. If A is provable in GT+-W (GR+-W), it is
provable in TW+(RW+).
73
With these facts in hand, let us turn to Claim 1
given above. To show a Cut Theorem, one would have to show
that the subscripts occurring in a derivable sequent can
be re-written in certain ways, preserving derivability.
Kron attempts to accomplish this in his Theorems 2.3 and
2.4. However, if Theorem 2.4 were true from right to left,
other things being_equal, one could trade in singleton
subscripts occurring in the antecedent of a derivable
13equent for non-singletons. Proviso (1) on K2 1- leads
one to rightly suspect the contrary. For
Fact 2.3.4. It is straightforward to show that
(1) (A&(BvC)) { 1
} ~ ( (A&B)vC) { J}
is derivable in both GT+-W and GR+-W. But
(2) (A&(BvC)){ 1 , 2 } ~ ( (A&B)vC) { 1 , 2 }
is not.
Proof. Assume for reductio that (2) is derivable. It
is not an axiom, and by examination of the rules it could
only be the conclusion of an instance of & ~, f-v or
W ~ . And Fact 2. 3. 3 rules out & ~ and ~ v. So
(3) (A&(BvC)){I, 2 },(A&(BvC)){l,z} ~ ((A&B)vC){!, 2 }
must be derivable. But this is impossible by Fact 2.3.2,
ending the proof.
74
To finish the matter off formally, Kron's Theorem
2.4 from right to left can be stated as follows:
Kl. For any derivable sequent, say <A 1,(al-b)>, ... ,
<An,(an-b) > ~ <A,(a-b) >, such that for all l.;;; i.;;; n,
either ain b
derivable.
= ¢ or bcai, <A1 ,al >, ... ,<A ,a > is also n n
But of course, the previous fact presents a
clear counterexample to it. And we now know where to look
for trouble in the argument(s) for Claim 1, namely cases
which require subscript rewriting. Consider the following
case which arises in the putative proofs of Kron's Theorems
3.1 and 4.1. Assume that there is a derivation ending as
follows
(l) X,A ~B a aux
(2) X 1-A+Bx
and another ending thus
(3) y' ~ Ay Z B I- C ' xuy c
( 5) Y,Z,A+Bx I-Cc
It is required that
(6) y ,z,x 1- cc
be derivable. His argument for (6) runs as follows:
On the basis of (l) and the subscript re~writing theorems,
(l ) X,A ~ B y yux
is derivable- but we needn't go any further.
75
We have already seen that the suggested rewriting could
not always be done when a is a singleton but y is not.
Now if Theorem 4.1 were true in spite of its
proof being unsound, Cut would be admissible in GR+-W'
as in Claim 2. And if Claim 2 were true, Claim 3 would be
also. So producing a TW+ theorem (and thus RW+ theorem)
which is not provable in GR+-W (thus, nor in GT+-W)
will suffice to substantiate most of our own negative claims.
And the previous arguments and discussion tell us how to
find such: Look for a TW+ theorem for which any GR+-W
derivation would require that a suitable version of
Distribution be used as the right premise of + ~ .
Meyer suggested (p+qvr)&(p+p)+.(p&q+r)&(r+r)+.p+r;
and it works. However, p+q&(rvs)+.p+(q&r)vs
is simpler and will do as well, as the diligent reader can
easily verify.
Before turning to Claim 4, we should point out
briefly what is wrong with
the admissibility of modus
His claim that
the independent argument 4
ponens (pp. 72-73) .
(1) A+BEK iff A+BEJ and either AiK or BEK
for
is false and does not follow from his definition of Del.
If A+BEK, then A+BEJ and either A is not retained in J
(and thus inK) orB is not deleted from J. But that B
is not deleted from J doffinot imply that BEK, since it
could be the case that B is not deleted from J because it
wasn't in J to begin with.
76
With respect to Claim 4, the decidability argument
appeals to his Theorem 6.5. This theorem claims that if
A and B occur in the antecedent of a sequent of some X y
derivation with xny = ¢ and have descendants (in the
usual sense) in the antecedent of a succeeding sequent,
the prefixes of those descendantsare likewise disjoint.
But this is false, since such descendants may be one and
the same sf occurrence, as in the following (foreshortened)
derivation.
l: = (A+B) {2}' (B+C) { 3 }'A{4} 1- C{2' 3,4} 1-+
( A+B) { 2} ,(B+C) { 3} 1- (A+C) { 2 , 3} 1-+
(A+B) {2} 1- (B+C+.A+C) {2 } D{l,2} I-D{l,2}
l: , = (A+B) {2}, ((B+C+.A+C)+D) { 1 } 1- D{l , 2}
B+C{ 3} and A{ 4 } together satisfy the initial conditions
of the theorem in l:, but have the same descendant in l:'.
So Theorem 6.5 is false and the decidability
argument is undone, as we claimed. The argument of §2.10
below can be adapted to show that GT+-W and GR+-W are
decidable. But these systems are too weak to make that
fact interesting.
Finally, there are several other mistakes in the
article, most of which are minor (possibly typographical)
+I-
and can be easily corrected. We now point out and correct
one of them, since our claim for Fact 2.3.3 relies upon
it. The restriction on modus ponens for T+-W in the
definition of propf from hypotheses (p.62) should read:
77
if S = T+-W and bj I¢, then max(bi)>max(bj). His
Theorem 2.1 can then be proved along the lines suggested
there.
FOOTNOTES
1unless otherwise specified, referenc~in this section
are to Kron 78.
2The statement of Theorem 4.1 is slightly garbled
(probably typographical errors); but the intention is
reasonably clear in context and can be sorted out by
referring to Theorem 6.1 of Kron 80, in any event.
78
3However, his proof requires the minor correction supplied
at the end of this section.
4Kron's argument is an adaptation of the techniques of
Harrop 56, which are similar to the metavaluational
techniques developed independently in Meyer 76a. It was
Meyer who first pointed out the problem in Kron's
argument.
SECTION 4. Critique of Kron 80.
G2T+ and G2R+ (of Kron 80, of course) can be
formulated by dropping the contraction restriction
(proviso (2) anb = ¢) on+ ~and restrictions (1), (2)
and (3) on v ~from GT+-W and GR+-W' respectively.
Appropriate definitions carry over from the previous
sections. We refer to G2-systems in the obvious way.
Kron claims
79
Claim 5. Cut is admissible in an appropriate form in the
Gz-systems. (Theorems 4.1 and 6.1).
Claim 6. Modus ponens is admissible in the G2-systems.
(Theorems 5.1 and 6.3).
Claim 7. The G2-systems are equivalent to their
axiomatic counterparts. (Theorem 7. 4).
But given the form of weakening, namely K2 ~ again, one
would expect these claims to go the way of Claims 1, 2
and 3 of the previous section. And this is indeed the
case. For some of the subscript re-writing theorems
(Theorems 3.3 and 3.4) are false, just as their analogue
from Kron 78 was, as will presently be shown.
First we need a few facts. Obviously
Fact 2.4.1. Let G' be the result of deleting+~ from
G2 R+. The distribution sequents (1) and (2) of p. 73
of the previous section are derivable in G' iff they
so
are derivable in G2 R+.
And recalling that provisos ( l), ( 2) and ( 3) on v f- were
redundant in GR+-W, it is clear on inspection that
Fact 2.4.2. G' is a subsystem of GR+-W, i.e., for any
sequent E, E is G' derivable only if it is GR+-W derivable.
Sequent (l) referred to in Fact 2.4.1 is easily
derived in G2T+' hence also in G2 R+. So if either of
Kron's Theorems 3.3 or 3.4 were true for the G2 -systems,
sequent (2) referred to in Fact 2.4.1 would be derivable
in G2R+' and thus by that fact also in G'. But it is
not so derivable, by Fact 2.3.4 and Fact 2.4.2. So those
theorems are false (for the G2-systems), as was previously
claimed.
Note that Theorems 3.3 and 3.4 are not corollaries
of Theorem 3.2 as claimed in the paper. But they are
corollaries of the corrected Theorem 3.2 given in
Kron 81. So the argument above suffices to show that
the corrected theorem is itself false.
As one would expect, the arguments for Claim 5
fail in the same way as the arguments for the Cut Theorem
of Kron 78. Claims 6 and 7 are false. Again, it will
suffice to show that Claim 7 is false; and the same
counterexample will do. But since the G2 -systems are not
known to be decidable, the argument requires a bit more
work. So let us gather a few facts.
It is relatively straightforward to show
Fact 2.4.3. p +q&(rvs)+. p+ (q&r)vs is provable (in a
G2 -system) iff (p +q&(rvs)){l}' p{2
} r ((q&r)vs){1 2
} is . '
derivable.
Now define a function T from sequents to formulae
(of the classical sentential logic K) as follows:
(i) T(rA)=A, and
(ii) T(<A1 ,al), ... ,(An,an) r(A,a)) = A1& ... &An +A.
Then by a straightforward induction on the weight of
derivation of ~. we have
Fact 2.4.4. For any sequent ~. E is derivable in G2~ only
if T(I) is a theorem of K.
Next, let r be the set of all finite sequences
(including the empty sequence) that can be built up from
p +q&(rvs){l}'p{ 2 }' and q&(rvs){l, 2 }. Using Fact 2.4.4
above and Theorem 3.1, it is simple to show
Fact 2.4.5. For all X E r, neither rp nor X rp{l} is
derivable in G2R+.
Further, Facts 2.4.4 and 2.4.5 can be used in a
straightforward induction on weight of derivation to prove
81
let Y E 6 with Der a G2 R+ derivation of
Y,q{l, 2 } ~ ((q&r)vs){l, 2 }. Then let S be the rightmost
branch of Der. By inspection of the rules, either
r{l, 2 } ~r{l, 2 } or s{l, 2 } ~s{l, 2 } occurs inS. But again
by inspection of the rules (using Fact 2.4.7 for~~ and
noting the singleton restriction on K2 ) q{l 2 } occurs in '
the antecedent of every member of S - which is absurd.
Hence (i) cannot be the case. And a similar argument
will show that (ii) cannot be the case. So the proof
is finished.
~
Slb
83
with combined weight w iff for each l ~ i ~ n Xi ~<Ai,xi>
is derivable with weight wi and w = w1+ ... +wn. Let X be
<AI,aJ>, ... ,<An,an> and let hl,···•hn be the complexities
of AJ,···•An, respectively. The complexity of X is
hJ+ ... +hn.
We shall write rx I instead of rxl if X is not a
empty and each prefix occurring in X is a. And we say
that ~Xa is derivable just in case each formula occurring
in Xa is provable. And for any two structures X andY,
we write rx;y;l to indicate that each sf occurring in Y
also occurs in X.
Now let Y be <B 1 ,b 1>, ... ,<Bm,bm> with the following
conditions holding for any l ~ j ~ m:
(l) either aS bj or a n bj = ~; and for G1T+
(2) if an bj =~'then max(a) < max(bj);
0
and let Y be the result of deleting all sfs with the
subscript¢ from <B 1 ,b 1-a>, ... ,<B ,b -a>. Finally we state m m
Kron's Theorem 5.1. If U is of degree h, ~U and a a
Y!Ual ~Bb are derivable with combined weight w, (l) and
(2) are satisfied and all sfs occurring in Y with the
subscript a are members of U , then Yo ~<B,b-a> is derivable. a
The proffered proof of this theorem by double
induction breaks down in the following sort of case. Assume
there are derivations ending as follows:
84
l:1 = 1-A
E 2 = 1- AvB
and
1: 3 = X,AvB ,A I- C a a c X,AvBa,Ba 1- Cc = E4
E 5 = X,AvB ,AvB I- C a a c
with xna = ¢, just to make things simple. Note that the
complexity of AvB ,A is greater than that of AvB ; and, a a a indeed, the combined weight of derivation of E1 and E2
is greater than that of 1: 2 • So neither inductive hypothesis
(p.395) can be used to apply cut to E1,E 2 and 1: 3 as would
be required. Thus the argument fails.
Note that if 'x/Y/ 1 is taken to indicate that all
sfs occurring in Y occur in X the same number of times as
they occur in Y, the argument will break on the obvious
cases involving contraction. Indeed, all variations on
Kron's argument which this author has attempted have met
with similar fates. So it is best to start from the
beginning to investigate systems similar to the G2-systems.
85
SECTION 5. G-Systems
We will now give a first formulation of subscripted
Gentzen systems, GuTW+, GuT+' G ~W+ and G ~+'
So we let L range over the obvious systems, and refer to
the G-systems, GT-systems, etc. in the straightforward way.
Contractionless systems will be referred to as GLW, or simply
as the W-systems. With an eye to proving a Cut Theorem, we
introduce a structural analogue of t, namely I, as in
Belnap 8+. So let <I,a> be an sf for any non-empty subscript
a, and note that I is a structural constant, not a formula.
Otherwise, appropriate definitions are brought forward
from previous sections.
The G-systems can be formulated from the following:
AXIOMS
A a ~A a, for any formula A and non-empty subscript a.
RULES
Structural Rules
c 1-
K 1-
X,Z,W,Y ~ Cc
X,W,Z,Y ~Cc
X ~ Cc
X, Y ~ C c
w ~ X,Y,Y ~Cc
x,Y ~cc
provided (l) y ~ ¢ and ¢ does
not occur in Y; and
(2)ys_c.
The G-systems can be formulated as follows:
u G TW+ has all of the axioms and rules as stated,
but without proviso (2) on j- ,..,. ;
G0RW+ comes from G~W+ by dropping proviso (3)
from + 1- and 1- +, and adding proviso ( 2) on 1-+.
The systems with contraction come from their
W-counterparts by dropping proviso (2) from + 1- and
1- +, and adding I 1- with proviso (3) for GuT+ and
without proviso (3) for G0 R+.
Note that GW does not have I 1- as a rule, not
even with the restriction that anb = ¢, as would be
87
required. The reason for this is simple. The subscript
manipulation that seems to be required to handle I l-
in the proof of Cutdo not preserve derivability in those
systems. We can think of no reason why this should be.
So we suspect that there is some "book-keeping" fact
which has so far gone unnoticed.
The lack of I 1- forces G0RW to remain non-empty
on the left for the proof of Cut, and blocks the proof
of modus ponens for both W-systems. This situation does
no harm for the time being, since we are not yet in a
position to prove an equivalence for those systems even
with modus ponens. But it is inelegant and ought to be
rectified.
Without I 1- there is no point in having sfs with
I as their first member. So we banish them from the
86
Logical Rules
& f-
v f-
..,. f-
f--+
I f-
X,Aa f-Cc X,B f- C a c f-&
X,(A&B)a f-Cc X,(A&B) f- C a c
Xf-Aa Xf-Ba
Xf-(A&B)a
X,A f-C X,B f-C a c a c
X, ( AvB) a f-Cc
X f-A Y ,<B,aub> f- C a c
X,Y, (A-+B)b f-Cc
X,Aa f-<B, x ua>
X f- (A-+B) X
X,<A,aub> f- C c
X, Ib , A a f- C c
f- v X f-Aa
X f- ( AvB) a
X f- Ba
X f- ( AvB) a
provided (1) b i ¢;
(2) bna = ¢;
(3) max(a) ~ max(b).
provided (1) xna = ¢;
(2) X f ¢
(3) max(a) ~ max(x)
provided (1) a i ¢ and b i ¢;
(2) max(a) ~ max(b);
In an application of I f- , we say that the displayed
occurrences of <A,aub> and <A,a> have been weakened onto.
88
W-systems.
Now the following simple facts can oe established
by straightforward inductions on weight, which are left
to the reader.
Fact 2.5.1. Let E be a derivable sequent in GL.
Then
( 1) The antecedent of l: is not empty when L is
GURW
(2) The subscript of the consequent of l: is equal
to the union of the subscripts occurring in the
antecedent of E.
(3) The null set does not occur in the antecedent
of E.
(For the remainder of this Chapter, references to
sequents of a G-system are to sequents satisfying the
conditions of the above fact.)
Fact 2.5.2. ~ & is invertible in the sense of Curry 63,
89 i.e, if X ~(A&B) is derivable, so are X f-A and X ~B
X X X
Fact 2.5.3. !;-+is invertible,
i.e., if X f-A+B is derivable, so is X,A r <B,xua>, for x a
flome a satisfying the appropriate proviso(s) of f--...
Proving Cut and Equivalance will require the
ability to rewrite subscripts in certain, sometimes
peculiar, ways. The following strong rewriting Lemma
will help us prove the facts that are needed. For the
sake of convenience let us allow formula variables to
range over formulae and I for the rest of this section -
except where I would obviously not be permitted.
For any subscripts al•···•an,ai, ... ,a~ let
a= {a 1, ... ,a} and a' = {aj, ... ,a'}. Let a= Ua and n n
a'= Ua'. And let o andy range over the various unions of
the ai, i.e.,, over elements of {Ufll fl ~a and fl I </>}.
And . ,
where o lS a 1 u ... ua., for example, let o be J ,
similarly for y and y . Note that in a , ,
aw ... uaj;
degenerate case, o might simply be a 1 , for instance.
Then
Lemma 2.5.1. Rewriting Lemma
, , For any subscripts a 1, ... ,ah,a 1, ... ,an and any formulae
A,Al, ... ,A , if l: = <A1 ,a1>, ... ,<A ,a > r<A,a> is derivable n n n
. u u , , , ,. ln G R+(G T+), then so is l: = <A 1 ,a 1>, ... ,<Anp.n> f-<A,a >
provided, for any o and y,
(1) , ,
if o c y, then o c y , and
(2) for the Gl\r+ for alll <;; i,j
then max(ail > max(a~),
90
.;; n, if max Ca i l > max (a j )
Proof. By induction on weight of derivation of E. The
base step is straightforward, and only four cases of the
inductive step require checking, since only four
rules alter subscripts in any way.
Case 1. For K f- , assume that
E = <A 1,a 1 >, ... ,<A ,a >,<B 1,b 1 >, ... ,<B ,b >f-< A,a> n n m m
is derivable following from
E 1 = <A 1 , a 1 >, ... , <An , ar{> f- < A, a > .
, ' , , And let a 1 , ... ,an,b 1, ... ,bm satisfy the provisos of the
, , , lemma, whence b 1u ... ubm Sa. On inductive hypothesis,
, E 1
, , , = <A 1 ,a 1>, ... ,<A 1 ,a > f-<A,a >
n
is derivable. And by the previous observation, it is , ,
clear that E follows from E1 by K f-, which suffices.
Case 2. For f-+, assume there is a derivation
ending as follows
E1 = <A 1,a 1>, ... ,<A ,a >,<B,b>f-<C,aub> n n
E = <A 1 , a 1 >, ... , <~ , an> f- <B+C , a>
with boa=¢, of course.
the proviso of the lemma.
, , Then choose a 1, ... ,a satisfying
n , p
Obviously, a1, .. ,an,
9l
{max(a')+l} also satisfies the proviso. So on inductive
hypothesis,
, , , , , , <A 1 ,a 1>, ... ,<A ,a >,<B, max (a ) + 1> 1- <c, a u {max( a ) +1}> n n E I =
is derivable, from which r' follows by ~+. (The proviso's
on disjointness and maxima are obviously met.)
Case 3. For + ~ , assume that
E = <AI,aJ>, ... ,<A ,a >,<B 1 ,b 1>, ... ,<B ,b >,<A+C,c>I-<D,aubuc> n n m m
is derivable, following from
r 1 =<A 1 ,a 1>, ... ,<An,an>~Aa' and
l:2 = <BI,bl>, ... ,<B ,a >,<C,auc> ~<D,aubuc> . m m , , , , ,
Then choose a 1 , ••• ,a ,b 1 , ••• ,b ,c satisfying the proviso. n m
Obviously, r{ is derivable on inductive hypothesis.
Further, r' follows from it and
, , , , , , , . , E2 = <Bl>b 1>, ... ,<B ,b >,<C,a uc > 1-<D,a ub u c > m m
p
by + 1-. So it will suffice to show that 1: 2 is derivable,
for which it will suffice to show that , , , ,
bJ, ... ,b ,(auc),b 1 , ••• ,b ,(a uc) satisfy the proviso. m m
But this is clearly the case, since auc = a 1 u ... uanuc and
a'uc' = aju ... ua'uc', and since obviously max(a') ~ max(c') n
in the appropriate case; so we are finished.
Case 4. The argument for I ~is similar to that
of Case 3 and is left to the reader.
-~
This is quite a powerful rewriting lemma, which
has the useful
Corollary 2.5.1. For any structure X, for any
formulae B and C, for any subscript b such that bnx = ¢
and for any subscript d whatsoever, X,Bb ~<C,xub> is
derivable in GuR+(GuT+) only if X,Bd ~<C,xud> is
provided, in the case of GuT+' that for all c occurring
in X, max(c) ~ max(b) iff max(c) ~ max(d).
92
The corresponding rewriting lemma for the W-systems
is more complex.
Lemma 2.5.2. W-Rewriting Lemma
, , For any subscripts a 1, ... ,a ,a1, ... ,a and any formulae n n
A1 , ••• ,An,A' if
z: = <A 1 ,a 1>, ... ,<A ,a > i-<A,a> is derivable in a particular n n
, , , W-system, then so is <A 1 ,a 1>, ... ,<A a > ~<A,a >
n' n .
provided
(l) for any o, y, if o S y, then o' c y';
(2) for all 1 ~ i, j ~ n, if ainaj
and
93
= ¢, then a:na: = ¢; l J
(3) for GuTW+' for all 1 ~ i,j ~ n, max(ai) ~ max(aj)
only if max(a~) ~ max(aj)
Proof. The proof proceeds in a fashion similar to that of
Lemma 2.5.1 and is left to the reader.
It has the slightly weaker
Corollary 2.5.2. For any structure X, for any formulae
B and C, and for any subscripts b and d such that bnx = ¢
and dnx = ¢, X, Bb ~ <C,xub> is derivable in a particular
W-system only if X, Bd ~<C,xu:i>
for GuTW+ that max(b) ~ max(c)
all c occurring in X.
is - provided, of course,
iff max(d) ~ max(c) for
Some particular little facts, all corollaries of
the rewriting lemmas, will be useful - the first for the
Vanishing-t Theorem to come and the second for handling
I ~ in the proof of Cut.
Fact. 2.5.4. Let a 1 , ••• ,a ,b be subscripts such that for n
all 1 ~ i ~ n, either bna. = ¢ or b c a.. Then for any l l
formulae A,A 1 , ••• ,A , <A 1 ,a 1>, ... ,<A ,a > ~ <A,a> is n n n derivable in G only if <A 1 ,a 1-b>, ... ,<A ,a -b>
n n
J <A,a-b> is - provided in the case of the GT-systems that
for all l ~ i ~ n, max(a.) > max(b). l
Fact 2.5.5. Let a 1, ... ,am,b 1, ... ,bn,c be subscripts
satisfying the following conditions
(l) for all l ~ i, j ~ n and for all l ~ k ~ m,
max(bi) = max(bj) and max(bi) ~ max(ak);
(2) in the case of the W-systems, for all
l ~ i ~ n, for all l ~ j ~ m, bi nc =
ajnc = ¢.
(3) in the case for the GT-systems,
max ( b 1 ) ;;;. max ( c ) .
Then for all AJ, ••• ,~,BJ,.,.,Bn,D' if
<A 1 , a 1 >, ... , <A , a >, <B 1 , b 1>, .•• , <B , b > f- <D, au b> m m n n ·
is derivable in GL, so is <AJ,al>, ... ,<Am,am>'
<B1 ,b1uc>, ... ,<Bn,bnu c> f-< D,aubuc>.
We now have sufficient control over subscripts
to prove the desired Cut and Equivalence Theorems. But
first we will want to show that our placeholder, I, can
be done away with under the appropriate conditions.
94
95
SECTION 6. Vanishing-t
Technically, we will show a vanishing - I theorem,
but somehow it doesn't have the same ring to it. And I
"really" is t, just with a different name. Of course, we are
concerned here with only GuT+ and GuR+. So let L range only
over them in this section. And for the sake of notational
convenience, we will identify sequents which differ only in
the order of the constituents of their antecedents. 1
Lemma 2.6.1. Vanishing-t Lemma
Let X and Y be arbitrary structures such that for some
subscript Y
(1) all sfs occurring in Y have y as their subscripts; and
(2) for each subscript a occurring in X, either
any = ¢ or yea;
(3) for G0 T+' if X is non-empty, then max(a) > max(y),
for all a occurring in X,
Then if X,Y rCc is GL derivable, so is X- r<C,c-y> ,
where x- is the result of replacing a by a-y in X, for
each a occurring in X.
Proof. By strong induction on weight of derivation.
Note that if Y is empty, the lemma holds by Fact 2.5.5.
Thus the base step is straightforward, since I does not
occur in axioms. So choose an arbitrary j ~ 1 and assume
Inductive Hypothesis (H). For any X', Y', y' satisfying
the conditions of the lemma and for any c , if x', y' ~c c c
is derivable with weight j'<j, then x'- r <C,c-y'> is
derivable with some weight k ~ j'.
96
Next choose arbitrary X,Y,y and arbitrary Cc' and assum
Conditional Hypothesis CCl. l: = X, Y J- C c is derivable with
weight j and X,Y and y satisfy the conditions of the lemma.
(Let Derc be the derivation of E.)
It will then suffice to show that E- = x- i-<C, c-y > is
derivable with some weight k ~ j. The inductive step
proceeds by cases.
Case l. Derc ends with an application of I 1-,
say
E 1 = W, <A, au y >, Z 1- C c
E =W,A ,I ,ZI-C a y c
with X = W,A andY = I ,z. 2 Note that by choice of y in a Y
accordance with (2) of the lemma, yea or yna ~ ¢. So
a-y = (auy)-y = a, and ye(auy). Further, the subscripts
in z 1 satisfy condition (3) for the lemma when applicable.
So apply (H) to z 1 to finish the case.
Case 2. Derc ends with an application of + 1- :
E1 = W1 ,W2 i-<A,w 1 uw 2 > Z1,Z 2 ,<B,buw 1uw 2 > I-Cc = E2
E = W1 ,z 1, (A+B\,W 2 ,Z 2 1- Cc
with X= W1,Z 1,(A+B)b andY= W2 ,z 2 , thus y = w2 = z 2 .
Note that either w1ny =¢or yew 1 , and bny =¢or ycb.
It is clear by (C) that E1 satisfies the conditions of
the lemma. So by (H), Ej = W1 I- <A,w 1-w 2> is derivable
with appropriate weight. It is equally clear by (C) that
E? satisfies the required conditions, and that
97
(w 1nw2 =¢or w2cw 1) and (bnw 2 =¢or w2cb). So again
by (H), Ez = z1,.q,,(b-w2)u(w1-w2)>1-<c,c-w2> is derivable
with appropriate weight. Then note that in the case for
GuT+' max(w 1uw2);;;. max(b) by proviso (3) on+ 1-; but by
(C) -condition (3), in particular- max(b) > max(w2 ).
Whence, max(w 1);;;. max(b). So E- follows from E! and Ez by + ~ , and hence is derivable with appropriate weight -
which finishes the case.
Case 3. If Derc ends with an application of
any other rule, the argument is straightforward and
left to the reader.
~
Now let us say that a sequent is I-free just
in case I does not occur in it, and that a derivation
is I-free just in case each sequent occurring in it is
I-free. The Vanishing-t Lemma then makes short work of
Theorem 2.6.1. Vanishing-t Theorem
For any subscript a and any formula A, Ia ~ Aa is derivable
in GL iff there is an I-free derivation of 1- A.
Proof. Left to right is immediate by the Vanishing-t
Lemma and inspection of the rules. Right to left is by
induction on the weight of derivation of 1- A. If A is,
say, B+C, then Bbl- Cb is derivable for some b, by Fact
2.5.3. But by Corollary 2.5.1 or 2.5.2, as the case
may be, .q,,au{max(a)+U>I-<c, a'u{max(a)+l}> is also
derivable. Whence by I 1-, I ,.q,,{max(a)+l}> ~ a
<c,au{max(a)+l}> is derivable. So by 1-+, Ia ~ (B+C)a
98
is derivable, as desired.
Other cases are simple on inductive hypothesis.
So the proof is finished. ~
Before moving on, we should note that by inspection
of the rules
Theorem 2.6.2. &R+ and GuT+ are equivalent to Kron's
G1R+ and G1T+' respectively; i.e., for any sequent E
in which I does not occur, E is derivable in &R+(GuT+) iff
it is derivable in G1R+(G 1T+).
Hence the proofs to come of Cut and modus ponens for the
G-systems will vindicate Kron's analogous claims for his
G1-systems.
99
FOOTNOTE
1That is, we treat antecedents as firesets or multisets.
(See Meyer and McRobbie 1979.) Obviously, the rules are
taken to be likewise specified, so in essence there is
no rule C ~'or at least no need thereof.
2Technically, we are treating only one subcase of this
case. The other subcase is when the sf introduced by
I 1- is not a member of Y. But this subcase is
straightforward on (H) and is left to the reader.
100
SECTION 7. Cut and Modus Ponens
A Cut Theorem in the style of Theorem 4.1 of
Kron 80 could now be shown. But we prefer one along the
lines of Dunn 75. So we begin with an analysis of the
rules.
First, an inference is an ordered pair consisting
of a finite (non-null) sequence of sequents - the premises -
as left member and a sequent - the conclusion - as right
member. A rule is a set of inferences, and its members
are called instances thereof. A calculus or system is a
set of sequents - the axioms - together with a set of
rules.
Let o be a sf occurrence in a premise of an
inference Inf. The immediate descendant of o is the sf
occurrence in the conclusion of Inf which "matches" o in
the sense which is obvious from the statement of the rule
of which Inf is an instance, with the following exception.
If the conclusion of an inference is the same sequent as
one of the premises, then the immediate descendant relation
is determined by similarity of position. 1 An sf occurrence
in the premise of an inference is the immediate ancestor
of its immediate descendant. This terminology is taken
over in the obvious way to derivations. The relation of
ancestor is the transitive closure of immediate ancestor.
An sf occurrence in the conclusion of an inference
which is an instance of a logical rule other than I ~is a
101
principal constituent thereof just in case it is the "newly
introduced" sf occurrence. The immediate ancestor(s) of a
principal constituent are subaltern(s). All other sf
occurrences in a premise or in the conclusion of an
inference are parametric constituents, either premise
parameters or conclusion parameters, as the case may be.
Note that all immediate ancestors of a conclusion parameter
have the same subscriptas it unless it was weakened onto.
Now we shall say that a rule Ru is closed under
parametric substitution if it satisfies the following
conditions. Let Inf be an arbitrary instance of Ru. Let
a be a set containing some conclusion parametersin Inf and
all of their immediate ancestors, such that each conclusion
parameter in a has the same subscript, say x, and is not of
the form Ix nor is it an sf that has been weakened onto. And
for an arbitrary structure X, the union of the subscripts
occurring in which is of course x, let Inf[X/a] be the result
of substituting X (in the premise(s) and conclusion of Inf)
for each member of a. Then either InfiX/a] is an instance
of Ru or its "conclusion~ is the same sequent as its
"premise".
This definition is a minor modification of the
expected adaptation of the analogous definition of left
regularity in Dunn 75. It differs from the exf)ected only in
having the"· .. is the same sequent as ... " clause.
Now the following lemma can be verified by
inspection of the rules.
102
Lemma 2.6.1. Closure Under Parametric Substitution
The rules of GL are closed under parametric substitution.
Next we say that a Rule Ru is antecedent expandable
if it satisfies the following. Assume that for
1 .;;; i .;;; n, l:i is Xi~ <ci'ci > and l:n+l = X ~Cc, and that
Y,Bc,Z ~Dd is a sequent. Then suppose that
( 1 ) l: 1 , • • • , l: n
l: n+l
is an instance of Ru, with the displayed occurrence of Cc
in l:n+l parametric. Then
(2) l:l, · ·. ,1:' n l: , n+l
is an instance of Ru, where l:n+l'
1.;;; i.;;; n l:~ is either Y,Xi,Z ~Dd
whether or not C. is an immediate l
is Y ,X,Z ~ Dd and for all
or l:. depending on l
ancestor of C in (2).
This definition is the appropriate analogue of right
regularity in Dunn 75. It is not so general as the
definition to come in §3.3, but will suffice for our
purposes. For
Lemma 2.6.2. Antecedent Expandability
All of the rules of GL are antecedent expandable.
Now for the needed notion of rank in a derivation.
Let Der be a GL derivation of l:. Unless l: is the top
node of a branch of Der, let Inf be the inference (in Der)
of which I: is the conclusion, and let a be a set of
constituents of E. Then define the rank of a in Der as
follows. If a is empty, its rank in Der is 0. If a is
103
non-empty but contains no premise parameters, then the
rank of a in Der is 1. (In this case, a is in fact a
singleton. Otherwise let inf be
(1) El•···•E n
I:
with Der. the subderivation determined by E. for each ~ 1
1 ~ i ~ n, and let ai be the set containing all and only
immediate ancestors in Ei of members of a. (Note that
if all members of a were weakened in, then ai = 0.) Let k
be the maximum rank of any a. in its corresponding Der .. 1 ~
Then the rank in Der of a is k+l. And following BGD 80
we talk of the consequent rank of Der as the rank of
a in Der when a is the singleton containing the consequent
of the conclusion of Der.
Then, where a is a set of sf occurrences in Y(E),
let Y[X/aJ (E[X/aJ) be the result of substituting X in
Y(E) for each member of a. (When a is a singleton, say
{o}, we let E[X/oJ = E[X/a].)
We are finally ready for the Cut Theorem which can be
stated as follows:
Theorem 3.3.1. Cut Theorem
Let a be a set of occurrences of any sf Ax in a
structure Y. If XI-A and YI-C are GL derivable, then X C
so is YCX/aJ ~c .3 c
Proof. The proof proceeds as in Dunn 75 by a double
induction. So choose arbitrary m>O, j and k such that
j+k>O, and assume
104
Outer inductive hypothesis (OH). For all X,Y,C ,A and a C X
(a set of occurrences of A in Y), if the complexity of A X
is less than m, then if X 1- A and Y ~ C are derivable, so X C
is YCX/aJ ~c ; and c
Inner inductive hypothesis (IH). For all X,Y,C ,A of C X
complexity m, and a (a set of occurrences of A in Y), if X~ A X X
is derivable with consequent rank j', and there is a derivation
of Y ~C in which the rank of a is k' and j'+k'<j+k, then c
YCX/aJ ~C is derivable. c
Next choose arbitrary A with A of complexity m X
and arbitrary X,Y,C and a (a set of occurrences of Ax in Y),
and assume
Conditional Hypothesis (CH). DerL is a GL derivation of
X ~Ax with consequent rank j and DerR is a GL derivation
of Y ~ Cc in which the rank of a is k.
It will suffice to show that Y[X/a] 1-C is GL derivable. c
For the sake of notational convenience, let
L-premise = X ~A , R-premise = Y ~ C and Conclusion = X C
Y[X/a] 1-C. c
105
We now proceed by cases.
Case 1. k = 0, whence ~ is empty, CNote that j is
never 0.) Then Conclusion is L-premise and we are finished
by CH.
Case 2. k = l. There are three subcases.
Case 2.1. R-premise is an axiom.
Then Conclusion is L-premise, and we are finished
by CH.
Case 2.2. R-premise follows from a sequent, call
it E, by K ~. Then each member of a was weakened in and
is parametric, (but has no immediate ancestor;) whence by
closure under parametric substitution, either Conclusion
is E or follows therefrom by K ~ .
Case 2.3. R-premise follows by a logical rule
on the left, call it Ru. Then a is a singleton containing
a principal constituent of R-premise. There are two
subcases.
Case 2.3.1. j = l. Then L-premise is either
an axiom or follows by a logical rule on the right "matching"
Ru. In the first instance, Conclusion is R-premise,
whence it is derivable by CH. So assume L-premise follows
by a logical rule, call it Ru'. There are three subcases
one for each pair of matching logical rules.
(Note that A cannot be I.) In each case one applies OH
to the appropriate premises to get
Conclusion. The subcases for v ~ and & ~ are
straightforward and left to the reader. So we now show
that Kron's difficulties with~ ~ do not afflict us.
106
But since there are now no singleton requirements on any
of the rules, we easily got Corollaries 2.5.1 and 2.5.2,
which see us through as follows:
Suppose Der1 ends thus
El
L-premise
=
=
X,Aa l- <l3,xua>
X 1-A~B X
and that DerR ends thus:
E2 = Z f-A Y,<B,zu x>f- C z c
R-premise = Y ,A+B f- C X C
= E3
We must show that Conclusion = X,Y f- C is derivable. c
Now, in the GT-systems, note that by proviso (3)
on f- + and + f- , max (a) ;;;. max ( x) and max ( z) ;;;. max ( x) .
Whence, max(a);;;. max(b) and max(z) ;;;. max(b), for any b
occurring in X. (Note that z ~ ¢) And for GuTW+' by
107
proviso ( 1) of ~ + and proviso ( 2) of + f- , anx = znx = ¢.
So by Corollary 2.5.1 or 2.5.2, as the case may be
Ej = X,A f-B is derivable, since E1 is. Then, applying z ~z
OH to E; and E3 , it follows that Conclusion is derivable,
as desired.
We proceed in a similar fashion in the GR-systems
when z I-¢. So for GR+ assume z = ¢. Then by the
Vanishing-t Theorem and z2 , we have that
(3) I f-A a a
is derivable. Then by (OH) and (3) and Et, we get
(4) X,Ia 1-<B,xua>.
From which by the Vanishing-t Theorem, we get
(5) Xf-B . X
Now, since z =¢on assumption, x = xuz. So by (OH),
(5) and E3, we have X,Y ~ C as required. c
108
Case 2.3.2. j>l, whence the consequent of L-premise
is parametric. For the sake of simplicity, assume that
DerL ends with an instance of a single premised rule
Ru' as follows:
E
L-premise
=
=
W ~A X
X 1- Ax
Then E has consequent rank (in the subderivation
determined by it) of j-1. So by IH we find that
E' = W,Y ~C is derivable. But since A is parametric C X
and a is a singleton, according to Antecedent Expandability,
Conclusion= X,W 1-C follows from E' by Ru', as required. c
Cases where L-premise follows from a two-premised rule are
handled in a similar fa.shion, and are left to the reader.
Case 3. k>l. Then suppose DerR ends with the
following instance of some rule Ru other than I 1- :
E 1 ' • • • ' l:n '
(l .;;; n .;;; 2 ) R-premise = y 1- c
and let for 1 .;;; i .;;; n Deri be the subderivation determined
by Ei. Then let a' be the set of all conclusion parameters
in a, and for 1 .;;; i .;;; n let ai be the set containing all
immediate ancesters in Ei of members of a'. Then note
that for each such ai' the rank of ai in Deri is less
thank, whence j+(rank of a~ is less than j+k. So by IH
we see that l:.[X/a.J is derivable, l.;;; i.;;; n. Whence by l l
closure under parametric substitution, E = Y[X/a'J }C
is derivable by Ru, or r 1 is E. (Call the derivation of , ) , E Der . If a = a, we are finished. Otherwise r 1 I E
and l: and Conclusion are of the form W ,A a, Z 1--C c and
W,X,Z ]-Cc' respectively, with the displayed occurrence
of Aa being principal in the application of Ru ending
109
, Der . Whence we advert to case 2.3 to complete the proof.
Finally, assume that DerR ends with the following
instance of I } :
l:l = <D 1 ,d1>, ... ,<Dh,dh.>,A ... ,A ,<A,xuy>,z 1-C X X C
R-premise <D 1 ,d 1>, ... ,<Dh,dh>'Ax'''''Ax,Iy,Ax,z tCc
with a containing all and only the displayed occurrences
of A in R-premise, with max(x)nz = ¢ and with X
max(x) = max(di)' for each 1 ~ i ~h. And assume
L- premise = W,<l31 ,b1>, ... ,<13 ,b > 1-A n n x
with max(bi) = max(x) and max(x) nw = ¢, for each 1 ~ i ~ n.
Then note that this is not a case for the
W-systems, and that for GIJT+' max(x) ;;;. max(y).
by proviso (2) of I 1-. By CH r 1 and L-premise are
derivable, whence by Fact 2.5.5 so are
, E 1 = <D1 ,d 1uy>, ... ,<Dh,dhuy>,<A,xuy>, ... ,<A,xuy>,z l-Cc
and
L-premise' = W,<l3 1,b 1uy>, ... ,<l3 ,buy> ~<A,xuy> n n
llO
Then note that they are derivable in such a fashion
that the consequent rank of L-premise' = j and the consequent
rank of a'= k-1, where a 1 contains only the occurrences of ,
<A,xuy> displayed in E{. 4 So Conclusion follows from E1
and L-premise' by an application of Cut licensed by IH,
followed by applications of I ~ to restore the original
subscripts, followed by applications of W ~ to remove
excess occurrences of <l,y>.
The proof of the Cut Theorem is now completed.
~
It is worth noting that Vanishing-t was not needed
in this proof for the GT-systems. This is because proviso
(3) on + ~ guarantees that no premise of an instance of
that rule will have an empty antecedent.
The reader can now use the invertibility of ~+,
the Vanishing-t Theorem and the Cut Theorem ·to show
Theorem 3.3.2. Modus ponens is admissible in
u u G T+ and G R+.
We will now proceed to show that G~+ is equivalent
in an appropriate sense to uR+. The proof will use both
the semilattice semantics and Urquhart's natural deduction
system. The question of equivalence between the other
G-systems and their namesakes must be left open.
111
For neither the appropriate natural deduction systems nor
semantics are known to be equivalent to the other logics.
112
FOOTNOTES
1This convention is slightly broader than that of BGDBO,
p.35l; but the effect is the same. Strictly speaking,
we should either stop treating sequents as multisets, or
define a function to determine the relation of immediate
descendent, since otherwise "matching" really makes no
sense. But to do so would cloud what is essentially clear
terminology, and needlessly complicate the statement of
rather simple facts.
2 Note that although some instances of I ~ are also instances
of K ~' when structures are treated as multisets, this
causes no problems.
3Note that no sequent of the form X ~I is derivable. X
Further, x ~ ¢ in the statement of the Theorem when a
is not empty.
4This is a simple matter to check in the proofs of the
Rewriting Lemmas. Indeed, given E and E1 with E
derivable and E1 being the result of rewriting subscripts
in E according to one of the Lemmas, one can obtain a
derivation of E 1 by simply re-writing the subscripts
occurring in any derivation of E in an appropriate fasnion.
SECTION 8. UR c G UR + - +
We want to show that
for each theorem A of uR+.
u ~A is derivable in G R+
As a matter of notational
convenience, we will refer to G and to N, rather than
GuR+ and N~1R+ - in this section and the next, only,
The strategy of the proof will be in essence
to translate normal proofs in N into derivations in G.
Charlwood Theorems 2 and 3 (given below) will then
finish the work for us. This method neatly by-passes
the cumbersome rule R4. So let us introduce Urquhart's
N.
ASSUMPTIONS
113
Ax may be entered as a top node, for any x and for any A.
RULES
&I
vi
A B X X
A
A&B X
X
AvB X
&E
B vE X
AVB X
AvB X
A&B X
Ax
[A J X
. c xuy --c xuy
A&Bx
B X
[Bx]
. c xuy
+I [A{k}]
Bxu{k}
A+Bx-{k}
+E A+B X Ay_
A+Bxuy
With reference to these schemata, the principal premise
of +E is A+B ; of &E is A&B ; and of vE is AvB . All X X X
other premises are lesser. We refer to the lesser
premises of vE as AC and BC in the obvious way. x uy xuy
We stipulate that all assumptions of the form A{k}
above A+Bx-{k}' the conclusion of an application of +I,
are discharged at this application of +I. In the same
context, we refer to {k} as the deleted subscript. At
an application of vE with principal premise AvB , all X
undischarged assumptions of the form Ax (Bx) above and
including AC (BC ) are discharged at once. 1 And xuy xuy
let us say that the premises, where there is more than
one, of any application of a rule are side-connected.
A derivation in N is any finite tree whose nodes are sf
occurrences such that:
1. the tree branches upward and has the usual
properties;
2. no subscript is deleted more than once; and
3. if
of
Ay is a premise of +I discharging assumptions
the form B{k}' then every undischarged +I
assumption above A with subscript {k} is of y
the form B{k} ·
114
115
A proo! in N is a derivation in which every
assumption is discharged. And we use the normal notion
of the sub-derivation determined by a particular sf
occurrence in a given derivation.
Let us record a few facts established in Charlwood
80.
Charlwood Theorem 2. A is a theorem of u R+ iff A is
provable in N.
Charlwood Theorem 3. A¢ is provable in N iff it has a
normal proof in N; and a normal proof of A¢ has the
following properties:
(l) Subformula Property
Every formula occuring in it is a subformula of A; and
(2) Normality
No principal premise of vE is subscripted with ¢. 2
It is useful to think of derivations in N as
branching downward below a principal premise of vE,
until the vE assumptions are discharged. That is, we
want to think of the minor premises of vE, and all nodes
above them, as being ''below" the principal premise of
the application of vE discharging those assumptions.
So we introduce a notion of (immediately) precedes,
which is like that of (immediately) above, except that
the principal premise of an application of vE immediately
precedes the vE assumptions (discharged by that application
116
of vE), but does not immediately precede the conclusion
of that application of vE. The conclusion is the
immediate successor of the minor premises only. For the
sake of simplicity, we will have it that each node precedes
itself.
Then for any derivation der and for any sf occurrence
s therein, define the weight of 6 (wt(o)) in der as
follows:
(1) wt(a) = 1, if a is an +I assumption; otherwise
(2) wt(o) = j+l, where j is the maximum weight of the
immediate predecessor(s) of s.
And a derivation has weight m, if its conclusion has
weight m therein.
For any occurrence a of any sf, say A , in a X
normal proof, say of B, let X0 be the sequence, in some
assumed order, of undischarged +I assumptions preceding
a. And let X0 be the sequence, in some assumed order,
of undischarged vE assumptions. Then let g(o) be
X0,X0 ~Ax. We will want to show that g(a) is derivable
in G, for each such o. The following facts will be needed
to do so.
In the first place, the definition of proof
guarantees
Fact 2.8.1. Let o be an occurrence in a proof of an sf,
say A , and let y be an occurrence in the same proof of X
some By' such that x £ y. Then x is the union of the
117
subscripts occurring in X and each subscript occurring in 0
X0
is a subset of x. Further, each member of X0
is a
member of Y . y
Similarly, the reader can easily establish
Fact 2.8.2. Let a be the principal premise, 8 and y
the lesser premises and o the conclusion of an application
of vE in a proof. Let A and B be the vE assumptions X X
discharged by that application, whence X is the sequence a
of undischarged vE assumptions preceding a. Let Y8
,
YY and Y0 be the corresponding sequences for a, y and o.
Then there are W1 and W2 such that Y0 is X ,w 1 ,A andY ~ a x y
is X ,W2 ,B . a x . Further, an sf occurs in Y
0 iff it
occurs in either X , W1 or Wz. a
Finally, we can show
Lemma 2.8.1. For any A, for any normal proof P of A¢
in N and for any sf occurrence o in P, g(o) is derivable
in G.
Proof. By induction on weight of o in P. The base step
is straightforward. So choose an arbitrary k>l, and assume
Inductive Hypothesis. (H) For any A and any normal proof
P of A¢ in N, and for any sf occurrence y of weight
ll8
< k in P, g(y) is derivable in G.
Next choose arbitrary A and an arbitrary P, a normal proof
of A~ in N, and an arbitrary o of weight k+l in P. It
will suffice to show that g(o) is derivable in G. The
proof proceeds by cases.
Case 1. o is a vE assumption, say Bx. Note that
by Normality of Charlwood Theorem 3, x is not null.
Obviously, B ~B is derivable in G, from which X,, X X u
X0 ~Bx' i.e. g(o), follows by K~(if necessary) in light
of Fact 2.7.1. So the case is finished.
Case 2. o is the conclusion of +I. Let the
case be as follows:
[Bk]
y = <c(u{k}> o = B+C)
X
assuming without lose of generality that k I x. For the
sake of simplicity, let z = xu{k}. Using Fact 2.8.1 and
the definition of a proof, it is clear that Zy is X0,B{k}"
Further, any occurrence that is a vE assumption and
precedes o, likewise precedes y; so Zy = X0 . So by (H),
g(y) = x5,B{k}, x8
.f-<(B+C), xu{k}> is derivable in G.
And again by Fact 2.8.1, each subscript in X0 is a subset
of x. So we see that X0 , X0 ~ (B+C)x, i.e., g(o), follows
from g(y) by f- +, finishing the case.
Case 3. 6 follows by +E. Then let the case be
as follows:
By (H)
(1)
( 2)
a = (B+C)
6 =
B = y z
c yuz
y Y ~(B+C) , and a' a Y
Z ,z t- B y y z
are derivable in G. Then by (1) and Fact 2.5.3,
( 1,) Y ,Y ,Bb i-<c,yub> a a
is derivable, for some b such that bny = ¢ and b ~ ¢.
There are now two subcases.
Case 2.1. z = ¢, whence (2) is l-B. Then by
the Vanishing-t theorem,
( 2' ) Ib 1- Bb
is derivable in G. So by (1'), (2•) and Cut,
( 3) Ib 'y a 'y a f- <c' y ub>
119
is derivable in G. Recalling that bny = ¢, we see by the
Vanishing-t Lemma that g(o) is derivable as required.
Case 2.2 z I¢. Then by (1') ahd the rewriting
Corollary 2.5.1,
( 1 N ) Y , Y N , B 1- <.c, y uz> a ~ z
is derivable in G. Whence by (2), (1") and Cut, g(o)
is derivable as required.
Case 3. o follows by vE. Let the case be
a = (BvC) z
0 =
[B J z
<D,zuy> = y
<D,zuy>
[C J z
<D,zuy> = A
120
and for notational convenience let x = zuy. By Fact 2.8.1,
X = X = X = y A 0 Z ,WJ, for some structure W1. Cl
And by
Fact 2.8.2, Xy = z ,w2 ,B Cl z and XA = za,w3,cz' for some
W2 and w3 . So by (H)
(1) Z , W 1 , Z , W 2 , B l- D , and a · Cl Z X
(2) z ,w 1 ,z ,w 3 ,c 1- D a a Z X
are derivable in G. Then by K 1-
( 1,) z ,w1,Z ,w 2 ,W3,B 1- D , and a a Z X
( 2,) Z ,WI ,Z ,W2 ,W3 ,C 1- D a a Z X
are also derivable. So by v 1- ,
(3) Z ,W 1,Z ,W2 ,W3, (BvC) l- D a a Z X
is derivable in G.
But by (H) again,
( 4 ) Z ,z 1- (BvC) Cl Cl z
121
is also derivable. So by (4), (3), Cut and W 1-,
(5) Z ,W1,Z ,Wz,W3} Dx., i.e", X_,.,.Z ,Wz,W3 r D a u v a X
is derivable in G. But by Fact 2.8.2, g(o) either is (5)
or follows therefrom by W 1-, which finishes the case.
Case 4. If 6 follows by any other rule, the
argument is straightforward and can be left to the reader.
So the proof is finished.
We did use I in the proof of the Lemma, but Theorem
2.6.1 tells us that its use was inessential. So henceforward
we take all of the G-systems to be formulated without I.
The above lemma and Charlwood Theorems 2 and 3 now finish
the business of this section.
Theorem 2. 8 .1. A is a theorem of uR+ only if 1- A is
derivable in G.
FOOTNOTES
1Note that an application of vE discharging assumptions
of the form B which occur above a particular lesser X
premise, say y, does not discharge any assumptions
above y which are not (occurrences of) Bx.
122
2The definition of normal is given in Charlwood 80 on p.8.
However, only those properties of normal proofs just
listed are of interest to us.
3rn order not to confuse some relatively simple matters,
we are a bit lax here and below with the distinction
between an sf and an sf occurrence.
123
SECTION 9. UR . GUR + 15 +
As before, let us say that a formula A is provable
in G just in case ~A¢ is derivable. We need to show that
all formulae provable in G are theorems of u R+. Given
Charlwood Theorem 1, it will suffice to show that all
theorems of G are valid in the uR+ semantics laid out in
Chapter 1. A technique similar to that of Urquhart 73 is used.
The concept of the technique is very simple.
Sequents will be interpreted as metasemantic statements.
Then we will show that the interpretation of each derivable
sequent is true, with the desired conclusion then
being immediate. Notational conventions will be adopted
to make the actual technique match the concept in simplicity.
So on interpretation, we will let integers be
variables ranging over points of a uR+ model structure.
So let n1
, ••• ,nk be the elements of a· non-empty subscript,
say x. For any formula A, we let [A J be the statement X
that I(A,n 1 u ... unk) =True on m, where I is a variable
ranging over interpretations and m is a variable ranging
u + over R model structures. And let [A0 J be the statement
that I(A,¢) =True on m. Then let us agree to abbreviate
I[A J1 to 1 A 1 , for any subscript x. Then for any X X
non-empty structure X, say <A 1 ,a 1 >, ... ,<A ,a >, and for any m m
sf Aa' let [X ~AaJ be the statement that if
[<A 1 ,a! >J, ... , C<A ,a >J then [A J. And let [ ~ AJ be [A_,~ J, m m a "'
for any formula A. Finally, for any sequent X ~A a, let
124
t(X !-A) be the statement u that for all R+ model structures
m, for all interpretations I and for all points nJ,···>nk'
[X!- AaJ - where n 1 , ••• ,nk are all of the positive integers
occurring in X!- A . (With the obvious understanding when a a = ¢)
With these definitions, we can conveniently state
Theorem 2.9.1. For any sequent E, E is derivable in G
only if t(E).
Proof. By a straightforward induction on weight of
derivation of E.
Putting this Theorem and Charlwood Theorem l
together we get
Theorem 2.9.2. A is provable in G only if A is provable u
in R+.
And this theorem with Theorem 2.8.1 finishes the
equivalence:
Theorem 2.9.3. Equivalence Theorem
G is equivalent to u R+.
125
SECTION 10. Decidability
Now we turn to showing that the GW-systems are
decidable. The simplicity of structures - they are,
after all, just sequences - makes at least part of the
job straightforward. The overall strategy of the proof
will be to define a complete and effective proof search
procedure which builds proof search trees, then show that
such trees are finite via Konig's Lemma (see below).
For once, we actually have to worry (a little)
about the Finite Fork Property. W;i. th the rule 1- + as it
stands, a search for a proof of ~A+B, for instance, is
immediately infinite, since on the face of it, we must
check A{l} f-B{l}' A{z} ~B{z}'···· But this is easily
remedied.
Let G'LW come from GLW by changing proviso (1) to
(1') a = {max(x) + 1}.
Using the re-writing Corollary 2.5.2, it is simple to
show that GLW and G'LW are equivalent in terms of derivable
sequents. So we will not bother to distinguish between them;
we simply take GLW to be formulated with proviso (1')
instead of (1) on~+. And let us now say that max(x) + 1
is discharged by an application of~+.
To show that any branch of a proof search tree
is finite, we will take the simple approach of showing
that only finitely many distinct sequents can occur only a
finite number of times on a given branch. However, with
126
W ~as a rule, it is clear that we must reduce sequents
that can occur, i.e., put an upper bound on the number of
times that an sf can occur in a sequent.
But, again, the simplicity of our structures makes
this an easy task. We have already been taking structures
to be mul tisets. Given K ~and W 1- , nothing stands in the
way of going to sets simpliciter.
So let an s-structure be a (possibly empty) set
of sfs, and bring forward other appropriate definitions
in the obvious way. So structure variables now range over
s-structures. And let us simplify notation by dropping
parentheses from singletons when we wish and allowing commas
to stand in for the set union sign. We now officially
formulate G2 uTW+ and G2uRW+ by taking their axioms and
rules to be specified by the statement of the same for
GuTW+ and GuRW+, respectively. It is then straightforward
that
Lemma 2.10.1. Reduction Lemma
GLW and G2 LW are equivalent in the obvious sense.
Notice, however, that the premise and conclusion of an
instance of W 1- in G2LW are the same sequent. Likewise for
C 1-. So we take G2LW to be formulated without these rules.
If the elements of our structures were simply
formulae, the Finite Branch Property would be guaranteed
by the Reduction Lemma and the Subformula Property, below.
(Given Irredundancy, that is). For it would then be the
127
case that all sequents occurring on a branch of a proof
search tree would be built out of a finite number of
formulae, each of which could occur only a finite number
of times in a given sequent. Obviously, only a finite
number of distinct sequents could be so constructed. But
the constituents of our sequents are sfs. The Reduction
Lemma puts a definite, finite upper bound on the number of
times a formula can occur with the same subscript in
any sequent. But it gives no information about the number
of times that a formula can occur with a different subscript.
So the problem that remains for decidability is to
get an upper bound on the number of distinct subscripts
that can occur on a branch of a proof search tree. To solve
this problem, it will be helpful to think of relevant logics
as a mixture of the intensional and extensional, as Meyer
has often urged (in Meyer and McRobbie 79, for instance) -
or as being ''hybrid'', as it is put in Belnap 8+. 1 In our
context, & and v are extensional connectives, and + is
an intensional connective.
Now let us think of GLW along these lines. 2 Given
& ~, K ~and W ~, it is clear that two sfs with the same
subscript occurring in the antecedent of a sequent are
being structurally represented as being conjoined, that
is as being extensionally related. So the Subformula
Property and Reduction Lemma give us control over the
extensional complexity, if you like, of the antecedent of
sequents that must be considered for a proof search for a
given sequent.3
128
Now in terms of intensional complexity, f- + and
+ !- obviously indicate that sfs in the antecedent with
disjoint subscripts are being represented as being
intensionally related. GLW is not idemdis, and sfs with
"overlapping" subscripts muddy the water, but we now have
a fair hint for a measure of the intensional complexity
of a sequent.
First, for any formula A and subscript x, define
the degree of A (deg(A )) as the number of + 1 s occurring X X
in A. Then for any structures X, Y, let Y be an intensional
barometer of X, if
(l) Y ~ X; and
(2) all subscripts occurring in Yare pairwise disjoint.
And for any structure Y satisfying (2), define the indicator
of Y ind(Y) as the sum of the degrees of its elements. And
for any structure X, define deg(X), as the maximum of the
indicatorsof its intensional barometers. Obviously, if X
is empty, the degree of X is 0. Finally, for any sequent
E, let deg(E) be the sum of the degrees of its antecedent
and consequent.
Two questions remain to be answered. Can we get
control over the intensional complexity of sequents
occurring in a GLW derivation of a given sequent? If so,
will the combined control over extensional and intensional
complexity, as we choose to put it, yield a decision
procedure? The answers to these questions are ''Yes'' and
''Yes'' - respectively.
129
First, note that
Lemma 2.10.2. Degree Lemma
The rules of G2LW are degree preserving. That is, the
conclusion of an instance of any rule is at least as great
as that of any of its premises.
Proof. By cases.
Case l. Let the following be an arbitrary instance
of 1- +:
E1 = X,A f-<B,xua > a E = X 1- ( A+B)
X
Let Y be an intensional barometer of X with maximum
indicator. Then deg(E) = ind(Y) + deg(A) + deg(B) + l.
But since anx = ¢, deg(E 1 ) = ind(Y) + deg(A) + deg(B).
So deg(E) is greater than deg(E 1)
Case 2. Let the following be an arbitrary instance
of + }- :
E1 =Xf-A X
Z ,<B, 8!W X> f- C c
E = X,Z, (A+B)a 1- Cc
First note
= Ez
(l) Every intensional barometer of the antecedent
of E1 or of the antecedent of E2 is an intensional
barometer of the antecedent of E;
130
and
(2) anx = ¢, by proviso (2) of + 1- •
To show deg(E) ~ deg(E 1 ), let Y be an intensional
barometer of E1 with maximum indicator. Then
deg(E 1 ) = ind(Y) + deg(A ). But by (1) and (2), Y u {(A+B) } x a
is an intensional barometer of the antecedent of E, and its
indicator is ind(Y) + deg(A) + deg(B) + 1, which suffices.
Then let W be an intensional barometer of E2 with
maximum indicator.
(W- {<B, awx>}) u
If<B,aux>E W, then by (1) and (2),
{(A+B)a} is an intensional barometer
of the antecedent of E. So an argument similar to the one
immediately above will suffice. And if <B, aux > i W, we
are finished by (1). So the case is completed.
Case 3. All of the other rules are straightforward
on inspection, which finishes the proof.
~
It is important to note from the above proof that ~+
is degree increasing.
The Degree Lemma will yield a finite upper bound
on the number of subscripts that can occur in a branch of
a proof search tre~ which will work together with the
Reduction and Irredundancy Lemmas and Subformula Property
to yield the Finite Branch Property. So let us get the
decidability argument properly underway.
131
First say that a tree is irredundant provided that
no sequent occurs more than once on any branch of it.
Clearly
Lemma 2.10.3. Irredundancy Lemma
Any sequent E is G2LW derivable iff it has an irredundant
derivation.
Next, let us specify as follows a proof search
procedure which produces the LRW:(LTW:) proof search tree
of E for any sequent E:
(l) Enter E as the bottom node;
(2) above each sequent E' occurring with height k
(in the tree so far constructed) (a) enter
nothing, if E' is an axiom, (b) otherwise enter
(in some assumed order) all sequents EN such
that EN is a premise of some G2LW inference of
which E' is the conclusion and such that the tree
remains irredundant.
Obviously
Lemma 2.10.4. Effectiveness Lemma
The proof search procedure thus specified is effective.
Now let us say that a (possibly null) tree T'
is a subtree of a tree T iff it is the result of deleting
132
some (possibly no) sequent occurrences in T and all sequent
occurrences above them. Then by the Irredundancy Lemma
and the above specification:
Lemma 2.10.5. Completeness Lemma
The proof search procedure is complete, i.e., E is G2LW
derivable iff some subtree of the proof search tree of
E is a G2 LW derivation of E.
As usual, a tree has the finite fork property
iff it has at most finitely many nodes of any given height;
and a tree has the finite branch property iff each of its
branches contains at most finitely many nodes. And recall
Konigs Lemma. A tree is finite iff it has the finite
fork and finite branch property. (Konig 27)
Now, by inspection of the rules
Lemma 2.10.6. The proof search tree of any sequent E
has the finite fork property.
To show the finite branch property, we need a few more
facts and lemmas. Of course G2LW has the Subformula
Property, which we state as follows:
Lemma 3.8.6. For any inference of G2LW, every formula
occurring in a premise thereof is a subformula of a
formula occurring in the conclusion.
133
As was indicated in the earlier discussion, what
is needed now is control over the number of distinct
subscripts that can occur in the sequents of a branch of
a proof search tree. For clearly the move to sets in
the Reduction Lemma gives us
Fact 2.10.1, For any sequent E and subscript x, there
are at most finitely many sequents in which
(1) all formulae that occur are subformulae of formulae
occurring in E; and
(2) all subscripts that occur are subsets of x.
So for any branch 8 of a proof search tree, let
cs(8) (the conclusion subscripts of 8) be the union of all
subscripts occurring in the bottom node of 8. And let
ds(8) be the set of all positive integers discharged by
~+on 8. Then by the definition of a proof search tree
and inspection of the rules
Fact 2.10.2. Subscript Fact
All subscripts occurring in 8, a branch of a proof search
tree, are subsets of cs(8) u ds(8).
Now it is time for the Degree Lemma to do its job.
Lemma 2.10.5. Subscript Lemma
For any sequent " and branch s of a proof search tree for "'
134
cs(S) u ds(S) is finite.
Proof. Choose arbitrary E and S in accordance with the
lemma. Obviously cs(S) is finite, whence it will suffice
to show that ds(S) is finite. So assume for reductio
that ds(S) is infinite. Then clearly there are infinitely
many distinct applications of ~+ on B. Now recall that
~+is degree increasing. Then by the Degree Lemma,
deg(E) is infinite - which is absurd. So the lemma is
proved.
~~-
So straightaway we have
Lemma 2.10.6. Finite Branch Property
All G2 LW proof search treeshave the finite branch property.
Proof. By the subformula property, every formula occurring
on a branch is a subformula of that branch's bottom node,
call it r. And by the Subscript Lemma, cs(r)u ds(E) is
finite. So by the Subscript Fact and Fact 2.10.1, only
finitely many distinct sequences occur on any branch. Since
the proof search procedure guarantees that a proof search
tree is irredundant, every branch of such is finite,
as was required.
135
This Chapter can now be concluded with
Theorem 2.10.1. G2LW, and hence GLW, is decidable.
Proof. By the Effectiveness and Completeness Lemmas and
Konig's Lemm~along with the Finite Fork and Finite
Branch Properties.
136
FOOTNOTES
1we do not intend to suggest that there is any underlying
philosophical agreement among the authors cited.
2Although we have no proof, it seems very reasonable to
believe that the GW-systems are formulations of relevant
logics, namely, of uRW+ and uTW+' respectively.
3This much is a drastic oversimplification, but is close
enough to be of heuristic value.
137
SECTION 1. Introduction
The major result of this work, namely that TW+
and RW+ are decidable, is contained in this Chapter which
is a study of Dunn style Gentzen systems for TW+, RW+'
T+ and R+. The L-systems are presented in §2 where we also
spend some time to gather some basic facts. These systems
are actually ''hybrid", utilising extensional sequences as
in Dunn 75 and an intensional, binary structural connective
as in Meyer 76b and Belnap 8+. The reason for this is that
although the use of binary structural connectives has
definite notational advantages, sequences are much simpler
to deal with for the sort of extensional reduction needed
for the decidability argument.
In §3 the systems are fitted with appropriate
Cut Theorems. The desired equivalences are then proved
in §4, where we also develop a notion of representational
adequacy and show that the L-systems meet the criterion
thereof.
Next we begin the business of showing that the
contractionless systems are decidable. The strategy is
an appropriate modification of the one used for the
subscripted systems of Chapter 2.
In §5 we reformulate the systems to be empty on
the left, ridding ourselves of t via a Vanishing-t theorem
analogous to that of the previous chapter. Doing so is a
first step toward a suitable formulation to show
decidability for the contractionless systems, since the
138
rule t- ~is not degree-preserving. And in §6 we give
a final formulation which "denests" extensional sequences.
(The terminology is explained in §6.) This move facilitates
the extensional reduction required for the decidability
argument.
The proof of an appropriate extensional reduction
lemma is given in §7. But rather than moving to sets,
i.e., trading in denested E-sequences for sets, we simply
show that there is a finite upper bound on the number of
repetitions of a structure as an immediate constituent
of an extensional sequence. The reason for this is
purely practical. We will basically adopt the notational
conventions of Dunn 75, than which we can find no better.
And our own experience has been that with such notation,
it is far easier to check case-ridden arguments, as are
common in proof theoretic investigations.
Finally, in §8 we formulate an appropriate notion of
degree and then give a decision procedure for LTW+ and
LRW+. Given the equivalence results of §4, this suffices
to show that TW+ and RW+ are decidable. (The difficulties
involved in extending the argument to cover EW+ are
discussed in §3 of the final chapter.)
139
SECTION 2. Formulation 1, Definitions and Facts
The primary task of this section is to present
Dunn-type Gentzen Systems for R:t and T:t with and without
contraction, introduce vocabulary and gather a few facts.
Accordingly, unless otherwise specified L ranges over ot o~ ot ot
TW+' RW+ , T+ and R+ \'i'e will call
these Gentzen systems 'L-systems'. Since other formulations
will be presented later, superscipting on 'L' will be used
to distinguish the different formulations, e.g., L 1 T~ 0 ,
L2RW~ 0 , etc. Further, 'LT-systems', 'L 2 R-systems',
'LW-systems', etc. are used in the obvious way.
Of course, a base language built out of atomic
formulae in the usual manner using the connectives and
constants appropriate to the system(s) under consideration
is assumed throughout. From the wffs, nested
structures will be built as was indicated in 13.1.
We will have extensional sequencffias in Dunn 75, for which
our notation will be the same as there. And we will have
a binary intensional structural connective as in Meyer 76b
and Belnap 8+, which will be represented by a semicolon.
Then letting 'X', 'Y', 'Z' and 'W' with or without
subscripts and/or superscripts be structural variables,
a structure is defined recursively:
(1) A is a structure, for any wff A;
(2) if X and Yare structures, so is X;Y; and
( 3) if X1 , ••. •X"n are structures and n ;;;>- 2, then so is
E (X 1 , ... , Xn) .
140
Note that there are no null structures nor any
structures of the form E(X). With respect to the latter,
we say that our structures are denuded. This is the first
move toward simplifying the counting of structures, as we
will of course want to do for decidability. There is no
conceptual or technical loss involved, since structures
of that form carried no representational load. Unless
otherwise indicated, semicolons are taken to be associated
to the left. Parentheses are used to disambiguate notation
as necessary.
Structures of the form of (3) are called
extensional structures, extensional sequences, or
e-sequences. Those of the form of (2) are intensional
structures or i-structures. And we say that a structure
X occurs in a structure Y just in case
(1) X is Y; or
(2) Y is W;Z and X occurs in W or in Z; or
(3) Y is E(W1 , ... ,Wn) and X occurs in some Wi.
Of course, if X occurs in Y, then X is a substructure of
Y, and the appropriate occurrence(s) of X is/are a
constituent(s) of Y. (The notion of a particular occurrence
of a structure is taken as primitive. However, the
distinction between a structure and a particular occurrence
thereof is often ignored when it is not likely to cause
confusion.) And for 1 ~ i ~ n, the "displayed" occurrence
of Xi in E(X 1, ... ;Xn) is an immediate constituent thereof.
And for an intensional structure, say X;Y, we refer to X
as the left constituent and to Y as the right constituent.
141
A sequent is an entity of the form X ~A. &, with
or without scripting, is used as a variable ranging over
sequents. X is the antecedent and A is the consequent of
X ~ A. And Y occurs in the sequent l:, just in case it
occurs in its antecedent or consequent. The use of
constituent is similarly extended. And we say that
structures and sequents are built up from or built up
out of the wffs that occur in them.
The following structural analogue to the notion
of the length or complexity of formulae will be very
useful. So define the structural complexity (sc) of
a structure and of a sequent as follows
(1) sc(A) = 1, for any formula A;
(2) sc(X;Y) = sc(X) + sc(Y) + 1, for any structures
X and Y;
(3) sc(E(Xl•···•Xn)) = sc(X1 ) + ... + sc(Xn) + 1, for
any structures X1, ... ,Xn; and
(4) sc(X ~A) = sc(X) + sc(A), for any X and A.
Upper case Greek letters (except 'E') are used to
range over (possibly empty) sequences of symbols drawn from:
formula variables and parameters, 'E', left and right
parenthesis, the comma and the semicolon. For example
r 1 X r 2 ~A
represents a sequent. Further, a particular occurrence
of X is taken to have been displayed.
142
Now Formulation 1 of the L-systems can be given
from the following set of axioms and rules. Two-sided
rules are indicated by •~•
AXIOMS
A 1- A
RULES
Structural Rules
r 1x r 2 1- c r 1 E ( x1
, ••• , Y , w, ... , x ) r 2 1- c Ce 1- n , N ;;. 0 !
r 1 E(X1
, •.• ,W,Y, ... ,Xn)r 2 1- C Ke 1-
r 1E(X,Y) r 2 1- C
r 1E(X,X)r 2 1- c We 1-
r 1 X r 2 1- c
ee 1- r 1E(X 1 , •.• ,E(Y 1 , ••• ,Ym), ... ,Xn)rz 1- C""
r 1E(X 1 , ••• ,Y 1 , ••• ,Ym, ... ,Xn)r 2 1- c, n;;. 1 and m;;. 2
r 1 (X;Y)r 2 1- c B- i I-
r 1 ( (X;Y) ;Z)r 2 1- c CIH
r 1 (Y;X)r 2 1- c r 1 (X;(Y;Z))r 2 1- c
r 1 (X;(Y;Z))r 2 1- c r 1 (X;(Y;Z))r 2 1- c Bi 1- B, . 1-
r 1 (X;Y;Z)r 2 1- c l r 1 (Y;X;Z)r 2 ~ c
& ~
v ~
_, ~
0 ~
t ~
Logical Rules
r 1A r 2 ~c
r 1A&B r 2 ~ c
r 1B r 2 ~c
r 1 A&B r 2 ~ c
r 1A r 2 ~c r 1 B r 2 ~c
r 1AvB r 2 ~ c
Y~Ar 1Br 2 ~c
r 1 (A+B;Y) r 2 ~c
r 1 (A;B) r 2 ~ c
r 1 AoB r 2 f-C
r 1x r 2 ~c
r 1 (t;X) r 2 ~c
~v
~-+
~ 0
t- f-
X f-A X f-B f-&
X f- A&B
X ~A X ~B
X~AvB X~AvB
X; A ~B
X ~ A-+B
X ~A Y ~B
X;Y ~AoB
r 1 (t;X) r 2 ~c
r 1 x r 2 ~c
The axioms, all of the logical rules and Ke ~,
143
We ~ , to
LTW+
Cie ~ and ee f- are common to the L-systems. To get
add Bi f- and B' i ~ . to For LRW+ replace B'i f- by Cii f-
and B-il-instead. And for t~ to , . LT t and LR+ add Wl 1- to the
appropriate contractionless system.
The reader familiar with Dunn 75, Meyer 76b and/or
Belnap 8+ will feel at homewith this formulation, even though
144
some of the rules are slightly different. This form of
We ~ is demanded for preserving well-formedness, since our
structures are denuded. The choice of logical rules for &
is discussed later in this section, as well as the superfluity
of ee ~. And the role oft-~ is discussed in §4. Now
some definitions are wanted, before a few facts are collected.
In the first place we should re-establish some
familiar terminology. Ke ~ is of course an extensional rule
of weakening, and we speak in the obvious way of a structure
having been weakened in. Ce ~ and CTi ~ are Permutation
rules, and We~ and Wi ~ are Contraction rules. Again we
speak of a permuted structure and of a contracted structure.
A derivation in LL is a finite tree branching
upward with the normal sorts of properties, and a proof of
A is a derivation of t ~A. We take the notion of a sequent
(occurrence) being immediately above (below) another sequent
(occurrence) as primitive. Being above (below) is the
transitive closure of immediately above (below). So where
Der is a derivation and x is a particular occurrence of some
sequent therein, the subderivation determined by x iS the
derivation that one would get by deleting from Der all
sequent occurrencffiexcept x and those above it. A sequent
occurrence x (immediately) precedes a sequent occurrence
y in a derivation just in case x is (immediately) above y;
similarly for (immediately) succeeds. And predecessor and
successor are used in the obvious way. Then a branch of
a derivation is a sequence x 1 , .•. ,x of sequent occurrences n
such that x 1 has no predecessors and xn has no successors,
145
and for all 1 < i<n, xi is immediately above xi+l' A branch
segment is a subsequence of a branch.
The weight of a derivation, say Der, is the length
of a longest branch, and the weight of a sequent occurrence
x in Der is the w.eight of the subderi vat ion determined by x.
The conclusion (bottom node) of a derivation that has weight
n is said to be derivable with weight n. The concept of
weight is an important one, since many arguments to follow
will be by induction on weight.
Finally, the height of a sequent occurrence, say s,
in a derivation Der is the length of the branch segment
consisting of s and all sequent occurrences below it.
Now to gather a few facts. In particular we want
to show that various logical rules are
invertible (cf. Curry 63). In each case, the
proof is by a straightforward induction on weight of
derivation, which is generally left to the reader. Some of
these facts will be useful in what follows. Others are
given just out of interest.
Fact 3.2.1. o ~ is invertible, i.e.,
r 1 AoBfz 1- C is L1L derivable only if r 1 (A;B)r 2 1- C is.
Fact 3.2.2. 1- + is invertible.
Fact 3.2.3. vf- is invertible, i.e.,
r 1AvB r 2 I-C is derivable only if both r 1 A r 2 1-C and
r 1 B r 2 1- c are .
146
Fact 3.2.4. ~ & is invertible.
Similar claims for ~ v, f- o, -+ f- and & f- will not be
forthcoming, mainly because they would be false. But
there is an analogous fact for & f- in its Ketonin style
(see Curry 63, p.20l). Dunn 75 uses Ketonin & ~ and non
Ketonin f- & , but they do not fit so nicely with the sort
of ''canonical forming" to be done later on e-sequences.
However, they are admissible.
Fact 3.2.5. If X f-A and YI-B are derivable, so is
E(X,Y) f-A&B.
Proof. Use Ke 1- twice, then f- & .
Fact 3.2.6. If r 1E(A,B)r 2 f-C is derivable, so is
r 1 A&Br 2 ]-C.
Proof. Use & f- twice, then We f- •
Fact 3.2.7. The inverse of the Ketonin form of & 1-
is admissible.
And while we are at it, the two following facts
are worth explicit note.
Fact 3.2.8. B'i ~ is admissible in the
LR-systems.
So by inspection of the rules and Fact 3.2.8,
it follows that
Fact 3.2.9 The LT-syste!Jl3·are subsystems of the • ot ot • LR-systems, i.e., E lS derivable in LT+ (LTW+) only lf
. ot ot it is derivable ln LR+ (LRW+ ).
Before moving on to more serious business,
it should be noted that L 1L does not have a strict
subformula property as is common for Gentzen systems.
Obviously, t- ~is the culprit. It has instead
Theorem 3.2.1. Approximate Subformula Property.
Let E' be a sequent occurring in a LlL derivation of
some sequent E. If A occurs in E', then either A is
tor A is a subformula of some formula occurring in E.
But we defer a discussion of this fact until §5,
in order to deal with more immediately pressing matters.
147
148
SECTION 3. Cut Theorem
As we said before, Formulation l was chosen for
relative ease in proving Cut and the desired equivalences.
The business of this section will be to give definitions
and an analysis of the rules that allow us to state and
prove an appropriate Cut Theorem. We begin with an analysis
of the rules as in §2.7 . So of course, the analysis is
along the lines of Dunn 75 and BGD80.
First, an inference is an ordered pair consisting
of a finite (non-null) sequence of sequents - the premises -
as left member and a sequent - the conclusion - as right
member. A rule is a set of inferences, and its members are
called instances thereof. A calculus or system is a set
of sequents - the axioms - together with a set of rules.
Again, we define the immediate ancestor(s) of a
formula constituent of the conclusion of an instance of
any rule of L1L by reference to the statement of the rule, as
in §2. Recall from §2.7 our ad ho~ convention for instances of
a rule in which the conclusion is the same sequent as the
premise, and that this convention is somewhat broader than
that of BGD80, p.351. The converse notion of immediate
descendant is also taken as defined.
Premise and conclusion parameters, principal and
subaltern are defined in the obvious ways analogous to the
definition of Chapter 2. But note that the occurrence of t
in the premise of the instance <t ;p )- p, p f- p> of t- !-is
a subaltern.
Now we will say, as in Chapter 2, that a rule Ru
149
is closed under parametric substitution just in case the
following conditions are met. Let Inf be an arbitrary
instance of Ru, and let a be a set containing only some
conclusion pararnetersof Inf and all of their immediate
ancestors. Then for an arbitrary structure X, let
Inf[X/aJ be the result of substituting (in the premise(s)
and conclusion of Inf) X for each member of a. Then
either Inf[X/aJ is an instance of Ru or its ''conclusion''
is the same sequent as its "premise". Note again that
the definition is a minor modification of those of AB75
and BGD80. We see straightaway
Lemma 3.3.1. Closure under parametric substitution.
The rules of LL are closed under parametric substitution.
Proof. Verification is relatively easy on inspection of
the rules.
Next, we say that a rule Ru is antecedent
expandable if it satisfies the following. Assume that
for 1 .:;; i .:;; n Ei is Xi f- Ci and En+l is X 1- C, and that
r 1 Ar 2 is a structure. Then suppose that
( 2) l:l, ... ,En
l:n+l
is an instance of Ru, Then
( 3) ' E' El,···, n
En+l ,
is an instance of Ru, where En+l' is r 1 xr 2 f- C and for
l ~ i ~ n E~ is either r 1xir 2 ~Ci or Ei depending on
whether or not Ci is an immediate ancestor of C in (3).
Again, this definition is a modification of BGD80.
And we reap
Lemma 3. 3. 2. All of the L1L rules except f- + and f- o
are antecedent expandable.
Proof. By inspection of the rules.
150
Now for the needed notion of rank in a derivation.
Let Der be a LlL derivation of E. U.nless E is an axiom
(the top node of a branch of Der), let Inf be the inference
(in Der) of which E is the conclusion, and let a be a set
of formula constituents of E. Then define the rank of a
in Der as follows. If a is empty, its rank in Der is 0.
If a is non-empty but contains no conclusion parameters,
then the rank of a in Der is l. (In this case, a is in
fact a singleton.) Otherwise, let inf be
(l) E 1 ' •.• 'En
E
with Der. the subderivation determined by E. for each ~ l
l ~ i ~ n, and let ai be the set containing all and only
immediate ancestors in E. of members of a. (Note that l
if all members of a were weakened in by Ke f- , then
ai = ¢.) Let k be the maximum rank of any ai in its
corresponding Der .. Then the rank in Der of a is k+l. ~
And following BGD80 we talk of the consequent rank of Der
151
as the rank of a in Der when a is the singleton containing
the consequent of the conclusion of Der.
Then, where a is a set of formula occurrences in
Y(E), let Y[X/aJ (E[X/a] be the result of substituting
X in Y(E) for each member of a. (When a is a singleton,
say {y}, we let E[X/y] = E[X/aJ.)
We are finally ready for the Cut 'l'heorem which can be
stated as follows:
Theorem 3.3.1. Cut Theorem.
Let a be a set of occurrences of any formula A
in a structure Y. If X f-A and Y f- C are L1L derivable,
then so is Y[X/a] f- C.
Proof. The proof proceeds as in Dunn 75 by a double
induction. Since the base steps of the inductions are trivial,
choose arbitrary m>O,.i and k such that j+k>O, and assume
Outer inductive hypotheses (OH). For all X,Y,C,A and a
(a set of occurrences of A in Y), if the complexity of A
is less than m, then if X 1- A and Y 1- C are derivable, so
is Y[X/aJ f- C; and
Inner inductive hypotheses (IH). For all X,Y,C,A of
complexity m, and a (a set of occurrences of A in Y), if
X 1-A is derivable with rank j' and there is a derivation
of Y f-C in which the rank of a is k' and j'+k'<j+k, then
Y[X/aJ f-C is derivable.
Next choose arbitrary A of complexity m and
arbitrary X,Y,C and a (a set of occurrences of A in a),
and assume
Conditional Hypothesis (CH). DerL is an LlL derivation
of X ~A with consequent rank j and DerR is a L1L
derivation of Y ~C in which the rank of a is k.
It will suffice to show that Y[X/aJ ~C is L1L derivable.
For the sake of notational convenience, let
L-premise = X ~A, R-premise = Y ~ C, and Conclusion =
Y[X/aJ ~C.
We now proceed by cases.
Case l. k = 0, so a is empty. (Note that j is
never 0.) Then Conclusion is L-premise and we are
finished by CH.
Case 2. k = 1. There are three subcases.
Case 2.1. R-premise is an axiom.
152
Then Conclusion is L-premise, and we are finished
by CH.
Case 2.2. R-premise follows from a sequent,
call it E, by Ke}. Then each member of a is parametric,
(but has no immediate ancesto~) whence by closure under
parametric substitution, either Conclusion is E or
153
follows therefrom by Ke }- .
Case 2.3. R-premise follows by a logical rule
on the left,call it Ru. Then a is a singleton containing
the principal constituent of R-premise. There are two
subcases.
Case 2.3.1. j = l. Then L-premise is either
an axiom or follows by a logical rule "matching" Ru on
the right. In the former instance, Conclusion is R-premise,
whence it is derivable by CH. So assume L-premise follows
by a logical rule, call it Ru '. There are four subcases
matching the four different logical rules on the right.
(Note that A cannot bet.) In each case, one applies OH
to the appropriate premises (possibly twice) to get
Conclusion. One never uses a structural rule as is
sometimes required in the analogous case in AB 75 and
BGD 80, We show one case as an example and leave the
rest to the reader.
Suppose Der1 ends as follows:
I 1 = X~B X~D = I2
L-premise = X ~ B&D
and DerR ends thus (without loss of generality):
I3
R-premise
=
=
r 1Dr 2 ~c
r 1B&Dr 2 ~ c
We want to show that Conclusion = r 1xr 2 ~C is derivable.
Let d be the set containing the displayed occurrence of D
154
in r 3, and note that on assumption B&D has complexity m,
whence D is of complexity less than m. Since E2 and E3
are derivable on the case assumption, so is
E4 = r 1xr 2 ~c by OH, as required.
Case 2.3.2. j>l, whence the consequent of L-premise
is parametric. This case is left to the reader to be
handled in a similar fashion to case 2.3.2 of the Cut
Theorem of §2.7.
Case 3. k>l. Again as in §2.7, but without the
special case for I ]- .
~
With the Cut Theorem in hand, we proceed to the
desired equivalences.
SECTION 4. Equivalence and Representational Adequacy 155
We are in a much happier position for showing
that the L-systems are equivalent to their Hilbert-style
analogues than was the case with the G-systems. The two
types of complex structures of the L-systems are directly
representing formula connectives, namely o and &. So define
a function t from the set of structures and sequents into £
by the following recursive specifications:
(1)
( 2)
( 3)
( 4)
t(A) = A, for every formula A;
t(X;Y) = t(X)ot(Y);
t(E(X 1 , ••• ,Xn)) = t(X 1 )& ... &t(Xn); and
t(X ~A) = t(X)->A.
We will want to show that t tA is L1L derivable
iff I-LA. Left to right is straightforward and simply
recorded as
Lemma 3.4.1. If X tA is L 1L derivable, then tLt(X tA).
This lemma shows that the translation is a good
one. However, the following fact is also significant in
that respect - and will be useful for other purposes.
Facts 3.4.1. r 1xr 2 1-A is L1L derivable iff r 1 t(X)r 2 1-C
is.
Proof. By induction on the structural complexity of X.
Base step. sc(X) = 1. Then X is a formula and t(X) = X,
156
so we are finished.
Now choose an arbitrary m>l and assume
Inductive hypothesis. ll 1 X'll 2 f-C is L1L derivable iff
61 t(X' )6 2 ~ C is, for all X' such that sc(X' )<m.
Inductive step. sc(X) = m. There are two cases.
Case l. X is, say, E(W 1 , ••• ,Wn). Then for all
l ~ i ~ n, sc(Wi)<m. So using the inductive hypothesis
n times, we have r 1E(W 1 , •.• ,Wn)r 2 f-C is L1L derivable
iff r 1E(t(W1 ), ••• ,t(W ))r 2 f-C is. But by Fact 3.2.6 n
and Fact 3.2.9, r 1E(t(W 1 ), ••• ,t(W ))r 2 f-C is derivable n iff r 1 t(W 1 )& ... &t(W )r 2 f-C is. Then by transitivity and
n
the definition oft, r 1E(Wl•· .. ,W )r 2 rC is derivable n
iff r 1 t(E(W 1 , ••• ,w ))r 2 f-C is, which finishes the case. . n
Case 2. X is, say, Y;Z. Proceed as in case l,
using the inductive hypothesis (twice), • f- and Fact 3.2.1
Returning to the matter of equivalence, for the
right to left half, it will be necessary to show that
the rules of L are admissible (in appropriate form) in LL.
Lemma 3.4.2. Rl, R2, and R3 are admissible, i.e.,
(l) If t f-A and t f- A+B are L1L derivable, so is t f- B;
(2) If t f-A and t f-B are L1L derivable, so is
t f-A&B; and
( 3) t 1- A+ .B+C is derivable in an LT-system only if
t I- AB+C is.
157
Proof. (2) is straightforward by 1-&. For (l), assume that
t ~A and t ~ A+B are L 1L derivable, and note that in any
event A+B;A f-B is LlL derivable. So using Cut twice,
t; t 1- B is derivable. Whence by t- ~ , t 1- B is derivable
as desired. Finally, for (3) from left to right, assume
that t f-A+.B+C is L1L-derivable. Again using the converse
of ~+,(t;A);B f-C is derivable. So by t- f-, A;B f-C is
derivable. Then by o ~ , AB f- C is derivable; whence by
t f- and 1- +, t f- AB+C is derivable to complete the proof.
~
These two lemmas make the proof of the Equivalence
Theorem straightforward.
Theorem 3.lJ.l. L 1L Equivalence Theorem
t j-A is derivable in L 1L iff ~LA.
Proof. For left to right, assume t f-A is L 1L derivable.
Then by Lemma 3. 4 .l, 1-L t+A, whence ~ LA.
Right to left proceeds by induction on length
of proof of A, for which it is necessary and sufficient
to show that t 1-A is LL derivable for each axiom A of L
and that Rl-R3 are admissible in the requisite sense.
The latter was accomplished in Lemma 3.4.2. For the
158
former, we show one example and leave the rest to the
reader. Since this is the first time for any LT-system
to be discussed in detail in print, we choose suffixing
0 t , l-in LTW+ as the example and thereby show that B i
does its job.
+ 1- A j- A Bj-B
A+B;A j-B c 1- c + 1-
B+C; (A+B; A) ]- C B'i 1-
A+B;B+C;A J-C J-+
A+B ;B+C I- A+C 1-+
A+B I- B+C+. A+C t 1-
t; A+B ]- B+C+. A+C
t ]- A+B+. B+C+. A+C.
~
The theorem is much as would be expected. The
only reason for bothering with its proof in so much
detail is that it provides an appropriate context for the
promised discussion of t- 1- •
First note that the use oft- 1- fuinessential in
the proof of (l) of Lemma 3. 4. 2 for systems with Wi ]- .
But matters are different for the contractionless systems.
If one wants a definite equivalence between them and their
axiomatic counterparts, there seems to be no alternative
to having t- ]--at least as an admissible rule. Without
such, one can (except as indicated below) show an
159
equivalence in the form of X ~A being L1LW derivable iff
~LWA, where X is a structure built up from t. But such
an equivalence will not suffice to give decidability of
the axiomatic systems from the decidability of their
corresponding 1-systems, and as such is inadequate for
our purposes.
Further, t- 1- is not needed in the proof of Ax 8
in the LR-systems, but the author knows of no way to
prove (3) of Lemma 3.4.2 without the use oft-~. In
fact there appears to be no way to show the even weaker
form of admissibility of R3 without it - the weaker
form being that if Y 1-A+.B+C is derivable, for some Y
built up from t, then for some X also built up from t,
so is X ~ AB+C. (Hence the qualification on the equivalence
claim of the previous paragraph.)
Aside from its practical necessity as just
indicated, two further considerations commend t- ~ ,
at least as an admissible rule. For the first one,
note that t is an identity with respect to o in
R:t and in RW:t, and that it is a left identity with
respect to o in the corresponding T-systems. So, since
the intensional structural connective (at least on this
point of view) is representing o, t- ~ ought to hold on
the grounds of representational adequacy. (Note, however,
that this point is irrelevant to the question of
equivalence between the L-systems and their axiomatic
counterparts. For t ~A+tA and t ~tA+A are derivable
without t- ~.)
The second reason is again representational
adequacy, but from a different point of view. If one
takes L to be indicating entailment in the sense of
Quine 66, then LL can be taken as a theory of L-
entailment (or L-deducibility). To put matters a bit
more formally, let us say that A L-entails B (or B is
L-deducible from A) just in case ~LA+B. One can then
coherently interpret each sequent (in LL) X ~C as
saying that t(X) L-entials C. For
Theorem 3.4.2. The L-systems are deductively complete,
that is, B is L-deducible from A iff for all structures
X such that t(X) =A, X 1-A is LL derivable. 1
Proof. Right to left is immediate by Lemma 3.4.1.
For left to right, assume that B is L-deducible from
A, i.e., ~ 1A+B, and let X be an arbitrary structure
160
such that t(X)=A. By Theorem 3.4.1 t ~A+B is derivable
in L1L, whence by Lemma 3.2.2, so is t;A ~B. So by t- 1-,
A ~B is derivable, i.e., on assumption, t(X) ~B. But by
Fact 3. 4 .l, X~ B is derivable - which completes the
proof.
The use oft-~ is essential in the proof,
as we now proceed to show. Let L'L come from L 1L
by dropping t- ~ . Then
161
Theorem 3.4.3. The L'L-systems are deductively incomplete.
Dot S Indeed, LR+ of AB75 and LR+ of BGD 0 are likewise
deductively incomplete.
Proof. Obviously, for LR+ and LROot t is to be taken as the
translation of AB75 p.385 and of BGD80 p.348, respectively,
The only ''significant'' change is that in these cases
t(E(X)) = t(I(X)) = t(X). It now suffices to note that
rLp+.(p+p)•p, and to show
(l) E = p r(p+p)op is not derivable in any of the
Gentzen systems under consideration.
To show (1), assume for reductio that Der is a
derivation of E in one of those systems. E is not an
axiom. Furthe~ by inspection of the rules, there is a
sequent, say E', occurring above E in Der which is of
the form w r (p+p) op and is the conclusion of an
instance of 1- o. Let Y r p+p be the left premise of
that instance of 1- o. It is easy to see that pis the
only formula that occurs in Y.
It then suffices to show that t ( Y )+.p+p is not a
theorem of L contradicting (the appropriate analogue of)
Lemma 3.4.1. This is easy to do by using the following
matrices (with l, 2 and 3 designated) which are sound, ot
in the usual sense, for R+ , and assigning p the value
of 2. (The & and v matrices are the normal ones
obtained by defining a&b(avb) as the glb(lub) of {a,b}
on numerical ordering, and t = 1.)
+
0
1
2
3
0
3
0
0
0
1
3
1
0
0
2 3
3 3
2 3
1 3
0 3
0
0
1
2
3
162
0 1 2 3
0 0 0 0
0 1 2 3
0 2 3 3
0 3 3 3
Of course, in the case of ROot, the proof relies on the
conservative extension results of Meyer and Routley 1974.
~~
One final observation on t- ~is in order. The choice
of a fusion formula as the counter example for the theorem
is inessential for all of the systems except the L'T
systems. The argument would work just as well with the
RW+ theorem p+p+p ~p. However, the L'T-systems without
fusion are deductively complete.
And lastly, there is an interesting observation
to be made about ee ~ . The rule was not used in the proof
of Cut except in those cases in which one or the other
of the premises followed by it. (Recall, in particular,
the remark made in case 2.3 and the particular example
given there.) Nor was it needed for the proof of the
admissibility of the rules of Lin Lemma 3.4.2. And
finally, it is not required for the proof of any of the
axioms. So.
Theorem 3.4.4. The L-systems obtained by dropping ee ~
are equivalent to their axiomatic counterparts, in the
sense of Theorem 3.4.1.
163
In spite of this, ee ~ is kept in the formulation
for its usefulness in the Denestation Fact (and thus
in the Denestation Theorem) of §6.
164
FOOTNOTES
1 This idea, in a slightly modified form, was originally
suggested by Robert K. Meyer.
165
SECTION 5 FORMULATIONS 2 AND 3: Vanishing-t
If we pause for a moment to think about our goal
of a decision procedure for the contractionless systems,
it is quite obvious that Formulation 1 is not well-suited
for the task. There are several problems, but in this
section we will deal with just one- t- ~- The problem is
not the approximate subformula property. Other things being
equal, it will do quite nicely. Rather it is that we shall
want to use a suitable version of the decidability argument
of the previous chapter, and as things will work out t- ~
is not degree preserving. Two solutions come quickly to
mind. In the first instance, we could replace t- ~ by
other rules that provide its effect but are better behaved
with respect to degree. This can be done by adding (where
appropriate) rules such as
t+ ~ ti-A r 1Br 2 J-C
r 1A+Br 2 1-C
The second option is to do away with t and all of its works.
By and large, our only interest in t was for the technical
succor it provides, particularly for the cut theorem.
The first option has at least this much to commend
it: it is the more complete approach. Indeed, it was the
path that we initially took. But in the end, the second
option proves to be somewhat simpler and clearer, so the
systems will be reformulated to allow sequents to be
empty on the left.
We keep the definition of structures as before.
There will be no null or empty structure. We simply
allow sequents to be entities either of the form X ~A or
of the form ~A. To do otherwise is to introduce the
166
ridiculous question of whether or not there are structures
of the form E(X 1, ... ,X ), for instance, where each X. is n l
empty. Of course, the adopted policy is not without its
own headache. Technically, whenever we want to say something
about sequents in general we must speak double, once about
structures of the form X ~A and once about structures of
the form ~A.
Of course, when one has a headache, the sensible
thing to do is to take aspirin. Our aspirin will be to
use double-speak rather than speak double. We now allow
structural variables to be existentialist variables, that
is, they range over structures and the dreaded Nothingness.
Otherwise, notation remains the same.
We must still occasionally restrict structural
variables to range only over structures. But with a bit
of goodwill (and commonsense) on the part of the reader
and a few conventions, this is not so cumbersome. In the
first place, we insist that structural variables never
range over Nothingness when used to represent an immediate
constituent of an E-sequence. And likewise for structural
variables that occur in the statement of structural rules.
Further conventions can be adopted as the need arises.
The simplest method for getting rid of t is to
first leave it in and make a few modifications (including
167
being empty on the left) to Formulation 1, and then show
that we no longer need t. So let 1 21 come from L 1L by
I. Adding ~ t as an axiom;
II. Leaving the structural rules as they are (but note
the conventions on Nothingness;)
III. For the LT-systems, insisting that (1) the left
premise of _,. ~ is never empty on the left, and
(2) the right premise of ~ • is empty on the left
only if the left premise is; and
IV. For the LR-systems, replacing t ~ by the more
general
t# ~ r 1xr 2 ~c r 1xr 2 ~c
r 1 (X;Y)r 2 ~c r 1 (Y;X)r 2 ~c
where Y is a t-structure, and at-structure of
course is a structure in which the only formula
that occurs is t.
Since this is the first opportunity for the reader
to display such, a quick check on his/her good will is in
order. The reader should understand that
A ~B
~A+B
is an instance of~+, just as
~A
t 1-A
is an instance of t ~ (and of t# ~) and
t ~A
~A
is an instance of t- ~. However,
E(p,q) ~p
E(t,p,q) ~P
is not an instance oft~. Nor is
E(p,t,q) ~p
E(p,q) ~P
an instance of t- ~ .
168
Now if one extends the translation t of the previous
section to include the following clause:
( 0) t( ~A) = A,
one can show
Fact 3.5.1. If X ~A is L 2L derivable, then ~Lt(X ~A).
Proof. As in previous section. Note that where Y is a
t-structure, ~L t(Y)+t, and that the restrictions on
emptiness for the LT-systems are completely necessary.
,......,..,__,..._.....,,.......,...,_,~,....., .................... ,..,,....., ...... ,.....,
Of course L2L is a supersystem of L1L. So
given Fact 3.5.1 and the L 1L equivalence Theorem, it is
immediate that
Theorem 3.5.1. t ~A is L2L derivable iff ~L A.
Then note by t- ~. t ~ and t# ~
Theorem 3.5.2. t ~A is L2L derivable iff ~A is.
And these two theorems give us
Corollary 3.5.1. ~A is L2 L derivable iff ~L A.
But note that this Corollary is not sufficient
169
for us to do away with t. As yet we have no guarantee
that when t is not a subformula of A and ~A is derivable,
there is a derivation of it in which t is not employed.
For, so far, all that can be established is an approximate
subformula property as in Theorem 3.2.1. To rectify this
situation, we first show
Lemma 3.5.1. Let der be an L2 -derivation of a sequent E
satisfying the following conditions:
(1) t is not a subformula of the consequent of E;
(2) t is not a proper subformula of any formula
occurring in the antecedent of E; and
(3) E is not of the form r 1E(Y 1 , ... ,X, ... ,Yn)rz ~c.
where X is a t-structure and for some l < i < n
Yi is not a t-structure.
Then every sequent in der satisfies conditions (1), (2),
and (3).
Proof. That every such sequent satisfies (1) and (2) is
obvious (more or less) from Theorem 3.2.1. For (3), let
170
L' be an arbitrary sequent (occurrence) in der. An
induction on the height of L' will show that it satisfies
(3). The base step holds on assumption (since L' is then
L), and the cases for the inductive step are straightforward
on inspection of the rules.
"'"'"'""'"'""'"'"'"'"'"' ....... """'-
Of course the conditions of the lemma do not come
out of thin air. They are conditions that must be met by
a subderivation of a proof of a t-free formula. (1) and
(2) are obvious. Condition (3) is the important one. Put
quite loosely, once an occurrence of t gets ''properly
inside'' of an E-sequence which is not a t-structure, if
some descendant of that occurrence of t is not in a
''similar position'', then some descendant must have been a
principal constituent of a non-t logical rule. And if a
sequent containing such an occurrence of t were to be
in a derivation of a t-free sequent, then some descendant
of that occurrence of t must get in a "dissimilar position''
in order to be vanished by t- ~. The lemma shows that this
cannot happen.
Now we can show the important
171
Lemma 3.5.2. Vanishing-t Lemma.
Let X be a t-structure and let E be a sequent
satisfying conditions (1), (2) and (3) of Lemma 3.5.1. If
E is L2L derivable with weight n, then if E is
r 1 (X;Y)r 2 ~C (or also, in the case of the L2 R-systems,
if r is r 1 (Y;X)r 2 ~C) then E' = r 1 Yr 2 ~Cis likewise
derivable with weight ~ n - where Y is possibly empty if
r1 and r2 are
Proof. By induction on n. The base step is trivial,
and the inductive step is reasonably straightforward on
examination of the rules. We will only make a few general
comments and consider a couple of the trickier subcases.
First notice that Lemma 3.5.1 virtually guarantees on its
own that the inductive hypothesis will be applicable when
needed. Further, since the weight of derivation of E' is
no greater than that of E, the inductive hypothesis can be
applied successively when needed, e.g.,
E 1 = r 1 ((X;Y);(X;Y)r 2 f-C
r = r 1 (X;Y)r 2 ~c w; ~ .
Two applications of the inductive hypothesis to the premis
yields r 1 (Y;Y)r 2 ~C with no increase in weight, from which
E' follows by Wi ~C with appropriate weight.
For the L2 T-systems, some subcases of Bi ~ and B'i f
are tricky, but still easy, e.g.,
172
E1 = r 1 (W;(Z;Y)r 2 1-C
E = r 1 (Z;W;Y)r 2 1-C , where Z;W is a t-structure.
We want to show r 1 Yr 2 1-C. But W and Z are t-structures,
and both the occurrence of W and the occurrence of Z in
the premise are in position for the (successive) application(s)
of the inductive hypothesis. So we are finished.
Finally, for the LR-systems, consider
LJ = r 1xr 2 ~c t # 1-
E = r 1 (Y;X)r 2 t-c
where the desired E' is r 1Yr 2 ~c. (Note that this is not
a case for the LT-systems.) There are two principal
subcases. (l) The displayed occurrence of X in z1 is not
an immediate constituent of an e-sequence. So suppose
without loss of generality that z 1 is 61(W;X)b 2 ~C. Then
on inductive hypothesis b 1Wb 2 1-C is derivable with no
greater weight than z 1 • Whence by t# I-, b 1(W;Y)b 2 ~Cis
derivable as desired. (2) Otherwise. Then E1 is (say)
blE(Wl, ... ,X, ... ,Wm)b 2 1-C. By condition (3) onE, each
Wi is a t-structure. So assume without loss of generality
that the displayed e-sequence is not an immediate constituent
of an e-sequence. (If it were, the containing E-sequence
would likewise be a t-structure, and we would deal with it
in a similar fashion.) Then z 1 is actually, say,
A1(Z;E(W1, ... ,X, ... ,W ))A 2 I-C. Then on inductive hypothesis m
A1ZA 2 ~Cis derivable with no greater weight than E1 .
Whence A1 (Z;E(W 1, ... ,Y, ... ,Wm))A 2 1-C is derivable with
appropriate weight by t# 1-.
Other cases for the Lemma are now left to the
reader's inspection,
173
The Vanishing-t Lemma puts us virtually home and
hosed, So let us say that a sequent is t-free just in case
t is not a subformula of any constituent thereof, and that
a derivation is t-free just in case every sequent therein
is such.
Lemma 3.5.3. If L is a t-free sequent, then E is L2 L
derivable iff there is a t-free derivation of it.
Proof. Right to left is immediate. Left to right proceeds
by induction on the weight of derivation of E. The base
step is trivial. The inductive step is easy. For note
that E does not follow by t r . And if I follows from
any rule Ru except t- r, then on inductive hypothesis the
premise(s) of that application of Ru has (have) a t-free
derivation(s). Whence by an application of Ru, so does E.
And if E follows by t- r, then by applying the Vanishing-t
Lemma to its premise,we see that Lis derivable with less
weight. So we are finished on inductive hypothesis.
With this lemma in hand we can formally say that
Formulation 2 has served us well:
Theorem 3.5.3. If t is not a subformula of A, then ~LA
iff there is a t-free, L2L derivation of rA.
Proof. Immediate from Lemma 3.5.3 and Corollary 3.5.1.
174
Now let us drop t from our language, so that L
ranges over the appropriate logics without t and its works
- except in reference to L 1L or L 2L, obviously. The
well-known conservative extension results tell
us that this is no real loss. Then let Formulation 3 of
L come from Formulation 2 by dropping the axioms t rt and
rt and the rules t r, t# rand t- r. Henceforward, we
will say that A is provable (in LL) iff rA is derivable.
So
Theorem 3.5.4. A is provable in L 3L iff rLA.
Proof. Obviously A is provable in L3L iff there is at-free
L 2L derivation of rA. Whence the theorem follows by
Theorem 3.5.3 and the aforementioned conservative extension
results.
175
SECTION 6. Denesting.
With little t out of the way, it is time to turn
to the promised canonical form fore-sequences. The point
of the exercise is the following. Even if e-sequences were
limited to reduced form (as they will be in the next
section), there are still an infinite number of
e-sequences that can be built up from a single formula
by nesting, e.g.,
E(p,p), E(p,E(p,p)), E(p,p,E(p,p)), etc.
This is representationally fitting, since such
nesting represents the different ways in which the
conjuncts of a conjunction can be associated. But we
must be able to at least ignore the differences brought
on by association, if we want to show decidability. So
the 1-systems must be reformulated yet again.
Let L 4L come from L 3L by adding the following
weakening and contraction rules:
' K e 1- •
w' e ~
w' 1 ~
r 1 E (X 1 , ••• , xn) r 2 ~ c
r 1E(X1 , .•. ,xn,Y)r 2 1-C
r 1 E (X 1 , ••• , xn, Y, Y) r 2 1- c
r 1 E ( x 1 , ... , xn , Y ) r 2 f- c n ;;. 1
r 1E(X 1 , ••• ,Xn,(E(W 1 , ••• ,W );E(W 1 , ••• ,W )))r 2 ~C m m
r 1 E(X 1 , ••• ,xn,W 1 , ••• ,Wm)r 2 ~c
176
We call Ke 1- and K'e f- (for instance) the companion of one
another. Naturally, K 'e f- is the prime companion of Ke f- .
It is easy to see that the primed rules are
admissible in the L3 -systems (use the companion rule
and ee 1- ); and of course, L3L is a subsystem of L4L. So
Theorem 3.6.1. L4L Equivalence Theorem
X f-A is L 4L derivable just in case it is L 3L
derivable. Hence, A is provable in L4L iff it is
provable in L.
Now let us say that a structure X is denested
just in case it has no substructure of the form
E(YJ, ... ,E(WJ, ... ,Wm), ... ,Yn). Then for any structure X,
define the denestation of X (dN(X)) as follows:
(l) dN(A) = A, for any formula A;
(2) dN(X;Y) = dN(X); dN(Y);
( 3) dN(E(X 1 , ••• ,E(Y 1 , •.• ,Ym), ... ,Xn))
dN(E(X 1 , ••• ,YJ, ... ,Ym' .. · ,Xn) );
=
( 4) dN(E(X 1 , .•• ,Xn)) = E(dN(X 1 ), .•. ,dN(Xn)),
where no Xi is an e-sequence;
Strictly speaking, clause (3) will not suffice. But
this is its most convenient form, so we state it as
such from the beginning for ease of reference. The
diligent reader is advised to take (3) as having the
proviso that the displayed occurrence of E(Y 1 , ••• ,Ym)
178
denested just in case each sequent that occurs in it is
denested. And let us say that an occurrence of a
substructure X of a structure Y is a nested e-sequence
(in Y) just in case it is an occurrence of an e-sequence
as an immediate constituent of an e-sequence (in Y).
Some further facts and lemmas can now be gathered toward
proving what is required.
Fact 3.6.4. If Z = r 1xr 2 is such that the displayed
occurrence of X is not a nested e-sequence, then
dN(r 1xr 2 ) = dN(r 1aN(X)r 2 ) = ~ 1 dN(X)~ 2 for some ~ 1 and
~ 2 , with the ''displayed occurrence of dN(X)'' in ~ 1 dN(X)6 2
corresponding, in the obvious sense, to the displayed
occurrence of X in Z.
Proof. By a straightforward induction on complexity of
Z which is left to the reader.
Fact 3.6.5. dN(r 1E(X 1 , ••• ,E(Y 1 , ••• ,Ym), ... ,Xn)r2) =
dN(f 1E(X 1 , ••• ,Y 1 , ••• ,Ym' ... ,Xn)r 2 ) •
Proof. Again by a straightforward induction on complexity.
Loosely speaking, what lies behind these two facts (and
the proof of the upcoming lemma) is this: for any
r 1 ,r 2 ,X 1 , ... ,X, there are some tq,A 2 ,Y 1 , ... ,Y such that n m
dN(r 1 x 1 , ... ,xnr 2 ) = A1Y1 , ... ,YmA 2 • And further, A1 and
J\ 2 are functions of r 1 and r 2 only. That is, if
dN(r 1 X 1 , ••. ,xnr 2) =
dN(r 1Z1, ... ,z1/2l =
A1Y1, ... ,Y A2 , then m
A1W1, ... ,W.A 2 , for some W1, ... ,W .. J J
177
is the first immediate constituent (in order of occurrence)
of the containing e-sequence which is itself an e-sequence.
The reader can then show that (3) is a fact as stated.
And let us say that a sequent is denested just
in case its antecedent is (or is empty); and for any
sequent X ~A, define dN(X ~A) as dN(X) ~dN(A), i.e.,
dN(X) ~A. (Of course, dN( ~A) = ~ dN(A) = ~A.) Then it
is clear that
Fact 3.6.1. For any structure X, dN(X) is a denested
structure. Thus, for any sequent E, dN(E) is a denested
sequent. And
Fact 3.6.2. For any denested structure X, dN(X) = X.
Hence for any denested sequent E, dN(E) = E.
The reader will no doubt have noticed that for
any sequent E, dN(E) either is E or follows from it by
one or more applications of ee ~ , in which case E also
follows from dN(E) by a sequence of applications ofee ~
So the following important fact is immediate.
Fact 3.6.3. Denestation Fact.
For any sequent E, E is L4L derivable iff dN(E) is.
This fact shows that every sequent has an equivalent
extensional canonical form. But the decidability
argument will require that derivations have an extensional
canonical form. So let us say that a derivation is
Lemma 3.6.1. dN-Substitution Lemma.
Let Z be a structure containing an occurrence y of some
structure Y such that y is not a nested e-sequence.
By Fact 3.6.5, let dN(Z) = ~ 1 dN(Y)~ 2 • Let y' be the
displayed occurrence of dN(Y). Then for all structures
X such that the substituted occurrence of X in Z[X/yJ
179
is not a nested e-sequence, dN(Z[X/yJ) = (dN(Z))[dN(X)/y'].
Proof. The proof proceeds by induction on the complexity
of Z. Since the base case is trivial, choose an arbitrary
m>O, and assume
Inductive hypothesis (IH). The lemma holds for a Z'
satisfying the conditions of the lemma.
Then choose an arbitrary Z of complexity m, y (an occurrence
in Z of some arbitrarily chosen Y) and X, all satisfying
the appropriate conditions of the lemma; and let
~ 1 dN(Y)~ 2 and y' be as in the lemma. It will suffice to
show that dN(Z[X/yJ) = (dN(Z))[dN(X)/y'J. There are
two cases.
Case 1. Z = Y. Straightforward.
Case 2. Y is a proper substructure of Z.
Again there are two cases.
Case 2.1. Z is an e-sequence, say E(X 1 , ••• ,
r 1Yr 2 , ••• ,Xn) withy the displayed occurrence of Y.
(Keep in mind that y is not a nested e-sequence.) The
180 proof branches into two subcases.
Case 2.1.1. An immediate constituent of Z is
an e-sequence. Then assume without loss of generality
that X1 is E(WJ, ... ,Wm).
Let Z1 = E(W 1, ... ,wm•·· .,r 1Yr 2 , ... ,XJ and let y 1
be the displayed occurrence of Y therein. By Fact 3.6.4,
and the definition of denestation, let dN(Z)=dN(Z 1) =
AJdN(Y)A 2 , withy' obviously corresponding to both y and
Y!· Clearly, dN(Z)[dN(X)/y'] = dN(Z 1)[dN(X)/y'].
And by Fact 3.6.5 dN(Z[X/y]) = dN(Z 1[X/y 1J). But by
IH, dN(Zl[X/ylJ) = dN(Z 1)[dN(X)/y'],so we are finished.
Case 2.1.2. No immediate constituent of Z is
an e-sequence. Of course, Z[X/y] = E(X 1, ... ,(r 1Yr 2 )
[X/y], ... ,Xn). Then dN(Z[X/y]) = E(dN(X 1), ... ,
dN((r 1Yr 2 )[X/y]), ... ,dN(X )). So by IH, n
dN(Z[X/y]) = E(dN(X 1), ... ,(dN(r 1Yr 2 ))[dN(X)/y'J, ... ,
dN(X )), which is (E(dN(X 1), ... ,dN(r 1Yr 2 ), ... ,dN(X )) n n
[dN(X)/y'J, i.e., (dN(Z))[dN(X)/y'J as desired.
Case 2.2. Z is an intensional structure. Without
loss of generality, let Z = A1YA 2;W. Then the argument
of case 2.1.2 applies mutatis mutandis to finish the
proof.
~
The reader should become familiar with the pattern of
argument above, since it is typical of many arguments to come.
The dN-Substitution Lemma facilitates the proof
of the
Lemma 3.6.2. Denestation Lemma.
If" follows from E1(E 2) by an application of any L4L
rule Ru, then either dN(E) = dN(E 1) or it follows from
181
dN(E 1 )(dN(E 2 )) by a sequence of applications of Ru and/or
its companion and possibly Ce ~ , such that the conclusion
of each such inference is denested.
Proof. By cases
Case 1. The lemma holds for logical rules on
the right by inspection. Further, if E follows by ee ~
from E1 , then dN(E) = dN(E 1 ) by Fact 3.6.5. And in the
case of Ce ~, it is clear that dN(E) follows from dN(E 1 )
by one or more applications of Ce ~ .
Case 2. If Ru is an intensional structural rule
or a logical rule on the left, then dN(E) follows from
dN(E 1 )(dN(E 2 )) by Ru. The cases are similar and reasonably
straightforward using the dN-Substitution Lemma. The
details of the proof of one case is provided as an
example.
Case 2.1. Ru is~·~
E 1
E
=
=
y ~A r 1Br 2 ~c
r 1 (A~B;YJr 2 ~ C
= E2
Let b be the displayed occurrence of B in E2 , and by
Fact 3.6.4 let dN(r 1Br 2 ) = A1 dN(B)A 2 = A1 BA 2 with b'
the displayed occurrence of B. (Assume without loss of
generality that Y is not Nothingness). But then by the
dN-Substitution Lemma, dN(E) = A 1 dN(A~B;Y)A 2 ~C, i.e.,
A 1 (A~B;dN(Y))A 2 ~C- which obviously follows from
dN(E 1 ) and dN(E 2 ) by~~.
182
Case 3. The remaining rules are companioned and
should be treated in pairs, first showing that the lemma
holds for the primed rule and then using that result as
needed for its companion. Ce ~ is often required due
to the positioning demanded for weakened and contracted
structures. However, these can be safely ignored since
the reader will have already noticed in checking case l
that the conclusion of an instance of Ce r is denested
iff the premise is.
Case 3.1 Ru is K'e r
E1
E
=
=
f1ECX1, ... ,XN)r2 r C
r 1 E ( X 1 , ••• , XN , Y ) r 2 r C
The proof proceeds by induction on the complexity of E. The
base step is trivial, so choose arbitrary m>O and assume
Inductive hypothesis (IH). For all E' of complexity
less than m, if E' follows from some l: 1' by K' e r , dN( E')
follows from dN(E;) in accordance with the conditions of
the lemma.
Then choose an arbitrary E of complexity m and assume
Conditional hypothes~~ (C). l: follows from some
E1 (call it E1 ) by K'e ~
It now suffices to show that dN(E) follows from dN(E1)
in accordance with the conditions of the lemma.
Let E1 and E be as displayed above, and let w be the
displayed occurrence of E(X 1 , ... ,Xn) in E1 . There are
two cases.
183
Case 3.1.1. Y is an e-sequence, say E(Wl•···•Wk).
Then E' = flE(X 1 , ... ,Xn,Wl,···•Wk)r 2 rC follows from
E1 by k applications of K'E r. So one can use (IH)
k times to show that dN(E') follows from dN(E 1 ) in
accordance with the conditions of the lemma. But by
Fact 3.6.5, dN(E) = dN(E') which finishes the case.
Case 3.1.2. Y is not an e-sequence. There are
two subcases.
Case 3.1.2.1. w is a nested e-sequence. Then
E' = f1X1, ... ,Xn,Yf2 J- C follows from E{ = f1X1, ... ,Xnf2 r C
by K 'E f-. Since sc(E ') < sc(E), by IH dN(E ') follows
• from dN(E 1) in accordance with the conditions of the lemma.
But by Fact 3.6.5, dN(E 1 ) = dN(E;) and dN(E) = dN(E')
which finishes the case.
Case 3.1.2.2. w is not a nested e-sequence; and
on case assumption 3.1.2, y (the displayed occurrence
of Y in E) is not either. Now if some X. is an e-sequence, l
proceed as in similar cases that have come before. So
assume no Xi is an e-sequence. In the first place, by
Fact 3.6.4, dN(E 1 ) = dN(f 1dN(E(X 1 , ... ,Xn))r 2 ) f-C, Now
let dN(EJ) = i\ldN(E(XJ,···•Xn))i\ 2 rC, and let w' be the
displayed occurrence of dN(E(X 1 , ... ,Xnll. Since
E = E1[E(X 1 , ... ,~,Y)/wJ, by the dN-Substitution Lemma,
dN(l.) = i\ldN(E(Xl, ... ,Xo,Y)i\ 2 rC. But since neither Y
nor any Xi is an e-sequence, dN(E) = A1E(dN(X1), ... ,
dN(Xn),dN(Y))A 2 ~C and dN(E 1) ~ A1E(dN(X1 ), ... ,dN(Xn))
A 2 ~ C. So dN (E) follows from dN ( l: 1 ) by K 'e ~ as
desired -which finishes Case 3.1.
Case 3.2. Ru is Ke~:
r 1 = r 1 xr 2 ~c
E = r 1E(X,Y)r 2 ~ C
Proof. By induction on complexity of E. The base step
184
is trivial. The inductive step separates conveniently into
two cases.
Case 3.2.1. X is an e-sequence, say E(W 1 , ... ,Wm).
Then E1 = r 1E(W 1, ... ,Wm)r 2 ~C. Let E' = r 1E(W 1 , ... ,Wm,Y)
r 2 ~ C, and note that l:' follows from E 1 by K 'e ~ . So by
case 3.1, dN(l:') follows from dN(E 1) in accordance with
the conditions of the lemma. But by Fact 3.6.5
dN(E) = dN(l:'), which completes the case.
Case 3.2.2. X is not an e-sequence. If X is
an immediate constituent of an e-sequence, an argument
similar to the one above will suffice. So assume otherwise.
There are two subcases.
Case 3.2.2.1. Y is an e-sequence, say E(WJ, ... ,Wm).
Then note that E2 = r 1E(X,W 1)r 2 ~C follows from r 1 by
Ke ~ and sc(l:~<sc(r 1 ). So on inductive hypothesis
dN(E 2) follows from dN(E 1) in accordance with the lemma.
But r' = r 1E(X,W 1, ... ,Wm)r 2 ~ C follows from E2 by a sequence
of applications of K'e ~. Whence by case 3.1, dN(r')
185
follows from dN(E 2 ) in accordance with the lemma.
Obviously, dN(E') then follows from dN(E 1 ) in accordance
with the lemma. And by Fact 3.6.5, dN(E) = dN(E') to
complete the case.
Case 3.2.2.2. Y is not an e-sequence. There are
two subcases.
Case 3.2.2.2.1. The displayed occurrence of
E(X,Y) in E is nested. Then r' = r 1X,Yr 2 ~C follows
from r 1 by K'e ~ and dN(E') = dN(E), so we are finished
on inductive hypothesis.
Case 3.2.2.2.2. The displayed occurrence of
E(X,Y) is not nested. Then use the dN-Substitution
Lemma as in previous cases.
This completes case 3.2. The other cases are
handled in a similar fashion and are left to the reader.
With the Denestation Lemma, we can make short
work of the proof of the following theorem to finish
the business of this section.
Theorem 6.2. Denestation Theorem.
For any sequent E, E is L 4L derivable iff
dN(E) has a denested derivation.
186
Proof. Right to left is obvious by the Denestation Fact.
Left to right proceeds by induction on the weight of
derivation of E. The base step is simple using Fact
3.6.2, and the cases for the inductive step are
straightforward using the Denestation Fact and the
Denestation Lemma.
187
SECTION 7 Reduction.
Now that e-sequences have effectively been given a
canonical form, a decidability argument analogous to the
one of Chapter 2 can be given. The first step is to get
the obvious analogue of reduction.
So let us say that a structure is reduced just in
case no constituent thereof is an e-sequence three or more
immediate constituents of which are occurrences of the same
structure. So no structure occurs more than twice as an
immediate constituent of any given extensional substructure
of a reduced structure. Then a structure is e-reduced iff
it is denested and reduced. Of course, a sequent is
reduced (e-reduced) just in case its antecedent and
consequent are (or its antecedent is empty); and a
derivation is reduced (e-reduced) iff each sequent occurring
therein is.
Next let us say that a structure is super reduced
just in case it contains no e-sequence with two distinct
immediate constituents that are occurrences of the same
structure. Again, the definition is extended to sequents
in the obvious way. (Obviously a super reduced structure
or sequent is reduced.) Then define the super reduct of
any denested structure as follows:
(1) sr(A) = A, for any formula A;
(2) sr(Y;Z) = sr(Y); sr(Z), for any structures Y and
Z; and
188
(3) for any structures Y1 , ... ,Yn, sr(E(YJ, ... ,Yn)) =
sr(Y 1 ), if for all 1 ~ i ~ n, Y. = Y1 ; otherwise, l
sr(E(YJ, ... ,Yn)) = E(WJ, ... ,Wm)' where E(WJ, ... ,Wm)
is as follows: For each Y., let k. be the number l l
of occurrences of sr(Y.) as an immediate constituent l
of E(sr(Y 1 ), •.• ,sr(Yn)). Then E(W 1 , ... ,Wm) is
the result of deleting the first ki-1 occurrences
of sr(Yi) from E(sr(Y1
), •.• ,sr(Ym)).
Naturally, for any formula A and denested structure X,
s r (X 1- A) = s r (X) 1- s r (A) = s r (X) 1- A, and s r ( 1- A) = 1- s r (A) =
1-A. And for the sake of notational convenience, let
X '(l:') be sr(X) (sr(E)) for the remainder of this section
only.
Then note straightaway that
Fact 3.7.1. For any denested structure X and any denested
sequent l:, X' is a super reduced structure and l:' is a
super reduced sequent.
And although it is not the case that the denestation of
any super reduced structure is super reduced, it is the
case that
Fact 3.7.2. For any denested structure X and any denested
sequent l:, X' and l:' are denested, and hence, by the
previousfact, e-reduced.
Given Ce ~ and the extensional contraction and
weakening rules, it is clear from the above and the
definition of super reduction that
Fact 3.7.3. The Super Reduction Fact
189
For any denested sequent I, I is L4L derivable iff I' is.
But the fact gives no reduction control over entire
derivations. What we want to show is that I' as above
has an e-reduced derivation if it has one at all. The
following facts and lemma will clear the way for it. But
first a definition.
For any denested e-sequence X, let con(X') (the
immediate constituents of X') be {X'}, if all the super
reductions of the immediate constituents of X are the same
structure, and otherwise let con(X') be the set of
structures occurring as immediate constituents of X'.
Then obviously
Fact 3.7.4. Let Z be a denested e-sequence. Then for all W,
WE con(Z') iff there exists a Y such that Y occurs as an
immediate constituent of Z and W = Y'.
The following simple fact will be important for
the main lemma.
Fact 3.7.5. For any structures X1 , ••• ,Xn,Y, and W which
are note-sequences, if Z1 = E(X 1 , ••• ,Y, ... ,Xn) and
Z2 = E(X 1 , ... ,W, ... ,XN)' then either
190
(l) con(z;) = con(Z~),
( 2) con(z;) - {Y '} = con(Z 2'),
(3) con(z;) - {W '} = con(Z{), or
( 4) con(Z~) - {Y'} = con(Z~) - {W'}
Proof. By Fact 3.7.4, con(Z{l and con(Zz) differ at most
in Y' being an element of the former but r.ot of the latter
and/or w' being an element of the latter but not of the
former - which is exactly what the fact says.
~~~
The fact not only provides a bit of useful information, but
also some structuring for cases in the proof of:
Lemma 3.7.1. Reduction Lemma
Let Inf
z1 (Z 2 )
Z3
be an instance of some L4L rule Ru, such that z 1 (z 2 ) and , , , , ,
Z3 are denested. Then either z3 = z 1 or z 3 = z2 , or E3 , ,
follows from z 1 (and/or z 2 ) by a sequence of applications
of Ce ~ , Ke ~ , ,
K e ~ , We 1- , w' e ~ , and/or at most one
application of Ru (if Ru be distinct from the aforementioned
rules), the conclusion of each of which is e-reduced.
Further, if no antecedent of a premise nor of the conclusion
191
of Inf is an e-sequence, then neither is the antecedent of the
conelusion of any inference in the above-mentioned ·sequence
Proof. By induction on the complexity of E. Since the
base case is trivial, choose an arbitrary m>O and assume
Inductive hypothesis (IH). The lemma holds for any rule
Ru' and any instance Inf' thereof such that the complexity
of the conclusion of Inf' is less than m (and of course
the premise(s) and conclusion are denested.)
Then choose an arbitrary rule Ru and an instance Inf
thereof with premise(s) E1 (E 2 ) and conclusion E3 , all of
which are denested and such that sc(E 3 ) = m. It will
suffice to show that the lemma holds for Inf. The proof
proceeds by cases.
Case l. Ru is ~o, ~+, ~v or ~&. Then E'3
follows from E~(E;) by Ru, by inspection.
Cases for the remaining rules all proceed in a
similar fashion. Significant details of the most difficult
case are presented below. The other cases are left to the
reader.
Case 2. Ru is v ~. There are three subcases.
Case 2.1. The antecedent of E3 is a single formula.
The case is simple, and left to the reader.
Case 2.2. The antecedent of I 3 is an intensional
structure. Assume without loss of generality that Inf is
1: 1 = x;r 1Ar 2 ~c X;r 1Br 2 f-C = l:z
l: 3 = x;r 1AvBr 2 f- c
with X a structure. By IH, (r 1AvBr 2 )' ~C either
(l) is (r 1Ar 2 )' f- C or (r 1Br 2 ) '~ C, in which case
x';(r 1AvBr 2)' f-C is x';(r 1Ar 2 )' ~cor x';(r 1Br 2 )' ~c,
. , , , , l.e., 1: 3 = 1: 1 or 1: 3 = 1: 2 ; or
192
(2) follows from (r 1Ar 2 )' f-C and/or (r 1Br 2 )' f-C in accordance
with the lemma. _Then by antecedent expandability,
t3 = X';(f1AvBr2)' ~C follows from t] = X';(r1Ar2)' f-C
and/or t'2 = X ; ( r 1Br 2 )' 1- C by a matching sequence of
applications of the same rules. And it is clear that the
conclusion of each is e-reduced, and that the furthermore
clause is fulfilled.
Case 2.3. The antecedent of l: is an extensional
structure. Technically, there are subcases according as
the super reducts of the immediate constituents of t 1, t 2
or t 3 are the same sequent or not. But the former case
is simple and is left to the reader. So assume
the latter and let Inf be
1: 1 = E(X 1, ... ,r 2Ar 2 , ... ,Xn) f-C E(X 1 , ... ,r 1Br 2 , ... ,Xn) f-C = <2
l: = E(X 1 , ... ,r 1AvBr 2 , ••• ,Xn) f-C
Now let z1 , Z2 and Z3 be the antecedents of t 1 , t 2 , and l:3
respectively. One can use Facts 3.7.5 to show that the
following cases are exhaustive.
193
Case 2.3.1. con(z3•) = con(zl') or con(Z 3') =
con(Z 2 '). Then either I; is E{ orE~, or it follows from
one of them by permuting the immediate constituents of its
antecedent. Obviously such applications of Ce ~ preserve
being e-reduced, and the furthermore clause is vacuously
fulfilled; so we are finished.
Case 2.3.2. con(Z 1 ')or con(Z2') is
con(Z 3 ')-{(r 1AvBr 2 )'}. Then Ii f~llows from ~i orr; by an
application of K'e ~, possibly followed by a sequence
of permutations of the immediate constituents of its
antecedent. Again it is obvious that the applications
of such rules preserve being e-reduced, and the
furthermore clause is vacuously fulfilled.
Case 2.3.3. Either (1) con(Z 3') =
con(Z 1 ')-{(r 1 Ar 2 )'} = con(Z 2 ')-{(r!Br 2 )'}, or
(2) con(Z 3 ')-{(r1 AvBr 2 )'} = con(Z 1 ')-{(.r 1Ar 2 1'} = con(Z 2 ')-{(r 1 Br 2 )'}. Noting·the previous remarks about
premuting, let 1:; = E(Y 1 , ... ,Y , (r 1 Ar 2 )') ~ C and let m
E'2 = E(Y 1 , ... ,Y ,(r 1Br 2 )') ~c. Then in the case of (2) m
above, 1:; is E(Y 1 , ... ,Y ,(r 1AvBr 2 ) ') ~C. In the case m
of (1), some Y. is (r 1 AvBr 2 )'. Without loss of generality, l
assume in that case that it is Y . Then 1: 3' is E(Y 1 , ... ,Y ) ~C. m m
Now note that by IH, (r 1 AvBr 2 )'~c follows from
(r 1Ar 2 )' ~ C and/or (r 1Br 2 )' ~C in accordance with the
lemma. Strictly speaking, there are two subcases according as
to whether or not v ~is used. But a proof for the latter
case is easy to construct out of a proof for the former.
So assume the former, and assume that the ''quasi-derivation''
promised by IH is as follows:
(riArz)'f-C (riBr 2 )' 1-C
xi f-C yi 1-C
xi 1-C yk 1-C
wi 1- c
wh 1- c
(riAvBr 2 ) 'f-C
with i,k and h greater than or equal to 0, respectively.
Call this quasi-derivation Deri. Then by antecedent
expandability
E(YI, ... ,Ym,(riAr 2 )') 1-C
E(YI,. .. ,Ym, XI) f-C
i E(YI, ... ,Ym, X )1-C
E(YI, ... ,Y ,(riBr 2)') 1-C m
E(YI, ... ,Y, yi) f-C m
k E(YI, ... ,Y, Y) ~C m
E(YI, ... ,Ym, wi) ~c
h E(YI, ... ,Ym, W) f-C
E(YI, ... ,Y ,(riAvBr 2 )') ~C m
is also a quasi-derivation, with each sequent following
from its predecessor(s) by an application of the same
rule as that by which its corresponding sequent followed
194
195
in Der 1. Call this new quasi-derivation Der 2 .
, , Of course the top nodes of Der 2 are E1 and E2 ,
respectively, and thus are super reduced and e-reduced.
It is simple to verify that each sequent of Der 2 is
e-reduced after noting that
(a) each sequent in Der 1 is e-reduced;
(b) since E!, E2 and E3 are e-reduced and thus denested,
neither (r 1 Ar 2 )' ,(r 1 Br 2 )' nor (r 1 AvBr 2 )' is an
e-sequence; whence (by the furthermore clause) none
X l i 1 k 1 h . d of , ... ,X ,Y , ... ,Y ,W , ... ,W are e-sequences, an
(c) each application of a rule changes the only structure
(occurrence) which could be a second occurrence of a
structure as an immediate constituent of the
antecedent of a premise of the application of that
rule.
Again, the furthermore clause is vacuously
fulfilled. Finally, for (2) above, the bottom node of
Derz is E3, and for (1) E3 follows from the bottom node
by w'e f- This completes the proof.
The Reduction Lemma makes quick work of
Theorem 3.7.1. Reduction Theorem
For any denested sequent E, E is L4L derivable iff E'
has an e-reduced derivation.
Proof. Right to left is straightforward by the Super
Reduction Fact. Left to right proceeds by induction on
the weight of derivation of E. The base step is simple,
and the inductive step is strightforward by the Super
Reduction Fact and the Reduction Lemma.
-~~
196
197
SECTION 8. Degree and Decidability
The Reduction Theorem produces a situation somewhat
familiar from Chapter 2. It will provide a finite upper bound
on the number of (denested) e-sequences built up from a
finite number of formulae that need to be examined in a
proof search - provided that a finite upper bound can be
placed simultaneously on the number of intensional structures
that can be built up from the said formulae. An appropriate
notion of degree will fill the bill for the proviso. So
define the degree (deg) of a formula as follows:
(l) deg(A) = o, if A is an atom;
( 2) deg(B&C) = deg(BvC) = deg(B) + deg(C), for
any formulae B and C; and
( 3) deg(B+C) = deg(BoC) = deg(B) + deg(C) + l,
for any formulae B and C.
Recalling that the degree of a formula is
supposed to indicate its intensional complexity, the
definition is obviously felicitous. And given that the
structural connective is ''standing in'' for fusion, it is
clear that the degree of a structure should be defined
as follows:
(l) deg(A) is of course the degree of the formula A
as defined above, for any formula A;
(2) deg(X;Y) = deg(X) + deg(Y)+l, for any structure
X and Y; and
(3) deg(E(X1 , ... ,Xn)) = max{deg(X 1), ... ,deg(Xn)},
for any structures X1 , ... ,Xn.
Naturally, for any structure X and formula A, deg(X}A)
198
= deg(X) + deg(A), and deg( ~A)= deg(A). Note again
that the degree of a sequent is not raised by 1 in virtue
of f-.
Now note the following obvious fact.
Fact 3.8.1. For any structures X, Y and Z, if deg(X) ~
deg(Y), then for any occurrence y of Yin Z, deg(Z[X/y]) ~
deg(Z).
Up to this point, all of the L4 -systems have
travelled along together. But here, as in Chapter 2,
the systems with intensional contraction part company.
For let us say that a rule is degree preserving just in
case for any instance of the rule, the degree of the
conclusion is greater than or equal to that of any premise.
Then using the above fact, it is clear that
Lemma 3.8.1. Degree Lemma
The rules of L4Rw: and L4 TW: are degree preserving.
199
Now reduction and degree will work in tandem
to give us the needed control on the total complexity of
structures that can occur in the sorts of derivations of
a given sequent to which we can restrict our attention.
The virtual coup de grace is delivered by
Lemma 3.8.2. Counting Lemma
For any formula A and any n ~ 0, there are at most
finitely many e-reduced structures of degree ~n built
up from subformulae of A.
Proof. By induction on n. The base step is simple.
So choose an arbitrary m>O and assume
Inductive hypothesis (IH). For any formula Band any
k<m, there are at most finitely many e-reduced structures
of degree ~ k.
Now choose an arbitrary formula A. It will then suffice
to show that there are at most finitely many e-reduced
structures of degree ~ m built out of subformula of A.
But any such structure is either
(1) a subformula of A, of which there are only
finitely many;
(2) an intensional structure,whose left and right
constituents are of degree < m (by the definition
of degree ) and of course are built out of
subformulae of A. But by IH there are at most
finitely many such structures to serve as left
and right constituents. Whence there are but
finitely many intensional structures of the
required kind; or
200
(3) an e-sequence, each of whose immediate
constituents is a non-extensional structure of
degree ~ m (by the definition of e-reduced and
degree) and again built out of subformulae of A.
By IH and (1) and (2) above, there are at most
finitely many structuresto serve as immediate
constituents; and by the definition of e-reduced,
none can occur more than twice as such. So
there are at most finitely many e-sequences of
the requisite sort.
And Finitely Many + FINITELY MANY + FINITELY MANY =
FINITELY MANY. So we are finished.
~
Of course, the lemma holds equally well for e-reduced
sequents built up from subformulae of any of a finite
number of formulae.
Decidability is now clearly in sight. All that
remains to be shown are well-known and/or by now obvious
facts. First, let us say that a derivation is
irredundant just in case no sequent occurs more than once
on a branch thereof. Recalling The Denestation and
Reduction Theorems, it is clear that
Theorem 3.8.1. Irredundancy Theorem
Any sequent E is L4L derivable iff sr(dN(E)) has an
irredundant, e-reduced derivation.
201
Next, let us specify as follows a proof search
procedure which produces the LRW:(LTW:) proof search tree
of E for any sequent E:
(1) Enter sr(dN(E)) as the bottom node;
(2) above each sequent E' occurring with height k
(in the tree so far constructed) (a) enter
nothing, if E' is an axiom, (b) otherwise enter
(in some assumed order) all e-reduced sequents
E" such that E" is a premise of some L4 Rw:(L 4 TW:)
inference of which E' is the conclusion and such
that the tree remains irredundant.
Obviously
Lemma 3.8.3. Effectiveness Lemma
The proof search procedure thus specified is effective.
Now let us say that a (possibly null) tree T' is a
subtree of a tree T iff it is the result of deleting
some (possibly no) sequent occurrences in T and all
sequent occurrences above them. Then by the Irredundancy
Theorem and the above specification
Lemma 3.8.4. Completeness Lemma.
The proof search procedure is complete, i.e., E is
L4L derivable iff some subtree of the proof search tree
of Eisa L4L derivation of sr(dN(E)).
As usual, a tree has the finite fork property
202
iff it has at most finitely many nodes of any given height;
and a tree has the finite branch property iff each of its
branches contains at most finitely many nodes. And recall
Konigs Lemma. A tree is finite iff it has the finite
fork and finite branch property.
Now, by inspection of the rules
Lemma 3.8.5. The proof search tree of any sequent E
has the finite fork property.
Of course L4L has the Subformula Property. But
more important for our purposes
Lemma 3.8.6. For any inference of L4L, every formula
constituent of a premise thereof is a subformula of a
formula constituent of the conclusion.
At last we have
Lemma 3. 8. "(. The proof search tree of any sequent E
has the finite branch property.
203
Proof. Choose an arbitrary sequent, say E, and let
m = deg(sr(dN(E))). By the Counting Lemma there are at
most finitely many e-reduced structures of degree < m
built up from subformulae of formula constituents of E.
Whence by the specification of the proof search procedure,
Lemma 3.8.6 and the Degree Lemma, there can be but a finite
number of different sequents, occurring no more than once,
on any given branch of the proof search tree of E - which
completes the proof.
~~~
So we conclude straightaway
Lemma 3.8.8. Finitude Lemma
The proof search tree of any sequent E is finite
Finally, by the Effectiveness, Completeness and
Finitude Lemmas and the equivalence of Theorem 3.5.4,
we get our main result
Theorem 3.8.2. L4Rw: and L4Tw: are decidable.
And then by the L4L Equivalence Theorem
Theorem 3.8.3. 0 0
RW+ and TW+ are decidable, which completes
the business of the Chapter.
SNOI.LS3:fi0 N3:d0 GN\f Sl1fiS3:l:! 9NIGn1::JNO::J ·p l:!Hld\fHJ
204
SECTION 1. Introduction
This, the concluding chapter, looks backward and
forward. In §2 we collect some easy results for various
fragments of the logics treated in the previous chapter.
And in §3 we discuss the problems of using our techniques
to show EW+ decidable.
In §4 we formulate modified Display Logics for
TW0
1 and RWol and prove appropriate equivalences. We then
outline a proposed method of extending our basic decidability
argument to these systems, filling it in as far as we now
can. The conservative extension results of §1.8 show that
the decidability of these boolean systems would suffice
for the decidability of TW and RW.
Finally, we conclude this work with a discussion in
§5 of what we take to be one of the most interesting
questions that has arisen from our research, namely,
whether or not RW+is equivalent to uRW+.
205
SECTION 2 Decidable Fragments.
The L-systems of the previous chapter give
separation results for the logics considered there. The
1-systems for+ and+,& with or without o, as well as the
positive systems without o can be shown equivalent to
their axiomatic counterparts, thus giving simple proofs of
conservative extension. 1 Most of these results are already
known. The most extensive list of such currently in print
is to be found in Meyer and Routley 74. But a more
complete and updated report will be found in RLRI.
Of course, given the above conservative extensions
and equivalences, the decision procedure of the previous 0 0
chapter also decides the relevant fragments of TW+ and RW+.
Of these, only one has been previously published. That is
of course the duly famous PW. It was first shown to be
decidable using the "merge" Gentzen systems of AB75. The
result is essentially recorded there on p.69. A second
proof of its decidability is given in Martin 79 using
semantic techniquesdeveloped therein.
Many of these results can be duplicated using the
subscripted Gentzen systems of Chapter 2. The G-systems
without v-rules and without K ~and W 1- can be shown
equivalent to T+& and R+&' respectively, by simply proving
the axioms (Cut and modus ponens will stay in force) and
then translating into the appropriate semilattice semantics
of Chapter 1 § 9, making use of Urquhart Theorem 1 given
there. Alternatively, one could translate into the
operational semantics of Fine 74 or of RLRI. The same
goes for T+ and R+.
206
And of course, the same remarks apply mutatis
mutandis for fragments of GTW+ and GRW+. Thus the decision
procedure of Chapter 2 gives yet another proof of
decidability for the pure implication and implication
conjunction fragments of TW+ and RW+.
207
FOOTNOTES
1Fragments with only one of & and v, can be formulated
without any extensional structural rules, and hence
without e-sequences period. It is another reason for
liking the particular forms of & ~ and ~ & which we use.
SECTION 3 E+ and EW+
The straightforward way to formulate the
LE-systems is to add
Cit f- r 1.(X;tl r 2 l- c
r 1 Ct;X)r 2 J-C
to the appropriate LT-systems of the previous chapter.
The rule is admissible on translation in E:t and in ot
EW+ . In the analysis of the rules, the permuted
occurrence of t can be counted as principal, to allow
208
closure under parametric substitution as usual. The proof
of Cut then goes through as in Chapter 3.
However, the proof of Vanishing-t runs into trouble.
Ct ~forces one to argue that t can be vanished as a right
constituent as well as a left constituent of an intensional
structure, as was the case with the LR-systems. But then
trouble arises in cases such as
B, ~ (X;(Y;t) ~C
Y;X;t ~ C
One can show X;Y ~Con inductive hypothesis. But lacking
a general rule of intensional permutation, the way is
apparently blocked to the desired Y;X ~C.
A solution to this problem is to formulate LE+
and LEW+ by exchanging Ct ~ for
t ~A r 1 Br 2 ~c X~ A t ~ B t+ ~ and t ~ o
r 1 (t;A+B)r 2 ~ c t;X~AoB
The L2E-systems are then specified by allowing
emptiness on the left and adding f- t as an axiom.
209
Vanishing-t and all of its works will then go through, as
will the argument for decidability for L4Ew:t
However, the normal argument for Cut breaks down
in this formulation for the case of the left cut sequent
following by f-+ and the right cut sequent following by
the matching t+ f- . And our attempts to build into the
Cut Theorem itself the required permutation of t look far
from promising. Further, an inductive argument to the
effect that Cit is admissible breaks down on the case for
B'i ~ .. We suspect that these systems are simply too weak.
The situation can be salvaged - somewhat. In the
first place, the original formulation of LE:t with Cit~
can be shown equivalent to E:t, which we axiomatize as in
Routley and Meyer 72 by adding
E+Ax. t+A+A
ot t t to T+ . EW+, of course, comes by adding E+Ax to TW+.
Now let us formulate L'EW~ by dropping t- r and the fusion rules from our original formulation of
ot LEW+ . One can then show
Indefinite Equivalence.
some t-structure X, X r A
A is a theorem of EW~ iff
is derivable in L'EW~.
for
The reason for dropping o is that without t- f- the proof
of admissibility of R3 is blocked.
210
The rule Citr presents no new problems for denesting
and reduction. And it is obviously degree preserving.
So the argument of the previous chapter can be applied to
show LEW~ decidable.
But because of the indefiniteness of the equivalence,
this does not suffice to show EW~ decidable. This is an
altogether peculiar situation. One can hardly believe that
it is irremediable. What one would really like to do, is
to replace Cit ~ by a more general rule of restricted
permutation. But we know of no way to specify in advance
just which structures are permutable. So we leave as an
Open Question. Is EW+ decidable?
211
SECTION 4. Extensions and Decidability
We noted in Chapter 3 §8 that our decidability
technique breaks down in the presence of intensional
contraction. Specifically, Wi ~ is not degree preserving.
And there appears to be no straightforward modification of
the technique which can cope with this rule. Indeed, the e.s\.o..l,..\,.;c;.\...,.s, ~ ... u.."'<io..c,d-al.. ~L~':l.
8+ geptainly ~uts the oegs in recent
"'' T, I faveur
result of Urquhart E'" n_,._~ R ,_ or H+ Baing uneesi9aGl~. As was noted in Meyer and
l Giambrone 80, if R+ is undecidable, so is T+,E+,T,E and R.
However, there is some hope of extending our
decision procedure to TW and RW, by way of the Display 2
Logics of Belnap 8+, to a discussion of which we now turn.
In some sense, Display Logic brings the ideas of
Dunn 75 and Meyer 76 to their logical conclusion.
If we can have a structural analogue of o, why not of other
connectives? Belnap 8+ shows that we can, and reaps the
benefits by showing that Display Logics can be given
for an enormous range of logics, some well-known and
others yet undreamt of.
Here we will concern ourselves with only the Display
Logics for TW 01 and RW 01 , which we call DTW and DRW for
the sake of simplicity. We assume£ to have the appropriate
vocabulary.
Equivalence will be shown via TWo 1 t and RW01
t,
in the language of which it will be convenient to define
Df. f =df -t, and
D+. A+B =df ~A+B.
212
For the Display Logics thermselves we will want
five structural connectives, represented by III, 1;1, 1,1,
1·1, 111, respectively. - and I are structural analogues
of -and I, of course. The other connectives are context
sensitive. I alternately stands in fort and f, ; for o
and +, and , for & and v.
We again use lxl, lyl, etc. as structural variables;
and parentheses will be used to disambiguate inscriptions.
Structures are defined recursively:
l. I and A are structures, for any formula A;
2. x·, xl, (X;Y) and (X,Y) are structures, for any
structures X andY.
Of course, sequents are of the form X ~Y.
DTW and DRW can then be specified from the following
axioms and rules.
AXIOMS
A I- A
RULES
Display Equivalences
X,Y~Z<>X~YI,z
X;Y~Z "'Xl-Y-;Z
X ~Y,Z .. X,YI ~z .. X ~Z,Y
X ~Y,Z .. X;Y- ~z .. X ~Z;Y
X~Y .. yl~xl <>XTI~y
x J- Y .. x- ~ Y .. x-- ~ Y
Structural Rules
I ~ X~ y
I ;X ~ Y
B; ~ W; (X;Y) ~ Z
(W;X) ;Y ~ Z
c; ~ (W;X) ;Y ~ Z (W;Y); X~ Z
c' ~ (W,X) ,Y j- Z (W,Y),Xj-Z
w' ~ Y,YJ-Z
Y F z
Logical Rules
& j-
v j-
A,B j- Z A&B I- Z
A~X B~Y AvB!-X,Y
I- j-
B,; ~
CI; ~
CI, j-
K' ~
~ &
l- v
I ;X ~ Y X ~ y
W; (X;Y) ~ Z (X;W) ;Y ~ Z
X; Y ~ Z Y ;X ~ Z
X, Y ~ Z Y ,X ~ Z
X ~ Z X, Y ~ Z
X~A Y~B X,Y ~A&B
X j- A ,B X j- AvB
213
X~A B~Y + ~ A+B ~ x- ;Y
1- A;B ~z o Ao B ~ Z
~j-A-1-Z ~A 1- Z
AI 1- Z I 1- lA 1- Z
X;A I- B ~ + X~ A+B
~o Z~A Y~B X;Y ~ AoB
1--
I-I
Z 1- AZ I- ~A
Z 1- AI Z I- lA
All of the axioms, logical rules, display
equivalences, I 1-, I- 1-, C, ~,
214
CI, 1-, W, 1- and K, 1- are common to both DTW and DRW. For ,
DTW add B; 1- and B ; 1- • For DRW add C; 1- and CI; 1- instead.
Derivations are as usual. And we say that a
formula A is provable in DTW (DRW) just in case I ~A is
derivable therein. We will want to show that DTW and DRW
are equivalent to TW 01 and RW 01 , respectively.
All of the axioms of TW01 (RW 01 ) are provable in
DTW (DRW). To give the flavor of these Gentzen systems
we prove two of the negation axioms (!del indicates use
of one or more display equivalences.)
Axl6. B 1- B de B- 1- B-
~t-
A 1- A --£1-B-+ 1-
A+"£ 1- A-·B-de
A+~B; B 1- A-~~
A+-B;B I- ~A 1-+ A+-B 1- B+~A
I ~ I :A+-B 1- B+-A
~-+ I 1- A +-B+. B+~A
215
Ax21. c 1- c de Cl f- Cl
f-1 Cl f- IC
K' f-Cl 1 (A; B) f- IC CI, f-
(A; B) 1 Cl f- IC de A;B f- CIC f- v A;Bf-CviC 1- +
A f- B+. C viC I f-I ; A f- B+. C viC f-+
If- A+.B+.CviC
We leave it to the reader to show that the other
axioms are provable and that R2 and R3 are admissible
rules. To show Rl admissible requires a Cut Theorem. But
we have one ready-made. In §4 of Belnap 8+, a very
general proof of Cut in Display Logic is given which covers
DTW and DRW. So
Belnap Lemma 1. Cut is admissible in DTW and DRW, i.e.,
if X f-A and A f-Y are derivable, so is X f-Y;
and the reader can now easily show that Rl is admissible.
(Note, however, that I-f- must be used.) So
Lemma 4.4.1. A is provable in TW 01 (RW 01 ) only if it is
provable in DTW (DRW).
One can show the other half of the equivalence by
translating sequents into formulae with the result that
the translate of any derivable sequent is a theorem. But
since we have an eye to decidability, we will move to a
216
new formulation and complete the equivalence for it.
Although we do not have a proof of decidability,
we can go some way toward extending our techniques to one.
Here we take them as far as we can, sketch a proposal
for completing the work and discuss the difficulties
involved.
In the first place, we will need some sort of
reduction. As we commented in Chapter 3, a binary
extensional structure is not convenient for such. Since
the notation for DTW and DRW is relatively simple, we will
trade in the extensional binary structuring for sets. But
this will not suffice, since we still have ''negative''
structural connectives. So we could still nest a
"structurally negated" extensional structure within other
extensional structures.
However, when one notes that the boolean systems
have DeMorgan Laws for both boolean and DeMorgan negation,
the solution becomes obvious. Structural negation will
be driven inside of sets automatically.3
Even so, negative structural connectives still
present a problem. For the Display Equivalences allow
these connectives to appear and disappear. If decidability
is to be shown, we must get some control over how many
negative structural connectives can ''bind'' a given structure.
The following facts about the boolean systems give the
solution:
217
Fact 4.4.1. A is provably equivalent to IIA and to --A.
Fact 4.4.2. -lA is provably equivalent to !~A.
Fact 4.4.3. The boolean systems are closed under replacement
of provable equivalents.
So we can effectively "permute'' and "contract" negative
structure connectives. The simple and obvious thing to do
is to trade in negative structuring for sets of negation
formula connectives.
We now present new formulations of DTW and DRW,
which we call lsTWI and lsRWI, whose structures (s-structures)
will be ordered pairs and sets thereof. So let us
simultaneously define quasi-structures and s-structures,
using a,S,y, etc. as variables over quasi-structures,
X,Y,Z, etc. as variables over s-structures and x,y,z, etc.
as variables ranging over subsets of {~,1}:
(l) Formulae are quasi-structures, and so is I;
(2) if a is a quasi-structure and x is a subset of
{~,1}, then <a,x> is an s-structure;
(3) if X and Y are structures, then (X;Y) is a
(4)
quasi-structure; and
if X1 , ••• ,X ares-structures which are ordered pairs n
(note, not sets), then {X 1 , ••• ,X} is an s-structure. --- ---- n
218
Naturally, s-sequents are of the form X 1- Y.
Next, let us make some notational conventions and
define some useful operations. First let us identify
(s-structures which are) singletons with their sole
element, i.e., {<a,x>} = <a,x>, for instance. And when
convenient, we will write lal for '<a,¢>1. For example,
we allow ourselves to write IAI for '<A,¢>1.
We use lui for ordinary set union, I_! for binary
set complementation, and r~l for symmetric difference
(see Kuratowski and Mostowski 68, for example) which can
be defined by (where a and b are sets)
a~ b = (a-b)u(b-a); or equivalently, a~ b = (aub)-(anb).
And let us define, for any quasi-structures a 1 , .•• ,an and
any subsets of {-,1} XJ, ... ,xn y, '
{<a 1 ,x 1>, ... ,<a ,x >} U y = {<a 1 ,(x 1-'-y)>, ... ,<a ,(xn~Y)>}. n n n
Recall for this definition that any s-structure which is
an ordered pair has been identified with the singleton set
containing it. Further, that same convention makes
expressions of the form rxuYI well-defined.
Keeping these definitions and conventions firmly
in mind it is easy to specify the axioms and rules of sTW
and sRW. Interpret commas as set union, interpret
•x-1 as Xu{~}, andlx1 1 as Xu {I} in the statement of
219
the rules for DTW and DRW. sTW and sRW can then be specied
as DTW and DRW were. However, C, f- , CI, f- and W, f- are
now redundant, so we drop them.
Given this specification, it is clear that
Lemma 4.4.2. DTW and DRW are contained on the obvious
translation in sTW and sRW, respectively.
Then by Lemmas 4.4.1 and 4.4.2, we have immediately
Lemma 4.4.3. A is provable in TW 0'(RW 01 ) only if it is
provable in sTW(sRW).
To complete the desired equivalence, we will want
to translate sequents into formulae. First, we define
simultaneously two functions c and a (consequent and
antecedent- cf., §2.3 and 2.4 of Belnap 8+) from s-structures
and quasi-structures into formulae: For any formula A,
any quasi-structure a and s-structures X andY,
1. c(A) = a(A) = A; for any formula A;
2. c (I) = f and a (I) = t;
3. c(<a,¢>) = c(a) and a(<a,¢>) = a(a); for any
quasi-structure a;
4. c(X;Y) = c(X)+c(Y), and a(X;Y) = a(X)oa(Y), for any
s-structures X andY.
5. c(<a,{~}>) = ~(a(a)), and a(<a,{~]>) = ~(c(a)), for
any quasi-structure a;
6. c(<ct,x U {"""1}>) = l(a(<a,x>)) and a(<a,x U {I)>=
I( c(<a,x>) )4; and
7 . cfX1, ... ,X} = c(X 1)v ... vc(X ),and a({X1 , ... ,X }) = n n n
220
a(x 1 )& ... &a(X ), for any distinct s-structure X1•···•X, n n n > 1. (We assume an ordering of s-structures.)
Then for any s-structuresX andY, let t(X~Y) =
a(X) + c(Y).
Now with Facts 4.4.1 - 4.4.3, the reader can
easily verify
Lemma 4.4.4. X ~y is derivable in sTW(sRW) only if
t(X ~ Y) is a theorem of TWo It (RWolt).
And it is now easy to show
Theorem 4.4.1. For any formula A in the language of
TW 01, A is provable in TW 01 (RW 01
) iff it is provable in
sTW(sRW).
Proof. Left to right is immediate from Lemma 4.4.3. For
right to left, assume I ~A is derivable in one of the olt
s-systems. By Lemma 4.4.4, t+A is a theorem of TW olt or of RW , as the case may be. But t is also a theorem
(it's an axiom), whence A is a theorem. Then by Theorem
8 , ol ol 1. . 3, A lS a theorem of TW ( RW ) , to complete the proof.
221
And by Theorem l. 8 . 2, we get
Theorem 4.4,2, For any formula A in the language of TW,
A is provable in TW(RW) iff it is provable in sTW(sRW).
We now have the desired equivalences, and good
extensional control . But two questions remain for
applying our technique for decidability.
Question l. Vanishing-t (Vanishing-I)
Can the systems be formulated without I- ~· without losing
equivalences.
The answer to this question must (should?) be
yes. Our preferred method of going empty on the left-
and now thus empty on the right - presents technical
difficulties on how to translate sequents that are empty
on one side or other. For consider an s-derivation
ending as follows;
(l) ( 2)
( 3)
( 4)
I ;A ~ B
I~ A+B
<A+B {-}>]-<I {-}>
-(A+B) ~<I {-}>
<I {-I}> ~<-(A+B) {I}>
<-(A+B) ,{1,-}>~<I,{"l}>
If the systems were allowed to be empty, we would be
forced to interpret emptiness on the left alternatively
as t inland as 1-t in (3). Emptiness on the right is
222
analogous in (2) and (4). And this is only the tip of the
iceberg.
The simpler course would seem to be to leave I
in the system, stay non-empty and argue to the effect that
I;X f-Z is derivable just in case X rZ is. But in any
event, an argument for Vanishing-t will not be a
straightforward adaptation of the argument of §3.5.
For the analogue of Lemma 3. 5.1 is not immediately
forthcoming. I's can move in and out of sets with members
other than I, as illustrated below
I r p+p
n,q} r p+p
I r { <q, fl}> ,p+p}
It may be the case that no Vanishing-t Theorem
can be had. Or that the most one could hope for would be
the minimum requirement that I;I f-Z be derivable just in
case I f-Z is. For the time being, the question remains
open.
Question 2. Degree
Is there a determinable upper bound on the degree of a
sequent that can occur in a derivation of a given sequent.
Obviously, the first step is to find an
appropriate definition of degree. The following immediately
suggests itself:
223
(1) deg(I) = 0;
(2) deg(A) is the total number of ~·sand o 1 s occurring
in A;
(3) deg(X;Y) = deg(X) + deg(Y) + 1;
(4) deg(<a,x>) = deg(a);
(5) deg({X 1, ... ,Xn}) = max({deg(X 1), ... ,deg(Xn)}); and
(6) deg(X ~ Y) = deg(X) + deg(Y).
That is, ignore negation and count degree essentially as
before.
But on this definition, certain boolean display
equivalences fail to be degree-preserving, as shown by: 5
{p,p+p} ~ A~.B~C
P ~ {<p+p,CI}>,A+.B~C}
One possible solution to this problem is to define
a normal form for sequents in such a way that no s-structures
of the form Xu <a,y u {I}> occurs in it, and then count
the degree of a sequent as the degree of its normal form.
But if I ~ and ~I are to be degree preserving, such normal
forming would almost certainly have 6 to be radical enough
to eliminate s-structures of the form Xu lA.
At this point, we would not hazard a guess as to
whether such a strategy can be effected. So for the time
being, Question 2 also remains unanswered. And we conclude
this section with the partially independent
·pueees s~~ e~ dBIISUE
6D1QBPTDDp M~ PUB
MH DJV tDTqDpf8Bp [0
MM PUE LoMffi BdV "SUOT4SDnb uedo
225
FOOTNOTES
1see addendum to §1.3.
2our discussion of Display Logics and the presentation
of DTW and DRW are strictly tailored to present purpose.
A more general discussion of Display Logic is contained
in Chapter 1 § 2 , where the use of 'Display Logics I as
opposed to IDisplay Logicl is explained.
3This point, as well as the other points on negation below,
can also be found in §5.8 of Belnap 8+.
45 and 6 embody an arbitrary decision to translate
boolean negation to the outside of De Morgan negation.
The reverse procedure would do as well.
5The example also shows a further difficulty, namely,
K,r, for going empty.
6we suspect that one would even have to change the notion
of degree for formulae.
226 SECTION 5. RW = VRW ? + +.
Before getting into the question, we will want
to make some assumptions to simplify the discussion. We
have stated before that neither the uRW+ semantics nor
NuRW+ (which comes from NuR+ by putting the obvious
disjointness restriction on +E) are known to be equivalent
to uRW+. However, it seems very likely that the proofs of
Charlwood Theorem 1 and 2 will go through mutatis
. 1 mutand2s. For the sake of the discussion to follow, we
ASSUME that the three are equivalent.
The question of the equivalence of RW+ and uRW+
arose when it was noted that the standard counter examples
to the equivalence of R+ and uR+ are invalid in the duRW+
semantics given in the first chapter. The best known of
these is
Thl. (P+qvr)&(q+r)+.p+r
Thl. is provable in uR+, but not in R+. But it and its
known mates require contraction for their proof.
To see that Thl is not a theorem of uRW+, it will
suffice to give a refuting model in the duRW+ semantics,
since it is straightforward to show that uRW+ is consistent
with respect to that semantics. So let K be the power set
of {1,2}. Obviously~= <K,0,v> is a duRW+m.s. Then let
V make p true at {2}, q true at {1,2), and otherwise make
a sentential parameter false at an element of K. The
reader can quickly check that the associated interpretation
makes (p+qvr)&(q+r) true at {1), and p+r false at {1).
227
Whence Thl is false at ¢, as required.
So let us officially pose the
Open Question. Are RW+ and uRW+ (theoremwise) equivalent?
This is a very interesting question, and one which
has been raised independently by at least one other
researcher in the field, Professor Robert Bull of New
Zealand. 2 A positive answer to the question would yield a
very simple semantics for RW+. But in any event, the
process of discovering the answer should throw more light
on the relationship between the traditional Relevant
Logics and their semilattice cousins.
We suspect that the two systems are
equivalent, but the suspicion is largely based on negative
evidence, namely failure to date in finding a counter
example. But for the time being, this question must
remain as an interesting problem for further research.
FOOTNOTES
1It seems to us that the ''book-keeping'' of such an
adaptation of Charlwood's completeness proof could be
more easily managed using the duRW+ semantics given
in Chapter 1.
2Reported in correspondence of July 1982.
228
229
BIBLIOGRAPHY
Anderson, A.R. and Belnap, N.D., Jr.
[1975] Entailment: The Logic of Relevance and Necessity, Vol. 1, Princeton University Press, Princeton, New Jersey.
Belnap, N.D., Jr.
[1959]
[1960]
[198+]
"Pure Rigorous Implication as a Sequenzen-kalkul", Journal of Symbolic Logic, 24, 282-83.
A Formal Analysis of Entailment, Technical Report No. 7, Contract No. SAR/Nonr-609(16), Office of Naval Research, New Haven.
"Display Logic", Journal of Philosophical Logic.
Belnap, N.D., Jr., Gupta, A. and Dunn, J.M.
[1980] "A Consecution Calculus for Positi've Implication with Necessity", Journal of Philosophical Logic, 9, 343-62.
Belnap, N.D., Jr., and Wallace, J.R.
[1965]
Charlwood, G.
[1978]
[1981 ]
Curry, H.B.
[1963]
"A Decision Procedure for the System E- of Entailment with Negation", Zeitschrift JUr Mathemtische Logik und Grundlagen der Mathematik, 11, 277-289.
Representations of Semi lattice Relevance Logic, University of Toronto Doctoral Dissertation.
"An Axiomatic Version of P.ositive Semilattice Relevance Logic", Journal of Symbolic Logic, 46, 231-239.
Foundations of Mathematical Logic, McGraw-Hill, New York.
Curry, H.B. and Feys, R.
[1958] Combinatory Logic, Vol. 1, North-Holland, Amsterdam.
Dunn, J .M.
[1973] "A 'Gentzen System' for Positive Relevant Implication", (Abstract), Journal of Symbolic Logic, 38, 356-357.
Kron, A.
[1976]
[1978]
[1980]
[1981]
"Deduction Theorems for T, E and R reconsidered", Zeitschrift j"ur Mathematische Logik und Grundlagen der Mathematik, 22, 261-64.
"Decision Procedures for Two Positive Relevance Logics", Reports on Mathematical Logic, l 0, 61-78.
"Gentzen Formulations of Two Positive Relevance Logics" , Studia Logica, 39, 381-40 3.
"Corrections", Studia Logica, 40, 311.
Kuratowski, K. and Mostowski, A.
[1968] Set Theory, North-Holland, Amsterdam.
Lukasiewicz, J.
[1920] "0 Logi ce Troji vartosciowej", Ruch Filozoficzny,
231
5, 169-71. (Reprinted in translation in McCall 67, 16-18, as "On 3-valued Logic".)
Martin, E.P.
[1978] The P-W Problem, Doctoral Dissertation, Australian National University, Canberra.
Martin, E.P. and Meyer, R.K.
[198+ J "Solution to the P-W Problem", Jdurnal of Symbolic Logic.
Meredith, C.A. and Prior, A.N.
[1963]
Meyer, R.K.
"Notes on the Axiomatics of the Propositional Calculus", Notre Dame Journal of Formal Logic, 4, 171-87.
[1966] Topics in Modal and Many-Valued Logic, Doctoral Dissertation, Univerity of Pittsburgh, Pennsylvania.
[l976a] "Metacompl eteness", Notre Dame Journal of Formal Logic, 17, 501-516.
[l976b] "A General Gentzen System for Implicational Calculi", Relevance Logic Newsletter, l, 189-201.
[1979] "Sentential Constants in R", Research Paper No.2, Logic Group, R.S.S.S., Australian National University, Canberra.
[19++] "Improved Decision Procedures for Pure Relevant Logics", Typescript 1973.
Dunn, J.M.
[1975]
Fine, K.
[1974]
Gentzen, G.
[1935]
[1969]
Harrop, R.
[1956]
Kleene, S.C.
[1962]
Konig, D.
[1927]
Kripke, S.A.
[1959 J
Kron, A.
[1973]
"Consecution Formulation of Positive R with Co-tenability and t", in Anderson and Belnap [1975], 381-391.
"Models for Entailment", Journal of Philosophical Logie, 3, 347-372.
"Untersuchungen Uber das logische Schliessen", Mathematische Zeitsehrift, 39, 176-210 and 405-431. (Reprinted in translation in Gentzen [1969], 68-131, as "Investigations into Logical Deduction") .
The CoUected Papers of Gerhard Gentzen, ed. M. Szabo, North-Holland, Amsterdam.
"On Disjunctions and Existential Statements in Intuitionistic Logic", Mathematisehe Annelen, 132, 347-64.
Introduction to Metamathematics, North-Holland, Amsterdam.
230
Uber eine Schlussweise aus dem Endlichen ins Unendliche (Punktmengen-Kartenfarben-Verwandtschaftsbeziehungen-Schachspiel). Acta Litterarum ae Scientiarum (Sectio Seientiarum Mathematicarum), Vol.3, 121-130. (More easily available in Theorie der endliehen und unendlichen Graphen. Akademische Verlagsgesellschaft, Leipzig, 1936. Republished, New York (Chelsea) 1950.)
"The Problem of Entailment", (Abstract), Journal of Symbolic Logic, 24, 324.
"Deduction Theorems for Relevant Logics", Zeitschrift fur Mathematische Logik und GrundZagen der Mathematik, 19' 85-92.
232
Meyer, R.K. and Giambrone, S.
[1980]
[1981]
"R+ Is Contained in T/, Bulletin of the Section of Logic of the Polish Academy of Sciences, 9, 30-33.
"Strict Implication in T", Logigue et Analyse, 94, 267-69.
Meyer, R.K. and McRobbie, M.A.
[1979]
[1982]
"Firesets and Relevant Implication", Research Paper No. 3, Logic Group, R.S.S.S., Australian National University, Canberra. Revised version printed as Meyer and McRobbie [1982].
"Multisets and Relevant Implication", Australian Journal of Philosophy, 60, 107-39 and 265-81.
Meyer, R.K. and Routley, R.
[1973a] "Classical Relevant Logics I", Studia Logica, 32, 51-68.
[1973b] "An Undecidable Relevant Logic", Zeitschrift fUr Mathematische Logik und Grundlagen der Mathematik, 19, 389-397.
[1974a] "Classical Relevant Logics II", Studia Logica, 33, 183-94.
[1974b] "E is a conservative extension of EI", Philosophia, 4, 223-49.
Meyer, R.K., Routley, R. and Dunn, J.M.
[1979] "Curry's Paradox", Analysis, 39, 124-28.
Meyer, R.K. and Slaney, J.
[1980]
Mine, G.E.
[1972]
McCall, S.
[1967]
McRobbie, M.A.
[1979]
"Abe 1 ian Logic (from A to Z)", Research Paper No. 7, Logic Group, R.S.S.S., Australian National University, Canberra. (Reprinted in abbreviated form in Routley, Priest and Norman [1983].
"Teorema ob Ustranimosti Se~enia dla Relevantnyh Logi k", in Mati jasevi c and Sl i senko, 90-97. (Reprinted in translation in Journal of Soviet Mathematics, 6 [1976], 422-428, as "Cut-Elimination Theorem for Relevant Logics").
Polish Logic, 1920-1939, Clarendon Press, Oxford.
A Proof Theoretic Investigation of Relevant and Modal Logics, Doctoral Dissertation, Australian National University, Canberra.
Powers, L.
[1976]
Quine, W.V.O.
[1966]
"On P-W", Relevance Logic Newsletter, 1 , 131-42.
Methods of logic, revised edition, Holt, Rinehart and Winston, New York.
Routley, R. and Meyer, R.K.
[1972a] "The Semantics of Entailment III", Journal of Philosophical Logic, 1, 192-208.
[1972b] "Algebraic Analysis of Entailment I", Logique et Analyse, 15, 407-428.
[1973] "The Semantics of Entailment", Truth, Synt= and Modality, ed. Hughes Leblanc, North-Holland, Amsterdam, 199-243.
[198+] "Semantics of Entailment IV: E, TI' and Appendix 1 of Routley and Meyer et al.
II II 7f ,
Routley, R., Meyer, R.K. et al.
[198+] Relevant Logics and Their Rivals, Ridgeview, Atascadero, California.
Routley, R., Priest, G. and Norman, J. (ed.)
[1983] Paraconsistent Logics, Philosophia Verlag, Munich.
Slaney, J.
233
[1980] Computers and Relevant Logic: A Project in Computing Matrix Model Structures for Propositional Logics, Australian National University Doctoral Dissertation.
[1983]
[198+]
Smiley, T.
[1959]
Urquhart, A.
"RWX Is Not Curry Consistent", in Routley, Priest and Norman [1983].
"A Meta-completeness Theorem for Contraction-free Relevant Logics", Studia Logica (special issue on paraconsistent logics, edited by Priest & Routley).
"Entailment and Deducibility", Proceedings of the Aristotelian Society, 59, 233-254.
[1972a] "The Completeness of Weak Implication", Theoria, Vol.37, 274-82.
234
Urquhart, A.
[1972b] "Semantics for Relevant Logics", Jou:mal of Symbolic Logic, 357, 159-69.
[1973] The Semantics of Entailment, University of Pittsburgh Doctoral Dissertation, Ann Arbor (University Microfilms).
[1982] "Relevant Implication and Projective liieometry", unpublished manuscript.
[198+] "The Undecidability of Entailment and Relevant Implication", as yet unpublished manuscript.