Focused Natural Deduction⋆

Taus Brock-Nannestad and Carsten Schürmann

IT University of Copenhagen
{tbro,carsten}@itu.dk

⋆ This work was in part supported by NABITT grant 2106-07-0019 of the Danish Strategic Research Council.

C. Fermüller and A. Voronkov (Eds.): LPAR-17, LNCS 6397, pp. 157–171, 2010.
© Springer-Verlag Berlin Heidelberg 2010

Abstract. Natural deduction for intuitionistic linear logic is known to be full of non-deterministic choices. In order to control these choices, we combine ideas from intercalation and focusing to arrive at the calculus of focused natural deduction. The calculus is shown to be sound and complete with respect to first-order intuitionistic linear natural deduction and the backward linear focusing calculus.

1 Introduction

The idea of focusing goes back to Andreoli [1] and answers the question of how to control non-determinism in proof search for the classical sequent calculus for fragments of linear logic. It removes the “bureaucracy” that arises due to the permutability of inference rules. Historically, the idea of focusing has influenced a great deal of research [2,3,5], all centered around sequent-style systems. There is one area, however, which has been conspicuously neglected in all of this: natural deduction. In our view, there are two explanations.

First, for various reasons we do not want to get into here, most theorem proving systems are based on the sequent calculus. It is therefore not surprising that most of the effort has gone into the study of sequent calculi, as evidenced by results on uniform derivations [8] and, of course, focusing itself.

Second, it is possible to characterize a set of more “normalized” proofs for natural deduction in intuitionistic logic. Many such characterizations have been given in the past, for example the intercalation calculus [9,10]: backward chaining from the conclusion uses only introduction rules until only atomic formulas are left over, and, similarly, forward chaining from the hypotheses uses only elimination rules. Theorem provers for natural deduction, implementations of logical frameworks, and bi-directional type checkers are all inspired by this idea of intercalation.

One of its advantages is that the aforementioned “bureaucracy” never arises, in part because the book-keeping regarding hypotheses is done externally and not within the formal system. In this paper we refine intercalation with ideas from focusing, resulting in a calculus with an even stricter notion of proof. This is useful when searching for and working with natural deduction derivations.

The hallmark characteristic of focusing is its two phases. First, invertible rules are applied eagerly, until both context and goal are non-invertible. This phase

is called the inversion or asynchronous phase. Second, a single formula taken from the context or the goal is selected and focused upon, applying a maximal sequence of non-invertible rules to this formula and its subformulas. This is known as the focusing or synchronous phase. The central idea of focusing is to polarize connectives into two groups. A connective is flagged negative when the respective introduction rule is invertible but the elimination rule(s) are not, and it is considered positive in the opposite case. Each connective of linear logic is either positive or negative; the polarity of atoms is uncommitted.

The main motivation for our work is to explore how proof theory, natural deduction in particular, can be used to explain concurrent systems. In this our goals are similar to those of the concurrent logical framework research project CLF [11]. Pragmatically speaking, in this paper we remove all bureaucratic non-determinism from logical natural deduction derivations (as described by focusing), so that we can, in future work, use the remaining non-determinism to characterize concurrent systems. That the resulting focused natural deduction calculus is complete is perhaps the most important contribution of this paper.

Consider for example the judgment a, a ⊸ 1 ⊩ 1 ⊕ b ⇑, for which we would like to find a derivation in the intercalation calculus. The judgment should be read as: find a canonical proof of 1 ⊕ b from the linear assumptions a and a ⊸ 1. As nothing is known about b, one might hope (at least intuitively) for one unique derivation. However, there are two, which we inspect next in turn.

(1) a ⊸ 1 ⊩ a ⊸ 1 ⇓         by hyp
(2) a ⊩ a ⇓                  by hyp
(3) a ⊩ a ⇑                  by ⇓⇑ on (2)
(4) a, a ⊸ 1 ⊩ 1 ⇓           by ⊸E on (1) and (3)
(5) · ⊩ 1 ⇑                  by 1I
(6) a, a ⊸ 1 ⊩ 1 ⇑           by 1E on (4) and (5)
(7) a, a ⊸ 1 ⊩ 1 ⊕ b ⇑       by ⊕I₁ on (6)

The experienced eye might already have spotted that this derivation is negatively focused, because the two upper rules in the left-most chain of inference rules, hyp and ⊸E, form a focus. However, it is not positively focused. The rule ⊕I₁ and the rule 1I ought to form a focus (both are positively polarized), but the 1E rule breaks it. This rule is the only inversion rule in this derivation, and hence the derivation is maximally inverted.

(1) a ⊸ 1 ⊩ a ⊸ 1 ⇓         by hyp
(2) a ⊩ a ⇓                  by hyp
(3) a ⊩ a ⇑                  by ⇓⇑ on (2)
(4) a, a ⊸ 1 ⊩ 1 ⇓           by ⊸E on (1) and (3)
(5) · ⊩ 1 ⇑                  by 1I
(6) · ⊩ 1 ⊕ b ⇑              by ⊕I₁ on (5)
(7) a, a ⊸ 1 ⊩ 1 ⊕ b ⇑       by 1E on (4) and (6)

By permuting 1E and ⊕I₁ we can restore the focus, and again the inversion phase of this derivation (1E) is maximal. In summary, for intuitionistic linear logic, the intercalation calculus does not rule out as many derivations as it should.


In the remainder of this paper, we define and discuss the calculus of focused natural deduction. It is a simple, yet deep, generalization of the intercalation formulation. Of the two derivations above, only the second one is derivable in focused natural deduction. The main idea behind this calculus is to distinguish negative from positive connectives and to make the related coercions explicit as connectives ↑P and ↓N. We call these connectives delay connectives, because they effectively delay positive and negative focusing phases. To ensure maximal asynchronous phases, we use a generalization of the patterns introduced by Zeilberger in [12].

The paper is organized as follows. In Section 2 we introduce the focused natural deduction calculus. Next, we show soundness and completeness with respect to first-order intuitionistic linear natural deduction in Section 3, and with respect to the backward linear focusing calculus of Pfenning, Chaudhuri and Price [4] in Section 4. We conclude and assess results in Section 5.

2 Natural Deduction for Intuitionistic Linear Logic

First, we give the definition of the intercalation calculus. The syntax of linear logic is standard:

A, B, C ::= a | A ⊗ B | 1 | A ⊕ B | 0 | ∃x.A | A & B | ⊤ | A ⊸ B | !A | ∀x.A

As for the judgment, we use a two-zone formulation with a turnstile with double bars, Γ; Δ ⊩ A ⇑, which reads as: there is a canonical proof of A from intuitionistic assumptions Γ and linear assumptions Δ. Conversely, we define Γ; Δ ⊩ A ⇓ for atomic derivations of A. The inference rules are standard and depicted in Figure 1.
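For concreteness, the grammar above can be transcribed into a small abstract syntax. The tagged-tuple encoding below is our own illustration; the constructor names are assumptions of the sketch, not notation from the paper.

```python
# Unpolarized formulas of first-order intuitionistic linear logic as
# tagged tuples, one constructor per production of
#   A, B, C ::= a | A⊗B | 1 | A⊕B | 0 | ∃x.A | A&B | ⊤ | A⊸B | !A | ∀x.A

def atom(a):      return ("atom", a)        # atomic proposition a
def tensor(a, b): return ("tensor", a, b)   # A ⊗ B
one = ("one",)                              # 1
def plus(a, b):   return ("plus", a, b)     # A ⊕ B
zero = ("zero",)                            # 0
def exists(x, a): return ("exists", x, a)   # ∃x.A
def with_(a, b):  return ("with", a, b)     # A & B
top = ("top",)                              # ⊤
def lolli(a, b):  return ("lolli", a, b)    # A ⊸ B
def bang(a):      return ("bang", a)        # !A
def forall(x, a): return ("forall", x, a)   # ∀x.A

# The goal 1 ⊕ b and the assumption a ⊸ 1 from the introduction:
goal = plus(one, atom("b"))
assumption = lolli(atom("a"), one)
```

Tuples give structural equality for free, which is all the later sketches need.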

2.1 Focused Natural Deduction

We split the connectives of linear logic into two disjoint groups based on their inversion properties. Connectives that are invertible on the right of the turnstile and non-invertible on the left become negative connectives. Conversely, connectives that are invertible on the left and non-invertible on the right become positive connectives. This gives us the following syntax of polarized formulas:

P, Q ::= a⁺ | P ⊗ Q | 1 | P ⊕ Q | 0 | ∃x.P | !N | ↓N
N, M ::= a⁻ | N & M | ⊤ | P ⊸ N | ∀x.N | ↑P

We use P, Q for positive propositions and N, M for negative propositions; we use A, B for propositions whose polarity does not matter. The syntax is similar to the ones presented in [2,7]. Additionally, we use the following shorthand:

γ⁺ ::= ↑P | a⁻        γ⁻ ::= ↓N | a⁺
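As an illustration (ours, not the paper's), the polarized syntax can reuse the tagged-tuple encoding, now split into positive and negative tag classes, with the delay connectives as explicit constructors: ("down", N) for ↓N and ("up", P) for ↑P.

```python
# Positive formulas  P, Q ::= a⁺ | P⊗Q | 1 | P⊕Q | 0 | ∃x.P | !N | ↓N
# Negative formulas  N, M ::= a⁻ | N&M | ⊤ | P⊸N | ∀x.N | ↑P
POS_TAGS = {"patom", "tensor", "one", "plus", "zero", "exists", "bang", "down"}
NEG_TAGS = {"natom", "with", "top", "lolli", "forall", "up"}

def is_pos(f): return f[0] in POS_TAGS
def is_neg(f): return f[0] in NEG_TAGS

# The delayed assumption ↓(a ⊸ ↑1) used in the example later in this
# section, with the atom a polarized positively:
delayed = ("down", ("lolli", ("patom", "a"), ("up", ("one",))))
assert is_pos(delayed) and is_neg(delayed[1])
```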


Γ; Δ ⊩ a ⇓
────────── ⇓⇑
Γ; Δ ⊩ a ⇑

────────── hyp
Γ; A ⊩ A ⇓

───────────── uhyp
Γ, A; · ⊩ A ⇓

───────── 1I
Γ; · ⊩ 1 ⇑

Γ; Δ₁ ⊩ 1 ⇓    Γ; Δ₂ ⊩ C ⇑
──────────────────────────── 1E
Γ; Δ₁, Δ₂ ⊩ C ⇑

────────── ⊤I
Γ; Δ ⊩ ⊤ ⇑

Γ; Δ₁ ⊩ 0 ⇓
──────────────── 0E
Γ; Δ₁, Δ₂ ⊩ C ⇑

Γ; Δ₁ ⊩ A ⇑    Γ; Δ₂ ⊩ B ⇑
──────────────────────────── ⊗I
Γ; Δ₁, Δ₂ ⊩ A ⊗ B ⇑

Γ; Δ₁ ⊩ A ⊗ B ⇓    Γ; Δ₂, A, B ⊩ C ⇑
────────────────────────────────────── ⊗E
Γ; Δ₁, Δ₂ ⊩ C ⇑

Γ; Δ ⊩ Aᵢ ⇑
───────────────── ⊕Iᵢ
Γ; Δ ⊩ A₁ ⊕ A₂ ⇑

Γ; Δ₁ ⊩ A ⊕ B ⇓    Γ; Δ₂, A ⊩ C ⇑    Γ; Δ₂, B ⊩ C ⇑
────────────────────────────────────────────────────── ⊕E
Γ; Δ₁, Δ₂ ⊩ C ⇑

Γ; Δ ⊩ A ⇑    Γ; Δ ⊩ B ⇑
────────────────────────── &I
Γ; Δ ⊩ A & B ⇑

Γ; Δ ⊩ A₁ & A₂ ⇓
───────────────── &Eᵢ
Γ; Δ ⊩ Aᵢ ⇓

Γ; Δ ⊩ [a/x]A ⇑
──────────────── ∀Iᵃ
Γ; Δ ⊩ ∀x.A ⇑

Γ; Δ ⊩ ∀x.A ⇓
──────────────── ∀E
Γ; Δ ⊩ [t/x]A ⇓

Γ; Δ, A ⊩ B ⇑
──────────────── ⊸I
Γ; Δ ⊩ A ⊸ B ⇑

Γ; Δ₁ ⊩ A ⊸ B ⇓    Γ; Δ₂ ⊩ A ⇑
───────────────────────────────── ⊸E
Γ; Δ₁, Δ₂ ⊩ B ⇓

Γ; · ⊩ A ⇑
──────────── !I
Γ; · ⊩ !A ⇑

Γ; Δ₁ ⊩ !A ⇓    Γ, A; Δ₂ ⊩ C ⇑
───────────────────────────────── !E
Γ; Δ₁, Δ₂ ⊩ C ⇑

Γ; Δ ⊩ [t/x]A ⇑
──────────────── ∃I
Γ; Δ ⊩ ∃x.A ⇑

Γ; Δ₁ ⊩ ∃x.A ⇓    Γ; Δ₂, [a/x]A ⊩ C ⇑
──────────────────────────────────────── ∃Eᵃ
Γ; Δ₁, Δ₂ ⊩ C ⇑

Fig. 1. Linear natural deduction (in intercalation notation)

Patterns. We use the concept of patterns, as seen in [6,12], to capture the decomposition of formulas that takes place during the asynchronous phase of focusing. The previous notion of patterns is extended to work with unrestricted hypotheses and quantifiers.

Pattern judgments have the form Γ; Δ ⊪ P and Γ; Δ ⊪ N > γ⁺, the latter of which corresponds to decomposing N into γ⁺ using only negative elimination rules. These judgments are derived using the inference rules given in Figure 2.

We require that the variable a in the judgments concerning the quantifiers satisfies the eigenvariable condition, i.e. it does not appear below the rule that mentions said variable.

Note that pattern derivations Γ; Δ ⊪ P and Γ; Δ ⊪ N > γ⁺ are entirely driven by the structure of P and N. In particular, this means that when we quantify over all patterns Γ; Δ ⊪ P ⊸ N > γ⁺ for a given formula P ⊸ N, this is equivalent to quantifying over all patterns Γ₁; Δ₁ ⊪ P and Γ₂; Δ₂ ⊪ N > γ⁺.

A crucial part of the definition of this system is that there are only finitely many patterns for any given polarized formula. For this reason, we treat the unrestricted context in the patterns as a multiset of formulas. This also means that we need to treat two patterns as equal if they differ only in the choice of eigenvariables in the pattern rules for the quantifiers. This is a reasonable decision, as the eigenvariable condition ensures that the variables are fresh; hence the actual names of the variables are irrelevant. With these conditions in place,


──────────
·; γ⁻ ⊪ γ⁻

────────
·; · ⊪ 1

Γ₁; Δ₁ ⊪ P    Γ₂; Δ₂ ⊪ Q
──────────────────────────
Γ₁, Γ₂; Δ₁, Δ₂ ⊪ P ⊗ Q

Γ; Δ ⊪ Pᵢ
────────────────
Γ; Δ ⊪ P₁ ⊕ P₂

──────────
N; · ⊪ !N

Γ; Δ ⊪ [a/x]P
──────────────
Γ; Δ ⊪ ∃x.P

───────────────
·; · ⊪ γ⁺ > γ⁺

Γ; Δ ⊪ Nᵢ > γ⁺
─────────────────────
Γ; Δ ⊪ N₁ & N₂ > γ⁺

Γ₁; Δ₁ ⊪ P    Γ₂; Δ₂ ⊪ N > γ⁺
─────────────────────────────────
Γ₁, Γ₂; Δ₁, Δ₂ ⊪ P ⊸ N > γ⁺

Γ; Δ ⊪ [a/x]N > γ⁺
────────────────────
Γ; Δ ⊪ ∀x.N > γ⁺

Fig. 2. Positive and negative pattern judgments

it is straightforward to prove that for any given formula, there are only finitelymany patterns for said formula.

Inference rules. For the judgments of the polarized system, we use a turnstile with a single vertical bar, Γ; Δ ⊢ A ⇑. The inference rules can be seen in Figure 3.

The quantification used in the ↑E and ↓I rules is merely a notational convenience to ease the formulation of the rules. It is shorthand for the finitely many premises corresponding to the patterns of the principal formula. Thus, the number of premises of these rules may vary depending on the principal formula. This does not, however, pose a problem when checking that a rule has been applied properly, as we can reconstruct the necessary subgoals from the formulas ↑P and ↓N. As usual, we require that the eigenvariables introduced by the patterns be fresh, i.e. they may not occur further down in the derivation.

As an example, we show the only possible proof of the statement given in the introduction. We have chosen to polarize both atoms positively and have elided the empty intuitionistic context.

(1) ↓(a ⊸ ↑1) ⊢ ↓(a ⊸ ↑1) ⇓      by hyp
(2) ↓(a ⊸ ↑1) ⊢ a ⊸ ↑1 ⇓         by ↓E on (1)
(3) a ⊢ a ⇓                        by hyp
(4) a ⊢ a ⇑                        by ⇓⇑ on (3)
(5) a, ↓(a ⊸ ↑1) ⊢ ↑1 ⇓           by ⊸E on (2) and (4)
(6) · ⊢ 1 ⇑                        by 1I
(7) · ⊢ 1 ⊕ b ⇑                    by ⊕I₁ on (6)
(8) · ⊢ ↑(1 ⊕ b) ⇑                 by ↑I on (7)
(9) a, ↓(a ⊸ ↑1) ⊢ ↑(1 ⊕ b) ⇑     by ↑E on (5) and (8)

Note that the pattern judgment for 1 does not appear in this derivation. Indeed, the pattern judgments are only used to specify which premises must be present in a valid application of the ↑E and ↓I rules.

The use of patterns in the ↑E and ↓I rules collapses the positive and negative asynchronous phases into a single rule application. Thus we equate proofs that differ only in the order in which the negative introduction and positive elimination rules are applied. For instance, the assumption (A ⊗ B) ⊗ (C ⊗ D) can be eliminated in two ways:


Γ; Δ ⊢ a ⇓
────────── ⇓⇑
Γ; Δ ⊢ a ⇑

────────── hyp
Γ; P ⊢ P ⇓

───────────── uhyp
Γ, N; · ⊢ N ⇓

───────── 1I
Γ; · ⊢ 1 ⇑

Γ; Δ₁ ⊢ P ⇑    Γ; Δ₂ ⊢ Q ⇑
──────────────────────────── ⊗I
Γ; Δ₁, Δ₂ ⊢ P ⊗ Q ⇑

Γ; Δ ⊢ Pᵢ ⇑
───────────────── ⊕Iᵢ
Γ; Δ ⊢ P₁ ⊕ P₂ ⇑

Γ; Δ ⊢ [t/x]P ⇑
──────────────── ∃I
Γ; Δ ⊢ ∃x.P ⇑

Γ; · ⊢ ↓N ⇑
──────────── !I
Γ; · ⊢ !N ⇑

Γ; Δ ⊢ N₁ & N₂ ⇓
───────────────── &Eᵢ
Γ; Δ ⊢ Nᵢ ⇓

Γ; Δ ⊢ ∀x.N ⇓
──────────────── ∀E
Γ; Δ ⊢ [t/x]N ⇓

Γ; Δ₁ ⊢ P ⊸ N ⇓    Γ; Δ₂ ⊢ P ⇑
───────────────────────────────── ⊸E
Γ; Δ₁, Δ₂ ⊢ N ⇓

Γ; Δ ⊢ P ⇑
─────────── ↑I
Γ; Δ ⊢ ↑P ⇑

Γ; Δ₁ ⊢ ↑P ⇓    Γ, Γ_P; Δ₂, Δ_P ⊢ γ⁺ ⇑  (for all Γ_P; Δ_P ⊪ P)
──────────────────────────────────────────────────────────────── ↑E
Γ; Δ₁, Δ₂ ⊢ γ⁺ ⇑

Γ; Δ ⊢ ↓N ⇓
──────────── ↓E
Γ; Δ ⊢ N ⇓

Γ, Γ_N; Δ, Δ_N ⊢ γ⁺ ⇑  (for all Γ_N; Δ_N ⊪ N > γ⁺)
──────────────────────────────────────────────────── ↓I
Γ; Δ ⊢ ↓N ⇑

Fig. 3. Focused linear natural deduction

(A ⊗ B) ⊗ (C ⊗ D) ⇝ A ⊗ B, C ⊗ D ⇝ A, B, C ⊗ D ⇝ A, B, C, D
(A ⊗ B) ⊗ (C ⊗ D) ⇝ A ⊗ B, C ⊗ D ⇝ A ⊗ B, C, D ⇝ A, B, C, D

corresponding to which assumption acts as the principal formula of the ⊗E rule. By using patterns, this unnecessary distinction is avoided, as there is only one pattern for (A ⊗ B) ⊗ (C ⊗ D), namely the pattern ·; A, B, C, D.

Note that the rules ⊤I and 0E are captured by the above rules, as there are no patterns of the form Γ; Δ ⊪ 0 or Γ; Δ ⊪ ⊤ > γ⁺.

The polarity restrictions on the hyp and uhyp rules are justified by noting that ⊸ is the internalized version of the linear hypothetical judgment. In particular, this means that the linear context can only contain positive formulas; any negative formula must appear in delayed form ↓N. Unrestricted formulas, on the other hand, are not delayed, as choosing whether or not to use an unrestricted resource is always a non-deterministic (hence synchronous) choice.

By inspecting the polarized rules, we may observe the following:

1. In a derivation of the judgment Γ; Δ ⊢ A ⇑ where A is positive (e.g. A = P ⊗ Q), the final rule must be the positive introduction rule corresponding to A, or ⇓⇑ if A is an atom. This follows from the fact that positive eliminations are only applicable when the succedent is negative.

2. The only rule applicable to the judgment Γ; Δ ⊢ A ⇓ where A is negative (e.g. A = P ⊸ N) is the appropriate elimination rule for A, or ⇓⇑ if A is an atom.


Based on the above observations, we define the following synchronous phases, based on the polarity of the succedent and whether it is atomic or canonical.

Γ; Δ ⊢ P ⇑    positive focusing, initiated by ↑I
Γ; Δ ⊢ N ⇓    negative focusing, initiated by ↓E

By our use of patterns, we collapse the asynchronous phases, which would otherwise have corresponded to the judgments Γ; Δ ⊢ P ⇓ and Γ; Δ ⊢ N ⇑.

The positive focusing phase proceeds in a bottom-up fashion and is initiated by using the ↑I rule to remove the positive shift in front of the formula to be focused. The focused formula is then decomposed by a sequence of positive introduction rules. This phase ends when the succedent becomes negative or atomic.

The negative focusing phase proceeds in a top-down fashion and is initiated by choosing a negative or delayed negative formula in either the linear or unrestricted context and applying a sequence of negative elimination rules to the judgment Γ; ↓N ⊢ ↓N ⇓ or Γ, N; · ⊢ N ⇓, given by the hyp rule or the uhyp rule, respectively. The phase terminates when the succedent is either positive or atomic. In the former case the subderivation must end in the ↑E rule, and in the latter case in the ⇓⇑ rule.

Note that because the positive elimination rules restrict the succedent of the conclusion to be of the form γ⁺, it is not possible to apply the ↑E rule inside a positive focusing phase. As negative focusing ends in a positive elimination or a coercion, it is not possible to perform negative focusing inside a positive focusing phase either. Likewise, the negative focusing phase cannot be interrupted.

It is in this sense that the above system is maximally focused: once a formula is chosen for focusing, it must be decomposed fully (i.e. keep the focus) before other formulas can be focused or otherwise eliminated.

3 Soundness and Completeness

A note on notation. To formulate the soundness and completeness theorems, we need to be able to talk about when the entire linear context is covered by some pattern. This is done using the judgment Γ′; Δ′ ⊪ Δ, defined by the following inference rules:

─────────
·; · ⊪ ·

Γ′; Δ′ ⊪ Δ    Γ_P; Δ_P ⊪ P
────────────────────────────
Γ′, Γ_P; Δ′, Δ_P ⊪ Δ, P

We will tacitly use the fact that this judgment is well-behaved with regard to splitting the context Δ; in other words, that Γ′; Δ′ ⊪ Δ₁, Δ₂ if and only if Γ′ = Γ′₁, Γ′₂ and Δ′ = Δ′₁, Δ′₂ where Γ′ᵢ; Δ′ᵢ ⊪ Δᵢ for i = 1, 2.

Soundness. To prove soundness, we define a function from polarized formulas to regular formulas. This is simply the function (−)ᵉ that erases all occurrences of the positive and negative shifts, i.e. (↑P)ᵉ = Pᵉ and (↓N)ᵉ = Nᵉ, whilst (P ⊸ N)ᵉ = Pᵉ ⊸ Nᵉ, (∀x.N)ᵉ = ∀x.Nᵉ, and so on.


Theorem 1 (Soundness of polarized derivations). The following properties hold:

1. If Γ; Δ ⊢ A ⇑ then Γᵉ; Δᵉ ⊩ Aᵉ ⇑.
2. If Γ; Δ ⊢ A ⇓ then Γᵉ; Δᵉ ⊩ Aᵉ ⇓.
3. If for all Γ′; Δ′ ⊪ Ω and Γ_A; Δ_A ⊪ A > γ⁺ we have Γ, Γ′, Γ_A; Δ, Δ′, Δ_A ⊢ γ⁺ ⇑, then Γᵉ; Δᵉ, Ωᵉ ⊩ Aᵉ ⇑.

Proof. The first two claims are proved by induction on the structure of the given derivations. In the cases of the ↑E and ↓I rules we reconstruct the asynchronous phases; this is where the third hypothesis is needed.

We prove the third claim by induction on the number of connectives in Ω and A. This is needed when reconstructing a tensor, as Ω will then grow in size. The base case of this induction is the one where all formulas in Ω are of the form γ⁻ and A is of the form γ⁺; in this case, we appeal to the first induction hypothesis. We show a few representative cases here:

Case A = P ⊸ N: By inversion on Γ_A; Δ_A ⊪ A > γ⁺, we get Γ_A = Γ_P, Γ_N and Δ_A = Δ_P, Δ_N such that Γ_P; Δ_P ⊪ P and Γ_N; Δ_N ⊪ N > γ⁺. Hence Γ′, Γ_P; Δ′, Δ_P ⊪ Ω, P, and by the induction hypothesis Γᵉ; Δᵉ, Ωᵉ, Pᵉ ⊩ Nᵉ ⇑. Applying ⊸I then yields the desired derivation of Γᵉ; Δᵉ, Ωᵉ ⊩ (P ⊸ N)ᵉ ⇑.

Case Ω = Ω₁, P ⊗ Q, Ω₂ (with succedent C): By assumption, Γ′₁, Γ_PQ, Γ′₂; Δ′₁, Δ_PQ, Δ′₂ ⊪ Ω₁, P ⊗ Q, Ω₂; hence by inversion we have Γ_PQ = Γ_P, Γ_Q and Δ_PQ = Δ_P, Δ_Q such that Γ_P; Δ_P ⊪ P and Γ_Q; Δ_Q ⊪ Q. Then Γ′₁, Γ_P, Γ_Q, Γ′₂; Δ′₁, Δ_P, Δ_Q, Δ′₂ ⊪ Ω₁, P, Q, Ω₂; hence by the induction hypothesis Γᵉ; Δᵉ, Ω₁ᵉ, Pᵉ, Qᵉ, Ω₂ᵉ ⊩ Cᵉ ⇑. Applying ⊗E to this judgment and the hypothesis judgment Γᵉ; (P ⊗ Q)ᵉ ⊩ Pᵉ ⊗ Qᵉ ⇓, we get the desired derivation of Γᵉ; Δᵉ, Ω₁ᵉ, (P ⊗ Q)ᵉ, Ω₂ᵉ ⊩ Cᵉ ⇑. ∎

Polarizing formulae. To prove that the polarized system is complete with regard to the natural deduction calculus depicted in Figure 1, we first need to find a way of converting regular propositions into polarized propositions. To do this, we define the following two mutually recursive functions, (−)ᵖ and (−)ⁿ, by structural induction on the syntax of unpolarized formulas.

1ᵖ = 1        0ᵖ = 0        ⊤ⁿ = ⊤
(A ⊗ B)ᵖ = Aᵖ ⊗ Bᵖ        (A ⊕ B)ᵖ = Aᵖ ⊕ Bᵖ        (!A)ᵖ = !Aⁿ
(A & B)ⁿ = Aⁿ & Bⁿ        (A ⊸ B)ⁿ = Aᵖ ⊸ Bⁿ
(∃x.A)ᵖ = ∃x.Aᵖ        (∀x.A)ⁿ = ∀x.Aⁿ

The above definitions handle the cases where the polarity of the formula inside the parentheses matches the polarizing function that is applied, i.e. cases of the form Nⁿ and Pᵖ. All that remains is to handle the cases where the polarities do not match, which we do with the following definition:

Aⁿ = ↑Aᵖ  when A is positive
Aᵖ = ↓Aⁿ  when A is negative


In the case of atoms, we assume that there is some fixed assignment of polarities to atoms, and define aᵖ = a for positive atoms and aⁿ = a for negative atoms. Atoms with the wrong polarity, e.g. aⁿ for a positive atom a, are handled by the mismatch cases above, i.e. aⁿ = ↑aᵖ = ↑a for positive atoms, and conversely for negative atoms.

In short, whenever a subformula of positive polarity appears in a place where negative polarity is expected, we add a positive shift to account for this fact, and vice versa. Thus, a formula such as (a & b) ⊸ (c ⊗ d) is mapped by (−)ⁿ to ↓(↑a & b) ⊸ ↑(c ⊗ ↓d) when a, c are positive and b, d are negative. These functions are extended in the obvious way to contexts, in the sense that Δᵖ means applying the function (−)ᵖ to each formula in Δ.

Of course, this is not the only way of polarizing a given formula, as we may always add redundant shifts ↑↓N and ↓↑P inside our formulas. The above procedure gives a minimal polarization in the sense that there are no redundant negative or positive shifts. Note that (Aᵖ)ᵉ = (Aⁿ)ᵉ = A for all unpolarized formulae A, and (Pᵉ)ᵖ = P and (Nᵉ)ⁿ = N for minimally polarized formulae P and N.

Completeness. Before we can prove completeness, we must establish a series of lemmas. First of all, we need the usual substitution properties.

Lemma 1 (Substitution properties for linear resources). The following substitution properties hold:

1. If Γ; Δ₁ ⊢ N ⇓ and Γ; Δ₂, ↓N ⊢ C ⇑, then Γ; Δ₁, Δ₂ ⊢ C ⇑.
2. If Γ; Δ₁ ⊢ N ⇓ and Γ; Δ₂, ↓N ⊢ C ⇓ and the latter derivation is not an instance of the hyp rule, then Γ; Δ₁, Δ₂ ⊢ C ⇓.

Proof. We proceed by mutual induction on the derivations of Γ; Δ₂, ↓N ⊢ C ⇓ and Γ; Δ₂, ↓N ⊢ C ⇑. In all cases, we apply the induction hypothesis to the premise that contains the formula ↓N, and then reapply the appropriate rule. This is possible if that premise is not a derivation of Γ; ↓N ⊢ ↓N ⇓ by the hyp rule, which can only happen in the case of the ↓E rule; in that case C = N and Δ₂ is empty. Hence we may then apply the following reduction: the derivation of Γ; ↓N ⊢ N ⇓ by hyp followed by ↓E is replaced by the given derivation of Γ; Δ₁ ⊢ N ⇓. ∎

Corollary 1. If Γ; Δ₁ ⊢ Aⁿ ⇓ and for all patterns Γ_A; Δ_A ⊪ Aᵖ we have Γ, Γ_A; Δ₂, Δ_A ⊢ γ⁺ ⇑, then Γ; Δ₁, Δ₂ ⊢ γ⁺ ⇑.

Proof. If A is positive, then Aⁿ = ↑Aᵖ, and the result follows by applying the ↑E rule. If A is negative, then Aᵖ = ↓Aⁿ; hence there is only one pattern for Aᵖ, namely ·; ↓Aⁿ ⊪ ↓Aⁿ, and the result follows from Lemma 1. ∎

For the synchronous rules, we need to capture the fact that we may push positive elimination rules below positive introduction rules. To formulate this, we


introduce a new function (−)ᵈ that always produces a delayed formula. Thus, we define Aᵈ = ↑Aᵖ when A is positive, and Aᵈ = ↓Aⁿ when A is negative. We first note the following:

Lemma 2. If for all Γ_A; Δ_A ⊪ Aⁿ > γ⁺ we have Γ, Γ_A; Δ, Δ_A ⊢ γ⁺ ⇑, then Γ; Δ ⊢ Aᵈ ⇑.

Proof. By case analysis on the polarity of A. If A is positive, then Aⁿ = ↑Aᵖ; hence ·; · ⊪ ↑Aᵖ > ↑Aᵖ is a pattern for Aⁿ, and thus Γ; Δ ⊢ ↑Aᵖ ⇑ as desired. If A is negative, then Aᵈ = ↓Aⁿ; hence by the rule ↓I we get the desired result. ∎

We are now able to prove that, under certain circumstances, we can change a positively polarized canonical premise into the corresponding delayed canonical premise.

Lemma 3. For any admissible rule of the form

Γ; Δ ⊢ Aᵖ ⇑
───────────────── Σ
Γ; Δ, Δ′ ⊢ γ⁺ ⇑

the following rule is admissible:

Γ; Δ ⊢ Aᵈ ⇑
───────────────── Σᵈ
Γ; Δ, Δ′ ⊢ γ⁺ ⇑

Proof. By case analysis on the polarity of A and induction on the derivation of Γ; Δ ⊢ Aᵈ ⇑. If A is negative, then Aᵈ = ↓Aⁿ = Aᵖ; hence the result follows immediately by applying Σ. If A is positive, then Aᵈ = ↑Aᵖ; hence the final rule of the derivation is either ↑I or ↑E. If the derivation ends in ↑I, the result again follows by applying Σ to the premise of this rule. If the derivation ends in ↑E, we apply the induction hypothesis to the second premise and reapply the ↑E rule. ∎

The above argument easily extends to admissible rules with multiple premises; hence we get the following:

Corollary 2. The following rules are admissible:

Γ; Δ₁ ⊢ Aᵈ ⇑    Γ; Δ₂ ⊢ Bᵈ ⇑
────────────────────────────── ⊗ᵈI
Γ; Δ₁, Δ₂ ⊢ (A ⊗ B)ⁿ ⇑

Γ; Δ ⊢ Aᵢᵈ ⇑
─────────────────── ⊕ᵢᵈI
Γ; Δ ⊢ (A₁ ⊕ A₂)ⁿ ⇑

Γ; Δ ⊢ ([t/x]A)ᵈ ⇑
─────────────────── ∃ᵈI
Γ; Δ ⊢ (∃x.A)ⁿ ⇑

Also, if Γ; Δ₁ ⊢ Aᵈ ⇑ and for all Γ_B; Δ_B ⊪ Bᵖ we have Γ, Γ_B; Δ₂, Δ_B ⊢ γ⁺ ⇑, then Γ; Δ₁, Δ₂, (A ⊸ B)ᵖ ⊢ γ⁺ ⇑.


Proof. The first few rules are easy applications of the previous lemma to the positive introduction rules. The last property holds by using the lemma on the following argument: if Γ; Δ₁ ⊢ Aᵖ ⇑, then Γ; Δ₁, (A ⊸ B)ᵖ ⊢ Bⁿ ⇓ by ⊸E, ↓E and hyp on (A ⊸ B)ᵖ. With the additional assumption that Γ, Γ_B; Δ₂, Δ_B ⊢ γ⁺ ⇑ for all patterns Γ_B; Δ_B ⊪ Bᵖ, applying Lemma 1 gives the desired result Γ; Δ₁, Δ₂, (A ⊸ B)ᵖ ⊢ γ⁺ ⇑. ∎

Before we formulate the completeness theorem, we have to decide which polarizing function to apply to the hypotheses and to the conclusion. We would like to translate a derivation of Γ; Δ ⊩ A ⇑ into a derivation Γˣ; Δʸ ⊢ Aᶻ ⇑, where each of x, y and z is one of the above functions. In the case of x and y, the nature of the uhyp and hyp rules forces us to choose x = n and y = p. If z = p, we should be able to derive ·; (a ⊗ b)ᵖ ⊢ aᵖ ⊗ bᵖ ⇑, as this is clearly derivable in the unpolarized system. In the polarized system, however, we are forced to enter a focusing phase prematurely, requiring us to split a single-element context into two parts; none of the resulting subgoals are provable. Therefore z = n. Thus, as far as the canonical judgments are concerned, the completeness theorem will take derivations of Γ; Δ ⊩ A ⇑ to derivations of Γⁿ; Δᵖ ⊢ Aⁿ ⇑.

As for atomic derivations, these cannot in general be transferred to atomic derivations in the focused system. The reader may verify this by checking that ·; (a ⊗ b), (a ⊗ b) ⊸ c ⊩ c ⇓ is derivable in the unpolarized system, whereas ·; (a ⊗ b), ↓((a ⊗ b) ⊸ c) ⊢ c ⇓ is not derivable in the polarized system. Thus, we need to generalize the induction hypothesis in the completeness theorem.

Theorem 2 (Completeness). The following properties hold:

1. Given Γ; Δ ⊩ A ⇑ and patterns Γ′; Δ′ ⊪ Δᵖ and Γ_A; Δ_A ⊪ Aⁿ > γ⁺, then Γⁿ, Γ′, Γ_A; Δ′, Δ_A ⊢ γ⁺ ⇑.
2. If Γ₁; Δ₁ ⊩ A ⇓ and Γ′; Δ′ ⊪ Δ₁ᵖ and for all Γ_A; Δ_A ⊪ Aᵖ we have Γ₂, Γ_A; Δ₂, Δ_A ⊢ γ⁺ ⇑, then Γ₁ⁿ, Γ₂, Γ′; Δ₂, Δ′ ⊢ γ⁺ ⇑.

Proof. By induction on the derivations of Γ ; Δ � A ⇑ and Γ ; Δ � A ⇓. We givea few representative cases.

Case ⊗E: DΓ ; Δ1 � A ⊗ B ⇓

EΓ ; Δ2, A, B � C ⇑ ⊗E

Γ ; Δ1, Δ2 � C ⇑

Γ ′1, Γ

′2; Δ

′1, Δ

′2 � Δp

1, Δp2 Assumption.(1)

ΓC ; ΔC � Cn > γ+ Assumption.(2)ΓA, ΓB; ΔA, ΔB � Ap, Bp Assumption.(3)Γ ′

2, ΓA, ΓB; Δ′1, Δ

′2, ΔA, ΔB � Δp

2, Ap, Bp By (1) and (3).(4)

Γ n, Γ ′2, ΓA, ΓB, ΓC ; Δ′

2, ΔA, ΔB, ΔC γ+ ⇑ By i.h.1 on E , (2), (4).(5)ΓA, ΓB; ΔA, ΔB � Ap ⊗ Bp By the pattern rule for ⊗.

Page 12: [Lecture Notes in Computer Science] Logic for Programming, Artificial Intelligence, and Reasoning Volume 6397 || Focused Natural Deduction

168 T. Brock-Nannestad and C. Schurmann

ΓA, ΓB; ΔA, ΔB � (A ⊗ B)p By the defn. of (−)p.

Γ n, Γ ′1, Γ

′2, ΓC ; Δ′

1, Δ′2, ΔC γ+ ⇑ By i.h. 2 on D and (5).

Case � I: DΓ ; Δ, A � B ⇑

� IΓ ; Δ � A � B ⇑

Γ ′; Δ′ � Δp Assumption.(1)ΓAB; ΔAB � (A � B)n Assumption.ΓAB; ΔAB � Ap � Bn By defn. of (−)n.ΓAB = ΓA, ΓB and ΔAB = ΔA, ΔB such that(2)ΓA; ΔA � Ap,(3)

ΓB; ΔB � Bn > γ+ By inversion.

Γ n, Γ ′, ΓA, ΓB; Δ′, ΔA, ΔB γ+ ⇑ By i.h. 1 on D, (1) and (3).

Γ n, Γ ′, ΓAB; Δ′, ΔAB γ+ ⇑ By (2).

Case !I: DΓ ; · � A ⇑

!IΓ ; · � !A ⇑

Γ ′; Δ′ � · Assumption.Γ ′ = Δ′ = · By inversion.

ΓA; ΔA � (!A)n > γ+ Assumption.

ΓA = ΔA = ·, γ+ = ↑(!An) By inversion.

Γ ′A; Δ′

A � An > γ+ Assumption(1)

Γ n, Γ ′A; Δ′

A γ+ ⇑ By i.h. 1 on D and (1).

∀(Γ ′A; Δ′

A � An > γ+) : Γ n, Γ ′A; Δ′

A γ+ ⇑ By discharging (1).

Γ n; · ↓An ⇑ By ↓I.Γ n; · !An By !I.

Γ n; · ↑(!An) By ↑I.

Case ⊗I, with premises D :: Γ; Δ₁ ⊢ A ⇑ and E :: Γ; Δ₂ ⊢ B ⇑ and conclusion Γ; Δ₁, Δ₂ ⊢ A ⊗ B ⇑:

Γ′₁, Γ′₂; Δ′₁, Δ′₂ ⊢ Δ₁ᵖ, Δ₂ᵖ    Assumption.

ΓAB; ΔAB ⊢ (A ⊗ B)ⁿ > γ⁺    Assumption.
ΓAB = ΔAB = ·, γ⁺ = (A ⊗ B)ⁿ    By inversion.
Γⁿ, Γ′₁; Δ′₁ ⊢ Aⁿ ⇑    By i.h. 1 on D.
Γⁿ, Γ′₂; Δ′₂ ⊢ Bⁿ ⇑    By i.h. 1 on E.
Γⁿ, Γ′₁, Γ′₂; Δ′₁, Δ′₂ ⊢ (A ⊗ B)ⁿ ⇑    By ⊗I. ∎

As the mapping of derivations that is implicit in the proof of the soundness theorem preserves (maximal) synchronous phases and reconstructs (maximal) asynchronous phases, one may prove the following corollary.

Corollary 3. For any proof of a judgment Γ; Δ ⊢ A ⇑ there exists a proof of the same judgment with maximal focusing and inversion phases.

4 Relation to the Backward Linear Focusing Calculus

In this section we consider the connection between our system of focused linear natural deduction and the backward linear focusing calculus of Chaudhuri, Pfenning, and Price [4]. The main result of this section is a very direct soundness and completeness result between these two systems. The syntax of formulas in the sequent system is the same as the unpolarized system in Section 2. The same distinction between positive and negative connectives is made, but no shifts are present to move from positive to negative and vice versa. Additionally, the shorthands P⁻ and N⁺ are used in a similar fashion to our γ⁻ and γ⁺.
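To make the relationship between the two syntaxes concrete, the following sketch models both the unpolarized formulas of the sequent system and the polarized formulas with shifts, together with a shift-erasing translation in the spirit of the (−)ᵉ mapping used below. This is purely illustrative: the constructor names and the representation are our own, not the paper's.

```python
# Illustrative sketch only: a hypothetical rendering of the two formula
# syntaxes; constructor names are ours, not the paper's.
from dataclasses import dataclass

# --- Unpolarized formulas (as in the sequent system of [4]): no shifts. ---
@dataclass(frozen=True)
class Atom:   name: str
@dataclass(frozen=True)
class Tensor: left: object; right: object
@dataclass(frozen=True)
class Lolli:  arg: object; res: object       # linear implication A -o B
@dataclass(frozen=True)
class Bang:   body: object

# --- Shifts, used only in the polarized syntax. ---
@dataclass(frozen=True)
class Down: neg: object    # down-shift: a negative formula used positively
@dataclass(frozen=True)
class Up:   pos: object    # up-shift: a positive formula used negatively

def erase(f):
    """Shift-erasing translation: delete all shifts, keep the connectives."""
    if isinstance(f, Atom):   return f
    if isinstance(f, Tensor): return Tensor(erase(f.left), erase(f.right))
    if isinstance(f, Lolli):  return Lolli(erase(f.arg), erase(f.res))
    if isinstance(f, Bang):   return Bang(erase(f.body))
    if isinstance(f, Down):   return erase(f.neg)
    if isinstance(f, Up):     return erase(f.pos)
    raise TypeError(f)

# Example from Section 3: the polarized hypothesis from the counterexample,
# down((a (x) b) -o up(c)), erases to (a (x) b) -o c.
a, b, c = Atom("a"), Atom("b"), Atom("c")
polarized = Down(Lolli(Tensor(a, b), Up(c)))
assert erase(polarized) == Lolli(Tensor(a, b), c)
```

The point of the sketch is that erasure is total and forgetful: many polarized formulas map to the same unpolarized one, which is why different polarization strategies can carve out different classes of proofs.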

The judgments of their system have either two or three contexts of the following form: Γ is a context of unrestricted hypotheses, and Δ is a linear context of hypotheses of the form N⁺. A third context Ω, which is both linear and ordered, may also be present. This context may contain formulas of any polarity.

There are four different kinds of sequents:

Γ; Δ ⊢ A    right-focal sequent with A under focus.

Γ; Δ; A ⊢ Q⁻    left-focal sequent with A under focus.

Γ; Δ; Ω =⇒ ·; Q⁻
Γ; Δ; Ω =⇒ C; ·    active sequents.

The focal sequents correspond to the synchronous phases, where a positive or negative formula is decomposed maximally. Conversely, the active sequents correspond to the asynchronous phase, where non-focal formulas in Ω and the formula C may be decomposed asynchronously. The goal ·; Q⁻ signifies that Q⁻ has been inverted maximally and is no longer active.
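The four sequent kinds and their three contexts can be pictured as a small datatype. This is a hypothetical encoding (the field names are ours) whose only purpose is to make the focal/active split explicit.

```python
# Hypothetical encoding of the four sequent kinds of the backward linear
# focusing calculus; field and class names are ours, not the paper's.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RightFocal:          # Gamma; Delta |- A   (A under focus on the right)
    gamma: list; delta: list; focus: str

@dataclass
class LeftFocal:           # Gamma; Delta; A |- Q-  (A under focus on the left)
    gamma: list; delta: list; focus: str; goal: str

@dataclass
class Active:              # Gamma; Delta; Omega ==> C; .  or  ==> .; Q-
    gamma: list
    delta: list
    omega: list                   # linear AND ordered; any polarity allowed
    active_goal: Optional[str]    # C while still being decomposed
    passive_goal: Optional[str]   # Q- once the goal is maximally inverted

def is_focal(seq):
    """Focal sequents drive the synchronous phases; active sequents drive
    the asynchronous phase."""
    return isinstance(seq, (RightFocal, LeftFocal))

assert is_focal(RightFocal([], ["a (x) b"], "a (x) b"))
assert not is_focal(Active([], [], ["a (x) b"], "c", None))
```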

Theorem 3 (Soundness w.r.t. backward linear focusing calculus). The following properties hold:

1. If Γ; Δ ⊢ P ⇑ then Γᵉ; Δᵉ ⊢ Pᵉ.
2. If Γ; Δ ⊢ N ⇓ and Γᵉ; Δ′; Nᵉ ⊢ Q⁻ then Γᵉ; Δᵉ, Δ′; · =⇒ ·; Q⁻.

3. If Γ, Γ′, ΓC; Δ, Δ′, ΔC ⊢ γ⁺ ⇑ for all Γ′; Δ′ ⊢ Ω and ΓC; ΔC ⊢ C > γ⁺, then Γᵉ; Δᵉ; Ωᵉ =⇒ Cᵉ; ·.

4. If Γ, Γ′; Δ, Δ′ ⊢ γ⁺ ⇑ for all Γ′; Δ′ ⊢ Ω, then Γᵉ; Δᵉ; Ωᵉ =⇒ ·; (γ⁺)ᵉ.

Theorem 4 (Completeness w.r.t. backward linear focusing calculus). The following properties hold:

1. If Γ; Δ ⊢ A then Γⁿ; Δᵖ ⊢ Aᵖ ⇑.
2. If Γ; Δ; A ⊢ Q⁻ and Γⁿ; Δ′ ⊢ Aⁿ ⇓ then Γⁿ; Δᵖ, Δ′ ⊢ (Q⁻)ⁿ ⇑.
3. If Γ; Δ; Ω =⇒ C; · then Γⁿ, Γ′, ΓC; Δᵖ, Δ′, ΔC ⊢ γ⁺ ⇑ for all Γ′; Δ′ ⊢ Ωᵖ and ΓC; ΔC ⊢ Cⁿ > γ⁺.
4. If Γ; Δ; Ω =⇒ ·; Q⁻ then Γⁿ, Γ′; Δᵖ, Δ′ ⊢ (Q⁻)ⁿ ⇑ for all Γ′; Δ′ ⊢ Ωᵖ.

The proofs of the above theorems may be seen as a way of transferring proofs between the two systems, and their action may be summarized as follows:

– The soundness proof maps synchronous phases to synchronous phases, and reconstructs inversion phases from the principal formulas in the ↑E and ↓I rules.

– The completeness proof takes synchronous phases to synchronous phases, and collapses asynchronous phases into ↑E and ↓I rules.

In particular, this leads to the following corollary.

Corollary 4. The mapping of proofs induced by the soundness theorem is injective on proofs of judgments of the form Γ; Δ ⊢ A ⇑ where Γ, Δ and A are minimally polarized.

Consequently, if we consider proofs of minimally polarized judgments, the current system has at most as many proofs of such a judgment as the backward focused sequent calculus has proofs of the corresponding sequent.

5 Conclusion and Related Work

Using the concepts of focusing and patterns, we have presented a natural deduction formulation of first-order intuitionistic linear logic that ensures the maximality of both synchronous and asynchronous phases. This removes a large part of the bureaucracy and unnecessary choices that were present in the previous formulation.

In [4], completeness with regard to the unfocused linear sequent calculus is established by proving focused versions of cut admissibility and identity. Because of the four different kinds of sequents and the three contexts, the proof of cut admissibility consists of more than 20 different kinds of cuts, all connected in an intricate induction order. In contrast, the proofs of the soundness and completeness results we have established are relatively straightforward. This is in part because we only have two judgments, and also because the intercalation formulation of linear natural deduction is already negatively focused.

From Corollary 4, it follows that our system, when restricted to minimally polarized formulas, has at most as many proofs as the backward

linear focusing calculus. This is only the case if we consider minimally polarized formulas, however. In particular, this opens the possibility of using different polarization strategies to capture different classes of proofs.

In our formulation, we have chosen to use patterns only as a means of ensuring maximal asynchronous phases. It is possible to extend the use of patterns to the synchronous phases as well, but there are several reasons why we have chosen not to do so. The first and most compelling reason is that it is not necessary in order to ensure maximal synchronous phases: the restriction on the succedent of the ↑E rule suffices. Second, the use of patterns for the asynchronous phase extends easily to the case of quantifiers, because the actual choice of eigenvariables does not matter; only freshness is important. Interpreting the pattern rules for quantifiers in a synchronous setting, we would need to substitute appropriately chosen terms for these variables to capture the behavior of the ∀E and ∃I rules. This would complicate the system considerably.
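To give a feel for how patterns flatten a positive hypothesis during the asynchronous phase, here is a toy generator for the propositional positive connectives (no additive disjunction and no quantifiers, so decomposition is deterministic). It follows the pattern rules only in spirit; the tuple representation and the treatment of ! are our own assumptions, not the paper's definitions.

```python
# Toy sketch, our assumptions: positive formulas are tuples of the form
# ('atom', a), ('tensor', p, q), ('one',), ('bang', n), ('down', n).
def pattern(p):
    """Return (unrestricted, linear) hypotheses produced by asynchronously
    decomposing the positive formula p."""
    tag = p[0]
    if tag == 'atom':
        return ([], [p])            # positive atoms stay as linear hypotheses
    if tag == 'down':
        return ([], [p])            # a shifted negative is not decomposed further
    if tag == 'bang':
        return ([p[1]], [])         # !N moves N into the unrestricted context
    if tag == 'one':
        return ([], [])             # the unit contributes no hypotheses
    if tag == 'tensor':             # A (x) B splits into both components
        ga, da = pattern(p[1])
        gb, db = pattern(p[2])
        return (ga + gb, da + db)
    raise ValueError(p)

# (a (x) b) (x) !n decomposes into linear a, b and unrestricted n:
g, d = pattern(('tensor',
                ('tensor', ('atom', 'a'), ('atom', 'b')),
                ('bang', ('atom', 'n'))))
assert g == [('atom', 'n')]
assert d == [('atom', 'a'), ('atom', 'b')]
```

Because every rule here is invertible, the function is a plain recursion; adding the existential quantifier would require threading fresh eigenvariables through the recursion, and a synchronous analogue would additionally have to guess the terms to substitute, which is exactly the complication discussed above.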

References

1. Andreoli, J.: Logic programming with focusing proofs in linear logic. Journal of Logic and Computation 2(3), 297 (1992)

2. Chaudhuri, K.: Focusing strategies in the sequent calculus of synthetic connectives. In: Cervesato, I., Veith, H., Voronkov, A. (eds.) LPAR 2008. LNCS (LNAI), vol. 5330, pp. 467–481. Springer, Heidelberg (2008)

3. Chaudhuri, K., Miller, D., Saurin, A.: Canonical sequent proofs via multi-focusing. In: Fifth International Conference on Theoretical Computer Science, vol. 273, pp. 383–396. Springer, Heidelberg (2008)

4. Chaudhuri, K., Pfenning, F., Price, G.: A logical characterization of forward and backward chaining in the inverse method. Journal of Automated Reasoning 40(2), 133–177 (2008)

5. Krishnaswami, N.R.: Focusing on pattern matching. In: Proceedings of the 36th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pp. 366–378. ACM, New York (2009)

6. Licata, D.R., Zeilberger, N., Harper, R.: Focusing on binding and computation. In: LICS, pp. 241–252. IEEE Computer Society, Los Alamitos (2008)

7. McLaughlin, S., Pfenning, F.: Efficient intuitionistic theorem proving with the polarized inverse method. In: Proceedings of the 22nd International Conference on Automated Deduction, p. 244. Springer, Heidelberg (2009)

8. Miller, D., Nadathur, G., Pfenning, F., Scedrov, A.: Uniform proofs as a foundation for logic programming. Ann. Pure Appl. Logic 51(1-2), 125–157 (1991)

9. Prawitz, D.: Natural Deduction. Almquist & Wiksell, Stockholm (1965)

10. Sieg, W., Byrnes, J.: Normal natural deduction proofs (in classical logic). Studia Logica 60(1), 67–106 (1998)

11. Watkins, K., Cervesato, I., Pfenning, F., Walker, D.: A concurrent logical framework: The propositional fragment. In: Berardi, S., Coppo, M., Damiani, F. (eds.) TYPES 2003. LNCS, vol. 3085, pp. 355–377. Springer, Heidelberg (2004)

12. Zeilberger, N.: Focusing and higher-order abstract syntax. In: Necula, G.C., Wadler, P. (eds.) POPL, pp. 359–369. ACM, New York (2008)

