

Completeness of Verification System with Separation Logic for Recursive Procedures

Mahmudul Faisal Al Ameen

Doctor of Philosophy

Department of Informatics
School of Multidisciplinary Sciences

SOKENDAI (The Graduate University for Advanced Studies)
Tokyo, Japan

September 2015


A dissertation submitted to the Department of Informatics

School of Multidisciplinary Sciences
SOKENDAI (The Graduate University for Advanced Studies)

in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

Advisory Committee

Makoto TATSUTA, National Institute of Informatics, SOKENDAI
Zhenjiang HU, National Institute of Informatics, SOKENDAI
Makoto KANAZAWA, National Institute of Informatics, SOKENDAI
Shin Nakajima, National Institute of Informatics, SOKENDAI
Yukiyoshi Kameyama, University of Tsukuba


Thesis advisor: Makoto Tatsuta Mahmudul Faisal Al Ameen

SOKENDAI (The Graduate University for Advanced Studies)

Abstract

Completeness of Verification System with Separation Logic for Recursive Procedures

The contributions of this dissertation are two results: the first gives a new complete Hoare's logic system for recursive procedures, and the second proves the completeness of the verification system based on Hoare's logic and separation logic for recursive procedures. The first result is a complete verification system for reasoning about while programs with recursive procedures that can be extended to separation logic. To obtain it, this work introduces two new inference rules, shows the derivability of an inference rule, and removes other redundant inference rules and an unsound axiom in order to show completeness. The second result is a complete verification system, which is an extension of Hoare's logic and separation logic for mutually recursive procedures. To obtain the second result, the language of while programs with recursive procedures is extended with commands to allocate, access, mutate and deallocate shared resources, and the logical system from the first result is extended with the backward reasoning rules of Hoare's logic and separation logic. Moreover, it is shown that the assertion language of separation logic is expressive relative to the programs. This work also introduces a novel expression that describes the complete information of a given state in a precondition. In addition, it uses the necessary and sufficient precondition for abort-free execution of a program, which makes it possible to utilize strongest postconditions.


I would like to dedicate this thesis to my parents for their love, sacrifice and support all through my life.


Acknowledgments

I wish to express my deepest gratitude to my advisor, Prof. Makoto Tatsuta, for giving me the opportunity, pre-admission support and recommendations to study in SOKENDAI with an NII scholarship. His guidance, with extreme patience, has been the key driving force behind my success today. His support not only encouraged but also nourished me over these last six years. Without his sincere efforts, I could not be in today's position. His lessons helped me realize the rigid nature of mathematics and logic.

I am very grateful to my co-supervisor, Asst. Prof. Makoto Kanazawa, for being one of the most important supporting forces for my work. His suggestions at the logic seminars have been a very good basis for elaborating and structuring my research; he gave me enough feedback to mature my ideas while always pointing me in interesting directions.

I would like to thank Prof. Zhenjiang Hu for providing valuable remarks, allowing me to improve the paper drafts and clarify my arguments. He has shown me a very important horizon of program composition that enabled me to see programs as mathematical objects. His care and support especially kept me optimistic in my hard times.

I am grateful to Prof. Kazushige Terui for his suggestions and support in the early stage of my study.

I would like to express my sincere appreciation to Dr. Daisuke Kimura for giving me long-time support as an elder brother. I am indebted to Dr. Takayuki Koai for his most sincere efforts to make my life easy in Japan. On different occasions, their academic tutoring, as well as their mental support, is never to be forgotten. Their explanations of several basic and advanced topics often made me understand difficult mathematical topics in an easy way. Special thanks to Dr. Kazuhiro Inaba for an important criticism that revealed a gap in one of my earlier solutions. Dr. Kazuyuki Asada's advice and Mr. Toshihiko Uchida's friendly support also helped me reach this position. I am very grateful to all of them. Moreover, I thank Mr. Koike Atsushi for sharing his LaTeX class file, based on which the class file for this dissertation was prepared.

I thank the National Institute of Informatics for providing me the necessary financial and resource support. Further, I also thank the NII officials at the student support section for their administrative assistance.

I owe thanks to Prof. Dr. Shorif Uddin for motivating me and directly helping me to apply to NII for the doctoral study. I also thank Dr. Zahed Hasan and Mr. Enayetur Rahman for their contributions to my life and research career.

Finally, I thank my parents for their love, sacrifices, long patience and endless encouragement. I thank my wife for her invaluable and direct care and support in the final year. I also thank my sister, relatives and friends for their great mental support, which helped me handle the challenging research situations.


Contents

1 Introduction 1
   1.1 Motivation 1
   1.2 Main Contribution 3

2 Background 7

3 New Complete System of Hoare's Logic with Recursive Procedures 17
   3.1 Language 18
   3.2 Semantics 18
   3.3 Logical System 19
   3.4 Completeness 21

4 Separation Logic for Recursive Procedures 25
   4.1 Language 26
   4.2 Semantics 35
   4.3 Logical System 52

5 Soundness and Completeness 57
   5.1 Soundness 58
   5.2 Expressiveness 67
   5.3 Completeness 87

6 Conclusion 97



1 Introduction

1.1 Motivation

It is widely accepted that a program needs to be verified to ensure that it is correct. A correct program is guaranteed to perform the given task as expected. It is very important to ensure the safety of mission-critical, medical, spacecraft, nuclear reactor, financial, genetic-engineering and simulator programs. Moreover, everyone desires bug-free programs.

Formal verification is intended to deliver programs which are completely free of bugs or defects. It verifies the source code of a program statically, so formal verification does not depend on the execution of a program. The time required to verify a program depends on neither its runtime and memory complexity nor the magnitude of its inputs. A program is required to be verified only once, since verification does not depend on test cases. Hence, formal verification of programs is important commercially, to save both time and expense, and theoretically, for its strength. Among formal verification approaches, model checking and Hoare's logic are prominent. Model checking computes whether a given model satisfies a given specification, whereas Hoare's logic shows it for all models by provability [8].

Since it was proposed by Hoare [7], numerous works on Hoare's logic have been carried out [1, 4–6, 10]. Several extensions have also been proposed [1], among which some attempted to verify programs that access the heap or shared resources. But until the beginning of the twenty-first century, very few of them were simple enough to use. On the other hand, since the development of programming languages like C and C++, the usage of pointers in programs (which are called pointer programs) gained much popularity for their ability to use shared memory and other resources directly, and for faster execution. Yet this ability also causes programs to crash, because it is difficult to keep track of each memory operation, which may lead to unsafe heap operations. A program crash occurs when the program tries to access a memory cell that has already been deallocated, or when a memory cell is accessed before its allocation. So it became necessary to have an extension of Hoare's logic that can verify such pointer programs. In 2002, Reynolds proposed separation logic [13]. It was a breakthrough in achieving the ability to verify pointer programs; in particular, it can guarantee safe heap operations of programs. Although recently we can find several works on separation logic [2, 9] and its extensions and applications [3, 11, 12], few works show their completeness [14]. Tatsuta et al. [14] show the completeness of the separation logic for pointer programs which was introduced in [13]. In this paper, we will show the completeness of an extended logical system. Our logical system is intended to verify pointer programs with mutually recursive procedures. Among the several versions of the same inference rules Reynolds offered in [13] for separation logic, a concise set of backward reasoning rules was chosen in [14]. The work in [14] also offers rigorous mathematical discussions. The problems regarding the completeness of Hoare's logic, the concept of relative completeness, the completeness of Hoare's logic with recursive procedures and many other important topics have been discussed in detail in [1]. Our work begins with [14] and [1].

In modern days, programs are written in segments with procedures, which make the programs shorter in size and logically structured, and increase the reusability of code. So it is important to use both procedures and heap operations (use of shared mutable resources) in a single program. Verifying programs with such features is the main motivation of our work.

1.2 Main Contribution

Our goal is to give a relatively complete logical system that can be used for reasoning about pointer programs with mutually recursive procedures. A logical system for software verification is called complete if every true judgment can be derived in that system. Completeness guarantees the strength of the logical system, in the sense that no further development of the system is necessary. If all true asserted programs are provable in Hoare's system when all true assertions are provided, we call it a relatively complete system. We will show the relative completeness of our system. A language is expressive if the weakest precondition can be defined in the language. We will also show that our specification language is expressive for our programs. Relative completeness is discussed extensively in [1, 5]. In this paper, relative completeness is sometimes paraphrased as completeness when it is not ambiguous.

The main contributions of our paper are as follows:

(1) A new complete logical system for Hoare’s logic for recursive procedures.

(2) A new logical system for verification of pointer programs and recursive procedures.

(3) Proving the soundness and the completeness theorems.

(4) Proving that our assertion language is expressive relative to our programs.

We know that Hoare’s logic with recursive procedures is complete [1]. We alsoknow that Hoare’s logic with separation logic is complete [14]. But we do not knowif Hoare’s logic and separation logic for recursive procedures is complete.

In order to achieve our contributions, we will first construct our logical system by combining the axioms and inference rules of [1] and [14]. Then we will prove expressiveness by coding the states in a way similar to [14]. Finally, we will follow a strategy similar to [1] to prove completeness.

Although one may think it easy to combine these two logical systems to obtain such a complete system, in reality it is not the case. We will now discuss some challenges we face in proving its relative completeness.

(1) The axiom (AXIOM 9: INVARIANCE AXIOM), which is essential in [1] to show completeness, is not sound in separation logic.

(2) In the completeness proof of the extension of Hoare's logic to recursive procedures in [1], the expression −→x = −→z (where −→x are all program variables and −→z are fresh) is used to describe the complete information of a given state in a precondition. A state in Hoare's logic is only a store, which is a function assigning a value to each variable. In separation logic, a state is a pair of a store and a heap. So the same expression cannot be used for a similar purpose for a heap, because store information may concern variables x_1, . . . , x_m which are assigned z_1, . . . , z_m respectively, while heap information consists of the set of the physical addresses in the heap and their corresponding values. The vector notation cannot express the general information of the size of the heap and its changes caused by allocation and deallocation of memory cells.

(3) Another challenge is to utilize the strongest postcondition of a precondition and a program. In case a program aborts in a state for which the precondition is valid, the strongest postcondition of the precondition and the program does not exist. But utilizing the strongest postcondition is necessary for the completeness proof, because the completeness proof of [1] depends on it.

It is necessary to overcome these obstacles in the proof of the completeness of our system. That is why it is quite challenging to prove the completeness theorem, which is our principal goal.

The solutions to the challenges stated above are as follows:

(1) We will give an inference rule (inv-conj) as an alternative to the axiom (AXIOM 9: INVARIANCE AXIOM) in [1]. It will accept a pure assertion which does not have a variable in common with the program. We will also give an inference rule (exists) that is analogous to the existential introduction rule in the first-order predicate calculus. We will show that the inference rule (RULE 10: SUBSTITUTION RULE I) in [1] is derivable in our system. Since the inference rules (RULE 11: SUBSTITUTION RULE II) and (RULE 12: CONJUNCTION RULE) in [1] are redundant in our system, we will remove them. This gives us the new complete system of Hoare's logic for mutually recursive procedures. We will extend this system with the inference rules in [14] to give the verification system for pointer programs with mutually recursive procedures. As a result, the set of our axioms and inference rules will be quite different from the union of those of [1] and [14].

(2) We will give an appropriate assertion to describe the complete information of a given state in a precondition. Besides the expression −→x = −→z for the store information, we will additionally use the expression Heap(x_h) for the heap information, where x_h keeps a natural number that is obtained by a coding of the current heap.
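The dissertation defines its coding of heaps later; purely as an illustration of the idea, the following sketch (our own hypothetical encoding, not the one used in this work) codes a finite heap, that is, a finite partial map from addresses to values, as a single natural number by prime-power packing:

```python
# Hypothetical illustration only: the dissertation's actual coding is defined
# in Chapter 4. A finite heap {addr: val} is packed into one natural number,
# so an assertion like Heap(xh) could pin xh to this code.

def primes():
    """Yield 2, 3, 5, ... by trial division (adequate for tiny heaps)."""
    found = []
    n = 2
    while True:
        if all(n % p != 0 for p in found):
            found.append(n)
            yield n
        n += 1

def cantor_pair(a, b):
    """Injective pairing of two naturals into one natural."""
    return (a + b) * (a + b + 1) // 2 + b

def code_heap(heap):
    """Code {addr: val} as the product of p_i ** (pair(addr_i, val_i) + 1),
    taking addresses in increasing order."""
    n = 1
    gen = primes()
    for addr in sorted(heap):
        n *= next(gen) ** (cantor_pair(addr, heap[addr]) + 1)
    return n

print(code_heap({}))      # 1  (the empty heap)
print(code_heap({0: 0}))  # 2
print(code_heap({0: 1}))  # 8
```

Because the exponent of each prime encodes an (address, value) pair, the heap can be recovered from its code, which is the property such a precondition needs.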

(3) For pointer programs, it is difficult to utilize the strongest postcondition, because it is impossible to assert a postcondition for A and P when P may abort in a state for which A is true. We use {A}P{True} as the abort-free condition of A and P. For the existence of the strongest postcondition, it is necessary for {A}P{True} to be true. We will give the necessary and sufficient precondition W_{P,True}(−→x ) for the fact that the program P will never abort.

Outline of This Paper

Our background will be presented in Chapter 2. A new complete Hoare's logic for recursive procedures will be given in Chapter 3. We will define our languages, semantics and logical system in Chapter 4. In Chapter 5, we will prove the soundness, expressiveness and completeness. We will conclude in Chapter 6.


2 Background

Hoare introduced an axiomatic method, Hoare's logic, to prove the correctness of programs in 1969 [7]. Floyd's intermediate assertion method was behind the approach of Hoare's logic. Besides its great influence on designing and verifying programs, it has also been used to define the semantics of programming languages.

Among the many works which extended Hoare's approach to proving a program correct, some were not useful or were difficult to use. In 1981, Apt presented a survey of various results concerning Hoare's approach in [1]. His work emphasized mainly soundness and completeness issues. He first presented the proof system for while programs along with its soundness, expressiveness, and completeness. He then presented its extension to recursive procedures, local variable declarations and procedures with parameters, with the corresponding soundness and completeness.

Although several extensions of Hoare's logic are found in the literature, until the work of Reynolds in 2002 [13], an extension to pointer programs was missing. Reynolds provided separation logic, which is an assertion language for shared mutable data structures and resources. In his work, he also extended Hoare's logic to pointer programs with several sets of logical rules and showed its soundness. Tatsuta chose only the backward reasoning rules and proved their completeness in [14]. The scope of this logical system was limited to while programs with memory access operations.

Since this work intends to extend Hoare's logic and separation logic to mutually recursive procedures, it is based on [1] and [14].

Hoare's Logic for Recursive Procedures

In this section, we discuss the verification system for while programs with recursive procedures given in [1]. We present the language of the while programs with recursive procedures and of the assertions, their semantics, a logical system for reasoning about the programs, and the completeness proof. We will extend this proof to pointer programs with recursive procedures later.

Language

The assertion language in [1] is the first-order language with equality of Peano arithmetic. Its variables are denoted by x, y, z, w, . . .. Expressions, denoted by e, are defined by

e ::= x | 0 | 1 | e + e | e × e.

A quantifier-free formula b is defined by

b ::= e = e | e < e | ¬b | b ∧ b | b ∨ b | b → b.

Formulas (of the assertion language), denoted by A, B, C, are defined by

A ::= e = e | e < e | ¬A | A ∧ A | A ∨ A | A → A | ∀xA | ∃xA.


Recursive procedures are denoted by R. While programs extended with recursive procedures, denoted by P, Q, are defined in [1] by

P, Q ::= x := e | if (b) then (P) else (P) | while (b) do (P) | P; P | skip | R.

We assume that procedure R is declared with its body Q.

The basic formula of Hoare's logic is composed of three elements: two assertions A and B, and a program P. It is expressed in the form

{A}P{B}

which is also called a correctness formula or an asserted program. Here A and B are called the precondition and the postcondition of the program P, respectively. Whenever A is true before the execution of P and the execution terminates, B is true after the execution.

Semantics

States, denoted by s, are defined in [1] as functions from the set of variables V to the set of natural numbers N. The semantics of the programming language is defined first as ⟦P⟧− for the programs that do not contain procedures, which is a partial function from States to States.

The semantics of assertions is denoted by ⟦A⟧s, which gives the truth value of A in the state s.


Definition 2.0.1 The semantics of programs is defined as follows:

⟦x := e⟧−(s) = s[x := ⟦e⟧s],
⟦if (b) then (P1) else (P2)⟧−(s) = ⟦P1⟧−(s) if ⟦b⟧s = True, ⟦P2⟧−(s) otherwise,
⟦while (b) do (P)⟧−(s) = s if ⟦b⟧s = False, ⟦while (b) do (P)⟧−(⟦P⟧−(s)) otherwise,
⟦P1; P2⟧−(s) = ⟦P2⟧−(⟦P1⟧−(s)),
⟦skip⟧−(s) = s.

In order to define the semantics of programs which include recursive procedures, Apt provided the approximation semantics of programs. He defined a procedure-less program P(n) by induction on n:

P(0) = Ω,
P(n+1) = P[Q(n)/R].

He then defined the semantics of programs by

⟦P⟧ = ⋃_{i=0}^{∞} ⟦P[Q(i)/R]⟧−.

An asserted program (or a correctness formula) {A}P{B} is defined to be true if and only if for all states s, s′, if ⟦A⟧s = True and ⟦P⟧(s) = s′, then ⟦B⟧s′ = True.

Logical System

The logical system H given in [1] consists of the following axioms and inference rules. Here Γ is used for a set of asserted programs. A judgment is defined as Γ ⊢ {A}P{B}. var(P) is defined as the set of all variables that appear in the execution of P.


AXIOM 1: ASSIGNMENT AXIOM

Γ ⊢ {A[x := e]} x := e {A}   (assignment)

RULE 2: COMPOSITION RULE

Γ ⊢ {A}P1{C}   Γ ⊢ {C}P2{B}
---------------------------
Γ ⊢ {A}P1; P2{B}   (composition)

RULE 3: if-then-else RULE

Γ ⊢ {A ∧ b}P1{B}   Γ ⊢ {A ∧ ¬b}P2{B}
-------------------------------------
Γ ⊢ {A} if (b) then (P1) else (P2) {B}   (if-then-else)

RULE 4: while RULE

Γ ⊢ {A ∧ b}P{A}
---------------------------------
Γ ⊢ {A} while (b) do (P) {A ∧ ¬b}   (while)

RULE 5: CONSEQUENCE RULE

Γ ⊢ {A1}P{B1}
-------------
Γ ⊢ {A}P{B}   (consequence)

(A → A1, B1 → B true)

RULE 8: RECURSION RULE

Γ ∪ {{A}R{B}} ⊢ {A}Q{B}
-----------------------
Γ ⊢ {A}R{B}   (recursion)

AXIOM 9: INVARIANCE AXIOM

Γ ⊢ {A}R{A}   (invariance)

(FV(A) ∩ var(R) = ∅)


RULE 10: SUBSTITUTION RULE I

Γ ⊢ {A}R{B}
---------------------------------------
Γ ⊢ {A[−→z := −→y ]}R{B[−→z := −→y ]}   (substitution I)

(−→y , −→z ∉ var(R))

RULE 11: SUBSTITUTION RULE II

Γ ⊢ {A}R{B}
---------------------------
Γ ⊢ {A[−→z := −→y ]}R{B}   (substitution II)

(−→z ∉ var(R) ∪ FV(B))

RULE 12: CONJUNCTION RULE

Γ ⊢ {A}R{B}   Γ ⊢ {A′}R{C}
--------------------------
Γ ⊢ {A ∧ A′}R{B ∧ C}   (conjunction)

One may think that an asserted formula {A} x := e {A[x := e]} fits the assignment axiom better. But it does not. For example, {x = a ∧ b = a+1} x := b {(x = a ∧ b = a+1)[x := b]}, that is, {x = a ∧ b = a+1} x := b {b = a ∧ b = a+1}, is not obviously true. Rather, {(x = b)[x := b]} x := b {x = b}, that is, {b = b} x := b {x = b}, is true. The composition rule is similar in concept to the cut rule. When two programs are executed one after another, the postcondition of the former is the precondition of the latter. The precondition of the former and the postcondition of the latter are preserved for the execution of the composition of those two programs. The if-then-else rule comes from the fact that the truth value of b determines the execution of either P1 or P2. The rule itself is very natural. The while rule is a bit tricky. Here A is called a loop invariant, which is an assertion that is preserved before and after the execution of P. The truth of b triggers execution of P, and naturally the execution terminates only when b is false.

The consequence rule is not an ordinary rule like the others. Here the important fact is that a stronger precondition and a weaker postcondition may replace the precondition and the postcondition, respectively, of a valid asserted program without affecting its validity. An example may help to understand it better. The asserted program {x = a} x := x+1 {x = a+1} is indeed valid. But from the assignment axiom we may get only {(x = a+1)[x := x+1]} x := x+1 {x = a+1}, that is, {x+1 = a+1} x := x+1 {x = a+1}. Since x = a → x+1 = a+1, with the help of the consequence rule we finally get {x = a} x := x+1 {x = a+1}.

The recursion rule states that if the assumption of a valid asserted program for a recursive procedure gives us a valid asserted program for its body, then the asserted program for the procedure is indeed valid. The invariance axiom guarantees that the precondition is preserved in the postcondition if none of its variables is accessed by the execution of a recursive procedure. Substitution rule I allows variable substitution in the assertions of an asserted program if the recursive procedure accesses neither the substituted nor the substituting variables. Substitution rule II allows the substitution of variables in the precondition only, if those variables appear neither in the recursive procedure nor in the postcondition. The conjunction rule allows the preconditions and the postconditions of two asserted programs for the same recursive procedure to be conjoined.

Soundness

In [1], ⊢_H {A}P{B} denotes the fact that {A}P{B} is provable in the logical system H, under the assumption that all true assertions are provided (for the consequence rule). In his work, the notion of the truth of an asserted program is introduced, where he chose the standard interpretation of the assertion language with the domain of natural numbers.

Apt called an asserted program valid if it is true under all interpretations. He also called a proof rule sound if it preserves the truth of asserted programs for all interpretations. Since it is easy to prove that the axioms are valid and the proof rules are sound, the logical system is proved to be sound by induction on the length of proofs.

His soundness theorem claims that for every asserted program {A}P{B} in the logical system H, if ⊢_H {A}P{B} is provable in the presence of all true assertions, then {A}P{B} is true.


Completeness in the Sense of Cook

The strongest postcondition of an assertion and a program, and the weakest precondition of a program and an assertion, play a key role in defining completeness in the sense of Cook for a proof system for which general completeness does not hold. We now define the strongest postcondition and the weakest precondition.

Definition 2.0.2 The strongest postcondition of an assertion A and a program P is defined by

SP(A, P) = {s′ | ∃s(⟦A⟧s = True ∧ ⟦P⟧(s) = s′)}.

The weakest precondition of a program P and an assertion A is defined by

WP(P, A) = {s | ∀s′(⟦P⟧(s) = s′ → ⟦A⟧s′ = True)}.

Definition 2.0.3 An assertion language L is said to be expressive relative to the set of programs P if for all assertions A ∈ L and programs P ∈ P, there exists an assertion S ∈ L which defines the strongest postcondition SP(A, P).
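On a finite toy state space these definitions can be computed directly as set operations; the following sketch (our own illustration, not the thesis's formalism) treats a state as the value of a single variable x and a program as a partial function.

```python
# Finite-state sketch of Definition 2.0.2 (assumed toy model): a state is an
# int (the value of x), a program is a function returning None when undefined.

def sp(A, P, states):
    """Strongest postcondition: images under P of states satisfying A."""
    return {P(s) for s in states if A(s) and P(s) is not None}

def wp(P, A, states):
    """Weakest precondition: states whose P-result, if defined, satisfies A."""
    return {s for s in states if P(s) is None or A(P(s))}

states = set(range(10))                         # x ranges over 0..9
P = lambda s: s + 1 if s + 1 < 10 else None     # x := x + 1, partial at 9
A = lambda s: s >= 5

print(sorted(sp(A, P, states)))   # [6, 7, 8, 9]
print(sorted(wp(P, A, states)))   # [4, 5, 6, 7, 8, 9]
```

Note that 9 lies in WP(P, A): the quantification over s′ is vacuous when ⟦P⟧(s) is undefined, matching the partial-correctness reading above.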

Definition 2.0.4 A proof system G for a set of programs P is said to be complete in the sense of Cook if, for all L such that L is expressive relative to P and for every asserted formula {A}P{B}, if {A}P{B} is true then ⊢_G {A}P{B} is provable.

Apt presented the proof of the completeness of the system in the sense of Cook in [1] using two central lemmas. We present them and discuss their proofs. Assume that −→x is the sequence of all variables which occur in P, that −→z is a sequence of fresh variables, and that both have the same length. Assume that the assertion language is expressive for the logical system, so there exists an assertion S that defines the strongest postcondition of −→x = −→z and R. The asserted program {−→x = −→z }R{S} is the most general formula for R, since any other true asserted program about R can be derived from {−→x = −→z }R{S}. This claim is the content of the first lemma.

Lemma 2.0.5 (Apt 1981) If {A}P{B} is true, then {−→x = −→z }R{S} ⊢ {A}P{B} is provable, provided that all the true assertions are given.

Proof.


It is proved by induction on P, where the most interesting case is P = R. The other cases are similar to those for the system H in [1].

Suppose that P is R. Assume {A}R{B} is true. We have ⊢ {−→x = −→z }R{S}. Let A1 be A[−→z := −→u ] and B1 be B[−→z := −→u ], where −→u ∉ FV(B) ∪ var(R). By the invariance axiom, ⊢ {A1[−→x := −→z ]}R{A1[−→x := −→z ]} is provable. By the conjunction rule, ⊢ {−→x = −→z ∧ A1[−→x := −→z ]}R{S ∧ A1[−→x := −→z ]} is provable, since FV(A1[−→x := −→z ]) ∩ var(R) = ∅. We now show that S ∧ A1[−→x := −→z ] → B1 is true.

Assume ⟦S ∧ A1[−→x := −→z ]⟧s = True. By definition ⟦S⟧s = True. By the property of the strongest postcondition, there exists a state s′ such that ⟦R⟧(s′) = s and ⟦−→x = −→z ⟧s′ = True.

By the invariance axiom, ⊢ {¬A1[−→x := −→z ]}R{¬A1[−→x := −→z ]} is provable. Then by the conjunction rule, ⊢ {−→x = −→z ∧ ¬A1[−→x := −→z ]}R{S ∧ ¬A1[−→x := −→z ]} is provable. By soundness, {−→x = −→z ∧ ¬A1[−→x := −→z ]}R{S ∧ ¬A1[−→x := −→z ]} is true. Now suppose that ⟦¬A1[−→x := −→z ]⟧s′ = True. Hence ⟦−→x = −→z ∧ ¬A1[−→x := −→z ]⟧s′ = True. Therefore ⟦¬A1[−→x := −→z ]⟧s = True. But ⟦A1[−→x := −→z ]⟧s = True by the assumption. This is a contradiction, and hence ⟦A1[−→x := −→z ]⟧s′ = True.

Since −→x = −→z ∧ A1[−→x := −→z ] → A1, we have ⟦A1⟧s′ = True. Then ⟦A⟧s′[−→z := s′(−→u )] = True. Then ⟦R⟧(s′[−→z := s′(−→u )]) = s[−→z := s(−→u )], since −→z , −→u ∉ var(R). Then by definition ⟦B⟧s[−→z := s(−→u )] = True. Then by definition ⟦B1⟧s = True. Hence S ∧ A1[−→x := −→z ] → B1 is true.

Then by the consequence rule, ⊢ {−→x = −→z ∧ A1[−→x := −→z ]}R{B1} is provable. Then by substitution rule II, ⊢ {−→x = −→x ∧ A1}R{B1} is provable. Then by the consequence rule, ⊢ {A1}R{B1} is provable. By substitution rule I, ⊢ {A1[−→u := −→z ]}R{B1[−→u := −→z ]} is provable. We have A → A1[−→u := −→z ] and B1[−→u := −→z ] → B. Then by the consequence rule, ⊢ {A}R{B} is provable. □

Lemma 2.0.6 (Apt 1981) ⊢ {−→x = −→z }R{S} is provable.

Proof.


By the definition of S, {−→x = −→z }R{S} is true, and hence {−→x = −→z }Q{S} is true since ⟦R⟧ = ⟦Q⟧. By Lemma 2.0.5, {−→x = −→z }R{S} ⊢ {−→x = −→z }Q{S} is provable. By the recursion rule, ⊢ {−→x = −→z }R{S} is provable. □

The completeness theorem states that if an asserted program {A}P{B} is true then ⊢ {A}P{B} is provable when all true assertions are given. This is the central concept of completeness in the sense of Cook.

Theorem 2.0.7 (Apt 1981) If {A}P{B} is true then ⊢ {A}P{B} is provable.

Proof.

Assume {A}P{B} is true. By Lemma 2.0.5, {−→x = −→z }R{S} ⊢ {A}P{B} is provable. By Lemma 2.0.6, ⊢ {−→x = −→z }R{S} is provable. Then ⊢ {A}P{B} is provable. □


3 New Complete System of Hoare's Logic with Recursive Procedures

We introduce a complete system of Hoare's logic with recursive procedures. Apt gave a system for the same purpose and showed its completeness in [1]. Our system is obtained from Apt's system by replacing the invariance axiom, substitution rule I, substitution rule II, and the conjunction rule with the rules (inv-conj) and (exists). Apt suggested, without proofs, that one could replace them by his substitution rule I, (inv-conj), and (exists) to get another complete system. We prove that substitution rule I can actually be derived in our system. We also give a detailed proof of the completeness of our system.


3.1 Language

Our assertion is a formula A of Peano arithmetic. We define the language L as follows.

Definition 3.1.1 Formulas A are defined by

A ::= e = e | e < e | ¬A | A ∧ A | A ∨ A | A→ A | ∀xA | ∃xA

We will sometimes call a formula an assertion.

We define FV(A) as the set of free variables in A. We define FV(e) similarly.

Our program is a while-program with parameterless recursive procedures.

Definition 3.1.2 Programs, denoted by P,Q, are defined by

P,Q ::= x := e| if (b) then (P) else (P)| while (b) do (P)| P; P| skip| Ri.

Here b is a quantifier-free formula. Ri is a parameterless procedure name having Qi as its definition body. We define the language L− as L excluding the construct R.

An asserted program is defined as {A}P{B}, which means partial correctness.

3.2 Semantics

Definition 3.2.1 We define the semantics of our programming language. For a program P, its meaning ⟦P⟧ is defined as a partial function from States to States. We define ⟦P⟧(r) as the resulting state after termination of the execution of P with the initial state r. If the execution of P with the initial state r does not terminate, we leave ⟦P⟧(r) undefined. In order to define ⟦P⟧, we first define ⟦P⟧− for all P in the language L−. We define ⟦P⟧− by induction on P in L− as follows:

Jx := eK−(s) = s[x := JeKs],Jif (b) then (P ) else (P )K−(s) =

JP K(s) if JbKs = TrueJP K(s) otherwise,

Jwhile (b) do (P)K− =

s if JbKs = FalseJwhile (b) do (P)K−(JPK−(s)) otherwise,JP ; P K−(s) = JP K−(JP K−(s))JskipK−(s) = s.

Definition 3.2.2 For an asserted program {A}P{B}, the meaning of {A}P{B} is defined as True or False. {A}P{B} is defined to be True if the following holds.

For all s and s′, if ⟦A⟧s = True and ⟦P⟧(s) = s′, then ⟦B⟧s′ = True.

Definition 3.2.3 The semantics of P in L is defined by

⟦P⟧(s) = s′ if {⟦P^(i)⟧−(s) | i ≥ 0} = {s′},
⟦P⟧(s) is undefined if {⟦P^(i)⟧−(s) | i ≥ 0} = ∅.
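The partial-function semantics above can be prototyped directly. The following is a minimal sketch, assuming a hypothetical encoding of programs as nested tuples and of stores as Python dictionaries; the constructor names ("assign", "if", "while", "seq", "skip") are our own, not from the thesis.

```python
# Minimal interpreter sketch for the procedure-free language L−
# (Definition 3.2.1). A store is a dict from variable names to
# natural numbers; expressions and conditions are functions of the store.

def run(P, s):
    """Return the final store; loops forever if P does not terminate."""
    tag = P[0]
    if tag == "assign":            # ("assign", x, e)
        _, x, e = P
        s2 = dict(s); s2[x] = e(s)
        return s2
    if tag == "if":                # ("if", b, P1, P2)
        _, b, P1, P2 = P
        return run(P1, s) if b(s) else run(P2, s)
    if tag == "while":             # ("while", b, P1)
        _, b, P1 = P
        while b(s):
            s = run(P1, s)
        return s
    if tag == "seq":               # ("seq", P1, P2): [[P1; P2]](s) = [[P2]]([[P1]](s))
        _, P1, P2 = P
        return run(P2, run(P1, s))
    if tag == "skip":
        return s
    raise ValueError("unknown construct: %r" % (tag,))

# while (x < 3) do (x := x + 1)
prog = ("while", lambda s: s["x"] < 3, ("assign", "x", lambda s: s["x"] + 1))
print(run(prog, {"x": 0}))   # {'x': 3}
```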

3.3 Logical System

This section defines the logical system.

We will write A[x := e] for the formula obtained from A by replacing x by e.

Definition 3.3.1 Our logical system consists of the following inference rules. As mentioned in the previous section, we will use Γ for a set of asserted programs. A judgment is defined as Γ ⊢ {A}P{B}.


Γ ⊢ {A}skip{A} (skip)

Γ, {A}P{B} ⊢ {A}P{B} (axiom)

Γ ⊢ {A[x := e]}x := e{A} (assignment)

Γ ⊢ {A ∧ b}P1{B}   Γ ⊢ {A ∧ ¬b}P2{B}
Γ ⊢ {A}if (b) then (P1) else (P2){B} (if)

Γ ⊢ {A ∧ b}P{A}
Γ ⊢ {A}while (b) do (P){A ∧ ¬b} (while)

Γ ⊢ {A}P1{C}   Γ ⊢ {C}P2{B}
Γ ⊢ {A}P1; P2{B} (comp)

Γ ⊢ {A1}P{B1}
Γ ⊢ {A}P{B} (conseq)
(A → A1, B1 → B)

Γ ∪ {{Ai}Ri{Bi} | i = 1, . . . , nproc} ⊢ {A1}Q1{B1}
...
Γ ∪ {{Ai}Ri{Bi} | i = 1, . . . , nproc} ⊢ {Anproc}Qnproc{Bnproc}
Γ ⊢ {Ai}Ri{Bi} (recursion)

Γ ⊢ {A}P{C}
Γ ⊢ {A ∧ B}P{C ∧ B} (inv-conj)
(FV(B) ∩ Mod(P) = ∅)


Γ ⊢ {A}P{B}
Γ ⊢ {∃x.A}P{B} (exists)
(x ∉ FV(B) ∪ EFV(P))

We say {A}P{B} is provable, and we write ⊢ {A}P{B}, when ⊢ {A}P{B} can be derived by these inference rules.

The rule (exists) is analogous to the existential introduction rule of predicate calculus.

3.4 Completeness

Lemma 3.4.1 If {A}P1{B} is true and ⟦P1⟧ = ⟦P2⟧ then {A}P2{B} is true.

Proof.

By definition.

Definition 3.4.2 X is called the strongest postcondition of P and A if and only if the following holds.

(1) For all s, s′, if ⟦A⟧s = True and ⟦P⟧(s) = s′ then s′ ∈ X.

(2) For all Y, if ∀s, s′(⟦A⟧s = True ∧ ⟦P⟧(s) = s′ → s′ ∈ Y) then X ⊆ Y.

Definition 3.4.3 S_{A,P}(x⃗) is defined as the strongest postcondition for A and P.

S_{A,P}(x⃗) gives the strongest assertion S such that {A}P{S} is true.

Lemma 3.4.4 If ⊢ {A}P{B} then ⊢ {A[x⃗ := z⃗]}P{B[x⃗ := z⃗]} where z⃗, x⃗ ∉ EFV(P).

Proof. Assume ⊢ {A}P{B} and z⃗ ∉ EFV(P). Then by (inv-conj), ⊢ {A ∧ x⃗ = z⃗}P{B ∧ x⃗ = z⃗}. We have B ∧ x⃗ = z⃗ → B[x⃗ := z⃗]. Then by (conseq), ⊢ {A ∧ x⃗ = z⃗}P{B[x⃗ := z⃗]}. Then by (exists), ⊢ {∃z⃗(A ∧ x⃗ = z⃗)}P{B[x⃗ := z⃗]}. We have A[x⃗ := z⃗] → ∃z⃗(A ∧ x⃗ = z⃗). Then by (conseq), ⊢ {A[x⃗ := z⃗]}P{B[x⃗ := z⃗]}. □

Lemma 3.4.5 If {A}P{B} is true and z⃗ ∉ EFV(P) then Γ ⊢ {A}P{B}, where Γ = {{x⃗ = z⃗}Ri{S_{x⃗=z⃗,Ri}(x⃗)} | i = 1, . . . , n}, x⃗ = x1, . . . , xm and {xj | j = 1, . . . , m} = EFV(P).

Proof. We will prove it by induction on P. We will consider the cases of P.

If P is other than Ri, the proofs of these cases are similar to those in the completeness proof of the system H given in [1].

Case P is Ri.

Assume {A}Ri{B} is true and z⃗ ∉ EFV(Ri). We have Γ ⊢ {x⃗ = z⃗}Ri{S_{x⃗=z⃗,Ri}(x⃗)}. Let A1 be A[z⃗ := u⃗] and B1 be B[z⃗ := u⃗] where u⃗ ∉ FV(B) ∪ EFV(Ri). By the rule (inv-conj), Γ ⊢ {x⃗ = z⃗ ∧ A1[x⃗ := z⃗]}Ri{S_{x⃗=z⃗,Ri}(x⃗) ∧ A1[x⃗ := z⃗]} since FV(A1[x⃗ := z⃗]) ∩ Mod(Ri) = ∅. We now show that S_{x⃗=z⃗,Ri}(x⃗) ∧ A1[x⃗ := z⃗] → B1.

Assume ⟦S_{x⃗=z⃗,Ri}(x⃗) ∧ A1[x⃗ := z⃗]⟧s = True. By definition ⟦S_{x⃗=z⃗,Ri}(x⃗)⟧s = True. By the property of the strongest postcondition, there exists a state s′ such that ⟦Ri⟧(s′) = s and ⟦x⃗ = z⃗⟧s′ = True.

Now suppose that ⟦A1[x⃗ := z⃗]⟧s′ ≠ True. By (inv-conj), Γ ⊢ {x⃗ = z⃗ ∧ ¬A1[x⃗ := z⃗]}Ri{S_{x⃗=z⃗,Ri}(x⃗) ∧ ¬A1[x⃗ := z⃗]}. Since Γ is true by definition, by soundness, {x⃗ = z⃗ ∧ ¬A1[x⃗ := z⃗]}Ri{S_{x⃗=z⃗,Ri}(x⃗) ∧ ¬A1[x⃗ := z⃗]} is true. Then by definition ⟦¬A1[x⃗ := z⃗]⟧s = True. But ⟦A1[x⃗ := z⃗]⟧s = True. This contradicts the assumption, and hence ⟦A1[x⃗ := z⃗]⟧s′ = True.

Since x⃗ = z⃗ ∧ A1[x⃗ := z⃗] → A1, we have ⟦A1⟧s′ = True. Then ⟦A⟧s′[z⃗ := s′(u⃗)] = True. Then ⟦Ri⟧(s′[z⃗ := s′(u⃗)]) = s[z⃗ := s(u⃗)] since z⃗, u⃗ ∉ EFV(Ri). Then by definition, ⟦B⟧s[z⃗ := s(u⃗)] = True. Then by definition, ⟦B1⟧s = True. Hence S_{x⃗=z⃗,Ri}(x⃗) ∧ A1[x⃗ := z⃗] → B1 is true. Then by the rule (conseq), Γ ⊢ {x⃗ = z⃗ ∧ A1[x⃗ := z⃗]}Ri{B1}. Then by the rule (exists), Γ ⊢ {∃z⃗(x⃗ = z⃗ ∧ A1[x⃗ := z⃗])}Ri{B1}. We have A1 → ∃z⃗(x⃗ = z⃗ ∧ A1[x⃗ := z⃗]). Then by the rule (conseq), Γ ⊢ {A1}Ri{B1}. By Lemma 3.4.4, Γ ⊢ {A1[u⃗ := z⃗]}Ri{B1[u⃗ := z⃗]}. We have A → A1[u⃗ := z⃗] and B1[u⃗ := z⃗] → B. Then by (conseq), Γ ⊢ {A}P{B}. □

The next lemma shows that the hypothesis {x⃗ = z⃗}Ri{S_{x⃗=z⃗,Ri}(x⃗)} used in Lemma 3.4.5 is provable in our system.

Lemma 3.4.6 ⊢ {x⃗ = z⃗}Rj{S_{x⃗=z⃗,Rj}(x⃗)} for j = 1, . . . , n, where x⃗ = x1, . . . , xm, {xj | j = 1, . . . , m} = ∪_{i=1}^n EFV(Ri) and z⃗ ∉ ∪_{i=1}^n EFV(Ri).

Proof.

Assume z⃗ ∉ ∪_{i=1}^n EFV(Ri) and x⃗ = x1, . . . , xm where {xj | j = 1, . . . , m} = ∪_{i=1}^n EFV(Ri).

Fix j. Assume ⟦x⃗ = z⃗⟧s = True. Assume ⟦Qj⟧(s) = r where Qj is the body of Rj. Then by Lemma 3.4.1, ⟦Rj⟧(s) = r. By definition, ⟦S_{x⃗=z⃗,Rj}(x⃗)⟧r = True. Then by definition, {x⃗ = z⃗}Qj{S_{x⃗=z⃗,Rj}(x⃗)} is true. By Lemma 3.4.5, {{x⃗ = z⃗}Ri{S_{x⃗=z⃗,Ri}(x⃗)} | i = 1, . . . , n} ⊢ {x⃗ = z⃗}Qj{S_{x⃗=z⃗,Rj}(x⃗)}. Hence {{x⃗ = z⃗}Ri{S_{x⃗=z⃗,Ri}(x⃗)} | i = 1, . . . , n} ⊢ {x⃗ = z⃗}Qj{S_{x⃗=z⃗,Rj}(x⃗)} for all j = 1, . . . , n. Then by the rule (recursion), ⊢ {x⃗ = z⃗}Rj{S_{x⃗=z⃗,Rj}(x⃗)}. □

The following theorem is the key theorem of this chapter. It says that our system is complete.

Theorem 3.4.7 If {A}P{B} is true then {A}P{B} is provable.

Proof.

Assume {A}P{B} is true. Let z⃗ be such that z⃗ ∉ ∪_{i=1}^n EFV(Ri) ∪ EFV(P) and x⃗ = x1, . . . , xm where {xj | j = 1, . . . , m} = ∪_{i=1}^n EFV(Ri) ∪ EFV(P). Then by Lemma 3.4.5, {{x⃗ = z⃗}Ri{S_{x⃗=z⃗,Ri}(x⃗)} | i = 1, . . . , n} ⊢ {A}P{B}. By Lemma 3.4.6, ⊢ {x⃗ = z⃗}Ri{S_{x⃗=z⃗,Ri}(x⃗)} for i = 1, . . . , n. Then we have ⊢ {A}P{B}. □

Apt’s system cannot be extended to separation logic, because his invariance axiomis inconsistentwith separation logic. On theother hand,we can extendour system to averification systemwith separation logic and recursiveprocedures in a straightforwardway.


Chapter 4. Separation Logic for Recursive Procedures


4.1 Language

This section defines our programming language and our assertion language. Our programming language inherits the pointer programs of Reynolds' paper [13]. Our assertion language is also the same as in [13], which is based on Peano arithmetic.

Base Language

We first define our base language, which will be used later for both a part of our programming language and a part of our assertion language. It is essentially a first-order language for Peano arithmetic. We call its formulas pure formulas. We will use i, j, k, l, m, n for natural numbers. Our base language is defined as follows. We have variables x, y, z, w, . . . and constants 0, 1, null, denoted by c. The symbol null means the null pointer. We have function symbols + and ×. Our predicate symbols, denoted by p, are = and <. Terms, denoted by t, are defined as t ::= x | c | f(t, . . . , t). Our pure formulas, denoted by A, are defined by

A ::= p(t, t) | ¬A | A ∧ A | A ∨ A | A→ A | ∀xA | ∃xA

where p(t, t) is t = t or t < t. Here p(t1, t2) means that the predicate p holds for the terms t1, t2. The other formula constructions mean the usual logical connectives. We will sometimes write the number n to denote the term 1 + (1 + (1 + . . . (1 + 0))) (n times of 1+).

Programming Language

Next we define our programming language, which is an extension of while programs with pointers and procedures. Its expressions are terms of the base language. That is, our expressions, denoted by e, are defined by e ::= x | 0 | 1 | null | e + e | e × e. Expressions mean natural numbers or pointers.

Its boolean expressions, denoted by b, are quantifier-free pure formulas, defined by b ::= e = e | e < e | ¬b | b ∧ b | b ∨ b | b → b. Boolean expressions are used as conditions in a program.

We assume procedure names R1, . . . , Rnproc for some nproc. We will write R for these procedure names.

Definition 4.1.1 Programs, denoted by P, Q, are defined by

P ::= x := e (assignment)
| if (b) then (P) else (P) (conditional)
| while (b) do (P) (iteration)
| P; P (composition)
| skip (no operation)
| x := cons(e, e) (allocation)
| x := [e] (lookup)
| [e] := e (mutation)
| dispose(e) (deallocation)
| R (mutual recursive procedure name)

R means a procedure name without parameters.

We write L for the set of programs. We write L− for the set of programs that do not contain procedure names.

The statement x := cons(e1, e2) allocates two new consecutive memory cells, puts the values of e1 and e2 in the respective cells, and assigns the first address to x. The statement x := [e] looks up the content of the memory cell at the address e and assigns it to x. The statement [e1] := e2 changes the content of the memory cell at the address e1 to e2. The statement dispose(e) deallocates the memory cell at the address e.

The programs x := e, skip, x := cons(e1, e2), x := [e], [e1] := e2 and dispose(e) are called atomic programs. A program P is called pure if it does not contain any of x := cons(e1, e2), x := [e], [e1] := e2 or dispose(e). This means that a pure program never accesses memory cells directly.

We call Procedure R(Q) a procedure declaration, where R is a procedure name and Q is a program. The program Q is said to be the body of R. This means that we define the procedure name R with its procedure body Q.

We assume the procedure declarations

Procedure R1(Q1), . . . , Procedure Rnproc(Qnproc)

that give procedure definitions to all procedure names in the rest of the paper. We allow mutually recursive procedure calls.

Assertion Language andAsserted Programs

Our assertion language is a first-order language extended by the separating conjunction ∗ and the separating implication −∗ as well as emp and ↦. Its variables, constants, function symbols, and terms are the same as those of the base language. We have predicate symbols =, < and ↦ and a predicate constant emp. Our assertion language is defined as follows.

Definition 4.1.2 Formulas A are defined by

A ::= emp | e = e | e < e | e ↦ e | ¬A | A ∧ A | A ∨ A | A → A | ∀xA | ∃xA | A ∗ A | A −∗ A

We will sometimes call a formula an assertion.

We define FV(A) as the set of free variables in A. We define FV(e) similarly.

The symbol emp means that the current heap is empty. The formula e1 ↦ e2 means that the current heap has only one cell at the address e1 and its content is e2. The formula A ∗ B means that the current heap can be split into two disjoint heaps such that the formula A holds at one heap and the formula B holds at the other heap. The formula A −∗ B means that for any heap disjoint from the current heap at which the formula A holds, the formula B holds at the new heap obtained by combining the current heap with that heap.


We use vector notation to denote a sequence. For example, e⃗ denotes the sequence e1, . . . , en of expressions.

Definition 4.1.3 The expression {A}P{B} is called an asserted program, where A, B are formulas and P is a program.

This means the program Pwith its precondition A and its postcondition B.

Unfolding of Procedures

We define the set of procedure names that are visible in a program. It will be necessary later in defining the dependency relation between two procedures.

Definition 4.1.4 The set PN(P) of procedure names in P is defined as follows.

PN(P) = ∅ if P is atomic,
PN(if (b) then (P1) else (P2)) = PN(P1) ∪ PN(P2),
PN(P1; P2) = PN(P1) ∪ PN(P2),
PN(while (b) do (P)) = PN(P),
PN(Ri) = {Ri}.
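This definition is a straightforward structural recursion; a minimal sketch, assuming a hypothetical tuple encoding of programs in which a procedure name Ri is written ("call", i):

```python
# PN(P) of Definition 4.1.4: the set of procedure names visible in P.
def PN(P):
    tag = P[0]
    if tag == "call":                      # procedure name R_i
        return {P[1]}
    if tag == "if":                        # ("if", b, P1, P2)
        return PN(P[2]) | PN(P[3])
    if tag == "seq":                       # ("seq", P1, P2)
        return PN(P[1]) | PN(P[2])
    if tag == "while":                     # ("while", b, P1)
        return PN(P[2])
    return set()                           # atomic programs contain no procedure names
```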

We define the dependency relation between two procedures. When a procedure name appears in the body of another procedure, we say the latter procedure depends on the former procedure at level 1. When one procedure depends on another and the latter one again depends on a third one, we say the first one also depends on the third one. In this case, the level of the third dependency is determined by summing up the levels of the first and second dependencies mentioned above.

Definition 4.1.5 We define the relation Ri ⇝^k Rj as follows:

Ri ⇝^0 Ri,
Ri ⇝^1 Rj if PN(Qi) ∋ Rj,
Ri ⇝^k Rj if Ri = R′0 ⇝^1 R′1 ⇝^1 . . . ⇝^1 R′k = Rj for some R′0, . . . , R′k.


The procedure dependency PD(Ri, k) of a procedure name Ri up to level k is defined by PD(Ri, k) = {Rj | Ri ⇝^l Rj, l ≤ k}.

This relation will be used to define EFV(P) and Mod(P) as well as the semantics of P.

Note that (1) PD(Ri, k) ⊆ PD(Ri, k + 1) for all k and (2) PD(Ri, k) ⊆ {Ri | i = 1, . . . , nproc} where nproc is the number of procedures in the declaration.
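The dependency sets are computable by iterating the level-1 relation. A small sketch, assuming the declarations are given as a hypothetical dict `bodies` mapping each procedure index i to PN(Qi), the set of procedure indices called in the body Qi:

```python
# PD(R_i, k) of Definition 4.1.5: all R_j with R_i ~>^l R_j for some l <= k.
def PD(bodies, i, k):
    reach = {i}                            # level 0: R_i ~>^0 R_i
    for _ in range(k):                     # add one dependency level per step
        reach = reach | {j for m in reach for j in bodies[m]}
    return reach
```

With `bodies = {1: {2}, 2: {3}, 3: set()}` (R1 calls R2, which calls R3), the set grows by one procedure per level and stabilizes at level nproc − 1 = 2, in accordance with Lemma 4.1.6 (2) below.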

The first claim of the following lemma shows that once the procedure dependencies of a procedure up to two consecutive levels are the same, they are the same up to any higher level too. The second claim states that nproc − 1 is sufficient for the largest level.

Lemma 4.1.6 (1) If PD(Ri, k) = PD(Ri, k + 1) then PD(Ri, k) = PD(Ri, k + l) for all l ∈ N.

(2) PD(Ri, k) ⊆ PD(Ri, nproc − 1) for all k.

Proof.

(1) It is proved by induction on l.

Case 1. l = 0.

Its proof is immediate.

Case 2. l = l′ + 1.

Assume PD(Ri, k) = PD(Ri, k + 1). We can show that if R′ ∈ PD(Ri, k) then R′ ∈ PD(Ri, k + l′ + 1). Now we will show that if R′ ∈ PD(Ri, k + l′ + 1) then R′ ∈ PD(Ri, k). Assume R′ ∈ PD(Ri, k + l′ + 1). Then we have Rj such that Rj ∈ PD(Ri, k + l′) and Rj ⇝^1 R′. By the induction hypothesis, PD(Ri, k) = PD(Ri, k + l′). Then Rj ∈ PD(Ri, k). Then by definition, R′ ∈ PD(Ri, k + 1). Then R′ ∈ PD(Ri, k) by the assumption. Therefore, PD(Ri, k) = PD(Ri, k + l′ + 1).

(2) We will show that PD(Ri, m) = PD(Ri, m + 1) for some m < nproc by contradiction. Assume that for all m < nproc, PD(Ri, m) ≠ PD(Ri, m + 1). Then PD(Ri, m) ⫋ PD(Ri, m + 1). Then we have |PD(Ri, m)| ≥ m + 1 and hence |PD(Ri, nproc)| ≥ nproc + 1. But PD(Ri, nproc) ⊆ {Ri | i = 1, . . . , nproc} and then |PD(Ri, nproc)| ≤ nproc. It is a contradiction. Therefore, PD(Ri, m) = PD(Ri, m + 1) for some m < nproc. By (1), we now have PD(Ri, nproc − 1) = PD(Ri, nproc − 1 + l) for all l.

Therefore, PD(Ri, k) ⊆ PD(Ri, nproc − 1) for all k. □

We need to define a closed program that never terminates in order to define the unfolding of a program a specific number of times. First we define Ω.

Definition 4.1.7 We define Ω as

while (0 = 0) do (skip)

Substitution of a program for a procedure name is defined below.

Definition 4.1.8 Let P⃗′ = P′1, . . . , P′nproc where each P′i is a program. P[P⃗′] is defined by induction on P as follows:

P[P⃗′] = P if P is atomic,
(if (b) then (P1) else (P2))[P⃗′] = if (b) then (P1[P⃗′]) else (P2[P⃗′]),
(while (b) do (P))[P⃗′] = while (b) do (P[P⃗′]),
(P1; P2)[P⃗′] = P1[P⃗′]; P2[P⃗′],
(Ri)[P⃗′] = P′i.

P[P⃗′] is the program obtained from P by replacing the procedure names R1, . . . , Rnproc by P′1, . . . , P′nproc respectively.

Unfolding transforms a program in the language L into a program in the language L−. Discussions of programs in the language L can then be reduced to discussions in L−, which are either easy or already shown elsewhere. P^(k) denotes P in which each procedure name is unfolded only k times. P^(0) just replaces a procedure name by Ω, since the procedure name is not unfolded any more, which means the procedure name is supposed not to be executed. Here we present the unfolding of a program.

Definition 4.1.9 Let Ωi = Ω for 1 ≤ i ≤ nproc. We define P^(k) for k ≥ 0 as follows:

P^(0) = P[Ω1, . . . , Ωnproc],
P^(k+1) = P[Q1^(k), . . . , Qnproc^(k)].

Sometimes we will call P^(k) the k-times unfolding of the program P.
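Substitution and unfolding can be sketched concretely. The following assumes a hypothetical tuple encoding of programs in which a procedure name Ri is written ("call", i); OMEGA plays the role of Ω from Definition 4.1.7, and `bodies` maps each procedure index to its body Qi.

```python
# Substitution P[P'] (Definition 4.1.8) and k-times unfolding P^(k)
# (Definition 4.1.9). `subs` maps procedure indices to replacement programs.
OMEGA = ("while", lambda s: True, ("skip",))   # while (0 = 0) do (skip)

def subst(P, subs):
    tag = P[0]
    if tag == "call":
        return subs[P[1]]
    if tag == "if":
        return ("if", P[1], subst(P[2], subs), subst(P[3], subs))
    if tag == "while":
        return ("while", P[1], subst(P[2], subs))
    if tag == "seq":
        return ("seq", subst(P[1], subs), subst(P[2], subs))
    return P                                   # atomic programs are unchanged

def unfold(P, bodies, k):
    """P^(k): unfold each procedure name k times, then replace it by OMEGA."""
    if k == 0:
        return subst(P, {i: OMEGA for i in bodies})
    return subst(P, {i: unfold(Q, bodies, k - 1) for i, Q in bodies.items()})
```

For instance, with `bodies = {1: ("seq", ("skip",), ("call", 1))}`, the one-time unfolding of R1 is `("seq", ("skip",), OMEGA)`, matching R^(1) = Q[Ω].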

We present some basic properties of unfolded programs.

Proposition 4.1.10 (1) P^(k) = P[R⃗^(k)].

(2) Ri^(0) = Ω.

(3) Ri^(k+1) = Qi[R⃗^(k)].

Proof.

(1) By case analysis on k.

Case 1. k = 0.

P^(0) = P[Ω⃗] by definition. By (2), P[Ω⃗] = P[R⃗^(0)]. Therefore, P^(0) = P[R⃗^(0)].

Case 2. k = k′ + 1.

P^(k′+1) = P[Q⃗^(k′)]. Since Ri^(k′+1) = Ri[Q⃗^(k′)] = Qi^(k′), we have P[Q⃗^(k′)] = P[R⃗^(k′+1)]. Therefore, P^(k′+1) = P[R⃗^(k′+1)].

(2) By definition we have Ri^(0) = Ri[Ω⃗] = Ω.

(3) By definition we have Ri^(k+1) = Ri[Q⃗^(k)] = Qi^(k). By (1), Qi^(k) = Qi[R⃗^(k)]. Therefore, Ri^(k+1) = Qi[R⃗^(k)]. □

The next two definitions define the set of free variables (FV(P)) and the set of variables that can be modified (Mod0(P)). Generally speaking, the variable on the left of the symbol := in a program is a modifiable variable. First we define these two sets for a program by its syntactic structure. Next, they are used to define the set of free variables (extended free variables, EFV(P)) and the set of modifiable variables (Mod(P)) that may appear in the execution of the program. Since 'a free variable' has an ordinary meaning only without procedure calls, we will use 'an extended free variable' for the notion with procedure calls.


Definition 4.1.11 We define FV(P) for P in L− and EFV(P) for P in L as follows:

FV(x := e) = {x} ∪ FV(e),
FV(if (b) then (P1) else (P2)) = FV(b) ∪ FV(P1) ∪ FV(P2),
FV(while (b) do (P)) = FV(b) ∪ FV(P),
FV(P1; P2) = FV(P1) ∪ FV(P2),
FV(skip) = ∅,
FV(x := cons(e1, e2)) = {x} ∪ FV(e1) ∪ FV(e2),
FV(x := [e]) = {x} ∪ FV(e),
FV([e1] := e2) = FV(e1) ∪ FV(e2),
FV(dispose(e)) = FV(e),
EFV(P) = FV(P^(nproc)).

The expression FV(P) is the set of variables that occur in P. EFV(P) is the set of variables that may be used in the execution of P.

The expression FV(O1, . . . , Om) is defined as FV(O1) ∪ . . . ∪ FV(Om) when each Oi is a formula, an expression, or a program.

Definition 4.1.12 We define Mod0(P) for P in L− and Mod(P) for P in L as follows:

Mod0(x := e) = {x},
Mod0(if (b) then (P1) else (P2)) = Mod0(P1) ∪ Mod0(P2),
Mod0(while (b) do (P)) = Mod0(P),
Mod0(P1; P2) = Mod0(P1) ∪ Mod0(P2),
Mod0(skip) = ∅,
Mod0(x := cons(e1, e2)) = {x},
Mod0(x := [e]) = {x},
Mod0([e1] := e2) = ∅,
Mod0(dispose(e)) = ∅,
Mod(P) = Mod0(P^(nproc)).

The expression Mod(P) is the set of variables that may be modified by P.
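Both sets are plain structural recursions. A small sketch, assuming a hypothetical tuple encoding of procedure-free programs in which each expression or condition is already replaced by its (given) set of variables, e.g. x := [p] becomes ("lookup", "x", {"p"}); the L− modification set is written Mod0 here:

```python
# FV(P) and Mod0(P) for procedure-free programs
# (Definitions 4.1.11 and 4.1.12).
def FV(P):
    tag = P[0]
    if tag == "assign":   return {P[1]} | P[2]            # x := e
    if tag == "if":       return P[1] | FV(P[2]) | FV(P[3])
    if tag == "while":    return P[1] | FV(P[2])
    if tag == "seq":      return FV(P[1]) | FV(P[2])
    if tag == "skip":     return set()
    if tag == "cons":     return {P[1]} | P[2] | P[3]     # x := cons(e1, e2)
    if tag == "lookup":   return {P[1]} | P[2]            # x := [e]
    if tag == "mutate":   return P[1] | P[2]              # [e1] := e2
    if tag == "dispose":  return P[1]
    raise ValueError(tag)

def Mod0(P):
    tag = P[0]
    if tag in ("assign", "cons", "lookup"):               # x on the left of :=
        return {P[1]}
    if tag == "if":       return Mod0(P[2]) | Mod0(P[3])
    if tag == "while":    return Mod0(P[2])
    if tag == "seq":      return Mod0(P[1]) | Mod0(P[2])
    return set()                                          # skip, mutate, dispose
```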


Lemma 4.1.13 (1) FV(Ri^(k+1)) = ∪_{Rj ∈ PD(Ri, k)} FV(Qj).

(2) Mod0(Ri^(k+1)) = ∪_{Rj ∈ PD(Ri, k)} Mod0(Qj).

Proof.

(1) It is proved by induction on k.

Case 1. k = 0.

By definition, FV(Ri^(1)) = FV(Ri[Q⃗^(0)]) = FV(Qi^(0)) = FV(Qi[Ω⃗]) = FV(Qi). Since PD(Ri, 0) = {Ri}, we have FV(Qi) = ∪_{Rj ∈ PD(Ri, 0)} FV(Qj).

Case 2. k = k′ + 1.

By Proposition 4.1.10 (3), FV(Ri^(k′+2)) = FV(Qi[R⃗^(k′+1)]). Then we have FV(Ri^(k′+2)) = FV(Qi) ∪ ∪_{Ri ⇝^1 Rj} FV(Rj^(k′+1)). By the induction hypothesis, we have FV(Ri^(k′+2)) = FV(Qi) ∪ ∪_{Ri ⇝^1 Rj} ∪_{Rm ∈ PD(Rj, k′)} FV(Qm). Then we have FV(Ri^(k′+2)) = FV(Qi) ∪ ∪_{Rm ∈ PD(Ri, k′+1)} FV(Qm). Therefore, FV(Ri^(k′+2)) = ∪_{Rj ∈ PD(Ri, k′+1)} FV(Qj).

(2) Its proof is similar to (1). □

Proposition 4.1.14 (1) FV(P^(k)) ⊆ EFV(P) for all k.

(2) Mod0(P^(k)) ⊆ Mod(P) for all k.

Proof.

(1) Fix k. If k = 0, the claim trivially holds. Assume k > 0. By Lemma 4.1.6 (2), we have PD(Ri, k − 1) ⊆ PD(Ri, nproc − 1). Then ∪_{Rj ∈ PD(Ri, k−1)} FV(Qj) ⊆ ∪_{Rj ∈ PD(Ri, nproc−1)} FV(Qj). Then by Lemma 4.1.13 (1), FV(Ri^(k)) ⊆ FV(Ri^(nproc)). Then we have FV(P) ∪ ∪_{Ri ∈ PN(P)} FV(Ri^(k)) ⊆ FV(P) ∪ ∪_{Ri ∈ PN(P)} FV(Ri^(nproc)). Then by Proposition 4.1.10 (1) we have FV(P^(k)) ⊆ FV(P^(nproc)). By definition, FV(P^(k)) ⊆ EFV(P).

(2) Its proof is similar to (1). □


4.2 Semantics

The semantics of our programming language and our assertion language is defined in this section. Our semantics is based on the same structure as that in Reynolds' paper [13] except for the following simplifications: (1) values are natural numbers, (2) addresses are non-zero natural numbers, and (3) null is 0.

The set N is defined as the set of natural numbers. The set Vars is defined as the set of variables of the base language. The set Locs is defined as the set {n ∈ N | n > 0}.

For sets S1, S2, f : S1 → S2 means that f is a function from S1 to S2. f : S1 →fin S2 means that f is a finite function from S1 to S2, that is, there is a finite subset S′1 of S1 such that f : S′1 → S2. Dom(f) denotes the domain of the function f. The expression p(S) denotes the power set of the set S. For a function f : A → B and a subset C ⊆ A, the function f|C : C → B is defined by f|C(x) = f(x) for x ∈ C.

A store, denoted by s, is defined as a function from Vars to N. A heap, denoted by h, is defined as a finite function from Locs to N. A value is a natural number. An address is a positive natural number. The null pointer is 0. A store assigns a value to each variable. A heap assigns a value to each address in its finite domain.

The store s[x1 := n1, . . . , xk := nk] is defined as the store s′ such that s′(xi) = ni and s′(y) = s(y) for y ∉ {x1, . . . , xk}. The heap h[m1 := n1, . . . , mk := nk] is defined as the heap h′ such that h′(mi) = ni and h′(y) = h(y) for y ∈ Dom(h) − {m1, . . . , mk}. The store s[x1 := n1, . . . , xk := nk] is the same as s except for the values of the variables x1, . . . , xk. The heap h[m1 := n1, . . . , mk := nk] is the same as h except for the contents of the memory cells at the addresses m1, . . . , mk.

We will write h = h1 + h2 when Dom(h) = Dom(h1) ∪ Dom(h2), Dom(h1) ∩ Dom(h2) = ∅, h(x) = h1(x) for x ∈ Dom(h1), and h(x) = h2(x) for x ∈ Dom(h2). The heap h is divided into the two disjoint heaps h1 and h2 when h = h1 + h2.

A state is defined as a pair (s, h). The set States is defined as the set of states. The state of a pointer program is specified by a store and a heap, since pointer programs manipulate memory heaps as well as variable assignments.


Definition 4.2.1 We define the semantics of our base language by the standard model of natural numbers and ⟦null⟧ = 0. That is, we suppose ⟦0⟧ = 0, ⟦1⟧ = 1, ⟦+⟧ = +, ⟦×⟧ = ×, ⟦=⟧ = (=), and ⟦<⟧ = (<). For a store s, an expression e, and a pure formula A, according to the interpretation of a first-order language, the meaning ⟦e⟧s is defined as a natural number and the meaning ⟦A⟧s is defined as True or False. The expressions ⟦e⟧s and ⟦A⟧s are the value of e under the store s and the truth value of A under the store s, respectively.

Semantics of Programs

The relation ⊆ over functions of type States ∪ {abort} → p(States ∪ {abort}) is necessary to define the semantics of while (b) do (P).

Definition 4.2.2 We define ⊆ for functions F, G : States ∪ {abort} → p(States ∪ {abort}). F ⊆ G is defined to hold if ∀r ∈ States (F(r) ⊆ G(r)).

Definition 4.2.3 We define the semantics of our programming language. For a program P, its meaning ⟦P⟧ is defined as a function from States ∪ {abort} to p(States ∪ {abort}). We will define ⟦P⟧(r) as the set of all possible resulting states after the execution of P with the initial state r terminates. In particular, if the execution of P with the initial state r does not terminate, we will define ⟦P⟧(r) as the empty set ∅. The set ⟦P⟧({r1, . . . , rm}) is defined as ∪_{i=1}^m ⟦P⟧(ri). In order to define ⟦P⟧ we would like to define ⟦P⟧− for all P in the language L−. We define ⟦P⟧− by induction on P in L− as follows:

⟦P⟧−(abort) = {abort},
⟦x := e⟧−((s, h)) = {(s[x := ⟦e⟧s], h)},
⟦if (b) then (P1) else (P2)⟧−((s, h)) = ⟦P1⟧−((s, h)) if ⟦b⟧s = True,
  ⟦P2⟧−((s, h)) otherwise,
⟦while (b) do (P)⟧− is the least function satisfying
  ⟦while (b) do (P)⟧−(abort) = {abort},
  ⟦while (b) do (P)⟧−((s, h)) = {(s, h)} if ⟦b⟧s = False,
  ⟦while (b) do (P)⟧−((s, h)) = ∪{⟦while (b) do (P)⟧−(r) | r ∈ ⟦P⟧−((s, h))} otherwise,
⟦P1; P2⟧−((s, h)) = ∪{⟦P2⟧−(r) | r ∈ ⟦P1⟧−((s, h))},
⟦skip⟧−((s, h)) = {(s, h)},
⟦x := cons(e1, e2)⟧−((s, h)) = {(s[x := n], h[n := ⟦e1⟧s, n + 1 := ⟦e2⟧s]) | n > 0, n, n + 1 ∉ Dom(h)},
⟦x := [e]⟧−((s, h)) = {(s[x := h(⟦e⟧s)], h)} if ⟦e⟧s ∈ Dom(h), {abort} otherwise,
⟦[e1] := e2⟧−((s, h)) = {(s, h[⟦e1⟧s := ⟦e2⟧s])} if ⟦e1⟧s ∈ Dom(h), {abort} otherwise,
⟦dispose(e)⟧−((s, h)) = {(s, h|Dom(h)−{⟦e⟧s})} if ⟦e⟧s ∈ Dom(h), {abort} otherwise.
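The four heap commands can be prototyped as follows; this is only a sketch under our own representation, with a state given as a pair of dicts (s, h) and ABORT modeling access to an unallocated address. Note one simplification: cons here deterministically picks the least admissible address n, whereas the semantics above allows any n > 0 with n, n + 1 ∉ Dom(h).

```python
ABORT = "abort"

def run_cons(x, v1, v2, s, h):
    # x := cons(e1, e2) with v1 = [[e1]]s, v2 = [[e2]]s:
    # pick the least n > 0 with n and n+1 both unallocated
    n = 1
    while n in h or n + 1 in h:
        n += 1
    s2 = dict(s); s2[x] = n
    h2 = dict(h); h2[n] = v1; h2[n + 1] = v2
    return (s2, h2)

def run_lookup(x, a, s, h):
    # x := [e] with a = [[e]]s: abort on an unallocated address
    if a not in h:
        return ABORT
    s2 = dict(s); s2[x] = h[a]
    return (s2, h)

def run_mutate(a, v, s, h):
    # [e1] := e2: overwrite the cell at a, abort if unallocated
    if a not in h:
        return ABORT
    h2 = dict(h); h2[a] = v
    return (s, h2)

def run_dispose(a, s, h):
    # dispose(e): remove the cell at a from the heap
    if a not in h:
        return ABORT
    h2 = dict(h); del h2[a]
    return (s, h2)
```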

We present two propositions which together state that the semantics of while (b) do (P) can be constructed from the semantics of P.

Proposition 4.2.4 r ∈ ⟦while (b) do (P)⟧−((s, h)) if and only if there exist m ≥ 1 and r1, . . . , rm such that r1 = (s, h), rm = r, ⟦b⟧ri = True and ri+1 ∈ ⟦P⟧−(ri) for 1 ≤ i < m, and one of the following holds:

(1) r ≠ abort and ⟦b⟧r = False,
(2) r = abort,


where we write ⟦b⟧(s′,h′) for ⟦b⟧s′.

Proof. First we will show the only-if part. Let

F((s, h)) = {r | m ≥ 1, r1 = (s, h), rm = r, ri = (si, hi), ⟦b⟧si = True, ri+1 ∈ ⟦P⟧−(ri) (1 ≤ ∀i < m), (r = (sm, hm) ∧ ⟦b⟧sm = False) ∨ r = abort}

and F(abort) = {abort}. We will show that F satisfies the inequations obtained from the equations for ⟦while (b) do (P)⟧− by replacing = by ⊇. That is, we will show

F(abort) ⊇ {abort},
F((s, h)) ⊇ {(s, h)} if ⟦b⟧s = False,
F((s, h)) ⊇ ∪{F(r) | r ∈ ⟦P⟧−((s, h))} if ⟦b⟧s = True.

By the well-known least fixed point theorem [17], these inequations imply our only-if part.

The first inequation immediately holds by the definition of F. For the second inequation, since ⟦b⟧s = False, by taking m to be 1, we have (s, h) ∈ F((s, h)). We will show the third inequation. Assume ⟦b⟧s = True and r is in the right-hand side. We will show r ∈ F((s, h)).

We have q ∈ ⟦P⟧−((s, h)) and r ∈ F(q).

Case 1. q = (s′′, h′′).

By the definition of r ∈ F(q), we have m ≥ 1, r1 = (s′′, h′′), rm = r, ri = (si, hi), ⟦b⟧si = True, ri+1 ∈ ⟦P⟧−(ri) (1 ≤ ∀i < m), and one of the following holds: either r = (sm, hm) and ⟦b⟧sm = False, or r = abort.

Let m′ = m + 1, r′1 = (s, h), and r′i = ri−1 for 1 < i ≤ m′. By taking m and ri to be m′ and r′i respectively in the definition of F, we have r ∈ F((s, h)).

Case 2. q = abort.

By the definition of F we have r = abort. By taking m = 2, we have abort ∈ F((s, h)).


Next we will show the if part by induction on m. We assume the right-hand side. We will show r ∈ ⟦while (b) do (P)⟧−((s, h)).

If m = 1 then we have ⟦b⟧s = False and r = (s, h). Hence r ∈ ⟦while (b) do (P)⟧−((s, h)).

Suppose m > 1. We have m and r1, . . . , rm satisfying the conditions (1) or (2). If r2 = abort, we have m = 2 and abort ∈ ⟦P⟧−((s, h)). Hence r ∈ ⟦while (b) do (P)⟧−((s, h)) in this case. Suppose r2 ≠ abort. By the induction hypothesis for m − 1, we have r ∈ ⟦while (b) do (P)⟧−(r2). Since ⟦b⟧s = True and r2 ∈ ⟦P⟧−((s, h)), by the definition we have r ∈ ⟦while (b) do (P)⟧−((s, h)). □

The following proposition characterizes two properties of the semantics of Ω.

Proposition 4.2.5 (1) ⟦Ω⟧−(abort) = {abort}.

(2) ⟦Ω⟧−((s, h)) = ∅.

Proof.

(1) By definition, ⟦Ω⟧−(abort) = {abort}.

(2) By definition, ⟦Ω⟧−((s, h)) = ⟦while (0 = 0) do (skip)⟧−((s, h)). Since ⟦skip⟧−((s′, h′)) ∌ abort for all s′, h′, by Proposition 4.2.4, abort ∉ ⟦while (0 = 0) do (skip)⟧−((s, h)). Since ⟦0 = 0⟧s′ = True for all s′, by Proposition 4.2.4 we have (s′, h′) ∉ ⟦while (0 = 0) do (skip)⟧−((s, h)) for all s′, h′. Therefore, ⟦while (0 = 0) do (skip)⟧−((s, h)) = ∅. □

Lemma 4.2.6 If P′i, P′′i ∈ L− (1 ≤ i ≤ nproc) and ⟦P′i⟧− ⊆ ⟦P′′i⟧− for all i, then ⟦P[P⃗′]⟧− ⊆ ⟦P[P⃗′′]⟧−, where P ∈ L and P[P⃗′], P[P⃗′′] ∈ L−.

Proof.

By induction on P. Assume ⟦P′i⟧− ⊆ ⟦P′′i⟧− for all i.

Case 1. P is atomic.

Since P[P⃗′] = P = P[P⃗′′], the claim holds.

Case 2. P is if (b) then (P1) else (P2).

We will show that ⟦P[P⃗′]⟧−(r) ⊆ ⟦P[P⃗′′]⟧−(r).

Case 2.1. r is abort.

By definition, ⟦P[P⃗′]⟧−(abort) = {abort} = ⟦P[P⃗′′]⟧−(abort). Hence ⟦P[P⃗′]⟧− ⊆ ⟦P[P⃗′′]⟧−.

Case 2.2. r is (s, h).

Suppose ⟦b⟧s = True. Then by definition, ⟦P[P⃗′]⟧−(r) = ⟦P1[P⃗′]⟧−(r) and ⟦P[P⃗′′]⟧−(r) = ⟦P1[P⃗′′]⟧−(r). By the induction hypothesis, ⟦P1[P⃗′]⟧− ⊆ ⟦P1[P⃗′′]⟧−. Therefore, ⟦P[P⃗′]⟧− ⊆ ⟦P[P⃗′′]⟧−. The case ⟦b⟧s = False is proved similarly.

Case 3. P is while (b) do (P1).

We will show that ⟦P[P⃗′]⟧−(r) ⊆ ⟦P[P⃗′′]⟧−(r).

Case 3.1. r is abort.

This case is similar to Case 2.1.

Case 3.2. r is (s, h).

Case ⟦b⟧s = False. By definition, ⟦while (b) do (P1[P⃗′])⟧−((s, h)) = {(s, h)} = ⟦while (b) do (P1[P⃗′′])⟧−((s, h)). Hence ⟦P[P⃗′]⟧− ⊆ ⟦P[P⃗′′]⟧−.

Case ⟦b⟧s = True. Assume ⟦while (b) do (P1[P⃗′])⟧−((s, h)) ∋ r′. By Proposition 4.2.4, we have m ≥ 1 and r1, . . . , rm such that r1 = (s, h), rm = r′, ri+1 ∈ ⟦P1[P⃗′]⟧−(ri) and ⟦b⟧ri = True for 1 ≤ i < m, and either r′ ≠ abort and ⟦b⟧r′ = False, or r′ = abort. By the induction hypothesis, ri+1 ∈ ⟦P1[P⃗′′]⟧−(ri). Then ⟦while (b) do (P1[P⃗′′])⟧−(r) ∋ r′.

Then ⟦while (b) do (P1[P⃗′])⟧−(r) ⊆ ⟦while (b) do (P1[P⃗′′])⟧−(r). Therefore, ⟦P[P⃗′]⟧− ⊆ ⟦P[P⃗′′]⟧−.

Case 4. P is P1; P2.

By the induction hypothesis, ⟦P1[P⃗′]⟧− ⊆ ⟦P1[P⃗′′]⟧− and ⟦P2[P⃗′]⟧− ⊆ ⟦P2[P⃗′′]⟧−. Therefore, ⟦P[P⃗′]⟧− ⊆ ⟦P[P⃗′′]⟧−.

Case 5. P is Ri.

We have Ri[P⃗′] = P′i and Ri[P⃗′′] = P′′i. Hence ⟦Ri[P⃗′]⟧− ⊆ ⟦Ri[P⃗′′]⟧−. □

We have already defined the semantics of P in the language L−. Now we define the semantics of P in L. The semantics we will define is usually called an approximating semantics.

Definition 4.2.7 The semantics of P in L is defined by ⟦P⟧(r) = ∪_{i=0}^∞ ⟦P^(i)⟧−(r).

Note that Qi^(k) is a program of the language L− and does not contain Ri.

Remark that for P in L−, we have ⟦P⟧ = ⟦P⟧− since P^(k) = P.

Lemma 4.2.8 For all k, ⟦P^(k)⟧− ⊆ ⟦P^(k+1)⟧−.

Proof. By induction on k.

Case 1. k = 0.

We have ⟦Ω⟧−((s, h)) = ∅ for all (s, h) by Proposition 4.2.5 (2). Then for all i, we have ⟦Ω⟧− ⊆ ⟦Qi^(0)⟧−. Then by Lemma 4.2.6, ⟦P[Ω⃗]⟧− ⊆ ⟦P[Q⃗^(0)]⟧−. Then ⟦P^(0)⟧− ⊆ ⟦P^(1)⟧−.

Case 2. k > 0.

Let k be k′ + 1. We have P^(k) = P^(k′+1) = P[Q⃗^(k′)] and P^(k+1) = P^(k′+2) = P[Q⃗^(k′+1)]. By the induction hypothesis, ⟦Qi^(k′)⟧− ⊆ ⟦Qi^(k′+1)⟧−. By Lemma 4.2.6, we now have ⟦P[Q⃗^(k′)]⟧− ⊆ ⟦P[Q⃗^(k′+1)]⟧−. Then ⟦P^(k)⟧− ⊆ ⟦P^(k+1)⟧−. □

Semantics of Assertions

Definition 4.2.9 We define the semantics of the assertion language. For an assertion A and a state (s, h), the meaning ⟦A⟧(s,h) is defined as True or False. ⟦A⟧(s,h) is the truth value of A at the state (s, h). ⟦A⟧(s,h) is defined by induction on A as follows:

⟦emp⟧(s,h) = True if Dom(h) = ∅,
⟦e1 = e2⟧(s,h) = (⟦e1⟧s = ⟦e2⟧s),
⟦e1 < e2⟧(s,h) = (⟦e1⟧s < ⟦e2⟧s),
⟦e1 ↦ e2⟧(s,h) = True if Dom(h) = {⟦e1⟧s} and h(⟦e1⟧s) = ⟦e2⟧s,
⟦¬A⟧(s,h) = (not ⟦A⟧(s,h)),
⟦A ∧ B⟧(s,h) = (⟦A⟧(s,h) and ⟦B⟧(s,h)),
⟦A ∨ B⟧(s,h) = (⟦A⟧(s,h) or ⟦B⟧(s,h)),
⟦A → B⟧(s,h) = (⟦A⟧(s,h) implies ⟦B⟧(s,h)),
⟦∀xA⟧(s,h) = True if ⟦A⟧(s[x:=m],h) = True for all m ∈ N,
⟦∃xA⟧(s,h) = True if ⟦A⟧(s[x:=m],h) = True for some m ∈ N,
⟦A ∗ B⟧(s,h) = True if h = h1 + h2 and ⟦A⟧(s,h1) = ⟦B⟧(s,h2) = True for some h1, h2,
⟦A −∗ B⟧(s,h) = True if h2 = h + h1 and ⟦A⟧(s,h1) = True imply ⟦B⟧(s,h2) = True for all h1, h2.

We say A is true when ⟦A⟧(s,h) = True for all (s, h).
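The heap-splitting clause for A ∗ B can be made concrete with a small sketch, assuming heaps are finite dicts and assertions are Python predicates over (s, h); for A ∗ B we simply enumerate all splits h = h1 + h2.

```python
from itertools import combinations

def emp(s, h):
    # [[emp]](s,h): the current heap is empty
    return len(h) == 0

def points_to(a, v):
    # [[a |-> v]](s,h): Dom(h) = {a} and h(a) = v (a, v already evaluated)
    return lambda s, h: set(h) == {a} and h[a] == v

def sep(A, B):
    # [[A * B]](s,h): some split h = h1 + h2 with disjoint domains
    def holds(s, h):
        dom = list(h)
        for r in range(len(dom) + 1):
            for d1 in combinations(dom, r):
                h1 = {k: h[k] for k in d1}
                h2 = {k: h[k] for k in dom if k not in d1}
                if A(s, h1) and B(s, h2):
                    return True
        return False
    return holds
```

For example, sep(points_to(1, 5), points_to(2, 6)) holds at the heap {1: 5, 2: 6}, while sep(points_to(1, 5), emp) does not, since the cell at address 2 is left over and falsifies emp.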

Semantics of Asserted Program

Finally we need to define the semantics of the asserted programs. It is basically thesame as in [13].

Definition 4.2.10 For an asserted program {A}P{B}, the meaning of {A}P{B} is defined as True or False. {A}P{B} is defined to be True if both of the following hold.

(1) For all (s, h), if ⟦A⟧(s,h) = True, then ⟦P⟧((s, h)) ∌ abort.

(2) For all (s, h) and (s′, h′), if ⟦A⟧(s,h) = True and ⟦P⟧((s, h)) ∋ (s′, h′), then ⟦B⟧(s′,h′) = True.

Remark that the semantics of an asserted program with a non-terminating program is always True, because, according to the definition of the semantics, a resulting abort state of a program implies termination of its execution.


In our system, judgments are of the form Γ ⊢ {A}P{B}, where Γ is a set of asserted programs. Here we define the semantics of a judgment.

We say Γ is true if all {A}P{B} in Γ are true.

Definition 4.2.11 We say Γ ⊢ {A}P{B} is true when the following holds: {A}P{B} is true if {Ai}Pi{Bi} is true for all {Ai}Pi{Bi} ∈ Γ.

The asserted program {A}P{B} means abort-free partial correctness, and it also implies partial correctness in the standard sense. Namely, it means that the execution of the program P from any initial state that satisfies A never aborts, that is, P does not access any unallocated addresses during the execution. This is one of the strongest features of the system.

The next lemma shows that the semantics of a procedure call (or procedure name) is the same as that of its body.

Lemma 4.2.12 JRiK = JQiK.

Proof. By definition we have JQiK(r) = ∪_{k=0}^∞(JQi(k)K−(r)). Since Qi(k) = Ri[−→Q(k)] = Ri(k+1), we have ∪_{k=0}^∞(JQi(k)K−(r)) = ∪_{k=0}^∞(JRi(k+1)K−(r)). Then by Proposition 4.2.5 (2), JQiK(r) = ∪_{k=0}^∞(JRi(k+1)K−(r)) ∪ JΩK−(r). Then JQiK(r) = ∪_{k=0}^∞(JRi(k+1)K−(r)) ∪ JRi(0)K−(r) by Proposition 4.1.10 (2). Then JQiK(r) = ∪_{k=0}^∞(JRi(k)K−(r)). Therefore, JQiK = JRiK. □

We define the equality of two stores over a set of variables. Suppose some stores are equal over the set of free variables of a program. If we execute the program with those stores in the states, then the stores in the resulting states are still equal over the same set of free variables.

Definition 4.2.13 For a set V of variables, we define =V as follows: s =V s′ if and only if s(x) = s′(x) for all x ∈ V.

Definition 4.2.14 We first define [s1, s2, V] to denote the store s3 such that s3 =V s1 and s3 =Vc s2, for stores s1, s2 and a set of variables V.
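Over stores with a common domain, the relation =V and the merged store [s1, s2, V] can be sketched in Python as follows (the names eq_on and merge are ours, for illustration):

```python
def eq_on(s1, s2, V):
    # s1 =V s2: the stores agree on every variable in V
    return all(s1[x] == s2[x] for x in V)

def merge(s1, s2, V):
    # [s1, s2, V]: the store that agrees with s1 on V and with s2 on
    # the complement of V (Definition 4.2.14)
    return {x: (s1[x] if x in V else s2[x]) for x in s2}

s1, s2 = {'x': 1, 'y': 2}, {'x': 9, 'y': 8}
s3 = merge(s1, s2, {'x'})       # takes x from s1, y from s2
```

Here s3 agrees with s1 on {x} and with s2 on the remaining variables, which is exactly the defining property of [s1, s2, {x}].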

Lemma 4.2.15 Suppose P ∈ L−.


(1) If JPK−((s, h)) ∋ (s0, h0) then s0 =Mod(P)c s.

(2) If s =FV(P) s′, JPK−((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, FV(P)], then JPK−((s′, h)) ∋ (s′0, h0).

(3) If s =FV(P) s′ and JPK−((s, h)) ∋ abort, then JPK−((s′, h)) ∋ abort.

Proof.

(1) We will show the claim by induction on P. We consider cases according to P.

Case 1. x := e.

Assume Jx := eK−((s, h)) ∋ (s0, h0). By definition, s0 = s[x := JeKs] and h0 = h. Then for all y ≠ x, s(y) = s0(y). By definition, Mod(x := e) = {x}. Therefore, s0 =Mod(P)c s.

Case 2. P is if (b) then (P1) else (P2).

Assume JPK−((s, h)) ∋ (s0, h0).

Case JbKs = True. By definition, we have JP1K−((s, h)) ∋ (s0, h0). By induction hypothesis, s0 =Mod(P1)c s. We have Mod(P)c ⊆ Mod(P1)c. Therefore, s0 =Mod(P)c s.

Case JbKs = False can be shown as above.

Case 3. P is while (b) do (P1).

Assume JPK−((s, h)) ∋ (s0, h0).

Case JbKs = True. By Proposition 4.2.4, we have s1, . . . , sm, h1, . . . , hm such that (s, h) = (s1, h1), (s0, h0) = (sm, hm), for all i = 1, . . . , m−1, JP1K−((si, hi)) ∋ (si+1, hi+1) and JbKsi = True, and JbKsm = False. By induction hypothesis, for all i = 1, . . . , m−1, si =Mod(P1)c si+1. Since Mod(P)c ⊆ Mod(P1)c, for all i = 1, . . . , m−1, si =Mod(P)c si+1. Therefore, s0 =Mod(P)c s.

Case JbKs = False. Then by definition, s0 = s. Then s0 =Mod(P)c s.

Case 4. P is P1; P2.


Assume JP1; P2K−((s, h)) ∋ (s0, h0). By definition, we have s1, h1 such that JP1K−((s, h)) ∋ (s1, h1) and JP2K−((s1, h1)) ∋ (s0, h0).

By induction hypothesis, s1 =Mod(P1)c s and s0 =Mod(P2)c s1. Since Mod(P)c ⊆ Mod(P1)c and Mod(P)c ⊆ Mod(P2)c, we have s1 =Mod(P)c s and s0 =Mod(P)c s1. Therefore, s0 =Mod(P)c s.

Case 5. skip.

Its proof is immediate.

Case 6. x := cons(e1, e2).

Assume Jx := cons(e1, e2)K−((s, h)) ∋ (s0, h0). By definition, (s0, h0) is in {(s[x := m], h[m := Je1Ks, m+1 := Je2Ks]) | m > 0, m, m+1 ∉ Dom(h)}. So, for all y ≠ x, s(y) = s0(y). By definition, Mod(x := cons(e1, e2)) = {x}. Therefore, s0 =Mod(x:=cons(e1,e2))c s.

Case 7. x := [e].

Assume Jx := [e]K−((s, h)) ∋ (s0, h0). By definition, (s0, h0) = (s[x := h(JeKs)], h). So, for all y ≠ x, s(y) = s0(y). By definition, Mod(x := [e]) = {x}. Therefore, s0 =Mod(x:=[e])c s.

Case 8. [e1] := e2.

Assume J[e1] := e2K−((s, h)) ∋ (s0, h0). By definition, s0 = s. Therefore, s0 =Mod([e1]:=e2)c s.

Case 9. dispose(e).

Assume Jdispose(e)K−((s, h)) ∋ (s0, h0). By definition, s0 = s. Therefore, s0 =Mod(dispose(e))c s.

(2) We will show the claim by induction on P. We consider cases according to P.

Case 1. P is x := e.

Assume s =FV(x:=e) s′, Jx := eK−((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, FV(x := e)]. Then by definition, s0 = s[x := JeKs] and s′0 =FV(x:=e) s0. Then s′0(x) = s0(x) = JeKs = JeKs′. We have s′0 =FV(x:=e)c s′, and for all y ∈ FV(e) with y ≠ x, s′0(y) = s0(y) = s(y) = s′(y). Then we have s′0 = s′[x := JeKs′]. Therefore, Jx := eK−((s′, h)) ∋ (s′0, h0).

Case 2. P is if (b) then (P1) else (P2).

Assume s =FV(if (b) then (P1) else (P2)) s′, Jif (b) then (P1) else (P2)K−((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, FV(if (b) then (P1) else (P2))].

Let JbKs = True. Then by definition JP1K−((s, h)) ∋ (s0, h0). By (1), s0 =Mod(P1)c s, and since FV(P1)c ⊆ Mod(P1)c, we have s0 =FV(P1)c s. We also have s =FV(P) s′ by assumption and s′0 =FV(P) s0 by definition, and hence s′0 =FV(P1) s0 and s =FV(P1) s′ because FV(P1) ⊆ FV(P). Then for all y ∈ FV(P) − FV(P1), s′0(y) = s0(y) = s(y) = s′(y). Since s′0 =FV(P)−FV(P1) s′ and s′0 =FV(P)c s′ are true, we have that s′0 =FV(P1)c s′ holds. Then s′0 = [s0, s′, FV(P1)]. By induction hypothesis, JP1K−((s′, h)) ∋ (s′0, h0). Then by definition JPK−((s′, h)) ∋ (s′0, h0).

Again let JbKs = False. In the same way as above we can prove that JPK−((s′, h)) ∋ (s′0, h0).

Case 3. P is while (b) do (P1).

Assume s =FV(while (b) do (P1)) s′, Jwhile (b) do (P1)K−((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, FV(while (b) do (P1))]. We have (s, h) = (q1, h′1), …, (qm, h′m) = (s0, h0) such that JP1K−((qi, h′i)) ∋ (qi+1, h′i+1) and JbKqi = True for all 1 ≤ i < m, and JbKqm = False, by Proposition 4.2.4. Let q′i = [qi, s′, FV(P1)] for all 1 ≤ i ≤ m.

We will show that s′ = q′1 and s′0 = q′m. We have q′1 = [s, s′, FV(P1)]. Then q′1 =FV(P1)c s′. Since s =FV(P1) s′ and q′1 =FV(P1) s, we have q′1 =FV(P1) s′. Therefore, s′ = q′1. We have q′m = [s0, s′, FV(P1)]. We also have s′0 = [s0, s′, FV(P)]. Then q′m =FV(P1) s′0. By (1), s0 =Mod(P)c s. Then s0 =FV(P1)c s since Mod(P)c = Mod(P1)c and Mod(P1)c ⊇ FV(P1)c. We have q′m =FV(P)c s′ since FV(P)c ⊆ FV(P1)c and q′m =FV(P1)c s′. Then q′m =FV(P)c s′0. Since q′m =FV(P1)c s′, s′0 =FV(P) s0 and s =FV(P) s′, for all y ∈ FV(P) − FV(P1), q′m(y) = s′(y) = s(y) = s0(y) = s′0(y). Therefore, s′0 = q′m. Hence s′ = q′1 and s′0 = q′m.

Since q′i =FV(P1)c s′ by definition, q′i+1 = [qi+1, q′i, FV(P1)] for 1 ≤ i < m. By induction hypothesis, JP1K−((q′i, h′i)) ∋ (q′i+1, h′i+1). We also have JbKq′i = True for 1 ≤ i < m and JbKq′m = False. Hence by Proposition 4.2.4, Jwhile (b) do (P1)K−((s′, h)) ∋ (s′0, h0).

Case 4. P is P1; P2.

Assume s =FV(P1;P2) s′, JP1; P2K−((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, FV(P1; P2)]. By definition, we know that JP1K−((s, h)) ∋ (s1, h1) and JP2K−((s1, h1)) ∋ (s0, h0) for some s1, h1. By (1), s1 =Mod(P1)c s and s0 =Mod(P2)c s1, and then s1 =FV(P1)c s and s0 =FV(P2)c s1. Let us take s′1 such that s′1 = [s1, s′, FV(P1)]. Then s′1 =FV(P1) s1 and s′1 =FV(P1)c s′. Then for all y ∈ FV(P1) ∩ FV(P2), s′1(y) = s1(y). For all y ∈ FV(P2) − FV(P1), s′1(y) = s′(y) = s(y) = s1(y). So, s′1 =FV(P2) s1. Since s′0 =FV(P1;P2) s0, we have s′0 =FV(P2) s0. For all y ∈ FV(P1) − FV(P2), s′0(y) = s0(y) by s′0 = [s0, s′, FV(P1; P2)], s0(y) = s1(y) by s0 =FV(P2)c s1, and s1(y) = s′1(y) by s′1 =FV(P1) s1, and hence s′0(y) = s′1(y). For all y ∉ FV(P1; P2), s′0(y) = s′(y) by s′0 = [s0, s′, FV(P1; P2)] and s′(y) = s′1(y) by s′1 = [s1, s′, FV(P1)], and hence s′0(y) = s′1(y). Then we have s′0 =FV(P2)c s′1. Then s′0 = [s0, s′1, FV(P2)].

Hence, by induction hypothesis JP1K−((s′, h)) ∋ (s′1, h1) and JP2K−((s′1, h1)) ∋ (s′0, h0). Therefore, by definition JP1; P2K−((s′, h)) ∋ (s′0, h0).

Case 5. P is skip.

Its proof is immediate.

Case 6. P is x := cons(e1, e2).

Assume s =FV(x:=cons(e1,e2)) s′, Jx := cons(e1, e2)K−((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, FV(x := cons(e1, e2))]. Then by definition, s0 = s[x := m] for some m > 0 where m, m+1 ∉ Dom(h), and s′0 =FV(x:=cons(e1,e2)) s0. Then s′0(x) = s0(x) = m. We have s′0 =FV(x:=cons(e1,e2))c s′. For all y ∈ FV(x := cons(e1, e2)) − {x}, s′0(y) = s0(y) = s(y) = s′(y). Then we have s′0 = s′[x := m]. Therefore, by definition Jx := cons(e1, e2)K−((s′, h)) ∋ (s′0, h0).

Case 7. P is x := [e].

Assume s =FV(x:=[e]) s′, Jx := [e]K−((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, FV(x := [e])]. Then by definition, s0 = s[x := h(JeKs)] and s′0 =FV(x:=[e]) s0. Then s′0(x) = s0(x) = h(JeKs) = h(JeKs′). We have s′0 =FV(x:=[e])c s′, and hence for all y ≠ x with y ∈ FV(x := [e]), s′0(y) = s0(y) = s(y) = s′(y). Then we have s′0 = s′[x := h(JeKs′)]. Therefore, Jx := [e]K−((s′, h)) ∋ (s′0, h0).

Case 8. P is [e1] := e2.

Assume s =FV([e1]:=e2) s′, J[e1] := e2K−((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, FV([e1] := e2)]. Then by definition, s0 = s and s′0 =FV([e1]:=e2) s0. For all y ∈ FV([e1] := e2), we have s′0(y) = s0(y) = s(y) = s′(y). Then we have s′0 = s′. Since h0 = h[Je1Ks := Je2Ks], we have h0 = h[Je1Ks′ := Je2Ks′]. Therefore, J[e1] := e2K−((s′, h)) ∋ (s′0, h0).

Case 9. P is dispose(e).

Assume s =FV(dispose(e)) s′, Jdispose(e)K−((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, FV(dispose(e))]. Then by definition, s0 = s and s′0 =FV(dispose(e)) s0. For all y ∈ FV(dispose(e)), we have s′0(y) = s0(y) = s(y) = s′(y). Then we have s′0 = s′. Since h0 = h|Dom(h)−{JeKs}, we have h0 = h|Dom(h)−{JeKs′}. Therefore, Jdispose(e)K−((s′, h)) ∋ (s′0, h0).

(3) We will show the claim by induction on P. We consider cases according to P.

Case 1. x := e.

Assume s =FV(x:=e) s′ and Jx := eK−((s, h)) ∋ abort. By definition, Jx := eK−((s, h)) = {(s[x := JeKs], h)}. Therefore, Jx := eK−((s, h)) ∋ abort is False. Then Jx := eK−((s, h)) ∋ abort implies Jx := eK−((s′, h)) ∋ abort.

Case 2. P is if (b) then (P1) else (P2).

Assume s =FV(P) s′.

Case 2.1. JbKs = True. Then JbKs′ = True. Assume JPK−((s, h)) ∋ abort. By definition, JP1K−((s, h)) ∋ abort. By definition FV(P1) ⊆ FV(P), and then s =FV(P1) s′. By induction hypothesis, JP1K−((s′, h)) ∋ abort. Therefore, by definition JPK−((s′, h)) ∋ abort.

Case 2.2. JbKs = False can be proved similarly.

Case 3. P is while (b) do (P1).


Assume s =FV(while (b) do (P1)) s′.

Case 3.1. JbKs = True. Then JbKs′ = True. Assume Jwhile (b) do (P1)K−((s, h)) ∋ abort. By Proposition 4.2.4, we have s1, . . . , sm, h1, . . . , hm such that (s, h) = (s1, h1), JP1K−((si, hi)) ∋ (si+1, hi+1) and JbKsi = True for all i = 1, . . . , m−1, JP1K−((sm, hm)) ∋ abort and JbKsm = True. By definition FV(P1) ⊆ FV(while (b) do (P1)), and then s =FV(P1) s′. Let s′i = [si, s′, FV(P)] for all i = 1, . . . , m. Then s′1 =FV(P)c s′, and s′1 =FV(P) s1 = s =FV(P) s′, and hence s′1 = s′. We will show that s′i+1 = [si+1, s′i, FV(P1)] for i = 1, . . . , m−1.

For that we will show that s′i+1 =FV(P1)c s′i. By (1), si+1 =Mod(P1)c si and hence si+1 =FV(P1)c si. For all y ∈ FV(P) − FV(P1), s′i+1(y) = si+1(y) = si(y) = s′i(y). For all y ∉ FV(P), s′i+1(y) = s′(y) = s′i(y). Hence we have s′i+1 =FV(P1)c s′i.

Since s′i =FV(P) si, we have s′i+1 =FV(P1) si+1, and then s′i+1 = [si+1, s′i, FV(P1)].

Then by (2), we have JP1K−((s′i, hi)) ∋ (s′i+1, hi+1) for all i = 1, . . . , m−1. Since s′m =FV(P) sm, we also have JbKs′m = True and s′m =FV(P1) sm, and then by induction hypothesis JP1K−((s′m, hm)) ∋ abort. Now we have (s′, h) = (s′1, h1), JP1K−((s′i, hi)) ∋ (s′i+1, hi+1) and JbKs′i = True for all i = 1, . . . , m−1, JP1K−((s′m, hm)) ∋ abort and JbKs′m = True. Therefore, by Proposition 4.2.4, Jwhile (b) do (P1)K−((s′, h)) ∋ abort.

Case 3.2. JbKs = False. Then JbKs′ = False. Assume Jwhile (b) do (P1)K−((s, h)) ∋ abort. By definition it is false.

Case 4. P is P1; P2.

Assume s =FV(P1;P2) s′ and JP1; P2K−((s, h)) ∋ abort. By definition, ∪{JP2K−(r) | r ∈ JP1K−((s, h))} ∋ abort. Then either JP1K−((s, h)) ∋ abort, or JP1K−((s, h)) ∋ (s1, h1) and JP2K−((s1, h1)) ∋ abort for some s1, h1. Assume JP1K−((s, h)) ∋ abort. Since FV(P1) ⊆ FV(P1; P2), we have s =FV(P1) s′. By induction hypothesis, JP1K−((s′, h)) ∋ abort. Since JP2K−(abort) ∋ abort, we have JP1; P2K−((s′, h)) ∋ abort.

Now assume JP1K−((s, h)) ∋ (s1, h1) and JP2K−((s1, h1)) ∋ abort. We have s =FV(P1) s′. Let s′1 = [s1, s′, FV(P1)]. Then by (2), JP1K−((s′, h)) ∋ (s′1, h1).


By (1), s1 =FV(P1)c s, and s′1 =FV(P1)c s′ by definition. Then for all y ∈ FV(P1) ∩ FV(P2), s1(y) = s′1(y) since s′1 =FV(P1) s1. Since FV(P2) − FV(P1) ⊆ Mod(P1)c, for all y ∈ FV(P2) − FV(P1), we have s1(y) = s(y) = s′(y) = s′1(y). Then s1 =FV(P2) s′1. By induction hypothesis, JP2K−((s′1, h1)) ∋ abort. Then JP1; P2K−((s′, h)) ∋ abort.

Case 5. skip.

Its proof is immediate.

Case 6. x := cons(e1, e2).

Assume s =FV(x:=cons(e1,e2)) s′ and Jx := cons(e1, e2)K−((s, h)) ∋ abort. But by definition, Jx := cons(e1, e2)K−((s, h)) ∌ abort. Then Jx := cons(e1, e2)K−((s, h)) ∋ abort implies Jx := cons(e1, e2)K−((s′, h)) ∋ abort.

Case 7. x := [e].

Assume s =FV(x:=[e]) s′ and Jx := [e]K−((s, h)) ∋ abort. Then by definition, JeKs ∉ Dom(h). Since s =FV(x:=[e]) s′, we have JeKs = JeKs′. Then JeKs′ ∉ Dom(h). Therefore, Jx := [e]K−((s′, h)) ∋ abort.

Case 8. [e1] := e2.

Assume s =FV([e1]:=e2) s′ and J[e1] := e2K−((s, h)) ∋ abort. Then by definition, Je1Ks ∉ Dom(h). Since s =FV([e1]:=e2) s′, we have Je1Ks = Je1Ks′. Then Je1Ks′ ∉ Dom(h). Therefore, J[e1] := e2K−((s′, h)) ∋ abort.

Case 9. dispose(e).

Assume s =FV(dispose(e)) s′ and Jdispose(e)K−((s, h)) ∋ abort. Then by definition, JeKs ∉ Dom(h). Since s =FV(dispose(e)) s′, we have JeKs = JeKs′. Then JeKs′ ∉ Dom(h). Therefore, Jdispose(e)K−((s′, h)) ∋ abort. □

Lemma 4.2.16 Suppose P ∈ L.

(1) If s =EFV(P) s′ and JPK((s, h)) ∋ abort, then JPK((s′, h)) ∋ abort.

(2) If s =EFV(P) s′, JPK((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, EFV(P)], then JPK((s′, h)) ∋ (s′0, h0).


(3) If JPK((s, h)) ∋ (s0, h0) then s0 =Mod(P)c s.

Proof.

(1) Assume s =EFV(P) s′ and JPK((s, h)) ∋ abort. Then by definition, JP(k)K−((s, h)) ∋ abort for some k. Since we have FV(P(k)) ⊆ EFV(P) by Proposition 4.1.14 (1), we have s =FV(P(k)) s′. By Lemma 4.2.15 (3), we have JP(k)K−((s′, h)) ∋ abort. Then by definition, JPK((s′, h)) ∋ abort.

(2) Assume s =EFV(P) s′, JPK((s, h)) ∋ (s0, h0) and s′0 = [s0, s′, EFV(P)]. Then by definition, JP(k1)K−((s, h)) ∋ (s0, h0) for some k1. By definition, EFV(P) = FV(P(nproc)) where nproc is the number of our procedure names. Let k = max(k1, nproc). Then by Lemma 4.2.8, JP(k)K−((s, h)) ∋ (s0, h0). Since FV(P(nproc)) = FV(P(k)) by Proposition 4.1.14 (1), we have FV(P(k)) = EFV(P). Then s′0 = [s0, s′, FV(P(k))]. Then by Lemma 4.2.15 (2), JP(k)K−((s′, h)) ∋ (s′0, h0). Therefore, JPK((s′, h)) ∋ (s′0, h0).

(3) We can show the claim in a similar way to (1). □


4.3 Logical System

This section defines our logical system. It is the extension of Reynolds' system presented in [13] to mutually recursive procedure calls.

We will write A[x := e] for the formula obtained from A by replacing x by e. We will write the formula e 7→ e1, e2 to denote (e 7→ e1) ∗ (e+1 7→ e2).

Definition 4.3.1 Our logical system consists of the following inference rules. As mentioned in the previous section, we will use Γ for a set of asserted programs. A judgment is defined as Γ ⊢ {A}P{B}.

Γ ⊢ {A}skip{A} (skip)

Γ, {A}P{B} ⊢ {A}P{B} (axiom)

Γ ⊢ {A[x := e]}x := e{A} (assignment)

Γ ⊢ {A ∧ b}P1{B}    Γ ⊢ {A ∧ ¬b}P2{B}
----------------------------------------
Γ ⊢ {A}if (b) then (P1) else (P2){B} (if)

Γ ⊢ {A ∧ b}P{A}
----------------------------------------
Γ ⊢ {A}while (b) do (P){A ∧ ¬b} (while)

Γ ⊢ {A}P1{C}    Γ ⊢ {C}P2{B}
----------------------------------------
Γ ⊢ {A}P1; P2{B} (comp)

Γ ⊢ {A1}P{B1}
----------------------------------------
Γ ⊢ {A}P{B} (conseq)
(A → A1, B1 → B)

Γ ⊢ {∀x′((x′ 7→ e1, e2) −∗ A[x := x′])}x := cons(e1, e2){A} (cons)
(x′ ∉ FV(e1, e2, A))

Γ ⊢ {∃x′(e 7→ x′ ∗ (e 7→ x′ −∗ A[x := x′]))}x := [e]{A} (lookup)
(x′ ∉ FV(e, A))

Γ ⊢ {(∃x(e1 7→ x)) ∗ (e1 7→ e2 −∗ A)}[e1] := e2{A} (mutation)
(x ∉ FV(e1))

Γ ⊢ {(∃x(e 7→ x)) ∗ A}dispose(e){A} (dispose)
(x ∉ FV(e))

Γ ∪ {{Ai}Ri{Bi} | i = 1, . . . , nproc} ⊢ {A1}Q1{B1}    . . .    Γ ∪ {{Ai}Ri{Bi} | i = 1, . . . , nproc} ⊢ {Anproc}Qnproc{Bnproc}
----------------------------------------
Γ ⊢ {Ai}Ri{Bi} (recursion)

Γ ⊢ {A}P{C}
----------------------------------------
Γ ⊢ {A ∧ B}P{C ∧ B} (inv-conj)
(FV(B) ∩ Mod(P) = ∅ and B is pure)

Γ ⊢ {A}P{B}
----------------------------------------
Γ ⊢ {∃xA}P{B} (exists)
(x ∉ FV(B) ∪ EFV(P))

We say {A}P{B} is provable, and we write ⊢ {A}P{B}, when ⊢ {A}P{B} can be derived by these inference rules.

We explain the inference rules by the following simple example.


Example

Let A be the abbreviation of ∀y(y ≥ x ∧ y < z ↔ ∃w(y 7→ w) ∗ True). Suppose we have the procedure declaration

Procedure R1 (if (x < z) then (dispose(x); x := x+1; R1) else (skip))

We will show that ⊢ {A}R1{emp} is provable in our system. This means that we have a heap with some consecutive allocation of cells and the program R1 deallocates them all. Let Γ be {{A}R1{emp}}. The axiom (assignment) gives Γ ⊢ {A[x := x+1]}x := x+1{A}. The axiom (axiom) gives Γ ⊢ {A}R1{emp}. By the inference rule (comp), we have Γ ⊢ {A[x := x+1]}x := x+1; R1{emp}. The axiom (dispose) gives Γ ⊢ {(∃y(x 7→ y)) ∗ A[x := x+1]}dispose(x){A[x := x+1]}. Then by the inference rule (comp), we have Γ ⊢ {(∃y(x 7→ y)) ∗ A[x := x+1]}dispose(x); x := x+1; R1{emp}. The axiom (skip) gives Γ ⊢ {A ∧ ¬(x < z)}skip{A ∧ ¬(x < z)}. We have A ∧ x < z → (∃y(x 7→ y)) ∗ A[x := x+1] and A ∧ ¬(x < z) → emp. Then by the inference rule (conseq), we have Γ ⊢ {A ∧ x < z}dispose(x); x := x+1; R1{emp} and Γ ⊢ {A ∧ ¬(x < z)}skip{emp}. By applying the inference rule (if), we get Γ ⊢ {A}if (x < z) then (dispose(x); x := x+1; R1) else (skip){emp}. By the inference rule (recursion), ⊢ {A}R1{emp} is provable.

We have two new rules that appear neither in [1] nor in [13]: the rules (inv-conj) and (exists). We have found that the axiom (AXIOM 9: INVARIANCE AXIOM) in [1] is not sound for our system. The definition of the axiom will be given in the next chapter. The asserted program {x = 0}[y] := 0{x = 0} is a counterexample, since it is not true although it is provable by the axiom. It causes abort for the state (s, h) = (s′[x := 0, y := 0], ∅) though Jx = 0K(s,h) is true. Another counterexample is {emp}x := cons(1, 2){emp}. Although it does not abort, it is not true. So, we have introduced the rule (inv-conj). It is a variant of the combination of the axiom (AXIOM 9: INVARIANCE AXIOM) and the rule (RULE 12: CONJUNCTION RULE) of [1]. That is, the truth of a pure assertion for the post-execution state does not change from that of the pre-execution state if its set of free variables is disjoint from the set of modifiable variables of the program. It is a special case of the frame rule in [13]. The rule (exists) is analogous to the rule of existential introduction of predicate calculus.

5 Soundness and Completeness

5.1 Soundness

In this section we will prove the soundness of our system. That means we will show that if a judgment Γ ⊢ {A}P{B} is derivable in our system, then Γ ⊢ {A}P{B} is also true.

We say an unfolded program is correct when every level of the unfolding of the program is correct. For a given specification, correctness of a program is the same as that of its unfolded transformation.

It is straightforward to construct a logical system by collecting axioms and inference rules from both Hoare's logic for recursive procedures and separation logic to verify pointer programs with recursive procedures. But this system is unsound, because the invariance axiom is not sound in separation logic. At the end of this section, we will give an unsound example.

Proposition 5.1.1 {A}P{B} is true if and only if {A}P(k){B} is true for all k.

Proof. First we will show the direction from the left-hand side to the right-hand side. Assume {A}P{B} is true. Assume JAKr = True. Then by definition, JPK(r) ∌ abort. Then by definition, ∪_{k=0}^∞(JP(k)K−(r)) ∌ abort. Then for all k, JP(k)K−(r) ∌ abort. Fix k. Assume JP(k)K−(r) ∋ r′. Then we have ∪_{k=0}^∞(JP(k)K−(r)) ∋ r′. Then again by definition, JPK(r) ∋ r′. Then JBKr′ = True. Then {A}P(k){B} is true for all k.

Next we will show the direction from the right-hand side to the left-hand side. Assume {A}P(k){B} is true for all k. Assume JAKr = True. Fix k. Then by definition, JP(k)K(r) ∌ abort. Then JP(k)K−(r) ∌ abort. Then ∪_{k=0}^∞(JP(k)K−(r)) ∌ abort. Then by definition, JPK(r) ∌ abort. Assume JPK(r) ∋ r′. Then by definition, ∪_{k=0}^∞(JP(k)K−(r)) ∋ r′. Then we have JP(k)K−(r) ∋ r′ for some k. Then by definition, JP(k)K(r) ∋ r′. Then JBKr′ = True. Then {A}P{B} is true. □

Definition 5.1.2 For a set Γ of asserted programs, Γ(k) is defined by {{Ai}Pi{Bi} | 1 ≤ i ≤ n}(k) = {{Ai}Pi(k){Bi} | 1 ≤ i ≤ n}.

If the truth of an assertion depends only on a store, it is not altered in the presence of any heap. This easy lemma will be necessary for some formal discussions later.


Lemma 5.1.3 JAKs = JAK(s,h) for a pure formula A, where the left-hand side is the semantics for the base language and the right-hand side is the semantics for the assertion language.

Proof. This is proved by induction on A. If A is B ∧ C, we have JAKs = (JBKs and JCKs) and JAK(s,h) = (JBK(s,h) and JCK(s,h)), and both sides are the same, since by the induction hypothesis we have JBKs = JBK(s,h) and JCKs = JCK(s,h). Other cases are similarly proved. □

The following lemma is important for the soundness of the inference rule (recursion).

Lemma 5.1.4 If {{Ai}Ri(k){Bi} | 1 ≤ i ≤ nproc} ⊢ {Ai}Qi(k){Bi} is true for all i and k, then ⊢ {Ai}Ri(k){Bi} is true for all i and k.

Proof. Assume {{Ai}Ri(k){Bi} | 1 ≤ i ≤ nproc} ⊢ {Ai}Qi(k){Bi} is true for all i and k. We will show {Ai}Ri(k){Bi} is true for all i by induction on k.

Case 1. k = 0. By Proposition 4.1.10 (2), Ri(0) = Ω. Then {Ai}Ω{Bi} is true.

Case 2. k = k′ + 1. By induction hypothesis, {Ai}Ri(k′){Bi} is true for all i. By the assumption for k′, we have {{Ai}Ri(k′){Bi} | 1 ≤ i ≤ nproc} ⊢ {Ai}Qi(k′){Bi} true. Hence {Ai}Qi(k′){Bi} is true. By Proposition 4.1.10 (3), Ri(k′+1) = Qi(k′). Hence {Ai}Ri(k){Bi} is true for all i. □

In the following lemma we consider an assertion and a program such that the assertion has no free variable that can be modified by the program. In such a case, the truth of the assertion is preserved even after the execution of the program.

Lemma 5.1.5 If A is pure, P doesn't contain any procedure names, JAK(s,h) = True, FV(A) ∩ Mod(P) = ∅ and JPK−((s, h)) ∋ (s′, h′), then JAK(s′,h′) = True.

Proof. Assume A is pure, P doesn't contain any procedure names, JAK(s,h) = True, FV(A) ∩ Mod(P) = ∅ and JPK−((s, h)) ∋ (s′, h′). By Lemma 4.2.15 (1), s′ =Mod(P)c s. Since FV(A) ⊆ Mod(P)c, we have s =FV(A) s′. By Lemma 5.1.3, JAKs = True. Hence JAKs′ = True. By Lemma 5.1.3, JAK(s′,h′) = True. □

The following lemma will be necessary to show the soundness of the inference rule (inv-conj).

Lemma 5.1.6 If Pdoesn’t contain anyprocedure names, B is pure,Mod (P)∩FV(B) = ∅and APC is true, then A ∧ BPC ∧ B is true.

Proof. Assume B is pure, Mod (P) ∩ FV(B) = ∅ and APC is true. We have toprove A ∧ BPC ∧ B is true.

Assume JA ∧ BK(s,h) = True. Then JAK(s,h) = True and JBK(s,h) = True. ThenJPK((s, h)) ∋ abort. Assume JPK((s, h)) ∋ (s′, h′). Then JCK(s′,h′) = True. ByLemma5.1.5, JBK(s′,h′) = True. Then JB ∧ CK(s′,h′) = True. Hence,A∧BPC∧Bis true. 2

Lemma 5.1.7 If Γ ⊢ {A}P{B} is provable, then Γ(k) ⊢ {A}P(k){B} is true for all k.

Proof. It is proved by induction on the proof. We consider cases according to the last rule.

Case 1. (skip).

Its proof is immediate.

Case 2. (axiom).

Assume Γ, {A}P{B} ⊢ {A}P{B} is provable. Then Γ(k), {A}P(k){B} ⊢ {A}P(k){B} is true for all k.

Case 3. (assignment).

Assume Γ ⊢ {A[x := e]}x := e{A} is provable. Assume JA[x := e]K(s,h) = True. Let n be JeKs. By the definition we have Jx := eK((s, h)) = {(s1, h)} where s1 = s[x := n]. Since JA[x := e]K(s,h) = JAK(s1,h), we have JAK(s1,h) = True. Hence {A[x := e]}x := e{A} is true. Then by definition, Γ(k) ⊢ {A[x := e]}(x := e)(k){A} is true.


Case 4. (if).

We have the asserted programs Γ ⊢ {A ∧ b}P1{B} and Γ ⊢ {A ∧ ¬b}P2{B}, which are provable. By induction hypothesis, Γ(k) ⊢ {A ∧ b}P1(k){B} and Γ(k) ⊢ {A ∧ ¬b}P2(k){B} are true.

Now we will show that Γ(k) ⊢ {A}(if (b) then (P1) else (P2))(k){B} is true for all k. Assume Γ(k) is true. Then {A ∧ b}P1(k){B} and {A ∧ ¬b}P2(k){B} are true.

Assume JAK(s,h) is true.

Case 4.1. JbKs is true. By Lemma 5.1.3, we have that JbK(s,h) is true. By definition, JA ∧ bK(s,h) is true. Assume J(if (b) then (P1) else (P2))(k)K((s, h)) = JP1(k)K((s, h)) ∋ r. Then r ≠ abort and JBKr is true.

Case 4.2. JbKs is false. Then the fact that r ≠ abort and JBKr is true is proved similarly to Case 4.1.

Then {A}(if (b) then (P1) else (P2))(k){B} is true. Then by definition Γ(k) ⊢ {A}(if (b) then (P1) else (P2))(k){B} is true for all k.

Case 5. (while).

B is of the form A ∧ ¬b, and by the (while) rule we have the asserted program Γ ⊢ {A ∧ b}P{A}, which is provable. By induction hypothesis, Γ(k) ⊢ {A ∧ b}P(k){A} is true for all k.

Now we will show that Γ(k) ⊢ {A}(while (b) do (P))(k){A ∧ ¬b} is true. Assume Γ(k) is true. Then {A ∧ b}P(k){A} is true.

Let us define the function F : States ∪ {abort} → p(States ∪ {abort}) by

F(abort) = {abort},
F((s, h)) = {(s1, h1) | JA ∧ ¬bK(s1,h1) = True} ∪ {r ∈ States ∪ {abort} | JAK(s,h) = False}.

We will show that F satisfies the inequations obtained from the equations for J(while (b) do (P))(k)K by replacing = by ⊇. That is, we will show

F(abort) ⊇ {abort},
F((s, h)) ⊇ {(s, h)} if JbKs = False,
F((s, h)) ⊇ ∪{F(r) | r ∈ JP(k)K((s, h))} if JbKs = True.

The first inequation immediately holds by the definition of F. We will show the second inequation. Assume JbKs = False. By Lemma 5.1.3, we have JbK(s,h) = False. If JAK(s,h) = False, then the inequation holds by the definition of F. If JAK(s,h) = True, then JA ∧ ¬bK(s,h) = True, and the inequation holds by the definition of F.

We will show the last inequation. If JAK(s,h) = False, then it holds by the definition of F. Assume JAK(s,h) = True, JbKs = True, and r′ is in the right-hand side. We will show that r′ ∈ F((s, h)). We have some r1 such that JP(k)K((s, h)) ∋ r1 and F(r1) ∋ r′. By Lemma 5.1.3, we have JbK(s,h) = True. Hence JA ∧ bK(s,h) = True. Since {A ∧ b}P(k){A} is true, we have r1 ≠ abort and JAKr1 = True. Then we have r′ ≠ abort and JA ∧ ¬bKr′ = True. Hence r′ ∈ F((s, h)).

We have shown that F satisfies the inequations. By the least fixed point theorem [17], we have F ⊇ J(while (b) do (P))(k)K. If JAK(s,h) = True and r′ ∈ J(while (b) do (P))(k)K((s, h)), then r′ ∈ F((s, h)), and we have r′ ≠ abort and JA ∧ ¬bKr′ = True. Therefore Γ(k) ⊢ {A}(while (b) do (P))(k){A ∧ ¬b} is true.

Case 6. (comp).

We have the asserted programs Γ ⊢ {A}P1{C} and Γ ⊢ {C}P2{B}, which are provable for some C. By induction hypothesis, Γ(k) ⊢ {A}P1(k){C} and Γ(k) ⊢ {C}P2(k){B} are true for all k.

Now we will show that Γ(k) ⊢ {A}(P1; P2)(k){B} is true for all k. Assume Γ(k) is true. Then {A}P1(k){C} and {C}P2(k){B} are true.

Assume JAK(s,h) = True and J(P1; P2)(k)K((s, h)) ∋ r. We will show r ≠ abort and JBKr = True. We have r1 such that JP1(k)K((s, h)) ∋ r1 and JP2(k)K(r1) ∋ r. Hence we have r1 ≠ abort and JCKr1 = True. Since JP2(k)K(r1) ∋ r, we have r ≠ abort and JBKr = True. Hence {A}(P1; P2)(k){B} is true. Then Γ(k) ⊢ {A}(P1; P2)(k){B} is true for all k.

Case 7. (conseq).

We have the asserted program Γ ⊢ {A1}P{B1}, which is provable, where A → A1 and B1 → B. By induction hypothesis for the assumption, Γ(k) ⊢ {A1}P(k){B1} is true for all k.

Now we will show that Γ(k) ⊢ {A}P(k){B} is true for all k. Assume Γ(k) is true. Then {A1}P(k){B1} is true.

Assume JAK(s,h) = True and JP(k)K((s, h)) ∋ r. We will show r ≠ abort and JBKr = True. By the side condition JA → A1K(s,h) = True, we have JA1K(s,h) = True. Hence r ≠ abort and JB1Kr = True. By the side condition JB1 → BKr = True, we have JBKr = True. Hence {A}P(k){B} is true. Then Γ(k) ⊢ {A}P(k){B} is true for all k.

Case 8. (cons).

Assume Γ ⊢ {∀x′((x′ 7→ e1, e2) −∗ A[x := x′])}x := cons(e1, e2){A} is provable. Assume J∀x′(x′ 7→ e1, e2 −∗ A[x := x′])K(s,h) = True and Jx := cons(e1, e2)K((s, h)) ∋ r. We will show r ≠ abort and JAKr = True. Let P be x := cons(e1, e2), n1 be Je1Ks, and n2 be Je2Ks. By the definition of JPK, r ≠ abort and r = (s1, h1) where s1 = s[x := n] and h1 = h[n := n1, n+1 := n2] for some n such that n > 0 and n, n+1 ∉ Dom(h). By J∀x′(x′ 7→ e1, e2 −∗ A[x := x′])K(s,h) = True, we have Jx′ 7→ e1, e2 −∗ A[x := x′]K(s′,h) = True where s′ = s[x′ := n]. Let h2 = ∅[n := n1, n+1 := n2]. We have h1 = h + h2. Then Jx′ 7→ e1, e2K(s′,h2) = True since x′ ∉ FV(e1, e2). Since Jx′ 7→ e1, e2 −∗ A[x := x′]K(s′,h) = True, we have JA[x := x′]K(s′,h1) = True. Hence JAK(s1,h1) = True, since x′ ∉ FV(A). Now we have that {∀x′((x′ 7→ e1, e2) −∗ A[x := x′])}x := cons(e1, e2){A} is true. Then Γ(k) ⊢ {∀x′((x′ 7→ e1, e2) −∗ A[x := x′])}(x := cons(e1, e2))(k){A} is true for all k.

Case 9. (lookup).

Assume Γ ⊢ {∃x′(e 7→ x′ ∗ (e 7→ x′ −∗ A[x := x′]))}x := [e]{A} is provable. Assume J∃x′((e 7→ x′) ∗ ((e 7→ x′) −∗ A[x := x′]))K(s,h) = True and Jx := [e]K((s, h)) ∋ r. We will show r ≠ abort and JAKr = True. Let n be JeKs. We have some n1 such that J(e 7→ x′) ∗ (e 7→ x′ −∗ A[x := x′])K(s′,h) = True where s′ = s[x′ := n1]. Hence we have h1, h2 such that h1 = ∅[n := n1], h = h1 + h2 and Je 7→ x′ −∗ A[x := x′]K(s′,h2) = True. By Je 7→ x′K(s′,h1) = True, JA[x := x′]K(s′,h) = True. Since x′ ∉ FV(e), we have JeKs′ = JeKs = n. Since n ∈ Dom(h), we have r ≠ abort and r = (s1, h) where s1 = s[x := n1]. Since JA[x := x′]K(s′,h) = JAK(s1,h) by x′ ∉ FV(A), we have JAKr = True. Then {∃x′(e 7→ x′ ∗ (e 7→ x′ −∗ A[x := x′]))}x := [e]{A} is true. Then we have that Γ(k) ⊢ {∃x′(e 7→ x′ ∗ (e 7→ x′ −∗ A[x := x′]))}(x := [e])(k){A} is true for all k.

Case 10. (mutation).

Assume Γ ⊢ {(∃x(e1 7→ x)) ∗ (e1 7→ e2 −∗ A)}[e1] := e2{A} is provable. Assume J(∃x(e1 7→ x)) ∗ (e1 7→ e2 −∗ A)K(s,h) = True and J[e1] := e2K((s, h)) ∋ r. We will show r ≠ abort and JAKr = True. Let n1 be Je1Ks and n2 be Je2Ks. We have h1, h2 such that Je1 7→ e2 −∗ AK(s,h2) = True, h = h1 + h2 and J∃x(e1 7→ x)K(s,h1) = True. Hence we have some n3 such that Je1 7→ xK(s′,h1) = True where s′ = s[x := n3]. Since x ∉ FV(e1), we have Je1Ks′ = Je1Ks = n1. We also have Dom(h1) = {n1}. Since n1 ∈ Dom(h), r ≠ abort and r = (s, h′) where h′ = h[n1 := n2]. Let h′1 be ∅[n1 := n2]. Then h′ = h′1 + h2 and Je1 7→ e2K(s,h′1) = True. Since Je1 7→ e2 −∗ AK(s,h2) = True, we have JAK(s,h′) = True. Hence {(∃x(e1 7→ x)) ∗ (e1 7→ e2 −∗ A)}[e1] := e2{A} is true. Therefore Γ(k) ⊢ {(∃x(e1 7→ x)) ∗ (e1 7→ e2 −∗ A)}([e1] := e2)(k){A} is true for all k.

Case 11. (dispose).

Assume Γ ⊢ {(∃x(e 7→ x)) ∗ A}dispose(e){A} is provable. Assume J(∃x(e 7→ x)) ∗ AK(s,h) = True and Jdispose(e)K((s, h)) ∋ r. We will show r ≠ abort and JAKr = True. Let n be JeKs. Then we have h1, h2 such that h = h1 + h2, J∃x(e 7→ x)K(s,h1) = True and JAK(s,h2) = True. Hence we have some n1 such that Je 7→ xK(s′,h1) = True where s′ = s[x := n1]. Since JeKs′ = JeKs = n by x ∉ FV(e), Dom(h1) = {n} and h1(n) = n1. Since n ∈ Dom(h), r ≠ abort and r = (s, h′1) where h′1 = h|Dom(h)−{n}. Hence h2 = h′1. Therefore JAKr = True. Hence {(∃x(e 7→ x)) ∗ A}dispose(e){A} is true. Then Γ(k) ⊢ {(∃x(e 7→ x)) ∗ A}(dispose(e))(k){A} is true for all k.

Case 12. (recursion).


We have the asserted programs Γ ∪ {{Ai}Ri{Bi} | 1 ≤ i ≤ nproc} ⊢ {Ai}Qi{Bi} that are provable for all i. Fix i and k. By induction hypothesis, Γ(k) ∪ {{Ai}Ri(k){Bi} | 1 ≤ i ≤ nproc} ⊢ {Ai}Qi(k){Bi} is true. Assume Γ(k) is true. Then {{Ai}Ri(k){Bi} | 1 ≤ i ≤ nproc} ⊢ {Ai}Qi(k){Bi} is true.

By Lemma 5.1.4, ⊢ {Ai}Ri(k){Bi} is true. Then Γ(k) ⊢ {Ai}Ri(k){Bi} is true.

Case 13. (inv-conj).

We have that Γ ⊢ {A}P{C} is provable and Mod(P) ∩ FV(B) = ∅. By induction hypothesis, Γ(k) ⊢ {A}P(k){C} is true for all k.

Fix k. Now we will show that Γ(k) ⊢ {A ∧ B}P(k){C ∧ B} is true. Assume Γ(k) is true. Then {A}P(k){C} is true.

By definition, Mod(P) = Mod(P(nproc)). Since we have Mod(P(k)) ⊆ Mod(P(nproc)), we have FV(B) ∩ Mod(P(k)) = ∅. By Lemma 5.1.6, we have that {A ∧ B}P(k){C ∧ B} is true.

Case 14. (exists).

We have the asserted program Γ ⊢ {A}P{B}, which is provable, and x ∉ FV(B) ∪ EFV(P). By induction hypothesis, Γ(k) ⊢ {A}P(k){B} is true for all k. Since x ∉ EFV(P), by definition x ∉ FV(P(nproc)). Since FV(P(k)) ⊆ FV(P(nproc)), we have x ∉ FV(P(k)).

Fix k. Now we will show that Γ(k) ⊢ {∃xA}P(k){B} is true. Assume Γ(k) is true. Then {A}P(k){B} is true.

Assume J∃xAK(s,h) = True and JP(k)K−((s, h)) ∋ r. Then by definition JAK(s[x:=m],h) = True for some m. By definition JP(k)K−((s[x := m], h)) ∌ abort. We have s =FV(P(k)) s[x := m] since x ∉ FV(P(k)). Then by Lemma 4.2.15 (3), r ≠ abort. Assume r = (s′, h′). Then by Lemma 4.2.15 (1), we have s′ =Mod(P(k))c s and hence s′ =FV(P(k))c s. Now let s′1 = [s′, s[x := m], FV(P(k))]. Then by Lemma 4.2.15 (2), JP(k)K−((s[x := m], h)) ∋ (s′1, h′). Then by definition, JBK(s′1,h′) = True. We have s′1(y) = s′(y) for all y ∈ FV(P(k)). We also have s′1(y) = s(y) = s′(y) for all y ≠ x with y ∉ FV(P(k)). Hence s′1 ={x}c s′. Since x ∉ FV(B), we have JBK(s′,h′) = True. Then by definition {∃xA}P(k){B} is true. Therefore, Γ(k) ⊢ {∃xA}P(k){B} is true. □

We now present the soundness theorem, one of the main theorems of this dissertation.

Theorem 5.1.8 (Soundness) If Γ ⊢ {A}P{B} is provable, then Γ ⊢ {A}P{B} is true.

Proof. Assume Γ ⊢ {A}P{B} is provable. By Lemma 5.1.7, Γ(k) ⊢ {A}P(k){B} is true for all k. By Proposition 5.1.1, Γ ⊢ {A}P{B} is true. □

We will give an unsound example for the naive logical system obtained by taking the union of axioms and inference rules from Hoare's logic for recursive procedures and separation logic.

Definition 5.1.9 The axiom (AXIOM 9: INVARIANCE AXIOM) has been defined in [1] as:

⊢ {A}P{A} (invariance)   (FV(A) ∩ EFV(P) = ∅)

Definition 5.1.10 We define the logical system ‘Separation+Invariance Logic’ as the logical system obtained from the separation logic by adding the axiom (invariance).

Proposition 5.1.11 The axiom (invariance) is not sound in Separation+Invariance Logic.

Proof. ⊢ {emp}x := cons(e₁, e₂){emp} is provable by the axiom (invariance) in the system Separation+Invariance Logic since FV(emp) ∩ EFV(x := cons(e₁, e₂)) = ∅. However, it is clearly false, since x := cons(e₁, e₂) always allocates two new cells, so the postcondition emp fails in every final heap. □
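Concretely, the failure of the invariance axiom can be observed by running the allocation on an explicit model of states. The following sketch is our own illustration (the names `alloc` and `emp` are ours, not the thesis's formal semantics); `alloc` models x := cons(e₁, e₂) by picking the least pair of free consecutive addresses.

```python
def alloc(state, v1, v2, var='x'):
    """Model x := cons(e1, e2): allocate two fresh consecutive cells."""
    s, h = state
    n = 1
    while n in h or n + 1 in h:  # least address with two free cells
        n += 1
    h2 = dict(h); h2[n] = v1; h2[n + 1] = v2
    s2 = dict(s); s2[var] = n
    return (s2, h2)

def emp(state):
    """The assertion emp: the heap is empty."""
    return state[1] == {}

pre = ({'x': 0}, {})          # emp holds in the initial state
post = alloc(pre, 3, 4)
print(emp(pre), emp(post))    # True False — {emp}x := cons(3,4){emp} fails
```

The run shows a state satisfying the precondition emp whose unique final state violates the postcondition emp, matching the counterexample in the proof.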


5.2 Expressiveness

Coding of Assertions

This section proves the expressiveness theorem. Our technique is to extend the expressiveness theorem given in [14] to mutual recursive procedure calls. In this section, we first assume that for given assertions and programs, we fix some sequence −→x of variables that contains the free variables of the assertions and the extended free variables of the programs. We will next define the formulas Store−→x(m) and Heap(m) for describing the current store and the current heap respectively. Next we will provide the pure formulas EEvale,−→x(n, k) and BEvalA,−→x(n), which express the meaning of the expression e and the pure formula A respectively. Then we will define the pure formula HEvalA(m) for expressing the meaning of the assertion A at the heap coded by m. By using it, we will define the pure formula EvalA,−→x(n, m), which expresses the meaning of the assertion A. We will also define the pure formula ExecP,−→x(n, m) for the meaning of the program P. Finally we will define the formula WP,A(−→x) for the weakest precondition of the program P and the assertion A, and we will prove the expressiveness theorem that states WP,A(−→x) indeed expresses the weakest precondition.

We assume a standard surjective pairing function on natural numbers. For natural numbers n, m, we will write (n, m) to denote the number that represents the pair of n and m. We also assume a standard surjective coding of a sequence of natural numbers by a natural number. We will write ⟨n₁, . . . , nₖ⟩ for the number that represents the sequence n₁, . . . , nₖ. When the number n represents a sequence, lh(n) and (n)ᵢ denote the length of the sequence and the i-th element of the sequence respectively.
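For intuition, one standard choice satisfying these assumptions is the Cantor pairing function together with a nested-pair sequence coding. The following sketch is our own (the thesis only assumes that some such coding exists; the function names are ours):

```python
def pair(n, m):
    """Cantor pairing: a bijection N x N -> N."""
    return (n + m) * (n + m + 1) // 2 + m

def unpair(k):
    """Inverse of pair."""
    w = 0
    while (w + 1) * (w + 2) // 2 <= k:
        w += 1
    m = k - w * (w + 1) // 2
    return (w - m, m)

def code_seq(seq):
    """<n1,...,nk>: code a sequence as its length paired with nested pairs."""
    c = 0
    for n in reversed(seq):
        c = pair(n, c)
    return pair(len(seq), c)

def lh(code):
    """lh(n): the length of the coded sequence."""
    return unpair(code)[0]

def elem(code, i):
    """(n)_i: the i-th element (0-based) of the coded sequence."""
    _, c = unpair(code)
    for _ in range(i):
        c = unpair(c)[1]
    return unpair(c)[0]

s = code_seq([4, 0, 7])
print(lh(s), [elem(s, i) for i in range(3)])  # 3 [4, 0, 7]
```

Any other surjective coding would do equally well; the proofs below use only the predicates Pair, Lh and Elem, never the particular coding.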

The following predicates for handling sequences are known to be definable in the language of Peano arithmetic. Pair(k, n, m) is defined to hold if k is the number that represents the pair of n and m. Lh(n, k) is defined to hold if k is the length of the sequence represented by n. That is, Lh(⟨n₁, . . . , nₖ⟩, k) holds. Elem(n, i, k) is defined to hold if k is the i-th element in the sequence represented by n. That is, Elem(⟨n₁, . . . , nₖ⟩, i − 1, nᵢ) holds.


We code the piece of the store s for variables x₁, . . . , xₖ by the number ⟨n₁, . . . , nₖ⟩ where s(xᵢ) = nᵢ. We code the heap h by the number ⟨m₁, . . . , mₖ⟩ where Dom(h) = {n₁, . . . , nₖ}, 0 < n₁ < . . . < nₖ, and mᵢ is the number that represents the pair of nᵢ and h(nᵢ). We code the result of a program execution by coding abort and (s, h) by 0 and k + 1 respectively, where the piece of s is coded by n, h is coded by m, and k is the pair of the numbers n and m. The number that represents a given heap is unique, since the addresses must be listed in strictly increasing order. For example, if 0 < l₁ < l₂, the number ⟨(l₁, n₁), (l₂, n₂)⟩ represents the heap h such that Dom(h) = {l₁, l₂} and h(l₁) = n₁, h(l₂) = n₂, while a number whose pairs list the addresses out of order or repeat an address does not represent any heap, since the condition n₁ < n₂ < . . . would be violated.
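The uniqueness of the heap representation comes from listing the cells in strictly increasing address order. A small sketch (our own illustration; `code_heap` is a hypothetical name, and we leave the code as a list of pair-numbers rather than collapsing it into a single number):

```python
def pair(n, m):
    """Cantor pairing of two natural numbers."""
    return (n + m) * (n + m + 1) // 2 + m

def code_heap(h):
    """Code heap h (a dict from positive addresses to values) by the
    sequence of pairs (addr, value) in strictly increasing address order,
    which makes the representation unique."""
    return [pair(a, h[a]) for a in sorted(h)]

# Two equal heaps get the same code regardless of construction order:
h1 = {3: 9, 1: 7}
h2 = {1: 7, 3: 9}
print(code_heap(h1) == code_heap(h2))  # True
```

Sorting by address is exactly what rules out the out-of-order representations mentioned above.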

We say A is true at (s, h) when ⟦A⟧(s,h) = True. The formula A ↔ B is defined as (A → B) ∧ (B → A).

∅ sometimes denotes the empty heap, that is, ∅(x) is undefined for all x ∈ N.

We define the following pure formulas. First we define the coding of our assertion language. It is the same as [14].

Lesslh(i, n) = ∃x(Lh(n, x) ∧ i < x),
Addseq(k, n, m) = ∃x(Lh(n, x) ∧ Lh(m, x + 1)) ∧ Elem(m, 0, k) ∧
∀yx(Lesslh(y, n) ∧ Elem(n, y, x) → Elem(m, y + 1, x)).

The predicate Lesslh(i, n) means i < lh(n). The predicate Addseq(k, n, m) means ⟨k⟩ · n = m where · denotes the concatenation of sequences.

Definition 5.2.1 We define the formulas Storex₁,...,xₙ(m) and Heap(m).

Storex₁,...,xₙ(m) = Lh(m, n) ∧ Elem(m, 0, x₁) ∧ . . . ∧ Elem(m, n − 1, xₙ),
Lookup(m, l, k) = ∃yz(Lesslh(y, m) ∧ Elem(m, y, z) ∧ Pair(z, l, k)),
IsHeap(m) = ∀ix₁y₁z₁x₂y₂z₂(Lesslh(i + 1, m) ∧ Elem(m, i, x₁) ∧
Pair(x₁, y₁, z₁) ∧ Elem(m, i + 1, x₂) ∧ Pair(x₂, y₂, z₂) → 0 < y₁ ∧ y₁ < y₂),
Heap(m) = IsHeap(m) ∧ ∀xy(Lookup(m, x, y) ↔ (x ↦ y ∗ True)).

The predicate Storex₁,...,xₙ(⟨m₁, . . . , mₙ⟩) means s(xᵢ) = mᵢ where s is the current store. The predicate Lookup(m, l, k) means h(l) = k where m represents the heap h. The predicate IsHeap is defined so that IsHeap(m) means that there is some heap that the number m represents. The predicate Heap(⟨(l₁, n₁), . . . , (lₖ, nₖ)⟩) means Dom(h) = {l₁, . . . , lₖ}, 0 < l₁ < . . . < lₖ, and h(lᵢ) = nᵢ where h is the current heap.

Definition 5.2.2 We define the pure formulas EEvale,−→x(n, k) for the expression e and BEvalA,−→x(n) for the pure formula A, where we suppose −→x includes FV(e) and FV(A) respectively.

EEvale,−→x(n, k) = ∃−→x(Store−→x(n) ∧ e = k),
BEvalA,−→x(n) = ∃−→x(Store−→x(n) ∧ A).

EEvale,−→x(n, k) means ⟦e⟧s = k where n represents the store s. BEvalA,−→x(n) means ⟦A⟧s = True where n represents the store s.

We define the following pure formulas.

Pair₁(z, x, y) = ∃w(z = w + 1 ∧ Pair(w, x, y) ∧ IsHeap(y)),
Domain(k, m) = ∃yLookup(m, k, y),
Separate(m, m₁, m₂) = IsHeap(m) ∧ IsHeap(m₁) ∧ IsHeap(m₂) ∧
∀x(∃y(Elem(m, y, x)) ↔ ∃y(Elem(m₁, y, x) ∨ Elem(m₂, y, x))) ∧
∀x₁x₂y₁y₂(Lookup(m₁, x₁, y₁) ∧ Lookup(m₂, x₂, y₂) → x₁ ≠ x₂).

Domain(k, m) means k ∈ Dom(h) where m represents the heap h. Separate(m, m₁, m₂) means h = h₁ + h₂ where m, m₁, and m₂ represent the heaps h, h₁, and h₂ respectively.

Definition 5.2.3 We define the pure formula HEvalA(m) for the assertion A by induction on A.

HEvalA(m) = A (A is a pure formula),
HEvalemp(m) = ¬∃xyLookup(m, x, y),
HEvale₁↦e₂(m) = e₁ > 0 ∧ ∀xy(Lookup(m, x, y) ↔ x = e₁ ∧ y = e₂),
HEval¬A(m) = ¬HEvalA(m),
HEvalA∧B(m) = HEvalA(m) ∧ HEvalB(m),
HEvalA∨B(m) = HEvalA(m) ∨ HEvalB(m),
HEvalA→B(m) = HEvalA(m) → HEvalB(m),
HEval∀xA(m) = ∀xHEvalA(m),
HEval∃xA(m) = ∃xHEvalA(m),
HEvalA∗B(m) = ∃y₁y₂(Separate(m, y₁, y₂) ∧ HEvalA(y₁) ∧ HEvalB(y₂)),
HEvalA−∗B(m) = ∀y₁y₂(HEvalA(y₁) ∧ Separate(y₂, m, y₁) → HEvalB(y₂)).

HEvalA(m) means ⟦A⟧(s,h) = True where s is the current store and m represents the heap h.
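The clause for ∗ mirrors the Separate predicate: a splitting of the coded heap is guessed and each part is checked. On explicit finite heaps this reading can be executed directly. The following sketch is our own encoding of a small fragment of the assertion language (assertions as tagged tuples, expressions as functions of the store; all names are ours), evaluating emp, ↦ and ∗ by enumerating heap splittings:

```python
from itertools import combinations

def splits(h):
    """All ways to split heap h as h1 + h2 (disjoint union)."""
    addrs = list(h)
    for r in range(len(addrs) + 1):
        for left in combinations(addrs, r):
            h1 = {a: h[a] for a in left}
            h2 = {a: h[a] for a in h if a not in left}
            yield h1, h2

def holds(assertion, s, h):
    """Evaluate an assertion on store s and heap h.
    Assertions: ('emp',), ('mapsto', e1, e2), ('sep', A, B)."""
    tag = assertion[0]
    if tag == 'emp':
        return h == {}
    if tag == 'mapsto':
        _, e1, e2 = assertion
        a, v = e1(s), e2(s)
        return a > 0 and h == {a: v}   # exact one-cell heap, address positive
    if tag == 'sep':
        _, A, B = assertion
        return any(holds(A, s, h1) and holds(B, s, h2) for h1, h2 in splits(h))
    raise ValueError(tag)

s = {'x': 1}
h = {1: 7, 2: 8}
A = ('sep', ('mapsto', lambda st: st['x'], lambda st: 7),
            ('mapsto', lambda st: st['x'] + 1, lambda st: 8))
print(holds(A, s, h))  # True
```

The −∗ clause could be added analogously by quantifying over all heaps disjoint from h, which is why HEvalA−∗B needs a universal rather than existential quantifier over codes.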

Definition 5.2.4 We define the pure formula EvalA,−→x(n, m) for the assertion A. We suppose −→x includes FV(A).

EvalA,−→x(n, m) = ∃−→x(Store−→x(n) ∧ IsHeap(m) ∧ HEvalA(m)).

EvalA,−→x(n, m) means ⟦A⟧(s,h) = True where n represents the store s and m represents the heap h.


Coding of Programs

We define the following pure formulas.

New₂(n, m) = n > 0 ∧ ¬Domain(n, m) ∧ ¬Domain(n + 1, m),
ChangeStorex₁,...,xₙ,xᵢ(m₁, k, m₂) = Lh(m₁, n) ∧ Lh(m₂, n) ∧
∀yx(y < n ∧ y ≠ i − 1 ∧ Elem(m₁, y, x) → Elem(m₂, y, x)) ∧
Elem(m₂, i − 1, k),
ChangeHeap(m₁, l, k, m₂) = ∀xy(x ≠ l → (Lookup(m₁, x, y) ↔
Lookup(m₂, x, y))) ∧ Lookup(m₂, l, k).

New₂(n, m) means that n is the address of two free cells in h where m represents the heap h. That is, the addresses n and n + 1 can be used by the next x := cons(e₁, e₂) statement. ChangeStorex₁,...,xₙ,xᵢ(m₁, k, m₂) means m₂ represents the store s[xᵢ := k] where m₁ represents the store s. ChangeHeap(m₁, l, k, m₂) means m₂ represents the heap h[l := k] where m₁ represents the heap h.

We say the number n represents the result r if r = abort and n = 0, or r = (s, h) and n = (m, k) + 1 where m represents the store s and k represents the heap h.
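This result coding can be sketched directly (our own illustration; `code_store` and `code_heap` stand for any injective store and heap codings):

```python
def pair(n, m):
    """Cantor pairing of two natural numbers."""
    return (n + m) * (n + m + 1) // 2 + m

def code_result(r, code_store, code_heap):
    """Code abort as 0 and a state (s, h) as (m, k) + 1,
    where m codes the store and k codes the heap."""
    if r == 'abort':
        return 0
    s, h = r
    return pair(code_store(s), code_heap(h)) + 1

# abort is the unique result coded by 0; every proper state gets a positive code:
print(code_result('abort', None, None))                                   # 0
print(code_result(({'x': 1}, {}), lambda s: s['x'], lambda h: len(h)))    # 2
```

Shifting by +1 is what lets the formulas below test "is this result abort?" simply as n = 0.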

Next we extend the coding of programs used in [14] to mutually recursive procedure calls.

Definition 5.2.5 We define the pure formula ExecUP,−→x(m, n₁, n₂) by induction on (m, P) in Figure 5.2.1. We define

ExecP,−→x(n₁, n₂) = ∃k(ExecUP,−→x(k, n₁, n₂)).

ExecUP,−→x(k, n₁, n₂) is true if and only if the following holds: when we execute the k-level unfolding P(k) of the program P from the state coded by n₁, one of the possible resulting states is the state coded by n₂. The predicate ExecP,−→x(n₁, n₂) means ⟦P⟧(r₁) ∋ r₂ where n₁ and n₂ represent r₁ and r₂ respectively.


ExecUx:=e,−→x(m, n₁, n₂) = (n₁ = 0 → n₂ = 0) ∧
(n₁ > 0 → ∃y₁z₁y₂w(Pair₁(n₁, y₁, z₁) ∧ EEvale,−→x(y₁, w) ∧
ChangeStore−→x,x(y₁, w, y₂) ∧ Pair₁(n₂, y₂, z₁))),

ExecUif (b) then (P₁) else (P₂),−→x(m, n₁, n₂) = (n₁ = 0 → n₂ = 0) ∧
(n₁ > 0 → ∃xy(Pair₁(n₁, x, y) ∧ (BEvalb,−→x(x) →
ExecUP₁,−→x(m, n₁, n₂)) ∧ (¬BEvalb,−→x(x) → ExecUP₂,−→x(m, n₁, n₂)))),

ExecUwhile (b) do (P),−→x(m, n₁, n₂) = (n₁ = 0 → n₂ = 0) ∧
(n₁ > 0 → ∃wz(Lh(w, z + 1) ∧ Elem(w, 0, n₁) ∧ Elem(w, z, n₂) ∧
∀w₁(w₁ < z → ∃z₁z₂w₂w₃(Elem(w, w₁, z₁) ∧ Elem(w, w₁ + 1, z₂) ∧
z₁ > 0 ∧ Pair₁(z₁, w₂, w₃) ∧ BEvalb,−→x(w₂) ∧ ExecUP,−→x(m, z₁, z₂)))) ∧
(n₂ > 0 → ∃yz(Pair₁(n₂, y, z) ∧ ¬BEvalb,−→x(y)))),

ExecUP₁;P₂,−→x(m, n₁, n₂) = ∃z(ExecUP₁,−→x(m, n₁, z) ∧ ExecUP₂,−→x(m, z, n₂)),

ExecUskip,−→x(m, n₁, n₂) = (n₁ = n₂),

ExecUx:=cons(e₁,e₂),−→x(m, n₁, n₂) = (n₁ = 0 → n₂ = 0) ∧
(n₁ > 0 → ∃y₁z₁y₂z₂ww₁w₂(Pair₁(n₁, y₁, z₁) ∧ EEvale₁,−→x(y₁, w₁) ∧
EEvale₂,−→x(y₁, w₂) ∧ New₂(w, z₁) ∧ ChangeStore−→x,x(y₁, w, y₂) ∧
∀xy(x ≠ w ∧ x ≠ w + 1 → (Lookup(z₁, x, y) ↔ Lookup(z₂, x, y))) ∧
Lookup(z₂, w, w₁) ∧ Lookup(z₂, w + 1, w₂) ∧ Pair₁(n₂, y₂, z₂))),

ExecUx:=[e],−→x(m, n₁, n₂) = (n₁ = 0 → n₂ = 0) ∧
(n₁ > 0 → ∃y₁z₁y₂ww₁(Pair₁(n₁, y₁, z₁) ∧ EEvale,−→x(y₁, w) ∧
(¬Domain(w, z₁) → n₂ = 0) ∧ (Domain(w, z₁) →
Lookup(z₁, w, w₁) ∧ ChangeStore−→x,x(y₁, w₁, y₂) ∧ Pair₁(n₂, y₂, z₁)))),

ExecU[e₁]:=e₂,−→x(m, n₁, n₂) = (n₁ = 0 → n₂ = 0) ∧
(n₁ > 0 → ∃y₁z₁z₂w₁w₂(Pair₁(n₁, y₁, z₁) ∧ EEvale₁,−→x(y₁, w₁) ∧
EEvale₂,−→x(y₁, w₂) ∧ (¬Domain(w₁, z₁) → n₂ = 0) ∧
(Domain(w₁, z₁) → ChangeHeap(z₁, w₁, w₂, z₂) ∧ Pair₁(n₂, y₁, z₂)))),

ExecUdispose(e),−→x(m, n₁, n₂) = (n₁ = 0 → n₂ = 0) ∧
(n₁ > 0 → ∃y₁z₁z₂w(Pair₁(n₁, y₁, z₁) ∧ EEvale,−→x(y₁, w) ∧
(¬Domain(w, z₁) → n₂ = 0) ∧ (Domain(w, z₁) →
∀xy(Lookup(z₁, x, y) ∧ x ≠ w ↔ Lookup(z₂, x, y)) ∧ Pair₁(n₂, y₁, z₂)))),

ExecURᵢ,−→x(0, n₁, n₂) = (n₁ = 0 ∧ n₂ = 0),

ExecURᵢ,−→x(k + 1, n₁, n₂) = ExecUQᵢ,−→x(k, n₁, n₂).

Figure 5.2.1: Definition of ExecU


We define the following abbreviations. Note that they are not formulas.

Storecodex₁,...,xₙ(m, s) = Lh(m, n) ∧ ∀i < n(Elem(m, i, s(xᵢ₊₁))),
Heapcode(m, h) = IsHeap(m) ∧ ∀ln(h(l) = n ↔ Lookup(m, l, n)),
Result−→x(n, r) = (n = 0 ∧ r = abort) ∨
(n > 0 ∧ ∃shyz(r = (s, h) ∧ Pair₁(n, y, z) ∧
Storecode−→x(y, s) ∧ Heapcode(z, h))).

Storecodex₁,...,xₙ(m, s) means that the number m is the code that represents the store s for the variables x₁, . . . , xₙ. Heapcode(m, h) means that the number m is the code that represents the heap h. Result−→x(n, r) means that the number n represents the result r.

The following lemma says that ExecUP,−→x(k, n₁, n₂) simulates the execution of the k-level unfolding of the program P.

Lemma 5.2.6 (1) ExecUΩ,−→x(0, n₁, n₂) is true if and only if n₁ = n₂ = 0.

(2) ExecUP,−→x(k, n₁, n₂) is true if and only if ExecUP(k),−→x(0, n₁, n₂) is true.

(3) ExecUP,−→x(k, n₁, n₂) is true if and only if ExecP(k),−→x(n₁, n₂) is true.

Proof. (1) We will show the direction from the left-hand side to the right-hand side. Assume that ExecUΩ,−→x(0, n₁, n₂) is true. By definition, ExecUwhile (0=0) do (skip),−→x(0, n₁, n₂) is true.

Case 1. n₁ = 0. Then by definition, n₂ = 0.

Case 2. n₁ > 0. Then ∃wz(Lh(w, z + 1) ∧ Elem(w, 0, n₁) ∧ Elem(w, z, n₂) ∧ ∀w₁(w₁ < z → ∃z₁z₂w₂w₃(Elem(w, w₁, z₁) ∧ Elem(w, w₁ + 1, z₂) ∧ z₁ > 0 ∧ Pair₁(z₁, w₂, w₃) ∧ BEval0=0,−→x(w₂) ∧ ExecUskip,−→x(0, z₁, z₂)))) ∧ (n₂ > 0 → ∃yz(Pair₁(n₂, y, z) ∧ ¬BEval0=0,−→x(y))) holds.

Case 2.1. n₂ > 0. Since BEval0=0,−→x(y) is true, ∃yz(Pair₁(n₂, y, z) ∧ ¬BEval0=0,−→x(y)) is false. Hence we do not have this case.

Case 2.2. n₂ = 0. We have w = ⟨w₁, . . . , wₙ⟩ with w₁ = n₁ and wₙ = n₂, and for all 1 ≤ i < n, ExecUskip,−→x(0, wᵢ, wᵢ₊₁) is true. Then for all 1 ≤ i < n, wᵢ = wᵢ₊₁. Since n₂ = 0, we have n₁ = 0. It contradicts the assumption.

Therefore, n₁ = n₂ = 0.

The opposite direction can be shown directly by definition.

(2) Proved by induction on (k, P). We will consider the cases of k.

Case 1. k = 0.

Here the only important case is when P is Rᵢ, since the induction hypothesis proves the other cases in a similar way to those in Case 2.

Case 1.1. P is Rᵢ.

By definition, ExecUP,−→x(0, n₁, n₂) is n₁ = n₂ = 0. By Proposition 4.1.10 (2), ExecUP(0),−→x(0, n₁, n₂) is ExecUΩ,−→x(0, n₁, n₂). By (1), they are equivalent.

Case 2. k = k′ + 1.

Case 2.1. P is atomic.

P(k) is P, by definition. Since ExecUP,−→x(k, n₁, n₂) does not depend on k, ExecUP,−→x(k, n₁, n₂) is the same as ExecUP(k),−→x(0, n₁, n₂).

Case 2.2. P is if (b) then (P₁) else (P₂).

Suppose we have some x and y such that Pair₁(n₁, x, y) is true. ExecUP,−→x(k, n₁, n₂) is equivalent to ExecUP₁,−→x(k, n₁, n₂) when BEvalb,−→x(x) is true, and to ExecUP₂,−→x(k, n₁, n₂) when ¬BEvalb,−→x(x) is true. By induction hypothesis, it is equivalent to ExecUP₁(k),−→x(0, n₁, n₂) when BEvalb,−→x(x) is true, and to ExecUP₂(k),−→x(0, n₁, n₂) when ¬BEvalb,−→x(x) is true. Hence it is equivalent to ExecUP(k),−→x(0, n₁, n₂).

Case 2.3. P is P₁; P₂.

ExecUP,−→x(k, n₁, n₂) is true if and only if ExecUP₁,−→x(k, n₁, n₃) is true and ExecUP₂,−→x(k, n₃, n₂) is true for some n₃. By induction hypothesis, this is equivalent to the fact that ExecUP₁(k),−→x(0, n₁, n₃) is true and ExecUP₂(k),−→x(0, n₃, n₂) is true. Therefore, it is equivalent to ExecUP(k),−→x(0, n₁, n₂).


Case 2.4. P is while (b) do (P₁).

ExecUP,−→x(k, n₁, n₂) is true if and only if ExecUP₁,−→x(k, mᵢ, mᵢ₊₁) is true for all 0 ≤ i < l for some l, where m₀ = n₁ and mₗ = n₂ by definition. By induction hypothesis, it is equivalent to ExecUP₁(k),−→x(0, mᵢ, mᵢ₊₁) for all such i. Therefore, ExecUP,−→x(k, n₁, n₂) is true if and only if ExecUP(k),−→x(0, n₁, n₂) is true.

Case 2.5. P is Rᵢ.

By definition, ExecUP,−→x(k, n₁, n₂) is ExecUQᵢ,−→x(k′, n₁, n₂). By induction hypothesis, it is equivalent to ExecUQᵢ(k′),−→x(0, n₁, n₂). Since Rᵢ(k) = Rᵢ[−→Q(k′)] = Qᵢ(k′) by definition, ExecURᵢ(k),−→x(0, n₁, n₂) is ExecUQᵢ(k′),−→x(0, n₁, n₂).

Therefore, ExecUP,−→x(k, n₁, n₂) is equivalent to ExecURᵢ(k),−→x(0, n₁, n₂).

(3) From the left-hand side to the right-hand side. Assume the left-hand side. By (2), ExecUP(k),−→x(0, n₁, n₂) is true. Hence ExecP(k),−→x(n₁, n₂) is true.

From the right-hand side to the left-hand side. Assume the right-hand side. Then ExecUP(k),−→x(m, n₁, n₂) is true for some m. It is the same as ExecUP(k),−→x(0, n₁, n₂), since P(k) contains no procedure names. Therefore, by (2), ExecUP,−→x(k, n₁, n₂) is true. □

Representation Lemma for Assertions

The next lemma shows that the pure formulas EEvale,−→x(n, k), BEvalA,−→x(n), HEvalA(m) and EvalA,−→x(n, m) actually have the meaning we explained above. The next lemma can be proved in the same way as [14].

Lemma 5.2.7 (Representation of Assertions) (1) EEvale,−→x(n, k) is true if and only if ∃s(Storecode−→x(n, s) ∧ ⟦e⟧s = k) holds.

(2) BEvalA,−→x(n) is true if and only if ∃s(Storecode−→x(n, s) ∧ ⟦A⟧s = True) holds.

(3) If Heapcode(m, h) holds, then ⟦HEvalA(m)⟧s = ⟦A⟧(s,h) also holds.

(4) EvalA,−→x(n, m) is true if and only if ∃sh(Storecode−→x(n, s) ∧ Heapcode(m, h) ∧ ⟦A⟧(s,h) = True) holds.


Proof. (1) This is proved similarly to (2).

(2) The left-hand side is equivalent to ∀s(⟦∃−→x(Store−→x(n) ∧ A)⟧s = True). It is equivalent to ∃s(⟦Store−→x(n) ∧ A⟧s = True). Hence it is equivalent to ∃s(⟦Store−→x(n)⟧s = True ∧ ⟦A⟧s = True). Since ⟦Store−→x(n)⟧s = True is equivalent to Storecode−→x(n, s), we have the claim.

(3) By induction on A, we will show that ∀mh(Heapcode(m, h) → (⟦HEvalA(m)⟧s = True ↔ ⟦A⟧(s,h) = True)) holds. Assume that Heapcode(m, h) holds. We will show that ⟦HEvalA(m)⟧s = True ↔ ⟦A⟧(s,h) = True holds. We consider cases according to A.

Case 1. A is a pure formula. We have HEvalA(m) = A and the claim holds.

Case 2. A = emp. By definition, HEvalA(m) = ¬∃xyLookup(m, x, y). By definition, ⟦emp⟧(s,h) = True if and only if Dom(h) = ∅. Dom(h) = ∅ if and only if ¬∃xyLookup(m, x, y) is true. Hence we have the claim.

Case 3. A = e₁ ↦ e₂. Let kᵢ be ⟦eᵢ⟧s. All of ⟦HEvalA(m)⟧s = True, k₁ > 0 ∧ ∀xy(Lookup(m, x, y) ↔ x = k₁ ∧ y = k₂), h = ∅[k₁ := k₂], and ⟦A⟧(s,h) = True are equivalent. Hence the claim holds.

Case 4. A = A₁ ∗ A₂.

From the left-hand side to the right-hand side. Assume ⟦HEvalA(m)⟧s = True. We will show ⟦A⟧(s,h) = True. We have ⟦Separate(m, y₁, y₂) ∧ HEvalA₁(y₁) ∧ HEvalA₂(y₂)⟧s[y₁:=m₁,y₂:=m₂] = True for some m₁, m₂. Then ⟦HEvalAᵢ(mᵢ)⟧s = True. We have hᵢ such that Heapcode(mᵢ, hᵢ) holds. Then h = h₁ + h₂. By induction hypothesis with ⟦HEvalAᵢ(mᵢ)⟧s = True, we have ⟦Aᵢ⟧(s,hᵢ) = True. Hence ⟦A⟧(s,h) = True.

From the right-hand side to the left-hand side. Assume ⟦A⟧(s,h) = True. We will show ⟦HEvalA(m)⟧s = True. There are h₁, h₂ such that h = h₁ + h₂ and ⟦Aᵢ⟧(s,hᵢ) = True. We have m₁, m₂ such that Heapcode(mᵢ, hᵢ) holds. Then Separate(m, m₁, m₂) is true. By induction hypothesis for Aᵢ, we have ⟦HEvalAᵢ(mᵢ)⟧s = True. Hence ⟦HEvalA(m)⟧s = True holds by taking y₁ = m₁ and y₂ = m₂.


Case 5. A = A₁ −∗ A₂.

From the left-hand side to the right-hand side. Assume ⟦HEvalA(m)⟧s = True. We will show ⟦A⟧(s,h) = True. Assume ⟦A₁⟧(s,h₁) = True and h + h₁ exists. We will show ⟦A₂⟧(s,h+h₁) = True. We have m₁, m₂ such that Heapcode(m₁, h₁) and Heapcode(m₂, h + h₁) hold. By induction hypothesis for A₁, we have ⟦HEvalA₁(m₁)⟧s = True. We also have that Separate(m₂, m, m₁) is true. From the assumption, we have ⟦HEvalA₂(m₂)⟧s = True. By induction hypothesis for A₂, we have ⟦A₂⟧(s,h+h₁) = True.

From the right-hand side to the left-hand side. Assume ⟦A⟧(s,h) = True. We will show ⟦HEvalA(m)⟧s = True. Fix m₁, m₂ and assume ⟦HEvalA₁(m₁) ∧ Separate(m₂, m, m₁)⟧s = True. We will show ⟦HEvalA₂(m₂)⟧s = True. We have h₁, h₂ such that Heapcode(m₁, h₁) and Heapcode(m₂, h₂) hold. Then h₂ = h + h₁. By induction hypothesis for A₁, we have ⟦A₁⟧(s,h₁) = True. From the assumption, we have ⟦A₂⟧(s,h₂) = True. By induction hypothesis for A₂, we have ⟦HEvalA₂(m₂)⟧s = True.

The cases A = ¬A₁, A₁ ∧ A₂, A₁ ∨ A₂, A₁ → A₂, ∀xA₁, ∃xA₁ are proved straightforwardly by using the induction hypothesis.

(4) The right-hand side is equivalent to ∃h(Heapcode(m, h) ∧ ∃s(Storecode−→x(n, s) ∧ ⟦A⟧(s,h) = True)). Since ⟦A⟧(s,h) = ⟦HEvalA(m)⟧s under Heapcode(m, h) by (3), it is equivalent to ∃h(Heapcode(m, h) ∧ ∃s(Storecode−→x(n, s) ∧ ⟦HEvalA(m)⟧s = True)). It is equivalent to ∃s(IsHeap(m) ∧ Storecode−→x(n, s) ∧ ⟦HEvalA(m)⟧s = True). It can be shown from (2) that BEvalHEvalA(m),−→x(n) = True if and only if ∃s(Storecode−→x(n, s) ∧ ⟦HEvalA(m)⟧s = True) holds. Hence it is equivalent to IsHeap(m) ∧ BEvalHEvalA(m),−→x(n). By definition, it is equivalent to ∃−→x(IsHeap(m) ∧ Store−→x(n) ∧ HEvalA(m)), which is the left-hand side by the definition of EvalA,−→x. □


Representation Lemma for Programs

The next lemma shows that the pure formula ExecUP,−→x(k, n₁, n₂) actually has the meaning we explained above.

Lemma 5.2.8 (Representation of Programs) (1) If ExecUP,−→x(k, n₁, n₂) is true, then for all r₁ such that Result−→x(n₁, r₁) holds, we have r₂ such that Result−→x(n₂, r₂) holds and ⟦P(k)⟧⁻(r₁) ∋ r₂.

(2) If ⟦P(k)⟧⁻(r₁) ∋ r₂, Result−→x(n₁, r₁), and Result−→x(n₂, r₂) hold, then ExecUP,−→x(k, n₁, n₂) is true.

Proof.

(1) We will prove it by induction on (k, P). We will consider the cases of P.

Case 1. P is x := e.

Assume that ExecUx:=e,−→x(k, n₁, n₂) is true and r₁ is given. If n₁ = 0, then n₂ = 0 and r₁ = abort, and by taking r₂ to be abort we have r₂ ∈ ⟦(x := e)(k)⟧⁻(r₁). Assume n₁ > 0. Then Pair₁(n₁, y₁, z₁), EEvale,−→x(y₁, w), ChangeStore−→x,x(y₁, w, y₂) and Pair₁(n₂, y₂, z₁) are true for some values for y₁, y₂, z₁, w. Now we have s, h such that r₁ = (s, h). Let n be the value of w. Let r₂ = (s[x := n], h). Then we have ⟦e⟧s = n. Therefore Result−→x(n₂, r₂) and r₂ ∈ ⟦(x := e)(k)⟧⁻(r₁).

Case 2. P is P₁; P₂.

Assume that ExecUP₁;P₂,−→x(k, n₁, n₂) is true and r₁ is given. We have z such that ExecUP₁,−→x(k, n₁, z) ∧ ExecUP₂,−→x(k, z, n₂) is true. By induction hypothesis for P₁, we have r₃ such that Result−→x(z, r₃) and r₃ ∈ ⟦P₁(k)⟧⁻(r₁) hold. By induction hypothesis for P₂, we have r₂ such that Result−→x(n₂, r₂) and r₂ ∈ ⟦P₂(k)⟧⁻(r₃). Therefore r₂ ∈ ⟦(P₁; P₂)(k)⟧⁻(r₁).

Case 3. P is if (b) then (P₁) else (P₂).

Assume that ExecUif (b) then (P₁) else (P₂),−→x(k, n₁, n₂) is true and r₁ is given. If n₁ = 0, then n₂ = 0 and r₁ = abort, and by taking r₂ to be abort we have ⟦(if (b) then (P₁) else (P₂))(k)⟧⁻(r₁) ∋ r₂. Assume n₁ > 0. We have x, y such that Pair₁(n₁, x, y) is true. Let r₁ = (s, h). Then Storecode−→x(x, s) holds.

Case 3.1. ⟦b⟧s = True. By Lemma 5.2.7 (2), BEvalb,−→x(x) is true. Then ExecUP₁,−→x(k, n₁, n₂) is true. By induction hypothesis, we have r₂ such that Result−→x(n₂, r₂) and ⟦P₁(k)⟧⁻(r₁) ∋ r₂. By definition, ⟦(if (b) then (P₁) else (P₂))(k)⟧⁻(r₁) ∋ r₂.

Case 3.2. ⟦b⟧s = False. In the same way as above, we can show this case.

Case 4. P is while (b) do (P₁).

Assume that ExecUwhile (b) do (P₁),−→x(k, n₁, n₂) is true and r₁ is given. If n₁ = 0, then n₂ = 0 and r₁ = abort, and by taking r₂ to be abort we have ⟦(while (b) do (P₁))(k)⟧⁻(r₁) ∋ r₂. Assume n₁ > 0. Let y₁, z₁ be such that Pair₁(n₁, y₁, z₁) is true. We have w = ⟨w₁, . . . , wₙ⟩ with w₁ = n₁ and wₙ = n₂, and ExecUP₁,−→x(k, wᵢ, wᵢ₊₁) for all 1 ≤ i < n. We also have either wₙ = 0, or wₙ > 0 and the fact that ¬BEvalb,−→x(yₙ) is true where Pair₁(n₂, yₙ, zₙ) is true for some yₙ, zₙ. By repeatedly using the induction hypothesis for P₁, we have r′₁, . . . , r′ₙ such that r′₁ = r₁, Result−→x(wᵢ, r′ᵢ) for all 1 ≤ i ≤ n, and ⟦P₁(k)⟧⁻(r′ᵢ) ∋ r′ᵢ₊₁ for all 1 ≤ i < n. By Lemma 5.2.7 (2), we also have either r′ₙ = abort, or ⟦b⟧sₙ = False and r′ₙ ≠ abort where r′ₙ = (sₙ, hₙ). Let r₂ = r′ₙ. By Proposition 4.2.4, ⟦while (b) do (P₁(k))⟧⁻(r₁) ∋ r₂. Then by definition, ⟦(while (b) do (P₁))(k)⟧⁻(r₁) ∋ r₂.

Case 5. P is skip.

Its proof is immediate.

Case 6. P is x := cons(e₁, e₂).

Assume that ExecUx:=cons(e₁,e₂),−→x(k, n₁, n₂) is true and r₁ is given. If n₁ = 0, then n₂ = 0 and r₁ = abort, and by taking r₂ to be abort we have r₂ ∈ ⟦(x := cons(e₁, e₂))(k)⟧⁻(r₁). Assume n₁ > 0. We have s, h such that r₁ = (s, h). Let n be the value of w in the definition of ExecUx:=cons(e₁,e₂),−→x(k, n₁, n₂). Let r₂ = (s[x := n], h[n := ⟦e₁⟧s, n + 1 := ⟦e₂⟧s]). Then n, n + 1 ∉ Dom(h). Therefore r₂ ∈ ⟦(x := cons(e₁, e₂))(k)⟧⁻(r₁). We also have Result−→x(n₂, r₂).


Case 7. P is x := [e].

Assume that ExecUx:=[e],−→x(k, n₁, n₂) is true and r₁ is given. If n₁ = 0, then n₂ = 0 and r₁ = abort, and by taking r₂ to be abort we have r₂ ∈ ⟦(x := [e])(k)⟧⁻(r₁). Assume n₁ > 0. We have s, h such that r₁ = (s, h). Take r₂ such that either r₂ = (s[x := h(⟦e⟧s)], h) and ⟦e⟧s ∈ Dom(h), or r₂ = abort and ⟦e⟧s ∉ Dom(h). Then r₂ ∈ ⟦(x := [e])(k)⟧⁻(r₁). We also have Result−→x(n₂, r₂).

Case 8. P is [e₁] := e₂.

Assume that ExecU[e₁]:=e₂,−→x(k, n₁, n₂) is true and r₁ is given. If n₁ = 0, then n₂ = 0 and r₁ = abort, and by taking r₂ to be abort we have r₂ ∈ ⟦([e₁] := e₂)(k)⟧⁻(r₁). Assume n₁ > 0. We have s, h such that r₁ = (s, h). Take r₂ such that either ⟦e₁⟧s ∈ Dom(h) and r₂ = (s, h[⟦e₁⟧s := ⟦e₂⟧s]), or ⟦e₁⟧s ∉ Dom(h) and r₂ = abort. Then r₂ ∈ ⟦([e₁] := e₂)(k)⟧⁻(r₁). We also have Result−→x(n₂, r₂).

Case 9. P is dispose(e).

Assume that ExecUdispose(e),−→x(k, n₁, n₂) is true and r₁ is given. If n₁ = 0, then n₂ = 0 and r₁ = abort, and by taking r₂ to be abort we have r₂ ∈ ⟦(dispose(e))(k)⟧⁻(r₁). Assume n₁ > 0. We have s, h such that r₁ = (s, h). Take r₂ such that either r₂ = (s, h|Dom(h)−{⟦e⟧s}) and ⟦e⟧s ∈ Dom(h), or r₂ = abort and ⟦e⟧s ∉ Dom(h). Then r₂ ∈ ⟦(dispose(e))(k)⟧⁻(r₁). We also have Result−→x(n₂, r₂).

Case 10. P is Rᵢ. We consider cases according to k.

Case 10.1. k = 0.

Assume ExecURᵢ,−→x(0, n₁, n₂) is true and r₁ is given. By definition, n₁ = n₂ = 0 and r₁ = abort. Let r₂ be abort. Then the claim holds.

Case 10.2. k = k′ + 1.

By definition, ExecURᵢ,−→x(k′ + 1, n₁, n₂) = ExecUQᵢ,−→x(k′, n₁, n₂) and Qᵢ(k′) = Rᵢ(k′+1). By induction hypothesis for k′, the claim holds for ExecUQᵢ,−→x(k′, n₁, n₂) and ⟦Qᵢ(k′)⟧⁻(r₁) ∋ r₂. Hence the claim holds for ExecURᵢ,−→x(k′ + 1, n₁, n₂) and ⟦Rᵢ(k′+1)⟧⁻(r₁) ∋ r₂.


(2) We will prove it by induction on (k, P). We will consider the cases of P.

Case 1. P is x := e.

Assume the conditions. We will show that ExecUx:=e,−→x(k, n₁, n₂) is true. If r₁ = abort, then r₂ = abort and we have n₁ = n₂ = 0, and hence ExecUx:=e,−→x(k, n₁, n₂) is true. Now assume r₁ = (s, h). We have Result−→x(n₁, r₁) and Result−→x(n₂, r₂). Then n₁ > 0. We have y₁, z₁ such that Pair₁(n₁, y₁, z₁) is true and Storecode−→x(y₁, s) and Heapcode(z₁, h) hold. We also have EEvale,−→x(y₁, n) where n = ⟦e⟧s.

Then r₂ = (s₁, h) where s₁ = s[x := n]. Then we have y₂, z₂ such that Storecode−→x(y₂, s₁) and Heapcode(z₂, h) hold. Then ChangeStore−→x,x(y₁, n, y₂) is true and z₂ = z₁. Then by definition, ExecUx:=e,−→x(k, n₁, n₂) is true.

Case 2. P is P₁; P₂.

Assume the conditions. We will show that ExecUP,−→x(k, n₁, n₂) is true. We have ⟦(P₁; P₂)(k)⟧⁻(r₁) ∋ r₂. By definition, we have r₃ such that r₃ ∈ ⟦P₁(k)⟧⁻(r₁) and r₂ ∈ ⟦P₂(k)⟧⁻(r₃). Suppose z is such that Result−→x(z, r₃) holds. By induction hypothesis for P₁, ExecUP₁,−→x(k, n₁, z) is true. By induction hypothesis for P₂, ExecUP₂,−→x(k, z, n₂) is true. Hence by definition ExecUP,−→x(k, n₁, n₂) is true.

Case 3. P is if (b) then (P₁) else (P₂).

Assume the conditions. We will show that ExecUP,−→x(k, n₁, n₂) is true. We have ⟦P(k)⟧⁻(r₁) ∋ r₂. If r₁ = abort, then r₂ = abort and we have n₁ = n₂ = 0, and hence ExecUP,−→x(k, n₁, n₂) is true. Assume r₁ = (s, h). Then n₁ > 0. We have y₁, z₁ such that Pair₁(n₁, y₁, z₁) is true.

Case 3.1. BEvalb,−→x(y₁) is true. By Lemma 5.2.7 (2), Storecode−→x(y₁, s) and ⟦b⟧s = True hold. Then by definition, ⟦P₁(k)⟧⁻(r₁) ∋ r₂. By induction hypothesis, ExecUP₁,−→x(k, n₁, n₂) is true. Then by definition, ExecUP,−→x(k, n₁, n₂) is true.

Case 3.2. BEvalb,−→x(y₁) is false. In the same way as above we can show the claim.

Case 4. P is while (b) do (P₁).

Assume the conditions. We have ⟦(while (b) do (P₁))(k)⟧⁻(r₁) ∋ r₂. If r₁ = abort, then r₂ = abort and we have n₁ = n₂ = 0. Hence ExecUwhile (b) do (P₁),−→x(k, n₁, n₂) is true. Now assume r₁ = (s, h). By Proposition 4.2.4, we have (s₁, h₁), . . . , (sₘ₋₁, hₘ₋₁), rₘ such that (s, h) = (s₁, h₁), for all i = 1, . . . , m − 2, ⟦P₁(k)⟧⁻((sᵢ, hᵢ)) ∋ (sᵢ₊₁, hᵢ₊₁) and ⟦b⟧sᵢ = True, ⟦P₁(k)⟧⁻((sₘ₋₁, hₘ₋₁)) ∋ rₘ, and either ⟦b⟧sₘ₋₁ = True and rₘ = abort, or rₘ = (sₘ, hₘ) and ⟦b⟧sₘ = False for some sₘ, hₘ.

Then we have z₁, . . . , zₘ such that for all i = 1, . . . , m − 1, Result−→x(zᵢ, (sᵢ, hᵢ)) holds, and either zₘ = 0 or Result−→x(zₘ, (sₘ, hₘ)) holds. Then z₁ = n₁ and zₘ = n₂. We also have y₁, . . . , yₘ, y′₁, . . . , y′ₘ such that for all i = 1, . . . , m − 1, BEvalb,−→x(yᵢ) is true where Pair₁(zᵢ, yᵢ, y′ᵢ) is true, and either zₘ = 0, or Pair₁(zₘ, yₘ, y′ₘ) is true and BEvalb,−→x(yₘ) is false. For all i = 1, . . . , m − 1, by induction hypothesis, ExecUP₁,−→x(k, zᵢ, zᵢ₊₁) is true. Then by definition, ExecUwhile (b) do (P₁),−→x(k, n₁, n₂) is true.

Case 5. P is skip.

Its proof is immediate.

Case 6. P is x := cons(e₁, e₂).

Assume the conditions. We will show that ExecUx:=cons(e₁,e₂),−→x(k, n₁, n₂) is true. We have r₂ ∈ ⟦(x := cons(e₁, e₂))(k)⟧⁻(r₁). If r₁ = abort, then r₂ = abort and we have n₁ = n₂ = 0, and hence ExecUx:=cons(e₁,e₂),−→x(k, n₁, n₂) is true. Now assume r₁ = (s, h). Then r₂ = (s₁, h₁) where s₁ = s[x := n], h₁ = h[n := ⟦e₁⟧s, n + 1 := ⟦e₂⟧s] and n, n + 1 ∉ Dom(h). Then n₁ > 0. We also have y₁, z₁, y₂, z₂ such that Pair₁(n₁, y₁, z₁) and Pair₁(n₂, y₂, z₂) are true. Then Storecode−→x(y₁, s), Heapcode(z₁, h), Storecode−→x(y₂, s₁) and Heapcode(z₂, h₁) hold. Let w = n, w₁ = ⟦e₁⟧s and w₂ = ⟦e₂⟧s. Then by Lemma 5.2.7 (1), EEvale₁,−→x(y₁, w₁), EEvale₂,−→x(y₁, w₂) and New₂(w, z₁) are true. Then ChangeStore−→x,x(y₁, w, y₂), ∀xy(x ≠ w ∧ x ≠ w + 1 → (Lookup(z₁, x, y) ↔ Lookup(z₂, x, y))), and Lookup(z₂, w, w₁) ∧ Lookup(z₂, w + 1, w₂) are true. Then by definition, ExecUx:=cons(e₁,e₂),−→x(k, n₁, n₂) is true.

Case 7. P is x := [e].

Assume the conditions. We will show that ExecUx:=[e],−→x(k, n₁, n₂) is true. We have r₂ ∈ ⟦(x := [e])(k)⟧⁻(r₁). If r₁ = abort, then r₂ = abort and we have n₁ = n₂ = 0, and hence ExecUx:=[e],−→x(k, n₁, n₂) is true. Now assume r₁ = (s, h). Then n₁ > 0. We have y₁, z₁ such that Pair₁(n₁, y₁, z₁) is true and Storecode−→x(y₁, s) and Heapcode(z₁, h) hold. Let w be ⟦e⟧s.

Assume that Domain(w, z₁) is true. By Lemma 5.2.7 (1), EEvale,−→x(y₁, w) is true. Then ⟦e⟧s ∈ Dom(h). Then r₂ = (s₁, h₁) where h(w) = w₁, s₁ = s[x := w₁] and h₁ = h. Then we have y₂ such that Pair₁(n₂, y₂, z₁) is true and Storecode−→x(y₂, s₁) and Heapcode(z₁, h₁) hold. Then Lookup(z₁, w, w₁) and ChangeStore−→x,x(y₁, w₁, y₂) are true. Now assume that ¬Domain(w, z₁) is true. Then ⟦e⟧s ∉ Dom(h). Then r₂ = abort and hence n₂ = 0. Then by definition, ExecUx:=[e],−→x(k, n₁, n₂) is true in both cases.

Case 8. P is [e₁] := e₂.

Assume the conditions. We will show that ExecU[e₁]:=e₂,−→x(k, n₁, n₂) is true. We have r₂ ∈ ⟦([e₁] := e₂)(k)⟧⁻(r₁). If r₁ = abort, then r₂ = abort and we have n₁ = n₂ = 0, and hence ExecU[e₁]:=e₂,−→x(k, n₁, n₂) is true. Now assume r₁ = (s, h). Then n₁ > 0. We have y₁, z₁ such that Pair₁(n₁, y₁, z₁) is true and Storecode−→x(y₁, s) and Heapcode(z₁, h) hold. Let w be ⟦e₁⟧s and w₁ be ⟦e₂⟧s. Assume that Domain(w, z₁) is true. By Lemma 5.2.7 (1), EEvale₁,−→x(y₁, w) and EEvale₂,−→x(y₁, w₁) are true. Then we have w ∈ Dom(h). Then r₂ = (s₁, h₁) where s₁ = s and h₁ = h[w := w₁]. Then we have z₂ such that Pair₁(n₂, y₁, z₂) is true and Storecode−→x(y₁, s₁) and Heapcode(z₂, h₁) hold. Then ChangeHeap(z₁, w, w₁, z₂) is true. Now assume that ¬Domain(w, z₁) is true. Then ⟦e₁⟧s ∉ Dom(h). Then r₂ = abort and hence n₂ = 0. Then by definition, ExecU[e₁]:=e₂,−→x(k, n₁, n₂) is true in both cases.

Case 9. P is dispose(e).

Assume the conditions. We will show that ExecUdispose(e),−→x(k, n₁, n₂) is true. We have r₂ ∈ ⟦(dispose(e))(k)⟧⁻(r₁). If r₁ = abort, then r₂ = abort and we have n₁ = n₂ = 0, and hence ExecUdispose(e),−→x(k, n₁, n₂) is true. Now assume r₁ = (s, h). Then n₁ > 0. We have y₁, z₁ such that Pair₁(n₁, y₁, z₁) is true and Storecode−→x(y₁, s) and Heapcode(z₁, h) hold. Let w be ⟦e⟧s. By Lemma 5.2.7 (1), EEvale,−→x(y₁, w) is true. Assume that Domain(w, z₁) is true. Then we have ⟦e⟧s ∈ Dom(h). Then r₂ = (s₁, h₁) where s₁ = s and h₁ = h|Dom(h)−{⟦e⟧s}. Then we have z₂ such that Pair₁(n₂, y₁, z₂) is true and Storecode−→x(y₁, s₁) and Heapcode(z₂, h₁) hold. Then ∀xy(Lookup(z₁, x, y) ∧ x ≠ w ↔ Lookup(z₂, x, y)) is true. Now assume that ¬Domain(w, z₁) is true. Then ⟦e⟧s ∉ Dom(h). Then r₂ = abort and hence n₂ = 0. Then by definition, ExecUdispose(e),−→x(k, n₁, n₂) is true in both cases.

Case 10. P is Ri. We consider cases according to k.

Case 10.1. k = 0.

Assume the conditions. By Proposition 4.2.5, ⟦Ri(0)⟧−(abort) ∋ abort and for all s, h, ⟦Ri(0)⟧−((s, h)) = ∅. Hence we have r₁ = r₂ = abort. Since Result−→x(n₁, r₁) and Result−→x(n₂, r₂) hold, by definition n₁ = n₂ = 0. Then by definition ExecURi,−→x(0, n₁, n₂) is true.

Case 10.2. k = k′ + 1.

This case is proved in a similar way to that in (1). 2

The next lemma shows that the pure formula ExecP,−→x(n₁, n₂) indeed has the meaning we explained above.

Lemma 5.2.9 (1) If ExecP,−→x(n₁, n₂) is true, then for all r₁ such that Result−→x(n₁, r₁), we have r₂ such that Result−→x(n₂, r₂) and ⟦P⟧(r₁) ∋ r₂.

(2) If ⟦P⟧(r₁) ∋ r₂, Result−→x(n₁, r₁), and Result−→x(n₂, r₂) hold, then ExecP,−→x(n₁, n₂) is true.

Proof.

(1) By using Lemma 5.2.8 (1), we can prove the claim in a similar way to (2).

(2) Assume ⟦P⟧(r₁) ∋ r₂, Result−→x(n₁, r₁), and Result−→x(n₂, r₂). Then we have k such that ⟦P(k)⟧−(r₁) ∋ r₂. From Lemma 5.2.8 (2), ExecUP,−→x(k, n₁, n₂) is true. Hence ExecP,−→x(n₁, n₂) is true. □


5.2 Expressiveness 85

Weakest Precondition

We define the weakest precondition for a program and a postcondition. We also define a formula WP,A(−→x) and show that it is the weakest precondition of the program P and the postcondition A.

Definition 5.2.10 For a program P and an assertion A, the weakest precondition for P and A under the standard interpretation is defined as the set { (s, h) | ∀r(⟦P⟧((s, h)) ∋ r → r ≠ abort ∧ ⟦A⟧r = True) }.
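Writing wp(P, A) for this set (the abbreviation is introduced here only for display purposes), the definition reads:

```latex
\mathrm{wp}(P,A) \;=\; \bigl\{\, (s,h) \;\big|\; \forall r\,\bigl( \llbracket P \rrbracket((s,h)) \ni r \;\rightarrow\; r \neq \mathbf{abort} \,\wedge\, \llbracket A \rrbracket r = \mathrm{True} \bigr) \,\bigr\}
```

A state belongs to the weakest precondition exactly when every possible outcome of P from that state is a non-aborting state satisfying A.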

Since we have defined Exec and have shown Lemma 5.2.9 for our extended pro-gramming language with procedure calls, we can show the existence of the assertionof the weakest precondition in the same way as [14].

Definition 5.2.11 We define the formula WP,A(−→x) for the program P and the assertion A. We fix some sequence −→x of the variables that includes EFV(P) ∪ FV(A).

WP,A(−→x) = ∀xyzw(Store−→x(x) ∧ Heap(y) ∧ Pair(z, x, y) ∧ ExecP,−→x(z, w) → w > 0 ∧ ∃y₁z₁(Pair(w, y₁, z₁) ∧ EvalA,−→x(y₁, z₁))).

WP,A(−→x) means the weakest precondition for P and A. That is, WP,A(−→x) gives the weakest assertion W such that {W}P{A} is true. Note that all the free variables in WP,A(−→x) are −→x and they appear only in Store−→x(x).

The next lemma says that WP,A(−→x) indeed describes the weakest precondition for P and A.

Lemma 5.2.12 (1) {WP,A(−→x)}P{A} is true.

(2) If ⟦P⟧((s, h)) ∋ r implies r ≠ abort and ⟦A⟧r = True for all r, then ⟦WP,A(−→x)⟧(s,h) = True.

Proof. (1) Assume JWP,A(−→x )K(s,h) = True and JPK((s, h)) ∋ r. We will show

r = abort and JAKr = True. We have n ,n ,n and m such that Result−→x (n , (s, h))and Result−→x (n , r) hold and Pair (n , n,m) is true. We have JStore−→x (n)K(s,h) andJHeap(m)K(s,h) are true and Storecode−→x (n, s) and Heapcode(m, h) hold. By Lemma


5.2.9 (2), ExecP,−→x (n , n ) is true.

By letting x = n, y = m, z = n ,w = n in the definition of WP,A(−→x ),

from JWP,A(−→x )K(s,h) = True, we have n > ∧ ∃y z (Pair (n , y , z ) ∧

EvalA,−→x (y , z )). By n > , r = abort. Let r = (s , h ). We have n′,m′

such that Pair (n , n′,m′) and EvalA,−→x (n′,m′) are true. By Lemma 5.2.7 (4), wehave s′, h′ such that Storecode−→x (n′, s′) ∧ Heapcode(m′, h′) ∧ JAK(s′,h′) holds. SinceStorecode−→x (n′, s ) andHeapcode(m′, h ) hold, we have s′ =−→x s and h′ = h . HenceJAK(s ,h ) = True, that is, JAKr = True.

(2) Assume that for all r, JPK((s, h)) ∋ r implies r = abort ∧ JAKr = True. Wewill show JWP,A(

−→x )K(s,h) = True. Fix n,m, n , n and assume Store−→x (n), Heap(m),Pair (n , n,m), and ExecP,−→x (n , n ) are true at (s, h). We will show that n > and∃y z (Pair (n , y , z ) ∧ EvalA,−→x (y , z )) is true.

We have Result−→x (n , (s, h)). By Lemma 5.2.9 (1) with ExecP,−→x (n , n ), we haver such that JPK((s, h)) ∋ r and Result−→x (n , r ). By the assumption, JAKr =

True. Let r be (s , h ). We have n′,m′ such that Pair (n , n′,m′) is true. ThenStorecode−→x (n′, s ) and Heapcode(m′, h ) hold. Since the right-hand side of Lemma5.2.7 (4) holds by letting s = s and h = h , we have EvalA,−→x (n′,m′). Hence∃y z (Pair (n , y , z ) ∧ EvalA,−→x (y , z )) is true by taking y = n′ and z = m′.

Therefore, ∀xyzw′(Store−→x (x) ∧ Heap(y) ∧ Pair (z, x, y) ∧ ExecP,−→x (z,w′) →w′ > ∧ ∃y z (Pair (w′, y , z ) ∧ EvalA,−→x (y , z ))) is true at (s, h), that is,JWP,A(

−→x )K(s,h) = True. 2

We present the main theorem about expressiveness.

Theorem 5.2.13 (Expressiveness) Our assertion language is expressive for our programming language under the standard interpretation. Namely, for every program P and assertion A, there is a formula W such that ⟦W⟧(s,h) is true if and only if (s, h) is in the weakest precondition for P and A under the standard interpretation.

Proof. Since Lemma 5.2.12 (1) and (2) show that WP,A(−→x) defines the weakest precondition for P and A under the standard interpretation, the weakest precondition is definable in our language. □
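The expressiveness result is about arithmetical definability of wp, but the underlying notion is easy to model executably. The following toy Python sketch (my own illustration of Definition 5.2.10, not the arithmetical coding used in this thesis) restricts attention to deterministic, abort-free programs: states are dictionaries, programs are state transformers, assertions are predicates on states, and the semantic weakest precondition collapses to function composition.

```python
# Toy semantic model of weakest preconditions (cf. Definition 5.2.10),
# restricted to deterministic, abort-free programs: a program maps a
# state to the unique resulting state, so wp(P, A) holds at s exactly
# when A holds after running P from s.

def wp(prog, post):
    """Weakest precondition of program `prog` w.r.t. postcondition `post`."""
    return lambda s: post(prog(dict(s)))  # copy s so the caller's state survives

def assign(x, e):
    """The program x := e, where e is a function of the state."""
    def run(s):
        s[x] = e(s)
        return s
    return run

def seq(p1, p2):
    """Sequential composition p1; p2."""
    return lambda s: p2(p1(s))

# wp(x := x + 1, {x > 0}) holds exactly on states with x > -1:
pre = wp(assign("x", lambda s: s["x"] + 1), lambda s: s["x"] > 0)
print(pre({"x": 0}), pre({"x": -1}))  # True False

# wp composes over sequencing: wp(P1; P2, A) = wp(P1, wp(P2, A)).
p = seq(assign("x", lambda s: s["x"] + 1), assign("y", lambda s: 2 * s["x"]))
print(wp(p, lambda s: s["y"] > 0)({"x": 0}))  # True
```

The point of the sketch is only the specification: once abort and nondeterminism are added (as in the thesis's semantics ⟦P⟧), wp can no longer be computed by running the program once, which is why WP,A(−→x) must instead be coded arithmetically.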


5.3 Completeness

This section is the most important section of this paper. It shows that our system is complete. Although the proof technique here is inspired by [1], we introduce some important concepts to show the completeness of our system. We also define the strongest postcondition in the same way as the weakest precondition, in order to follow a proof structure similar to that of [1].

For a judgement Γ ⊢ {A}P{B} where Γ is empty, we simply write ⊢ {A}P{B}.

Strongest Postcondition

{A}P{True} is true when P does not abort at any state for which A is true. We call {A}P{True} the abort-free condition for A and P.

The set S of states is called the strongest postcondition for A and P if
(1) for all r, r′, ⟦A⟧r = True and ⟦P⟧(r) ∋ r′ imply r′ ≠ abort and r′ ∈ S;
(2) for all S′, we have S ⊆ S′ whenever, for all r, r′, ⟦A⟧r = True and ⟦P⟧(r) ∋ r′ imply r′ ≠ abort and r′ ∈ S′.
Note that the strongest postcondition S for A and P exists if P does not abort at any state that satisfies A. Since we have defined Exec and have shown Lemma 5.2.9 for our extended programming language with procedure calls, we can define the assertion that describes the strongest postcondition of A and P.
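Under the abort-freedom proviso, the two clauses pin S down as the image of A under P; writing sp(A, P) for this set (an abbreviation introduced here only for display), it reads:

```latex
\mathrm{sp}(A,P) \;=\; \bigl\{\, r' \;\big|\; \exists r\,\bigl( \llbracket A \rrbracket r = \mathrm{True} \,\wedge\, \llbracket P \rrbracket(r) \ni r' \bigr) \,\bigr\}
```

Clause (1) says this set is contained in S, and clause (2) says S is contained in every set with that property, so S is exactly the set of states reachable by P from a state satisfying A.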

Definition 5.3.1 We define the pure formula SA,P(−→x) for the assertion A and the program P. We fix some sequence −→x of the variables that includes EFV(P) ∪ FV(A).

SA,P(−→x) = ∃xyzw(EvalA,−→x(x, y) ∧ Pair(z, x, y) ∧ ExecP,−→x(z, w) ∧ ∃y₁z₁(Pair(w, y₁, z₁) ∧ Store−→x(y₁) ∧ Heap(z₁))).

SA,P(−→x) describes the strongest postcondition for A and P. That is, SA,P(−→x) gives the strongest assertion B such that {A}P{B} is true, under the condition that P does not abort at any state that satisfies A. Note that all the free variables in SA,P(−→x) are −→x


and they appear only in Store−→x (x).

Lemma 5.3.2 (1) If {A}P{True} is true then {A}P{SA,P(−→x)} is true.

(2) If ⟦SA,P(−→x)⟧(s′,h′) is true then there exist s, h such that ⟦A⟧(s,h) is true and ⟦P⟧((s, h)) ∋ (s′, h′).

Proof. (1) Assume that APTrue is true. Assume JAK(s,h) = True.Then JPK((s, h)) ∋ abort. Assume JPK((s, h)) ∋ (s′, h′). Let −→x =

x , . . . , xn. Let n be ⟨s(x ), . . . , s(xn)⟩ and m be ⟨(l , h(l )), . . . , (lm, h(lm))⟩where Dom(h) = li | i ≤ m . Let n be ⟨s′(x ), . . . , s′(xn)⟩, m be⟨(l′ , h′(l′ )), . . . , (l′m′ , h′(l′m′))⟩ where Dom(h′) = l′i | i ≤ m′ . Let p be(n ,m ) + and p be (n ,m ) + . Then Pair (p , n ,m ) and Pair (p , n ,m )

are true. Then Storecode−→x (n , s), Heapcode(m , h), Storecode−→x (n , s′) andHeapcode(m , h′) hold. Then we have Result−→x (p , (s, h)) and Result−→x (p , (s′, h′)).Then ∃sh(Storecode−→x (n , s) ∧ Heapcode(m , h) ∧ JAK(s,h) = True) holds. ByLemma 5.2.7 (4), EvalA,−→x (n ,m ) is true. By Lemma 5.2.9 (2), ExecP,−→x (p , p )

is true. By definition, we also have JStore−→x (n ) ∧Heap(m )K(s′,h′) is true.By taking y, z,w,w , y , z to be n ,m , p , p , n ,m , J∃yzww y z (EvalA,−→x (y, z) ∧Pair (w, y, z)∧ ExecP,−→x (w,w )∧ Pair (w , y , z )∧ Store−→x (y )∧Heap(z ))K(s′,h′) istrue. Then by definition, JSA,P(−→x )K(s′,h′) = True. Therefore, APSA,P(x) is true.

(2) Assume that JSA,P(−→x )K(s′,h′) is true. By definition, we have n ,m , p , p , n ,msuch that EvalA,−→x (n ,m ), Pair (p , n ,m ), ExecP,−→x (p , p ), Pair (p , n ,m ),JStore−→x (n )K(s′,h′) = True and JHeap(m )K(s′,h′) = True hold. Then we haveStorecode−→x (n , s′) andHeapcode(m , h′). By Lemma5.2.7 (4)with EvalA,−→x (n ,m ),we have s , h such that Storecode−→x (n , s ), Heapcode(m , h), and JAK(s ,h) = True.Then we have Result−→x (p , (s , h)). Since we have ExecP,−→x (p , p ), by Lemma 5.2.9(1), we have r such that Result−→x (p , r ) and JPK((s , h)) ∋ r . Since p > , wehave r = abort. Take s , h′ such that r = (s , h′). We define s to be [s , s′,−→x ]. Sinces =EFV(P) s , by Lemma 4.2.16 (2), JPK((s, h)) ∋ ([s , s,EFV(P)], h′).

We will show s′ = [s , s,EFV(P)]. We have s =−→x s′ since Storecode(n , s ) andStorecode(n , s′). Hence s′ =EFV(P) s . By the definition of s, we have s′ =(−→x )c s.Since s =−→x s by the definition of s, s =EFV(P)c s by Lemma 4.2.16 (3), and s =−→x s′,


we have s =−→x −EFV(P) s′. Hence s =EFV(P)c s′. Therefore s′ = [s , s,EFV(P)].

Hence JPK((s, h)) ∋ (s′, h′). We also have JAK(s,h) is true since s =−→x s . 2

Remark. The following is shown by Lemma 5.3.2 (2): if {A}P{B} is true, then SA,P(−→x) → B is true.

Auxiliary Lemmas

Lemma 5.3.3 A→∃x(Heap(x) ∧ HEvalA(x)) is true.

Proof. Assume ⟦A⟧(s,h) = True. Let m be ⟨(k₁, l₁), . . . , (kn, ln)⟩ where h(ki) = li for i = 1, . . . , n, Dom(h) = {ki | i = 1, . . . , n}, and 0 < k₁ < . . . < kn. Then Heapcode(m, h) holds. Then ⟦Heap(m)⟧(s,h) = True. Then by Lemma 5.2.7 (3), ⟦HEvalA(m)⟧(s,h) = True. By taking x to be m, ∃x(Heap(x) ∧ HEvalA(x)) is true. □

Lemma 5.3.4 If {A}P{B} is true, then A → WP,B(−→x) is true.

Proof. Assume JAK(s,h) = True. We will show JWP,B(−→x )K(s,h) = True.

Assume ⟦P⟧((s, h)) ∋ r. Since {A}P{B} is true, r ≠ abort and ⟦B⟧r = True. Hence ⟦P⟧((s, h)) ∋ r implies r ≠ abort and ⟦B⟧r = True. By Lemma 5.2.12 (2), we have ⟦WP,B(−→x)⟧(s,h) = True.

Hence A→WP,B(−→x ) is true. 2

The following lemma (1) shows that the inference rule (RULE 10: SUBSTITUTION RULE I) in [1] is derivable in our system.

Lemma 5.3.5 (1) If Γ ⊢ {A}P{B} is provable and −→u, −→w ∉ EFV(P), then Γ ⊢ {A[−→u := −→w]}P{B[−→u := −→w]} is provable.

(2) If Γ ⊢ {A}P{B} is true and −→u, −→w ∉ EFV(P), then Γ ⊢ {A[−→u := −→w]}P{B[−→u := −→w]} is true.

Proof.


(1) Assume Γ ⊢ APB and −→u ,−→w ∈ EFV(P). Then by (inv-conj), Γ ⊢A ∧ −→u = −→w PB ∧ −→u = −→w . We have B ∧ −→u = −→w → B[−→u := −→w ]. Then by(conseq), Γ ⊢ A ∧ −→u = −→w PB[−→u := −→w ]. Then by (exists), Γ ⊢ ∃−→u (A ∧−→u = −→w )PB[−→u := −→w ]. We have A[−→u := −→w ]→∃−→u (A ∧ −→u = −→w ). Then by(conseq), Γ ⊢ A[−→u := −→w ]PB[−→u := −→w ].

(2) Assume Γ ⊢ APB is true and−→u ,−→w ∈ EFV(P). Let A be A[−→u := −→w ]

and B be B[−→u := −→w ].

Assume JA K(s ,h ) = True and r ∈ JPK((s , h )). We will show r = abort andJB Kr = True.

Let s′ be s [−→u := s (−→w )]. Then JAK(s′,h ) = True. Since s =EFV(P) s′, by Lemma4.2.16 (1), we have r = abort. Let r be (s , h ) and s′ be [s , s′,EFV(P)]. By Lemma4.2.16 (2), we have (s′ , h ) ∈ JPK((s′, h )). Then JBK(s′ ,h ) = True.

We will show s′ = s [−→u := s (−→w )]. Since s′ = s [−→u := s (−→w )] by the defi-nition and s [−→u := s (−→w )] =EFV(P)c s [−→u := s (−→w )] by −→w ∈ EFV(P), we haves′ =EFV(P)c s [−→u := s (−→w )]. By combining it with s [−→u := s (−→w )] = [s , s [−→u :=

s (−→w )],EFV(P)] by −→u ∈ EFV(P), we have s [−→u := s (−→w )] = [s , s′,EFV(P)].Hence s [−→u := s (−→w )] = s′ by the definition of s′ .

Hence ⟦B⟧(s₁[−→u := s₁(−→w)], h₁) = True. Therefore ⟦B₁⟧(s₁, h₁) = True. □

Let V be ∪_{i=1}^{nproc} EFV(Ri). From now on, we assume that for the given assertions and programs under discussion, we fix some sequence −→x′ of variables that contains V, the free variables of the assertions, and the extended free variables of the programs. We next choose a sequence of mutually distinct variables −→y = y₁, . . . , yl such that {y₁, . . . , yl} = V. Then we choose a sequence of variables −→z = z₁, . . . , zl and a variable xh such that all the variables among them and −→x′ are mutually distinct. Finally, for the construction of the weakest preconditions and the strongest postconditions, we take −→x to be the union of −→x′, −→z, and xh.

The next definition plays a key role in our completeness proof.

Definition 5.3.6 We define Gi as −→y = −→z ∧ Heap(xh) ∧ WRi,True(−→x). We also define Fi as {Gi}Ri{SGi,Ri(−→x)}.


Here Gi serves three purposes. First, the expression −→y = −→z lets us capture complete information about a given store, an idea inspired by [1]. Second, the expression Heap(xh) lets us capture complete information about a given heap. Third, WRi,True(−→x) ensures abort-free execution of the program, which enables us to use the strongest postcondition.

Main Proofs

The following lemma is the key to proving the completeness theorem.

Lemma 5.3.7 If {A}P{B} is true, then F₁, . . . , Fnproc ⊢ {A}P{B} is provable.

Proof. We will prove it by induction on P, considering cases according to the form of P.

Case 1. P is x := e.

We will show that A → B[x := e] is true. Assume ⟦A⟧(s,h) = True. Let n be ⟦e⟧s. We have ⟦x := e⟧((s, h)) = {(s₁, h)} where s₁ = s[x := n]. Since {A}x := e{B} is true, ⟦B⟧(s₁,h) = True. Since ⟦B⟧(s₁,h) = ⟦B[x := e]⟧(s,h), we have ⟦B[x := e]⟧(s,h) = True. Hence A → B[x := e] is true.

By applying the rule (conseq) to the fact that A → B[x := e] is true and ⊢ {B[x := e]}x := e{B} from the axiom (assignment), we have ⊢ {A}x := e{B}. Therefore, F₁, . . . , Fnproc ⊢ {A}x := e{B}.
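For reference, the (assignment) axiom invoked here can be displayed with explicit Hoare-triple braces, together with a small instance:

```latex
\{B[x := e]\}\; x := e\; \{B\}
\qquad\text{for example}\qquad
\{x + 1 > 0\}\; x := x + 1\; \{x > 0\}
```

The instance arises by taking B to be x > 0 and substituting e = x + 1 for x in it.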

Case 2. P is if (b) then (P₁) else (P₂).

Assume that APB is true. First, we will show that A ∧ bP B is true. As-sume JA ∧ bK(s,h) = True and JP K((s, h)) ∋ r. We have JbKs = JbK(s,h) = Trueby Lemma 5.1.3. Hence JPK((s, h)) = JP K((s, h)) ∋ r. Then r = abort andJBKr = True. Hence A ∧ bP B is true.

Similarly A ∧ ¬bP B is true.

By induction hypothesis for P and P , we have F , . . . , Fnproc ⊢ A ∧ bP B andF , . . . , Fnproc ⊢ A ∧ ¬bP B. By the rule (if), therefore, we have F , . . . , Fnproc ⊢Aif (b) then (P ) else (P )B.


Case 3. P is while (b) do (P₁).

Assume that APB is true. Let C beWP,B(−→x ).

We will show that C ∧ bP C is true. Assume JC ∧ bK(s,h) = True andJP K((s, h)) ∋ r. We will show r = abort and JCKr = True. We have JbKs =JbK(s,h) = True by Lemma 5.1.3. By the definition of JPK, we have JPK((s, h)) ⊇JPK(r). Since CPB is true by Lemma 5.2.12 (1), from JCK(s,h) = True, we haveJPK((s, h)) ∋ abort. Hence r = abort. Assume JPK(r) ∋ r′. Then r′ ∈ JPK((s, h))and we have r′ = abort and JBKr′ = True. By Lemma 5.2.12 (2), we have JCKr =True. Hence C ∧ bP C is true.

By induction hypothesis for P , we have F , . . . , Fnproc ⊢ C ∧ bP C.

By Lemma 5.3.4 with APB being true, A→ C is true.

Wewill show thatC∧¬b→B is true. Assume JC ∧ ¬bK(s,h) = True. Wewill showJBK(s,h) = True. We have J¬bKs = J¬bK(s,h) = True. Hence JPK((s, h)) = (s, h).Since CPB is true by Lemma 5.2.12 (1), from JCK(s,h) = True and JPK((s, h)) =(s, h), we have JBK(s,h) = True. Hence C ∧ ¬b→ B is true.

Since F , . . . , Fnproc ⊢ C ∧ bP C, by the rule (while), we have F , . . . , Fnproc ⊢CPC ∧ ¬b. Since A → C and C ∧ ¬b → B are true, by the rule (conseq), wehave F , . . . , Fnproc ⊢ APB.

Case 4. P is P₁; P₂.

Assume that AP ; P B is true. Let P be P ; P andC beWP ,B(−→x ). By Lemma

5.2.12 (1), CP B is true.

Wewill show that AP C is true. Assume JAK(s,h) = True and JP K((s, h)) ∋ r.We will show r = abort and JCKr = True. Since APB is true, JPK((s, h)) ∋abort. Since JPK((s, h)) ⊇ JP K(r) by the definition of JPK, r = abort. We will showJCKr = True. Assume JP K(r) ∋ r . Then JPK((s, h)) ∋ r . Then we have r = abortand JBKr = True. By Lemma 5.2.12 (2), JCKr = True. Hence AP C is true.

By induction hypothesis for P and P , we have F , . . . , Fnproc ⊢ AP C andF , . . . , Fnproc ⊢ CP B. By the rule (comp), we have therefore F , . . . , Fnproc ⊢


AP ; P B.

Case 5. P is x := cons(e₁, e₂).

Assume {A}x := cons(e₁, e₂){B} is true. Let x′ be such that x′ ∉ FV(e₁, e₂, B) and C be ∀x′((x′ ↦ e₁, e₂) −∗ B[x := x′]).

We will show that A → C is true. Assume JAK(s,h) = True. Fix n. Lets′ = s[x′ := n]. We will show J(x′ 7→ e , e ) −∗ B[x := x′]K(s′,h) = True. AssumeJx′ 7→ e , e K(s′,h ) = True. Then h = ∅[n := Je Ks′ , n + := Je Ks′ ]. Assume thath+ h exists. Let h = h+ h . Now we will prove that JB[x := x′]K(s′,h ) = True.

Let s = s[x := n]. Since Jx := cons(e , e )K((s, h)) ∋ (s , h ) by definition, wehave JBK(s ,h ) = True. Since JB[x := x′]K(s′,h ) = JBK(s ,h ) by x′ ∈ FV(B), we haveJB[x := x′]K(s′,h ) = True. Therefore J(x′ 7→ e , e ) −∗ B[x := x′]K(s′,h) = True.

Hence J(x′ 7→ e , e ) −∗ B[x := x′]K(s′,h) = True for all n. HenceJ∀x′((x′ 7→ e , e ) −∗ B[x := x′])K(s,h) = True. Hence A→ C is true.

Since ⊢ {C}x := cons(e₁, e₂){B} is provable by the axiom (cons) and A → C is true, we have ⊢ {A}x := cons(e₁, e₂){B} by the rule (conseq). Therefore, F₁, . . . , Fnproc ⊢ {A}x := cons(e₁, e₂){B}.
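Displayed with explicit braces (the subscripts e₁, e₂ are assumed where the typesetting dropped them), the backward-reasoning (cons) axiom used in this case is:

```latex
\{\forall x'\,\bigl((x' \mapsto e_1, e_2) \mathrel{-\!\ast} B[x := x']\bigr)\}\;\; x := \mathrm{cons}(e_1, e_2)\;\; \{B\}
\qquad (x' \notin \mathrm{FV}(e_1, e_2, B))
```

The separating implication −∗ quantifies over the freshly allocated two-cell heap, so the precondition demands B after attaching that cell at any address x′.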

Case 6. P is x := [e].

Assume that {A}x := [e]{B} is true. Let x′ ∉ FV(e, B), and C be ∃x′(e ↦ x′ ∗ (e ↦ x′ −∗ B[x := x′])).

We will show A→ C. Assume JAK(s,h) = True. We will show JCK(s,h) = True.

Let n be JeKs. Since APB is true, JPK((s, h)) ∋ abort. Hence n ∈ Dom(h).Let h(n) = n . We have JPK((s, h)) = (s , h) and JBK(s ,h) = True where s =

s[x := n ]. Let h = h|n, h = h|Dom(h)−n, and s′ = s[x′ := n ]. Then h = h + h .

We have Je 7→ x′K(s′,h ) = True since JeKs′ = JeKs = n by x′ ∈ FV(e).

We will show Je 7→ x′ −∗ B[x := x′]K(s′,h ) = True. Assume Je 7→ x′K(s′,h′) =

True and h + h′ exists. We have h = h′. Hence h′ + h = h. From JBK(s ,h) =


JB[x := x′]K(s′,h) byx′ ∈ FV(B) and JBK(s ,h) = True,wehave JB[x := x′]K(s′,h +h′) =

True. Hence Je 7→ x′ −∗ B[x := x′]K(s′,h ) = True.

Combining them,wehave Je 7→ x′ ∗ (e 7→ x′ −∗ B[x := x′])K(s′,h) = True. HenceJCK(s,h) = True. Hence A→ C is true.

By the axiom (lookup), ⊢ {C}P{B} is provable. Since A → C is true, by the rule (conseq), we have ⊢ {A}P{B}. Therefore, F₁, . . . , Fnproc ⊢ {A}x := [e]{B}.
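Displayed with explicit braces and side condition, the (lookup) axiom applied in this case is:

```latex
\{\exists x'\,\bigl(e \mapsto x' \,\ast\, (e \mapsto x' \mathrel{-\!\ast} B[x := x'])\bigr)\}\;\; x := [e]\;\; \{B\}
\qquad (x' \notin \mathrm{FV}(e, B))
```

The existential names the current content x′ of the cell at e, and the −∗ part hands that cell back when establishing B[x := x′].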

Case 7. P is [e₁] := e₂.

Assume that {A}[e₁] := e₂{B} is true. Let x ∉ FV(e₁) and C be (∃x(e₁ ↦ x)) ∗ (e₁ ↦ e₂ −∗ B).

Wewill show thatA→C is true. Assume JAK(s,h) = True. Wewill show JCK(s,h) =True.

Let n = Je Ks. Since APB is true, JPK((s, h)) ∋ abort. Hence n ∈ Dom(h).Let h = h|n and h = h|Dom(h)−n . Nowwewill show J∃x(e 7→ x)K(s,h ) = Trueand Je 7→ e −∗ BK(s,h ) = True.

Let n be h (n ). Then Je 7→ xK(s[x:=n ],h ) = True. Then J∃x(e 7→ x)K(s,h ) =

True.

Assume Je 7→ e K(s,h ) = True. Then h = ∅[Je Ks := Je Ks]. By definitionJPK((s, h)) = (s, h ) where h = h + h . Then JBK(s,h ) = True. ThenJe 7→ e −∗ BK(s,h ) = True.

Hence A→ C is true.

By the axiom (mutation), ⊢ {C}P{B} is provable. Since A → C is true, by the rule (conseq), we have ⊢ {A}P{B}. Therefore, F₁, . . . , Fnproc ⊢ {A}[e₁] := e₂{B}.
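Displayed with explicit braces (subscripts e₁, e₂ assumed where the typesetting dropped them), the (mutation) axiom used in this case is:

```latex
\{(\exists x\,(e_1 \mapsto x)) \,\ast\, (e_1 \mapsto e_2 \mathrel{-\!\ast} B)\}\;\; [e_1] := e_2\;\; \{B\}
\qquad (x \notin \mathrm{FV}(e_1))
```

The first conjunct carves out the cell at e₁ (whatever its content), and the −∗ part asks for B once that cell has been overwritten with e₂.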

Case 8. P is dispose(e).

Assume that {A}dispose(e){B} is true. Let x ∉ FV(e) and C be (∃x(e ↦ x)) ∗ B.

Wewill show thatA→C is true. Assume JAK(s,h) = True. Wewill show JCK(s,h) =True. Let n = JeKs. SinceAPB is true, JPK((s, h)) ∋ abort. Hence n ∈ Dom(h).


Hence JPK((s, h)) = (s, h ) and JBK(s,h ) = True where h = h|Dom(h)−n. Letn = h(n), h = ∅[n := n ], and s′ = s[x := n ]. We have h = h + h . Since JeKs′ =JeKs = n by x ∈ FV(e), we have Je 7→ xK(s′,h ) = True. Hence J∃x(e 7→ x)K(s,h ) =

True. Hence JCK(s,h) = True. Hence A→ C is true.

By the axiom (dispose), ⊢ {C}P{B} is provable. Since A → C is true, by the rule (conseq), we have ⊢ {A}P{B}. Therefore, F₁, . . . , Fnproc ⊢ {A}dispose(e){B}.
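Displayed with explicit braces and side condition, the (dispose) axiom used in this case is:

```latex
\{(\exists x\,(e \mapsto x)) \,\ast\, B\}\;\; \mathbf{dispose}(e)\;\; \{B\}
\qquad (x \notin \mathrm{FV}(e))
```

Since B sits in a separating conjunct disjoint from the cell at e, it continues to hold after that cell is deallocated.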

Case 9. P is Ri.

Assume that {A}Ri{B} is true. We have F₁, . . . , Fnproc ⊢ Fi. Note that the variables in −→z, xh, FV(A) ∪ FV(B) ∪ EFV(Ri) are mutually distinct according to our global assumption. By the rule (inv-conj), F₁, . . . , Fnproc ⊢ {Gi ∧ HEvalA[−→y :=−→z ](xh)}Ri{SGi,Ri(−→x) ∧ HEvalA[−→y :=−→z ](xh)} since FV(HEvalA[−→y :=−→z ](xh)) ∩ Mod(Ri) = ∅.

We will prove SGi,Ri(−→x ) ∧ HEvalA[−→y :=−→z ](xh) → B. Assume thatJSGi,Ri(

−→x ) ∧HEvalA[−→y :=−→z ](xh)K(s′,h′) is true. Then JSGi,Ri(−→x )K(s′,h′) andJHEvalA[−→y :=−→z ](xh)K(s′,h′) are true. By Lemma 5.3.2 (2), we have s, h such thatJGiK(s,h) is true and JRiK((s, h)) ∋ (s′, h′).

Now we will show JHEvalA[−→y :=−→z ](xh)K(s,h) = True by contradiction. AssumeJHEvalA[−→y :=−→z ](xh)K(s,h) = False. Then J¬HEvalA[−→y :=−→z ](xh)K(s,h) = True.By (inv-conj), F , . . . , Fnproc ⊢ Gi ∧ ¬HEvalA[−→y :=−→z ](xh)RiSGi,Ri(

−→x ) ∧¬HEvalA[−→y :=−→z ](xh). By Theorem 5.1.8, F , . . . , Fnproc ⊢ Gi ∧¬HEvalA[−→y :=−→z ](xh)RiSGi,Ri(

−→x )∧¬HEvalA[−→y :=−→z ](xh) is true. SinceF , . . . , Fnprocare true by Lemma 5.3.2 (1) with the fact that GiRiTrue is true by Lemma 5.2.12(1), Gi ∧ ¬HEvalA[−→y :=−→z ](xh)RiSGi,Ri(

−→x ) ∧ ¬HEvalA[−→y :=−→z ](xh) is true.

By the definition of the truth of asserted programs, we haveJ¬HEvalA[−→y :=−→z ](xh)K(s′,h′) = True. Then JHEvalA[−→y :=−→z ](xh)K(s′,h′) = False. But itcontradicts with JHEvalA[−→y :=−→z ](xh)K(s′,h′) = True. Thus JHEvalA[−→y :=−→z ](xh)K(s,h) =True.

Then s(−→z ) = s(−→y ) and JHeap(xh) ∧WRi,True(−→x ) ∧HEvalA(xh)K(s,h) is true.

Since Heapcode(s(xh), h) holds and JHEvalA(xh)K(s,h) is true, by Lemma 5.2.7 (3),


we have JAK(s,h) = True.

Since ARiB is true, JBK(s′,h′) is true. Then SGi,Ri(−→x )∧HEvalA[−→y :=−→z ](xh)→B

is true.

Then by (conseq) rule, F , . . . , Fnproc ⊢ Gi ∧ HEvalA[−→y :=−→z ](xh)RiB is prov-able. By (exists) rule, F , . . . , Fnproc ⊢ ∃−→z xh(−→y = −→z ∧Heap(xh)∧WRi,True(

−→x )∧HEvalA[−→y :=−→z ](xh))RiB. Naturally ARiTrue is true. Then by Lemma 5.3.4,A → WRi,True(

−→x ) is true. By Lemma 5.3.3, A → ∃−→z xh(−→y = −→z ∧ Heap(xh) ∧HEvalA[−→y :=−→z ](xh)) is true and then we have A → ∃−→z xh(−→y = −→z ∧ Heap(xh) ∧HEvalA[−→y :=−→z ](xh) ∧ WRi,True(

−→x )). Then by (conseq) rule, F , . . . , Fnproc ⊢ARiB. 2

The next lemma shows that the hypotheses F₁, . . . , Fnproc used in Lemma 5.3.7 are provable in our system.

Lemma 5.3.8 ⊢ Fi is provable for i = 1, . . . , nproc.

Proof.

Fix i. Take fresh variables −→z′ and x′h. Let G′i be Gi[−→z := −→z′, xh := x′h] and S′Gi,Ri(−→x) be SGi,Ri(−→x)[−→z := −→z′, xh := x′h]. By Lemma 5.2.12 (1), {WRi,True(−→x)}Ri{True} is true. Then {Gi}Ri{True} is true. Then by Lemma 5.3.2 (1), {Gi}Ri{SGi,Ri(−→x)} is true. By Lemma 4.2.12, ⟦Ri⟧ = ⟦Qi⟧, where Qi is the body of Ri. Hence {Gi}Qi{SGi,Ri(−→x)} is true. By Lemma 5.3.5 (2), {G′i}Qi{S′Gi,Ri(−→x)} is true. Then by Lemma 5.3.7, F₁, . . . , Fnproc ⊢ {G′i}Qi{S′Gi,Ri(−→x)} is provable. By Lemma 5.3.5 (1), F₁, . . . , Fnproc ⊢ {Gi}Qi{SGi,Ri(−→x)} is provable. By the (recursion) rule, ⊢ Fi is provable. □

The following theorem is the central result of this paper. It states that our system is complete.

Theorem 5.3.9 If {A}P{B} is true, then ⊢ {A}P{B} is provable.

Proof. Assume {A}P{B} is true. Then by Lemma 5.3.7, F₁, . . . , Fnproc ⊢ {A}P{B} is provable. By Lemma 5.3.8, ⊢ Fi is provable for i = 1, . . . , nproc. Then we have ⊢ {A}P{B}. □


6 Conclusion

In this work, we have given a new logical system for Hoare's logic for recursive procedures by introducing two new inference rules and removing other redundant inference rules so that it can be extended to separation logic. We have also proved its relative completeness in the sense of Cook.

In our work, we have also presented a logical system that can verify all terminating programs written in the language proposed in [13] extended with mutually recursive procedures. Our assertion language is exactly the same as that of [13]. We have proved that the assertion language is expressive relative to the programs. We have given the pure formula that describes the strongest postcondition of a precondition and a program in separation logic. To utilize it, we have also given the necessary and sufficient precondition of a program for abort-free execution. Finally, we have shown that our proposed system is sound and relatively complete.


Several extensions of the current system are possible, and it is important to study their completeness. Moreover, it is necessary to verify programs written in modern programming languages enriched with newer features. Among these extensions, support for procedures with parameters is particularly important.

Other future work includes implementing the system, and using it for bug tracking in programs and for program synthesis.


References

[1] K.R. Apt, Ten Years of Hoare's Logic: A Survey — Part I, ACM Transactions on Programming Languages and Systems 3 (4) (1981) 431–483.

[2] J. Berdine, C. Calcagno, and P.W. O'Hearn, Symbolic Execution with Separation Logic, In: Proceedings of the Third Asian Symposium on Programming Languages and Systems (APLAS 2005), Lecture Notes in Computer Science 3780 (2005) 52–68.

[3] J. Berdine, C. Calcagno, and P.W. O'Hearn, Smallfoot: Modular Automatic Assertion Checking with Separation Logic, In: 4th International Symposium on Formal Methods for Components and Objects (FMCO 2005), Lecture Notes in Computer Science 4111 (2006).

[4] J.A. Bergstra and J.V. Tucker, Expressiveness and the Completeness of Hoare's Logic, Journal of Computer and System Sciences 25 (3) (1982) 267–284.

[5] S.A. Cook, Soundness and Completeness of an Axiom System for Program Verification, SIAM Journal on Computing 7 (1) (1978) 70–90.

[6] J.Y. Halpern, A Good Hoare Axiom System for an ALGOL-like Language, In: Proceedings of 11th ACM Symposium on Principles of Programming Languages (POPL 1984) (1984) 262–271.

[7] C.A.R. Hoare, An Axiomatic Basis for Computer Programming, Communications of the ACM 12 (10) (1969) 576–580, 583.

[8] M. Huth and M. Ryan, Logic in Computer Science: Modelling and Reasoning about Systems, second edition, Cambridge University Press, 2004.

[9] S. Ishtiaq and P.W. O'Hearn, BI as an Assertion Language for Mutable Data Structures, In: Proceedings of 28th ACM Symposium on Principles of Programming Languages (POPL 2001) (2001) 14–26.

[10] B. Josko, On Expressive Interpretations of a Hoare-Logic for Clarke's Language L4, In: Proceedings of 1st Annual Symposium on Theoretical Aspects of Computer Science (STACS 84), Lecture Notes in Computer Science 166 (1984) 73–84.

[11] H.H. Nguyen, C. David, S.C. Qin, and W.N. Chin, Automated Verification of Shape and Size Properties via Separation Logic, In: Proceedings of 8th International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2007), Lecture Notes in Computer Science 4349 (2007) 251–266.

[12] H.H. Nguyen and W.N. Chin, Enhancing Program Verification with Lemmas, In: Proceedings of 20th International Conference on Computer Aided Verification (CAV 2008), Lecture Notes in Computer Science 5123 (2008) 355–369.

[13] J.C. Reynolds, Separation Logic: A Logic for Shared Mutable Data Structures, In: Proceedings of Seventeenth Annual IEEE Symposium on Logic in Computer Science (LICS 2002) (2002) 55–74.

[14] M. Tatsuta, W.N. Chin, and M.F. Al Ameen, Completeness of Pointer Program Verification by Separation Logic, In: Proceedings of 7th IEEE International Conference on Software Engineering and Formal Methods (SEFM 2009) (2009) 179–188.

[15] P.W. O'Hearn and D.J. Pym, The Logic of Bunched Implications, Bulletin of Symbolic Logic 5 (2) (1999) 215–244.

[16] P.W. O'Hearn, J.C. Reynolds, and H. Yang, Local Reasoning about Programs that Alter Data Structures, In: Proceedings of CSL 2001, Lecture Notes in Computer Science 2142 (2001) 1–19.

[17] A. Tarski, A Lattice-Theoretical Fixpoint Theorem and Its Applications, Pacific Journal of Mathematics 5 (2) (1955) 285–309.


Index

P[−→P′], 31
Ω, 31
Asserted program, 29
Assertion language, 28
Base language, 26
Coding of assertions, 69
Coding of base language, 67
Coding of programs, 71
Completeness theorem, 96
Empty heap, emp, 28
Expressiveness theorem, 86
Extended free variable, 33
Free variable of a program, 33
Free variable of an assertion, 28
Invariance axiom, 66
Logical system, 52
Modifiable variables, 33
Number of procedures, nproc, 27
Procedures dependencies, k;, 29
Program unfolding to level k, P(k), 31
Programming language, 26
Pure formula, 26
Recursive procedures, 27
Representation Lemma for Assertions, 75
Representation Lemma for Programs, 78
Semantics of a judgement, 43
Semantics of asserted programs, 42
Semantics of assertions, 42
Semantics of base language, 36
Semantics of programs in L, 41
Semantics of programs in L−, 37
Separation conjunction, ∗, 28
Separation implication, −∗, 28
Set of visible procedures, PN(P), 29
Singleton heap, ↦, 28
Soundness theorem, 66
Strongest postcondition, 87
The language L, 27
The language L−, 27
Unfolding of a judgment, 58
Weakest precondition, 85