
ALEXANDRU BALTAG and LAWRENCE S. MOSS

LOGICS FOR EPISTEMIC PROGRAMS

ABSTRACT. We construct logical languages which allow one to represent a variety of possible types of changes affecting the information states of agents in a multi-agent setting. We formalize these changes by defining a notion of epistemic program. The languages are two-sorted sets that contain not only sentences but also actions or programs. This is as in dynamic logic, and indeed our languages are not significantly more complicated than dynamic logics. But the semantics is more complicated. In general, the semantics of an epistemic program is what we call a program model. This is a Kripke model of ‘actions’, representing the agents’ uncertainty about the current action in a similar way that Kripke models of ‘states’ are commonly used in epistemic logic to represent the agents’ uncertainty about the current state of the system. Program models induce changes affecting agents’ information, which we represent as changes of the state model, called epistemic updates. Formally, an update consists of two operations: the first is called the update map, and it takes every state model to another state model, called the updated model; the second gives, for each input state model, a transition relation between the states of that model and the states of the updated model.

Each variety of epistemic actions, such as public announcements or completely private announcements to groups, gives what we call an action signature, and then each family of action signatures gives a logical language. The construction of these languages is the main topic of this paper. We also mention the systems that capture the valid sentences of our logics. But we defer to a separate paper the completeness proof.

The basic operation used in the semantics is called the update product. A version of this was introduced in Baltag et al. (1998), and the presentation here improves on the earlier one. The update product is used to obtain from any program model the corresponding epistemic update, thus allowing us to compute changes of information or belief. This point is of interest independently of our logical languages. We illustrate the update product and our logical languages with many examples throughout the paper.

1. INTRODUCTION

Traditional epistemic puzzles often deal with changes of knowledge that come about in various ways. Perhaps the most popular examples are the puzzles revolving around the fact that a declaration of ignorance of some sentence A may well lead to knowledge of A. We have in mind the scenarios that go by names such as the Muddy Children, the Cheating Spouses, the Three Wisemen, and the like. The standard treatment of these matters (a) introduces the Kripke semantics of modal logic so as to formalize the informal notions of knowledge and common knowledge; (b) formalizes one of the scenarios as a particular model; (c) and finally shows how the formalized notions of knowledge and common knowledge illuminate some key aspects of the overall scenario. The informal notion of knowledge which is closest to what is captured in the traditional semantics is probably justified true belief. But more generally, one can consider justifiable beliefs, regardless of whether or not they happen to be true; in many contexts, agents may be deceived by certain actions, without necessarily losing their rationality. Thus, such beliefs, and the justifiable changes affecting these beliefs, may be accepted as a proper subject for logical investigation. The successful treatment of a host of puzzles leads naturally to the following

THESIS I. Let s be a social situation involving the intuitive concepts of knowledge, justifiable beliefs and common knowledge among a group of agents. Assume that s is presented in such a way that all the relevant features of s pertaining to knowledge, beliefs and common knowledge are completely determined. Then we may associate to s a mathematical model S. (S is a multi-agent Kripke model; we call these epistemic state models.) The point of the association is that all intuitive judgements concerning s correspond to formal assertions concerning S, and vice-versa.

We are not aware of any previous formulations of this thesis. Nevertheless, some version of this thesis is probably responsible for the appeal of epistemic logic.

We shall not be concerned in this paper with a defense of this thesis, but instead we return to our opening point related to change. Dynamic epistemic logic, dynamic doxastic logic, and related formalisms attempt to incorporate change from model to model in the syntax and semantics of a logical language. We are especially concerned with changes that result from information-updating actions of various sorts. Our overall aim is to formally represent epistemic actions, and we associate to each of them a corresponding update. By “updates” we shall mean operations defined on the space of all state models, operations which are meant to represent well-defined, systematic changes in the information states of all agents. By an “epistemic action” (or program) we shall mean a representation of the way such a change “looks” to each agent.

Perhaps the paradigm case of an epistemic action is a public announcement. The first goal is to say in a general way what the effect of a (public) announcement should be on a model. It is natural to model such announcements by the logical notion of relativization: publicly announcing a sentence causes all agents to restrict attention to the worlds where the sentence was true (before the announcement). Note that the informal notion of announcement takes situations to situations, and the formal notion of relativization is an operation taking models to models.

In this paper, we wish to consider many types of epistemic actions that are more difficult to handle than public announcements. These include half-transparent, half-opaque types of actions, such as announcements to groups in a completely private way, announcements that include the possibility that outsiders suspect the announcement but this suspicion is lost on the insiders, private announcements which are secretly intercepted by outsiders, etc. We may also consider types of actions exhibiting information-loss and misinformation, where agents are deceived by others or by themselves.

THESIS II. Let σ be a social “action” involving and affecting the knowledge (beliefs, common knowledge) of agents. This naturally induces a change of situation; i.e., an operation o taking situations s into situations o(s). Assume that o is presented by assertions concerning knowledge, beliefs and common knowledge facts about s and o(s), and that o is completely determined by these assertions. Then

(a) We may associate to the action σ a mathematical model Σ which we call an epistemic action model. (Σ is also a multi-agent Kripke model.) The point again is that all the intuitive features of, and judgments about, σ correspond to formal properties of Σ.

(b) There is an operation ⊗ taking a state model S and an action model Σ and returning a new state model S ⊗ Σ. So each Σ induces an update operation O on state models: O(S) = S ⊗ Σ.

(c) The update O is a faithful model of the situation change o, in the sense that for all s: if s corresponds to S as in Thesis I, then again o(s) corresponds to O(S) in the same way; i.e., all intuitive judgements concerning o(s) correspond to formal assertions concerning O(S), and vice-versa.

Our aim in this paper is not to offer a full conceptual defense of these two theses. Instead, we will justify the intuitions behind them through examples and usage. We shall use them to build logical languages and models and show how these can be applied to the analysis of natural examples of “social situations” and “social actions”. As in the case of standard possible-worlds semantics (for which a ‘strong’, ontological defense is hard, maybe even impossible, to give), the usefulness of these formal developments may provide a ‘weak’, implicit defense of the philosophical claims underlying our semantics.

Our method of defining updates is quite general and leads to logics of epistemic programs, extending standard systems of epistemic logic by adding updates as new operators. These logical languages also incorporate features of propositional dynamic logic. Special cases of our logic, dealing only with public or semi-public announcements to mutually isolated groups, have been considered in Plaza (1989), Gerbrandy (1999a, b), and Gerbrandy and Groeneveld (1997). But our overall setting is much more liberal, since it allows for all the above-mentioned types of actions. We feel it would be interesting to study further examples with an eye towards applications, but we leave this to other papers.

In our logical systems, we capture only the epistemic aspect of these real actions, disregarding other (intentional) aspects. In particular, to keep things simple we only deal with “purely epistemic” actions; i.e., the ones that do not change the facts of the world, but affect only the agents’ beliefs about the world. However, this is not an essential limitation, as our formal setting can be easily adapted to express fact-changing actions.

On the semantic side, the main original technical contribution of our paper lies in our decision to represent not only the epistemic states, but also (for the first time) the epistemic actions. For this, we use action models, which are epistemic Kripke models of “actions”, similar to the standard Kripke structures of “states”. While for states these structures represent in the usual way the uncertainty of each agent concerning the current state of the system, we similarly use action signatures to represent the uncertainty of each agent concerning the current action taking place. For example, there will be a single action signature that represents public announcements. There will be a different action signature representing a completely private announcement to one specified agent, etc. The intuition is that we are dealing with potentially “half-opaque/half-transparent” actions, about which the agents may be incompletely informed, or even completely misinformed. The components (“possible worlds”) of an action model are called “simple” actions, since they are meant to represent particularly simple kinds of actions, whose epistemic impact is uniform on states: the informational features of a simple action are intrinsic to the action, and thus are independent of the informational features of the states to which it can be applied. This independence is subject to only one restriction: the action’s presuppositions or conditions of possibility, which a state must satisfy in order for the action to be executable. Thus, besides the epistemic structure, simple actions have preconditions, defining their domain of applicability: not every action is possible in every state. We model the update of a state by an action as a partial update operation, given by a restricted product of the two structures: the uncertainties present in the given state and the given action are multiplied, while the “impossible” combinations of states and actions are eliminated (by testing the actions’ preconditions on the state). The underlying intuition is that, since the agent’s uncertainties concerning the state and the ones concerning the simple action are mutually independent, the two uncertainties must be multiplied, except that we insist on applying an action only to those states which satisfy its precondition.

On the syntactic side, we use a mixture of dynamic and epistemic logic, with dynamic modalities associated to each action signature, and with common-knowledge modalities for various groups of agents (in addition to the usual individual-knowledge operators). In this paper, we present a sound system for this logic. The logic includes an Action-Knowledge Axiom that generalizes similar axioms found in other papers in the area (cf. Gerbrandy 1999a, b; Plaza 1989). The main original feature of our system is an inference rule which we call the Action Rule. This allows one to infer sentences expressing common knowledge facts which hold after an epistemic action. From another point of view, the Action Rule expresses what might be called a notion of “epistemic (co)recursion”. Overall, the Action-Knowledge Axiom and the Action Rule express fundamental formal features of the interaction between action and knowledge in multi-agent systems. The logic is studied further in our paper with S. Solecki (Baltag et al. 1998). There we present the completeness and decidability of the logic, and we prove various expressivity results.

For Impatient Readers. The main logical systems of the paper are presented in Section 4.2, and to read that one would only have to read the definition in Section 4.1 first. To understand the semantics, one should read in addition Sections 2.1, 2.3, and 3.1–3.4. But we know that our systems would not be of much interest if the motivation were not there. For this reason, we have included many examples and discussions, particularly in the sections of the paper preceding the introduction of the logics. Readers may read as much or as little of that material as they desire. Indeed, some readers may find our examples and discussion of more interest than the logical systems themselves. The logical systems are studied further in Section 5.

Technical Results. Results concerning our systems will appear in other papers. The completeness/decidability result for the main systems of this paper will appear in a paper (Baltag et al. 1998) written with Sławomir Solecki; this paper also contains results on the expressive power of our systems. For stronger systems of interest, there are undecidability results; cf. Miller and Moss (2003).

1.1. Scenarios

Probably the best way to enter our overall subject is to consider some “epistemic scenarios.” These give the reader some idea of what the general subject is about, and they also provide us with test problems at different points.

SCENARIO 1. The Concealed Coin. A and B enter a large room containing a remote-control mechanical coin flipper. One presses a button, and the coin spins through the air, landing in a small box on a table. The box closes. The two people are much too far to see the coin.

The main contents of any representation of the relevant knowledge states of A and B are that (a) there are two alternatives, heads and tails; (b) neither party knows which alternative holds; and (c) that (a) and (b) are common knowledge. The need for the notion of common knowledge in reasoning about multi-agent interaction is by now standard in applied epistemic logic, and so we take it as unproblematic that one would want (c) to come out in any representation of this scenario.

Perhaps the clearest way to represent this scenario is with a diagram:

In more standard terms, we have a set of two alternatives, call them x and y. We also have some atomic information: x represents the possible fact that the coin is lying heads up, and y represents the other possible fact. Finally, we also have some extra apparatus needed to indicate that, no matter the actual disposition of the coin, A and B think both alternatives are possible. Following the standard practice in epistemic logic, we take this apparatus to be accessibility relations between the alternatives. The diagram should be read as saying that were the actual state of the world to be x, say, then A and B would still entertain both alternatives.
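To make the diagram concrete, here is a minimal sketch of Scenario 1 as plain data, in a small Python encoding of state models that we will reuse in later sketches. The encoding (dicts with keys "states", "arrows", "val") and all names here are ours, not the paper's.

# A sketch of Scenario 1 as data (our own encoding, not the paper's notation).
# Two states: x (heads up) and y (tails up). Each agent's accessibility
# relation is total, so neither A nor B can tell the two states apart.
S1 = {
    "states": {"x", "y"},
    "arrows": {
        ag: {(s, t) for s in ("x", "y") for t in ("x", "y")}
        for ag in ("A", "B")
    },
    "val": {"H": {"x"}, "T": {"y"}},   # atomic facts: H holds at x, T at y
}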

SCENARIO 2. The Coin Revealed to Show Heads. A and B sit down. One opens the box and puts the coin on the table for both to see. It’s heads. The result of this scenario is a model which again is easy to grasp. It consists of one state; call it s. Each agent knows the state, in the sense that they think s is the only possible state.


We also shall be interested in keeping track of the relation of the model in Scenario 1 and the model just above. We indicate this in the following way:

The first thing to note is that the dotted connection is a partial function; as we shall see later, this is the hallmark of a deterministic epistemic action. But we also will see quite soon that the before-after relation is not always a partial function; so there are epistemic actions which are not deterministic. Another thing to note at this point is that the dotted connection is not subscripted with an agent or a set of agents. It does not represent alternative possibility with respect to anyone, but instead stands for the before-after relation between two models: it is a transition relation, going from input states to the corresponding output states. In this example, the transition relation is in fact a partial function whose domain is the set of states which could possibly be subject to the action of revealing heads. This is possible in only one of the two starting states.

SCENARIO 2.1. The Coin Revealed to Show Tails. As a variation on Scenario 2, there is a different scenario in which the coin is revealed in the same way to both A and B but with the change that tails shows. Our full representation is:

SCENARIO 2.2. The Coin Revealed. Finally, we can consider the non-deterministic sum of publicly revealing heads and publicly revealing tails. The coin is revealed to both A and B, but all that we as external modelers can say is that either they learned that heads shows, or that they learned that tails shows. Our representation is:


Observe that, although non-deterministically defined, this is still a deterministic action: the relation described by the dotted connection is still a function.

SCENARIO 3. A Semi-private Viewing of Heads. The following is an alternative to the scenarios above in which there is a public revelation. After Scenario 1, A opens the box herself. The coin is lying heads up. B observes A open the box but does not see the coin. And A also does not disclose whether it is heads or tails.

No matter which alternative holds, B would consider both as possible, and A would be certain which was the case.

SCENARIO 3.1. B’s Turn. After Scenario 3, B takes a turn and opens the box the same way. We expect that after both have individually opened the box they see the same things; moreover, they know this will happen. This time, we begin with the end of Scenario 3, and we end with the same end as in the public revelation of heads:

SCENARIO 4. Cheating. After Scenario 1, A secretly opens the box herself. The coin is lying heads up. B does not observe A open the box, and indeed A is certain that B did not suspect that anything happened after they sat down. This is substantially more difficult conceptually, and the representation is accordingly more involved.

Such cheating is like an announcement that results in A’s being sure that the coin lies heads up, while B learns nothing. But the problem is how to model the fact that, after the announcement, B knows nothing (new). We cannot just delete all arrows for B to represent such lack of knowledge: this would actually increase B’s (false) ‘knowledge’, by adding new beliefs to his set of beliefs; for example, he’ll believe it is not possible that the coin is lying heads up. Deleting arrows always corresponds to increasing ‘information’ (even if sometimes this is just by adding false information). But this seems wrong in our case, since B’s possibilities should be unchanged by A’s secret cheating. Instead, our representation of the informational change induced by such cheating should be:


(1)

The main point is that after A cheats, B does not think that the actual state of the world is even possible. The states that B thinks are possible should only be two states which are the same as the “before” states, or states which are in some important way similar to those. In addition to the way the state actually is (where A knows the coin lies heads up), or rather as an aspect of that state, we need to consider other states to represent the possibilities that B must entertain.

We have indicated names s, t, and u of the possibilities in (1). (We could have done this with the scenarios we already considered, but there was less of a reason.) State s has a different status from t and u: while t and u are the states that B thinks could hold, s is the one that A knows to be the case. Note that the substructure determined by t and u is isomorphic to the “before” structure. This is the way that we formalize the intuition that the states available to B after cheating are essentially the same as the states before cheating.

SCENARIO 5. More Cheating. After Scenario 4, B does the very same thing. That is, B opens the box quite secretly. We leave the reader the instructive exercise of working out a representation. We shall return to this matter in Section 3.5, where we solve the problem based on our general machinery. We merely mention now that part of the goal of the paper is precisely to develop tools to build representations of complex epistemic examples such as this one.

SCENARIO 6. Lying. After Scenario 1, A convinces B that the coin lies heads up and that she knows this. In fact, she is lying. Here is our representation:

SCENARIO 7. Pick a Card. As another alternative to Scenario 3, C walks in and tells both A and B at the same time that she has a card which either says H, says T, or is blank. In the first two cases the card truly describes the state of the coin in the box, and in the last case the intention is that no information is given. Then C gives the card to A in the presence of B. Here is our representation:

The idea here is that u and v represent states where the card was blank, s the state where the card showed H, and t the state where the card showed T.

SCENARIO 8. Common Knowledge of (Unfounded) Suspicion. As yet another alternative to Scenario 3, suppose that after A and B make their original entrance, A has not looked, but B has some suspicion concerning A’s cheating; so B considers it possible that she (A) has secretly opened the box (but B cannot be sure of this, so he also considers it possible that nothing happened); moreover, we assume there is common knowledge of this (B’s) suspicion. That is, we want to think of one single action (a knowing glance by A, for example) that results in all of this. The representation of “before” and “after” is as follows:

One should compare this with Scenario 7. The blank card there is a parallel to no looking here; the card with H is parallel to A’s looking and seeing H; and similarly for T. This accounts for the similarity in the models. The main difference is that Scenario 7 was described in such a way that we do not know what the card says; in this scenario we stipulate that A definitely did not look. This accounts for the difference in dotted lines between the two figures.


SCENARIO 8.1. Private Communication about the Other. After Scenario 8, someone stands in the doorway behind A and raises a finger. This indicates to B that A has not cheated. The communication between B and the third party was completely private. Again, we leave the representation as an exercise to the reader.

One Goal. One goal of the paper is to propose a theoretical understanding of these representations. It turns out that there are simple operations and general insights which allow one to construct them. One operation allows one to pass, for example, from the representation of Scenario 1 to that of Scenario 4; moreover, the operation is general enough that it allows us to add knowledge for one agent in a “private” way to any representation.

1.2. Further Contents of This Paper

Section 2 continues our introduction by both reviewing some of the main background concepts of epistemic logic, and also by situating our logical systems with respect to well-known systems. Section 2.1 presents a new “big picture” of the world of epistemic actions. While the work that we do could be understood without the conceptual part of the big picture, it probably would help the reader to work through our definitions.

Section 3 begins the technical part of the paper in earnest, and here we revisit some of the Scenarios of Section 1.1 and their pictures. The idea is to get a logical system for reasoning with, and about, these kinds of models. Section 4 gives the syntax and semantics of our logical systems. These are studied further in Section 5. For example, we present sound and complete logical systems (with the proofs of soundness and completeness deferred to Baltag et al. (2003)).

Endnotes. We gather at the ends of our sections some remarks about the literature and how our work fits in with that of others.

We mentioned other work in dynamic epistemic logic and dynamic doxastic logic. Much more on these logics and many other proposals in epistemic logic may be found in Gochet and Gribomont (2003) and Meyer and van der Hoek (1995). Also, a survey of many topics in the area of information update and communication may be found in van Benthem’s papers (2000, 2001a, b).

The ideas behind several of the scenarios in Section 1.1 are to be found in several places: see, e.g., Plaza (1989), Gerbrandy (1999a, b), Gerbrandy and Groeneveld (1997), and van Ditmarsch (2000, 2001). We shall discuss these papers in more detail later. Our scenarios go beyond the work of these papers. Specifically, our treatment of the actions in Scenarios 6 and 8 seems new. Also, our use of the relation between “before” and “after” (given by the dotted arrows) is new.

2. EPISTEMIC UPDATES AND OUR TARGET LOGICS

We fix a set AtSen of atomic sentences, and a set A of agents. All of our definitions depend on AtSen and A, but for the most part we omit mention of these.

2.1. State Models and Epistemic Propositions

A state model is a triple S = (S, →A, ‖·‖S) consisting of a set S of “states”; a family of binary accessibility relations →A ⊆ S × S, one for each agent A ∈ A; and a “valuation” (or a “truth” map) ‖·‖S : AtSen → P(S), assigning to each atomic sentence p a set ‖p‖S of states. When dealing with a single fixed state model S, we often drop the subscript S from all the notation.

In a state model, atomic sentences are supposed to represent non-epistemic, “objective” facts of the world, which can be thought of as properties of states; the valuation tells us which facts hold at which states. The accessibility relations model the agents’ epistemic uncertainty about the current state. That is, to say that s →A t in S means that in the model, in state s, agent A considers it possible that the state is t.
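As a sketch of this definition in code, under the same dict encoding as in the Scenario 1 sketch (all names ours): a state model packages a state set, one accessibility relation per agent, and a valuation, and the typing conditions of the definition can be checked directly.

# Sketch: a state model as a dict, with the definition's typing conditions
# checked explicitly. The encoding conventions here are our own.
def make_state_model(states, arrows, val):
    states = set(states)
    for agent, rel in arrows.items():
        # each ->A must be a binary relation on the state set
        assert all(s in states and t in states for (s, t) in rel), agent
    for p, ext in val.items():
        # the valuation assigns to each atomic sentence a set of states
        assert set(ext) <= states, p
    return {"states": states, "arrows": dict(arrows), "val": dict(val)}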

DEFINITION. Let StateModels be the collection of all state models. An epistemic proposition is an operation ϕ defined on StateModels such that for all S ∈ StateModels, ϕS ⊆ S.

The collection of epistemic propositions is closed in various ways.

1. For each atomic sentence p we have an atomic proposition p with pS = ‖p‖S.

2. If ϕ is an epistemic proposition, then so is ¬ϕ, where (¬ϕ)S = S \ ϕS.

3. If C is a set or class of epistemic propositions, then so is ∧C, with (∧C)S = ⋂{ϕS : ϕ ∈ C}.

4. Taking C above to be empty, we have an “always true” epistemic proposition tr, with trS = S.

5. We also may take C in part (3) to be a two-element set {ϕ, ψ}; here we write ϕ ∧ ψ instead of ∧{ϕ, ψ}. We see that if ϕ and ψ are epistemic propositions, then so is ϕ ∧ ψ, with (ϕ ∧ ψ)S = ϕS ∩ ψS.


6. If ϕ is an epistemic proposition and A ∈ A, then □Aϕ is an epistemic proposition, with

(□Aϕ)S = {s ∈ S : if s →A t, then t ∈ ϕS}.   (2)

7. If ϕ is an epistemic proposition and B ⊆ A, then □*Bϕ is an epistemic proposition, with

(□*Bϕ)S = {s ∈ S : if s →B* t, then t ∈ ϕS}.

Here s →B* t iff there is a sequence

s = u0 →A1 u1 →A2 · · · →An un = t

where A1, . . . , An ∈ B. In other words, there is a sequence of arrows from the set B taking s to t. We allow n = 0 here, so →B* includes the identity relation on S. To see that □*Bϕ is indeed an epistemic proposition, we use parts 3 and 6 above; we may also argue directly, of course.
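Items 1–7 can be sketched directly in code: an epistemic proposition becomes a function from a state model (in our dict encoding from the earlier sketches) to a set of its states. This is a minimal sketch; the function names are ours.

# Sketch: epistemic propositions as functions from a state model to a set
# of its states, mirroring items 1-7 above (our encoding and names).
def atom(p):                     # item 1: p_S = ||p||_S
    return lambda S: set(S["val"].get(p, set()))

def neg(phi):                    # item 2: complement within the state set
    return lambda S: S["states"] - phi(S)

def conj(*phis):                 # items 3-5: conjunction; the empty case is tr
    return lambda S: set.intersection(set(S["states"]), *(phi(S) for phi in phis))

def box(A, phi):                 # item 6: every ->A successor satisfies phi
    return lambda S: {s for s in S["states"]
                      if all(t in phi(S) for (u, t) in S["arrows"][A] if u == s)}

def cbox(group, phi):            # item 7: every B*-reachable state satisfies phi
    def sem(S):
        sat, result = phi(S), set()
        for s in S["states"]:
            seen, stack = {s}, [s]       # reflexive-transitive closure from s
            while stack:
                u = stack.pop()
                for A in group:
                    for (v, w) in S["arrows"][A]:
                        if v == u and w not in seen:
                            seen.add(w)
                            stack.append(w)
            if seen <= sat:
                result.add(s)
        return result
    return sem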

2.2. Epistemic Logic Recalled

In this section, we review the basic definitions of modal logic. In terms of our work in Section 2.1 above, the infinitary modal propositions are the smallest collection of epistemic propositions containing the propositions p corresponding to atomic sentences p and closed under negation ¬, infinitary conjunction ∧, and the agent modalities □A. The finitary propositions are the smallest collection closed the same way, except that we replace ∧ by its special cases tr and the binary conjunction operation.

Syntactic and Semantic Notions. It will be important for us to make a sharp distinction between syntactic and semantic notions. For example, we speak of atomic sentences and atomic propositions. The difference for us is that atomic sentences are entirely syntactic objects: we won’t treat an atomic sentence p as anything except an unanalyzed mathematical object. On the other hand, this atomic sentence p also has associated with it the atomic proposition p. For us p will be a function whose domain is the (proper class of) state models, and it is defined by

pS = {s ∈ S : s ∈ ‖p‖S}.   (3)

This difference may seem pedantic at first, and surely there are times when it is sensible to blur it. But for various reasons that will hopefully become clear, we need to insist on it.


Up until now, the only syntactic objects have been the atomic sentences p ∈ AtSen. But we can build the collections of finitary and infinitary sentences by the same definitions that we have seen, and then the work of the past section is the semantics of our logical systems. For example, we have sentences p ∧ q, □A¬p, and □*Bq. These then have corresponding epistemic propositions as their semantics: p ∧ q, □A¬p, and □*Bq, respectively. Note that the latter is a properly infinitary proposition (and so □*Bq is a properly infinitary sentence); it abbreviates an infinite conjunction.

The rest of this section studies examples of the semantics, and it also makes the connection of the formal system with the informal notions of knowledge, belief and common knowledge. We shall study Scenario 3 of Section 1.1, where A opens the box herself to see heads in a semi-public way: B sees A open the box but not the result, A is aware of this, etc. We want to study the model after the opening. We represented this as

We first must represent this picture as a bona fide state model S3 in our sense. The picture includes no explicit states, but we must fix some states to have a representation. We choose distinct objects s and t. Then we take as our state model S3 the following:

S3 = {s, t}
→A = {(s, s), (t, t)}
→B = {(s, s), (s, t), (t, s), (t, t)}
‖H‖ = {s}
‖T‖ = {t}

In Figure 1, we list some sentences of English along with their translations into standard epistemic logic. We also have calculated the semantics of the formal sentences in the model S3. It should be stressed that the semantics is exactly the one defined in the previous section. For example,

‖□AT‖S3 = {u ∈ S3 : if u →A v, then v ∈ ‖T‖S3}
        = {u ∈ S3 : if u →A v, then v = t}
        = {t}
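The same calculation can be checked mechanically in our running encoding from the earlier sketches (a minimal sketch; box repeats the semantics of item 6 above):

# Sketch: the calculation above, run on S3 (our encoding; names ours).
S3 = {
    "states": {"s", "t"},
    "arrows": {"A": {("s", "s"), ("t", "t")},
               "B": {("s", "s"), ("s", "t"), ("t", "s"), ("t", "t")}},
    "val": {"H": {"s"}, "T": {"t"}},
}

def box(A, phi):
    return lambda S: {s for s in S["states"]
                      if all(t in phi(S) for (u, t) in S["arrows"][A] if u == s)}

T = lambda S: S["val"]["T"]
assert box("A", T)(S3) == {"t"}     # ||box_A T||_S3 = {t}, as computed above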

We also emphasize two other points. First, the translation of English to the formal system is based on the rendering of “A knows” (or “A has a justified belief that”) as □A, and of “it is common knowledge that” by □*A,B. Second, the chart bears a relation to intuitions that one naturally has about the states s and t. Recall that s is the state that obtains after A looks and sees that the coin is lying heads up. The state t, on the other hand, is a state that would have been the case had A seen tails when she looked.


English | Formal rendering | Semantics
the coin shows heads | H | {s}
A knows the coin shows heads | □AH | {s}
A knows the coin shows tails | □AT | {t}
B knows that the coin shows heads | □BH | ∅
A knows that B doesn’t know it’s heads | □A¬□BH | {s, t}
B knows that A knows that B doesn’t know it’s heads | □B□A¬□BH | {s, t}
it is common knowledge that either A knows it’s heads or A knows that it’s tails | □*A,B(□AH ∨ □AT) | {s, t}
it is common knowledge that B doesn’t know the state of the coin | □*A,B¬(□BH ∨ □BT) | {s, t}

Figure 1. Examples of translations and semantics.


2.3. Updates

A transition relation between state models S and T is a relation between the sets S and T. We write r : S → T for this. An update r is a pair of operations

r = (S ↦ S(r), S ↦ rS),

where for each S ∈ StateModels, rS : S → S(r) is a transition relation. We call S ↦ S(r) the update map, and S ↦ rS the update relation.

EXAMPLE 2.1. Let ϕ be an epistemic proposition. We get an update Pub ϕ which represents the public announcement of ϕ. For each S, S(Pub ϕ) is the sub-state-model of S determined by the states in ϕS. In this submodel, information about atomic sentences and accessibility relations is simply inherited from the larger model. The update relation (Pub ϕ)S is the inverse of the inclusion relation of S(Pub ϕ) into S.
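A sketch of this update in our running encoding, where an update takes a state model to a pair (updated model, update relation); the helper name pub and the pair convention are ours, not the paper's.

# Sketch of Example 2.1: public announcement as relativization (our encoding).
def pub(phi):
    def update(S):
        keep = phi(S)                          # states where phi holds
        updated = {
            "states": set(keep),
            "arrows": {A: {(s, t) for (s, t) in rel if s in keep and t in keep}
                       for A, rel in S["arrows"].items()},
            "val": {p: set(ext) & keep for p, ext in S["val"].items()},
        }
        # inverse of the inclusion of S(Pub phi) into S: each kept state
        # is related to itself in the updated model
        relation = {(s, s) for s in keep}
        return updated, relation
    return update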

EXAMPLE 2.2. We also get a different update ?ϕ which represents testing whether ϕ is true. Here we take S(?ϕ) to be the model whose state set is

({0} × ϕS) ∪ ({1} × S).

The arrows are defined by (i, s) →A (j, t) iff s →A t in S and j = 1. (Note that states of the form (0, s) are never the targets of arrows in the new model.) Finally, we set

‖p‖S(?ϕ) = {(i, s) ∈ S(?ϕ) : s ∈ ‖p‖S}.

The relation (?ϕ)S is the set of pairs (s, (0, s)) such that s ∈ ϕS.
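In the same encoding, a sketch of the test update (names and conventions ours):

# Sketch of Example 2.2: the test update ?phi (our encoding, as before).
def test(phi):
    def update(S):
        sel = phi(S)
        states = {(0, s) for s in sel} | {(1, s) for s in S["states"]}
        updated = {
            "states": states,
            # (i, s) ->A (j, t)  iff  s ->A t in S and j = 1
            "arrows": {A: {((i, s), (1, t))
                           for (i, s) in states for (u, t) in rel if u == s}
                       for A, rel in S["arrows"].items()},
            "val": {p: {(i, s) for (i, s) in states if s in ext}
                    for p, ext in S["val"].items()},
        }
        relation = {(s, (0, s)) for s in sel}
        return updated, relation
    return update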

We shall study these two examples (and many others) in the sequel, and in particular we shall justify the names “public announcement” and “test”. For now, we continue our general discussion by noting that the collection of updates is closed in various ways.

1. Skip: there is an update 1 with S(1) = S, and 1S is the identity relation on S.

2. Sequential Composition: if r and s are epistemic updates, then their composition r; s is again an epistemic update, where S(r; s) = S(r)(s), and (r; s)S = rS; sS(r). Here, we use on the right side the usual composition ; of relations.

3. Disjoint Union (or Non-deterministic Choice): If X is any set of epistemic updates, then the disjoint union ⊔X is an epistemic update, defined as follows. The set of states of the model S(⊔X) is the disjoint union of all the sets of states in each model S(r):

{(s, r) : r ∈ X and s ∈ S(r)}.

Similarly, each accessibility relation →A is defined as the disjoint union of the corresponding accessibility relations in each model:

(t, r) →A (u, s) iff r = s and t →A u in S(r).

The valuation ‖p‖ in S(⊔X) is the disjoint union of the valuations in each state model:

‖p‖ = {(s, r) : r ∈ X and s ∈ ‖p‖S(r)}.

Finally, the update relation (⊔X)S between S and S(⊔X) is the union of all the update relations rS:

t (⊔X)S (u, r) iff t rS u.

4. Special case: Binary Union. The (disjoint) union of two epistemic updates r and s is an update r ⊔ s, given by r ⊔ s = ⊔{r, s}.

5. Another special case: Kleene star (iteration). We have the operation of Kleene star on updates:

r* = ⊔{1, r, r; r, . . . , r^n, . . .}

where r^n is recursively defined by r^0 = 1, r^{n+1} = r^n; r.

6. Crash: We can also take X = ∅ in part 3. This gives an update 0 such that S(0) is the empty model for each S, and 0S is the empty relation.

The operations r; s, r ⊔ s and r* are the natural analogues of the operations of relational composition, union of relations and iteration of a relation, and also of the regular operations on programs in PDL. The intended meanings are: for r; s, sequential composition (do r, then do s); for r ⊔ s, non-deterministic choice (do either r or s); for r*, iteration (repeat r some finite number of times).
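Two of these closure operations, sequential composition and binary union, can be sketched in our running encoding; the function names and the (model, relation) pair convention are ours, not the paper's.

# Sketch: composition r;s and binary disjoint union of updates (our encoding,
# where an update maps a state model to (updated model, update relation)).
def compose(r, s):
    def update(S):
        S_r, rel_r = r(S)
        S_rs, rel_s = s(S_r)
        # relational composition: a -> c iff a rel_r b and b rel_s c
        rel = {(a, c) for (a, b) in rel_r for (b2, c) in rel_s if b == b2}
        return S_rs, rel
    return update

def union(r, s):
    def update(S):
        S_r, rel_r = r(S)
        S_s, rel_s = s(S)
        def tag(M, k):      # tag every state with the update it came from
            return {
                "states": {(x, k) for x in M["states"]},
                "arrows": {A: {((x, k), (y, k)) for (x, y) in rel}
                           for A, rel in M["arrows"].items()},
                "val": {p: {(x, k) for x in ext} for p, ext in M["val"].items()},
            }
        M_r, M_s = tag(S_r, "r"), tag(S_s, "s")
        merged = {
            "states": M_r["states"] | M_s["states"],
            "arrows": {A: M_r["arrows"].get(A, set()) | M_s["arrows"].get(A, set())
                       for A in set(M_r["arrows"]) | set(M_s["arrows"])},
            "val": {p: M_r["val"].get(p, set()) | M_s["val"].get(p, set())
                    for p in set(M_r["val"]) | set(M_s["val"])},
        }
        rel = ({(a, (b, "r")) for (a, b) in rel_r}
               | {(a, (b, "s")) for (a, b) in rel_s})
        return merged, rel
    return update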

A New Operation: Dynamic Modalities for Updates. If ϕ is an epistemic proposition and r an update, then [r]ϕ is an epistemic proposition defined by

([r]ϕ)S = {s ∈ S : if s rS t, then t ∈ ϕS(r)}.   (4)

We should compare (4) and (2). The point is that we may treat updates in a similar manner to other box-like modalities; the structure given by an update allows us to do this. This point leads to the formal languages which we shall construct in due course. But we can illustrate the idea even now. Suppose we want to interpret the sentence [Pub H]□AH in our very first model, the model S1 pictured again below:

We shall call the two states s (where H holds) and t. Again, we want to determine [[[Pub H]□AH]]S1. We already have in Example 2.1 a general definition of Pub H as an update, so we can calculate S1(Pub H) and also (Pub H)S1. We indicate these in the picture

The one-state model on the right is S1(Pub H), and the dotted arrow shows (Pub H)S1. So we calculate:

[[[Pub H]□AH]]S1 = {s ∈ S1 : whenever s (Pub H)S1 t, then also t ∈ [[□AH]]S1(Pub H)}.


In S1(Pub H), the state satisfies □AH. Thus [[[Pub H]□AH]]S1 = {s, t}. It might be more interesting to consider 〈Pub H〉□AH; this is ¬[Pub H]¬□AH. Similar calculations show that

[[〈Pub H〉□AH]]S1 = {s ∈ S1 : for some t, s (Pub H)S1 t and t ∈ [[□AH]]S1(Pub H)}.

The point here is that we have a general semantics for sentences like [Pub H]□AH, and this semantics crucially uses Equation (4). That is, to determine the truth set of a sentence like [Pub H]□AH in a particular model S, one applies the update map to S and works with the update relation between S and S(Pub H); one also uses the semantics of □AH in the new model. This overall point is one of the two leading features of our approach; the other is the update product which we introduce in Section 3.
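In our running encoding, Equation (4) becomes a short function, and the calculation just done can be replayed. This is a sketch with our own names, and the commented usage relies on pub, box, atom, and S1 from the earlier sketches.

# Sketch: the dynamic modality [r]phi of Equation (4), in our encoding.
def dyn_box(r, phi):
    def sem(S):
        updated, relation = r(S)
        return {s for s in S["states"]
                if all(t in phi(updated) for (u, t) in relation if u == s)}
    return sem

# e.g., with S1, pub, box, and atom as in the earlier sketches:
#   dyn_box(pub(atom("H")), box("A", atom("H")))(S1) == {"x", "y"}
# which mirrors [[[Pub H] box_A H]]_S1 = {s, t} above.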

2.4. The Target Logical Systems

This paper presents a number of logical systems which contain epistemic operators of various types. These operators are closely related to aspects of the scenarios of Section 1.1. The logics themselves are presented formally in Section 4, but this work takes a fair amount of preparation. We delay this until after we motivate the subject, and so we turn to an informal presentation of the syntax and semantics of some logics. The overall idea is that the operators we study correspond roughly to the shapes of the action models which we shall see in Section 3.5.

THE LOGIC OF PUBLIC ANNOUNCEMENTS. We take a two-place sentential operator [Pub −]−. That is, we want an operation taking sentences ϕ and ψ to a new sentence [Pub ϕ]ψ, and we want a logic closed under this operation. The intended interpretation of [Pub ϕ]ψ is: assuming that ϕ holds, then announcing it results in a state where ψ holds. The announcement here should be completely public. The semantics of every sentence χ will be an epistemic proposition [[χ]] in the sense of Section 2.1.

Note that we refer to this operator as a two-place one. This just means that it takes two sentences and returns another sentence. We also think of [Pub ϕ] as a modal operator in its own right, on a par with the knowledge operators □A. And so we shall think of Pub as something which takes sentences into one-place modal operators.

We also consider a dual 〈Pub ϕ〉 to [Pub ϕ]. As one expects, the semantics will arrange that 〈Pub ϕ〉ψ and ¬[Pub ϕ]¬ψ be logically equivalent. (That is, they will hold at the same states.) Thus, the intended interpretation of 〈Pub ϕ〉ψ is: ϕ holds, and announcing it results in a state where ψ holds. As one indication of the difference, in S1, y |= [Pub H]true (vacuously, as it were: it is not possible to make a true announcement of H in y). But we do have y |= ¬〈Pub H〉true; this is how our semantics works out. Once again we only consider true announcements. Our semantics will arrange that ϕ → (〈Pub ϕ〉ψ ↔ [Pub ϕ]ψ). So in working with the example scenarios, the difference between the two modalities will be small.

Further, we iterate the announcement operation, obtaining sentences such as [Pub p][Pub q]r. We also want to consider announcements about announcements, as in 〈Pub 〈Pub ϕ〉ψ〉χ. This says that it is possible to announce publicly 〈Pub ϕ〉ψ, and as a result of a true announcement of this sentence, χ will hold.

THE LOGIC OF COMPLETELY PRIVATE ANNOUNCEMENTS TO GROUPS. This time, the syntax is more complicated. If ϕ and ψ are sentences and B is a set of agents, then [PriB ϕ]ψ is again a sentence. The intended interpretation of this is: assuming that ϕ holds, then announcing it publicly to the subgroup B in such a way that outsiders do not even suspect that the announcement happened results in a state where ψ holds. (The “Pri” in the notation stands for “private.”) For example, this kind of announcement occurs in the passage from Scenario 1 to Scenario 4; that is, “cheating” is a kind of private announcement to the “group” {A}. We want it to be the case that in S1,

x |= 〈Pri{A} H〉(□AH ∧ ¬□B□AH).

That is, in x, it is possible to announce H to A privately (since H is true in x), and by so doing we have a new state where A knows this fact, but B does not know that A knows it.

The logic of private announcements to groups allows as modal operators [PriB ϕ] for all sentences ϕ of the logic. We shall show that this logical system extends the logic of public announcements, the idea being that when we take B to be the full set A of agents, Pub ϕ and PriA ϕ should be equivalent in the appropriate sense.

THE LOGIC OF COMMON KNOWLEDGE OF ALTERNATIVES. If ϕ1, . . . , ϕn and ψ are sentences and B is a set of agents, then [CkaB ϕ1, . . . , ϕn]ψ is again a sentence. The intended interpretation of this is: assuming that ϕ1 holds, then announcing it publicly to the subgroup B in such a way that it is common knowledge to the set of all agents that the announcement was one of ϕ1, . . . , ϕn results in a state where ψ holds.


Syntactic | Semantic
sentence ϕ | epistemic proposition ϕ
language L, L(Σ), etc. | state model S
action signature Σ | update r
basic action expression σψ1 . . . ψn | epistemic program model (Σ, Γ, ψ1, . . . , ψn)
program π, action α | canonical action model Σ

Figure 2. The main notions in this paper.

For example, consider Scenario 3. In S1, we have

x |= ¬□AH ∧ 〈Cka{A} H, T〉(□A(H ∧ ¬□B□AH) ∧ □B(□AH ∨ □AT)).

The action here is A learning that either the coin lies heads up or that it lies tails up, and this is done in such a way that B knows that A learns one of the two alternatives but not which one. Before the action, A does not know that the coin lies heads up. As a result, A knows this, and knows that B does not know it.

At this point, we have the syntax of some logical languages and also examples of what we intend the semantics to be. We do not yet have the formal semantics of any of these languages, of course. However, even before we turn to this, we want to be clear that our goal in this paper is to study a very wide class of logical systems, including ones for the representation of all possible “announcement types.” The reasons to be interested in this approach rather than to study a few separate logics are as follows: (1) it is more general and elegant to have a unified presentation; and (2) it gives a formal account of the notion of an “announcement type” which would be otherwise lacking and which should be of independent interest. So what we will really be concerned with is:

THE LOGIC OF ALL POSSIBLE EPISTEMIC ACTIONS. If α is an epistemic action and ϕ is a sentence, then [α]ϕ is again a sentence. Here α is some sort of epistemic action (which of course we shall define in due course); the point is that α might involve arbitrarily complex patterns of suspicion. In this way, we shall recover the logical systems mentioned above as fragments of the larger logic of all possible epistemic actions.

Our Claims in This Paper. Here are some of the claims of the paper about the logical languages we shall construct and about our notion of updates.


Figure 3. Languages in this paper.

1. Each type of epistemic action, such as public announcements, completely private announcements, announcements with common knowledge of suspicion, etc., corresponds to a natural collection of updates as we have defined them above.

2. Each type also gives rise to a logical language with that type of action as a primitive construct. Moreover, it is possible to precisely define the syntax of the language to insure closure under the construct. That is, we should be able to formulate a language with announcements not just about atomic facts, but about announcements themselves, announcements about announcements, etc.

2.5. A Guide to the Concepts in This Paper

In this section, we catalog the main notions that we shall introduce in due course. After this, we turn to a discussion of the particular languages that we study, and we situate them with respect to existing logical languages.

Recall that we insist on a distinction of syntactic and semantic objects in this paper. We list in Figure 2 the main notions that we will need. We do this mostly to help readers as they explore the different notions. We mention now that the various notions are not developed in the order listed; indeed, we have tried hard to organize this paper in a way that will be easiest to read and understand. For example, one of our main goals is to present a set of languages (syntactic objects) and their semantics (utilizing semantic objects).

Languages. This paper studies a number of languages, and to help the reader we list these in Figure 3. Actually, what we study are not individual languages, but rather families of languages parameterized by different choices of primitives. It is standard in modal logic to begin with a set of atomic propositions, and we do the same. The difference is that we shall call these atomic sentences in order to make a distinction between these essentially syntactic objects and the semantic propositions that we study beginning in Section 2.3. This is our first parameter, a set AtSen of atomic sentences. The second is a set A of agents.

Given these, L0 is ordinary modal logic with the elements of AtSen as atomic sentences and with agent-knowledge (or belief) modalities □A for A ∈ A.

We add common-knowledge operators □*B, for sets of agents B ⊆ A, to get a larger language L1. In Figure 3, we note the fact that L0 is literally a subset of L1 by using the inclusion arrow. The syntax and semantics of L0 and L1 are presented in Figure 4.

Another close neighbor of the system in this paper is Propositional Dynamic Logic (PDL). PDL was first formulated by Fischer and Ladner (1979), following the introduction of dynamic logic in Pratt (1976). The syntax and the main clauses in the semantics of PDL are shown in Figure 5.

We may also take L0 and close under countable conjunctions (and hence also disjunctions). We call this language Lω0. Note that L1 is not literally a subset of Lω0, but there is a translation of L1 into Lω0 that preserves the semantics. We would indicate this in a chart with a dashed arrow. Going further, we may close under arbitrary (set-sized) boolean operations; this language is then called L∞0.

PDL is propositional dynamic logic, formulated with atomic programs a replacing the agents A that we have fixed. We might note that we can translate L1 into PDL. The main clauses in the translation are:

(□Aϕ)^t = [a]ϕ^t
(□*Bϕ)^t = [(⋃b∈B b)*]ϕ^t
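The translation can be sketched as a recursive function on a tuple-encoded syntax; the encoding (tuples tagged "atom", "not", "and", "box", "cbox") is our own, not the paper's.

# Sketch: translating L1 into PDL on a tuple-encoded syntax (our encoding).
# Agent A becomes the atomic program ("prog", A); box*_B becomes the PDL
# box of the starred union of the programs for the agents in B.
def translate(phi):
    tag = phi[0]
    if tag == "atom":
        return phi
    if tag == "not":
        return ("not", translate(phi[1]))
    if tag == "and":
        return ("and", translate(phi[1]), translate(phi[2]))
    if tag == "box":                  # (box_A phi)^t = [a] phi^t
        return ("pdl_box", ("prog", phi[1]), translate(phi[2]))
    if tag == "cbox":                 # (box*_B phi)^t = [(U_{b in B} b)*] phi^t
        union = ("union",) + tuple(("prog", b) for b in phi[1])
        return ("pdl_box", ("star", union), translate(phi[2]))
    raise ValueError(f"unknown connective: {tag}")

# e.g. translate(("cbox", ["A", "B"], ("atom", "q")))
#   == ("pdl_box", ("star", ("union", ("prog", "A"), ("prog", "B"))), ("atom", "q"))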

Beginning in Section 4.2, we study some new languages. These will be based on a third parameter, an action signature Σ. For each “type of epistemic action” there will be an action signature Σ. For each Σ, we’ll have languages L0(Σ), L1(Σ), and L(Σ). More generally, for each family of action signatures S, we have languages L0(S), L1(S), and L(S). These will be related to the other languages as indicated in the figure.

So we shall extend modal logic by adding one or another type of epistemic action. The idea is that we can then generate a logic from an action signature corresponding to an intuitive action. For example, corresponding to the notion of a public announcement is a particular action signature pub, and then the languages L0(pub), L1(pub), and L(pub) will have something to do with our notion of a “logic of public announcements.”


Figure 4. The languages L0 and L1. For L0, we drop the □*B construct.

Figure 5. Propositional Dynamic Logic (PDL).

In Baltag et al. (2003), we compare the expressive power of the languages mentioned so far. It turns out that all of the arrows in Figure 3 correspond to proper increases in expressive power. (There is one exception: L0 turns out to equal L0(S) in expressive power for all S.) It is even more interesting to compare expressive power as we change the action signature. For example, we would like to compare logics of public announcement to logics of private announcement to groups. Most of the natural questions in this area are open as of this writing.

2.6. Reformulation of Test-only PDL

In PDL, there are two types of syntactic objects, sentences and programs. The programs in PDL are interpreted on a structure by relations on that structure. This is not the way our semantics works, and to make this point we compare the standard semantics of (a fragment of) PDL with a language closer to what we shall ultimately study.

To make this point, we consider the test-only fragment of PDL in our terms. This is the fragment built over the empty set of atomic programs. So the programs are skip, the tests ?ϕ, and compositions, choices, or iterations of these; sentences are formed as in PDL. We give a reformulation in Figure 6. The point is that in PDL the programs are interpreted by relations on a given model, and in our terms programs are interpreted by updates. We have discussed updates of the form ?ϕ in Example 2.2. Given that we have an interpretation of the sentence ϕ as an epistemic proposition [[ϕ]], we then automatically have an update ?[[ϕ]].

Figure 6. Test-only PDL, with a semantics in our style.

For the sentences, the main thing to look at is the semantics of sentences [π]ϕ; here we use the semantic notions from Section 2.3. The way the semantics works is that we have [[π]] and [[ϕ]]; the former is an update and the latter is an epistemic proposition. Then we use both of these to get an overall semantics, using Equation (4). In more explicit terms,

[[[π]ϕ]]S = ([[[π]]][[ϕ]])S = {s ∈ S : if s [[π]]S t, then t ∈ [[ϕ]]S([[π]])}.

2.7. Background: Bisimulation

We shall see the concept of bisimulation at various points in the paper, and this section reviews the concept and also develops some of the appropriate definitions concerning bisimulation and updates.

DEFINITION. Let S and T be state models. A bisimulation between S and T is a relation R ⊆ S × T such that whenever s R t, the following three properties hold:

1. s ∈ ‖p‖S iff t ∈ ‖p‖T for all atomic sentences p. That is, s and t agree on all atomic information.

2. For all agents A and states s′ such that s →A s′, there is some state t′ such that t →A t′ and s′ R t′.

3. For all agents A and states t′ such that t →A t′, there is some state s′ such that s →A s′ and s′ R t′.
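A direct sketch of this definition as a checker, in our running dict encoding (the function name is ours):

# Sketch: checking that R is a bisimulation between state models S and T
# (clauses 1-3 of the definition above; our encoding, as in earlier sketches).
def is_bisimulation(R, S, T):
    for (s, t) in R:
        # clause 1: s and t agree on all atomic sentences
        for p in set(S["val"]) | set(T["val"]):
            if (s in S["val"].get(p, set())) != (t in T["val"].get(p, set())):
                return False
        for A in S["arrows"]:
            succ_s = {x for (u, x) in S["arrows"][A] if u == s}
            succ_t = {y for (u, y) in T["arrows"].get(A, set()) if u == t}
            # clause 2 ("forth"): every ->A move from s is matched from t
            if not all(any((x, y) in R for y in succ_t) for x in succ_s):
                return False
            # clause 3 ("back"): every ->A move from t is matched from s
            if not all(any((x, y) in R for x in succ_s) for y in succ_t):
                return False
    return True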

EXAMPLE 2.3. This example concerns the update operation ?ϕ of Example 2.2. Fix an epistemic proposition ϕ and a state model S. Recall that (?ϕ)S relates each s ∈ ϕS to the corresponding pair (0, s). We check that the following relation R is a bisimulation between S and S(?ϕ):

R = (?ϕ)S ∪ {(s, (1, s)) : s ∈ S}.

The definition of S(?ϕ) insures that the interpretations of atomic sentences are preserved by this relation.

Next, suppose that s →A t in S and s R (i, s′). Then we must have s′ = s. Further, the definition of R tells us that t R (1, t), and (i, s) →A (1, t).

Finally, suppose that s R (i, s′) and (i, s′) →A (j, t). Then s′ = s and j = 1. In addition, the definition of →A in S(?ϕ) implies that s →A t in S. And as above, t R (1, t). This concludes our verifications.

Recall that L0 is ordinary modal logic, formulated as always over our fixed sets of agents and atomic sentences. The next result concerns the language L∞0 of infinitary modal logic. In this language, one has conjunctions and disjunctions of arbitrary sets of sentences. We have the following well-known result:

PROPOSITION 2.4. If there is a bisimulation R such that s R t, then s and t agree on all sentences in L∞0: for all ϕ ∈ L∞0, s ∈ [[ϕ]]S iff t ∈ [[ϕ]]T.

A pointed state model is a pair (S, s) such that s ∈ S. The state s is called the designated state (or the “point”) of our pointed model, and is meant to represent the actual state of the system. Two pointed models are said to be bisimilar if there exists a bisimulation relation between them which relates their designated states. So, denoting by ≡ the relation of bisimilarity, we have:

(S, s) ≡ (T, t) iff there is a bisimulation R between S and T such that s R t.

This relation ≡ is indeed an equivalence relation. When S and T are clear from the context, we write s ≡ t instead of (S, s) ≡ (T, t).

We say that a proposition ϕ preserves bisimulations if whenever (S, s) ≡ (T, t), then s ∈ ϕS iff t ∈ ϕT. We also say that an update r preserves bisimulations if the following two conditions hold:

1. If s rS s′ and (S, s) ≡ (T, t), then there is some t′ such that t rT t′ and (S(r), s′) ≡ (T(r), t′).

2. If t rT t′ and (S, s) ≡ (T, t), then there is some s′ such that s rS s′ and (S(r), s′) ≡ (T(r), t′).


PROPOSITION 2.5. Concerning bisimulation preservation:

1. The bisimulation-preserving propositions include the atomic propositions p, and they are closed under all of the (infinitary) operations on propositions.

2. The bisimulation-preserving updates are closed under composition and (infinitary) sums.

3. If ϕ and r preserve bisimulations, so does [r]ϕ.

Proof. We show the last part. Suppose that s ∈ ([r]ϕ)S, and let (S, s) ≡ (T, t). To show that t ∈ ([r]ϕ)T, let t rT t′. Then by condition (2) above, there is some s′ such that s rS s′ and (S(r), s′) ≡ (T(r), t′). Since s ∈ ([r]ϕ)S, we have s′ ∈ ϕS(r). And then t′ ∈ ϕT(r), since ϕ too preserves bisimulations. ∎

Endnotes. As far as we know, the first paper to study the interaction of communication and knowledge in a formal setting is Plaza's paper "Logics of Public Communications" (Plaza 1989). As the title suggests, the epistemic actions studied are public announcements. In essence, he formalized the logic of public announcements. (In addition to this, Plaza (1989) contains a number of results special to the logic of announcements which we have not generalized, and it also studies an extension of the logic with non-rigid constants.) The same formalization was made in Gerbrandy (1999a, b), and also Gerbrandy and Groeneveld (1997). These papers further formalize the logic of completely private announcements. (However, their semantics uses non-wellfounded sets rather than arbitrary Kripke models. As pointed out in Moss (1999), this restriction was not necessary.) The logic of common knowledge of alternatives was formulated in Baltag et al. (1998) and also in van Ditmarsch's dissertation (2000).

Our introduction of updates is new here, as are the observations on the test-only fragment of PDL.

For connections of the ideas here with coalgebra, see Baltag (2003).

One very active arena for work on knowledge is distributed systems, and the main source of this work is the book Reasoning About Knowledge, Fagin et al. (1996). We depart from Fagin et al. (1996) by introducing operators whose semantics are updates as we have defined them, and by doing without temporal logic operators. In effect, our Kripke models are simpler, since they do not incorporate all of the runs of a system; the new operators can be viewed as a compensation for that.

REMARK. Our formulation of a program model uses a designated set of simple actions. There are other equivalent formulations. Another way would be to use a disjoint union of pointed program models; it would be possible to further reformulate some of our definitions below and thereby to give a semantics for our ultimate languages that differs from what we obtain in Section 4.2 below. However, the two semantics would be equivalent. The reason we prefer to work with designated sets is that it permits us to draw diagrams with fewer states. The cost of the simpler representations is the slightly more complicated definition, but we feel this cost is worth paying.

3. THE UPDATE PRODUCT OPERATION

In this section, we present the centerpiece of the formulation of our logical systems by introducing action models, program models, and an update product operation. The leading idea is that epistemic actions, like state models, have the property that different agents think that different things are possible. To model the effect of an epistemic update on a state, we use a kind of product of epistemic alternatives.

3.1. Epistemic Action Models

Let Φ be the collection of all epistemic propositions. An epistemic action model is a triple Σ = (Σ, →A, pre), where Σ is a set of simple actions, →A is an A-indexed family of relations on Σ, and pre : Σ → Φ.

An epistemic action model is similar to a state model. But we call the members of the set Σ "simple actions" (instead of states). We use different notation and terminology because of a technical difference and a bigger conceptual point. The technical difference is that pre : Σ → Φ (that is, the codomain is the collection of all epistemic propositions). The conceptual point is that we think of "simple" actions as being deterministic actions whose epistemic impact is uniform on states (in the sense explained in our Introduction). So we think of "simple" actions as particularly simple kinds of deterministic actions, whose appearance to agents is uniform: the agents' uncertainties concerning the current action are independent of their uncertainties concerning the current state. This allows us to abstract away the action uncertainties and represent them as a Kripke structure of actions, in effect forgetting the state uncertainties.

As announced in the Introduction, this uniformity of appearance is restricted to the action's domain of applicability, defined by its preconditions. Thus, for a simple action σ ∈ Σ, we interpret pre(σ) as giving the precondition of σ; this is what needs to hold at a state (in a state model) in order for the action σ to be "accepted" in that state. So σ will be executable in s iff its precondition pre(σ) holds at s.


At this point we have mentioned the ways in which action models and state models differ. What they have in common is that they use accessibility relations to express each agent's uncertainty concerning something. For state models, the uncertainty has to do with which state is the real one; for action models, it has to do with which action is taking place.

Usually we drop the word "epistemic" and therefore refer to "action models".

EXAMPLE 3.1. Here is an action model:

[diagram omitted]

Formally, Σ = {σ, τ}; σ →A σ, σ →B τ, τ →A τ, τ →B τ; pre(σ) = H, and pre(τ) = tr, recalling that tr is the "always true" proposition.

As we shall see, this action model will be used in the modeling of a completely private announcement to A that the coin is lying heads up. Further examples may be found later in this section.

3.2. Program Models

To model non-deterministic actions and non-simple actions (whose appearances to agents are not uniform on states), we define epistemic program models. In effect, this means that we decompose complex actions ("programs") into "simple" ones: they correspond to sets of simple, deterministic actions from a given action model.

An epistemic program model is defined as a pair π = (Σ, Γ) consisting of an action model Σ and a set Γ of designated simple actions. Each of the simple actions γ ∈ Γ can be thought of as a possible "deterministic resolution" of the non-deterministic action π. As announced above, the intuition about the map pre is that an action is executable in a given state only if all its preconditions hold at that state. We often spell out an epistemic program model as (Σ, →A, pre, Γ) rather than ((Σ, →A, pre), Γ). When drawing diagrams, we use doubled circles to indicate the designated actions in the set Γ. Finally, we usually drop the word "epistemic" and just refer to these as program models.

EXAMPLE 3.2. Every action model Σ and every σ ∈ Σ gives a program model by taking {σ} as the set of designated simple actions. For instance, in connection with Example 3.1, we have

[diagram omitted]


with pre(σ) = H and pre(τ) = tr, as before. Program models of this type are very common in our treatment. But we need the extra ability to have sets of designated simple actions to deal with more complicated actions, as our next examples show.

EXAMPLE 3.3. A Non-deterministic Action. Let us model the non-deterministic action of either making a completely private announcement to A that the coin is lying heads up, or not making any announcement. The action is completely private, so B doesn't suspect anything: he thinks no announcement is made. The program model is obtained by choosing Γ = {σ, τ} in the action model from Example 3.1. The picture is

[diagram omitted]

with pre(σ) = H, pre(τ) = tr, as before. Alternatively, one could take the disjoint union of Σ with the one-action program model with precondition tr.

EXAMPLE 3.4. A Deterministic, but Non-simple Action. Let us model the action of (completely) privately announcing to A whether the coin is lying heads up or not. Observe that this is a deterministic action; that is, the update relation is functional: at any state, the coin is either heads up or not. But the action is not simple: its appearance to A depends on the state. In states in which the coin is heads up, this action looks to A like a private announcement that H is the case; in the states in which the coin is not heads up, the action looks to A like a private announcement that ¬H. (However, the appearance to B is uniform: at any state, the action appears to him as if no announcement is made.) The only way to model this deterministic, but non-simple, action in our setting is as a non-deterministic program model, having as its "designated" actions two mutually exclusive simple actions: one corresponding to a (truthful) private announcement that H, and another one corresponding to a (truthful) private announcement that ¬H.

[diagram omitted]

with pre(σ) = H, pre(τ) = tr, and pre(ρ) = ¬H.


3.3. The Update Product of a State Model with an Epistemic Action Model

The following operation plays a central role in this paper. Given a state model S = (S, →A, ‖·‖S) and an action model Σ = (Σ, →A, pre), we define their update product to be the state model

S ⊗ Σ = (S ⊗ Σ, →A, ‖·‖S⊗Σ),

given by the following: the new states are pairs of old states s and simple actions σ which are "consistent", in the sense that all preconditions of the action σ "hold" at the state s:

S ⊗ Σ = {(s, σ) ∈ S × Σ : s ∈ pre(σ)S}.   (5)

The new accessibility relations are taken to be the "products" of the corresponding accessibility relations in the two frames; i.e., for (s, σ), (s′, σ′) ∈ S ⊗ Σ we put

(s, σ) →A (s′, σ′) iff s →A s′ and σ →A σ′,   (6)

and the new valuation map ‖·‖S⊗Σ : AtSen → P(S ⊗ Σ) is essentially given by the old valuation:

‖p‖S⊗Σ = {(s, σ) ∈ S ⊗ Σ : s ∈ ‖p‖S}.   (7)

Intended Interpretation. The update product restricts the full Cartesian product S × Σ to the smaller set S ⊗ Σ in order to insure that states survive actions in the appropriate sense.

For each agent A, the product arrows →A on the output frame represent agent A's epistemic uncertainty about the output state. The intuition is that the components of our action models are "simple actions", so the uncertainty regarding the action is assumed to be independent of the uncertainty regarding the current (input) state. This independence allows us to "multiply" these two uncertainties in order to compute the uncertainty regarding the output state: if whenever the input state is s, agent A thinks the input might be some other state s′, and if whenever the current action happening is σ, agent A thinks the current action might be some other action σ′, and if s′ survives σ′, then whenever the output state (s, σ) is reached, agent A thinks the alternative output state (s′, σ′) might have been reached. Moreover, all of the output states that A considers possible are of this form.


As for the valuation, we essentially take the same valuation as the one in the input model. If a state s survives an action σ, then the same facts p hold at the output state (s, σ) as at the input state s. This means that our actions, if successful, do not change the facts. This condition can of course be relaxed in various ways, to allow for fact-changing actions. But in this paper we are primarily concerned with purely epistemic actions, such as the earlier examples in this section.
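For finite models, Equations (5)-(7) can be implemented directly. Here is a sketch, using the same assumed dictionary encoding as the bisimulation sketch of Section 2.7; pre is given as a predicate on states, which suffices for atomic preconditions such as H (a full treatment would evaluate epistemic propositions on the whole model):

```python
def update_product(S, Sigma, pre):
    """S: state model; Sigma: action model with 'actions' and 'rel';
       pre: {action: predicate on states of S}. Returns S (x) Sigma."""
    # (5): keep the pairs (s, a) whose precondition holds at s.
    states = {(s, a) for s in S['states'] for a in Sigma['actions']
              if pre[a](s)}
    # (6): product of the accessibility relations, agent by agent.
    rel = {A: {((s, a), (s2, a2))
               for (s, a) in states for (s2, a2) in states
               if (s, s2) in S['rel'][A] and (a, a2) in Sigma['rel'][A]}
           for A in S['rel']}
    # (7): atomic facts are inherited from the first component.
    val = {(s, a): S['val'][s] for (s, a) in states}
    return {'states': states, 'rel': rel, 'val': val}
```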

3.4. Updates Induced by Program Models

Recall that we defined updates and proper epistemic actions in Section 2.3. Right above, in Section 3.2, we defined epistemic program models. Note that there is a big difference: the updates are pairs of operations on the class of all state models, and the program models are typically finite structures. We think of program models as capturing specific mechanisms, or algorithms, for inducing updates. This connection is made precise in the following definition.

DEFINITION. Let (Σ, Γ) be a program model. We define an update which we also denote (Σ, Γ) as follows:

1. S(Σ, Γ) = S ⊗ Σ.
2. s (Σ, Γ)S (t, σ) iff s = t and σ ∈ Γ.

We call this the update induced by (Σ, Γ).
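The induced update is then a short extension of the update-product sketch above: the update map is the update product, and the transition relation implements clause 2 of this definition.

```python
def induced_update(S, Sigma, Gamma, pre):
    updated = update_product(S, Sigma, pre)
    # Clause 2: s is related to (s, a) exactly when a is designated.
    transition = {(s, (s, a)) for (s, a) in updated['states'] if a in Gamma}
    return updated, transition
```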

Bisimulation Preservation. Before moving to examples, we note a simple result that will be used later.

PROPOSITION 3.5. Let Σ be an action model in which every pre(σ) is a bisimulation-preserving epistemic proposition. Let Γ ⊆ Σ be arbitrary. Then the update induced by (Σ, Γ) preserves bisimulation.

Proof. Write r for the update induced by the program model (Σ, Γ). Fix S and T, and suppose that s ≡ t via the relation R0. Suppose that s rS s′, so s′ ∈ S(r) is of the form (s, σ) for some σ ∈ Γ. Then (t, σ) ∈ T(r), since pre(σ) preserves bisimulations, and clearly t rT (t, σ). We need only show that (s, σ) ≡ (t, σ). But the following relation R is a bisimulation between S(r) and T(r):

(s′, τ1) R (t′, τ2) iff s′ R0 t′ and τ1 = τ2.

The verification of the bisimulation properties is easy. And R shows that (s, σ) ≡ (t, σ), as desired. ∎


3.5. The Coin Scenario Models as Examples of the Update Product

We return to the coin scenarios of Section 1.1. Our purpose is partly to indicate how one may obtain the models there as examples of the update product, and at the same time to exemplify the update product construction itself. In this section, the set A of agents is {A, B}, and the set AtSen of atomic propositions is {H, T}. We remind the reader that T represents the coin lying tails up, while tr is our notation for the true epistemic proposition.

EXAMPLE 3.6. We begin with an example worked out with many details, and then the rest of our examples will omit many similar calculations. This example has to do with Scenario 4 from Section 1.1, where the coin lies heads up and A takes a look at the coin in such a way that she is certain that B does not suspect anything. We take as S1 and Σ4 the structures shown below:

[diagrams omitted]

(We remind the reader that T is the atomic sentence for "tails" and tr is for "true".) S1 is from Scenario 1. In Σ4, we take the set of distinguished actions to be {σ}; Σ4 comes from Examples 3.1 and 3.2. To take the update product, we first form the cartesian product S1 × Σ4:

{(s, σ), (s, τ), (t, σ), (t, τ)}

Of these four pairs, we only want those whose first component satisfies (in S1) the precondition of the second component. We do not want (t, σ), since pre(σ) = H and t ∉ [[H]]S1. But the other three pairs do satisfy our condition. So the state model S1 ⊗ Σ4 will have three states: (s, σ), (s, τ), and (t, τ). The atomic information is inherited from the first component, so we have [[H]]S1⊗Σ4 = {(s, σ), (s, τ)} and [[T]]S1⊗Σ4 = {(t, τ)}. The accessibility relations →A and →B are those of the product. For example, we have (s, σ) →B (s, τ), because s →B s and σ →B τ. But we do not have (s, σ) →A (s, τ), because σ →A τ is false.

Now, we rename the states as follows:

(s, σ) ↦ s    (s, τ) ↦ t    (t, τ) ↦ u

And then we get a picture of this state model, the same one we had in Scenario 4:

[diagram omitted]


The dotted line here shows the update relation between s and (s, σ). This is the only update relation. For example, we do not relate s and (s, τ), because τ ∉ Γ = {σ}. Let S4 = S1 ⊗ Σ4.
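As a check, one may run the sketches above on this example. The encodings of S1 and Σ4 below are ours, not the paper's:

```python
S1 = {'states': {'s', 't'},
      'val': {'s': {'H'}, 't': {'T'}},
      'rel': {A: {(x, y) for x in {'s', 't'} for y in {'s', 't'}}
              for A in ('A', 'B')}}

Sigma4 = {'actions': {'sigma', 'tau'},
          'rel': {'A': {('sigma', 'sigma'), ('tau', 'tau')},
                  'B': {('sigma', 'tau'), ('tau', 'tau')}}}

pre = {'sigma': lambda s: 'H' in S1['val'][s],   # pre(sigma) = H
       'tau':   lambda s: True}                  # pre(tau) = tr

updated, transition = induced_update(S1, Sigma4, {'sigma'}, pre)
assert updated['states'] == {('s', 'sigma'), ('s', 'tau'), ('t', 'tau')}
assert transition == {('s', ('s', 'sigma'))}
```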

EXAMPLE 3.7. Σ4 from Example 3.6 represents the action where A cheats, learning heads in a way which is oblivious to B. It is also natural to consider an action of A privately learning the state of the coin. (This action may take place even if the state is tails.) We could use the following program model Σ′4:

[diagram omitted]

This program model consists of two disjoint components. We leave it to the reader to calculate S1 ⊗ Σ′4, and also the relation between "before" and "after".

EXAMPLE 3.8. Next we construct the model corresponding to Scenario 5, where B cheats after Scenario 4. We consider the update product of the state model S4 from Example 3.6 above with the program model Σ5 shown below:

[diagram omitted]

It represents cheating by B. The update product S4 ⊗ Σ5 has five states: (s, σ), (t, σ), (s, τ), (t, τ), and (u, τ). Notice that (u, σ) is omitted, since u ∉ [[H]]S4. We leave it to the reader to calculate the accessibility relations in S4 ⊗ Σ5, and to draw the appropriate figure.

Incidentally, we posed in Section 1.1 the exercise of constructing this representation from first principles. Many people are able to construct the five-state picture, and some others construct a related picture with seven states. The seven-state picture is bisimilar to the one illustrated here.

EXAMPLE 3.9. We next look back at Scenario 2. The simplest action structure is Σ2:

[diagram omitted]   (8)

It represents a public announcement to A and B that the coin is lying heads up. Here, the distinguished set Γ is the entire action structure. For the record, we formalize Σ2 as a singleton set {Pub}. We have Pub →A Pub and Pub →B Pub. Also, we set pre(Pub) = H. We did not put the name Pub in the representation in (8), but in situations where we want the name we would draw the same picture except that instead of H we would say Pub : H. Let us call the same structure S2 when we view it as a state model; formally these are the same object.

Let S be any state model with the property that every state has both a successor in →A where H is true, and also a successor in →B where H is true. Then S ⊗ Σ2 is bisimilar to S2. In particular, S1 ⊗ Σ2 is bisimilar to S2.

EXAMPLE 3.10. Let Σ3 be the following program model:

[diagram omitted]

Σ3 represents an announcement to A of heads in the manner of Scenario 3. That is, B knows that A either sees heads or sees tails, but not which. Similarly, let Σ′3 represent the same kind of announcement to B:

[diagram omitted]

Then we have the following:

S1 ⊗ Σ3 ≅ S3, where S3 is the model in Scenario 3 and ≅ denotes isomorphism.

S3 ⊗ Σ′3 ≅ S2. This conforms with the intuition that successive semi-private viewings by the two parties of the concealed coin amount to a public viewing.

S3 ⊗ Σ3 ≅ S3. There is no point for A to look twice.

S4 ⊗ Σ3 is a three-state model bisimilar to the model S4 from Scenario 4.

EXAMPLE 3.11. To obtain the model in Scenario 7, we use the following program model Σ7:

[diagram omitted]

with pre(ρ) = tr, pre(σ) = H, and pre(τ) = T. As illustrated, the set Γ is the entire set Σ7 of simple actions. More generally, the general shape of Σ7 is the frame for an action which has three possibilities: A learns which one happens, and B merely learns that one of the three happens. Further, these two aspects are common knowledge. We omit the calculation that shows that S1 ⊗ Σ7 is the model drawn in Scenario 7.

EXAMPLE 3.12. The program model employed in Scenario 8 is Σ8, shown below:

[diagram omitted]

Again we take pre(ρ) = tr, pre(σ) = H, and pre(τ) = T. The difference between this and Σ7 above is that instead of Γ = Σ7, we take Γ to be {σ}. Then S1 ⊗ Σ8 ≅ S8.

3.6. Operations on Program Models

1 and 0. We define program models 1 and 0 as follows: 1 is a one-action set {σ} with σ →A σ for all A, pre(σ) = tr, and with distinguished set {σ}. The point here is that the update induced by this program model is exactly the update 1 from Section 2.3. We purposely use the same notation. Similarly, we let 0 be the empty program model. Then its induced update is what we called 0 in Section 2.3.

Sequential Composition. In all settings involving "actions" in some sense or other, sequential composition is a natural operation. In our setting, we would like to define a composition operation on program models, corresponding to the sequential composition of updates. Here is the relevant definition.


Let Σ = (Σ, →A, preΣ, ΓΣ) and Δ = (Δ, →A, preΔ, ΓΔ) be program models. We define the composition

Σ;Δ = (Σ × Δ, →A, preΣ;Δ, ΓΣ;Δ)

to be the following program model:

1. Σ × Δ is the cartesian product of the sets Σ and Δ.
2. →A in the composition Σ;Δ is the family of product relations, in the natural way:

(σ, δ) →A (σ′, δ′) iff σ →A σ′ and δ →A δ′.

3. preΣ;Δ(σ, δ) = ⟨(Σ, σ)⟩preΔ(δ).
4. ΓΣ;Δ = ΓΣ × ΓΔ.

In the definition of preΣ;Δ, (Σ, σ) is an abbreviation for the update (Σ, {σ}), as defined in Section 3.4; i.e., the update induced by the program model (Σ, {σ}).
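A sketch of composition in the same style of encoding follows. Since the precondition of a pair (σ, δ) is the genuinely epistemic proposition ⟨(Σ, σ)⟩preΔ(δ), this variant (our assumption, not the paper's) represents each precondition as a function of a model and a state, and reuses the update_product sketch from Section 3.3:

```python
def compose(Sigma, Delta):
    """Sigma, Delta: program models as dicts with 'actions', 'rel',
       'pre' ({action: lambda M, w: bool}) and 'designated'."""
    actions = {(a, d) for a in Sigma['actions'] for d in Delta['actions']}
    rel = {A: {((a, d), (a2, d2))
               for (a, d) in actions for (a2, d2) in actions
               if (a, a2) in Sigma['rel'][A] and (d, d2) in Delta['rel'][A]}
           for A in Sigma['rel']}

    def pre_pair(a, d):
        # pre(a, d) = <(Sigma, a)> pre_Delta(d): a must be executable at w,
        # and pre_Delta(d) must hold at (w, a) in the updated model.
        def p(M, w):
            if not Sigma['pre'][a](M, w):
                return False
            fixed = {b: (lambda s, b=b: Sigma['pre'][b](M, s))
                     for b in Sigma['actions']}
            return Delta['pre'][d](update_product(M, Sigma, fixed), (w, a))
        return p

    return {'actions': actions, 'rel': rel,
            'pre': {(a, d): pre_pair(a, d) for (a, d) in actions},
            'designated': {(a, d) for a in Sigma['designated']
                           for d in Delta['designated']}}
```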

EXAMPLE 3.13. This example constructs a program model for lying, as we find in Scenario 6. Lying in our subject cannot be taken to be simply a case of private or public announcement: this will not work out. In our given situation, B simply knows that A doesn't know the side of the coin, and hence cannot accept any lying announcement that would claim such knowledge. One way to make sense of the action of A (successfully) lying to B is to assume that, first, before the lying, a suspicion was aroused in B that A might have privately learnt (e.g., by opening the box, or by being told) which side of the coin was lying up; and second, that B subsequently receives an untruthful announcement that A knows the coin is lying heads up, an announcement which is known to be false by A herself (but which is believable, and now believed, by B). Obviously, we cannot express things about past actions in our logic, so we have to start right at the beginning, before the lying announcement is sent, and capture the whole action of successful lying as a sequential composition of two actions: B's suspicion of A's private learning, followed by B's receiving (and believing) the lying announcement. This is what we shall do here.⁴ Let ϕ be ¬□A H and let ψ be H ∧ □A H. Let Σ6 be

[diagram omitted]

The idea is that B "learns" a false statement, namely that A knows the state of the coin. Further, we saw Σ8 in Example 3.12. We consider Σ8;Σ6. One can check using Proposition 3.14 that S1 ⊗ (Σ8;Σ6) ≅ (S1 ⊗ Σ8) ⊗ Σ6 ≅ S8 ⊗ Σ6 ≅ S6. In addition, one can calculate Σ8;Σ6 explicitly to see what a reasonable one-step program for lying would be. The interesting point is that its preconditions are complex sentences having to do with actions in our language.

Disjoint Union. If Σ = (Σ, →A, preΣ, ΓΣ) and Δ = (Δ, →A, preΔ, ΓΔ), we take Σ ⊔ Δ to be the disjoint union of the models, with the union of the distinguished actions. The intended meaning is the non-deterministic choice between the programs represented by Σ and Δ. Here is the definition in more detail, generalized to arbitrary (possibly infinite) disjoint unions: let {Σi}i∈I be a family of program models, with Σi = (Σi, →A,i, prei, Γi); we define their (disjoint) union

⊔i∈I Σi = (⊔i∈I Σi, →A, pre, Γ)

to be the model given by:

1. ⊔i∈I Σi is ⋃i∈I (Σi × {i}), the disjoint union of the sets Σi.
2. (σ, i) →A (τ, j) iff i = j and σ →A,i τ.
3. pre(σ, i) = prei(σ).
4. Γ = ⋃i∈I (Γi × {i}).
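Disjoint union is straightforward to compute; a sketch in the same hypothetical encoding, tagging each action with the index of its component:

```python
def disjoint_union(models):
    """models: a list of program models in the dict encoding used above."""
    agents = set().union(*(M['rel'].keys() for M in models))
    actions = {(a, i) for i, M in enumerate(models) for a in M['actions']}
    rel = {A: {((a, i), (b, j))
               for (a, i) in actions for (b, j) in actions
               if i == j and (a, b) in models[i]['rel'].get(A, set())}
           for A in agents}
    pre = {(a, i): models[i]['pre'][a] for (a, i) in actions}
    designated = {(a, i) for i, M in enumerate(models)
                  for a in M['designated']}
    return {'actions': actions, 'rel': rel, 'pre': pre,
            'designated': designated}
```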

Iteration. Finally, we define an iteration operation by Σ* = ⊔{Σⁿ : n ∈ N}. Here Σ⁰ = 1, and Σⁿ⁺¹ = Σⁿ;Σ.

We conclude by verifying that our definitions of the operations on program models are correct, in the sense that they are faithful to the corresponding operations on updates from Section 2.3.

PROPOSITION 3.14. The update induced by a composition of program models is the composition of the induced updates. Similarly for sums and iteration, mutatis mutandis.

Proof. Let (Σ, ΓΣ) and (Δ, ΓΔ) be program models. We denote by r the update induced by (Σ, ΓΣ), by s the update induced by (Δ, ΓΔ), and by t the update induced by (Σ, ΓΣ);(Δ, ΓΔ). We need to prove that r;s = t. Let S = (S, →A, [[·]]S) be a state model. Recall that

S(r;s) = S(r)(s) = (S ⊗ (Σ, ΓΣ)) ⊗ (Δ, ΓΔ).

We claim that this is isomorphic to S ⊗ (Σ;Δ, ΓΣ;Δ), and indeed the isomorphism is (s, (σ, δ)) ↦ ((s, σ), δ). We check that (s, (σ, δ)) ∈ S ⊗ (Σ;Δ) iff ((s, σ), δ) ∈ (S ⊗ Σ) ⊗ Δ. Indeed, the following are equivalent:


1. (s, (σ, δ)) ∈ S ⊗ (Σ;Δ).
2. s ∈ ‖preΣ;Δ(σ, δ)‖S.
3. s ∈ ‖⟨(Σ, σ)⟩preΔ(δ)‖S.
4. (s, σ) ∈ S ⊗ Σ and (s, σ) ∈ ‖preΔ(δ)‖S⊗Σ.
5. ((s, σ), δ) ∈ (S ⊗ Σ) ⊗ Δ.

The rest of the verification of the isomorphism is fairly direct. We also need to check that tS and (r;s)S are related by the isomorphism. Now

tS = {(s, (s, (σ, δ))) : (s, (σ, δ)) ∈ S ⊗ (Σ;Δ), σ ∈ ΓΣ, δ ∈ ΓΔ}.

Recall that (r;s)S = rS; sS(r) and that this is a relational composition in left-to-right order. And indeed,

rS = {(s, (s, σ)) : (s, σ) ∈ S ⊗ Σ, σ ∈ ΓΣ}
sS(r) = {((s, σ), ((s, σ), δ)) : ((s, σ), δ) ∈ S(r) ⊗ Δ, δ ∈ ΓΔ}.

This completes the proof for composition. We omit the proofs for sums and iteration. ∎

Endnotes. The work of this section explains how complex representations of naturally occurring scenarios may be computed from state models holding before the scenario together with program models. Indeed, this is one of our main points. There are precursors to our work in special cases, most notably Hans van Ditmarsch's dissertation (2000). That work is about examples like Σ7, where some agent or set of agents knows that the current action belongs to some set, but does not know which action it is. But to our knowledge, the work in this section is the first time that anything like this has been obtained in general.

We have taught the material in this section in several courses at different levels. Our experience is that the material here is highly appealing to students and makes the case for formal representations in the first place (for students not familiar with formal logic) and for the kind of technical work that we pursue in this area (for those who are).

The idea of representing epistemic actions as Kripke models (or variants of them) was first presented in our earlier paper with S. Solecki (Baltag et al. 1998). However, the proposal of that paper was to employ Kripke models in the syntax of a logical language directly. Many readers have felt this to be confusing, since the resulting syntax looked as if it depended on the semantics. Proposals to improve the language were developed in several papers of Baltag (1999, 2001, 2002, 2003). The logics of these papers were more natural than those of Baltag et al. (1998). What is new in this paper is the overall apparatus of action signatures, epistemic actions, etc. We present what we hope is a natural syntax, and at the same time we have programs as syntactic entities in our logical languages (see Section 4 just below) linked to program models in the semantics.

4. LOGICAL LANGUAGES BASED ON ACTION SIGNATURES

This section presents the centerpiece of our formal work, the logical systems corresponding to types of epistemic action. Our overall goal is to abstract from the propositional language earlier, in a way which allows us to fix only the epistemic structure of the desired actions, and vary their preconditions.

4.1. Action Signatures

We have presented many examples of the update product in Section 3.5. These allow us to represent many natural updates, and this is one of our goals in this paper. But the structures which we have seen so far are not sufficient to get logical languages incorporating updates. For example, we have in Example 3.7 a program model that represents a private announcement to the agent A that the proposition H holds, and this takes place in a way that B learns nothing. The picture of this was

[diagram omitted]

What we want to do now is to vary things a bit, and then to abstract them. For example, suppose we want to announce a different proposition to A, say ψ. We would use

[diagram omitted]

Varying the proposition ψ, all announcements of this kind can be thought of as actions of the same type. We could then represent the type of the action by the following picture:

[diagram omitted]

And the previous representations include the information that what actually happens is what A hears. To vary this, we need only change which world is designated by the doubled border. We could switch things, or double neither or both worlds. So we obtain a structure consisting of two action types:

[diagram omitted]

The oval on the left represents the type PriA of a fully private announcement to agent A, while the oval on the right simply represents the type of a skip action (or of an empty, or trivial, public announcement). By inserting any given proposition ψ into the oval depicting the action type PriA, we can obtain specific private announcements PriA ψ, as depicted above. (There is no reason to insert any proposition into the right oval, since this comes already with its own precondition tr: this means that the type of a skip action uniquely determines the corresponding action skip, since it contains all the information needed to specify this action.)

Another example would be the case in which we announce ψ to A in such a way that B is misled into believing ¬ψ (and is also misled into believing that everyone learns ¬ψ). Now we use

[diagram omitted]

In itself, this is not general enough to give rise to an action type in our sense. But we can more generally consider the case in which ψ1 is announced to A in such a way that B thinks that some other proposition ψ2 is publicly announced:

[diagram omitted]

By abstracting from the specific propositions, we obtain the following structure consisting of two action types:

[diagram omitted]

Observe that if we are now given a sequence of two propositions (ψ1, ψ2), we could use them to fill in the ovals with preconditions in two possible ways (depending on which proposition goes into the left oval). So, in order to uniquely determine how an action type will generate specific announcements, we need an enumeration without repetition of all the action types in the structure which do not come equipped with trivial preconditions (i.e., all the empty ovals in the diagram, since we assume the others have tr inside).


So at this point, we can see how to abstract things, leading to the following definition.

DEFINITION. An action signature is a structure

Σ = (Σ, →A, (σ1, σ2, . . . , σn))

where (Σ, →A) is a finite Kripke frame, and σ1, σ2, . . . , σn is a designated listing of a subset of Σ without repetitions. We obviously must have n ≤ |Σ|, but we allow the case n = 0 as well, which will produce an empty list. We call the elements of Σ action types, while the ones which are in the listing (σ1, . . . , σn) will be called non-trivial action types.

The way this works for us is that an action signature Σ together with an assignment of epistemic propositions to the non-trivial action types in Σ will give us a full-fledged action model. The trivial action types will describe actions which can always happen, so they will get assigned the trivial precondition tr by default. And this is the exact sense in which the notion of an action signature is an abstraction of the notion of an action model. We shall shortly use action signatures in constructing logical languages.

EXAMPLES 4.1. Here is a very simple action signature which we call skip. Σ is a singleton {skip}, we put skip →A skip for all agents A, and we take the empty list () of types as the listing. So we have only one, trivial, type: skip. In a sense which we shall make clear later, this is an action in which "nothing happens", and moreover it is common knowledge that this is the case.

The type of a public announcement can be obtained simply by changing the listing in skip, to make the type non-trivial; we also change the name of the type to Pub, to distinguish it from the previous one. So the signature pub of public announcements has Σ = {Pub}, Pub →A Pub for every agent A, and the listing is just (Pub). So Pub is the non-trivial type of a public announcement action. Note once again that we have not said what is being announced. We are purposely separating out the structure of the announcement from the content, and Pub is our model of the structure.

The next simplest action signature is the "test" signature ?. We take Σ = {?, skip}, with the listing (?). We also take ? →A skip and skip →A skip for all A. So the test ? is the only non-trivial type of action here. This turns out to be a totally opaque form of test: ϕ is tested on the real world, but nobody knows this is happening: while it is happening, all the agents think that nothing (i.e., skip) is happening. The function of this type will be to generate tests ?ϕ, which affect the states precisely in the way dynamic logic tests do.


For each set B ⊆ A of agents, we define the action signature PriB of completely private announcements to the group B. It has Σ = {PriB, skip}; the listing is just (PriB), which makes skip into a trivial type again; and we put PriB →B PriB for all B ∈ B, PriB →C skip for C ∉ B, and skip →A skip for all agents A.

The action signature CkaBk is given by: Σ = {1, . . . , k}; the listing is (1, 2, . . . , k), so all types are non-trivial; i →B i for i ≤ k and B ∈ B; and finally i →C j for i, j ≤ k and C ∉ B. This action signature is called the signature of common knowledge of alternatives for an announcement to the group B.

Signature-based Program Models. Now that we have a general abstract notion, we introduce some notation to regain the earlier examples. Let Σ be an action signature, let (σ1, . . . , σn) be the corresponding listing of non-trivial types, let Γ ⊆ Σ, and let ψ⃗ = ψ1, . . . , ψn be a list of epistemic propositions. We obtain an epistemic program model (Σ, Γ)(ψ1, . . . , ψn) in the following way:

1. The set of simple actions is Σ, and the accessibility relations are those given by the action signature.

2. For j = 1, . . . , n, pre(σj) = ψj. We put pre(σ) = tr for all the other (trivial) actions.

3. The set of distinguished actions is Γ.

In the special case that Γ is the singleton set {σi}, we write the resulting signature-based program model as (Σ, σi, ψ1, . . . , ψn).

To summarize: every action signature, set of distinguished action types in it, and corresponding tuple of epistemic propositions gives an epistemic program model in a canonical way.
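In computational terms, a signature is a frame plus a listing, and the clauses above fill in the preconditions. A sketch in our hypothetical encoding, with preconditions as model-state predicates as in the composition sketch of Section 3.6:

```python
def instantiate(signature, Gamma, props):
    """props[i] is the precondition assigned to the i-th listed type."""
    # Trivial types get the always-true precondition tr by default.
    pre = {a: (lambda M, w: True) for a in signature['actions']}
    for typ, prop in zip(signature['listing'], props):
        pre[typ] = prop  # clause 2: the j-th listed type gets psi_j
    return {'actions': signature['actions'], 'rel': signature['rel'],
            'pre': pre, 'designated': Gamma}

# For instance, the signature of a completely private announcement to A
# (agents A and B), instantiated with the proposition H:
Pri_A = {'actions': {'Pri', 'skip'},
         'rel': {'A': {('Pri', 'Pri'), ('skip', 'skip')},
                 'B': {('Pri', 'skip'), ('skip', 'skip')}},
         'listing': ['Pri']}
announce_H_to_A = instantiate(Pri_A, {'Pri'},
                              [lambda M, w: 'H' in M['val'][w]])
```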

4.2. The Syntax and Semantics of L(Σ)

Fix an action signature Σ. We present in Figure 7 a logic L(Σ).

The first thing to notice about this is that for the first time in a great while, we have a genuine syntax. In this regard, note that n is fixed from Σ; it is the length of the given listing (σ1, . . . , σn). In the programs of the form σψ1, . . . , ψn we have sentences ψ1, . . . , ψn rather than epistemic propositions (which we had written using boldface letters in Section 2.3). Also, the signature Σ figures into the semantics exactly in those programs σψ1, . . . , ψn; in those we require that σ ∈ Σ. The program model (Σ, σ, [[ψ1]], . . . , [[ψn]]) is a signature-based program model as in the previous section.


Figure 7. The language L(Σ) and its semantics.
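The figure itself is not reproduced in this copy; the following grammar is a hedged reconstruction assembled from the constructs discussed in this section (the sentential operators, the basic actions σ ψ1, . . . , ψn, skip, crash, composition, choice, and iteration), and may differ from the original in inessential details.

```latex
% Reconstruction of the two-sorted grammar of L(\Sigma); the choice
% symbol \sqcup and other presentational details are assumptions.
\begin{align*}
\varphi &::= \mathsf{true} \mid p \mid \neg\varphi \mid \varphi\wedge\psi
         \mid \Box_A\varphi \mid \Box^*_{\mathcal B}\varphi \mid [\pi]\varphi\\
\pi &::= \sigma\,\psi_1,\ldots,\psi_n \mid \mathsf{skip} \mid \mathsf{crash}
         \mid \pi;\pi' \mid \pi\sqcup\pi' \mid \pi^*
         \qquad (\sigma\in\Sigma,\ n \text{ the length of the listing})
\end{align*}
```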

The second thing to note is that, as in PDL, we have two sorts of syntactic objects: sentences and programs. We call programs of the form σψ1, . . . , ψn basic actions. Note that they might not be "atomic" in the sense that the sentences ψj might themselves contain programs. The basic actions of the form σiψ1, . . . , ψn (with i ≤ n) are called non-trivial, since they are generated by non-trivial action types.

We use the following standard abbreviations: false = ¬true, ϕ ∨ ψ = ¬(¬ϕ ∧ ¬ψ), ◇Aϕ = ¬□A¬ϕ, ◇*Bϕ = ¬□*B¬ϕ, and ⟨π⟩ϕ = ¬[π]¬ϕ.

The Semantics. We define two operations by simultaneous recursion on L(Σ):

1. ϕ ↦ [[ϕ]], taking the sentences of L(Σ) into epistemic propositions; and

2. π ↦ [[π]], taking the programs of L(Σ) into program models (and hence into induced updates).

The formal definition is given in Figure 7. The main thing to note is that with one key exception, the operations on the right-hand sides are immediate applications of our general definitions of the closure conditions on epistemic propositions from Section 2.1 and the operations on program models from Section 3.6. A good example to explain this is the clause for the semantics of sentences [α]ϕ. Assuming that we have a program model [[α]], we get an induced update as in Section 3.4, which we again denote [[α]]. We also have an epistemic proposition [[ϕ]]. We can therefore form the epistemic proposition [[[α]]][[ϕ]] (see Equation (4) in Section 2.3). Note that we have overloaded the square-bracket notation; this is intentional, and we have done the same with other notation as well.

Similarly, the semantics of skip and crash are the program models 1 and 0 of Section 3.6.

We also discuss the definition of the semantics for basic actions σψ⃗. This is precisely where the structure of the action signature Σ enters. For this, recall that we have the general definition of a signature-based program model (Σ, Γ, ψ1, . . . , ψn), where Γ ⊆ Σ and the ψ's are any epistemic propositions. What we have in the semantics of σψ⃗ is the special case of this where Γ is the singleton {σ} and ψi is [[ψi]], a proposition which we will already have defined when we come to define [[σψ⃗]].

At this point, it is probably good to go back to our earlier discussions in Section 2.1 of epistemic propositions and updates. What we have done overall is to give a fully syntactic presentation of languages of these epistemic propositions and updates. The constructions of the language correspond to the closure properties noted in Section 2.1. (To be sure, we have restricted to the finite case at several points because we are interested in a syntax, and at the same time we have re-introduced some infinitary aspects via the Kleene star.)

4.3. Epistemic Program Logics L(S)

We now generalize our signature logics L(Σ) to families S of signatures, in order to define a general notion of epistemic program logics.

Logics Generated by Families of Signatures. Given a family S of signatures, we would like to combine all the logics {L(Σ)}Σ∈S into a single logic. Let us assume the signatures Σ ∈ S are mutually disjoint (otherwise, just choose mutually disjoint copies of these signatures). We define the logic L(S) generated by the family S in the following way: the syntax is defined by taking the same definition we had in Figure 7 for the syntax of L(Σ), but in which on the side of the programs we instead take as basic actions all expressions of the form

σψ1, . . . , ψn

where σ ∈ Σ for some arbitrary signature Σ ∈ S, and n is the length of the listing of non-trivial action types of Σ. The semantics is again given by the same definition as in Figure 7, but in which the clause about σψ1, . . . , ψn refers to the appropriate signature: for every Σ ∈ S and every σ ∈ Σ, if n is the length of the listing of Σ, then

[[σψ1, . . . , ψn]] = (Σ, σ, [[ψ1]], . . . , [[ψn]]).

EXAMPLE 4.2. This example constructs the logic of all epistemic programs. Take the family

S = {Σ : Σ is a finite signature}

of all finite signatures.⁵ The logic L(S) will be called the logic of all epistemic programs.


Preservation of Bisimulation and Atomic Propositions. We note two basic facts about the interpretations of sentences and programs in the logics L(S).

PROPOSITION 4.3. The interpretations of the sentences and programs of L(S) preserve bisimulation.

Proof. By induction on L(S), using Propositions 2.5 and 3.5. ∎

PROPOSITION 4.4. The interpretations of programs of L(S) preserve atomic sentences, in the sense that if s [[π]]S t, then for all atomic sentences p, s ∈ ‖p‖S iff t ∈ ‖p‖S([[π]]).

Proof. By induction on π. ∎

4.4. Formalization of the Target Logics in Terms of Signatures

We now formalize the target logics of Section 2.4 as epistemic program logics L(S). We use the action signatures of Examples 4.1 and the notation from there.

The Logic of Public Announcements. This is formalized as L(pub). We have sentences [Pub ϕ]ψ, just as we described in Section 2.4. Note that L(pub) allows announcements inside announcements. If ϕ, ψ, and χ are sentences, then so is ⟨Pub [Pub ϕ]ψ⟩χ.

We check that L(pub) is a good formalization of the logic of public announcements. Fix a sentence ϕ of L(pub) and a state model S. We calculate:

S([[Pub ϕ]]) = S(pub, Pub, [[ϕ]])
             = S ⊗ (pub, Pub, [[ϕ]])
             = {(s, Pub) : s ∈ [[ϕ]]S}

The state model has the structure given earlier in terms of the update product operation. The update relation [[Pub ϕ]]S relates s to (s, Pub) whenever the latter belongs to the state model S([[Pub ϕ]]). The model itself is isomorphic to the sub-state-model of S induced by {s ∈ S : s ∈ [[ϕ]]S}. Under this isomorphism, the update relation is then the inverse of inclusion. This is just how the action of public announcement was described when we first encountered it, in Example 2.1 of Section 2.3.

Test-only PDL. This was introduced in Section 2.6. Recall that this is PDL built over the empty set of atomic actions. Although it was not one of the target languages of Section 2.4, it will be instructive to see how it is formalized in our setting. Recall that test-only PDL has actions of the form ?ϕ. We want to use our action signature ?. The action types of it are ? and skip, only the first one being non-trivial: n = 1. So technically we have sentences of the following forms:

[? ϕ]χ and [skip ϕ]χ   (9)

Let us study the semantics of the basic actions ? ϕ and skip ϕ. Fix a state model S. We calculate:

S([[? ϕ]]) = S(?, ?, [[ϕ]])
           = S ⊗ (?, ?, [[ϕ]])
           = {(s, ?) : s ∈ [[ϕ]]S} ∪ {(s, skip) : s ∈ S}

The structure of S([[? ϕ]]) is that for each A, if s →A t in S, then (s, ?) →A (t, skip). Also (s, skip) →A (t, skip) under the same condition, and there are no other arrows. The update relation [[? ϕ]]S relates s to (s, ?) whenever the latter belongs to the updated structure. Overall, this update is isomorphic to what we described in Example 2.2 of Section 2.3.

Turning to the update map of skip ϕ, we again fix S. The model S([[skip ϕ]]) is again literally the same as what we calculated above. However, the update relation [[skip ϕ]]S now relates each s ∈ S to the pair (s, skip). This relation is a bisimulation. We shall formulate a notion of action equivalence later, and it will turn out that the update [[skip ϕ]] is equivalent to 1. For now, we can also consider the semantics of sentences of the form [skip ϕ]ψ. We have

[[[skip ϕ]ψ]]S = {s ∈ S : if s [[skip ϕ]]S t, then t ∈ [[ψ]]S([[skip ϕ]])}
              = {s ∈ S : (s, skip) ∈ [[ψ]]S([[skip ϕ]])}
              = {s ∈ S : s ∈ [[ψ]]S}

This last equality is by Proposition 4.3 on bisimulation preservation. The upshot is that [[[skip ϕ]ψ]]S = [[ψ]]S. So for this reason, we might as well identify the sentences [skip ϕ]ψ and ψ. Or to put things differently, we might as well identify the basic action skip ϕ and (the constant of L(?)) skip. Since we are already modifying the syntax, we might also abbreviate [skip ϕ]ψ to ψ. Doing all this leads to a language which is exactly test-only PDL as we had it in Section 2.6, and our semantics there agrees with what we have calculated in the present discussion.

In conclusion, test-only PDL is equivalent to the logic L0(?); i.e., the □*-free fragment of L(?).

The Logic of Totally Private Announcements. Let Pri be the family

Pri = {PriB : ∅ ≠ B ⊆ A}

of all signatures of totally private announcements to non-empty groups of agents (as introduced in Examples 4.1). Then L(Pri) formalizes one of our target logics: the logic of totally private announcements. For example, in the case of A = {A, B}, L(Pri) will have basic actions of the forms: PriA ϕ, PriB ϕ, PriA,B ϕ, skipA ϕ, skipB ϕ, skipA,B ϕ. As before, we may as well identify all the programs skipX ϕ with skip, and abbreviate [skipX ϕ]ψ to ψ.

The Logic of Common Knowledge of Alternatives. This can be formalized as L(Cka), where

Cka = {CkaBk : ∅ ≠ B ⊆ A, 1 ≤ k}.

4.5. Other Logics

Logics Based on Frame Conditions. Many other types of logics are easily representable in our framework. For example, consider the logic of all S4 programs. To formalize this, we need only consider the disjoint union of all finite action signatures whose underlying accessibility relations are preorders. For the logic of S5 programs, we change preorders to equivalence relations. Another important class is given by the K45 axioms of modal logic. These systems are particularly important because the K45 and S5 conditions are so prominent in epistemic logic.

Announcements by Particular Agents. Our modeling of the notion of the public announcement that ϕ is impersonal in the sense that the announcement does not come from anywhere in particular. It might be best understood as coming externally, as if someone shouted ϕ into the room where the agents were standing.

We also want a notion of the public announcement by A of ϕ. We shall write this as PubA ϕ. For this, we identify PubA ϕ with the (externally-made) public announcement that ϕ holds and A knows this. This identification does not represent the fact that A intends to inform the others of ϕ. But as we know, intentions are not modeled at all in our system. We claim, however, that on a purely propositional level, the identification works. And using it, we can represent announcements by A. One way to do this is via abbreviation: we take ⟨PubA ϕ⟩ψ to be an abbreviation for ⟨Pub ϕ ∧ □Aϕ⟩ψ. (A different way to formalize PubA ϕ would be to use a special signature constructed just for that purpose. But the discussion here shows that there is no need to do this. One can use Pub.)


Lying. We can also represent misleading epistemic actions, such as lying. Again, we want to fix an agent A and then represent the action of A (successfully) lying that ϕ to the other agents. To all those others, this action should be the same thing as an announcement of ϕ by A. But to say that A lies about ϕ, we want to assume that ϕ is actually false. Further, we want to assume that A moreover knows that ϕ is false. (For if ϕ just happened to be false when A said ϕ, we would not really want to call that "lying.")

The technical details on the representation go as follows. We take a signature LieA given as follows:

LieA = {SecretA, PubA}.

We take (PubA, SecretA) as our non-repetitive list of types. The structure is given by taking SecretA →A SecretA; for B ≠ A, SecretA →B PubA; finally, for all B, PubA →B PubA.

L(LieA) contains sentences like [SecretA ϕ, ψ]χ. The extra argument ψ is a kind of secret condition. And we can use [LieA ϕ]χ as an abbreviation of

[SecretA ϕ ∧ □Aϕ, ¬ϕ ∧ □A¬ϕ]χ.

That is, for A to lie about ϕ there is a condition that ¬ϕ ∧ □A¬ϕ. But the other agents neither need to know this ahead of time nor do they in any sense "learn" this from the announcement. Indeed, for the other agents, LieA ϕ is just like a truthful public announcement by A.

As with private announcements, we take the family of signatures {LieA : A ∈ A}. This family then generates a program logic. In this logic we have actions which represent lying on the part of any agent, not just one fixed agent.

Other Effects: Wiretapping, Paranoia, etc. It is possible to model scenarios where one player believes a communication to be private while in reality a second player intercepts the communication. We can also represent gratuitous suspicion ("paranoia"): maybe no "real" action has taken place, except that some people start suspecting some action (e.g., some private communication) has taken place. With these and other effects, the problem is not so much deciding how to model them. Once one has clear intuitions about a social scenario, it is not hard to do the modeling. The real issue in their application seems to be that in complex social situations, our intuitions are not always clear. There is no getting around this, and technical solutions are of limited value for conceptual problems.


Endnote. This section is the centerpiece of this paper, and all of the work in it is new. We make a few points about the history of this approach. Our earlier paper with S. Solecki (Baltag et al. 1998) worked with a syntax that employed structured objects as actions. In fact, the actions were Kripke models themselves. This type of syntax is also used in other work such as van Ditmarsch et al. (2003). But the use of structured objects in syntax is felt by many readers to be awkward; see, e.g., remarks in chapter 4 of Kooi's thesis, Kooi (2003). We disagree with the assertion that our earlier syntax blurs the distinction between syntax and semantics. But in order to make the subject more attractive, we have worked out other approaches to the syntax. Baltag (2001, 2002, 2003) developed logical systems that streamline the syntax using a repertoire of program operations, such as learning a program and variable-binding operators. This paper is the first one to formulate program logics in terms of action signatures.

5. LOGICAL SYSTEMS

We write |= ϕ to mean that for all state models S and all s ∈ S, s ∈ [[ϕ]]S. In this case, we say that ϕ is valid. In this section, we fix an arbitrary family S of mutually disjoint action signatures, and we consider the generated logics. We present a sound proof system for the validities in L(S), and sound and complete proof systems for two important sublogics: the iteration-free fragment L1(S) and the logic L0(S) obtained by deleting both iteration and common knowledge operators. In particular, our results apply to the logics L(Σ), L1(Σ) and L0(Σ) given by only one signature. However, the soundness/completeness proofs will appear in Baltag (2003). So the point of this section is just to state clearly what the logical system is, and to illustrate its use.

Sublanguages. We are of course interested in the languages L(S), but we also consider the sublanguages L0(S) and L1(S). Recall that L1(S) is the fragment without the action iteration construct π*, and L0(S) is the fragment without π* and □*B. It turns out that L0(Σ) is the easiest to study: it is of the same expressive power as ordinary multi-modal logic. On the other hand, the full logic L(S) is in general undecidable: indeed, even if we take a family consisting of only one signature, that of public announcements pub, the corresponding logic L(pub) is undecidable (see Miller and Moss (2003)). L1(Σ) is decidable, and we have a complete axiom system for it (see Baltag (2003)).

In Figure 8 below we present a logic for L(S). We write ⊢ ϕ if ϕ can be obtained from the axioms of the system using its inference rules. We often omit the turnstile ⊢ when it is clear from the context. Our presentation of the proof system uses the meta-syntactic notation associated with the notion of the canonical action model (to be introduced below in Section 5.1). This could have been avoided only at the cost of complicating the presentation. We chose the simplest version, and so our logical system, as given in Figure 8, can be fully understood only after reading Section 5.1.

Figure 8. The logical system for L(S). For L1(S), we drop the π* axioms and rule; for L0(S), we also drop the □* axioms and rules.

AXIOMS. Most of the system will be quite standard from modal logic. The Action Axioms are new, however. These include the Atomic Permanence axiom; note that in this axiom p is an atomic sentence. The axiom says that announcements do not change the brute fact of whether or not p holds. This axiom reflects the fact that our actions do not change any kind of local state.

The Action-Knowledge Axiom gives a criterion for knowledge after an action. For non-trivial basic actions σiψ⃗ (induced by a non-trivial action type σi ∈ Σ, for some signature Σ ∈ S), this axiom states that

⊢ [σiψ⃗]□Aϕ ↔ (ψi → ⋀{□A[σjψ⃗]ϕ : σi →A σj in Σ})   (9)

In words, two sentences are equivalent: first, an agent A will come to know ϕ after a basic action σiψ⃗ is executed; and second, whenever this action σiψ⃗ is executable (i.e., its precondition ψi holds), A knows (already, before this action) that ϕ will come to be true after every action that A considers as possibly happening (whenever σiψ⃗ is in fact happening).

This axiom should be compared with the Ramsey axiom in conditional logic. One should also study the special case of it for the logic of public announcements in Section 5.3.

The Action Rule then gives a necessary criterion for common knowledge after a simple action. (Simple actions are defined below in Section 5.1. They include the actions of the form σψ⃗.) Since common knowledge is formalized by the □*B construct, this rule is a kind of induction rule. (The sentences χβ play the role of strong induction hypotheses.) (For the induction rule for common knowledge assertions without actions, see Lemma 5.5.)

5.1. The Canonical Action Model

Recall that we defined action models and program models in Sections 3.1 and 3.2, respectively. At this point, we define a new action model, called the canonical action model of a language L(S).

DEFINITION. Let S be a family of mutually disjoint signatures. Recall that a basic action of L(S) is a program of the form σψ1, . . . , ψn, where σ ∈ Σ for some signature Σ ∈ S, and n is the length of Σ's list of non-trivial action types. A simple action of L(S) is a program of L(S) in which neither the choice operation nor the iteration operation π* occurs. We use letters like α and β to denote simple actions only. A simple sentence is a sentence of L1(S) in which neither action sum nor action iteration occurs. So all programs in simple sentences are simple actions.

The Canonical Action Model of L(S). We define a program model in several steps. The simple actions of the canonical model are the simple actions of L(S) as defined just above. For all A, the accessibility relation →A is the smallest relation such that

1. skip →A skip.
2. If σ →A σ′ in some signature Σ ∈ S and ϕ⃗ = ψ⃗, then σϕ⃗ →A σ′ψ⃗.
3. If α →A α′ and β →A β′, then α;β →A α′;β′.

PROPOSITION 5.1. As a frame, the canonical action model is locally finite: for each simple α, there are only finitely many β such that α →A* β.

Proof. By induction on α; we use heavily the fact that the accessibility relations on the canonical model are the smallest family with their defining property. For the simple action expressions σψ⃗, we use the assumption that all our signatures Σ ∈ S are finite and mutually disjoint. ∎

Next, we define a map PRE from simple actions to sentences of L(S) by recursion, so that

PRE(skip) = true
PRE(crash) = false
PRE(σiψ⃗) = ψi for σi in the given listing of Σ
PRE(σψ⃗) = true for all trivial types σ
PRE(α;β) = ⟨α⟩PRE(β)

REMARK. This function PRE should not be confused with the function pre which is part of the structure of a program model. PRE(σ) is a sentence in our language L(S), while pre was a purely semantic notion, associating propositions to simple actions in a program model. However, there is a connection: we are in the midst of defining the canonical program model, and its pre function is defined in terms of PRE. This is also perhaps a good place to remind the reader that neither PRE nor pre is a first-class symbol in L(S); they are defined symbols.

Completing the Definition. We set

pre(σ) = [[PRE(σ)]].

This action model is the canonical (epistemic) action model; it plays a somewhat similar role in our work to the canonical model in modal logic.

PROPOSITION 5.2 (see Baltag et al. (1998) and Miller and Moss (2003)). For every family S of action signatures, the logical systems for L0(S) and L1(S) presented in Figure 8 are sound, complete, and decidable. However, for every signature Σ which contains a (copy of the) "public announcement" action type Pub, the full logic L(Σ) (including iteration π*) is undecidable, and the proof system for L(Σ) presented in Figure 8 is sound but incomplete. Indeed, validity in L(Σ) is Π¹₁-complete, so there are no recursive axiomatizations which are complete. (The same obviously applies to L(S) for any family of signatures S such that Σ ∈ S.)

5.2. Some Derivable Principles

LEMMA 5.3. The Action-Knowledge Axiom is provable for all simple actions α:

⊢ [α]□Aϕ ↔ (PRE(α) → ⋀{□A[β]ϕ : α →A β in the canonical action model})   (10)

The proof is by induction on α and may be found in Baltag et al. (2003).

LEMMA 5.4. For all A ∈ C, all simple α, and all β such that α →A β:

1. ⊢ [α]□*Cψ → [α]ψ.
2. ⊢ [α]□*Cψ ∧ PRE(α) → □A[β]□*Cψ.

Proof. Part (1) follows easily from the Epistemic Mix Axiom and modal reasoning. For part (2), we start with a consequence of the Epistemic Mix Axiom: ⊢ □*Cψ → □A□*Cψ. Then by modal reasoning, ⊢ [α]□*Cψ → [α]□A□*Cψ. By the Action-Knowledge Axiom generalized as we have it in Lemma 5.3, we have ⊢ [α]□*Cψ ∧ PRE(α) → □A[β]□*Cψ. ∎

LEMMA 5.5 (The Common Knowledge Induction Rule). From ⊢ χ → ψ ∧ □Aχ for all A, infer ⊢ χ → □*Aψ.

Proof. We apply the Action Rule to the simple action skip, recalling that PRE(skip) = true, and skip →A skip for all A. ∎

5.3. Logical Systems for the Target Logics

We presented a number of target logics in Section 2.4, and these were then formalized in Section 4.1. In particular, we have logics L1(Σ) for a number of interesting action signatures Σ. What we want to do here is to spell out what the axioms of L1(S) come to when we specialize the general logic to the logics of special interest. In doing this, we find it convenient to adopt simpler notations tailored for the fragments.

The logic of public announcements is shown in Figure 9. We only included the axioms and rules of inference that specifically use the structure of the signature pub. So we did not include the sentential validities, the normality axiom for □A, the composition axiom, modus ponens, etc. Also, we renamed the main axiom and rule to emphasize the "announcement" aspect of the system.


Figure 9. The main points of the logic of public announcements.

Our next logic is the logic of completely private announcements to groups. We discussed the formalization of this in Section 4.4. We have actions Pri_B ϕ and (of course) skip. The axioms and rules are just as in the logic of public announcements, with a few changes. (We must of course consider the relativized operators [Pri_B ϕ] instead of their simpler counterparts [Pub ϕ].)

The actions skip all have true as their precondition, and since (true → ψ) is logically equivalent to ψ, we certainly may omit these actions from the notation in the axioms and rules. The most substantive change which we need to make in Figure 9 concerns the Action-Knowledge Axiom. It splits into two axioms, noted below:

[Pri_B ϕ]□_A ψ ↔ (ϕ → □_A[Pri_B ϕ]ψ)   for A ∈ B
[Pri_B ϕ]□_A ψ ↔ (ϕ → □_A ψ)           for A ∉ B

The last equivalence says: assuming that ϕ is true, then after a private announcement of ϕ to the members of B, an outsider knows ψ just in case she knew ψ before the announcement.
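To see the two axioms reflected in the semantics, here is a small executable sketch (ours; the dictionary encoding of models and all names are illustrative assumptions) of the update product for a completely private announcement of "heads" to B, on the two-state coin model:

from itertools import product

agents = ["A", "B"]                # B hears the announcement; A is an outsider
insiders = {"B"}

# State model: the coin lies heads or tails; nobody has looked yet.
states = ["h", "t"]
val = {"h": {"H"}, "t": {"T"}}
arrows = {ag: {(s, t) for s in states for t in states} for ag in agents}

phi = lambda s: "H" in val[s]      # the announced sentence

# Program model for Pri_B phi: pre(Pri) = phi, pre(skip) = true;
# insiders see Pri as Pri, outsiders mistake Pri for skip.
actions = ["Pri", "skip"]
pre = {"Pri": phi, "skip": lambda s: True}
a_arrows = {ag: ({("Pri", "Pri")} if ag in insiders else {("Pri", "skip")})
                | {("skip", "skip")}
            for ag in agents}

# Update product: pairs (s, a) with s satisfying pre(a); arrows componentwise.
new_states = [(s, a) for s, a in product(states, actions) if pre[a](s)]
new_arrows = {ag: {((s, a), (t, b))
                   for (s, a) in new_states for (t, b) in new_states
                   if (s, t) in arrows[ag] and (a, b) in a_arrows[ag]}
              for ag in agents}

print(new_states)             # [('h', 'Pri'), ('h', 'skip'), ('t', 'skip')]
print(sorted(new_arrows["A"]))

From the state ('h', 'Pri') the outsider A reaches only skip-states, which copy the old model; so A's knowledge is exactly as it was before the announcement, matching the second axiom, while B's arrows stay among the Pri-states, matching the first.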

Finally, we study the logic of common knowledge of alternatives. This, too, was introduced in Section 2.4 and formalized in Section 4.1. The Action-Knowledge Axiom now becomes

[Cka_B ϕ⃗]□_A ψ ↔ (ϕ_1 → □_A[Cka_B ϕ⃗]ψ)              for A ∈ B
[Cka_B ϕ⃗]□_A ψ ↔ (ϕ_1 → ⋀_{1≤i≤k} □_A[Cka_B ϕ⃗_i]ψ)   for A ∉ B

where in the last clause, ϕ⃗_i = (ϕ_1, . . . , ϕ_k)_i is the sequence ϕ_i, ϕ_1, . . . , ϕ_{i−1}, ϕ_{i+1}, . . . , ϕ_k. (That is, we bring ϕ_i to the front of the sequence.)
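For instance (our instantiation of the scheme), with k = 2 and A ∉ B:

[Cka_B(ϕ_1, ϕ_2)]□_A ψ ↔ (ϕ_1 → □_A[Cka_B(ϕ_1, ϕ_2)]ψ ∧ □_A[Cka_B(ϕ_2, ϕ_1)]ψ).

The outsider A only learns that one of the alternatives was announced to B, so she must consider both orderings.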


5.4. Examples in the Target Logics

This section studies some examples of the logic at work. We begin with an application of the Announcement Rule in the logic of public announcements. We again work with the atomic sentences H and T for heads and tails, and with the set {A, B} of agents. We show

⊢ □*_{A,B}(H ↔ ¬T) → [Pub H]□*_{A,B}¬T.

That is, if it is common knowledge that heads and tails are mutually exclusive, then after a public announcement of heads it is common knowledge that the state is not tails.

We give this application in detail. Recall that pub has one simple action, which we call Pub. We take χ_Pub to be □*_{A,B}(H ↔ ¬T). In addition, Pub →_A Pub for all A, and there are no other arrows in pub. We take α to be Pub H; note that this is the only action accessible from itself in the canonical action model. To use the Announcement Rule, we must show that

1. ⊢ □*_{A,B}(H ↔ ¬T) → [Pub H]¬T.
2. ⊢ (□*_{A,B}(H ↔ ¬T) ∧ H) → □_A□*_{A,B}(H ↔ ¬T), and the same with B replacing A.

From these assumptions, we may infer ⊢ □*_{A,B}(H ↔ ¬T) → [Pub H]□*_{A,B}¬T.

For the first statement,

(a) T ↔ [Pub H]T                        Atomic Permanence
(b) (H ↔ ¬T) → (H → ¬[Pub H]T)          (a), propositional reasoning
(c) [Pub H]¬T ↔ (H → ¬[Pub H]T)         Partial Functionality
(d) □*_{A,B}(H ↔ ¬T) → (H ↔ ¬T)         Epistemic Mix
(e) □*_{A,B}(H ↔ ¬T) → [Pub H]¬T        (d), (b), (c), propositional reasoning

And the second statement is an easy consequence of the Epistemic Mix Axiom.
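A quick semantic counterpart of this derivation (our sketch, in the same style as the private-announcement example in Section 5.3): since Pub H has precondition H and Pub is the only action of pub, the update product simply restricts the model to the H-states, where ¬T holds everywhere and hence is common knowledge.

states = ["h", "t"]
val = {"h": {"H"}, "t": {"T"}}
agents = ("A", "B")
arrows = {ag: {(s, t) for s in states for t in states} for ag in agents}

# Update by Pub H: keep the states satisfying H; arrows restrict.
new_states = [s for s in states if "H" in val[s]]
new_arrows = {ag: {(s, t) for (s, t) in arrows[ag]
                   if s in new_states and t in new_states} for ag in agents}

assert all("T" not in val[s] for s in new_states)   # not-T at every state
print(new_states, new_arrows["A"])                  # ['h'] {('h', 'h')}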

What Happens when a Publicly Known Fact is Announced? One intuition about public announcements and common knowledge is that if ϕ is common knowledge, then announcing ϕ publicly does not change anything. Formally, we express this by a scheme rather than a single equation:

□*ϕ → ([Pub ϕ]ψ ↔ ψ)   (11)

(In this line and in the rest of this section, we are omitting the subscripts on the □* operator. More formally, the subscript should be the set of all agents, since we are dealing with knowledge which is common to all agents.) What we would like to say is □*ϕ → ⋀_ψ ([Pub ϕ]ψ ↔ ψ), but of course this cannot be expressed in our language. So we consider only the sentences of the form (11), and we show that all of these are provable. We argue by induction on ϕ.

For an atomic sentence p, (11) follows from the Epistemic Mix and Atomic Permanence Axioms. The induction steps for ∧ and ¬ are easy. Next, assume (11) for ψ. By necessitation and Epistemic Mix, we have

□*ϕ → (□_A[Pub ϕ]ψ ↔ □_A ψ).

Note also that by the Announcement-Knowledge Axiom,

□*ϕ → ([Pub ϕ]□_A ψ ↔ □_A[Pub ϕ]ψ).

These two imply (11) for □_A ψ. Finally, we assume (11) for ψ and prove it for □*_B ψ. We show first that □*ϕ ∧ □*ψ → [Pub ϕ]□*ψ. For this we use the Action Rule. We must show that

(a) □*ϕ ∧ □*ψ → [Pub ϕ]ψ.
(b) □*ϕ ∧ □*ψ ∧ ϕ → □_A(□*ϕ ∧ □*ψ).

(Actually, since our common knowledge operators □* here are really subscripted by sets of agents, we need (b) for all agents A.) Point (a) is easy from our induction hypothesis, and (b) is an easy consequence of Epistemic Mix.

To conclude, we show □*ϕ ∧ [Pub ϕ]□*ψ → □*ψ. For this, we use the Common Knowledge Induction Rule of Lemma 5.5; that is, we show

(c) □*ϕ ∧ [Pub ϕ]□*ψ → ψ.
(d) □*ϕ ∧ [Pub ϕ]□*ψ → □_A(□*ϕ ∧ [Pub ϕ]□*ψ) for all A.

For (c), we use Lemma 5.4, part (1) to see that [Pub ϕ]□*ψ → [Pub ϕ]ψ; and now (c) follows from our induction hypothesis. For (d), it will be sufficient to show that

ϕ ∧ [Pub ϕ]□*ψ → □_A[Pub ϕ]□*ψ.

This follows from Lemma 5.4, part (2).


A Commutativity Principle for Private Announcements. Suppose that B and C are disjoint sets of agents. Let ϕ_1, ϕ_2, and ψ be sentences. Then we claim that

⊢ [Pri_B ϕ_1][Pri_C ϕ_2]ψ ↔ [Pri_C ϕ_2][Pri_B ϕ_1]ψ.

That is, order does not matter with private announcements to disjoint groups.

Actions Do Not Change Common Knowledge of Non-epistemic Sentences. For yet another application, let ψ be any boolean combination of atomic sentences. Then for all actions α of any of our logics, ⊢ ψ ↔ [α]ψ. The proof is an easy induction on ψ. Even more, we have ⊢ □*_C ψ ↔ [α]□*_C ψ. In one direction, we use the Action Rule, and in the other, the Common Knowledge Induction Rule (Lemma 5.5).

Endnotes. Although the general logical systems in this paper are new, there are important precursors for the target logics. Plaza (1989) constructs what we would call L0(pub), that is, the logic of public announcements without common knowledge operators or program iteration. (He worked only on models where each accessibility relation is an equivalence relation, so his system includes the S5 axioms.) Gerbrandy (1999a, b), and also Gerbrandy and Groeneveld (1997), went a bit further. They studied the logic of completely private announcements (generalizing public announcements) and presented a logical system which included the common knowledge operators. That is, their system included the Epistemic Mix Axiom. They argued that all of the reasoning in the original Muddy Children scenario can be carried out in their system. This is important because it shows that in order to get a formal treatment of that problem and related ones, one need not posit models which maintain histories. Their system was not complete since it did not have anything like the Action Rule; this first appears in a slightly different form in Baltag (2003).

5.5. Conclusion

We have been concerned with actions in the social world that affect the intuitive concepts of knowledge, (justifiable) beliefs, and common knowledge. This paper has shown how to define and study logical languages that contain constructs corresponding to such actions. The many examples in this paper show that the logics "work". Much more can be said about specific tricky examples, but we hope that the examples connected to our scenarios make the point that we are developing valuable tools.


The key step in the development is the recognition that we can associate to a social action α a mathematical model Σ; Σ is a program model. In particular, it is a multi-agent Kripke model, so it has features in common with the state models that underlie formal work in the entire area. There is a natural operation of update product at the heart of our work. This operation is surely of independent interest because it enables one to build complex and interesting state models. The logical languages that we introduce use the update product in their semantics, but the syntax is a small variation on propositional dynamic logic. The formalization of the target languages involved the signature-based languages L(Σ) and also their generalizations L(S). These latter languages are needed to formulate the logic of private announcements, for example. We feel that presenting the update product first (before the languages) will make this paper easier to read, and having a relatively standard syntax should also help.

Once we have our languages, the next natural step is to study them. This paper presented logical systems for validities, omitting many proofs due to the lack of space.

NOTES

1. It is important for us that the sentence p be a syntactic object, while the proposition p be a semantic object. See Section 2.5 for further discussion.
2. The subscript 3 comes from the number of the scenario; we shall speak of corresponding models S1, S2, etc., and each time the models will be the ones pictured in Section 1.1.
3. We are writing relational composition in left-to-right order in this paper.
4. In Section 4.5, we shall consider a slightly simpler model of lying.
5. To make this into a set, instead of a proper class, we really mean to take all finite signatures whose action types are natural numbers, and then take the disjoint union of this countable set of finite signatures.

REFERENCES

Baltag, Alexandru: 1999, 'A Logic of Epistemic Actions', (Electronic) Proceedings of the FACAS Workshop, held at ESSLLI'99, Utrecht University, Utrecht.

Baltag, Alexandru: 2001, 'Logics for Insecure Communication', in J. van Benthem (ed.), Proceedings of the Eighth Conference on Theoretical Aspects of Rationality and Knowledge (TARK'01), Morgan Kaufmann, Los Altos, pp. 111–122.

Baltag, Alexandru: 2002, 'A Logic for Suspicious Players: Epistemic Actions and Belief Updates in Games', Bulletin of Economic Research 54(1), 1–46.

Baltag, Alexandru: 2003, 'A Coalgebraic Semantics for Epistemic Programs', in Proceedings of CMCS'03, Electronic Notes in Theoretical Computer Science 82(1), 315–335.


Baltag, Alexandru: 2003, Logics for Communication: Reasoning about Information Flow in Dialogue Games, course presented at NASSLLI'03. Available at http://www.indiana.edu/∼nasslli.

Baltag, Alexandru, Lawrence S. Moss, and Sławomir Solecki: 1998, 'The Logic of Common Knowledge, Public Announcements, and Private Suspicions', in I. Gilboa (ed.), Proceedings of the 7th Conference on Theoretical Aspects of Rationality and Knowledge (TARK'98), pp. 43–56.

Baltag, Alexandru, Lawrence S. Moss, and Sławomir Solecki: 2003, 'The Logic of Epistemic Actions: Completeness, Decidability, Expressivity', manuscript.

Fagin, Ronald, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi: 1996, Reasoning About Knowledge, MIT Press.

Fischer, Michael J. and Richard E. Ladner: 1979, 'Propositional Modal Logic of Programs', J. Comput. System Sci. 18(2), 194–211.

Gerbrandy, Jelle: 1999a, 'Dynamic Epistemic Logic', in Lawrence S. Moss et al. (eds), Logic, Language, and Information, Vol. 2, CSLI Publications, Stanford University.

Gerbrandy, Jelle: 1999b, Bisimulations on Planet Kripke, Ph.D. dissertation, University of Amsterdam.

Gerbrandy, Jelle and Willem Groeneveld: 1997, 'Reasoning about Information Change', J. Logic, Language, and Information 6, 147–169.

Gochet, P. and P. Gribomont: 2003, 'Epistemic Logic', manuscript.

Kooi, Barteld P.: 2003, Knowledge, Chance, and Change, Ph.D. dissertation, University of Groningen.

Meyer, J.-J. and W. van der Hoek: 1995, Epistemic Logic for AI and Computer Science, Cambridge University Press, Cambridge.

Miller, Joseph S. and Lawrence S. Moss: 2003, 'The Undecidability of Iterated Modal Relativization', Indiana University Computer Science Department Technical Report 586.

Moss, Lawrence S.: 1999, 'From Hypersets to Kripke Models in Logics of Announcements', in J. Gerbrandy et al. (eds), JFAK. Essays Dedicated to Johan van Benthem on the Occasion of his 50th Birthday, Vossiuspers, Amsterdam University Press.

Plaza, Jan: 1989, 'Logics of Public Communications', in Proceedings, 4th International Symposium on Methodologies for Intelligent Systems.

Pratt, Vaughn R.: 1976, 'Semantical Considerations on Floyd–Hoare Logic', in 17th Annual Symposium on Foundations of Computer Science, IEEE Comput. Soc., Long Beach, CA, pp. 109–121.

van Benthem, Johan: 2000, 'Update Delights', manuscript.

van Benthem, Johan: 2002, 'Games in Dynamic Epistemic Logic', Bulletin of Economic Research 53(4), 219–248.

van Benthem, Johan: 2003, 'Logic for Information Update', in J. van Benthem (ed.), Proceedings of the Eighth Conference on Theoretical Aspects of Rationality and Knowledge (TARK'01), Morgan Kaufmann, Los Altos, pp. 51–68.

van Ditmarsch, Hans P.: 2000, Knowledge Games, Ph.D. dissertation, University of Groningen.

van Ditmarsch, Hans P.: 2001, 'Knowledge Games', Bulletin of Economic Research 53(4), 249–273.

van Ditmarsch, Hans P., W. van der Hoek, and B. P. Kooi: 2003, 'Concurrent Dynamic Epistemic Logic', in V. F. Hendricks et al. (eds), Knowledge Contributors, Synthese Library Vol. 322, Kluwer Academic Publishers.


Alexandru Baltag
Oxford University Computing Laboratory
Oxford, OX1 3QD, U.K.
E-mail: [email protected]

Lawrence S. Moss
Mathematics Department
Indiana University
Bloomington, IN 47405, U.S.A.
E-mail: [email protected]
