Default and Cooperative Reasoning in Multi-Agent Systems
Chiaki Sakama
Wakayama University, Japan
Programming Multi-Agent Systems based on Logic Dagstuhl Seminar, November 2002
Incomplete Knowledge in Multi-Agent System (MAS)
• An individual agent has incomplete knowledge in an MAS.
• In AI a single agent performs default reasoning when its knowledge is incomplete.
• In a multi-agent environment, caution is required when performing default reasoning based on an agent’s incomplete knowledge.
Default Reasoning by a Single Agent
Let A be an agent and F a propositional sentence. When
A ⊭ F
F is not proved by A, and ~F is assumed by default (negation as failure).
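As a toy illustration (the facts and helper names here are invented, not part of the slides), negation as failure amounts to a membership test on what the agent can prove:

```python
# Sketch of negation as failure for a single agent (hypothetical facts).
proved = {"date(d1)", "flight(d1,f123)"}   # everything agent A can prove

def naf(agent_proves, f):
    """Assume ~F by default exactly when F is not proved by the agent."""
    return f not in agent_proves

print(naf(proved, "scheduled(d1)"))  # True: unproved, so assumed false
print(naf(proved, "date(d1)"))       # False: date(d1) is proved
```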
Default Reasoning in Multi-Agent Environment
Let A1,…,An be agents and F a propositional sentence. When
A1 ⊭ F (†)
F is not proved by A1, but F may be proved by other agents A2,…,An. ⇒ It is unsafe to conclude ~F by default on the evidence of (†) alone.
Default Reasoning vs. Cooperative Reasoning in MAS
• An agent can perform default reasoning if the reasoning is based on incomplete belief with respect to the agent’s internal world.
• If instead an agent has incomplete knowledge about its external world, it is more appropriate to perform cooperative reasoning.
Purpose of this Research
• It is necessary to distinguish different types of incomplete knowledge in an agent.
• We consider a multi-agent system based on logic and provide a framework of default/cooperative reasoning in an MAS.
Problem Setting
• An MAS consists of a finite number of agents.
• Every agent has the same underlying language and shared ontology.
• Each agent has a knowledge base written as a logic program.
Multi-Agent Logic Program (MLP)
• Given an MAS { A1 ,…, An } with agents Ai (1 ≤ i ≤ n), a multi-agent logic program (MLP) is defined as a set { P1 ,…, Pn } where Pi is the program of Ai .
• Pi is an extended logic program which consists of rules of the form:
L0 ← L1 ,…, Lm , not Lm+1 ,…, not Ln
where Li is a literal and not represents negation as failure.
Terms / Notations
• A predicate appearing in the head of no rule in a program is called external; otherwise it is called internal. A literal with an external/internal predicate is called an external/internal literal.
• ground(P): the ground instantiation of a program P.
• Lit(P): the set of all ground literals appearing in ground(P).
• Cn(P) = { L | L is a ground literal s.t. P |= L }.
Answer Set Semantics
Let P be a program and S a set of ground literals satisfying the conditions:
1. P^S is the set of ground rules s.t.
L0 ← L1 ,…, Lm is in P^S iff
L0 ← L1 ,…, Lm , not Lm+1 ,…, not Ln is in ground(P) and { Lm+1 ,…, Ln } ∩ S = ∅.
2. S = Cn( P^S ).
Then, S is called an answer set of P.
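The two conditions can be checked mechanically for a tiny propositional program. The sketch below uses my own encoding, a rule as (head, positive body, NAF body); none of the names come from the slides:

```python
# A direct reading of the two conditions above for a tiny normal program
# (encoding is mine: rule = (head, positive body, NAF body)).
rules = [
    ("p", ["q"], ["r"]),  # p <- q, not r
    ("q", [], []),        # q <-
]

def reduct(rules, S):
    """Condition 1: the reduct P^S keeps a rule iff its NAF body is
    disjoint from S, then drops the NAF part."""
    return [(h, pos) for (h, pos, neg) in rules if not set(neg) & S]

def cn(pos_rules):
    """Cn of a positive program: least fixpoint of immediate consequences."""
    S, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_rules:
            if set(pos) <= S and h not in S:
                S.add(h)
                changed = True
    return S

def is_answer_set(rules, S):
    """Condition 2: S is an answer set iff S = Cn(P^S)."""
    return cn(reduct(rules, S)) == S

print(is_answer_set(rules, {"p", "q"}))  # True
print(is_answer_set(rules, {"q"}))       # False: p is a consequence of the reduct
```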
Rational Agents
• A program P is consistent if P has a consistent answer set.
• An agent Ai is called rational if it has a consistent program Pi.
• We assume an MAS {A1 ,…, An } where each agent Ai is rational.
Semantics of MLP
Given an MLP {P1 ,…, Pn}, the program Πi is defined as
(i) Pi ⊆ Πi
(ii) Πi is a maximal consistent subset of P1 ∪ … ∪ Pn
A set S of ground literals is called a belief set of an agent Ai if S = T ∩ Lit(Pi) where T is an answer set of Πi .
Belief Sets
• An agent has multiple belief sets in general.
• Belief sets are consistent and minimal under set inclusion.
• Given an MAS { A1 ,…, An }, an agent Ai (1 ≤ i ≤ n) concludes a propositional sentence F (written Ai |= F) if F is true in every belief set of Ai .
Example
Suppose an MLP { P1, P2 } such that
P1: travel( Date, Flight# ) ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
    reserve( Date, Flight# ) ← flight( Date, Flight# ), not state( Flight#, full ).
    date( d1 )←.  date( d2 )←.  scheduled( d1 )←.
    flight( d1, f123 )←.  flight( d2, f456 )←.  flight( d2, f789 )←.
P2: state( f456, full ) ←.
Example (cont.)
The agent A1 has the single belief set
{ travel( d2, f789 ), reserve( d1, f123 ), reserve( d2, f789 ),
date( d1 ), date( d2 ), scheduled( d1 ),
flight( d1, f123 ), flight( d2, f456 ),
flight( d2, f789 ), state( f456, full ) }
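The derivation can be replayed by stratified bottom-up evaluation of the ground program (a sketch; the grounding is restricted to the flight pairs that actually occur, and the string encoding is mine):

```python
# Stratified evaluation of the ground travel program (my encoding).
facts = {"date(d1)", "date(d2)", "scheduled(d1)",
         "flight(d1,f123)", "flight(d2,f456)", "flight(d2,f789)",
         "state(f456,full)"}  # state(f456,full) is contributed by P2

pairs = [("d1", "f123"), ("d2", "f456"), ("d2", "f789")]

S = set(facts)
# Stratum 1: reserve(D,F) <- flight(D,F), not state(F,full)
for d, f in pairs:
    if f"flight({d},{f})" in S and f"state({f},full)" not in S:
        S.add(f"reserve({d},{f})")
# Stratum 2: travel(D,F) <- date(D), not scheduled(D), reserve(D,F)
for d, f in pairs:
    if (f"date({d})" in S and f"scheduled({d})" not in S
            and f"reserve({d},{f})" in S):
        S.add(f"travel({d},{f})")

print(sorted(S - facts))
# ['reserve(d1,f123)', 'reserve(d2,f789)', 'travel(d2,f789)']
```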
Example
Suppose an MLP { P1, P2 , P3 } such that
P1 : go_cinema ← interesting, not crowded
¬ go_cinema ← ¬ interesting
P2: interesting ←
P3: ¬ interesting ←
The agent A1 has two belief sets:
{ go_cinema, interesting }
{ ¬ go_cinema, ¬ interesting }
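These two belief sets can be verified by brute-force answer-set enumeration. The sketch below uses my own encoding, with classical negation marked by a "-" prefix:

```python
from itertools import chain, combinations

def cn(pos_rules):
    """Least model of a positive ground program by fixpoint iteration."""
    S, changed = set(), True
    while changed:
        changed = False
        for head, pos in pos_rules:
            if set(pos) <= S and head not in S:
                S.add(head)
                changed = True
    return S

def answer_sets(rules):
    """All consistent answer sets; rule = (head, positive body, NAF body)."""
    lits = {h for h, _, _ in rules} | {l for _, p, n in rules for l in p + n}
    found = []
    for cand in chain.from_iterable(combinations(sorted(lits), r)
                                    for r in range(len(lits) + 1)):
        S = set(cand)
        reduct = [(h, p) for (h, p, n) in rules if not set(n) & S]
        if cn(reduct) == S and not any("-" + l in S for l in S):
            found.append(S)
    return found

P1 = [("go_cinema", ["interesting"], ["crowded"]),  # go_cinema <- interesting, not crowded
      ("-go_cinema", ["-interesting"], [])]         # -go_cinema <- -interesting
P2 = [("interesting", [], [])]
P3 = [("-interesting", [], [])]

# P1 u P2 u P3 is inconsistent, so the maximal consistent Pi_1 are
# P1 u P2 and P1 u P3, yielding A1's two belief sets:
print([sorted(s) for s in answer_sets(P1 + P2)])  # [['go_cinema', 'interesting']]
print([sorted(s) for s in answer_sets(P1 + P3)])  # [['-go_cinema', '-interesting']]
```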
Abductive Logic Programs
・ An abductive logic program is a pair 〈 P, A 〉 where P is a program and A is a set of hypotheses (possibly containing rules).
・ 〈 P, A 〉 has a belief set SH if SH is a consistent answer set of P ∪ H where H ⊆ A.
・ A belief set SH is maximal (with respect to A) if there is no belief set TK such that H ⊂ K.
Abductive Characterization of MLP
Given an MLP {P1 ,…, Pn}, an agent Ai has a belief set S iff S = TH ∩ Lit(Pi) where TH is a maximal belief set of the abductive logic program 〈 Pi ; P1 ∪ … ∪ Pi-1 ∪ Pi+1 ∪ … ∪ Pn 〉.
Problem Solving in MLP
• We consider an MLP {P1 ,…, Pn} where each Pi is a stratified normal logic program.
• Given a query ← G, an agent solves the goal in a top-down manner.
• Any internal literal in a subgoal is evaluated within the agent.
• Any external literal in a subgoal is suspended and the agent asks other agents whether it is proved or not.
Simple Meta-Interpreter
solve(Agent, (Goal1,Goal2)) :-     % conjunction: solve both goals in turn
    solve(Agent, Goal1), solve(Agent, Goal2).
solve(Agent, not(Goal)) :-         % negation as failure
    \+ solve(Agent, Goal).
solve(Agent, int(Fact)) :-         % internal literal: the agent's own KB
    kb(Agent, Fact).
solve(Agent, int(Goal)) :-
    kb(Agent, (Goal :- Subgoal)), solve(Agent, Subgoal).
solve(_Agent, ext(Fact)) :-        % external literal: any agent may prove it
    kb(AnyAgent, Fact).
solve(_Agent, ext(Goal)) :-
    kb(AnyAgent, (Goal :- Subgoal)), solve(AnyAgent, Subgoal).
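For readers without a Prolog system at hand, the same int/ext dispatch can be mimicked in a propositional Python sketch (the knowledge base, goal encoding, and all names below are illustrative, not from the slides):

```python
# Propositional analog of the meta-interpreter (illustrative encoding):
# a goal is ("int", atom), ("ext", atom), or ("not", goal);
# kb[agent][head] is a list of rule bodies, each body a list of goals.
kb = {
    "a1": {"go": [[("int", "ok"), ("not", ("ext", "blocked"))]],
           "ok": [[]]},              # ok <- (a fact: one empty body)
    "a2": {},                        # no agent can prove "blocked"
}

def solve(agent, goal):
    kind = goal[0]
    if kind == "not":                # negation as failure
        return not solve(agent, goal[1])
    atom = goal[1]
    if kind == "int":                # internal: use the agent's own KB
        bodies = kb[agent].get(atom, [])
        return any(all(solve(agent, g) for g in b) for b in bodies)
    if kind == "ext":                # external: ask every agent
        return any(all(solve(a, g) for g in b)
                   for a in kb for b in kb[a].get(atom, []))
    raise ValueError(kind)

print(solve("a1", ("int", "go")))   # True: no agent proves "blocked"
```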
Example
Recall the MLP { P1, P2 } with
P1: travel( Date, Flight# ) ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
    reserve( Date, Flight# ) ← flight( Date, Flight# ), not state( Flight#, full ).
    date( d1 )←.  date( d2 )←.  scheduled( d1 )←.
    flight( d1, f123 )←.  flight( d2, f456 )←.  flight( d2, f789 )←.
P2: state( f456, full ) ←.
Example (cont.)
Goal: ← travel( Date, Flight# ).
P1: travel( Date, Flight# ) ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
G: ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
Example (cont.)
G: ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
P1: date( d1 )← , date( d2 )←
G: ← not scheduled( d1 ), reserve( d1, Flight# ).
P1: scheduled( d1 )←  ⇒ fail
Example (cont.)
! Backtrack
G: ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
P1 : date(d1)← , date(d2) ←
G: ← not scheduled(d2), reserve(d2, Flight# ).
G: ← reserve(d2, Flight# ).
Example (cont.)
G: ← reserve( d2, Flight# ).
P1: reserve( Date, Flight# ) ← flight( Date, Flight# ), not state( Flight#, full ).
G: ← flight( d2, Flight# ), not state( Flight#, full ).
P1: flight( d2, f456 )←.  flight( d2, f789 )←.
G: ← not state( f456, full ).
! Suspend the goal and ask P2 whether state( f456, full ) holds or not.
Example (cont.)
G: ← not state( f456, full ).
P2: state( f456, full )←  ⇒ fail
! Backtrack
G: ← flight( d2, Flight# ), not state( Flight#, full ).
P1: flight( d2, f456 )←.  flight( d2, f789 )←.
G: ← not state( f789, full ).
Example (cont.)
G: ← not state( f789, full ).
! Suspend the goal and ask P2 whether state( f789, full ) holds or not.
The goal ← state( f789, full ) fails in P2 , so G succeeds in P1 .
As a result, the initial goal ← travel( Date, Flight# ) has the unique solution Date=d2 and Flight# = f789.
Correctness
Let {P1 ,…, Pn} be an MLP where each Pi is a stratified normal logic program. If an agent Ai solves a goal G with an answer substitution θ, then Ai |= Gθ, i.e., Gθ is true in the belief set of Ai .
Further Issue
• The present system suspends a goal containing an external predicate and waits for a response from other agents.
• When an expected response is known, speculative computation [Satoh et al., 2000] would be useful to proceed with computation without waiting for responses.
Further Issue
• The present system asks every other agent about the provability of external literals and has no strategy for selecting among responses.
• When the source of information is known, it is effective to designate which agents to ask, or to discriminate among agents based on their reliability.
Summary
• A declarative semantics of default and cooperative reasoning in an MAS is provided. Belief sets characterize different types of incompleteness of an agent in an MAS.
• A proof procedure for query-answering in an MAS is provided. It is sound under the belief set semantics when an MLP is stratified.