Introduction: Agent Programming
Birna van Riemsdijk and Koen Hindriks
Delft University of Technology, The Netherlands
Multi-Agent Systems 2013


Page 2:

Outline

• Previous lecture (the last lecture on Prolog):
  – "Input & Output"
  – Negation as failure
  – Search

• Coming lectures:
  – Agents that use Prolog for knowledge representation

• This lecture:
  – Agent introduction
  – "Hello World" example in GOAL

Page 3:

Agents: Act in environments

[Diagram: the agent chooses an action; percepts flow from the environment to the agent, and actions from the agent to the environment.]

Page 4:

Agent Capabilities

• Reactive – responds in a timely manner to change.

• Proactive – (persistently) pursues multiple, explicit goals over time.

• Social – agents need to interact, e.g. to perform as a team (topic of the last lecture).

Page 5:

Agents: Act to achieve goals

[Diagram: percepts and events flow from the environment to the agent; the agent selects actions to achieve its goals. Responding to events gives reactivity; pursuing goals gives proactivity.]

Page 6:

Agents: Represent environment

[Diagram: the agent maintains beliefs that represent the environment; events update the beliefs, and beliefs, goals, and plans together determine which actions are performed in the environment.]

Page 7:

Agent-Oriented Programming

• Develop programming languages in which events, beliefs, goals, actions, plans, ... are first-class citizens.

Page 8:

Language Elements

Key language elements of APLs:

• beliefs and goals to represent environment

• events received from environment (& internal)

• actions to update beliefs, adopt goals, send messages, act in environment

• plans, capabilities & modules to structure action

• rules to select actions/plans/modules/capabilities

• support for multi-agent systems

Inspired by the Belief-Desire-Intention (BDI) agent metaphor.

Page 9:

A Brief History of AOP

• 1990: AGENT-0 (Shoham)
• 1993: PLACA (Thomas; AGENT-0 extension with plans)
• 1996: AgentSpeak(L) (Rao; inspired by PRS)
• 1996: Golog (Reiter, Levesque, Lesperance)
• 1997: 3APL (Hindriks et al.)
• 1998: ConGolog (De Giacomo, Levesque, Lesperance)
• 2000: JACK (Busetta, Howden, Ronnquist, Hodgson)
• 2000: GOAL (Hindriks et al.)
• 2000: CLAIM (Amal El Fallah Seghrouchni)
• 2002: Jason (Bordini, Hubner; implementation of AgentSpeak)
• 2003: Jadex (Braubach, Pokahr, Lamersdorf)
• 2008: 2APL (successor of 3APL)

This overview is far from complete!

Page 10:

References

Websites
• 2APL: http://www.cs.uu.nl/2apl/
• Agent Factory: http://www.agentfactory.com
• Goal: http://mmi.tudelft.nl/trac/goal
• JACK: http://www.agent-software.com.au/products/jack/
• Jadex: http://jadex.informatik.uni-hamburg.de/
• Jason: http://jason.sourceforge.net/
• JIAC: http://www.jiac.de/

Books
• Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2005. Multi-Agent Programming: Languages, Platforms and Applications. Presents 3APL, CLAIM, Jadex, Jason.
• Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2009. Multi-Agent Programming: Languages, Tools and Applications. Presents, among others: Brahms, CArtAgO, Goal, JIAC Agent Platform.

Page 11:

The Goal Agent Programming Language

Page 12:

The Blocks World

The "Hello World" example of agent programming.

Page 13:

The Blocks World

• The positioning of blocks on the table is not relevant.
• A block can be moved only if there is no other block on top of it.

Objective: move the blocks in the initial state such that the result is the goal state.

This is a classic AI planning problem.

Page 14:

Representing the Blocks World

Basic predicate:
• on(X,Y).

Defined predicates:
• block(X) :- on(X, _).
• clear(X) :- block(X), not(on(Y,X)).
• clear(table).
• tower([X]) :- on(X,table).
• tower([X,Y|T]) :- on(X,Y), tower([Y|T]).


Prolog is the knowledge representation language used in Goal.
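To check one's understanding of these Prolog definitions, they can be mimicked outside GOAL. Below is a minimal Python sketch (not GOAL or Prolog; the function names and the example facts are chosen here purely for illustration):

```python
# A blocks-world state as a set of on(X,Y) facts: a on b on c on the table.
on = {("a", "b"), ("b", "c"), ("c", "table")}

def block(x):
    # block(X) :- on(X, _).
    return any(top == x for top, _ in on)

def clear(x):
    # clear(table).  clear(X) :- block(X), not(on(Y,X)).
    if x == "table":
        return True
    return block(x) and not any(below == x for _, below in on)

def tower(blocks):
    # tower([X]) :- on(X,table).
    # tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
    if len(blocks) == 1:
        return (blocks[0], "table") in on
    return (blocks[0], blocks[1]) in on and tower(blocks[1:])

print(clear("a"), clear("b"))   # a has nothing on top; b carries a
print(tower(["a", "b", "c"]))   # the whole stack is a tower
```

Unlike Prolog, this sketch only answers ground queries; it does not enumerate bindings for variables.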

Page 15:

Representing the Initial State

Using the on(X,Y) predicate we can represent the initial state.

beliefs{
  on(a,b). on(b,c). on(c,table).
  on(d,e). on(e,table).
  on(f,g). on(g,table).
}

Initial belief base of agent

Page 16:

Representing the Blocks World

• What about the rules we defined before?
• Clauses that do not change are added to the knowledge base.

tower([X]) :- on(X,table).
tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
clear(X) :- block(X), not(on(Y,X)).
clear(table).
block(X) :- on(X, _).

knowledge{
  block(X) :- on(X, _).
  clear(X) :- block(X), not(on(Y,X)). clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}

Static knowledge base of agent

Page 17:

Representing the Goal State

Using the on(X,Y) predicate we can represent the goal state.

goals{
  on(a,e), on(b,table), on(c,table), on(d,c),
  on(e,b), on(f,d), on(g,table).
}

Initial goal base of agent

Page 18:

One or Many Goals

In the goal base, using a comma or a period as separator makes a difference!

goals{ on(a,table), on(b,a), on(c,b).}

goals{ on(a,table). on(b,a). on(c,b).}

• The first goal base has a single (conjunctive) goal; the second has three goals.

• Moving c on top of b, then c to the table, then a to the table, and finally b on top of a achieves each of the three separate goals at some point, but never achieves the single conjunctive goal.

• The reason is that the goal base with three goals does not require block c to be on b, b to be on a, and a to be on the table at the same time.

Page 19:

Mental State of Goal Agent

knowledge{
  block(X) :- on(X, _).
  clear(X) :- block(X), not(on(Y,X)). clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}
beliefs{
  on(a,b). on(b,c). on(c,table).
  on(d,e). on(e,table).
  on(f,g). on(g,table).
}
goals{
  on(a,e), on(b,table), on(c,table), on(d,c),
  on(e,b), on(f,d), on(g,table).
}

The knowledge, belief, and goal sections together constitute the specification of the Mental State of a Goal Agent.

Initial mental state of agent

Page 20:

Why a Separate Knowledge Base?

• Concepts defined in knowledge base can be used in combination with both the belief and goal base.

• Example:
  – Since the agent believes on(e,table), on(d,e), infer: the agent believes tower([d,e]).
  – If the agent wants on(a,table), on(b,a), infer: the agent wants tower([b,a]).

• Knowledge base introduced to avoid duplicating clauses in belief and goal base.

Page 21:

Using the Belief & Goal base

• Selecting actions using beliefs and goals

• Basic idea:
  – If I believe B, then do action A (reactivity).
  – If I believe B and have goal G, then do action A (proactivity).

Page 22:

Inspecting the Belief & Goal base

• The operator bel(φ) inspects the belief base.

• The operator goal(φ) inspects the goal base.
  – Here φ is a Prolog conjunction of literals.

• Examples:
  – bel(clear(a), not(on(a,c))).
  – goal(tower([a,b])).

Page 23:

Inspecting the Belief Base

• bel(φ) succeeds if φ follows from the belief base in combination with the knowledge base.
  – The condition φ is evaluated as a Prolog query.

• Example:
  – bel(clear(a), not(on(a,c))) succeeds.

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]).}beliefs{ on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table).}

Page 24:

Inspecting the Belief Base

Which of the following succeed?
1. bel(on(b,c), not(on(a,c))).
2. bel(on(X,table), on(Y,X), not(clear(Y))).
3. bel(tower([X,b,d])).

[X=c; Y=b]

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]).}beliefs{ on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table).}

EXERCISE:

Page 25:

Inspecting the Goal Base

• goal(φ) succeeds if φ follows from one of the goals in the goal base, in combination with the knowledge base.

• Example:
  – goal(clear(a)) succeeds,
  – but goal(clear(a), clear(c)) does not.

Use the goal(…) operator to inspect the goal base.

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]).}goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table). on(c,table).}

Page 26:

Inspecting the Goal Base

Which of the following succeed?
1. goal(on(b,table), not(on(d,c))).
2. goal(on(X,table), on(Y,X), clear(Y)).
3. goal(tower([d,X])).

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]).}goals{ on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).}

EXERCISE:

Page 27:

Negation and Beliefs

not(bel(on(a,c))) = bel(not(on(a,c)))?

• Answer: Yes.
  – Prolog implements negation as failure:
  – if φ cannot be derived, then not(φ) can be derived.
  – We always have: not(bel(φ)) = bel(not(φ)).

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]).}beliefs{ on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table).}
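The equivalence can be illustrated with a closed-world lookup over a single belief set. A hedged Python sketch (the bel function and the string encoding of literals are invented here for illustration; GOAL's actual bel operator runs a full Prolog query):

```python
# The belief base as a set of ground facts (derived knowledge omitted).
beliefs = {"on(a,b)", "on(b,c)", "on(c,table)"}

def bel(query):
    # Negation as failure: not(phi) holds iff phi cannot be derived.
    if query.startswith("not(") and query.endswith(")"):
        return not bel(query[4:-1])
    # Positive literal: must be derivable (here: present) in the belief base.
    return query in beliefs

# Because there is a single belief base and negation is failure,
# not(bel(phi)) coincides with bel(not(phi)) for every phi:
for phi in ["on(a,b)", "on(a,c)"]:
    assert (not bel(phi)) == bel("not(" + phi + ")")
print("equivalence holds")
```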

Page 28:

Negation and Goals

not(goal(φ)) = goal(not(φ))?

• Answer: No.

• With the goal base below we have, for example, both goal(on(a,b)) and goal(not(on(a,b))): the first follows from the first goal, and the second follows (by negation as failure) from the second goal.

knowledge{ block(X) :- on(X, _). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]).}goals{ on(a,b), on(b,table). on(a,c), on(c,table).}

EXERCISE:
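The asymmetry arises because goal(φ) is evaluated against each goal in the goal base separately. A simplified Python sketch (goals modelled as sets of ground atoms, with negation as failure evaluated per goal; names are illustrative, not GOAL syntax):

```python
# Two separate goals, as in the goal base above.
goals = [
    {"on(a,b)", "on(b,table)"},
    {"on(a,c)", "on(c,table)"},
]

def holds(atom, goal_set, negated=False):
    # Negation as failure within a single goal.
    return (atom not in goal_set) if negated else (atom in goal_set)

def goal(atom, negated=False):
    # goal(phi) succeeds if phi follows from SOME single goal.
    return any(holds(atom, g, negated) for g in goals)

# Both succeed: on(a,b) follows from the first goal, while not(on(a,b))
# follows (by negation as failure) from the second goal.
assert goal("on(a,b)")
assert goal("on(a,b)", negated=True)
# Hence goal(not(phi)) is NOT the same as not(goal(phi)):
assert goal("on(a,b)", negated=True) != (not goal("on(a,b)"))
```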

Page 29:

Combining Beliefs and Goals

• Consider the following beliefs and goals.

• We have both bel(on(a,b)) and goal(on(a,b)).

• Why have something as a goal that has already been achieved?

Useful to combine the bel(…) and goal(…) operators.

beliefs{ on(a,b). on(b,c). on(c,table).}

goals{ on(a,b), on(b,table).}

Page 30:

Combining Beliefs and Goals

• Achievement goals:
  – a-goal(φ) = goal(φ), not(bel(φ))
  – An agent only has an achievement goal if it does not believe the goal has been reached already.

• Goal achieved:
  – goal-a(φ) = goal(φ), bel(φ)
  – A (sub)goal has been achieved if the agent believes φ.

Useful to combine the bel(…) and goal(…) operators.
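These combinations can be sketched directly on top of the bel and goal operators. A minimal Python illustration of the two definitions (restricted to ground atoms; the function names mirror the operators but are not GOAL syntax, and the example state reuses the beliefs and goals from the previous slide):

```python
beliefs = {"on(a,b)", "on(b,c)", "on(c,table)"}
goals = [{"on(a,b)", "on(b,table)"}]

def bel(atom):
    return atom in beliefs

def goal(atom):
    # phi follows from some single goal in the goal base
    return any(atom in g for g in goals)

def a_goal(atom):
    # a-goal(phi) = goal(phi), not(bel(phi)): still to be achieved
    return goal(atom) and not bel(atom)

def goal_a(atom):
    # goal-a(phi) = goal(phi), bel(phi): (sub)goal already achieved
    return goal(atom) and bel(atom)

assert goal_a("on(a,b)")      # wanted and already believed
assert a_goal("on(b,table)")  # wanted but not yet believed
assert not a_goal("on(a,b)")  # no achievement goal for achieved facts
```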

Page 31:

Expressing BW Concepts

• Define: block X is misplaced.
• Solution:

  goal(tower([X|T])), not(bel(tower([X|T]))).

• In other words, saying that a block is misplaced is saying that you have an achievement goal:

  a-goal(tower([X|T])).

Possible to express key Blocks World concepts by means of basic operators.


Page 32:

Action Specifications

Changing Blocks World Configurations

Page 33:

Actions Change the Environment…

move(a,d)

Page 34:

…and Require Updating Mental States

• To ensure adequate beliefs after performing an action, the belief base needs to be updated (and possibly the goal base).
  – Add effects to the belief base: insert on(a,d) after move(a,d).
  – Delete old beliefs: delete on(a,b) after move(a,d).

Page 35:

…and Require Updating Mental States

• If a goal is believed to be completely achieved, the goal is removed from the goal base.
• It is not rational to have a goal you believe to be achieved.
• The default update implements a blind commitment strategy.

move(a,b)

Before: beliefs{ on(a,table), on(b,table). }  goals{ on(a,b), on(b,table). }
After:  beliefs{ on(a,b), on(b,table). }  goals{ }

Page 36:

Action Specifications

• Actions in GOAL have preconditions and postconditions.
• Executing an action in GOAL means:
  – Preconditions are conditions that need to hold:
    • check the preconditions on the belief base.
  – Postconditions (effects) are add/delete lists (STRIPS):
    • add the positive literals in the postcondition,
    • delete the negative literals in the postcondition.

• STRIPS-style specification:

move(X,Y){
  pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) }
  post { not(on(X,Z)), on(X,Y) }
}

Page 37:

move(X,Y){

pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) )}

post { not(on(X,Z)), on(X,Y) }

}

Example: move(a,b)
• Check: clear(a), clear(b), on(a,Z), not( on(a,b) )
• Remove: on(a,Z)
• Add: on(a,b)

Note: first remove, then add.
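The check/remove/add cycle can be sketched as a plain STRIPS update. A hedged Python version (beliefs as ground on(X,Y) facts; the binding of Z is looked up directly rather than computed by full unification, and the function name is invented for illustration):

```python
def apply_move(beliefs, x, y):
    """Apply move(x,y) to a set of on(X,Y) facts, STRIPS-style."""
    def clear(blk):
        # Nothing sits on blk; the table is always clear.
        return blk == "table" or not any(below == blk for _, below in beliefs)
    # Bind Z via on(x,Z).
    z = next((below for top, below in beliefs if top == x), None)
    # pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) }
    if not (clear(x) and clear(y) and z is not None and (x, y) not in beliefs):
        raise ValueError("precondition failed")
    updated = set(beliefs)
    updated.discard((x, z))   # first remove: not(on(X,Z))
    updated.add((x, y))       # then add: on(X,Y)
    return updated

# a and b on the table; move a onto b.
after = apply_move({("a", "table"), ("b", "table")}, "a", "b")
print(sorted(after))  # on(a,b) and on(b,table)
```

Removing before adding matters when Z and Y coincide with the same fact; doing it in the other order could delete the effect just added.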

Actions Specifications


Page 38:

move(X,Y){

pre { clear(X), clear(Y), on(X,Z) }

post { not(on(X,Z)), on(X,Y) }

}

Example: move(a,b)

Before: beliefs{ on(a,table), on(b,table). }
After:  beliefs{ on(b,table). on(a,b). }

Page 39:

move(X,Y){

pre { clear(X), clear(Y), on(X,Z) }

post { not(on(X,Z)), on(X,Y) }

}

1. Is it possible to perform move(a,b)?

2. Is it possible to perform move(a,d)?

EXERCISE:

knowledge{ block(a), block(b), block(c), block(d), block(e), block(f), block(g), block(h), block(i). clear(X) :- block(X), not(on(Y,X)). clear(table). tower([X]) :- on(X,table). tower([X,Y|T]) :- on(X,Y), tower([Y|T]).}beliefs{ on(a,b), on(b,c), on(c,table), on(d,e), on(e,table), on(f,g), on(g,table).}

1. No: the precondition clear(b) fails, since a is on top of b.
2. Yes.

Page 40:

Action Rules

Selecting actions to perform

Page 41:

Agent-Oriented Programming

• How do humans choose and/or explain actions?

• Examples:
  – I believe it rains, so I will take an umbrella with me.
  – I go to the video store because I want to rent I, Robot.
  – I don't believe buses run today, so I take the train.

• Use intuitive common-sense concepts:

  beliefs + goals => action

See Chapter 1 of the Programming Guide.

Page 42:

Selecting Actions: Action Rules

• Action rules are used to define a strategy for action selection.

• Defining a strategy for the blocks world:
  – If a constructive move can be made, make it.
  – If a block is misplaced, move it to the table.

• What happens:
  – Check the condition: e.g., can a-goal(tower([X|T])) be derived given the current mental state of the agent?
  – If yes, then (potentially) select move(X,table).

program{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}

Page 43:

Order of Action Rules

• Action rules are by default evaluated in linear order:
  – the first rule that is able to fire is applied.
• The default order can be changed to random:
  – an arbitrary rule that is able to fire may be selected.

program{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}

program[order=random]{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}
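The linear vs. random selection strategies can be sketched as a toy rule interpreter. A hedged Python illustration (the rule conditions are stand-ins for the two blocks-world action rules; real GOAL evaluates Prolog conditions with variable bindings, which is omitted here):

```python
import random

# Each rule: (condition over a state, action). Toy stand-ins for the
# constructive-move rule and the move-to-table rule.
rules = [
    (lambda s: s["constructive_move"], "move(X,Y)"),
    (lambda s: s["misplaced_block"], "move(X,table)"),
]

def select_linear(state):
    # Default order: the first rule (in program order) able to fire is applied.
    for condition, action in rules:
        if condition(state):
            return action
    return None

def select_random(state, rng=random):
    # order=random: any rule able to fire may be selected.
    applicable = [action for condition, action in rules if condition(state)]
    return rng.choice(applicable) if applicable else None

state = {"constructive_move": True, "misplaced_block": True}
print(select_linear(state))  # always the constructive move
print(select_random(state))  # either action may be chosen
```

With linear order the constructive-move rule shadows the move-to-table rule whenever both apply, which is exactly why it is listed first in the program above.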

Page 44:

Example Program: Action Rules

• An agent program may allow for multiple action choices; with order=random an arbitrary enabled rule is chosen (e.g., moving block d to the table).

program[order=random]{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}

Page 45:

The Sussman Anomaly (1/5)

• Non-interleaved planners typically separate the main goal on(A,B), on(B,C) into two sub-goals: on(A,B) and on(B,C).

• Planning for these two sub-goals separately and combining the plans found does not work in this case, however.

[Diagram. Initial state: c on a, with a and b on the table. Goal state: a on b, b on c.]

Page 46:

The Sussman Anomaly (2/5)

• Initially, all blocks are misplaced.
• One constructive move can be made (c to the table).
• Note: move(b,c) is not enabled.
• The only action enabled is moving c to the table.

Need to check the conditions of the action rules:
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).

We have bel(tower([c,a])) and a-goal(tower([c])).

[Diagram. Initial state: c on a; b on the table. Goal state: a on b, b on c.]

Page 47:

The Sussman Anomaly (3/5)

• The only constructive move enabled is:
  – move b onto c.

Need to check the conditions of the action rules:
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).

Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([c])).

[Diagram. Current state: a, b, and c all on the table. Goal state: a on b, b on c.]

Page 48:

The Sussman Anomaly (4/5)

• Again, the only constructive move enabled is:
  – move a onto b.

Need to check the conditions of the action rules:
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).

Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([b,c])).

[Diagram. Current state: b on c; a on the table. Goal state: a on b, b on c.]

Page 49:

The Sussman Anomaly (5/5)

• Upon achieving a goal completely, that goal is automatically removed.
• The idea is that no resources should be wasted on a goal that has already been achieved.

In our case, goal(on(a,b), on(b,c), on(c,table)) has been achieved and is dropped. The agent has no other goals and is done.

[Diagram. Current state = goal state: a on b, b on c.]

Page 50:

Organisation

• Read the Programming Guide, Ch. 1-3 (+ User Manual)

• Tutorial:
  – Download Goal: see http://ii.tudelft.nl/trac/goal (v4537)
  – Practice exercises from the Programming Guide
  – BW4T assignments 3 and 4 available

• Next lecture:
  – Sensing, perception, environments
  – Other types of rules & macros
  – Agent architectures