Lecture 7:
Robot Teamwork
Gal A. Kaminka
Introduction to Robots and Multi-Robot Systems
Agents in Physical and Virtual Environments
2 © Gal Kaminka
This week, on Robots….
We’re starting to look at multiple robots
Relevant fields: multi-agent systems, multi-robot systems
Focus: teams and small groups, with common goals, complex coordination, and complex tasks
3 © Gal Kaminka
Developing Teamwork
Agent teams are everywhere
A (biased?) historical perspective
Sense-Think-Act: what's in Think?
How Neaties view teamwork: Joint Intentions
How Scruffies view teamwork: ALLIANCE, STEAM
Theory-inspired teamwork engines
4 © Gal Kaminka
Agent Teams Are Everywhere: Teamwork is Important
Nature: formations, flocking, pack hunting
Software development
Robotics: nature imitations, explorations, soccer
Internet, intranets: routing, distributed applications, groupware, workflow, cooperating information agents
Virtual environments for training, simulations
Human-computer interactions
5 © Gal Kaminka
The Sense-Think-Act Cycle: What's in Think (for Scruffies) in the late 80's?
No need to Think: if sensors read X, then do Y (the Reactive camp: Brooks 1986, Schoppers 1987)
Limited thinking: behavior-based control; behaviors may have state, memory, procedures (Arkin, Firby 1986, Maes, ...)
Deep thinking: integrated planning and monitoring, e.g., IPEM (1988)
Hybrid architectures (e.g., Gat 1992)
6 © Gal Kaminka
The Sense-Think-Act Cycle: What's in Think (for Neaties) in the late 80's?
"The Old View": plans as sequences of actions for execution
Plans as mental attitudes (Pollack 1992); plans as recipes: some get executed, some are just known
BDI: Belief-Desire-Intention (approximately):
Belief: what the agent knows
Desire: what the agent ideally wants to see happen
Intention: what the agent actually acts towards
Commitments
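To make the BDI vocabulary concrete, here is a minimal, hypothetical sketch of a BDI-style control loop in Python. The class and its trivial deliberation rule are illustrative only, and not taken from any particular BDI system.

# Minimal, illustrative BDI control loop (hypothetical names and logic).
class BDIAgent:
    def __init__(self, desires):
        self.beliefs = set()          # what the agent knows
        self.desires = set(desires)   # what it ideally wants to see happen
        self.intentions = []          # what it actually acts towards

    def step(self, percepts):
        # Belief revision: fold new percepts into the belief base.
        self.beliefs |= set(percepts)
        # Deliberation: commit to desires not yet believed achieved.
        self.intentions = [d for d in self.desires if d not in self.beliefs]
        # Means-ends reasoning (trivial here): act on the first intention.
        return f"act-towards({self.intentions[0]})" if self.intentions else "idle"

agent = BDIAgent(desires={"at-home"})
print(agent.step(percepts={"on-road"}))   # act-towards(at-home)
print(agent.step(percepts={"at-home"}))   # idle: the desire is now believed achieved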
7 © Gal Kaminka
An Historical Perspective on Teamwork:
From a Single Agent to Multiple Agents
[Timeline figure ('86 to '96; admittedly subjective): the Scruffiness track runs from reactive plans and architectures, through behavior-based architectures, to integrated planning, execution, monitoring, and re-planning architectures; the Neatness track runs from mental attitudes (Belief, Desire, Intention: BDI) to plans as attitudes.]
8 © Gal Kaminka
Introduction of Multi-Agent Settings: A Change in Perspectives?
Multi-agent environments become more pervasive (late '80s, early '90s)
Philosophical influences on NLP; challenging test-beds
Everyone responds using what they know:
Social reactive/behavior-based applications
Multi-agent planning, negotiations
Social intentions, commitments, beliefs, desires
We’ll examine theories and behavior-based architectures
9 © Gal Kaminka
An Historical Perspective on Teamwork:
From a Single Agent to Multiple Agents
[Timeline figure, extended: the MAS line now cuts across the Neatness track; mental attitudes (BDI) and plans as attitudes ('86-'90) are followed by social attitudes, communicative acts, and commitments, leading to teamwork, etc. around '96. Admittedly subjective.]
10 © Gal Kaminka
Teamwork is (Bratman 1992): Mutual Commitment to Joint Activity
Agreement on the joint activity: cannot abandon the activity without involving teammates
Mutual support: must actively help teammates' activity
Mutual responsiveness: take over tasks from teammates if necessary
11 © Gal Kaminka
Teamwork Theories
SharedPlans (Sidner & Grosz 1986, Grosz & Kraus 1996):
Teammates agree on a SharedPlan; they plan it together and execute it together
Specifies conditions for assistance, monitoring
Joint Intentions framework (Cohen & Levesque):
Teammates agree on intentions
Teammates agree on selecting/deselecting goals, i.e., goal unachievable, achieved, or irrelevant
12 © Gal Kaminka
What’s Teamwork? The Famous Convoy Example
Two agents, Alice and Bob:
Bob does not know how to get home
Bob knows Alice knows how to get home
Bob knows Alice lives near Bob
We have two agents with matching goals: both want to get to (approximately) the same place
If Bob follows Alice, is that teamwork?
13 © Gal Kaminka
Convoy Example Cont’d
Imagine Bob following Alice
Case 1: No teamwork. Bob follows Alice without talking to her first
Case 2: Teamwork. Bob asks Alice to lead him home, and she agrees
14 © Gal Kaminka
Case 1: No teamwork
What happens if Alice goes home as planned?
What happens if Bob’s car breaks down?
What happens if Alice decides to change her mind?
15 © Gal Kaminka
Case 2: Teamwork. The cars are in a convoy (a team):
If Bob stops, Alice should stop, or ...
Alice should use lots of signals
Alice drives slowly, looks in the mirror a lot ...
16 © Gal Kaminka
Joint Intentions Key Ideas
Mutual belief (MB) in the joint intention and in the goal
Joint execution until MB in goal termination: a member cannot abandon teamwork when it privately believes it's over
Termination: goal achieved, goal unachievable, or goal irrelevant
17 © Gal Kaminka
Intuition and example: team members work towards the joint goal. If one privately believes the goal should be terminated (achieved, unachievable, or irrelevant), then it is responsible for making this belief mutual.
Consider:
If Bob decides he got home, there is no need to follow Alice
If Alice changes her mind about where to go
If Bob’s car breaks down ...
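A minimal sketch of this responsibility rule, assuming a simple broadcast channel. TeamMember and its methods are hypothetical; the actual Joint Intentions framework is a modal logic, not code.

# Illustrative sketch of the joint-intentions commitment rule: a member
# that privately concludes the goal is over may not silently quit.
ACHIEVED, UNACHIEVABLE, IRRELEVANT = "achieved", "unachievable", "irrelevant"

class TeamMember:
    def __init__(self, name):
        self.name, self.teammates, self.committed = name, [], True

    def private_update(self, status):
        # Responsibility: first make the private belief mutual ...
        if status in (ACHIEVED, UNACHIEVABLE, IRRELEVANT):
            for mate in self.teammates:
                mate.receive(self.name, status)
            self.committed = False       # ... only then stop working

    def receive(self, sender, status):
        print(f"{self.name}: {sender} reports the goal is {status}")
        self.committed = False

alice, bob = TeamMember("Alice"), TeamMember("Bob")
alice.teammates, bob.teammates = [bob], [alice]
bob.private_update(ACHIEVED)   # Bob got home: he must tell Alice before quitting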
18 © Gal Kaminka
Additional Theoretical Thoughts
Teamwork is not coordination:
A convoy looks just like ordinary traffic when everything is OK
Chess is coordinated, but not teamwork
Tracking involves one-sided coordination, for example
Teamwork is not necessarily rational:
It may not be rational to “waste” cycles on informing others
There is little work addressing this problem
19 © Gal Kaminka
Questions?
20 © Gal Kaminka
An Historical Perspective on Teamwork:
From a Single Agent to Multiple Agents
[Timeline figure, extended on the Scruffiness track: beyond the MAS line, reactive plans and behavior-based architectures ('86-'90) lead to social behaviors, reacting to others, and behavioral roles ('96). Admittedly subjective.]
21 © Gal Kaminka
Behavior-Based Teamwork
Parker: distributed, fault-tolerant architecture; closest to explicit teamwork among roboticists
Tambe: STEAM teamwork engine; explicit teamwork, virtual robots
Mataric: behavior combinations create different spatial group behaviors, e.g., foraging, flocking, follow-the-leader
Balch: behavior-based formation maintenance
Kuniyoshi et al.: observation-based cooperation
Many more, including an explosive number of ad-hoc techniques
22 © Gal Kaminka
ALLIANCE (Parker 1998)
Fault-tolerant robot team control:
Robots carry out team sub-tasks
Each robot uses behavior-based control, and runs ALLIANCE processes in addition
Robots communicate as part of ALLIANCE
Heterogeneous robots (but all run ALLIANCE)
Covers many kinds of failures: individual action, sensing, communications
23 © Gal Kaminka
What does ALLIANCE use?
Each robot has several behavior-sets
A behavior-set is a collection of behaviors for a particular task; behaviors within a set may inhibit or activate each other
Sets are (de)selected based on motivational behaviors:
Triggers integrate perceptions and communications
Also internal-state motivations (explained below)
Only one behavior-set is active at a time
Robots communicate to each other what they are doing
24 © Gal Kaminka
25 © Gal Kaminka
Motivations
How do robots select what task to do? (Task == behavior-set)
This is a social choice! All tasks need to get done; if robots fail, others should step in
Motivation: a numerical internal-state variable whose value changes based on processing of sensors and communications
26 © Gal Kaminka
An Elegant Solution
Robots keep track of their own progress
Robots communicate to each other what they are doing, so each robot knows what tasks its peers are doing
Fault tolerance is achieved through two motivations:
Impatience with others' performance of a task: its value increases when a peer is not making progress
Acquiescence, i.e., impatience with my own lack of progress: its value increases when I am not making progress
27 © Gal Kaminka
An Elegant Solution (Cont'd)
If impatience with task T grows too large:
If another robot takes over T, reset the impatience
If no other robot does T, try to take over T
If acquiescence with task T grows too large, the robot abandons its own execution of T
(A minimal sketch of these motivation updates follows this slide.)
Assumptions:
Robots can monitor their own actions, and those of others
Robots do not lie, and are not intentionally adversarial
Taking over roles can be done smoothly
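The sketch below is a drastic simplification of these two motivations; Parker's actual update rules (1998) are richer (they also gate on sensory feedback and communicated progress), and the rates, threshold, and API here are hypothetical.

# Much-simplified sketch of ALLIANCE-style impatience and acquiescence.
IMPATIENCE_RATE, ACQUIESCENCE_RATE, THRESHOLD = 1.0, 1.0, 10.0

class Robot:
    def __init__(self, tasks):
        self.impatience = {t: 0.0 for t in tasks}  # towards others' tasks
        self.acquiescence = 0.0                    # towards my own task
        self.my_task = None

    def step(self, peer_progress, my_progress):
        # Impatience grows for any task whose executor is not progressing;
        # past the threshold, this robot takes the task over.
        for task, progressing in peer_progress.items():
            self.impatience[task] = 0.0 if progressing \
                else self.impatience[task] + IMPATIENCE_RATE
            if self.impatience[task] > THRESHOLD and task != self.my_task:
                self.my_task, self.impatience[task] = task, 0.0
        # Acquiescence grows when I am stuck on my own task; past the
        # threshold, I abandon my own execution of it.
        if self.my_task is not None:
            self.acquiescence = 0.0 if my_progress \
                else self.acquiescence + ACQUIESCENCE_RATE
            if self.acquiescence > THRESHOLD:
                self.my_task, self.acquiescence = None, 0.0

robot = Robot(["push-box", "scout"])
robot.my_task = "push-box"
for _ in range(11):                       # a peer keeps failing at "scout"
    robot.step({"scout": False}, my_progress=True)
print(robot.my_task)                      # scout: the stalled task was taken over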
28 © Gal Kaminka
ALLIANCE Example
A joint paper is to be written by V, B, and K
5 sections: Intro, Background, Method, Results, Discussion
Initially, V picks Intro, B picks Background, K picks Results
All have similar thresholds, and update each other via email every 15 minutes
What will happen under each of these (different types of) failures:
After much work, V's dog eats his copy of the Intro
B finishes Background ahead of schedule
K's email server stops working
B loses outgoing email to V, but not to K?
29 © Gal Kaminka
What you should know now
How single-agent people (used to) view the world: behavior-based control vs. planning/monitoring vs. BDI
How they view the problem of constructing a team
We did not discuss planning/monitoring, because it is an open area (Opportunity Alert!)
Some of what the theoreticians found:
Teamwork involves responsibility towards others' state
Teamwork involves more than just “strong agreement”
ALLIANCE: an elegant behavior-based approach to teams
30 © Gal Kaminka
Questions?
31 © Gal Kaminka
Teamwork in Multi-Agent Systems
GRATE*: teamwork in industry (Jennings 1995)
Uses reliable communications
“Naive” team formation using acquaintance models
STEAM: teamwork in VR and on the Internet (Tambe 1997)
Re-planning and team repair
Selective communications
32 © Gal Kaminka
An Historical Perspective on Teamwork:
From a Single Agent to Multiple Agents
[Timeline figure, complete: on the Scruffiness track, reactive plans and architectures, behavior-based architectures, and integrated planning, execution, monitoring, re-planning architectures lead to social behaviors and reacting to others; on the Neatness track, mental attitudes (BDI), plans as attitudes, and then social attitudes, communicative acts, and commitments lead to teamwork, social planning, and reasoning about roles; STEAM and GRATE* sit where the tracks meet, around '96. Admittedly subjective.]

33 © Gal Kaminka
GRATE*
Nick Jennings, 1995. In: Artificial Intelligence
34 © Gal Kaminka
GRATE* Problem Settings
Real-world industrial applications are very complex; a centralized control system is infeasible
Problem complexity is practically unmanageable; distributed control provides solutions
Divide & conquer: each sub-problem has reduced complexity
A natural fit to a distributed-components problem
Allows re-use of existing components
35 © Gal Kaminka
There’s always a “But…”
Distributed AI is appealing, BUT...
No principled way to build distributed systems:
Unclear when to communicate, and about what
Lack of coherent, predictable global behavior
Brittleness in dynamic, complex domains
An explosive number of possible interactions; we cannot predict all of them, so the system fails
36 © Gal Kaminka
Brittleness: Built on real-world experience
New information is available to some agents, but not all
Agents abandon a task while others are still working on it
Inter-agent communication is difficult to construct:
Agents wait too long; interruptions cause failures
Agents' actions cause false readings for other agents
37 © Gal Kaminka
Example: Power Transportation Networks
Three agents: AAA and BAI perform diagnosis together; CSI performs monitoring
CSI detects problems; AAA and BAI do diagnosis
Initially, AAA and BAI had their own monitoring:
Maintenance operations would then cause false alarms
Each had different monitoring expertise
Development of CSI alleviated this problem
38 © Gal Kaminka
Example Cont’d
Cooperation between CSI, AAA, and BAI was brittle:
CSI detected a fault, and then ruled it out, but “forgot” to let AAA and BAI know
AAA realized it could not diagnose, but let BAI and CSI continue working
CSI detected more information about faults, but did not send it to BAI and AAA
Interruptions caused BAI and AAA to wait for each other: deadlock in communications
39 © Gal Kaminka
Jennings’ Analysis of the Situation
Often there is no explicit representation of cooperation:
Agents cooperate for individual reasons
They don't know other agents exist, or are affected
The rare explicit cooperation is in surface rules that describe “social norms” with no deep knowledge,
e.g., “If A asked for X, and I promised to deliver, then inform A if I cannot deliver”
Like a subset of compiled knowledge: insufficient by itself; known from expert systems to be limited and brittle
40 © Gal Kaminka
Joint Responsibility Model
Built on Joint Intentions; adds the idea of a Joint Recipe (a plan)
Team members commit to execution until:
The recipe goal is achieved/unachievable, OR
A recipe step fails or becomes undoable
Agents communicate upon plan termination
Agents can form the team dynamically (more later)
41 © Gal Kaminka
GRATE*: A Joint-Responsibility Implementation
Each agent runs GRATE* processes (remember ALLIANCE?)
Communications are assumed reliable, with known delays
MB is approximated through simple agreement: everyone knows that everyone knows P
Also, global time is kept synchronized
42 © Gal Kaminka
GRATE*: Structure
GRATE* task-dependent knowledge: what tasks there are, and how I do them
GRATE* cooperation layer:
Controls task scheduling based on coordination
Feeds on information from individual control
Sends and receives communications
Uses acquaintance models and a Joint Responsibility implementation
43 © Gal Kaminka
Example
Suppose AAA receives diagnosis information; task knowledge says: do task T1, then T2
AAA starts working on T1 and discovers it cannot do it; the cooperation layer jumps in and says: inform BAI and CSI
OR: AAA starts working on T1, finishes it successfully, and starts doing T2; the cooperation layer jumps in and says: inform BAI and CSI
44 © Gal Kaminka
GRATE* Team Formation: Phase 1
When an agent recognizes the need for attaining goal G:
Determine the best recipe R for achieving G
Determine appropriately “skilled” agents, using acquaintance models
Contact these agents with a CFP (call for proposals), asking for their commitment proposals
Form a set of possible agents for the team
45 © Gal Kaminka
GRATE* Team Formation: Phase 2
Evaluate commitment proposals:
Select the minimal number of agents that can execute the actions in R
Determine the execution time t of each action
Get a commitment proposal from each agent for t; each agent agrees or counters
Somewhat similar to contract-net bidding, BUT with no backtracking: counter-proposals are always accepted
(Both phases are sketched below.)
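Here are both phases sketched as one hypothetical function. The acquaintance-model format is invented, team minimality is skipped, and this is not Jennings' actual algorithm.

def form_team(recipe, acquaintances):
    # Phase 1: use acquaintance models to find appropriately "skilled"
    # agents, and contact them with a call for proposals (CFP).
    candidates = [a for a in acquaintances if a["skills"] & set(recipe)]
    # Phase 2: evaluate commitment proposals action by action; each agent
    # agrees to the target time or counters with its earliest time, and
    # counter-proposals are always accepted (no backtracking).
    team = {}
    for action, target_t in recipe.items():
        for agent in candidates:
            if action in agent["skills"]:
                agreed_t = max(target_t, agent["earliest"].get(action, target_t))
                team[action] = (agent["name"], agreed_t)
                break
    return team

acquaintances = [
    {"name": "CSI", "skills": {"monitor"}, "earliest": {}},
    {"name": "AAA", "skills": {"diagnose"}, "earliest": {"diagnose": 5}},
]
print(form_team({"monitor": 0, "diagnose": 3}, acquaintances))
# {'monitor': ('CSI', 0), 'diagnose': ('AAA', 5)}: AAA countered with t=5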
46 © Gal Kaminka
GRATE* Recipe Execution
Each agent starts carrying out its role in R; it may wait for information from others, or work in parallel
The GRATE* cooperation layer coordinates execution:
In case of contingencies, Joint Responsibility jumps in to handle execution
Likewise for normative behavior
This relies on task knowledge to evaluate progress, unachievability, etc.
47 © Gal Kaminka
A GRATE* Team
[Figure: a GRATE* team executing recipe R. Agent 1 performs actions A and B, Agent 2 performs action C, Agent 3 performs action D; organizational links and dataflow/execution links connect them. GRATE* knows this structure, and therefore coordinates it.]
48 © Gal Kaminka
STEAM: A Shell for Teamwork
Milind Tambe, JAIR, 1997
49 © Gal Kaminka
STEAM at a glance:
Motivation similar to GRATE*: robustness (interesting, since the environments are completely different)
Uses Joint Intentions, but also SharedPlans
Adds significantly to the practice of teamwork
50 © Gal Kaminka
STEAM Novelties
Sub-teams and individual roles clearly defined
Selective communications; communication is not assumed reliable
Explicit mutual beliefs
Re-planning and team repair
Collaborative team formation
Demonstrated re-use across several domains!
51 © Gal Kaminka
STEAM: Motivation
As with GRATE*, the issue is robustness
Domain: high-fidelity distributed VR
Dozens of interacting agents, thousands altogether
Agents participate as combatants in a virtual battlefield
Simulation developed commercially for the military
Task: build helicopter pilots for a variety of missions
Discovered: debugging never ends!
52 © Gal Kaminka
An easy to program scenario...
Three helicopters out to attack the enemy:
Fly in formation until a landmark is seen
Then the Scout goes out while the Attackers wait
[Diagram: the waiting Attackers, with the Scout ahead of them.]
53 © Gal Kaminka
... is very difficult to maintain
What if some helicopters don’t see the landmark?
What if a helicopter crashes?
Different helicopters, different roles; a chain of command
What if communications fail?
What if one sees a threat to another?
...
Too many possible interactions, too many behaviors to specify!
54 © Gal Kaminka
An endless stream of problems...
2-3 agents: carefully controlled interactions
Bigger teams, complex scenarios: failures
Numerous ad-hoc plans, unable to preempt failures
No reuse across applications
Company waited indefinitely when the scout crashed
Commander returned to home base alone
Flight leader started the mission alone, when others were not ready
Company waited indefinitely when no scout was specified
...
55 © Gal Kaminka
Theoretical underpinnings
From Joint Intentions:
Joint-termination clauses
Communications/MB requirements
From Joint Intentions (Smith and Cohen, 1996):
Collaborative team formation (request-confirm protocol)
Handles failures during team formation (not in GRATE*)
From SharedPlans (Grosz and Kraus 1996):
Hierarchy of teams, subteams, individual intentions
Knowledge of capabilities (similar to acquaintance models)
56 © Gal Kaminka
A STEAM Agent's Structure
The agent is controlled by hierarchical behaviors/reactive plans (architecture: Soar)
Specific behaviors are tagged by the designer:
By the (sub)team that should execute them together
Or left untagged, for individual choice and execution
STEAM controls collaboration:
Makes the (sub)team jointly execute the tagged behaviors
Replans/repairs in case of problems
57 © Gal Kaminka
Example STEAM-tagged hierarchy (note the implicit organizational hierarchy):
[Hierarchy figure: Execute Mission (Entire Team) decomposes into Get Orders (Entire Team), Fly (Entire Team), and Land (Entire Team); Fly splits into Fly Main (Main Team) and Fly Support (Support Team); Contact HQ / Contact Peer and Fly High / Fly Low appear as untagged, individual options.]
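One possible way to encode such a tagged hierarchy as data is shown below. This is a hypothetical encoding, not STEAM's actual Soar representation, and the edges follow the reconstruction above.

# Hypothetical encoding of the tagged hierarchy: a behavior tagged with a
# (sub)team must be established jointly; untagged ones are individual.
hierarchy = {
    "Execute Mission": ("Entire Team",  ["Get Orders", "Fly", "Land"]),
    "Get Orders":      ("Entire Team",  ["Contact HQ", "Contact Peer"]),
    "Fly":             ("Entire Team",  ["Fly Main", "Fly Support"]),
    "Fly Main":        ("Main Team",    ["Fly High", "Fly Low"]),
    "Fly Support":     ("Support Team", []),
    "Land":            ("Entire Team",  []),
    "Contact HQ":      (None, []),   # individual choice
    "Contact Peer":    (None, []),
    "Fly High":        (None, []),
    "Fly Low":         (None, []),
}

def executing_team(behavior):
    team, _children = hierarchy[behavior]
    return team        # None means individual choice and execution

print(executing_team("Fly Main"))   # Main Team: jointly established
print(executing_team("Fly High"))   # None: an individual decision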
58 © Gal Kaminka
Selective Communication
Don’t assume free communications; assume communications have costs
(Still assumes communication is reliable, when used)
Is a message worth communicating?
Weigh the cost of communication against the cost of mis-coordination, given the probability that the information is already known, and the reward of collaboration
59 © Gal Kaminka
Deciding when to communicate
Assume we need others to have information P:
P tells them that a system is down
P tells them of a change in orders from the user
P tells them to switch from one mode to the next
...
Why not just communicate the information P?
60 © Gal Kaminka
Cannot always communicate:
Overloads the network; bandwidth unavailable
Costs too much
Unreliable
Takes too long to receive and process
Takes too long to process and send
Insecure
Radio silence
61 © Gal Kaminka
Communication Selectivity in STEAM (Tambe, 1997)
A simplified decision procedure, drawn as a decision tree: a decision node (communicate or not) leads to chance nodes. Communicating costs C and guarantees the fact F is known (reward B). Not communicating leaves F known with probability (1 - tau), reward B, or unknown with probability tau, reward B - Cmt.
EU(comm) = 1*B + 0*(B - Cmt) - C = B - C
EU(nocomm) = (1 - tau)*B + tau*(B - Cmt) = B - tau*Cmt
62 © Gal Kaminka
Simplified communication decision
Trade off communication costs against the risk of miscoordination
Communicate only if EU(comm) > EU(no-comm), i.e., if tau*Cmt > C, where:
tau: probability that the other agent does not know P
Cmt: cost of mis-coordination
C: cost of communication
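The rule transcribes directly into code, with tau, B, Cmt, and C as defined above.

def eu_comm(B, C):
    # Communicating guarantees F is known (reward B), at message cost C.
    return B - C

def eu_nocomm(B, Cmt, tau):
    # Silence risks miscoordination (reward B - Cmt) with probability tau.
    return (1 - tau) * B + tau * (B - Cmt)    # = B - tau*Cmt

def should_communicate(tau, Cmt, C, B=0.0):
    # EU(comm) > EU(nocomm) simplifies to tau*Cmt > C, independent of B.
    return eu_comm(B, C) > eu_nocomm(B, Cmt, tau)

print(should_communicate(tau=0.8, Cmt=10.0, C=1.0))  # True: risk outweighs cost
print(should_communicate(tau=0.1, Cmt=2.0,  C=1.0))  # False: not worth a message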
63 © Gal Kaminka
Deciding what to communicate about
So now we can decide whether to communicate. But what do we communicate about?
How do we communicate the right information?
64 © Gal Kaminka
Selective Communication Cont'd
STEAM maintains an explicit team-state, and differentiates individual beliefs from team beliefs
Communication selectivity not only chooses when, but also what:
Message content is limited to what's necessary: the difference between team beliefs and individual beliefs
65 © Gal Kaminka
Helicopter Example (simplified)
[Diagram: the mission is a sequence of modes, Obtain Orders, Fly-Plan, Scouting; in the Scouting mode, the Scout goes ahead while the Attackers wait.]
Divide the task into synchronized-execution modes
Tag each mode by the agents that should execute it
Synchronized at beginning and end
66 © Gal Kaminka
Helicopter Example Cont'd
[Diagram: each of the three agents (Scout, Attacker, Attacker) holds its own copy of the mode structure (Obtain Orders, Fly-Plan, Scout ahead / Attackers Wait), kept synchronized across the team.]
67 © Gal Kaminka
Team Monitoring and Repair
The designer specifies dependencies between tasks:
e.g., to fly, both fly-main and fly-support must be executed
e.g., only agent A or B can execute “Contact HQ”
STEAM monitors these dependencies:
Using the agent’s own sensors
Keeps track of team beliefs
Keeps track of agent state (dead/alive) and capability
STEAM re-plans the organization upon failure: if dependencies cannot hold, it tries to re-assign roles (see the sketch below)
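A minimal sketch of the re-assignment step, under assumed encodings of roles and capabilities. The greedy rule and all names are hypothetical, not STEAM's actual repair procedure.

def repair(roles, alive, capabilities):
    # Re-assign every role whose holder has failed to some capable,
    # living agent; report failure if a dependency cannot hold.
    for role, agent in list(roles.items()):
        if agent not in alive:
            candidates = [a for a in alive if role in capabilities[a]]
            if not candidates:
                return None              # repair fails: dependency broken
            roles[role] = candidates[0]  # take over the role
    return roles

roles = {"fly-main": "heli1", "fly-support": "heli2", "contact-hq": "heli1"}
capabilities = {"heli2": {"fly-main", "fly-support", "contact-hq"},
                "heli3": {"fly-support"}}
print(repair(roles, alive={"heli2", "heli3"}, capabilities=capabilities))
# heli1 crashed: heli2 takes over fly-main and contact-hq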
68 © Gal Kaminka
Final STEAM bits
Team formation is collaborative:
Agents act responsibly when selecting a new behavior
They do not leave teammates hanging
They address communication failures
STEAM was demonstrated on several domains:
Many different tasks in a simulated battlefield
Cyberspace/heterogeneous agents (city evacuation)
RoboCup Soccer simulation
69 © Gal Kaminka
What you should know
A common motivation for teamwork: robustness; too many specific coordination plans are possible!
Teamwork is a success of the theoreticians! The theory is remarkably useful, even when only partially used
GRATE* was first; it introduces the Joint Recipe
STEAM adds significantly: selective communication, organizational hierarchy, monitoring/repair
Open:
How to identify team-formation opportunities?
Persistent teams (some initial work)
70 © Gal Kaminka
Explicit Models of Teamwork
Algorithms to keep track of the organization:
Decide when to communicate
Decide what to communicate about
Decide how to adapt the organization to failures
Dynamically allocate tasks within the organization
Choose systems to carry out tasks
...
Advantages:
Robustness in the presence of unanticipated events
Reuse/transfer to other applications: why build coordination from scratch each time?
71 © Gal Kaminka
Research in Teamwork
Selective communications
Detecting failures in teamwork
Task allocation
Team planning, design
Coalition formation
Conversation policies
Teamwork without communications
Teamwork theory
Monitoring teams, debugging
72 © Gal Kaminka
Discussion
What are the communication requirements imposed by Joint Intentions? By ALLIANCE? By STEAM?
Do they agree or disagree? Who's right?
What does this tell us about the science of teamwork?
73 © Gal Kaminka
Context-free communication policies
Observation: teamwork architectures are monolithic
Once you choose an engine, you commit to:
A synchronization protocol
Allocation protocols
i.e., the same protocol regardless of situation: context-free communication policies
74 © Gal Kaminka
MONAD
Contributions:
A synchronized variable table maintains coordination; agents do not worry about communications (MONAD assumes reliable communications)
An offline team-design tool
Team constitution constraints: can constrain allocation by percentages, etc.
Context-dependent policies
75 © Gal Kaminka
Context-dependent Policies
Key idea: change the protocol based on the setting (a small sketch follows):
How much time does the team have to reach a decision?
Who can make better decisions?
What behaviors are involved?
How many agents are involved?
...
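A tiny sketch of context-dependent protocol selection. The protocol names echo ones mentioned later in the lecture (role, preference, voting); the thresholds and rules here are hypothetical.

def pick_arbitrator(time_budget_s, n_agents):
    # Hypothetical rules: cheap role-based arbitration under time pressure,
    # voting when time allows and the group is small, preference otherwise.
    if time_budget_s < 0.5:
        return "role"
    if n_agents <= 5:
        return "voting"
    return "preference"

print(pick_arbitrator(time_budget_s=0.1, n_agents=3))   # role
print(pick_arbitrator(time_budget_s=5.0, n_agents=3))   # voting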
77 © Gal Kaminka
A Portion of the Behavior Hierarchy
[Hierarchy figure (CTF domain): Win Game decomposes into Explore, Attack, and Defend, among others; Explore into ExploreOurBase and more; Attack into Attack Base, Go For Flag, Distract, and Go Back To Base; Defend includes Return Our Flag; Escort and Go Back also appear. Decomposition edges and temporal sequencing edges connect the behaviors.]
78 © Gal Kaminka
behavior Explore
  following WinGame
  children ExploreOurBase ExploreTheirBase
end behavior
79 © Gal Kaminka
Per-Behavior Parameters
Arbitration protocol selection:
Allows agents to agree on allocation to tasks
The designer chooses the protocol based on the context of the decision, e.g., use voting when time allows
Activation and de-activation conditions:
Can be changed independently of the behavior
Automatically synchronized between agents
Team constitution constraints: which/how many agents are assigned to each subtask
80 © Gal Kaminka
behavior Explore
  startswhen (var ourBaseKnown == NULL OR var theirBaseKnown == NULL)
  endswhen (var ourBaseKnown != NULL AND var theirBaseKnown != NULL)
  following WinGame
  children ExploreOurBase ExploreTheirBase
end behavior
81 © Gal Kaminka
behavior Explore
  startswhen (var ourBaseKnown == NULL OR var theirBaseKnown == NULL)
  endswhen (var ourBaseKnown != NULL AND var theirBaseKnown != NULL)
  following WinGame
  children ExploreOurBase ExploreTheirBase
  arbitrator preference
  constraint 25
  constraint 75
end behavior
82 © Gal Kaminka
behavior Explore
  startswhen (var ourBaseKnown == NULL OR var theirBaseKnown == NULL)
  endswhen (var ourBaseKnown != NULL AND var theirBaseKnown != NULL)
  following WinGame
  children ExploreOurBase ExploreTheirBase
  arbitrator role
  constraint attacker ExploreTheirBase
  constraint defender ExploreOurBase
end behavior
83 © Gal Kaminka
Run-Time Support (SCORE)
SCORE interprets the script file:
Binds the script file to the behavior execution library
Executes the specified arbitration protocols
Manages team allocation constraints
Synchronizes conditions across multiple agents
The full algorithm is described in detail in the paper
84 © Gal Kaminka
Experiments
Aim: demonstrate that flexibility is critical; focus on arbitrator flexibility
Evaluated in the GameBots CTF domain
Implemented 3 teams, using the same behaviors:
Team ROLE: role arbitration for all behaviors
Team PREF: preference arbitration for all behaviors
Team MIXED: a combination of ROLE and PREF
Tested against two different opponent teams; 60-135 games for each team and opponent
85 © Gal Kaminka
Subset of Results (Fixed Opponent 1)
[Bar chart: percentage of games won (0-60%; higher is better) by teams ROLE, PREF, and MIXED. MIXED arbitration is comparable to ROLE.]
86 © Gal Kaminka
Subset of Results (Fixed Opponent 1)
[Bar chart: time to win and time to capture the flag (0-900; lower is better) for ROLE, PREF, and MIXED. MIXED does better than both ROLE and PREF.]
87 © Gal Kaminka
Discussion: Pros
Results demonstrate the importance of flexibility: down with monolithic architectures!
Context-dependent arbitration can be better: winning protocol + losing protocol > winning protocol alone
Future: what happens if two agents select different protocols?
88 © Gal Kaminka
Discussion: Cons
Weaknesses:
Currently assumes reliable, cheap communications
Requires synchronization between agents
No facilities for failure detection and recovery, unlike STEAM (Tambe 1997) and ALLIANCE (Parker 1998)