
Collaborative Warrior Tutoring

Tom Livak

Neil Heffernan

8/24/06

Background

• Was a master’s student at Worcester Polytechnic Institute, working with Neil Heffernan

• Worked with Cognitive Tutor Authoring Tools (CTAT) [Koedinger, K. R., Aleven, V., Heffernan, N., McLaren, B., et al.]

• Worked on the Assistments project with Neil Heffernan (www.assistments.org)

Objective

• Build a tutoring system that can tutor multiple students collaborating to solve a problem

• Develop the first collaborative model-tracing tutor with simulated teammates
– Possible exception: AETS [Zachary et al.]

Goals

• Teach collaboration/communication

• Work in real time

• Have computer teammates so that we don’t need a complete team of students

• Flexible

– Deal with different ways of solving problems

– Deal with plans having to change

Domain Task

• Develop a system to tutor a squad of soldiers clearing a building of enemy threats

• Worked with Fort Benning/Office of Naval Research

Limits of our system

• Task has been simplified

• Not evaluating teaching potential

Description of the Task

• MOUT Doctrine (Military Operations in Urban Terrain)

• Team leader clears a building with his team
– Stack outside the room
– Move into position
– Take out any threats
– Team leader orders civilians to leave
– Take out any threats (if civilians become hostile)

• Another soldier stays in the first room to watch over civilians
– Just orders them to get down

Room Clearing

Room Clearing (Strong Wall)

Room Clearing (Opposing Corners)

Model Tracing

• Build a model that can solve your problem

• Compare the student’s actions to those that the model would take

• Introspect the model to better understand the student and give appropriate feedback

Model Tracing

• Model is a set of if-then rules

• If-part works on events happening in the world and the student’s state of mind (working memory)

• Then-part changes working memory or gives actions for the student to take

• Conflict resolution: when two or more rules can fire, this represents a decision point where a student has several correct options to choose from

• Augment rules with additional information to facilitate tutoring

Example Model

• If (you are team leader AND room is clear) Then (take action: Order team to clear next room)
– Hint: “You are done clearing this room, move on.”
– Hint: “Order your team to clear the next room.”

• If (no threats in room) Then (change memory: room is clear)
– Hint: “There are no threats here, the room is clear.”

(see the Jess sketch of these two rules below)
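To make this concrete, here is a minimal Jess sketch of the two example rules, written in the style of the JESS rules shown later in the deck. The template and rule names (self, room, advice-message, action, RoomIsClear, OrderNextRoom) are illustrative assumptions, not taken from the actual model.

;; Hypothetical templates for the working-memory facts in this example
(deftemplate self (slot role))
(deftemplate room (slot threats) (slot clear (default FALSE)))
(deftemplate advice-message (slot message))
(deftemplate action (slot name))

;; If there are no threats in the room, remember that the room is clear
(defrule RoomIsClear "No threats, room is clear"
  ?r <- (room (threats 0) (clear FALSE))
  =>
  (assert (advice-message
    (message "There are no threats here, the room is clear.")))
  (modify ?r (clear TRUE)))

;; If you are the team leader and the room is clear, order the team onward
(defrule OrderNextRoom "Order team to clear the next room"
  (self (role team-leader))
  (room (clear TRUE))
  =>
  (assert (advice-message
    (message "Order your team to clear the next room.")))
  (assert (action (name order-team-clear-next-room))))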

Example Model (forward chaining)

• Working memory (you are team leader, no threats in room)

• FIRE: If (no threats in room) Then (change memory: room is clear)

• Working memory (you are team leader, no threats in room, room is clear)

• FIRE: If (you are team leader AND room is clear) Then (take action: Order team to clear next room)

• Result: from this situation, the student should order his team to clear the next room (a run of the Jess sketch follows below)
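Continuing the hypothetical Jess sketch from the Example Model slide, forward chaining amounts to asserting the starting working memory and letting the engine run; the fact values here are illustrative.

;; Starting working memory: you are team leader, no threats in room
(reset)
(assert (self (role team-leader)))
(assert (room (threats 0)))

;; Fire rules to quiescence: RoomIsClear fires first, then OrderNextRoom
(run)

;; Working memory now also holds the cleared room, both hint messages, and
;; the action ordering the team to clear the next room
(facts)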

Model Tracing

• When a student takes an action, we need to see whether that action could be produced by the model

• Use backward chaining to see if we can find a series of rules that could fire to produce the action

Example Model (model tracing)

• Working memory (you are team leader, no threats in room)

• Student’s action = order team to clear the next room

• SEARCH for (order team to clear the next room)
– If (you are team leader AND room is clear)
– Then (take action: Order team to clear the next room)

• SEARCH for (room is clear)
– If (no threats in room)
– Then (change memory: room is clear)

• Result: we found a correct line of reasoning, so we can tell the student they made the correct choice

Hints

• To give the student hints, run the model forward, keeping track of the hint messages for each rule

• When the student asks for a hint, give them the next hint in the list

Example Model (hints)

• Working memory (you are team leader, no threats in room)

• FIRE: If (no threats in room) ...
– Hint: “There are no threats here, the room is clear.”

• FIRE: If (you are team leader AND room is clear) ...
– Hint: “You are done clearing this room, move on.”
– Hint: “Order your team to clear the next room.”

• Result: the student gets progressive hints

Example Model (incorrect rules)

• Can add rules to model incorrect behavior, to give better feedback

• Incorrect rule:
– If (you are team leader AND threats in room)
– Then (take action: Order team to clear next room)
– Incorrect message: “There are still threats in the room, you must deal with them first”

Example Model (incorrect rules)

• Working memory ( you are team leader, threats in room )

• Student’s action = order team to clear the next room

• SEARCH for (order team to clear the next room)
– If (you are team leader AND threats in room)
– Then (take action: Order team to clear next room)
– Incorrect message: “There are still threats in the room, you must deal with them first”

• Result: we found an incorrect line of reasoning, so give the student that feedback (a Jess sketch of such an incorrect rule follows below)
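As a sketch, an incorrect rule like this can be written in the same style, using the (incorrect) marker that the ShootCivilian rule on the JESS Rules slide uses; the rule name and templates are the illustrative ones from the earlier sketch, not the actual model.

;; Hypothetical buggy rule: ordering the team onward while threats remain
(defrule OrderNextRoomTooEarly "Ordering on before the room is clear"
  (incorrect)                       ; marks this as an incorrect (buggy) rule
  (self (role team-leader))
  (room (threats ?n&:(> ?n 0)))     ; at least one threat still in the room
  =>
  (assert (advice-message
    (message "INCORRECT: There are still threats in the room, you must deal with them first")))
  (assert (action (name order-team-clear-next-room))))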

Knowledge Tracing

• Each rule represents a skill

• Using Corbett and Anderson’s knowledge tracing algorithm, we can predict whether or not a student “knows” a skill by looking at past performance

• Can use these estimates to assess students, and change scenarios to work on skills that have not been learned yet (the update step is sketched below)
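The knowledge tracing update is a small Bayesian calculation; here is a minimal sketch as a Jess function, where the function and parameter names are illustrative and the real system may well compute this outside the rule engine.

;; One knowledge-tracing update: given the prior probability the student
;; knows the skill (?pknow), the learning rate (?plearn), and the guess and
;; slip parameters, return the new estimate after one observed attempt.
(deffunction kt-update (?pknow ?plearn ?pguess ?pslip ?correct)
  (if ?correct then
    ;; P(knew the skill | correct answer)
    (bind ?post (/ (* ?pknow (- 1 ?pslip))
                   (+ (* ?pknow (- 1 ?pslip)) (* (- 1 ?pknow) ?pguess))))
  else
    ;; P(knew the skill | incorrect answer)
    (bind ?post (/ (* ?pknow ?pslip)
                   (+ (* ?pknow ?pslip) (* (- 1 ?pknow) (- 1 ?pguess))))))
  ;; add the chance the skill was learned on this opportunity
  (return (+ ?post (* (- 1 ?post) ?plearn))))

;; Example: a correct answer raises the estimate for this rule's skill
;; (kt-update 0.4 0.1 0.2 0.1 TRUE)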

Collaborative Tutoring

• How do we use model tracing for multiple students?

• Each student has their own instance of the model (same set of rules)

• Actions that one student makes become events for other students
– Action: Order team to clear a room => Event: You’ve been ordered to clear a room
– Action: Move myself to point A => Event: Bob moves to point A

• If a student makes an incorrect action, we can usually suppress it so the other students never see it (see the sketch below)
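A hypothetical sketch of how one student’s action arrives as an event in a teammate’s copy of the model; the event type, role value, and rule name are illustrative, and the other templates are the ones from the earlier sketch.

;; An event delivered to this soldier's model by the communication layer
(deftemplate event (slot type) (slot room))

;; When the team leader's order arrives as an event, a correct response
;; for a team member is to stack up outside the ordered room
(defrule RespondToClearOrder "You've been ordered to clear a room"
  (self (role rifleman))
  (event (type ordered-to-clear-room) (room ?room))
  =>
  (assert (advice-message
    (message "Stack up outside the room you were ordered to clear.")))
  (assert (action (name stack-outside-room))))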

Suppressible Actions

• Some actions aren’t easily suppressible
– Moving
– Shooting
– Things handled directly by the simulation

• Instantaneous actions are generally suppressible
– Communication

• To suppress an action, we simply wait until we get correct/incorrect from the model before allowing the action to happen in the simulation

Using model for CGFs

• If there aren’t enough students, we’ll need Computer Generated Forces (CGFs) to fill out the team

• Computer generated forces need to act just like a human would

• Need some sort of model that given a situation, gives the correct action the soldier should take

• That’s the same model we need for model tracing!

• Use the same model with forward chaining to run the computer generated forces

Model Tracing via Search

• Difficult to find a rule system that supports both forward and backward chaining

• Emulate backward chaining by doing a forward-chaining search

• From any state of working memory, do a depth first search of rules that fire, stopping when we produce an action

• Result is the set of possible actions from a given state of working memory

Model Tracing Search

System Architecture

• Students will interact with some simulation
– We are using Unreal Tournament 2003

• Each soldier (students and CGFs) will have an instantiation of the model
– We are using JESS for our rules system

• These will communicate using a set of defined messages

System Architecture

Events

• Events are things that happen in the simulation

• Soldier has no control over events

• Examples
– You hear shots
– Team leader ordered you to clear a room

Actions

• Actions are things the soldiers do

• CGFs send them to the simulation

• Tutoring agents receive actions from the simulation, give feedback

• One soldier’s actions turn into events for other soldiers

• Examples
– Move to a particular place
– Order team to clear a room

Feedback

• Sent to the simulation to give feedback to the students

• Examples
– Display some text on the screen
– Highlight an area on the map

(a sketch of these message types as Jess templates follows below)
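A minimal sketch of what the three message types could look like as Jess templates on the agent side; the template and slot names here are assumptions for illustration, not the project’s actual message format.

;; Events: things that happen in the simulation (the soldier has no control)
(deftemplate sim-event (slot type) (slot source))

;; Actions: things a soldier does, sent to (or intercepted from) the simulation
(deftemplate sim-action (slot type) (slot actor) (slot target))

;; Feedback: sent to the simulation to show to the student
(deftemplate sim-feedback (slot kind) (slot text))

;; Example instances
;; (assert (sim-event (type shots-heard) (source room-2)))
;; (assert (sim-feedback (kind display-text) (text "Order your team to clear the next room.")))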

Cognitive Model

• Current model contains only 24 rules

• Goals/Tasks
– Clearing a building
– Clearing a room
– Moving
– Shooting
– Waiting for team to be ready
– Controlling civilians
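The JESS rules on the next slide match against working-memory facts such as self and person; here is a hypothetical reconstruction of the templates they assume, with slot names read directly off the rule patterns.

;; Templates assumed by ShootEnemy / ShootCivilian (reconstructed for illustration)
(deftemplate self (slot room))                             ; where this soldier is
(deftemplate person (slot name) (slot type) (slot room))   ; enemy or civilian
(deftemplate advice-message (slot message))                ; hint / feedback text
(deftemplate shoot-person-action (slot person))            ; action produced by the model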

JESS Rules

(defrule ShootEnemy "Engaging enemy"
  ; figure out which room we're in
  (self (room ?room))
  ; is there an enemy in this room?
  (person
    (name ?person)
    (type enemy)
    (room ?room)
  )
  =>
  ; hint message
  (assert (advice-message
    (message "You need to engage the enemy")
  ))
  ; produce the action
  (assert (shoot-person-action
    (person ?person)
  ))
)

(defrule ShootCivilian "Violating ROE"
  ; mark this as an incorrect action
  (incorrect)
  ; figure out which room we're in
  (self (room ?room))
  ; is there a civilian in this room?
  (person
    (name ?person)
    (type civilian)
    (room ?room)
  )
  =>
  ; incorrect message
  (assert (advice-message
    (message "INCORRECT: Do not shoot civilians!")
  ))
  ; produce the action
  (assert (shoot-person-action
    (person ?person)
  ))
)
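A small usage sketch, assuming the reconstructed templates above are loaded before these rules; the fact values room-1 and enemy-1 are made up for illustration.

;; Put this soldier in a room with one enemy and run the model forward
(reset)
(assert (self (room room-1)))
(assert (person (name enemy-1) (type enemy) (room room-1)))
(run)

;; Working memory now holds the hint "You need to engage the enemy" and a
;; shoot-person-action for enemy-1; ShootCivilian stays silent because there
;; is no civilian and no (incorrect) marker fact
(facts)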

Future Work

• Expand the model/task
– Model perception more finely

• Evaluate teaching potential

• Add a model for the instructor [Heffernan, 2001]
– Use student model for diagnosis
– Use instructor model to decide what feedback to give, and scenarios to use
