



Possibilistic Activity Recognition in Smart Homes for Cognitively Impaired People∗

Patrice C. Roy^a, Abdenour Bouzouane^b, Sylvain Giroux^a and Bruno Bouchard^b

^a Domus Lab, Université de Sherbrooke, Canada, {patrice.c.roy, sylvain.giroux}@usherbrooke.ca

^b LIARA Lab, Université du Québec à Chicoutimi, Canada, {abdenour.bouzouane, bruno.bouchard}@uqac.ca

Abbreviated title: Possibilistic Activity Recognition

Corresponding author: Patrice C. Roy

Laboratoire Domus
Université de Sherbrooke
2500 boul. de l’Université
Sherbrooke, Qc, Canada, J1K 2R1

∗ This paper has not been published elsewhere and has not been submitted simultaneously for publication elsewhere.


Abstract

In order to maintain and improve the quality of life of people with dementia in their home,

we must develop assistive technologies that alleviate the effects of cognitive decline, such as erroneous behaviours associated with carrying out activities of daily living (ADL).

Nevertheless, in order to provide adequate assistive services at the opportune moment, it is

necessary to recognize the occupant’s behaviour from the smart home’s sensor events. To

address this challenge, this paper presents a formal model of activity recognition in smart

homes based on possibility theory and environmental contexts. This activity recognition

model is able to take into account coherent and erroneous behaviours, which may result

from the cognitive decline of the occupant. An implementation of this model was validated

in a smart home laboratory with real sensor data from coherent and erroneous realizations

of an ADL scenario.

1 Introduction

A major development in recent years is the importance given to research on ambient intelligence

in the context of recognition of activities of daily living. Ambient intelligence, in contrast to

traditional computing where the desktop computer is the archetype, consists of a new approach

based on the capacities of mobility and integration of digital systems in the physical environment,

in accordance with ubiquitous computing. This mobility and fusion are made possible by the

miniaturization and reduced power consumption of electronic components, the omnipresence

of wireless networks and falling production costs. This allows us to envision the opportune composition of devices and services of all kinds on an infrastructure of variable granularity and geometry, endowed with capabilities for sensing, action, processing, communication and interaction (Ramos et al., 2008).

One of these emerging infrastructures is the concept of the smart home. To be considered intelligent, a home must include activity recognition techniques, which can be considered the key to exploiting ambient intelligence. Combining ambient assisted

living with techniques from activity recognition greatly increases its acceptance and makes it

more capable of providing a better quality of life in a non-intrusive way. Elderly people, with or

without disabilities, could clearly benefit from this new technology (Casas et al., 2008).

Activity recognition, often referred to as plan recognition, aims to recognize the actions and goals

of one or more agents from observations on the environmental conditions. The plan recognition

problem has been an active research topic in artificial intelligence (Augusto and Nugent, 2006)


for a long time and still remains very challenging. The keyhole, adversarial or intended plan

recognition problem (Geib, 2007) is usually based on a logic or probabilistic reasoning for the

construction of hypotheses about the possible plans, and on a matching process linking the

observations with some activity models (plans) related to the application domain. Prior work has

been done to use sensors, like radio frequency identification (RFID) tags attached to household

objects (Philipose et al., 2004), to recognize the execution status of particular types of activities,

such as hand washing (Mihailidis et al., 2007), in order to provide assistive tasks like, for instance,

reminders about the activities of daily living (ADL).

However, most of this research has focused on probabilistic models such as Markovian models

and Bayesian networks. But there are some limitations with probability theory. Firstly, the belief

degree concerning an event is determined by the belief degree in the contrary event (additivity

axiom). Secondly, the classical probability theory (single distribution) cannot model (full or

partial) ignorance in a natural way (Dubois et al., 2007). A uniform probability distribution on an event set better expresses randomness, i.e. the equal chance of occurrence of events, than ignorance. Ignorance represents the fact that, for an agent, each possible event’s occurrence is equally plausible, since, by lack of information, no evidence is available to support any of them. Hence, one of the solutions to this kind of problem is possibility theory (Dubois

and Prade, 1988), an uncertainty theory devoted to the handling of incomplete information.

In contrast with probability theory, the belief degree of an event is only weakly linked to

the belief degree of the contrary event. Also, the possibilistic encoding of knowledge can be

purely qualitative, whereas the probabilistic encoding is numerical and relies on the additivity

assumption. Thus, possibilistic reasoning, which is based on max and min operations, is compu-

tationally less difficult than probabilistic reasoning. Unlike probability theory, the estimation of

an agent’s belief about the occurrence of events is based on two set-functions. These functions

are the possibility and necessity (or certainty) measures. Finally, instead of using historical

sensor data like other probabilistic approaches, we use partial beliefs from human experts concerning the smart home occupant’s behaviours, according to plausible environment contexts.

Since the recognition system can be deployed in different smart homes and since their environments are dynamic, it is easier to obtain an approximation of the occupant’s behaviours from human experts’ beliefs than to learn the behaviours in each smart home setting. By

using general descriptions of environmental contexts, it is possible to keep the same possibility

distributions for different smart home settings. Consequently, another advantage of possibility

theory is that it is easier to capture partial belief concerning the activities’ realizations from


human experts, since this theory was initially meant to provide a graded semantics to natural

language statements (Zadeh, 1978).

In the DOMUS (Giroux et al., 2009) and LIARA (Bouchard et al., 2011) research labs,

we use possibility theory to address the problem of behaviour recognition, where the observed

behaviour could be associated with cognitive errors. These recognition results are used to identify

the various ways a smart home may help a cognitively impaired occupant at early-intermediate

stages to carry out his ADLs. This context increases the recognition complexity in such a

way that the presumption of the observed agent’s coherency, usually assumed in the literature,

cannot be reasonably maintained. We propose a formal framework for activity recognition

based on description logic and possibility theory, which transforms the recognition problem into

a possibilistic classification of activities. The possibility and necessity measures on behaviour

hypotheses allow us to capture the fact that, in some activities, erroneous behaviour is as possible

as normal behaviour. Hence, in a complete ignorance setting, both behaviour types are possible,

although each type is not necessarily the one being carried out.

The paper is organized as follows. Firstly, we present some background knowledge on people

with dementia (Alzheimer’s disease), on the plan recognition problem, and on possibility theory.

Secondly, we summarize some related works in activity recognition in a smart environment.

Thirdly, we introduce the possibilistic activity recognition model that we propose. Thereafter,

we present the experimental results of our implementation, based on real data from the AIHEC project at Singapore’s Institute for Infocomm Research. Furthermore, this section

also presents a discussion of our results according to related recognition approaches. Finally, we

conclude the paper, mentioning future perspectives of this work.

2 Background

This section presents some background on people with dementia, the plan recognition problem,

and possibility theory. An overall picture of Alzheimer’s disease is presented in order to

illustrate the issues related to cognitive decline.

2.1 Overall Picture of People with Dementia: Alzheimer’s disease

Alzheimer’s disease, or senile dementia of the Alzheimer’s type (SDAT), is a disease of the brain

characterized by progressive deterioration of intellectual abilities (cognitive skills and memory)

of the carrier (Diamond, 2006). This disease evolves slowly over a period of seven to ten years.


The capacity to carry out normal activities (reading, cooking, . . . ) decreases gradually, just like

the capacity to produce judgments and answers that are suitable to everyday life problems. The

intermediate stages of the disease constitute the longest portion of the degeneration process. The

main symptoms of these stages are related to planning problems caused by sporadic memory

losses, the weakening of the executive functions, and the attention decrease while performing

particular tasks. A distraction (e.g., a phone call, an unfamiliar sound, . . . ) or a memory

lapse can hinder the patient in accomplishing his activity, leading him to carry out the actions

attached to his activity in the wrong order, to skip some steps of his activity, or to carry out

actions that are not even related to his initial objectives (Patterson et al., 2002). However, the

patient’s capacity to perform a simple action (without many steps) remains relatively unaffected

(Reisberg et al., 1982).

In these intermediate stages, patients suffering from Alzheimer’s disease do not need to be

completely taken into care. They need supervision and specific interventions on the part of an

assistant. When continuous support is provided to Alzheimer’s patients in the form of cognitive

assistance, the degeneration process of the disease is slowed down, making it possible for a patient

to remain at home longer (Boger et al., 2006). Hence, the development of an artificial agent

recognizer able to assist Alzheimer’s patients at an intermediate stage, when the situation requires

it, would make it possible to decrease the workload carried by natural and professional caregivers.

This type of technology has the potential to delay the fateful moment of institutionalization of

patients by alleviating some caregiver duties and by giving a partial autonomy to patients. In

the longer term, it will also constitute an economically viable solution to the increasing cost

of home care services. Consequently, the research in this field will have important social and

economic impacts in the future.

When an Alzheimer’s patient performs a task requiring cognitive skills, it is possible that

inconsistencies become present in his behaviour (Pigot et al., 2003). In the plan recognition prob-

lem, the presence of these inconsistencies results in erroneous execution of activities (erroneous

plans), according to certain types of particular errors. In a real context, people without such impairments can also act in an inconsistent manner, but less frequently and less severely than an Alzheimer’s patient. The main difference is that a healthy person is usually able to recognize his behavioural errors and correct them in order to achieve his initial goals. In

contrast, an Alzheimer’s patient will act incoherently, even while performing familiar tasks, and

his behaviour will become more inconsistent as the disease evolves. Also, we must consider that

hypotheses concerning the behaviour of the observed patient can fall into the scope of reactive


recognition (Rao, 1994). More precisely, a patient at an early stage is not necessarily making

errors; he can simply temporarily stop the execution of an activity plan to begin another one in

the middle of an activity realization. This way, the patient deviates from the activity originally

planned by carrying out multiple interleaved activities, in order to react to the dynamics of his

environment (Reisberg et al., 1982). This is coherent behaviour.

Starting from this point, if one wishes to establish an effective recognition model capable of

predicting any type of correct or incorrect behaviour, confronting these two lines of investigation, erroneous versus interleaved, constitutes a necessity that conforms with reality, but

which is also paradoxical. The detection of a new observed action, different from the expected

one, cannot be directly interpreted as an interruption of the plan in progress with the objective of

pursuing a new goal. In fact, this action can be the result of an error on the part of a mentally

impaired patient, or a healthy but fallible human operator. However, this unexpected action is

not inevitably an error, even in the most extreme case where the patient is at an advanced stage

of his disease. This context raises a recognition dilemma that is primarily due to the problem of

completing the activity plan library, which of course cannot be complete in any domain. This

dilemma is important in a context of cognitive deficiency such as Alzheimer’s disease. A person

suffering from Alzheimer’s disease will tend to produce more errors, induced by his cognitive

deficit, than a healthy person when he carries out his activities of daily living. Like healthy

people, Alzheimer’s patients carry out activities in an interleaved way. Consequently, the con-

sideration of interleaved and erroneous realizations that could explain the current behaviour of

an Alzheimer’s patient is important.

2.2 Plan recognition

In the last two decades, the AI community produced many fundamental works addressing the

activity recognition problem, in order to infer the goals of an observed agent. This is often

referred to as plan recognition (Carberry, 2001). The plan recognition problem can be defined as taking as input a sequence of actions performed by an observed agent, inferring the goals pursued by the agent, and organizing this action sequence in terms of a plan structure

(Schmidt et al., 1978). This plan structure explicitly organizes an action set that will allow the

observed agent to achieve its goals. While the observed agent carries out its goals by following

the plan structure, an observer agent tries to infer the plan and the associated goals. Since the

observer agent does not know the actual plan followed by the observed agent, it must therefore

make hypotheses concerning the composition of this plan structure from the actions carried out


by the observed agent. Those hypotheses could represent distinct goals. Therefore, the observer agent must use the most plausible hypotheses in order to evaluate the possible plan structures

and the associated goals.

The plan recognition problem can be subdivided into three distinct categories (Geib, 2007): keyhole, intended, and adversarial plan recognition. This classification is based on the relation

that exists between the observed agent and the observer agent (Cohen et al., 1982). Also, this

classification is used to clarify the recognition context, which influences the recognition model’s

conception. In the keyhole plan recognition, it is presumed that the observed agent does not

know that it is observed and, therefore, it does not attempt to influence the recognition process or

to mislead the observer agent (Cohen et al., 1982). Thereby, the observed agent offers a neutral

cooperation, where it does not clarify or hide its intentions. In other words, the observed

agent carries out its activities without assisting or interfering with the recognition process. For

instance, the cognitive assistance context, where an occupant carries out his activities without

taking into account the fact that he is observed, corresponds well with this recognition category.

In the intended plan recognition, it is assumed that the observed agent knows that it is

observed and, consequently, it adapts its behaviour by carrying out its activities in order to help the

recognition process (Kautz, 1987). Therefore, the observed agent offers a direct cooperation,

where it tries to make its intentions explicit. Some approaches are based on this direct cooperation

assumption in order to propose recognition systems where, in uncertain situations, the observer

agent can directly request clarifications about the on-going task from the observed agent (Lesh et al., 1999). However, in a cognitive assistance context, we cannot presume the collaboration capabilities of the occupant. Indeed, a clarification request could increase

the patient’s cognitive load, which must be avoided with people with dementia.

In the adversarial plan recognition, it is assumed that the observed agent is opposed to the

observation of its behaviour (Geib and Goldman, 2001). Therefore, the observed agent tries to

abort the recognition process by carrying out actions hiding its intentions (Mao and Gratch,

2004). Thus, the observed agent offers a negative cooperation, where it tries to hide its inten-

tions by all means. For the observed agent, the observer agent is an enemy and, therefore, the

observed agent deliberately tries to carry out actions that are inconsistent with its objectives.

Therefore, the observer agent infers erroneous conclusions concerning the observed agent’s behaviour. Adversarial recognition may be used in military contexts (Heinze et al., 1999) or in

video games (Albrecht et al., 1998). Nevertheless, adversarial recognition is inconsistent with

the objectives related to cognitive assistance. Hence, even if the occupant carries out actions irrelevant to his intentions, these actions are principally caused by the dementia symptoms

that generate the erroneous behaviours. Consequently, the patient does not deliberately hinder

the recognition process. It should be noted that one important aspect for cognitive assistance is

the patient’s acceptance of the assistance service.

According to the keyhole plan recognition literature, the recognition approaches can be di-

vided into three major trends of research. The first one comprises works based on logical ap-

proaches (Christensen, 2002; Kautz, 1991), which consist of developing a theory, using first

order logic, for formalizing the recognition activity into a deduction or an abduction process.

The second trend of literature is related to probabilistic models (Albrecht et al., 1998; Charniak

and Goldman, 1993), which define plan recognition in terms of probabilistic reasoning, based

primarily on some Markovian models or on Bayesian networks. The third trend covers emerging

hybrid approaches (Avrahami-Zilberbrand and Kaminka, 2007; Geib and Goldman, 2005; Roy

et al., 2009) that try to combine the two previous avenues of research. It considers plan recog-

nition as the result of a probabilistic quantification on the hypotheses obtained from a symbolic

(qualitative) algorithm.

2.3 Possibility theory

Probability theory is inadequate for processing imperfect information, which is imbued with uncertainty and imprecision (Dubois and Prade, 2005). Possibility theory is an uncertainty theory dedicated to processing incomplete information (Dubois and Prade, 2003). Like probability theory,

this theory is based on set functions, but differs by using a pair of dual set functions (possibility

and necessity measures) instead of one. Furthermore, unlike probability theory where a disjoint

event set’s probability is given by summing each event’s probability, the possibility of such an

event set is given by taking the maximum value among the possibilities of each event in the set.

The main component of this theory is the possibility distribution. A possibility distribution

π on a state set S is a function from S to a totally ordered set (L, <), which may or may not be numerical. For instance, this ordered set may be the [0, 1] interval with maximum

and minimum operations and a reverse order function n, where n(0) = 1 and n(1) = 0. The

possibility function π represents the state of knowledge of an agent concerning the current state

of things, distinguishing what is plausible from what is less plausible (expected/unexpected).

A possibility distribution is often linked to a variable x taking values in S (room temperature, . . . ). The possible values of x are mutually exclusive. For all s ∈ S, πx(s) = 0 indicates that x = s is rejected

(impossible), and πx(s) = 1 indicates that x = s is totally possible (plausible). Distinct values


can simultaneously have a possibility degree of 1. Furthermore, some values of πx(s) can be

between 0 and 1. Hence, πx(s) represents the possibility degree of the assignment x = s, where

some values s are more possible than others. If S is exhaustive, at least one member of S must

be totally possible as a value of x:

∃s ∈ S, πx(s) = 1 (normalization).

Extreme cases of partial knowledge can be represented with the possibility theory. Firstly,

complete knowledge happens when a single state is totally possible, while the others are impossible:

∃s0 ∈ S, πx(s0) = 1 and ∀s ≠ s0, πx(s) = 0,

which indicates that the assignment x = s0 is the only possible one. Secondly,

complete ignorance happens when all states are totally possible:

∀s ∈ S, πx(s) = 1,

which indicates that all values in S are possible.
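To make these definitions concrete, the following minimal Python sketch (our own illustration; the location example and function names are not from any particular implementation) encodes a possibility distribution as a dictionary over a finite state set and checks the normalization, complete knowledge and complete ignorance conditions.

```python
# A possibility distribution over a finite state set S, encoded as a
# dictionary mapping each state to a possibility degree in [0, 1].

def is_normalized(pi):
    # At least one state must be totally possible.
    return max(pi.values()) == 1.0

def is_complete_knowledge(pi):
    # Exactly one state is totally possible and all the others are impossible.
    degrees = sorted(pi.values(), reverse=True)
    return degrees[0] == 1.0 and all(d == 0.0 for d in degrees[1:])

def is_complete_ignorance(pi):
    # Every state is totally possible.
    return all(d == 1.0 for d in pi.values())

# Hypothetical variable x: the occupant's location.
pi_x = {"kitchen": 1.0, "bathroom": 0.7, "bedroom": 0.0}
print(is_normalized(pi_x))          # True
print(is_complete_knowledge(pi_x))  # False: "bathroom" is partially possible
print(is_complete_ignorance(pi_x))  # False
```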

Let us suppose an event A, which is defined as a state subset A ⊆ S. If we want to evaluate the result of a query such as "did the event A occur?", we must evaluate the partial belief induced by the knowledge encoded by the possibility distribution. This is achieved by evaluating the possibility Π(A) and necessity (or certainty) N(A) measures:

Π(A) = sup_{s∈A} π(s),

N(A) = n(Π(S \ A)) = inf_{s∉A} n(π(s)),

where n is a reverse order function on L. In this case, L = [0, 1] and n(e) = 1− e, where e is a

value in L (e ∈ L). Π(A) indicates to what extent A is consistent with π, while N(A) indicates

to what extent A is certainly implied by π. For a normalized possibility distribution, we have

Π(S) = 1 and Π(∅) = 0.

The possibility measure satisfies the maxitivity axiom:

Π(A ∪ B) = max(Π(A), Π(B)),

while the necessity measure satisfies a dual axiom:

N(A ∩ B) = min(N(A), N(B)).
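The following Python sketch (again a didactic illustration of ours, with L = [0, 1] and n(e) = 1 − e) computes both measures on a finite state set and checks the maxitivity axiom.

```python
def possibility(pi, A):
    # Π(A) = sup of π(s) over s in A (max on a finite set).
    return max(pi[s] for s in A)

def necessity(pi, A):
    # N(A) = n(Π(S \ A)) = inf over s not in A of n(π(s)), with n(e) = 1 - e.
    outside = set(pi) - set(A)
    return min(1.0 - pi[s] for s in outside) if outside else 1.0

pi = {"s1": 1.0, "s2": 0.6, "s3": 0.2}
A, B = {"s1"}, {"s2"}

# Maxitivity axiom: the possibility of a union is the max of the possibilities.
assert possibility(pi, A | B) == max(possibility(pi, A), possibility(pi, B))

print(possibility(pi, A))  # 1.0: A is fully consistent with pi
print(necessity(pi, A))    # 0.4: A is certain only to degree 1 - pi["s2"]
```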

Thus, unlike the probability theory where a disjoint event set’s probability is given by summing

each event’s probability, the possibility of this event set is obtained from the maximum value

among the possibilities of each event in the set. Furthermore, in the probability theory, the

probability of an event completely determines the probability of the contrary event. This can be

seen as one of the main differences between possibility and probability. Indeed, the possibility

or necessity of an event and of its contrary event are but weakly linked. Furthermore, in order

to characterize the uncertainty of a particular event A, we need the Π(A) and N(A) values.

According to Dubois and Prade (1988), when modeling uncertain judgment (belief) from a domain

expert, it is not desirable to rigidify the relationship between the indications one has in favor

of an event and those that weigh against it. In this situation, the notion of probability is less

flexible than that of possibility. For instance, the belief of an expert concerning a patient profile

is easier to model with possibility than with probability, since the expert knowledge (partial

belief) concerning the profile is incomplete and imperfect.

3 Related Work

Currently, there are many research groups working on assistive technologies that try to exploit

and enhance activity / plan recognition models, in order to concretely address the problems

related to cognitive support in a smart environment. Many proposed approaches use models

based on dynamic Bayesian networks or hidden Markov models, inheriting the advantages and

disadvantages of these models. The remainder of this section presents a non-exhaustive list of

proposed approaches related to the context of cognitive assistance in a smart environment.

3.1 Autominder

The Autominder system (Pollack, 2005) is a planning assistant developed at the University of

Michigan. This system provides reminders about the daily activities of its user. It tracks the

behaviours of the user in order to reason about the need to provide reminders and when it is the

most appropriate to issue them. This system has been deployed in prototype form on a mobile

robot assistant, named Pearl, developed at Carnegie Mellon University (Pineau et al., 2003).

The robot Pearl was developed to assist elderly individuals with mild cognitive and physical impairments and to support nurses in their daily activities. The Autominder system is divided into

three components: a plan manager, a client modeler, and a personal cognitive orthotic (Pollack

et al., 2003).

The plan manager stores the user’s ADL plans in the client plan, updates it, and identifies possible conflicts in it. The client modeler tracks the execution of the plan given information

about user’s observable activities, and it stores its belief in the execution status in the client

model. The personal cognitive orthotic reasons about inconsistencies between what the user

is supposed to do and what he is currently doing, and determines when to issue reminders.

The caregiver initializes the system and is responsible for inputting a description of the person’s daily activities, the constraints between them, and preferences about the time and manner of their

performance. In order to take into account several kinds of temporal constraints, the user’s

plans are modeled as Disjunctive Temporal Problems (DTPs). The client modeler uses sensor

information to infer activities performed by the user. If the likelihood that a planned activity

has been carried out is high, the client modeler sends this information to the plan manager,

which updates the client plan by recording the time of execution and by propagating all the

affected constraints.

The client model is represented by a Quantitative Temporal Bayes Net (QTBN), which was

developed for the reasoning about fluents and probabilistic temporal constraints (Pollack et al.,

2003). A QTBN is composed of a dynamic Bayesian network (DBN) that contains all causal

relationships and a standard Bayesian network that represents all temporal relationships. The

personal cognitive orthotic uses the client plan and the client model to determine what reminders

should be issued and when. The generation of a reminder plan is considered as a satisfaction

problem and uses a local-search approach (planning-by-rewriting) in order to create a reminder

plan that minimizes the number of times the system interrupts the user. This system is able to take into account situations where the user performs multiple activities and to prompt reminders when some erroneous behaviours occur (mainly temporally related).

A clinical trial of the system with the robot Pearl has shown promising results by issuing

reminders for some assigned tasks (bring the patient to a planned meeting) (Pineau et al., 2003).

However, all the activities of the day must be inserted in the system when it is initialized. If the

patient must add a new activity or cancel a planned one in order to react to the environment,

the system will produce erroneous reminders for a period of time. The schedule of a day can

change greatly when an unexpected event occurs.


3.2 I.L.S.A.

The Independent LifeStyle Assistant (I.L.S.A.) (Haigh et al., 2006) is a monitoring and support

system for elderly people developed at the Honeywell Laboratories. The purpose of this system

is to allow elderly people to live longer in their homes by reducing caregiver burden. The I.L.S.A. concept consists of a multiagent system that incorporates different home automation

devices to detect and monitor activities and behaviour. It uses the Geib and Goldman (2005)

hybrid plan recognition model for its task-tracking component. The plan recognition is based

on probabilistic abductive reasoning to generate hypotheses concerning the person’s goals, while

taking into account observation loss uncertainty, when the system is unable to observe an action

because, for instance, of a hardware error. These inherent difficulties with the recognition of

low-level actions are not related to high-level behavioural errors, which occur when an observed

person carries out, in an erroneous way, an activity planned in advance.

The activity plan library is represented with a Hierarchical Task Network (HTN). There are

several methods to achieve possible goals related to an activity plan. Each method consists

of a set of primitive actions (which can be partially ordered) that a person must carry out to

achieve a specific goal. Also, this approach is centered around a model of plan execution, in

which the recognition system takes the intentions (plans) of an observed person at the start

of the recognition process. The set of plans (activities) determines the set of pending actions,

which represents all the actions that the observed person can carry out according to the partial

order of the selected plans. When the observed person carries out one of the pending actions, a

new set of pending actions is generated by removing the executed action and by adding the set

of newly enabled actions. An action is enabled when its predecessor is carried out. The set of

pending actions allows the system to consider the realization of multiple activities in an interleaved way. This

execution model is formalized with logical rules in the notation of Poole’s logic of probabilistic

Horn abduction (PHA) (Poole, 1993). After each observation, the probability of each hypothesis

(a plan selected) is evaluated. Also, for each action in the pending set, the probability that it

will be the next one executed is evaluated, in order to determine which action is executed when

the system is unable to recognize or to observe an action, due to a system error or to deficient

sensors.

There are some limitations with this approach. Firstly, the recognition agent must know all

the observed person’s intentions in order to be able to use the model of plan execution. In a real

context, the recognition agent does not know all the intentions precisely and hence must consider

all the plans from the library, increasing the complexity of the recognition process. Secondly,


when the system is unable to observe an action, the system selects the most probable action that

could be executed from the pending set, which limits the system in a context where the observed

entity can carry out an activity in an erroneous way, in which some actions could be absent

from the pending set. Finally, this model considers that the observed person generally acts in

a coherent way, never making an error. The only exception is the case of goal abandonment,

which can be induced by coherent behaviour, such as a person that reacts to the dynamic of

his environment where a goal cannot be carried out, or by incoherent behaviour, such as an

Alzheimer’s patient that forgets what he is doing. A field study of the I.L.S.A. system was

made to complete an end-to-end proof-of-concept and consists mainly to monitor two ADLs

(medication and mobility) and to issue alerts and information to family caregivers. However,

the hybrid recognition component was removed from the I.L.S.A. system field study because of

experimental code, scalability, and memory management issues and to reduce the deployment

cost induced by the size of the set of sensors.

3.3 MavHome

The MavHome (Managing an Adaptive Versatile Home) project (Cook et al., 2006) is a smart

home based on a multi-agents architecture, where each agent perceives its state through sensors

and acts, in a rational way, on the smart home environment, in order to maximize the inhabitants’

comfort and productivity. For instance, some devices can be automatically controlled according

to the behaviour prediction and the routine and repetitive task patterns (data mining) of the

inhabitant. The location prediction uses the Active LeZi algorithm, which is based on the LZ78

compression algorithm. It uses information theory’s principles to process historical information

sequences and thereby learn the inhabitant’s likely future positions. This algorithm has been

tested on synthetic and real data, with an accuracy of 64% and 55% respectively. By adding

temporal rules, based on Allen’s temporal logic, to this algorithm, the synthetic data set and

real data set prediction accuracies increase to 69% and 56% respectively (Jakkula and Cook,

2007).

The inhabitant action prediction is based on the SHIP (Smart Home Inhabitant Prediction)

algorithm, which considers the most recent sequence of actions (commands to devices issued by

the inhabitant) with those in the inhabitant’s history, in order to predict the inhabitant interac-

tions (commands to devices) with the smart home. The SHIP algorithm prediction accuracy on

real data set is 53.4%. The ED (Episode Discovery) algorithm, based on data mining techniques,

is used to identify significant episodes (set of related device events that can be partially/totally


ordered or not) that are present in the inhabitant’s history. Those episodic patterns make it possible to minimize the length of the input stream by using the minimum description length (MDL) principle, where each instance of a pattern is replaced with a pointer to the pattern definition. By

combining the Active LeZi and ED algorithms, the prediction accuracy on the real data set

is improved by 14%. These results indicate the effectiveness of these algorithms in predicting

the inhabitant’s activities. Nevertheless, an algorithmic analysis in terms of response time is

indispensable, before more conclusions can be drawn.

3.4 Gator Tech

The Gator Tech Smart House developed by the University of Florida’s Mobile and Pervasive

Computing Laboratory (Helal et al., 2005) has been used to conduct several cutting-edge research

works aiming to create the technology, frameworks and systems that will realize the concept of assistive environments benefiting the growing elderly and disabled population around the world. That research includes personal assistants, sensors, recognition systems, localization and tracking technology, robotic technology, etc. One of their recent projects, closely related to our

research on activity recognition, is on a health platform that monitors the activity, diet and

exercise compliance of diabetes patients (Helal et al., 2009). The activity recognition is based

on Hidden Markov Models (HMM), one for each task to recognize. By using the sensor data,

the activity recognition is able to quantitatively evaluate the completion status of an activity

and qualitatively evaluate the steps that are missing. By using real data from the CASAS smart

apartment at Washington State University (Szewcyzk et al., 2009), HMMs for five tasks, trained on those data, were able to achieve a recognition accuracy of 98% on this real data set. In another validation, where 20 participants carried out the 5 activities in a sequential way while deliberately incorporating behaviour errors (steps missing or badly carried out), the system was able to detect errors for 19 participants; the remaining participant carried out the 5 activities in a coherent way.

However, this approach has some limitations. The 5 activities must be completed according to

a specific sequence. This sequence is relatively inflexible since the accomplishment of an activity

is necessary in order to carry out another one. Concerning error recognition, the errors are deliberate, so that the usual routine (event sequence) is not respected. Thus, environmental

modifications that change the configuration of the sensors could induce false behaviour error

recognitions. Also, an HMM that was trained on the sensor events of one smart home could be inefficient in another smart home, since the sensor configuration is possibly different. These

limitations occur usually in approaches where the activity recognition is directly based on sensor


events, without intermediate steps (environment state / context recognition, low-level action

recognition . . . ).

3.5 Smart Environments Research Group (Ulster)

The Smart Environments Research Group at Ulster University (Chen et al., 2009) works on

different aspects of cognitive assistance in smart homes. In one recent project (Hong et al.,

2009), they proposed a solution to model and reason with uncertain sensor data with the aim

of predicting the activities of the monitored person. Their recognition approach is based on the

Dempster-Shafer (DS) theory of evidence (Shafer, 1976), an alternative theory for uncertainty

that bridges fuzzy logic and probabilistic reasoning and uses belief functions. The use of DS

allows them to fuse uncertain information detected from sensors for activities of daily living

(ADL), which are monitored within a smart home, by using an ordinal conditional function

and merging approaches to handle inconsistencies within knowledge from different sources. It

allows them to combine evidence from various sources and reach a degree of belief (represented

by a belief function) that takes into account all the available evidence. They use background

knowledge (such as a caregiver’s diary) to resolve any inconsistencies between the predicted

(recognized) action and actual activities. The strength of this recognition approach is that it

allows one to capture the fact that sensors provide only imprecise data and that some information

or sources are, a priori, more reliable than others. A validation was carried out on a real data

set (Hong and Nugent, 2011).

However, the weakness of approaches based on DS theory is that the complexity of combining

evidence is related to the number of focal elements used by the belief functions. In a smart

home context, the number of focal elements can be high, thus increasing the recognition process

complexity.

3.6 COACH

The COACH system (Cognitive Orthosis for Assisting aCtivities in the Home) (Mihailidis et al.,

2008) is a prototype aiming to actively monitor an Alzheimer’s patient attempting a specific

task, for instance the hand washing activity, and to offer assistance in the form of guidance

(prompts or reminders) when it is most appropriate. The assistance system is based on a

Partially Observable Markov Decision Process (POMDP) (Lovejoy, 1991). The POMDP is

able to estimate unobserved states, like the patient’s dementia level and responsiveness. From

observed state variables taken from a camera (hand position, water flow . . . ), the POMDP


evaluates the most likely world state, including the actual achievement state of the activity.

This state is used with a preevaluated policy in order to select the action that the COACH

system must carry out. A policy is a function specifying the action (state transition) that the

system must take when it is in a specific state. The chosen policy must maximise the reward

measure (limit the state transitions’ cost). For instance, if a problem happens, such as an error or patient confusion, the system evaluates the most appropriate solution to achieve the task,

according to the POMDP activity model, and guides the patient in order to complete the task.

Experiments and clinical trials with the COACH system, including Alzheimer’s patients and

therapists, have shown very promising results in monitoring a single pre-established activity

(Hand Washing) and in providing adequate assistance at the right moment (Boger et al., 2006).

However, a major limitation of the prototype is presuming that the system already knows which

activity is in progress, and thus supposing that it can only have one on-going task at a time.

3.7 Barista

The Barista system (Patterson et al., 2007) is a fine-grained ADL recognition system that uses

object IDs to evaluate which activities are currently carried out. It uses Radio Frequency Iden-

tification (RFID) tags on objects and two RFID gloves that the user wears in order to recognize

activities in a smart home (Philipose et al., 2004). With object interactions detected with the

gloves, a probabilistic engine infers activities according to the probabilistic models of activi-

ties, which were created from, for instance, written recipes. The activities are represented as

sequences of activity stages. Each stage is composed of the objects involved, the probability

of their involvement, and, optionally, a time to completion modelled as a Gaussian probability

distribution. The activities are converted into Dynamic Bayesian Networks (DBN) by the prob-

abilistic engine. By using the current sub-activity as a hidden variable and the set of objects

seen and time elapsed as observed variables, the engine is able to probabilistically estimate the

activities from sensor data. The engine was also tested with hidden Markov models (HMM) in

order to evaluate the accuracy of activity recognition (Patterson et al., 2005). These models were trained with a set of examples where a user performs a set of interleaved activ-

ities. However, some HMM models perform poorly, and the DBN model was able to identify

the specific on-going activity with a recognition accuracy higher than 80%. This approach is

able to identify the currently carried out ADL in a context where activities can be interleaved.

However, this approach does not take into account the erroneous realization of activities, because

the result of the activity recognition is the most plausible on-going ADLs.


4 Possibilistic Activity Recognition Model

This section presents our activity recognition model, which is based on possibility theory. In

this model, the activity recognition process can be separated into three agents: 1) environment

representation agent, 2) action recognition agent, and 3) behaviour recognition agent. According

to Figure 1, which describes the architecture of an activity recognition system in a smart home, our approach corresponds to the last three layers.

The environment representation agent (layer 2) describes the current state of the

smart home environment resulting from an action realization. This observed state, which can

be partial, is obtained from information retrieved from the smart home’s sensor events. From

this observed partial state, the environment representation layer is able to infer the plausible

contexts that could explain the current observations. It should be noted that our model assumes

that there is an event manager agent (layer 1) that collects sensor events and evaluates if an

action was carried out by the smart home’s occupant.

The action recognition agent (layer 3) infers the most plausible low-level action that was

carried out in the smart home according to the changes observed in the environment’s state.

In accordance with a possibilistic action formalization and with the set of plausible environment

contexts that could represent the observed environment’s state resulting from an action realiza-

tion, the action recognition agent selects in the action ontology the most possible and necessary

recognized action that could explain the environment changes.

The behaviour recognition agent (layer 4) infers hypotheses about the plausible high-level

occupant’s behaviour related to the accomplishment, in an erroneous or coherent way, of some

intended activities. When the smart home’s occupant performs some activities, its behaviour,

which could be coherent or erroneous, is observed as a sequence of actions. According to the

plausible sequence of observed actions and to a possibilistic activity plan formalization, the

behaviour recognition agent infers a behaviour hypothesis set and evaluates the most possible

and necessary hypotheses in order to send them to an assistive agent. Our approach assumes

that there is a smart home’s assistive agent that uses information from different agents, including

the behaviour recognition agent, in order to plan, if needed, a helping task, through some smart

home’s actuators, towards the smart home’s occupant.
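A minimal sketch of this layered dataflow, assuming hypothetical agent interfaces (the class and method names below are illustrative, not those of our implementation):

```python
# Layered dataflow of the recognition architecture (layers 2-4).
# The four injected agents are hypothetical stubs standing in for the
# environment representation, action recognition, behaviour recognition
# and assistive agents described in the text.

class RecognitionPipeline:
    def __init__(self, env_agent, action_agent, behaviour_agent, assistive_agent):
        self.env = env_agent               # layer 2: contexts from sensor state
        self.action = action_agent         # layer 3: low-level action recognition
        self.behaviour = behaviour_agent   # layer 4: behaviour hypotheses
        self.assist = assistive_agent      # downstream consumer of hypotheses

    def on_observation(self, obs_t, t):
        # 1) Infer the plausible contexts entailed by the (partial) observation.
        contexts = self.env.entailed_contexts(obs_t)
        # 2) Select the most possible and necessary recognized action.
        observed_action = self.action.recognize(contexts, t)
        # 3) Update behaviour hypotheses from the observed action sequence.
        hypotheses = self.behaviour.update(observed_action, t)
        # 4) Forward the most plausible hypotheses to the assistive agent.
        self.assist.consider(hypotheses)
```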

4.1 Environment Representation and Context

In our model, knowledge concerning the resident’s environment is represented by using a formal-

ism in description logics (DL) (Baader et al., 2007). DL is a family of knowledge representation


formalisms that may be viewed as a subset of first-order logic, and its expressive power goes

beyond propositional logic, although reasoning is still decidable. By using the open world as-

sumption, it allows us to represent the fact that knowledge about the smart home environment’s

state is incomplete (partially observable). Smart home environment’s states are described with

terminological (concepts and roles) and assertional (concepts’ instances and relations between

them) axioms. DL assertions describing the environment’s state resulting from an action real-

ization are retrieved from information sent by an event manager, which is responsible for collecting sensor events and inferring whether an action was carried out. The DL language used should be between

SHOIN(D) and ALC. The SHOIN(D) (Horrocks and Patel-Schneider, 2004) family constitutes the foundation of the ontological language OWL DL (Horrocks et al., 2003). The ALC family (Schmidt-Schauß and Smolka, 1991), a subset of SHOIN(D), is one of the easiest

to use. It should be noted that the implementation for the model validation uses the SH DL

(Horrocks and Sattler, 1999), which corresponds to ALC with the use of role transitivity and

role hierarchy.

In order to reduce the size of plausible states to consider for action and activity formalizations,

our approach uses low-level environment contexts. Low-level contexts are related to the smart

home environment’s state and the occupant profile, while high-level contexts are related to the

occupant’s behaviour (intentions, carried out actions, current activities, behaviour type . . . ). For

the remainder of the paper, usage of the term context will mean low-level context. A context c is

defined as a set of environmental properties that are shared by some states s in the environment’s

state space S. More formally, a context c can be interpreted as a subset of the environment’s

state space (c^I ⊆ S), where ·^I is an interpretation function that assigns to a context a nonempty subset of the interpretation domain Δ^I = S, and where the states of this subset share some common environmental properties

described by the context assertions. For instance, the context where the occupant is in the

kitchen, the pantry door is open, and the pasta box is in the pantry can match several possible

states of the smart home environment. It is possible to partition the environment’s state space

with a set of contexts. Since the observation of the current state is partial, it is possible to have

multiple contexts that could explain the current observed state. Thus, our approach must infer,

by using a reasoning system, the plausible contexts that can be satisfied by the current observed

state’s assertions.

A context ontology (C, ⊑C) describes the set of all environment contexts C, which is partially ordered by the context subsumption relation ⊑C. The context subsumption relation ⊑C can be seen as an extension of the concept subsumption relation ⊑ of DL (Baader et al., 2007). This order relation, which is transitive, allows us to indicate that a concept is more general than

(subsumes) another concept. In other words, a subsumed concept is a subset of the subsumer

concept. For instance, c ⊑C d means that the context d is more general than the context c. In other words, the environment states associated with the context c (c^I) are included in the states associated with the context d (d^I), so that c^I ⊆ d^I. For instance, a context where a door is open

is more general than a context where the pantry door is open (concept pantry door subsumed

by the concept door).
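As a deliberately simplified illustration (real implementations rely on a DL reasoner rather than this set-based approximation), contexts can be encoded as sets of atomic property assertions, where a context with more assertions is more specific:

```python
# Simplified contexts: frozensets of atomic property assertions.
# A context with MORE assertions is MORE specific (it matches fewer states),
# so d subsumes c (c is subsumed by d) when d's assertions are included in c's.

def subsumes(d, c):
    # True if d is more general than c (d subsumes c).
    return d <= c

pantry_door_open = frozenset({"door_open", "door_is_pantry"})
door_open = frozenset({"door_open"})

print(subsumes(door_open, pantry_door_open))  # True: door_open is more general
print(subsumes(pantry_door_open, door_open))  # False
```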

4.2 Action Recognition

In order to infer hypotheses about the observed behaviour of the occupant when he carries out

some activities in the smart home environment, we need to recognize the sequence of observed

actions that were performed in order to achieve the activities’ goals. In our model, we formalize actions according to a context-transition model where transitions between contexts resulting from

an action realization are quantified with a possibility value.

Proposition 1 (Possibilistic Action). A possibilistic action a is a tuple (Cpre_a, Cpost_a, πinit_a, πtrans_a), where Cpre_a and Cpost_a are context sets and πinit_a and πtrans_a are possibility distributions on those context sets.

Cpre_a is the set of possible contexts before the action occurs (pre-action contexts), Cpost_a is the set of possible contexts after the action occurs (post-action contexts), πinit_a is the possibility distribution on Cpre_a that an environment state in a particular context c_i ∈ Cpre_a allows the action to occur, and πtrans_a is the transition possibility distribution on Cpre_a × Cpost_a if the action does occur.
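As an illustrative sketch (field names are ours, not from the implementation), a possibilistic action can be encoded as a small data structure whose context sets are carried implicitly by the keys of the two possibility mappings:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Context = frozenset  # reusing the simplified context encoding sketched above

@dataclass
class PossibilisticAction:
    name: str
    # pi_init: possibility that a state in a pre-action context lets the action occur.
    pi_init: Dict[Context, float] = field(default_factory=dict)
    # pi_trans: possibility of each (pre-context, post-context) transition.
    pi_trans: Dict[Tuple[Context, Context], float] = field(default_factory=dict)

    @property
    def cpre(self):
        # Cpre_a: the pre-action context set.
        return set(self.pi_init)

    @property
    def cpost(self):
        # Cpost_a: the post-action context set.
        return {post for (_, post) in self.pi_trans}
```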

The action library, which contains the set of possible actions A that can be carried out by

the occupant, is represented with an action ontology (A, ⊑A), where each action is partially ordered according to an action subsumption relation ⊑A.

Proposition 2 (Action subsumption relation). Let a, b ∈ A be two actions, represented by (Cpre_a, Cpost_a, πinit_a, πtrans_a) and (Cpre_b, Cpost_b, πinit_b, πtrans_b), and let ⊑A ⊆ A × A be the action subsumption relation. If a subsumes b, represented by b ⊑A a (i.e. (b, a) ∈ ⊑A), then we have:

1. ∀d ∈ Cpre_b, ∃c ∈ Cpre_a, d ⊑C c ∧ πinit_b(d) ≤ πinit_a(c),

2. ∀(d, e) ∈ Cpre_b × Cpost_b, ∃(c, f) ∈ Cpre_a × Cpost_a, d ⊑C c ∧ e ⊑C f ∧ πtrans_b(d, e) ≤ πtrans_a(c, f).

In other words, according to the action subsumption relation ⊑A, if an action subsumes another one, its possibility values are at least as high as those of the subsumed action. For instance, since OpenDoor subsumes OpenDoorPantry, the OpenDoor possibility is greater than or equal to the OpenDoorPantry possibility, since OpenDoor is more general than OpenDoorPantry.
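A sketch of the Proposition 2 test, reusing the PossibilisticAction structure and the simplified context subsumption predicate sketched above (hypothetical helpers, for illustration only):

```python
# Does action a subsume action b, per Proposition 2?
# ctx_subsumes(c, d) is assumed to test d ⊑C c (c is more general than d),
# e.g. the set-inclusion approximation shown earlier.

def action_subsumes(a, b, ctx_subsumes):
    # Condition 1: every pre-context of b is covered by a more general
    # pre-context of a whose initiation possibility is at least as high.
    cond1 = all(
        any(ctx_subsumes(c, d) and b.pi_init[d] <= a.pi_init[c] for c in a.cpre)
        for d in b.cpre
    )
    # Condition 2: the same covering requirement for every (pre, post) transition.
    cond2 = all(
        any(ctx_subsumes(c, d) and ctx_subsumes(f, e)
            and b.pi_trans[(d, e)] <= a.pi_trans[(c, f)]
            for (c, f) in a.pi_trans)
        for (d, e) in b.pi_trans
    )
    return cond1 and cond2
```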

This action ontology is used when we need to evaluate the most plausible action that could

explain the changes observed in the smart home environment resulting from an action realization

by an observed occupant. At the same time, we evaluate the next most plausible action that

can be carried out according to the current state of the smart home environment.

In order to evaluate the recognition and prediction possibilities on the action ontology at a

time t, we need to use the observation obs_t of the current environment state, represented by a set

of DL assertions. This observed state can be partial or complete according to the information

that can be retrieved from the environment’s sensors. Furthermore, each observation timestamp

t ∈ T_s is associated with a time value t_i ∈ T that indicates the elapsed time (in minutes,

seconds, . . . ) since the start of the recognition process. This observation obs_t allows us to evaluate the set of plausible contexts c_i that are entailed by this observation (obs_t |= c_i). If we suppose that obs_t is an observed context, then we have c_i ⊑C obs_t for each context c_i that is entailed by the observation (obs_t |= c_i). Since the environment can be partially observable, multiple

entailed contexts are possible. Those entailed contexts are then used to evaluate the possibility

distributions for the prediction and recognition of actions.

The action prediction possibility distribution πpre_t on A indicates, for each action, the possibility that the action could be the next one carried out according to the current state observed by obs_t. Thus, for each action a ∈ A, the prediction possibility πpre_t(a) is obtained by selecting the maximum value among the initiation possibilities πinit_a(c_i) for the pre-action contexts c_i ∈ Cpre_a that are entailed by the current observation (obs_t |= c_i). The action recognition possibility distribution πrec_t on A indicates, for each action, the possibility that the action was carried out according to the previous and current states observed by obs_{t−1} and obs_t. Thus, for each action a ∈ A, the recognition possibility πrec_t(a) is obtained by selecting the maximum value among the transition possibilities πtrans_a(c_i, c_j) for the pre-action contexts c_i ∈ Cpre_a and post-action contexts c_j ∈ Cpost_a that are entailed by the previous and current observations (obs_{t−1} |= c_i and obs_t |= c_j).
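Assuming the PossibilisticAction structure sketched earlier and the sets of contexts entailed by obs_{t−1} and obs_t, the two distributions can be computed pointwise as follows (illustrative code, not our actual implementation):

```python
def prediction_possibility(action, entailed_now):
    # pi_pre_t(a): max initiation possibility over the pre-action contexts
    # of a that are entailed by the current observation obs_t.
    vals = [p for c, p in action.pi_init.items() if c in entailed_now]
    return max(vals, default=0.0)

def recognition_possibility(action, entailed_prev, entailed_now):
    # pi_rec_t(a): max transition possibility over the (pre, post) context
    # pairs entailed by obs_{t-1} and obs_t respectively.
    vals = [p for (ci, cj), p in action.pi_trans.items()
            if ci in entailed_prev and cj in entailed_now]
    return max(vals, default=0.0)
```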

With this action recognition possibility distribution πrec_t, we can evaluate the possibility and necessity measures that an action was observed at a time t. The possibility that an action in Act ⊆ A was observed by obs_t, denoted by Πrec_t(Act), is given by selecting the maximum possibility among the actions in Act (πrec_t(a), a ∈ Act):

Πrec_t(Act) = max_{a∈Act} πrec_t(a).

The necessity that an action in Act ⊆ A was observed by obs_t, denoted by Nrec_t(Act), is given by taking the maximum value in πrec_t and subtracting the maximum possibility among the actions not in Act (πrec_t(a), a ∉ Act):

Nrec_t(Act) = max_{b∈A}(πrec_t(b)) − Πrec_t(A \ Act)
            = min_{c∉Act}(max_{b∈A}(πrec_t(b)) − πrec_t(c))
            = max_{b∈A}(πrec_t(b)) − max_{c∉Act}(πrec_t(c)).

It should be noted that the prediction possibility and necessity measures Πpre_t and Npre_t are obtained by using the prediction possibility distribution πpre_t instead of the recognition possibility distribution πrec_t. According to those possibility and necessity measures (Πrec_t and Nrec_t), the most plausible action a that could explain the changes observed in the environment state described by obs_t is selected in A. If more than one action is plausible, then they have the same

possibility value and the necessity measure is null for each of these actions if taken individually.

In this case, the most plausible action is taken in A according to the plausible action set Act

and the subsumption relation ⊑A. This selection is carried out in the following way:

1. select the most specific actions K ⊆ Act among the plausible actions Act ⊆ A, so that ∀a ∈ K, ∄b ∈ Act, a ≠ b ∧ b ⊑A a,

2. select the actions M ⊆ A among all actions in A that subsume all actions in K, so that ∀c ∈ M, ∀d ∈ K, d ⊑A c,

3. select the most specific actions N ⊆ M among the actions in M, so that ∀e ∈ N, ∄f ∈ M, e ≠ f ∧ f ⊑A e,

4. if |N| ≥ 2, then we return to the third step after subtracting the action set N from the action set M; otherwise, we select a ∈ N as the observed action, where a is the only action in N.

The observed action selection process is illustrated in Figure 2.

This example presents a subset of the action ontology (nodes) partially ordered by the sub-

sumption relation (arcs). The dark nodes are the most plausible actions, delimited by Act (full

line), that are the most possible and certain. The most specific actions in Act, delimited by K


(dashed line), represent plausible actions that are carried out in more precise contexts. From the

specific actions K, we must find among all actions in A the most specific action that generalizes

(subsumes) all actions in K. This action is selected among the actions that subsume all actions

in K. These actions are delimited by M (dotted line). The most specific action in M , which is

delimited by N (square), is selected as the most plausible observed action that was carried out

by the occupant. Since |N | = 1, the selection process is completed.

For instance, if the most plausible actions in Act are AnyAction, OpenTap, OpenColdTap

and OpenHotTap, then OpenTap is selected since it is the most specific common subsumer of

OpenColdTap and OpenHotTap, which are the most specific actions in Act. It should be noted that the Act subset is chosen in order to maximize the possibility measure, while the necessity measure is kept above a fixed value or a fixed ratio of the possibility measure. This threshold on the necessity measure prevents, as much as possible, the degenerate situation where Act = A.
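A minimal sketch of this four-step selection, under the assumption that the action ontology is encoded as a dictionary of direct parent (more general) actions; the tap actions reproduce the example above.

```python
# Observed-action selection (steps 1-4), on an ontology given as parent links.

PARENTS = {"OpenColdTap": ["OpenTap"], "OpenHotTap": ["OpenTap"],
           "OpenTap": ["AnyAction"], "AnyAction": []}

def subsumers(a):
    """All actions subsuming a (including a itself), following parent links."""
    seen, stack = {a}, [a]
    while stack:
        for p in PARENTS[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def subsumes(general, specific):
    return general in subsumers(specific)

def select_action(act):
    # Step 1: keep the most specific plausible actions K.
    k = {a for a in act if not any(b != a and subsumes(a, b) for b in act)}
    # Step 2: collect the actions M that subsume every action in K.
    m = {c for c in PARENTS if all(subsumes(c, d) for d in k)}
    # Steps 3-4: peel off the most specific layer of M until one action remains
    # (terminates because the ontology has a single top element, AnyAction).
    while True:
        n = {e for e in m if not any(f != e and subsumes(e, f) for f in m)}
        if len(n) == 1:
            return n.pop()
        m -= n

print(select_action({"AnyAction", "OpenTap", "OpenColdTap", "OpenHotTap"}))
# -> OpenTap
```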

This new observed action (a, t) is sent to the behaviour recognition agent, which uses the

sequence of observed actions to infer behaviour hypotheses concerning the realization of the

occupant's activities. This sequence of actions represents the observed plan Pobst, and consists of a set of observed actions (ai, tj) totally ordered by the sequence relation ≺T. For instance, let obs0 and obs1 be two observations where the time values associated with the timestamps 0

and 1 are 3 minutes and 4 minutes, respectively. Then the observed plan (OpenDoor, 0) ≺T

(EnterKitchen, 1) indicates that OpenDoor was observed, according to obs0, 3 minutes after

the start of the recognition process and that EnterKitchen was then observed, according to

obs1, 1 minute later. This observed plan will be used by the behaviour recognition agent to

generate a set of hypotheses concerning the occupant’s observed behaviour when he carries out

some activities.
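As a small illustration, such an observed plan can be encoded directly as a time-ordered list; the tuple layout below is an assumption for illustration, not a structure prescribed by the model.

```python
# The observed plan from the example above, encoded (by assumption) as
# (action, timestamp, minutes_since_start) tuples kept in the ≺T order.
observed_plan = [("OpenDoor", 0, 3.0),       # 3 minutes after the start
                 ("EnterKitchen", 1, 4.0)]   # 1 minute later
```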

4.3 Action Recognition Process Overview

In order to illustrate the action recognition process step by step, let's suppose a fictitious smart home where the only occupant is Peter. While carrying out his ADL in the smart home, Peter carries out an action at time t. The action's effects produce changes in the smart home environment's state,

which are perceived by sensors disseminated in the smart home. According to the sensor events,

the only information about the currently observed state is that Peter is located in the kitchen

and a door is open. This resulting state from the action realization is described by obst with

DL assertions. There are multiple environment states that can be described by this observation


obst. For instance, the state where Peter is in the kitchen and the pantry door is open and the

state where Peter is in the kitchen and the refrigerator door is open represent two possible states

according to the observed situation described by obst. Consequently, the first step of the action

recognition process is to evaluate the environment contexts among the context ontology C that

are entailed by this observation obst.

From the identified plausible contexts, the action recognition process evaluates the action that Peter carried out and the next plausible action that Peter will carry out. This evaluation is based on the action ontology and Peter's profile, which are specified by a domain expert. The action ontology defines all possible actions that Peter could carry out in the smart home and the hierarchy between them (action subsumption relation). The profile defines, for each action, the initiation and transition possibility distributions on the contexts associated with the action, while respecting constraints related to the hierarchy defined in the action ontology. Thus, each action a ∈ A is described by a 4-tuple (Cprea, Cposta, πinita, πtransa). Concerning the

action prediction, the process evaluates the possibility distribution πpret, which indicates the possibility πpret(a) that an action a is the next one carried out by Peter according to the pre-action contexts Cprea entailed by the observation obst. From this prediction distribution πpret, we select the most plausible actions (Act) that maximize the necessity measure according to

a fixed value. If we assume that Act = {AnyAction, TakeFood, TakeCookie, TakePasta},

the predicted action is TakeFood, since it subsumes the most specific actions (TakeCookie

and TakePasta) and is more specific than the generic action AnyAction. Thus the plausible

predicted action that Peter will carry out is the action TakeFood, where Peter takes some food (cookies or pasta) from the pantry.

Concerning the action recognition, the process evaluates the recognition possibility distribution πrect, which indicates the possibility πrect(a) that an action a is the action that Peter carried out according to the plausible transitions between the pre-action contexts Cprea entailed by obst−1 and the post-action contexts Cposta entailed by obst. From the recognition distribution πrect, the recognition process selects the most plausible observed action Act. If we assume that obst−1 describes an environment state where Peter is in the kitchen and the pantry and refrigerator doors are closed, Act is then composed of {AnyAction, OpenDoor, OpenDoorInKitchen, OpenPantryDoor, OpenRefrigeratorDoor}. Thus, the most plausible observed action is OpenDoorInKitchen, since it subsumes the most specific actions OpenPantryDoor and OpenRefrigeratorDoor and is more specific than the actions AnyAction and OpenDoor. Hence, the plausible observed action that Peter carried out is OpenDoorInKitchen, where Peter opens a door (pantry or refrigerator) in the kitchen. This plausible observed action is then added to the observed plan, which consists of a sequence of the most plausible observed actions.

Table 1 illustrates the evaluation process for the prediction and recognition possibility and

necessity measures. It should be noted that the action AnyAction is omitted, since it subsumes

all actions in A. Furthermore, only a subset of the action ontology is shown.

According to the observations, the prediction and recognition possibility distributions πpret and πrect on the action ontology A are evaluated. Concerning the prediction, the most plausible predicted actions in Act are AnyAction, TakeFood, TakeCookie and TakePasta. This set Act contains the actions with one of the three highest values in the possibility distribution πpret (0.9, 0.8 and 0.7). This increases the prediction necessity measure while minimizing the number of actions to take into account. According to the action ontology and the action subsumption relation, the most plausible predicted action is TakeFood. According to πpret, the fact that Peter will carry out the action TakeFood is very possible (Πpret(Act) = 0.9) and fairly certain (Npret(Act) = 0.7). Thus, we have a prediction plausibility interval [0.7, 0.9] for the action TakeFood.

Concerning the recognition, the most plausible observed actions in Act are AnyAction, OpenDoor, OpenDoorInKitchen, OpenPantryDoor and OpenRefrigeratorDoor. This set Act contains the actions with one of the three highest values in the possibility distribution πrect (0.8, 0.7 and 0.6). Since there is no action with a possibility value of 0.6, only actions with values of 0.8 and 0.7 are taken into account. According to the action ontology and action subsumption relation, the most plausible observed action is OpenDoorInKitchen. In accordance with πrect, the fact that Peter carried out the action OpenDoorInKitchen is very possible (Πrect(Act) = 0.8) and relatively certain (Nrect(Act) = 0.6).

The predicted and observed actions, including the prediction and recognition possibility

distributions, are then sent to the behaviour recognition process. This behaviour recognition

process will generate hypotheses concerning Peter’s observed behaviour when he carries out his

activities of daily living.

4.4 Behaviour Recognition

In order to have hypotheses about the behaviour associated with the performance of some

activities, we need to formalize activities as plan structures by using the action ontology A.

An activity plan consists of a partially ordered sequence of actions that must be carried out in

order to achieve the activity’s goals.


Proposition 3 (Activity Plan). An activity plan α is a tuple (Aα, ≺α, Crealα, πrealα, πerrα) where Aα ⊆ A is the activity's set of actions, which is partially ordered by a temporal relation ≺α ⊆ Aα × Aα × πtimeα, where πtimeα represents a set of temporal possibility distributions πtimeα,k; Crealα is the set of possible contexts related to the activity realization; πrealα is the possibility distribution on Crealα that a context is related to a coherent realization of the activity; and πerrα is the possibility distribution on Crealα that a context is related to an erroneous realization of the activity.

Each relation (ai, aj, πtimeα,k) ∈ ≺α allows us to describe the possible delays, according to the temporal distribution πtimeα,k, between the carrying out of ai and aj. The temporal distribution πtimeα,k on T, which is a set of time values, indicates, for each time value tl ∈ T, the possibility πtimeα,k(tl) that tl represents a coherent delay between the realization of ai and aj. For instance, the activity WatchTv can have an activity plan composed of the actions SitOnCouch, TurnOnTv and TurnOffTv and the temporal relations (SitOnCouch, TurnOnTv, πtime0) and (TurnOnTv, TurnOffTv, πtime1), where πtime0 and πtime1 indicate possible delays between the realization of the actions, according to Figure 3.
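The model leaves the shape of πtime open; one plausible encoding, shown here purely as an assumption, is a trapezoidal possibility distribution over delays.

```python
# A trapezoidal temporal possibility distribution over delays (in seconds):
# fully possible between core_lo and core_hi, falling off linearly outside.

def trapezoid(lo, core_lo, core_hi, hi):
    def pi_time(delay):
        if delay <= lo or delay >= hi:
            return 0.0
        if core_lo <= delay <= core_hi:
            return 1.0
        if delay < core_lo:
            return (delay - lo) / (core_lo - lo)   # rising edge
        return (hi - delay) / (hi - core_hi)       # falling edge
    return pi_time

# e.g. a hypothetical pi_time0 between SitOnCouch and TurnOnTv: plausible
# within 5-60 s, fully possible between 10 and 30 s.
pi_time0 = trapezoid(5, 10, 30, 60)
print(pi_time0(20), pi_time0(45), pi_time0(120))   # 1.0 0.5 0.0
```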

The activity plan library, which contains the set of possible activity plans P that can be

carried out by the occupant in the smart home environment, is represented with an activity

plan ontology (P, ⊑P), where each activity plan is partially ordered according to an activity subsumption relation ⊑P.

Proposition 4 (Activity Subsumption). Let two activity plans α, β ∈ P, represented by the tuples (Aα, ≺α, Crealα, πrealα, πerrα) and (Aβ, ≺β, Crealβ, πrealβ, πerrβ), and the activity subsumption relation ⊑P ⊆ P × P. If α subsumes β, denoted by β ⊑P α, then for each context d ∈ Crealβ, there is a context c ∈ Crealα where d ⊑C c, πrealβ(d) ≤ πrealα(c), and πerrβ(d) ≤ πerrα(c).

In other words, if an activity subsumes another one, its possibility values are at least as high as those of the subsumed activity. For instance, since CookFood subsumes CookPastaDish, the CookFood possibility is greater than or equal to the CookPastaDish possibility, since CookFood is more general than CookPastaDish.

After each observation obst, we need to evaluate the possibility distributions πrealt and πerrt on the activity plan ontology P, which indicate the possibility that the environment's state observed by obst is related, respectively, to a coherent realization and an erroneous realization of the activity. The coherent activity realization possibility distribution πrealt on P indicates the possibility, denoted by πrealt(α), that the environment's state observed by obst is related to


a coherent realization of the activity plan α ∈ P. Hence, πrealt is obtained by selecting, for

each activity plan α ∈ P, the maximum value among the context possibilities πrealα(ci) for the

contexts ci ∈ Crealα that are entailed by the observation (obst |= ci).

The erroneous activity realization possibility distribution πerrt on P indicates the possibility, denoted by πerrt(α), that the environment's state observed by obst is related to an erroneous

realization of the activity plan α ∈ P. Thus, πerrt is obtained by selecting, for each activity plan

α ∈ P, the maximum value among the context possibilities πerrα(ci) for the contexts ci ∈ Crealα

that are entailed by the observation (obst |= ci).
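A sketch of this evaluation, assuming each activity plan stores its realization contexts with their coherent and erroneous possibilities (all names and values below are hypothetical):

```python
# Evaluate pi_real_t and pi_err_t over the plan ontology, given the contexts
# entailed by the current observation obs_t.

def realization_distributions(plans, entailed):
    pi_real_t, pi_err_t = {}, {}
    for name, plan in plans.items():
        cs = [c for c in plan["C_real"] if c in entailed]  # entailed contexts
        pi_real_t[name] = max((plan["pi_real"][c] for c in cs), default=0.0)
        pi_err_t[name] = max((plan["pi_err"][c] for c in cs), default=0.0)
    return pi_real_t, pi_err_t

plans = {"GetCookie": {"C_real": ["PantryDoorOpen", "CookieOnTable"],
                       "pi_real": {"PantryDoorOpen": 0.9, "CookieOnTable": 0.7},
                       "pi_err": {"PantryDoorOpen": 0.2, "CookieOnTable": 0.4}}}
print(realization_distributions(plans, {"PantryDoorOpen"}))
# ({'GetCookie': 0.9}, {'GetCookie': 0.2})
```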

By using the activity plan ontology and the observed plan, the behaviour recognition agent can generate hypotheses concerning the actual behaviour of the observed patient when he carries out some activities. Since multiple activity realizations can explain the observed plan, we need to evaluate activity realization paths, which represent partial or complete realizations of activities. An activity realization path pathj is a tuple (αj, Pobst, Rpathj), where αj ∈ P is the activity that is partially or completely carried out, Pobst is the observed plan, and Rpathj ⊆ Pobst × Aαj is a relation which associates a subset of observed actions (ai, tk) ∈ Pobst from the observed plan with actions al ∈ Aαj from the activity αj, so that ((ai, tk), al) ∈ Rpathj. The observed actions in Rpathj must represent a coherent partial or complete realization of the activity αj, according to the temporal constraints defined in the activity plan αj. It should be noted that if an observed action (ai, tk) is associated with an activity action al, denoted by ((ai, tk), al) ∈ Rpathj, then the observed action must subsume the activity action (al ⊑A ai). For instance, given the observed plan (SitOnCouch, 0) ≺T (TurnOnElectricalAppliance, 1) and the WatchTv activity plan, we can have as a realization path the associations ((SitOnCouch, 0), SitOnCouch) and ((TurnOnElectricalAppliance, 1), TurnOnTv) (since TurnOnTv is subsumed by TurnOnElectricalAppliance).

Figure 4 illustrates the concept of a partial activity realization path. From a sequence of observed actions (observed plan), some actions (a0, a2, a3 and a5) are associated with actions in an activity plan (a ⊑A a0, b ⊑A a2, c ⊑A a3 and d ⊑A a5), where actions a to h are partially ordered. Thus we have an activity realization path {((a0, t0), a), ((a2, t2), b), ((a3, t3), c), ((a5, t5), d)}, which denotes a partial coherent realization of the activity plan. The remaining observed actions (a1, a4 and a6) may be associated with another activity plan or be a behavioural error.

Since the set of realization paths Path depends on the observed plan Pobst, we must update Path for each new observed action by extending, removing, or adding realization paths: (i) a realization path pathj ∈ Path is extended if the new observed action subsumes one of the next possible actions in the activity plan and if the extended path respects the constraints in the activity plan; we must also keep a copy of the original realization path, since it is possible that the new observed action is not associated with the path's activity; (ii) a realization path pathj ∈ Path is removed if the maximum delays for the next possible actions in the activity plan are exceeded; (iii) a realization path pathj is added to Path if the new observed action subsumes one of the activity's actions that can be directly carried out. A sketch of this update step is given below.
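A minimal sketch of this update, under the simplifying assumption that each activity plan is a totally ordered list of (action, max_delay_from_previous) pairs; duplicate paths are not pruned here, and `subsumes` stands in for the action subsumption test.

```python
# One update of the realization-path set for a new observed action at time t.

def update_paths(paths, plans, subsumes, obs_action, t):
    updated = []
    for p in paths:
        plan = plans[p["activity"]]
        matched = p["matched"]
        if len(matched) == len(plan):          # plan complete: keep as-is
            updated.append(p)
            continue
        next_action, max_delay = plan[len(matched)]
        if t - matched[-1][1] > max_delay:     # (ii) temporal window exceeded
            continue
        updated.append(p)                      # (i) keep the original copy...
        if subsumes(obs_action, next_action):  # ...and extend it if possible
            updated.append({"activity": p["activity"],
                            "matched": matched + [(obs_action, t)]})
    for name, plan in plans.items():           # (iii) start new paths
        if subsumes(obs_action, plan[0][0]):
            updated.append({"activity": name, "matched": [(obs_action, t)]})
    return updated
```

In the GetCookie/GetMilk walkthrough of Section 4.5, rule (ii) is what removes the path p2 once the maximum delay to TakeMilk is exceeded.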

With this activity realization path set Path, we need to evaluate the possibility distributions πPathC,t and πPathE,t on Path, which indicate the possibility that a particular path is associated, respectively, with a coherent behaviour or an erroneous behaviour. The coherent path realization possibility distribution πPathC,t on Path indicates the possibility, denoted by πPathC,t(pathj), that a particular path pathj ∈ Path is associated with a coherent behaviour according to an observed plan Pobst. Thus, πPathC,t is obtained by considering, for each pathj ∈ Path, the action prediction possibility for each action in the activity plan that could be the next one to be carried out, the action prediction and recognition possibilities for the actions that are in the path, the coherent activity realization possibility for the activity associated with the path, and the time possibilities for the action transitions in the realization path.

The erroneous path realization possibility distribution πPathE,t on Path indicates the possibility, denoted by πPathE,t(pathj), that a particular path pathj ∈ Path is associated with an erroneous behaviour according to an observed plan Pobst. Thereby, πPathE,t is obtained by considering, for each pathj ∈ Path, the action prediction and recognition possibilities for the observed actions not in the realization path, and the erroneous activity realization possibilities for the activity associated with the realization path.

The activity realization path set Path allows us to identify the activities that can be carried out in a coherent way by the smart home occupant. These activities are denoted by the plausible activity set Pposs. The observed behaviour, erroneous or coherent, may contain a subset of these plausible activities, which are carried out in a coherent way. According to Pposs ⊆ P, there are multiple hypotheses that can explain the observed behaviour of the occupant when he carries out his activities of daily living.

Definition 5 (Behavioural hypothesis). A behavioural hypothesis hi ∈ Ht is a subset of Pposs, where ∀αj ∈ hi, ∄αk ∈ hi, αj ≠ αk ∧ αj ⊑P αk. Ht denotes the behavioural hypothesis set concerning the observation obst of the environment's state.

Thus, a behavioural hypothesis hi ∈ Ht is an element of the powerset of Pposs (hi ∈ ℘(Pposs)) where each activity plan α ∈ hi is not subsumed by another activity plan β ∈ hi. In other words, a hypothesis is a set of activities (if any) that are partially or completely realized in a coherent way (and possibly in an interleaved way) and are associated with a coherent behaviour or an erroneous behaviour. So, for each hypothesis hi ∈ Ht, we have two interpretations (coherent and erroneous behaviours). A hypothesis hi can be interpreted as a coherent behaviour where the occupant carries out, in a coherent way, the activities in the hypothesis. Those activities can, at the current observation time, be partially realized. Also, a hypothesis hi can be interpreted as an erroneous behaviour where the occupant carries out some activities in an erroneous way, while the activities in the hypothesis, if any, are carried out in a coherent way.
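Enumerating Ht itself can be sketched as filtering the powerset of Pposs for subsumption-free subsets (Definition 5); the subsumption test and the activity chain below are illustrative, anticipating the example of Section 4.5.

```python
# Behavioural hypotheses: subsets of P_poss containing no pair related by
# activity subsumption (Definition 5).

from itertools import combinations

def hypotheses(p_poss, subsumes_P):
    hs = [frozenset()]                      # the empty hypothesis
    for r in range(1, len(p_poss) + 1):
        for combo in combinations(p_poss, r):
            if not any(a != b and subsumes_P(b, a)
                       for a in combo for b in combo):
                hs.append(frozenset(combo))
    return hs

# A subsumption chain: GetCookie < GetObjectInKitchen < GetObject < AnyActivity.
chain = ["GetCookie", "GetObjectInKitchen", "GetObject", "AnyActivity"]
subs = lambda g, s: chain.index(s) <= chain.index(g)  # g subsumes s
print(hypotheses(chain, subs))
# the empty set plus the four singletons, matching h1..h5 of Section 4.5
```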

According to these two interpretations, we evaluate the coherent behaviour and erroneous behaviour possibility distributions, πBevC,t and πBevE,t, on the behaviour hypothesis set Ht. The coherent possibility distribution πBevC,t indicates, for each hi ∈ Ht, the possibility that a behaviour hypothesis denotes a coherent behaviour according to the observed plan Pobst and the realization paths associated with the activities in the hypothesis. If there is no activity in the hypothesis, or if some observed actions in Pobst are not associated with at least one realization path, the possibility is zero. Otherwise, for each behaviour hypothesis hi ∈ Ht, the possibility πBevC,t(hi) is obtained by selecting the maximum value among the activities' minimal coherent realization path possibilities:

πBevC,t(hi) = 0, if some actions in Pobst are missing from the activities in hi or if hi is empty;
πBevC,t(hi) = max_{αj∈hi} min_{pathk∈Path, αj∈pathk} πPathC,t(pathk), otherwise.

The erroneous possibility distribution πBevE,t indicates, for each hi ∈ Ht, the possibility that a behaviour hypothesis denotes an erroneous behaviour according to the observed plan Pobst and the partial paths associated with the activities in the hypothesis. If there is no activity in the hypothesis, the possibility is obtained by selecting the minimal action recognition possibility among the actions in the observed plan Pobst. Otherwise, for each behaviour hypothesis, the possibility πBevE,t(hi) is obtained by selecting the maximum value among the activities' minimal erroneous partial path possibilities:

πBevE,t(hi) = min_{(al,tk)∈Pobst} πrectk(al), if hi is empty;
πBevE,t(hi) = max_{αj∈hi} min_{pathk∈Path, αj∈pathk} πPathE,t(pathk), otherwise.

With those two possibility distributions, we can evaluate, in the same manner as for action recognition, the possibility and necessity measures that each hypothesis represents a coherent or an erroneous behaviour that could explain the observed actions Pobst. The possibility and necessity measures that a hypothesis in B ⊆ Ht represents a coherent behaviour that could explain the observed plan Pobst are given by ΠBevC,t(B) and NBevC,t(B), which are obtained from the πBevC,t possibility distribution. The possibility and necessity measures that a hypothesis in B ⊆ Ht represents an erroneous behaviour that could explain the observed plan Pobst are given by ΠBevE,t(B) and NBevE,t(B), which are obtained from the πBevE,t possibility distribution. The most possible and necessary hypotheses are then selected according to the ΠBevC,t, NBevC,t, ΠBevE,t and NBevE,t measures on the hypothesis set Ht. The result of the behaviour recognition is then sent to an assistant process, which will use this information in order to plan a helping task if needed.

4.5 Activity Recognition Process Overview

In order to illustrate the behaviour recognition process step by step, let's suppose that we have the same smart home with Peter. When Peter carries out an action in the smart home, the action recognition process sends the observed plan Pobst, the distributions πpret and πrect on A, and the environment contexts entailed by obst to the behaviour recognition process. According to this information, the behaviour recognition process generates a behavioural hypothesis set Ht. Each hypothesis h ∈ Ht denotes a behaviour where some activities (if any) are carried out in a coherent way by Peter. Since these coherent activity realizations may occur in erroneous and coherent behaviours from Peter, the πBevC,t and πBevE,t distributions on Ht are evaluated. The hypothesis set Ht and the distributions πBevC,t and πBevE,t are then sent to an assistance process, which will plan a helping task for Peter if necessary.

The behaviour recognition process is based on activity plans, which are described with a 5-tuple (Aα, ≺α, Crealα, πrealα, πerrα). From the entailed contexts, the process evaluates the πrealt and πerrt distributions on the activity plan ontology P. These distributions indicate the possibility, for each activity, that the observed state is related to a coherent or an erroneous realization of the activity by Peter according to the entailed environment contexts.


By taking into account the observed actions Pobst and the activity plans in P, the recognition process evaluates the activity realization paths Path, where each path contains a subset of the observed actions Pobst. An activity realization path p ∈ Path represents a partial or complete

realization of an activity that respects the temporal constraints specified in the activity plan, so

that this realization is coherent.

Let's suppose an observed plan Pobst = {(OpenDoorInKitchen, t = 0)}, which indicates that Peter opened a door in the kitchen at time t = 0. Let's suppose the activities GetCookie and GetMilk. The GetCookie activity indicates that Peter wants to get cookies by opening the pantry door (OpenPantryDoor), taking a box of cookies (TakeCookie), and closing the pantry door (ClosePantryDoor). The GetMilk activity indicates that Peter wants to get milk by opening the refrigerator door (OpenRefrigeratorDoor), taking the milk carton (TakeMilk), and closing the refrigerator door (CloseRefrigeratorDoor). According to the observed action (OpenDoorInKitchen) and the action ontology A, both activities are plausibly being carried out by Peter. Indeed, the observed action OpenDoorInKitchen subsumes the first action in both activities, OpenPantryDoor ⊑A OpenDoorInKitchen and OpenRefrigeratorDoor ⊑A OpenDoorInKitchen, so that we have partial activity realization paths for both activities. For GetCookie, we have the realization path p1, described by (GetCookie, Pobst, Rp1) where Rp1 = {((OpenDoorInKitchen, t = 0), OpenPantryDoor)}. For GetMilk, we have the realization path p2, described by (GetMilk, Pobst, Rp2) where Rp2 = {((OpenDoorInKitchen, t = 0), OpenRefrigeratorDoor)}. Thus, the realization path set consists of Path = {p1, p2}.

Now, let’s suppose that the next observed action is (TakeObjectInKitchen, t = 1), which

indicates that Peter took an object in the kitchen at time t = 1. According to A, Take-

ObjectInKitchen subsumes TakeMilk and TakeCookie. Then, the paths p1 and p2 can be

extended by adding this new observed action. However, the delay between the observations

t = 0 and t = 1 exceeds the maximum delay between the realization of OpenRefrigeratorDoor

and TakeMilk described in the GetMilk activity plan. Consequently, the path p2 is re-

moved from Path, which is now composed of p1 and p3. The path p3 is an extension of

p1 and is described by (GetCookie, Pobst , Rp3) where Rp3 = {((OpenDoorInKitchen, t =

0), OpenPantryDoor), ((TakeObjectInKitchen, t = 1), TakeCookie)}. The path p1 remains

in Path since TakeObjectInKitchen could be associated with another activity or be an error, and p1 is still temporally valid according to the GetCookie activity plan.

The realization paths p1 and p3 may occur in a coherent or an erroneous behaviour from


Peter when he carries out his activities of daily living. Consequently, the recognition process evaluates the πPathC,t and πPathE,t possibility distributions on Path. For each path p ∈ Path, πPathC,t(p) indicates the possibility that the realization path p occurs in a coherent behaviour, while πPathE,t(p) indicates the possibility that the realization path p occurs in an erroneous behaviour.

Let's suppose that Path also contains realization paths for the AnyActivity, GetObject and GetObjectInKitchen activity plans. Thus, the plausible carried-out activity set is defined by Pposs = {AnyActivity, GetObject, GetObjectInKitchen, GetCookie}. From Pposs, the recognition process generates a behavioural hypothesis set Ht. Each hypothesis h ∈ Ht describes a set of activities that are coherently carried out by Peter, which may occur in a coherent or erroneous behaviour. Thereby, we may observe an erroneous behaviour where Peter carries out the activities in h in a coherent way while others are carried out in an erroneous way, or we may observe a coherent behaviour where Peter carries out the activities in h in a coherent way. According to Path, the hypothesis set consists of Ht = {h1, h2, h3, h4, h5}, where h1 = ∅, h2 = {AnyActivity}, h3 = {GetObject}, h4 = {GetObjectInKitchen} and h5 = {GetCookie}. By the restriction on activity subsumption in hypotheses, we have Ht ⊂ ℘(Pposs). Indeed, we have the activity subsumption relations GetCookie ⊑P GetObjectInKitchen ⊑P GetObject ⊑P AnyActivity, which prevent the occurrence of more than one plausible activity from Pposs in a hypothesis. Each hypothesis h ∈ Ht can represent a coherent behaviour or an erroneous behaviour from Peter. Thus, the recognition process must evaluate the πBevC,t and πBevE,t possibility distributions on Ht. The πBevC,t distribution indicates, for each hypothesis h ∈ Ht, the possibility πBevC,t(h) that Peter has a coherent behaviour when he carries out the activities in h. The πBevE,t distribution indicates, for each hypothesis h ∈ Ht, the possibility πBevE,t(h) that Peter has an erroneous behaviour when he carries out in a coherent way, if any, the activities in h.

Table 2 illustrates the evaluation process for the coherent behaviour and erroneous behaviour possibilities πBevC,t and πBevE,t for the hypothesis h5.

The hypothesis h5 refers to coherent or erroneous behaviours where Peter carries out the GetCookie activity. According to the current observations, the realization path set for GetCookie consists of p1, p3, p4, and p5. Consequently, the possibility value of a coherent behaviour for hypothesis h5 is πBevC,t(h5) = min(πPathC,t(p1), πPathC,t(p3), πPathC,t(p4), πPathC,t(p5)) = min(0.8, 0.9, 0.9, 0.8) = 0.8. Furthermore, the possibility value of an erroneous behaviour for hypothesis h5 is πBevE,t(h5) = min(πPathE,t(p1), πPathE,t(p3), πPathE,t(p4), πPathE,t(p5)) = min(0.3, 0.1, 0.2, 0.4) = 0.1. Both values (πBevC,t(h5) = 0.8 and πBevE,t(h5) = 0.1) will be used in order to evaluate the coherent and erroneous behaviour possibility and necessity measures of the subsets of Ht that contain h5.
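A short sketch of the two case formulas of Section 4.4 reproduces these h5 values; the dictionary layout is an assumed encoding, not the implementation's data structure.

```python
# pi_BevC and pi_BevE for one hypothesis h. path_pi maps each activity to the
# (coherent, erroneous) possibilities of its realization paths; pi_rec_obs
# holds the recognition possibilities of the observed actions in the plan.

def pi_bev(h, path_pi, pi_rec_obs, covers_all_observations):
    if not h:                       # empty hypothesis: purely erroneous case
        return 0.0, min(pi_rec_obs)
    coherent = max(min(c for c, _ in path_pi[a]) for a in h)
    erroneous = max(min(e for _, e in path_pi[a]) for a in h)
    return (coherent if covers_all_observations else 0.0), erroneous

# The four GetCookie paths p1, p3, p4, p5 with the values used above.
path_pi = {"GetCookie": [(0.8, 0.3), (0.9, 0.1), (0.9, 0.2), (0.8, 0.4)]}
print(pi_bev({"GetCookie"}, path_pi, [0.8], True))   # (0.8, 0.1)
```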

The behavioural hypothesis set Ht and the πBevC,t and πBevE,t possibility distributions on Ht

are then sent to an assistance process. This assistance process will plan, if necessary, a helping

task carried out by the smart home actuators. This helping task is planned according to the

received information and Peter’s profile.

5 Smart Home Validation

In this section, we present results from our possibilistic model implementation in the Ambient

Intelligence for Home-based Elderly Care (AIHEC) project's infrastructure at Singapore's Institute for Infocomm Research (I2R) (Phua et al., 2009). This infrastructure consists of a simulated

smart home environment, like the ones in DOMUS and LIARA labs, which contains stations

that represent smart home rooms (pantry, dining, . . . ). The behaviour of the observed person is

monitored by using pressure sensors (to detect sitting on a chair), RFID antennas (to detect cup

and plate on the table and in the cupboard), PIR sensors (to detect movement in the pantry and

dining areas), reed switch sensors (to detect opening and closing of the cupboard), accelerometer

sensors (to detect occupant’s hand movements), and video sensors (mainly to annotate and audit

the observed occupant's behaviour). The event manager, which collects the dining and pantry

sensor events, is based on a Dynamic Bayes Network (DBN) (Tolstikov et al., 2008). It should

be noted that other approaches could be used instead of DBN in the event manager. Also, the

lab environment uses a wireless sensor network.

Our possibilistic behaviour recognition model is implemented according to the simplified

smart home system architecture (Figure 5), and is subdivided into three agents: environment

representation agent, action recognition agent, and behaviour recognition agent.

The system architecture works as follows. Basic events related to an action realization by

the smart home’s occupant are generated by the sensors and are sent to a sensor event manager

agent. The sensor event manager agent, which is based on a DBN in this case, infers that an

action was carried out and sends the current smart home environment’s state, which could be

partially observable, to the environment representation agent. The environment representation

agent, which has a virtual representation of the smart home environment encoded in a Pellet

description logic system (Sirin et al., 2007), infers which contexts are entailed by the current

(partial) environment state. Those entailed contexts are then sent to an action recognition agent,


which will use a possibilistic action formalization and the action ontology to select the most

plausible action that could explain the observed changes in the environment. This recognized

action is sent to a behaviour recognition agent, which will use the sequence of observed actions

(observed plan) and the activity plan ontology to generate possibilistic hypotheses about the

behaviour of the observed occupant. These hypotheses, with information from other agents,

will be used by an assistive agent, which will plan a helping task, if needed, towards the smart

home’s occupant by using the smart home’s actuators.
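Schematically, one recognition step flows through these agents as below; every class and method name is a hypothetical stand-in for the corresponding component, not the implementation's actual API.

```python
# One step of the recognition pipeline described above (names hypothetical).

def recognition_step(sensor_events, event_manager, env_agent,
                     action_agent, behaviour_agent, assistant):
    state = event_manager.infer_state(sensor_events)      # DBN over raw events
    contexts = env_agent.entailed_contexts(state)         # DL (Pellet) reasoning
    action, pi_pre, pi_rec = action_agent.recognize(contexts)
    hyps, pi_bev_c, pi_bev_e = behaviour_agent.update(action, pi_pre, pi_rec)
    assistant.maybe_assist(hyps, pi_bev_c, pi_bev_e)      # plan help if needed
```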

5.1 Results

A previous trial was carried out in this simulated smart home environment, where 6 actors sim-

ulated a meal-time scenario several times (coherent and erroneous behaviour) on 4 occasions.

This meal-time scenario, which is an eating ADL (activity of daily living), contains several sub-activities: getting the utensils (plate), getting the food (cookie), getting the drink (water bottle)

from the cupboard in the pantry to the table, eating and drinking while sitting on the chair, and

putting back the utensils, food, and drink in the cupboard. These activities can be interleaved.

For instance, the actor can bring the utensils, food and drink at the same time to the table,

where some actions are shared between these activities (e.g. opening the cupboard). Some erroneous realizations of this scenario were carried out; they are mainly associated with realization errors (forgetting an activity step, adding irrelevant actions, breaking the temporal constraint between two of an activity's actions), where some of them can also be considered as an initiation error (not starting an activity) or a completion error (not completing the activity).

By using the sensor databases for each observed behaviour, a set of observed sequences of

smart home events was recognized, constituting a set of behavioural realizations. Among those observed behaviours, we selected the 40 (10 coherent/30 erroneous) scenario realizations that are the most representative, since some of them are similar. The selected coherent scenario realizations represent coherent, interleaved realizations of the activities in the meal-time scenario. The selected erroneous scenario realizations represent erroneous realizations of the meal-time scenario, with or without some coherent partial activity realizations. In those erroneous realizations, there is usually more than one error type that occurs (realization, initiation and completion errors). Each selected scenario realization was simulated in our model implementation by inputting the smart home events related to each realization and using the sensor event manager, in order to recognize the sequence of observed actions and to generate hypotheses concerning the observed behaviour, according to the environment, action and activity ontologies.


The main goal of our implementation experimentation is to evaluate the high-level recognition accuracy concerning the observed behaviour associated with the realization of the meal-time

scenario. Since our recognition approach is based on the knowledge about the observed smart

home environment’s state, problems related to sensors (e.g. noise) are managed by a sensor event

manager, in this case a DBN. This sensor event manager sends the observed environment’s state

to our environment manager agent, which infers the plausible contexts that could explain the

current observed state.

Another goal of our implementation experimentation is to evaluate empirically the runtime

performance of our proposed approach. For each new observed action, we evaluated the runtime

concerning the inference of plausible contexts, the action prediction and recognition, and the

generation of behavioural hypotheses. It should be noted that we focused more specifically

on the computational load of the components related to our approach. Thus, sensor event management is not taken into consideration when evaluating the implementation performance.

Concerning behaviour recognition accuracy, the results of which are shown in Figure 6, three ways to filter hypotheses were considered. The first filter is to evaluate whether a specific scenario realization is among the most plausible behavioural hypotheses. In other words, we consider subsets of the hypothesis set Ht that contain only one hypothesis. Our model was able to recognize 56.7% of the erroneous behaviours and 80% of the coherent behaviours (62.5% overall) (first part of Figure 6).

The second filter is to evaluate whether a specific scenario realization is included in a hypothesis set that is among the most plausible hypothesis subsets of the hypothesis set Ht. A most plausible hypothesis subset consists of a hypothesis set where the possibility values of the hypotheses in the subset maximize (according to a threshold criterion) the necessity measure of the subset. We use a threshold criterion (a fixed value or a ratio of the possibility measure) for the necessity measure in order to reduce the size of the hypothesis subset. Our proposed approach was able to recognize 63.3% of the erroneous behaviours (67.5% overall) (second part of Figure 6).

The last filter is to consider erroneous scenario realizations as generic erroneous behaviours, without taking into account activities that may be carried out in a coherent way. Our proposed approach was able to recognize 100% of them (95% overall) (third part of Figure 6). It should be noted that for all three filters, the coherent behaviour recognition accuracy is constant at 80%.

There are different reasons that explain the moderate recognition accuracy results and the

difference between the filters. One of the main reasons that some behaviour realizations (coherent


and erroneous) are not recognized is related to the rigidity of the action temporal relation, where

the only time constraint is a possibility distribution on duration between actions. In this case,

some erroneous or coherent realizations are instead considered as generic erroneous realizations,

since the coherent activities’ realizations are not recognized. Since some temporal constraints

are not respected, the realization paths of these activities are rejected.

Furthermore, in some cases, the sensor configuration changed slightly (mainly the PIR sensors). These modifications influence the accuracy of the event manager system. For instance, a change in the PIR sensor localization can produce many perturbations: the zones detected by the PIR sensors become overlapping, while the training of the event recognizer was done on non-overlapping zones.

Finally, it should be noted that all possibility distributions, temporal constraints and action descriptions are obtained from an expert's belief, unlike most probabilistic approaches, which use historical sensor data. Nevertheless, this approximation has shown promising results, and a learning phase on the recognition system parameters would improve the recognition accuracy.

Concerning the runtime performance, Figure 7 plots, for each scenario realization, the system

runtime for each observed action that was recognized by our possibilistic model implementation.

The runtime includes the interactions with the description logic reasoning system, the action

recognition and the behaviour recognition. Since the focus of the experimentation is the perfor-

mance of our behaviour recognition implementation, the runtime prior to the smart home event

recognition is not considered.

For this simulation, the runtime is generally between 100 and 200 milliseconds for each observed action. We observe bell-shaped curves, an effect that results from a reduction in the size of the activity realization path set. This reduction comes from the fact that some temporal constraints between actions described in the activity plans are no longer satisfied, so that subsets of the activity realization path set must be removed. It should be noted that the number of actions

carried out and the time between them for each scenario realization are different from those of

another scenario realization, since each scenario realization represents a specific behaviour.

5.2 Discussion

Several previous related approaches presented earlier have conducted the same kind of experiments that we did, using synthetic and real data on comparable problems of similar size. Comparing our experimental results with these previous ones is not a simple task. One of the main reasons is the rather different nature of the recognition approaches and algorithms used. In our case, we experimented with a hybrid (logical and possibilistic) approach aiming to recognize precisely the occupant's correct and incorrect behaviour from an approximate description of actions and activities obtained from an expert's belief, unlike most approaches, which use historical sensor data from a learning phase. The output of our recognition algorithm, which takes the form of a set of behaviour hypotheses partially ordered by the possibility and necessity measures obtained from a possibility distribution, is difficult to compare with that of probabilistic approaches, since the two do not capture the same facets of uncertainty. Probability theory offers a quantitative model for randomness and indecisiveness, while possibility theory offers a qualitative model of incomplete knowledge (Dubois et al., 2004), where a probability law is difficult to obtain because of the irrational nature of the domain (cognitively impaired people). This is a situation of complete uncertainty, namely ignorance of the occupant's behaviour.

Moreover, the objectives of our respective experiments are also somewhat different. In our

case, the focus of our experiment was to know if our method is able to correctly identify the

observed occupant’s correct behaviour, which could be related to the realization of multiple

interleaved activities and to erroneous deviations from the occupant. In contrast, the experiment

of Mihailidis et al. (2007), as an example, focused only on the identification of the person’s current

activity step, while assuming that the current on-going activity is known. These two objectives and methods are quite different, which makes comparing them difficult.

Despite the heterogeneous nature of the previous works' experiments, we can draw some useful comparisons and conclusions from the evaluation of their experimental results. First, most of the previous works exploited a form of probabilistic model (Markovian or Bayesian based). These

approaches seem to give better results in recognizing an on-going activity and the current activity

step with a small plan library. For instance, the results presented by Helal et al. (2009) with

a Hidden Markov Model give a recognition accuracy of 98% in identifying the correct activity

among five candidates. Also, this approach was able to detect, in a qualitative way, the omitted

steps of those activities. The approach of Patterson et al. (2007), based on Dynamic Bayesian

Networks, was able to identify the specific on-going activity with a recognition accuracy higher

than 80%. The Markovian model proposed by Mihailidis et al. (2007) has also shown impressive recognition accuracy. However, this last approach only focused on monitoring a single activity.

In the light of these experimental results, we can draw some comparisons. First, despite

their good results, these previous probabilistic models seem to be adapted to small recognition


contexts with only a few activities. It seems much more difficult to use them on a large scale,

knowing that each activity must be handcrafted and included in a stochastic model, while conserving a normalized probability distribution. Also, the propagation of probabilities following

an observation can be quite laborious while dealing with a large activity library. One of the

main probabilistic approaches, the Hidden Markov Model, requires an exponential number of

parameters (according to the number of elements that describe the observed state) to specify the

transition and observation models, which means that we need a lot of data to learn the model,

and the inference is exponential (Murphy, 2002).

Moreover, many of these approaches assume that the sensors are not changing, the recognition then being based on the sensor events and on historical sensor data. This is a limitation, since the smart home environment is dynamic (sensors can be removed, added, or moved) and the recognition system could be deployed in different smart homes. For high-level recognition, it is easier to work with environment contexts, which are common across different smart homes when more general concepts are used to describe them.

Also, most previous models simply do not take into account the possibility of recognizing coherent behaviour composed of a few activities with their steps interleaved. They usually propose the most likely current activity according to a probability distribution on the activity set. They also tend to identify only specific types of errors (e.g. missing steps), while ignoring the others. Few approaches take into account both coherent and erroneous behaviours when they infer the observed behaviour.

Finally, we believe that the biggest problem with using classical probability theory is the inability to handle together, in a natural way, the imprecision and the uncertainty of incomplete information. One way to deal with this difficulty is to use a probabilistic interval, which means that there are two probability distributions (one for the minimum values and one for the maximum values). Our approach, based on possibility theory, seems to have more flexibility and potential, and to be more advantageous regarding these issues. For instance, by using only one possibility distribution, we can obtain possibility and necessity measures (the interval) on the hypotheses. It allows us to capture partial belief concerning the activities' execution from human experts, since this theory was initially meant to provide a graded semantics to natural language statements (Zadeh, 1978). It also allows us to manage a large quantity of activities, to take into account multiple interleaved plans, and to recognize most types of correct and incorrect behaviour. Furthermore, applications based on possibility theory are usually computationally tractable (Dubois and Prade, 2007).


5.3 Summary of Our Contribution

By looking at the previous approaches, we can note that most applications are based on a

probabilistic reasoning method for recognizing the observed behaviour. They mainly learn the

activities’ patterns by using real sensor data collected in the smart home. A potential drawback

of such an approach is that, in order to have an adequate probabilistic model, the required

amounts of historical data on activity realization can be quite large. Also, since the inhabitant

profile influences the activities’ realization, the probabilistic model should be tailored to the

inhabitant in order to properly recognize the observed behaviour. Furthermore, if we want

to deploy an assistive system in existing homes, where each home configuration can be quite

different from another, the learning process will need to be carried out again. Some approaches

propose learning erroneous patterns of some activities, but since there exist many ways to carry

out activities having errors associated with their realizations, the erroneous pattern library will

always be incomplete. Moreover, the patient's habits may change from time to time, according

to new experiences, the hour of the day, his physical and psychological condition, etc. Therefore,

the patient’s routines must be constantly re-learned, and an adaptation period is required by

the system.

Our possibilistic activity recognition model can be seen as a hybrid between logical and

probabilistic approaches. By using knowledge concerning the environment state, the action

ontology, the activity ontology, and uncertainty and imprecision associated with each level of

the recognition process, our model evaluates a set of behavioural hypotheses, which is ranked

according to the erroneous and coherent behaviour possibility distributions. Since learning the

patient profile from the sensors is a long term process, the use of possibility theory in our

recognition model allows us to capture partial belief from human experts, from its origin as a

graded semantics to natural language statements (Zadeh, 1978). The knowledge of these human

experts concerning the patient profile is incomplete and imperfect, and therefore difficult to

represent by probability theory. Our approach takes into account the recognition of coherent

(interleaved or not) and erroneous behaviour, where both behaviour types are valid hypotheses

to explain the observed realization of some activities, instead of choosing one behaviour type

by default. For instance, if some events are not in the learned activity pattern, instead of

considering only erroneous behaviour or interleaved coherent behaviour, both behaviour types

must be considered in the hypotheses that could explain the observed behaviour. In other

words, erroneous and coherent (interleaved or not) behaviours are both valid explanations for

the patient’s observed behaviour when he carries out some activities.


6 Conclusion

Despite the important progress made in the activity recognition field over the last 30 years, many

problems still occupy a significant place at a basic level of the discipline and its applications. This

paper has presented a formal framework of activity recognition based on possibility theory as the

semantic model of the agent’s behaviour. It should be emphasized that the initial framework and

our preliminary results are not meant to bring exhaustive or definitive answers to the multiple

issues raised by activity recognition. However, it can be considered as a first step toward a more

expressive ambient agent recognizer, which will facilitate the support of imprecise and uncertain

constraints inherent to smart home environments. This approach was implemented and tested

on a real data set, showing that it can provide, inside a smart home, a viable solution for the

recognition of the observed occupant’s behaviour, by helping the system to identify opportunities

for assistance. An interesting perspective for the enrichment of this model consists of using a

possibilistic description logic to represent the environment’s observed state, thus taking into

account the uncertainty, imprecision and fuzziness related to the information obtained from

the sensors and to the description (concepts, roles, assertions) of the environment. Another

perspective is to use machine learning approaches during the recognition system usage period,

which will improve the reliability of the model since the occupant’s lifestyle may change. Finally,

we clearly believe that considerable future work and large scale clinical trials will be necessary

to help evaluate the effectiveness of this model in the field.

References

Albrecht, D. W., I. Zukerman, and A. E. Nicholson (1998). Bayesian models for keyhole plan

recognition in an adventure game. User Modeling and User-Adapted Interaction 8, 5–47.

Augusto, J. C. and C. D. Nugent (Eds.) (2006). Designing Smart Homes: The Role of Artificial

Intelligence, Volume 4008 of Lecture Notes in Artificial Intelligence. Springer.

Avrahami-Zilberbrand, D. and G. A. Kaminka (2007). Incorporating observer biases in keyhole

plan recognition (efficiently!). In Proc. of the Twenty-Second AAAI Conference on Artificial

Intelligence, pp. 944–949. AAAI Press, Menlo Park, CA.

Baader, F., D. Calvanese, D. L. McGuinness, D. Nardi, and P. F. Patel-Schneider (Eds.) (2007).

The Description Logic Handbook: Theory, Implementation, and Applications (2nd ed.). Cambridge University Press.


Boger, J., J. Hoey, P. Poupart, C. Boutilier, G. Fernie, and A. Mihailidis (2006). A planning

system based on Markov Decision Processes to guide people with dementia through activities

of daily living. IEEE Transactions on Information Technology in BioMedicine 10(2), 323–333.

Bouchard, K., B. Bouchard, and A. Bouzouane (2011). Qualitative spatial activity recognition using a complete platform based on passive RFID tags: Experimentations and results. In

B. Abdulrazak, S. Giroux, B. Bouchard, H. Pigot, and M. Mokhtari (Eds.), Proc. of the 9th International Conference on Smart Homes and Health Telematics (ICOST 2011), Volume 6719

of Lecture Notes in Computer Science, pp. 308–312. Springer Berlin / Heidelberg.

Carberry, S. (2001). Techniques for plan recognition. User Modeling and User-Adapted Interaction 11, 31–48.

Casas, R., R. B. Marín, A. Robinet, A. R. Delgado, A. R. Yarza, J. Mcginn, R. Picking, and

V. Grout (2008). User modelling in ambient intelligence for elderly and disabled people. In

Proc. of the 11th ICCHP, Number 5105 in Lecture Notes in Computer Science. Springer-Verlag.

Charniak, E. and R. G. Goldman (1993). A Bayesian model of plan recognition. Artificial

Intelligence 64, 53–79.

Chen, L., C. Nugent, M. Mulvenna, D. Finlay, and X. Hong (2009). Semantic smart homes: Towards knowledge rich assisted living environments. In S. McClean, P. Millard, E. El-Darzi, and

C. Nugent (Eds.), Intelligent Patient Management, Volume 189 of Studies in Computational

Intelligence, pp. 279–296. Springer.

Christensen, H. B. (2002). Using logic programming to detect activities in pervasive healthcare.

In Proc. of the 18th International Conference on Logic Programming (ICLP 2002), Volume

2401 of Lecture Notes in Computer Science, pp. 421–436.

Cohen, P. R., C. R. Perrault, and J. F. Allen (1982). Beyond question answering. In W. G.

Lehnert and M. H. Ringle (Eds.), Strategies for Natural Language Processing, pp. 245–274.

Lawrence Erlbaum Associates.

Cook, D. J., M. Youngblood, and S. K. Das (2006). A multi–agent approach to controlling a

smart environment. In J. C. Augusto and C. D. Nugent (Eds.), Designing Smart Homes:

The Role of Artificial Intelligence, Volume 4008 of Lecture Notes in Artificial Intelligence, pp.

165–182. Springer.


Diamond, J. (2006). A report on Alzheimer’s disease and current research. Technical report,

Alzheimer Society of Canada.

Dubois, D., L. Foulloy, G. Mauris, and H. Prade (2004). Probability–possibility transformations,

triangular fuzzy sets, and probabilistic inequalities. Reliable computing 10(4), 273–297.

Dubois, D., A. HadjAli, and H. Prade (2007). A possibility theory–based approach to the

handling of uncertain relations between temporal points. International Journal of Intelligent

Systems 22(2), 157–179.

Dubois, D. and H. Prade (1988). Possibility Theory: An Approach to Computerized Processing

of Uncertainty. Plenum Press.

Dubois, D. and H. Prade (2003). Possibility theory and its applications: a retrospective and

prospective view. In The 2003 IEEE International Conference on Fuzzy Systems, pp. 5–11.

Dubois, D. and H. Prade (2005). Traiter le flou et l’incertain. Dossier Pour la Science 49,

114–119.

Dubois, D. and H. Prade (2007). Qualitative possibility theory in information processing. In

M. Nikravesh, J. Kacprzyk, and L. A. Zadeh (Eds.), Forging New Frontiers: Fuzzy Pioneers

II, Studies in Fuzziness and Soft Computing, pp. 53–83. Springer.

Geib, C. (2007). Plan recognition. In A. Kott and W. M. McEneaney (Eds.), Adversarial

Reasoning: Computational Approaches to Reading the Opponent’s Mind, pp. 77–100. Chapman

& Hall/CRC.

Geib, C. W. and R. P. Goldman (2001). Plan recognition in intrusion detection systems. In

Proceedings of DARPA Information Survivability Conference & Exposition (DISCEX) II, pp.

1–10.

Geib, C. W. and R. P. Goldman (2005). Partial observability and probabilistic plan/goal recognition. In Proceedings of the IJCAI Workshop on Modeling Others from Observations, pp. 1–6.

Giroux, S., T. Leblanc, A. Bouzouane, B. Bouchard, H. Pigot, and J. Bauchet (2009). The praxis

of cognitive assistance in smart homes. In B. Gottfried and H. Aghajan (Eds.), Behaviour

Monitoring and Interpretation – BMI – Smart Environments, Volume 3 of Ambient Intelligence

and Smart Environments, pp. 183–211. IOS Press.

Haigh, K. Z., L. M. Kiff, and G. Ho (2006). The independent lifestyle assistant: Lessons learned.

Assistive Technology 18(1), 87–106.


Heinze, C., S. Goss, and A. Pearce (1999). Plan recognition in military simulation: Incorporating

machine learning with intelligent agents. In Proceedings of IJCAI’99 Workshop on Team

Behaviour and Plan Recognition, pp. 53–64.

Helal, A., D. J. Cook, and M. Schmalz (2009). Smart home–based health platform for behavioral monitoring and alteration of diabetes patients. Journal of Diabetes Science and

Technology 3(1), 141–148.

Helal, S., W. Mann, H. El-Zabadani, J. King, Y. Kaddoura, and E. Jansen (2005). The Gator

Tech Smart House: A programmable pervasive space. Computer 38(3), 50–60.

Hong, X. and C. Nugent (2011). Implementing evidential activity recognition in sensorised

homes. Technology and Health Care 19(1), 37–52.

Hong, X., C. Nugent, M. Mulvenna, S. McClean, B. Scotney, and S. Devlin (2009). Evidential

fusion of sensor data for activity recognition in smart homes. Pervasive and Mobile Computing 5(3), 236–252.

Horrocks, I. and P. Patel-Schneider (2004). Reducing OWL entailment to description logic

satisfiability. J. of Web Semantics 1(4), 345–357.

Horrocks, I., P. F. Patel-Schneider, and F. V. Harmelen (2003). From SHIQ and RDF to OWL:

The making of a web ontology language. Web Semantics: Science, Services and Agents on the World Wide Web 1(1), 7–26.

Horrocks, I. and U. Sattler (1999). A description logic with transitive and inverse roles and role hierarchies. Journal of Logic and Computation 9(3), 385–410.

Jakkula, V. R. and D. J. Cook (2007). Using temporal relations in smart environment data for activity prediction. In Proceedings of the 24th International Conference on Machine Learning, pp. 1–4.

Kautz, H. A. (1987). A Formal Theory of Plan Recognition. Ph.D. thesis, University of Rochester.

Kautz, H. A. (1991). A formal theory of plan recognition and its implementation. In J. F. Allen, H. A. Kautz, R. N. Pelavin, and J. D. Tenenberg (Eds.), Reasoning About Plans, Chapter 2, pp. 69–126. Morgan Kaufmann.

Lesh, N., C. Rich, and C. L. Sidner (1999). Using plan recognition in human–computer collaboration. In Proceedings of the Seventh International Conference on User Modeling, Secaucus, NJ, USA, pp. 23–32. Springer-Verlag New York, Inc.

Lovejoy, W. S. (1991). A survey of algorithmic methods for partially observed Markov decision processes. Annals of Operations Research 28, 47–66.

Mao, W. and F. Gratch (2004). A utility–based approach to intention recognition. In AAMAS 2004 Workshop on Agent Tracking: Modeling Other Agents from Observations, pp. 1–6.

Mihailidis, A., J. Boger, M. Canido, and J. Hoey (2007). The use of an intelligent prompting system for people with dementia: A case study. ACM Interactions 14(4), 34–37.

Mihailidis, A., J. Boger, T. Craig, and J. Hoey (2008). The COACH prompting system to assist older adults with dementia through handwashing: An efficacy study. BMC Geriatrics 8(28), 1–18.

Murphy, K. P. (2002). Dynamic Bayesian Networks: Representation, Inference and Learning. Ph.D. thesis, University of California, Berkeley.

Patterson, D. J., O. Etzioni, D. Fox, and H. Kautz (2002). Intelligent ubiquitous computing to support Alzheimer's patients: Enabling the cognitively disabled. In Proceedings of UbiCog '02: First International Workshop on Ubiquitous Computing for Cognitive Aids, Göteborg, Sweden, pp. 1–2.

Patterson, D. J., D. Fox, H. Kautz, and M. Philipose (2005). Fine–grained activity recognition by aggregating abstract object usage. In Proceedings of the IEEE 9th International Symposium on Wearable Computers (ISWC).

Patterson, D. J., H. A. Kautz, D. Fox, and L. Liao (2007). Pervasive computing in the home and community. In J. E. Bardram, A. Mihailidis, and D. Wan (Eds.), Pervasive Computing in Healthcare, pp. 79–103. CRC Press.

Philipose, M., K. P. Fishkin, M. Perkowitz, D. J. Patterson, D. Fox, H. Kautz, and D. Hähnel (2004). Inferring activities from interactions with objects. IEEE Pervasive Computing: Mobile and Ubiquitous Systems 3(4), 50–57.

Phua, C., V. Foo, J. Biswas, A. Tolstikov, A. Aung, J. Maniyeri, W. Huang, M. That, D. Xu, and A. Chu (2009). 2–layer erroneous–plan recognition for dementia patients in smart homes. In Proceedings of HealthCom09, pp. 21–28.

Pigot, H., A. Mayers, and S. Giroux (2003). The intelligent habitat and everyday life activity support. In Proceedings of the 5th International Conference on Simulations in Biomedicine, Slovenia, pp. 507–516.

Pineau, J., M. Montemerlo, M. Pollack, N. Roy, and S. Thrun (2003). Towards robotic assistants in nursing homes: Challenges and results. Robotics and Autonomous Systems 42, 271–281.

Pollack, M., L. Brown, D. Colbry, C. E. McCarthy, C. Orosz, B. Peintner, S. Ramakrishnan, and I. Tsamardinos (2003). Autominder: An intelligent cognitive orthotic system for people with memory impairment. Robotics and Autonomous Systems 44, 273–282.

Pollack, M. E. (2005). Intelligent technology for an aging population: The use of AI to assist elders with cognitive impairment. AI Magazine 26(2), 9–24.

Poole, D. (1993). Probabilistic Horn abduction and Bayesian networks. Artificial Intelligence 64(1), 81–129.

Ramos, C., J. Augusto, and D. Shapiro (2008). Ambient intelligence–the next step for artificial intelligence. IEEE Intelligent Systems 23(2), 15–18.

Rao, A. S. (1994). Means–end plan recognition: Towards a theory of reactive recognition. In Proceedings of the 4th International Conference on Principles of Knowledge Representation and Reasoning, pp. 497–508. Morgan Kaufmann.

Reisberg, B., S. Ferris, M. de Leon, and T. Crook (1982). The global deterioration scale for assessment of primary degenerative dementia. American Journal of Psychiatry 139(9), 1136–1139.

Roy, P., B. Bouchard, A. Bouzouane, and S. Giroux (2009). A hybrid plan recognition model for Alzheimer's patients: Interleaved–erroneous dilemma. Web Intelligence and Agent Systems: An International Journal 7(4), 375–397.

Schmidt, C. F., N. S. Sridharan, and J. L. Goodson (1978). The plan recognition problem: An intersection of psychology and artificial intelligence. Artificial Intelligence 11, 45–83.

Schmidt-Schauß, M. and G. Smolka (1991). Attributive concept descriptions with complements. Artificial Intelligence 48(1), 1–26.

Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton, NJ: Princeton University Press.

Sirin, E., B. Parsia, B. C. Grau, A. Kalyanpur, and Y. Katz (2007). Pellet: A practical OWL–DL reasoner. Journal of Web Semantics 5(2), 51–53.

Szewcyzk, S., K. Dwan, B. Minor, B. Swedlove, and D. Cook (2009). Annotating smart environment sensor data for activity learning. Technology and Health Care 17(3), 161–169.

Tolstikov, A., J. Biswas, C.-K. Tham, and P. Yap (2008). Eating activity primitives detection – a step towards ADL recognition. In Proceedings of the 10th IEEE International Conference on e-Health Networking, Applications and Services (HEALTHCOM'08), pp. 35–41.

Zadeh, L. A. (1978). Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets and Systems 1(1), 3–28.

Table 1: Example of a prediction distribution πpre_t and a recognition distribution πrec_t on the action ontology A.

Action                  πpre_t   πrec_t
TakeFood                 0.9      0.2
TakeCookie               0.8      0.1
TakePasta                0.7      0.2
OpenDoor                 0.2      0.8
OpenDoorInKitchen        0.2      0.8
OpenPantryDoor           0.1      0.7
OpenRefrigeratorDoor     0.1      0.7
MAX                      0.9      0.8

For the subset of Take actions the associated measures are (Πpre_t, Npre_t) = (0.9, 0.7), and for the subset of Open actions (Πrec_t, Nrec_t) = (0.8, 0.6); the remaining measure cells are empty (−).
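The following minimal Python sketch illustrates how such possibility and necessity measures can be derived from a possibility distribution, using the standard definitions Π(A) = max over a in A of π(a) and N(A) = 1 − Π(complement of A) from Zadeh (1978) and Dubois and Prade (1988). The function names are illustrative, and the ontology is truncated here to the seven actions of Table 1, so the computed necessity need not match the value reported in the table, which is defined over the full ontology; this is a sketch of the underlying formulas, not the paper's implementation.

    # Illustrative sketch (not the paper's implementation): possibility and
    # necessity measures over an action ontology, following Zadeh (1978)
    # and Dubois and Prade (1988).

    def possibility(pi, subset):
        """Pi(A): maximum of the distribution over the actions in A."""
        return max(pi[a] for a in subset)

    def necessity(pi, subset):
        """N(A) = 1 - Pi(complement of A), within the known ontology."""
        complement = set(pi) - set(subset)
        return 1.0 - possibility(pi, complement) if complement else 1.0

    # Prediction distribution pi_pre_t from Table 1 (assuming, for this
    # sketch only, that the ontology contains just these seven actions).
    pi_pre = {
        "TakeFood": 0.9, "TakeCookie": 0.8, "TakePasta": 0.7,
        "OpenDoor": 0.2, "OpenDoorInKitchen": 0.2,
        "OpenPantryDoor": 0.1, "OpenRefrigeratorDoor": 0.1,
    }

    take_actions = {"TakeFood", "TakeCookie", "TakePasta"}
    print(possibility(pi_pre, take_actions))  # 0.9
    print(necessity(pi_pre, take_actions))    # 0.8 under this truncated ontology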

Table 2: Example of a coherent realization path distribution πPathC,t and an erroneous realization path distribution πPathE,t on the realization paths for the activity GetCookie.

             p1     p3     p4     p5
πPathC,t     0.8    0.9    0.9    0.8
πPathE,t     0.3    0.1    0.2    0.4
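In the same illustrative spirit, the path distributions of Table 2 can be aggregated by taking the maximum over the realization paths of an activity, which is the standard possibilistic aggregation; the variable names below are ours, and the paper's recognition engine may combine these distributions differently.

    # Illustrative sketch: comparing coherent vs. erroneous realization
    # hypotheses for GetCookie from the per-path distributions of Table 2.
    pi_coherent = {"p1": 0.8, "p3": 0.9, "p4": 0.9, "p5": 0.8}
    pi_erroneous = {"p1": 0.3, "p3": 0.1, "p4": 0.2, "p5": 0.4}

    # Possibility of each hypothesis: maximum over its realization paths.
    coherent = max(pi_coherent.values())    # 0.9
    erroneous = max(pi_erroneous.values())  # 0.4

    # The coherent hypothesis dominates, so a recognition agent would
    # favour a coherent realization of GetCookie given the observations.
    print(coherent, erroneous)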


Figure 1: The four-layered architecture of an activity recognition system in a smart home.

Figure 2: Example of the observed action selection process.

Figure 3: Temporal distributions for WatchTv.


Figure 4: Example of a partial activity realization path.

Figure 5: Simplified Assistance and Activity Recognition System.


[Figure 6 chart: y-axis "Behaviour Recognition Accuracy" (0–100); categories: Best Hypothesis, Best Hypotheses Subset, Generic Error; series: Coherent Behaviours, Erroneous Behaviours, Overall.]

Figure 6: Recognition accuracy of behaviour recognition according to the best plausible hypothesis, the best hypotheses subset, and the generic error approaches.

[Figure 7 chart: x-axis "Number of Observed Actions" (1–32); y-axis "Runtime in msec" (60–180).]

Figure 7: Computational efficiency per observed action for the scenario realizations.
