
Conviction Model for Incident Reaction Architecture Monitoring based on Automatic Sensors Alert Detection

Christophe Feltus and Djamel Khadraoui

Public Research Centre Henri Tudor, 29, avenue John F. Kennedy, L-1855 Luxembourg-Kirchberg, Luxembourg

[email protected]

ABSTRACT

Dynamic distributed wireless networks constitute a critical pillar of the information system. Nonetheless, the openness of these networks makes them very sensitive to external attacks such as DoS. Being able to monitor the conviction level of network components and to react quickly once an incident is detected is a crucial challenge for their survival. To face these problems, research tends to evolve towards more dynamic solutions that are able to detect and validate network anomalies and to adapt themselves in order to recover a secure configuration. In this position paper, we complete our previous works and make the assignment of functions to agents more contextual. Our approach considers the concept of agent responsibility, which we assign dynamically to agents and exploit in order to analyze the level of “conviction” in a component. In this paper, we provide an insight into the architecture without depicting either the assignment mechanism or the conviction calculation.

Categories and Subject Descriptors

H.2.7: Security, Integrity, and Protection.

General Terms

Management, Measurement, Performance, Design, Reliability, Experimentation, Security, Standardization, Verification.

Keywords


1. INTRODUCTION

Wide-area wireless data services are provided by heterogeneous entities which have to communicate in order to forward information from A to B. In our case, we consider the security of this kind of wireless overlay network. To ensure the security of the information system, entities have to collaborate in order to detect, forward, make decisions and react in case of attack.

The architecture proposed in the ReD project [1] defines an advanced single management console for security incident detection and reaction management, as part of a comprehensive Secure Information Management (SIM) system. Despite its capacity to detect [15] and characterize attacks, react accurately and automatically, and manage network equipment policy to protect the infrastructure, no mechanism has been defined to meet the requirement for autonomous reaction and dynamic self-reconfiguration of the architecture. Each entity has a responsibility, e.g. detect an intrusion, forward the alert if necessary, aggregate and correlate the information from possibly multiple sources, decide to apply a new security policy, and disseminate the new policy. But what is the behavior to adopt if an entity becomes malicious after an attack? Which other entity will take over its responsibility? And how can we ensure that this alternative entity is the most appropriate to take the responsibility?

Our objective is to extend the solution proposed in [2] with (i) a set of policies that specifies and represents the responsibilities assigned to agents, and (ii) a conviction model able to give an assurance value based on the verification of responsibility fulfillment by the assigned agent.

The paper is structured as follows: Section II details the ReD architecture and explains how agents interact in order to detect incidents and react accordingly. Section III presents the responsibility model and its instantiation for our use-case specification. Section IV links the responsibility model to a conviction model, evaluates the responsibility of the network components at a period of time (p) and provides a conviction value for each of them. Section V demonstrates the conceptual validity through a lab case deployment, and the last section concludes the paper and introduces future work.

2. ReD ARCHITECTURE

The reaction architecture presented in this section is based on the ReD project [1]. The ReD (Reaction after Detection) project defines and designs a solution to enhance the detection/reaction process and improve the overall resilience of IP networks. ReD architectures are built around a set of four types of responsibilities assigned to agents:

PEP (Policy Enforcement Point) enforces, outside the ReD node, the security policies provided by the PDP.

PIE (Policy Instantiation Engine) is the agent that receives information about attacks from the ACE and instantiates new security policies to react to the attack.

PDP (Policy Decision Point) receives the new security policies defined by the PIE and deploys them at the enforcement points (PEP).

ACE (Agent Correlation Engine) is the agent in charge of receiving alerts coming from network nodes, correlating the information and forwarding confirmed alerts to the PIE.


Figure 1. ReD node Architecture mapped with BARWAN case study [14]

Fig. 1 illustrates the ReD architecture applied to the BARWAN¹ use-case [2]. The flow is supposed to begin with an alert detected by the automatic sensors (termed IDS). This alert is sent to the ACE of BuildingA (the BuildingA_ACE agent), which does or does not confirm the alert to the PIE. Afterwards, the PIE decides to apply new policies or to forward the alert to an ACE from a higher layer (upper ACE). The PIE agent sends the policies to the PDP agent, which decides which PEP is able to implement them in terms of rules or scripts on devices (laptop, InfoPad server, fileserver, etc.). The PDP agent then forwards the new policy to the PEP agent, which knows how to transform a policy into an understandable rule or script for the component. Fig. 2 presents a more detailed view of the architecture of the use case.

¹ Bay Area Research Wireless Access Network project, conducted at the University of California at Berkeley.
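To make the flow of Fig. 1 concrete, the following minimal Python sketch traces an alert through the four responsibilities. It is an illustration only, not the ReD implementation: all class names, method names and the severity threshold are assumptions introduced for the example.

```python
# Illustrative sketch (not the ReD implementation) of the alert flow of Fig. 1:
# IDS alert -> ACE (correlate/confirm) -> PIE (instantiate policy)
# -> PDP (choose enforcement points) -> PEP (translate policy into device rules).

class PEP:
    def enforce(self, policy, device):
        rule = f"firewall-like rule derived from '{policy}'"   # translate the policy into a device rule/script
        print(f"PEP: enforcing on {device}: {rule}")

class PDP:
    def __init__(self, peps):
        self.peps = peps                                       # PEPs reachable by this decision point
    def deploy(self, policy):
        for device, pep in self.peps.items():                  # decide which PEP implements the policy
            pep.enforce(policy, device)

class PIE:
    def __init__(self, pdp, upper_ace=None):
        self.pdp, self.upper_ace = pdp, upper_ace
    def handle_confirmed_alert(self, alert):
        if alert["severity"] >= 5:                             # instantiate a new reaction policy locally
            self.pdp.deploy(f"block traffic from {alert['source']}")
        elif self.upper_ace:                                   # or escalate to an ACE of a higher layer
            self.upper_ace.receive(alert)

class ACE:
    def __init__(self, pie):
        self.pie = pie
    def receive(self, alert):
        if alert.get("confirmed_by_correlation", True):        # correlate, then confirm the alert to the PIE
            self.pie.handle_confirmed_alert(alert)

# Example run: an IDS in BuildingA reports a suspected DoS source.
pep = PEP()
pdp = PDP({"fileserver": pep, "InfoPad server": pep})
building_a_ace = ACE(PIE(pdp))
building_a_ace.receive({"source": "10.0.0.42", "severity": 7})
```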

As previously explained, ReD specifications are embedded in reaction policies managed at the multi-agent system (MAS) management layer. These policies specify the responsibility of each agent on the network and their evolution according to the reaction. The formalization of the agent responsibilities has been achieved according to the responsibility model presented in the next section.

3. AGENT RESPONSIBILITY

3.1 Responsibility Model

In a non-crisis context, agents are assigned to responsibilities such as PEP, PIE, ACE, etc. By analyzing for instance the activity of monitoring the fileserver (see Fig. 2), we observe e.g. that the PEP concerned by that activity has the responsibility to collect the log file on the firewall, to make a basic correlation between these values and the previous log values, and to report this analysis to the ACE in case of a suspected alert. In order to perform the monitoring activity, the PEP is assigned obligations to achieve some tasks and it gains in parallel the access rights needed to perform these tasks. When a crisis occurs, for instance a DoS attack, one or more PEP agents can be isolated from the rest of the network, the normal monitoring rules and procedures no longer work as usual, and it is required to change the responsibility of the agents. For instance, in the above case, other agents have to fulfill the responsibilities of the isolated PEP.
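As stated in the abstract, the assignment mechanism itself is not depicted in this paper; the following hypothetical sketch merely illustrates, under assumed names and a simplistic selection rule, how a MAS management layer could transfer the responsibility of an isolated PEP to another agent that still reaches the monitored component.

```python
# Hypothetical sketch only: the assignment mechanism is out of scope of the paper.
# It shows one way a MAS management layer could delegate the responsibility of an
# isolated PEP to another candidate agent that can still reach the component.

def reassign_responsibility(responsibility, isolated_agent, candidates, reachable):
    """Pick the first candidate agent that still reaches the monitored component."""
    for agent in candidates:
        if agent != isolated_agent and reachable(agent, responsibility["component"]):
            responsibility["assigned_to"] = agent      # delegation: transfer of the assignment
            return agent
    return None                                        # no suitable agent found

pep_monitoring = {"name": "monitor fileserver", "component": "fileserver", "assigned_to": "PEP_1"}
new_holder = reassign_responsibility(
    pep_monitoring, "PEP_1", ["PEP_2", "PEP_3"],
    reachable=lambda agent, comp: agent == "PEP_3")    # assume only PEP_3 still reaches the fileserver
print(new_holder)                                      # -> PEP_3
```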

Figure 2. Synoptical ReD Architecture

In general, the definition of the agent responsibility is mostly incomplete. Most of the architectures only consider the agent against the outcome that it has to produce. Sometimes, advanced solutions integrate the inputs that those agents require for producing the outcome. We define the responsibility as a state assigned to an agent to signify its obligations concerning a task, its accountabilities regarding these obligations, and the rights and capabilities necessary to perform it. In [3] and [12] we have proposed an initial responsibility model that can be used to depict the agent responsibility. That responsibility model has been upgraded in order to integrate the following concepts:

Figure 3. Responsibility model for conviction sharing

The assignment is the action of linking an agent to a responsibility. The delegation process is the transfer of an agent’s responsibility assignment to another agent.

The accountability is a duty to justify the performance of a task to someone else under threat of sanction [5]. Accountability is a type of obligation to report the achievement, maintenance or avoidance of some given state to an authority and, as a consequence, is associated with an obligation. Accountability contributes to generating or removing trust depending on the accountability outcomes [20].

The obligation is the concept that appears most frequently, in the literature [4] as well as in industrial and professional frameworks. An obligation is a duty which links a responsibility with a task that must be performed. We define a task as an action to use or transform an object.

The capability describes the requisite qualities, skills or resources necessary to perform a task. Capability may be declined through knowledge or know-how possessed by the agent, such as its ability to make decisions, its processing time, its faculty to analyze a problem, and its position on the network.

The right is a common component but is not systematically embedded in all frameworks. A right encompasses facilities required by an agent to fulfill its obligations, e.g. the access rights that the agent obtains once it is assigned responsible.

The commitment pledged by the agent related to this assignment represents its required engagement to fulfill a task and the conviction that it does so in respect of good practices. Commitment in MAS has already been the subject of much research [6]. The semantic analysis of commitment in [7] and [8] advocates considering trust between agents as a pragmatic commitment antecedent [1].

We consider the trust in an agent as the reliance that this agent acts as requested. For didactic reasons, we consider in this paper that a trust level of 10 is high and a trust level of 0 is low.
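As an illustration of how these concepts fit together, the following minimal sketch represents a responsibility as a simple record; the Python dataclasses and field names are assumptions made for this example and are not part of the ReD specification.

```python
# Minimal sketch, assuming a record-based representation of the responsibility
# model of Fig. 3; all class and field names are illustrative.
from dataclasses import dataclass

@dataclass
class Obligation:
    ident: str                  # e.g. "O1"
    task: str                   # the task the obligation binds to
    required_capabilities: set  # capability identifiers needed for the task
    required_rights: set        # right identifiers needed for the task

@dataclass
class Responsibility:
    name: str                   # e.g. "PEP"
    obligations: list           # Obligation instances
    accountabilities: list      # what must be reported, and to which authority
    min_trust: float            # minimum commitment/trust level expected
    assigned_to: str = ""       # agent currently holding the responsibility
```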

3.2 Agent Responsibility Specifications

Based on the responsibility model defined above, we may instantiate the responsibility model for each responsibility of the agents within the network. Due to the limited size of the paper, only the four most important meta-concepts are instantiated: the obligations concerning the task, the capabilities, the rights, and the commitment represented as a trust value. Table 1 provides these concepts instantiated for each responsibility of the network. The two last columns propose a mapping of the rights and capabilities which are necessary for each obligation.

For the PEP, we observe that the responsibility includes obligations such as the obligation “to retrieve the logs from the component it monitors” (O1), “to provide an immediate reaction if necessary” (O2), etc. In order to perform these obligations, it must have the capabilities “to be on the same network as the component it controls” (C1), “to be able to communicate with the PDP” (C2), “to be able to communicate with the facilitator agent” (C3), and so on. It also must have the rights “to read the log file on the concerned network component” (R1), “to write the log in a central logs database” (R2), and so on.
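Reusing the illustrative dataclasses of the previous sketch, the PEP line of Table 1 could then be instantiated roughly as follows (identifiers O1-O4, C1-C7 and R1-R5 refer to Table 1); this remains a sketch, not the actual policy encoding used in ReD.

```python
# Instantiation sketch of the PEP responsibility along the lines of Table 1,
# reusing the illustrative Obligation and Responsibility dataclasses above.
pep_responsibility = Responsibility(
    name="PEP",
    obligations=[
        Obligation("O1", "retrieve the logs from the monitored component",
                   required_capabilities={"C1", "C4", "C6", "C7"},
                   required_rights={"R1", "R2", "R4"}),
        Obligation("O2", "provide an immediate reaction if necessary",
                   required_capabilities={"C1", "C2", "C4"},
                   required_rights={"R3"}),
        Obligation("O3", "ask the facilitator for the addresses of the PDP and ACE",
                   required_capabilities={"C3"},
                   required_rights=set()),
        Obligation("O4", "report the incident to the ACE in a secure way",
                   required_capabilities={"C5", "C6", "C7"},
                   required_rights={"R5"}),
    ],
    accountabilities=["report the monitoring analysis to the ACE"],
    min_trust=3.365,   # reference trust level of the PEP in Table 1
)
```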

4. MONITORING NEEDS BASED CONVICTION MODEL

Commonly an agent is considered as an encapsulated computer system [13] that is situated in some environment and that is capable of flexible, autonomous action in that environment in order to meet its design objectives [9]. As agents have control over their own behaviour, they must cooperate and negotiate with each other to achieve their goals [10]. The convergence of these agents’ properties and distributed systems behaviour makes the multi-agent architecture an appropriate mechanism to evaluate the security (conviction) of critical infrastructures run by distributed systems [11]. Nonetheless, for such multi-agent systems one would expect each involved agent to be able to meet its assigned responsibilities in order to provide efficient monitoring of the security [14] of a network. Indeed, this is an intrinsic characteristic of the monitoring system which should be guaranteed if one is to gain a reliable insight into a network security posture. The common approach, which is to put more emphasis on the well-functioning of the network itself, needs to be augmented with a critical evaluation of the monitoring system to ensure the reliability of its operations. This is relevant since links between entities that are part of the monitoring system may break, and agents with the task of conducting the verification and measurements may fail to fulfill their tasks and obligations for a range of reasons including:

- Erroneous assignment of their rights or alteration of the latter during runtime [16].
- Agents’ capabilities may be insufficient for accomplishing a task assigned to them.
- An accumulation of tasks for an agent may result in an overload and subsequently a failure to meet some of its responsibilities.
- And so forth.

Table 1: Responsibilities instantiation

PEP (level of trust T = 3.365)
  Obligations (with the capabilities and rights mapped to each obligation):
    O1: Must retrieve the logs from the component it monitors — capabilities C1, C4, C6, C7; rights R1, R2, R4.
    O2: Must provide an immediate reaction if necessary — capabilities C1, C2, C4; rights R3.
    O3: Must communicate with the facilitator in order to get the address of the other components (PDP, ACE) — capability C3.
    O4: Must report the incident to the ACE in a secure way — capabilities C5, C6, C7; rights R5.
  Capabilities:
    C1: Is on the same network as the component to control.
    C2: Be able to communicate with the PDP.
    C3: Be able to communicate with the facilitator agent.
    C4: Have enough computing resources to monitor the component to control.
    C5: Be able to communicate with the MAS management layer.
    C6: Must be able to encrypt data.
    C7: Be able to communicate securely with the ACE.
  Rights:
    R1: Allowed to read the log file on the concerned network component.
    R2: Allowed to write logs in the central logs database.
    R3: Allowed to read the policy in the MAS management layer.
    R4: Allowed to read and write in the alert database.
    R5: Allowed to read the public key database.

PDP (level of trust T = 4.897)
  Obligations:
    O1: Based on the incident report from the PEP, must decide which reaction policy is appropriate to be deployed by the PEP — capabilities C1, C2; rights R1, R2, R3.
    O2: Must communicate with the facilitator to get the address of the other components (PDP, PIE, facilitator) and make backups — capabilities C1, C3, C4; rights R1, R2.
  Capabilities:
    C1: Has a fast bandwidth.
    C2: Has high CPU resources.
    C3: Has a central position on the network.
    C4: Be able to perform backups of the policy rules.
  Rights:
    R1: Allowed to read the yellow pages database.
    R2: Allowed to read the white pages database.
    R3: Allowed to read the policy rules status.

ACE (level of trust T = 8.116)
  Obligations:
    O1: Must communicate with the PEP or other ACEs to receive alert messages — capabilities C2, C3, C4; rights R4.
    O2: Must correlate the alerts from different PEPs or from an inferior ACE — capability C1; rights R2, R3.
    O3: Must confirm the alert to the related PIE — capabilities C2, C3; rights R3.
    O4: Must forward the alert to the upper ACE — capabilities C2, C3, C5; rights R1, R4.
  Capabilities:
    C1: Has high CPU resources in order to make correlations.
    C2: Has a central position on the network.
    C3: Be able to communicate with all agents.
    C4: Must be able to decrypt data from the PEP.
    C5: Must be able to encrypt data to the upper ACE.
  Rights:
    R1: Allowed to read the policy rules status.
    R2: Allowed to read the alert database.
    R3: Allowed to write in the confirmed alert database.
    R4: Allowed to read the public key database.

Facilitator (level of trust T = 5.099)
  Obligations:
    O1: Must provide the IP addresses of the requested components — capabilities C1, C2; rights R1, R2, R3.
    O2: Make a mapping between the component name and the IP address and keep a backup — capability C3; rights R1, R2, R3.
  Capabilities:
    C1: Has a position in which it is always available.
    C2: Has a significant bandwidth depending on the network size.
    C3: Be able to perform backups of the white pages and yellow pages databases.
  Rights:
    R1: Allowed to read and write to the white pages services database.
    R2: Allowed to read and write to the yellow pages services database.
    R3: Allowed to read information about the topology of the network.

This calls for a clear definition and specification of the conditions under which an entity that is part of the monitoring system [17] can, with reasonable evidence, be expected to fulfill a required task. In other words, we need to provide the basis for gaining a justifiable conviction that an entity can meet its monitoring responsibilities.

4.1 Predetermination for Agents’ Responsibilities Fulfillment

Although a plethora of conditions may need to be fulfilled for expecting an agent to meet its obligations, it is imperative that the following are met:

Rights: the set of rights entrusted to the agent should be such that they enable satisfaction of its obligations.

Capability: the overall load assigned to an agent should remain within its capability; moreover, such capability should enable it to fulfill its obligations.

Level of trust: should be higher than or equal to the minimum level required, as specified in Table 1.

Based on the above requirements, the conviction that an agent will fulfill its obligation is based on the following:

Conviction “A” for fulfillment of obligation “O” by an agent with rights “R”, capabilities “C” and trust “T”: A_O(R, C, T) (according to the assurance description from [11]):

A_O(R, C, T) = 1 if (R_O ⊆ R) ∧ (C_O ⊆ C) ∧ (T_p ≥ T)   (1)

Otherwise:

A_O(R, C, T) = 0   (2)

With:
R: the current rights of the agent;
C: the current capabilities of the agent;
R_O: the set of rights necessary for fulfilling obligation O;
C_O: the set of capabilities necessary for fulfilling obligation O;
R_O ⊆ R if every right R_O,i of R_O belongs to R;
C_O ⊆ C if every capability C_O,i of C_O belongs to C;
T_p: the trust at period p.

Relations (1) and (2) imply that the satisfaction of an obligation can only be guaranteed if the sets of rights and capabilities required for that obligation are both included in the rights and capabilities currently allocated to the agent, and if the trust level at period p (T_p) is higher than or at least equal to the reference T. As an illustration, Table 2 provides the set of rights, capabilities and trust possessed by the agents assigned to responsibilities on the network at a period (p). The table reveals for instance that for the PEP to be able to fulfill obligation “O1: Must retrieve the logs from the component it monitors”, it should be on the same network as the component to control (C1), have enough computing resources to monitor the component to control (C4), be able to encrypt data (C6) and be able to communicate securely with the ACE (C7). The PEP is also entrusted with a set of rights to satisfy O1. These include “R1: is allowed to read the log file on the concerned network component”, “R2: is allowed to write logs in the central logs database” and “R4: is allowed to read and write in the alert database”. The minimum level for the trust parameter expected from the PEP is set to 3.
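A minimal sketch of relations (1) and (2), assuming rights and capabilities are handled as plain sets of identifiers and trust as a number; the function name and signature are illustrative only.

```python
# Sketch of the conviction test of relations (1) and (2): conviction is 1 only
# when the required rights and capabilities are included in the agent's current
# ones and the current trust T_p reaches the reference level T.
def conviction(required_rights, required_caps, ref_trust,
               current_rights, current_caps, current_trust):
    ok = (required_rights <= current_rights and     # R_O ⊆ R
          required_caps <= current_caps and         # C_O ⊆ C
          current_trust >= ref_trust)               # T_p ≥ T
    return 1 if ok else 0

# PEP obligation O2 at period p (Table 2): capability C2 is missing, so conviction is 0.
print(conviction({"R3"}, {"C1", "C2", "C4"}, 3,
                 {"R3"}, {"C1", "C4"}, 3))          # -> 0
```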

5. DEPLOYMENT LAB CASE CONCEPTUAL VALIDATION

Based on the specifications of the responsibilities associated with each agent provided in Table 1, one can assess whether the current rights, capabilities and trust level of each agent are sufficient to fulfill a given obligation. Considering for instance Table 2, the current deployment of ReD’s agents reveals that the four agents PEP, PDP, ACE and facilitator, although their level of trust is sufficient, will not be able to fulfill their obligations O2, O1, O1 and O2 respectively. In the case of the PEP, the obligation to provide an immediate reaction is hampered by the fact that the PEP lacks the capability to communicate with the PDP (C2). This means that no appropriate policy can be conveyed to the PEP and implemented in case of an anomaly within the system.

Table 2: Rights and capabilities of monitoring agents at period p

For each obligation, the table lists the agent’s current capabilities, its current rights, and the resulting conviction of obligation fulfillment, together with the agent’s current level of trust.

PEP (level of trust T = 3)
  O1: Must retrieve the logs from the component it monitors — current capabilities C1, C4, C6, C7; current rights R1, R2, R4; conviction 1.
  O2: Must provide an immediate reaction if necessary — current capabilities C1, C4; current rights R3; conviction 0.
  O3: Must communicate with the facilitator in order to get the address of the other components (PDP, ACE) — current capability C3; conviction 1.
  O4: Must report the incident to the ACE in a secure way — current capabilities C5, C6, C7; current rights R5; conviction 1.

PDP (level of trust T = 4)
  O1: Based on the incident report from the PEP, must decide which reaction policy is appropriate to be deployed by the PEP — current capabilities C1, C2; current rights R1, R2; conviction 0.
  O2: Must communicate with the facilitator to get the address of the other components (PDP, PIE, facilitator) and make backups — current capabilities C1, C3, C4; current rights R1, R2; conviction 1.

ACE (level of trust T = 8)
  O1: Must communicate with the PEP or other ACEs to receive alert messages — current capabilities C2, C3; current rights R4; conviction 0.
  O2: Must correlate the alerts from different PEPs or from an inferior ACE — current capability C1; current rights R2, R3; conviction 1.
  O3: Must confirm the alert to the related PIE — current capabilities C2, C3; current rights R3; conviction 1.
  O4: Must forward the alert to the upper ACE — current capabilities C2, C3, C5; current rights R1, R4; conviction 1.

Facilitator (level of trust T = 5)
  O1: Must provide the IP addresses of the requested components — current capabilities C1, C2; current rights R1, R2, R3; conviction 1.
  O2: Make a mapping between the component name and the IP address and keep a backup — no current capability; current rights R1, R2, R3; conviction 0.

Obligation O1 of the PDP also suffers from the lack of R3, which gives the PDP the right to actually read the policy status and deploy a problem-solving mechanism. The ACE, as the agent responsible for receiving alerts from nodes within the network, cannot currently meet its obligation O1, which is about communicating with the PEP and other ACEs to receive alerts, since it cannot decrypt the messages coming from the PEP (C4). The facilitator’s obligation to keep a backup (O2) can hardly be satisfied given that the required capability C3 is currently not there.
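Under the same assumptions, the lab case assessment of this section can be reproduced by running the subset test of relations (1) and (2) over the required sets of Table 1 and the current sets of Table 2 (trust is omitted here since the text above considers it sufficient); the dictionary layout below is an illustrative assumption.

```python
# Sketch of the lab-case assessment: comparing the required capabilities/rights
# of Table 1 with the current ones of Table 2 flags one unmet obligation per agent.
deployment = {   # (required_caps, required_rights, current_caps, current_rights)
    ("PEP", "O2"):         ({"C1", "C2", "C4"}, {"R3"},             {"C1", "C4"}, {"R3"}),
    ("PDP", "O1"):         ({"C1", "C2"},       {"R1", "R2", "R3"}, {"C1", "C2"}, {"R1", "R2"}),
    ("ACE", "O1"):         ({"C2", "C3", "C4"}, {"R4"},             {"C2", "C3"}, {"R4"}),
    ("Facilitator", "O2"): ({"C3"},             {"R1", "R2", "R3"}, set(),        {"R1", "R2", "R3"}),
}
for (agent, obligation), (req_caps, req_rights, cur_caps, cur_rights) in deployment.items():
    if not (req_caps <= cur_caps and req_rights <= cur_rights):
        missing = (req_caps - cur_caps) | (req_rights - cur_rights)
        print(f"{agent} cannot fulfill {obligation}: missing {sorted(missing)}")
# Expected output: PEP misses C2, PDP misses R3, ACE misses C4, Facilitator misses C3.
```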

6. CONCLUSIONS

Critical infrastructures are more and more present and need to be seriously managed and monitored given the increasing number of threats. This paper presents a solution to automatically react after an incident on a wireless network based on a MAS architecture. The system, initially based on static assignments of functions to agents, needed more dynamicity in order to stay aligned with newly arising risks.

In this position paper, we firstly enhance our previous works by providing a conceptual representation of the agent responsibilities. Our solution exploits the concept of agents’ obligations regarding tasks, the concepts of right and capability required to satisfy an obligation, and the concept of trust that represents the reliance that an agent acts as requested. Secondly, based on that definition of the agents’ responsibilities, a conviction level can be estimated in order to determine the confidence that the agent can meet its responsibilities. In the event of such a conviction level being low, decisions can be made as to whether to shift the fulfillment of such a responsibility to a different agent.

The architecture that we exploit to demonstrate the enhanced reaction mechanism relies on ReD, which is being tested and currently deployed in our lab case. Practically, ReD defines the structural basis for the alert mechanism that we have exploited in the paper in order to illustrate the BARWAN project. Additional lab case demonstrations are currently running and more formal results are being generated within the CockpitCI project [18, 19]. The outcomes of these field experiments already underline the accuracy of the expected conviction model outcomes and strengthen the case for recalculating the assurance value from a trust function perspective.

7. ACKNOWLEDGMENTS

This research is supported and funded by the European FP7-Security project “CockpitCI”, Cybersecurity on SCADA: risk prediction, analysis and reaction tools for Critical Infrastructures.

8. REFERENCES

[1] Gateau, B., Khadraoui, D., Feltus, C., "Multi-agents system service based platform in telecommunication security incident reaction," Global Information Infrastructure Symposium (GIIS '09), pp. 1-6, 23-26 June 2009. doi: 10.1109/GIIS.2009.5307083

[2] E. A. Brewer, R. H. Katz, E. Amir, H. Balakrishnan, Y. Chawathe, A. Fox, S. D. Gribble, T. Hodes, G. Nguyen, V. N. Padmanabhan, M. Stemm, S. Seshan, T. Henderson, A Network Architecture for Heterogeneous Mobile Computing, IEEE Personal Communications Magazine, Oct. 1998.

[3] Christophe Feltus, Michaël Petit, Building a Responsibility Model Including Accountability, Capability and Commitment, ARES 2009, Fukuoka, Japan. doi: 10.1109/ARES.2009.45

[4] B. Gâteau. Modélisation et Supervision d'Institutions Multi-Agents. PhD Thesis held in cooperation with Ecole Nationale Superieure des Mines de Saint Etienne and CRP Henri Tudor, defended in Luxembourg the 26th of June 2007.

[5] B. C. Stahl, Accountability and reflective responsibility in information systems. In: C. Zielinski et al. The information society - emerging landscapes. Springer, 2006, pp. 51 -68.

[6] Munindar P. Singh, Semantical Considerations on Dialectical and Practical Commitments. Proceedings of the 23rd Conference on Artificial Intelligence (AAAI), July 2008.

[7] M. J. Smith and M. Desjardins. 2009. Learning to trust in the competence and commitment of agents. Autonomous Agents and Multi-Agent Systems 18, 1, 36-82.

[8] J. Broersen, M. Dastani, Z. Huang, and L. W. N. van der Torre. 2002. Trust and Commitment in Dynamic Logic. EurAsia-ICT '02, Springer-Verlag, London, UK, 677-684.

[9] N. R. Jennings, Agent-Oriented Software Engineering, in Proceedings of the 9th European Workshop on Modelling Autonomous Agents in a Multi-Agent World (MAAMAW-99), Valencia, Spain.

[10] P. Ciancarini and M. Wooldridge, Agent-Oriented Software Engineering, in Proceedings of the 22nd International Conference on Software Engineering, June 2000, pp. 816-817.

[11] M. Ouedraogo, H. Mouratidis, D. Khadraoui and E. Dubois, An agent-based system to support assurance of security requirements, in Proceedings of the Fourth IEEE International Conference on Secure Software Integration and Reliability Improvement (SSIRI 2010).

[12] C. Feltus, E. Dubois, E. Proper, I. Band, M. Petit, Enhancing the ArchiMate® Standard with a Responsibility Modeling Language for Access Rights Management, 5th ACM International Conference on Security of Information and Networks (ACM SIN 2012), Jaipur, Rajastan, India. doi>10.1145/2388576.2388577

[13] Jennings, N. R. (2001). An agent-based approach for building complex software systems. Communications of the ACM, 44(4), 35-41.

[14] Schranz, Paul Steven. "VoIP security monitoring & alarm system." U.S. Patent Application 10/694,678.

[15] Zaher, A. S., & McArthur, S. D. J. (2007, July). A multi-agent fault detection system for wind turbine defect recognition and diagnosis. In Power Tech, 2007 IEEE Lausanne (pp. 22-27). IEEE.

[16] Sadeghi, A. R., Wolf, M., Stüble, C., Asokan, N., & Ekberg, J. E. (2007). Enabling fairer digital rights management with trusted computing. In Information Security (pp. 53-70). Springer Berlin Heidelberg.

[17] Kalinowski, J., Stuart, A., Wamsley, L., & Rastatter, M. P. (1999). Effects of monitoring condition and frequency-altered feedback on stuttering frequency. Journal of Speech, Language and Hearing Research, 42(6), 1347.

[18] J. Blangenois, G. Guemkam, C. Feltus, D. Khadraoui, Organizational Security Architecture for Critical Infrastructure, 8th International Workshop on Frontiers in Availability, Reliability and Security (FARES 2013), IEEE, Germany.

[19] Djamel Khadraoui, Christophe Feltus, Critical Infrastructures Governance - Exploring SCADA Cybernetics through Architectured Policy Semantic, IEEE SMC 2013, UK.

[20] Christophe Feltus, Michaël Petit, and Eric Dubois. 2009. Strengthening employee's responsibility to enhance governance of IT: COBIT RACI chart case study. In Proceedings of the first ACM workshop on Information security governance (WISG '09). ACM, New York, NY, USA, 23-32. DOI=10.1145/1655168.1655174 http://doi.acm.org/10.1145/1655168.1655174