
QUALITATIVE ASSESSMENT DYNAMICS – COMPLEMENTING TRUST METHODS FOR DECISION MAKING

Preprint of an article published in the International Journal of Information Technology & Decision Making, Vol. 13 (2014), © World Scientific Publishing Company, DOI: 10.1142/S0219622014500072, http://www.worldscientific.com/worldscinet/ijitdm

Denis Trček

Laboratory of E-media, Informatics Dept., Faculty of Computer and Information Science

University of Ljubljana, Ljubljana, 1000, Slovenia / EU

[email protected] http://www.fri.uni-lj.si/en/laboratories/lem/

Received (12th January 2012)

Revised (2nd July 2012)

Accepted (11th October 2012)

Communicated by (xxxxxxx)

Trust is not only a key ingredient of prosperous organizations and societies, but also an essential factor in decision-making processes. When it comes to trust, the latest advances in the computing sciences increasingly support the related processes through the deployment of so-called trust management systems. These systems are slowly advancing from the early stages of their evolution toward more sophisticated and already operationally deployable solutions. As there seems to be no "Swiss army knife" methodology for trust management, it is reasonable to assume that not one but several of them will be deployed in the future, depending on their basic principles of functioning, and their purposes and contexts of use. There thus still exists a gap in this area, with unaddressed issues where humans (or human-like agents) are in focus. Qualitative Assessment Dynamics, QAD, which is presented in this paper, takes these issues into account. It is based on operands and operators that model human ways of reasoning as described in many natural languages. Further, it is a formal system and can therefore be deployed in computing environments. In this way QAD complements existing trust management methods and provides additional means for decision making through deployment in simulations and in trust management engines, while remaining understandable to ordinary users without requiring sophisticated expert knowledge.

Keywords: decision making; trust management; user modeling; multi-agent systems; simulation.

1. Introduction

In the Elizabethan era, Shakespeare advised us to love all, trust a few, and do wrong to none. Later, the German poet Goethe, who had a strong sense for deep analysis, claimed that as soon as one trusts oneself, one knows how to live. And recently, Prof. Dr. H. Smead vividly noted: "When we were young, we didn't trust anyone over thirty. Now that we're over thirty, we don't trust anyone at all." Clearly, trust is a scarce and sensitive resource.

Nowadays, developed societies depend more and more on networks and e-services, so trust in these environments is becoming crucial. It therefore comes as no surprise that not only Internet pioneers like V. Cerf are stressing the need for more trust in these environments,3 but also high-ranking politicians, because trust in networks and e-services is a key driver of the further economic prosperity of whole nations.33 While experimental research of such claims at the general level of societies is not extensive, the opposite is the case at the organizational level, where many research works provide supporting evidence that trust is an important ingredient for the stability of organizations, not to mention its role in gaining competitive advantage.20

In addition, as information technologies are essential to the management of organizations, trust- (and reputation-system-) focused research in information systems is becoming evidently important, which can be observed by analyzing the most productive and impactful themes in high-ranking journals.28

There exists quite extensive coverage of trust research in the social sciences, where it has a decades-long tradition. Cyber solutions, however, emerged only some fifteen years ago, so trust research in this area requires additional treatment, adapted to the specifics of e-media and computing environments, which earlier research could not take into account. Further, the epistemic basis of research in the social sciences is typically bound to statistics and statistics-based models that are verified in experimental settings. Research in computing science has a different epistemic basis: most often formalisms with a theoretical background in, e.g., logic or mathematics, and such a background already implies the core properties of models of the trust phenomenon. In addition, social sciences research on trust does not focus on tight formalization, which is required if one wants computationally supported trust solutions. And this is where the contribution of Qualitative Assessment Dynamics, QAD, comes in. The method has now been under development for almost ten years, and it is intended to provide users with additional support for their decision making in various contexts. It should be emphasized that QAD does not replace other existing methods, which certainly have merit; rather, it complements them.

The rest of the paper is structured as follows. In the next section we provide an overview of trust-related research in the social sciences, in multi-disciplinary areas, and in computer science. This overview is necessary for the analysis of related methods in section three, which identifies their strong points and focuses further on unaddressed issues. On this basis, the main tenets of QAD are given in the fourth section, followed by formal definitions. In the fifth section, a model for the holistic treatment of trust management is presented that anticipates future research in this area. In the sixth section an application of QAD is given through examples, and it is demonstrated how such systems can support decision making. Conclusions are given in the seventh section, followed by an appendix, acknowledgements and references.

2. Overview of the Field

In this section an overview of trust research streams in three main areas is given in order to provide the basis for their analysis in the next section, and to pinpoint the complementary issues that are covered by QAD. The social sciences are addressed first. Multi-disciplinary research follows, where the main approaches are similar to those in the social sciences, while their field of application is computer and information systems. Last, research rooted in computer science (and mathematics) is covered. Although sharp boundaries between these families of research often do not exist, their main characteristics are evident to an extent that makes such categorization sensible.

2.1. Research on Trust in Social Sciences

Experience tells us that humans, when it comes to trust, may not go exclusively for the maximal tangible benefits, but may also go for non-tangible gains that are aligned with their beliefs, wishes to help others, etc. This implies that the factors behind trust are quite diverse. Social sciences research has therefore focused strongly on trust-driving and trust-formation factors, which include warranting properties like contextual properties (temporal, social and institutional embeddedness), intrinsic properties (abilities, internalized norms, benevolence), and interpersonal cues.34 Despite the diverse nature of trust-driving and trust-formation factors, it is interesting that trust in the final instance often leads to tangible benefits in societies and organizations, because less complex control mechanisms can be used, while these structures are also more adaptable, which inherently improves their competitiveness.34

A different but interesting kind of research models trust in an engineering-like manner.36 The resulting model of interpersonal trust formation consists of inputs, followed by cognitive processes, resulting in outputs. More precisely, inputs that consist of signals and signs spark cognitive processes. These processes deal with information collection and selection, the assessment of trustworthiness, the assessment of the situation, the trust state, the trust decision, and the context. The results of the cognitive processes lead to the final effects, which are trust-manifesting behavior, interactions and evaluations.

Clearly, the most interesting part is the cognitive processes, which concern the trust formation phase. In the literature, two trust formation phase models are often mentioned:40 The first model builds on a heuristic strategy, where people base decisions on only the most obvious, apparent information (typical cases include decisions in situations where people lack motivation or capacity). The second model builds on detailed and analytically intense processing of message content (typical cases include situations where a lot is at stake).4,5

Last but not least, social sciences research provides evidence that trust is a complex mix of emotion and cognition, meaning that not only the (neo)cortex is involved in its formation, but also sub-cortical parts.32,35 This is backed by research showing that trust may grow on the basis of signs and signals within a concrete trust evaluation context while being biased and affected by the trusting person's mood.32,12 Further, trust may completely ignore evidence or warrant, because the two key kinds of trust formation elements are rational and irrational.7 Cognitive views (which are often assumed to mean rational) and emotional views have been studied by Vasalou et al.,47 where cognitive views are driven by reliability, availability, and the like, while emotional views are driven by affective bonds and the like. Clearly, cognitive and emotional trust do occur simultaneously, and in such cases one view can prevail over the other and vice versa.8

2.2. Multi-disciplinary Research on Trust

The main property of on-line communication is that direct, face-to-face interactions are mediated by e-channels. It is therefore expected that research in this area would identify some specifics related to trust. The first such specific (which many times holds true also in ordinary environments) is so-called channel reduction, meaning that trust is affected because entities are separated in time and space. Consequently, uncertainty between a trustor (i.e., a person who can trust) and a trustee (i.e., a person who is trusted) is increased.39

Further, one widely accepted theory in the social sciences is social-cognitive theory, which serves for explaining individual behavior through the following constituent elements: personal factors, environmental factors, and behavior itself (which leads to feed-back loops). On top of this theory, trust has been analyzed through knowledge sharing in e-environments.21 The study concludes that this sharing is based on outcome expectations that may include rational rewards, i.e., tangible benefits for an entity, or may be based on emotional outcome expectations, e.g., recognition within a community, or self-satisfaction. The authors therefore conclude that trust in the context of information sharing in e-environments should be divided into economy-based, information-based and identification-based trust.

To better understand the contextual factors in on-line trust, some authors have focused on the health care sector and web-based health advice.40 They have developed a model of users' behavior that consists of rapid screening of sites by deploying heuristic analysis, followed by systematic evaluation of a site's content, followed by integration of information across visited sites and (longer term) consultation with self-disclosure processes.

Now, what is probably the most important result in this area, and related to the computerized trust support that we aim at, is research on trust toward non-human artifacts, i.e., computer and information systems solutions. Although some scientists criticize such research by arguing that technology is not a moral actor and is not characterized by free will,41,8 it is a fact that humans, when it comes to trusting non-human artifacts, perceive and treat these artifacts in a similar way as they perceive and treat humans.13

2.3. Computer and Information Science Research on Trust

The research of trust in the computing domain can be divided into two epochs. The first epoch lasted roughly until the year 2000, and during this epoch the research mainly tackled trust-related solutions. More precisely, the research was about security solutions that may enable trust. The focus shifted in the second epoch, after the year 2000, when the trust phenomenon as such got to the center.

2.3.1. The Research Epoch until Year 2000

The methods and solutions from this early period were actually addressing security (security services) and not trust directly. Among these representatives, the Trusted Computer System Evaluation Criteria, known as the Orange Book, should be mentioned first.10 Although it was supposed to be about trusted computer systems, it was about security. The next early representative is the Platform for Internet Content Selection, or PICS, which was about filtering web sites.30 Next, PolicyMaker: it was aimed at addressing trust in distributed services environments by binding access rights to the owners of public keys that were obtained and verifiable through certificates, so this was a trust-supporting solution that deployed public key infrastructure, or PKI.2 Similar holds true for the Trust Establishment Module, which was a Java-based implementation with a dedicated language for enabling trust relationships between unknown entities through PKI.19

Important solutions that were used for trust management already in the early web environments are also eBay's and Amazon's. eBay's system computes the sums of positive and negative scores about an entity, and the difference of these two sums presents the reputation of that entity. Amazon is slightly more sophisticated and uses averaging, so that the final score is the average of all ratings.
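For illustration, a minimal sketch of these two scoring rules (function names and rating encodings are ours, not eBay's or Amazon's actual implementations):

```python
def ebay_score(ratings):
    """Reputation as (# positive) - (# negative); ratings encoded as +1/-1."""
    return sum(1 for r in ratings if r > 0) - sum(1 for r in ratings if r < 0)

def amazon_score(ratings):
    """Reputation as the plain average of all ratings (e.g., 1-5 stars)."""
    return sum(ratings) / len(ratings) if ratings else None

print(ebay_score([1, 1, -1, 1]))   # 2: three positives minus one negative
print(amazon_score([5, 4, 4, 3]))  # 4.0: average of all ratings
```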

Other approaches from the first epoch are described in the survey by Grandison and Sloman,17 and the reader is referred to it for additional details.

2.3.2. The Research Epoch After Year 2000

The main methods presented in this subsection are characteristic of research in the computer science area after the year 2000, when the majority of those trust-related methods emerged that addressed (or at least approached) the core of the trust phenomenon. These methods can be roughly divided into two typical streams. The first stream is based on probabilities (e.g., on Bayesian inference), and the second on non-probabilistic mechanisms (e.g., game-theoretic approaches).

I. Naïve trust management – This approach uses Bayesian inference,49 where an agent computes a probability, using Bayes' formula, about another agent's characteristic C1 that it is interested in (the corresponding table includes values for satisfying, i.e., trusted, interactions and non-satisfying, i.e., distrusted, interactions). In the same manner, the agent computes a probability about another characteristic C2. Suppose now that this agent wants to address a more realistic scenario by asking the question "What is the probability that the next interaction will be trusted, given that both characteristics C1 and C2 have to be met?" Such questions are answered by applying Bayes' theorem to the existing data:

$$p(\mathit{trusted} \mid C_1, C_2) = \frac{p(\mathit{trusted}, C_1, C_2)}{p(C_1, C_2)} = \dots = \frac{p(C_1 \mid T, C_2)\; p(T \mid C_2)}{p(C_1 \mid C_2)},$$

where $T$ stands for a trusted interaction.
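To make this concrete, a minimal counting-based sketch of such an estimate (the data layout and function name are our own illustration, not the cited model's implementation):

```python
def p_trusted_given(history, c1=True, c2=True):
    """Empirical p(trusted | C1, C2) from a table of past interactions.
    history: list of (trusted, c1_met, c2_met) boolean triples."""
    matching = [trusted for (trusted, a, b) in history if a == c1 and b == c2]
    return sum(matching) / len(matching) if matching else None

history = [(True, True, True), (True, True, False),
           (False, True, True), (True, True, True)]
print(p_trusted_given(history))  # ~0.67: two of the three (C1, C2) cases were trusted
```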

An extension of Bayesian inference leads to the Dempster-Shafer Theory of Evidence, or ToE,38 which starts with a set of possible (atomic) states, called a frame of discernment $\Theta$ (within the frame of discernment, exactly one state is assumed to be true at any time). Afterward, a basic probability assignment, or BPA (also called belief mass), function is introduced, defined as $m: 2^\Theta \to [0,1]$, where $m(\emptyset) = 0$ and $\sum_{A \subseteq \Theta} m(A) = 1$. A belief mass $m_\Theta(X)$ expresses the belief assigned to the whole set $X$, and does not express beliefs in subsets of $X$. For a given subset $A \subseteq \Theta$, the belief function $\mathrm{bel}(A)$ is defined as the sum of the beliefs committed to the possibilities in $A$. ToE serves as a basis for subjective algebra, which enables formal treatment of trust by introducing new operators for modeling trust, like consensus and recommendation.23 Trust $\omega$ is a triplet $(b, d, u)$, where $b$ stands for belief (the belief function in ToE), $d$ for disbelief and $u$ for uncertainty:

$$b(x) = \sum_{y \subseteq x} m(y), \quad d(x) = \sum_{x \cap y = \emptyset} m(y), \quad u(x) = \sum_{x \cap y \neq \emptyset,\; y \not\subseteq x} m(y), \qquad b, d, u \in [0,1], \; x, y \in 2^\Theta.$$

One main contribution of subjective algebra is its various operators mimicking trust dynamics. An example for conjunction follows: let $\omega_p^A = (b_p^A, d_p^A, u_p^A)$ and $\omega_q^A = (b_q^A, d_q^A, u_q^A)$ be agent A's opinions about two distinct binary statements $p$ and $q$. Then the conjunction, representing A's opinion about both $p$ and $q$ being true, is defined by $\omega_{p \wedge q}^A = \omega_p^A \wedge \omega_q^A = (b_{p \wedge q}^A, d_{p \wedge q}^A, u_{p \wedge q}^A)$, where

$$b_{p \wedge q}^A = b_p^A b_q^A, \quad d_{p \wedge q}^A = d_p^A + d_q^A - d_p^A d_q^A, \quad u_{p \wedge q}^A = b_p^A u_q^A + u_p^A b_q^A + u_p^A u_q^A.$$

A similar approach has been developed by Yu and Singh, and by Matt et al., where $\Theta = \{T, \neg T\}$.50,31
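As a sanity check, a minimal sketch of this conjunction operator (the tuple encoding is ours; note that b + d + u remains 1 when both input opinions satisfy this constraint):

```python
def conjunction(op_p, op_q):
    """Conjunction of two opinions (b, d, u) per the formulas above."""
    (bp, dp, up), (bq, dq, uq) = op_p, op_q
    b = bp * bq
    d = dp + dq - dp * dq
    u = bp * uq + up * bq + up * uq
    return (b, d, u)

# Two moderately confident opinions; the conjunction is less confident overall.
print(conjunction((0.6, 0.2, 0.2), (0.5, 0.3, 0.2)))  # ~(0.3, 0.44, 0.26)
```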

Belief-based approaches also serve for cognitive conceptual models, where trust is considered to be a result of underlying beliefs and is dictated by the mental states of (artificial intelligent) agents. Typical examples can be found in the work of Sabater and Sierra, where delegation plays the central role, i.e., trust is the mental background of delegation.37 To build this mental state, an agent needs the following beliefs: a competence belief, that the other agent is capable of doing the task; a dependence belief, where the agent believes that it is better to rely on another agent; and a disposition belief, where the agent believes that the other agent will actually do the task.

II. Non-probability based approaches often deploy game theory,42,18 where a game consists of a set of players, a set of actions that are aligned with the strategies of the players, and a set of payoffs for each strategy. On a game-theoretic basis, a so-called personalized ranking system, PRS, has been developed.1 More formally, PRS is defined in the domains of graphs and (linear) orderings as follows: Let $A$ be some set. A relation $R \subseteq A \times A$ is called an ordering on $A$ if it is reflexive, transitive, and complete. Further, let $L(A)$ denote the set of orderings on $A$, and let $\mathbb{G}_V^s$ be the set of all directed graphs $G = (V, E)$ such that for every vertex $v \in V$ there exists a directed path in the set of edges $E$ from $s$ to $v$. Then a personalized ranking system $F$ is a functional that, for every finite vertex set $V$ and for every source $s \in V$, maps every graph $G \in \mathbb{G}_V^s$ to an ordering $\preceq_{G,s}^F \in L(V)$ ("$\preceq$" denotes an ordering).

It can be seen that PRS is in fact a reputation system. Reputation systems are a kind of trust-enabling and trust-supporting system; they do not aim to model trust as such.* An inherent drawback of reputation systems is the so-called "exit problem", where a seller correctly completes numerous small-value transactions to gain reputation, and later engages in a large-sum transaction and defaults. An interesting method to counter this problem is the Commodity Trunits approach.25 Each selling agent collects so-called Trunits, i.e., trust units. When entering a transaction, the agent has to possess a sufficient amount of Trunits. If the transaction is completed as promised, its amount of Trunits is increased; otherwise it is decreased according to some scheme that fosters trustful behavior. This is a kind of penalizing principle, which is also used in other approaches, where the authors additionally propose an appropriate level of monitoring while at the same time addressing privacy issues.22 But as this paper is about modeling the trust phenomenon, further discussion of reputation systems exceeds its focus.

Finally, many new specific approaches are appearing. One recent method deploys the idea of Hirsch's H-index.51 It is built upon the H-Trust aggregation, which is defined as follows: a peer i has a trust rating Ti,j = H toward peer j if H of the N qualified peers have at least trust rating score H toward peer j, and the other (N − H) peers have at most trust rating H toward peer j. Other agents are sorted according to this aggregation value, and a new interaction takes place if the other agent's rating in this table is below or above a predefined threshold.

Further, another specific approach, intended for the particular area of mobile ad hoc networks, is given in the work of Cho et al.6 Here, the main idea is to combine the notion of social trust, as discussed in this paper, with quality-of-service trust, as interpreted in the computer communications area.

3. An Analysis of Existing Research Approaches and the Definition of Trust

The above overview of trust research methods in the social sciences, multi-disciplinary, and computer science fields enables us to analyze them from the perspective of computerized trust management and to focus on the trust phenomenon itself.

As to the social sciences, and the multi-disciplinary research rooted in them, this research has produced many important results. But generally speaking, it is quite fragmented from the computing point of view, and it is often not concerned with implementation issues in the computing environment, where hard formalization does matter. Further, this research provides evidence that trust has rational and emotional factors; therefore, at its manifesting level, trust can be related to (descriptions of) various emotional states and driving factors. This research also exposes situations where trust is formed or changed on a non-identifiable basis, because sometimes rational factors prevail over emotional ones and vice versa. Last but not least, it can be concluded that non-human artifacts (information technology solutions) are perceived and treated by users similarly as if they were humans.

* The personalized ranking system presented above actually belongs to this category, which also holds true for eBay's and Amazon's solutions.

As to the approaches in computer science, these are grounded on the assumption that a particular mathematical or logical formalism appropriately reflects the trust phenomenon. But such assumptions hold true only to a certain extent, and in most cases they require rational agents. In the case of game theory, for example, two basic tenets have to be additionally fulfilled: the existence of a preference relation, and its transitivity. However, trust is not even necessarily tied to a preference relation, and when this relation exists, it is often not transitive. It is also not (in general) reflexive or symmetric. To show that this is the case, some brief mental experiments can be performed. Suppose one is asked about trusting oneself in a life-threatening context where appropriate experience and training are required – an example is surgical operations. Clearly, if one is not a surgeon trained in the particular area, trust is not reflexive. As to symmetry, consider the basic social structure, i.e., a family. Children a priori trust their parents in numerous contexts, including those of existential importance like financial matters, while parents do not necessarily trust their children, in particular when it comes to such questions. As to transitivity, it is often the case that an entity A trusts an entity B in a certain context, while B trusts an entity C in the same context. However, it is also often the case that A and C have had some disputes or conflicts in the past; therefore A would not delegate trust to B to ask C to act on behalf of A. This argumentation about transitivity covers the approaches tied to game theory. As to the rest of the approaches in (intelligent agents) research that focus on delegation – true, there do exist contexts where trust can be seen "as a mental background of delegation". But this means a focus on indirect manifestations of trust due to delegation (delegation contexts are not the only ones where trust is involved, although this is often the case). Finally, as to H-Trust and penalization-based principles – these solutions do not address the (core of the) trust phenomenon as such, but provide important additional support for its formation processes.

Now, getting to the Bayesian statistics based approaches (naïve trust management, subjective logic), the following issues raise questions. First, human agents are rational to a limited extent, or may be rational in certain contexts but not in others (outstanding research on the problem of irrationality in economic contexts has been done by Kahneman and Tversky).24 Second, even if they do not have problems with rationality, very few will understand the sophisticated mathematics that is required for ToE and subjective algebra.

This analysis is not to say that the above methods are wrong – on the contrary. What we claim is that they play an important role in certain contexts, while in other contexts a complementary method is needed. And this is where QAD comes in. It has been developed for approximately ten years now, and continually improved.43,44,26,45,46 Its main advantages are that it addresses trust as such, that it is tightly formalized and deployable in computing environments, and that it is based on operators and operands that are understandable to the majority of users.

Before stating its details, trust has to be defined. Although the notion of trust seems intuitively clear, history tells us that this is often not the case. Let us analyze the research literature in the social sciences field. First, it is almost always, at least implicitly, assumed that trust is required only in situations that inherently contain a probability of an adverse outcome. Second, trust is seen as a kind of necessity arising from the lack of details about others' abilities and motivations to act as promised.11 It is even claimed that if one had accurate insight into the trusted actor's reasoning or functioning, trust would not be an issue.16 Third, and this already tackles the definition of trust, it is proposed that trust is an implicit set of beliefs that the other party will behave in a dependable manner and will not take advantage of the situation.15,27

Now, according to the Merriam-Webster dictionary, trust is assured reliance on the character, ability, strength, or truth of someone or something. It also follows from the discussion so far that trust can rarely be treated in isolation, so its social dimension comes to the surface. This was elegantly expressed in D. E. Denning's definition of trust, related to the Orange Book discussions at the beginning of the nineties: trust is not a property of an entity or a system, but an assessment; such an assessment is driven by experience, it is shared through a network of people's interactions, and it is continually remade each time the system is used.9 This definition is concise enough to enable formal treatment of trust and, consequently, its support in computing environments.

4. Qualitative Assessment Dynamics - QAD

It has already been stated that QAD complements other methods and research streams by focusing on humans (or human-like agents), and that it is based on an accordingly defined notion of trust. Trust is formalized in a way that enables computerized treatment, while its definition reflects the semantics of plain language descriptions. Further, by basing its operators and operands on descriptions that can be found in many languages, QAD becomes understandable to a wide number of users. And finally, decision makers can use it to define the structures they would like to study, assign operands and operators, and see the ways in which the system could evolve through time. By studying various settings, the decision maker can identify possibilities for influencing the structure in order to drive it to a more desirable state.

Let us now address the basic tenets of QAD. First, human agents are in principle not (always) rational. Second, trust is a mix of rational and irrational factors and may be formed or changed on a non-identifiable basis. Third, trust in general is not a reflexive relation, nor is it symmetric or transitive. Fourth, it should not be generally assumed that a certain sophisticated mathematical apparatus can be comprehended by ordinary users (who are supposed to use trust management solutions); therefore, to model trust dynamics, ordinary language descriptions should be used. Fifth, when human agents talk about trust, they usually use descriptive, qualitative assessments; therefore, trust assessments should be based on ordinary language descriptions as well.

Definition 1. Trust is a relation between agents A and B, denoted by $\alpha_{A,B}$, which means agent A's assessment of agent B.

In the figure below there are four trust relations, two of them denoting the assessments of entities A and B toward themselves ($\alpha_{A,A}$ and $\alpha_{B,B}$), and two of them denoting the assessments of one entity toward the other ($\alpha_{A,B}$ and $\alpha_{B,A}$).

Figure 1: The definition of trust relationships

For trust analysis and for modeling trust dynamics, trust graphs are introduced, where links are directed and qualitatively weighted. If a link denotes the trust attitude of agent A toward agent B, the link is directed from A to B. Because graphs can be equivalently represented by matrices, the second basic definition follows.

Definition 2. Trust assessments in agent societies are given by a trust matrix $\mathbf{A}$, where elements $\alpha_{i,j}$ denote the trust assessment of the $i$-th agent toward the $j$-th agent. The assessment values are taken from the set $\Sigma = \{2, 1, 0, -1, -2, \perp\}$, where the numeric values denote totally trusted, partially trusted, undecided, partially distrusted and totally distrusted relationships, respectively. The last symbol, "$\perp$", denotes an undefined relation, where an agent is either not aware of the existence of another agent, or does not want to disclose its trust.

Definition 3. In a trust matrix $\mathbf{A}$, a column represents a society trust vector, which states the society's assessments about a particular agent $k$, i.e., $\mathbf{A}_{n,k} = (\alpha_{1,k}, \alpha_{2,k}, \dots, \alpha_{n,k})$, while a row represents agent $k$'s trust vector, i.e., $\mathbf{A}_{k,n} = (\alpha_{k,1}, \alpha_{k,2}, \dots, \alpha_{k,n})$, where $k = 1, 2, \dots, n$.

Definition 4. By excluding undefined assessments from a trust vector, a society assessment sub-vector is obtained, denoted as $\mathbf{A}_{n_1,k} = (\alpha_{1,k}, \alpha_{2,k}, \dots, \alpha_{n_1,k})$, where the index $n_1$ denotes the number of defined (non-$\perp$) values in the society trust vector.

An example society with trust relations, qualitative weights and corresponding matrix is

given in Fig. 2.

Figure 2: An example society that includes a dumb agent

Definition 5. QAD operators are elements of the set $\Psi = \{\psi_{eo}, \psi_{ep}, \psi_{mo}, \psi_{mp}, \psi_{cc}, \psi_{nc}, \psi_{eop}, \psi_{mop}, \psi_{sc}, \psi_{ah}\}$, where the symbols denote extreme-optimistic assessment, extreme-pessimistic assessment, moderate-optimistic assessment, moderate-pessimistic assessment, centralistic consensus-seeker assessment, non-centralistic consensus-seeker assessment, extreme-opponent assessment, moderate-opponent assessment, self-confident assessment and assessment hopping. These operators are n-ary functions $f_i$ such that $f_i: \mathbf{A}_{n,j} = (\alpha_{1,j}^{-}, \alpha_{2,j}^{-}, \alpha_{3,j}^{-}, \dots, \alpha_{n,j}^{-}) \to \alpha_{i,j}^{+}$, $i = 1, 2, \dots, n$, where $i$ denotes the $i$-th agent, the superscript "$-$" denotes a pre-operation value, the superscript "$+$" a post-operation value, and where the mappings for the particular operators are defined as follows ($i, j = 1, 2, \dots, n$):

For $\alpha_{i,j}^{-} \neq \perp$:

$$\psi_{eo}: \; \max(\alpha_{1,j}^{-}, \alpha_{2,j}^{-}, \dots, \alpha_{n_1,j}^{-}) \to \alpha_{i,j}^{+}$$

$$\psi_{ep}: \; \min(\alpha_{1,j}^{-}, \alpha_{2,j}^{-}, \dots, \alpha_{n_1,j}^{-}) \to \alpha_{i,j}^{+}$$

$$\psi_{mo}: \; \begin{cases} \alpha_{i,j}^{-} \to \alpha_{i,j}^{+} & \text{if } \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \leq \alpha_{i,j}^{-} \\[4pt] \alpha_{i,j}^{-} + 1 \to \alpha_{i,j}^{+} & \text{otherwise} \end{cases}$$

$$\psi_{mp}: \; \begin{cases} \alpha_{i,j}^{-} \to \alpha_{i,j}^{+} & \text{if } \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \geq \alpha_{i,j}^{-} \\[4pt] \alpha_{i,j}^{-} - 1 \to \alpha_{i,j}^{+} & \text{otherwise} \end{cases}$$

$$\psi_{cc}: \; \begin{cases} \left\lceil \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \right\rceil \to \alpha_{i,j}^{+} & \text{if } \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} < 0 \\[4pt] \left\lfloor \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \right\rfloor \to \alpha_{i,j}^{+} & \text{otherwise} \end{cases}$$

$$\psi_{nc}: \; \begin{cases} \left\lceil \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \right\rceil \to \alpha_{i,j}^{+} & \text{if } \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} > 0 \\[4pt] \left\lfloor \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \right\rfloor \to \alpha_{i,j}^{+} & \text{otherwise} \end{cases}$$

$$\psi_{eop}: \; \begin{cases} -\left\lceil \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \right\rceil \to \alpha_{i,j}^{+} & \text{if } \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \geq 0 \\[4pt] -\left\lfloor \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \right\rfloor \to \alpha_{i,j}^{+} & \text{otherwise} \end{cases}$$

$$\psi_{mop}: \; \begin{cases} -\left\lfloor \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \right\rfloor \to \alpha_{i,j}^{+} & \text{if } \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \geq 0 \\[4pt] -\left\lceil \tfrac{1}{n_1}\sum_{k=1}^{n_1} \alpha_{k,j}^{-} \right\rceil \to \alpha_{i,j}^{+} & \text{otherwise} \end{cases}$$

$$\psi_{sc}: \; \alpha_{i,j}^{-} \to \alpha_{i,j}^{+}$$

$$\psi_{ah}: \; \mathrm{random}(-2, -1, 0, 1, 2) \to \alpha_{i,j}^{+}$$

For $\alpha_{i,j}^{-} = \perp$:

$$\perp \to \alpha_{i,j}^{+}$$

The properties of these operators can also be stated informally (an undefined value remains undefined in the next iteration; otherwise it is changed accordingly):

– The extreme-optimistic assessment operator filters out the most positive assessment value among the existing values about a certain agent in a certain context.

– The extreme-pessimistic assessment operator filters out the most negative assessment among the existing values about a certain agent in a certain context.

– The moderate-optimistic assessment operator means the expressed assessment is "strengthened" to the next higher qualitative level, narrowing the gap toward the aggregated assessment of the rest of the community if this is more optimistic than the agent's trust (the value changes one level upwards).

– The moderate-pessimistic assessment operator means the expressed assessment is "weakened" to the next lower qualitative level, narrowing the gap toward the aggregated assessment of the rest of the community if this is more pessimistic than the agent's trust (the value changes one level downwards).

– The centralistic consensus-seeker assessment operator results in the community's average value in a certain context, rounded toward zero.

– The non-centralistic consensus-seeker assessment operator results in a value which is (contrary to the previous operator) the average rounded away from zero.

– The extreme-opponent assessment operator results in a value that is opposite to the average value of the rest of the community (in case of rounding, this value is rounded up to the next assessment with a larger absolute value).

– The moderate-opponent assessment operator results in a value that is opposite to the average value of the rest of the community (in case of rounding, this value is rounded down to the next assessment with a smaller absolute value).

– The self-confident assessment operator results in an output that is the same as the input.

– The assessment-hopping operator results in a value that changes through time on an unidentifiable basis, and can be seen as a random process.
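To make these mappings concrete, a minimal sketch of three of the operators applied to a society trust vector (the encoding is ours; undefined assessments, $\perp$, are represented as None):

```python
import math

def extreme_optimistic(column, i):
    """psi_eo: adopt the most positive defined assessment about agent j."""
    defined = [a for a in column if a is not None]
    return max(defined)

def moderate_optimistic(column, i):
    """psi_mo: move one level up if the community average exceeds own value."""
    defined = [a for a in column if a is not None]
    avg = sum(defined) / len(defined)
    own = column[i]
    return own if avg <= own else own + 1  # result stays within [-2, 2]

def centralistic_consensus(column, i):
    """psi_cc: community average rounded toward zero."""
    defined = [a for a in column if a is not None]
    avg = sum(defined) / len(defined)
    return math.ceil(avg) if avg < 0 else math.floor(avg)

col = [1, -2, 0, None, 2]  # society's assessments of agent j; None = undefined
print(extreme_optimistic(col, 0))     # 2
print(moderate_optimistic(col, 2))    # 1: average 0.25 exceeds own value 0
print(centralistic_consensus(col, 0)) # 0: floor(0.25), rounded toward zero
```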

5. The Model for IT Supported Trust Management

QAD has been built in parallel with a holistic model for computational trust management. The very first version of this model was defined in 2002, and it was based solely on Piaget's work on reasoning development processes in humans.48 Later the model was extended,44 but it did not take reputation into account. Now that the notion of reputation and its relation to trust is becoming clear, the current model incorporates this view as well (see Fig. 3). Explicit support for reputation is obtained through the ponder values matrix $\Omega = \{\omega_{1,1}, \omega_{1,2}, \dots, \omega_{n,n}\}$. So if an agent has a high reputation, its assessments will be pondered with 1, while the assessments of agents with a lower reputation will be pondered accordingly. Thus a society, in terms of trust, is basically determined by two matrices, $\mathbf{A}$ and $\Omega$.

In Fig. 3, the set $T$ consists of discrete time values $t$, the matrix $\Phi$ contains the observed-facts-based assessments (e.g., deeds), the matrix $\mathbf{A}$ contains the other agents' assessments, while the matrix $\Omega$ contains the ponder values that are needed to address the fact that usually only a certain number of all the assessments, and not all of them, is used by the observed agent. Suppose $\bar{\alpha}$ denotes the sequence $\alpha_1, \alpha_2, \dots, \alpha_n$, and $\bar{\omega}$ denotes the sequence $\omega_1, \omega_2, \dots, \omega_n$; then the mapping performed by the function $\mu$ results in the agent's major trust: $\tau = \mu(\bar{\alpha}, \bar{\omega}, t)$. The relationship between the space of major opinions and the deeds of an agent is defined by the function $\delta$, such that $\varphi = \delta(\tau)$. Further, the expressed opinion is the result of the mapping by the function $\varepsilon$, i.e., $\alpha = \varepsilon(\tau)$, and as such it directly enters the society assessment matrix $\mathbf{A}$. Finally, the matrix $\mathbf{A}$ forms a feed-back loop with $\Omega$, where this loop is driven by the corresponding mapping functions.

Figure 3: Computational model for trust management

Obtaining concrete values for the above matrices depends on the particular area of application. One possibility is that the matrices $\mathbf{A}$ and $\Omega$ are filled by obtaining oral assessments from agents, while the matrix $\Phi$ is filled with values based on, e.g., observation of an agent in his or her environment. As deeds give more accurate information about an agent's particular trust assessment than oral expressions do, these observed-facts-based assessments may prevail over the oral ones, and are therefore included in the computations.

Following the model in Fig. 3, the rest of the definitions can be given.

Definition 6. The matrix $\Omega = \{\omega_{1,1}, \omega_{1,2}, \dots, \omega_{n,n}\}$ consists of elements $\omega_{i,j} \in [0,1]$ that denote the values used by agent $i$ for pondering the opinions of agent $j$ when calculating its own assessments.

Currently, $\Omega$ is supposed to contain only 1's and 0's – if $\omega_{i,j}$ is set to 1, then agent $i$ takes the assessments of agent $j$ into account when calculating its new assessments; otherwise it does not.
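A minimal sketch of how such a 0/1 ponder matrix masks a society trust vector before an operator is applied (names and data layout are ours):

```python
def pondered_column(A, Omega, i, j):
    """Assessments of agent j that agent i actually takes into account:
    entry k is kept only if Omega[i][k] == 1 and A[k][j] is defined."""
    return [A[k][j] for k in range(len(A))
            if Omega[i][k] == 1 and A[k][j] is not None]

A = [[0, 1], [None, 2]]   # A[k][j]: agent k's assessment of agent j
Omega = [[1, 0], [1, 1]]  # agent 0 ignores agent 1's opinions
print(pondered_column(A, Omega, 0, 1))  # [1]: only agent 0's own assessment
```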

Definition 7. A QAD context is the quadruple $\langle \mathbf{A}, \Omega, \Phi, T \rangle$.

Definition 8. Qualitative assessment dynamics is the six-tuple QAD $= \langle \Sigma, \Psi, \mathbf{A}, \Omega, \Phi, T \rangle$, where $\Sigma$ denotes the set of trust assessments, $\Psi$ the set of functions (operators), $\mathbf{A}$ the matrix of agents' assessments, $\Omega$ the matrix of ponder values, $\Phi$ the matrix of observed-facts-based assessments, and $T$ the set of time increments.

The current definition of QAD and the computational model also identify future areas of research for the further development of QAD, as well as of other trust management methods.


6. Simulations, Discussion and Future Work

This discussion starts with a demonstration application of the presented apparatus, where it is assumed that $\varepsilon$ is the identity, i.e., $\alpha = \tau$ (no mapping is taking place), and that the feed-back link does not exist either. Further, let $\Phi = \{\,\}$ and $\Omega = [1]$ (all ponder values set to 1), so that all elements of the society vectors in matrix $\mathbf{A}$ are taken into account in each calculation of new assessments.

Let us analyze the possible behaviors of a society with the following properties. It consists of 30 agents, all of which are initially undecided about one another. Further, 90% of the agents are extreme optimists, while 10% are governed by the assessment-hopping operator. Running the simulation (30 runs, each with 100 steps) on this society, an interesting outcome is obtained (see Fig. 4, run I). Although the agents are initially undecided about one another, approximately 98% of assessments become totally trusted, while the remaining 2% are roughly equally distributed among the other assessments. Changing the initial position of 90% of the agents from extreme optimists to extreme pessimists leads to the expected mirror outcome (see Fig. 4, run II). However, introducing a more sophisticated instability into the first setting, by requiring that in each step 10% of the population (randomly chosen agents) change their operators randomly, yields a surprising result (see Fig. 4, run III). A clear but somewhat polarized, extremist pattern emerges, where 34.5% of assessments are totally distrusted, 36.2% are totally trusted, and the rest are roughly equally represented. Put another way, a "truncated bimodal-like" distribution comes out of an experiment that started with a completely homogeneous, undecided society.

Figure 4: Experimental results – a cumulative histogram over 30 runs, each with 100 steps, showing the number of assessments (numOfAgents) per category (totally distrusted, partially distrusted, undecided, partially trusted, totally trusted) for runs I, II and III.
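For illustration, a minimal sketch of such a simulation loop under the stated simplifications (the synchronous update scheme and helper names are our own assumptions; it reuses the operator logic sketched in section 4):

```python
import random

N, STEPS = 30, 100
ops = ['eo'] * 27 + ['ah'] * 3   # 90% extreme optimists, 10% hoppers (run I)
A = [[0] * N for _ in range(N)]  # all agents initially undecided (0)

def apply_op(op, column):
    if op == 'eo':                           # extreme optimistic: adopt the maximum
        return max(column)
    return random.choice([-2, -1, 0, 1, 2])  # assessment hopping: random value

for _ in range(STEPS):  # synchronous update of the whole trust matrix
    A = [[apply_op(ops[i], [A[k][j] for k in range(N)]) for j in range(N)]
         for i in range(N)]

flat = [a for row in A for a in row]
print({v: flat.count(v) for v in (-2, -1, 0, 1, 2)})  # dominated by 2 (totally trusted)
```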


These results give useful hints to a decision maker. Suppose a decision maker is faced with a society as given in run II, which evidently becomes distrusted and therefore non-cooperative. The decision maker is aware of the dynamics of this society and wants to change it early enough. Suppose further that the goal is to increase cooperation, where a successful result would mean at least one third of the society being totally or partially trusted. One would intuitively approach this problem by trying to convince the agents that the other agents are in fact nice, positive entities. However, the lesson of the simulations is that the decision maker needs less effort – it suffices to destabilize the opinion formation in the society. Put another way, one has to perturb the society so that its members start to randomly change their assessments, and the goal of cooperation will emerge.

This example demonstrates that many interesting research questions can, and need to, be addressed. The above example also indicates that the number of possible settings is enormous; thus a thorough study of QAD communities exceeds the scope of this paper and will be a matter of future research.

Future work will also address additional experimental testing of the existing operators with humans, and the introduction of possibly needed new operators. Again, these will be based on linguistic grounds to make them intuitively understandable in various cultural settings. Refinement of the operators will also be a subject of future research. Further, the accuracy of assessments will be addressed, where some promising approaches exist, like those that enable the extraction and evaluation of assessments from natural language expressions.14 Last but not least, one important issue will be the "averaging" processes over values that belong to an ordinal scale of assessments.

7. Conclusions

Trust is very important not only in ordinary environments, but increasingly so in e-environments, organizations and societies in general. The spread of ubiquitous information technology solutions into our lives leads to increased interactions with (and within) e-environments, where more and more interactions will be related to security, privacy and safety. The more sensitive a service is in terms of risk, the more trust there has to be if users are supposed to use that service. Taking into account that trust is also seen as a key enabler of the prosperity of organizations and whole societies, trust research is coming into focus in many fields, not only in the computing and information technology sciences.

In this paper an overview of trust-related research in the social sciences, in inter-disciplinary research and in computing science is given. The related results are then analyzed, and on the basis of this analysis Qualitative Assessment Dynamics, QAD, is introduced. It complements existing trust management methods by focusing on humans and human-like agents, and on their reasoning when it comes to trust. Thus QAD has a linguistic basis – it contains operands and operators that have clear counterparts in many languages when descriptions of trust-related processes are in focus. Consequently, QAD can be comprehended by a large number of ordinary users. Despite this, it is a formal system that is implementable in computerized information systems, and the corresponding implementation model is presented as well. Therefore, on the one hand, QAD enables rigorous formal treatment and research of trust, while on the other hand it enables practical applications, primarily aimed at trust management systems and improved decision making. This provides a promising basis for further multidisciplinary research with other disciplines like sociology and economics.

Summing up, QAD aims at a formal system that enables modeling and simulation by deploying anthropocentric agents, in order to provide additional insights into trust-related phenomena in human-centric systems that cannot be obtained using traditional social sciences methods alone.

Appendix

In relation to some issues discussed in this paper, we also performed a simple poll-like analysis and asked users about their preferences when it comes to the properties of trust management systems. The questions were straightforward. One was whether users would prefer quantitative or qualitative assessments when it comes to trust. The next was whether users would prefer a five-level ordinal descriptive scale or some other metric to assess trust. And the third was whether users would want the possibility to be directly engaged in the functioning of a trust management system (which therefore has to support operators and operands meaningful to them). The poll was administered over the web to a sample of the computer science student population at the Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, in May 2010. Invitations were sent through e-mail to all 109 students, and the final response rate was 24.1%, which is acceptable for field research. Taking into account that no benefits were offered and that the survey was anonymous, negligible bias was assumed, so the respondents were treated as a random sample of the above population. The sample proportions for these questions were tested as well, and the margin of error was calculated for them as follows (the confidence level was set to 95%, meaning that the Z value was 1.96): As to metrics, the proportion of users who would prefer qualitative descriptions was 0.81 ± 0.15. As to the number of descriptive intervals on an ordinal scale, the preferred option was five levels, and the proportion of users who opted for this option was 0.62 ± 0.19. As to the proportion of users who would prefer to be directly involved in interactions with a trust management system, the result was 0.77 ± 0.16.
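For transparency, the reported margins follow from the standard margin-of-error formula for a sample proportion; with the back-calculated sample size $n \approx 0.241 \cdot 109 \approx 26$ (our own arithmetic, not stated in the original), this reproduces the stated values, e.g.:

$$\mathrm{MoE} = Z\sqrt{\frac{p(1-p)}{n}}, \qquad 1.96\,\sqrt{\frac{0.81 \cdot 0.19}{26}} \approx 0.15, \qquad 1.96\,\sqrt{\frac{0.62 \cdot 0.38}{26}} \approx 0.19.$$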

Acknowledgements

The author acknowledges the support of the Slovenian Research Agency ARRS through program P2-0359. This research is partially also a result of collaboration within the EU COST IC0801 Agreement Technologies project. The author also wants to thank Mrs. E. Zupančič for programming the simulation environment. Finally, the author thanks all three reviewers for their helpful and constructive comments.

References

1. A. Altman and M. Tennenholtz, An axiomatic approach to personalized ranking systems, in Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI'07), 2007, Morgan Kaufmann, San Francisco, pp. 1187-1192.
2. M. Blaze, J. Feigenbaum and J. Lacy, Decentralized trust management, in Proceedings of the '96 IEEE Symposium on Security and Privacy, 1996, Oakland, pp. 164-173.
3. V. G. Cerf, Trust and the Internet, IEEE Internet Computing, 5 (14) 96.
4. S. Chaiken, Heuristic versus systematic information processing and the use of source versus message cues in persuasion, Journal of Personality and Social Psychology, 39 (1980) 752-766.
5. S. Chen and S. Chaiken, The heuristic-systematic model in its broader context, in S. Chaiken and Y. Trope (eds.), Dual-process Theories in Social Psychology (Guilford Press, New York, 1999).
6. J. H. Cho, A. Swami and I. R. Chen, A survey on trust management for mobile ad hoc networks, IEEE Communications Surveys & Tutorials, 4 (13) 562-583.
7. K. Chopra and W. A. Wallace, Trust in electronic environments, in Proc. of the 36th Annual Hawaii International Conference on System Sciences (HICSS'03), 2003, IEEE Computer Society, Washington, DC, p. 331.1.
8. C. L. Corritore, B. Kracher and S. Wiedenbeck, On-line trust: concepts, evolving themes, a model, International Journal of Human Computer Studies, 58 (6) 737-758.
9. D. Denning, A new paradigm for trusted systems, in Proc. of the ACM SIGSAC New Security Paradigms Workshop, 1993, ACM, New York, pp. 36-41.
10. Dept. of Defense, Trusted Computer System Evaluation Criteria (DoD 5200.28-STD, Washington, D.C., 1985).
11. M. Deutsch, Trust and suspicion, Journal of Conflict Resolution, 2 (3) 265-279.
12. J. R. Dunn and M. E. Schweitzer, Feeling and believing: the influence of emotion on trust, Journal of Personality and Social Psychology, 88 (5) 736-748.
13. B. J. Fogg, Persuasive Technology: Using Computers to Change What We Think and Do (Morgan Kaufmann, San Francisco, 2003).
14. M. Fuketa, Y. Kadoya, E. Atlam, T. Kunikata, K. Morita, S. Kashiji and J. I. Aoe, A method of extracting and evaluating good and bad reputations for natural language expressions, International Journal of Information Technology & Decision Making, 2 (4) 177-196.
15. D. E. Gefen, E. Karahanna and D. W. Straub, Trust and TAM in online shopping: an integrated model, MIS Quarterly, 27 (1) 51-90.
16. A. Giddens, The Consequences of Modernity (Stanford University Press, Stanford, 1990).
17. T. Grandison and M. Sloman, A survey of trust in internet applications, IEEE Communications Surveys, 4 (3) 2-13.
18. M. Harish, N. Anandevelu, N. Anbalagan, G. S. Mahalakshmi and T. V. Geetha, Design and analysis of a game theoretic model for P2P trust management, in Distributed Computing and Internet Technology (Springer, 2007).
19. A. Herzberg et al., Access control meets public key infrastructure, in Proc. of the IEEE Conf. on Security and Privacy, 2000, Oakland, pp. 2-14.
20. L. A. Ho, T. H. Kuo, C. Lin and B. Lin, The mediate effect of trust at knowledge sharing: an empirical study, International Journal of Information Technology & Decision Making, 4 (9) 625-644.
21. M. H. Hsu, T. L. Ju, C. H. Yen and C. M. Chang, Knowledge sharing behavior in virtual communities: the relationship between trust, self-efficacy, and outcome expectations, International Journal of Human Computer Studies, 65 (2007) 153-169.
22. J. Hwang, S. Kim, H. J. Kim and J. Park, An optimal trust management method to protect privacy and strengthen objectivity in utility computing services, International Journal of Information Technology & Decision Making, 2 (10) 287-308.
23. A. Jøsang, A logic for uncertain probabilities, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 3 (9) 279-311.
24. D. Kahneman, P. Slovic and A. Tversky (eds.), Judgment Under Uncertainty (Cambridge University Press, Cambridge, 2006).
25. R. Kerr and R. Cohen, Trust as a tradable commodity: a foundation for safe electronic marketplaces, Computational Intelligence, 26 (2010) 160-182.
26. D. Kovač and D. Trček, Qualitative trust modeling in SOA, Journal of Systems Architecture, 4 (55) 255-263.
27. N. Kumar, L. K. Scheer and J. B. E. M. Steenkamp, The effects of perceived interdependence on dealer attitudes, Journal of Marketing Research, 17 (1995) 348-356.
28. G. A. Lopez-Herrera, E. Herrera-Viedma, M. J. Cobo, G. Kou and Y. Shi, A conceptual snapshot of the first decade (2002-2011) of the International Journal of Information Technology & Decision Making, International Journal of Information Technology & Decision Making, 2 (11) 247-270.
29. J. D. Lewis and A. Weigert, Trust as a social reality, Social Forces, 63 (4) 967-985.
30. J. Miller, P. Resnick and D. Singer, PICS Rating Services and Rating Systems (W3C, http://www.w3c.org/TR/REC-PICS-services, 1996).
31. P.-A. Matt, M. Morge and F. Toni, Combining statistics and arguments to compute trust, in Proc. of the 9th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS), 2010, Toronto, pp. 209-216.
32. L. Pessoa, On the relationship between emotion and cognition, Nature Reviews Neuroscience, 9 (2008) 148-158.
33. V. Reding, The need for a new impetus to the European ICT R & I Agenda, in Int. High Level Research Seminar on "Trust in the Net", 2006, Vienna.
34. J. Riegelsberger, M. A. Sasse and J. D. McCarthy, The mechanics of trust: a framework for research and design, International Journal of Human-Computer Studies, 3 (62) 381-422, 2005.
35. H. Roediger, E. Capaldi, S. Paris, J. Polivy, C. Herman and M. Brysbaert, Psychologie: Een inleiding (Academia Press, Gent, 1999).
36. E. Rusman, J. van Bruggen, P. Sloep and R. Koper, Fostering trust in virtual project teams: toward a design framework grounded in a TrustWorthiness ANtecedents (TWAN) schema, International Journal of Human-Computer Studies, 11 (68) 834-850.
37. J. Sabater and C. Sierra, Review on computational trust and reputation models, Artificial Intelligence Review, 24 (2005) 33-60.
38. G. Shafer, A Mathematical Theory of Evidence (Princeton University Press, Princeton, 1976).
39. B. Shneiderman, Designing trust into online experiences, Communications of the ACM, 43 (12) 57-59.
40. E. Sillence, P. Briggs, P. Harris and L. Fishwick, A framework for understanding trust factors in web-based health advice, International Journal of Human-Computer Studies, 8 (64), pp. ?.
41. R. C. Solomon and F. Flores, Building Trust in Business, Politics, Relationships, and Life (Oxford University Press, New York, 2001).
42. M. Tennenholtz, Game-theoretic recommendations: some progress in an uphill battle, in Proc. of AAMAS '08, 2008, Estoril, pp. 10-16.
43. D. Trček and G. Kandus, Trust management in e-business systems - from taxonomy to trust engine architecture, in Proceedings of the WSEAS Int. Conference on Information Security, Hardware and Software Codesign, E-Commerce and Computer Networks, Rio de Janeiro, 2002, pp. 1891-1895.
44. D. Trček, A formal apparatus for modeling trust in computing environments, Mathematical and Computer Modelling, 1-2 (49) 226-233.
45. D. Trček, Trust management in the pervasive computing era, IEEE Security & Privacy, 4 (9) 52-55.
46. D. Trček, Trust management methodologies for the web, in Reasoning Web 2011, Lecture Notes in Computer Science 6848 (Springer, 2011), pp. 445-459.
47. A. Vasalou, A. Hopfensitz and J. V. Pitt, In praise of forgiveness: ways for repairing trust breakdowns in one-off online interactions, International Journal of Human-Computer Studies, 6 (66) 466-480.
48. B. J. Wadsworth, Piaget's Theory of Cognitive and Affective Development: Foundations of Constructivism (Allyn & Bacon Classics Edition, Longman, New York, 1996).
49. Y. Wang and J. Vassileva, Trust and reputation model in peer-to-peer networks, in Proc. of the 3rd Int. Conference on Peer-to-Peer Computing (P2P'03), 2003, Linkoping, pp. 150-159.
50. B. Yu and M. P. Singh, Distributed reputation management for e-commerce, in Proc. of the 1st AAMAS Conference, 2002, Bologna.
51. H. Zhao and X. Li, H-Trust: a group trust management system for peer-to-peer desktop grid, Journal of Computer Science and Technology, 5 (24) 833-843.

About the author

Prof. Dr. Denis Trček heads the Laboratory of E-media at the Faculty of Computer and Information Science, University of Ljubljana. He has been involved in the field of IT security, privacy and trust for over twenty years. He has taken part in many EU and national projects in the government, banking and insurance sectors (projects under his supervision have totaled approximately one million EUR). He has authored or co-authored over a hundred titles, including a monograph published by the renowned publisher Springer. D. Trček has served (or still serves) as a member of various international bodies, including the NATO ICS panel and the Management Board of the European Network and Information Security Agency, ENISA.