
Activity Stereotypes, or How to Cope with Disconnection during Trust Bootstrapping

Marc Sánchez-Artigas, Member, IEEE, and Blas Herrera

The authors are with the Department of Computer Engineering and Maths, Universitat Rovira i Virgili, Spain. Email: {marc.sanchez, blas.herrera}@urv.cat.

Abstract—Trust-based systems have been proposed as a means to fight against malicious agents in peer-to-peer networks, volunteer and grid computing systems, among others. However, there still exist some issues that have been generally overlooked in the literature. One of them is the question of whether punishing disconnecting agents is effective. In this paper, we investigate this question for those initial cases where prior direct and reputational evidence is unavailable, which is referred to in the literature as trust bootstrapping. First, we demonstrate that there is no universally optimal penalty for disconnection and that the effectiveness of this punishment is markedly dependent on the uptime and downtime session lengths. Second, to minimize the effects of an improper selection of the disconnection penalty, we propose to incorporate predictions into the trust bootstrapping process. These predictions, based on the current activity of the agents, shorten the trust bootstrapping time when direct and reputational information is lacking.

Index Terms—Trust bootstrapping, network disconnection, punishment, stereotypes


1 INTRODUCTION

TRUST-BASED systems have been proposed for a large variety of applications, ranging from mobile ad hoc networks to Grids and P2P networks. At present, despite their maturity, some fundamental questions still remain unanswered. One of these important questions relates to the notion of disconnection as a trust-diminishing event or punishable action. In open environments like P2P systems and Grid platforms like BOINC [1], disconnection affects the quality of service (QoS). To wit, in a P2P streaming service, QoS can be achieved as long as a continuous and uninterrupted data flow is maintained. It is basically for this reason that streaming systems like ripple-stream [2] and trust systems like [3], [4], [5] issue negative feedback for agents that are supposed to be providing the service but cannot do so because they are disconnected. The key problem is that in open environments it is not possible to differentiate between negative feedback due to malicious behavior and negative feedback due to disconnection; an agent can disconnect at any time, and the trustor cannot tell whether the disconnection was intentional or not.

By the above discussion, one could infer that the most convenient method is to heavily penalize disconnection. However, contrary to intuition, as we show in this work, punishing disconnection might be counterproductive. This is especially true in those initial situations where no prior direct and reputational evidence is available. One case is when a new agent enters the system for the first time. In this situation, it is generally not possible for any trustor to form a reliable opinion on that agent. This also occurs when users form an ad hoc group around a shared goal and disband once the pursued goal is met. In such cases, evidence can only be obtained through direct interaction, when some trustors are willing to take a chance and risk interacting with unknown trustees. It is the risk inherent in bootstrapping trust that can lead to applying little or no penalty to disconnecting agents.

For instance, consider the case in which multiple unknown agents offer the same file for download. In this context, a transaction may simply be the transfer of a file piece to a trustor. Since a priori all agents have the same unknown disposition to good action, the first interacting trustee is chosen at random. Now suppose that after completing a certain number of transactions, the interacting trustee becomes unresponsive. At that point, the trustee may have accumulated a certain amount of positive feedback and present a good trust level. Then it is not hard to imagine that the trustor is confronted with the decision of whether to wait for the trustee to recover or take a chance on another agent.

The magnitude of the penalty determines the outcome of that decision. If the penalty is large, the odds of taking a chance on a new agent are higher. In this case the trustor will maximize interaction, but it will be more exposed to abuses. On the contrary, if the penalty is low, the trustee may come online again before its trustworthiness drops too low and continue providing good service. This will minimize the risk of bad interaction, but at the expense of more service interruptions.

Our first contribution is to examine this tradeoff and, more generally, to assess to what extent the amount of penalty given to disconnection affects the bootstrapping of trust. To make the analysis tractable, we assume that T time units must elapse after disconnection in order to prefer an unknown agent. A smaller value of T implies a greater penalty, i.e., a higher probability for the trustor to take a chance on a new trustee. Using this parameter, we develop a stochastic model to estimate the expected time to obtain the first confident trust evaluation on an agent, provided that all the agents implementing the service are unknown to the trustor. By "confidence" we refer to the event that the trustor acquires enough direct evidence to form a risk perception. This time, which we simply call the trust bootstrapping time, is a good indicator of the efficacy of a trust system. If this time is short, trustors can quickly form an impression to guide their future interactions.

As a result of our analysis, we arrive at the conclusion that there is no universally optimal penalty. The optimal penalty is too dependent on the exact amount and type of churn, the term coined to refer to the collective effect of the continuous arrival and departure of users in the system [6]. The direct consequence of this is that an unfortunate choice for the penalty given to disconnection can lengthen the trust bootstrapping time. To the best of our knowledge, we are the first to analyze the effect that disconnection punishment has on trust bootstrapping.

Our second contribution is to incorporate availability predictions into the trust bootstrapping process to reduce the effects of a bad selection of the disconnection penalty. The key feature of our predictions is that they are based on the activities of users. Returning to our example, a trustor may simply learn that trustees downloading files between 1 and 4 GB tend to have long sessions, and use this knowledge to choose between the candidates based on their downloading activity. We call these predictions activity stereotypes. While one can argue that this concept is similar to the "classical" concept of stereotypes [7], [8], the specifics of user connection habits require specialized treatment, making activity stereotypes a novel approach.

The remainder of the paper is structured as follows. In Section 2, we overview related work. We introduce our analytical model of trust bootstrapping and dynamics in Section 3. Section 4 presents the analytical results that demonstrate the lack of a universally optimal penalty for disconnected operation. Section 5 describes the notion of activity stereotypes, and Section 6 provides an evaluation of their effectiveness. Conclusions are drawn in Section 7.

2 RELATED WORK

In general, the differentiation between a malicious and a good user depends heavily on the nature of the system. Misbehavior can be classified as deliberately malicious or as arising out of temporary outages or user connection habits. Deciding whether to qualify such non-subjective factors as misbehavior depends on the parameters of the system. While many works on trust, either implicitly or explicitly, have formerly classified 'No Response' and disconnection as a punishable event [2], [3], [4], [9], to name a few, the consequences of punishing disconnected behavior have not received sufficient attention. Typically, trust systems like PET [3] and ripple-stream [2] fix a numeric constant to penalize disconnection, thereby punishing agents in a way completely irrespective of their connection patterns. As we demonstrate in this work, this may be problematic when interacting with strangers, and in general in those cases where both direct and reputational evidence is not forthcoming. The present article is the first to investigate this important issue by studying the effects of punishing disconnecting agents and proving the lack of an optimal penalty for disconnection.

The general effects of dynamics on trust systems have been studied in our prior works [10], [11], [12]. In these works, we developed an analytical framework to assess the difficulties of establishing trust in the absence of trust information sharing among entities. Thus, no analysis of disconnection punishment was conducted there, let alone the development of a new technique to bootstrap trust.

For trust bootstrapping, there have recently been some efforts to develop tentative forms of trust in the absence of direct and reputational experiences [7], [8], [13]. These approaches propose to exploit stereotypical impressions formed on "similar" agents in previous contexts in order to make tentative trust evaluations on unknown agents. Although the concept of stereotypes for decision making was proposed previously (see, for instance, [14]), the first computational stereotype model was introduced by Liu et al. in [7]. Since then, other works like [8], [13] have proposed to exploit stereotypes for trust evaluation. In particular, Burnett et al. [8] proposed to bootstrap trust of unknown agents through stereotypes, which are built based on the M5 tree learning algorithm and are shareable as reputations.

The principal idea of the stereotypical approaches described above is to utilize visible features of agents to make generalized trust assessments. In this regard, "classical" stereotypes resemble activity stereotypes. However, they present two important differences. The first difference is that activity stereotypes focus on service continuity rather than on the trustworthiness of agents, as occurs in the case of classical stereotypes. The other basic difference is that activity stereotypes are always formed from information available in the system, while classical stereotypes can be built with featural evidence coming from external sources of information like social networking systems. To inform this argument, Burnett et al. [15] have recently discussed what types of contextual knowledge can be used to build stereotypes. Three feature sources are identified, but none of them can be directly used to build activity stereotypes, since they correlate with trustworthiness instead of with service continuity, like the relationships between agents in social networks and the experience of agents in certain tasks. As a result, activity stereotypes require a different and original treatment compared with prior approaches.

Finally, Sensoy et al. [16] argue that agents exhibiting similar behavior share some patterns in the relationships between their descriptive features and propose to exploit them to bootstrap trust. As with the above proposals, the main flaw of this approach is the use of ontological knowledge to describe agents in detail, information which is seldom available in many scenarios. Like us, they use subjective logic to represent trust [17] and the base rate to integrate their predictions into the trust formation process, as in [8].

3 MODELING TRUST BOOTSTRAPPING

3.1 Agent Dynamics and Metrics

As in many important modeling works [18], [19], [20], we model the alternating online and offline agent behavior as a 2-state continuous-time Markov chain (CTMC) with transition rates $r_{on}$ and $r_{off}$, respectively. That is, a user stays connected for an average amount of time $1/r_{on}$ and then disconnects. The average amount of time the agent stays offline before reconnecting to the system is $1/r_{off}$. In systems dominated by user-driven interruptions like Maze or Kad, it has been shown that this simple CTMC provides a good approximation to user behavior [21]. All the notation is compiled in Table 1.

Having described the model for agent churn, we now turn to trust bootstrapping. Since we consider initial cases where direct and reputational information is unavailable, initial trust can only be obtained from direct encounters. It may be unwise to aggregate the opinions of unknown agents, as some may be malicious or unreliable in some way. This is the fundamental reason why we characterize trust bootstrapping as a stochastic process modeling the occurrence of direct interactions. Concretely, we assume that a trustor must interact at least $\ell$ times with a trustee to be confident that the resulting trust evaluation is a good predictor of future behavior. That is, the value of $\ell$ marks the point at which there is little or no uncertainty about the outcome of the next interaction, and at which it is safe to ask a trustee reputational queries, among other things.

Further, the use of this parameter turns our study into a generic analysis. That is, while the exact value of $\ell$ will vary across systems, our approach remains valid for many of them. For instance, this includes those trust and reputation systems that combine the numerical ratings of past interactions to output a trust value, including those based on belief models [17] and all the works cited in the related work, like [2], [3], [4], [8], [9], [16], to name a few.

Based on this parameter, we define a new metric called the trust bootstrapping time to understand the influence of punishing disconnection when interacting with unknown agents. This measure focuses on the time it can take for a trustor within a group of unknown agents to form the first confident trust evaluation. Ideally, this time should be as short as possible so that the trustor can benefit from confident trust evaluations of its partners before group disbandment.

Definition 1. Given a group of unknown agents U providing a service, we define the trust bootstrapping time $t_\ell$ as the time to complete the first $\ell$ transactions with any of the agents in U.

For analytical tractability, we assume that transactions occur immediately one after another, according to a Poisson process with parameter $\lambda$. In this way, transactions have a similar duration, making it unnecessary to calculate the gain in trust on the basis of the amount of work done. Also, this interaction model is very common in the trust literature (see [10], [22], to cite a few examples).

3.2 Stochastic Model

Since all agents in the group are new and unknown, all are assigned the same default trust value. In practice, this value represents how much trust the trustor places in an agent before any evidence has been received. Because all agents have the same trust value, we assume that the trustor chooses one agent uniformly at random to interact with. At that moment, the trust bootstrapping process starts.

For analytical tractability, we assume that the outcome of a transaction is always satisfactory enough to initiate a new one with the current trustee. Therefore, our stochastic model considers only one type of trust-decreasing event: the 'No Response', which occurs when the trustee, intentionally or not, fails to complete a transaction due to disconnection. This corresponds to the case where agents provide good service but receive punishment due to their online-offline oscillatory behavior, the subject of study of this work. We observe that this is the right way to isolate the effects of dynamics from the effects of malicious behavior in trust bootstrapping, which has already been studied [7], [8].

Also, it must be noted that the effect of this assumption is not significant. We notice that if trustees behave badly by responding wrongly or even maliciously to transaction requests, the trust bootstrapping time will be longer. The reason is that it generally takes a few bad interactions to lose trust but many good interactions to trust someone [23]. It is then natural to expect that the trustor switches to a new agent after experiencing just a few negative transactions, lengthening the time to complete the first $\ell$ transactions with some agent in the group. In practice, this makes our results conservative but accurate enough to measure the effect of disconnection. Indeed, our analytical predictions are in good agreement with our simulation results, where half of the trustees were instrumented to misbehave. See Section 6 for further details.

Last but not least, there remains the question of how to punish disconnecting agents, stochastically speaking. To characterize the intensity of the penalty, we introduce a new parameter into the model. This parameter, denoted by T, specifies the number of time units that must elapse after disconnection of the current trustee before taking a chance on an unknown agent in the group:

Definition 2. A switch to an unknown agent is triggered after the current trustee has been disconnected for T, T > 0, time units. During this time, the disconnected trustee is assumed to reject all attempted transactions issued by the trustor.

By this characterization, it is possible to make sure that an unknown agent is always preferred over trustees who have been disconnected for more than T consecutive time units, in an attempt to maximize service continuity.

Further, the value of T determines the aggressiveness of the punishment. Smaller values of T represent a greater penalty, i.e., a higher probability for the trustor to take a chance on a new agent. On the contrary, larger values of T decrease the risk of a bad response. Considering that the current trustee is offering good service, a larger T trades off longer interruptions in the service against the risk of switching to an unknown agent, who may be malicious. By simply varying the value of T, our model allows us to investigate how the intensity of disconnection penalties impacts the trust bootstrapping time.

TABLE 1. Trust Model Main Notation.

The state transition diagram is given in Fig. 1. States (ON, i) and (OFF, i) represent the case where the number of completed transactions is $i \ge 0$ and the current trustee is ON and OFF, respectively. In state (ON, i), the process can jump into either state (ON, i+1), which represents that a new transaction has ended, or state (OFF, i), which implies that the trustee is now offline. In state (OFF, i), the process can jump into state (ON, 0) if the disconnection time exceeds T, which implies a trustee switch, or into state (ON, i) otherwise. The state of the process at time 0 is of course (ON, 0). The kernel $Q(t) = [Q_{(x,i)\to(y,j)}(t)]$ of this process, which we denote by $\{Y(t)\}_{t\ge 0}$, is as follows:

$$Q_{(OFF,0)\to(ON,0)}(t) = \Pr\{\text{trustee recovers before or at time } t,\ t < T,\text{ with no transactions completed}\} = 1 - e^{-r_{off}t} + e^{-r_{off}t}\,u(t-T).$$

$$Q_{(ON,i)\to(OFF,i)}(t) = \Pr\{\text{trustee logs off during transaction } i+1\} = \frac{r_{on}}{r_{on}+\lambda}\bigl(1 - e^{-(r_{on}+\lambda)t}\bigr), \quad \forall i = 0,\ldots,\ell-1.$$

$$Q_{(ON,i)\to(ON,i+1)}(t) = \Pr\{\text{transaction } i+1 \text{ completed before the trustee goes to the OFF state}\} = \frac{\lambda}{r_{on}+\lambda}\bigl(1 - e^{-(r_{on}+\lambda)t}\bigr), \quad \forall i = 0,\ldots,\ell-1.$$

$$Q_{(OFF,i)\to(ON,i)}(t) = \Pr\{\text{trustee goes to the ON state before or at time } t \text{ and } t < T\} = 1 - e^{-r_{off}t} - \bigl(e^{-r_{off}T} - e^{-r_{off}t}\bigr)u(t-T), \quad \forall i = 1,\ldots,\ell-1.$$

$$Q_{(OFF,i)\to(ON,0)}(t) = \Pr\{\text{trustee does not go to the ON state at time } t \text{ and } t \ge T\} = e^{-r_{off}T}\,u(t-T), \quad \forall i = 1,\ldots,\ell-1,$$

where $u(t-T)$ is the unit step function at T.

Because we are interested in the time to complete the first $\ell$ transactions with any of the unknown trustees, it can easily be verified that the trust bootstrapping time $t_\ell$ corresponds to the first-hitting time of the process $\{Y(t)\}_{t\ge 0}$ onto state (ON, $\ell$) given that $Y(0) = (ON, 0)$:

$$t_\ell = \inf\{u > 0 : Y(u) = (ON, \ell) \mid Y(0) = (ON, 0)\}.$$

In the next section, we calculate the expectation of this first-hitting time and use it to observe what happens to the trust bootstrapping time as a function of the intensity of the penalty imposed on disconnection.
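As a complementary sanity check (ours, not part of the original analysis), the semi-Markov process defined above can also be estimated by direct Monte Carlo simulation. The following Python sketch assumes exactly the model stated in this section: exponential ontimes and offtimes with rates $r_{on}$ and $r_{off}$, Poisson transaction completions at rate $\lambda$ while the trustee is online, and a switch to a fresh online unknown agent once the current trustee has been offline for T time units. All function and variable names are ours.

```python
import random

def bootstrap_time(ell, lam, r_on, r_off, T):
    """One sample of the trust bootstrapping time t_ell.

    ell   -- transactions needed for a confident evaluation
    lam   -- transaction completion rate while the trustee is ON
    r_on  -- disconnection rate (1 / mean ontime)
    r_off -- reconnection rate (1 / mean offtime)
    T     -- penalty parameter: switch after T time units of downtime
    """
    t, completed = 0.0, 0            # start in state (ON, 0)
    while completed < ell:
        # Competing exponentials: next transaction vs. disconnection.
        t_tx = random.expovariate(lam)
        t_off = random.expovariate(r_on)
        if t_tx < t_off:             # jump (ON, i) -> (ON, i+1)
            t += t_tx
            completed += 1
        else:                        # jump (ON, i) -> (OFF, i)
            t += t_off
            downtime = random.expovariate(r_off)
            if downtime <= T:        # trustee recovers: (OFF, i) -> (ON, i)
                t += downtime
            else:                    # switch to unknown agent: (OFF, i) -> (ON, 0)
                t += T
                completed = 0
    return t

def mean_bootstrap_time(ell, lam, r_on, r_off, T, runs=20000):
    return sum(bootstrap_time(ell, lam, r_on, r_off, T) for _ in range(runs)) / runs

if __name__ == "__main__":
    # Parameters of Fig. 2a: E[L] = 1 h, E[D] = 0.5 h, lambda = 10 tx/h.
    print(mean_bootstrap_time(ell=10, lam=10.0, r_on=1.0, r_off=2.0, T=0.1))
    # With no penalty (T very large), Corollary 1 below predicts
    # E[t_ell] = (ell / lam) * (r_off + r_on) / r_off = 1.5 hours.
    print(mean_bootstrap_time(ell=10, lam=10.0, r_on=1.0, r_off=2.0, T=1e9))
```

Such an estimator provides an independent check of the closed-form expressions derived next.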

4 DISCONNECTION PUNISHMENT: ANALYSIS

We start by finding the average bootstrapping time $E[t_\ell]$ and measure the influence of disconnection punishment on trust evaluation when no prior evidence can be found:

Theorem 1. For user ontimes with CDF $1 - e^{-r_{on}x}$, user offtimes with CDF $1 - e^{-r_{off}x}$, and punishment intensity T, the mean time to complete the first $\ell$ transactions with a trustee in the group is given by

$$E[t_\ell] = \frac{1}{r_{off}\,\lambda^{\ell}} \sum_{i=0}^{\ell} \bigl(r_{on}\,e^{-T r_{off}}\bigr)^{\ell-i} S_{i,\ell}, \qquad S_{i,\ell} = h_{\ell,i}\,r_{off}\,\lambda^{i-1} - \eta_{\ell,i}\,\lambda^{i} + \eta_{\ell,i}\,\lambda^{i}\,e^{T r_{off}}, \tag{1}$$

where $r_{on} = 1/E[L]$, $r_{off} = 1/E[D]$, and $E[L]$ and $E[D]$ denote the average online and offline session durations, respectively. Further, $h_{\ell,i}$ and $\eta_{\ell,i}$ satisfy the recurrence relations $h_{\ell,i} = h_{\ell-1,i} + h_{\ell-1,i-1}$ for all $i > 1$ and $\eta_{\ell,i} = h_{\ell,i+1}$ for all $i < \ell$. The initial conditions are $h_{\ell,0} = 0$, $h_{\ell,1} = 1$, $h_{\ell,\ell} = \ell$ and $\eta_{\ell,\ell} = 0$.

Proof. The proof has been deferred to Appendix A, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2014.2308186. □

The first observation to be made is that the pure effect of the alternating online-offline behavior of agents can be predicted by taking Eq. (1) to the limit ($T \to \infty$). Recall that letting $T \to \infty$ is equivalent to imposing no penalty on disconnected agents. Hence, the trustor considers that disconnection is temporary, e.g., due to the breakdown of the Internet connection, and that it is worthwhile to wait for the trustee to recover. By taking $T \to \infty$, we obtain the following corollary from Theorem 1:

Corollary 1. When imposing no penalty on disconnected agents, the mean bootstrapping time for $\ell$ transactions is given by

$$E[t_\ell] = \frac{\ell}{\lambda}\left(\frac{r_{off} + r_{on}}{r_{off}}\right), \tag{2}$$

where $\lambda$ is the transaction rate, and $r_{on}$ and $r_{off}$ denote the disconnection and reconnection rates, respectively.

Proof. A rigorous proof is given in Appendix B, available in the online supplemental material. □

Eq. (2) has a very interesting interpretation: the mean bootstrapping time is inversely proportional to the steady-state user availability $A = \frac{E[L]}{E[L] + E[D]} = \frac{r_{off}}{r_{on} + r_{off}}$.

Fig. 1. State diagram for the semi-Markov chain $\{Y(t)\}_{t\ge 0}$.

Because there is no punishment in this case ($T \to \infty$), Eq. (2) suggests that the time spent in accruing enough supporting evidence to make a confident trust evaluation depends basically on the probability of the initial trustee staying connected. This carries consequences for trustors who seek to minimize the inherent risk in bootstrapping trust. A cautious trustor will prefer to impose little or no penalty to minimize the number of switches to unknown agents caused by the temporary outages of a cooperative trustee. Such a "conservative" behavior will cause trust evaluations to be highly dependent on agent availabilities. If availabilities are low, it will clearly be advantageous to select another agent after a short period of inactivity, with no way to reduce the inherent risk in bootstrapping trust. This result is pessimistic for certain types of distributed systems like P2P networks. In P2P systems, availabilities tend to be low. Hence, imposing no penalty may convert trust bootstrapping into a rather lengthy process. Taking, for instance, the mean availability observed in BitTorrent [24], the average bootstrapping time $E[t_\ell]$ will be 3.57 hours for $\ell = 10$ and a transaction rate $\lambda$ of 10 interactions per hour. More examples are reported in Appendix C, available in the online supplemental material.
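As a quick illustration (ours), the BitTorrent figure above can be reproduced from Eq. (2) rewritten in terms of availability; the availability value used here is back-derived from the numbers reported in the text, not taken directly from [24]:

```python
# E[t_ell] = ell / (lam * A), with A = r_off / (r_on + r_off).
ell, lam = 10, 10.0     # 10 transactions, 10 interactions per hour
A = 0.28                # mean availability consistent with the 3.57 h quoted above
print(ell / (lam * A))  # ~3.57 hours
```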

4.1 Impact of Disconnection Penalty

As elaborated just above, it seems at first glance that an aggressive punishment should shorten the bootstrapping time and favor a more uninterrupted service. While the latter is true, since the periods of inactivity are shorter, the former is not necessarily so. Contrary to intuition, a short T may increase the trust bootstrapping time, especially if the widely adopted design principle that good behavior should increase trust slowly is applied. In PET [3], for example, the penalty incurred by not responding is 3 times greater than the reward obtained for good behavior.

An example of this behavior is illustrated in Fig. 2a. In the figure, the predicted $E[t_\ell]$ is plotted against $\ell$ and compared to the empirical $E[t_\ell]$ obtained via simulation. For this example, the average ontime was $E[L] = 1$ hour and the average offtime $E[D] = 0.5$ hours, with a transaction rate $\lambda$ of 10 transactions per hour. As seen in the figure, as the number of interactions $\ell$ needed to form a reliable opinion increases, the average bootstrapping time under an aggressive punishment ($T = 6$ minutes, i.e., the average duration of a transaction, $1/\lambda$, for $\lambda = 10$ interactions per hour) may end up becoming greater than when no punishment is imposed on departing agents ($T = \infty$). The primary reason is that disconnected agents present a strong tendency to return sooner rather than later, as observed in real systems [6]. Hence, an aggressive strategy to support service continuity may be counterproductive for trust establishment. In fact, the probability of switching to a new and unknown agent at least once is given by

$$\bigl(1 - r^{\ell}\bigr)\,e^{-r_{off}T}, \quad \text{with } r = \frac{\lambda}{\lambda + r_{on}}, \tag{3}$$

which is dominated by $r^\ell$ as $T \to 0$ (as the punishment intensity is raised). To put this in perspective, consider that the number of transactions required to form a useful trust opinion is $\ell = 10$. In this scenario, Eq. (3) yields a probability of switching of 0.61 when $T = 0$ (immediate switch after disconnection), which is reduced to 0.41 for $T = 12$ minutes, and to 0 when $T \to \infty$ (no penalty).
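The switching probabilities quoted above follow directly from Eq. (3). The short check below (our own code) uses the parameters of Fig. 2a, with T expressed in hours:

```python
from math import exp

def switch_prob(ell, lam, r_on, r_off, T):
    """Probability of switching to an unknown agent at least once, Eq. (3)."""
    r = lam / (lam + r_on)
    return (1.0 - r ** ell) * exp(-r_off * T)

# E[L] = 1 h (r_on = 1), E[D] = 0.5 h (r_off = 2), lambda = 10 tx/h, ell = 10.
for T in (0.0, 0.2, 1e9):   # 0.2 h = 12 minutes; 1e9 approximates T -> infinity
    print(T, round(switch_prob(10, 10.0, 1.0, 2.0, T), 2))
# Prints approximately 0.61, 0.41 and 0.0, matching the values in the text.
```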

Moreover, this numerical example clearly captures the existing tradeoff between quality of service and the risk of interaction: optimizing service continuity may require, to a greater or lesser degree, taking a chance on an unknown, distrusted agent, which, in addition to increasing the risk associated with interaction, may not help to shorten the trust bootstrapping time. The same behavior can be seen for other choices of parameters, as illustrated in Fig. 2b. As can be seen in the figure, there exists a crossover point ($\ell = 14$ in this case) beyond which imposing no penalty on departed agents is again more effective.

In fact, if we concentrate on the amount of punishment T that minimizes the mean bootstrapping time, it can be shown that there exists an integer L with the property that

$$\lim_{T\to\infty} E[t_\ell] < E[t_\ell](0), \quad \forall \ell \ge L, \tag{4}$$

where $E[t_\ell](0)$ denotes the average bootstrapping time obtained when $T = 0$ (maximum penalty), i.e., when the trustor switches immediately to an unknown agent after the disconnection of the actual trustee. Loosely speaking, (4) tells us that there exists a threshold L on the number of transactions beyond which imposing no penalty at all leads to a shorter bootstrapping time than the one reached when imposing an aggressive penalty, which is counterintuitive. Exact conditions for when this transition happens are given in Appendix D, available in the online supplemental material.

To wit, for the same choice of parameters as in Fig. 2a, the value of L is 9, which marks the point beyond which punishing disconnection becomes negative. This is better reflected in Fig. 2c, where the mean bootstrapping time is plotted for different punishment intensities. The main point to be made here is the presence of two differentiated regions in the figure, separated by the crossover point L. For $\ell = 1, \ldots, L-1$, the minimum bootstrapping time is achieved when the penalty is maximal ($T = 0$), while for $\ell \ge L$, the optimal bootstrapping time is attained when no penalty is applied. Intermediate values strike a balance between quick bootstrapping and service continuity.

In summary, our analysis proves the lack of a universally "optimal" penalty for disconnection and formally shows that the effectiveness of the punishment is strongly dependent on the type and amount of agent turnover.

Fig. 2. $E[t_\ell]$ plotted against $\ell$ for different mean ontime $E[L]$ and offtime $E[D]$ durations. $\lambda = 10$ transactions/hour.


5 ACTIVITY STEREOTYPES

The lack of a globally optimal disconnection penalty demands new techniques to improve the bootstrapping of trust while protecting the system from high agent turnover.

One way to do so is to incorporate predictions on peer uptimes into the trust bootstrapping process. Among the possible solutions, we concentrate on the current activity of an agent as a predictive mechanism. By ascribing initial peer selection to learned classes of agent activity, a trustor can make use of prior experience with these classes to form tentative evaluations of the dependability component of trust, which is critical in the initial cases discussed in this article. This concept is similar to the notion of "classical" stereotypes [7], [8] but targeted at reducing the impact of disconnections and service interruptions in initial cases where prior evidence is unavailable.

To put it in a practical context, typical activities include the download of a file in pieces (Direct Connect, Gnutella, ...) or the participation in a live streaming session (PPLive, PPStream, ...). Concrete examples of activities and their respective stereotypes are discussed in Appendix E, available in the online supplemental material.

A requirement for activity stereotypes is that they are meant to complement, not replace, direct evidence about an agent when it is available. While activity stereotypes may facilitate useful predictions in initial cases, they are based on empirical generalizations and should carry less weight than direct observation. As in [8], we fulfill this requirement by adapting the default trust in unknown agents to the behavior of the majority of agents carrying out the same activity. Recall that default trust refers to the trust put in a newly deployed trustee before any evidence has been received [25]. If the agents participating in a given activity $A_i$ have historically tended to stay connected for long periods of time, an unknown agent executing $A_i$ can be privileged by assigning a high initial trust to it.

Technically, the objective of our approach is to identify a function f that maps the activity vector $\vec{A}$ of an agent to an estimate $f(\vec{A})$ of service continuity, which increases or decreases the default trust given to an unknown agent.

5.1 Trust Model

Regardless of the underlying model, the key requirement of our approach is that the estimates the function f produces are compatible with the trust model being used. For this reason, we next describe a concrete trust model to show how easy it is to integrate activity stereotypes into a trust system. The primary requirement is that the trust system allows assigning neutral or default trust values to newly joined agents, which holds in the majority of cases.

Specifically, we adopt the model proposed in [17], based on Subjective Logic. The reason is that it admits the mapping of the estimates from activity stereotypes onto default trust values, referred to as base rates in this trust model. Deterring participation by dishonest individuals and encouraging trustworthy behavior are matters of the underlying trust system. We have simply augmented the trust model proposed in [17] with activity stereotypes to demonstrate their effectiveness. Any trust model using numerical ratings could be used in its stead.

5.1.1 Representation

In this trust model, an opinion held by a trustor x about a trustee y is represented as a tuple $w^x_y = \langle b^x_y, d^x_y, u^x_y, a^x_y\rangle$, where the values $b^x_y$, $d^x_y$, and $u^x_y$ express the degree of belief, disbelief, and uncertainty towards y, respectively. These values satisfy $b^x_y + d^x_y + u^x_y = 1$, with $b^x_y, d^x_y, u^x_y \in [0,1]$. Uncertainty measures the absence of evidence to support either belief or disbelief, such that an opinion based on $\ell = 100$ transactions has a greater certainty than another one based on just 1 observation. Clearly, certainty, or the confidence in a trust value, is thus equivalent to $(1 - u^x_y)$.

Worthy of special mention is the parameter $a^x_y \in [0,1]$, called the base rate, which corresponds to default trust. In the absence of any specific evidence about a given agent, the base rate determines the a priori trust that would be put in any member of the group. For instance, if $a^x_y$ is set to 0.75, we believe that the result of the first interaction with agent y is likely to be favorable.

5.1.2 Evidence Aggregation

Opinions are formed on the basis of evidence amassed by interacting with other agents, which is represented as observed frequencies of positive and negative outcomes. A body of evidence held by a trustor x is a pair $\langle r^x_y, s^x_y\rangle$, where $r^x_y$ is the number of positive transactions received from y, and $s^x_y$ is the number of negative experiences. Using these parameters, an opinion is produced as [17]

$$b^x_y = \frac{r^x_y}{r^x_y + s^x_y + 2}, \qquad d^x_y = \frac{s^x_y}{r^x_y + s^x_y + 2}, \qquad u^x_y = \frac{2}{r^x_y + s^x_y + 2}. \tag{5}$$

Notice that Eq. (5) guarantees that uncertainty decreases as more evidence is accumulated. Alternatively, evidence could be obtained from third parties who had interacted with this specific individual before. However, since we are examining the problem of establishing trust when no historical information is available, evidence is acquired first hand by each trustor.

To treat no responses as a bad action, as in PET [3] and [4], so that users who continuously join and leave the system receive low trustworthiness, we classify negative experiences into two different categories: 'No Response' and 'Bad Behavior'. 'No Response' describes the situation where a trustee rejects or fails to complete a transaction request due to disconnection. Because it cannot be distinguished whether a disconnection was intentional or not, this class includes both cases. The reason is that, irrespective of the cause of the disconnection, the result is the same: a service interruption. The term 'Bad Behavior' encapsulates all the other negative experiences, mostly malicious responses.

To materialize this distinction, we calculate $s^x_y$ as a linear combination of the observed frequencies of each type of outcome: $s^x_y = v_B \cdot s^x_{y:B} + v_N \cdot s^x_{y:N}$, where $s^x_{y:B}$ is the number of wrong transactions, $s^x_{y:N}$ is the number of transactions that received no response, and $v_B$ and $v_N$ are the weights attached to each type of negative action. These weights are used to assign different levels of importance to each type of negative experience.


5.1.3 Trust Metric

A single-valued trust metric, useful for ranking agents, can be derived from a specific opinion $w^x_y$ as follows [17]:

$$P(w^x_y) = b^x_y + a^x_y \cdot u^x_y, \tag{6}$$

where the trust value is the probability expectation value $P(w^x_y)$ calculated from $w^x_y$. Observe that the base rate $a^x_y$ determines the effect that the parameter $u^x_y$ has on the resultant trust value.

In many trust systems, the default value of the base rate is usually 0.5, which signals that before any evidence has been received, both outcomes are equally likely to occur. Also in this case, $P(w^x_y) = 0.5$, which is the least informative value about an agent when no evidence has been acquired. Further, in this case uncertainty will be maximal ($u^x_y = 1$). Values of $a^x_y > 0.5$ will result in more uncertainty being converted to belief, and vice versa.

5.1.4 Reputation

Reputation in probabilistic trust systems is calculated by aggregating the evidence, in the form of $\langle r^x_y, s^x_y\rangle$ pairs, from trustworthy providers [3], [17]. However, since we analyze the effects of disconnection in those initial situations where no prior evidence is available, we also assume that no reputational evidence exists anywhere within the society. Moreover, aggregating the evidence provided by the unknown agents in the group may lead to weak or unreliable reputations. Hence, for peer selection, we only use the local trust values computed at each trustor.

5.2 Stereotype Function

To incorporate our estimates into the trust bootstrapping process, we use the base rate as in [8]. For a given trustee y, the base rate $a^x_y = f(\vec{A}_y)$. When no evidence has been accrued for trustee y, we have maximum ambiguity, i.e., $w^x_y = \langle 0, 0, 1, 0.5\rangle$. In this case, $a^x_y$ alone determines the value of $P(w^x_y)$. However, as more evidence is obtained, the value of $u^x_y$ decreases, and so does the weight carried by $a^x_y$ in the trust value. This fulfills the requirement that the effect of our estimates diminishes as direct evidence is accrued. We refer to this condition as Requirement 1.

Another central observation to be made is that activity stereotypes are useful for forming a tentative estimate of the 'No Response' component of a trust evaluation. This means that the increase of the base rate above its default value $a_{def}$ must never preclude the estimated trust value from dropping quickly if the agent misbehaves in just the first interaction. Otherwise, a malicious agent could perform an activity where agents have historically stayed connected for a long time to attract trustors, and then behave badly.

Although such a requirement is inherently subsumed by Requirement 1, we have derived an upper bound on the base rate $a^x_y$ that ensures that estimated trust values rapidly fall below the default trust value $a_{def}$. That is, in the absence of evidence and stereotypical knowledge, the default trust value for any unknown agent in the society is $P(w^x_y) = b^x_y + a_{def} \cdot u^x_y = a_{def}$, since $b^x_y = 0$ and $u^x_y = 1$.

The upper bound ensures that a stereotypical estimate for $P(w^x_y)$ drops below $a_{def}$ after just I consecutive bad transactions. The value of I can be seen as an expression of the misprediction risk. A small I signals that malicious trustees can be rapidly discarded. We take as a reference the value of $a_{def}$ because stereotypes never decrease the value of $a^x_y$ below $a_{def}$. This is explained in detail in the next section. We term this requirement Requirement 2.

Lemma 1. Given a default trust value $a_{def} \in [0,1]$, fulfilling Requirement 2 requires the base rate $a^x_y$ to be

$$a^x_y \le \min\left(\frac{a_{def}\,(I+2)}{2},\; 1\right). \tag{7}$$

Further, there exists an upper bound $I_{max}$ on the number of negative interactions I beyond which Requirement 2 is always satisfied: $I_{max} = \frac{2}{a_{def}} - 2$.

Proof. A rigorous proof is given in Appendix D, available in the online supplemental material. □

As in many trust systems the default trust value $a_{def}$ is usually 0.5 to keep neutrality, it is interesting to know how many interactions are needed to drop $P(w^x_y)$ below 0.5. From (7), it follows that I = 2 consecutive negative interactions are sufficient, even if stereotypical prediction raised $P(w^x_y) = a^x_y = f(\vec{A}_y)$ from 0.5 to 1. This certifies that predictions on the 'No Response' component of trust do not affect the trust formation process in initial cases.

5.2.1 Stereotypical Estimation

Based on the above observations, we are ready to discuss the shape of the stereotypical function f. The function takes as input a vector of activities $\vec{A}_y$ that are currently being executed by an agent y and returns a prediction of its susceptibility to participate continually.

The computation of the predicted value is as follows. For each activity $A_i$ in $\vec{A}_y$, the trustor first calculates the probability that an agent conducting activity $A_i$ remains connected for a duration of $\ell$ transactions. Formally, the time of the $\ell$th arrival is distributed as an Erlang-$\ell$ and has mean $\ell/\lambda$. For simplicity, we will use this mean as the input time to compute the probability, denoted by $p_i^{\ell/\lambda}$.

We consider that this value is obtained by asking one of the trusted parties who keep a record of the session durations of the prior agents that performed the activity. Notice that from the uptime session durations of agents, the probability $p_i^{\ell/\lambda}$ can easily be obtained by computing

$$F^c_{A_i}(\ell/\lambda) = \Pr(L > \ell/\lambda \mid A_i),$$

where $F^c_{A_i}$ denotes the empirical complementary uptime distribution for the agents who participated in activity $A_i$ in the past. For the acquisition of uptime session lengths, we assume the existence of a secure monitoring protocol for agent availability like AVMON [26], augmented with proofs of interaction [27]. These signed certificates can be presented to prove engagement in a specific activity.
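As an illustration of how $p_i^{\ell/\lambda}$ could be computed from such records (a sketch under our own assumptions about how the monitored session lengths are stored):

```python
def survival_probability(uptime_sessions, horizon):
    """Empirical complementary uptime CDF: Pr(L > horizon | A_i).

    uptime_sessions -- recorded ontime durations (in hours) of agents that
                       performed activity A_i in the past
    horizon         -- ell / lambda, the mean time needed to complete ell transactions
    """
    if not uptime_sessions:
        return 0.0
    return sum(1 for L in uptime_sessions if L > horizon) / len(uptime_sessions)

# Example: five recorded sessions and a horizon of ell/lambda = 10/10 = 1 hour.
print(survival_probability([0.4, 2.5, 3.0, 1.2, 5.0], horizon=1.0))  # 0.8
```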

Next, the maximum of the probabilities $p_i^{\ell/\lambda}$ is picked and normalized to fit the real interval $[a_{def}, a_{max}]$, where $a_{max}$ is the threshold on the base rate calculated from (7). We chose to use the maximum because it is reasonable to expect that an agent who is involved in several activities at the same time stays connected until completion of the longest activity.


The reason for the normalization is to avoid favoring new agents for which no activity is known, as these agents are assigned the default base rate $a_{def}$. As a result, agents who show participation in classified activities will always be preferred over agents for which no activity is known. This should encourage newcomers to participate in known activities. Putting all the pieces together, f is given by

$$f(\vec{A}_y) = \max_{A_i \in \vec{A}_y}\bigl(p_i^{\ell/\lambda}\bigr)\,(a_{max} - a_{def}) + a_{def}. \tag{8}$$

As an example, consider a P2P file-sharing application and an agent who is performing two activities, $A_1$ and $A_2$. $A_1$ consists of downloading an MP3 file of a famous song, while $A_2$ consists of downloading a large video file. After asking experienced trustors for stereotypical predictions, we get that $p_1^{\ell/\lambda} = 0.01$ while $p_2^{\ell/\lambda} = 0.8$, since activity $A_2$ requires, in general, more time to accomplish. Assuming $a_{max} = 0.75$ and $a_{def} = 0.5$ (totally ignorant opinion), the predicted $P(w^x_y)$ is $f(\vec{A}_y) = p_2^{\ell/\lambda}(0.75 - 0.5) + 0.5 = 0.7$.
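A direct transcription of Eq. (8) reproduces this example (our own sketch; the function name is hypothetical):

```python
def activity_stereotype(probs, a_def=0.5, a_max=0.75):
    """Eq. (8): map the best survival probability among the agent's current
    activities onto a base rate in the interval [a_def, a_max]."""
    if not probs:               # no known activity: keep the default base rate
        return a_def
    return max(probs) * (a_max - a_def) + a_def

# MP3 download (A1) and large video download (A2) from the example above.
print(activity_stereotype([0.01, 0.8]))   # 0.7
```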

Finally, it is worth noting that activity stereotypes are minimally intrusive to trust management. Clearly, stereotyping does not affect the management of the trustworthiness of relationships among agents. The underlying trust system can operate without them. Stereotypes only reshape the trust evaluation of unknown agents to avoid experiencing too many disconnections during trust bootstrapping. Only in the particular situation where stereotypical opinions can be communicated within the society will more specific trust management be needed, as happens with classical stereotypes [8].

6 EVALUATION

6.1 Experimental Setup

We validated our approach with exhaustive simulations; here we only report a subset of the results. It is worth noting that, because we are the first to tackle the problem of disconnection during trust bootstrapping, we found no way of fairly benchmarking activity stereotypes against the existing literature. A clarifying discussion of this issue is given in Appendix G, available in the online supplemental material.

In our experiments, we simulated a system composed of $N_{groups} = 250$ groups. In each group, a trustor wanted to receive service from $G_{size}$ agents from whom no direct or reputational evidence was forthcoming. The goal of the trustor was to produce accurate trust evaluations.

6.1.1 Activity Profiles and Threat Model

Each of the $G_{size}$ agents in each group was assigned an activity profile which specified how long it would stay online and disconnected. Each profile specified two parameters: the parameters of the two exponential distributions from which online and offline durations were drawn. Activity profiles can be composed of none, one, or two activities. The test profiles used in our experiments are reported in Table 2. As reflected in the table, agents who historically carried out activity $A_1$ exhibited ontime durations drawn from CDF $1 - e^{-0.2x}$ and disconnection durations drawn from CDF $1 - e^{-x}$, which corresponds to a mean ontime and offtime of 5 and 1 hours, respectively. Conversely, the agents who historically participated in activity $A_2$ in the past presented a high turnover rate, with an average ontime and offtime of 15 and 30 minutes, respectively. To make the identification of the agents with longer ontimes harder, only 20 percent of the trustees within each group were assigned activity $A_1$. In practice, this meant that the probability of bootstrapping trust with little interruption was at most 20 percent in the absence of stereotypes. Such a mix of activities is enough to approximate real user behavior in P2P applications such as BitTorrent, where a swarm is nothing but a group of unknown users, or in file-sharing systems such as Gnutella and Kazaa [28], to name a few.

Regarding the threat model, while the list of potential attacks is myriad, we use the simplest form of threat that can be identified in any trust system: a group of malicious agents who always give fraudulent service, or service with sufficient degradation to be qualified as a negative interaction. The main reason for using such a simple model was to isolate any effect caused by the trust model from stereotyping, whose aim is not to predict trustworthiness but to minimize disconnection.

Specifically, a fraction $p_B$ of the agents were disposed to act maliciously in each simulation run. By default, we set $p_B$ to 0.5 in order to have half of the agents in activity $A_1$ provide bad service and to evaluate whether Requirement 2 is fulfilled. If so, this implies that our approach is able to react quickly against trustees who were predicted to stay connected but render poor service, that is, when no activity-behavior correlations exist.
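For reference, the activity profiles and threat parameters just described could be encoded as in the following sketch (our own setup code, not the simulator used in the paper):

```python
import random

# Mean ontime/offtime in hours for each activity (cf. Table 2).
ACTIVITY_PROFILES = {
    "A1": {"mean_on": 5.0,  "mean_off": 1.0},   # CDFs 1 - e^(-0.2x) and 1 - e^(-x)
    "A2": {"mean_on": 0.25, "mean_off": 0.5},   # 15-minute ontimes, 30-minute offtimes
}
P_A1 = 0.2   # only 20 percent of the trustees in each group carry out activity A1
P_B = 0.5    # fraction of agents disposed to act maliciously

def sample_agent():
    """Draw one simulated trustee: its activity, churn rates and disposition."""
    activity = "A1" if random.random() < P_A1 else "A2"
    profile = ACTIVITY_PROFILES[activity]
    return {
        "activity": activity,
        "r_on": 1.0 / profile["mean_on"],     # disconnection rate
        "r_off": 1.0 / profile["mean_off"],   # reconnection rate
        "malicious": random.random() < P_B,
    }
```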

6.1.2 Trust Model

Before interacting with any agent in a group, uncertainty about the trustworthiness of each agent is maximal. Hence, the opinion held by a trustor x about each agent y in the group is $w^x_y = \langle 0, 0, 1, a^x_y\rangle$. When stereotypes are applied, the base rate $a^x_y = f(\vec{A}_y)$, which may favorably bias the trust evaluation $P(w^x_y)$. When stereotypical prediction is not possible, $a^x_y$ is simply equal to $a_{def}$. In our experiments, we set $a_{def}$ to the uninformative prior 0.5, which meant that before any interaction took place, both positive and negative outcomes were considered equally likely.

According to Lemma 1, we fixed $a_{max}$ to 0.75 to ensure that a stereotypical misprediction could be corrected if the result of the first interaction with an unknown agent was negative.

By default, the penalty for bad response $v_B$ was set to 1, to equalize it with positive interaction. Observe that in the underlying trust model, the degrees of belief, disbelief and uncertainty that constitute an opinion are calculated by operating directly on the number of observed positive and negative experiences, which implicitly assumes that the magnitude of the reward obtained and the penalty incurred is 1. Values of $v_B > 1$ result in greater punishment for bad response compared with the reward received for positive behavior. Default parameters for the simulations are summarized in Table 3.

TABLE 2. Activity Profile.

Last but not least, we assume that the trustor in each group selects the most trusted agent at every interaction, i.e., the agent with the highest trust value $P(w^x_y)$, which is the most common decision model in the trust literature.

6.2 Results

Here we present the results of our experiments. The main hypothesis to validate is whether trust bootstrapping performs better with stereotypes than without stereotypical information. In this sense, there are two important aspects to evaluate: on the one hand, whether trust bootstrapping times are shorter when stereotypical biases are present, and on the other hand, whether stereotypes reduce the number of unsatisfactory interactions.

Due to space constraints, we report here a small subset of the compiled results to give a sense of the advantages of activity stereotypes. The rest of the results are given in Appendices H-J, available in the online supplemental material: Appendix H examines the effect of the fraction of malicious agents $p_B$; Appendix I studies what happens when the penalty $v_B$ is greater than $v_N$; and Appendix J shows the time evolution of no responses.

6.2.1 Experiment I: General Effectiveness

Fig. 3 illustrates the average bootstrapping time for two values of the disconnection penalty: $v_N = 1$ and $v_N = 3$, which represent a moderate and an aggressive punishment, respectively. Recall that the penalty for malicious service $v_B$ was fixed to 1; therefore, a disconnection penalty three times greater than $v_B$ can be considered aggressive. Besides the obvious conclusion that informed trustee selection based on activity stereotypes performs significantly better, one important observation should be made about this result: the empirical evidence that, contrary to intuition but consistent with our analysis in Section 4, an aggressive punishment can increase the trust bootstrapping time instead of reducing it. As seen in the figure, the average bootstrapping time is longer for $v_N = 3$ than for $v_N = 1$. The reason is that the trustees present a marked tendency to return sooner rather than later in both activities, which favors waiting for a trustee to come back online over switching to a new trustee. Recall that a higher level of punishment means a larger number of switches due to stronger drops in trust values. This is further evidenced in Fig. 4a, where the number of agent switches increases with the intensity of the punishment. More importantly, activity stereotypes reduce the number of agent switches caused by disconnections, which explains the reduction in trust bootstrapping times.

Another interesting indicator is the average number of negative transactions experienced by a trustor during the trust bootstrapping process. The added value of this indicator is that it gives us a clear hint about the effect that disconnection punishment has on service continuity.

Fig. 4 illustrates this effect and gives a clear picture of the existing tradeoff between service continuity and risk. This tradeoff is easy to observe by visual comparison of Fig. 4b with Fig. 4c. Fig. 4b plots the average number of negative transactions for moderate punishment across all groups; Fig. 4c does so for aggressive punishment. By comparing the two, it is easy to observe that although a higher penalty reduces the occurrence of service interruptions, it unfortunately heightens the risk of receiving a wrong response, and vice versa for lower penalties. Notice that while in Fig. 4b the portion of the bars for bad responses is comparatively smaller than in Fig. 4c, the inverse result is obtained for no responses. Either way, activity stereotypes are able to reduce the presence of service interruptions, which makes moderate punishment more attractive because of the smaller risk of bad interaction it entails.

Fig. 3. Mean bootstrapping time E[t_ℓ] as ℓ is varied, with and without stereotypes, for two distinct penalties for 'No Response': vN = 1 (moderate) and vN = 3 (aggressive).

TABLE 3: Simulation Parameters

Fig. 4. Effectiveness as ℓ is varied, with and without stereotypes, and for two distinct penalties for 'No Response': vN = 1 (moderate) and vN = 3 (aggressive). (a) Mean number of switches. (b)-(c) Mean number of negative transactions.

Overall, our results show that activity stereotyping offers a clear improvement in the initial cases where prior direct and reputational evidence is lacking, and we believe it is a line of work to be further explored.

7 CONCLUSIONS

In this work, we have analyzed to what extent punishing disconnection affects trust formation in those initial cases where prior evidence is not available, an issue that has been generally overlooked in the literature. First, we have proven analytically the lack of a universally optimal disconnection penalty and shown its dependence on the connection and disconnection habits of users. Second, we have presented a new mechanism based on activity stereotypes that makes trust bootstrapping quicker and less dependent on the way trust systems punish disconnection.

ACKNOWLEDGMENTS

This research work was partly sponsored by the EC FP7 through project CloudSpaces (FP7-317555). The authors would like to thank the anonymous reviewers for their comments, which helped to improve the article greatly.

REFERENCES

[1] D.P. Anderson, "BOINC: A System for Public-Resource Computing and Storage," Proc. IEEE/ACM Fifth Int'l Workshop Grid Computing (GRID), pp. 4-10, 2004.
[2] W. Wang et al., "Ripple-Stream: Safeguarding P2P Streaming Against DoS Attacks," Proc. IEEE Int'l Conf. Multimedia and Expo (ICME), pp. 1417-1420, 2006.
[3] Z. Liang and W. Shi, "PET: A PErsonalized Trust Model with Reputation and Risk Evaluation for P2P Resource Sharing," Proc. 38th Ann. Hawaii Int'l Conf. System Sciences (HICSS '05), p. 201b, 2005.
[4] N. Fedotova and L. Veltri, "Reputation Management Algorithms for DHT-Based Peer-to-Peer Environment," Computer Comm., vol. 32, no. 12, pp. 1400-1409, 2009.
[5] X. Li et al., "A Multi-Dimensional Trust Evaluation Model for Large-Scale P2P Computing," J. Parallel and Distributed Computing, vol. 71, no. 6, pp. 837-847, 2011.
[6] D. Stutzbach and R. Rejaie, "Understanding Churn in Peer-to-Peer Networks," Proc. Sixth ACM SIGCOMM Conf. Internet Measurement (IMC), pp. 189-202, 2006.
[7] X. Liu, A. Datta, K. Rzadca, and E.-P. Lim, "StereoTrust: A Group Based Personalized Trust Model," Proc. 18th ACM Conf. Information and Knowledge Management (CIKM), pp. 7-16, 2009.
[8] C. Burnett, T.J. Norman, and K. Sycara, "Bootstrapping Trust Evaluations through Stereotypes," Proc. Ninth Int'l Conf. Autonomous Agents and Multiagent Systems (AAMAS), pp. 241-248, 2010.
[9] Z. Liang and W. Shi, "Analysis of Ratings on Trust Inference in Open Environments," Performance Evaluation, vol. 65, no. 2, pp. 99-128, 2008.
[10] M. Sánchez-Artigas, P. García-López, and B. Herrera, "Exploring the Feasibility of Reputation Systems under Churn," IEEE Comm. Letters, vol. 13, no. 9, pp. 558-560, July 2009.
[11] M. Sánchez-Artigas, "Churn Delays Uncertainty Decay in P2P Reputation Systems," IEEE Internet Computing, vol. 14, no. 5, pp. 23-30, Sept./Oct. 2010.
[12] M. Sánchez-Artigas and B. Herrera, "Understanding the Effects of P2P Dynamics on Trust Bootstrapping," Information Sciences, vol. 236, pp. 33-55, 2013.
[13] R. Hermoso, H. Billhardt, and S. Ossowski, "Role Evolution in Open Multi-Agent Systems as an Information Source for Trust," Proc. Ninth Int'l Conf. Autonomous Agents and Multiagent Systems (AAMAS), pp. 217-224, 2010.
[14] R. Conte and M. Paolucci, Reputation in Artificial Societies: Social Beliefs for Social Order. Kluwer Academic Publishers, 2002.
[15] C. Burnett, T.J. Norman, and K. Sycara, "Sources of Stereotypical Trust in Multi-Agent Systems," Proc. Int'l Conf. Trust, Security and Privacy in Computing and Comm. (TrustCom), pp. 25-39, 2011.
[16] M. Sensoy, B. Yilmaz, and T.J. Norman, "Discovering Frequent Patterns to Bootstrap Trust," Proc. Eighth Int'l Workshop Agents and Data Mining Interaction (ADMI), pp. 93-104, 2012.
[17] A. Jøsang, R. Hayward, and S. Pope, "Trust Network Analysis with Subjective Logic," Proc. 29th Australasian Computer Science Conf. (ACSC), pp. 85-94, 2006.
[18] Z. Yao et al., "Modeling Heterogeneous User Churn and Local Resilience of Unstructured P2P Networks," Proc. IEEE Int'l Conf. Network Protocols (ICNP), pp. 32-41, 2006.
[19] A. Datta and K. Aberer, "Internet-Scale Storage Systems under Churn – A Study of the Steady-State Using Markov Models," Proc. IEEE Sixth Int'l Conf. Peer-to-Peer Computing (P2P), pp. 133-144, 2006.
[20] R. Kumar, Y. Liu, and K. Ross, "Stochastic Fluid Theory for P2P Streaming Systems," Proc. IEEE INFOCOM, pp. 919-927, 2007.
[21] Z. Yang et al., "Exploring Peer Heterogeneity: Towards Understanding and Application," Proc. IEEE Int'l Conf. Peer-to-Peer Computing (P2P), pp. 20-29, 2011.
[22] J. Mundinger and J.-Y.L. Boudec, "Analysis of a Reputation System for Mobile Ad-Hoc Networks with Liars," Performance Evaluation, vol. 65, no. 3-4, pp. 212-226, 2008.
[23] C.M. Jonker and J. Treur, "Formal Analysis of Models for the Dynamics of Trust Based on Experiences," Proc. Ninth European Workshop Modelling Autonomous Agents in a Multi-Agent World (MAAMAW), pp. 221-231, 1999.
[24] G. Song et al., "Replica Placement Algorithm for Highly Available Peer-to-Peer Storage Systems," Proc. First Int'l Conf. Advances in P2P Systems (AP2PS), pp. 160-167, 2009.
[25] S. Marti and H. Garcia-Molina, "Taxonomy of Trust: Categorizing P2P Reputation Systems," Computer Networks, vol. 50, no. 4, pp. 472-484, 2006.
[26] R. Morales and I. Gupta, "AVMON: Optimal and Scalable Discovery of Consistent Availability Monitoring Overlays for Distributed Systems," IEEE Trans. Parallel and Distributed Systems, vol. 20, no. 4, pp. 446-459, Apr. 2009.
[27] A. Singh and L. Liu, "TrustMe: Anonymous Management of Trust Relationships in Decentralized P2P Systems," Proc. Third Int'l Conf. Peer-to-Peer Computing (P2P), pp. 142-149, 2003.
[28] K.P. Gummadi et al., "Measurement, Modeling, and Analysis of a Peer-to-Peer File-Sharing Workload," ACM SIGOPS Operating Systems Rev., vol. 37, pp. 314-329, 2003.


Marc Sánchez-Artigas received the PhD degree in 2009 from the Universitat Pompeu Fabra, Barcelona, Spain. During his PhD, he worked at École Polytechnique Fédérale de Lausanne (EPFL) under the supervision of professor Karl Aberer. In 2009, he joined the Universitat Rovira i Virgili as an assistant professor. He received the Best Paper Award from the 32nd IEEE Conference on Local Computer Networks (LCN) held in Dublin, Ireland, in 2007. He also served as a guest editor for the Elsevier Computer Networks journal. He organized the 12th edition of the IEEE P2P 2012 conference, which was held in Tarragona, Spain. He is actively involved in the coordination of the European FP7 project CloudSpaces on personal Clouds.

Blas Herrera received the PhD degree in differential geometry from the Universitat Autònoma de Barcelona, Spain, in 1994. He is currently an active researcher in mechanics, geometry and computer engineering.

" For more information on this or any other computing topic,please visit our Digital Library at www.computer.org/publications/dlib.

12 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 26, NO. 1, JANUARY 2015