
Multiparty Access Control for Online Social Networks: Model and Mechanisms

Hongxin Hu, Gail-Joon Ahn, Senior Member, IEEE, and Jan Jorgensen

Abstract—Online social networks (OSNs) have experienced tremendous growth in recent years and become a de facto portal for hundreds of millions of Internet users. These OSNs offer attractive means for digital social interactions and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. To this end, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. In addition, we present a logical representation of our access control model which allows us to leverage the features of existing logic solvers to perform various analysis tasks on our model. We also discuss a proof-of-concept prototype of our approach as part of an application in Facebook and provide a usability study and system evaluation of our method.

Index Terms—Social network, multiparty access control, security model, policy specification and management.


1 INTRODUCTION

Online social networks (OSNs) such as Facebook, Google+, and Twitter are inherently designed to enable people to share personal and public information and make social connections with friends, coworkers, colleagues, family and even with strangers. In recent years, we have seen unprecedented growth in the application of OSNs. For example, Facebook, one of the representative social network sites, claims that it has more than 800 million active users and over 30 billion pieces of content (web links, news stories, blog posts, notes, photo albums, etc.) shared each month [3]. To protect user data, access control has become a central feature of OSNs [2], [4].

A typical OSN provides each user with a virtual space containing profile information, a list of the user's friends, and web pages, such as the wall in Facebook, where users and friends can post content and leave messages. A user profile usually includes information with respect to the user's birthday, gender, interests, education and work history, and contact information. In addition, users can not only upload content into their own or others' spaces but also tag other users who appear in the content. Each tag is an explicit reference that links to a user's space. For the protection of user data, current OSNs indirectly require users to be system and policy administrators for regulating their data, where users can restrict data sharing to a specific set of trusted users. OSNs often use user relationship and group membership to distinguish between trusted and untrusted users. For example, in Facebook, users can allow friends, friends of friends, groups or the public to access their data, depending on their personal authorization and privacy requirements.

• H. Hu, G-J. Ahn and J. Jorgensen are with the Security Engineering for Future Computing (SEFCOM) Laboratory, Arizona State University, Tempe, AZ 85287, USA. E-mail: {hxhu,gahn,jan.jorgensen}@asu.edu.

Although OSNs currently provide simple access control mechanisms allowing users to govern access to information contained in their own spaces, users, unfortunately, have no control over data residing outside their spaces. For instance, if a user posts a comment in a friend's space, s/he cannot specify which users can view the comment. In another case, when a user uploads a photo and tags friends who appear in the photo, the tagged friends cannot restrict who can see this photo, even though the tagged friends may have different privacy concerns about the photo. To address such a critical issue, preliminary protection mechanisms have been offered by existing OSNs. For example, Facebook allows tagged users to remove the tags linked to their profiles or report violations asking Facebook managers to remove the contents that they do not want to share with the public. However, these simple protection mechanisms suffer from several limitations. On one hand, removing a tag from a photo can only prevent other members from seeing a user's profile by means of the association link, but the user's image is still contained in the photo. Since original access control policies cannot be changed, the user's image continues to be revealed to all authorized users. On the other hand, reporting to OSNs only allows us to either keep or delete the content. Such a binary decision from OSN managers is either too loose or too restrictive, relying on the OSN's administration and requiring several people to report their request on the same content. Hence, it is essential to develop an effective and flexible access control mechanism for OSNs, accommodating the special authorization requirements coming from multiple associated users for managing the shared data collaboratively.

In this paper, we pursue a systematic solution to facilitate collaborative management of shared data in OSNs. We begin by examining how the lack of multiparty access control for data sharing in OSNs can undermine the protection of user data. Some typical data sharing patterns with respect to multiparty authorization in OSNs are also identified. Based on these sharing patterns, a multiparty access control (MPAC) model is formulated to capture the core features of multiparty authorization requirements which have not been accommodated so far by existing access control systems and models for OSNs (e.g., [10], [11], [17], [18], [23]). Our model also contains a multiparty policy specification scheme. Meanwhile, since conflicts are inevitable in multiparty authorization enforcement, a voting mechanism is further provided to deal with authorization and privacy conflicts in our model.

(a) A disseminator shares other's profile (b) A user shares her/his relationships

Fig. 1. Multiparty Access Control Pattern for Profile and Relationship Sharing.

Another compelling feature of our solution is the support of analysis on the multiparty access control model and systems. The correctness of an implementation of an access control model is based on the premise that the access control model is valid. Moreover, while the use of a multiparty access control mechanism can greatly enhance the flexibility for regulating data sharing in OSNs, it may potentially reduce the certainty of system authorization consequences because authorization and privacy conflicts need to be resolved elegantly. Assessing the implications of access control mechanisms traditionally relies on the security analysis technique, which has been applied in several domains (e.g., operating systems [20], trust management [25], and role-based access control [26]). In our approach, we additionally introduce a method to represent and reason about our model in a logic program. In addition, we provide a prototype implementation of our authorization mechanism in the context of Facebook. Our experimental results demonstrate the feasibility and usability of our approach.

The rest of the paper is organized as follows. In Section 2, we present multiparty authorization requirements and access control patterns for OSNs. We articulate our proposed MPAC model, including multiparty authorization specification and multiparty policy evaluation, in Section 3. Section 4 addresses the logical representation and analysis of multiparty access control. The details about the prototype implementation and experimental results are described in Section 5. Section 6 discusses how to tackle collusion attacks, followed by the related work in Section 7. Section 8 concludes this paper and discusses our future directions.

2 MULTIPARTY ACCESS CONTROL FOR OSNS: REQUIREMENTS AND PATTERNS

In this section we proceed with a comprehensive requirement analysis of multiparty access control in OSNs.

Meanwhile, we discuss several typical sharing patterns occurring in OSNs where multiple users may have different authorization requirements for a single resource. We specifically analyze three scenarios—profile sharing, relationship sharing and content sharing—to understand the risks posed by the lack of collaborative control in OSNs. We leverage Facebook as the running example in our discussion since it is currently the most popular and representative social network provider. In the meantime, we reiterate that our discussion could be easily extended to other existing social network platforms, such as Google+ [6].

Profile sharing: An appealing feature of some OSNs is to support social applications written by third-party developers to create additional functionalities built on top of users' profiles [1], [5]. To provide meaningful and attractive services, these social applications consume user profile attributes, such as name, birthday, activities, interests, and so on. To make matters more complicated, social applications on current OSN platforms can also consume the profile attributes of a user's friends. In this case, users can select particular pieces of profile attributes they are willing to share with the applications when their friends use the applications. At the same time, the users who are using the applications may also want to control what information of their friends is available to the applications, since it is possible for the applications to infer their private profile attributes through their friends' profile attributes [30], [38]. This means that when an application accesses the profile attributes of a user's friend, both the user and her friend want to gain control over the profile attributes. If we consider the application as an accessor, the user as a disseminator and the user's friend as the owner of the shared profile attributes in this scenario, Figure 1(a) demonstrates a profile sharing pattern where a disseminator can share others' profile attributes with an accessor. Both the owner and the disseminator can specify access control policies to restrict the sharing of profile attributes.

Relationship sharing: Another feature of OSNs is that users can share their relationships with other members. Relationships are inherently bidirectional and carry potentially sensitive information that associated users may not want to disclose. Most OSNs provide mechanisms by which users can regulate the display of their friend lists. A user, however, can only control one direction of a relationship. Let us consider, for example, a scenario where a user Alice specifies a policy to hide her friend list from the public. However, Bob, one of Alice's friends, specifies a weaker policy that permits his friend list to be visible to anyone. In this case, if OSNs solely enforce one party's policy, the relationship between Alice and Bob can still be learned through Bob's friend list. Figure 1(b) shows a relationship sharing pattern where a user called the owner, who has a relationship with another user called the stakeholder, shares the relationship with an accessor. In this scenario, authorization requirements from both the owner and the stakeholder should be considered. Otherwise, the stakeholder's privacy concern may be violated.

(a) A shared content has multiple stakeholders (b) A shared content is published by a contributor

(c) A disseminator shares other's content published by a contributor.

Fig. 2. Multiparty Access Control Pattern for Content Sharing.

Content sharing: OSNs provide built-in mechanisms enabling users to communicate and share contents with other members. OSN users can post statuses and notes, upload photos and videos in their own spaces, tag others in their contents, and share the contents with their friends. On the other hand, users can also post contents in their friends' spaces. The shared contents may be connected with multiple users. Consider an example where a photograph contains three users, Alice, Bob and Carol. If Alice uploads it to her own space and tags both Bob and Carol in the photo, we call Alice the owner of the photo, and Bob and Carol stakeholders of the photo. All of them may specify access control policies to control who can see this photo. Figure 2(a) depicts a content sharing pattern where the owner of a content shares the content with other OSN members, and the content has multiple stakeholders who may also want to be involved in the control of content sharing. In another case, when Alice posts a note stating "I will attend a party on Friday night with @Carol" to Bob's space, we call Alice the contributor of the note, and she may want to control her note. In addition, since Carol is explicitly identified by the @-mention (at-mention) in this note, she is considered a stakeholder of the note and may also want to control the exposure of this note. Figure 2(b) shows a content sharing pattern reflecting this scenario where a contributor publishes a content to another's space and the content may also have multiple stakeholders (e.g., tagged users). All associated users should be allowed to define access control policies for the shared content.

OSNs also enable users to share others' contents. For example, when Alice views a photo in Bob's space and decides to share this photo with her friends, the photo will be in turn posted in her space and she can specify an access control policy to authorize her friends to see this photo. In this case, Alice is a disseminator of the photo. Since Alice may adopt a weaker control, say, making the photo visible to everyone, the initial access control requirements of this photo should be complied with, preventing the possible leakage of sensitive information via the procedure of data dissemination. Figure 2(c) shows a content sharing pattern where the sharing starts with an originator (the owner or a contributor who uploads the content) publishing the content, and then a disseminator views and shares the content. All access control policies defined by associated users should be enforced to regulate access to the content in the disseminator's space. For a more complicated case, the disseminated content may be further re-disseminated by the disseminator's friends, where effective access control mechanisms should be applied in each procedure to regulate sharing behaviors. In particular, regardless of how many steps the content has been re-disseminated, the original access control policies should always be enforced to protect further dissemination of the content.

3 MULTIPARTY ACCESS CONTROL MODEL FOR OSNS

In this section, we formalize a MultiParty Access Control (MPAC) model for OSNs (Section 3.1), as well as a policy scheme (Section 3.2) and a policy evaluation mechanism (Section 3.3) for the specification and enforcement of MPAC policies in OSNs.

3.1 MPAC Model

An OSN can be represented by a relationship network, a set of user groups and a collection of user data. The relationship network of an OSN is a directed labeled graph, where each node denotes a user and each edge represents a relationship between two users. The label associated with each edge indicates the type of the relationship. Edge direction denotes that the initial node of an edge establishes the relationship and the terminal node of the edge accepts the relationship. The number and type of supported relationships rely on the specific OSN and its purposes. Besides, OSNs include an important feature that allows users to be organized in groups [38], [37] (also called circles in Google+ [6]), where each group has a unique name. This feature enables users of an OSN to easily find other users with whom they might share specific interests (e.g., same hobbies), demographic groups (e.g., studying at the same schools), political orientation, and so on. Users can join groups without any approval from other group members. Furthermore, OSNs provide each member a Web space where users can store and manage their personal data including profile information, friend list and content.
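As a quick illustration, such a relationship network can be modeled as a set of labeled, directed edges. The sketch below is our own (the function name `labels_between` and the sample edges are hypothetical, not part of the paper); it shows how direction and labels are kept, so two users may be connected by multiple edges of different types:

```python
# Relationship network as a directed labeled graph: each edge is an
# (initiator, accepter, relationship-type) triple. The initiator is the
# user who established the relationship; the accepter accepted it.
# Sample edges are illustrative only.
edges = {
    ("Alice", "Bob", "colleagueOf"),   # Alice -> Bob, labeled colleagueOf
    ("Bob", "Alice", "friendOf"),      # Bob -> Alice carries a different label
    ("Bob", "Carol", "friendOf"),
}

def labels_between(u, v, edge_set):
    """All relationship types on edges initiated by u toward v."""
    return {rt for (a, b, rt) in edge_set if a == u and b == v}
```

Note that the labels from Alice toward Bob and from Bob toward Alice can differ, matching the uni-directional relations of the model.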

Recently, several access control schemes (e.g., [10], [11], [17], [18]) have been proposed to support fine-grained authorization specifications for OSNs. Unfortunately, these schemes can only allow a single controller, the resource owner, to specify access control policies. Indeed, a flexible access control mechanism in a multi-user environment like OSNs should allow multiple controllers, who are associated with the shared data, to specify access control policies. As we identified previously in the sharing patterns (Section 2), in addition to the owner of data, other controllers, including the contributor, stakeholder and disseminator of data, need to regulate the access of the shared data as well. We define these controllers as follows:

Definition 1: (Owner). Let d be a data item in the space of a user u in the social network. The user u is called the owner of d.

Definition 2: (Contributor). Let d be a data item published by a user u in someone else's space in the social network. The user u is called the contributor of d.

Definition 3: (Stakeholder). Let d be a data item in the space of a user in the social network. Let T be the set of tagged users associated with d. A user u is called a stakeholder of d, if u ∈ T.

Definition 4: (Disseminator). Let d be a data item shared by a user u from someone else's space to his/her space in the social network. The user u is called a disseminator of d.
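To make Definitions 1-4 concrete, here is a small sketch of our own (the dictionary fields `space`, `publisher`, `tagged` and `shared_by` are hypothetical names we introduce for illustration, not part of the paper's formalism) that classifies a user's controller types for a data item:

```python
def controller_types(user, d):
    """Return the set of controller types `user` holds over data item d.

    d is assumed (our convention) to be a dict with keys: 'space' (whose
    space the item sits in), 'publisher' (who posted it), 'tagged' (set of
    tagged users), and 'shared_by' (set of users who re-shared it).
    """
    types = set()
    if d["space"] == user:
        types.add("OW")                       # owner: item is in user's space
    if d["publisher"] == user and d["space"] != user:
        types.add("CB")                       # contributor: posted in someone else's space
    if user in d["tagged"]:
        types.add("ST")                       # stakeholder: tagged in the item
    if user in d["shared_by"]:
        types.add("DS")                       # disseminator: re-shared the item
    return types

# Example: Alice uploads a photo to her own space and tags Bob and Carol.
photo = {"space": "Alice", "publisher": "Alice",
         "tagged": {"Bob", "Carol"}, "shared_by": set()}
```

Here Alice is the owner, while Bob and Carol are stakeholders, mirroring the content sharing scenario of Section 2.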

We now formally define our MPAC model as follows:

• U = {u1, . . . , un} is a set of users of the OSN. Each user has a unique identifier;
• G = {g1, . . . , gn} is a set of groups to which the users can belong. Each group also has a unique identifier;
• P = {p1, . . . , pn} is a collection of user profile sets, where pi = {qi1, . . . , qim} is the profile of a user i ∈ U. Each profile entry is an <attribute: profile-value> pair, qij = <attrj : pvaluej>, where attrj is an attribute identifier and pvaluej is the attribute value;
• RT is a set of relationship types supported by the OSN. Each user in an OSN may be connected with others by relationships of different types;
• R = {r1, . . . , rn} is a collection of user relationship sets, where ri = {si1, . . . , sim} is the relationship list of a user i ∈ U. Each relationship entry is a <user: relationship-type> pair, sij = <uj : rtj>, where uj ∈ U, rtj ∈ RT;
• C = {c1, . . . , cn} is a collection of user content sets, where ci = {ei1, . . . , eim} is a set of contents of a user i ∈ U, where eij is a content identifier;
• D = {d1, . . . , dn} is a collection of data sets, where di = pi ∪ ri ∪ ci is the set of data of a user i ∈ U;
• CT = {OW, CB, ST, DS} is a set of controller types, indicating ownerOf, contributorOf, stakeholderOf, and disseminatorOf, respectively;
• UU = {UUrt1, . . . , UUrtn} is a collection of uni-directional binary user-to-user relations, where UUrti ⊆ U × U specifies the pairs of users having relationship type rti ∈ RT;
• UG ⊆ U × G is a set of binary user-to-group membership relations;
• UD = {UDct1, . . . , UDctn} is a collection of binary user-to-data relations, where UDcti ⊆ U × D specifies the set of <user, data> pairs having controller type cti ∈ CT;

• relation_members : U × RT → 2^U, a function mapping each user u ∈ U to a set of users with whom s/he has a relationship rt ∈ RT:
  relation_members(u : U, rt : RT) = {u′ ∈ U | (u, u′) ∈ UUrt};
• ROR_members : U × RT → 2^U, a function mapping each user u ∈ U to a set of users with whom s/he has a transitive relation of a relationship rt ∈ RT, denoted as relationships-of-relationships (ROR). For example, if a relationship is friend, then its transitive relation is friends-of-friends (FOF):
  ROR_members(u : U, rt : RT) = {u′ ∈ U | u′ ∈ relation_members(u, rt) ∨ (∃u′′ ∈ U)[u′′ ∈ ROR_members(u, rt) ∧ u′ ∈ ROR_members(u′′, rt)]};
• controllers : D × CT → 2^U, a function mapping each data item d ∈ D to a set of users who are the controllers with the controller type ct ∈ CT:
  controllers(d : D, ct : CT) = {u ∈ U | (u, d) ∈ UDct}; and
• group_members : G → 2^U, a function mapping each group g ∈ G to a set of users who belong to the group:
  group_members(g : G) = {u ∈ U | (u, g) ∈ UG};
  groups(u : U) = {g ∈ G | (u, g) ∈ UG};
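The relations and functions above can be sketched in Python as follows. This is a minimal rendering under our own data-layout assumptions (dictionaries keyed by relationship and controller types; the sample tuples loosely follow the Figure 3 example but are our own choice), and ROR_members is computed as an iterative transitive closure rather than by the recursive definition:

```python
# Sample model data (illustrative, not the paper's exact Figure 3 tuples):
# UU maps a relationship type to its set of (u, u') pairs, UG holds
# (user, group) membership pairs, UD maps a controller type to (user, data).
UU = {"friendOf": {("A", "B"), ("B", "C")}}
UG = {("B", "Hiking"), ("D", "Hiking")}
UD = {"OW": {("A", "RelationshipA")}, "ST": {("C", "RelationshipA")}}

def relation_members(u, rt):
    return {v for (a, v) in UU.get(rt, set()) if a == u}

def ROR_members(u, rt):
    """Transitive closure of rt starting from u (e.g., friends-of-friends),
    computed iteratively to avoid unbounded recursion."""
    reached, frontier = set(), {u}
    while frontier:
        nxt = set().union(*(relation_members(x, rt) for x in frontier)) - reached
        reached |= nxt
        frontier = nxt
    return reached

def controllers(d, ct):
    return {u for (u, dd) in UD.get(ct, set()) if dd == d}

def group_members(g):
    return {u for (u, gg) in UG if gg == g}
```

With the sample data, ROR_members("A", "friendOf") reaches both A's direct friend and the friend-of-friend, as the FOF example in the definition describes.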

Fig. 3. An Example of Multiparty Social Network Representation.

Figure 3 depicts an example of multiparty social network representation. It describes relationships of five individuals, Alice (A), Bob (B), Carol (C), Dave (D) and Edward (E), along with their relations with data and their groups of interest. Note that two users may be directly connected by more than one edge labeled with different relationship types in the relationship network. For example, in Figure 3, Alice (A) has a direct relationship of type colleagueOf with Bob (B), whereas Bob (B) has a relationship of friendOf with Alice (A). In addition, two users may have a transitive relationship, such as friends-of-friends (FOF), colleagues-of-colleagues (COC) and classmates-of-classmates (LOL) in this example. Moreover, this example shows that some data items have multiple controllers. For instance, RelationshipA has two controllers: the owner, Alice (A), and a stakeholder, Carol (C). Also, some users may be the controllers of multiple data items. For example, Carol (C) is a stakeholder of RelationshipA as well as the contributor of ContentE. Furthermore, we can notice there are two groups in this example that users can participate in: the "Fashion" group and the "Hiking" group, and some users, such as Bob (B) and Dave (D), may join multiple groups.

3.2 MPAC Policy Specification

To enable collaborative authorization management of data sharing in OSNs, it is essential for multiparty access control policies to be in place to regulate access over shared data, representing authorization requirements from multiple associated users. Our policy specification scheme is built upon the proposed MPAC model.

Accessor Specification: Accessors are a set of users who are granted access to the shared data. Accessors can be represented with a set of user names, a set of relationship names or a set of group names in OSNs. We formally define the accessor specification as follows:

Definition 5: (Accessor Specification). Let ac ∈ U ∪ RT ∪ G be a user u ∈ U, a relationship type rt ∈ RT, or a group g ∈ G. Let at ∈ {UN, RN, GN} be the type of the accessor specification (user name, relationship type, and group name, respectively). The accessor specification is defined as a set, accessors = {a1, . . . , an}, where each element is a tuple <ac, at>.

Data Specification: In OSNs, user data is composed of three types of information: user profile, user relationship and user content. To facilitate effective privacy conflict resolution for multiparty access control, we introduce sensitivity levels for data specifications, which are assigned by the controllers to the shared data items. A user's judgment of the sensitivity level of the data is not binary (private/public), but multi-dimensional with varying degrees of sensitivity. Formally, the data specification is defined as follows:

Definition 6: (Data Specification). Let dt ∈ D be a data item. Let sl be a sensitivity level, which is a rational number in the range [0,1], assigned to dt. The data specification is defined as a tuple <dt, sl>.

Access Control Policy: To summarize the above-mentioned policy elements, we introduce the definition of a multiparty access control policy as follows:

Definition 7: (MPAC Policy). A multiparty access control policy is a 5-tuple P = <controller, ctype, accessor, data, effect>, where

• controller ∈ U is a user who can regulate the access of data;
• ctype ∈ CT is the type of the controller;
• accessor is a set of users to whom the authorization is granted, represented with an accessor specification defined in Definition 5;
• data is represented with a data specification defined in Definition 6; and
• effect ∈ {permit, deny} is the authorization effect of the policy.

Suppose a controller can leverage five sensitivity levels: 0.00 (none), 0.25 (low), 0.50 (medium), 0.75 (high), and 1.00 (highest) for the shared data. We illustrate several examples of MPAC policies for OSNs as follows:

(1) "Alice authorizes her friends to view her status identified by status01 with a medium sensitivity level, where Alice is the owner of the status."
(2) "Bob authorizes users who are his colleagues or in the hiking group to view a photo, summer.jpg, that he is tagged in with a high sensitivity level, where Bob is a stakeholder of the photo."
(3) "Carol disallows Dave and Edward to watch a video, play.avi, that she uploads to someone else's space with a highest sensitivity level, where Carol is the contributor of the video."

These policies are expressed as:

(1) p1 = (Alice, OW, {<friendOf, RN>}, <status01, 0.50>, permit)
(2) p2 = (Bob, ST, {<colleagueOf, RN>, <hiking, GN>}, <summer.jpg, 0.75>, permit)
(3) p3 = (Carol, CB, {<Dave, UN>, <Edward, UN>}, <play.avi, 1.00>, deny)
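For illustration, the three policies above can be transcribed directly as 5-tuples in code. A Python rendering of our own (each <ac, at> accessor pair and the <dt, sl> data pair become ordinary tuples) might look like:

```python
# MPAC policies as <controller, ctype, accessor, data, effect> 5-tuples.
# The accessor element is a set of (value, type) pairs with type in
# {"UN", "RN", "GN"}; the data element is a (data-item, sensitivity) pair.
p1 = ("Alice", "OW", {("friendOf", "RN")},
      ("status01", 0.50), "permit")
p2 = ("Bob", "ST", {("colleagueOf", "RN"), ("hiking", "GN")},
      ("summer.jpg", 0.75), "permit")
p3 = ("Carol", "CB", {("Dave", "UN"), ("Edward", "UN")},
      ("play.avi", 1.00), "deny")
```

The typed accessor pairs make it mechanical to resolve a policy to a concrete user set: UN entries name users directly, while RN and GN entries are expanded through relation_members and group_members, respectively.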

3.3 Multiparty Policy Evaluation

Two steps are performed to evaluate an access request overmultiparty access control policies. The first step checks theaccess request against the policy specified by each con-troller and yields a decision for the controller. The accessorelement in a policy decides whether the policy is applicableto a request. If the user who sends the request belongs tothe user set derived from the accessor of a policy, the policy


Fig. 4. Multiparty Policy Evaluation Process.

is applicable and the evaluation process returns a response with the decision (either permit or deny) indicated by the effect element in the policy. Otherwise, the response yields a deny decision because the policy is not applicable to the request. In the second step, decisions from all controllers responding to the access request are aggregated to make a final decision for the access request. Figure 4 illustrates the evaluation process of multiparty access control policies. Since data controllers may generate different decisions (permit and deny) for an access request, conflicts may occur. In order to make an unambiguous decision for each access request, it is essential to adopt a systematic conflict resolution mechanism to resolve those conflicts during multiparty policy evaluation.
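The first evaluation step can be sketched as follows; `derive_users` is an assumed helper standing in for the OSN's relationship and group data, not a function defined by the paper.

```python
def evaluate_policy(policy, requester, derive_users):
    """First evaluation step: derive one controller's decision for a request.

    `derive_users(controller, accessor)` resolves an accessor specification
    such as ("friendOf", "RN") into a concrete user set for the controller.
    """
    controller, ctype, accessors, data, effect = policy
    # The policy is applicable iff the requester belongs to the user set
    # derived from at least one accessor specification.
    applicable = any(requester in derive_users(controller, ac)
                     for ac in accessors)
    # A non-applicable policy yields a deny decision for this controller.
    return effect if applicable else "deny"

# Toy resolver: Alice's friends are Bob and Carol.
friends = {"Alice": {"Bob", "Carol"}}

def derive_users(controller, accessor):
    name, ac_type = accessor
    if ac_type == "RN" and name == "friendOf":
        return friends.get(controller, set())
    return set()
```

With Alice's example policy, a request from Bob evaluates to permit, while a request from a non-friend evaluates to deny; the second step then aggregates such per-controller decisions.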

The essential reason leading to the conflicts, especially privacy conflicts, is that multiple controllers of the shared data item often have different privacy concerns over the data item. For example, assume that Alice and Bob are two controllers of a photo. Both of them define their own access control policy stating that only her/his friends can view this photo. Since it is almost impossible that Alice and Bob have the same set of friends, privacy conflicts may always exist when considering multiparty control over the shared data item.

A naive solution for resolving multiparty privacy conflicts is to only allow the common users of the accessor sets defined by the multiple controllers to access the data item. Unfortunately, this strategy is too restrictive in many cases and may not produce desirable results. Consider an example in which four users, Alice, Bob, Carol and Dave, are the controllers of a photo, and each of them allows her/his friends to see the photo. Suppose that Alice, Bob and Carol are close friends and have many common friends, but Dave has no common friends with them and also has a fairly weak privacy concern about the photo. In this case, adopting the naive solution for conflict resolution may mean that no one can access this photo, even though it is reasonable to give the view permission to the common friends of Alice, Bob and Carol. A strong conflict resolution strategy may provide better privacy protection, but at the same time it may reduce the social value of data sharing in OSNs. Therefore, it is important to consider the tradeoff between privacy and utility when resolving privacy conflicts. To address this issue, we introduce a simple but flexible voting scheme for resolving multiparty privacy conflicts in OSNs.

3.3.1 A voting scheme for decision making of multiparty control
Majority voting is a popular mechanism for decision making [24]. Inspired by such a decision making mechanism, we propose a voting scheme to achieve effective multiparty conflict resolution for OSNs. A notable feature of the voting mechanism for conflict resolution is that the decision from each controller is able to affect the final decision. Our voting scheme contains two voting mechanisms: decision voting and sensitivity voting.

Decision Voting: A decision voting value (DV) derived from the policy evaluation is defined as follows, where Evaluation(p) returns the decision of a policy p:

DV = \begin{cases} 0 & \text{if } Evaluation(p) = \text{Deny} \\ 1 & \text{if } Evaluation(p) = \text{Permit} \end{cases} \qquad (1)

Assume that all controllers are equally important. An aggregated decision value (DVag) (with a range of 0.00 to 1.00) from multiple controllers, including the owner (DVow), the contributor (DVcb) and the stakeholders (DVst), is computed with the following equation:

DV_{ag} = \left( DV_{ow} + DV_{cb} + \sum_{i \in SS} DV_{st}^{i} \right) \times \frac{1}{m} \qquad (2)

where SS is the stakeholder set of the shared data item, and m is the number of controllers of the shared data item.

Each controller of the shared data item may have (i) a different trust level with respect to the data owner and (ii) a different reputation value in terms of collaborative control. Thus, a generalized decision voting scheme needs to introduce weights on different controllers, which can be calculated by aggregating trust levels and reputation values [19]. Different weights of controllers essentially represent different importance degrees in the aggregated decision. In general, the importance degree of controller x is its weight divided by the sum of all weights. Suppose that \omega_{ow}, \omega_{cb} and \omega_{st}^{i} are the weight values for the owner, the contributor and the stakeholders, respectively, and n is the number of stakeholders of the shared data item. A weighted decision voting scheme is then:

DV_{ag} = \left( \omega_{ow} \times DV_{ow} + \omega_{cb} \times DV_{cb} + \sum_{i=1}^{n} \omega_{st}^{i} \times DV_{st}^{i} \right) \times \frac{1}{\omega_{ow} + \omega_{cb} + \sum_{i=1}^{n} \omega_{st}^{i}} \qquad (3)
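The weighted voting of equation (3) can be sketched in a few lines of Python; the function name and argument layout are our own, and with all weights set to 1 it reduces to the unweighted aggregation of equation (2).

```python
def aggregate_decision(dv_ow, dv_cb, dv_st, w_ow=1.0, w_cb=1.0, w_st=None):
    """Weighted decision voting of equation (3); with all weights equal to 1
    it reduces to the unweighted average of equation (2), where
    m = 2 + len(dv_st) is the number of controllers.

    dv_st: decision values of the n stakeholders (0 = deny, 1 = permit).
    """
    if w_st is None:
        w_st = [1.0] * len(dv_st)  # equal importance by default
    weighted_sum = (w_ow * dv_ow + w_cb * dv_cb
                    + sum(w * dv for w, dv in zip(w_st, dv_st)))
    total_weight = w_ow + w_cb + sum(w_st)
    return weighted_sum / total_weight
```

For example, an owner and contributor voting permit with one of two stakeholders dissenting yields DVag = 3/4 under equal weights; zeroing all weights except the owner's makes DVag track the owner's decision alone.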

Sensitivity Voting: Each controller assigns a sensitivity level (SL) to the shared data item to reflect her/his privacy concern. A sensitivity score (SC) (in the range from 0.00 to 1.00) for the data item can be calculated with the following equation:

SC = \left( SL_{ow} + SL_{cb} + \sum_{i \in SS} SL_{st}^{i} \right) \times \frac{1}{m} \qquad (4)

Note that we can also use a generalized sensitivity voting scheme like equation (3) to compute the sensitivity score (SC).


3.3.2 Threshold-based conflict resolution
The basic idea of our threshold-based conflict resolution is that the sensitivity score (SC) can be utilized as a threshold for decision making. Intuitively, if the sensitivity score is higher, the final decision has a higher chance to deny access, taking into account the privacy protection of highly sensitive data. Otherwise, the final decision is very likely to allow access, so that the utility of OSN services is not affected. The final decision is made automatically by OSN systems with this threshold-based conflict resolution as follows:

Decision = \begin{cases} \text{Permit} & \text{if } DV_{ag} > SC \\ \text{Deny} & \text{if } DV_{ag} \le SC \end{cases} \qquad (5)

It is worth noticing that our conflict resolution approach has an adaptive feature which reflects changes of policies and sensitivity levels. If any controller changes her/his policy or sensitivity level for the shared data item, the aggregated decision value (DVag) and the sensitivity score (SC) will be recomputed, and the final decision may change accordingly.
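Equations (4) and (5) together can be sketched as follows; the function names are our own choices for illustration.

```python
def sensitivity_score(sl_ow, sl_cb, sl_st):
    """Equation (4): average of all controllers' sensitivity levels,
    where m = 2 + len(sl_st) is the number of controllers."""
    m = 2 + len(sl_st)
    return (sl_ow + sl_cb + sum(sl_st)) / m

def threshold_decision(dv_ag, sc):
    """Equation (5): permit iff the aggregated decision value DVag
    strictly exceeds the sensitivity score SC."""
    return "permit" if dv_ag > sc else "deny"
```

The adaptivity noted above falls out directly: recomputing `sensitivity_score` after any controller changes her/his sensitivity level shifts the threshold, and the next `threshold_decision` call may flip the outcome.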

3.3.3 Strategy-based conflict resolution with privacy recommendation
The above threshold-based conflict resolution provides a quite fair mechanism for making the final decision when we treat all controllers as equally important. However, in practice, different controllers may have different priorities in affecting the final decision. In particular, the owner of a data item may desire to possess the highest priority in the control of the shared data item. Thus, we further provide a strategy-based conflict resolution mechanism to fulfill specific authorization requirements from the owners of shared data.

In this conflict resolution, the sensitivity score (SC) of a data item is considered as a guideline for the owner of the shared data item in selecting an appropriate strategy for conflict resolution. We introduce the following strategies for the purpose of resolving multiparty privacy conflicts in OSNs.

• Owner-overrides: the owner's decision has the highest priority. This strategy achieves the owner control mechanism that most OSNs currently utilize for data sharing. Based on the weighted decision voting scheme, we set \omega_{ow} = 1, \omega_{cb} = 0 and \omega_{st} = 0,¹ and the final decision can be made as follows:

Decision = \begin{cases} \text{Permit} & \text{if } DV_{ag} = 1 \\ \text{Deny} & \text{if } DV_{ag} = 0 \end{cases} \qquad (6)

• Full-consensus-permit: if any controller denies the access, the final decision is deny. This strategy can achieve the naive conflict resolution that we discussed previously. The final decision can be derived as:

Decision = \begin{cases} \text{Permit} & \text{if } DV_{ag} = 1 \\ \text{Deny} & \text{otherwise} \end{cases} \qquad (7)

1. In a real system implementation, those weights need to be adjusted by the system.

• Majority-permit: this strategy permits (denies, resp.) a request if the number of controllers permitting (denying, resp.) the request is greater than the number of controllers denying (permitting, resp.) it. The final decision can be made as:

Decision = \begin{cases} \text{Permit} & \text{if } DV_{ag} \ge 1/2 \\ \text{Deny} & \text{if } DV_{ag} < 1/2 \end{cases} \qquad (8)

Other majority voting strategies [27] can be easily supported by our voting scheme, such as strong-majority-permit (which permits a request if over 2/3 of the controllers permit it) and super-majority-permit (which permits a request if over 3/4 of the controllers permit it).
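The strategies of this subsection can be sketched as a single dispatch over the aggregated decision value; the function and strategy-name strings are our own, and for Owner-overrides we assume the weights were already set so that DVag equals the owner's decision value, as described above.

```python
def strategy_decision(strategy, dv_ag):
    """Final decision under the Section 3.3.3 strategies, expressed over the
    aggregated decision value DVag (equations (6)-(8))."""
    if strategy == "owner-overrides":          # equation (6): DVag is the
        return "permit" if dv_ag == 1 else "deny"  # owner's own decision
    if strategy == "full-consensus-permit":    # equation (7)
        return "permit" if dv_ag == 1 else "deny"
    if strategy == "majority-permit":          # equation (8)
        return "permit" if dv_ag >= 0.5 else "deny"
    raise ValueError("unknown strategy: " + strategy)
```

For instance, under majority-permit a DVag of exactly 1/2 is permitted, whereas under full-consensus-permit any single dissenting controller (DVag < 1) forces a deny.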

3.3.4 Conflict resolution for dissemination control
A user can share others' content with her/his friends in OSNs. In this case, the user is a disseminator of the content, and the content will be posted in the disseminator's space and visible to her/his friends or the public. Since a disseminator may adopt a weaker control over the disseminated content, while the content may be much more sensitive from the perspective of the original controllers, the privacy concerns of the original controllers should always be fulfilled, preventing inadvertent disclosure of sensitive content. In other words, the original access control policies should always be enforced to restrict access to the disseminated content. Thus, the final decision for an access request to the disseminated content is a composition of the decisions aggregated from the original controllers and the decision from the current disseminator. In order to eliminate the risk of possible leakage of sensitive information during data dissemination, we leverage a restrictive conflict resolution strategy, Deny-overrides, to resolve conflicts between the original controllers' decision and the disseminator's decision. In such a context, if either of those decisions is to deny the access request, the final decision is deny. Otherwise, if both of them are permit, the final decision is permit.
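The Deny-overrides composition described above amounts to a two-input conjunction; a minimal sketch, with the function name being our own:

```python
def dissemination_decision(controllers_decision, disseminator_decision):
    """Deny-overrides composition for disseminated content: permit only if
    both the original controllers' aggregated decision and the current
    disseminator's decision are permit."""
    if "deny" in (controllers_decision, disseminator_decision):
        return "deny"
    return "permit"
```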

4 LOGICAL REPRESENTATION AND ANALYSIS OF MULTIPARTY ACCESS CONTROL
In this section, we adopt answer set programming (ASP), a recent form of declarative programming [28], [29], to formally represent our model, which essentially provides a formal foundation of our model in terms of an ASP-based representation. Then, we demonstrate how the correctness analysis and authorization analysis of multiparty access control can be carried out based on such a logical representation.

4.1 Representing Multiparty Access Control in ASP
We introduce a translation module that turns a multiparty authorization specification into an ASP program. This interprets the formal semantics of the multiparty authorization specification in terms of the Answer Set semantics.²

2. We identify "∧" with ",", "←" with ":-", and a policy of the form A ← B, C ∨ D as a set of the two policies A ← B, C and A ← B, D.


4.1.1 Logical definition of multiple controllers and transitive relationships
The basic components and relations in our MPAC model can be directly defined with corresponding predicates in ASP. We have defined UDct as a set of user-to-data relations with controller type ct ∈ CT. Then, the logical definition of multiple controllers is as follows:

• The owner of a data item can be represented as:

  OW(controller, data) ← UD_OW(controller, data) ∧ U(controller) ∧ D(data).

• The contributor of a data item can be represented as:

  CB(controller, data) ← UD_CB(controller, data) ∧ U(controller) ∧ D(data).

• The stakeholder of a data item can be represented as:

  ST(controller, data) ← UD_ST(controller, data) ∧ U(controller) ∧ D(data).

• The disseminator of a data item can be represented as:

  DS(controller, data) ← UD_DS(controller, data) ∧ U(controller) ∧ D(data).

Our MPAC model supports transitive relationships. For example, suppose David is a friend of Alice, and Edward is a friend of David in a social network. Then, we call Edward a friend of friends of Alice. The friend relation between the two users Alice and David is represented in ASP as follows:

friendOf(Alice, David).

It is known that the transitive closure (e.g., reachability) cannot be expressed in first-order logic [29]; however, it can be easily handled in the stable model semantics. Then, friends-of-friends can be represented as the transitive closure of the friend relation in ASP as follows:

friendsOFfriends(U1, U2) ← friendOf(U1, U2).
friendsOFfriends(U1, U3) ← friendsOFfriends(U1, U2), friendsOFfriends(U2, U3).
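The same transitive closure can be computed procedurally by naive fixpoint iteration, mirroring what an ASP solver derives from the two rules above; this sketch is for illustration only and is not part of the translation module.

```python
def friends_of_friends(friend_of):
    """Transitive closure of the friendOf relation via fixpoint iteration.
    `friend_of` is a set of (u1, u2) pairs; iteration stops when no new
    pair can be derived, exactly like the stable-model fixpoint."""
    closure = set(friend_of)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure
```

With the base facts friendOf(Alice, David) and friendOf(David, Edward), the closure contains (Alice, Edward), matching the friends-of-friends derivation above.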

4.1.2 Logical policy specification
The translation module converts a multiparty authorization policy containing accessor specifications of type at = RN,

(controller, ctype, {< ac1, RN >, . . . , < acn, RN >}, < dt, sl >, effect),

into an ASP rule:

decision(controller, effect) ← ⋁_{1≤k≤n} ac_k(controller, X) ∧ ctype(controller, dt) ∧ U(controller) ∧ U(X) ∧ D(dt).

For instance, p1 in Section 3.2 is translated into an ASP rule as follows:

decision(Alice, permit) ← friendOf(Alice, X) ∧ OW(Alice, statusId) ∧ U(Alice) ∧ U(X) ∧ status(statusId).

The translation module converts a multiparty authorization policy containing accessor specifications of type at = UN,

(controller, ctype, {< ac1, UN >, . . . , < acn, UN >}, < dt, sl >, effect),

into a set of ASP rules:

decision(controller, effect) ← ⋁_{1≤k≤n} U(ac_k) ∧ ctype(controller, dt) ∧ U(controller) ∧ D(dt).

For example, p3 in Section 3.2 can be translated into the following ASP rule:

decision(Carol, deny) ← (U(Dave) ∨ U(Edward)) ∧ CB(Carol, videoId) ∧ U(Carol) ∧ video(videoId).

The translation module converts a multiparty authorization policy containing accessor specifications of type at = GN,

(controller, ctype, {< ac1, GN >, . . . , < acn, GN >}, < dt, sl >, effect),

into an ASP rule:

decision(controller, effect) ← ⋁_{1≤k≤n} UG(ac_k, X) ∧ ctype(controller, dt) ∧ U(controller) ∧ U(X) ∧ D(dt).

For example, p2 in Section 3.2 specifies accessors with both a relationship name (RN) and a group name (GN). Thus, this policy can be translated into an ASP rule as:

decision(Bob, permit) ← (colleageOf(Bob, X) ∨ UG(hiking, X)) ∧ ST(Bob, photoId) ∧ U(Bob) ∧ U(X) ∧ photo(photoId).

4.1.3 Logical representation of conflict resolution mechanism
Our voting schemes are represented in ASP rules as follows:

decision_voting(C) = 1 ← decision(C, permit).
decision_voting(C) = 0 ← decision(C, deny).
aggregation_weight(K) ← K = sum{weight(C) : controller(C)}.
aggregation_decision(N) ← N = sum{decision_voting(C) × weight(C) : controller(C)}.
aggregation_sensitivity(M) ← M = sum{sensitivity_voting(C) × weight(C) : controller(C)}.

Our threshold-based conflict resolution mechanism is represented as:

decision(controllers, permit) ← N > M ∧ aggregation_decision(N) ∧ aggregation_sensitivity(M).
decision(controllers, deny) ← not decision(controllers, permit).

Our strategy-based conflict resolution mechanism is represented with ASP as well. For example,

• The conflict resolution strategy Owner-overrides is represented in ASP rules as follows:

weight(controllers) = 1 ← OW(controller, data).
weight(controllers) = 0 ← CB(controller, data).
weight(controllers) = 0 ← ST(controller, data).
decision(controllers, permit) ← N/K == 1 ∧ aggregation_weight(K) ∧ aggregation_decision(N).
decision(controllers, deny) ← not decision(controllers, permit).


• The conflict resolution strategy Full-consensus-permit is represented as:

decision(controllers, permit) ← N/K == 1 ∧ aggregation_weight(K) ∧ aggregation_decision(N).
decision(controllers, deny) ← not decision(controllers, permit).

• The conflict resolution strategy Majority-permit is represented with the following ASP rules:

decision(controllers, permit) ← N/K > 1/2 ∧ aggregation_weight(K) ∧ aggregation_decision(N).
decision(controllers, deny) ← not decision(controllers, permit).

• The conflict resolution strategy Deny-overrides for dissemination control is represented as follows:

decision(deny) ← decision(controllers, deny).
decision(deny) ← decision(disseminator, deny).
decision(permit) ← not decision(deny).

4.2 Reasoning about Multiparty Access Control
One of the promising advantages of a logic-based representation of access control is that formal reasoning about authorization properties can be achieved. Once we translate our multiparty access control model into an ASP program, we can use off-the-shelf ASP solvers to carry out several automated analysis tasks.

The problem of verifying an authorization property against our model description can be cast into the problem of checking whether the program

Π ∪ Π_query ∪ Π_config

has no answer sets, where Π is the program corresponding to the model specification, Π_query is the program that encodes the negation of the property to check, and Π_config is the program that can generate arbitrary configurations, for example:

user_attributes(alice, bob, carol, dave, edward).
group_attributes(fashion, hiking).
photo_attributes(photoid).
1{user(X) : user_attributes(X)}.
1{group(X) : group_attributes(X)}.
1{photo(X) : photo_attributes(X)}.

Correctness Analysis: used to verify whether our proposed access control model is valid.

Example 1: If we want to check whether the conflict resolution strategy Full-consensus-permit is correctly defined based on the voting scheme in our model, the input query Π_query can be represented as follows:

decision(controllers, permit) ← ⋁_{C⊆CS} decision(C, deny).

If no answer set is found, this implies that the authorization property is verified. Otherwise, an answer set returned by an ASP solver serves as a counterexample that indicates why the description does not entail the authorization property. This helps determine the problems in the model definition.

Authorization Analysis: employed to examine over-sharing (does the current authorization state disclose the data to some undesirable users?) and under-sharing (does the current authorization state disallow some users to access data that they are supposed to be allowed to access?). This analysis service should be incorporated into OSN systems to enable users to check potential authorization impacts derived from collaborative control of shared data.

Example 2: (Checking over-sharing) Alice has defined a policy to disallow her family members to see a photo. Then, she wants to check whether any family members can see this photo after applying the conflict resolution mechanism for collaborative authorization management, considering the different privacy preferences of multiple controllers. The input query Π_query can be represented as follows:

check :- decision(permit), familyof(alice, x),
         ow(alice, photoid), user(alice),
         user(x), photo(photoid).
:- not check.

If any answer set is found, it means that there are family members who can see the photo. Thus, from the perspective of Alice's authorization, this photo is over-shared with some users. A possible solution is to adopt a more restrictive conflict resolution strategy or to increase the weight value of Alice.

Example 3: (Checking under-sharing) Bob has defined a policy to authorize his friends to see a photo. He wants to check whether any friends cannot see this photo in the current system. The input query Π_query can be specified as follows:

check :- decision(deny), friendof(bob, x),
         ow(alice, photoid), user(bob),
         user(x), photo(photoid).
:- not check.

If an answer set contains check, this means that there are friends who cannot view the photo. Regarding Bob's authorization requirement, this photo is under-shared with his friends.

5 IMPLEMENTATION AND EVALUATION

5.1 Prototype Implementation

We implemented a proof-of-concept Facebook application for the collaborative management of shared data, called MController (http://apps.facebook.com/MController). Our prototype application enables multiple associated users to specify their authorization policies and privacy preferences to co-control a shared data item. It is worth noting that our current implementation is restricted to handling photo sharing in OSNs. Obviously, our approach can be generalized to deal with other kinds of data sharing, such as videos and comments, in OSNs as long as the stakeholders of the shared data are identified with effective methods such as tagging or searching.

Figure 5 shows the architecture of MController, which is divided into two major pieces: the Facebook server and the application server. The Facebook server provides an entry point via the Facebook application page, and provides


Fig. 5. Overall Architecture of MController Application.

Fig. 6. System Architecture of Decision Making in MController.

references to photos, friendships, and feed data through API calls. The Facebook server accepts inputs from users, then forwards them to the application server. The application server is responsible for input processing and the collaborative management of shared data. Information related to user data, such as user identifiers, friend lists, user groups, and user contents, is stored in the application database. Users can access the MController application through Facebook, which serves the application in an iFrame. When access requests are made to the decision making portion in the application server, results are returned in the form of access to photos or proper information about access to photos. In addition, when privacy changes are made, the decision making portion returns change-impact information to the interface to alert the user. Moreover, analysis services in the MController application are provided by implementing an ASP translator, which communicates with an ASP reasoner. Users can leverage the analysis services to perform complicated authorization queries.

MController is developed as a third-party Facebook application hosted on an Apache Tomcat application server supporting PHP and a MySQL database. The MController application is based on the iFrame external application approach. Using the JavaScript and PHP SDKs, it accesses users' Facebook data through the Graph API and Facebook Query Language. Once a user installs MController in her/his Facebook space and accepts the necessary permissions, MController can access the user's basic information and contents. In particular, MController can retrieve and list all photos that are owned or uploaded by the user, or in which the user is tagged. Once information is imported, the user accesses MController through its application page on Facebook, where s/he can query access information, set privacy for photos for which s/he is a controller, or view photos s/he is allowed to access.

A core component of MController is the decision making module, which processes access requests and returns responses (either permit or deny). Figure 6 depicts the system architecture of the decision making module in MController. To evaluate an access request, the policies of each controller of the targeted content are enforced first to generate a decision for that controller. Then, the decisions of all controllers are aggregated to yield a final decision as the response to the request. Multiparty privacy conflicts are resolved based on the configured conflict resolution mechanism when aggregating the decisions of controllers. If the owner of the content chooses automatic conflict resolution, the aggregated sensitivity value is utilized as a threshold for decision making. Otherwise, multiparty privacy conflicts are resolved by applying the strategy selected by the owner, and the aggregated sensitivity score is considered as a recommendation for strategy selection. Regarding access requests to disseminated content, the final decision is made by combining the disseminator's decision and the original controllers' decision, adopting the corresponding combination strategy discussed previously.

A snapshot of the main interface of MController is shown in Figure 7 (a). All photos are loaded into a gallery-style interface. To control photo sharing, the user clicks the "Owned", "Tagged", "Contributed", or "Disseminated" tabs, then selects any photo to define her/his privacy preference by clicking the lock below the gallery. If the user is not the owner of the selected photo, s/he can only edit the privacy setting and sensitivity setting of the photo.³ Otherwise, as shown in Figure 7 (c), if the user is the owner of the photo, s/he has the option of clicking "Show Advanced Controls" to assign weight values to different types of controllers⁴ and configure the conflict resolution mechanism for the shared photo. By default, the conflict resolution is set to automatic. However, if the owner chooses to set a manual conflict resolution, s/he is informed of the sensitivity score of the shared photo and receives a recommendation for choosing an appropriate conflict resolution strategy. Once a controller saves her/his privacy setting, corresponding feedback is provided to indicate the potential authorization impact of her/his choice. The controller can immediately determine how many users can see the photo but should be denied, and how many users cannot see the photo but should be allowed. MController can also display the details of all users who violate the controller's privacy setting (see Figure 7 (d)). The purpose of such feedback information is to guide the controller in evaluating the impact of collaborative authorization. If the controller is not satisfied with the current privacy control, s/he may adjust her/his privacy setting, contact the owner of the photo

3. Note that users are able to predefine their privacy preferences, which are then applied by default to photos they need to control.

4. In our future work, we will explore how each controller can be assigned a weight. Also, an automatic mechanism to determine the weights of controllers will be studied, considering the trust and reputation levels of controllers for collaborative control.


Fig. 7. MController Interface.

to ask her/him to change the conflict resolution strategy, or even report a privacy violation to OSN administrators, who can delete the photo. A controller can also perform authorization analysis via advanced queries, as shown in Figure 7 (b). Both over-sharing and under-sharing can be examined by using such an analysis service in MController.

5.2 System Usability and Performance Evaluation

5.2.1 Participants and Procedure
MController is a functional proof-of-concept implementation of collaborative privacy management. To measure the practicality and usability of our mechanism, we conducted a survey study (n=35) to explore the factors surrounding users' desires for privacy and discover how we might improve the mechanisms implemented in MController. Specifically, we were interested in users' perspectives on the current Facebook privacy system and their desires for more control over photos they do not own. We recruited participants through university mailing lists and through Facebook itself using Facebook's built-in sharing API. Users were given the opportunity to share our application and try it with their friends. While this is not a random sampling, recruiting using the natural dissemination features of Facebook arguably gives an accurate profile of the ecosystem.

Participants were first asked to answer some questions about their usage and perception of Facebook's privacy controls, then were invited to watch a video (http://bit.ly/MController) describing the concept behind MController. Users were then instructed to install the application using their Facebook profiles and complete the following actions: set privacy settings for a photo they do not own but are tagged in, set privacy settings for a photo they own, set privacy settings for a photo they contributed, and set privacy settings for a photo they disseminated. As users completed these actions, they answered questions on the usability of the controls in MController. Afterward, they were asked to answer further questions and compare their experience with MController to that with Facebook.

5.2.2 User Study of MController

For evaluation purposes, questions (http://goo.gl/eDkaV) were split into three areas: likeability, simplicity, and control. Likeability is a measure of a user's satisfaction with a system (e.g., "I like the idea of being able to control photos in which I am tagged"). Simplicity is a measure of how intuitive and useful the system is (e.g., "Setting my privacy settings for a photo in MController is Complicated (1) to Simple (5)" on a 5-point scale). Control is a measure of the user's perceived control of their personal data (e.g., "If Facebook implemented controls like MController's to control photo privacy, my photos would be better protected"). Questions were either True/False or measured on a 5-point Likert scale, and all responses were scaled from 0 to 1 for numerical analysis. In this measurement, a higher number


TABLE 1
Usability Evaluation for Facebook and MController Privacy Controls.

Metric     | Facebook average | Facebook upper bound on 95% CI | MController average | MController lower bound on 95% CI
Likability | 0.20             | 0.25                           | 0.83                | 0.80
Simplicity | 0.38             | 0.44                           | 0.72                | 0.64
Control    | 0.20             | 0.25                           | 0.83                | 0.80

indicates a positive perception or opinion of the system, while a lower number indicates a negative one. To analyze the average user perception of the system, we used a 95% confidence interval for the users' answers. This assumes the population to be mostly normal.

Before Using MController. Prior to using MController, users were asked a few questions about their usage of Facebook to determine the perceived usability of the current Facebook privacy controls. Since we were interested in the maximum average perception of Facebook, we looked at the upper bound of the confidence interval.

An average user asserts at most 25% positively about the likability and control of Facebook's privacy management mechanism, and at most 44% about Facebook's simplicity, as shown in Table 1. This demonstrates an average negative opinion of the Facebook privacy controls that users currently must use.

After Using MController. Users were then asked to perform a few tasks in MController. Since we were interested in the average minimum opinion of MController, we looked at the lower bound of the confidence interval.

An average user asserts at least 80% positively about the likability and control, and at least 67% positively about MController's simplicity, as shown in Table 1. This demonstrates an average positive opinion of the controls and ideas presented to users in MController.

Improvement. Besides viewing user opinions and analyzing the usability of our proof-of-concept application, we also briefly investigated the potential improvement of our application. The lowest score for MController is in the area of simplicity. On average, users found setting privacy settings for a photo more complicated in MController than in Facebook. This means that while users appreciated the privacy controls MController presented to them, it would be desirable to further simplify the process implemented in MController. This could be achieved by adopting learning-based generation of privacy settings for OSNs [15], [32].

5.2.3 Performance Evaluation

To evaluate the performance of the policy evaluation mechanism in MController, we varied the number of controllers of a shared photo from 1 to 20, and assigned each controller the average number of friends, 130, as claimed by Facebook statistics [3]. Also, we considered two cases for our evaluation. In the first case, each controller allows "friends" to access the shared photo. In the second case, controllers specify "friends of friends" as the accessors instead of "friends". In our experiments, we performed 1,000 independent trials and measured the performance of each trial. Since the system performance depends on other

processes running at the time of measurement, we hadinitial discrepancies in our performance. To minimize suchan impact, we performed 10 independent trials (a total of10,000 calculations for each number of controllers). Forboth cases, the experimental results showed that the policyevaluation time increases linearly with the increase of thenumber of controllers. With the simplest implementation ofour mechanism, where n is the number of controllers of ashared photo, a series of operations essentially takes placen times. There are O(n) MySQL calls and data fetchingoperations and O(1) for additional operations. Moreover,we could observe there was no significant overhead whenwe run MController in Facebook.
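The linear cost can be sketched as follows. All function and policy names here are illustrative assumptions, not the actual MController implementation; the point is simply that one policy lookup (a MySQL call in the deployed system) happens per controller, followed by a single conflict-resolution step:

```python
# Illustrative sketch of the O(n) policy evaluation: one policy fetch and
# one decision per controller, then a single conflict-resolution step.
# All names are assumptions for illustration, not MController's code.

def fetch_policy(controller, policies):
    # Stands in for the per-controller MySQL call in the deployed system.
    return policies[controller]

def policy_permits(policy, accessor):
    # A policy simply lists its permitted audience ("friends" or
    # "friends of friends" in the experiments).
    return accessor in policy["audience"]

def resolve_conflicts(decisions):
    # Simplest possible resolution: permit only if every controller permits.
    return all(decisions)

def evaluate_access(accessor, controllers, policies):
    decisions = []
    for controller in controllers:                   # O(n) loop
        policy = fetch_policy(controller, policies)  # O(n) fetches in total
        decisions.append(policy_permits(policy, accessor))
    return resolve_conflicts(decisions)              # O(1) additional work

policies = {
    "alice": {"audience": {"bob", "carol"}},
    "dave":  {"audience": {"bob"}},
}
print(evaluate_access("bob", ["alice", "dave"], policies))    # True
print(evaluate_access("carol", ["alice", "dave"], policies))  # False
```

Doubling the number of controllers doubles the number of fetch/decide iterations, which matches the linear growth observed in the experiments.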

6 DISCUSSIONS

In our multiparty access control system, a group of users could collude with one another to manipulate the final access control decision. Consider an attack scenario in which a set of malicious users wants to make a shared photo available to a wider audience. Suppose they can access the photo; they then all tag themselves in the photo or fake their identities. In addition, they collude with each other to assign a very low sensitivity level to the photo and to specify policies granting a wider audience access to it. With a large number of colluding users, the photo may be disclosed to users who are not expected to gain access. To prevent such an attack scenario, three conditions need to be satisfied: (1) there are no fake identities in OSNs; (2) all tagged users are real users who appear in the photo; and (3) all controllers of the photo are honest in specifying their privacy preferences.
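To make the scenario concrete, the following sketch assumes, for illustration only, that a shared photo's sensitivity is averaged over all controllers; the paper's actual aggregation rule may differ. Under that assumption, self-tagged colluders who each report zero sensitivity drag the aggregate down:

```python
# Numeric illustration of the collusion scenario. The averaging rule is
# our illustrative assumption, not necessarily the paper's exact formula.

def aggregate_sensitivity(levels):
    """Average the sensitivity levels (in [0, 1]) chosen by controllers."""
    return sum(levels) / len(levels)

honest_levels = [0.9, 0.8]     # owner and a genuinely tagged friend
colluder_levels = [0.0] * 8    # eight self-tagged colluders choose 0

print(round(aggregate_sensitivity(honest_levels), 2))                    # 0.85
print(round(aggregate_sensitivity(honest_levels + colluder_levels), 2))  # 0.17
```

The more colluders join as controllers, the closer the aggregate falls to their chosen value, which is why conditions (1)-(3) above must all hold.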

Regarding the first condition, two typical attacks, Sybil attacks [14] and Identity Clone attacks [8], have been introduced to OSNs, and several effective approaches have recently been proposed to prevent the former [16], [36] and the latter [22], respectively. To guarantee the second condition, an effective tag validation mechanism is needed to verify each tagged user against the photo. In our current system, if any users tag themselves or others in a photo, the photo owner receives a tag notification and can then verify the correctness of the tagged users. As effective automated algorithms (e.g., facial recognition [13]) are being developed to recognize people accurately in contents such as photos, automatic tag validation is feasible. Considering the third condition, our current system provides a function (see Figure 7 (d)) to indicate the potential authorization impact of a controller's privacy preference. Using this function, the photo owner can examine all users who are granted access by the collaborative authorization but are not explicitly granted access by the owner her/himself. Thus, it enables the owner to discover potentially malicious activities in collaborative control. The detection of collusion behaviors in collaborative systems has been addressed by recent work [31], [35], and our future work would integrate an effective collusion detection technique into MPAC. To prevent collusion activities, our current prototype has implemented a function for owner control (see Figure 7 (c)), where the photo owner can disable any controller suspected to be malicious from participating in collaborative control of the photo. In addition, we would further investigate how users' reputations, based on their collaboration activities, can be applied to prevent and detect malicious activities in our future work.
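The authorization-impact check reduces to a set difference: the accessors admitted by the collaborative decision minus those the owner would admit on her/his own. The names below are illustrative assumptions, not the prototype's actual API:

```python
# Minimal sketch of the "authorization impact" check: list accessors
# admitted by the collaborative decision but not by the owner's own
# policy. Names are illustrative, not the prototype's actual API.

def authorization_impact(collaborative_grants, owner_grants):
    """Users who gain access only through other controllers' policies."""
    return set(collaborative_grants) - set(owner_grants)

collaborative = {"bob", "carol", "mallory"}
granted_by_owner = {"bob", "carol"}
print(authorization_impact(collaborative, granted_by_owner))  # {'mallory'}
```

Any user surfaced by this difference is a candidate for the owner's scrutiny, and possibly for the owner-control function that disables a suspected controller.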

7 RELATED WORK

Access control for OSNs is still a relatively new research area. Several access control models for OSNs have been introduced (e.g., [10], [11], [17], [18], [23]). Early access control solutions for OSNs introduced trust-based access control inspired by the development of trust and reputation computation in OSNs. The D-FOAF system [23] is primarily a Friend-of-a-Friend (FOAF) ontology-based distributed identity management system for OSNs, where relationships are associated with a trust level indicating the level of friendship between the users participating in a given relationship. Carminati et al. [10] introduced a conceptually similar but more comprehensive trust-based access control model. This model allows the specification of access rules for online resources, where authorized users are denoted in terms of the relationship type, depth, and trust level between users in OSNs. They further presented a semi-decentralized discretionary access control model and a related enforcement mechanism for the controlled sharing of information in OSNs [11]. Fong et al. [18] proposed an access control model that formalizes and generalizes the access control mechanism implemented in Facebook, admitting arbitrary policy vocabularies based on theoretical graph properties. Gates [12] described relationship-based access control as one of the new security paradigms that addresses the unique requirements of Web 2.0. Fong [17] recently formulated this paradigm as a Relationship-Based Access Control (ReBAC) model that bases authorization decisions on the relationships between the resource owner and the resource accessor in an OSN. However, none of this existing work can model and analyze access control requirements with respect to collaborative authorization management of shared data in OSNs.

The need for joint management of data sharing, especially photo sharing, in OSNs has been recognized by recent work [7], [9], [21], [33]. Squicciarini et al. [33] provided a solution for collective privacy management in OSNs. Their work considered access control policies for content that is co-owned by multiple users in an OSN, such that each co-owner may separately specify her/his own privacy preference for the shared content. The Clarke-Tax mechanism was adopted to enable the collective enforcement of policies for shared content, and game theory was applied to evaluate the scheme. However, a general drawback of their solution is usability: it could be very hard for ordinary OSN users to comprehend the Clarke-Tax mechanism and specify appropriate bid values for auctions. Also, the auction process adopted in their approach means that only the winning bids determine who can access the data, instead of accommodating all stakeholders' privacy preferences. Carminati et al. [9] recently introduced a new class of security policies, called collaborative security policies, that enhance topology-based access control with respect to a set of collaborative users. This work considers user collaboration during both access control enforcement and policy specification, and employs semantic web technologies to support a rich way of denoting collaborative users based on their relationships with the associated resources. In contrast, our work proposes a formal model to address the multiparty access control issue in OSNs, along with a general policy specification scheme and a simple but flexible conflict resolution mechanism for collaborative management of shared data in OSNs. In particular, our proposed solution can also conduct various analysis tasks on access control mechanisms used in OSNs, which has not been addressed by prior work.

8 CONCLUSION

In this paper, we have proposed a novel solution for collaborative management of shared data in OSNs. A multiparty access control model was formulated, along with a multiparty policy specification scheme and corresponding policy evaluation mechanism. In addition, we have introduced an approach for representing and reasoning about our proposed model. A proof-of-concept implementation of our solution, called MController, has been discussed as well, followed by the usability study and system evaluation of our method.

As part of future work, we plan to investigate more comprehensive privacy conflict resolution approaches and analysis services for collaborative management of shared data in OSNs. We would also explore more criteria to evaluate the features of our proposed MPAC model. For example, our recent work has evaluated the effectiveness of the MPAC conflict resolution approach based on the tradeoff between privacy risk and sharing loss [21]. In addition, users may be involved in the control of a large number of shared photos, and configuring the privacy preferences may become a time-consuming and tedious task. Therefore, we would study inference-based techniques [15], [34] for automatically configuring privacy preferences in MPAC. Besides, we plan to systematically integrate the notion of trust and reputation into our MPAC model and investigate a comprehensive solution to cope with collusion attacks, providing a robust MPAC service in OSNs.

REFERENCES

[1] Facebook Developers. http://developers.facebook.com/.

[2] Facebook Privacy Policy. http://www.facebook.com/policy.php/.

[3] Facebook Statistics. http://www.facebook.com/press/info.php?statistics.

[4] Google+ Privacy Policy. http://www.google.com/intl/en/+/policy/.

[5] OpenSocial Framework. http://code.google.com/p/opensocial-resources/.

[6] The Google+ Project. https://plus.google.com.

[7] A. Besmer and H. Richter Lipford. Moving beyond untagging: Photo privacy in a tagged world. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, pages 1563–1572. ACM, 2010.

[8] L. Bilge, T. Strufe, D. Balzarotti, and E. Kirda. All your contacts are belong to us: Automated identity theft attacks on social networks. In Proceedings of the 18th International Conference on World Wide Web, pages 551–560. ACM, 2009.

[9] B. Carminati and E. Ferrari. Collaborative access control in on-line social networks. In Proceedings of the 7th International Conference on Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), pages 231–240. IEEE, 2011.

[10] B. Carminati, E. Ferrari, and A. Perego. Rule-based access control for social networks. In On the Move to Meaningful Internet Systems 2006: OTM 2006 Workshops, pages 1734–1744. Springer, 2006.

[11] B. Carminati, E. Ferrari, and A. Perego. Enforcing access control in web-based social networks. ACM Transactions on Information and System Security (TISSEC), 13(1):1–38, 2009.

[12] C. E. Gates. Access control requirements for Web 2.0 security and privacy. In Proc. of Workshop on Web 2.0 Security & Privacy (W2SP), 2007.

[13] J. Choi, W. De Neve, K. Plataniotis, and Y. Ro. Collaborative face recognition for improved face annotation in personal photo collections shared on online social networks. IEEE Transactions on Multimedia, 13(1):14–28, 2011.

[14] J. Douceur. The Sybil attack. In Peer-to-Peer Systems, pages 251–260. Springer, 2002.

[15] L. Fang and K. LeFevre. Privacy wizards for social networking sites. In Proceedings of the 19th International Conference on World Wide Web, pages 351–360. ACM, 2010.

[16] P. Fong. Preventing Sybil attacks by privilege attenuation: A design principle for social network systems. In Security and Privacy (SP), 2011 IEEE Symposium on, pages 263–278. IEEE, 2011.

[17] P. Fong. Relationship-based access control: Protection model and policy language. In Proceedings of the First ACM Conference on Data and Application Security and Privacy, pages 191–202. ACM, 2011.

[18] P. Fong, M. Anwar, and Z. Zhao. A privacy preservation model for Facebook-style social network systems. In Proceedings of the 14th European Conference on Research in Computer Security, pages 303–320. Springer-Verlag, 2009.

[19] J. Golbeck. Computing and applying trust in web-based social networks. Ph.D. thesis, University of Maryland, College Park, MD, USA, 2005.

[20] M. Harrison, W. Ruzzo, and J. Ullman. Protection in operating systems. Communications of the ACM, 19(8):461–471, 1976.

[21] H. Hu, G.-J. Ahn, and J. Jorgensen. Detecting and resolving privacy conflicts for collaborative data sharing in online social networks. In Proceedings of the 27th Annual Computer Security Applications Conference, ACSAC '11, pages 103–112. ACM, 2011.

[22] L. Jin, H. Takabi, and J. Joshi. Towards active detection of identity clone attacks on online social networks. In Proceedings of the First ACM Conference on Data and Application Security and Privacy, pages 27–38. ACM, 2011.

[23] S. Kruk, S. Grzonkowski, A. Gzella, T. Woroniecki, and H. Choi. D-FOAF: Distributed identity management with access rights delegation. The Semantic Web – ASWC 2006, pages 140–154, 2006.

[24] L. Lam and S. Suen. Application of majority voting to pattern recognition: An analysis of its behavior and performance. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 27(5):553–568, 2002.

[25] N. Li, J. Mitchell, and W. Winsborough. Beyond proof-of-compliance: Security analysis in trust management. Journal of the ACM (JACM), 52(3):474–514, 2005.

[26] N. Li and M. Tripunitara. Security analysis in role-based access control. ACM Transactions on Information and System Security (TISSEC), 9(4):391–420, 2006.

[27] N. Li, Q. Wang, W. Qardaji, E. Bertino, P. Rao, J. Lobo, and D. Lin. Access control policy combining: Theory meets practice. In Proceedings of the 14th ACM Symposium on Access Control Models and Technologies, pages 135–144. ACM, 2009.

[28] V. Lifschitz. What is answer set programming? In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1594–1597. MIT Press, 2008.

[29] V. Marek and M. Truszczynski. Stable models and an alternative logic programming paradigm. In The Logic Programming Paradigm: A 25-Year Perspective, pages 375–398. Springer-Verlag, 1999.

[30] A. Mislove, B. Viswanath, K. Gummadi, and P. Druschel. You are who you know: Inferring user profiles in online social networks. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, pages 251–260. ACM, 2010.

[31] B. Qureshi, G. Min, and D. Kouvatsos. Collusion detection and prevention with FIRE+ trust and reputation model. In Computer and Information Technology (CIT), 2010 IEEE 10th International Conference on, pages 2548–2555. IEEE, 2010.

[32] A. Squicciarini, F. Paci, and S. Sundareswaran. PriMa: An effective privacy protection mechanism for social networks. In Proceedings of the 5th ACM Symposium on Information, Computer and Communications Security, pages 320–323. ACM, 2010.

[33] A. Squicciarini, M. Shehab, and F. Paci. Collective privacy management in social networks. In Proceedings of the 18th International Conference on World Wide Web, pages 521–530. ACM, 2009.

[34] A. Squicciarini, S. Sundareswaran, D. Lin, and J. Wede. A3P: Adaptive policy prediction for shared images over popular content sharing sites. In Proceedings of the 22nd ACM Conference on Hypertext and Hypermedia, pages 261–270. ACM, 2011.

[35] E. Staab and T. Engel. Collusion detection for grid computing. In Proceedings of the 2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, pages 412–419. IEEE Computer Society, 2009.

[36] B. Viswanath, A. Post, K. Gummadi, and A. Mislove. An analysis of social network-based Sybil defenses. In ACM SIGCOMM Computer Communication Review, volume 40, pages 363–374. ACM, 2010.

[37] G. Wondracek, T. Holz, E. Kirda, and C. Kruegel. A practical attack to de-anonymize social network users. In 2010 IEEE Symposium on Security and Privacy, pages 223–238. IEEE, 2010.

[38] E. Zheleva and L. Getoor. To join or not to join: The illusion of privacy in social networks with mixed public and private user profiles. In Proceedings of the 18th International Conference on World Wide Web, pages 531–540. ACM, 2009.

Hongxin Hu is currently working toward the Ph.D. degree in the School of Computing, Informatics, and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University, Tempe, AZ. His current research interests include access control models and mechanisms, security and privacy in social networks, security in distributed and cloud computing, network and system security, and secure software engineering.

Gail-Joon Ahn is an Associate Professor in the School of Computing, Informatics, and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, and the Director of the Security Engineering for Future Computing Laboratory, Arizona State University. His research interests include information and systems security, vulnerability and risk management, access control, and security architecture for distributed systems. His research has been supported by the U.S. National Science Foundation, National Security Agency, U.S. Department of Defense, U.S. Department of Energy, Bank of America, Hewlett Packard, Microsoft, and the Robert Wood Johnson Foundation.

Jan Jorgensen is a junior pursuing his B.S. in computer science at the Ira A. Fulton Schools of Engineering and Barrett Honors College, Arizona State University. He is an undergraduate research assistant in the Security Engineering for Future Computing Laboratory, Arizona State University. His current research pursuits include security and privacy in social networks.