
A Scalable and Privacy Preserving Remote Attestation Mechanism




Tamleek Ali 1, Masoom Alam 1, Mohammad Nauman 2, Toqeer Ali 2, Muhammad Ali 1, Sajid Anwar 1

1 Institute of Management Sciences, Peshawar, Pakistan
2 Computer Science Research & Development Unit (CSRDU), Pakistan

{tamleek, masoom, muhammad.ali, sajid.anwar}@imsciences.edu.pk
{nauman, toqeer}@csrdu.org

Abstract

Assurance of fulfillment of a stakeholder's expectations on a target platform is termed remote attestation. Without such an assurance, there is no way of knowing whether the policies of the remote owner will be enforced as expected. Existing approaches toward remote attestation work at different levels of the software stack, and most of them measure only a single entity (OS and/or application) on a remote platform. Several dynamic attestation techniques have been proposed that aim to measure the internal working of an application. In TCG-based attestation, a Platform Configuration Register (PCR) is used for storing and vouching for the platform or application integrity to a remote party. Currently, a single PCR is used to capture the behavior of one application or purpose. As there can be more than one application running on a target system, we need mechanisms to remotely certify the internal behavior of multiple applications on a single system. In this paper we propose the idea of using a single PCR for multiple instances of a target application, while preserving the privacy of other application instances. Moreover, our technique also keeps the trusted status of each application intact. We change the working of existing remote attestation techniques by enabling them to incorporate multiple instances of the entities (operating systems, programs etc.) that they measure. We also propose a technique for measurement and verification of a single instance by its respective stakeholder while keeping the privacy of the others. The mechanism proposed in this paper is applied to different attestation techniques that work at different levels of the software stack. We also provide a proof-of-concept implementation of the proposed technique and discuss the pros and cons of our approach.

1 Introduction

Remote attestation — a term introduced by the Trusted Computing Group (TCG) [1] — is an approach for establishing a trust decision about a remote platform. Remote attestation allows a challenger to verify whether the behavior of a target platform/application is trusted. Several approaches have been proposed for remote attestation of a target platform. These techniques are defined at different levels of abstraction. The lower-level techniques include the Integrity Measurement Architecture (IMA) [10], which presents binary hashes of executables to the challenger, and the Policy Reduced Integrity Measurement Architecture (PRIMA) [5], which controls the information flows to and from a trusted application. Similarly, a medium-level technique such as property-based attestation [9] allows mapping of system configurations to some generic properties. The authors of [2] proposed a high-level framework in which the behavior of a model is identified and measured. Recently, efforts have been made that aim to measure the dynamic behavior of an application. In these techniques, various other types of trust tokens, represented as arbitrary data structures, are collected and reported through PCRs [7, 2]. Remote attestation of program execution [4] is a technique to dynamically measure the behavior of an application on a remote platform. It assesses the benign behavior of the remotely executing program by the sequence in which the program makes system calls.

All these techniques, one way or the other, use a PCR for storing the hashes of their integrity tokens. Since, in Trusted Computing technology, the PCR is the means of storing the trust tokens of a system in the TPM, and PCR_QUOTE is the operation provided by the TPM to vouch for the existence of such tokens, we need to use PCRs in an efficient manner. For example, [7] proposed a technique for remote attestation of the attribute update and information flow behaviors of a Usage Control (UCON) [8] system. This technique considers the measurement of a single UCON application instance. It uses PCR 11 for attestation of the attribute behavior and PCR 12 for the information flow behavior of a UCON application. Usage of PCRs in this way will inevitably lead to scarcity of PCRs. Moreover, it is quite possible that a system has multiple applications running, and if one PCR is used for a single application, the system will barely meet the needs of the applications. So there is a need to use PCRs in such a way that they can accommodate the configurations of a multitude of applications. We propose a technique in which we can measure multiple application behaviors in one PCR through aggregation.
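The role of the extend operation in this aggregation can be sketched as follows. This is an illustrative Python simulation of a TPM PCR in software, not the paper's implementation; the entry format is modelled loosely on the attribute update log used later in the paper. The key point is that a PCR is a hash chain, so entries from several applications can be folded into one register as long as the full log is kept for later recomputation.

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new_pcr = SHA-1(old_pcr || measurement)
    return sha1(pcr + measurement)

# One PCR (initially all zeros, as in a real TPM) shared by several apps.
pcr = b"\x00" * 20
log = []  # system-wide log: (app_id, entry, entry_hash)

for app_id, entry in [("App-1", b"s1.a:o1.a:s1.a=2:o1.a=1"),
                      ("App-2", b"s3.a:o4.a:s3.a=6:o4.a=3"),
                      ("App-1", b"s1.a:o1.a:s1.a=3:o1.a=2")]:
    h = sha1(entry)
    log.append((app_id, entry, h))
    pcr = pcr_extend(pcr, h)

# A verifier who holds the full log can recompute the final PCR value.
replayed = b"\x00" * 20
for _, _, h in log:
    replayed = pcr_extend(replayed, h)
assert replayed == pcr
```

Because the final value depends on every entry in sequence, a single PCR can vouch for the logs of all applications at once, provided the reported log is complete.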

Scalability being an essential characteristic for wide acceptance of an attestation technique, we propose a mechanism to scale remote attestation from one operating system to multiple operating systems, and from one application to multiple applications, without making any changes to the corresponding measurement mechanisms. To support scalability at the OS level, we introduce a hypervisor-level measurement agent that takes measurements from the guest OS kernels, logs them and extends the PCR accordingly. Similarly, for application-level attestation techniques we delegate the logging activity to a kernel-level behavior monitor.

However, aggregating different application instances' behavior into one PCR creates privacy issues: different application logs are stored in one system-wide log to capture the behavior of all the applications running on that system, and different applications may belong to different stakeholders. Thus, reporting the system-wide log may result in the violation of another application's privacy. For validation, a remote challenger needs to evaluate only his/her own application's log. We amend the reporting mechanism of the existing remote attestation techniques to tweak the logs by reporting only the challenger's own log entries while hiding the entries of others. The logs are stored in an unprotected securityfs, and we assume that a malicious user has the ability to change the log text. To avoid this kind of problem, the log integrity is first verified locally and then the log is sent to the challenger for verification. The correctness of the measurements can be ensured by the challenger using the hashes with which the PCR has been extended. Similarly, the application or OS verification is done as in the corresponding attestation technique.

Contribution: Our contributions in this paper are as follows: 1) We identify the problem of scarcity of PCRs for accommodating different attestation techniques. 2) We propose a technique for measuring and verifying the behavior of multiple instances of an application using a single PCR. 3) We resolve the privacy problems arising due to the re-use of PCRs by applications belonging to different stakeholders. This means that our technique addresses the privacy issues by hiding the behavior logs of the other instances of an application. Thus, the challenger (or stakeholder) of a specific instance may verify only its own application instance while still keeping the attestation tokens in a trusted state.

Figure 1: Motivating use case. (A healthcare service provider and a financial services provider each deploy their own record-reader application; each application runs two instances and maintains its own PolicyDB and log, while a single TPM is shared by all instances.)

Outline: Section 2 details the real-world use cases to motivate the technique presented in this paper. Section 3 presents the target architecture at three levels of a software stack and elaborates the verification process of different instances of a trusted application. Implementation details are presented in Section 4. Finally, we conclude the paper in Section 6.

2 Problem Description

Due to the hardware cost involved, there is a limitation on the number of PCRs in the TPM. On the other hand, there are a number of remote attestation techniques (each using one or more PCRs) that might be deployed on a single platform. Further, each attestation technique can be applied to multiple application instances. Therefore, there is a need to devise a mechanism that can enable the shared use of PCRs across different attestation techniques and across multiple instances of a single application to which a specific remote attestation technique is applied.

2.1 Motivating Use Case

Multiple Usage Control Applications: Usage control [8] is a need of contemporary security applications. The remote owner of a resource needs to verify that her object is used in accordance with the specified policy. It is quite possible that different usage control applications are running on a system. For example, a law enforcement department needs to have access to the health records of a citizen and may also need to access the financial records of the citizen during some investigation. Health and financial information about a person has several constraints and policies associated with it (cf. Figure 1). The health and financial service providers have their own trusted applications, through which the law enforcement department can check and update these records. These applications are used by the law enforcement department to access the citizen's corresponding data, and they run on the same system. Each application has its own policy about the usage of a citizen record. Similarly, it is even possible that the system executes more than one instance of an application, each opening a different citizen record. Here, each of the stakeholders would need to attest its own application to remotely certify that the citizen record is being accessed and updated according to the associated policy.

3 Target Architecture

Attestation techniques can be categorized at different levels in the software stack. We can consider TCG-based attestation at the lowest level, where the kernel [10] measures the trusted state of the system by logging the hashes of executables. Similarly, Linux Kernel Integrity Measurement (LKIM) [6] works at the kernel level, but it aims to verify the dynamic behavior of the operating system by contextual inspection of the kernel. Remote attestation of program execution [4] measures the behavior of a program in execution by the sequence in which it makes system calls.

All attestation techniques that use the TPM as the root of trust for measurement have logging entities (1) at different levels of the software stack. The loggers in the different attestation techniques take hashes of their upper-level entities and extend a PCR with them. For example, in the case of IMA, the kernel works as a logging entity that measures each executable at load time and extends PCR-10 with its hash. Similarly, there are techniques where the logging entities reside above the kernel level, known as behavior monitors [2]. The behavior monitor measures the internal working of the application by logging its internal activities and extends a PCR with the hashes of these logs. The logs are sent to the challenger, who verifies them for trusted enforcement of her policies in the corresponding techniques. We change the semantics of each measurement agent of the different attestation techniques to incorporate multiple instances for remote attestation. We apply our approach to attestation techniques at different software stack levels. For this purpose, we have taken three attestation techniques at different levels of the software stack.

(1) From here onwards, we use the terms logging entity, logger and measurement agent interchangeably.

Our target architecture is likely to have multiple virtual machines running. Similarly, each VM will have multiple applications running on it. Techniques for measuring the trustworthiness of an operating system take hashes of the executables at load time [10]. To make the Integrity Measurement Architecture able to report the trustworthiness of many operating systems, we need to change the logging and reporting mechanism. Similarly, each target OS can have multiple applications running on it. To measure the behavior of each application, we need to change the working of the measurement agents, to make them able to log and report the behavior of an individual application while keeping the behavior logs of the other applications confidential. The behavior of the applications is measured by the kernel-level measurement agent, and the virtual machines are measured at the hypervisor level. Below, we describe how we apply our solution at the different levels of the software stack by modifying the existing remote attestation techniques.

3.1 Scalable Behavior Attestation

Traditional attestation techniques [10, 9, 5] rely solely on the binary hashes of executables running on the client. A chain of trust is established from the core root of trust (i.e. the TPM) to the application. However, all of these techniques measure the target application statically, without considering its inner working [3]. A recent technique, Model-based Behavioral Attestation (MBA) [2], proposes a high-level framework for measuring the internal working of the target application based on the dynamic behaviors of the different components of the application. We note that the MBA framework relies on the existence of a small monitor module in the target application as part of the Trusted Computing Base (TCB). The behavior monitor, being part of the TCB, can measure the dynamic behavior of the rest of the application in a trusted manner. During an attestation request, the monitor sends these measurements to the challenger, where they can be verified. If the behavior depicted by these measurements is compliant with the object owner's policy, the challenger can be assured that the security policy is indeed being enforced as expected. For establishing trust in the behavior monitor, the following two criteria have to be met:

1. The monitor module has to be verified for correctness using formal methods. While formal verification of large systems is a complex procedure and quickly becomes infeasible, verification of small components is easier and can yield many benefits. The monitor is a relatively small component, and its formal verification adds significantly to the confidence in the correctness of its functionality and subsequently in its reported measurements.

2. Its hash has to be attested using traditional attestation techniques such as IMA [10] or PRIMA [5].

In this paper we change the working of the behavior monitor so that it can measure the behavior of multiple instances of a usage control application, along with the mechanism for reporting the behavior of a specific application's activities to the respective challenger in a trusted manner. We also describe how the reported behavior of any specific application can be verified against the challenger's policy at the remote end.

AUP Log                  | SHA-1                                    | AppID
INIT:App-1               | 283fcdd3f44598b2cb1616c9c83029f38a1f2fe0 | App-1
s1.a:o1.a:s1.a=2:o1.a=1  | b1b2be91fa51fce791bfc76b9a00398a96e5774e | App-1
INIT:App-2               | 2d7ee564109ede8848c7073fe5722f85eaeb36ca | App-2
s3.a:o4.a:s3.a=6:o4.a=3  | 99d6e8874aaf0865f697427fbf759f4310b54346 | App-2
s3.a:o5.a:s3.a=7:o5.a=9  | 4de1a039b4afe847d9527c2aa0e958abfaa83428 | App-2
s1.a:o1.a:s1.a=3:o1.a=2  | 665d5db56b06d52c0e36eec6c749cb4574c64276 | App-1
s1.a:o1.a:s1.a=4:o1.a=3  | 7abc78d9c6689bda64202b8742bb605b4d802553 | App-1
INIT:App-3               | 79526fdbf385b5bb644ff90735873d7500b6d575 | App-3
s4.a:o2.a:s4.a=6:o2.a=3  | 6f8db9b4844f9cae0b4345275f957659d7c6ed95 | App-3
s1.a:o1.a:s1.a=4:o1.a=3  | 3c39e9194cdc1d8108b2e33f9f4e7bc30edac5c6 | App-1
INIT-App-4               | 22c2fc59132814a2df40bb86002b34a6507ecfbc | App-4
s3.a:o5.a:s3.a=8:o5.a=6  | 60c4a78ead90280a93a65ba10bc88fcd2a6b3c0e | App-2
s4.a:o2.a:s1.a=7:o2.a=4  | cb21b4e15d9c1962c34fb9f23737b10c5e114d14 | App-3
...                      | ...                                      | ...
s6.a:o2.a:s1.a=7:o2.a=4  | adf958706d4b5ea03b8f56d9fcde857a277ec5fb | App-4

Figure 2: Scalable Behavioral Attestation. (Client side: trusted application instances App 1 and App 2 send log_req to the behavior monitor, which reads policies from the PolicyDB, stores the log in securityfs, extends the TPM PCR, and handles the Att-Request/Att-Response exchange. The table shows the global attribute update log.)

In case of attestation of a usage control application, there are different behaviors to be measured for establishing the trusted state of the application, e.g., the information flow behavior, the attribute update behavior and the state transition behavior [2]. Here, we take the example of the attribute update behavior, in which all the updates are logged by the behavior monitor and extended into PCR-11. For details about single-instance attribute update and information flow attestation, we refer the reader to [7]. In order for the attribute updates occurring on the client to be considered trusted, the challenger needs to be able to verify that, for each update, there exists a ground policy [13] which requires the update performed at the client end. This is an operation similar to the verification procedure used by the Integrity Measurement Architecture [10]. Hashes of the entries in the Attribute Update Log (AUL) are concatenated in sequence to give the final value of the PCR. For each entry AUL_e in the AUL, the PCR value at AUL_e is given by:

PCR_AUL_e = SHA-1( PCR_AUL_{e-1} || SHA-1(AUL_ε) )

where AUL_ε is the portion of AUL_e that represents the operation performed for the update (cf. Figure 3, column 1). Each application instance using an object of a remote stakeholder is associated with a policy. We introduce a global attribute update log (cf. Figure 2) in which the behavior monitor stores the attribute updates of all the trusted applications. Each application event is logged with its system-wide application ID (App-id), which associates a remote stakeholder with its application. PCR-11 is extended with the hash of each AUL entry. So, one PCR accumulates the logs of all the applications running on the system.

Algorithm 1 Algorithm for make-response
Input: Nonce sent in challenge, Application ID
Output: A response for verifying a specific application instance
 1: Lock-AUL(app_id)
 2: PCR_READ()
 3: Validate_Log()
 4: Take PCR_QUOTE(Nonce)
 5: Add PCR_QUOTE to response
 6: for each AUL_e do
 7:   if AUL_e.app-id == app-id then
 8:     add-to-response(AUL_e)   // add to response without modification
 9:   else
10:     hide-log-add-to-response(AUL_e)
11:   end if
12: end for
13: unlock(AUL)
14: return response

The challenge response for a specific application should not include the logs of the other applications, as that would result in a privacy violation for the other applications. If the remote stakeholder of an object sends a challenge to remotely certify the usage of its object, the behavior monitor calls the make-response function (Algorithm 1), which retrieves the attribute update log of the corresponding application instance while hiding the logs of the other applications.

The make-response function takes an application ID and a challenger's nonce as arguments. Since it is quite possible that, during this challenge/response session, other application instances are updating the AUL and extending PCR 11 as well, a mismatch may result between the response and the PCR_QUOTE. To avoid this inconsistency, the make-response function locks the AUL (line 1). During the lock, the measurement agent queues further AUL entries and does not extend the PCR. On unlock, the AUL is updated with all the queued entries and the PCR is extended with them. To avoid unauthorized or malicious log tweaking, we validate the PCR locally: the PCR value is first read from the TPM (line 2) and then the log is validated against the PCR value (line 3). The make-response algorithm then requests the TPM for a PCR_QUOTE with the nonce sent by the remote stakeholder (line 4). The PCR_QUOTE is added to the response (line 5). As the logs are stored on an unprotected file system, this local validation detects any tampering with them. In line 7, each entry in the AUL is checked; if it belongs to the stakeholder, it is added to the response without any further processing (line 8). Otherwise, the attribute update log value (AUL_ε) and application ID are hidden and the resulting structure is added to the response (line 10). At the end, the AUL is unlocked (line 13), the queued attribute updates are applied and the PCR is extended accordingly.
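The hiding step of make-response can be sketched as follows. This is an illustrative Python fragment, not the paper's Java implementation: the tuple layout and function names are our own, and the locking and PCR_QUOTE handling of Algorithm 1 are omitted to focus on the filtering.

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

# Global AUL: one (aup_entry, entry_hash, app_id) triple per attribute update.
AUL = [
    ("INIT:App-1",              sha1(b"INIT:App-1"),              "App-1"),
    ("s1.a:o1.a:s1.a=2:o1.a=1", sha1(b"s1.a:o1.a:s1.a=2:o1.a=1"), "App-1"),
    ("INIT:App-2",              sha1(b"INIT:App-2"),              "App-2"),
    ("s3.a:o4.a:s3.a=6:o4.a=3", sha1(b"s3.a:o4.a:s3.a=6:o4.a=3"), "App-2"),
]

def make_response(app_id: str, aul):
    # Entries of other applications keep only their hash (needed to replay
    # the PCR chain); their log text and owner are masked, as in Figure 3.
    response = []
    for entry, entry_hash, owner in aul:
        if owner == app_id:
            response.append((entry, entry_hash, owner))
        else:
            response.append(("x" * len(entry), entry_hash, "App-x"))
    return response

resp = make_response("App-1", AUL)
assert all(owner in ("App-1", "App-x") for _, _, owner in resp)
# Hashes are preserved, so the final PCR value remains reproducible.
assert [h for _, h, _ in resp] == [h for _, h, _ in AUL]
```

The design point is that masking only the first and third columns leaks nothing about foreign applications beyond the number of their updates, while the untouched hash column keeps the PCR chain verifiable.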

AUP Log                  | SHA-1                                    | AppID
INIT:App-1               | 283fcdd3f44598b2cb1616c9c83029f38a1f2fe0 | App-1
s1.a:o1.a:s1.a=2:o1.a=1  | b1b2be91fa51fce791bfc76b9a00398a96e5774e | App-1
xxxxxxxxxxxxxxxxxxxxxxx  | 2d7ee564109ede8848c7073fe5722f85eaeb36ca | App-x
xxxxxxxxxxxxxxxxxxxxxxx  | 99d6e8874aaf0865f697427fbf759f4310b54346 | App-x
xxxxxxxxxxxxxxxxxxxxxxx  | 4de1a039b4afe847d9527c2aa0e958abfaa83428 | App-x
s1.a:o1.a:s1.a=3:o1.a=2  | 665d5db56b06d52c0e36eec6c749cb4574c64276 | App-1
s1.a:o1.a:s1.a=4:o1.a=3  | 7abc78d9c6689bda64202b8742bb605b4d802553 | App-1
xxxxxxxxxxxxxxxxxxxxxxx  | 79526fdbf385b5bb644ff90735873d7500b6d575 | App-x
xxxxxxxxxxxxxxxxxxxxxxx  | 6f8db9b4844f9cae0b4345275f957659d7c6ed95 | App-x
s1.a:o1.a:s1.a=4:o1.a=3  | 3c39e9194cdc1d8108b2e33f9f4e7bc30edac5c6 | App-1
xxxxxxxxxxxxxxxxxxxxxxx  | 22c2fc59132814a2df40bb86002b34a6507ecfbc | App-x
xxxxxxxxxxxxxxxxxxxxxxx  | 60c4a78ead90280a93a65ba10bc88fcd2a6b3c0e | App-x
xxxxxxxxxxxxxxxxxxxxxxx  | cb21b4e15d9c1962c34fb9f23737b10c5e114d14 | App-x
...                      | ...                                      | ...
xxxxxxxxxxxxxxxxxxxxxxx  | adf958706d4b5ea03b8f56d9fcde857a277ec5fb | App-x

Figure 3: Verification of logs for a single application. (The view reported to the stakeholder of App-1; the entries of the other applications are masked, but their hashes are retained.)

Algorithm 2 Algorithm for response verification
Input: Challenge response from the target application.
Output: A boolean value which is true only if verification against the benchmark is successful.
 1: if !(verifyCertificate and verifyAIK and verifyNonce) then
 2:   return false
 3: else
 4:   get the AUL from the response
 5:   for each AUL_e do
 6:     if app-id == AUL_e.app-id then
 7:       PCR_i = SHA-1(PCR_{i-1} || SHA-1(AUL_ε))
 8:       validate AUL_e
 9:     else
10:       PCR_i = SHA-1(PCR_{i-1} || extract_hash(AUL_e))
11:     end if
12:   end for
13:   PCR_QUOTE_cal = make-QUOTE(PCR_i, Nonce)
14:   if !(PCR_QUOTE_cal == PCR_QUOTE_rec) then
15:     return false
16:   end if
17: end if
18: return true

During the verification of the attribute update behavior (cf. Algorithm 2), first of all, the signature performed by the client's TPM on the PCR value is validated (line 1). This ensures that the PCR values can be trusted to have been signed by a genuine TPM and not by software masquerading as a TPM. Similarly, the Attestation Identity Key [11] and the nonce are also verified. Failure of any of these conditions declares the response incorrect, which means that either the response has been tampered with or the signature was not performed by a genuine TPM. If all three conditions in step one return true, the response is further verified for correct policy enforcement. The log is retrieved from the response and each entry is accumulated to form the final PCR value. During this accumulation, if an entry belongs to the challenger, then the hash of its value is recalculated and is also taken as part of the further validation of this update (line 8). The next step is to verify the AUL against the policy. The validation mechanism is specific to each attestation technique. For example, in the case of IMA, the validation consists of matching the hash of an executable against the one stored in the validation database, which confirms that the executable is a known good one. In the case of behavioral verification, these logs are gathered to form an Attribute Update Graph, which is further verified for trusted policy enforcement [7]. PCR_QUOTE_cal (line 13) is the calculated PCR_QUOTE and PCR_QUOTE_rec is the received one. If the two match, it means that the measurement done at the client end is correct (line 14).
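The replay step of this verification can be sketched as follows (illustrative Python; the entry layout and names are our assumptions): the verifier recomputes the hashes of its own entries from the log text, uses the reported hash as-is for masked entries, and compares the accumulated value against the quoted PCR.

```python
import hashlib

def sha1(d: bytes) -> bytes:
    return hashlib.sha1(d).digest()

def extend(pcr: bytes, h: bytes) -> bytes:
    return sha1(pcr + h)

# A response as produced by make-response: the challenger (App-1) sees its
# own entries in the clear; foreign entries are masked but keep their hash.
response = [
    ("INIT:App-1",              sha1(b"INIT:App-1"),              "App-1"),
    ("x" * 23,                  sha1(b"INIT:App-2"),              "App-x"),
    ("s1.a:o1.a:s1.a=3:o1.a=2", sha1(b"s1.a:o1.a:s1.a=3:o1.a=2"), "App-1"),
]

def verify(app_id: str, response, quoted_pcr: bytes) -> bool:
    pcr = b"\x00" * 20
    for entry, entry_hash, owner in response:
        if owner == app_id:
            # Own entry: recompute the hash from the log text (cf. line 7).
            if sha1(entry.encode()) != entry_hash:
                return False
        # Masked entries contribute only their reported hash (cf. line 10).
        pcr = extend(pcr, entry_hash)
    return pcr == quoted_pcr

# The quoted PCR is what the client's TPM would report after the extends.
expected = b"\x00" * 20
for _, h, _ in response:
    expected = extend(expected, h)
assert verify("App-1", response, expected)
```

Any tampering with a visible entry's text, or with any hash in the chain, changes the recomputed value and makes the comparison with the quoted PCR fail.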

3.2 Scalable Program Execution Attestation

Remote attestation of program execution [4] is one of the few techniques to dynamically measure the behavior of an application on a remote platform. It assesses the benign behavior of the remotely executing program by the sequence in which the program makes system calls. This technique assumes that the source code has already been analyzed. Based on the analysis, a SysTrap table — a data structure maintained in kernel space to record the system calls made by an application — is built as a benchmark on the challenger's side.

Program execution remote attestation [4] does not explicitly specify how it makes use of the PCR to securely store the sequence of system calls. However, for trusted execution, and to ensure the validity of the stored information against the benchmark at the remote challenger's end, this technique would need to extend a PCR with the system call information stored in the SysTrap table. Doing so will soon result in a scarcity of PCRs for the measurement of multiple programs executing on a single system.

As before, if multiple stakeholders want to verify the remote execution of their programs, and each program uses a different PCR, this will lead to a scarcity of PCRs. If, however, they use the same PCR, it will require the change in the measurement and reporting mechanism presented in this paper to preserve the privacy of the different stakeholders. The approach presented here can scale the program execution remote attestation technique to measure the dynamic behavior of multiple programs in execution at a time. We assume that two different programs are in execution and use the same PCR, which is extended with the hashes of the SysTrap table. The make-response function works the same way as described in Section 3.1. The system calls are stored in a system-wide log and the PCR is extended with their hashes. To verify the secure transmission of the SysTrap logs, the challenger uses the same response verification algorithm. The validation mechanism, as described in [4], is that for each system call an analyzing procedure is called. This procedure checks whether a record corresponding to this particular system call, its caller and its callee is present in the SysTrap table. Thus, the challenger can detect any discrepancy in the application running at the remote end.
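The benchmark check described above can be sketched as follows. This is an illustrative Python fragment: the record layout and the example caller/callee names are invented for illustration, since [4] maintains the SysTrap table in kernel space; only the membership test is of interest here.

```python
# SysTrap-style benchmark built from source analysis: the set of
# (caller, callee, syscall) records the program is allowed to produce.
benchmark = {
    ("main", "read_config", "open"),
    ("main", "read_config", "read"),
    ("main", "send_report", "write"),
}

def trace_is_benign(trace) -> bool:
    # Every observed record must appear in the benchmark table;
    # any unexpected call (e.g. an injected execve) is flagged.
    return all(record in benchmark for record in trace)

good = [("main", "read_config", "open"), ("main", "read_config", "read")]
bad  = good + [("main", "spawn_shell", "execve")]

assert trace_is_benign(good)
assert not trace_is_benign(bad)
```

In the scaled setting, each logged record additionally carries an App-id, so the same per-record membership test runs over only the challenger's unmasked portion of the shared log.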

3.3 Scalable Integrity Measurement Architecture

The Integrity Measurement Architecture is the basic, publicly available remote attestation technique. This approach provides evidence of the system's integrity, starting from the BIOS up to the current running state, with the help of a hardware chip called the Trusted Platform Module. With the increasing use of virtualized platforms, there is a need to provide the attestation feature to all the VMs running on a Virtual Machine Monitor. In IMA, the kernel works as the logging entity. This is the first working remote attestation technique, proposed by IBM [10]. In this technique, the hash of each executable, module and library is taken before loading, and PCR 10 is extended with it. To scale IMA to multiple operating systems, we delegate the logging mechanism to the h-MA, a measurement agent residing in domain 0 of the hypervisor. The h-MA performs the logging and PCR extend operations on behalf of the guest kernel. It is also responsible for maintaining the entries in a global measurement log, where each hash entry is associated with its operating system ID.

To remotely certify an individual OS, the reporting mechanism is modified to hide the entries belonging to the other operating systems running on the same hardware platform. Since the hashes of the executables stored in the logs created by IMA are not randomized, they may allow a verifier to recognize the application that led to the hash. To circumvent this problem, we can use an explicit randomizer to act as a salt for the hash. The randomizer ensures that the applications running in another domain cannot be deduced from the hash reported to a challenger, while still ensuring that the challenger can validate the hashes loaded on her own VM. For this purpose, the validation mechanism of IMA is slightly modified. The known good hash of the executable (stored in the validation database) is appended with the hash of the randomizer, and SHA-1 is computed over the resulting value. This value is then compared with the value reported in the log by the client. If the two values match, it can be concluded that the application was of a known good hash.
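The salted validation just described can be sketched as follows (illustrative Python; the randomizer value and the executable contents are placeholders, not values from the paper):

```python
import hashlib

def sha1(d: bytes) -> bytes:
    return hashlib.sha1(d).digest()

# Known-good hash of an executable, as stored in the validation database.
known_good = sha1(b"placeholder executable contents")

# Client side: the logged value is salted with a per-VM randomizer so that
# other domains cannot recognize the executable from the reported hash.
randomizer = b"per-vm-randomizer"
reported = sha1(known_good + sha1(randomizer))

# Verifier side: the challenger, who knows her own VM's randomizer, repeats
# the computation over the database entry and compares the two values.
recomputed = sha1(known_good + sha1(randomizer))
assert recomputed == reported

# A different salt (e.g. another domain's) yields an unlinkable value.
assert sha1(known_good + sha1(b"other-domain-randomizer")) != reported
```

Note that the validation database itself is unchanged; only the comparison step gains the extra salting, so a challenger without the randomizer learns nothing from the reported hashes.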


Figure 4: SML Views at dom0 (left) and domU (right)

4 Implementation

Our proposed architecture, as discussed in Section 3, is applicable at three levels of the software stack. For the demonstration of the approach, we have created a proof-of-concept implementation of the proposed architecture at each of these levels. We have created an application called the Secure Document Reader (SDR) that is able to enforce usage control policies. The behavior monitor is able to measure the dynamic behavior of the application during the usage of the protected resources and store this measurement in the trusted logs. The application is written in Java and communicates with the TPM using the Trusted Java (jTSS) [12] libraries, running on top of the Linux operating system with IMA enabled (kernel version 2.6.30) on a Dell Optiplex 760 desktop system. The application uses PCR-11 and PCR-12 for storing the attribute update and information flow logs, respectively. We execute two instances of the application, each allowing the usage of different protected objects representing data originating from two different service providers. Both instances of the application are independent of each other and expect to be able to use PCR-11 and PCR-12 exclusively. The behavior monitor implements the approach described in this paper and serializes the access to the PCRs during behavior measurement and recording. Meanwhile, the remote owner can remotely certify that her policy is being enforced by the target application. During attestation, the behavior monitor anonymizes the trusted log depending on the service provider. The value of the PCR is reported using the PCR quote operation.

4.1 Hypervisor-level Implementation

For demonstrating our approach at the lowest level of abstraction, we have chosen the XEN hypervisor [3]. XEN is a high-performance hypervisor/virtual machine monitor (VMM) provided under an open source license.

In our implementation, we have used the latest xen-unstable source (i.e. xen-4.0.0-rc3). The machine used for our testbed was a Dell Optiplex 760. We created a dom0 Linux kernel (v2.6.31.6) with the XEN options enabled. The domU kernels were two instances of the 2.6.30.2 Linux kernel recompiled with the IMA options enabled. In the kernel source, we implemented two hypercalls, extend_sml(char*, byte*) and read_sml(), which were invoked from the storing function of IMA, i.e. ima_add_template_entry() in the ima_queue.c source file. This function is modified to invoke the hypercall extend_sml(). The usual mechanism of extending the PCR is then carried out; in IMA, this function is ima_pcr_extend in the same file. The TPM function can then implement the normal PCR extend operation using the TPM front-end and back-end drivers. Figure 4 shows the complete SML visible at the dom0 level.

When an OS in domU wishes to retrieve the SML, it invokes the read_sml() hypercall. The mechanism of XEN is such that a hypercall cannot return a value. Instead, the hypervisor sends the return data to the OS using an asynchronous call with the event mechanism. After the hypercall, the VMM returns the anonymized SML (in accordance with the mechanism described in Section 3.3) to the domU kernel. In this way, the domU kernel never gets to see the entries extended by the other domU kernels. Figure 4 shows the screen captures of SML views in both dom0 (left) and two domU VMs (right) running on top of the hypervisor.

Figure 5: Scalable Remote Attestation Performance. (a) Scalable Behavioral Attestation Performance: attestation time (ms) against application uptime for App1, App2, App1 + App2 (no privacy), and App1 + App2 (privacy-preserved). (b) Scalable IMA Performance: SML length and attestation time (ms) against system uptime for Dom0, DomU1, and DomU2.
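The per-domain filtering performed by the VMM can be modeled in userspace. The following Python sketch mimics extend_sml() and read_sml(); the real versions are in-kernel hypercalls, and the tagging of entries by calling domain shown here is an assumption about the hypervisor's bookkeeping, not the actual XEN code:

```python
import hashlib

SML = []  # hypervisor-held measurement list: (domain_id, entry_hash)

def extend_sml(dom_id: int, template: bytes) -> None:
    # Store the measurement on behalf of the calling domain, as the
    # modified ima_add_template_entry() would do via the hypercall.
    SML.append((dom_id, hashlib.sha1(template).digest()))

def read_sml(dom_id: int):
    # Return the caller's own entries in the clear; entries extended by
    # other domUs are reduced to bare hashes, so their origin cannot be
    # inspected by the requesting kernel.
    return [(d, h) if d == dom_id else (None, h) for (d, h) in SML]

extend_sml(1, b"domU1: /sbin/init")
extend_sml(2, b"domU2: /usr/bin/sshd")
view = read_sml(1)
assert view[0][0] == 1     # own entry visible with its domain tag
assert view[1][0] is None  # other domain's ownership hidden
```

Because the bare hashes are still present, each domU can replay the extend chain against the shared PCR, while the salting mechanism of Section 3.3 keeps the foreign entries unrecognizable.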

5 Performance Evaluation

An added benefit of the privacy-preserving attestation is its performance. We recognize that scalability adds overhead, because of the increase in the number of applications in the case of application-level attestation and of virtual machines in the case of scalable IMA. However, we believe that the added feature of scalability justifies this overhead. We measured the performance of attestation in three different ways under the same execution environment. First, we performed attestation of the individual applications using normal behavioral attestation. After deploying the scalable approach, we were able to attest both applications. The results of the performance evaluation of the scalable and privacy-preserving attestation (cf. Figure 5(a)) show that it takes less time than the scalable approach without privacy preservation. We do not compare the performance of individual-application attestation with the scalable approach, as we have already argued that individual attestations are not viable due to the scarcity of PCRs.

Figure 5(b) shows a comparison of the different VMs on the basis of SML length and attestation time against system uptime. Note that the attestation time for Dom0 is relatively constant and is not affected by the SML length of the other domains. This is because the privacy-preservation algorithm ensures that the entries in the SML corresponding to other domains are not validated.

6 Conclusion and Future Work

Older remote attestation techniques measure the trustworthiness of an application only by its static hash, which is not enough to depict its behavior at runtime. Thus, recently proposed attestation techniques try to capture the dynamic behavior of an application. These techniques use arbitrary data structures to capture the dynamic behavior. To remotely verify the correctness and validity of these data structures, they need to be stored in a PCR so that remote parties can verify that the values were not sent by a masquerading TPM. These attestation techniques make use of a PCR to capture the behavior of an application, which results in a scarcity of PCRs for multiple applications. This makes scalability an important limitation of any attestation technique. We have proposed a method for scaling different attestation techniques available at different levels of the software stack. We have implemented this technique to dynamically measure the behavior of multiple applications simultaneously running on a system. We have shown the applicability of our approach at three levels of the software stack – virtual machine, operating system, and application level – by modifying three existing approaches to remote attestation. Extending our proposed architecture to other remote attestation techniques, to show the complete applicability of the approach, remains a future direction in this line of research.

References

[1] Trusted Computing Group. http://www.trustedcomputinggroup.org/.

[2] M. Alam, X. Zhang, M. Nauman, T. Ali, and J.-P. Seifert. Model-based Behavioral Attestation. In SACMAT '08: Proceedings of the Thirteenth ACM Symposium on Access Control Models and Technologies, New York, NY, USA, 2008. ACM Press.

[3] Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, and Andrew Warfield. Xen and the Art of Virtualization. In SOSP '03: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, pages 164–177, New York, NY, USA, 2003. ACM.

[4] Liang Gu, Xuhua Ding, Robert Deng, Bing Xie, and Hong Mei. Remote Attestation on Program Execution. In STC '08: Proceedings of the 2008 ACM Workshop on Scalable Trusted Computing, New York, NY, USA, 2008. ACM.

[5] Trent Jaeger, Reiner Sailer, and Umesh Shankar. PRIMA: Policy-Reduced Integrity Measurement Architecture. In SACMAT '06: Proceedings of the Eleventh ACM Symposium on Access Control Models and Technologies, pages 19–28, New York, NY, USA, 2006. ACM Press.

[6] Peter A. Loscocco, Perry W. Wilson, J. Aaron Pendergrass, and C. Durward McDonell. Linux Kernel Integrity Measurement Using Contextual Inspection. In STC '07: Proceedings of the 2007 ACM Workshop on Scalable Trusted Computing, pages 21–29, New York, NY, USA, 2007. ACM.

[7] M. Nauman, M. Alam, T. Ali, and X. Zhang. Remote Attestation of Attribute Updates and Information Flows in a UCON System. In Trust '09: Proceedings of the Second International Conference on Technical and Socio-Economic Aspects of Trusted Computing. Springer, 2009.

[8] Jaehong Park and Ravi Sandhu. Towards Usage Control Models: Beyond Traditional Access Control. In SACMAT '02: Proceedings of the Seventh ACM Symposium on Access Control Models and Technologies, pages 57–64, New York, NY, USA, 2002. ACM Press.

[9] Ahmad-Reza Sadeghi and Christian Stuble. Property-based Attestation for Computing Platforms: Caring about Properties, not Mechanisms. In NSPW '04: Proceedings of the 2004 Workshop on New Security Paradigms, pages 67–77, New York, NY, USA, 2004. ACM Press.

[10] Reiner Sailer, Xiaolan Zhang, Trent Jaeger, and Leendert van Doorn. Design and Implementation of a TCG-based Integrity Measurement Architecture. In SSYM '04: Proceedings of the 13th USENIX Security Symposium, Berkeley, CA, USA, 2004. USENIX Association.

[11] Trusted Computing Group. TCG Specification Architecture Overview v1.2, pages 11–12. Technical report, April 2004.

[12] Trusted Computing for the Java(tm) Platform. http://trustedjava.sourceforge.net/.

[13] Xinwen Zhang, Ravi Sandhu, and Francesco Parisi-Presicce. Safety Analysis of Usage Control Authorization Models. In ASIACCS '06: Proceedings of the 2006 ACM Symposium on Information, Computer and Communications Security, pages 243–254, New York, NY, USA, 2006. ACM.
