
CONCURRENCY AND COMPUTATION: PRACTICE AND EXPERIENCE
Concurrency Computat.: Pract. Exper. 2010; 22:1893–1910
Published online 24 June 2010 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/cpe.1614

Building dynamic and transparent integrity measurement and protection for virtualized platform in cloud computing

Ge Cheng1, Hai Jin1,∗,†, Deqing Zou1 and Xinwen Zhang2

1 Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, People's Republic of China
2 Computer Science Lab, Samsung Information Systems America, San Jose, CA, U.S.A.

SUMMARY

In the cloud computing infrastructure, there is an increasing demand to maintain and verify the integrity of software stacks running on remote systems and to protect users' sensitive data. However, because the software stacks running on cloud platforms are usually provided and maintained by different authorities (or providers) who potentially do not trust each other, the problem of measuring and protecting runtime system integrity becomes very challenging and has not been well addressed yet. In this paper, we present an integrity measurement and protection architecture for software stacks running on a guest operating system (OS) of a virtualized platform in a cloud environment. Our solution does not change the guest OS, and thus is transparent to the OS authority. Furthermore, our architecture ensures that sensitive information of users is protected once the integrity of software stacks is broken during runtime. We implement our solution on Xen, and present a simple prototype based on Nimbus. We demonstrate the capability of dynamically detecting integrity changes of programs in cloud computing, and our evaluation results show that the solution is effective for integrity protection with acceptable performance overhead. Copyright © 2010 John Wiley & Sons, Ltd.

Received 9 December 2009; Revised 17 April 2010; Accepted 18 April 2010

KEY WORDS: integrity measurement; integrity protection; trusted computing; cloud computing; authority

∗Correspondence to: Hai Jin, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, People's Republic of China.

†E-mail: [email protected]

Contract/grant sponsor: National Basic Research Program of China; contract/grant number: 2007CB310900
Contract/grant sponsor: National Natural Science Foundation of China; contract/grant number: 60973038


1. INTRODUCTION

Cloud computing provides flexible outsourcing of computation and storage for enterprises and organizations, and enables resource-on-demand and pay-as-you-go computing models. While this model, exemplified by Amazon's Elastic Compute Cloud (EC2) [1], Microsoft's Azure Services Platform [2], and Rackspace's Mosso [3], provides a number of advantages, such as economies of scale, dynamic provisioning, and low capital expenditures, it also brings new security risks.

Most of these risks arise from new trust relationships in cloud infrastructures. For example, in Berkeley's view [4], there are normally three types of authorities in cloud computing: (1) cloud providers—the owners of hardware and system software in the data centers; (2) software-as-a-service (SaaS) providers—service providers that rent the resources of cloud providers and provide services to the public; (3) SaaS users—the users who request services from SaaS providers. Sometimes a SaaS provider can also be a SaaS user, or a cloud provider can also be a SaaS provider. As user-sensitive data and applications are often transferred to the cloud, how to guarantee the security of such user information is a critical issue. A major challenge is how a cloud user can trust that the cloud environment really exists and that the system has not been compromised before he/she uses the cloud service/SaaS service, and how he/she can believe that his/her sensitive data will be carefully protected once the environment is compromised.

By trust here we mean that a component is authentic: the integrity of its code and data is protected, it can only be updated by its authority, and thus its behaviour is predictable. In collaborative distributed systems, it is mandatory for a remote platform to provide its integrity status in a trustworthy way to others in order to detect and prevent applications from being deployed on untrusted or even hostile platforms.

The integrity of such platforms can be threatened not only by malicious attacks, such as defects, Trojan horses, and viruses, but also by the update of components from multiple independent authorities. While the first type of threat is relatively easy to eliminate with widely deployed anti-virus software, the latter is very challenging. Specifically, it mandates that after a component is updated, the authorities of other components can still trust the updated code and data, and consequently the behavior of the updated component. From another point of view, a computing system in cloud computing environments should be trusted by a remote client or user such that, for example, its declared quality of service is preserved, or users' valuable or privacy-sensitive data on clouds are protected from other entities including cloud service providers.

For this purpose it is very important to verify whether the platform is a known-good implementation and is running with a known-good configuration. The Trusted Computing Group (TCG) [5] has specified a small and low-cost Trusted Platform Module (TPM) hardware component to enhance the security of desktop and portable computers.

Various mechanisms have been developed to use such hardware to generate a proof of a system's integrity, such as remote attestation and authenticated boot [6,7]. Other approaches have been proposed to extend integrity measurement and verification up to the application level [8–10]. These approaches provide a way to start with a small trusted computing base (TCB), including the TPM and operating system (OS) kernel, and try to build a proof for the whole system by measuring each piece of software according to the sequence of platform booting and application loading.

With the development of virtualization technology, how to build a trusted platform with a virtual machine monitor (VMM) and TPM is a new research focus. There are some popular approaches.


One is to virtualize the TPM: each virtual machine (VM) that needs TPM functionality can access its own virtualized TPM (vTPM) [11]. The other is to take the VMM as the TCB and measure the trusted virtual machine at the partition block level from the VMM [12,13].

Unfortunately, previously proposed TCG-like integrity measurement and attestation mechanisms have some shortcomings that make them impractical for virtualized platforms in cloud computing. From our point of view, the most critical reason is that a typical cloud platform is a multi-authority computing environment. Each authority can have a different set of trusted software components, thus the traditional TCG-based integrity measurement and attestation mechanism cannot provide a consistent conclusion about the trust status of the platform, as the known-good integrity values of software components differ between authorities. In contrast, the target platforms of previous work are PCs and servers, which are usually owned and controlled by a single authority such as an enterprise.

Second, and related to the first, by extending integrity measurement and verification to the application level, existing approaches have a large TCB including the whole OS [8,9], and they are not transparent to the OS and require modifications of the OS kernel. This introduces two critical issues for cloud computing: as a general-purpose OS is usually very large and complex, these approaches are frequently error-prone and vulnerable; and typically an OS is maintained and controlled by a different authority from that of application-level software components. For example, in many cloud infrastructures software components from SaaS providers usually run at the middleware and/or application level, whereas the OS and kernel are properties of cloud providers or other service providers. These make traditional approaches not viable in cloud environments.

Third, traditional approaches lack the ability to protect sensitive information when a system's integrity is broken during runtime, which is only featured by some expensive trusted hardware such as the IBM 4758 secure coprocessor [14]. However, for cloud computing, it is critical for SaaS providers and cloud users to have the assurance that their sensitive data are protected once the trustworthiness of a cloud platform is detected to be compromised.

In our work we leverage the advantages of virtualization technology to address the above problems. We provide a dynamic measurement and integrity protection architecture in which an upper authority is permitted to provide its own protection strategies for its sensitive data. For example, in cloud computing, for a SaaS user, the configuration of the SaaS, including the operating system and other necessary middleware, may be provided by the SaaS provider; we should still permit the SaaS user to protect his/her sensitive data. We ensure that secrets belonging to the upper authority are only accessed in an environment that the authority trusts, by monitoring the changes of software stacks in VMs and checking the integrity of the corresponding software according to the strategies defined by the upper authority. The monitoring and integrity checking points are hooked into the VMM to control the VM's accesses to the disk and memory. Our solution is transparent to the platform OS as it is implemented in the VMM layer.

In this paper, we present a formal security foundation for integrity requirements in multi-authority computing environments based on a trust dependency concept. We then illustrate our implementation, which consists of modules for integrity measurement, monitoring, and access control in Xen. We leverage hardware-enhanced virtualization extensions to offer fast system call tracing and strong memory context protection.

The remainder of this paper is organized as follows: In Section 2, we describe the threat model. In Section 3, we formally analyze the integrity protection requirements for the authorities of upper components on a platform, which builds the theoretical foundation of our work. We describe our implementation in the Xen hypervisor in Section 4. Section 5 analyzes the effectiveness and runtime performance of our implementation. In Section 6, we examine some issues with this architecture. Section 7 presents the related work and Section 8 concludes this paper.

2. THREAT MODEL

From our point of view, different cloud providers offer services at various layers of the software stack. At the lower layers, Infrastructure-as-a-Service (IaaS) providers such as Amazon, Flexiscale, and GoGrid allow their customers to access entire VMs hosted by the provider. A customer and user of the system, which we call the cloud user, is responsible for providing the entire software stack running inside a VM. The cloud user may offer SaaS to the public or only handle business for themselves. We use the term private cloud user to refer to a cloud user whose services are not made available to the general public; such users have received less attention than users who offer public services. The cloud provider can also provide higher-layer services; for example, Google Apps offers complete online applications that can be directly executed by their users. In this paper, we focus on the cloud user who provides SaaS to the public.

As illustrated in Figure 1, in cloud computing, authorities of different software usually do not trust each other. Nevertheless, the software components from different authorities may affect each other's security. For example, the ability to perform 'hot-updates' potentially gives an OS vendor a backdoor into applications' secrets. What if application developers do not necessarily trust the service vendor to be honest, or to release bug-free updates? What if a software component wants to reserve the right to inspect and approve the updates of lower-level software components?

Existing security technologies can be used in cloud computing. However, the protection provided by those technologies is not symmetrical for the different software stacks in cloud computing. For example, traditional intrusion detection and virus protection technologies can be used to prevent a malicious cloud user or SaaS user from attacking the cloud platform, and the cloud provider can also provide isolated environments for different SaaS providers through virtualization (if the cloud architecture is based on virtualization technology). However, from the upper authorities' point of view, there is no mechanism to ensure that the underlying authorities will not harm their interests by abusing the advantages they hold in the underlying software stacks, which is the core issue of this paper.

The cloud provider is certainly in a position to violate customer confidentiality or integrity. However, this situation may be alleviated by leveraging TCG-style trusted boot to extend the chain of trust to the VMM. In our work, we consider the provider and its infrastructure to be trustworthy, which means that we do not consider attacks that rely upon subverting a cloud's administrative functions through insider abuse or vulnerabilities in the cloud management systems (e.g. virtual machine monitors). In our threat model, the victims are SaaS users with confidentiality requirements who request services from SaaS providers. The adversaries are malicious SaaS providers and other attackers outside or inside the cloud who can leverage the vulnerabilities of software components running on the system to compromise the integrity of the execution environment of SaaS users.


Figure 1. Trust relationship in cloud computing. (The figure shows SaaS Users A and B, SaaS Providers/Cloud Users A and B offering Web Applications, and the Cloud Provider offering Utility Computing; the edges between them are labelled 'Trust' or 'No Trust'.)

3. FORMAL FOUNDATIONS

In this section, we analyze how to protect the sensitive data of upper authorities according to their integrity protection requirements, considering the existence of multiple independent authorities on a single platform. We start with a simple and abstract model for program execution and then present the basic concepts and principles related to the trusted state. Our analysis is influenced by the outgoing authentication problem [6].

3.1. Program dependency

Assumption: A computing environment has exactly one memory place to hold software, and the memory cannot be tampered with from outside. The computing environment clears all memory state when it is restarted. We denote an environment that satisfies the above assumption as Ω_CE.

Authority: An authority can authorize updating or loading a program p in Ω_CE. As aforementioned, the computing environments we face force us to partition the code space in Ω_CE into three layers: the OS layer, the service provider layer, and the application layer. Software in different layers within Ω_CE is typically controlled by different, mutually untrusted authorities. Thus we need to tolerate malicious authorities, including those of the OS and bootstrap. Under this scenario we consider a system state as the collection of the contents of memory and CPU registers. The instructions and data can be affected by formerly loaded programs.

Entity: A program p, including code and data, is loaded and executed inside the computing environment Ω_CE at a particular moment.


A system state is not determined by one entity; on the contrary, it is determined by the entities that come from all the software levels. For this reason we need to explore what happens to a particular platform: not only long-term action sequences, but also specific instants along those sequences.

History and Run: A history is a finite sequence of computations for a particular computing environment. A run is an unbounded sequence of computations for a particular computing environment. H ← R means history H is a prefix of run R.

When a program p is loaded in run R, the system state is changed, and R becomes R′. Thus we can say that an entity e corresponds to a series of procedures that are loaded into the memory in a particular sequence. We refer to S as the system state, and at a particular moment the system state S_R can be denoted by the set of all entities running in Ω_CE, that is, S_R = {e1, e2, ..., en}. We note that p belongs to an authority, and the authority might authorize the computing environment Ω_CE to load p and thereby change the state.

The system state is determined by the entity set in run R, and the entities interact with each other. However, the relationship between them is complex, and some entities have the ability to read or write other entities.

Dependency Function: Let E be the set of all entities in Ω_CE. For ∀ e1, e2 ∈ E, if e1 can read/write the data of e2, then e2 ∈ Dep_data(e1); if e1 can write/control the code of e2, then e2 ∈ Dep_code(e1), where Dep denotes the union of Dep_data and Dep_code on E.

Naturally, we have the following deductions:

e1 ∈ Dep(e1)   (reflexive)

if e2 ∈ Dep(e1), then Dep(e2) ⊆ Dep(e1)   (transitive)

The relation Dep depends on the run R. Let →_R be the transitive closure of Dep in run R. For an entity e in run R, we define Dep_R(e) = { f : e →_R f }. Dep_R(e) lists all the entities in Ω_CE that can subvert the correct operation of entity e in run R. As mentioned above, an entity's actions can possibly be damaged by other entities, so we need some notion of trust. Usually, an authority Au has some idea of which applications it trusts and which ones it does not trust.

Trustset: For an authority Au, Trustset(Au) denotes the set of entities that Au trusts.
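To make the dependency closure concrete, the following C sketch (ours, not taken from the paper; the entity indexing, the MAX_ENTITIES bound, and the boolean adjacency matrix are illustrative assumptions) computes the reflexive-transitive closure of Dep and then applies the trust condition Dep_R(e) ⊆ Trustset(Au):

/* Minimal sketch of the Dep_R(e) closure check (illustrative only).
 * Entities are numbered 0..n-1; dep[i][j] != 0 means entity j is in Dep(entity i). */
#include <stdbool.h>
#include <stddef.h>

#define MAX_ENTITIES 64

/* Reflexive-transitive closure of Dep (Warshall-style propagation). */
static void dep_closure(bool dep[MAX_ENTITIES][MAX_ENTITIES], size_t n)
{
    for (size_t i = 0; i < n; i++)
        dep[i][i] = true;                      /* e is in Dep(e)                */
    for (size_t k = 0; k < n; k++)
        for (size_t i = 0; i < n; i++)
            if (dep[i][k])
                for (size_t j = 0; j < n; j++)
                    if (dep[k][j])
                        dep[i][j] = true;      /* Dep(e_k) is folded into Dep(e_i) */
}

/* Run R is trusted by Au for entity e only if Dep_R(e) is a subset of Trustset(Au). */
static bool run_trusted(bool dep[MAX_ENTITIES][MAX_ENTITIES],
                        const bool trustset[MAX_ENTITIES],
                        size_t n, size_t e)
{
    for (size_t j = 0; j < n; j++)
        if (dep[e][j] && !trustset[j])
            return false;
    return true;
}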

3.2. Integrity protection requirement

We use C to denote a system configuration, which consists of the relevant properties, including a vector of conditions for each authority: its trust set, authority status, code contents, and protected data. A system state consists of a program running sequence and the programs permitted to run in Ω_CE. We denote this by Ω_ENC.

Suppose Au is an application authority in a valid configuration C. For Ω_ENC0 ∈ Trustset(Au), Ω_ENC0 denotes a state in which the protected data of authority Au has its initial contents, and no program in Ω_ENC0 has written to the protected data since these contents were initialized. Ω_ENCi denotes the updated system state when program pi is loaded.

Let p0, ..., pi, ... be a valid program loading sequence that has been loaded into a system in configuration C. If pk (0 ≤ k ≤ i) is the first program in this sequence such that Ω_ENCk ∉ Trustset(Au), then the contents of the protected data are destroyed or the system returns to Ω_ENCk−1, as illustrated in Figure 2. If this is satisfied, only programs loaded before Ω_ENCk can directly access the protected data. In particular, Au may stop trusting a system state when the transition from Ω_ENCk−1 to Ω_ENCk includes the loading of any code, in any underlying layer, which Au does not trust.

Figure 2. An authority stops trusting a computing environment when a loaded program does not belong to its trust set.
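Restated compactly in our own notation (this is a rendering of the requirement above, not a formula from the paper; the access predicate is informal):

\[
\Big(\Omega_{ENC_k} \notin \mathrm{Trustset}(Au)\ \wedge\ \forall\, j<k:\ \Omega_{ENC_j} \in \mathrm{Trustset}(Au)\Big)
\;\Longrightarrow\;
\text{the protected data of } Au \text{ is destroyed, or the system rolls back to } \Omega_{ENC_{k-1}},
\]

so that no program loaded at or after p_k can ever read the protected data of Au.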

3.3. Trust validation

As an entity e interacts with other entities in the same Ω_CE, which depends on Dep_R(e), a desired integrity monitoring mechanism in the VMM should determine the trust of Dep_R(e).

The question is how to identify an entity and how to determine changes to the entity from the virtualization layer. It is very difficult to achieve this by monitoring changes to the whole memory. An alternative method, called load-time integrity measurement, identifies an entity by checking the hash value of the corresponding program when the program is loaded into memory [7,8]. In this paper, we assume that code measurements are sufficient to describe the changes of an entity. Thus, self-changing code can be evaluated because the self-changing ability of code is reflected in the measurement and can be taken into account in verification.

Trust State: For an entity e, run R is trusted by authority Au only if Dep_R(e) ⊆ Trustset(Au).

In order to determine whether Dep_R(e) ⊆ Trustset(Au) after p is loaded into R and R transits into R′, the primary function of integrity monitoring in the VMM is to trace the entity. We note that the entity is determined by the sequence of loaded programs. Let Trace(p, R, C) denote the collection of loaded-entity hash values provided by the VMM when the authority Au loads p in run R.

Validating trust state: Validating a trust state is a mechanism that determines whether Ω_CE is trusted by authority Au when p is loaded in run R, according to Trustset(Au) and the collection of loaded-entity credentials Trace(p, R, C).

The algorithm to validate a trust state is determined by the collection of loaded-entity credentials Trace(p, R, C) and Trustset(Au). Naturally, Trustset(Au) is associated with the application requirements of authority Au; therefore the entities trusted by Au vary with the different selections made by Au.

Validation by the VMM is reliable and complete if and only if, for any entity e, Trustset(Au), and any history H and run R where H ← R, the following holds:

validate(Au, Trace(p, R, C)) ⇔ Dep_R(e) ⊆ Trustset(Au)

The above expression shows that if a VMM is to meet authority Au's security requirement, it has to trace the program loading procedure and validate that each loaded program belongs to Au's trust set.
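A minimal sketch of such a validation routine, assuming the trace is a list of (pathname, hash) measurements and that it covers every entity in Dep_R(e); the structures and helper names below are ours, not the paper's:

/* validate(Au, Trace(p, R, C)) sketch: validation succeeds only if every measured
 * entity appears in the authority's trust set with a matching hash. */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define HASH_LEN 20                    /* 160-bit hash, as in Section 4.4 */

struct measurement {
    char          path[256];           /* absolute pathname of the entity  */
    unsigned char hash[HASH_LEN];      /* hash recorded at load time       */
};

struct trustset {
    const struct measurement *entries;
    size_t                    count;
};

static bool in_trustset(const struct trustset *ts, const struct measurement *m)
{
    for (size_t i = 0; i < ts->count; i++)
        if (strcmp(ts->entries[i].path, m->path) == 0 &&
            memcmp(ts->entries[i].hash, m->hash, HASH_LEN) == 0)
            return true;
    return false;
}

/* True iff Dep_R(e) is a subset of Trustset(Au), under the assumption that the
 * trace contains one measurement per entity in Dep_R(e). */
static bool validate(const struct trustset *au_trustset,
                     const struct measurement *trace, size_t trace_len)
{
    for (size_t i = 0; i < trace_len; i++)
        if (!in_trustset(au_trustset, &trace[i]))
            return false;
    return true;
}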

4. IMPLEMENTATION

Our solution is built on hardware virtualization extensions such as Intel VT [15] and AMD SVM [16]. In this section, we discuss our implementation for Xen HVM DomU based on Intel VT. We first give an overview of our implementation, followed by a description of the measurement and protection of sensitive data on disk by hooking disk I/O. We then show the mechanism to trace program loading and to protect sensitive data in memory. At the end of this section we describe how to validate the integrity of a system and make access control decisions.

4.1. Implementation overview

According to the formal model described in the previous section, in order to protect sensitive data specified by an authority when the integrity of its trust set is broken, we need to monitor the process of program loading, verify whether loaded programs belong to the corresponding authority's trust set, and examine the integrity of the loaded programs.

We leverage virtualization technology to fulfill the above requirements. As shown in Figure 3, all measurement operations and access control of disk files are achieved by hooking disk I/O operations. Monitoring loaded programs and protecting sensitive data in memory are implemented by intercepting the corresponding system calls.

We leverage the Blktap architecture, the x86 fast system call entry mechanism, and the Xen memory management subsystem to achieve measurement, monitoring, and access control. Our implementation includes a set of functional modules: the trace module (TAM), the system call tracer (SCT), and the decision-making engine (DME). TAM collects the information of disk operations, measures the trust set, and controls accesses to the disk. SCT collects and filters system call arguments and provides memory protection. DME makes measurement or access control decisions according to the information sent by TAM and SCT.

Figure 3. VMM-based integrity protection architecture. (The figure shows a full-virtualization DomainU with a PV-on-HVM driver; the system call tracer hooked into Xen via VMExit/VMCS; and, in Domain 0, the trace module attached to the tapdisk/backend drivers, a tap FIFO carrying block hashes to the decision-making engine for comparison, and the trust set and protected data, all running on hardware with CPU virtualization extensions.)

4.2. Measurement and disk access control

Blktap is a user-mode driver which manages disk activity directly with relatively little performance cost [17]. TAM intercepts file operations in the user mode of Dom0 when disk data is processed by the tapdisk driver of Blktap.

TAM does not block disk reading operations, but only sends the operation parameters (including the starting sector location and the number of sectors) to DME, which makes access or measurement decisions according to these parameters. According to the decision from DME, TAM takes one of the following three types of actions: (1) measurement operation—TAM copies the buffer of the disk reading operation to the measurement buffer, or invokes Blktap asynchronous I/O operations according to the measurement parameters from DME to read the specified data into the measurement buffer; at the same time, TAM lets the disk reading operation return and invokes a hash function to take the measurement, and when this function returns, TAM submits the hash value to DME. (2) Normal operation—TAM does not take any action and the reading operation continues. (3) Deny operation—TAM cleans the file buffer and returns a reading error.

For any disk writing operation, TAM blocks it, sends the arguments of the operation to DME, and then enforces DME's decision on whether the write operation is permitted.
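The following C sketch illustrates the three TAM actions on a completed disk read. dme_decide(), dme_submit_hash(), and the tap_request structure are hypothetical stand-ins for the tapdisk/Blktap plumbing described above; SHA-1 is assumed as the 160-bit hash, and the hashing is shown inline although the real design performs it asynchronously:

/* Sketch of the TAM read-path dispatch (illustrative, not the actual driver code). */
#include <openssl/sha.h>   /* SHA1(): 160-bit digest                        */
#include <stdint.h>
#include <string.h>

enum dme_action { DME_MEASURE, DME_NORMAL, DME_DENY };

struct tap_request {
    uint64_t  first_sector;    /* starting sector of the read               */
    uint64_t  nr_sectors;      /* number of sectors read                    */
    uint8_t  *buf;             /* data about to be returned to the guest    */
    size_t    len;
};

enum dme_action dme_decide(uint64_t first_sector, uint64_t nr_sectors);
void            dme_submit_hash(uint64_t first_sector,
                                const unsigned char hash[SHA_DIGEST_LENGTH]);

static int tam_on_read_complete(struct tap_request *req)
{
    unsigned char hash[SHA_DIGEST_LENGTH];

    switch (dme_decide(req->first_sector, req->nr_sectors)) {
    case DME_MEASURE:
        /* Let the read complete for the guest, hash the buffered copy
         * (asynchronously in the real design), and report the digest to DME. */
        SHA1(req->buf, req->len, hash);
        dme_submit_hash(req->first_sector, hash);
        return 0;
    case DME_NORMAL:
        return 0;                       /* pass the read through untouched   */
    case DME_DENY:
        memset(req->buf, 0, req->len);  /* scrub the buffer ...              */
        return -1;                      /* ... and fail the read             */
    }
    return -1;
}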

4.3. Monitoring and memory access control

Our implementation monitors system state at the following two stages:

Booting Stage: In the booting stage of a monitored VM, the BIOS functionality is offered by the VMM, but it does not usually load the OS directly. Instead, it only loads a portion of a boot loader residing in the MBR into memory and transfers control to the loaded code. Thus TAM must measure the boot loader and the OS image. We achieve this by intercepting the disk data flow in the booting stage before it is processed by the tapdisk driver, and measuring all the received data in DME.

Runtime Stage: During runtime, we dynamically monitor which programs are loaded and where the protected data are loaded into memory. System calls are intercepted using the x86 fast system call entry mechanism, which is generally used on Windows XP and Linux kernel 2.6. The SYSENTER instruction triggers the transition from user mode to kernel mode. The kernel entry address is specified by two special registers: SYSENTER_CS_MSR and SYSENTER_EIP_MSR. Whenever a user-mode application requires system services, the service number and parameters are transferred into the kernel, and then the SYSENTER instruction executes.

In our implementation, the value of the register SYSENTER_EIP_MSR is set by SCT to a magic address which leads to a page fault every time. Whenever a system call in the monitored VM is invoked, a page fault occurs at this special address. When a page fault occurs and the page fault linear address is equal to the magic address, it indicates that a system call has happened, and the parameters related to the current process are gathered to record reading/writing operations. Ultimately, the real entry address of the system call is set in the EIP register, and the handler executes in the monitored VM.

It is unnecessary to inspect all system calls and their arguments. In fact, we focus on the system calls for file operations and for loading modules and applications, such as read, write, init_module, execve, and fork. Modules are dynamically loaded into kernel space through insmod. Applications replace the current execution code via the system call execve. The arguments of these system calls may include the relative pathname of executable files. The absolute pathname can be resolved according to the task_struct structure of the current process. After that, the pathname is transferred to DME.

If protected data is loaded into memory, access to the data is controlled by SCT. The most important information is the data itself and the location of the data. By intercepting the read system call, we can obtain such information in real time. From the arguments of the read system call, the file descriptor and buffer address are easily obtained. Similarly, the absolute pathname can be resolved through the file descriptor and the task_struct structure of the current process. This information is passed to DME.

To control access to the protected data in memory, we leverage Xen's shadow paging mechanism. This technique maintains two kinds of page tables for each VM: guest page tables (GPTs), which are controlled by the guest, and shadow page tables (SPTs), which are controlled by the hypervisor. Xen controls the actual machine frames used by each VM, while it also provides each guest OS the illusion that it has full control of the memory. To achieve memory access control for protected data, we need to control the propagation of entries from the GPTs to the SPTs and Xen's page fault handler.

We trace the pages in which the protected data is stored with protected page tables (PPTs) created by SCT. SCT populates the PPTs with references to the physical pages corresponding to the linear address space of the protected data. Once access to the protected data in memory needs to be controlled, SCT removes the references to the protected pages from the SPTs and flushes the TLBs. Owing to this setup, access to code/data from the SPTs to the PPTs, or vice versa, leads to page faults that invoke SCT in the hypervisor. This technique only provides page-level protection, which is problematic if a page contains protected and accessible regions at the same time. We therefore provide byte-level protection by modifying Xen's page fault handler. Each time a page fault occurs due to a failed access operation, we check the target's virtual address, which is stored in the CR2 CPU register. Next, we check the protection list to see whether the target address requires protection. If yes, a page fault exception is propagated to the guest OS to prevent the access attempt; if not, the guest is permitted to access the non-protected region of a frame that contains a protected region.
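A sketch of the byte-level check performed on such a fault; the range list and the inject_guest_pagefault()/allow_access() helpers are illustrative, not Xen's real handler API:

/* Byte-granularity check on a shadow-page fault; the faulting linear address
 * is taken from CR2 by the hypervisor's page fault handler. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct prot_range {
    uint64_t start;   /* first protected byte (guest linear address) */
    uint64_t end;     /* one past the last protected byte            */
};

void inject_guest_pagefault(uint64_t cr2);   /* deny: reflect #PF to the guest    */
void allow_access(uint64_t cr2);             /* permit: fix up the SPT and retry  */

static void sct_on_protected_page_fault(uint64_t cr2,
                                         const struct prot_range *ranges,
                                         size_t nranges)
{
    for (size_t i = 0; i < nranges; i++) {
        if (cr2 >= ranges[i].start && cr2 < ranges[i].end) {
            inject_guest_pagefault(cr2);     /* target byte is protected          */
            return;
        }
    }
    allow_access(cr2);   /* non-protected region of a partially protected page */
}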

Copyright q 2010 John Wiley & Sons, Ltd. Concurrency Computat.: Pract. Exper. 2010; 22:1893–1910DOI: 10.1002/cpe

Page 11: Building dynamic and transparent integrity measurement and protection for virtualized platform in cloud computing

BUILDING INTEGRITY MEASUREMENT AND PROTECTION 1903

4.4. Decision-making engine

DME manages the life cycle of an authority's protected data according to the information sent by TAM and SCT and the trust set defined by the authority. DME supports the authority in describing its trust set and protected data in a higher-level, file-system-oriented view, by specifying which directories and files belong to the trust set or which files are protected data; 160-bit hash values are used to identify the integrity of these programs.

At the VMM layer, most of the operations captured by TAM and SCT are low-level operations, closely related to the specific system architecture, whereas the trust set and protected data are described with higher-level semantics. DME translates the easy-to-manage higher-level representations into raw physical operations. The specific semantic translation is closely related to the guest OS and the selected file systems. DME builds three structures called trust_inte_file, prote_file, and mem_pro_file for this purpose. The first two record the translated results according to the authority's trust set and protected data, and the third records the memory addresses of the protected data. All the files and directories of the first two structures have a block node listing all the blocks that the directories and files occupy.

When a target VM boots, DME first compares the hash values of the boot loader and OS kernel image with the values in trust_inte_file. If any of them does not match, the boot loader or OS kernel image does not satisfy the authority's requirement, and the access permitted bit of prote_file is set. After initialization, for each change DME reads a new record from the tap FIFO sent by TAM. Next, the record's block number is looked up (hashed) in trust_inte_file and prote_file. If the record's block number is found in prote_file, this disk I/O operation is accessing the protected data. DME then checks the access permitted bit. If the bit is set, which means that the authority's expected trusted environment is broken, the access request should be denied, and DME sends a deny-operation instruction to TAM. Otherwise, if the access permitted bit is not set, DME checks whether the measurement buffer is empty. Because loaded kernel modules and programs are measured asynchronously with respect to file reading operations, DME must wait until all the measurements are finished. If access to the protected data is allowed, then for a reading operation DME sets an opening bit to indicate that the protected data is about to be loaded into memory; for a writing operation, DME records the block with the change of the protected data after the writing operation is completed.

If the block number matches in trust_inte_file, which means the disk I/O operation is accessing the trust set, then for a reading operation, if the file has never been measured before or has been changed, DME sends a measurement instruction (covering all the blocks this file occupies) to TAM. The resulting hash value is compared with that in trust_inte_file. If the hash value hits in trust_inte_file, the loaded data and program satisfy the requirements of the authority. On the contrary, if it misses, the access permitted bit is set. DME also needs to verify whether kernel modules and user-level executables come from the trust set. According to the description in Section 4.3, SCT can capture their path information, so DME can match the path information against trust_inte_file. If they do not match, the access permitted bit is set.

Then DME checks whether the opening bit is set. If it is set and the access permitted bit is also set, some protected data has been loaded into memory while the integrity of the system is broken. DME then sends a memory-protection instruction to SCT.
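The decision flow above can be condensed into the following sketch; the lookup helpers and the TAM command codes are illustrative names for the structures (trust_inte_file, prote_file) and bits described in the text:

/* Condensed DME decision on one disk I/O record from the tap FIFO. */
#include <stdbool.h>
#include <stdint.h>

enum tam_cmd { TAM_DENY, TAM_MEASURE, TAM_NORMAL };

struct dme_state {
    bool integrity_broken;   /* the "access permitted bit" in the text       */
    bool protected_open;     /* the "opening bit": protected data in memory  */
};

bool in_prote_file(uint64_t block);         /* block holds protected data     */
bool in_trust_inte_file(uint64_t block);    /* block belongs to the trust set */
bool needs_measurement(uint64_t block);     /* never measured / changed since */
void sct_protect_memory(void);              /* ask SCT to fence the pages     */

static enum tam_cmd dme_on_read(struct dme_state *s, uint64_t block)
{
    if (in_prote_file(block)) {
        if (s->integrity_broken)
            return TAM_DENY;                /* trusted environment is gone    */
        s->protected_open = true;           /* data will be loaded in memory  */
        return TAM_NORMAL;
    }
    if (in_trust_inte_file(block) && needs_measurement(block))
        return TAM_MEASURE;                 /* hash result is checked later   */
    return TAM_NORMAL;
}

/* Called when a measured hash does not match trust_inte_file. */
static void dme_on_bad_measurement(struct dme_state *s)
{
    s->integrity_broken = true;
    if (s->protected_open)                  /* secrets are already in memory  */
        sct_protect_memory();
}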


In order to minimize the performance impact, we take a new measurement only if a target file has not been measured before or may have been changed since the last measurement recorded in trust_inte_file. Thus we use caching to reduce the performance overhead.
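One simple way to realize this cache is sketched below (our illustration; the file identifier, table size, and invalidation trigger are assumptions): a file's cached hash is reused until a write to any of its blocks is observed.

/* Minimal measurement cache in the spirit of the optimization above. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define HASH_LEN   20
#define CACHE_SIZE 1024

struct meas_entry {
    uint64_t      file_id;          /* e.g. inode or first block number */
    unsigned char hash[HASH_LEN];
    bool          valid;            /* cleared when the file is written  */
};

static struct meas_entry cache[CACHE_SIZE];

static struct meas_entry *cache_slot(uint64_t file_id)
{
    return &cache[file_id % CACHE_SIZE];
}

/* Returns true and fills 'hash' if a still-valid measurement is cached. */
static bool cache_lookup(uint64_t file_id, unsigned char hash[HASH_LEN])
{
    struct meas_entry *e = cache_slot(file_id);
    if (e->valid && e->file_id == file_id) {
        memcpy(hash, e->hash, HASH_LEN);
        return true;
    }
    return false;
}

static void cache_store(uint64_t file_id, const unsigned char hash[HASH_LEN])
{
    struct meas_entry *e = cache_slot(file_id);
    e->file_id = file_id;
    memcpy(e->hash, hash, HASH_LEN);
    e->valid = true;
}

/* Invalidate on any write TAM reports for this file's blocks. */
static void cache_invalidate(uint64_t file_id)
{
    struct meas_entry *e = cache_slot(file_id);
    if (e->file_id == file_id)
        e->valid = false;
}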

5. SYSTEM EVALUATION

In this section, we first demonstrate the integrity protection capability of our implementation, and then analyze its performance overhead.

5.1. Effectiveness

We have developed a prototype based on Nimbus [18] to demonstrate our system's capability of dynamically detecting integrity changes of the programs defined in a SaaS user's trust set and protecting his/her sensitive data.

Nimbus includes a set of open source tools that together provide an 'Infrastructure-as-a-Service (IaaS)' cloud computing solution. To support our architecture, the Nimbus nodes need to adopt our modified Xen described in Section 4. We also implement a remote attestation daemon in Domain 0 of each node, which is in charge of reporting the measurement lists of the VMs running on that node. As illustrated in Figure 4, a SaaS provider deploys its services through virtual machine images in the Nimbus-based cloud platform. In our prototype, the SaaS provider simply provides an FTP service by renting the resources of our cloud. Because the resource nodes of the cloud should not be exposed to the SaaS users, we implement an attestation delegation service (ADS) in Nimbus as a bridge for SaaS users to attest the relevant VMs. A SaaS user can request Nimbus to verify the integrity of the FTP service's configuration by sending an attestation command with the IP address of the FTP service node to the ADS. The ADS communicates with the relevant remote attestation daemon and returns the measurement list of the kernel and the loaded programs of the FTP server to the SaaS user.

Figure 4. Prototype architecture. (The figure shows the SaaS user sending attestation and protection requirements to the attestation delegation service and its manage module; the cloud user/SaaS provider using the workspace service and repository; and cloud nodes running Xen with the dynamic measurement and protection module and a remote attestation daemon in Domain 0, with VMs able to migrate between nodes.)

The user verifies the measurement list from the FTP server. If it satisfies the SaaS user's security requirement, the SaaS user signs the measurement list as a trust set and authorizes Nimbus to protect his/her sensitive data. The user needs to send his/her instructions to the FTP server and the signed trust set to the ADS. The ADS informs the corresponding cloud nodes to monitor the integrity changes of the service's configuration during runtime. If any change violates the user's security policy, the sensitive data of the SaaS user will be protected.

As shown in Figure 5, the hypervisor with our design in the cloud node monitors the integrity changes of the FTP server. When the integrity of the file /etc/profile, which is included in the SaaS user's trust set, does not match the hash value in the trust set, the dynamic measurement and protection module detects this change and protects the sensitive data of the SaaS user. As shown in Figure 6, even the root user or the FTP server manager cannot access the SaaS user's data once his/her desired system integrity is broken.

Figure 5. Detecting the integrity change of programs.

Figure 6. Protecting the sensitive data of an authority.

In our system, the environment is measured and user-sensitive data is sealed according to the SaaS user's requirement, and then the user can decide whether the environment is trustworthy or not. For example, suppose a user requires high privacy protection for data he/she will upload to an FTP server. The user is then required to choose an FTP server able to satisfy his/her privacy protection requirement. The cloud provider can guarantee that it cannot lie to the FTP user by measuring the FTP server execution environment and sealing the data.

The environment is identified by the measured executable code and data, and any change to them causes the integrity of the environment to be considered destroyed and the data to become no longer accessible. In real environments, to satisfy requirements that enable upgrades and other forms of data sharing, controllable disclosure of the environment's secrets to the outside must be provided. Our design can implement many forms of upgrade policies. For example, when the SaaS provider needs to update its service in a node, we take the following steps: (1) the attestation delegation service (ADS) asks the SaaS provider for its update digest; (2) the SaaS user's data is sealed temporarily; (3) the SaaS user confirms whether the update satisfies his/her requirement or not; (4) if satisfied, the user signs the update digest and the hypervisor unseals the user's data.

Another example is group trusted updating based on cryptographic certification. Suppose the SaaS user has delegated a certificate set he trusts to the ADS. The latter can ask the SaaS provider for the certificate of the update code. If the update code comes from a vendor whose certificate is in the SaaS user's trusted certificate set, his/her data will be unsealed.

Our architecture is transparent to the guest OS, hence the VM can be migrated between nodes. However, the VMM must support our architecture in the target node. Before VM migration, the source node needs to inform the target node of the user's trusted certificate set and the information about the data to be protected.

Table I. The overhead of disk I/O (ms).

              1KB       16KB      128KB     1MB       10MB
R UNCHECK     7.7       17.8      107.8     862.4     10171.4
R CHECK       8.7       21.8      139.6     864.2     12731.4
M HASH        8.7       22.1      144.7     923.2     16322.2
W UNCHECK     545.1     1173.2    5033.9    10580.2   16000.9
W CHECK       547.2     1181.4    5097.9    10068.2   17623.9

5.2. Performance

Our prototype system runs on a 2.33 GHz Intel Core Duo processor with 2 MB L2 cache, 2 GB RAM, and an 80 GB 7200 RPM disk. The metrics include the latency of disk I/O and of system call tracing. We use the notation CHECK to represent disk I/O with our design, which needs time to make decisions, while UNCHECK represents the case in a common Xen system without our design. M HASH denotes a file reading operation with the file's hash value measured. Measurements are made using the Linux time command. The script is executed for different file sizes; the size of the sampled files varies from 1 KB to 10 MB for each mode. Table I presents the experimental results.

Table I shows that the disk I/O performance overhead can be negligible. Most of the hash measurement is executed asynchronously with regard to the actual disk I/O. The asynchrony created by the use of a FIFO allows DME, the most performance-intensive component of the architecture, to execute in parallel with the actual disk operations.

System call tracing is a key mechanism of our design for interpreting the actions of the guest OS and protecting SaaS users' sensitive data. We test the performance of system call tracing with standard benchmarks which perform a series of tests on Linux web servers, database servers, and CPU-intensive applications. We also measure the efficiency of file compression and decompression based on the Linux kernel source. The size of the kernel archive, linux-2.6.18.8.tar.gz, is 58.6 MB.

Figure 7. Performance of system call tracing.

Figure 7 presents the performance of system call tracing. The results show that our implementation adds extra latency to system calls. Latency-sensitive benchmarks, such as the web server benchmark, incur a relatively high performance cost. The latency is mainly caused by the notification mechanism: in our current implementation, system call tracing is achieved by modifying the hypervisor, while the decision-making engine is located in the user space of Domain 0, so all the system call tracing information needs to be transferred from the hypervisor to the engine in Domain 0. A full in-hypervisor implementation would have much lower latency. In addition, system calls which require I/O access are not affected by the extra latency even in our current implementation.

6. DISCUSSION

Trust chain: The architecture we introduced in Section 4 is hosted in a virtual machine, so it is possible to take an integrity measurement for each virtual machine and record the measurements. However, a challenger must be able to establish trust in an environment which consists of more than the content of the virtual machine. The reason is that each operating system runs inside a virtual machine that is fully controlled by the hypervisor. Furthermore, all the modules of our design can run as processes inside a VM whose own execution environment must be trusted. Therefore attestation must allow a challenger to learn not only about measurements inside the virtual machine, but also about those of the hypervisor environment. In addition, these measurements must include the hypervisor and the entire boot process.

Optimistic and transparent attestation: Terra measures the trusted virtual machine monitor at the partition block level. Thus, on the one hand, Terra produces about 20 megabytes of measurement values (i.e. hashes) when attesting an exemplary 4-gigabyte VM partition. On the other hand, because those measurements correspond to blocks, it is difficult to interpret the varying measurement values. Berger et al. [11] illustrate how to virtualize a TPM and present a driver pair that utilizes these concepts. However, this needs guest OS support if we want to extend the TCG measurement concept to the application level; for example, IMA is used for providing measurements.

Our architecture selectively measures those parts of the system that contribute to the dynamic runtime system. Note that the attesting system's enforcement requirement differs for different challengers, hence it is convenient to allow the upper authorities to describe their integrity requirements through hash values. In traditional approaches, it is very difficult to maintain system integrity in the operating system. First of all, operating systems are usually so large that they may have many vulnerabilities. Secondly, in cloud computing, a cloud provider should support the different software configurations of its cloud users, and it is very difficult to require a cloud user to provide support for extending the TCG-based chain of trust to the application level. Our architecture achieves measurement and protection at the VMM level, hence it is transparent to the software stacks above and thus more suitable for cloud computing.

Completeness of the implementation: Similar to IMA, in our implementation, besides the static integrity measurements, we measure dynamically loaded kernel modules and processes. We do not measure dynamic link libraries separately, because if a process which belongs to an authority's trust set requires a dynamic link library, the library is bound to the trust set, and all the programs and data in the trust set will be measured to ensure their integrity by the Blktap hook. Unless information flows among processes are under a mandatory restriction, the integrity of all processes must be measured. For most systems that use a discretionary policy, the integrity of all processes must be measured because all can impact each other. Various defects in existing operating systems give any running program loaded into memory the possibility to leverage these defects and impact the correctness of user-specific procedures and the security of sensitive data, which makes us extend the measurement border to all the processes loaded into memory.

7. RELATED WORK

The IBM 4758 security coprocessor [14] implements both secure boot and authenticated boot, albeit in a restricted environment. It provides secure boot guarantees by verifying all partitions of the whole system before activating them. By enforcing signature verification on executables before loading them into the system, it further protects the security of sensitive data when the predefined integrity of the system is broken. A mechanism called outgoing authentication [6] enables attestation that links each subsequent layer to its predecessor. Smith and Weingart [19] discuss how to ensure secure deployment and the correctness of software updating, relying on the security coprocessor in multi-authority environments. High design and development cost prevents it from being widely applied, and it is difficult to support large-scale commercial applications.

TCG has proposed an open interface for a hardware TPM which provides cryptographic functionalities and protected storage [5]. The TPM enables the verification of static platform configurations, both in terms of content and loading order, by collecting a sequence of hashes over target code. Researchers have examined how to use the TPM to prove that a platform has booted with a valid OS with trusted BIOS and OS loader [8,9].

IMA [8] is a scheme to extend the TCG-specified measurements to programs in the application layer. MacDonald et al. [9] discuss how to convert a business Linux system into a trusted virtual computing platform with a TPM and ensure its trusted environment through integrity checks. However, both approaches need to modify the OS, which limits their application, e.g. to Linux platforms. Another major issue of TCG and IMA is that they only provide load-time integrity measurement, and thus no runtime integrity guarantee.

vTPM [11] presents the design and implementation of a system that enables trusted computing for an unlimited number of virtual machines on a single hardware platform. To this end, vTPM virtualizes the TPM. As a result, the TPM's secure storage and cryptographic functions are available to operating systems and applications running in virtual machines.

Terra [12] is a trusted computing architecture built on a trusted VMM that authenticates software running in a VM for challenging parties. Terra measures the trusted VMM at the partition block level. Thus, on the one hand, Terra produces about 20 MB of measurement values (i.e. hashes) when attesting a 4 GB VM partition. On the other hand, it is difficult to interpret varying measurement values. Our system selectively measures those parts of a system that contribute to the dynamic runtime system integrity; it does so at a high level that is rich in semantics and enables remote parties to interpret varying measurements at the file level.

Trusted Cloud [20] argues that concerns about the confidentiality and integrity of their data and computation are a major deterrent for enterprises looking to embrace cloud computing. The authors present the design of a trusted cloud computing platform (TCCP) that enables IaaS services such as Amazon EC2 to provide a closed-box execution environment. TCCP guarantees confidential execution of guest VMs, and allows users to attest to the IaaS provider and determine whether the service is secure before they launch their VMs.

8. CONCLUSIONS AND FUTURE WORK

In this paper, we have presented the design and implementation of a virtualization-based integrity protection approach which permits an authority to bind its sensitive data to integrity requirements. Our approach can guarantee that the sensitive data specified by an authority can only be accessed by programs in an environment that the authority trusts. This approach is applicable to multi-layer software environments where an authority of upper-layer software can maintain the security of that software when the integrity of the underlying software components is broken. The experimental results show that the design is effective and the overhead is acceptable.

The main feature of our solution is that it can enhance the security of an ordinary commercial platform with the same capabilities provided by the IBM 4758 secure coprocessor. Our solution not only measures and reports the integrity of a system, but also protects the sensitive data when the system's integrity is compromised. The approach can be applied in cloud or grid computing environments with multiple independent authorities to protect their sensitive data and to maintain the integrity of an entire system.

ACKNOWLEDGEMENTS

The paper is supported by the National Basic Research Program of China (2007CB310900), and the National Natural Science Foundation of China (60973038).


REFERENCES

1. Amazon Elastic Compute Cloud (EC2). Available at: http://aws.amazon.com/ec2/ [2009].
2. Microsoft Azure Services Platform. Available at: http://www.microsoft.com/azure/default.mspx [2009].
3. Rackspace Mosso. Available at: http://www.rackspacecloud.com/ [2009].
4. Armbrust M, Fox A, Griffith R, Joseph AD, Katz R, Konwinski A, Lee G, Patterson D, Rabkin A, Stoica I, Zaharia M. Above the clouds: A Berkeley view of cloud computing. Technical Report No. UCB/EECS-2009-28, University of California at Berkeley, U.S.A., 2009.
5. Trusted Computing Group. Available at: https://www.trustedcomputinggroup.org/ [2009].
6. Smith SW. Outbound authentication for programmable secure coprocessors. International Journal of Information Security 2004; 3(1):28–41.
7. Maruyama H, Seliger F, Nagaratnam N, Ebringer T, Munetoh S, Yoshihama S, Nakamura T. Trusted platform on demand. Technical Report RT0564, IBM, 2004.
8. Sailer R, Zhang X, Jaeger T, Doorn LV. Design and implementation of a TCG-based integrity measurement architecture. Proceedings of the 13th USENIX Security Symposium. USENIX Association: Berkeley, CA, U.S.A., 2004; 223–238.
9. MacDonald R, Smith SW, Marchesini J, Wild O. Bear: An open-source virtual secure coprocessor based on TCPA. Technical Report TR2003-471, Dartmouth College, 2003.
10. Jaeger T, Sailer R, Shankar U. PRIMA: Policy reduced integrity measurement architecture. Proceedings of the 11th ACM Symposium on Access Control Models and Technologies. ACM Press: New York, NY, U.S.A., 2006; 19–28.
11. Berger S, Caceres R, Goldman K, Perez R, Sailer R, Doorn LV. vTPM: Virtualizing the trusted platform module. Proceedings of the 15th USENIX Security Symposium. USENIX Association: Berkeley, CA, U.S.A., 2006.
12. Garfinkel T, Pfaff B, Chow J, Rosenblum M, Boneh D. Terra: A virtual machine-based platform for trusted computing. Proceedings of the 19th ACM Symposium on Operating Systems Principles. ACM Press: New York, NY, U.S.A., 2003; 193–206.
13. TaMin R, Litty L, Lie D. Splitting interfaces: Making trust between applications and operating systems configurable. Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation. USENIX Association: Berkeley, CA, U.S.A., 2006; 279–292.
14. Dyer JG, Lindemann M, Perez R, Sailer R, Doorn LV, Smith SW. Building the IBM 4758 secure coprocessor. IEEE Computer 2001; 34(10):57–66.
15. Intel Virtualization Technology. Available at: http://www.intel.com/technology/virtualization [2009].
16. AMD64 Virtualization: Secure Virtual Machine Architecture Reference Manual. AMD Publication no. 33047, rev. 3.01, 2005.
17. Warfield A. Virtually persistent data. Xen Developer's Summit (Fall 2006), San Jose, CA, U.S.A., 2006.
18. Nimbus. Available at: http://workspace.globus.org [2007].
19. Smith SW, Weingart S. Building a high-performance, programmable secure coprocessor. Computer Networks 1999; 31(9):831–860.
20. Santos N, Gummadi KP, Rodrigues R. Towards trusted cloud computing. Workshop on Hot Topics in Cloud Computing. USENIX Association: Berkeley, CA, U.S.A., 2009.
