
Research on Remote Data Possession Checking

Lanxiang Chen, Shuming Zhou School of Mathematics and Computer Science, Fujian Normal University, Fuzhou 350108

Key Lab of Network Security and Cryptology, Fuzhou 350108, China lxiangchen, [email protected]

Abstract

Storage as a service (SaaS) or storage service provider (SSP) model enables users to access their data anywhere and at any time. It can also help users comply with a growing number of regulations. For users whose storage requirements are unpredictable and users who need low-cost storage, the SaaS model is a good choice for its convenience and efficiency. However, concerns about the security and reliability of SaaS have limited its widespread use. When users store their data on a SaaS server, their main concerns are whether the data is intact and whether it can be recovered when there is a failure. This is the goal of remote data possession checking (RDPC) schemes. This paper first gives a simplified architecture of the SaaS model and discusses its security requirements. Secondly, the state-of-the-art research on RDPC is reviewed; the RDPC schemes are categorized into two types, namely provable data possession (PDP) and proof of retrievability (POR). Thirdly, the factors considered in designing an RDPC scheme are presented, and based on these factors the existing schemes are compared. Finally, some future directions for RDPC research are proposed.

Keywords: Remote data possession checking; storage security; storage reliability; provable data possession; proof of retrievability

1. Introduction

Advances in networking technology and the rapid accumulation of information have fueled a trend toward outsourcing storage to external service providers, namely the storage as a service (SaaS) or storage service provider (SSP) model. By doing so, organizations can concentrate on their core tasks rather than incurring the substantial hardware, software and personnel costs involved in maintaining data. In particular, for users whose storage requirements are unpredictable and users who need low-cost storage, the SaaS model is a good choice for its convenience and efficiency. Online services such as Google, Yahoo! and Amazon are starting to charge users for storage. Customers often use these services to store valuable data such as email, family photos and videos, and disk backups.

However, the SaaS model presents a number of interesting challenges. One problem is to verify that the server continually and faithfully stores the entire file entrusted to it by the client. The server may be untrusted in terms of both security and reliability: it might maliciously or accidentally erase the data or place it onto temporarily unavailable storage media. This could occur for numerous reasons, including cost savings or external pressures (e.g., government censure). The client's limited resources and the limited bandwidth between client and server are factors that exacerbate the problem. When users store their data with external service providers, their main concerns are whether the data is intact and whether it can be recovered when there is a failure. A critical issue in storing data on untrusted servers is therefore verifying that the storage server continues to hold the data completely and correctly, a task known as remote data possession checking (RDPC). This is the topic of this paper.

The remainder of this paper is organized as follows. Section 2 gives a simplified architecture of the SaaS model and discusses the security requirements of SaaS. Section 3 discusses the categories of RDPC and details the state-of-the-art research on RDPC. Section 4 compares the existing schemes according to the factors considered in designing an RDPC scheme. Section 5 presents some future directions for RDPC and Section 6 concludes the paper.

2. The architecture of storage as a service

Journal of Convergence Information Technology (JCIT), Volume 6, Number 12, December 2011. doi:10.4156/jcit.vol6.issue12.6

A simplified architecture of the storage as a service (SaaS) or storage service provider (SSP) model is illustrated in Figure 1. It consists of users and service providers. Users store their data on the storage servers of SSPs; they can then verify periodically, using any networked device, whether their data is intact.

[Figure showing users asking storage service providers "Is the data stored intact?"; the SSPs offer data archiving, data backup and other services.]

Figure 1. A simplified architecture of SaaS

Generally, storage service providers may misbehave in the following ways:
- To save storage costs, they may discard unvisited or rarely visited data, or migrate online data to second-level low-speed storage devices such as magnetic tape;
- They may conceal data-loss accidents caused by management faults, hardware faults or attacks;
- They may tamper with or leak users' data;
- They may fail to achieve the performance and reliability they declared; for instance, they declared that they keep t copies, but in fact hold only one.

Thus, for users, the security and reliability of SaaS focus on the following aspects:
- Data confidentiality protection;
- Data integrity protection;
- Data availability and reliability protection.

Data confidentiality prevents the SSP from leaking users' data and can be achieved by encryption. Data integrity prevents the SSP from tampering with users' data and can be achieved by computing cryptographic hashes. Confidentiality and integrity are the most important aspects of remote data storage. In addition, authentication and accountability are also important. In this paper, however, we focus on remote data possession checking, which enables users to check whether their data is still stored intact on the storage devices of SSPs. Moreover, in the case of a failure, an appropriate technique for data recovery must be considered too.

3. Remote data possession checking

Remote data possession checking is a topic that focuses on how to frequently, efficiently and securely verify that a storage server can faithfully store its client's (potentially very large) original data without retrieving it. The storage server is assumed to be untrusted in terms of both security and reliability. There are two types of schemes, namely provable data possession (PDP) and proof of retrievability (POR); the difference is that POR not only checks possession of the data but can also recover it in case of a failure.

Generally, to design an RDPC scheme, the following factors must be considered:
- Computation complexity: the initialization and verification overheads on the client and the proof-generation overhead on the server. The scheme should be efficient in terms of computation.
- Communication complexity: the amount of communication between client and server required by the scheme; it should be low.
- Storage cost: the additional storage required on client and server; it should be as low as possible.
- Data updating: including modifying, inserting, appending and deleting. A scheme that does not support updates can only be used for static data, such as data archives.
- The number of verifications: the scheme ought to support an unlimited number of verifications.
- Public verification: the scheme ought to support public verification.
- Data recovery: whether the scheme can recover the data in case of a failure; this can be achieved by introducing an error-correcting code or erasure code.
- Provable security: generally, it is necessary to prove that the scheme is secure.
- Data block access: how many data blocks the scheme needs to access.

In addition, some applications may want a third-party auditor to periodically verify the data and return the result to the users.

Based on the factors listed above, the state-of-the-art research works on RDPC are discussed in the following subsections.

3.1. Provable data possession

Initial solutions to PDP were provided by Deswarte and Quisquater [1]. They use RSA-based hash functions to hash the entire file at every challenge. Let N be an RSA modulus and g ∈ Z_N^*. The verifier stores a = g^F mod N for file F (suitably represented as an integer). To challenge the prover to demonstrate possession of F, the verifier transmits a random element g^r. The prover returns s = (g^r)^F mod N, and the verifier checks that s = a^r mod N. The drawback of this scheme is that it requires the prover to exponentiate over the entire file F and to access all of the file's blocks, which is clearly prohibitive for the server whenever the file is large. The scheme of Filho and Barreto [2] shares this drawback, but it is intended to prevent data corruption in transfer.
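The Deswarte-Quisquater exchange can be sketched directly with modular exponentiation (the parameters below are toy values chosen for illustration; a real deployment would use a large RSA modulus and a file far too big for this to be practical, which is precisely the drawback noted above):

```python
import secrets

# Toy parameters (assumptions for illustration, not from the scheme itself).
N = 3233            # toy RSA modulus, 61 * 53
g = 5               # element of Z_N^* (coprime with N)
F = 123456789       # the file, represented as an integer

# Setup: the verifier computes and stores a = g^F mod N.
a = pow(g, F, N)

# Challenge: the verifier picks a random r and sends g^r mod N to the prover.
r = secrets.randbelow(N - 2) + 1
challenge = pow(g, r, N)

# Response: the prover must exponentiate over the ENTIRE file F.
s = pow(challenge, F, N)

# Verification: both sides equal g^(r*F) mod N.
assert s == pow(a, r, N)
```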

Ateniese et al. have contributed several PDP schemes. They first formally define protocols for PDP and present two provably secure PDP schemes in [3]. Both schemes use homomorphic verifiable tags; because of the homomorphic property, tags computed for multiple file blocks can be combined into a single value. The client pre-computes tags for each block of a file and stores the file and its tags with a server. The client can then verify that the server possesses the file by issuing a random challenge against a randomly selected set of file blocks. Using the queried blocks and their corresponding tags, the server generates a proof of possession. However, the scheme does not guarantee that the client can retrieve the file in case of a failure. Since they rely on modular exponentiation over files, the schemes are computationally intensive. In addition, the schemes don't consider data updating.
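The power of challenging randomly selected blocks comes from a simple probability argument: if the server has corrupted a fraction f of the blocks and the client samples C blocks uniformly at random per challenge, the misbehavior goes undetected only if every sampled block is intact, giving a detection probability of 1 - (1 - f)^C (the quantity that appears in Table 1; it treats sampling as with replacement, the standard approximation in this literature). A quick sketch:

```python
def detection_probability(f: float, C: int) -> float:
    # Misbehavior escapes detection only if all C sampled blocks are intact.
    return 1.0 - (1.0 - f) ** C

# Even when only 1% of the blocks are corrupted, a few hundred random
# samples per challenge make detection overwhelmingly likely.
for C in (100, 300, 460):
    print(C, round(detection_probability(0.01, C), 3))
```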

Recently, Ateniese et al. provide a general mechanism for building public-key homomorphic linear authenticators (HLAs) from any identification protocol satisfying certain homomorphic properties in the random oracle model (RO) in [4]. Then they show how to turn any public-key HLA into publicly verifiable proofs of storage (PoS) with communication complexity independent of the file length and supporting an unbounded number of verifications in the standard model. The public-key HLAs can be layered on top of erasure codes or used in conjunction with a probabilistic approach for multiple audits to obtain better performance while retaining public verifiability. However, it relies on modular exponentiation over files, which is computationally expensive.

In [5], they present a provably secure PDP scheme based on symmetric-key cryptography. The scheme supports some dynamic operations, including modification, deletion and appending, but it is not fully dynamic: it cannot perform block insertions anywhere; only append-type insertions are possible. In setup, they store pre-computed answers as metadata. Thus, the number of updates and challenges is limited and fixed a priori, and each update requires re-creating all the remaining challenges. In addition, since it is based upon symmetric-key cryptography, it is unsuitable for public verification.
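The precomputed-answer idea can be sketched roughly as follows (a strong simplification; the key derivation and token format here are illustrative assumptions, not the actual construction of [5]): during setup the client computes a fixed number of expected answers, each a MAC over a few pseudo-randomly chosen blocks, and keeps only these short tokens.

```python
import hashlib
import hmac
import random

def token(key: bytes, blocks: list, indices: list) -> bytes:
    # The expected answer: a MAC over the selected blocks.
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for i in indices:
        mac.update(blocks[i])
    return mac.digest()

blocks = [bytes([b]) * 16 for b in range(32)]          # toy 32-block file

# Setup: precompute a FIXED number of challenge tokens, then outsource the file.
challenges = []
for epoch in range(10):                                # at most 10 audits, fixed a priori
    key = hashlib.sha256(b"master-key" + bytes([epoch])).digest()  # illustrative derivation
    idx = random.Random(epoch).sample(range(len(blocks)), 4)
    challenges.append((key, idx, token(key, blocks, idx)))

# Later: replay challenge 0; an honest server's answer matches the stored token.
key, idx, expected = challenges[0]
assert token(key, blocks, idx) == expected
```

This also makes the scheme's limitation concrete: once the ten tokens are consumed, no further challenges are possible without re-reading the file, and any update invalidates the remaining tokens.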

In [6], Curtmola et al. first provide a provably secure multiple-replica PDP (MR-PDP) scheme. It allows a client who stores t replicas of a file to verify that the server holds all t copies, extending the previous work on PDP for a single copy of a file in [3]. It is computationally much more efficient than using a single-replica PDP scheme to store t separate, unrelated files. Another advantage of MR-PDP is that it can generate further replicas on demand at little expense when some of the existing replicas fail. Unfortunately, the scheme is also based on RSA and it doesn't consider data updating.

Xiao et al. provide a scheme based on symmetric-key cryptography called data possession checking (DPC) [7]. The main contribution is a challenge-renewal mechanism based on a verification-block circular queue, which allows the number of effective challenges that can be issued by the checker to be increased dynamically. Experimental results showed that the computational overhead of a check with a confidence level of 99.4% is 1.8 ms, which is negligible compared with the cost of disk I/O, and that the computational overhead of file preprocessing is reduced by three orders of magnitude by avoiding public-key cryptosystems. But they don't prove the security of the scheme.

Erway et al. present a framework and a construction for dynamic provable data possession (DPDP) [8], which extends the PDP model to support data updating. They use a new version of authenticated dictionaries, implemented with authenticated skip lists [9] based on rank information. They prove the security of these updates using collision resistant hash functions. They also show how the DPDP scheme can be extended to construct complete file systems and version control systems at untrusted servers. However, their schemes are also based on RSA.
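The general pattern behind such authenticated structures can be illustrated with a plain Merkle hash tree (used here as a stand-in for the rank-based authenticated skip list actually used in [8]): the client keeps only the root digest, and the server authenticates any block by returning it together with its sibling path, which the client folds back up to the root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    # Bottom-up levels; the block count is assumed to be a power of two here.
    levels = [[h(b) for b in blocks]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels            # levels[-1][0] is the root the client keeps

def auth_path(levels, index):
    # Sibling hashes from leaf to root, as the server would return them.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(root, block, index, path):
    node = h(block)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

blocks = [bytes([i]) * 8 for i in range(8)]   # 8 toy file blocks
levels = build_tree(blocks)
root = levels[-1][0]                          # the only value the client stores
assert verify(root, blocks[5], 5, auth_path(levels, 5))
```

Updates then reduce to replacing one leaf and recomputing the log(n)-length path to the root, which is the source of the O(log n) costs in Table 1.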

In [10], Sebe et al. present an RDPC protocol that allows an unlimited number of verifications; the maximum running time can be chosen at setup time and traded off against storage at the verifier. But their scheme needs RSA operations on both server and client. Reference [11] proposes an efficient RDPC scheme with an RSA-based challenge-updating method; its drawback lies in data dynamics.

3.2. Proof of retrievability

Juels and Kaliski introduced the notion of proof of retrievability (POR) and proposed a formal POR protocol definition and accompanying security definitions [12]. Their scheme uses disguised blocks, called sentinels, hidden among regular file blocks, which the server cannot differentiate from encrypted blocks. For large files and practical protocol parameterizations, the associated expansion factor can be fairly modest, e.g., 15%. However, the scheme can only be applied to encrypted files and can handle only a limited number of challenges, because each challenge consumes some sentinel blocks.
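The sentinel mechanism can be sketched as follows (a strong simplification of [12]; the encryption layer, error-correcting code and real parameter sizes are omitted): the client hides random blocks, indistinguishable from ciphertext, at secret positions, and each audit reveals a few positions and compares what the server stores against the client's copies.

```python
import random
import secrets

BLOCK = 16
file_blocks = [secrets.token_bytes(BLOCK) for _ in range(100)]  # stands in for the encrypted file

# Setup: generate sentinels (random blocks the server cannot tell apart from
# ciphertext) and insert them at secret positions known only to the client.
sentinels = [secrets.token_bytes(BLOCK) for _ in range(20)]
rng = random.Random(secrets.randbits(64))
positions = sorted(rng.sample(range(len(file_blocks) + len(sentinels)), len(sentinels)))
stored = list(file_blocks)
for pos, s in zip(positions, sentinels):
    stored.insert(pos, s)
# 'stored' is outsourced; the client keeps (positions, sentinels) as its secret.

# One audit: reveal a few sentinel positions and compare. Each challenge
# consumes sentinels, so the number of audits is bounded.
audit = [0, 1, 2]
assert all(stored[positions[i]] == sentinels[i] for i in audit)
```

A server that has deleted a noticeable fraction of the blocks is likely to have destroyed some sentinels, and thus to fail an audit, even though it never learns which blocks are sentinels until they are revealed.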

In [13], Shacham and Waters give two POR schemes. The first, built from BLS (Boneh-Lynn-Shacham) [14] signatures and secure in the random oracle model, has the shortest query and response and supports public verifiability. The second, based on pseudorandom functions (PRFs) and secure in the standard model, has the shortest response with private verifiability, but a longer query. They encode the file using an erasure code which, thanks to the underlying system of MACs, can be turned into an error-correcting code. But the schemes don't consider data updating.

Dodis et al. first formally define the POR code in [15]. They identify several different variants of POR schemes, such as bounded-use vs. unbounded-use and knowledge-soundness vs. information-soundness. Their constructions either improve and generalize the prior POR constructions, or yield POR schemes with the required properties. The main insight of their work comes from a simple connection between POR schemes and the notion of hardness amplification, extensively studied in complexity theory. However, they consider neither communication and computation overheads nor data updating.

As different forward error-correcting codes (FEC) result in tradeoffs in performance, flexibility and reconfigurability, rate of error correction, and output data format, Curtmola et al. distill the key performance and security requirements for integrating FECs into PDP and describe an encoding scheme and file organization for RDPC in [16]. They built a Monte Carlo simulation to evaluate tradeoffs in reliability, space overhead and performance, and provided a detailed analysis that quantifies the probability of an attacker's success given different encodings, attack strategies and client checking strategies. However, the scheme requires generating MACs for each block, which results in large additional storage.

In [17], Bowers et al. introduce a theoretical framework for PORs which leads to improvements over the previously proposed POR constructions of Juels-Kaliski [12] and Shacham-Waters [13]. However, they point out that data updating and public verification remain open problems. In follow-up work, they introduce HAIL (High-Availability and Integrity Layer) [18], whose key insight is to embed MACs in the parity blocks of the dispersal code. As both MACs and parity blocks can be based on universal hash functions, it is possible to create a block that is simultaneously both a MAC and a parity block. The goal of HAIL is to ensure resilience against a mobile adversary, which can potentially corrupt all servers across the system lifetime but can control only b out of the n servers within any given time step. If corruptions are detected on some servers, then F can be reconstituted from the redundancy on intact servers and known faulty servers are replaced. However, HAIL is designed to protect static data and it doesn't support public verification.

Schwarz and Miller present an RDPC scheme for distributed erasure-coded data that achieves availability through replication [19]. They use XOR-based m/n parity erasure codes to create n shares of a file that are stored at multiple sites. The main idea is to compare the contents of the shares using algebraic signatures, which have the property that the signature of the parity block equals the parity of the signatures of the data blocks. To make the scheme collusion-resistant, they blind data and parity by XORing them with a pseudo-random stream. The limitation of the scheme is that file access, computation complexity at the server and communication complexity are all linear in the total number of file blocks per challenge. And they don't prove the security of the scheme.
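The key property can be demonstrated with any GF(2)-linear map (a toy stand-in for the algebraic signatures of [19]): because the map distributes over XOR, the signature of an XOR-parity block equals the XOR of the data blocks' signatures, so a verifier can compare short signatures instead of fetching the blocks themselves.

```python
import secrets

def rol(b: int, k: int) -> int:
    # Rotate an 8-bit value left by k; rotation is GF(2)-linear over XOR.
    k %= 8
    return ((b << k) | (b >> (8 - k))) & 0xFF

def sig(block: bytes) -> int:
    # A toy GF(2)-linear signature: XOR of position-rotated bytes, so
    # sig(a XOR b) = sig(a) XOR sig(b).
    out = 0
    for i, byte in enumerate(block):
        out ^= rol(byte, i)
    return out

d1 = secrets.token_bytes(32)
d2 = secrets.token_bytes(32)
parity = bytes(a ^ b for a, b in zip(d1, d2))

# Signature of the parity block equals the parity of the signatures.
assert sig(parity) == sig(d1) ^ sig(d2)
```

In the real scheme each site returns only the signature of its share, and the verifier checks that the signatures themselves satisfy the parity relation.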

Wang et al. first study the problem of ensuring the integrity of data storage in cloud computing [20-22]. In addition, Chen et al. propose a secure data storage strategy in cloud computing [23]. In [20], they utilize homomorphic tokens and ECC to integrate storage-correctness assurance with data error localization. The scheme supports dynamic operations on data blocks, including data update, delete and append. In [21], they consider the task of allowing a third-party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud; to achieve efficient data dynamics, they improve the POR model by manipulating the classic Merkle Hash Tree (MHT) construction for block-tag authentication. In [22], they also consider introducing a TPA to audit the cloud data storage; they utilize a public-key based homomorphic authenticator and uniquely integrate it with a random-mask technique to achieve a privacy-preserving public auditing system. To support multiple auditing tasks, they explore bilinear aggregate signatures to extend the scheme to a multi-user setting. However, all these data possession schemes are based on public-key cryptography and they don't consider data recovery.

3.3. Others

The first proposed POR-like construction is that of Lillibridge et al. [24]. They proposed a scheme to back up user data over the Internet. Each computer has a set of partner computers which collectively hold its backup data; in return, it holds a part of each partner's backup data. File blocks are dispersed in shares across n partners using an (m, n)-erasure code. Because the scheme requires cooperation, it is potentially vulnerable to free riding, which tempts participants to cheat or to disrupt a partner's data. To defend against these attacks, file owners perform spot checks on the integrity of one another's fragments using MACs, to ensure partners continue to hold the data. These MACs also allow reconstruction of F in the case of data corruption, but result in large additional storage. And they do not offer formal definitions or analysis of their scheme.

Shah et al. proposed methods for auditing storage services in [25]. In their approach, a third-party auditor verifies a storage provider's possession of an encrypted file via a challenge-response MAC over the full encrypted file. Because challenges are pre-computed, the scheme supports only a finite number of challenges and requires metadata linear in the number of challenges. Subsequently, they described methods for privacy-preserving auditing and extraction of digital contents in [26]. Auditing involves a third-party auditor that remotely verifies that the stored data are intact; for extraction, the auditor verifies that the data is intact and returns it to the customer, ensuring that the customer receives the original data. The solution removes the burden of verification from the customer and provides a method for independent arbitration of data-retention contracts. However, the scheme requires generating MACs over the entire file, which results in additional computation and storage overheads, and they don't consider data update and recovery.

Very close in spirit to PDP is the concept of storage complexity, which enables a prover to demonstrate that it is using storage space of at least |F|. The prover does not prove directly that it is storing file F, but proves that it has committed sufficient resources to do so. PDP is also a form of memory checking, which verifies that all reads and writes to a remote memory behave identically to reads and writes to a local memory. An alternative to checking remote storage is to make data resistant to undetectable deletion through entanglement, which encodes data to create dependencies among unrelated data throughout the storage system; deleting any data thus reveals itself, because it deletes other unrelated data throughout the system. There are also some approaches for verifying the correctness of query results on outsourced databases, which are not discussed in this paper.

4. The comparisons of these schemes

From the viewpoint of the cryptographic techniques used, all schemes can be categorized into public-key and symmetric-key cryptography based RDPC. It is well known that public-key based schemes are computationally expensive.

According to the factors which should be considered, the comparisons of some schemes are listed in Table 1. As noted, PDP doesn't provide data recovery. Some schemes are based on public-key cryptography, don't consider data updating, don't support public verification, or allow only a limited number of verifications. The evaluation factors are as follows.

(a) Total data needing to be accessed;
(b) Computing overhead, including the initialization and verification overheads on the client and the proof-generation overhead on the server;
(c) Communication overhead, i.e., the total data transferred between client and server;
(d) Storage cost, i.e., the additional storage on client and server;
(e) Support for data updating, including modify, insert, append and delete;
(f) The number of verifications;
(g) Support for public verification;
(h) The probability of detecting errors;
(i) Data recovery, i.e., whether the data can be recovered in case of a failure;
(j) The method of security proof, generally in the standard or random oracle model.

Note that when data recoverability is considered, the computation overheads of the user and server necessarily increase; to be fair, the computation overhead here refers only to that of challenge and response. Moreover, different recovery schemes differ in redundancy and error-correction capability, which is very difficult to analyze uniformly.

As Table 1 shows, the schemes have quite different features. In summary, according to the state-of-the-art research works, the existing schemes have the following drawbacks:

- They are based on public-key cryptography, so the computation overheads are very high, especially when the data is massive;
- They do not consider data updating, so they can only be used for static data archiving, while in reality many outsourced storage applications need to handle dynamic data;
- They do not consider data recovery, so when corruption is detected the data cannot be restored;
- The number of verifications is limited;
- They do not support public verification;
- The security of the schemes is not proven, so it cannot be guaranteed;
- Their efficiency awaits further enhancement.

Table 1. The comparisons of schemes

schemes       (a)    (b) c/s                (c)       (d) c/s     (e)     (f)/(g)  (h)                (i)   (j)
[19]          O(n)   O(1)/O(n)              O(n)      O(1)/O(n)   n/a     ∞/n      1-(1-f)^C          y     n/a
[3]           O(1)   O(1)/O(1)              O(1)      O(1)/O(n)   a       ∞/y      1-(1-f)^C          n/a   RO
[5]           O(1)   O(1)/O(1)              O(1)      O(1)/O(n)   amd(t)  t/n      1-(1-f)^C          no    RO
[4]           O(n)   O(n)/O(n)              O(1)      O(1)/O(n)   n/a     ∞/y      1-(1-f)^C          n/a   ST
[8](DPDP II)  O(n)   O(log n)/O(n^ε log n)  O(log n)  O(1)/O(n)   amid    ∞/n      1-(1-f)^Ω(log n)   n/a   ST
[18]          O(n)   O(log n)/O(log n)      O(1)      O(1)/O(n)   n/a     ∞/y      1-(1-f)^C          y     ma

Note: data updating refers to append (a), modify (m), insert (i), and delete (d); 't' means that the number of verifications is limited. 'n' is the number of file blocks, 'f' is the proportion of file blocks that are destroyed, and 'C' is the number of blocks required for verification. 'ma' is short for mobile adversary, 'c' and 's' for client and server, and 'ST' and 'RO' for the standard model and random oracle model.

Although all of the schemes have drawbacks, each has its appropriate application areas. The scheme of [19] is appropriate for using Internet machines to back up users' data: it provides a very clever way to check, using algebraic signatures, whether every machine has stored each other's data intact. Ateniese et al. first propose homomorphic verifiable tags [3] to reduce the communication overhead and give a security proof. Ateniese et al. also present a provably secure PDP scheme [5] based on symmetric-key cryptography, which is appropriate for applications requiring minimal computing overhead. Ateniese et al. further provide a general mechanism for building public-key HLAs from any identification protocol satisfying certain homomorphic properties [4], and show how to turn any public-key HLA into publicly verifiable proofs of storage with communication complexity independent of the file length and supporting an unbounded number of verifications. Erway et al. present a framework and construction for dynamic PDP which supports data updating [8], and show how the DPDP scheme can be extended to construct complete file systems and version control systems at untrusted servers. HAIL, introduced by Bowers et al. [18], embeds MACs in the parity blocks of the dispersal code; as both MACs and parity blocks can be based on universal hash functions, a block can simultaneously be both a MAC and a parity block. HAIL ensures resilience against a mobile adversary: if corruptions are detected on some servers, F can be reconstituted from the redundancy on intact servers and known faulty servers are replaced. It is very suitable for applications requiring high availability.

5. Future directions

Given the drawbacks of existing schemes listed above, there is large room for improvement. The following aspects are considered to be future directions of RDPC.

(1) The design of efficient RDPC schemes. On the one hand, this means improving computation, communication and storage efficiency; on the other hand, improving detection efficiency, i.e., detecting errors and recovering the data with high probability and accuracy.

(2) Supporting more extensive application environments. On the one hand, any networked device should be usable, such as PDAs, mobile phones, wireless phones and netbooks, which means the schemes must suit wireless environments; as the computation and storage capability of such devices is limited, the requirements on these schemes are much stricter. On the other hand, the schemes should adapt to various data sets, including massive data sets.

(3) Effectively supporting data updating. The main data updating operations include modifying, inserting, appending and deleting. This is an important feature of a storage service and will influence whether users choose the service.

(4) Providing quality of service (QoS). On the one hand, this means providing differentiated QoS; on the other hand, guaranteeing that the declared performance and QoS are actually achieved. Users can assess the QoS of SSPs by utilizing performance-tracking tools and the MR-PDP protocol. For example, when the declared bandwidth is 100 KB/s, it should be 100 KB/s in fact; and when SSPs declare that there are n copies, there should be n copies in fact.

(5) Providing security proofs. If a scheme is not proved secure, users will be reluctant to use it. As most schemes utilize modern cryptography, the security-proof methods can be categorized into two types, namely the standard model and the random oracle model. But different threat models and application environments (for example, some environments may face mobile attacks) require considering the dynamics of the threats; thus, different schemes require appropriate security-proof methods.

In addition, introducing a third-party auditor to remove the burden of verification from the users and to provide a method for independent arbitration of data-retention contracts is also a hot issue. But it raises the problem of privacy preservation: users' data must not be leaked to the third-party auditor.

6. Conclusions

Remote data possession checking is a topic that focuses on how to frequently, efficiently and securely verify that a storage server is faithfully storing its client's (potentially very large) original data without retrieving it. There are two types of schemes, namely provable data possession (PDP) and proof of retrievability (POR); the difference is that POR not only checks possession of the data but can also recover it in case of a failure. In this paper, a simplified architecture of the SaaS model is presented and its security requirements are discussed. A series of evaluation factors considered in designing an RDPC scheme are listed. Then the state-of-the-art research works on RDPC are reviewed. Finally, according to the factors listed above, the existing schemes are compared. From the results of the comparisons, the drawbacks of the current schemes are pointed out, which helps to define future directions for improving the existing schemes. Although none of the schemes is perfect, each has its appropriate application areas, which are also discussed.

Acknowledgments

The work was supported by the Natural Science Foundation of Fujian Province (No. 2011J05148), the Science and Technology Projects of the Educational Office of Fujian Province (No. JA10079 and No. JB10041) and Fujian Province Science and Technology Cooperation Projects (No. 2010H6007).

7. References

[1] Yves Deswarte, Jean-Jacques Quisquater, Ayda Saïdane, "Remote integrity checking", In: Proc. of IICIS '03, pp.1–11, 2003.

[2] Décio Luiz Gazzoni Filho, Paulo Sérgio Licciardi Messeder Barreto, "Demonstrating data possession and uncheatable data transfer", IACR ePrint archive, Report 2006/150, http://eprint.iacr.org/2006/150, 2006.

[3] Giuseppe Ateniese, Randal Burns, Reza Curtmola, Joseph Herring, Lea Kissner, Zachary Peterson, Dawn Song, “Provable data possession at untrusted stores”, In: Proc. of ACM-CCS '07, pp.598–609, 2007.

[4] Giuseppe Ateniese, Seny Kamara, Jonathan Katz, “Proofs of storage from homomorphic identification protocols”, In: Proc. of ASIACRYPT '09, pp.319-333, 2009.


[5] Giuseppe Ateniese, Roberto Di Pietro, Luigi V. Mancini, Gene Tsudik, “Scalable and efficient provable data possession”, In: Proc. of SecureComm '08, pp.1-10, 2008.

[6] Reza Curtmola, Osama Khan, Randal Burns, Giuseppe Ateniese, “MR-PDP: Multiple-replica provable data possession”, In: Proc. of ICDCS '08, pp.411-420, 2008.

[7] Xiao Da, Shu Jiwu, Chen Kang, Zheng Weimin, “A Practical Data Possession Checking Scheme for Networked Archival Storage”, Journal of Computer Research and Development, 46(10):1660-1668, 2009.

[8] C. Chris Erway, Alptekin Kupcu, Charalampos Papamanthou, Roberto Tamassia, “Dynamic provable data possession”, In: Proc. of ACM-CCS '09, pp.213-222, 2009.

[9] Charalampos Papamanthou, Roberto Tamassia, “Time and space efficient algorithms for two-party authenticated data structures”, In: Proc. of ICICS'07, pages 1–15, 2007.

[10] Francesc Sebé, Josep Domingo-Ferrer, Antoni Martínez-Ballesté, Yves Deswarte, Jean-Jacques Quisquater, “Efficient remote data possession checking in critical information infrastructures”, IEEE Trans. on Knowl. and Data Eng., pp.1034-1038, 2007.

[11] Lanxiang Chen, Gongde Guo, "An Efficient Remote Data Possession Checking in Cloud Storage", JDCTA: International Journal of Digital Content Technology and its Applications, Vol. 5, No. 4, pp. 43-50, 2011.

[12] Ari Juels, Burton S. Kaliski Jr., “PORs: proofs of retrievability for large files”, In: Proc. of ACM-CCS '07, pp.584-597, 2007.

[13] Hovav Shacham, Brent Waters, “Compact proofs of retrievability”, In: Proc. of ASIACRYPT '08, pp.90-107, 2008.

[14] Dan Boneh, Ben Lynn, Hovav Shacham, “Short signatures from the Weil pairing”, J. Cryptology, vol. 17(4), pp.297-319, 2004.

[15] Yevgeniy Dodis, Salil Vadhan, Daniel Wichs, “Proofs of retrievability via hardness amplification”, In: Proc. of TCC '09, pp.109-127, 2009.

[16] Reza Curtmola, Osama Khan, Randal Burns, “Robust remote data checking”, In: Proc. of StorageSS '08, pp.63-68, 2008.

[17] Kevin D. Bowers, Ari Juels, Alina Oprea, “Proofs of retrievability: theory and implementation”, In: Proc. of ACM-CCSW '09, pp.43-54, 2009.

[18] Kevin D. Bowers, Ari Juels, Alina Oprea, “HAIL: a high-availability and integrity layer for cloud storage”, In: Proc. of ACM-CCS '09, pp.187-198, 2009.

[19] Thomas Schwarz S.J., Ethan L. Miller, “Store, forget, and check: Using algebraic signatures to check remotely administered storage”, In: Proc. of ICDCS '06, p.12, 2006.

[20] Cong Wang, Qian Wang, Kui Ren, Wenjing Lou, “Ensuring data storage security in cloud computing”, In: Proc. of IWQoS '09, pp.1-9, 2009.

[21] Qian Wang, Cong Wang, Jin Li, Kui Ren, Wenjing Lou, “Enabling public verifiability and data dynamics for storage security in cloud computing”, In: Proc. of ESORICS '09, pp.355-370, 2009.

[22] Cong Wang, Qian Wang, Kui Ren, Wenjing Lou, “Privacy-preserving public auditing for data storage security in cloud computing”, In: Proc. of INFOCOM 2010, San Diego, CA, 2010.

[23] Danwei Chen, Yanjun He, "A Study on Secure Data Storage Strategy in Cloud Computing", JCIT, Vol. 5, No. 7, pp. 175-179, 2010.

[24] Mark Lillibridge, Sameh Elnikety, Andrew Birrell, Mike Burrows, Michael Isard, “A cooperative Internet backup scheme”, In USENIX Annual Technical Conference, pages 29–41, 2003.

[25] Mehul A. Shah, Mary Baker, Jeffrey C. Mogul, Ram Swaminathan, “Auditing to keep online storage services honest”, In: Proc. of HotOS '07, pp.1-6, 2007.

[26] Mehul A. Shah, Ram Swaminathan, Mary Baker, “Privacy-preserving audit and extraction of digital contents”, Cryptology ePrint Archive, Report 2008/186, 2008.
